id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,076,526 | https://en.wikipedia.org/wiki/National%20Warning%20System | The National Warning System (NAWAS) is an automated telephone system used to convey warnings to United States–based federal, state and local governments, as well as the military and civilian population. The original mission of NAWAS was to warn of an imminent enemy attack or an actual accidental missile launch upon the United States. NAWAS still supports this mission but the emphasis is on natural and technological disasters. Organizations are able to disseminate and coordinate emergency alerts and warning messages through NAWAS and other public systems by means of the Integrated Public Alert and Warning System.
NAWAS is operated and fully funded by the Federal Emergency Management Agency (FEMA).
Today, the system consists of what is essentially a 2200+ telephone party line. The phone instruments are designed to provide protection from lightning strikes so they may be used during storms. The interconnecting lines provide some protection by avoiding local telephone switches. This ensures they are available even when the local system is down or overloaded. NAWAS has major terminals at each state Emergency Operations Center and State Emergency Management Facility. Other secondary terminals include local emergency management agencies, National Weather Service field offices and Public-safety answering points (PSAPs).
NAWAS is used to disseminate warning information concerning natural and technological disasters to approximately 2200 warning points throughout the continental United States, Alaska, Hawaii and the Virgin Islands. This information includes acts of terrorism including Weapons of Mass Destruction (WMD), aircraft incidents/accidents, earthquakes, floods, hurricanes, nuclear incidents/accidents, severe thunderstorms, tornadoes, tsunamis and winter storms or blizzards. NAWAS allows issuance of warnings to all stations nationwide or to selected stations as dictated by the situation.
When the NAWAS is not being used for emergency traffic/tests, State and local government personnel are encouraged to use it for official business.
See also
Emergency Alert System
Integrated Public Alert and Warning System
NOAA Weather Radio
References
External links and sources
Integrated Public Alert and Warning System (IPAWS)
FEMA Manual 211-2-1: National Warning System Operations
Warning systems
Disaster preparedness in the United States
Organizations established in 1978
1978 establishments in the United States | National Warning System | [
"Technology",
"Engineering"
] | 437 | [
"Warning systems",
"Safety engineering",
"Measuring instruments"
] |
14,077,738 | https://en.wikipedia.org/wiki/Optical%20power%20meter | An optical power meter (OPM) is a device used to measure the power in an optical signal. The term usually refers to a device for testing average power in fiber optic systems. Other general purpose light power measuring devices are usually called radiometers, photometers, laser power meters (can be photodiode sensors or thermopile laser sensors), light meters or lux meters.
A typical optical power meter consists of a calibrated sensor, measuring amplifier and display.
The sensor primarily consists of a photodiode selected for the appropriate range of wavelengths and power levels.
On the display unit, the measured optical power and set wavelength is displayed. Power meters are calibrated using a traceable calibration standard.
A traditional optical power meter responds to a broad spectrum of light; however, the calibration is wavelength dependent. This is not normally an issue, since the test wavelength is usually known, but it has a couple of drawbacks. Firstly, the user must set the meter to the correct test wavelength; secondly, if other spurious wavelengths are present, erroneous readings will result.
Optical power meters are available as stand-alone bench or handheld instruments or combined with other test functions such as an Optical Light Source (OLS), Visual Fault Locator (VFL), or as sub-system in a larger or modular instrument. Commonly, a power meter on its own is used to measure absolute optical power, or used with a matched light source to measure loss.
When combined with a light source, the instrument is called an Optical Loss Test Set, or OLTS, typically used to measure optical power and end-to-end optical loss. More advanced OLTS may incorporate two or more power meters, and so can measure Optical Return Loss. GR-198, Generic Requirements for Hand-Held Stabilized Light Sources, Optical Power Meters, Reflectance Meters, and Optical Loss Test Sets, discusses OLTS equipment in depth.
Alternatively, an Optical Time Domain Reflectometer (OTDR) can measure optical link loss if its markers are set at the terminus points for which the fiber loss is desired. However, this is an indirect measurement. A single-direction measurement may be quite inaccurate if there are multiple fibers in a link, since the back-scatter coefficient varies between fibers. Accuracy can be increased if a bidirectional average is made. GR-196, Generic Requirements for Optical Time Domain Reflectometer (OTDR) Type Equipment, discusses OTDR equipment in depth.
Sensors
The major semiconductor sensor types are Silicon (Si), Germanium (Ge) and Indium Gallium Arsenide (InGaAs). Additionally, these may be used with attenuating elements for high optical power testing, or wavelength-selective elements so they only respond to particular wavelengths. These all operate in a similar type of circuit; however, in addition to their basic wavelength response characteristics, each one has some other particular characteristics:
Si detectors tend to saturate at relatively low power levels, and they are only useful in the visible and 850 nm bands, where they offer generally good performance.
Ge detectors saturate at the highest power levels, but have poor low power performance, poor general linearity over the entire power range, and are generally temperature sensitive. They are only marginally accurate for "1550 nm" testing, due to a combination of temperature and wavelength affecting responsivity at e.g. 1580 nm; however, they provide useful performance over the commonly used 850 / 1300 / 1550 nm wavelength bands, so they are extensively deployed where lower accuracy is acceptable. Other limitations include non-linearity at low power levels and poor responsivity uniformity across the detector area.
InGaAs detectors saturate at intermediate levels. They offer generally good performance, but are often very wavelength sensitive around 850 nm. So they are largely used for single-mode fiber testing at 1270 - 1650 nm.
An important part of an optical power meter sensor is the fiber optic connector interface. Careful optical design is required to avoid significant accuracy problems when used with the wide variety of fiber types and connectors typically encountered.
Another important component is the sensor input amplifier. This needs very careful design to avoid significant performance degradation over a wide range of conditions.
Power measuring range
A typical OPM is linear from about 0 dBm (1 milliwatt) to about -50 dBm (10 nanowatts), although the display range may be larger. Above 0 dBm is considered "high power", and specially adapted units may measure up to nearly +30 dBm (1 watt). Below -50 dBm is "low power", and specially adapted units may measure as low as -110 dBm. Irrespective of power meter specifications, testing below about -50 dBm tends to be sensitive to stray ambient light leaking into fibers or connectors. So when testing at "low power", some sort of test range / linearity verification (easily done with attenuators) is advisable. At low power levels, optical signal measurements tend to become noisy, so meters may become very slow due to use of a significant amount of signal averaging.
To calculate dBm from a linear power meter reading, the power is referenced to 1 mW:
dBm = 10 log10 ( P1 / P2 )
where P1 = measured power level (e.g. in mW), and P2 = reference power level, which is 1 mW.
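As an illustration of this calculation, here is a minimal Python sketch (not from the original text); the function names are ours, and the 1 mW reference follows the definition above.

```python
import math

def mw_to_dbm(power_mw: float, reference_mw: float = 1.0) -> float:
    """Convert a linear power reading (mW) to dBm (dB relative to 1 mW)."""
    return 10.0 * math.log10(power_mw / reference_mw)

def dbm_to_mw(power_dbm: float) -> float:
    """Convert a dBm value back to linear power in mW."""
    return 10.0 ** (power_dbm / 10.0)

# Example: 1 mW is 0 dBm, and 10 nW (1e-5 mW) is the typical lower end of the linear range.
print(mw_to_dbm(1.0))     # 0.0 dBm
print(mw_to_dbm(1e-5))    # -50.0 dBm
print(dbm_to_mw(-30.0))   # 0.001 mW (1 microwatt)
```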
Calibration and accuracy
Optical power meter calibration and accuracy is a contentious issue. The accuracy of most primary reference standards (e.g. weight, time, length, volt, etc.) is known to a high accuracy, typically of the order of 1 part in a billion. However, the optical power standards maintained by various national standards laboratories are only defined to about one part in a thousand. By the time this accuracy has been further degraded through successive links in the calibration chain, instrument calibration accuracy is usually only a few percent. The most accurate field optical power meters claim 1% calibration accuracy. This is orders of magnitude less accurate than a comparable electrical meter.
Calibration processes for optical power meters are given in IEC 61315 Ed. 3.0 b:2019 - Calibration of fibre-optic power meters.
Further, the in-use accuracy achieved is usually significantly lower than the claimed calibration accuracy once additional factors are taken into account. In typical field applications, these factors may include: ambient temperature, optical connector type, wavelength variations, linearity variations, beam geometry variations, and detector saturation.
Therefore, achieving a good level of practical instrument accuracy and linearity is something that requires considerable design skill, and care in manufacturing.
With the increasing global importance of reliable data transmission over optical fiber, and the sharply reducing optical loss margins of these systems in data centres, there is increased emphasis on the accuracy of optical power meters, and also on proper traceability compliance via International Laboratory Accreditation Cooperation (ILAC) accredited calibration, which includes metrological traceability to national standards and external laboratory accreditation to ISO/IEC 17025, to improve confidence in overall accuracy claims.
Extended sensitivity meters
A class of laboratory power meters has an extended sensitivity, of the order of -110 dBm. This is achieved by using a very small detector and lens combination, and also a mechanical light chopper at typically 270 Hz, so the meter actually measures AC light. This eliminates unavoidable dc electrical drift effects. If the light chopping is synchronized with an appropriate synchronous (or "lock-in") amplifier, further sensitivity gains are achieved. In practice, such instruments usually achieve lower absolute accuracy due to the small detector diode, and for the same reason, may only be accurate when coupled with single-mode fiber. Occasionally such an instrument may have a cooled detector, though with the modern abandonment of Germanium sensors, and the introduction of InGaAs sensors, this is now increasingly uncommon.
Pulse power measurement
Optical power meters usually display time-averaged power. So for pulse measurements, the signal duty cycle must be known to calculate the peak power value. However, the instantaneous peak power must be less than the maximum meter reading, or the detector may saturate, resulting in wrong average readings. Also, at low pulse repetition rates, some meters with data or tone detection may produce improper or no readings.
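A minimal Python sketch of the duty-cycle correction described above, assuming a simple rectangular pulse train and a meter that reports true time-averaged power; the function name and example values are illustrative only.

```python
import math

def peak_power_dbm(average_dbm: float, duty_cycle: float) -> float:
    """Estimate the peak power of a pulsed signal from a time-averaged reading.

    average_dbm : reading from a meter that displays time-averaged power
    duty_cycle  : fraction of time the signal is "on" (0 < duty_cycle <= 1)
    """
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in (0, 1]")
    # Average power = peak power * duty cycle, so in dB terms the correction is additive.
    return average_dbm - 10.0 * math.log10(duty_cycle)

# A -13 dBm average reading at 10% duty cycle implies about -3 dBm peak power,
# which must stay below the detector's saturation level for the reading to be valid.
print(peak_power_dbm(-13.0, 0.10))
```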
A class of "high power" meters has some type of optical attenuating element in front of the detector, typically allowing about a 20 dB increase in maximum power reading. Above this level, an entirely different class of "laser power meter" instrument is used, usually based on thermal detection.
Common fiber optic test applications
Measuring the absolute power in a fiber optic signal. For this application, the power meter needs to be properly calibrated at the wavelength being tested, and set to this wavelength.
Measuring the optical loss in a fiber, in combination with a suitable stable light source. Since this is a relative test, accurate calibration is not a particular requirement, unless two or more meters are being used due to distance issues. If a more complex two-way loss test is performed, then power meter calibration can be ignored, even when two meters are used.
Some instruments are equipped for optical test tone detection, to assist in quick cable continuity testing. Standard test tones are usually 270 Hz, 1 kHz, 2 kHz. Some units can also determine one of 12 tones, for ribbon fiber continuity testing.
Test automation
Typical test automation features usually apply to loss testing applications, and include:
The ability to set the unit to read 0 dB at a reference power level, typically the test source.
The ability to store readings into internal memory, for subsequent recall and download to a computer.
The ability to synchronize the wavelength with a test source, so that the meter sets itself to the source wavelength. This requires a specifically matched source. The simplest way of achieving this is by recognizing a test tone, but the best way is by transfer of data. The data method has the benefit that the source can send additional useful data such as nominal source power level, serial number, etc.
Wavelength-selective meters
An increasingly common special-purpose OPM, commonly called a "PON power meter", is designed to hook into a live PON (Passive Optical Network) circuit and simultaneously test the optical power in different directions and at different wavelengths. This unit is essentially a triple power meter, with a collection of wavelength filters and optical couplers. Proper calibration is complicated by the varying duty cycle of the measured optical signals. It may have a simple pass/fail display, to facilitate easy use by operators with little expertise.
The wavelength sensitivity of a fiber optic power meter is a problem when a photodiode is used for conventional voltage-current measurement. If a temperature-based measurement replaces the voltage-current measurement, the wavelength sensitivity of the OPM can be reduced. In this scheme the photodiode is reverse biased by a constant voltage source and supplied with a constant current; when illuminated, the junction dissipates power, its temperature rises, and the temperature rise measured by a thermistor is directly proportional to the optical power. Because of the constant current supply, the power reflected back to the photodiode is nearly zero, and the movement of electrons back and forth between the valence and conduction bands is stable.
See also
Optical attenuator
References
External links
OPM Application Notes
greenTEG Application Note Laser Power Measurement
Guidelines for specifying OPMs
Optical instruments
Electromagnetic radiation meters
Fiber optics
Telecommunications equipment | Optical power meter | [
"Physics",
"Technology",
"Engineering"
] | 2,355 | [
"Measuring instruments",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electromagnetic radiation meters"
] |
1,732,908 | https://en.wikipedia.org/wiki/Bass%20reflex | A bass reflex system (also known as a ported, vented box or reflex port) is a type of loudspeaker enclosure that uses a port (hole) or vent cut into the cabinet and a section of tubing or pipe affixed to the port. This port enables the sound from the rear side of the diaphragm to increase the efficiency of the system at low frequencies as compared to a typical sealed- or closed-box loudspeaker or an infinite baffle mounting.
A reflex port is the distinctive feature of this popular enclosure type. The design approach enhances the reproduction of the lowest frequencies generated by the woofer or subwoofer. The port generally consists of one or more tubes or pipes mounted in the front (baffle) or rear face of the enclosure. Depending on the exact relationship between driver parameters, the enclosure volume (and filling if any), and the tube cross-section and length, the efficiency can be substantially improved over the performance of a similarly sized sealed-box enclosure.
Explanation
Unlike closed-box loudspeakers, which are nearly airtight, a bass reflex system has an opening called a port or vent cut into the cabinet, generally consisting of a pipe or duct (typically circular or rectangular cross section). The air mass in this opening resonates with the "springiness" of the air inside the enclosure in exactly the same fashion as the air in a bottle resonates when a current of air is directed across the opening. Another metaphor often used is to think of the air like a spring or rubber band. The frequency at which the box/port system resonates, known as the Helmholtz resonance, depends upon the effective length and cross sectional area of the duct, the internal volume of the enclosure, and the speed of sound in air. In the early years of ported speakers, speaker designers had to do extensive experimentation to determine the ideal diameter of the port and length of the port tube or pipe; however, more recently, there are numerous tables and computer programs that calculate, for a given size of cabinet, how large the port should be and how long the tube should be. Even with these programs, however, some experimentation with prototypes is still necessary to determine if the enclosure sounds good.
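As a rough illustration of what such calculator programs do, here is a minimal Python sketch (not from the original text) that estimates the box/port tuning frequency from the standard Helmholtz resonator formula; the 0.85-radius end correction and the example dimensions are assumptions chosen only for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def helmholtz_frequency(box_volume_m3: float, port_diameter_m: float,
                        port_length_m: float) -> float:
    """Approximate tuning frequency (Hz) of a box/port Helmholtz resonator.

    Uses the textbook formula f = c / (2*pi) * sqrt(A / (V * L_eff)), where the
    effective port length adds an end correction (~0.85 * radius per end, a
    common approximation) for the air moving just outside the port openings.
    """
    radius = port_diameter_m / 2.0
    area = math.pi * radius ** 2
    effective_length = port_length_m + 2 * 0.85 * radius  # correct both ends
    return (SPEED_OF_SOUND / (2.0 * math.pi)) * math.sqrt(
        area / (box_volume_m3 * effective_length))

# Example: a 50-litre box with a 7.5 cm diameter, 15 cm long port
print(round(helmholtz_frequency(0.050, 0.075, 0.15), 1))  # roughly 35 Hz
```

Even with such an estimate, prototype listening and measurement remain necessary, as the paragraph above notes.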
If this vent air mass/box air springiness resonance is so chosen as to lie lower in frequency than the natural resonance frequency of the bass driver, an interesting phenomenon happens: the backwave of the bass driver sound emission is inverted in polarity for the frequency range between the two resonances. Since the backwave is already in opposite polarity with the front wave, this inversion brings the two emissions in phase (although the vent emission is lagging by one wave period) and therefore they reinforce each other. This has the useful purpose of producing higher output (for any given driver excursion compared to a closed box) or, conversely, a similar output with a smaller excursion (which means less driver distortion). The penalty incurred for this reinforcement is time smearing: in essence the vent resonance augments main driver output by imposing a "resonant tail" on it. For frequencies above the natural resonance of the driver, the reflex alignment has no influence. For frequencies below the vent resonance, polarity inversion is not accomplished, and backwave cancellation occurs. Furthermore, the driver behaves as though suspended in free air, as box air springiness is absent.
When speakers are designed for home use or for high-volume live performance settings (e.g., with bass amplifier speaker cabinets and PA system speakers and subwoofers), manufacturers often consider the advantages of porting (increased bass response, lower bass response, improved efficiency) to outweigh the disadvantages (port noise, resonance problems). The design is popular among consumers and manufacturers (speaker cabinets can be smaller and lighter, for more or less equivalent performance) but the increase in bass output requires close matching of the driver, the enclosure, and the port. Poorly matched reflex designs can have unfortunate characteristics or drawbacks, sometimes making them unsuitable for settings requiring high accuracy and neutrality of sound, e.g. studio monitor speakers for use by audio engineers in monitoring facilities, recording studios etc. However it is possible to design a bass reflex system that mostly overcomes these drawbacks; and quality bass reflex designs are commonly found in demanding environments across the world.
Comparison with passive radiator
Passive radiators are similar in operation to ported bass reflex systems, and both methods are used for the same reason: to extend the system's low frequency response. A passive radiator is the use of one or more additional cones (diaphragms) in a cabinet instead of ports. These passive diaphragms do not have a magnet or voice coil and are not connected to a power amplifier. Acoustically, they behave largely the same around their tuning frequency as a port, as they also act as a Helmholtz resonator excited by the rear side of the bass driver's diaphragm. Passive radiators can be tuned independent of their dimensions by adding or removing mass from the diaphragm of the passive radiator. This makes them useful for smaller enclosures with the same box tuning where an equivalently tuned port would be impractical. They also sidestep the midrange pipe resonances that can be an issue on ported enclosures in full range systems. However, to be effective, they require a much greater surface area on the cabinet exterior than a port. They are also considerably more expensive than a port tube, as they are effectively a speaker driver minus its voice coil and motor magnet.
History
The effect of the various speaker parameters, enclosure sizes and port (and duct) dimensions on the performance of bass reflex systems was not well understood until the early 1960s. Subsequently, pioneering analyses by A.N. Thiele, J.E. Benson and Richard H. Small provided the theoretical foundations for the synthesis of bass reflex loudspeaker systems to meet specified low-frequency performance criteria; these were developed into a series of "alignments" (sets of the relevant speaker parameters) that allowed designers to produce useful, predictable responses. Keele extended the design options by presenting a new set of 6th-order vented-box loudspeaker system alignments. All of these results made it possible for speaker manufacturers to design bass reflex loudspeakers to match various sizes of enclosures, and to match enclosures to given speakers with great predictability. Due to the physical electromechanical constraints, it is not possible to have a small speaker in a small enclosure producing extended bass response at high efficiency (i.e., requiring only a low-powered amplifier). It is possible to have two of these attributes, but not all three; this has been termed Hofmann's Iron Law after J. Anton Hofmann of KLH's summary (with Henry Kloss) of Edgar Villchur's work years earlier. The sound pressure produced depends upon the efficiency of the speaker, the mechanical or thermal power handling of the driver, the power input and the size of the driver.
Advantages
Novak concluded that a bass reflex enclosure can have greater acoustic output for a given amount of distortion, and lower harmonic, intermodulation, and transient distortion, than a completely closed box of similar size. Such a resonant system augments the bass response of the driver and, if designed properly, can extend the frequency response of the driver/enclosure combination to below the range the driver would reproduce in a similarly sized sealed box. The enclosure resonance has a secondary benefit in that it limits cone movement in a band of frequencies centered around the tuning frequency, reducing distortion in that frequency range. Ported cabinet systems are cheaper than a passive radiator speaker with the same performance; whereas a passive radiator system requires one or two "drone cone" speakers, a ported system requires only a hole or port and a length of tubing.
Limitations
Compared to closed-box loudspeakers, bass reflex cabinets have poorer transient response, resulting in longer decay times associated with bass notes. Some example step responses for various high-pass filter functions are shown in the relevant figure, where each filter has an identical −3 dB cut-off frequency of 50 Hz. In that figure, (a) represents the step response of a conventional B4 vented box alignment, while (b) represents the step response of a B2 closed-box alignment with Q = 0.7071. The transient response of a vented-box loudspeaker can be improved by choosing a QB3 alignment similar to (c), which results in a better-damped transient response than that produced by the B4 alignment. However, a C4 vented-box alignment similar to (e) results in a less well-damped transient response.
In order to achieve their bass output, ported loudspeaker enclosures stagger two resonances: one from the driver and the boxed air, and another from the boxed air and the port. At the vent tuning frequency, the output from the port is the primary source of sound output, as the displacement of the woofer is at a minimum. This comprises a more complex, higher-order system than an equivalent closed-box loudspeaker enclosure. The interaction between the two resonances results in a system that possesses less damping and increased time delay (increased group delay). Due to the latter, a flat steady-state bass response does not occur at the same time as the rest of the sonic output at higher frequencies in the operating region. Instead, it starts later (lags) and the lag increases, accumulating over time as a longish resonant "tail" arriving behind the main "body" of the acoustic signal. As a result of their electrodynamic characteristics, ported enclosures, which are well approximated as 4th-order high-pass filter systems, generally result in poorer transient response at low frequencies than do closed-box loudspeaker systems, which are 2nd-order high-pass filter systems.
Another trade-off for this augmentation is that, at frequencies below the vent tuning frequency, the port unloads the cone and allows it to move with very large displacements. This means the speaker can be driven past its safe mechanical operating limits at frequencies below the tuning frequency with much less power than in an equivalently-sized sealed enclosure. For this reason, high-powered systems using a bass reflex design are often protected by a high-pass filter that removes signals below the vent tuning frequency. Unfortunately, electrical filtering adds further frequency-dependent group delay. Even if such filtering can be adjusted not to remove musical content, it may interfere with sonic information connected with the size and ambiance of the recording location or venue, information that often exists in the low bass spectrum.
Whether or not the effects of these in a properly designed system are audible remains a matter of debate. A poorly designed bass reflex system, generally one whose vent is incorrectly tuned too high or too low in frequency, tends to produce excessive output at the tuning frequency relative to the rest of the pass-band of the loudspeaker system. This behaviour can add a "booming" one-note quality to the reproduction of the bass frequencies. Although some may consider that this is due to the port resonance imposing its characteristics on the note being played, it is simply the result of a non-maximally flat frequency response function. If such a peak in the bass response of a bass reflex enclosure coincides with one of the resonant modes of the room, a not unusual occurrence, the effects will be further exacerbated. In general, the lower in frequency a port is tuned, the less objectionable these problems are likely to be.
Ports often are placed on the front baffle, and may thus allow transmission of unwanted midrange frequencies reflected from within the box into the listening environment. If it is undersized, a port may also generate "wind noise" or "chuffing" sounds, due to the turbulence around the port openings at high air speeds. Enclosures with a rear-facing port mask these effects to some extent, but they cannot be placed directly against a wall without causing audible problems. They require some free space around the port so they can perform as intended. Some manufacturers incorporate a floor-facing port within the speaker stand or base, offering predictable and repeatable port performance within the design constraints.
Port compression
Port compression is a reduction in port effectiveness as sound pressure levels increase. As a ported system plays louder, the efficiency of the port reduces, and distortion emitted by the port increases. This can be reduced by port design, but not totally eliminated. Asymmetrical loading of the driver cone during high level usage can be reduced by placing a baffle at the inside end of the port tube. This baffle can also serve as a stiffening structural element of the enclosure.
Applications
Subwoofer cabinets used in home cinema and sound reinforcement systems are often fitted with ports or vents. Bass amp speaker cabinets and keyboard amp speaker cabinets, which have to reproduce low-frequency sounds down to 41 Hz or below, are often built with ports or vents, which are typically on the front of the cabinet (though they are also placed on the rear). Even some expensive hi-fi speakers have carefully designed ports.
See also
Acoustic suspension – a method of loudspeaker cabinet design and utilisation that uses one or more loudspeaker drivers mounted in a sealed box or cabinet.
Loudspeaker enclosure
Passive radiator
Transmission line loudspeaker
References
Loudspeaker technology
Audio engineering
Bass (sound) | Bass reflex | [
"Engineering"
] | 2,799 | [
"Electrical engineering",
"Audio engineering"
] |
1,733,444 | https://en.wikipedia.org/wiki/Catalan%20vault | The Catalan vault, also called thin-tile vault, Catalan turn, Catalan arch, boveda ceiling (Spanish bóveda 'vault'), or timbrel vault, is a type of low brickwork arch forming a vaulted ceiling that often supports a floor above. It is constructed by laying a first layer of light bricks lengthwise "in space", without centering or formwork, and has a much gentler curve than most other methods of construction.
Of Roman origin, it is a traditional form in regions around the Mediterranean including Catalonia (where it is widely used), and has spread around the world in more recent times through the work of Catalan architects such as Antoni Gaudí and Josep Puig i Cadafalch, and the Valencian architect Rafael Guastavino.
A study on the stability of the Catalan vault is kept at the archive of the Institute of Catalan Studies, where it is said to have been entrusted by Josep Puig i Cadafalch.
Though it is popularly called the Catalan vault, this construction method is found throughout the Mediterranean and the invention of the term "Catalan vault" occurred in 1904 at an architectural congress in Madrid.
The technique was brought to New Spain (colonial Mexico), and is still used in parts of contemporary Mexico.
In the United States
Valencian architect and builder Rafael Guastavino introduced the technique to the United States in the 1880s, where it is called Guastavino tile. It is used in many major buildings across the United States, including the Boston Public Library, the New York Grand Central Terminal, and many others.
See also
List of architectural vaults
References
External links
Ramage, Michael. "Construction of a Vault". details the process of constructing a six-foot by six-foot vault.
Architecture in Spain
Catalan architecture
Spanish Colonial architecture in Mexico
Arches and vaults
Ceilings
Brick buildings and structures
Building engineering | Catalan vault | [
"Engineering"
] | 384 | [
"Structural engineering",
"Building engineering",
"Civil engineering",
"Ceilings",
"Architecture"
] |
1,735,128 | https://en.wikipedia.org/wiki/Disproportionation | In chemistry, disproportionation, sometimes called dismutation, is a redox reaction in which one compound of intermediate oxidation state converts to two compounds, one of higher and one of lower oxidation state. The reverse of disproportionation, such as when a compound in an intermediate oxidation state is formed from precursors of lower and higher oxidation states, is called comproportionation, also known as symproportionation.
More generally, the term can be applied to any desymmetrizing reaction where two molecules of one type react to give one each of two different types:
2 A → A' + A''
This expanded definition is not limited to redox reactions, but also includes some molecular autoionization reactions, such as the self-ionization of water. In contrast, some authors use the term redistribution to refer to reactions of this type (in either direction) when only ligand exchange but no redox is involved and distinguish such processes from disproportionation and comproportionation. For example, the Schlenk equilibrium
2 RMgX ⇌ R2Mg + MgX2
is an example of a redistribution reaction.
History
The first disproportionation reaction to be studied in detail was:
This was examined using tartrates by Johan Gadolin in 1788. In the Swedish version of his paper he called it .
Examples
Mercury(I) chloride disproportionates upon UV-irradiation:
Hg2Cl2 → Hg + HgCl2
Phosphorous acid disproportionates upon heating to 200 °C to give phosphoric acid and phosphine:
4 H3PO3 → 3 H3PO4 + PH3
Desymmetrizing reactions are sometimes referred to as disproportionation, as illustrated by the thermal degradation of bicarbonate:
2 HCO3− → CO3^2− + H2O + CO2
The oxidation numbers remain constant in this acid-base reaction.
Another variant on disproportionation is radical disproportionation, in which two radicals form an alkene and an alkane.
2 CH3–CH2• → H2C=CH2 + H3C–CH3
Disproportionation of sulfur intermediates by microorganisms is widely observed in sediments.
Chlorine gas reacts with concentrated sodium hydroxide to form sodium chloride, sodium chlorate and water. The ionic equation for this reaction is as follows:
3 Cl2 + 6 OH− → 5 Cl− + ClO3− + 3 H2O
The chlorine reactant is in oxidation state 0. In the products, the chlorine in the Cl− ion has an oxidation number of −1, having been reduced, whereas the oxidation number of the chlorine in the chlorate ion (ClO3−) is +5, indicating that it has been oxidized.
Decomposition of numerous interhalogen compounds involves disproportionation. Bromine fluoride undergoes a disproportionation reaction to form bromine trifluoride and bromine in non-aqueous media:
3 BrF → BrF3 + Br2
The dismutation of superoxide free radical to hydrogen peroxide and oxygen, catalysed in living systems by the enzyme superoxide dismutase:
2 O2•− + 2 H+ → H2O2 + O2
The oxidation state of oxygen is −1/2 in the superoxide free radical anion, −1 in hydrogen peroxide and 0 in dioxygen.
In the Cannizzaro reaction, an aldehyde is converted into an alcohol and a carboxylic acid. In the related Tishchenko reaction, the organic redox reaction product is the corresponding ester. In the Kornblum–DeLaMare rearrangement, a peroxide is converted to a ketone and an alcohol.
The disproportionation of hydrogen peroxide into water and oxygen, catalysed by either potassium iodide or the enzyme catalase:
2 H2O2 → 2 H2O + O2
In the Boudouard reaction, carbon monoxide disproportionates to carbon and carbon dioxide. The reaction is for example used in the HiPco method for producing carbon nanotubes; high-pressure carbon monoxide disproportionates when catalysed on the surface of an iron particle:
2 CO → C + CO2
Nitrogen has oxidation state +4 in nitrogen dioxide, but when this compound reacts with water, it forms both nitric acid and nitrous acid, where nitrogen has oxidation states +5 and +3 respectively:
2 NO2 + H2O → HNO3 + HNO2
In hydrazoic acid and sodium azide, each of the 3 nitrogen atoms of these very energetic linear polyatomic species has an oxidation state of −1/3. These unstable and highly toxic compounds will disproportionate in aqueous solution to form gaseous nitrogen () and ammonium ions, or ammonia, depending on pH conditions, as can be conveniently verified by means of the Frost diagram for nitrogen:
Under acidic conditions, hydrazoic acid disproportionates as:
Under neutral, or basic, conditions, the azide anion disproportionates as:
Dithionite undergoes acid hydrolysis to thiosulfate and bisulfite:
2 S2O4^2− + H2O → S2O3^2− + 2 HSO3−
Dithionite also undergoes alkaline hydrolysis to sulfite and sulfide:
3 S2O4^2− + 6 OH− → 5 SO3^2− + S^2− + 3 H2O
Dithionate is prepared on a larger scale by oxidizing a cooled aqueous solution of sulfur dioxide with manganese dioxide:
MnO2 + 2 SO2 → MnS2O6
Polymer chemistry
In free-radical chain-growth polymerization, chain termination can occur by a disproportionation step in which a hydrogen atom is transferred from one growing chain molecule to another one, which produces two dead (non-growing) chains.
Chain—CH2–CHX• + Chain—CH2–CHX• → Chain—CH=CHX + Chain—CH2–CH2X
in which, Chain— represents the already formed polymer chain, and • indicates a reactive free radical.
Biochemistry
In 1937, Hans Adolf Krebs, who discovered the citric acid cycle bearing his name, confirmed the anaerobic dismutation of pyruvic acid into lactic acid, acetic acid and CO2 by certain bacteria according to the global reaction:
2 CH3COCOOH + H2O → CH3CHOHCOOH + CH3COOH + CO2
The dismutation of pyruvic acid into other small organic molecules (ethanol + CO2, or lactate and acetate, depending on the environmental conditions) is also an important step in fermentation reactions. Fermentation reactions can also be considered as disproportionation or dismutation biochemical reactions. Indeed, the donor and acceptor of electrons in the redox reactions supplying the chemical energy in these complex biochemical systems are the same organic molecules simultaneously acting as reductant or oxidant.
Another example of biochemical dismutation reaction is the disproportionation of acetaldehyde into ethanol and acetic acid.
While in respiration electrons are transferred from substrate (electron donor) to an electron acceptor, in fermentation part of the substrate molecule itself accepts the electrons. Fermentation is therefore a type of disproportionation, and does not involve an overall change in oxidation state of the substrate. Most of the fermentative substrates are organic molecules. However, a rare type of fermentation may also involve the disproportionation of inorganic sulfur compounds in certain sulfate-reducing bacteria.
Disproportionation of sulfur intermediates
Sulfur isotopes of sediments are often measured for studying environments in the Earth's past (paleoenvironment). Disproportionation of sulfur intermediates, being one of the processes affecting sulfur isotopes of sediments, has drawn attention from geoscientists for studying the redox conditions in the oceans in the past.
Sulfate-reducing bacteria fractionate sulfur isotopes as they take in sulfate and produce sulfide. Prior to the 2010s, it was thought that sulfate reduction could fractionate sulfur isotopes up to 46 ‰, and that fractionation larger than 46 ‰ recorded in sediments must be due to disproportionation of sulfur intermediates in the sediment. This view has changed since the 2010s. As substrates for disproportionation are limited by the product of sulfate reduction, the isotopic effect of disproportionation should be less than 16 ‰ in most sedimentary settings.
Disproportionation can be carried out by microorganisms obligated to disproportionation or microorganisms that can carry out sulfate reduction as well. Common substrates for disproportionation include elemental sulfur (S0), thiosulfate (S2O3^2−) and sulfite (SO3^2−).
Claus reaction: a comproportionation reaction
The Claus reaction is an example of a comproportionation reaction (the inverse of disproportionation) involving hydrogen sulfide (H2S) and sulfur dioxide (SO2) to produce elemental sulfur and water as follows:
2 H2S + SO2 → 3 S + 2 H2O
The Claus reaction is one of the chemical reactions involved in the Claus process used for the desulfurization of gases in oil refinery plants, leading to the formation of solid elemental sulfur (S8), which is easier to store, transport, reuse when possible, and dispose of.
See also
Dismutase
Oxidoreductase
Fermentation (biochemistry)
References
Chemical reactions
Chemical processes
Organic reactions
Biochemistry | Disproportionation | [
"Chemistry",
"Biology"
] | 1,846 | [
"Biochemistry",
"Organic reactions",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
1,736,031 | https://en.wikipedia.org/wiki/Atom%20economy | Atom economy (atom efficiency/percentage) is the conversion efficiency of a chemical process in terms of all atoms involved and the desired products produced. The simplest definition was introduced by Barry Trost in 1991 and is equal to the ratio between the mass of desired product to the total mass of reactants, expressed as a percentage. The concept of atom economy (AE) and the idea of making it a primary criterion for improvement in chemistry, is a part of the green chemistry movement that was championed by Paul Anastas from the early 1990s. Atom economy is an important concept of green chemistry philosophy, and one of the most widely used metrics for measuring the "greenness" of a process or synthesis.
Good atom economy means most of the atoms of the reactants are incorporated in the desired products and only small amounts of unwanted byproducts are formed, reducing the economic and environmental impact of waste disposal.
Atom economy can be written as:
atom economy = (molecular weight of desired product / total molecular weight of all reactants) × 100%
For example, if we consider the reaction
A + B → C + D
where C is the desired product, then
atom economy = MW(C) / (MW(A) + MW(B)) × 100%
Optimal atom economy is 100%.
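As a sketch of how this percentage is computed in practice, the following Python snippet applies the formula above; the molecular weights used in the examples are illustrative placeholders, not data from the text.

```python
def atom_economy(product_mw: float, reactant_mws: list[float]) -> float:
    """Atom economy (%) = molecular weight of desired product /
    total molecular weight of all reactants * 100."""
    return 100.0 * product_mw / sum(reactant_mws)

# Illustrative values in g/mol: an addition reaction A + B -> C has 100% atom
# economy, because every reactant atom ends up in the desired product.
print(atom_economy(100.0, [58.0, 42.0]))           # 100.0
# A reaction that also releases a 36 g/mol byproduct alongside the product:
print(round(atom_economy(64.0, [58.0, 42.0]), 1))  # 64.0
```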
Atom economy is a different concern than chemical yield, because a high-yielding process can still result in substantial byproducts. Examples include the Cannizzaro reaction, in which approximately 50% of the reactant aldehyde becomes the other oxidation state of the target; the Wittig and Suzuki reactions which use high-mass reagents that ultimately become waste; and the Gabriel synthesis, which produces a stoichiometric quantity of phthalic acid salts.
If the desired product has an enantiomer the reaction needs to be sufficiently stereoselective even when atom economy is 100%. A Diels-Alder reaction is an example of a potentially very atom efficient reaction that also can be chemo-, regio-, diastereo- and enantioselective. Catalytic hydrogenation comes the closest to being an ideal reaction that is extensively practiced both industrially and academically.
Atom economy can also be adjusted if a pendant group is recoverable, for example Evans auxiliary groups. However, if this can be avoided it is more desirable, as recovery processes will never be 100%. Atom economy can be improved upon by careful selection of starting materials and a catalyst system.
Poor atom economy is common in fine chemicals or pharmaceuticals synthesis, and especially in research, where the aim to readily and reliably produce a wide range of complex compounds leads to the use of versatile and dependable, but poorly atom-economical reactions. For example, synthesis of an alcohol is readily accomplished by reduction of an ester with lithium aluminium hydride, but the reaction necessarily produces a voluminous floc of aluminum salts, which have to be separated from the product alcohol and disposed of. The cost of such hazardous material disposal can be considerable. Catalytic hydrogenolysis of an ester is the analogous reaction with a high atom economy, but it requires catalyst optimization, is a much slower reaction and is not applicable universally.
Creating reactions utilizing atom economy
It is fundamental to chemical reactions of the form A + B → C + D that two products are necessarily generated, even though product C may be the only one desired. In that case, D is considered a byproduct. As it is a significant goal of green chemistry to maximize the efficiency of the reactants and minimize the production of waste, D must either be found to have a use, be eliminated, or be as insignificant and innocuous as possible. With the new equation of the form A + B → C, the first step in making chemical manufacturing more efficient is the use of reactions that resemble simple addition reactions, with the only other additions being catalytic materials.
References
Stoichiometry
Green chemistry | Atom economy | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 750 | [
"Green chemistry",
"Chemical reaction engineering",
"Stoichiometry",
"Chemical engineering",
"Environmental chemistry",
"nan"
] |
1,736,264 | https://en.wikipedia.org/wiki/Second%20moment%20of%20area | The second moment of area, or second area moment, or quadratic moment of area and also known as the area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an I (for an axis that lies in the plane of the area) or with a J (for an axis perpendicular to the plane). In both cases, it is calculated with a multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its unit of dimension, when working with the International System of Units, is meters to the fourth power, m^4, or inches to the fourth power, in^4, when working in the Imperial System of Units or the US customary system.
In structural engineering, the second moment of area of a beam is an important property used in the calculation of the beam's deflection and the calculation of stress caused by a moment applied to the beam. In order to maximize the second moment of area, a large fraction of the cross-sectional area of an I-beam is located at the maximum possible distance from the centroid of the I-beam's cross-section. The planar second moment of area provides insight into a beam's resistance to bending due to an applied moment, force, or distributed load perpendicular to its neutral axis, as a function of its shape. The polar second moment of area provides insight into a beam's resistance to torsional deflection, due to an applied moment parallel to its cross-section, as a function of its shape.
Different disciplines use the term moment of inertia (MOI) to refer to different moments. It may refer to either of the planar second moments of area (often I_x = ∬_R y² dA or I_y = ∬_R x² dA, with respect to some reference plane), or the polar second moment of area (J = ∬_R r² dA, where r is the distance to some reference axis). In each case the integral is over all the infinitesimal elements of area, dA, in some two-dimensional cross-section. In physics, moment of inertia is strictly the second moment of mass with respect to distance from an axis: I = ∫ r² dm, where r is the distance to some potential rotation axis, and the integral is over all the infinitesimal elements of mass, dm, in a three-dimensional space occupied by an object. The MOI, in this sense, is the analog of mass for rotational problems. In engineering (especially mechanical and civil), moment of inertia commonly refers to the second moment of the area.
Definition
The second moment of area for an arbitrary shape R with respect to an arbitrary axis BB′ (an axis lying in the plane of the area) is defined as
I_BB′ = ∬_R ρ² dA
where
dA is the infinitesimal area element, and
ρ is the perpendicular distance from the axis BB′ to the element dA.
For example, when the desired reference axis is the x-axis, the second moment of area (often denoted as I_x) can be computed in Cartesian coordinates as
I_x = ∬_R y² dx dy
The second moment of the area is crucial in Euler–Bernoulli theory of slender beams.
Product moment of area
More generally, the product moment of area is defined as
I_xy = ∬_R x y dx dy
Parallel axis theorem
It is sometimes necessary to calculate the second moment of area of a shape with respect to an x′ axis different to the centroidal axis of the shape. However, it is often easier to derive the second moment of area with respect to its centroidal axis, I_x, and use the parallel axis theorem to derive the second moment of area with respect to the x′ axis. The parallel axis theorem states
I_x′ = I_x + A d²
where
A is the area of the shape, and
d is the perpendicular distance between the x and x′ axes.
A similar statement can be made about a y′ axis and the parallel centroidal y axis. Or, in general, any centroidal axis and a parallel axis.
Perpendicular axis theorem
For the simplicity of calculation, it is often desired to define the polar moment of area (with respect to a perpendicular axis) in terms of two area moments of inertia (both with respect to in-plane axes). The simplest case relates J_z to I_x and I_y:
J_z = I_x + I_y
This relationship relies on the Pythagorean theorem, which relates x and y to ρ (ρ² = x² + y²), and on the linearity of integration.
Composite shapes
For more complex areas, it is often easier to divide the area into a series of "simpler" shapes. The second moment of area for the entire shape is the sum of the second moment of areas of all of its parts about a common axis. This can include shapes that are "missing" (i.e. holes, hollow shapes, etc.), in which case the second moment of area of the "missing" areas are subtracted, rather than added. In other words, the second moment of area of "missing" parts are considered negative for the method of composite shapes.
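A minimal Python sketch of the composite-shapes method, using the parallel axis theorem introduced earlier; the hollow-tube dimensions and the offset-void variant are illustrative assumptions, not taken from the text.

```python
def rect_Ix_centroid(b: float, h: float) -> float:
    """Second moment of area of a b x h rectangle about its own centroidal x-axis."""
    return b * h ** 3 / 12.0

def parallel_axis(I_centroid: float, area: float, d: float) -> float:
    """Shift a centroidal second moment of area to a parallel axis a distance d away."""
    return I_centroid + area * d ** 2

# Hollow rectangular tube, outer 100 x 200 mm, wall 10 mm, about the centroidal x-axis.
# Composite method: outer rectangle minus the "missing" inner rectangle.
I_outer = rect_Ix_centroid(100.0, 200.0)
I_inner = rect_Ix_centroid(80.0, 180.0)
print(I_outer - I_inner)   # mm^4; both centroids coincide, so no axis shift is needed

# If the inner void were offset 20 mm from the tube centroid, the parallel axis
# theorem would first shift the void's moment to the tube axis before subtracting.
I_offset_void = parallel_axis(rect_Ix_centroid(80.0, 180.0), 80.0 * 180.0, 20.0)
print(I_outer - I_offset_void)
```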
Examples
See list of second moments of area for other shapes.
Rectangle with centroid at the origin
Consider a rectangle with base b and height h whose centroid is located at the origin. I_x represents the second moment of area with respect to the x-axis; I_y represents the second moment of area with respect to the y-axis; J_z represents the polar moment of inertia with respect to the z-axis.
I_x = b h³ / 12
I_y = h b³ / 12
Using the perpendicular axis theorem we get the value of J_z:
J_z = I_x + I_y = b h³ / 12 + h b³ / 12 = b h (b² + h²) / 12
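For completeness, a short worked derivation (standard calculus, not from the original text) showing how the double integral over the rectangle yields the results above:

```latex
I_x = \iint_A y^2 \, dA
    = \int_{-b/2}^{b/2} \int_{-h/2}^{h/2} y^2 \, dy \, dx
    = \int_{-b/2}^{b/2} \frac{h^3}{12} \, dx
    = \frac{b h^3}{12},
\qquad
I_y = \frac{h b^3}{12},
\qquad
J_z = I_x + I_y = \frac{b h \left(b^2 + h^2\right)}{12}.
```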
Annulus centered at origin
Consider an annulus whose center is at the origin, outside radius is r_2, and inside radius is r_1. Because of the symmetry of the annulus, the centroid also lies at the origin. We can determine the polar moment of inertia, J_z, about the z axis by the method of composite shapes. This polar moment of inertia is equivalent to the polar moment of inertia of a circle with radius r_2 minus the polar moment of inertia of a circle with radius r_1, both centered at the origin. First, let us derive the polar moment of inertia of a circle with radius r with respect to the origin. In this case, it is easier to calculate J_z directly, as we already have ρ², which has both an x and y component. Instead of obtaining the second moment of area from Cartesian coordinates as done in the previous section, we shall calculate J_z directly using polar coordinates.
J_z = ∬_R ρ² dA = ∫_0^{2π} ∫_0^r ρ² (ρ dρ) dθ = ∫_0^{2π} (r⁴ / 4) dθ = π r⁴ / 2
Now, the polar moment of inertia about the z axis for an annulus is simply, as stated above, the difference of the second moments of area of a circle with radius r_2 and a circle with radius r_1:
J_z = π r_2⁴ / 2 − π r_1⁴ / 2 = (π / 2)(r_2⁴ − r_1⁴)
Alternatively, we could change the limits on the integral the first time around to reflect the fact that there is a hole:
J_z = ∫_0^{2π} ∫_{r_1}^{r_2} ρ² (ρ dρ) dθ = (π / 2)(r_2⁴ − r_1⁴)
Any polygon
The second moment of area about the origin for any simple polygon on the XY-plane can be computed in general by summing contributions from each segment of the polygon after dividing the area into a set of triangles. This formula is related to the shoelace formula and can be considered a special case of Green's theorem.
A polygon is assumed to have vertices, numbered in counter-clockwise fashion. If polygon vertices are numbered clockwise, returned values will be negative, but absolute values will be correct.
I_x = (1/12) Σ_{i=1}^{n} (x_i y_{i+1} − x_{i+1} y_i)(y_i² + y_i y_{i+1} + y_{i+1}²)
I_y = (1/12) Σ_{i=1}^{n} (x_i y_{i+1} − x_{i+1} y_i)(x_i² + x_i x_{i+1} + x_{i+1}²)
where (x_i, y_i) are the coordinates of the i-th polygon vertex, for i = 1, …, n. Also, (x_{n+1}, y_{n+1}) are assumed to be equal to the coordinates of the first vertex, i.e., x_{n+1} = x_1 and y_{n+1} = y_1.
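A minimal Python sketch of this vertex summation, assuming counter-clockwise ordering as stated above; the expressions coded here are the standard shoelace-style formulas for I_x and I_y about the origin.

```python
def polygon_second_moments(vertices):
    """Second moments of area I_x, I_y about the origin for a simple polygon.

    vertices: list of (x, y) tuples in counter-clockwise order.
    Clockwise ordering flips the sign of the results.
    """
    n = len(vertices)
    Ix = Iy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]   # wrap around to the first vertex
        cross = x0 * y1 - x1 * y0        # shoelace term for this edge
        Ix += (y0 ** 2 + y0 * y1 + y1 ** 2) * cross
        Iy += (x0 ** 2 + x0 * x1 + x1 ** 2) * cross
    return Ix / 12.0, Iy / 12.0

# Sanity check: a unit square with one edge on the x-axis gives I_x = b*h^3/3 = 1/3.
print(polygon_second_moments([(0, 0), (1, 0), (1, 1), (0, 1)]))  # (0.333..., 0.333...)
```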
See also
List of second moments of area
List of moments of inertia
Radius of gyration
References
Applied geometry
Beam theory
Structural analysis
Mechanical quantities
Moment (physics) | Second moment of area | [
"Physics",
"Mathematics",
"Engineering"
] | 1,509 | [
"Structural engineering",
"Mechanical quantities",
"Physical quantities",
"Applied mathematics",
"Quantity",
"Structural analysis",
"Aerospace engineering",
"Mechanics",
"Geometry",
"Mechanical engineering",
"Applied geometry",
"Moment (physics)"
] |
1,739,001 | https://en.wikipedia.org/wiki/Wetting | Wetting is the ability of a liquid to displace gas to maintain contact with a solid surface, resulting from intermolecular interactions when the two are brought together. These interactions occur in the presence of either a gaseous phase or another liquid phase not miscible with the wetting liquid. The degree of wetting (wettability) is determined by a force balance between adhesive and cohesive forces. There are two types of wetting: non-reactive wetting and reactive wetting.
Wetting is important in the bonding or adherence of two materials. The wetting power of a liquid, and surface forces which control wetting, are also responsible for related effects, including capillary effects. Surfactants can be used to increase the wetting power of liquids such as water.
Wetting has gained increasing attention in nanotechnology and nanoscience research, following the development of nanomaterials over the past two decades (i.e., graphene, carbon nanotube, boron nitride nanomesh).
Explanation
Wetting of a solid material with a liquid substance occurs when adhesive forces cause the liquid (as a droplet) to spread across the surface of the solid at the solid-liquid interface. However, cohesive forces acting on the liquid - at the liquid-vapor interface - counteract the adhesive forces to prevent the droplet from making full contact with the surface.
The contact angle (θ), as seen in Figure 1, is the angle at which the liquid–vapor interface meets the solid–liquid interface, and is determined by the balance between adhesive and cohesive forces. As the tendency of a drop to spread out over a flat, solid surface increases, the contact angle decreases. Thus, the contact angle is used as an inverse measure of wettability.
A contact angle less than 90° (low contact angle) usually indicates that wetting of the surface is very favorable, and the fluid will spread over a large area of the surface. Contact angles greater than 90° (high contact angle) generally mean that wetting of the surface is unfavorable, so the fluid will minimize contact with the surface and form a compact liquid droplet.
For water, a wettable surface may also be termed hydrophilic and a nonwettable surface hydrophobic. Superhydrophobic surfaces have contact angles greater than 150°, showing almost no contact between the liquid drop and the surface. This is sometimes referred to as the "Lotus effect". The table describes varying contact angles and their corresponding solid/liquid and liquid/liquid interactions. For nonwater liquids, the term lyophilic is used for low contact angle conditions and lyophobic is used when higher contact angles result. Similarly, the terms omniphobic and omniphilic apply to both polar and apolar liquids.
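A trivial Python sketch that applies the 90° and 150° thresholds quoted above to classify a measured water contact angle; the function name and boundary handling are purely illustrative.

```python
def classify_wetting(contact_angle_deg: float) -> str:
    """Rough wettability class of a water droplet from its contact angle."""
    if contact_angle_deg < 90.0:
        return "hydrophilic (favourable wetting, drop spreads)"
    if contact_angle_deg <= 150.0:
        return "hydrophobic (unfavourable wetting, compact droplet)"
    return "superhydrophobic (lotus effect, almost no contact)"

for angle in (30.0, 110.0, 160.0):
    print(angle, "->", classify_wetting(angle))
```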
High-energy vs. low-energy surfaces
Liquids can interact with two main types of solid surfaces. Traditionally, solid surfaces have been divided into high-energy and low-energy solids. The relative energy of a solid has to do with the bulk nature of the solid itself. Solids such as metals, glasses, and ceramics are known as 'hard solids' because the chemical bonds that hold them together (e.g., covalent, ionic, or metallic) are very strong. Thus, it takes a large amount of energy to break these solids (alternatively, a large amount of energy is required to cut the bulk and make two separate surfaces), so they are termed "high-energy". Most molecular liquids achieve complete wetting with high-energy surfaces.
The other type of solid is weak molecular crystals (e.g., fluorocarbons, hydrocarbons, etc.) where the molecules are held together essentially by physical forces (e.g., van der Waals forces and hydrogen bonds). Since these solids are held together by weak forces, a very low amount of energy is required to break them, thus they are termed "low-energy". Depending on the type of liquid chosen, low-energy surfaces can permit either complete or partial wetting.
Dynamic surfaces have been reported that undergo changes in surface energy upon the application of an appropriate stimuli. For example, a surface presenting photon-driven molecular motors was shown to undergo changes in water contact angle when switched between bistable conformations of differing surface energies.
Wetting of low-energy surfaces
Low-energy surfaces primarily interact with liquids through dispersive (van der Waals) forces. William Zisman produced several key findings:
Zisman observed that cos θ increases linearly as the surface tension (γLV) of the liquid decreases. Thus, he was able to establish a linear function between cos θ and the surface tension (γLV) for various organic liquids.
A surface is more wettable when γLV and θ are low. Zisman termed the intercept of these lines when cos θ = 1 as the critical surface tension (γc) of that surface. This critical surface tension is an important parameter because it is a characteristic of only the solid.
Knowing the critical surface tension of a solid, it is possible to predict the wettability of the surface.
The wettability of a surface is determined by the outermost chemical groups of the solid.
Differences in wettability between surfaces that are similar in structure are due to differences in the packing of the atoms. For instance, if a surface has branched chains, it will have poorer packing than a surface with straight chains.
Lower critical surface tension means a less wettable material surface.
Ideal solid surfaces
An ideal surface is flat, rigid, perfectly smooth, chemically homogeneous, and has zero contact angle hysteresis. Zero hysteresis implies the advancing and receding contact angles are equal. In other words, only one thermodynamically stable contact angle exists. When a drop of liquid is placed on such a surface, the characteristic contact angle is formed as depicted in Figure 1. Furthermore, on an ideal surface, the drop will return to its original shape if it is disturbed. The following derivations apply only to ideal solid surfaces; they are only valid for the state in which the interfaces are not moving and the phase boundary line exists in equilibrium.
Minimization of energy, three phases
Figure 3 shows the line of contact where three phases meet. In equilibrium, the net force per unit length acting along the boundary line between the three phases must be zero. The components of net force in the direction along each of the interfaces are given by:

γαθ + γθβ cos θ + γαβ cos α = 0
γθβ + γαθ cos θ + γαβ cos β = 0
γαβ + γαθ cos α + γθβ cos β = 0

where α, β, and θ are the angles shown and γij is the surface energy between the two indicated phases. These relations can also be expressed by an analog to a triangle known as Neumann's triangle, shown in Figure 4. Neumann's triangle is consistent with the geometrical restriction that α + β + θ = 2π, and applying the law of sines and law of cosines to it produces relations that describe how the interfacial angles depend on the ratios of surface energies.
Because these three surface energies form the sides of a triangle, they are constrained by the triangle inequalities, γij < γjk + γik meaning that not one of the surface tensions can exceed the sum of the other two. If three fluids with surface energies that do not follow these inequalities are brought into contact, no equilibrium configuration consistent with Figure 3 will exist.
Simplification to planar geometry, Young's relation
If the β phase is replaced by a flat rigid surface, as shown in Figure 5, then β = π, and the second net force equation simplifies to the Young equation,

γSG = γSL + γLG cos θ
which relates the surface tensions between the three phases: solid, liquid and gas. Subsequently, this predicts the contact angle of a liquid droplet on a solid surface from knowledge of the three surface energies involved. This equation also applies if the "gas" phase is another liquid, immiscible with the droplet of the first "liquid" phase.
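As a quick worked example of the Young relation just given, the following sketch solves γSG = γSL + γLG cos θ for the contact angle; the interfacial energies used are hypothetical illustrative numbers, not tabulated constants.

```python
import math

def young_contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Young contact angle (degrees) from the three interfacial energies.

    If (gamma_SG - gamma_SL) / gamma_LG falls outside [-1, 1] the Young equation
    has no solution: complete wetting (> 1) or no wetting (< -1).
    """
    ratio = (gamma_sg - gamma_sl) / gamma_lg
    if ratio >= 1.0:
        return 0.0      # complete wetting
    if ratio <= -1.0:
        return 180.0    # no wetting
    return math.degrees(math.acos(ratio))

# Hypothetical interfacial energies in mN/m (illustrative only).
print(round(young_contact_angle(gamma_sg=40.0, gamma_sl=30.0, gamma_lg=72.8), 1))
```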
Simplification to planar geometry, Young's relation derived from variational computation
Consider the interface as a curve for where is a free parameter. The free energy to be minimized is
with the constraints which we can write as and fixed volume .
The modified Lagrangian, taking into account the constraints is therefore
where are Lagrange multipliers. By definition, the momentum and the Hamiltonian which is computed to be:
Now, we recall that the boundary is free in the direction and is a free parameter. Therefore, we must have:
At the boundary and , therefore we recover the Young equation.
Non-ideal smooth surfaces and the Young contact angle
The Young equation assumes a perfectly flat and rigid surface often referred to as an ideal surface. In many cases, surfaces are far from this ideal situation, and two are considered here: the case of rough surfaces and the case of smooth surfaces that are still real (finitely rigid). Even in a perfectly smooth surface, a drop will assume a wide spectrum of contact angles ranging from the so-called advancing contact angle, θA, to the so-called receding contact angle, θR. The equilibrium contact angle (θ0) can be calculated from θA and θR as was shown by Tadmor as

θ0 = arccos[(ΓA cos θA + ΓR cos θR) / (ΓA + ΓR)]

where

ΓA = [sin³θA / (2 − 3 cos θA + cos³θA)]^(1/3) and ΓR = [sin³θR / (2 − 3 cos θR + cos³θR)]^(1/3)
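Assuming the Tadmor expression reconstructed above, the equilibrium contact angle can be estimated from measured advancing and receding angles with a few lines of code; the two input angles below are arbitrary example values.

```python
import math

def tadmor_equilibrium_angle(theta_a_deg, theta_r_deg):
    """Equilibrium contact angle (degrees) from advancing/receding angles,
    using the Tadmor weighting factors Gamma_A and Gamma_R quoted above."""
    def gamma_weight(theta_deg):
        t = math.radians(theta_deg)
        return (math.sin(t) ** 3 / (2.0 - 3.0 * math.cos(t) + math.cos(t) ** 3)) ** (1.0 / 3.0)

    g_a, g_r = gamma_weight(theta_a_deg), gamma_weight(theta_r_deg)
    cos_eq = (g_a * math.cos(math.radians(theta_a_deg)) +
              g_r * math.cos(math.radians(theta_r_deg))) / (g_a + g_r)
    return math.degrees(math.acos(cos_eq))

# Example: advancing 110 deg, receding 70 deg (hypothetical 40 deg hysteresis).
print(round(tadmor_equilibrium_angle(110.0, 70.0), 1))
```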
The Young–Dupré equation and spreading coefficient
The Young–Dupré equation (Thomas Young 1805; Athanase Dupré and Paul Dupré 1869) dictates that neither γSG nor γSL can be larger than the sum of the other two surface energies. The consequence of this restriction is the prediction of complete wetting when γSG > γSL + γLG and zero wetting when γSL > γSG + γLG. The lack of a solution to the Young–Dupré equation is an indicator that there is no equilibrium configuration with a contact angle between 0 and 180° for those situations.
A useful parameter for gauging wetting is the spreading parameter S,

S = γSG − (γSL + γLG)
When S > 0, the liquid wets the surface completely (complete wetting). When S < 0, partial wetting occurs.
Combining the spreading parameter definition with the Young relation yields the Young–Dupré equation:

S = γLG(cos θ − 1)
which only has physical solutions for θ when S < 0.
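The spreading parameter and the Young–Dupré relation lend themselves to a short numerical check. In the sketch below the interfacial energies are assumed illustrative values; the script classifies the wetting regime from the sign of S and, when S < 0, recovers the contact angle from S = γLG(cos θ − 1).

```python
import math

def spreading_parameter(gamma_sg, gamma_sl, gamma_lg):
    # S = gamma_SG - (gamma_SL + gamma_LG)
    return gamma_sg - (gamma_sl + gamma_lg)

def wetting_from_spreading(S, gamma_lg):
    if S >= 0:
        return "complete wetting (S >= 0)", 0.0
    # Young-Dupre: S = gamma_LG * (cos(theta) - 1), valid only for S < 0.
    cos_theta = max(1.0 + S / gamma_lg, -1.0)   # clamp: very negative S -> no wetting
    return "partial wetting (S < 0)", math.degrees(math.acos(cos_theta))

# Hypothetical interfacial energies in mN/m (illustrative only).
S = spreading_parameter(gamma_sg=40.0, gamma_sl=30.0, gamma_lg=72.8)
regime, theta = wetting_from_spreading(S, gamma_lg=72.8)
print(round(S, 1), regime, round(theta, 1))
```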
A generalized model for the contact angle of droplets on flat and curved surfaces
With improvements in measuring techniques such as AFM, confocal microscopy and SEM, researchers were able to produce and image droplets at ever smaller scales. With the reduction in droplet size came new experimental observations of wetting. These observations confirm that the modified Young's equation does not hold at the micro-nano scales. In addition the sign of the line tension is not maintained through the modified Young's equation.
For a sessile droplet, the free energy of the three phase system can be expressed as:
At constant volume in thermodynamic equilibrium, this reduces to:
Usually, the VdP term has been neglected for large droplets, however, VdP work becomes significant at small scales. The variation in pressure at constant volume at the free liquid-vapor boundary is due to the Laplace pressure, which is proportional to the mean curvature of the droplet, and is non zero. Solving the above equation for both convex and concave surfaces yields:
Where the constant parameters A, B, and C are defined as:
, and
This equation relates the contact angle , a geometric property of a sessile droplet to the bulk thermodynamics, the energy at the three phase contact boundary, and the curvature of the surface α. For the special case of a sessile droplet on a flat surface (α=0),
The first two terms are the modified Young's equation, while the third term is due to the Laplace pressure. This nonlinear equation correctly predicts the sign and magnitude of κ, the flattening of the contact angle at very small scales, and contact angle hysteresis.
Computational prediction of wetting
For many surface/adsorbate configurations, surface energy data and experimental observations are unavailable. As wetting interactions are of great importance in various applications, it is often desired to predict and compare the wetting behavior of various material surfaces with particular crystallographic orientations, with relation to water or other adsorbates. This can be done from an atomistic perspective with tools including molecular dynamics and density functional theory. In the theoretical prediction of wetting by ab initio approaches such as DFT, ice is commonly substituted for water. This is because DFT calculations are generally conducted assuming conditions of zero thermal movement of atoms, essentially meaning the simulation is conducted at absolute zero. This simplification nevertheless yields results that are relevant for the adsorption of water under realistic conditions and the use of ice for the theoretical simulation of wetting is commonplace.
Non-ideal rough solid surfaces
Unlike ideal surfaces, real surfaces do not have perfect smoothness, rigidity, or chemical homogeneity. Such deviations from ideality result in a phenomenon called contact angle hysteresis, which is defined as the difference between the advancing (θa) and receding (θr) contact angles:

H = θa − θr
When the contact angle is between the advancing and receding cases, the contact line is considered to be pinned and hysteretic behaviour can be observed, namely contact angle hysteresis. When these values are exceeded, the displacement of the contact line, such as the one in Figure 3, will take place by either expansion or retraction of the droplet. Figure 6 depicts the advancing and receding contact angles. The advancing contact angle is the maximum stable angle, whereas the receding contact angle is the minimum stable angle. Contact angle hysteresis occurs because many different thermodynamically stable contact angles are found on a nonideal solid. These varying thermodynamically stable contact angles are known as metastable states.
Such motion of a phase boundary, involving advancing and receding contact angles, is known as dynamic wetting. The difference between dynamic and static wetting angles is proportional to the capillary number, Ca = μV/γ, where μ is the liquid viscosity and V the velocity of the contact line. When a contact line advances, covering more of the surface with liquid, the contact angle increases and is generally related to the velocity of the contact line. If the velocity of a contact line is increased without bound, the contact angle increases, and as it approaches 180°, the gas phase will become entrained in a thin layer between the liquid and solid. This is a kinetic nonequilibrium effect which results from the contact line moving at such a high speed that complete wetting cannot occur.
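A rough, illustrative calculation of the capillary number makes the dynamic-wetting statement concrete; the viscosity and surface tension below are approximately those of water and the contact-line speeds are arbitrary examples.

```python
# Capillary number Ca = (dynamic viscosity * contact-line velocity) / surface tension.
mu = 1.0e-3      # Pa*s, roughly water (illustrative)
gamma = 72.8e-3  # N/m, roughly water (illustrative)

for velocity in (1e-4, 1e-2, 1.0):  # contact-line speeds in m/s (arbitrary examples)
    ca = mu * velocity / gamma
    print(f"V = {velocity:g} m/s -> Ca = {ca:.2e}")

# Small Ca: the dynamic contact angle stays close to the static value.
# Ca approaching order unity: strong viscous distortion of the meniscus and,
# as the text notes, eventual air entrainment behind a fast-moving contact line.
```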
A well-known departure from ideal conditions is when the surface of interest has a rough texture. The rough texture of a surface can fall into one of two categories: homogeneous or heterogeneous. A homogeneous wetting regime is where the liquid fills in the grooves of a rough surface. A heterogeneous wetting regime, though, is where the surface is a composite of two types of patches. An important example of such a composite surface is one composed of patches of both air and solid. Such surfaces have varied effects on the contact angles of wetting liquids. Cassie–Baxter and Wenzel are the two main models that attempt to describe the wetting of textured surfaces. However, these equations only apply when the drop size is sufficiently large compared with the surface roughness scale. When the droplet size is comparable to that of the underlying pillars, the effect of line tension should be considered.
Wenzel's model
The Wenzel model describes the homogeneous wetting regime, as seen in Figure 7, and is defined by the following equation for the contact angle on a rough surface:

cos θ* = r cos θ

where θ* is the apparent contact angle which corresponds to the stable equilibrium state (i.e. minimum free energy state for the system). The roughness ratio, r, is a measure of how surface roughness affects a homogeneous surface. The roughness ratio is defined as the ratio of true area of the solid surface to the apparent area.
θ is the contact angle for a system in thermodynamic equilibrium, defined for a perfectly flat surface. Although Wenzel's equation demonstrates the contact angle of a rough surface is different from the intrinsic contact angle, it does not describe contact angle hysteresis.
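The Wenzel relation is straightforward to evaluate numerically. The sketch below assumes an illustrative roughness ratio and intrinsic (Young) contact angle, and shows how roughness amplifies the intrinsic wetting tendency in both a hydrophilic and a hydrophobic case.

```python
import math

def wenzel_apparent_angle(theta_young_deg, roughness_ratio):
    """Apparent contact angle (degrees) from cos(theta*) = r * cos(theta_Young).

    The result is clamped to 0 or 180 degrees when r*cos(theta) leaves [-1, 1],
    i.e. when the model predicts complete wetting or complete drying.
    """
    c = roughness_ratio * math.cos(math.radians(theta_young_deg))
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

r = 1.8  # assumed ratio of true to projected solid area (illustrative)
for theta in (60.0, 110.0):  # intrinsically wetting vs non-wetting examples
    print(theta, "->", round(wenzel_apparent_angle(theta, r), 1))
```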
Cassie–Baxter model
When dealing with a heterogeneous surface, the Wenzel model is not sufficient. A more complex model is needed to measure how the apparent contact angle changes when various materials are involved. This heterogeneous surface, like that seen in Figure 8, is explained using the Cassie–Baxter equation (Cassie's law):

cos θCB = rf f cos θY + f − 1

Here rf is the roughness ratio of the wet surface area and f is the fraction of solid surface area wet by the liquid. When f = 1 and rf = r, the Cassie–Baxter equation becomes the Wenzel equation. On the other hand, when there are many different fractions of surface roughness, each fraction of the total surface area is denoted by fi.
A summation of all fi equals 1, i.e. the total surface. Cassie–Baxter can also be recast in the following equation:

γ cos θCB = Σi fi(γi,SV − γi,SL)

Here γ is the Cassie–Baxter surface tension between liquid and vapor, γi,SV is the solid–vapor surface tension of every component, and γi,SL is the solid–liquid surface tension of every component. A case that is worth mentioning is when the liquid drop is placed on the substrate and creates small air pockets underneath it. This case for a two-component system is denoted by:

γ cos θCB = f1(γ1,SV − γ1,SL) − (1 − f1)γ
Here the key difference to notice is that there is no surface tension between the solid and the vapor for the second surface tension component. This is because of the assumption that the surface of air that is exposed is under the droplet and is the only other substrate in the system. Subsequently, the air contribution reduces to the term −(1 − f)γ. Therefore, the Cassie equation can be easily derived from the Cassie–Baxter equation. Experimental results regarding the surface properties of Wenzel versus Cassie–Baxter systems showed the effect of pinning for a Young angle of 180 to 90°, a region classified under the Cassie–Baxter model. This liquid/air composite system is largely hydrophobic. After that point, a sharp transition to the Wenzel regime was found where the drop wets the surface, but no further than the edges of the drop. Actually, the Young, Wenzel and Cassie-Baxter equations represent the transversality conditions of the variational problem of wetting.
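The Cassie–Baxter expression for a drop resting partly on trapped air can likewise be evaluated numerically; the solid fraction, wet-area roughness, and Young angle below are assumptions chosen only to illustrate how air pockets raise the apparent contact angle.

```python
import math

def cassie_baxter_angle(theta_young_deg, f_solid, r_f=1.0):
    """Apparent angle (degrees) for a composite solid/air interface:
    cos(theta_CB) = r_f * f * cos(theta_Y) + f - 1."""
    c = r_f * f_solid * math.cos(math.radians(theta_young_deg)) + f_solid - 1.0
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

# A moderately hydrophobic flat material (Young angle 110 deg) on a pillar texture
# where the drop touches only 10% solid, the rest being trapped air (assumed values).
print(round(cassie_baxter_angle(110.0, f_solid=0.10, r_f=1.0), 1))
```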
Precursor film
With the advent of high resolution imaging, researchers have started to obtain experimental data which have led them to question the assumptions of the Cassie–Baxter equation when calculating the apparent contact angle. These groups believe the apparent contact angle is largely dependent on the triple line. The triple line, which is in contact with the heterogeneous surface, cannot rest on the heterogeneous surface like the rest of the drop. In theory, it should follow the surface imperfection. This bending in the triple line is unfavorable and is not seen in real-world situations. A theory that preserves the Cassie–Baxter equation while at the same time explaining the presence of the minimized energy state of the triple line hinges on the idea of a precursor film. This film of submicrometer thickness advances ahead of the motion of the droplet and is found around the triple line. Furthermore, this precursor film allows the triple line to bend and take different conformations that were originally considered unfavorable. This precursor fluid has been observed using environmental scanning electron microscopy (ESEM) in surfaces with pores formed in the bulk. With the introduction of the precursor film concept, the triple line can follow energetically feasible conformations, thereby correctly explaining the Cassie–Baxter model.
"Petal effect" vs. "lotus effect"
The intrinsic hydrophobicity of a surface can be enhanced by being textured with different length scales of roughness. The red rose takes advantage of this by using a hierarchy of micro- and nanostructures on each petal to provide sufficient roughness for superhydrophobicity. More specifically, each rose petal has a collection of micropapillae on the surface and each papilla, in turn, has many nanofolds. The term "petal effect" describes the fact that a water droplet on the surface of a rose petal is spherical in shape, but cannot roll off even if the petal is turned upside down. The water drops maintain their spherical shape due to the superhydrophobicity of the petal (contact angle of about 152.4°), but do not roll off because the petal surface has a high adhesive force with water.
When comparing the "petal effect" to the "lotus effect", it is important to note some striking differences. The surface structure of the lotus leaf and the rose petal, as seen in Figure 9, can be used to explain the two different effects.
The lotus leaf has a randomly rough surface and low contact angle hysteresis, which means the water droplet is not able to wet the microstructure spaces between the spikes. This allows air to remain inside the texture, causing a heterogeneous surface composed of both air and solid. As a result, the adhesive force between the water and the solid surface is extremely low, allowing the water to roll off easily (i.e. "self-cleaning" phenomenon).
The rose petal's micro- and nanostructures are larger in scale than those of the lotus leaf, which allows the liquid film to impregnate the texture. However, as seen in Figure 9, the liquid can enter the larger-scale grooves, but it cannot enter into the smaller grooves. This is known as the Cassie impregnating wetting regime. Since the liquid can wet the larger-scale grooves, the adhesive force between the water and solid is very high. This explains why the water droplet will not fall off even if the petal is tilted at an angle or turned upside down. This effect will fail if the droplet has a volume larger than 10 μL because the balance between weight and surface tension is surpassed.
Cassie–Baxter to Wenzel transition
In the Cassie–Baxter model, the drop sits on top of the textured surface with trapped air underneath. During the wetting transition from the Cassie state to the Wenzel state, the air pockets are no longer thermodynamically stable and liquid begins to nucleate from the middle of the drop, creating a "mushroom state" as seen in Figure 10. The penetration condition is given by:

cos θC = (Φ − 1)/(r − Φ)
where
θC is the critical contact angle
Φ is the fraction of solid/liquid interface where drop is in contact with surface
r is solid roughness (for flat surface, r = 1)
The penetration front propagates to minimize the surface energy until it reaches the edges of the drop, thus arriving at the Wenzel state. Since the solid can be considered an absorptive material due to its surface roughness, this phenomenon of spreading and imbibition is called hemiwicking. The contact angles at which spreading/imbibition occurs are between 0 and π/2.
The Wenzel model is valid between θC and π/2. If the contact angle is less than θC, the penetration front spreads beyond the drop and a liquid film forms over the surface. Figure 11 depicts the transition from the Wenzel state to the surface film state. The film smoothes the surface roughness and the Wenzel model no longer applies. In this state, the equilibrium condition and Young's relation yield:

cos θC = (1 − Φ)/(r − Φ)
By fine-tuning the surface roughness, it is possible to achieve a transition between both superhydrophobic and superhydrophilic regions. Generally, the rougher the surface, the more hydrophobic it is.
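Taking the two critical-angle relations quoted in this section at face value, a short script can indicate which wetting state a given texture favours; the roughness ratio and solid fraction are assumed example values, and the formulas follow the reconstructions given above.

```python
import math

def critical_angles(r, phi):
    """Critical contact angles (degrees) for a texture of roughness ratio r and
    solid fraction phi, using the relations quoted in the text:
      Cassie -> Wenzel penetration:         cos(theta_c) = (phi - 1) / (r - phi)
      Wenzel -> surface film (hemiwicking): cos(theta_c) = (1 - phi) / (r - phi)
    Assumes r > phi and that both ratios fall within [-1, 1]."""
    cassie_wenzel = math.degrees(math.acos((phi - 1.0) / (r - phi)))
    wenzel_film = math.degrees(math.acos((1.0 - phi) / (r - phi)))
    return cassie_wenzel, wenzel_film

cw, film = critical_angles(r=2.0, phi=0.3)  # assumed texture parameters
print(f"Cassie state favoured above ~{cw:.0f} deg; "
      f"hemiwicking film below ~{film:.0f} deg; Wenzel regime in between.")
```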
Spreading dynamics
If a drop is placed on a smooth, horizontal surface, it is generally not in the equilibrium state. Hence, it spreads until an equilibrium contact radius is reached (partial wetting). While taking into account capillary, gravitational, and viscous contributions, the drop radius as a function of time can be expressed as
For the complete wetting situation, the drop radius at any time during the spreading process is given by
where
γLG is surface tension of the fluid
V is drop volume
η is viscosity of the fluid
ρ is density of the fluid
g is gravitational acceleration
λ is the shape factor, 37.1 m−1
t0 is experimental delay time
re is drop radius in equilibrium
Modifying wetting properties
Surfactants
Many technological processes require control of liquid spreading over solid surfaces. When a drop is placed on a surface, it can completely wet, partially wet, or not wet the surface. By reducing the surface tension with surfactants, a nonwetting material can be made to become partially or completely wetting. The excess free energy (σ) of a drop on a solid surface is:
γ is the liquid–vapor interfacial tension
γSL is the solid–liquid interfacial tension
γSV is the solid–vapor interfacial tension
S is the area of liquid–vapor interface
P is the excess pressure inside liquid
R is the radius of droplet base
Based on this equation, the excess free energy is minimized when γ decreases, γSL decreases, or γSV increases. Surfactants are adsorbed onto the liquid–vapor, solid–liquid, and solid–vapor interfaces, which modifies the wetting behavior of hydrophobic materials to reduce the free energy. When surfactants are adsorbed onto a hydrophobic surface, the polar head groups face into the solution with the tail pointing outward. On more hydrophobic surfaces, surfactants may form a bilayer on the solid, causing it to become more hydrophilic. The dynamic drop radius can be characterized as the drop begins to spread. Thus, the contact angle changes based on the following equation:
θ0 is initial contact angle
θ∞ is final contact angle
τ is the surfactant transfer time scale
As the surfactants are adsorbed, the solid–vapor surface tension increases and the edges of the drop become hydrophilic. As a result, the drop spreads.
Surface changes
Ferrocene is a redox-active organometallic compound which can be incorporated into various monomers and used to make polymers which can be tethered onto a surface. Vinylferrocene (ferrocenylethene) can be prepared by a Wittig reaction and then polymerized to form polyvinylferrocene (PVFc), an analog of polystyrene. Another polymer which can be formed is poly( ferrocenecarboxylate), PFcMA. Both PVFc and PFcMA have been tethered onto silica wafers and the wettability measured when the polymer chains are uncharged and when the ferrocene moieties are oxidised to produce positively charged groups, as illustrated at right. The contact angle with water on the PFcMA-coated wafers was 70° smaller following oxidation, while in the case of PVFc the decrease was 30°, and the switching of wettability has been shown to be reversible. In the PFcMA case, the effect of longer chains with more ferrocene groups (and also greater molar mass) has been investigated, and it was found that longer chains produce significantly larger contact angle reductions.
Oxygen vacancies
Rare earth oxides exhibit intrinsic hydrophobicity, and hence can be used in thermally stable heat exchangers and other applications involving high-temperature hydrophobicity. The presence of oxygen vacancies at surfaces of ceria or other rare earth oxides is instrumental in governing surface wettability. Adsorption of water at oxide surfaces can occur as molecular adsorption, in which H2O molecules remain intact at the terminated surface, or as dissociative adsorption, in which OH and H are adsorbed separately at solid surfaces. The presence of oxygen vacancies is generally found to enhance hydrophobicity while promoting dissociative adsorption.
See also
References
Further reading
External links
What is wettability?
Fluid mechanics
Surface science
Hysteresis | Wetting | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,752 | [
"Physical phenomena",
"Fluid mechanics",
"Surface science",
"Materials science",
"Civil engineering",
"Condensed matter physics",
"Hysteresis"
] |
3,308,154 | https://en.wikipedia.org/wiki/Reprogramming | In biology, reprogramming refers to erasure and remodeling of epigenetic marks, such as DNA methylation, during mammalian development or in cell culture. Such control is also often associated with alternative covalent modifications of histones.
Reprogrammings that are both large scale (10% to 100% of epigenetic marks) and rapid (hours to a few days) occur at three life stages of mammals. Almost 100% of epigenetic marks are reprogrammed in two short periods early in development after fertilization of an ovum by a sperm. In addition, almost 10% of DNA methylations in neurons of the hippocampus can be rapidly altered during formation of a strong fear memory.
After fertilization in mammals, DNA methylation patterns are largely erased and then re-established during early embryonic development. Almost all of the methylations from the parents are erased, first during early embryogenesis, and again in gametogenesis, with demethylation and remethylation occurring each time. Demethylation during early embryogenesis occurs in the preimplantation period. After a sperm fertilizes an ovum to form a zygote, rapid DNA demethylation of the paternal DNA and slower demethylation of the maternal DNA occurs until formation of a morula, which has almost no methylation. After the blastocyst is formed, methylation can begin, and with formation of the epiblast a wave of methylation then takes place until the implantation stage of the embryo. Another period of rapid and almost complete demethylation occurs during gametogenesis within the primordial germ cells (PGCs). Other than the PGCs, in the post-implantation stage, methylation patterns in somatic cells are stage- and tissue-specific with changes that presumably define each individual cell type and last stably over a long time.
Embryonic development
The mouse sperm genome is 80–90% methylated at its CpG sites in DNA, amounting to about 20 million methylated sites. After fertilization, the paternal chromosome is almost completely demethylated in six hours by an active process, before DNA replication (blue line in Figure). In the mature oocyte, about 40% of its CpG sites are methylated. Demethylation of the maternal chromosome largely takes place by blockage of the methylating enzymes from acting on maternal-origin DNA and by dilution of the methylated maternal DNA during replication (red line in Figure). The morula (at the 16 cell stage), has only a small amount of DNA methylation (black line in Figure). Methylation begins to increase at 3.5 days after fertilization in the blastocyst, and a large wave of methylation then occurs on days 4.5 to 5.5 in the epiblast, going from 12% to 62% methylation, and reaching maximum level after implantation in the uterus. By day seven after fertilization, the newly formed primordial germ cells (PGC) in the implanted embryo segregate from the remaining somatic cells. At this point the PGCs have about the same level of methylation as the somatic cells.
The newly formed primordial germ cells (PGC) in the implanted embryo devolve from the somatic cells. At this point the PGCs have high levels of methylation. These cells migrate from the epiblast toward the gonadal ridge. Now the cells are rapidly proliferating and beginning demethylation in two waves. In the first wave, demethylation is by replicative dilution, but in the second wave demethylation is by an active process. The second wave leads to demethylation of specific loci. At this point the PGC genomes display the lowest levels of DNA methylation of any cells in the entire life cycle [at embryonic day 13.5 (E13.5), see the second figure in this section].
After fertilization some cells of the newly formed embryo migrate to the germinal ridge and will eventually become the germ cells (sperm and oocytes) of the next generation. Due to the phenomenon of genomic imprinting, maternal and paternal genomes are differentially marked and must be properly reprogrammed every time they pass through the germline. Therefore, during the process of gametogenesis the primordial germ cells must have their original biparental DNA methylation patterns erased and re-established based on the sex of the transmitting parent.
After fertilization, the paternal and maternal genomes are demethylated in order to erase their epigenetic signatures and acquire totipotency. There is asymmetry at this point: the male pronucleus undergoes a quick and active demethylation. Meanwhile the female pronucleus is demethylated passively during consecutive cell divisions. The process of DNA demethylation involves base excision repair and likely other DNA-repair-based mechanisms. Despite the global nature of this process, there are certain sequences that avoid it, such as differentially methylated regions (DMRS) associated with imprinted genes, retrotransposons and centromeric heterochromatin. Remethylation is needed again to differentiate the embryo into a complete organism.
In vitro manipulation of pre-implantation embryos has been shown to disrupt methylation patterns at imprinted loci and plays a crucial role in cloned animals.
Learning and Memory
Learning and memory have levels of permanence, differing from other mental processes such as thought, language, and consciousness, which are temporary in nature. Learning and memory can be either accumulated slowly (multiplication tables) or rapidly (touching a hot stove), but once attained, can be recalled into conscious use for a long time. Rats subjected to one instance of contextual fear conditioning create an especially strong long-term memory. At 24 h after training, 9.17% of the genes in the rat genomes of hippocampus neurons were found to be differentially methylated. This included more than 2,000 differentially methylated genes at 24 hours after training, with over 500 genes being demethylated. The hippocampus region of the brain is where contextual fear memories are first stored (see figure of the brain, this section), but this storage is transient and does not remain in the hippocampus. In rats contextual fear conditioning is abolished when the hippocampus is subjected to hippocampectomy just 1 day after conditioning, but rats retain a considerable amount of contextual fear when a long delay (28 days) is imposed between the time of conditioning and the time of hippocampectomy.
Molecular stages
Three molecular stages are required for reprogramming the DNA methylome. Stage 1: Recruitment. The enzymes needed for reprogramming are recruited to genome sites that require demethylation or methylation. Stage 2: Implementation. The initial enzymatic reactions take place. In the case of methylation, this is a short step that results in the methylation of cytosine to 5-methylcytosine. Stage 3: Base excision DNA repair. The intermediate products of demethylation are catalysed by specific enzymes of the base excision DNA repair pathway that finally restore cytosine in the DNA sequence.
The Figure in this section indicates the central roles of ten-eleven translocation methylcytosine dioxygenases (TETs) in the demethylation of 5-methylcytosine to form cytosine. As reviewed in 2018, 5mC is very often initially oxidized by TET dioxygenases to generate 5-hydroxymethylcytosine (5hmC). In successive steps (see Figure) TET enzymes further hydroxylate 5hmC to generate 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). Thymine-DNA glycosylase (TDG) recognizes the intermediate bases 5fC and 5caC and excises the glycosidic bond resulting in an apyrimidinic site (AP site). In an alternative oxidative deamination pathway, 5hmC can be oxidatively deaminated by APOBEC (AID/APOBEC) deaminases to form 5-hydroxymethyluracil (5hmU) or 5mC can be converted to thymine (Thy). 5hmU can be cleaved by TDG, SMUG1, NEIL1, or MBD4. AP sites and T:G mismatches are then repaired by base excision repair (BER) enzymes to yield cytosine (Cyt).
TET family
The isoforms of the TET enzymes include at least two isoforms of TET1, one of TET2 and three isoforms of TET3. The full-length canonical TET1 isoform appears virtually restricted to early embryos, embryonic stem cells and primordial germ cells (PGCs). The dominant TET1 isoform in most somatic tissues, at least in the mouse, arises from alternative promoter usage which gives rise to a short transcript and a truncated protein designated TET1s. The isoforms of TET3 are the full length form TET3FL, a short form splice variant TET3s, and a form that occurs in oocytes and neurons designated TET3o. TET3o is created by alternative promoter use and contains an additional first N-terminal exon coding for 11 amino acids. TET3o only occurs in oocytes and neurons and was not expressed in embryonic stem cells or in any other cell type or adult mouse tissue tested. Whereas TET1 expression can barely be detected in oocytes and zygotes, and TET2 is only moderately expressed, the TET3 variant TET3o shows extremely high levels of expression in oocytes and zygotes, but is nearly absent at the 2-cell stage. It is possible that TET3o, high in neurons, oocytes and zygotes at the one cell stage, is the major TET enzyme utilized when very large scale rapid demethylations occur in these cells.
Recruitment of TET to DNA
The TET enzymes do not specifically bind to 5-methylcytosine except when recruited. Without recruitment or targeting, TET1 predominantly binds to high CG promoters and CpG islands (CGIs) genome-wide by its CXXC domain that can recognize un-methylated CGIs. TET2 does not have an affinity for 5-methylcytosine in DNA. The CXXC domain of the full-length TET3, which is the predominant form expressed in neurons, binds most strongly to CpGs where the C was converted to 5-carboxycytosine (5caC). However, it also binds to un-methylated CpGs.
For a TET enzyme to initiate demethylation it must first be recruited to a methylated CpG site in DNA. Two of the proteins shown to recruit a TET enzyme to a methylated cytosine in DNA are OGG1 (see figure Initiation of DNA demthylation) and EGR1.
OGG1
Oxoguanine glycosylase (OGG1) catalyses the first step in base excision repair of the oxidatively damaged base 8-OHdG. OGG1 finds 8-OHdG by sliding along linear DNA, scanning about 1,000 base pairs in 0.1 seconds, and so locates 8-OHdG very rapidly. OGG1 proteins bind to oxidatively damaged DNA with a half-maximum time of about 6 seconds. When OGG1 finds 8-OHdG it changes conformation and complexes with 8-OHdG in the binding pocket of OGG1. OGG1 does not immediately act to remove the 8-OHdG. Half-maximum removal of 8-OHdG takes about 30 minutes in HeLa cells in vitro, or about 11 minutes in the livers of irradiated mice. DNA oxidation by reactive oxygen species preferentially occurs at a guanine in a methylated CpG site, because of a lowered ionization potential of guanine bases adjacent to 5-methylcytosine. TET1 binds (is recruited to) the OGG1 bound to 8-OHdG (see figure). This likely allows TET1 to demethylate an adjacent methylated cytosine. When human mammary epithelial cells (MCF-10A) were treated with H2O2, 8-OHdG increased in DNA by 3.5-fold and this caused large scale demethylation of 5-methylcytosine to about 20% of its initial level in DNA.
EGR1
The gene early growth response protein 1 (EGR1) is an immediate early gene (IEG). The defining characteristic of IEGs is the rapid and transient up-regulation—within minutes—of their mRNA levels independent of protein synthesis. EGR1 can rapidly be induced by neuronal activity. In adulthood, EGR1 is expressed widely throughout the brain, maintaining baseline expression levels in several key areas of the brain including the medial prefrontal cortex, striatum, hippocampus and amygdala. This expression is linked to control of cognition, emotional response, social behavior and sensitivity to reward. EGR1 binds to DNA at sites with the motifs 5′-GCGTGGGCG-3′ and 5'-GCGGGGGCGG-3′ and these motifs occur primarily in promoter regions of genes. The short isoform TET1s is expressed in the brain. EGR1 and TET1s form a complex mediated by the C-terminal regions of both proteins, independently of association with DNA. EGR1 recruits TET1s to genomic regions flanking EGR1 binding sites. In the presence of EGR1, TET1s is capable of locus-specific demethylation and activation of the expression of downstream genes regulated by EGR1.
History
The first person to successfully demonstrate reprogramming was John Gurdon, who in 1962 demonstrated that differentiated somatic cells could be reprogrammed back into an embryonic state when he managed to obtain swimming tadpoles following the transfer of differentiated intestinal epithelial cells into enucleated frog eggs. For this achievement he received the 2012 Nobel Prize in Physiology or Medicine alongside Shinya Yamanaka. Yamanaka was the first to demonstrate (in 2006) that this somatic cell nuclear transfer or oocyte-based reprogramming process (see below) that Gurdon discovered could be recapitulated (in mice) by defined factors (Oct4, Sox2, Klf4, and c-Myc) to generate induced pluripotent stem cells (iPSCs). Other combinations of genes have also been used, including LIN28 and Homeobox protein NANOG.
Phases of reprogramming
With the discovery that cell fate could be altered, the question arose of what progression of events occurs as a cell undergoes reprogramming. As the final product of iPSC reprogramming was similar to embryonic stem cells in morphology, proliferation, gene expression, pluripotency, and telomerase activity, genetic and morphological markers were used as a way to determine what phase of reprogramming was occurring. Reprogramming is divided into three phases: initiation, maturation, and stabilization.
Initiation
The initiation phase is associated with the downregulation of cell type specific genes and the upregulation of pluripotent genes. As the cells move towards pluripotency, the telomerase activity is reactivated to extend telomeres. The cell morphology can directly affect the reprogramming process as the cell is modifying itself to prepare for the gene expression of pluripotency. The main indicator that the initiation phase has completed is that the first genes associated with pluripotency are expressed. This includes the expression of Oct-4 or Homeobox protein NANOG, while undergoing a mesenchymal–epithelial transition (MET), and the loss of apoptosis and senescence.
If the cell is directly reprogrammed from one somatic cell to another, the genes associated with each cell type begin to be upregulated and downregulated accordingly. This can occur either through direct cell reprogramming or by creating an intermediate, such as an iPSC, and differentiating it into the desired cell type.
The initiation phase is completed through one of three pathways: nuclear transfer, cell fusion, or defined factors (microRNA, transcription factor, epigenetic markers, and other small molecules).
Somatic cell nuclear transfer
An oocyte can reprogram an adult nucleus into an embryonic state after somatic cell nuclear transfer, so that a new organism can be developed from such cell.
Reprogramming is distinct from development of a somatic epitype, as somatic epitypes can potentially be altered after an organism has left the developmental stage of life. During somatic cell nuclear transfer, the oocyte turns off tissue specific genes in the somatic cell nucleus and turns back on embryonic specific genes. This process has been shown through cloning, as seen in John Gurdon's tadpole experiments and in Dolly the sheep. Notably, these events have shown that cell fate is a reversible process.
Cell fusion
Cell fusion is used to create a multinucleated cell called a heterokaryon. The fused cells allow otherwise silenced genes to become reactivated and expressed. As the genes are reactivated, the cells can re-differentiate. There are instances where transcriptional factors, such as the Yamanaka factors, are still needed to aid in heterokaryon cell reprogramming.
Defined factors
Unlike nuclear transfer and cell fusion, defined factors do not require a full genome, only reprogramming factors. These reprogramming factors include microRNAs, transcription factors, epigenetic markers, and other small molecules. The original transcription factors that led to iPSC development, discovered by Yamanaka, include Oct4, Sox2, Klf4, and c-Myc (OSKM factors). Although the OSKM factors have been shown to induce and aid in pluripotency, other transcription factors such as Homeobox protein NANOG, LIN28, TRA-1-60, and C/EBPα aid in the efficiency of reprogramming. MicroRNAs and other small molecules have also been used to increase the efficiency of reprogramming somatic cells to pluripotency.
Maturation
The maturation phase begins at the end of the initiation phase, when the first pluripotent genes are expressed. The cell is preparing itself to be independent from the defined factors that started the reprogramming process. The first genes to be detected in iPSCs are Oct4, Homeobox protein NANOG, and Esrrb, followed later by Sox2. In the later stages of maturation, transgene silencing marks the start of the cell becoming independent from the induced transcription factors. Once the cell is independent, the maturation phase ends and the stabilization phase begins.
As reprogramming has proven to be a variable and low-efficiency process, not all cells complete the maturation phase and achieve pluripotency. Some cells undergoing reprogramming instead undergo apoptosis at the beginning of the maturation stage, owing to oxidative stress brought on by the changes in gene expression. The use of microRNAs, proteins, and different combinations of the OSKM factors has begun to yield higher reprogramming efficiencies.
Stabilization
The stabilization phase refers to the processes in the cell that occur after the cell reaches pluripotency. One genetic marker is the expression of Sox2 and X chromosome reactivation, while epigenetic changes include the telomerase extending the telomeres and loss of the cell’s epigenetic memory. The epigenetic memory of a cell is reset by changes in DNA methylation, using activation-induced cytidine deaminase (AID), TET enzymes (TET), and DNA methyltransferases (DNMTs), starting in the maturation phase and continuing into the stabilization stage. Once the epigenetic memory of the cell is lost, the possibility of differentiation into the three germ layers is achieved. This is considered a fully reprogrammed cell as it can be passaged without reverting to its original somatic cell type.
In cell culture systems
Reprogramming can also be induced artificially through the introduction of exogenous factors, usually transcription factors. In this context, it often refers to the creation of induced pluripotent stem cells from mature cells such as adult fibroblasts. This allows the production of stem cells for biomedical research, such as research into stem cell therapies, without the use of embryos. It is carried out by the transfection of stem-cell associated genes into mature cells using viral vectors such as retroviruses.
Transcription factors
One of the first transacting factors discovered to change a cell was found in a myoblast when the complementary DNA (cDNA) coding for MyoD was expressed and converted a fibroblast to a myoblast. Another transacting factor that directly transformed a lymphoid cell into a myeloid cell was C/EBPα. MyoD and C/EBPα are examples of a small number of single factors that can transform cells. More often, a combination of transcription factors work in conjunction to reprogram a cell.
OSKM
The OSKM factors (Oct4, Sox2, Klf4, and c-Myc) were initially discovered by Yamanaka in 2006, by the induction of a mouse fibroblast into an induced pluripotent stem cell (iPSCs). Within the following year, these factors were used to induce human fibroblasts into iPSCs.
Oct4 is part of the core regulatory genes needed for pluripotency, as it is seen in both embryonic stem cells and tumors. Even small increases in Oct4 expression allow the start of differentiation into pluripotency. Oct4 works in conjunction with Sox2 for the expression of FGF4, which could aid in differentiation.
Sox2 is a gene used in maintaining pluripotency in stem cells. Oct4 and Sox2 work together to regulate hundreds of genes utilized in pluripotency. However, Sox2 is not the only possible Sox family member to participate in gene regulation with Oct4 – Sox4, Sox11, and Sox15 also participate, as the Sox protein is redundant throughout the stem cell genome.
Klf4 is a transcription factor used in proliferation, differentiation, apoptosis, and somatic cell reprogramming. When being utilized in cellular reprogramming, Klf4 prevents cell division of damaged cells using its apoptotic ability, and aids in histone acetyltransferase activity.
c-Myc is also known as an oncogene, and in certain conditions can become cancer causing. In cellular reprogramming, c-Myc is used for cell cycle progression, apoptosis, and cellular transformation for further differentiation.
NANOG
Homeobox protein NANOG (NANOG) is a transcription factor used to aid in the efficiency of generating iPSCs by maintaining pluripotency and suppressing cell determination factors. NANOG works by promoting chromatin accessibility through repression of histone markers, such as H3K27me3. NANOG aids recruitment of Oct4, Sox2, and Esrrb used in transcription, while also recruiting Brahma-related gene-1 (BRG1) for chromatin accessibility.
C/EBPα
C/EBPα is a commonly used factor when reprogramming cells not only into iPSCs, but also into other cell types. C/EBPα has shown itself to be a single transacting factor during direct reprogramming of a lymphoid cell into a myeloid cell. C/EBPα is considered a 'path breaker' that helps prepare the cell for uptake of the OSKM factors and specific transcription events. C/EBPα has also been shown to increase the efficiency of reprogramming events.
Variability
The properties of cells obtained after reprogramming can vary significantly, in particular among iPSCs. Factors leading to variation in the performance of reprogramming and functional features of end products include genetic background, tissue source, reprogramming factor stoichiometry and stressors related to cell culture.
See also
Induced stem cells
Epigenome editing
References
DNA
Epigenetics
Induced stem cells | Reprogramming | [
"Biology"
] | 5,249 | [
"Induced stem cells",
"Stem cell research"
] |
3,308,312 | https://en.wikipedia.org/wiki/Materials%20Today | Materials Today is a monthly peer-reviewed scientific journal, website, and journal family. The parent journal was established in 1998 and covers all aspects of materials science. It is published by Elsevier and the editors-in-chief are Jun Lou (Rice University) and Gleb Yushin (Georgia Institute of Technology). The journal principally publishes invited review articles, but other formats are also included, such as primary research articles, news items, commentaries, and opinion pieces on subjects of interest to the field. The website publishes news, educational webinars, podcasts, and blogs, as well as a jobs and events board. According to the Journal Citation Reports, the journal has a 2020 impact factor of 31.041.
The journal family includes Applied Materials Today, Materials Today Chemistry, Materials Today Energy, Materials Today Physics, Materials Today Nano, Materials Today Sustainability, Materials Today Communications, Materials Today Advances and Materials Today: Proceedings; as well as an extended collection of related publications.
History
The journal was established in 1998 as a collaboration between Elsevier and the European Materials Research Society. The founding editor was Phil Mestecky. The journal was distributed free of charge to society members and to anyone else who requested a subscription. The spin-off titles Materials Today Communications, Materials Today: Proceedings, and Applied Materials Today were launched between 2014 and 2015. In October 2016, Materials Today announced plans to further develop the journal and related family: including the appointment of new editors, the inclusion of primary research articles, and the planned launch of an extended family of titles. The journal transitioned into an open access publication in 2012 but announced the introduction of subscription articles alongside open-access articles from 2017.
References
External links
European Materials Research Society
English-language journals
Academic journals established in 1998
Professional and trade magazines
Materials science journals
Monthly journals
Elsevier academic journals
Hybrid open access journals | Materials Today | [
"Materials_science",
"Engineering"
] | 373 | [
"Materials science journals",
"Materials science"
] |
3,308,500 | https://en.wikipedia.org/wiki/International%20Technology%20Roadmap%20for%20Semiconductors | The International Technology Roadmap for Semiconductors (ITRS) is a set of documents that was coordinated and organized by Semiconductor Research Corporation and produced by a group of experts in the semiconductor industry. These experts were representative of the sponsoring organisations, including the Semiconductor Industry Associations of Taiwan, South Korea, the United States, Europe, Japan, and China.
As of 2017, ITRS is no longer being updated. Its successor is the International Roadmap for Devices and Systems.
The documents carried the disclaimer: "The ITRS is devised and intended for technology assessment only and is without regard to any commercial considerations pertaining to individual products or equipment".
The documents represent best opinion on the directions of research and time-lines up to about 15 years into the future for the following areas of technology:
History
Constructing an integrated circuit, or any semiconductor device, requires a series of operations—photolithography, etching, metal deposition, and so on. As the industry evolved, each of these operations was typically performed by specialized machines built by a variety of commercial companies. This specialization can make it difficult for the industry to advance, since in many cases it does no good for one company to introduce a new product if the other needed steps are not available around the same time. A technology roadmap can help this by giving an idea of when a certain capability will be needed. Then each supplier can target this date for their piece of the puzzle.
With the progressive externalization of production tools to the suppliers of specialized equipment, participants identified a need for a clear roadmap to anticipate the evolution of the market and to plan and control the technological needs of IC production. For several years, the Semiconductor Industry Association (SIA) gave this responsibility of coordination to the United States, which led to the creation of an American style roadmap, the National Technology Roadmap for Semiconductors (NTRS).
In 1998, the SIA became closer to its European, Japanese, Korean, and Taiwanese counterparts by creating the first global roadmap: The International Technology Roadmap for Semiconductors (ITRS). This international group has (as of the 2003 edition) 936 companies which were affiliated with working groups within the ITRS.
The organization was divided into Technical Working Groups (TWGs) which eventually grew in number to 17, each focusing on a key element of the technology and associated supply chain. Traditionally, the ITRS roadmap was updated in even years, and completely revised in odd years.
The last revision of the ITRS Roadmap was published in 2013. The methodology and the physics behind the scaling results in the 2013 tables are described in a transistor roadmap projection using predictive full-band atomistic modeling, which covers double-gate MOSFETs over the 15 years to 2028.
With the generally acknowledged sunsetting of Moore's law, and with ITRS issuing its final roadmap in 2016, a new initiative for more generalized roadmapping was started through the IEEE's Rebooting Computing initiative, named the International Roadmap for Devices and Systems (IRDS).
ITRS 2.0
In April 2014, the ITRS committee announced it would be reorganizing the ITRS roadmap to better suit the needs of the industry. The plan was to take all the elements included in the 17 technical working groups and map them into seven focus topics:
System integration This is a design-focused topic that examines architectures, and how to integrate heterogeneous blocks.
Outside system connectivity Focuses on wireless technologies, how they work, and how to choose the best solution.
Heterogeneous integration The focus will be on integration of separately manufactured technologies into a new unit so that they function better than the individual pieces do separately - whilst allowing for components such as cameras and microphones.
Heterogeneous components Focuses on different devices that form heterogeneous systems, such as MEMS, power generation, and sensing devices.
Beyond CMOS The focus is on devices that provide electronics but aren’t CMOS based, such as spintronics, memristors, and others.
More Moore Because there is still work to be done, this group will take on the continued shrinking of CMOS.
Factory integration Focus will be on the new tools and processes to produce heterogeneous integration of all these things.
Chapters on each topic were published in 2015.
References
Further reading
External links
Official itrs2 website
Mirror of the original website at Archive.org
Yearly ITRS reports
Semiconductor industry | International Technology Roadmap for Semiconductors | [
"Materials_science"
] | 920 | [
"Semiconductor technology",
"Microtechnology"
] |
3,308,651 | https://en.wikipedia.org/wiki/Flutamide | Flutamide, sold under the brand name Eulexin among others, is a nonsteroidal antiandrogen (NSAA) which is used primarily to treat prostate cancer. It is also used in the treatment of androgen-dependent conditions like acne, excessive hair growth, and high androgen levels in women. It is taken by mouth, usually three times per day.
Side effects in men include breast tenderness and enlargement, feminization, sexual dysfunction, and hot flashes. Conversely, the medication has fewer side effects and is better-tolerated in women with the most common side effect being dry skin. Diarrhea and elevated liver enzymes can occur in both sexes. Rarely, flutamide can cause liver damage, lung disease, sensitivity to light, elevated methemoglobin, elevated sulfhemoglobin, and deficient neutrophils. Numerous cases of liver failure and death have been reported, which has limited the use of flutamide.
Flutamide acts as a selective antagonist of the androgen receptor (AR), competing with androgens like testosterone and dihydrotestosterone (DHT) for binding to ARs in tissues like the prostate gland. By doing so, it prevents their effects and stops them from stimulating prostate cancer cells to grow. Flutamide is a prodrug to a more active form. Flutamide and its active form stay in the body for a relatively short time, which makes it necessary to take flutamide multiple times per day.
Flutamide was first described in 1967 and was first introduced for medical use in 1983. It became available in the United States in 1989. The medication has largely been replaced by newer and improved NSAAs, namely bicalutamide and enzalutamide, due to their better efficacy, tolerability, safety, and dosing frequency (once per day), and is now relatively little-used. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Prostate cancer
GnRH is released by the hypothalamus in a pulsatile fashion; this causes the anterior pituitary gland to release luteinizing hormone (LH) and follicle-stimulating hormone (FSH). LH stimulates the testes to produce testosterone, which is metabolized to DHT by the enzyme 5α-reductase.
DHT, and to a significantly smaller extent, testosterone, stimulate prostate cancer cells to grow. Therefore, blocking these androgens can provide powerful treatment for prostate cancer, especially metastatic disease. Normally administered are GnRH analogues, such as leuprorelin or cetrorelix. Although GnRH agonists stimulate the same receptors that GnRH does, since they are present continuously and not in a pulsatile manner, they serve to inhibit the pituitary gland and therefore block the whole chain. However, they initially cause a surge in activity; this is not solely a theoretical risk but may cause the cancer to flare. Flutamide was initially used at the beginning of GnRH agonist therapy to block this surge, and it and other NSAAs continue in this use. In contrast to GnRH agonists, GnRH antagonists don't cause an initial androgen surge, and are gradually replacing GnRH agonists in clinical use.
There have been studies to investigate the benefit of adding an antiandrogen to surgical orchiectomy or its continued use with a GnRH analogue (combined androgen blockade (CAB)). Adding antiandrogens to orchiectomy showed no benefit, while a small benefit was shown with adding antiandrogens to GnRH analogues.
Unfortunately, therapies which lower testosterone levels, such as orchiectomy or GnRH analogue administration, also have significant side effects. Compared to these therapies, treatment with antiandrogens exhibits "fewer hot flashes, less of an effect on libido, less muscle wasting, fewer personality changes, and less bone loss." However, antiandrogen therapy alone is less effective than surgery. Nevertheless, given the advanced age of many with prostate cancer, as well as other features, many men may choose antiandrogen therapy alone for a better quality of life.
Flutamide has been found to be similarly effective in the treatment of prostate cancer to bicalutamide, although indications of inferior efficacy, including greater compensatory increases in testosterone levels and greater reductions in PSA levels with bicalutamide, were observed. The medication, at a dosage of 750 mg/day (250 mg three times daily), has also been found to be equivalent in effectiveness to 250 mg/day oral cyproterone acetate as a monotherapy in the treatment of prostate cancer in a large-scale clinical trial of 310 patients, though its side effect and toxicity profiles (including gynecomastia, diarrhea, nausea, loss of appetite, and liver disturbances) were regarded as considerably worse than those of cyproterone acetate.
A dosage of 750 mg/day flutamide (250 mg/three times a day) is roughly equivalent in terms of effectiveness to 50 mg/day bicalutamide when used as the antiandrogen component in combined androgen blockade in the treatment of advanced prostate cancer.
Flutamide has been used to prevent the effects of the testosterone flare at the start of GnRH agonist therapy in men with prostate cancer.
The combination of flutamide with an estrogen such as ethinylestradiol sulfonate has been used as a form of combined androgen blockade and as an alternative to the combination of flutamide with surgical or medical castration.
Skin and hair conditions
Flutamide has been researched and used extensively in the treatment of androgen-dependent skin and hair conditions in women including acne, seborrhea, hirsutism, and scalp hair loss, as well as in hyperandrogenism (e.g., in polycystic ovary syndrome or congenital adrenal hyperplasia), and is effective in improving the symptoms of these conditions. The dosages used are lower than those used in the treatment of prostate cancer. Although flutamide continues to be used for these indications, its use in recent years has been limited due to the risk of potentially fatal hepatotoxicity, and it is no longer recommended as a first- or second-line therapy. The related NSAA bicalutamide has also been found to be effective in the treatment of hirsutism in women and appears to have comparable effectiveness to that of flutamide, but has a far lower and only small risk of hepatotoxicity in comparison.
Aside from its risk of liver toxicity, flutamide, along with the other nonsteroidal antiandrogens, has been said to be likely the best of the typically used antiandrogen medications for the treatment of androgen-dependent symptoms in women, owing to its high effectiveness and minimal side effects.
Acne and seborrhea
Flutamide has been found to be effective in the treatment of acne and seborrhea in women in a number of studies. In a long-term study of 230 women with acne, 211 of whom also had seborrhea, very-low-dose flutamide alone or in combination with an oral contraceptive caused a marked decrease in acne and seborrhea after 6 months of treatment, with maximal effect by 1 year of treatment and benefits maintained in the years thereafter. In the study, 97% of the women reported satisfaction with the control of their acne with flutamide. In another study, flutamide decreased acne and seborrhea scores by 80% in only 3 months. In contrast, spironolactone decreased symptoms by only 40% in the same time period, suggesting superior effectiveness for flutamide for these indications. Flutamide has, in general, been found to reduce symptoms of acne by as much as 90% even at low doses, with several studies showing complete acne clearance.
Excessive hair growth
Flutamide has been found to be effective in the treatment of hirsutism (excessive body/facial hair growth) in numerous studies. It possesses moderate effectiveness for this indication, and the overall quality of the evidence is considered to be moderate. The medication shows equivalent or superior effectiveness to other antiandrogens including spironolactone, cyproterone acetate, and finasteride in the treatment of hirsutism, although its relatively high risk of hepatotoxicity makes it unfavorable compared to these other options. It has been used to treat hirsutism at dosages ranging from 62.5 mg/day to 750 mg/day. A study found that multiple dosages of flutamide significantly reduced hirsutism in women with polycystic ovary syndrome and that there were no significant differences in the effectiveness for dosages of 125 mg/day, 250 mg/day, and 375 mg/day. In addition, a study found that combination of 125 mg/day flutamide with finasteride was no more effective than 125 mg/day flutamide alone in the treatment of hirsutism. These findings support the use of flutamide at lower doses for hirsutism without loss of effectiveness, which may help to lower the risk of hepatotoxicity. However, the risk has been found to remain even at very low doses.
Scalp hair loss
Flutamide has been found to be effective in the treatment of female pattern hair loss in a number of studies. In one study of 101 pre- and postmenopausal women, flutamide alone or in combination with an oral contraceptive produced a marked decrease in hair loss scores after 1 year of treatment, with maximum effect after 2 years of treatment and benefits maintained for another 2 years. In a small study of flutamide with an oral contraceptive, the medication caused an increase in cosmetically acceptable hair density in 6 of 7 women with diffuse scalp hair loss. In a comparative study, flutamide significantly improved scalp hair growth (21% reduction in Ludwig scores) in hyperandrogenic women after 1 year of treatment, whereas cyproterone acetate and finasteride were ineffective.
Other uses
Flutamide has been used in case reports to decrease the frequency of spontaneous orgasms, for instance in men with post-orgasmic illness syndrome.
Available forms
Flutamide is available in the form of 125 mg oral capsules and 250 mg oral tablets.
Side effects
The side effects of flutamide are sex-dependent. In men, a variety of side effects related to androgen deprivation may occur, the most common being gynecomastia and breast tenderness. Others include hot flashes, decreased muscle mass, decreased bone mass and an associated increased risk of fractures, depression, and sexual dysfunction including reduced libido and erectile dysfunction. In women, flutamide is, generally, relatively well tolerated, and does not interfere with ovulation. The only common side effect of flutamide in women is dry skin (75%), which can be attributed to a reduction of androgen-mediated sebum production. General side effects that may occur in either sex include dizziness, lack of appetite, gastrointestinal side effects such as nausea, vomiting, and diarrhea, a greenish-bluish discoloration of the urine, and hepatic changes. Because flutamide is a pure antiandrogen, unlike steroidal antiandrogens like cyproterone acetate and megestrol acetate (which additionally possess progestogenic activity), it does not appear to have a risk of cardiovascular side effects (e.g., thromboembolism) or fluid retention.
Gynecomastia
Flutamide, as a monotherapy, causes gynecomastia in 30 to 79% of men, and also produces breast tenderness. However, more than 90% of cases of gynecomastia with NSAAs including flutamide are mild to moderate. Tamoxifen, a selective estrogen receptor modulator (SERM) with predominantly antiestrogenic actions, can counteract flutamide-induced gynecomastia and breast pain in men.
Diarrhea
Diarrhea is more common and sometimes more severe with flutamide than with other NSAAs. In a comparative trial of combined androgen blockade for prostate cancer, the rate of diarrhea was 26% for flutamide and 12% for bicalutamide. Moreover, 6% of flutamide-treated patients discontinued the medication due to diarrhea, whereas only 0.5% of bicalutamide-treated patients did so. In the case of antiandrogen monotherapy for prostate cancer, the rates of diarrhea are 5 to 20% for flutamide, 2 to 5% for bicalutamide, and 2 to 4% for nilutamide. In contrast to diarrhea, the rates of nausea and vomiting are similar among the three medications.
Rare reactions
Liver toxicity
Although rare, flutamide has been associated with severe hepatotoxicity and death. By 1996, 46 cases of severe cholestatic hepatitis had been reported, with 20 fatalities. There have been continued case reports since, including liver transplants and death. A 2021 review of the literature found 15 cases of serious hepatotoxicity in women treated with flutamide, including 7 liver transplantations and 2 deaths.
Based on the number of prescriptions written and the number of cases reported in the MedWatch database, the rate of serious hepatotoxicity associated with flutamide treatment was estimated in 1996 as approximately 0.03% (3 per 10,000). However, other research has suggested that the true incidence of significant hepatotoxicity with flutamide may be much greater, as high as 0.18 to 10%.
Flutamide is also associated with liver enzyme elevations in up to 42 to 62% of patients, although marked elevations in liver enzymes (above 5 times upper normal limit) occur only in 3 to 5%. The risk of hepatotoxicity with flutamide is much higher than with nilutamide or bicalutamide. Lower doses of the medication appear to have a possibly reduced but still significant risk. Liver function should be monitored regularly with liver function tests during flutamide treatment. In addition, due to the high risk of serious hepatotoxicity, flutamide should not be used in the absence of a serious indication.
The mechanism of action of flutamide-induced hepatotoxicity is thought to be due to mitochondrial toxicity. Specifically, flutamide and particularly its major metabolite hydroxyflutamide inhibit enzymes in the mitochondrial electron transport chain in hepatocytes, including respiratory complexes I (NADH ubiquinone oxidoreductase), II (succinate dehydrogenase), and V (ATP synthase), and thereby reduce cellular respiration via ATP depletion and hence decrease cell survival. Inhibition of taurocholate (a bile acid) efflux has also been implicated in flutamide-induced hepatotoxicity. In contrast to flutamide and hydroxyflutamide, which severely compromise hepatocyte cellular respiration in vitro, bicalutamide does not significantly do so at the same concentrations and is regarded as non-mitotoxic. It is thought that the nitroaromatic group of flutamide and hydroxyflutamide enhance their mitochondrial toxicity; bicalutamide, in contrast, possesses a cyano group in place of the nitro moiety, greatly reducing the potential for such toxicity.
The hepatotoxicity of flutamide appears to depend on hydrolysis of flutamide catalyzed by an arylacetamide deacetylase enzyme. This is analogous to the hepatotoxicity that occurs with the withdrawn paracetamol (acetaminophen)-related medication phenacetin. In accordance, the combination of paracetamol (acetaminophen) and flutamide appears to result in additive to synergistic hepatotoxicity, indicating a potential drug interaction.
Hepatotoxicity with flutamide may be cross-reactive with that of cyproterone acetate.
Others
Flutamide has also been associated with interstitial pneumonitis (which can progress to pulmonary fibrosis). The incidence of interstitial pneumonitis with flutamide was found to be 0.04% (4 per 10,000) in a large clinical cohort of 41,700 prostate cancer patients. A variety of case reports have associated flutamide with photosensitivity. Flutamide has been associated with several case reports of methemoglobinemia. Bicalutamide does not appear to share this risk with flutamide. Flutamide has also been associated with reports of sulfhemoglobinemia and neutropenia.
Birth defects
Among the endocrine-disrupting compounds that have been studied, flutamide has a notable effect on anogenital distance in rats.
Pharmacology
Pharmacodynamics
Antiandrogenic activity
Flutamide acts as a selective, competitive, silent antagonist of the androgen receptor (AR). Its active form, hydroxyflutamide, has between 10- and 25-fold higher affinity for the AR than does flutamide, and hence is a much more potent AR antagonist in comparison. However, at high concentrations, unlike flutamide, hydroxyflutamide is able to weakly activate the AR. Flutamide has far lower affinity for the AR than do steroidal antiandrogens like spironolactone and cyproterone acetate, and it is a relatively weak antiandrogen in terms of potency by weight, but the large dosages at which flutamide is used appear to compensate for this. In accordance with its selectivity for the AR, flutamide does not interact with the progesterone, estrogen, glucocorticoid, or mineralocorticoid receptors, and possesses no intrinsic progestogenic, estrogenic, glucocorticoid, or antigonadotropic activity. However, it can have some indirect estrogenic effects via increased levels of estradiol secondary to AR blockade, and this is involved in the gynecomastia it can produce. Because flutamide does not have any estrogenic, progestogenic, or antigonadotropic activity, the medication does not cause menstrual irregularities in women. This is in contrast to steroidal antiandrogens like spironolactone and cyproterone acetate. Similarly to nilutamide, bicalutamide, and enzalutamide, flutamide crosses the blood–brain barrier and exerts central antiandrogen actions.
Flutamide has been found to be equal to slightly more potent than cyproterone acetate and substantially more potent than spironolactone as an antiandrogen in bioassays. This is in spite of the fact that hydroxyflutamide has on the order of 10-fold lower affinity for the AR relative to cyproterone acetate. Hydroxyflutamide shows about 2- to 4-fold lower affinity for the rat and human AR than does bicalutamide. In addition, whereas bicalutamide has an elimination half-life of around 6 days, hydroxyflutamide has an elimination half-life of only 8 to 10 hours, a roughly 17-fold difference. In accordance, at dosages of 50 mg/day bicalutamide and 750 mg/day flutamide (a 15-fold difference), circulating levels of flutamide at steady-state have been found to be approximately 7.5-fold lower than those of bicalutamide. Moreover, whereas flutamide at this dosage has been found to produce a 75% reduction in prostate-specific antigen levels in men with prostate cancer, a fall of 90% has been demonstrated with this dosage of bicalutamide. In accordance, 50 mg/day bicalutamide has been found to possess equivalent or superior effectiveness to 750 mg/day flutamide in a large clinical trial for prostate cancer. Also, bicalutamide has been shown to be 5-fold more potent than flutamide in rats and 50-fold more potent than flutamide in dogs. Taken together, flutamide appears to be a considerably less potent and efficacious antiandrogen than is bicalutamide.
Dose-ranging studies of flutamide in men with benign prostatic hyperplasia and prostate cancer alone and in combination with a GnRH agonist have been performed.
Flutamide increases testosterone levels by 5- to 10-fold in gonadally intact male rats.
CYP17A1 inhibition
Flutamide and hydroxyflutamide have been found in vitro to inhibit CYP17A1 (17α-hydroxylase/17,20-lyase), an enzyme which is required for the biosynthesis of androgens. In accordance, flutamide has been found to slightly but significantly lower androgen levels in GnRH analogue-treated male prostate cancer patients and women with polycystic ovary syndrome. In a directly comparative study of flutamide monotherapy (375 mg once daily) versus bicalutamide monotherapy (80 mg once daily) in Japanese men with prostate cancer, after 24 weeks of treatment flutamide decreased dehydroepiandrosterone (DHEA) levels by about 44% while bicalutamide increased them by about 4%. As such, flutamide is a weak inhibitor of androgen biosynthesis. However, the clinical significance of this action may be limited when flutamide is given without a GnRH analogue to non-castrated men, as the medication markedly elevates testosterone levels into the high normal male range via prevention of AR activation-mediated negative feedback on the hypothalamic–pituitary–gonadal axis in this context.
Other activities
Flutamide has been identified as an agonist of the aryl hydrocarbon receptor. This may be involved in the hepatotoxicity of flutamide.
Pharmacokinetics
The absorption of flutamide is complete upon oral ingestion. Food has no effect on the bioavailability of flutamide. Steady-state levels of hydroxyflutamide, the active form of flutamide, are achieved after 2 to 4 days administration. Levels of hydroxyflutamide are approximately 50-fold higher than those of flutamide at steady-state.
The plasma protein binding of flutamide and hydroxyflutamide are high; 94 to 96% and 92 to 94%, respectively. Flutamide and its metabolite hydroxyflutamide are known to be transported by the multidrug resistance-associated protein 1 (MRP1; ABCC1).
Flutamide is metabolized by CYP1A2 (via α-hydroxylation) in the liver during first-pass metabolism to its main metabolite hydroxyflutamide (which accounts for 23% of an oral dose of flutamide one hour post-ingestion), and to at least five other, minor metabolites. Flutamide has at least 10 inactive metabolites total, including 4-nitro-3-(trifluoromethyl)aniline.
Flutamide is excreted in various forms in the urine, the primary form being 2-amino-5-nitro-4-(trifluoromethyl)phenol.
Flutamide and hydroxyflutamide have elimination half-lives of 4.7 hours and 6 hours in adults, respectively. However, the half-life of hydroxyflutamide is extended to 8 hours after a single dose and to 9.6 hours at steady state in elderly individuals. The elimination half-lives of flutamide and hydroxyflutamide are regarded as too short to allow for once-daily dosing, and for this reason, flutamide is instead administered three times daily at 8-hour intervals. In contrast, the newer NSAAs nilutamide, bicalutamide, and enzalutamide all have much longer half-lives, and this allows for once-daily administration in their cases.
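To illustrate why such short half-lives rule out once-daily dosing, a minimal one-compartment sketch can compare how much of the peak concentration remains just before the next dose. This is a deliberate simplification: the 6-hour half-life is the figure cited above, while the 24-hour interval is a hypothetical comparison and the single-compartment, first-order assumptions are illustrative only.

```python
import math

def trough_fraction(half_life_h: float, interval_h: float) -> float:
    """Fraction of the peak concentration remaining at the trough
    (just before the next dose) under simple first-order elimination."""
    k = math.log(2) / half_life_h        # elimination rate constant (1/h)
    return math.exp(-k * interval_h)     # C_trough / C_peak

HALF_LIFE = 6.0  # hours, hydroxyflutamide elimination half-life (from the text)

for interval in (8.0, 24.0):
    frac = trough_fraction(HALF_LIFE, interval)
    print(f"dosing every {interval:4.0f} h -> {frac:5.1%} of peak remains at trough")

# Approximate output:
#   dosing every    8 h -> 39.7% of peak remains at trough
#   dosing every   24 h ->  6.3% of peak remains at trough
# With a 24-hour interval the drug is almost completely washed out between
# doses, which is consistent with the 8-hour dosing schedule described above.
```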
Chemistry
Unlike the hormones with which it competes, flutamide is not a steroid; rather, it is a substituted anilide. Hence, it is described as nonsteroidal in order to distinguish it from older steroidal antiandrogens such as cyproterone acetate and megestrol acetate.
Synthesis
Flutamide can be prepared via a Schotten–Baumann reaction between 4-nitro-3-(trifluoromethyl)aniline [393-11-3] (1) and isobutanoyl chloride [79-30-1] (2) in the presence of triethylamine.
History
Flutamide was first synthesized in 1967 by Neri and colleagues at Schering Plough Corporation. It was originally synthesized as a bacteriostatic agent, but was subsequently, and serendipitously found to possess antiandrogen activity. The code name of flutamide during development was SCH-13521. Clinical research of the medication began in 1971, and it was first marketed in 1983, specifically in Chile under the brand name Drogenil and in West Germany under the brand name Flugerel. Flutamide was not introduced in the United States until 1989; it was specifically approved by the U.S. Food and Drug Administration for the treatment of metastatic prostate cancer in combination with a gonadotropin-releasing hormone (GnRH) analogue. The medication was first studied for the treatment of hirsutism in women in 1989. It was the first "pure antiandrogen" to be studied in the treatment of hirsutism. Flutamide was the first NSAA to be introduced, and was followed by nilutamide in 1989 and then bicalutamide in 1995.
Society and culture
Generic names
Flutamide is the generic name of the drug and also serves as its nonproprietary name in the major drug nomenclature systems, including the INN and USAN. Its names in Latin, German, and Spanish are flutamidum, flutamid, and flutamida, respectively. The medication has also been referred to by the name niftolide.
Brand names
Brand names of flutamide include or have included Cebatrol, Cytomid, Drogenil, Etaconil, Eulexin, Flucinom, Flumid, Flutacan, Flutamid, Flutamida, Flutamin, Flutan, Flutaplex, Flutasin, Fugerel, Profamid, and Sebatrol, among others.
Availability
Flutamide is marketed widely throughout the world, including in the United States, Canada, Europe, Australia, New Zealand, South Africa, Central and South America, East and Southeast Asia, India, and the Middle East.
Research
Prostate cancer
The combination of an estrogen and flutamide as a form of combined androgen blockade for the treatment of prostate cancer has been researched.
Enlarged prostate
Flutamide has been studied in the treatment of benign prostatic hyperplasia (BPH; enlarged prostate) in men in several clinical studies. It has been found to reduce prostate volume by about 25%, which is comparable to the reduction achieved with the 5α-reductase inhibitor finasteride. Unfortunately, it has been associated with side effects in these studies including gynecomastia and breast tenderness (in about 50% of patients), gastrointestinal disturbances such as nausea, diarrhea, and flatulence, and hepatotoxicity, although sexual function including libido and erectile potency were maintained.
Breast cancer
Flutamide was studied for the treatment of advanced breast cancer in two phase II clinical trials but was found to be ineffective. Out of a total of 47 patients, only three short-term responses occurred. However, the patients in the studies were selected irrespective of AR, estrogen receptor (ER), progesterone receptor (PR), or HER2 status, which were all unknown.
Psychiatric disorders
Flutamide has been studied in the treatment of bulimia nervosa in women.
Flutamide was found to be effective in the treatment of obsessive–compulsive disorder (OCD) in men with comorbid Tourette's syndrome in one small randomized controlled trial. Conversely, it was ineffective in patients with OCD in another study. More research is necessary to determine whether flutamide is effective in the treatment of OCD.
References
Further reading
Anilides
Anti-acne preparations
Aryl hydrocarbon receptor agonists
CYP17A1 inhibitors
Enantiopure drugs
Hair loss medications
Hair removal
Hepatotoxins
Hormonal antineoplastic drugs
Nitrobenzene derivatives
Nonsteroidal antiandrogens
Prodrugs
Progonadotropins
Propionamides
Prostate cancer
Trifluoromethyl compounds | Flutamide | [
"Chemistry"
] | 6,131 | [
"Chemicals in medicine",
"Stereochemistry",
"Enantiopure drugs",
"Prodrugs"
] |
3,308,982 | https://en.wikipedia.org/wiki/Exterior%20insulation%20finishing%20system | Exterior insulation and finish system (EIFS) is a general class of non-load bearing building cladding systems that provides exterior walls with an insulated, water-resistant, finished surface in an integrated composite material system.
EIFS has been in use in North America since the 1960s and was first used on masonry buildings. Since the 1990s, however, the majority of EIFS installations have been over wood-framed construction.
History of EIFS
EIFS was developed in Europe after World War II and was initially used to retrofit masonry walls. EIFS started to be used in North America in the 1960s, at first on commercial masonry buildings. EIFS became popular in the mid-1970s due to the oil embargo and the resultant surge in interest in insulating wall systems that conserve energy used for heating and cooling.
In the late 1980s problems started developing due to water leakage in EIFS-clad buildings. This led to international controversy and lawsuits. EIFS installation was found to be a contributing factor in the multibillion-dollar problem known as the "Leaky condo crisis" in southwestern British Columbia and the "Leaky homes" issue in New Zealand that emerged separately in the 1980s and 1990s.
Critics argue that, while not inherently more prone to water penetration than other exterior finishes, barrier-type EIFS systems (non-water-managed systems) do not allow water that does penetrate the building to escape. The EIFS industry has consistently maintained that poor craftsmanship and bad architectural detailing at the perimeter of the EIFS was the problem. As a result, building codes began mandating a drainage system for EIFS systems on wood-frame buildings and additional on-site inspection.
Though there are some cases where insurance companies may not offer coverage for EIFS, several companies do. EIFS systems installed at lower building levels are subject to vandalism, as the material is soft and can be chipped or carved resulting in significant damage. In these cases, heavier ounce reinforcing mesh can drastically increase the durability of the EIFS system.
EIFS is now used all over North America, and in other areas around the world, especially in Europe and the Pacific Rim. The use of EIFS over stud-and-sheathing framing instead of over solid walls is a technique used primarily in North America. As of 1997 EIFS accounted for about 4% of the residential siding market and 12% of the commercial siding market.
Terminology
In the United States, the International Building Code and ASTM International define Exterior Insulation and Finish System (EIFS) as a non-load-bearing exterior wall cladding system that consists of an insulation board attached either adhesively, mechanically, or both, to the substrate; an integrally reinforced base coat; and a textured protective finish coat.
The predominant method of EIFS applied today is EIFS with Drainage, which provides a way for moisture accumulated in the wall cavity to evacuate.
EIFS is not stucco, despite often being called "synthetic stucco". Traditional stucco is a centuries-old, hard, dense, thick, non-insulating material which consists of aggregate, a binder, and water. EIFS is a lightweight synthetic wall cladding that includes foam plastic insulation and thin synthetic coatings. There are also specialty stuccos that use synthetic materials but no insulation, and these are also not EIFS. A common example is one-coat stucco, which is a thick, synthetic stucco applied in a single layer (traditional stucco is applied in 3 layers).
EIFS are proprietary systems of a particular EIFS manufacturer and consist of specific components. EIFS are not generic products made from common separate materials. The materials and installation methods specified by different EIFS manufacturers are not all compatible and should not be used interchangeably in new construction or repair work.
The technical definition of an EIFS does not include wall framing, sheathing, flashings, caulking, water barriers, windows, doors, and other wall components. However, some architects have begun specifying flashings, sealants, and wiring fasteners as being a part of the EIFS scope of work. Many of the EIFS manufacturers have their own standard details showing typical building conditions for window and door flashings, control joints, inside/outside corners, penetrations, and joints at dissimilar materials which should be followed for that manufacturer's warranty.
EIFS installation
EIFS are typically attached to the outside face of exterior walls with an adhesive (cementitious or acrylic based) or mechanical fasteners. Adhesives are commonly used to attach EIFS to gypsum board, cement board, or concrete substrates. EIFS are attached with mechanical fasteners (specially designed for this application) when installed over house wraps (sheet-good weather barriers) such as are commonly used over wood sheathings.
EIFS since year 2000
Research, conducted by the Oak Ridge National Laboratory and supported by the Department of Energy, has affirmed that EIFS are the "best performing cladding" in relation to thermal and moisture control when compared to brick, stucco, and cementitious fiberboard siding. EIFS are in compliance with modern building codes that emphasize energy conservation through the use of CI (continuous insulation) and a continuous air barrier.
EIFS before 2000 were barrier systems, meaning that the EIFS itself was the weather barrier. After 2000, the EIFS industry introduced the air/moisture barrier that resides behind the foam. In a study done by the Department of Energy's Office of Science at Oak Ridge National Laboratory, it was found that the best air/moisture barrier was a fluid-applied barrier. In a 2006 announcement, Oak Ridge National Laboratory reported that EIFS "outperformed all other walls in terms of moisture while maintaining superior thermal performance." The National Institute of Standards and Technology (NIST) has evaluated the five life cycle stages of the environmental impact of EIFS alongside brick, aluminum, stucco, vinyl, and cedar. Depending on a variety of site and project specific conditions, EIFS have the potential to save money in construction costs and contribute toward energy efficient operations and environmental responsibility when correctly designed and executed.
Some types of EIFS have passed some fire tests that range from resistance to ignitability, that include: ASTM E 119, NFPA 268, NFPA 285. However, some types and thicknesses of EIFS have been involved in large uncontrolled exterior building fires, such as the 2008 Monte Carlo Hotel Casino fire.
Composition & types of EIFS
Types of EIFS are defined by their materials and the existence/absence of a drainage plane. The EIFS Industry Members Association (EIMA) defines two classes of EIFS: Class PB (polymer based) identified as PB EIFS, and Class PM (polymer modified) identified as PM EIFS.
PB EIFS is the most common type in North America. It uses expanded polystyrene (EPS) insulation adhered to the substrate with fiberglass mesh embedded in a nominal base coat which can receive additional layers of mesh for stronger impact resistance. Other types of insulation board can include polyisocyanurate.
PM EIFS use extruded polystyrene insulation (XEPS) and a thick, cementitious base coat applied over mechanically attached glass fiber reinforcing mesh. The system has joints similar to traditional stucco. PM EIFS have evolved to include different insulation materials and base coats.
The most common type of EIFS used today is the system that includes a drainage cavity, which allows any and all moisture to exit the wall. EIFS with drainage typically consists of the following components:
An optional water-resistive barrier (WRB) that covers the substrate.
A drainage plane between the WRB and the insulation board that is most commonly achieved with vertical ribbons of adhesive applied over the WRB.
Insulation board typically made of expanded polystyrene (EPS) which is secured with an adhesive or mechanically to the substrate.
Glass-fiber reinforcing mesh embedded in the base coat
A water-resistant base coat that is applied on top of the insulation to serve as a weather barrier.
A finish coat that typically uses colorfast and crack-resistant acrylic co-polymer technology.
If an EIFS with Drainage, or water-managed EIFS is installed, a water resistive barrier (aka a WRB) is first installed over the substrate (generally glass faced exterior-grade gypsum sheathing, oriented strand board (OSB) or plywood). The moisture barrier is applied to the entire wall surface with a mesh tape over joints and a liquid-applied membrane or a protective wrap like tyvek or felt paper. Then a drainage cavity is created and the other 3 layers, described above, are added. This type of EIFS is required by many building codes areas on wood-frame construction and is intended to provide a path for incidental water that may get behind the EIFS with a safe route back to the outside. The purpose is to preclude water from damaging the supporting wall.
Adhesives and finishes are water-based, and thus must be installed at temperatures well above freezing. Two types of adhesives used contain Portland cement ("cementitious"), or do not have any Portland cement ("cementless"). Adhesives that contain Portland cement harden by the chemical reaction of the cement with water. Adhesives and finishes that are cementless harden by the evaporation of water. Adhesives come in two forms: The most common is in a plastic pail as a paste, to which Portland cement is added and as dry powders in sacks, to which water is added. Finishes come in a plastic pail, ready to use, like paint. EIFS insulation comes in individual pieces, usually 2' x 4', in large bags. The pieces are trimmed to fit the wall at the construction site.
Legal issues
EIFS systems have been the subject of several lawsuits in the United States, mostly related to the installation process and failure of the system causing moisture buildups and subsequent mold growth. The most notable case concerned the former San Martin, California courthouse. This case was settled for $12 million.
The basic underlying problem behind EIFS litigation was that EIFS was marketed as a cost-effective replacement for stucco. Stucco is expensive to install because it must be carefully applied by skilled craftsmen. General contractors switched to EIFS because they were supposed to be easy to install with unskilled or semi-skilled labor and would not crack like traditional stucco. Although EIFS if properly installed according to the manufacturer's directions should not have water intrusion problems, many installers cut corners by using insufficiently trained labor and also failed to supervise their work adequately. In turn, thousands of EIFS installations were noncompliant and suffered severe water intrusion and mold as a result. While the EIFS industry has consistently tried to shift the blame to installing contractors, the construction industry has retorted that using journeymen carpenters in turn eliminates the cost advantage of EIFS over stucco, and that the EIFS industry should have anticipated this issue and engineered its products from the beginning to be installed by unskilled labor or semi-skilled labor (that is, it should have been a fault-tolerant design).
Marketing of EIFS & the EIFS industry
EIFS account for about 10% of the US commercial wall cladding market. There are several dozen EIFS manufacturers in North America. Some sell nationwide, and some are regional in their area of business operations. The top five EIFS producers account for about 90% of the US market. These producers include Dryvit Systems, STO Corp., BASF Wall Systems, Master Wall, and Parex.
EIFS architectural details
EIFS offer the option of adding architectural details that are composed of the same materials. These mouldings come in a variety of shapes and sizes. They are widely used on residential and commercial projects in North America and are gaining popularity worldwide.
References
Building materials
Building insulation materials
Construction | Exterior insulation finishing system | [
"Physics",
"Engineering"
] | 2,527 | [
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
3,309,536 | https://en.wikipedia.org/wiki/Sverdrup%20balance | The Sverdrup balance, or Sverdrup relation, is a theoretical relationship between the wind stress exerted on the surface of the open ocean and the vertically integrated meridional (north-south) transport of ocean water.
History
Aside from the oscillatory motions associated with tidal flow, there are two primary causes of large scale flow in the ocean: (1) thermohaline processes, which induce motion by introducing changes at the surface in temperature and salinity, and therefore in seawater density, and (2) wind forcing. In the 1940s, when Harald Sverdrup was thinking about calculating the gross features of ocean circulation, he chose to consider exclusively the wind stress component of the forcing. As he says in his 1947 paper, in which he presented the Sverdrup relation, this is probably the more important of the two. After making the assumption that frictional dissipation is negligible, Sverdrup obtained the simple result that the meridional mass transport (the Sverdrup transport) is proportional to the curl of the wind stress. This is known as the Sverdrup relation;
$\beta V = \hat{k} \cdot \left( \nabla \times \vec{\tau} \right)$.
Here,
$\beta$ is the rate of change of the Coriolis parameter, $f$, with meridional distance;
$V$ is the vertically integrated meridional mass transport including the geostrophic interior mass transport and the Ekman mass transport;
$\hat{k}$ is the unit vector in the vertical direction;
$\vec{\tau}$ is the wind stress vector.
Physical interpretation
Sverdrup balance may be thought of as a consistency relationship for flow which is dominated by the Earth's rotation. Such flow will be characterized by weak rates of spin compared to that of the earth.
Any parcel at rest with respect to the surface of the earth must match the spin of the earth underneath it. Looking down on the earth at the north pole, this spin is in a counterclockwise direction, which is defined as positive rotation or vorticity. At the south pole it is in a clockwise direction, corresponding to negative rotation. Thus to move a parcel of fluid from the south to the north without causing it to spin, it is necessary to add sufficient (positive) rotation so as to keep it matched with the rotation of the earth underneath it. The left-hand side of the Sverdrup equation represents the motion required to maintain this match between the absolute vorticity of a water column and the planetary vorticity, while the right represents the applied force of the wind.
Derivation
The Sverdrup relation can be derived from the linearized barotropic vorticity equation for steady motion:
$\beta v = f \frac{\partial w}{\partial z}$.
Here $v$ is the geostrophic interior y-component (northward) and $w$ is the z-component (upward) of the water velocity. In words, this equation says that as a vertical column of water is squashed, it moves toward the Equator; as it is stretched, it moves toward the pole. Assuming, as did Sverdrup, that there is a level below which motion ceases, the vorticity equation can be integrated from this level to the base of the Ekman surface layer to obtain:
$\beta V_g = \rho f w_E$,
where $\rho$ is seawater density, $V_g$ is the geostrophic meridional mass transport and $w_E$ is the vertical velocity at the base of the Ekman layer.
The driving force behind the vertical velocity is the Ekman transport, which in the Northern (Southern) hemisphere is to the right (left) of the wind stress; thus a stress field with a positive (negative) curl leads to Ekman divergence (convergence), and water must rise from beneath to replace the old Ekman layer water. The expression for this Ekman pumping velocity is
$w_E = \hat{k} \cdot \nabla \times \left( \frac{\vec{\tau}}{\rho f} \right)$,
which, when combined with the previous equation and adding the Ekman transport, yields the Sverdrup relation.
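As a rough numerical illustration of the relation, the sketch below evaluates the Sverdrup transport implied by a mid-latitude wind-stress curl; the curl value, latitude, basin width, and reference density are assumed, order-of-magnitude numbers, not quantities taken from the text.

```python
import math

# Sverdrup relation: beta * V = curl_z(tau), where V is the vertically
# integrated meridional mass transport per unit zonal width.

curl_tau = -1.0e-7     # N/m^3, assumed wind-stress curl of a subtropical gyre
lat_deg = 30.0         # degrees latitude (assumed)
OMEGA = 7.2921e-5      # Earth's rotation rate, rad/s
R_EARTH = 6.371e6      # Earth's mean radius, m

# beta = df/dy = 2 * Omega * cos(latitude) / R
beta = 2.0 * OMEGA * math.cos(math.radians(lat_deg)) / R_EARTH

V = curl_tau / beta    # kg m^-1 s^-1 (negative means equatorward transport)
print(f"beta = {beta:.2e} m^-1 s^-1")
print(f"Sverdrup mass transport V = {V:.0f} kg m^-1 s^-1")

# Integrating across an assumed 5000 km wide basin and converting to a
# volume transport with a reference density gives a value in Sverdrups:
width = 5.0e6          # m
rho0 = 1025.0          # kg/m^3
print(f"basin-integrated transport ~ {V * width / rho0 / 1e6:.0f} Sv")
# About -25 Sv, the right order of magnitude for the equatorward interior
# flow of a subtropical gyre.
```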
Further development
In 1948 Henry Stommel proposed a circulation for the entire ocean depth by starting with the same equations as Sverdrup but adding bottom friction, and showed that the variation in Coriolis parameter with latitude results in a narrow western boundary current in ocean basins. In 1950, Walter Munk combined the results of Rossby (eddy viscosity), Sverdrup (upper ocean wind driven flow), and Stommel (western boundary current flow), and proposed a complete solution for the ocean circulation.
See also
Atlantic meridional overturning circulation
References
External links
Glossary of Physical Oceanography and Related Disciplines Sverdrup balance
Ocean currents
Physical oceanography | Sverdrup balance | [
"Physics",
"Chemistry"
] | 914 | [
"Ocean currents",
"Applied and interdisciplinary physics",
"Physical oceanography",
"Fluid dynamics"
] |
3,309,687 | https://en.wikipedia.org/wiki/Axiomatic%20design | Axiomatic design is a systems design methodology using matrix methods to systematically analyze the transformation of customer needs into functional requirements, design parameters, and process variables. Specifically, a set of functional requirements(FRs) are related to a set of design parameters (DPs) by a Design Matrix A:
The method gets its name from its use of design principles or design Axioms (i.e., given without proof) governing the analysis and decision making process in developing high quality product or system designs. The two axioms used in Axiomatic Design (AD) are:
Axiom 1: The Independence Axiom. Maintain the independence of the functional requirements (FRs).
Axiom 2: The Information Axiom. Minimize the information content of the design.
Axiomatic design is considered to be a design method that addresses fundamental issues in Taguchi methods.
Coupling is the term Axiomatic Design uses to describe a lack of independence between the FRs of the system as determined by the DPs. I.e., if varying one DP has a resulting significant impact on two separate FRs, it is said the FRs are coupled. Axiomatic Design introduces matrix analysis of the Design Matrix to both assess and mitigate the effects of coupling.
Axiom 2, the Information Axiom, provides a metric of the probability that a specific DP will deliver the functional performance required to satisfy the FR. The metric is normalized to be summed up for the entire system being modeled. Systems with less functional performance risk (minimal information content) are preferred over alternative systems with higher information content.
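To make the two axioms concrete, the sketch below (hypothetical FR/DP data and success probabilities, not taken from any published design) classifies a design matrix as uncoupled, decoupled, or coupled for Axiom 1, and sums the information content log2(1/p) over the FRs for Axiom 2.

```python
import math

def classify_design_matrix(A, tol=1e-12):
    """Axiom 1 check: classify a square design matrix relating FRs to DPs.

    uncoupled -> only diagonal entries are non-zero (full FR independence)
    decoupled -> triangular (FRs can be satisfied by fixing DPs in sequence)
    coupled   -> any other pattern (FR independence is violated)
    """
    n = len(A)
    off_diag = [(i, j) for i in range(n) for j in range(n)
                if i != j and abs(A[i][j]) > tol]
    if not off_diag:
        return "uncoupled"
    if all(i > j for i, j in off_diag) or all(i < j for i, j in off_diag):
        return "decoupled"
    return "coupled"

def information_content(success_probabilities):
    """Axiom 2 metric: total information content in bits, I = sum log2(1/p)."""
    return sum(math.log2(1.0 / p) for p in success_probabilities)

# Hypothetical 3-FR / 3-DP design with a lower-triangular design matrix:
A = [[1.0, 0.0, 0.0],
     [0.3, 1.0, 0.0],
     [0.1, 0.2, 1.0]]
print(classify_design_matrix(A))                              # -> decoupled
print(f"{information_content([0.9, 0.8, 0.95]):.2f} bits")    # -> 0.55 bits
```

Under Axiom 2, the design whose FRs have the higher joint probability of success (and hence the lower total information content in bits) would be preferred.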
The methodology has been developed by Dr. Suh Nam Pyo at the MIT Department of Mechanical Engineering since the 1990s. A series of academic conferences have been held to present current developments of the methodology.
See also
Design structure matrix (DSM)
New product development (NPD)
Design for Six Sigma
Six Sigma
Taguchi methods
Axiomatic product development lifecycle (APDL)
C-K theory
References
External links
A discussion of the methodology is given here:
Axiomatic Design for Complex Systems is a professional short course offered at MIT
Axiomatic Design Technology described by Axiomatic Design Solutions, Inc.
Axiomatic Design Conferences:
Past proceedings of International Conferences on Axiomatic Design can be downloaded here:
ICAD2016
ICAD2015
ICAD2014
ICAD2013
ICAD2011
ICAD2009
ICAD2006
ICAD2004
ICAD2002
ICAD2000
Engineering concepts
Industrial engineering
Quality management
Systems engineering | Axiomatic design | [
"Engineering"
] | 511 | [
"Systems engineering",
"nan",
"Industrial engineering"
] |
3,310,426 | https://en.wikipedia.org/wiki/Impact%20ionization | Impact ionization is the process in a material by which one energetic charge carrier can lose energy by the creation of other charge carriers. For example, in semiconductors, an electron (or hole) with enough kinetic energy can knock a bound electron out of its bound state (in the valence band) and promote it to a state in the conduction band, creating an electron-hole pair. For carriers to have sufficient kinetic energy a sufficiently large electric field must be applied, in essence requiring a sufficiently large voltage but not necessarily a large current.
If this occurs in a region of high electrical field then it can result in avalanche breakdown. This process is exploited in avalanche diodes, by which a small optical signal is amplified before entering an external electronic circuit. In an avalanche photodiode the original charge carrier is created by the absorption of a photon.
The impact ionization process is used in modern cosmic dust detectors like the Galileo Dust Detector and dust analyzers Cassini CDA, Stardust CIDA and the Surface Dust Analyser for the identification of dust impacts and the compositional analysis of cosmic dust particles.
In some sense, impact ionization is the reverse process to Auger recombination.
Avalanche photodiodes (APDs) are used in optical receivers: before the signal is passed to the receiver circuitry, the photocurrent generated by absorbed photons is multiplied through impact ionization. This increases the sensitivity of the receiver, since the photocurrent is amplified before it encounters the thermal noise associated with the receiver circuit.
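One common empirical description of the resulting gain is Miller's formula for the avalanche multiplication factor, M = 1 / (1 - (V/V_br)^n). The sketch below evaluates it for illustrative values; the breakdown voltage and exponent are assumptions, not data for any particular device.

```python
def avalanche_gain(v_bias: float, v_breakdown: float, n: float) -> float:
    """Miller's empirical formula for the avalanche multiplication factor M.

    M grows without bound as the reverse bias approaches the breakdown
    voltage, reflecting runaway carrier generation by impact ionization.
    """
    if not 0.0 <= v_bias < v_breakdown:
        raise ValueError("bias must lie below the breakdown voltage")
    return 1.0 / (1.0 - (v_bias / v_breakdown) ** n)

V_BR = 100.0   # volts, assumed breakdown voltage
N_EXP = 3.0    # assumed empirical exponent (material and structure dependent)

for v in (50.0, 80.0, 95.0, 99.0):
    print(f"V = {v:5.1f} V -> M = {avalanche_gain(v, V_BR, N_EXP):6.1f}")
# Gain rises slowly at moderate bias and increases very rapidly near breakdown.
```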
See also
Multiphoton ionization
Avalanche breakdown
Avalanche diode
Avalanche photodiode
References
External links
Animation showing impact ionization in a semiconductor
Semiconductors
Ionization | Impact ionization | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 336 | [
"Ionization",
"Physical phenomena",
"Matter",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Nuclear and atomic physics stubs",
"Condensed matter physics",
"Nuclear physics",
"Solid state engineering",
"Electrical resistance and conductance"
] |
3,311,339 | https://en.wikipedia.org/wiki/Intergranular%20corrosion | In materials science, intergranular corrosion (IGC), also known as intergranular attack (IGA), is a form of corrosion where the boundaries of crystallites of the material are more susceptible to corrosion than their insides. (Cf. transgranular corrosion.)
Description
This situation can happen in otherwise corrosion-resistant alloys when the grain boundaries are depleted (a condition known as grain boundary depletion) of the corrosion-inhibiting elements such as chromium by some mechanism. In nickel alloys and austenitic stainless steels, where chromium is added for corrosion resistance, the mechanism involved is precipitation of chromium carbide at the grain boundaries, resulting in the formation of chromium-depleted zones adjacent to the grain boundaries (this process is called sensitization). Around 12% chromium is minimally required to ensure passivation, a mechanism by which an ultra thin invisible film, known as passive film, forms on the surface of stainless steels. This passive film protects the metal from corrosive environments. The self-healing property of the passive film makes the steel stainless. Selective leaching often involves grain boundary depletion mechanisms.
These zones also act as local galvanic couples, causing local galvanic corrosion. This condition happens when the material is heated to temperatures around 700 °C for too long a time, and often occurs during welding or an improper heat treatment. When zones of such material form due to welding, the resulting corrosion is termed weld decay. Stainless steels can be stabilized against this behavior by addition of titanium, niobium, or tantalum, which form titanium carbide, niobium carbide and tantalum carbide preferentially to chromium carbide, by lowering the content of carbon in the steel and in case of welding also in the filler metal under 0.02%, or by heating the entire part above 1000 °C and quenching it in water, leading to dissolution of the chromium carbide in the grains and then preventing its precipitation. Another possibility is to keep the welded parts thin enough so that, upon cooling, the metal dissipates heat too quickly for chromium carbide to precipitate. The ASTM A923, ASTM A262, and other similar tests are often used to determine when stainless steels are susceptible to intergranular corrosion. The tests require etching with chemicals that reveal the presence of intermetallic particles, sometimes combined with Charpy V-Notch and other mechanical testing.
Another related kind of intergranular corrosion is termed knifeline attack (KLA). Knifeline attack impacts steels stabilized by niobium, such as 347 stainless steel. Titanium, niobium, and their carbides dissolve in steel at very high temperatures. At some cooling regimes (depending on the rate of cooling), niobium carbide does not precipitate and the steel then behaves like unstabilized steel, forming chromium carbide instead. This affects only a thin zone several millimeters wide in the very vicinity of the weld, making it difficult to spot and increasing the corrosion speed. Structures made of such steels have to be heated in a whole to about 1065 °C (1950 °F), when the chromium carbide dissolves and niobium carbide forms. The cooling rate after this treatment is not important, as the carbon that would otherwise pose risk of formation of chromium carbide is already sequestered as niobium carbide.
Aluminium-based alloys may be sensitive to intergranular corrosion if there are layers of materials acting as anodes between the aluminium-rich crystals. High strength aluminium alloys, especially when extruded or otherwise subjected to a high degree of working, can undergo exfoliation corrosion, where the corrosion products build up between the flat, elongated grains and separate them, resulting in a lifting or leafing effect and often propagating from the edges of the material through its entire structure. Intergranular corrosion is a concern especially for alloys with high content of copper.
Other kinds of alloys can undergo exfoliation as well; the sensitivity of cupronickel increases together with its nickel content. A broader term for this class of corrosion is lamellar corrosion. Alloys of iron are susceptible to lamellar corrosion, as the volume of iron oxides is about seven times higher than the volume of original metal, leading to formation of internal tensile stresses tearing the material apart. Similar effect leads to formation of lamellae in stainless steels, due to the difference of thermal expansion of the oxides and the metal.
Copper-based alloys become sensitive when depletion of copper content in the grain boundaries occurs.
Anisotropic alloys, where extrusion or heavy working leads to formation of long, flat grains, are especially prone to intergranular corrosion.
Intergranular corrosion induced by environmental stresses is termed stress corrosion cracking. Intergranular corrosion can be detected by ultrasonic and eddy current methods.
Sensitization effect
Sensitization refers to the precipitation of carbides at grain boundaries in a stainless steel or alloy, causing the steel or alloy to be susceptible to intergranular corrosion or intergranular stress corrosion cracking.
Certain alloys when exposed to a temperature characterized as a sensitizing temperature become particularly susceptible to intergranular corrosion. In a corrosive atmosphere, the grain interfaces of these sensitized alloys become very reactive and intergranular corrosion results. This is characterized by a localized attack at and adjacent to grain boundaries with relatively little corrosion of the grains themselves. The alloy disintegrates (grains fall out) and/or loses its strength.
The photos show the typical microstructure of a normalized (unsensitized) type 304 stainless steel and a heavily sensitized steel. The samples have been polished and etched before taking the photos, and the sensitized areas show as wide, dark lines where the etching fluid has caused corrosion. The dark lines consist of carbides and corrosion products.
Intergranular corrosion is generally considered to be caused by the segregation of impurities at the grain boundaries or by enrichment or depletion of one of the alloying elements in the grain boundary areas. Thus in certain aluminium alloys, small amounts of iron have been shown to segregate in the grain boundaries and cause intergranular corrosion. Also, it has been shown that the zinc content of a brass is higher at the grain boundaries and subject to such corrosion. High-strength aluminium alloys such as the Duralumin-type alloys (Al-Cu) which depend upon precipitated phases for strengthening are susceptible to intergranular corrosion following sensitization at temperatures of about 120 °C. Nickel-rich alloys such as Inconel 600 and Incoloy 800 show similar susceptibility. Die-cast zinc alloys containing aluminum exhibit intergranular corrosion by steam in a marine atmosphere. Cr-Mn and Cr-Mn-Ni steels are also susceptible to intergranular corrosion following sensitization in the temperature range of 420 °C–850 °C. In the case of the austenitic stainless steels, when these steels are sensitized by being heated in the temperature range of about 520 °C to 800 °C, depletion of chromium in the grain boundary region occurs, resulting in susceptibility to intergranular corrosion. Such sensitization of austenitic stainless steels can readily occur because of temperature service requirements, as in steam generators, or as a result of subsequent welding of the formed structure.
Several methods have been used to control or minimize the intergranular corrosion of susceptible alloys, particularly of the austenitic stainless steels. For example, a high-temperature solution heat treatment, commonly termed solution-annealing, quench-annealing or solution-quenching, has been used. The alloy is heated to a temperature of about 1,060 °C to 1,120 °C and then water quenched. This method is generally unsuitable for treating large assemblies, and also ineffective where welding is subsequently used for making repairs or for attaching other structures.
Another control technique for preventing intergranular corrosion involves incorporating strong carbide formers or stabilizing elements such as niobium or titanium in the stainless steels. Such elements have a much greater affinity for carbon than does chromium; carbide formation with these elements reduces the carbon available in the alloy for formation of chromium carbides. Such a stabilized titanium-bearing austenitic chromium-nickel-copper stainless steel is shown in U.S. Pat. No. 3,562,781. Or the stainless steel may initially be reduced in carbon content below 0.03 percent so that insufficient carbon is provided for carbide formation. These techniques are expensive and only partially effective since sensitization may occur with time. The low-carbon steels also frequently exhibit lower strengths at high temperatures.
See also
Intergranular fracture
References
Corrosion | Intergranular corrosion | [
"Chemistry",
"Materials_science"
] | 1,881 | [
"Materials degradation",
"Electrochemistry",
"Metallurgy",
"Corrosion"
] |
3,312,554 | https://en.wikipedia.org/wiki/Selective%20leaching | In metallurgy, selective leaching, also called dealloying, demetalification, parting and selective corrosion, is a corrosion type in some solid solution alloys, when in suitable conditions a component of the alloys is preferentially leached from the initially homogenous material. The less noble metal is removed from the alloy by a microscopic-scale galvanic corrosion mechanism. The most susceptible alloys are the ones containing metals with high distance between each other in the galvanic series, e.g. copper and zinc in brass. The elements most typically undergoing selective removal are zinc, aluminium, iron, cobalt, chromium, and others.
Leaching of zinc
The most common example is selective leaching of zinc from brass alloys containing more than 15% zinc (dezincification) in the presence of oxygen and moisture, e.g. from brass taps in chlorine-containing water. Dezincification has been studied since the 1860s, and the mechanism by which it occurs was under extensive examination by the 1960s. It is believed that both copper and zinc gradually dissolve out simultaneously, and copper precipitates back from the solution. The material remaining is a copper-rich sponge with poor mechanical properties, and a color changed from yellow to red. Dezincification can be caused by water containing sulfur, carbon dioxide, and oxygen. Stagnant or low velocity waters tend to promote dezincification.
To combat this, arsenic or tin can be added to brass, or gunmetal can be used instead. Dezincification resistant brass (DZR), also known as Brass C352 is an alloy used to make pipe fittings for use with potable water. Plumbing fittings that are resistant to dezincification are appropriately marked, with the letters "CR" (Corrosion Resistant) or DZR (dezincification resistant) in the UK, and the letters "DR" (dezincification resistant) in Australia.
Graphitic corrosion
Graphitic corrosion is selective leaching of iron, from grey cast iron, where iron is removed and graphite grains remain intact. Affected surfaces develop a layer of graphite, rust, and metallurgical impurities that may inhibit further leaching. The effect can be substantially reduced by alloying the cast iron with nickel.
Leaching of other elements
Dealuminification is a corresponding process for aluminum alloys. Similar effects for different metals are decarburization (removal of carbon from the surface of alloy), decobaltification, denickelification, etc. The prototypical system for dealloying to create nano-porous metals is the np-Au system, which is created by selectively leaching Ag out of an Au-Ag homogenous alloy.
Mechanisms
Liquid Metal Dealloying
When an initially homogenous alloy is placed in an acid that can preferentially dissolve one or more components out of the alloy, the remaining component will diffuse and organize into a unique, nano-porous microstructure. The resulting material will have ligaments, formed by the remaining material, surrounded by pores, empty space from which atoms were leached/diffused away.
Porosity Development
The way that porosity develops during the dealloying process has been studied computationally to understand the diffusional pathways on an atomistic level. Firstly, the less noble atoms must be dissolved away from the surface of the alloy. This process is easiest for the lower coordinated atoms, i.e., those bonded to fewer other atoms, usually found as single atoms sitting on the surface ("adatoms"), but it is more difficult for higher coordinated atoms, i.e., those sitting at "steps" or in the bulk of the material. Thus, the slowest step, and that which is most important for determining rate of porosity evolution is the dissolution of these higher coordinated less noble atoms. Just as the less noble metal is less stable as an adatom on the surface, so is an atom of the more noble metal. Therefore, as dissolution proceeds, any more noble atoms will move to more stable positions, like steps, where its coordination is higher. This diffusion process is similar to spinodal decomposition. Eventually, clusters of more noble atoms form this way, and surrounding less noble atoms dissolve away, leaving behind a "bicontinuous structure" and providing a pathway for dissolution to continue deeper into the metal.
Effects on Mechanical Properties
Testing Methods
Due to the relatively small sample size achievable with dealloying, the mechanical properties of these materials are often probed using the following techniques:
Nanoindentation
Micropillar compression
Deflection testing of bridges
Thin-film wrinkling
Strength and Stiffness of Nano-porous Materials
A common concept in materials science is that, at ambient conditions, smaller features (like grain size or absolute size) generally lead to stronger materials (see Hall-Petch strengthening, Weibull statistics). However, due to the high-level of porosity in the dealloyed materials, their strengths and stiffnesses are relatively low compared to the bulk counterparts. The decrease in strength due to porosity can be described with the Gibson-Ashby (GA) relations, which give the yield strength and Young's modulus of a foam according to the following equations:
where and are geometric constants, and are microstructure dependent exponents, and is the relative density of the foam.
The GA relations can be used to estimate the strength and stiffness of a given dealloyed, porous material, but more extensive study has revealed an additional factor: ligament size. When the ligament diameter is greater than 100 nm, increasing ligament size leads to greater agreement between GA predictions and experimental measurements of yield stress and Young's modulus. However, when the ligament size is under 100 nm, which is very common in many dealloying processes, there is an addition to the GA strength that looks similar to Hall-Petch strengthening of bulk polycrystalline metals (i.e., the yield stress increases with the inverse square root of grain size). Combining this relationship with the GA relation from before, an expression for the yield stress of dealloyed materials with ligaments smaller than 100 nm can be determined:

$\sigma^*_y = C_1\,\sigma_s\,(\rho^*/\rho_s)^{p} + A\,L^{-m}$

where $A$ and $m$ are empirically determined constants and $L$ is the ligament size. The $A\,L^{-m}$ term represents the Hall-Petch-like contribution.
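As a rough worked illustration of this scaling (a minimal sketch: the constant C1, the exponents, the bulk strength, the relative density, and the ligament-size constants below are all assumed, order-of-magnitude values, not measured ones):

```python
def yield_strength(sigma_s, rel_density, C1=0.3, p=1.5, A=0.0, m=0.5, L=None):
    """Gibson-Ashby scaling with an optional Hall-Petch-like ligament-size term.

    sigma_s: bulk yield strength (MPa); rel_density: rho*/rho_s;
    A (MPa nm^m) and m: empirical constants; L: ligament size (nm).
    """
    sigma = C1 * sigma_s * rel_density ** p
    if L is not None:
        sigma += A * L ** (-m)   # Hall-Petch-like addition for fine ligaments
    return sigma

bulk_strength = 200.0   # MPa, assumed bulk yield strength
rel_density = 0.30      # assumed relative density of the dealloyed foam

print(yield_strength(bulk_strength, rel_density))                    # porosity alone: ~10 MPa
print(yield_strength(bulk_strength, rel_density, A=300.0, L=25.0))   # with a 25 nm ligament term: ~70 MPa
```

With these illustrative numbers, porosity alone knocks the strength down to a few percent of the bulk value, while the fine-ligament term recovers a substantial part of it, which is the qualitative trend reported for sub-100 nm ligaments.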
There are two theories for why this increase in strength occurs: 1) dislocations are less common in smaller sample volumes, so deformation requires the activation of dislocation sources (which is a more difficult process), or 2) dislocations pile up, which strengthens the material. Either way, there would be significant surface and small-volume effects in ligaments smaller than 100 nm, which lead to this increase in yield stress. A relationship between ligament size and Young's modulus has not been studied beyond the GA relation.
Occasionally, the metastable nature of these materials means that ligaments in the structure may "pinch off" due to surface diffusion, which decreases the connectivity of the structure and reduces the strength of the dealloyed material below what would be expected from porosity alone (as predicted by the Gibson-Ashby relations).
Dislocation Motion in Nano-porous Materials
Because the ligaments of these materials are essentially small metallic samples, they are themselves expected to be quite ductile, although the entire nano-porous material is often observed to be brittle in tension. Dislocation behavior is extensive within the ligaments (just as would be expected in a metal): a high density of partial dislocations, stacking faults and twins has been observed both in simulation and in TEM. However, the morphology of the ligaments makes bulk dislocation motion very difficult; the limited size of each ligament and the complex connectivity within the nano-porous structure mean that a dislocation cannot freely travel long distances and thus induce large-scale plasticity.
Countermeasures
Countermeasures include using alloys not susceptible to grain boundary depletion, using a suitable heat treatment, altering the environment (e.g. lowering the oxygen content), and/or using cathodic protection.
Uses
Selective leaching can be used to produce powdered materials with extremely high surface area, such as Raney nickel and other heterogeneous catalysts. Selective leaching can also be the penultimate stage of depletion gilding.
See also
Corrosion engineering
References
External links
Dezincification
Corrosion prevention
Corrosion
Nanotechnology | Selective leaching | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,750 | [
"Corrosion prevention",
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Nanotechnology",
"Materials degradation"
] |
11,472,211 | https://en.wikipedia.org/wiki/Histamine%20N-methyltransferase | Histamine N-methyltransferase (HNMT) is a protein encoded by the HNMT gene in humans. It belongs to the methyltransferases superfamily of enzymes and plays a role in the inactivation of histamine, a biomolecule that is involved in various physiological processes. Methyltransferases are present in every life form including archaeans, with 230 families of methyltransferases found across species.
Specifically, HNMT transfers a methyl (-CH3) group from S-adenosyl-L-methionine (SAM-e) to histamine, forming an inactive metabolite called Nτ-methylhistamine, in a chemical reaction called Nτ-methylation. In mammals, HNMT operates alongside diamine oxidase (DAO) as the only two enzymes responsible for histamine metabolism; however, what sets HNMT apart is its unique presence within the central nervous system (CNS), where it governs histaminergic neurotransmission, that is a process where histamine acts as a messenger molecule between the neurons—nerve cells—in the brain. By degrading and regulating levels of histamine specifically within the CNS, HNMT ensures the proper functioning of neural pathways related to arousal, appetite regulation, sleep-wake cycles, and other essential brain functions.
Research on knockout mice—that are genetically modified mice lacking the Hnmt gene—has revealed that the absence of this enzyme leads to increased brain histamine concentrations and behavioral changes such as heightened aggression and disrupted sleep patterns. These findings highlight the critical role played by HNMT in maintaining normal brain function through precise regulation of neuronal signaling involving histamine. Genetic variants affecting HNMT activity have also been implicated in various neurological disorders like Parkinson's disease and attention deficit disorder.
Gene
Histamine N-methyltransferase is encoded by a single gene, called HNMT, which has been mapped to chromosome 2 in humans.
Three transcript variants have been identified for this gene in humans, which produce different protein isoforms due to alternative splicing, a mechanism that allows a single gene to code for multiple proteins by including or excluding particular exons in the final mRNA. Of those isoforms, only one has histamine-methylating activity.
In the human genome, six exons of the 50-kb HNMT gene contribute to forming a unique mRNA species, approximately 1.6 kb in size. This mRNA is then translated into the cytosolic enzyme histamine N-methyltransferase, comprising 292 amino acids, of which 130 amino acids form a conserved sequence. The HNMT promoter does not contain cis-elements such as TATA and CAAT boxes.
Protein
HNMT is a cytoplasmic protein, meaning that it operates within the cytoplasm of a cell. The cytoplasm fills the space between the outer cell membrane (also known as the cellular plasma membrane) and the nuclear membrane (which surrounds the cell's nucleus). HNMT helps regulate histamine levels by degrading histamine within the cytoplasm, ensuring proper cellular function.
Proteins consist of amino acid residues and form a three-dimensional structure. The crystallographic (three-dimensional) structure of the human HNMT protein was first described in 2001; it is a monomeric protein with a mass of 33 kilodaltons that consists of two structural domains.
The first domain, called the "MTase domain", contains the active site where methylation occurs. It has a classic fold found in many other methyltransferases and consists of a seven-stranded beta-sheet surrounded by three helices on each side. This domain binds to its cofactor, S-adenosyl-L-methionine (SAM-e), which provides the methyl group for Nτ-methylation reactions.
The second domain, called the "substrate binding domain", interacts with histamine, contributing to its binding to the enzyme molecule. This domain is connected to the MTase domain and forms a separate region. It includes an anti-parallel beta sheet along with additional alpha helices and 3₁₀ helices.
Species
Histamine N-methyltransferase belongs to methyltransferases, a superfamily of enzymes present in every life form, including archaeans.
These enzymes catalyze methylation, which is a chemical process that involves the addition of a methyl group to a molecule, which can affect its biological function.
To facilitate methylation, methyltransferases transfer a methyl group (-CH3) from a cosubstrate (donor) to a substrate molecule (acceptor), leading to the formation of a methylated molecule. Most methyltransferases use S-adenosyl-L-methionine (SAM-e) as a donor, converting it into S-adenosyl-L-homocysteine (SAH). In various species, members of the methyltransferase superfamily of enzymes methylate a wide range of molecules, including small molecules, proteins, nucleic acids, and lipids. These enzymes are involved in numerous cellular processes such as signaling, protein repair, chromatin regulation, and gene regulation. More than 230 families of methyltransferases have been described in various species.
This specific protein, histamine N-methyltransferase, is found in vertebrates, including mammals, birds, reptiles, amphibians, and fishes, but not in invertebrates and plants.
The complementary DNA (cDNA) of Hnmt was initially cloned from a rat kidney and has since been cloned from human, mouse, and guinea pig sources. Human HNMT shares 55.37% similarity with that of zebrafish, 86.76% with that of mouse, 90.53% with that of dog, and 99.54% with that of chimpanzee. Moreover, expressed sequence tags from cow, pig, and gorilla, as well as genome survey sequences from pufferfish, also exhibit strong similarity to human HNMT, suggesting that it is a highly conserved protein among vertebrates. To understand the role of histamine N-methyltransferase in brain function, researchers have studied Hnmt-deficient (knockout) mice, which were genetically modified to have the Hnmt gene "knocked out", i.e., deactivated. Scientists discovered that disrupting the gene led to a significant rise in histamine levels in the mouse brain, which highlighted the role of the gene in the brain's histamine system and suggested that HNMT genetic variations in humans could be linked to brain disorders.
Tissue and subcellular distribution
At the subcellular level, the histamine N-methyltransferase protein in humans is mainly localized to the nucleoplasm (the material within the cell nucleus) and the cytosol (the intracellular fluid, i.e., the fluid inside cells). In addition, it is localized to the centrosome (an organelle).
In humans, the protein is present in many tissues and is most abundantly expressed in the brain, thyroid gland, bronchus, duodenum, liver, gallbladder, kidney, and skin.
Function
The function of the HNMT enzyme is histamine metabolism by way of Nτ-methylation, using S-adenosyl-L-methionine (SAM-e) as the methyl donor and producing Nτ-methylhistamine, which, unless excreted, can be further processed by monoamine oxidase B (MAOB) or by diamine oxidase (DAO). Methylated histamine metabolites are excreted with urine.
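Written as an overall reaction, using the abbreviations already introduced (SAM-e for S-adenosyl-L-methionine and SAH for S-adenosyl-L-homocysteine), the HNMT-catalysed step is:

```latex
\mathrm{histamine} \;+\; \text{SAM-e}
  \;\xrightarrow{\;\text{HNMT}\;}\;
  N^{\tau}\text{-methylhistamine} \;+\; \text{SAH}
```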
In mammals, there are two main ways to inactivate histamine by metabolism: one is through a process called oxidative deamination, which involves the enzyme diamine oxidase (DAO) produced by the AOC1 gene, and the other is through a process called Nτ-methylation, which involves the enzyme N-methyltransferase. In the context of biochemistry, inactivation by metabolism refers to the process where a substance, such as a hormone, is converted into a form that is no longer active or effective (inactivation), via a process where the substance is chemically altered (metabolism).
HNMT and DAO are two enzymes that play distinct roles in histamine metabolism. DAO is primarily responsible for metabolizing histamine in extracellular (outside cells) fluids, which include interstitial fluid (fluid surrounding cells) and blood plasma. Such histamine can be exogenous (from food or intestinal flora) or endogenous (released from granules of mast cells and basophils, such as during allergic reactions). DAO is predominantly expressed in the cells of the intestinal epithelium and placenta but not in the central nervous system (CNS). In contrast, HNMT is expressed in the CNS and involved in the metabolism of intracellular (inside cells) histamine, which is primarily endogenous and persistently present. HNMT operates in the cytosol, which is the fluid inside cells. Histamine must be carried into the cytosol through transporters such as the plasma membrane monoamine transporter (SLC29A4) or organic cation transporter 3 (SLC22A3). The HNMT enzyme is found in cells of diverse tissues: neurons and glia, brain, kidneys, liver, bronchi, large intestine, ovary, prostate, spinal cord, spleen, trachea, etc. While DAO is primarily found in the intestinal epithelium, HNMT is present in a wider range of tissues throughout the body. This difference in location also requires different transport mechanisms for histamine to reach each enzyme, reflecting the distinct roles of these enzymes in histamine metabolism. Another distinction between HNMT and DAO lies in their substrate specificity. While HNMT has a strong preference for histamine, DAO can metabolize other biogenic amines, i.e., substances produced by a life form (such as a bacterium or an animal) that have an amine functional group (−NH2). Examples of biogenic amines besides histamine that DAO can metabolize are putrescine and cadaverine; still, DAO has a preference for histamine. Both DAO and HNMT exhibit comparable affinities toward histamine.
In the brain of mammals, histamine takes part in histaminergic neurotransmission, that is a process where histamine acts as a messenger molecule between the neurons—the nerve cells. Histamine neurotransmitter activity is controlled by HNMT, since DAO is not present in the CNS. Consequently, the deactivation of histamine via HNMT represents the sole mechanism for ending neurotransmission within the mammalian CNS. This highlights the key role of HNMT for the histamine system of the brain and the brain function in general.
Physiological and clinical significance
Role in health
Histamine has important roles in human physiology as both a hormone and a neurotransmitter. As a hormone, it is involved in the inflammatory response and itching. It regulates physiological functions in the gut and acts on the brain, spinal cord, and uterus. As a neurotransmitter, histamine promotes arousal and regulates appetite and the sleep-wake cycle. It also affects vasodilation, fluid production in tissues like the nose and eyes, gastric acid secretion, sexual function, and immune responses.
HNMT is the only enzyme in the human body responsible for metabolizing histamine within the CNS, playing a role in brain function.
HNMT plays a role in maintaining the proper balance of histamine in the human body. HNMT is responsible for the breakdown and metabolism of histamine, converting it into an inactive metabolite, Nτ-methylhistamine, which inhibits HNMT gene expression in a negative feedback loop. By metabolizing histamine, HNMT helps prevent excessive levels of histamine from accumulating in various tissues and organs. This enzymatic activity ensures that histamine remains at appropriate levels to carry out its physiological functions without causing unwanted effects or triggering allergic reactions. In the central nervous system, HNMT plays an essential role in degrading histamine, where it acts as a neurotransmitter, since HNMT is the only enzyme in the body that can metabolize histamine in the CNS, ending its neurotransmitter activity.
HNMT also plays a role in the airway response to harmful particles, which is the body's physiological reaction to immune allergens, bacteria, or viruses in the respiratory system. Histamine is stored in granules in mast cells, basophils, and in the synaptic vesicles of histaminergic neurons of the airways. When exposed to immune allergens or harmful particles, histamine is released from these storage granules and quickly diffuses into the surrounding tissues. However, the released histamine needs to be rapidly deactivated for proper regulation, which is a function of HNMT.
Histamine intolerance
Histamine intolerance is a presumed set of adverse reactions to ingested histamine in food believed to be associated with flawed activity of DAO and HNMT enzymes. This set of reactions include cutaneous reactions (such as itching, flushing and edema), gastrointestinal symptoms (such as abdominal pain and diarrhea), respiratory symptoms (such as runny nose and nasal congestion), and neurological symptoms (such as dizziness and headache). However, this link between DAO and HNMT enzymes and adverse reactions to ingested histamine in food is not shared by mainstream science due to insufficient evidence. The exact underlying mechanisms by which deficiency in these enzymes can cause these adverse reactions are not fully understood but are hypothesized to involve genetic factors. Despite extensive research, there are no definitive, objective measures or indicators that could unambiguously define histamine intolerance as a distinct medical condition.
Activity measurements
The activity of HNMT, unlike that of DAO, cannot be measured by blood (serum) analysis.
Organs that produce DAO continuously release it into the bloodstream. DAO is stored in vesicular structures associated with the plasma membrane in epithelial cells. As a result, serum DAO activity can be measured, but not HNMT. This is because HNMT is primarily found within the cells of internal organs like the brain or liver and is not released to the bloodstream. Measuring intracellular HNMT directly is challenging. Therefore, diagnosis of HNMT activity is typically done indirectly by testing for known genetic variants.
Genetic variants
There is a genetic variant, registered in the Single Nucleotide Polymorphism database (dbSNP) as rs11558538, found in 10% of the population worldwide, in which a T allele is present at position 314 of HNMT instead of the usual C allele (c.314C>T). This variant causes the protein to be synthesized with threonine (Thr) replaced by isoleucine (Ile) at position 105 (p.Thr105Ile, T105I). This variant is described as a loss-of-function allele reducing HNMT activity, and is associated with diseases such as asthma, allergic rhinitis, and atopic eczema (atopic dermatitis). For individuals with this variant, the intake of HNMT inhibitors, which hamper enzyme activity, and histamine liberators, which release histamine from the granules of mast cells and basophils, could potentially influence their histamine levels. Still, this genetic variant is associated with a reduced risk of Parkinson's disease.
Experiments involving Hnmt-knockout mice have shown that a deficiency in HNMT indeed leads to increased brain histamine concentrations, resulting in heightened aggressive behaviors and disrupted sleep-wake cycles in these mice. In humans, genetic variants that affect HNMT activity have been implicated in various brain disorders, such as Parkinson's disease and attention deficit disorder, but it remains unclear whether these alterations in HNMT are a primary cause or secondary effect of these conditions. Additionally, reduced histamine levels in cerebrospinal fluid have been consistently reported in patients with narcolepsy and other conditions characterized by excessive daytime sleepiness. The association between HNMT polymorphisms and gastrointestinal diseases is still uncertain. While mild polymorphisms can lead to diseases such as asthma and inflammatory bowel disease, they may also reduce the risk of brain disorders like Parkinson's disease. On the other hand, severe mutations in HNMT can result in intellectual disability. Despite these findings, the role of HNMT in human health is not fully understood and continues to be an active area of research.
Inhibitors
The following substances are known to be HNMT inhibitors: amodiaquine, chloroquine, dimaprit, etoprine, metoprine, quinacrine, SKF-91488, tacrine, and diphenhydramine. HNMT inhibitors may increase histamine levels in peripheral tissues and aggravate conditions associated with histamine excess, such as allergic rhinitis, urticaria, and peptic ulcer disease. The effect of HNMT inhibitors on brain function is not yet fully understood. Research suggests that using new inhibitors of HNMT to increase the levels of histamine in the brain could potentially contribute to improvements in the treatment of brain disorders.
Methamphetamine overdose
HNMT could be a potential target for the treatment of symptoms of methamphetamine overdose. Methamphetamine is a central nervous system stimulant that can be abused with lethal consequences: numerous deaths related to methamphetamine overdoses have been reported. The reasoning behind this is that such overdose often leads to behavioral abnormalities, and it has been observed that elevated levels of histamine in the brain can attenuate these methamphetamine-induced behaviors. Therefore, by targeting HNMT, it might be possible to increase the levels of histamine in the brain, which could, in turn, help to mitigate the effects of a methamphetamine overdose. This effect could be achieved by using HNMT inhibitors. Studies predict that one such inhibitor could be metoprine, which crosses the blood-brain barrier and can potentially increase brain histamine levels by inhibiting HNMT; still, treatment of methamphetamine overdose with HNMT inhibitors remains an area of research.
Nτ-methylhistamine
Nτ-methylhistamine (NτMH), also known as 1-methylhistamine, is a product of Nτ-methylation of histamine in a reaction catalyzed by the HNMT enzyme.
NτMH is considered a biologically inactive metabolite of histamine. NτMH is excreted in the urine and can be measured to estimate the amounts of active histamine in the body. While NτMH has some biological activity on its own, it is much weaker than histamine. NτMH can bind to histamine receptors but has a lower affinity and efficacy than histamine for these receptors, meaning that it binds less strongly and activates them less effectively. Depending on the receptor subtype and the tissue context, NτMH may act as a partial agonist or an antagonist for some histamine receptors. NτMH may have some modulatory effects on histamine signaling, but it is unlikely to cause significant allergic or inflammatory reactions by itself. NτMH may also serve as a feedback mechanism to regulate histamine levels and prevent excessive histamine release. In addition, NτMH, being a product of a reaction catalyzed by HNMT, may inhibit the expression of HNMT in a negative feedback loop.
Urinary NτMH can be measured in clinical settings when systemic mastocytosis is suspected. Systemic mastocytosis and anaphylaxis are typically associated with at least a two-fold increase in urinary NτMH levels, which are also increased in patients taking monoamine oxidase inhibitors and in patients on histamine-rich diets.
References
External links
PDBe-KB provides an overview of all the structure information available in the PDB for human histamine N-methyltransferase
EC 2.1.1
Histamine
Enzymes
Metabolism
Human proteins | Histamine N-methyltransferase | [
"Chemistry",
"Biology"
] | 4,421 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
11,474,616 | https://en.wikipedia.org/wiki/TPL%20Tables | TPL Tables is a cross tabulation system used to generate statistical tables for analysis or publication.
Background / history
TPL Tables has its roots in the Table Producing Language (TPL) system, developed at the U.S. Bureau of Labor Statistics (BLS) in the 1970s and early 1980s to run on IBM mainframes.
It was one of the first software languages that was task oriented rather than procedure oriented. To create a table in TPL, the user needed only to specify the data and describe what the table should look like, without writing procedures to create the table. This was in sharp contrast to the Cobol and PL/1 programs people were using at BLS to create tables before TPL. When statistical offices began moving to databases, TPL extended its non-procedural model to database access.
The mainframe software gained international popularity during its time, particularly in government statistical offices, but at a substantial number of other sites as well. The BLS version of TPL was distributed by the United Nations. When TPL evolved into a commercial product, the UN connections remained. This led to such diverse customers as the census of the Comoros Islands (population 600,000) and the census of the People's Republic of China (population over 1,000,000,000).
BLS ceased major development of the software in the early to mid-1980s. At that time, two developers of the mainframe product founded QQQ Software, Inc. and began development of TPL Tables, rewriting the system for PCs and Unix systems. The first version of TPL Tables was released in 1987. The current version is 7.0.
Uses
TPL Tables is used with many different types of data, from small surveys or other datasets to national level censuses. Its many formatting features allow creation of publication quality output that can be published on paper or on the web.
Text or interactive mode
TPL Tables has a language for specifying tabulations and controlling format details. This language is the same for both Windows and Unix versions of the software. The Windows version also has an interactive interface that can access most features and includes Ted, an editor used to display PostScript tables on the screen and edit them interactively.
Tabulation Features
TPL Tables can process an unlimited amount of data and produce tables that range in size from a few lines to hundreds of pages. Subsets of the data can be selected and new variables can be computed from incoming data or from tabulated values. Alternate computations can be performed depending on specified conditions being met. New variables can also be defined by recoding or grouping values of other variables. Table rows can be ordered (ranked) according to the values in a selected column. Other computational features include percent distributions, maximums, minimums, medians and other quantiles. Weighted values can be tabulated.
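For readers more familiar with general-purpose tools, the sketch below shows, in Python/pandas, the kind of weighted cross tabulation with percent distributions that TPL Tables produces. It is not TPL syntax, and the data frame, column names, and weights are made-up illustrations.

```python
import pandas as pd

# Hypothetical survey records; names and values are illustrative only.
df = pd.DataFrame({
    "region":   ["North", "North", "South", "South", "South"],
    "industry": ["Mining", "Retail", "Mining", "Retail", "Retail"],
    "weight":   [1.5, 2.0, 1.0, 0.5, 2.5],
})

# Weighted counts by region (rows) and industry (columns).
table = pd.crosstab(df["region"], df["industry"],
                    values=df["weight"], aggfunc="sum").fillna(0)

# Percent distribution across each row, a feature TPL Tables also offers.
percent = table.div(table.sum(axis=1), axis=0) * 100

print(table)
print(percent.round(1))
```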
Inputs
TPL Tables can read files with data in fixed columns or delimited file types such as CSV (comma-separated values). TPL-SQL, an optional add-on feature, provides direct access from TPL Tables to SQL databases produced by products such as Sybase and Oracle. In the Windows version, TPL-SQL can access databases for which there are ODBC drivers.
Outputs
TPL Tables automatically formats table output according to the table specification, available names and labels, and default settings. Tables can be created in PostScript or as text. Additional format features allow control of such things as page size, table orientation and column widths. Rows or columns can be deleted, and labels and titles can be replaced. Display formats for data values can include alignment specifications and addition of special characters such as % and $. Footnotes can be included for both labels and data values. PostScript tables can contain proportional fonts in various styles and sizes.
Exports
Tables can be exported as PDF, HTML, or CSV. The Windows version also allows tables to be exported for use as input to PC-Axis.
Notes
External links
Home page for QQQ Software, Inc. and TPL Tables
QQQ Software, Inc. download page . Contains various documentation files, including the TPL Tables, Version 7.0 User Manual in PDF format.
References
Mendelssohn, Rudolph C., The Bureau of Labor Statistics' Table Producing Language (TPL), ACM Press, New York, NY, 1974
Survey Data Processing: A Review of Issues and Procedures, United Nations Department of Technical Co-operation for Development and Statistical Office, New York, 1982
Statistical software | TPL Tables | [
"Mathematics"
] | 925 | [
"Statistical software",
"Mathematical software"
] |
11,479,997 | https://en.wikipedia.org/wiki/Electromagnetic%20brake | Electromagnetic brakes or EM brakes are used to slow or stop vehicles using electromagnetic force to apply mechanical resistance (friction). They were originally called electro-mechanical brakes but over the years the name changed to "electromagnetic brakes", referring to their actuation method which is generally unrelated to modern electro-mechanical brakes. Since becoming popular in the mid-20th century, especially in trains and trams, the variety of applications and brake designs has increased dramatically, but the basic operation remains the same.
Both electromagnetic brakes and eddy current brakes use electromagnetic force, but electromagnetic brakes ultimately depend on friction whereas eddy current brakes use magnetic force directly.
Applications
In locomotives, a mechanical linkage transmits torque to an electromagnetic braking component.
Trams and trains use electromagnetic track brakes where the braking element is pressed by magnetic force to the rail. They are distinguished from mechanical track brakes, where the braking element is mechanically pressed on the rail.
Electric motors in industrial and robotic applications also employ electromagnetic brakes.
Recent design innovations have led to the application of electromagnetic brakes in aircraft. In this application, a combination motor/generator is used first as a motor to spin the tires up to speed prior to touchdown, thus reducing wear on the tires, and then as a generator to provide regenerative braking.
Types
Single face brake
A friction-plate brake uses a single plate friction surface to engage the input and output members of the clutch. Single face electromagnetic brakes make up approximately 80% of all of the power applied brake applications.
Power off brake
Power off brakes stop or hold a load when electrical power is either accidentally lost or intentionally disconnected. In the past, some companies have referred to these as "fail safe" brakes. These brakes are typically used on or near an electric motor. Typical applications include robotics, holding brakes for Z axis ball screws and servo motor brakes. Brakes are available in multiple voltages and can have either standard backlash or zero backlash hubs. Multiple disks can also be used to increase brake torque, without increasing brake diameter. There are 2 main types of holding brakes. The first is spring applied brakes. The second is permanent magnet brakes.
Spring type - When no electricity is applied to the brake, a spring pushes against a pressure plate, squeezing the friction disk between the inner pressure plate and the outer cover plate. This frictional clamping force is transferred to the hub, which is mounted to a shaft.
Permanent magnet type – A permanent magnet holding brake looks very similar to a standard power applied electromagnetic brake. Instead of squeezing a friction disk, via springs, it uses permanent magnets to attract a single face armature. When the brake is engaged, the permanent magnets create magnetic lines of flux, which can in turn attract the armature to the brake housing. To disengage the brake, power is applied to the coil which sets up an alternate magnetic field that cancels out the magnetic flux of the permanent magnets.
Both types of power off brake are considered to be engaged when no power is applied to them. They are typically required to hold or to stop a load on their own in the event of a loss of power or when power is not available in a machine circuit. Permanent magnet brakes have a very high torque for their size, but also require a constant current control to offset the permanent magnetic field. Spring applied brakes do not require a constant current control; they can use a simple rectifier, but they are larger in diameter or need stacked friction disks to increase the torque.
Particle brake
Magnetic particle brakes are unique in their design from other electro-mechanical brakes because of the wide operating torque range available. Like an electro-mechanical brake, torque to voltage is almost linear; however, in a magnetic particle brake, torque can be controlled very accurately (within the operating RPM range of the unit). This makes these units ideally suited for tension control applications, such as wire winding, foil, film, and tape tension control. Because of their fast response, they can also be used in high cycle applications, such as magnetic card readers, sorting machines and labeling equipment.
Magnetic particles (very similar to iron filings) are located in the powder cavity. When electricity is applied to the coil, the resulting magnetic flux tries to bind the particles together, almost like a magnetic particle slush. As the electric current is increased, the binding of the particles becomes stronger. The brake rotor passes through these bound particles. The output of the housing is rigidly attached to some portion of the machine. As the particles start to bind together, a resistant force is created on the rotor, slowing, and eventually stopping the output shaft.
Hysteresis power brake
Electrical hysteresis units have an extremely wide torque range. Since these units can be controlled remotely, they are ideal for test stand applications where varying torque is required. Since drag torque is minimal, these units offer the widest available torque range of any of the hysteresis products. Most applications involving powered hysteresis units are in test stand requirements.
When electricity is applied to the field, it creates an internal magnetic flux. That flux is then transferred into a hysteresis disk (that may be made from an AlNiCo alloy) passing through the field. The hysteresis disk is attached to the brake shaft. A magnetic drag on the hysteresis disk allows for a constant drag, or eventual stoppage of the output shaft.
When electricity is removed from the brake, the hysteresis disk is free to turn, and no relative force is transmitted between either member. Therefore, the only torque seen between the input and the output is bearing drag.
Multiple disk brake
Multiple disk brakes are used to deliver extremely high torque within a small space. These brakes can be used either wet or dry, which makes them ideal to run in multi-speed gear box applications, machine tool applications, or in off-road equipment.
Electro-mechanical disk brakes operate via electrical actuation, but transmit torque mechanically. When electricity is applied to the coil of an electromagnet, the magnetic flux attracts the armature to the face of the brake. As it does so, it squeezes the inner and outer friction disks together. The hub is normally mounted on the shaft that is rotating. The brake housing is mounted solidly to the machine frame. As the disks are squeezed, torque is transmitted from the hub into the machine frame, stopping and holding the shaft.
When electricity is removed from the brake, the armature is free to turn with the shaft. Springs keep the friction disk and armature away from each other. There is no contact between braking surfaces and minimal drag.
See also
Brake run
Electromagnetic clutch
Regenerative brake
Eddy current brake
Dynamic braking
References
Brakes
Railway brakes
Electromagnetism
Electromagnetic brakes and clutches
"Physics",
"Engineering"
] | 1,376 | [
"Electromagnetism",
"Physical phenomena",
"Electromagnetic brakes and clutches",
"Fundamental interactions",
"Mechanical engineering"
] |
11,482,363 | https://en.wikipedia.org/wiki/Spatial%20twist%20continuum | In finite element analysis, the spatial twist continuum (STC) is a dual representation of a hexahedral mesh that defines the global connectivity constraint. Generation of an STC can simplify the automated generation of a mesh. The method was published in 1993 by a group led by Peter Murdoch.
The name is derived from the description of the surfaces that define the connectivity of the hexahedral elements. The surfaces are arranged in the three principal dimensions such that they form orthogonal intersections that coincide with the centroid of the hexahedral element. They are arranged predominantly coplanar to each other in their respective dimensions, yet they can twist into the other dimensional planes through transitions. The surfaces are unbroken throughout the entire volume of the mesh, hence they are continua.
Explanation
One of the areas where the STC finds application is computational fluid dynamics, a field of analysis that involves simulating the flow of fluids over and through bodies defined by boundary surfaces. The procedure involves building a mesh and using it to analyze the system with a finite volume approach.
An analyst has many choices available for creating a mesh that can be used in a CFD or CAE simulation. One is to use a tetrahedral, polyhedral, trimmed Cartesian, or mixed/hybrid hexahedral mesh (called hex-dominant). These are classified as non-structured meshes, which can all be created automatically; however, the CFD and FEA results tend to be less accurate and more prone to solution divergence (the simulation fails to solve).
The other option for the analyst is to use an all-hexahedral mesh, which offers far greater solver stability, speed, and accuracy, as well as the ability to run much more powerful turbulence models such as large eddy simulation (LES) in transient mode, as opposed to non-structured meshes, which are generally limited to a steady-state RANS model.
The difficulty with generating an all-hexahedral mesh on a complex geometry is that mesh needs to take into consideration the local geometric detail as well as the global connectivity constraint. This is the STC, and it is only present in an all-hexahedral mesh. This is the reason why it is relatively easy to automate a non-structured mesh, the automatic generator only needs to be concerned with the local cell size geometry.
Advantages
The tradeoffs and relative benefits of using either mesh method to build and solve a CFD or CAE model are best explained by looking at the total work flow.
1) CAD cleanup. This involves fixing the gaps and holes in the CAD data. It is usually the forgotten task that can consume a lot of time and energy, and not something any experienced analyst looks forward to.
2) Mesh generation: The two main choices are to use an automated non-structured mesh or build a full hexahedral mesh.
a) Non-Structured: If one chooses to build a non-structured mesh then it is not as easy as first perceived. The process involves automatically building the mesh then manually fixing the regions of very poor cell quality. This process can take a considerable amount of time, another hidden time cost.
b) All-Hexahedral: As of mid-2009 there are a few all-hexahedral mesh generating tools. Some of them are (in alphabetical order)
GridPro (1985) - a pure multiblock meshing tool ... with really good inter and intra block smoothing. For more details visit http://www.gridpro.com
Moceon (1995) - based on the STC; just released and has generated good interest among the community. For more details visit http://www.moceon.com
IcemCFD http://www.ansys.com/products/icemcfd.asp
Pointwise (primarily a multiblock meshing tool, but can also produce tetrahedra) http://www.pointwise.com
TrueGrid (multiblock meshing tool) www.truegrid.com
However, there are ways of quickly building a hexahedral mesh such as using a 2D quad mesh and projecting into the z-direction. Another method is building a block structured mesh by using a CAD based program to create logically connected splines. After the blocks are built the cell factors are added to the blocks and the mesh created. One significant advantage of using a block based hexahedral mesh is the mesh can be smoothed very quickly. For large complex geometric models the process of building a hexahedral mesh can take days, weeks and even months depending on the skill level and tool sets available to the analyst.
3) Set up the model and assign the boundary conditions: This is a rather trivial step and it is usually taken care of by GUI assisted menus.
4) Running the Simulation: This is where the nightmares for the non-structured mesh begin. Since it takes six tetrahedra to represent one hexahedron, the tet mesh will be considerably larger and will require a lot more computing power and RAM to solve than an equivalent hexahedral mesh (a rough cell-count comparison is sketched after this list). The tetrahedral mesh will also require more relaxation factors to solve the simulation by effectively dampening the amplitude of the gradients. This increases the number of sub-cycle steps and drives the Courant number up. If you built a hexahedral mesh, this is where the tortoise passes the hare.
5) Post processing the results: The time required in this step is highly dependent on the size of the mesh (number of cells).
6) Making design changes: If you build a non-structured mesh this is where you go back to the beginning and start all over again. If you build a hexahedral mesh then you make the geometric change, re-smooth the mesh and restart the simulation.
7) Accuracy: This is the major difference between a non-structured mesh and a hexahedral mesh, and the main reason why the hexahedral mesh is preferred.
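A back-of-the-envelope version of the cell-count argument from step 4, using the six-tetrahedra-per-hexahedron rule of thumb quoted above; the mesh size and the per-cell memory figure are purely illustrative assumptions, not benchmarks of any particular solver.

```python
# Rough cell-count and memory comparison between an all-hexahedral mesh and an
# equivalent tetrahedral mesh, using the ~6 tets per hex rule of thumb.
hex_cells = 2_000_000        # assumed size of the all-hexahedral mesh
tets_per_hex = 6             # rule of thumb quoted in step 4
bytes_per_cell = 1_000       # assumed solver memory per cell (illustrative)

tet_cells = hex_cells * tets_per_hex
print(f"hex mesh: {hex_cells:>10,d} cells, ~{hex_cells * bytes_per_cell / 1e9:.1f} GB")
print(f"tet mesh: {tet_cells:>10,d} cells, ~{tet_cells * bytes_per_cell / 1e9:.1f} GB")
```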
The "spatial twist continuum" addresses the issue of complex mesh model creation by elevating the structure of the mesh to a higher level of abstraction that assists in the creation of the all-hexahedral mesh.
References
Murdoch P.; Benzley S.; Blacker T.; Mitchell S.A. "The spatial twist continuum: A connectivity based method for representing all-hexahedral finite element meshes." Finite Elements in Analysis and Design, Volume 28, Number 2, 15 December 1997, Elsevier, pp. 137–149.
Murdoch, Peter and Steven E. Benzley. "The Spatial Twist Continuum." Proceedings, 4th International Meshing Roundtable, Sandia National Laboratories, pp. 243–251, October 1995
1995 introductions
Computational fluid dynamics
Finite element method | Spatial twist continuum | [
"Physics",
"Chemistry"
] | 1,366 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
11,483,854 | https://en.wikipedia.org/wiki/LCP%20theory | In chemistry, ligand close packing theory (LCP theory), sometimes called the ligand close packing model describes how ligand – ligand repulsions affect the geometry around a central atom. It has been developed by R. J. Gillespie and others from 1997 onwards and is said to sit alongside VSEPR which was originally developed by R. J. Gillespie and R Nyholm. The inter-ligand distances in a wide range of molecules have been determined. The example below shows a series of related molecules:
The consistency of the interligand distances (F-F and O-F) in the above molecules is striking and this phenomenon is repeated across a wide range of molecules and forms the basis for LCP theory.
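The interligand distance follows directly from the bond length and the bond angle: for two ligands X bonded to a central atom A at distance d(A–X) and separated by angle θ, d(X–X) = 2 d(A–X) sin(θ/2). The short check below illustrates how a near-constant F–F distance can emerge even as the bond length and angle change; the BF3 and BF4− bond lengths used are approximate, illustrative values rather than quoted reference data.

```python
import math

def interligand_distance(bond_length_pm, angle_deg):
    """d(X-X) = 2 * d(A-X) * sin(theta / 2), in the same units as the bond length."""
    return 2 * bond_length_pm * math.sin(math.radians(angle_deg) / 2)

# Approximate bond lengths in pm (illustrative values).
print(interligand_distance(131, 120.0))   # BF3, trigonal planar -> about 227 pm
print(interligand_distance(139, 109.5))   # BF4-, tetrahedral    -> about 227 pm
```

Both cases give an F–F separation of roughly 227 pm, which is the kind of constancy LCP theory is built on.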
Ligand radius
From a study of known structural data a series of inter-ligand distances has been determined and it has been found that there is a constant inter-ligand radius for a given central atom. The table below shows the inter-ligand radius (pm) for some of the period 2 elements:
The ligand radius should not be confused with the ionic radius.
Treatment of lone pairs
In LCP theory a lone pair is treated as a ligand. Gillespie terms the lone pair a lone pair domain and states that these lone pair domains push the ligands together until they reach the interligand distance predicted by the relevant inter-ligand radii. An example demonstrating this is shown below, where the F-F distance is the same in the AF3 and AF4+ species:
LCP and VSEPR
LCP and VSEPR make very similar predictions as to geometry, but LCP theory has the advantage that its predictions are more quantitative, particularly for the second period elements Be, B, C, N, O, and F. Ligand-ligand repulsions are important when
the central atom is small e.g. period 2, (Be, B, C, N, O)
the ligands are only weakly electronegative compared to the central atom
the ligands are large compared to the central atom
there are 5 or more ligands around the central atom
References
Chemistry theories
Molecular geometry
Stereochemistry
Quantum chemistry | LCP theory | [
"Physics",
"Chemistry"
] | 427 | [
"Quantum chemistry",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Space",
" molecular",
"nan",
"Atomic",
"Spacetime",
"Matter",
" and optical physics"
] |
7,774,869 | https://en.wikipedia.org/wiki/Anthropogenic%20metabolism | Anthropogenic metabolism, also referred to as metabolism of the anthroposphere, is a term used in industrial ecology, material flow analysis, and waste management to describe the material and energy turnover of human society. It emerges from the application of systems thinking to the industrial and other man-made activities and it is a central concept of sustainable development. In modern societies, the bulk of anthropogenic (man-made) material flows is related to one of the following activities: sanitation, transportation, habitation, and communication, which were "of little metabolic significance in prehistoric times". Global man-made stocks of steel in buildings, infrastructure, and vehicles, for example, amount to about 25 Gigatonnes (more than three tonnes per person), a figure that is surpassed only by construction materials such as concrete.
Sustainable development is closely linked to the design of a sustainable anthropogenic metabolism, which will entail substantial changes in the energy and material turnover of the different human activities. Anthropogenic metabolism can be seen as synonymous to social or socioeconomic metabolism. It comprises both industrial metabolism and urban metabolism.
Negative effects
In layman's terms, anthropogenic metabolism describes the impact that the modern industrialized world has on the planet. Much of this impact involves waste management, ecological footprints, water footprints, and flow analysis (i.e., the rate at which each human depletes the energy around them). Most anthropogenic metabolism happens in developed countries. According to Rosales, "Economic growth is at present the main cause of increased climate change, and climate change is a main mechanism of biodiversity loss; because of this, economic growth is a major catalyst of biodiversity loss."
A water footprint is the amount of water that each person uses in their daily life. Most of the world's water is salt water, which cannot be used in human food or water supplies. Therefore, the freshwater sources that were once plentiful are now being diminished by the anthropogenic metabolism of the growing population. The water footprint encompasses how much fresh water is needed to meet each consumer's needs. According to J. Allan, "there is a huge impact of water use on stores of surface and groundwater and on flows to which water is returned after use." These impacts are shown to be particularly high for manufacturing industries. Allan notes, for example, that fewer than 10 economies worldwide have a significant water surplus, but that these economies have successfully met, or have the potential to meet, the water deficits of the other 190 economies, so that "consumers enjoy the delusion of food and water security provided by virtual water trade."
In addition, the ecological footprint is a more economical and land-focused way of looking at human impact. Developed countries tend to have higher ecological footprints, which do not strictly correspond to a country's total population. According to research by Dias de Oliveira, Vaughan and Rykiel, "The Ecological Footprint...is an accounting tool based on two fundamental concepts, sustainability and carrying capacity. It makes it possible to estimate the resource consumption and waste assimilation requirements of a defined human population or economy sector in terms of corresponding productive land area."
One of the major cycles that humans can contribute to that cause a major impact on climate change is the nitrogen cycle. This comes from nitrogen fertilizers that humans use. Gruber and Galloway have researched, "The massive acceleration of the nitrogen cycle caused by the production and industrial use of artificial nitrogen fertilizers worldwide has led to a range of environmental problems. Most important is how the availability of nitrogen will affect the capacity of Earth's biosphere to continue absorbing carbon from the atmosphere and to thereby continue helping to mitigate climate change."
The carbon cycle is another major contributor to climate change primarily from anthropogenic metabolism. A couple examples of how humans contribute to the carbon in the atmosphere is by burning fossil fuels and deforestation. By taking a close look at the carbon cycle Peng, Thomas and Tian have discovered that, "It is recognized that human activities, such as fossil fuel burning, land-use change, and forest harvesting at a large scale, have resulted in the increase of greenhouse gases in the atmosphere since the onset of the Industrial Revolution. The increasing amounts of greenhouse gases, particularly in the atmosphere, is believed to have induced climate change and global warming."
The impacts of climate change extend beyond humans. Extinctions of species are forecast because their habitats are being affected; marine animals are one example. There are major impacts on marine systems as a result of anthropogenic metabolism: according to Blaustein, the dramatic findings indicate that "every square kilometer [is] affected by some anthropogenic driver of ecological change".
The negative effects of anthropogenic metabolism are seen through the water footprint, the ecological footprint, the carbon cycle, and the nitrogen cycle. Studies of marine ecosystems show major impacts from humans, particularly from developed countries, which have more industry and thus more anthropogenic metabolism.
See also
References
Further reading
Baccini, Peter and Brunner, Paul H., Metabolism of the Anthroposphere, Springer, 1991, Heidelberg, Berlin, New York. New edition March 2012, MIT Press, Cambridge MA.
Waste management concepts
Industrial ecology | Anthropogenic metabolism | [
"Chemistry",
"Engineering"
] | 1,077 | [
"Industrial ecology",
"Industrial engineering",
"Environmental engineering"
] |
7,775,585 | https://en.wikipedia.org/wiki/Architectural%20sculpture | Architectural sculpture is the use of sculptural techniques by an architect and/or sculptor in the design of a building, bridge, mausoleum or other such project. The sculpture is usually integrated with the structure, but freestanding works that are part of the original design are also considered to be architectural sculpture. The concept overlaps with, or is a subset of, monumental sculpture.
It has also been defined as "an integral part of a building or sculpture created especially to decorate or embellish an architectural structure."
Architectural sculpture has been employed by builders throughout history, and in virtually every continent on earth save pre-colonial Australia.
Egyptian
Modern understanding of ancient Egyptian architecture is based mainly on the religious monuments that have survived since antiquity, which are carved stone with post and lintel construction. These religious monuments dedicated to the gods or pharaohs were designed with a great deal of architectural sculpture inside and out: engaged statues, carved columns and pillars, and wall surfaces carved with bas-reliefs. The classic examples of Egyptian colossal monuments (the Great Sphinx of Giza, the Abu Simbel temples, the Karnak Temple Complex, etc.) represent thoroughly integrated combinations of architecture and sculpture.
Obelisks, elaborately carved from a single block of stone, were usually placed in pairs to flank the entrances to temples and pyramids.
Reliefs are also common in Egyptian building, depicting scenes of everyday life and often accompanied by hieroglyphics.
Assyro-Babylonian
The Fertile Crescent architectural sculptural tradition began when Ashurnasirpal II moved his capital to the city of Nimrud around 879 BCE. This site was located near a major deposit of gypsum (alabaster). This fairly easy-to-cut stone could be quarried in large blocks that could then be readily carved for the palaces built there. The early style developed out of an already flourishing mural tradition by creating drawings that were then carved in low relief. Another contributing factor in the development of architectural sculpture was the tradition of small carved seals that had been made in the area for centuries.
Indian
Greco-Roman
The most significant Greek introduction, well before the Classical period, was pedimental sculpture, fitting in the long, low triangle formed by the pediment above the portico of Greek temples. This remained a feature of later Greek and Roman temples and was revived in the Renaissance, with many new examples, by then mostly on large public buildings, created in the 19th and 20th centuries.
Classical Greek architecture, like the prototypical Parthenon, incorporate architectural sculpture in a fairly narrow set of standard, formal building elements. The names of these elements still comprise the usual vocabulary for discussion: the pediment, metope, frieze, caryatid, quadriga, acroteria, etc.
Greek examples of architectural sculpture are distinguished not only by their age but their very high quality and skilful technique, with rhythmic and dynamic modelling, figural compositions in friezes that continue seamlessly over vertical joints from one block of stone to the next, and mastery of depth and legibility.
The known Greek and Roman examples have been exhaustively studied, and frequently copied or adapted into subsequent neoclassical styles: Greek Revival architecture (usually the most strict), Neoclassical architecture, Beaux-Arts architecture with its exaggerated and romantic free interpretations of the vocabulary, and even Stalinist architecture like the Central Moscow Hippodrome adapted to a totalitarian aesthetic. These re-interpretations are sometimes dubious; for instance, there are many modern copies of the Mausoleum of Halicarnassus, like the National Diet Building in Tokyo, despite the fact that all classical descriptions of the Mausoleum are vague.
European
Pre-Columbian North and South America
Post-contact North and South America
United States
Not until about 1870 did the U.S. develop the talent, the economic power, and the taste for buildings grand enough to need architectural sculpture. The Philadelphia City Hall, constructed 1871 through 1901, is recognized as the turning point, because of the approximately 250 sculptures planned for the building, the large finial of William Penn, and the practical effect of Alexander Milne Calder training many assistants there.
In the same years, H.H. Richardson began to develop his influential signature genre, which included romantic, medieval, and Romanesque stone carving. Richard Morris Hunt became the first to bring the Parisian neo-classical École des Beaux-Arts style back to the United States, a style that depended on integrated figural sculpture and a highly ornamented building fabric for its aesthetic effect. The Beaux-Arts style dominated for major public buildings between the 1893 World's Columbian Exposition in Chicago, through about 1912, the year of the San Francisco City Hall. The need for sculptors saw the emergence of a small industry of carvers and modelers, and a professional organization, the National Sculpture Society.
The advent of steel frames and reinforced concrete encouraged, at first, more diverse building styles into the 1910s and 1920s. The diversity of skyscraper Gothic, exotic "revivals" of Mayan and Egyptian, Stripped Classicism, Art Deco, etc. called for a similar diversity of sculptural approaches. The use of sculpture was still expected, particularly for public buildings such as war memorials and museums. In 1926 the pre-eminent American architectural sculptor, Lee Lawrie, with his longtime friend and collaborator architect Bertram Goodhue, developed perhaps the most sophisticated American examples at the Nebraska State Capitol and the Los Angeles Public Library.
Goodhue's premature death ended that collaboration. The Depression, and the onset of World War II, decimated building activity. The old building trades disbanded. By the postwar years the aesthetic of architectural modernism had taken hold. Except for a few diehards and regional sculptors, the profession was not only dead but discredited. As of the 2010s there are isolated signs of a revival of interest, for instance in the career of Raymond Kaskey and the Persist statue in Sacramento, California.
See also
List of architectural sculpture in Westminster
Pedimental sculptures in Canada
Pedimental sculptures in the United States
References
External links
Sculpture
Garden features
Sculpture
History of sculpture
Landscape design history
Ornaments (architecture) | Architectural sculpture | [
"Engineering"
] | 1,240 | [
"Architectural history",
"Architecture"
] |
7,777,698 | https://en.wikipedia.org/wiki/Synopses%20of%20the%20British%20Fauna | Synopses of the British Fauna is a series of identification guides, published by The Linnean Society and The Estuarine and Coastal Sciences Association. Each volume in the series provides and in-depth analysis of a group of animals and is designed to bridge the gap between the standard field guide and more specialised monograph or treatise. The series is now published by The Field Studies Council on behalf of The Linnean Society and The Estuarine and Coastal Sciences Association.
The series is designed for use in the field and is kept as user friendly as possible with technical terminology kept to a minimum and a glossary of terms provided, although the complexity of the subject matter makes the books more suitable for the more experienced practitioner.
History of the series
On 11 March 1943, at a meeting of The Linnean Society in Burlington House, TH Savory presented his "Synopsis of the Opiliones" (harvestmen). It was so well received that a decision was made there and then to publish it as the first of a series of "ecological fauna lists".
Re-launched by Dr Doris Kermack in the mid-1960s, the New Series of Synopses of the British Fauna went from strength to strength. From number 13, the series had been jointly sponsored by The Estuarine and Coastal Sciences Association and Dr RSK Barnes became co-editor.
From 1993, the series has been published by The Field Studies Council and benefits from association with the extensive testing undertaken as part of the AIDGAP project.
Volumes
The series contains the following volumes, many of which are out of print. Many of the volumes have been updated and reprinted under slightly different names to reflect either taxonomic changes or advances in the understanding of a group.
Volume 62: Marine Gastropods 3: Neogastropoda (Wigham and Graham) 2018
Volume 61: Marine Gastropods 2: Littorinimorpha and other unassigned Caenogastropoda (Wigham and Graham) 2017
Volume 60: Marine Gastropods 1: Patellogastropoda and Vetigastropoda (Wigham and Graham) 2017
Volume 59: Athecate hydroids and their medusae (Shuchert) 2012
Volume 58: Centipedes (AD Barber) 2009
Volume 57: Barnacles (AJ Southward) 2008
Volume 56: Echinoderms (EC Southward and AC Campbell) 2005
Volume 55: Lobsters, Mud Shrimps and Anomuran Crabs (RW Ingle and ME Christiansen) 2004
Volume 54: Polychaetes: British Chrysopetaloidea, Pisionoidea and Aphroditoidea (SJ Chambers and AI Muir) 1998
Volume 53: Free Living British Nematodes, Part 3 Monohysterids (RM Warwick, HM Platt and PJ Somerfield) 1998
Volume 52: Ticks of North-West Europe (Paul D Hillyard) 1996
Volume 51: Marine and Brackish Water Harpacticoid Copepods, Part 1 (R Huys, JM Gee, CG Moore and R Hamond) 1996
Volume 50: North-west European Thecate Hydroids and Their Medusae (PFS Cornelius) 1995
Volume 49: Woodlice Keys and Notes for Identification of the Species (PG Oliver and CJ Meechan) 1993
Volume 48: Marine Planktonic Ostracods (MV Angel) 1993
Volume 47: Copepods Parasitic on Fishes (Z Kabata) 1992
Volume 46: Commensal and Parasitic Copepods Associated with Marine Invertebrates (and Whales) (V Gotto) 1993
Volume 45: Polychaetes British Phyllodocoideans, Typhloscolecoideans and Tomopteroideans (F Pleijel and RP Dales) 1991
Volume 44: Polychaetes: Interstitial Families (Second Edition) (W Westheide) 2008
Volume 44: Polychaetes: Interstitial Families (W Westheide) 1990
Volume 43: Marine and Brackish Water Ostracods (Superfamilies Cypridacea and Cytheracea) (J Athersuch, DJ Horne and JE Whittaker) 1990
Volume 42: Freshwater Ostracoda (PA Henderson) 1990
Volume 41: Entoprocts (C Nielsen) 1989
Volume 40: Pseudoscorpions (G Legg and RE Jones) 1988
Volume 39: Chaetognatha (AC Pierrot-Bults and KC Chidghey) 1988
Volume 38: Free Living Marine Nematodes Part II British Chromadorids (HM Platt and RM Warwick) 1988
Volume 37: Molluscs Caudofoveata, Solenogastres, Polyplacophora and Scaphopoda (AM Jones and JM Baxter) 1987
Volume 36: Halacarid Mites (J Green and M Macquitty) 1987
Volume 35: Millipedes (J Gordon Blower) 1985
Volume 34: Cyclostome Bryozoans (PJ Hayward and JS Ryland) 1985
Volume 33: Ctenostome Bryozoans (PJ Hayward) 1985
Volume 32: Polychaetes British Amphinomida, Spintherida and Eunicida (JD George and G Hartmann-Schroder) 1985
Volume 31: Earthworms (RW Sims and BM Garard) 1985
Volume 30: Euphausiid, Stomatopod and Leptostracan Crustaceans (J Mauchline) 1984
Volume 29: Siphonophores and Velellids (PA Kirkpatrick and PR Pugh) 1984
Volume 28: Free-Living Marine Nematodes Pt 1: British Enoplids Free Living Marine Nematodes (HM Platt and RM Warwick) 1983
Volume 27: Tanaids (DM Holdich and JA Jones) 1983
Volume 26: British Polyclad Turbellarians (S Prudhoe) 1983
Volume 25: Shallow Water Crabs Keys and notes for identification of the species (RW Ingle) 1983
Volume 24: Nemerteans R Gibson 1982
Volume 23: British and Other Freshwater Ciliated Protozoa (Part 2) Ciliophora: Oligohymenophora & Polyhymenophora (CR Curds, MA Gates and D McRoberts) 1982
Volume 22: British and Other Freshwater Ciliated Protozoa (Part 1) Ciliophora: Kinetofragminophora (CR Curds) 1982
Volume 21: British Other Marine Estuarine Oligochaetes (Brinkhurst) 1982
Volume 20: British Pelagic Tunicates (JH Fraser) 1982
Volume 19: British Planarians (IR Ball and TB Reynoldson) 1981
Volume 18: British Anthozoa (RL Manuel) 1981
Volume 17: British Brachiopods (C Howard, C Brunton and GB Curry) 1979
Volume 16: British Nearshore Foraminiferids (JW Murray) 1979
Volume 15: Coastal Shrimps and Prawns Keys and Notes for Identification of the Species (Ed. G Smaldon, LB Holthius and CHJM Fransen) 1994
Volume 15: British Coastal Shrimps Prawns (G Smaldon) 1979
Volume 14: Cheilostomatous Bryozoa, Part 2 Hippothooidea - Celleporoidea (PJ Hayward and JS Ryland) 1999
Volume 14: British Ascophoran Bryozoans (PJ Hayward, JS Ryland) 1979
Volume 13: British and Other Phoronids (CC Emig) 1979
Volume 12: Sipunculans (PE Gibbs) 2001
Volume 12: British Sipunculans (PE Gibbs) 1978
Volume 11: British Freshwater Bivalve Mollusca (AE Ellis) 1978
Volume 10: Cheilostomatous Bryozoa, Part 1: Aeteoidea-Cribrilinoidea (PJ Hayward and JS Ryland)
Volume 8: Molluscs: Benthic Opisthobranchs (Mollusca: Gastropoda) (TE Thompson) 1989
Volume 8: British Opisthobranch Molluscs (TE Thompson, GH Brown) 1976
Volume 7: British Cumaceans (NS Jones) 1976
Volume 6: British Land Snails (RAD Cameron, M Redfern) 1976
Volume 5: Sea-Spiders (Pycnogonida) of the north-east Atlantic (RN Bamber) 2010
Volume 5: British Sea Spiders (PE King) 1974
Volume 4: Harvestmen (PD Hillyard) 2005
Volume 4: British Harvestmen (J Sankey, TH Savory) 1974
Volume 3: Intertidal Marine Isopods (E Naylor, A Brandt) 2015
Volume 3: British Marine Isopods (E Naylor) 1972
Volume 2: Molluscs: Prosobranch and Pyramidellid Gastropods Keys and Notes for the Identification of the Species
Volume 1: British Ascidians (R Millar) 1970
External links
Linnean Society
Full list of Synopses in print
Biological literature
Fauna of the United Kingdom
Natural history
Taxonomy (biology) books
Zoological literature
Wild animals identification | Synopses of the British Fauna | [
"Biology"
] | 1,891 | [
"Taxonomy (biology)",
"Taxonomy (biology) books"
] |
7,781,359 | https://en.wikipedia.org/wiki/Quantum%20cohomology | In mathematics, specifically in symplectic topology and algebraic geometry, a quantum cohomology ring is an extension of the ordinary cohomology ring of a closed symplectic manifold. It comes in two versions, called small and big; in general, the latter is more complicated and contains more information than the former. In each, the choice of coefficient ring (typically a Novikov ring, described below) significantly affects its structure, as well.
While the cup product of ordinary cohomology describes how submanifolds of the manifold intersect each other, the quantum cup product of quantum cohomology describes how subspaces intersect in a "fuzzy", "quantum" way. More precisely, they intersect if they are connected via one or more pseudoholomorphic curves. Gromov–Witten invariants, which count these curves, appear as coefficients in expansions of the quantum cup product.
Because it expresses a structure or pattern for Gromov–Witten invariants, quantum cohomology has important implications for enumerative geometry. It also connects to many ideas in mathematical physics and mirror symmetry. In particular, it is ring-isomorphic to symplectic Floer homology.
Throughout this article, X is a closed symplectic manifold with symplectic form ω.
Novikov ring
Various choices of coefficient ring for the quantum cohomology of X are possible. Usually a ring is chosen that encodes information about the second homology of X. This allows the quantum cup product, defined below, to record information about pseudoholomorphic curves in X. For example, let
be the second homology modulo its torsion. Let R be any commutative ring with unit and Λ the ring of formal power series of the form
where
the coefficients come from R,
the are formal variables subject to the relation ,
for every real number C, only finitely many A with ω(A) less than or equal to C have nonzero coefficients .
The variable is considered to be of degree , where is the first Chern class of the tangent bundle TX, regarded as a complex vector bundle by choosing any almost complex structure compatible with ω. Thus Λ is a graded ring, called the Novikov ring for ω. (Alternative definitions are common.)
Small quantum cohomology
Let
be the cohomology of X modulo torsion. Define the small quantum cohomology with coefficients in Λ to be
Its elements are finite sums of the form
The small quantum cohomology is a graded R-module with
The ordinary cohomology H*(X) embeds into QH*(X, Λ) via , and QH*(X, Λ) is generated as a Λ-module by H*(X).
For any two cohomology classes a, b in H*(X) of pure degree, and for any A in , define (a∗b)A to be the unique element of H*(X) such that
(The right-hand side is a genus-0, 3-point Gromov–Witten invariant.) Then define
This extends by linearity to a well-defined Λ-bilinear map
called the small quantum cup product.
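A hedged sketch in LaTeX of the defining relations, using e^A for the Novikov variables and GW for the genus-0, 3-point Gromov–Witten invariant; exact sign and variable conventions vary between references:
\[
\langle (a * b)_A,\, c \rangle \;=\; \mathrm{GW}^{X}_{0,3,A}(a, b, c) \quad \text{for all } c \in H^*(X),
\qquad
a * b \;=\; \sum_{A} (a * b)_A \, e^{A}.
\]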
Geometric interpretation
The only pseudoholomorphic curves in class A = 0 are constant maps, whose images are points. It follows that
in other words,
Thus the quantum cup product contains the ordinary cup product; it extends the ordinary cup product to nonzero classes A.
In general, the Poincaré dual of (a∗b)A corresponds to the space of pseudoholomorphic curves of class A passing through the Poincaré duals of a and b. So while the ordinary cohomology considers a and b to intersect only when they meet at one or more points, the quantum cohomology records a nonzero intersection for a and b whenever they are connected by one or more pseudoholomorphic curves. The Novikov ring just provides a bookkeeping system large enough to record this intersection information for all classes A.
Example
Let X be the complex projective plane with its standard symplectic form (corresponding to the Fubini–Study metric) and complex structure. Let be the Poincaré dual of a line L. Then
The only nonzero Gromov–Witten invariants are those of class A = 0 or A = L. It turns out that
and
where δ is the Kronecker delta. Therefore,
In this case it is convenient to rename as q and use the simpler coefficient ring Z[q]. This q is of degree . Then
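A hedged summary of the resulting ring presentation, consistent with the relations sketched in this example (here ℓ denotes the class Poincaré dual to a line, with deg ℓ = 2 and deg q = 6):
\[
\ell * \ell = \ell^2, \qquad \ell * \ell^2 = q, \qquad \ell^2 * \ell^2 = q\,\ell,
\qquad\text{so}\qquad
QH^*(\mathbb{CP}^2;\, \mathbb{Z}[q]) \;\cong\; \mathbb{Z}[q][\ell]\,/\,(\ell^3 - q).
\]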
Properties of the small quantum cup product
For a, b of pure degree,
and
The small quantum cup product is distributive and Λ-bilinear. The identity element is also the identity element for small quantum cohomology.
The small quantum cup product is also associative. This is a consequence of the gluing law for Gromov–Witten invariants, a difficult technical result. It is tantamount to the fact that the Gromov–Witten potential (a generating function for the genus-0 Gromov–Witten invariants) satisfies a certain third-order differential equation known as the WDVV equation.
An intersection pairing
is defined by
(The subscripts 0 indicate the A = 0 coefficient.) This pairing satisfies the associativity property
Dubrovin connection
When the base ring R is C, one can view the evenly graded part H of the vector space QH*(X, Λ) as a complex manifold. The small quantum cup product restricts to a well-defined, commutative product on H. Under mild assumptions, H with the intersection pairing is then a Frobenius algebra.
The quantum cup product can be viewed as a connection on the tangent bundle TH, called the Dubrovin connection. Commutativity and associativity of the quantum cup product then correspond to zero-torsion and zero-curvature conditions on this connection.
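One common way of writing this connection (a sketch only; sign and parameter conventions differ between authors) introduces a formal parameter z and sets, for vector fields X and Y on H,
\[
\nabla^{z}_{X} Y \;=\; \partial_{X} Y \;+\; \frac{1}{z}\, X * Y ,
\]
so that flatness of \(\nabla^{z}\) for every z encodes both the commutativity and the associativity (WDVV) of the quantum product.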
Big quantum cohomology
There exists a neighborhood U of 0 ∈ H such that and the Dubrovin connection give U the structure of a Frobenius manifold. Any a in U defines a quantum cup product
by the formula
Collectively, these products on H are called the big quantum cohomology. All of the genus-0 Gromov–Witten invariants are recoverable from it; in general, the same is not true of the simpler small quantum cohomology.
Small quantum cohomology contains only the information of 3-point Gromov–Witten invariants, whereas big quantum cohomology contains that of all n-point Gromov–Witten invariants (n ≧ 4 included). To obtain enumerative geometrical information for some manifolds, we need to use big quantum cohomology. Small quantum cohomology corresponds to 3-point correlation functions in physics, while big quantum cohomology corresponds to all n-point correlation functions.
References
McDuff, Dusa & Salamon, Dietmar (2004). J-Holomorphic Curves and Symplectic Topology, American Mathematical Society colloquium publications. .
Piunikhin, Sergey; Salamon, Dietmar & Schwarz, Matthias (1996). Symplectic Floer–Donaldson theory and quantum cohomology. In C. B. Thomas (Ed.), Contact and Symplectic Geometry, pp. 171–200. Cambridge University Press.
Algebraic geometry
Cohomology theories
String theory
Symplectic topology | Quantum cohomology | [
"Astronomy",
"Mathematics"
] | 1,552 | [
"String theory",
"Fields of abstract algebra",
"Astronomical hypotheses",
"Algebraic geometry"
] |
7,781,429 | https://en.wikipedia.org/wiki/Novikov%20ring | In mathematics, given an additive subgroup , the Novikov ring of is the subring of consisting of formal sums such that and . The notion was introduced by Sergei Novikov in the papers that initiated the generalization of Morse theory using a closed one-form instead of a function. The notion is used in quantum cohomology, among others.
The Novikov ring is a principal ideal domain. Let S be the subset of consisting of those with leading term 1. Since the elements of S are unit elements of , the localization of with respect to S is a subring of called the "rational part" of ; it is also a principal ideal domain.
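A hedged sketch of the definition in LaTeX, for an additive subgroup Γ ⊂ ℝ and a single formal variable t; some authors reverse the inequality or use a different finiteness convention:
\[
\mathrm{Nov}(\Gamma) \;=\;
\Bigl\{ \sum_{\gamma \in \Gamma} n_{\gamma}\, t^{\gamma}
\;\Big|\;
n_{\gamma} \in \mathbb{Z},\ \ \#\{\gamma : n_{\gamma} \neq 0,\ \gamma > c\} < \infty
\ \text{for every } c \in \mathbb{R} \Bigr\}.
\]
With this convention every nonzero element has a largest exponent with nonzero coefficient, and that coefficient is its leading term.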
Novikov numbers
Given a smooth function f on a smooth manifold with nondegenerate critical points, the usual Morse theory constructs a free chain complex such that the (integral) rank of is the number of critical points of f of index p (called the Morse number). It computes the (integral) homology of (cf. Morse homology):
In an analogy with this, one can define "Novikov numbers". Let X be a connected polyhedron with a base point. Each cohomology class may be viewed as a linear functional on the first homology group ; when composed with the Hurewicz homomorphism, it can be viewed as a group homomorphism . By the universal property, this map in turn gives a ring homomorphism,
,
making a module over . Since X is a connected polyhedron, a local coefficient system over it corresponds one-to-one to a -module. Let be a local coefficient system corresponding to with module structure given by . The homology group is a finitely generated module over which is, by the structure theorem, the direct sum of its free part and its torsion part. The rank of the free part is called the Novikov Betti number and is denoted by . The number of cyclic modules in the torsion part is denoted by . If , is trivial and is the usual Betti number of X.
The analog of Morse inequalities holds for Novikov numbers as well (cf. the reference for now.)
Notes
References
S. P. Novikov, Multi-valued functions and functionals: An analogue of Morse theory. Soviet Mathematics - Doklady 24 (1981), 222–226.
S. P. Novikov: The Hamiltonian formalism and a multi-valued analogue of Morse theory. Russian Mathematical Surveys 35:5 (1982), 1–56.
External links
Different definitions of Novikov ring?
Commutative algebra
Ring theory
Morse theory | Novikov ring | [
"Mathematics"
] | 529 | [
"Fields of abstract algebra",
"Commutative algebra",
"Ring theory"
] |
16,888,524 | https://en.wikipedia.org/wiki/Wired%20communication | Wired communication refers to the transmission of data over a wire-based communication technology (telecommunication cables). Wired communication is also known as wireline communication. Examples include telephone networks, cable television or internet access, and fiber-optic communication. Most wired networks use Ethernet cables to transfer data between connected PCs. Waveguides (electromagnetism), used for high-power applications, are also considered wired lines. Local telephone networks often form the basis for wired communications and are used by both residential and business customers in the area. Many networks today rely on the use of fiber-optic communication technology as a means of providing clear signaling for both inbound and outbound transmissions and are replacing copper-wire transmission. Fiber-optic technology is capable of accommodating far more signals than copper wiring while still maintaining the integrity of the signal over longer distances.
Alternatively, communication technologies that don't rely on wires to transmit information (voice or data) are considered wireless, and are generally considered to have higher latency and lower reliability.
The legal definition of most, if not all, wireless technologies today, or "apparatus, and services (among other things, the receipt, forwarding, and delivery of communications) incidental to such transmission", is that of a wire communication as defined in the Communications Act of 1934 in 47 U.S.C. §153 ¶(59). This makes everything online today, and all wireless phones, a use of wire communications by law, whether or not a physical connection to wire is visible. The Communications Act of 1934 created the Federal Communications Commission to replace the Federal Radio Commission. If there were no real wired communications today, there would be no online services and no mobile phones. Satellite communications would be the only current technology considered wireless.
In general, wired communications are considered to be the most stable and best of all types of communications services. They are relatively impervious to adverse weather conditions in comparison to wireless communication solutions. These characteristics have allowed wired communications to remain popular even as wireless solutions have continued to advance.
See also
Telecommunications cable
References
Telecommunications systems | Wired communication | [
"Technology"
] | 414 | [
"Telecommunications systems"
] |
16,889,066 | https://en.wikipedia.org/wiki/Boutique%20Design | Boutique Design magazine is a trade publication produced by ST Media Group International. As the only hospitality interiors magazine that focuses specifically on boutique hospitality, Boutique Design (BD) is the authority on the boutique hotel, spa and restaurant market. About designers and for designers, BD features major hospitality projects, industry news and products which are relevant to the industry in each of its bi-monthly issues. The publication debuted in spring, 2005.
BD also produces Boutique Design New York (BDNY), a hospitality interiors show that runs concurrently with the International Hotel, Motel + Restaurant Show at the Javits Center in New York. Over 750 exhibitors representing high-end, unique and innovative design products—including furniture, lighting, wall coverings, fabric, seating, accessories, artwork, carpet and flooring, materials, bath and spa – are presented in small-scale displays, creating an intimate, boutique-style shopping environment.
The event also includes education sessions, presented by BD and its sister publication Hospitality Style; design forums; special show floor exhibits; and the presentation of the annual Boutique Design Awards.
Each spring, Boutique Design names a list of up-and-coming hospitality interior designers known as The Boutique 18.
References
External links
Boutique Design
Hospitality Style
Boutique Design New York
2005 establishments in Ohio
Visual arts magazines published in the United States
Bimonthly magazines published in the United States
Design magazines
Magazines established in 2005
Magazines published in Cincinnati | Boutique Design | [
"Engineering"
] | 290 | [
"Design magazines",
"Design"
] |
16,895,165 | https://en.wikipedia.org/wiki/Photon%20diffusion%20equation | The photon diffusion equation is a second-order partial differential equation describing the time behavior of the photon fluence rate distribution in a low-absorption, high-scattering medium.
Its mathematical form is as follows.
where is photon fluence rate (W/cm2), is del operator, is absorption coefficient (cm−1), is diffusion constant, is the speed of light in the medium (m/s), and is an isotropic source term (W/cm3).
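A commonly quoted form of the equation, written here with illustrative symbol choices matching the quantities listed above (Φ for the fluence rate, v for the speed of light in the medium, μ_a for the absorption coefficient, D for the diffusion constant and S for the isotropic source term):
\[
\frac{1}{v}\,\frac{\partial \Phi(\mathbf{r},t)}{\partial t}
\;-\; D\,\nabla^{2}\Phi(\mathbf{r},t)
\;+\; \mu_a\,\Phi(\mathbf{r},t)
\;=\; S(\mathbf{r},t),
\qquad
D \approx \frac{1}{3\,(\mu_a + \mu_s')},
\]
where μ_s' is the reduced scattering coefficient; the expression for D in terms of μ_a and μ_s' is one common convention.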
Its main difference from the diffusion equation in physics is that the photon diffusion equation has an absorption term in it.
Application
Medical Imaging
The properties of photon diffusion described by the equation are used in diffuse optical tomography.
External links
Diffuse Optics Lab at University of Pennsylvania, Philadelphia, USA
Equations | Photon diffusion equation | [
"Mathematics"
] | 152 | [
"Mathematical objects",
"Equations"
] |
18,077,896 | https://en.wikipedia.org/wiki/Global%20Energy%20and%20Water%20Exchanges | The Global Energy and Water Exchanges Project (abbreviated GEWEX, formerly named the Global Energy and Water Cycle Experiment from 1990 to 2012) is an international research project and a core project of the World Climate Research Programme (WCRP).
In the beginning, the project intended to observe, comprehend and model the Earth's water cycle. The experiment also observes how much energy the Earth receives, and studies how much of that energy reaches the surfaces of the Earth and how that energy is transformed. Sunlight's energy evaporates water to produce clouds and rain and dries out land masses after rain. Rain that falls on land becomes part of the water budget that people can use for agricultural and other processes.
GEWEX is a collaboration of researchers worldwide to find better ways of studying the water cycle and how it transforms energy through the atmosphere. If the Earth's climates were identical from year to year, then people could predict when, where and what crops to plant. However, the instability created by solar variation, weather trends, and chaotic events creates weather that is unpredictable on seasonal scales. Through weather patterns such as droughts and higher rainfall these cycles impact ecosystems and human activities. GEWEX is designed to collect a much greater amount of data, and see if better models of that data can forecast weather and climate change into the future.
Project structures
GEWEX is organized into several structures. When GEWEX was conceived, projects were organized by participating factions; this task is now done by the International GEWEX Project Office (IGPO). IGPO oversees major initiatives and coordinates between national projects in an effort to bring about communication between researchers. IGPO claims to support communication exchange between 2,000 scientists and is the instrument for publication of major reports.
The Scientific Steering Group organizes the projects and assigns them to panels, which oversee progress and provide critique. The Coordinated Energy and Water Cycle Observations Project (CEOP), the 'Hydrology Project', is a major instrument in GEWEX. This panel includes geographic study areas such as the Climate Prediction Program for the Americas operated by NOAA, but also examines several types of climate zones (e.g. high altitude and semi-arid). Another panel, the GEWEX Radiation Panel, oversees the coordinated use of satellites and ground-based observation to better estimate energy and water fluxes. In one recent result, GEWEX's Radiation Panel assessed data on rainfall for the last 25 years and determined that global rainfall averages 2.61 mm/day with a small statistical variation. While the study period is short, after 25 years of measurement regional trends are beginning to appear. The GEWEX Modeling and Prediction Panel takes current models and analyzes them when climate-forcing phenomena occur (global warming being an example of a 'climate forcing' event). GEWEX is now the core project of WCRP.
Goals and design
Predicting weather change requires accurate data that is collected over many years, and the application of models. GEWEX was conceived to respond to the need for observations of the Earth's radiation budget and clouds. Many preexisting techniques were limited to observations taken from land and populated areas. This ignored the large amount of weather that occurs over the oceans and unpopulated regions, with key data missing from these areas. Since satellites orbiting the Earth cover large areas in small time frames, they can better estimate climate where measurements are infrequently taken. GEWEX was initiated by the World Climate Research Programme (WCRP) to take advantage of environmental satellites such as TRMM, but now uses information from newer satellites as well as collections of land-based instruments, such as BSRN. These land-based instruments can be used to verify information interpreted from satellite. GEWEX studies the long-term and regional changes in climate with a goal of predicting important seasonal weather patterns and climate changes that occur over a few years.
Research goals
The research interest of GEWEX is to study fluxes of radiation at the Earth's surface, predict seasonal hydration levels of soils and develop accurate models for predicting energy and water budgets around the world. The project sets as its goal to improve, by an order of magnitude, the ability to model and therefore predict hydration (rainfall and evaporation) patterns. GEWEX is linked to other WCRP projects, such as the Stratospheric Processes and their Role in Climate (SPARC) Project and the Climate and Cryosphere Project, and thus shares information and goals with other WCRP projects. The goal becomes more important with the newer WCRP project, the Coordinated Observation and Prediction of the Earth System.
Complexity of the experiment
Aside from fluctuations of solar radiation, the sunlight that is transformed by the Earth can vary greatly; some have concluded, for instance, that ice ages self-perpetuate once enough ice has accumulated in the polar regions to reflect enough radiation at high elevations to lower the global average temperature, whereas it takes an unusually warm period to reverse this state. Water usage by plants and herbivore activities can change albedo in the temperate and tropical zones. These trends in reflection are subject to change. Some have proposed extrapolating pre-GEWEX information using new information and measurements taken with pre-GEWEX technology. Natural fires, volcanism, and man-made aerosols can alter the amount of radiation reaching the Earth. There are oscillations in oceanic currents, such as El Niño and the North Atlantic Oscillation, which alter parts of the Earth's ice mass and land water availability. The experiment takes a sampling of climate, with some trends lasting a million years which, as paleoclimatology shows, can change abruptly.
Therefore, the ability to use data to predict change depends on factors that are measurable over periods of time; factors that affect global climate and appear abruptly can markedly alter the future.
Design
GEWEX is being implemented in phases. The first phase comprises information gathering, modelling, predictions, and advancement of observation techniques and is complete. The second phase addresses several scientific questions such as prediction capacity, changes in Earth's water cycle, and the impact on water resources.
First phase (1990–2002)
Phase I (1990–2002), also called the "Build-Up Phase", was designed to determine the hydrological cycle and energy fluxes by means of global measurements of atmospheric and surface properties. GEWEX was also designed to model the global hydrological cycle and its impact on the atmosphere, oceans and land surfaces. Phase I processes were to develop the ability to predict the variations of global and regional hydrological processes & water resources, and their response to environmental change. It was also to advance the development of observing techniques, data management, and assimilation systems for operational application to long-range weather forecasts, hydrology, and climate predictions.
During Phase I, GEWEX projects were divided into three overlapping sectors.
GEWEX Radiation Panel (GRP) used satellite and ground-based sensing over long periods to delineate natural variation and climate-changing forces.
GEWEX Modelling and Prediction Panel (GMPP): Model the energy and water budget of the earth and determine the predictability. Apply modeling to determine climate forcing events, or respond to climate forcing events by analysis of predictions.
GEWEX Hydrometeorology Panel (GHP) - Modeled and predicted changes in water cycle events on longer time scales (up to annual) using intensive regional studies to determine efficacy of data gathering and predictions. The Continental-Scale Experiments (CSEs) relied heavily on the following study areas that would eventually form the basis of the Coordinated Enhanced Observing Period (CEOP):
Canada - Mackenzie river basin study area (MAGS) -completed
United States - North American study area or GEWEX American Prediction Project(GAPP).
Brazil - Large-Scale Biosphere Atmosphere Experiment in Amazonia (LBA)
Scandinavia - Baltic Sea Experiment (BALTEX)
Southern Africa - African Monsoon Multidisciplinary Analysis Project (AMMA)
Indopacific and Asia - GEWEX Asian Monsoon Experiment (GAME) - completed in 2005
Australia - Murray-Darling Basin Water Budget Project (MDB)
But also:
Continental-scale - International Project (GCIP)
International Satellite Land-Surface Climatology Project (ISLSCP)
CEOP projects interacted with other non-GEWEX projects like CLIVAR and CLiC
Results
The results of the build-up phase include 15 to 25 years of study, measurements of the indirect effects of aerosols, a compiled correlated data set, and some reductions in uncertainty. GEWEX claims the following accomplishments: a long-period data set of clouds, rainfall, water vapor, surface radiation, and aerosols with no indication of large global trends, but with evidence of regional variability; models showing increased precipitation; and a demonstration of the importance of regional factors, such as water and soil conservation, in regional climate change. Phase I also claims to have produced over 200 publications and 15 review articles.
The Mississippi watershed was part of the GEWEX Continental-Scale International Project and as a result was well situated for the analysis of the Great Flood of 1993 (Mississippi River and Red River watersheds). The coordination between ground-sensing observations and satellite information allowed a more thorough analysis of events that led up to the flood. Researchers at the Center for Ocean-Land-Atmosphere Studies (COLA) found that upstream soil moisture and a multifold increase of moist air flow from the Gulf of Mexico to the flooded regions were major factors in excessive rainfall. The Global Land/Atmosphere System Study (GLASS) gave GEWEX investigators the ability to observe soil wetness over much of the world's surface by correlating observations on the ground with information obtained by satellites. While the ability to show cause is important, the focus of Phase I was on the different conditions (soil wetness, global patterns) that were permissive for weather anomalies: gathering information and learning how to use satellite information better.
One of the biggest impacts of the aerosol analysis has been the demonstration of the fairly large impact of anthropogenic aerosols; smoke patterns and even daily ripples of aerosols can be observed off the coasts of some developing nations and extend hundreds of miles over surrounding oceans. Some have questioned whether this aerosol pollution is partly to blame for long-term drought in places like the African Sahel.
Critique
One critique of the Build-up Phase data and predictions is that there need to be better error descriptions. The global estimate of rainfall indicates that the confidence range is large relative to possible trends. The number of ground-sensing stations (currently around 40) in the BSRN is rather limited for global observation; this affected the measurement of aerosols, which are regionally dominant. The best measurements of aerosol pollution are obtained when cloud types are identified properly by satellite observation, therefore better cloud-sensing strategies and models are needed to provide the clearest real-time data. Certain projects like GCIP have focused on continental-scale observations and provide better prediction for project areas; however, areas outside these project areas may lag in receiving forecasting improvements. Many of the deficiencies in Phase I are improvement areas within the objectives of Phase II of the project. Currently scientists use NASA Aqua's Advanced Microwave Scanning Radiometer (AMSR-E) to evaluate soil moisture from space. However, except for focused observations the satellite data are not useful for global weather prediction. The proposed Soil Moisture and Ocean Salinity satellite, which would provide detailed soil moisture information on a daily basis, may provide the data needed for real-time forecasting.
Second phase (2003–2012)
Phase II, "Full Implementation" (2003–2012) of GEWEX is to "exploit new capabilities" developed during phase I such as new satellite information and, increasingly, new models. These include changes in the Earth's energy budget and water cycle, contribution of processes in climate feedback, causes of natural variability, predicting changes on seasonal or annual timescales, and how changes impact water resources. Phase II of is designed to be active models that have use to regional resource managers in real time. Some phases, such as the GAME (GEWEX Asia Monsoon Experiment) are already completed . GEWEX has become an umbrella program for the coordination of studies and experiments around the world. Reports from the phase I are still being produced and it will be some time before the results of the second phase are available. The experiment is still in progress.
Third phase (2013–Ongoing)
Panels
There are three panels in GEWEX: The Coordinated Energy and Water Cycle Observations Project (CEOP), GEWEX Radiation Panel (GRP), and GEWEX Modeling and Prediction Panel (GMPP).
Coordinated Energy and Water Cycle Observations Project
The Coordinated Energy and Water Cycle Observations Project (CEOP) is the largest of the panel projects. There are several regional project areas; most of these are now covered by CEOP.
Areas
The CEOP regional studies survey the hydroclimate of southern Africa (AMMA), the Baltic Sea area (BALTEX), North America (CPPA), eastern Amazonia (LBA), the La Plata Basin (LBB), Asia (MAHASRI), Australia (MDB), and northern Eurasia (NEEPSI). In addition, CEOP coordinates the study of region types, such as cold, high-altitude, monsoon, and semiarid climates, and collects and formulates modelling on global and regional scales, including land surface and surface hydrology modelling. Since GEWEX is an international cooperation it can utilize information from existing and planned satellites.
Objectives
The CEOP project has a number of energy budget and water cycle objectives. First is to produce more consistent research with better error definitions. Second is to better determine how energy fluxes and water cycles are involved in feedback mechanisms. Third is to assess the predictability of important variables and improve parametric analysis to better model these processes. Fourth is to collaborate with other hydrological science projects to create tools for assessing the water-system consequences of predictions and global climate change.
GEWEX Radiation Panel
The GEWEX Radiation Panel (GRP) is a collaborative organization with a goal of reviewing theoretical and experimental knowledge of radiative processes within the climate system. Sixty percent of the energy that comes to Earth from the Sun is transformed by the Earth. The goal of this collaboration is to determine how energy is transformed as it is inevitably radiated back into space.
Global precipitation climatology project
The GPCP's task was to estimate precipitation globally using satellites, including places where people were not present to take measurements. Secondarily, the project was tasked with studying regional precipitation on seasonal to interannual time scales. As the study period of the project increased past 25 years, a third objective was added: to analyze long-term variation, such as that caused by global warming. Also, in a renewed effort for better data and with more observation satellites, the GPCP hopes to gain insight into rainfall variation on "weather" scales, from 4-hour periods to daily time scales.
Precipitation Assessment Group
The Precipitation Assessment Group was assigned by the panel to evaluate data on precipitation, emphasizing data in the Global Precipitation Climatology Project (GPCP) product (a GRP project). The GRP prepares to assimilate diurnal variation data from the GPCP for better estimation of the global precipitation products. As the result of 25 years of measurement, the global average precipitation rate is 2.61 mm per day (about 0.1 inch/day) with about 1% uncertainty. The finding suggests there is no significant variation in mean annual rainfall. Regional variation was separated between land and ocean, and the variation of received precipitation over land was greater than over the ocean. Satellites used to train the dataset analysis have the flaw of not measuring drizzle and snow accurately, and lack measurements in isolated places and over oceans. The rainfall maps show the greatest absolute rainfall error over the tropical oceans in regions with the highest estimated rainfall. The report self-critiques two aspects: the lack of polar-crossing satellites at the beginning of the study and the inability to correlate new information and older information (ground-based measurements). The noticeable trends in the dataset were deemed insignificant with regard to issues like global warming, but some stand-out positive trends over the Indopacific region were notable (Bay of Bengal and Indochina), along with negative trends over South Central Africa.
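As an illustration of how a single global figure such as 2.61 mm/day is obtained from gridded satellite and gauge analyses, the sketch below computes an area-weighted global mean from a latitude–longitude precipitation field. It is a minimal example with synthetic data on a GPCP-like 2.5-degree grid; the array names and values are illustrative and do not reflect the actual GPCP file format.

import numpy as np

def global_mean_precip(precip_mm_per_day, lats_deg):
    # Area-weighted mean of a (lat, lon) field: grid-cell area on a sphere
    # scales with the cosine of latitude, so weight each latitude row accordingly.
    weights = np.cos(np.deg2rad(lats_deg))
    weights_2d = np.broadcast_to(weights[:, None], precip_mm_per_day.shape)
    return np.average(precip_mm_per_day, weights=weights_2d)

# Synthetic field on a 2.5-degree grid (72 latitudes x 144 longitudes),
# gamma-distributed to mimic the skewed character of precipitation data.
lats = np.arange(-88.75, 90.0, 2.5)
lons = np.arange(1.25, 360.0, 2.5)
rng = np.random.default_rng(0)
field = rng.gamma(shape=2.0, scale=1.3, size=(lats.size, lons.size))
print(f"Area-weighted global mean: {global_mean_precip(field, lats):.2f} mm/day")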
Surface Radiation Budget project
The SRB project under NASA/GEWEX took global radiation measurements to determine radiative energy fluxes. The energy that comes from the Sun strikes the atmosphere, where it scatters; clouds, where it is reflected; and the earth or water, where heat and light are radiated back into the atmosphere or space. When water is struck, heated surface water can evaporate, carrying energy back into space through cloud formation and rain. The SRB project measured these processes by measuring shortwave (SW) and longwave (LW) radiation fluxes at the Earth's surface and the top of the atmosphere.
Baseline Surface Radiation Network
At the onset of GEWEX there was inadequate information on how radiation was redistributed, both horizontally and vertically.
BSRN is a global system of fewer than 40 widely spread radiation-measuring devices designed to measure changes in radiation at the Earth's surface. The information obtained is stored at the World Radiation Monitoring Center (WRMC) at the ETH (Zurich).
Global Aerosol Climatology Project
Established by the Radiation Sciences Program (NASA) and GEWEX in 1998 to analyze satellite and field data to determine the distribution of aerosols, how they are formed, transformed and transported.
GEWEX Cloud Assessment Project
The GEWEX cloud assessment was initiated by the GEWEX Radiation Panel (GRP) in 2005 to evaluate the reliability of available, global, long-term cloud data products, with a special emphasis on ISCCP.
GEWEX Modeling and Prediction Panel
The GEWEX Modelling and Prediction Panel (GMPP) is charged with the task of finding better ways to use the data produced by other projects and other agencies. It oversees the GEWEX Atmospheric Boundary Layer Study (GABLS), the GEWEX Cloud System Study (GCSS), and the Global Land/Atmosphere System Study (GLASS). Climate forcing is a process of study which observes the contribution of irregular events, such as a volcanic eruption, greenhouse warming, solar variation, fluctuations in the Earth's orbit, or long-term variation in the oceans' circulation. The GMPP exploits these natural perturbations to test models that should predict what happens to global energy and water budgets under the perturbations.
GEWEX Atmospheric Boundary Layer Study
The GEWEX Atmospheric Boundary Layer Study (GABLS) is a more recent addition to GEWEX. The study is tasked with understanding the physical properties of the atmospheric boundary layer in order to build better models that include its representation.
GEWEX Cloud System Study
The GEWEX Cloud System Study's (GCSS) task is to individualize modelling for different types of cloud systems. GCSS identifies five types of cloud systems: boundary layer, cirrus, extratropical layer, precipitating convective, and polar. These cloud systems are generally too small to be rationalized in large-scale climate modelling; this results in inadequate development of equations and greater statistical uncertainty in results. In order to rationalize these processes, the study observes cloud systems at single fixed positions on Earth to better estimate their parameters. These four areas are: the Azores and Madeira Islands, Barbados, the Equatorial Western Pacific, and the Atlantic Tropics. The initial data collection is complete; methods developed for land- and aircraft-based observations can be compared with satellite observations so that better models of cloud system identification can be made at smaller scales.
Global Land/Atmosphere System Study
The Global Land/Atmosphere System Study (GLASS) tries to understand the impact of land surface parameters on the atmosphere. Changes in land as a result of natural and man-made activities can alter the local climate and affect wind and cloud formation.
Critique
The GEWEX project has been in existence for over 30 years, and while some climate oscillations are short, such as El Niño, some climate oscillations last for decades, such as the North Atlantic Oscillation. Some have proposed extrapolating pre-GEWEX information using new information and measurements taken with pre-GEWEX technology. The MAGS project, located in northwestern Canada, utilized indigenous peoples' traditional experiences. In addition, in other parts of the GEWEX study, these oscillations are an aspect of climate forcing, which allows testing of predictions and models. This modelling may be complicated by the fact that the North Atlantic Oscillation is switching state as the effects of global warming are becoming more prominent. For example, 2006 and 2007 saw one of the most dramatic declines in Arctic sea ice, a decline that was largely unpredicted and can shift the late-summer albedo in the northern hemisphere. In 2008, the decline in sea ice extent backed off from the previous years' trend, and researchers had forecast a strong La Niña event for late 2007 and 2008. However, unexpectedly, the surface temperatures in the eastern Pacific have already begun to rise to El Niño temperature ranges, indicating the La Niña event may terminate unexpectedly. With this, the loss of northern polar sea ice has begun to accelerate back toward the earlier trend. Such rapid and unexpected changes in climate-forcing events suggest that modellers need to include parameters such as ocean temperature thermoclines, energy accumulation in the tropical oceans, sea ice extents in the polar regions, land glacial ice retraction in Greenland, and sheet-ice and shelf-ice remodelling in Antarctica. When multiple climate-forcing influences act simultaneously and one of them eventually takes dominance, the lack of precedents from past study of similar confluences of events, as well as uncertainty about sensitive 'switches' in the ocean/atmosphere system, may affect the ability to provide accurate models and predictions. In addition, sampling points spread to monitor leading indicators in one common scenario may be useless during an oscillation in which the pool of energy shifts to an unmonitored region, so that the magnitude of the shift escapes measurement.
An example of climate-forcing anomalies might be used to describe the events of 1998 to 2002, a strong El Niño/La Niña cycle. The onset of the cycle can be influenced by global warming, which facilitated a larger increase of warm water in the tropics, rapidly enough that the thermocline was tolerant. A thermocline is a sharp temperature drop at depth; it varies during the year, with location, and over long periods of time. As the thermocline depth increases, El Niño events are more likely; however, during the peak of the event energy is dissipated and the thermocline decreases in depth, possibly to below normal levels, so that a strong La Niña event can result. The world's oceans, particularly the depths of the Atlantic, are believed to be a sink for CO2 that is absorbed at the polar regions; as this builds into the Pacific, the upwelling and warming of water can bring CO2-rich waters trapped in the cold pressurized bottom layers to the surface. Local increases of CO2 occur which allow more heat-trapping; the La Niña may be mild or aborted early in the process. However, if the return of the thermocline has enough momentum it could propel a strong La Niña event that lasts for a few years. However, rapid cooling in the Arctic can allow for more CO2 trapping and offset the release of CO2 during La Niña in a specific area. The Pacific Decadal Anomaly (PDA) may influence the source, direction or momentum of rise of the cold-water component of the thermocline.
The extent and duration of the PDA are as yet unpredictable, and its modulating effects on El Niño/La Niña patterns can only be speculated upon. These unknowns affect the ability of climate modellers to predict, and indicate that climate-forcing models need a wider and more accurate sampling of data to be predictive.
There are also longer-term cycles. The mini ice age that preceded the medieval warm period may have been a transition to an ice age; the last ice age lasted from ~130,000 years ago until the onset of the Holocene. This incipient ice age may have been aborted by other factors, including global warming. Such a stalling of long-term cycles is believed to be a factor in the Dryas period, a warming interrupted, perhaps by surface impacts of extraterrestrial origin, that may have occurred over hundreds of years. But the anthropogenic greenhouse effects and changing insolation patterns may have unpredictable long-term effects. Reductions of glacial ice on land masses can cause isostatic rebounds and may affect earthquakes and volcanism over a wide range. Rising sea levels can also affect patterns, as was seen in Indonesia, where simply drilling a gas well in the wrong place may have touched off a mud volcano, and there are some signs that this may precede the formation of a new caldera for a volcano. Over the very long term, the effect of changes in the temperature of the Earth's crust on geothermal and volcanic processes is unknown. How this plays into climate-forcing events of unpredictable magnitude is unknown.
Critiques of GEWEX can only be aimed at current results, which have added much more information about climate modelling; the major thrust of modelling was originally intended to be part of Phase II, which will, after 4 years, produce its results. One of the major critiques of GEWEX Phase I concerned land-based measurements, which are now increasing. The other major critique is the inability to capture decadal rainfall events, events that frequently occur over a few hours. Therefore, more measurements documenting shorter time frames may provide essential data for an almost continuous data set. Phase II is therefore mainly modelling, with the addition of more data in areas found lacking in Phase I. Many of the critiques above may be compensated for with better data, which requires better models including insolation and changes in reflection. The problem of variation in ocean currents, particularly with respect to thermocline depths, requires more oceanography as part of the project, as do losses of ice and changes of climate at the ice edges.
References
External links
Asian Monsoon Years 2007-2012
The Hemispheric Observing System Research and Predictability Experiment
Predictions in Ungauged Basins
Monsoon Integrated Regional Studies
GEWEX Soil Wetness Project 2
International Satellite Cloud Climatology Project (ISCCP)
Hydrometerological Array for ISV-monsoon Automonitoring
Global Water System Project
Global Energy and Water Cycle Experiment (GEWEX) Continental-Scale International Project:A Review of Progress and Opportunities - a free online book
European Space Agency
Effects of climate change
Climatological research
Meteorology research and field projects
Weather prediction | Global Energy and Water Exchanges | [
"Physics"
] | 5,488 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
18,082,975 | https://en.wikipedia.org/wiki/Flow%20line | A flow line, used on a drilling rig, is a large-diameter pipe (typically a section of casing) that is connected to the bell nipple (under the drill floor), extends to the possum belly (on the mud tanks), and acts as a return line for the drilling fluid as it comes out of the hole and travels back to the mud tanks.
Possum Belly
The possum belly is used to slow the flow of returning drilling fluid before it hits the shale shakers. This enables the shale shaker to clean the cuttings out of the drilling fluid before it is returned to the pits for circulation.
Sample Box
Another common add on is the sample box. This is a heavy duty rubber hose that is inserted at the end of the flow line and at the other end emplaced into the sample box itself. The sample box is used to capture samples of drill cuttings for geological logging. The box is typically equipped with a raising door that allows the water and cuttings to escape after a sample is collected.
Stinger Line
A stinger line is similar to a flow line, but unlike a flow line is not used to maintain circulation. The stinger line is attached to the blowout preventer to allow for the pressure from a blowout to be released. The stinger line usually will run parallel to the flow line.
See also
Drilling rig (petroleum)
Flow show
References
Flow line
Drilling technology
Oilfield terminology
Petroleum engineering
Piping | Flow line | [
"Chemistry",
"Engineering"
] | 295 | [
"Building engineering",
"Chemical engineering",
"Petroleum engineering",
"Energy engineering",
"Mechanical engineering",
"Piping"
] |
18,083,499 | https://en.wikipedia.org/wiki/Casing%20head | In oil drilling, a casing head is a simple metal flange welded or screwed onto the top of the conductor pipe (also known as drive-pipe) or the casing and forms part of the wellhead system for the well.
Application
Casing heads are the primary interface for the surface pressure control equipment, for example blowout preventers (for well drilling) or the Christmas tree (for well production).
The casing head, when installed, is typically tested to very strict pressure and leak-off parameters to ensure viability under blowout conditions, before any surface equipment is installed.
References
External links
Flanges & Forgings Information
Oilfield terminology
Drilling technology
Petroleum engineering | Casing head | [
"Chemistry",
"Engineering"
] | 141 | [
"Petroleum",
"Petroleum engineering",
"Energy engineering",
"Petroleum stubs"
] |
18,085,080 | https://en.wikipedia.org/wiki/Possum%20belly | A Possum belly, on a drilling rig, is a metal container at the head of the shale shaker that receives the flow of drilling fluid and is directly connected to and at the end of the flow line. A possum belly may also be referred to as a distribution box or flowline trap.
The purpose of the possum belly is to slow the flow of the drilling fluid (after it has gained momentum from coming down through the flow line) so that it does not shoot off of the shale shakers.
Possum bellies are generally used when bentonite or another form of "mud" is being used. During the use of freshwater or brine water, the flow line generally either goes straight to the reserve pit, or into the steel pits.
The possum belly derives its name from the similarity of its appearance to the low hanging abdomen of the possum.
References
Oilfield terminology
Drilling technology
Petroleum engineering | Possum belly | [
"Engineering"
] | 192 | [
"Petroleum engineering",
"Energy engineering"
] |
4,485,896 | https://en.wikipedia.org/wiki/Lowry%20protein%20assay | The Lowry protein assay is a biochemical assay for determining the total level of protein in a solution. The total protein concentration is exhibited by a color change of the sample solution in proportion to protein concentration, which can then be measured using colorimetric techniques. It is named for the biochemist Oliver H. Lowry who developed the reagent in the 1940s. His 1951 paper describing the technique is the most-highly cited paper ever in the scientific literature, cited over 300,000 times.
Mechanism
The method combines the reactions of copper ions with the peptide bonds under alkaline conditions (the Biuret test) with the oxidation of aromatic protein residues. The Lowry method is based on the reaction of Cu+, produced by the oxidation of peptide bonds, with Folin–Ciocalteu reagent (a mixture of phosphotungstic acid and phosphomolybdic acid in the Folin–Ciocalteu reaction). The reaction mechanism is not well understood, but involves reduction of the Folin–Ciocalteu reagent and oxidation of aromatic residues (mainly tryptophan, also tyrosine).
Proper caution must be taken when dealing with the Folin's reagent, which is only active under acidic conditions. However, the reduction reaction, as previously mentioned, will only occur at the basic pH of 10. Thus, the reduction must occur before the reagent breaks down. Mixing the protein solution as the Folin's reagent is simultaneously added will ensure that the reaction occurs in the desired manner.
Experiments have shown that cysteine is also reactive to the reagent. Therefore, cysteine residues in protein probably also contribute to the absorbance seen in the Lowry assay. The result of this reaction is an intense blue molecule known as heteropolymolybdenum Blue. The concentration of the reduced Folin reagent (heteropolymolybdenum Blue) is measured by absorbance at 660 nm. As a result, the total concentration of protein in the sample can be deduced from the concentration of tryptophan and tyrosine residues that reduce the Folin–Ciocalteu reagent.
The method was first proposed by Lowry in 1951. The bicinchoninic acid assay and the Hartree–Lowry assay are subsequent modifications of the original Lowry procedure.
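In practice the assay is quantified against a standard curve. The sketch below is a minimal example, assuming absorbance readings at 660 nm for a bovine serum albumin (BSA) dilution series and one unknown sample; the concentrations and absorbance values are invented for illustration, and a real protocol would include blanks, replicates and a check that the unknown falls within the linear range.

import numpy as np

# BSA standards: known concentration (ug/mL) and measured absorbance at 660 nm.
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0, 400.0])
std_abs = np.array([0.02, 0.09, 0.16, 0.30, 0.57, 1.05])

# Fit a linear standard curve A = m*c + b; the Lowry response is roughly
# linear at low protein concentrations.
m, b = np.polyfit(std_conc, std_abs, 1)

def protein_concentration(absorbance):
    # Invert the standard curve to estimate concentration in ug/mL.
    return (absorbance - b) / m

sample_abs = 0.42
print(f"Estimated protein concentration: {protein_concentration(sample_abs):.1f} ug/mL")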
See also
Biuret test
Bradford protein assay
References
Walker, J. M. (2002). The protein protocols handbook. Totowa, N.J: Humana Press.
External links
A simplification of the protein assay method of Lowry et al. which is more generally applicable
Biochemistry detection reactions
Protein methods
Analytical chemistry
Reagents for biochemistry | Lowry protein assay | [
"Chemistry",
"Biology"
] | 569 | [
"Biochemistry methods",
"Protein methods",
"Biochemistry detection reactions",
"Protein biochemistry",
"Biochemical reactions",
"Microbiology techniques",
"nan",
"Biochemistry",
"Reagents for biochemistry"
] |
4,487,041 | https://en.wikipedia.org/wiki/Chlorosome | A chlorosome is a photosynthetic antenna complex found in green sulfur bacteria (GSB) and many green non-sulfur bacteria (GNsB), together known as green bacteria. They differ from other antenna complexes by their large size and lack of protein matrix supporting the photosynthetic pigments. Green sulfur bacteria are a group of organisms that generally live in extremely low-light environments, such as at depths of 100 metres in the Black Sea. The ability to capture light energy and rapidly deliver it to where it needs to go is essential to these bacteria, some of which see only a few photons of light per chlorophyll per day. To achieve this, the bacteria contain chlorosome structures, which contain up to 250,000 chlorophyll molecules. Chlorosomes are ellipsoidal bodies; in GSB their length varies from 100 to 200 nm, their width from 50 to 100 nm and their height from 15 to 30 nm; in GNsB the chlorosomes are somewhat smaller.
Chlorosomes are a type of chromatophore found in photosynthetic bacteria (e.g. purple bacteria).
Structure
Chlorosome shape can vary between species, with some species containing ellipsoidal chlorosomes and others containing conical or irregularly shaped chlorosomes.
Inside green sulfur bacteria, the chlorosomes are attached to type-I reaction centers in the cell membrane via FMO-proteins and a chlorosome baseplate composed of CsmA proteins. Filamentous anoxygenic phototrophs of the phylum Chloroflexota lack the FMO complex, but instead use a protein complex called B808-866. Unlike the FMO proteins in green sulfur bacteria, B808-866 proteins are embedded in the cytoplasmic membrane and surround type-II reaction centers, providing the link between the reaction centers and the baseplate.
The composition of the chlorosomes is mostly bacteriochlorophyll (BChl) with small amounts of carotenoids and quinones surrounded by a galactolipid monolayer. In Chlorobi, chlorosome monolayers can contain up to eleven different proteins. The proteins of Chlorobi are the ones currently best understood in terms of structure and function. These proteins are named CsmA through CsmF, CsmH through CsmK, and CsmX. Other Csm proteins with different letter suffixes can be found in Chloroflexota and Ca. "Chloracidobacterium".
Within the chlorosome, the thousands of BChl pigment molecules have the ability to self assemble with each other, meaning they do not interact with protein scaffolding complexes for assembly. These pigments self assemble in lamellar structures about 10-30 nm wide.
Organization of the light harvesting pigments
Bacteriochlorophyll and carotenoids are two molecules responsible for harvesting light energy. Current models of the organization of bacteriochlorophyll and carotenoids (the main constituents) inside the chlorosomes have put them in a lamellar organization, where the long farnesol tails of the bacteriochlorophyll intermix with carotenoids and each other, forming a structure resembling a lipid multilayer.
Recently, another study has determined the organization of the bacteriochlorophyll molecules in green sulfur bacteria. Because they have been so difficult to study, the chlorosomes in green sulfur bacteria are the last class of light-harvesting complexes to be characterized structurally by scientists. Each individual chlorosome has a unique organization, and this variability in composition had prevented scientists from using X-ray crystallography to characterize the internal structure. To get around this problem, the team used a combination of different experimental approaches: genetic techniques to create a mutant bacterium with a more regular internal structure; cryo-electron microscopy to identify the larger distance constraints for the chlorosome; solid-state nuclear magnetic resonance (NMR) spectroscopy to determine the structure of the chlorosome's component chlorophyll molecules; and modeling to bring together all of the pieces and create a final picture of the chlorosome.
To create the mutant, three genes were inactivated that green sulfur bacteria acquired late in their evolution. In this way it was possible to go backward in evolutionary time to an intermediate state with much less variable and better ordered chlorosome organelles than the wild-type. The chlorosomes were isolated from the mutant and the wild-type forms of the bacteria. Cryo-electron microscopy was used to take pictures of the chlorosomes. The images reveal that the chlorophyll molecules inside chlorosomes have a nanotube shape. The team then used MAS NMR spectroscopy to resolve the microscopic arrangement of chlorophyll inside the chlorosome. With distance constraints and DFT ring current analyses, the organization was found to consist of unique syn-anti monomer stacking. The combination of NMR, cryo-electron microscopy and modeling enabled the scientists to determine that the chlorophyll molecules in green sulfur bacteria are arranged in helices. In the mutant bacteria, the chlorophyll molecules are positioned at a nearly 90-degree angle in relation to the long axis of the nanotubes, whereas the angle is less steep in the wild-type organism. The structural framework can accommodate disorder to improve the biological light harvesting function, which implies that a less ordered structure has a better performance.
An alternative energy source
The interactions that lead to the assembly of the chlorophylls in chlorosomes are rather simple and the results may one day be used to build artificial photosynthetic systems that convert solar energy to electricity or biofuel.
List of bacterial taxa containing chlorosomes
List adapted from, Figure 1.
Phylum Chlorobiota ("green sulfur bacteria"), in full. Example genera:
Chlorobium
Pelodictyon
Prosthecochloris
Phylum Chloroflexota, class Chloroflexia ("green non-sulfur bacteria"), suborder Chloroflexineae, in full.
Family Chloroflexaceae. Example genera:
Chloroflexus
Chloronema
Family Oscillochloridaceae. Example genera:
Oscillochloris
Species Chloracidobacterium thermophilum. This is the only Acidobacterium known to make a chlorosome. (Proposed in 2021 to be split into three species of different temperature preference by sequence similarity.)
References
Photosynthesis
Prokaryotic cell anatomy | Chlorosome | [
"Chemistry",
"Biology"
] | 1,424 | [
"Biochemistry",
"Photosynthesis"
] |
4,488,634 | https://en.wikipedia.org/wiki/Hydraulic%20power%20network | A hydraulic power network is a system of interconnected pipes carrying pressurized liquid used to transmit mechanical power from a power source, like a pump, to hydraulic equipment like lifts or motors. The system is analogous to an electrical grid transmitting power from a generating station to end-users. Only a few hydraulic power transmission networks are still in use; modern hydraulic equipment has a pump built into the machine. In the late 19th century, a hydraulic network might have been used in a factory, with a central steam engine or water turbine driving a pump and a system of high-pressure pipes transmitting power to various machines.
The idea of a public hydraulic power network was suggested by Joseph Bramah in a patent obtained in 1812. William Armstrong began installing systems in England from the 1840s, using low-pressure water, but a breakthrough occurred in 1850 with the introduction of the hydraulic accumulator, which allowed much higher pressures to be used. The first public network, supplying many companies, was constructed in Kingston upon Hull, England. The Hull Hydraulic Power Company began operation in 1877, with Edward B. Ellington as its engineer. Ellington was involved in most of the British networks, and some further afield. Public networks were constructed in Britain at London, Liverpool, Birmingham, Manchester and Glasgow. There were similar networks in Antwerp, Melbourne, Sydney, Buenos Aires and Geneva. All of the public networks had ceased to operate by the mid-1970s, but Bristol Harbour still has an operational system, with an accumulator situated outside the main pumphouse, enabling its operation to be easily visualised.
History
Joseph Bramah, an inventor and locksmith living in London, registered a patent at the London Patent Office on 29 April 1812, which was principally about a provision of a public water supply network, but included a secondary concept for the provision of a high-pressure water main, which would enable workshops to operate machinery. The high-pressure water would be applied "to a variety of other useful purposes, to which the same has never before been so applied". Major components of the system were a ring main, into which a number of pumping stations would pump the water, with pressure being regulated by several air vessels or loaded pistons. Pressure relief valves would protect the system, which he believed could deliver water at a pressure of "a great plurality of atmospheres", and in concept, this was how later hydraulic power systems worked.
In Newcastle upon Tyne, a solicitor called William Armstrong, who had been experimenting with water-powered machines, was working for a firm of solicitors who were appointed to act on behalf of the Whittle Dene Water Company. The water company had been set up to supply Newcastle with drinking water, and Armstrong was appointed secretary at the first meeting of shareholders. Soon afterwards, he wrote to Newcastle Town Council, suggesting that the cranes on the quay should be converted to hydraulic power. He was required to carry out the work at his own expense, but would be rewarded if the conversion was a success. It was, and he set up the Newcastle Cranage Company, which received an order for the conversion of the other four cranes. Further work followed, with the engineer from Liverpool Docks visiting Newcastle and being impressed by a demonstration of the crane's versatility, given by the crane driver John Thorburn, known locally as "Hydraulic Jack".
While the Newcastle system ran on water from the public water supply, the crane installed by Armstrong at Burntisland was not located where such an option was possible, and so he built a tower, with a water tank at the top, which was filled by a steam engine. At Elswick in Glasgow, charges by the Corporation Water Department for the water used persuaded the owners that the use of a steam-powered crane would be cheaper. Bramah's concept of "loaded pistons" was introduced in 1850, when the first hydraulic accumulator was installed as part of a scheme for cranes for the Manchester, Sheffield and Lincolnshire Railway. A scheme for cranes at Paddington the following year specified an accumulator with a piston and a stroke of , which enabled pressures of to be achieved. Compared to the of the Newcastle scheme, this increased pressure significantly reduced the volumes of water used. Cranes were not the only application, with hydraulic operation of the dock gates at Swansea reducing the operating time from 15 to two minutes, and the number of men required to operate them from twelve to four. Each of these schemes was for a single customer, and the application of hydraulic power more generally required a new model.
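As a rough illustration of how a weight-loaded accumulator sets the working pressure, the sketch below computes the pressure produced by a dead weight resting on a vertical piston (pressure = force ÷ piston area). The weight and piston diameter used are hypothetical round numbers chosen only for illustration, not figures from any of the schemes described here.

```python
import math

def accumulator_pressure_psi(weight_tonnes: float, piston_diameter_in: float) -> float:
    """Pressure (psi) produced by a dead weight resting on a vertical piston."""
    force_lbf = weight_tonnes * 2204.6          # tonnes -> pounds-force (approx.)
    area_in2 = math.pi * (piston_diameter_in / 2) ** 2
    return force_lbf / area_in2

# Hypothetical example: a 100-tonne weight on an 18-inch-diameter piston.
print(f"{accumulator_pressure_psi(100, 18):.0f} psi")  # roughly 870 psi
```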
Public power in the United Kingdom
Kingston upon Hull
The first practical installation which supplied hydraulic power to the public was in Kingston upon Hull, in England. The Hull Hydraulic Power Company began operation in 1876. They had of pipes, which were up to in diameter, and ran along the west bank of the River Hull from Sculcoates bridge to its junction with the Humber. The pumping station was near the north end of the pipeline, on Machell Street, near the disused Scott Street bascule bridge, which was powered hydraulically. There was an accumulator at Machell Street, and another one much nearer the Humber, on the corner of Grimsby Lane. Special provision was made where the pressure main passed under the entrance to Queens Dock. By 1895, pumps rated at pumped some of water into the system each week, and 58 machines were connected to it. The working pressure was , and the water was used to operate cranes, dock gates, and a variety of other machinery connected with ships and shipbuilding. The Hull system lasted until the 1940s, when the systematic bombing of the city during the Second World War led to the destruction of much of the infrastructure, and the company was wound up in 1947, when Mr F J Haswell, who had been the manager and engineer since 1904, retired.
The man responsible for the Hull system was Edward B. Ellington, who had risen to become the managing director of the Hydraulic Engineering Company, based in Chester, since first joining it in 1869. At the time of its installation, such a scheme seemed like "a leap in the dark", according to R. H. Tweddell writing in 1895, but despite a lack of enthusiasm for the scheme, Ellington pushed ahead and used it as a test bed for both the mechanical and the commercial aspects of the idea. He was eventually involved on some level in most of the hydraulic power networks of Britain. The success of such systems led to them being installed in places as far away as Antwerp in Belgium, Melbourne and Sydney in Australia, and Buenos Aires in Argentina.
Independent hydraulic power networks were also installed at Hull's docks - both the Albert Dock (1869), and Alexandra Dock (1885) installed hydraulic generating stations and accumulators.
London
The best-known public hydraulic network was the citywide network of the London Hydraulic Power Company. This was formed in 1882, as the General Hydraulic Power Company, with Ellington as the consulting engineer. By the following year another enterprise, the Wharves and Warehouses Steam Power and Hydraulic Pressure Company, had begun to operate, with of pressure mains on both sides of the River Thames. These supplied cranes, dock gates, and other heavy machinery. Under the terms of an Act of Parliament obtained in 1884, the two companies amalgamated to become the London Hydraulic Power Company. Initially supplying 17.75 million gallons (80.7 megalitres) of high-pressure water each day, this had risen to 1,650 million gallons (7,500 megalitres) by 1927, when the company was powering around 8,000 machines from the supply. They maintained of mains at , which covered an area reaching Pentonville in the north, Limehouse in the east, Nine Elms and Bermondsey in the south and Earls Court and Notting Hill in the west.
Five pumping stations kept the mains pressurised, assisted by accumulators. The original station was at Falcon Wharf, Bankside, but this was replaced by four stations at Wapping, Rotherhithe, Grosvenor Road in Pimlico and City Road in Clerkenwell. A fifth station at East India Docks was originally operated by the Port of London Authority, but was taken over and connected to the system. The stations used steam engines until 1953, when Grosvenor Road station was converted to use electric motors, and following the success of this project, the other four were also converted. The electric motors allowed much smaller accumulators to be used, since they were then only controlling the pressure and flow, rather than storing power. While the network supplied lifts, cranes and dockgates, it also powered the cabaret platform at the Savoy Hotel, and from 1937, the 720-tonne three-section central floor at the Earls Court Exhibition Centre, which could be raised or lowered relative to the main floor to convert between a swimming pool and an exhibition hall. The London system contracted during the Second World War, due to the destruction of customers' machinery and premises. Following the hostilities, large areas of London were reconstructed, and the re-routing of pressure mains was much more difficult than the provision of an electric supply, so that by 1954 the number of machines had fallen to 4,286. The company was wound up in 1977.
Liverpool
A system began operating in Liverpool in 1888. It was an offshoot of the London-based General Hydraulic Power Company, and was authorised by acts of Parliament obtained in 1884 and 1887. By 1890, some of mains had been installed, supplied by a pumping station at Athol Street, on the bank of the Leeds and Liverpool Canal. Although water was originally taken from the canal, cleaner water supplied by Liverpool Corporation was in use by 1890, removing the need for a filtration plant. At this time two pumpsets were in use, and a third was being installed. Pressure was maintained by two accumulators, each with an diameter piston with a stroke of . The Practical Engineer quoted the pressure as , but this is unlikely to be correct by comparison with other systems. A second pumping station at Grafton Street was operational by 1909. The system ceased operation in 1971.
Birmingham
Birmingham obtained its system in 1891, when the Dalton Street hydraulic station opened. In an unusual move, J. W. Gray, the Water Department engineer for the city, had been laying pressure mains beneath the streets for some years, anticipating the need for such a system. The hydraulic station used Otto 'Silent' type gas engines, and had two accumulators, with an diameter piston, a stroke of and each loaded with a 93-tonne weight. The gas engines were started by a small hydraulic engine, which used the hydraulic energy stored in the accumulators, and all equipment was supplied by Ellington's company. Very few documents describing the details of the system are known to exist.
Manchester and Glasgow
The final two public systems in Britain were in Manchester, commissioned in 1894, and Glasgow, commissioned the following year. Both were equipped by Ellington's company, and used the higher pressure of . This was maintained by six sets of triple-expansion steam engines, rated at each. Two accumulators with pistons of diameter, a stroke of , and loaded with 127 tonnes were installed. In Manchester, the hydraulic station was built on the east side of Gloucester Street, by Manchester Oxford Road railway station. It was later supplemented by stations at Water Street and Pott Street, the latter now under the car parks of the Central Retail Park. At its peak in the 1930s, the system consisted on of pipes, which were connected to 2,400 machines, most of which were used for baling cotton. The system was shut down in 1972. In Glasgow, the pumping station was at the junction of High Street and Rottenrow. By 1899, it was supplying power to 348 machines, and another 39 were in the process of being completed. The pipes were in diameter, and there were around of them by 1909, when of high pressure water were supplied to customers. The system was shut down in 1964.
Systems outside the United Kingdom
Antwerp
All of the British systems were designed to provide power for intermittent processes, such as the operation of dock gates or cranes. The system installed at Antwerp was somewhat different, in that its primary purpose was the production of electricity for lighting. It was commissioned in 1894, and used pumping engines producing a total of to supply water at . Ellington, writing in 1895, stated that he found it difficult to see that this was an economical use of hydraulic power, although tests conducted at his works at Chester in October 1894 showed that efficiencies of 59 per cent could be achieved using a Pelton wheel directly coupled to a dynamo.
Australia
Two major systems were built in Australia. The first was in Melbourne, where the Melbourne Hydraulic Power Company began operating in July 1889. The company was authorised by an Act of the Victorian Parliament passed in December 1887, and construction of the system began, with Coates & Co. acting as consulting engineers, and George Swinburne working as engineering manager. The steam pumping plant was supplied by Abbot & Co. from England. Expansion was rapid, with around 70 machines, mainly hydraulic lifts, connected to the system by the end of 1889, and a third steam engine had to be installed in mid-1890, which more than doubled the capacity of the system. A fourth pumping engine was added in 1891, by which time there were 100 customers connected to the mains. The mains were a mixture of and pipes. The water was extracted from the Yarra River until 1893, after which it was drawn from the Public Works Department's supply. There were some of mains by 1897. A second pumping station was added in 1901, and in 1902, 102 million gallons (454 megalitres) of pressurised water were used by customers.
The system was operated as a commercial enterprise until 1925, after which the business and its assets reverted to the City of Melbourne, as specified by the original act. One of the early improvements made by the City Council was to consolidate the system. The steam pumps were replaced by new electric pumps, located in the Spencer Street power station, which thus supplied both electric power and hydraulic power to the city. The hydraulic system continued to operate under municipal ownership until December 1967.
In January 1891, a system in Sydney came on-line, having been authorised by act of Parliament in 1888. George Swinburne was again the engineer, and the system was supplying power to around 200 machines by 1894, which included 149 lifts and 20 dock cranes. The operating company was the Sydney and Suburbs Hydraulic Power Company, later shortened to the Sydney Hydraulic Power Company. Pressure mains were either of or diameter, and at its peak, there were around of mains, covering an area between Pyrmont, Woolloomooloo, and Broadway. In 1919, most of the 2369 lifts in the metropolitan area were hydraulically operated. The pumping station, together with two accumulators, was situated in the Darling Harbour district, and the original steam engines were replaced by three electric motors driving centrifugal pumps in 1952. The scheme remained in private ownership until its demise in 1975, and the pumping station has since been re-used as a tavern.
Buenos Aires
Ellington's system in Buenos Aires was designed to operate a sewage pumping scheme in the city.
Geneva
Geneva created a public system in 1879, using a steam engine installed at the Pont de la Machine to pump water from Lake Geneva, which provided drinking water and a pressurized water supply for the city. The water power was used by about a hundred small workshops having Schmid-type water engines installed. The power of the engines was between and the water was supplied at a pressure of .
Due to increased demand, a new pumping plant was installed, which started operation in 1886. The pumps were driven by Jonval turbines using the water power of the river Rhône. This structure was called Usine des Forces Motrices and was one of the largest structures for generation and distribution of power at the time of construction. By 1897 a total of 18 turbines had been installed, with a combined rating of 3.3MW.
The distribution network used three different pressure levels. The drinking water supply used the lowest pressure, while the intermediate and the high pressure mains served as hydraulic power networks. The intermediate pressure mains operated at and by 1896 some of pipework had been installed. It was used for powering 130 Schmid type water engines with a gross power of . The high pressure network had an operating pressure of bar and had a total length of . It was used to power 207 turbines and motors, as well as elevator drives, and had a gross power of .
Many turbines were used for driving generators for electric lighting. In 1887 an electricity generation plant was built next to the powerhouse, which generated 110 V DC with a maximum power of and an AC network with a maximum power of . The generators were driven by a water turbine supplied from the hydraulic power network. The hydraulic power network was not in competition with the electric power supply, but was seen as a supplement to it, and continued to supply power to many customers until the economic crisis of the 1930s, when the demand for pressurized water as an energy source declined. The last water engine was decommissioned in 1958.
In order to avoid excessive pressure build-up in the hydraulic power network, a release valve was fitted beside the main hall of the powerhouse. A tall water fountain, the Jet d'Eau, was ejected by the device whenever it was activated. This typically happened at the end of the day when the factories switched off their machines, making it hard to control the pressure in the system, and to adjust the supply of pressurized water to match the actual demand. The tall fountain was visible from a great distance and became a landmark in the city. When an engineering solution was found which made the fountain redundant, there was an outcry, and in 1891 it was moved to its current location in the lake, where it operated solely as a tourist attraction, although the water to create it still came from the hydraulic network.
New Zealand
Two systems were built in New Zealand. The Thames Water Race was built in 1876 to supply water to the Thames goldfields, powering stamper batteries, pumps and mine-head lifting equipment. Electricity was later supplied to the residents of Thames in 1914, and when goldmining ceased the following year, a Francis turbine and generator made use of the surplus water to generate more electricity for the residents of the town. The system was eventually decommissioned in 1946.
The Oamaru Borough Water Race was designed by Donald McLeod (b. 1835). It opened in 1880 after three years of construction. With water sourced from the Waitaki River, the race stretched nearly 50 km and comprised an intake structure, a stilling pond, 19 aqueducts and six tunnels. The spare capacity drove water motors, water engines and turbines in the town of Oamaru for decades, and the race operated for 103 years. Much of the race and its components can still be seen today.
Summary
Legacy
Bristol Harbour still has a working system, the pumping machinery of which was supplied by Fullerton, Hodgart and Barclay of Paisley, Scotland in 1907. The engine house is a grade II* listed building, constructed in 1887, fully commissioned by 1888, with a tower at one end to house the hydraulic accumulator. A second accumulator was fitted outside the building (dated 1954) which enables the operation of the system to be more easily visualised.
A number of artefacts, including the buildings used as pumping stations, have survived the demise of public hydraulic power networks. In Hull, the Machell Street pumping station has been reused as a workshop. The building still supports the sectional cast-iron roof tank used to allow the silt-laden water of the River Hull to settle, and is marked by a Blue plaque, to commemorate its importance. In London, Bermondsey pumping station, built in 1902, is in use as an engineering works, but retains its chimney and accumulator tower, while the station at Wapping is virtually complete, retaining all of its equipment, which is still in working order. The building is grade II* listed because of its completeness.
In Manchester, the Water Street pumping station, built in Baroque style between 1907 and 1909, was used as workshops for the City College, but has formed part of the People's History Museum since 1994. One of the pumping sets has been moved to the Museum of Science and Industry, where it has been restored to working order and forms part of a larger display about hydraulic power. The pumps were made by the Manchester firm of Galloways.
Geneva still has its Jet d'Eau fountain, but since 1951 it has been powered by a partially submerged pumping station, which uses water from the lake rather than the city water supply. Two Sulzer pumps, named Jura and Salève, create a fountain which rises to a height of above the surface of the lake.
See also
Power transmission
Pumped-storage hydroelectricity
Pneumatic tube
Bibliography
References
Literature
, From a paper read before the Liverpool Engineering Society, 28 January 1885
Hull system
London system
Hydraulics
Networks | Hydraulic power network | [
"Physics",
"Chemistry"
] | 4,304 | [
"Physical systems",
"Hydraulics",
"Fluid dynamics"
] |
4,490,255 | https://en.wikipedia.org/wiki/First%20quantization | First quantization is a procedure for converting equations of classical particle equations into quantum wave equations. The companion concept of second quantization converts classical field equations in to quantum field equations.
However, this need not be the case. In particular, a fully quantum version of the theory can be created by interpreting the interacting fields and their associated potentials as operators of multiplication, provided the potential is written in the canonical coordinates that are compatible with the Euclidean coordinates of standard classical mechanics. First quantization is appropriate for studying a single quantum-mechanical system (not to be confused with a single-particle system, since a single quantum wave function describes the state of a single quantum system, which may have arbitrarily many complicated constituent parts, and whose evolution is given by just one uncoupled Schrödinger equation) being controlled by laboratory apparatuses that are governed by classical mechanics, for example an old-fashioned voltmeter (one devoid of modern semiconductor devices, which rely on quantum theory; though this is sufficient, it is not necessary), a simple thermometer, a magnetic field generator, and so on.
History
In work published in 1901, Max Planck deduced the existence and value of the constant now bearing his name from considering only Wien's displacement law, statistical mechanics, and electromagnetic theory. Four years later, in 1905, Albert Einstein went further to elucidate this constant and its deep connection to the stopping potential of electrons emitted in the photoelectric effect. The energy in the photoelectric effect depended not only on the number of incident photons (the intensity of light) but also on the frequency of light, a novel phenomenon at the time. (This work would earn Einstein the 1921 Nobel Prize in Physics.) It can then be concluded that this was a key onset of quantization, that is, the discretization of matter into fundamental constituents.
About eight years later, in 1913, Niels Bohr published his famous three-part series where, essentially by fiat, he posited the quantization of the angular momentum in hydrogen and hydrogen-like metals. In effect, the orbital angular momentum of the (valence) electron takes the form , where , referred to as a quantum number, is presumed to be a whole number . In the original presentation, the orbital angular momentum of the electron was named , the Planck constant divided by two pi as , and the quantum number, or "counting of the number of passes between stationary points", as stated by Bohr originally as . See references above for more detail.
While it would be later shown that this assumption is not entirely correct, it in fact ends up being rather close to the correct expression for the orbital angular momentum operator's (eigenvalue) quantum number for large values of the quantum number , and indeed this was part of Bohr's own assumption. Regard the consequence of Bohr's assumption , and compare it with the correct version known today as . Clearly for large , there is little difference, just as well as for , the equivalence is exact. Without going into further historical detail, it suffices to stop here and regard this era of the history of quantization to be the "old quantum theory", meaning a period in the history of physics where the corpuscular nature of subatomic particles began to play an increasingly important role in understanding the results of physical experiments, whose mandatory conclusion was the discretization of key physical observable quantities. However, unlike the era below described as the era of first quantization, this era was based solely on purely classical arguments such as Wien's displacement law, thermodynamics, statistical mechanics, and the electromagnetic theory. In fact, the observation of the Balmer series of hydrogen in the history of spectroscopy dates as far back as 1885.
Nonetheless, the watershed events that would come to denote the era of first quantization took place in the vital years spanning 1925–1928, with foundational publications appearing in rapid succession: from Max Born and Pascual Jordan in December 1925, from Paul Dirac also in December 1925, from Erwin Schrödinger in January 1926, from Werner Heisenberg together with Born and Jordan in August 1926, and finally from Dirac in 1928. The results of these publications were three theoretical formalisms, two of which proved to be equivalent; that of Born, Heisenberg and Jordan was equivalent to that of Schrödinger, while Dirac's 1928 theory came to be regarded as the relativistic version of the prior two. Lastly, it is worth mentioning the publication of Heisenberg and Pauli in 1929, which can be regarded as the first attempt at "second quantization", a term used verbatim by Pauli in a 1943 publication of the American Physical Society.
For purposes of clarification and understanding of the terminology as it evolved over history, it suffices to end with the major publication that helped recognize the equivalence of the matrix mechanics of Born, Heisenberg, and Jordan 1925–1926 with the wave equation of Schrödinger in 1926. The collected and expanded works of John von Neumann showed that the two theories were mathematically equivalent, and it is this realization that is today understood as first quantization.
Qualitative mathematical preliminaries
To understand the term first quantization one must first understand what it means for something to be quantum in the first place. The classical theory of Newton is a second order nonlinear differential equation that gives the deterministic trajectory of a system of mass, . The acceleration, , in Newton's second law of motion, , is the second derivative of the system's position as a function of time. Therefore, it is natural to seek solutions of the Newton equation that are at least second order differentiable.
Quantum theory differs dramatically in that it replaces physical observables such as the position of the system, the time at which that observation is made, the mass, and the velocity of the system at the instant of observation with the notion of operator observables. Operators as observables change the notion of what is measurable and bring to the table the unavoidable conclusion of Max Born's probability theory. In this framework of nondeterminism, the probability of finding the system in a particular observable state is given by a dynamic probability density that is defined as the absolute value squared of the solution to the Schrödinger equation. The fact that probability densities are integrable and normalizable to unity implies that the solutions to the Schrödinger equation must be square integrable. The vector space of infinite sequences whose squares sum to a convergent series is known as (pronounced "little ell two"). It is in one-to-one correspondence with the infinite dimensional vector space of square-integrable functions, , from the Euclidean space to the complex plane, . For this reason, and are often referred to indiscriminately as "the" Hilbert space. This is rather misleading because is also a Hilbert space when equipped and completed under the Euclidean inner product, albeit a finite dimensional space.
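As a concrete illustration of trading classical quantities for operators, the sketch below discretizes the one-dimensional time-independent Schrödinger equation for a harmonic oscillator on a position grid and diagonalizes the resulting Hamiltonian matrix. The natural units (ħ = m = ω = 1), the grid size and the finite-difference representation of the kinetic-energy operator are all illustrative choices, not part of the original text.

```python
import numpy as np

# Grid for the position representation (natural units: hbar = m = omega = 1).
n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Kinetic energy operator -1/2 d^2/dx^2 via a second-order finite difference.
T = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), -1)
                      - 2 * np.diag(np.ones(n))
                      + np.diag(np.ones(n - 1), 1))

# Potential energy operator: multiplication by V(x) = x^2 / 2 (harmonic oscillator).
V = np.diag(0.5 * x**2)

# First-quantized Hamiltonian H = T + V; its eigenvalues are the allowed energies.
energies = np.linalg.eigvalsh(T + V)
print(energies[:4])  # close to the exact values 0.5, 1.5, 2.5, 3.5
```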
Types of systems
Both the Newton theory and the Schrödinger theory have a mass parameter in them and they can thus describe the evolution of a collection of masses or a single constituent system with a single total mass, as well as an idealized single particle with idealized single mass system. Below are examples of different types of systems.
One-particle systems
In general, the one-particle state could be described by a complete set of quantum numbers denoted by . For example, the three quantum numbers associated to an electron in a coulomb potential, like the hydrogen atom, form a complete set (ignoring spin). Hence, the state is called and is an eigenvector of the Hamiltonian operator. One can obtain a state function representation of the state using . All eigenvectors of a Hermitian operator form a complete basis, so one can construct any state obtaining the completeness relation:
Many have felt that all the properties of the particle could be known using this vector basis, which is expressed here using the Dirac Bra–ket notation. However this need not be true.
Many-particle systems
When turning to N-particle systems, i.e., systems containing N identical particles (particles characterized by the same physical parameters such as mass, charge and spin), an extension of the single-particle state function to the N-particle state function is necessary. A fundamental difference between classical and quantum mechanics concerns the concept of indistinguishability of identical particles. Only two species of particles are thus possible in quantum physics, the so-called bosons and fermions, which obey the rules:
(bosons),
(fermions).
Here we have interchanged two coordinates of the state function. The usual wave function is obtained using the Slater determinant and the identical particles theory. Using this basis, it is possible to solve any many-particle problem that can be clearly and accurately described by a single wave function (a single system-wide diagonalizable state). From this perspective, first quantization is not a truly multi-particle theory, but the notion of "system" need not consist of a single particle either.
See also
Canonical quantization
Geometric quantization
Quantization
Second quantization
Notes
References
Quantum mechanics | First quantization | [
"Physics"
] | 1,915 | [
"Theoretical physics",
"Quantum mechanics"
] |
4,490,333 | https://en.wikipedia.org/wiki/Load%20profile | In electrical engineering, a load profile is a graph of the variation in the electrical load versus time. A load profile will vary according to customer type (typical examples include residential, commercial and industrial), temperature and holiday seasons. Power producers use this information to plan how much electricity they will need to make available at any given time. Teletraffic engineering uses a similar load curve.
Power generation
In a power system, a load curve or load profile is a chart illustrating the variation in demand/electrical load over a specific time. Generation companies use this information to plan how much power they will need to generate at any given time. A load duration curve is similar to a load curve. The information is the same but is presented in a different form. These curves are useful in the selection of generator units for supplying electricity.
Electricity distribution
In an electricity distribution grid, the load profile of electricity usage is important to the efficiency and reliability of power transmission. Power transformers and battery-to-grid systems are critical elements of power distribution, and the sizing and modelling of batteries or transformers depends on the load profile. The factory specification of transformers for the optimization of load losses versus no-load losses depends directly on the characteristics of the load profile that the transformer is expected to be subjected to. This includes such characteristics as average load factor, diversity factor, utilization factor, and demand factor, which can all be calculated from a given load profile.
In the power market, so-called EFA blocks are used to specify traded forward contracts for the delivery of a certain amount of electrical energy at a certain time.
Retail energy markets
In retail energy markets, supplier obligations are settled on an hourly or subhourly basis. For most customers, consumption is measured on a monthly basis, based on meter reading schedules. Load profiles are used to convert the monthly consumption data into estimates of hourly or subhourly consumption in order to determine the supplier obligation. For each hour, these estimates are aggregated for all customers of an energy supplier, and the aggregate amount is used in market settlement calculations as the total demand that must be covered by the supplier.
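As a minimal sketch of this profiling step, the example below spreads a customer's metered monthly consumption across the hours of the month in proportion to a normalized class load profile. The daily profile shape and the monthly meter reading are invented illustrative values.

```python
import numpy as np

def allocate_monthly_kwh(monthly_kwh: float, class_profile: np.ndarray) -> np.ndarray:
    """Estimate hourly consumption by scaling a class load profile to a monthly total."""
    weights = class_profile / class_profile.sum()   # normalize profile to sum to 1
    return monthly_kwh * weights                    # hourly kWh estimates

# Hypothetical residential profile for a 720-hour month (repeating daily shape).
daily_shape = np.array([0.6]*7 + [1.0]*10 + [1.6]*5 + [0.8]*2)  # 24 hourly weights
class_profile = np.tile(daily_shape, 30)

hourly = allocate_monthly_kwh(450.0, class_profile)   # 450 kWh monthly reading
print(hourly.sum(), hourly.max())                      # ~450 kWh total, peak-hour estimate
```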
Calculating and recording load profiles
Load profiles can be determined by direct metering but on smaller devices such as distribution network transformers this is not routinely done. Instead a load profile can be inferred from customer billing or other data. An example of a practical calculation used by utilities is using a transformer's maximum demand reading and taking into account the known number of each customer type supplied by these transformers. This process is called load research.
Actual demand can be collected at strategic locations to perform more detailed load analysis; this is beneficial to both distribution and end-user customers looking for peak consumption. Smart grid meters, utility meter load profilers, data logging sub-meters and portable data loggers are designed to accomplish this task by recording readings at a set interval.
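The interval data recorded by such meters and loggers can be reduced to the load-profile characteristics mentioned earlier (peak demand, average demand, load factor). The sketch below does this for a series of hypothetical 30-minute kW readings; the values are invented for illustration.

```python
import numpy as np

def load_statistics(demand_kw: np.ndarray) -> dict:
    """Summarize an interval demand recording (readings in kW, equal intervals)."""
    peak = demand_kw.max()
    average = demand_kw.mean()
    return {"peak_kw": peak,
            "average_kw": average,
            "load_factor": average / peak}   # load factor = average / peak demand

# Hypothetical 30-minute demand readings over one day (48 values).
readings = np.array([20]*14 + [45]*8 + [60]*6 + [80]*4 + [55]*8 + [25]*8, dtype=float)
print(load_statistics(readings))
```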
See also
Cost of electricity by source
Duck curve
Electricity generation
Electric power transmission
Electric power distribution
Load duration curve
Peak demand
Vehicle-to-grid
Unit commitment problem in electrical power production
Duty cycle
References
Electronic engineering
Electric power
Electrical systems | Load profile | [
"Physics",
"Technology",
"Engineering"
] | 626 | [
"Physical quantities",
"Computer engineering",
"Electrical systems",
"Physical systems",
"Power (physics)",
"Electronic engineering",
"Electric power",
"Electrical engineering"
] |
4,490,554 | https://en.wikipedia.org/wiki/EGS%20%28program%29 |
The EGS (Electron Gamma Shower) computer code system is a general purpose package for the Monte Carlo simulation of the coupled transport of electrons and photons in an arbitrary geometry for particles with energies from a few keV up to several hundreds of GeV. It originated at SLAC but National Research Council of Canada and KEK have been involved in its development since the early 80s.
Development of the original EGS code ended with version EGS4. Since then two groups have re-written the code with new physics:
EGSnrc, maintained by the Ionizing Radiation Standards Group, Measurement Science and Standards, National Research Council of Canada
EGS5, maintained by KEK, the Japanese particle physics research facility.
EGSnrc
EGSnrc is a general-purpose software toolkit that can be applied to build Monte Carlo simulations of coupled electron-photon transport, for particle energies ranging from 1 keV to 10 GeV. It is widely used internationally in a variety of radiation-related fields. The EGSnrc implementation improves the accuracy and precision of the charged particle transport mechanics and the atomic scattering cross-section data. The charged particle multiple scattering algorithm allows for large step sizes without sacrificing accuracy - a key feature of the toolkit that leads to fast simulation speeds. EGSnrc also includes a C++ class library called egs++ that can be used to model elaborate geometries and particle sources.
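To give a flavour of the kind of calculation a Monte Carlo photon-transport code performs, the toy sketch below samples exponential free path lengths in a uniform slab and tallies the fraction of photons that cross it uncollided. It is not EGSnrc code and uses none of the EGSnrc or egs++ APIs; the attenuation coefficient and slab thickness are arbitrary illustrative values, and real codes such as EGSnrc model scattering, secondary particles and energy deposition in far greater detail.

```python
import random

def transmitted_fraction(mu_per_cm: float, thickness_cm: float, n_photons: int = 100_000) -> float:
    """Toy Monte Carlo: fraction of photons crossing a uniform absorbing slab
    (free path lengths sampled from an exponential distribution, no scattering)."""
    transmitted = 0
    for _ in range(n_photons):
        path = random.expovariate(mu_per_cm)   # distance to first interaction
        if path > thickness_cm:                # photon leaves the slab uncollided
            transmitted += 1
    return transmitted / n_photons

# Illustrative values: mu = 0.2 /cm, 5 cm slab -> analytic answer exp(-1) ≈ 0.368.
print(transmitted_fraction(0.2, 5.0))
```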
EGSnrc is open source and distributed on GitHub under the GNU Affero General Public License. It can be downloaded free of charge, and bug reports and pull requests can be submitted through the group's GitHub page. The documentation for EGSnrc is also available online.
EGSnrc is distributed with a wide range of applications that utilize the radiation transport physics to calculate specific quantities. These codes have been developed by numerous authors over the lifetime of EGSnrc to support the large user community. It is possible to calculate quantities such as absorbed dose, kerma, particle fluence, and much more, with complex geometrical conditions. One of the most well-known EGSnrc applications is BEAMnrc, which was developed as part of the OMEGA project. This was a collaboration between the National Research Council of Canada and a research group at the University of Wisconsin–Madison. All types of medical linear accelerators can be modelled using the BEAMnrc's component module system.
See also
GEANT (program)
Geant4
References
External links
NRC-CNRC page for EGSnrc
KEK page for EGS5
EGSnrc Github page
EGSnrc online documentation
EGSnrc subreddit
Monte Carlo software
Physics software
Medical physics
Radiation therapy
Monte Carlo particle physics software
Free science software | EGS (program) | [
"Physics"
] | 567 | [
"Applied and interdisciplinary physics",
"Physics software",
"Medical physics",
"Computational physics"
] |
4,491,248 | https://en.wikipedia.org/wiki/Bregman%20divergence | In mathematics, specifically statistics and information geometry, a Bregman divergence or Bregman distance is a measure of difference between two points, defined in terms of a strictly convex function; they form an important class of divergences. When the points are interpreted as probability distributions – notably as either values of the parameter of a parametric model or as a data set of observed values – the resulting distance is a statistical distance. The most basic Bregman divergence is the squared Euclidean distance.
Bregman divergences are similar to metrics, but satisfy neither the triangle inequality (ever) nor symmetry (in general). However, they satisfy a generalization of the Pythagorean theorem, and in information geometry the corresponding statistical manifold is interpreted as a (dually) flat manifold. This allows many techniques of optimization theory to be generalized to Bregman divergences, geometrically as generalizations of least squares.
Bregman divergences are named after Russian mathematician Lev M. Bregman, who introduced the concept in 1967.
Definition
Let be a continuously-differentiable, strictly convex function defined on a convex set .
The Bregman distance associated with F for points is the difference between the value of F at point p and the value of the first-order Taylor expansion of F around point q evaluated at point p:
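In symbols, the divergence is D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩. The sketch below implements this formula directly for a user-supplied convex function and its gradient; the choice of F (the squared Euclidean norm) and the test points are arbitrary illustrative values.

```python
import numpy as np

def bregman_divergence(F, grad_F, p, q):
    """D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# F(x) = ||x||^2 generates the squared Euclidean distance.
sq_norm = lambda x: np.dot(x, x)
sq_norm_grad = lambda x: 2 * x

p, q = np.array([1.0, 2.0]), np.array([3.0, 0.0])
print(bregman_divergence(sq_norm, sq_norm_grad, p, q))   # equals ||p - q||^2 = 8.0
```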
Properties
Non-negativity: for all , . This is a consequence of the convexity of .
Positivity: When is strictly convex, iff .
Uniqueness up to affine difference: iff is an affine function.
Convexity: is convex in its first argument, but not necessarily in the second argument. If F is strictly convex, then is strictly convex in its first argument.
For example, take f(x) = |x|, smooth it at 0, then take , then .
Linearity: If we think of the Bregman distance as an operator on the function F, then it is linear with respect to non-negative coefficients. In other words, for strictly convex and differentiable, and ,
Duality: If F is strictly convex, then the function F has a convex conjugate which is also strictly convex and continuously differentiable on some convex set . The Bregman distance defined with respect to is dual to as
Here, and are the dual points corresponding to p and q.
Moreover, using the same notations :
Integral form: by the integral remainder form of Taylor's Theorem, a Bregman divergence can be written as the integral of the Hessian of along the line segment between the Bregman divergence's arguments.
Mean as minimizer: A key result about Bregman divergences is that, given a random vector, the mean vector minimizes the expected Bregman divergence from the random vector. This result generalizes the textbook result that the mean of a set minimizes total squared error to elements in the set. This result was proved for the vector case by (Banerjee et al. 2005), and extended to the case of functions/distributions by (Frigyik et al. 2008). This result is important because it further justifies using a mean as a representative of a random set, particularly in Bayesian estimation.
Bregman balls are bounded, and compact if X is closed: Define Bregman ball centered at x with radius r by . When is finite dimensional, , if is in the relative interior of , or if is locally closed at (that is, there exists a closed ball centered at , such that is closed), then is bounded for all . If is closed, then is compact for all .
Law of cosines:
For any
Parallelogram law: for any ,
Bregman projection: For any , define the "Bregman projection" of onto :
. Then
if is convex, then the projection is unique if it exists;
if is nonempty, closed, and convex and is finite dimensional, then the projection exists and is unique.
Generalized Pythagorean Theorem:
For any ,
This is an equality if is in the relative interior of .
In particular, this always happens when is an affine set.
Lack of triangle inequality: Since the Bregman divergence is essentially a generalization of squared Euclidean distance, there is no triangle inequality. Indeed, , which may be positive or negative.
Proofs
Non-negativity and positivity: use Jensen's inequality.
Uniqueness up to affine difference: Fix some , then for any other , we have by definition.
Convexity in the first argument: by definition, and use convexity of F. Same for strict convexity.
Linearity in F, law of cosines, parallelogram law: by definition.
Duality: See figure 1 of.
Bregman balls are bounded, and compact if X is closed:
Fix . Take affine transform on , so that .
Take some , such that . Then consider the "radial-directional" derivative of on the Euclidean sphere .
for all .
Since is compact, it achieves minimal value at some .
Since is strictly convex, . Then .
Since is in , is continuous in , thus is closed if is.
Projection is well-defined when is closed and convex.
Fix . Take some , then let . Then draw the Bregman ball . It is closed and bounded, thus compact. Since is continuous and strictly convex on it, and bounded below by , it achieves a unique minimum on it.
Pythagorean inequality.
By cosine law, , which must be , since minimizes in , and is convex.
Pythagorean equality when is in the relative interior of .
If , then since is in the relative interior, we can move from in the direction opposite of , to decrease , contradiction.
Thus .
Classification theorems
The only symmetric Bregman divergences on are squared generalized Euclidean distances (Mahalanobis distance), that is, for some positive definite .
The following two characterizations are for divergences on , the set of all probability measures on , with .
Define a divergence on as any function of type , such that for all , then:
The only divergence on that is both a Bregman divergence and an f-divergence is the Kullback–Leibler divergence.
If , then any Bregman divergence on that satisfies the data processing inequality must be the Kullback–Leibler divergence. (In fact, a weaker assumption of "sufficiency" is enough.) Counterexamples exist when .
Given a Bregman divergence , its "opposite", defined by , is generally not a Bregman divergence. For example, the Kullback–Leibler divergence is both a Bregman divergence and an f-divergence. Its reverse is also an f-divergence, but by the above characterization, the reverse KL divergence cannot be a Bregman divergence.
Examples
The squared Mahalanobis distance is generated by the convex quadratic form .
The canonical example of a Bregman distance is the squared Euclidean distance . It results as the special case of the above, when is the identity, i.e. for . As noted, affine differences, i.e. the lower orders added in , are irrelevant to .
The generalized Kullback–Leibler divergence
is generated by the negative entropy function
When restricted to the simplex, the last two terms cancel, giving the usual Kullback–Leibler divergence for distributions.
The Itakura–Saito distance,
is generated by the convex function
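These example divergences follow directly from the general formula. The sketch below evaluates the generating functions for the squared Euclidean distance, the generalized Kullback–Leibler divergence (negative entropy) and the Itakura–Saito distance (negative logarithm) on a pair of positive vectors; the test vectors are arbitrary illustrative values.

```python
import numpy as np

def bregman(F, grad_F, p, q):
    """D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

# Squared Euclidean distance: F(x) = ||x||^2.
print(bregman(lambda x: x @ x, lambda x: 2 * x, p, q))

# Generalized Kullback-Leibler divergence: F(x) = sum x_i log x_i (negative entropy).
print(bregman(lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1, p, q))

# Itakura-Saito distance: F(x) = -sum log x_i.
print(bregman(lambda x: -np.sum(np.log(x)), lambda x: -1 / x, p, q))
```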
Generalizing projective duality
A key tool in computational geometry is the idea of projective duality, which maps points to hyperplanes and vice versa, while preserving incidence and above-below relationships. There are numerous analytical forms of the projective dual: one common form maps the point to the hyperplane . This mapping can be interpreted (identifying the hyperplane with its normal) as the convex conjugate mapping that takes the point p to its dual point , where F defines the d-dimensional paraboloid .
If we now replace the paraboloid by an arbitrary convex function, we obtain a different dual mapping that retains the incidence and above-below properties of the standard projective dual. This implies that natural dual concepts in computational geometry like Voronoi diagrams and Delaunay triangulations retain their meaning in distance spaces defined by an arbitrary Bregman divergence. Thus, algorithms from "normal" geometry extend directly to these spaces (Boissonnat, Nielsen and Nock, 2010)
Generalization of Bregman divergences
Bregman divergences can be interpreted as limit cases of skewed Jensen divergences (see Nielsen and Boltz, 2011). Jensen divergences can be generalized using comparative convexity, and limit cases of these skewed Jensen divergences generalizations yields generalized Bregman divergence (see Nielsen and Nock, 2017).
The Bregman chord divergence is obtained by taking a chord instead of a tangent line.
Bregman divergence on other objects
Bregman divergences can also be defined between matrices, between functions, and between measures (distributions). Bregman divergences between matrices include the Stein's loss and von Neumann entropy. Bregman divergences between functions include total squared error, relative entropy, and squared bias; see the references by Frigyik et al. below for definitions and properties. Similarly Bregman divergences have also been defined over sets, through a submodular set function which is known as the discrete analog of a convex function. The submodular Bregman divergences subsume a number of discrete distance measures, like the Hamming distance, precision and recall, mutual information and some other set based distance measures (see Iyer & Bilmes, 2012 for more details and properties of the submodular Bregman.)
For a list of common matrix Bregman divergences, see Table 15.1 in.
Applications
In machine learning, Bregman divergences are used to calculate the bi-tempered logistic loss, performing better than the softmax function with noisy datasets.
Bregman divergence is used in the formulation of mirror descent, which includes optimization algorithms used in machine learning such as gradient descent and the hedge algorithm.
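A minimal sketch of mirror descent on the probability simplex, using the negative-entropy mirror map (whose Bregman divergence is the Kullback–Leibler divergence); this choice yields the familiar exponentiated-gradient update. The linear objective and step size are arbitrary illustrative choices.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, steps=100, eta=0.1):
    """Mirror descent with the negative-entropy mirror map:
    x <- x * exp(-eta * grad(x)), renormalized onto the simplex."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))
        x = x / x.sum()                      # Bregman projection back onto the simplex
    return x

# Illustrative objective: f(x) = <c, x>, minimized over the simplex at the smallest c_i.
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
print(x_star)   # mass concentrates on the coordinate with the smallest cost
```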
References
Geometric algorithms
Statistical distance | Bregman divergence | [
"Physics"
] | 2,148 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
4,493,061 | https://en.wikipedia.org/wiki/Electrolytic%20detector | An electrolytic detector, or liquid barretter, is a type of detector (demodulator) used in early radio receivers. It was first used by Canadian radio researcher Reginald Fessenden in 1903, and used until about 1913, after which it was superseded by crystal detectors and vacuum tube detectors such as the Fleming valve and Audion (triode). It was considered very sensitive and reliable compared to other detectors available at the time such as the magnetic detector and the coherer. It was one of the first rectifying detectors, able to receive AM (sound) transmissions. On December 24, 1906, US Naval ships with radio receivers equipped with Fessenden's electrolytic detectors received the first AM radio broadcast from Fessenden's Brant Rock, Massachusetts transmitter, consisting of a program of Christmas music.
History
Fessenden, more than any other person, is responsible for developing amplitude modulation (AM) radio transmission around 1900. While working to develop AM transmitters, he realized that the radio wave detectors used in existing radio receivers were not suitable to receive AM signals. The radio transmitters of the time transmitted information by radiotelegraphy; the transmitter was turned on and off by the operator using a switch called a telegraph key producing pulses of radio waves, to transmit text data using Morse code. Thus, receivers didn't have to extract an audio signal from the radio signal, but only detected the presence or absence of the radio frequency to produce "clicks" in the earphone representing the pulses of Morse code. The device that did this was called a "detector". The detector used in receivers of that day, called a coherer, simply acted as a switch, that conducted current in the presence of radio waves, and thus did not have the capability to demodulate, or extract the audio signal from, an amplitude-modulated radio wave.
The simplest way to extract the sound waveform from an AM signal is to rectify it; remove the oscillations on one side of the wave, converting it from an alternating current to a varying direct current. The variations in the amplitude of the radio wave that represent the sound waveform will cause variations in the current, and thus can be converted to sound by an earphone. To do this a rectifier is required, an electrical component that conducts electric current in only one direction and blocks current in the opposite direction. It was known at the time that passing current through solutions of electrolytes such as acids could have this unilateral conduction property.
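The rectify-and-smooth idea can be illustrated numerically. The sketch below generates a toy AM waveform, half-wave rectifies it (keeping current of only one polarity, as a one-way detector does), and smooths the result with a moving average to recover the audio envelope. The carrier and audio frequencies are arbitrary illustrative values, and the moving-average filter merely stands in for the smoothing provided by the earphone and circuit.

```python
import numpy as np

fs = 200_000                                   # sample rate (Hz), illustrative
t = np.arange(0, 0.01, 1 / fs)                 # 10 ms of signal
audio = 0.5 * np.sin(2 * np.pi * 1_000 * t)    # 1 kHz "sound" waveform
carrier = np.sin(2 * np.pi * 20_000 * t)       # 20 kHz carrier
am_signal = (1 + audio) * carrier              # amplitude-modulated wave

rectified = np.maximum(am_signal, 0)           # one-way conduction (half-wave rectifier)

# Crude low-pass filter: a moving average over one carrier period removes the RF ripple.
window = fs // 20_000
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")

print(envelope[:5])                            # slowly varying trace follows the audio
```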
In 1902 Fessenden developed what he called a "barretter" detector that would rectify an AM signal, but it was not very sensitive. The barretter used a fine platinum wire, called Wollaston wire, manufactured as a platinum core in a silver sheath that had to be stripped off with acid. In the process of stripping some Wollaston wire, Fessenden left it immersed in acid too long, eating away most of the wire until only a tip remained in contact with the solution; he noted that it responded well to radio signals being generated nearby, and could be used as a new type of detector.
This story was disputed at the time, with credit for the discovery also given to Michael I. Pupin, W. Schloemilch, Hugo Gernsback and others. However, it is apparent that Fessenden was the first to put the device to practical use.
Description
The action of this detector is based upon the fact that only the tip of a platinum wire a few hundred-thousandths of an inch in diameter is immersed in an electrolyte solution, and a small D.C. voltage bias is applied to the cell thus formed. Platinum is used because other metals are too quickly dissolved in the acid. The applied bias current decomposes the solution by electrolysis into tiny gas bubbles that cling to the metal point insulating the metal tip from the solution thus reducing the bias current. An incoming R.F. current can flow better in the direction across the point that makes the point more negative. That recombines the gases and increases point exposure to the liquid. RF current flow in the direction that makes the point more positive only reinforces the resistance from the gaseous blocking of the point. Detection results from this asymmetrical flow.
In practical use, a series circuit is made of the detector, headphones, and a battery with a potentiometer. The wire is made positive, and the signal to be demodulated is applied directly to it; a small (about 5 ml) platinum cup filled with either sulfuric or nitric acid completes the headphone circuit, and is also connected to ground to complete the signal circuit.
To adjust the cell, the point of the wire electrode is dipped into the electrolyte and the potentiometer adjusted until a hissing noise is heard in the headphones. The potentiometer setting is then moved to reduce the current until the noise just ceases, at which point the detector is in its most sensitive state.
It was found that strong atmospheric noise would render it insensitive, requiring that the device be rebiased after each strong burst of static interference.
Sealed-point detector
Another form of electrolytic detector, the sealed-point electrolytic detector, which could stand considerable rough usage, was commercially known as the Radioson Detector; it had the cell sealed in a glass envelope. The operation was the same as in the bare-point electrolytic detector, the advantage being that the acid was sealed in, and consequently could not spill or evaporate.
See also
Hot-wire barretter
Coherer
Crystal Radio
Spark-gap transmitter
Radio receiver
Antique radio
Camille Papin Tissot
Notes
External links
United States Early Radio History
History of radio
Radio electronics
Detectors | Electrolytic detector | [
"Engineering"
] | 1,183 | [
"Radio electronics"
] |
1,137,568 | https://en.wikipedia.org/wiki/Artificial%20gravity | Artificial gravity is the creation of an inertial force that mimics the effects of a gravitational force, usually by rotation.
Artificial gravity, or rotational gravity, is thus the appearance of a centrifugal force in a rotating frame of reference (the transmission of centripetal acceleration via normal force in the non-rotating frame of reference), as opposed to the force experienced in linear acceleration, which by the equivalence principle is indistinguishable from gravity.
In a more general sense, "artificial gravity" may also refer to the effect of linear acceleration, e.g. by means of a rocket engine.
Rotational simulated gravity has been used in simulations to help astronauts train for extreme conditions.
Rotational simulated gravity has been proposed as a solution in human spaceflight to the adverse health effects caused by prolonged weightlessness.
However, there are no current practical outer space applications of artificial gravity for humans due to concerns about the size and cost of a spacecraft necessary to produce a useful centripetal force comparable to the gravitational field strength on Earth (g).
Scientists are also concerned about the effect of such a system on the inner ear of the occupants: using centripetal force to create artificial gravity can cause inner-ear disturbances leading to nausea and disorientation, and these adverse effects may prove intolerable for the occupants.
Centripetal force
In the context of a rotating space station, it is the radial force provided by the spacecraft's hull that acts as centripetal force. Thus, the "gravity" force felt by an object is the centrifugal force perceived in the rotating frame of reference as pointing "downwards" towards the hull.
By Newton's Third Law, the value of little g (the perceived "downward" acceleration) is equal in magnitude and opposite in direction to the centripetal acceleration. It was tested with satellites like Bion 3 (1975) and Bion 4 (1977); they both had centrifuges on board to put some specimens in an artificial gravity environment.
Differences from normal gravity
From the perspective of people rotating with the habitat, artificial gravity by rotation behaves similarly to normal gravity but with the following differences, which can be mitigated by increasing the radius of a space station.
Centrifugal force varies with distance: Unlike real gravity, the apparent force felt by observers in the habitat pushes radially outward from the axis, and the centrifugal force is directly proportional to the distance from the axis of the habitat. With a small radius of rotation, a standing person's head would feel significantly less gravity than their feet. Likewise, passengers who move in a space station experience changes in apparent weight in different parts of the body.
The Coriolis effect gives an apparent force that acts on objects that are moving relative to a rotating reference frame. This apparent force acts at right angles to the motion and the rotation axis and tends to curve the motion in the opposite sense to the habitat's spin. If an astronaut inside a rotating artificial gravity environment moves towards or away from the axis of rotation, they will feel a force pushing them in or against the direction of spin. These forces act on the semicircular canals of the inner ear and can cause dizziness. Lengthening the period of rotation (lower spin rate) reduces the Coriolis force and its effects. It is generally believed that at 2 rpm or less, no adverse effects from the Coriolis forces will occur, although humans have been shown to adapt to rates as high as 23 rpm.
Changes in the rotation axis or rate of a spin would cause a disturbance in the artificial gravity field and stimulate the semicircular canals (refer to above). Any movement of mass within the station, including a movement of people, would shift the axis and could potentially cause a dangerous wobble. Thus, the rotation of a space station would need to be adequately stabilized, and any operations to deliberately change the rotation would need to be done slowly enough to be imperceptible. One possible solution to prevent the station from wobbling would be to use its liquid water supply as ballast which could be pumped between different sections of the station as required.
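As a rough illustration of the radius and spin-rate trade-off described above (the radii below are arbitrary examples), the spin rate needed for 1 g at the rim, and the fraction of that gravity felt at a standing person's head, can be computed as follows:

```python
import math

def spin_rate_rpm(radius_m: float, g_level: float = 9.81) -> float:
    """Spin rate (rpm) needed at the given radius to feel g_level at the rim,
    using centripetal acceleration a = omega**2 * r."""
    omega = math.sqrt(g_level / radius_m)   # rad/s
    return omega * 60 / (2 * math.pi)

for r in (10, 100, 500):                    # illustrative radii in metres
    # gravity felt at the head of a roughly 2 m tall person standing on the rim,
    # as a fraction of that felt at the feet (proportional to distance from the axis)
    head_fraction = (r - 2) / r
    print(f"radius {r} m: {spin_rate_rpm(r):.2f} rpm, head feels {head_fraction:.0%} of foot gravity")
```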
Human spaceflight
The Gemini 11 mission attempted in 1966 to produce artificial gravity by rotating the capsule around the Agena Target Vehicle to which it was attached by a 36-meter tether. They were able to generate a small amount of artificial gravity, about 0.00015 g, by firing their side thrusters to slowly rotate the combined craft like a slow-motion pair of bolas. The resultant force was too small to be felt by either astronaut, but objects were observed moving towards the "floor" of the capsule.
Health benefits
Artificial gravity has been suggested as a solution to various health risks associated with spaceflight. In 1964, the Soviet space program believed that a human could not survive more than 14 days in space for fear that the heart and blood vessels would be unable to adapt to the weightless conditions. This fear was eventually discovered to be unfounded as spaceflights have now lasted up to 437 consecutive days, with missions aboard the International Space Station commonly lasting 6 months. However, the question of human safety in space did launch an investigation into the physical effects of prolonged exposure to weightlessness. In June 1991, a Spacelab Life Sciences 1 flight performed 18 experiments on two men and two women over nine days. In an environment without gravity, it was concluded that the response of white blood cells and muscle mass decreased. Additionally, within the first 24 hours spent in a weightless environment, blood volume decreased by 10%. Long weightless periods can cause brain swelling and eyesight problems. Upon return to Earth, the effects of prolonged weightlessness continue to affect the human body as fluids pool back to the lower body, the heart rate rises, a drop in blood pressure occurs, and there is a reduced tolerance for exercise.
Artificial gravity, for its ability to mimic the behavior of gravity on the human body, has been suggested as one of the most encompassing manners of combating the physical effects inherent in weightless environments. Other measures that have been suggested as symptomatic treatments include exercise, diet, and Pingvin suits. However, criticism of those methods lies in the fact that they do not fully eliminate health problems and require a variety of solutions to address all issues. Artificial gravity, in contrast, would remove the weightlessness inherent in space travel. By implementing artificial gravity, space travelers would never have to experience weightlessness or the associated side effects. Especially in a modern-day six-month journey to Mars, exposure to artificial gravity is suggested in either a continuous or intermittent form to prevent extreme debilitation to the astronauts during travel.
Proposals
Several proposals have incorporated artificial gravity into their design:
Discovery II: a 2005 vehicle proposal capable of delivering a 172-metric-ton crewed payload to Jupiter's orbit in 118 days. A very small portion of the 1,690-metric-ton craft would incorporate a centrifugal crew station.
Multi-Mission Space Exploration Vehicle (MMSEV): a 2011 NASA proposal for a long-duration crewed space transport vehicle; it included a rotational artificial gravity space habitat intended to promote crew health for a crew of up to six persons on missions of up to two years in duration. The torus-ring centrifuge would utilize both standard metal-frame and inflatable spacecraft structures and would provide 0.11 to 0.69 g if built with the diameter option.
ISS Centrifuge Demo: a 2011 NASA proposal for a demonstration project preparatory to the final design of the larger torus centrifuge space habitat for the Multi-Mission Space Exploration Vehicle. The structure would have an outside diameter of with a ring interior cross-section diameter of . It would provide 0.08 to 0.51 g partial gravity. This test and evaluation centrifuge would have the capability to become a Sleep Module for the ISS crew.
Mars Direct: A plan for a crewed Mars mission created by NASA engineers Robert Zubrin and David Baker in 1990, later expanded upon in Zubrin's 1996 book The Case for Mars. The "Mars Habitat Unit", which would carry astronauts to Mars to join the previously launched "Earth Return Vehicle", would have had artificial gravity generated during flight by tying the spent upper stage of the booster to the Habitat Unit, and setting them both rotating about a common axis.
The proposed Tempo3 mission rotates two halves of a spacecraft connected by a tether to test the feasibility of simulating gravity on a crewed mission to Mars.
The Mars Gravity Biosatellite was a proposed mission meant to study the effect of artificial gravity on mammals. An artificial gravity field of 0.38 g (equivalent to Mars's surface gravity) was to be produced by rotation (32 rpm, radius of ca. 30 cm). Fifteen mice would have orbited Earth (Low Earth orbit) for five weeks and then land alive. However, the program was canceled on 24 June 2009, due to a lack of funding and shifting priorities at NASA.
Vast Space is a private company that proposes to build the world's first artificial gravity space station using the rotating spacecraft concept.
A Mars gravity simulator could be built on the Moon to prepare for Mars missions. The surface gravity of Mars is somewhat more than twice that of the Moon. It has been proposed to build a large low-pressure bubble, and within it up to twenty higher-pressure rotating tori, all within a cave or lava tube. An analogous system could be built on Mars to prepare people to return to Earth, whose surface gravity is more than twice that of Mars.
Issues with implementation
Some of the reasons that artificial gravity remains unused today in spaceflight trace back to the problems inherent in implementation. One of the realistic methods of creating artificial gravity is the centrifugal effect caused by the centripetal force of the floor of a rotating structure pushing up on the person. In that model, however, issues arise in the size of the spacecraft. As expressed by John Page and Matthew Francis, the smaller a spacecraft (the shorter the radius of rotation), the more rapid the rotation that is required. As such, to simulate gravity, it would be better to utilize a larger spacecraft that rotates slowly.
The requirements on size about rotation are due to the differing forces on parts of the body at different distances from the axis of rotation. If parts of the body closer to the rotational axis experience a force that is significantly different from parts farther from the axis, then this could have adverse effects. Additionally, questions remain as to what the best way is to initially set the rotating motion in place without disturbing the stability of the whole spacecraft's orbit. At the moment, there is not a ship massive enough to meet the rotation requirements, and the costs associated with building, maintaining, and launching such a craft are extensive.
In general, given the limited health effects observed in today's typically shorter spaceflights, as well as the very large cost of researching a technology that is not yet strictly needed, the development of artificial gravity technology has been slow and sporadic.
As the length of typical spaceflights increases, the need for artificial gravity for the passengers on such flights will increase, and so will the knowledge and resources available to create it. It is likely only a question of time before conditions are suitable for completing the development of artificial gravity technology, which will almost certainly be required as average mission durations grow.
In science fiction
Several science fiction novels, films, and series have featured artificial gravity production.
In the movie 2001: A Space Odyssey, a rotating centrifuge in the Discovery spacecraft provides artificial gravity.
In the 1999 television series Cowboy Bebop, a rotating ring in the Bebop spacecraft creates artificial gravity throughout the spacecraft.
In the novel The Martian, the Hermes spacecraft achieves artificial gravity by design; it employs a ringed structure, at whose periphery forces around 40% of Earth's gravity are experienced, similar to Mars' gravity.
In the novel Project Hail Mary by the same author, weight on the titular ship Hail Mary is provided initially by engine thrust, as the ship is capable of constant acceleration up to and is also able to separate, turn the crew compartment inwards, and rotate to produce while in orbit.
The movie Interstellar features a spacecraft called the Endurance that can rotate on its central axis to create artificial gravity, controlled by retro thrusters on the ship.
The 2021 film Stowaway features the upper stage of a launch vehicle connected by 450-meter long tethers to the ship's main hull, acting as a counterweight for inertia-based artificial gravity.
In the television series For All Mankind, the space hotel Polaris, later renamed Phoenix after being purchased and converted into a space vessel by Helios Aerospace for their own Mars mission, features a wheel-like structure controlled by thrusters to create artificial gravity, whilst a central axial hub operates in zero gravity as a docking station.
Linear acceleration
Linear acceleration is another method of generating artificial gravity, by using the thrust from a spacecraft's engines to create the illusion of being under a gravitational pull. A spacecraft under constant acceleration in a straight line would have the appearance of a gravitational pull in the direction opposite to that of the acceleration, as the thrust from the engines would cause the spacecraft to "push" itself up into the objects and persons inside of the vessel, thus creating the feeling of weight. This is because of Newton's third law: the weight that one would feel standing in a linearly accelerating spacecraft would not be a true gravitational pull, but simply the reaction of oneself pushing against the craft's hull as it pushes back. Similarly, objects that would otherwise be free-floating within the spacecraft if it were not accelerating would "fall" towards the engines when it started accelerating, as a consequence of Newton's first law: the floating object would remain at rest, while the spacecraft would accelerate towards it, and appear to an observer within that the object was "falling".
To emulate artificial gravity on Earth, spacecraft using linear acceleration gravity may be built similar to a skyscraper, with its engines as the bottom "floor". If the spacecraft were to accelerate at the rate of 1 g—Earth's gravitational pull—the individuals inside would be pressed into the hull at the same force, and thus be able to walk and behave as if they were on Earth.
This form of artificial gravity is desirable because it could functionally create the illusion of a gravity field that is uniform and unidirectional throughout a spacecraft, without the need for large, spinning rings, whose fields may not be uniform, not unidirectional with respect to the spacecraft, and require constant rotation. This would also have the advantage of relatively high speed: a spaceship accelerating at 1 g, 9.8 m/s2, for the first half of the journey, and then decelerating for the other half, could reach Mars within a few days. Similarly, a hypothetical space journey using constant acceleration of 1 g for one year would reach relativistic speeds and allow for a round trip to the nearest star, Proxima Centauri. As such, low-impulse but long-term linear acceleration has been proposed for various interplanetary missions. For example, even heavy (100 ton) cargo payloads could be transported to Mars in , retaining approximately 55 percent of the LEO vehicle mass upon arrival into a Mars orbit and providing a low-gravity gradient to the spacecraft during the entire journey.
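A back-of-the-envelope estimate of the accelerate-then-decelerate transit time mentioned above, assuming a non-relativistic trip and an illustrative Earth-Mars distance of 0.5 AU (the real distance varies widely):

```python
import math

def transit_time_days(distance_m: float, accel: float = 9.81) -> float:
    """Accelerate at `accel` for half the distance, then decelerate for the rest
    (non-relativistic). Uses d/2 = (1/2) * a * t**2 for each half of the trip."""
    t_half = math.sqrt(distance_m / accel)   # = sqrt(2 * (d/2) / a)
    return 2 * t_half / 86_400               # seconds -> days

print(f"{transit_time_days(0.5 * 1.496e11):.1f} days at 1 g")   # about 2 days
```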
This form of gravity is not without challenges, however. At present, the only practical engines that could propel a vessel fast enough to reach speeds comparable to Earth's gravitational pull require chemical reaction rockets, which expel reaction mass to achieve thrust, and thus the acceleration could only last for as long as a vessel had fuel. The vessel would also need to be constantly accelerating and at a constant speed to maintain the gravitational effect, and thus would not have gravity while stationary, and could experience significant swings in g-forces if the vessel were to accelerate above or below 1 g. Further, for point-to-point journeys, such as Earth-Mars transits, vessels would need to constantly accelerate for half the journey, turn off their engines, perform a 180° flip, reactivate their engines, and then begin decelerating towards the target destination, requiring everything inside the vessel to experience weightlessness and possibly be secured down for the duration of the flip.
A propulsion system with a very high specific impulse (that is, good efficiency in the use of reaction mass that must be carried along and used for propulsion on the journey) could accelerate more slowly producing useful levels of artificial gravity for long periods of time. A variety of electric propulsion systems provide examples. Two examples of this long-duration, low-thrust, high-impulse propulsion that have either been practically used on spacecraft or are planned in for near-term in-space use are Hall effect thrusters and Variable Specific Impulse Magnetoplasma Rockets (VASIMR). Both provide very high specific impulse but relatively low thrust, compared to the more typical chemical reaction rockets. They are thus ideally suited for long-duration firings which would provide limited amounts of, but long-term, milli-g levels of artificial gravity in spacecraft.
In a number of science fiction plots, acceleration is used to produce artificial gravity for interstellar spacecraft, propelled by as yet theoretical or hypothetical means.
This effect of linear acceleration is well understood, and is routinely used for 0 g cryogenic fluid management for post-launch (subsequent) in-space firings of upper stage rockets.
Roller coasters, especially launched roller coasters or those that rely on electromagnetic propulsion, can provide linear acceleration "gravity", and so can relatively high acceleration vehicles, such as sports cars. Linear acceleration can be used to provide air-time on roller coasters and other thrill rides.
Simulating lunar gravity
In January 2022, China was reported by the South China Morning Post to have built a small ( diameter) research facility to simulate low lunar gravity with the help of magnets. The facility was reportedly partly inspired by the work of Andre Geim (who later shared the 2010 Nobel Prize in Physics for his research on graphene) and Michael Berry, who both shared the Ig Nobel Prize in Physics in 2000 for the magnetic levitation of a frog.
Graviton control or generator
Speculative or fictional mechanisms
In science fiction, artificial gravity (or cancellation of gravity) or "paragravity" is sometimes present in spacecraft that are neither rotating nor accelerating. At present, there is no confirmed technique as such that can simulate gravity other than actual rotation or acceleration. There have been many claims over the years of such a device. Eugene Podkletnov, a Russian engineer, has claimed since the early 1990s to have made such a device consisting of a spinning superconductor producing a powerful "gravitomagnetic field." In 2006, a research group funded by ESA claimed to have created a similar device that demonstrated positive results for the production of gravitomagnetism, although it produced only 0.0001 g.
See also
References
External links
List of peer review papers on artificial gravity
TEDx talk about artificial gravity
Overview of artificial gravity in Sci-Fi and Space Science
NASA's Java simulation of artificial gravity
Variable Gravity Research Facility (xGRF), concept with tethered rotating satellites, perhaps a Bigelow expandable module and a spent upper stage as a counterweight
Gravity
Gravity
Space colonization
Scientific speculation
Space medicine
Rotation | Artificial gravity | [
"Physics"
] | 4,038 | [
"Physical phenomena",
"Motion (physics)",
"Classical mechanics",
"Rotation"
] |
1,138,322 | https://en.wikipedia.org/wiki/Argument%20principle | In complex analysis, the argument principle (or Cauchy's argument principle) is a theorem relating the difference between the number of zeros and poles of a meromorphic function to a contour integral of the function's logarithmic derivative.
Formulation
If f(z) is a meromorphic function inside and on some closed contour C, and f has no zeros or poles on C, then

∮_C f′(z)/f(z) dz = 2πi (Z − P)
where Z and P denote respectively the number of zeros and poles of f(z) inside the contour C, with each zero and pole counted as many times as its multiplicity and order, respectively, indicate. This statement of the theorem assumes that the contour C is simple, that is, without self-intersections, and that it is oriented counter-clockwise.
More generally, suppose that f(z) is a meromorphic function on an open set Ω in the complex plane and that C is a closed curve in Ω which avoids all zeros and poles of f and is contractible to a point inside Ω. For each point z ∈ Ω, let n(C,z) be the winding number of C around z. Then

(1/2πi) ∮_C f′(z)/f(z) dz = Σ_a n(C,a) − Σ_b n(C,b)
where the first summation is over all zeros a of f counted with their multiplicities, and the second summation is over the poles b of f counted with their orders.
Interpretation of the contour integral
The contour integral can be interpreted as 2πi times the winding number of the path f(C) around the origin, using the substitution w = f(z):

∮_C f′(z)/f(z) dz = ∮_{f(C)} dw/w
That is, it is i times the total change in the argument of f(z) as z travels around C, explaining the name of the theorem; this follows from

d/dz log(f(z)) = f′(z)/f(z)
and the relation between arguments and logarithms.
Proof of the argument principle
Let z_Z be a zero of f. We can write f(z) = (z − z_Z)^k g(z) where k is the multiplicity of the zero, and thus g(z_Z) ≠ 0. We get

f′(z) = k(z − z_Z)^(k−1) g(z) + (z − z_Z)^k g′(z)

and

f′(z)/f(z) = k/(z − z_Z) + g′(z)/g(z).

Since g(z_Z) ≠ 0, it follows that g′(z)/g(z) has no singularities at z_Z, and thus is analytic at z_Z, which implies that the residue of f′(z)/f(z) at z_Z is k.

Let z_P be a pole of f. We can write f(z) = (z − z_P)^(−m) h(z) where m is the order of the pole, and h(z_P) ≠ 0. Then,

f′(z) = −m(z − z_P)^(−m−1) h(z) + (z − z_P)^(−m) h′(z)

and

f′(z)/f(z) = −m/(z − z_P) + h′(z)/h(z)

similarly as above. It follows that h′(z)/h(z) has no singularities at z_P since h(z_P) ≠ 0 and thus it is analytic at z_P. We find that the residue of f′(z)/f(z) at z_P is −m.

Putting these together, each zero z_Z of multiplicity k of f creates a simple pole for f′(z)/f(z) with the residue being k, and each pole z_P of order m of f creates a simple pole for f′(z)/f(z) with the residue being −m. (Here, by a simple pole we mean a pole of order one.) In addition, it can be shown that f′(z)/f(z) has no other poles, and so no other residues.

By the residue theorem we have that the integral about C is the product of 2πi and the sum of the residues. Together, the sum of the k's for each zero z_Z is the number of zeros counting multiplicities of the zeros, and likewise for the poles, and so we have our result.
Applications and consequences
The argument principle can be used to efficiently locate zeros or poles of meromorphic functions on a computer. Even with rounding errors, the expression (1/2πi) ∮_C f′(z)/f(z) dz will yield results close to an integer; by determining these integers for different contours C one can obtain information about the location of the zeros and poles. Numerical tests of the Riemann hypothesis use this technique to get an upper bound for the number of zeros of Riemann's ξ(s) function inside a rectangle intersecting the critical line. The argument principle can also be used to prove Rouché's theorem, which can be used to bound the roots of polynomials.
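A minimal numerical sketch of this idea (the function, contour and sample count below are arbitrary choices): approximate the contour integral of f′/f around a circle and read off the nearly integer result Z − P.

```python
import numpy as np

def zeros_minus_poles(f, df, center=0j, radius=1.0, n=2000):
    """Approximate (1/(2*pi*i)) * integral of f'(z)/f(z) dz around a circle.
    The result is close to an integer: zeros minus poles inside the contour,
    counted with multiplicity and order respectively."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)   # z'(theta) * dtheta
    return np.sum(df(z) / f(z) * dz) / (2j * np.pi)

# f(z) = z**3 - 1 has three zeros and no poles inside |z| = 2.
print(zeros_minus_poles(lambda z: z**3 - 1, lambda z: 3 * z**2, radius=2.0))  # ~ (3+0j)
```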
A consequence of the more general formulation of the argument principle is that, under the same hypothesis, if g is an analytic function in Ω, then

(1/2πi) ∮_C g(z) f′(z)/f(z) dz = Σ_a n(C,a) g(a) − Σ_b n(C,b) g(b)
For example, if f is a polynomial having zeros z_1, ..., z_p inside a simple contour C, and g(z) = z^k, then

(1/2πi) ∮_C z^k f′(z)/f(z) dz = z_1^k + z_2^k + ⋯ + z_p^k

is the power sum symmetric polynomial of the roots of f.
Another consequence is if we compute the complex integral:
for an appropriate choice of g and f we have the Abel–Plana formula:

Σ_{n=0}^∞ f(n) − ∫_0^∞ f(x) dx = f(0)/2 + i ∫_0^∞ [f(it) − f(−it)] / (e^(2πt) − 1) dt
which expresses the relationship between a discrete sum and its integral.
The argument principle is also applied in control theory. In modern books on feedback control theory, it is commonly used as the theoretical foundation for the Nyquist stability criterion. Moreover, a more generalized form of the argument principle can be employed to derive Bode's sensitivity integral and other related integral relationships.
Generalized argument principle
There is an immediate generalization of the argument principle. Suppose that g is analytic in the region Ω. Then

(1/2πi) ∮_C g(z) f′(z)/f(z) dz = Σ_a g(a) n(C,a) − Σ_b g(b) n(C,b)
where the first summation is again over all zeros a of f counted with their multiplicities, and the second summation is again over the poles b of f counted with their orders.
History
According to the book by Frank Smithies (Cauchy and the Creation of Complex Function Theory, Cambridge University Press, 1997, p. 177), Augustin-Louis Cauchy presented a theorem similar to the above on 27 November 1831, during his self-imposed exile in Turin (then capital of the Kingdom of Piedmont-Sardinia) away from France. However, according to this book, only zeroes were mentioned, not poles. This theorem by Cauchy was only published many years later in 1874 in a hand-written form and so is quite difficult to read. Cauchy published a paper with a discussion on both zeroes and poles in 1855, two years before his death.
See also
Logarithmic derivative
Nyquist stability criterion
References
Backlund, R.-J. (1914) Sur les zéros de la fonction zeta(s) de Riemann, C. R. Acad. Sci. Paris 158, 1979–1982.
External links
Theorems in complex analysis | Argument principle | [
"Mathematics"
] | 1,339 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
1,138,512 | https://en.wikipedia.org/wiki/Prospective%20short-circuit%20current | The prospective short-circuit current (PSCC), available fault current, or short-circuit making current is the highest electric current which can exist in a particular electrical system under short-circuit conditions. It is determined by the voltage and impedance of the supply system. It is of the order of a few thousand amperes for a standard domestic mains electrical installation, but may be as low as a few milliamperes in a separated extra-low voltage (SELV) system or as high as hundreds of thousands of amps in large industrial power systems. The term is used in electrical engineering rather than electronics.
Protective devices such as circuit breakers and fuses must be selected with an interrupting rating that exceeds the prospective short-circuit current, if they are to safely protect the circuit from a fault. When a large electric current is interrupted an arc forms, and if the breaking capacity of a fuse or circuit breaker is exceeded, it will not extinguish the arc. Current will continue, resulting in damage to equipment, fire, or explosion.
Residential
In designing domestic power installations, the short-circuit current available on the electrical outlets should not be too high or too low. The effect of too high short-circuit current is discussed in the previous section. The short-circuit current should be around 20 times the rating of the circuit to ensure the branch circuit protection clears a fault quickly. Quick disconnecting is needed, because during a line-to-ground short circuit the grounding pin potential on the power outlet can rise relative to the local earth (concrete floor, water pipe etc.) to a dangerous voltage, which needs to be shut down quickly for safety. If the short-circuit current is lower than this figure, special precautions need to be taken to make sure that the system is safe; those usually include using a residual-current device (a.k.a. ground fault interrupter) for extra protection.
The short-circuit current available on the electrical outlets is often tested when inspecting new electrical installations to make sure that the short-circuit current is within reasonable limits. A high short-circuit current on the outlet also shows that the resistance from the electrical panel to the outlet is low, so there won't be an unacceptably high voltage drop on the wires under normal load.
The resistance path is the total resistance back through the supply transformer; to measure this an engineer will use an "earth fault loop impedance meter". The application of a low voltage allows a small current to pass from the socket back through earth to the supply transformer and distribution board. The resistance measured can be used to calculate the short-circuit current.
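For example (the voltage and impedance figures below are purely illustrative), the measured loop impedance converts directly into a prospective fault current:

```python
def prospective_short_circuit_current(supply_voltage_v: float, loop_impedance_ohm: float) -> float:
    """Worst-case fault current (A) implied by a measured earth fault loop impedance."""
    return supply_voltage_v / loop_impedance_ohm

# Illustrative figures: a 230 V circuit with a measured loop impedance of 0.35 ohm.
print(f"{prospective_short_circuit_current(230, 0.35):.0f} A")   # about 657 A
```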
Utility and industrial
In power transmission systems and industrial power systems, often the short-circuit current is calculated from the nameplate impedances of connected equipment and the impedance of interconnecting wiring. For simple radial distribution systems with only a few elements, hand calculation is feasible, but computer software is generally used for more complex systems. Where rotating machines (generators and motors) are present in the system, the time-varying effect of their contribution to a short circuit may be evaluated. Stored energy in a generator may contribute much more current to a short circuit in the first few cycles than later on; this affects the interrupting rating selected for circuit breakers and fuses. An isolated generator may be specially designed to ensure that it can source enough current on a short circuit to allow subordinate overcurrent protection devices to operate properly.
Where an industrial system is fed from an electrical utility, the short circuit level at the point of connection may be specified, often with minimum and maximum values or values to be expected after system growth. This allows calculation by an industrial customer of its internal fault levels within its plant. If the prospective short-circuit current from the utility source is very large compared to the customer's system size, an "infinite bus" is assumed, with zero effective internal impedance; the only limit to the prospective short-circuit current is then the impedances after the defined "infinite bus".
In polyphase electrical systems, generally phase-to-phase, phase-to-ground (earth), and phase-to-neutral faults are examined, as well as a case where all three phases are short-circuited. Because impedances of cables or devices varies between phases, the prospective short-circuit current varies depending on the type of fault. Protection devices in the system must respond to all three cases. The method of symmetrical components is used to simplify analysis of unsymmetrical faults in three-phase systems.
See also
Current limiting reactor
References
Further reading
Electric power | Prospective short-circuit current | [
"Physics",
"Engineering"
] | 944 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
1,138,578 | https://en.wikipedia.org/wiki/Crystal%20field%20theory | In molecular physics, crystal field theory (CFT) describes the breaking of degeneracies of electron orbital states, usually d or f orbitals, due to a static electric field produced by a surrounding charge distribution (anion neighbors). This theory has been used to describe various spectroscopies of transition metal coordination complexes, in particular optical spectra (colors). CFT successfully accounts for some magnetic properties, colors, hydration enthalpies, and spinel structures of transition metal complexes, but it does not attempt to describe bonding. CFT was developed by physicists Hans Bethe and John Hasbrouck van Vleck in the 1930s. CFT was subsequently combined with molecular orbital theory to form the more realistic and complex ligand field theory (LFT), which delivers insight into the process of chemical bonding in transition metal complexes. CFT can be complicated further by breaking assumptions made of relative metal and ligand orbital energies, requiring the use of inverted ligand field theory (ILFT) to better describe bonding.
Overview
According to crystal field theory, the interaction between a transition metal and ligands arises from the attraction between the positively charged metal cation and the negative charge on the non-bonding electrons of the ligand. The theory is developed by considering energy changes of the five degenerate d-orbitals upon being surrounded by an array of point charges consisting of the ligands. As a ligand approaches the metal ion, the electrons from the ligand will be closer to some of the d-orbitals and farther away from others, causing a loss of degeneracy. The electrons in the d-orbitals and those in the ligand repel each other due to repulsion between like charges. Thus the d-electrons closer to the ligands will have a higher energy than those further away which results in the d-orbitals splitting in energy. This splitting is affected by the following factors:
the nature of the metal ion.
the metal's oxidation state. A higher oxidation state leads to a larger splitting relative to the spherical field.
the arrangement of the ligands around the metal ion.
the coordination number of the metal (i.e. tetrahedral, octahedral...)
the nature of the ligands surrounding the metal ion. The stronger the effect of the ligands then the greater the difference between the high and low energy d groups.
The most common type of complex is octahedral, in which six ligands form the vertices of an octahedron around the metal ion. In octahedral symmetry the d-orbitals split into two sets with an energy difference, Δoct (the crystal-field splitting parameter, also commonly denoted by 10Dq for ten times the "differential of quanta") where the dxy, dxz and dyz orbitals will be lower in energy than the dz2 and dx2-y2, which will have higher energy, because the former group is farther from the ligands than the latter and therefore experiences less repulsion. The three lower-energy orbitals are collectively referred to as t2g, and the two higher-energy orbitals as eg. These labels are based on the theory of molecular symmetry: they are the names of irreducible representations of the octahedral point group, Oh.(see the Oh character table) Typical orbital energy diagrams are given below in the section High-spin and low-spin.
Tetrahedral complexes are the second most common type; here four ligands form a tetrahedron around the metal ion. In a tetrahedral crystal field splitting, the d-orbitals again split into two groups, with an energy difference of Δtet. The lower energy orbitals will be dz2 and dx2-y2, and the higher energy orbitals will be dxy, dxz and dyz - opposite to the octahedral case. Furthermore, since the ligand electrons in tetrahedral symmetry are not oriented directly towards the d-orbitals, the energy splitting will be lower than in the octahedral case. Square planar and other complex geometries can also be described by CFT.
The size of the gap Δ between the two or more sets of orbitals depends on several factors, including the ligands and geometry of the complex. Some ligands always produce a small value of Δ, while others always give a large splitting. The reasons behind this can be explained by ligand field theory. The spectrochemical series is an empirically-derived list of ligands ordered by the size of the splitting Δ that they produce (small Δ to large Δ; see also this table):
I− < Br− < S2− < SCN− (S–bonded) < Cl− < NO3− < N3− < F− < OH− < C2O42− < H2O < NCS− (N–bonded) < CH3CN < py < NH3 < en < 2,2'-bipyridine < phen < NO2− < PPh3 < CN− < CO.
It is useful to note that the ligands producing the most splitting are those that can engage in metal to ligand back-bonding.
The oxidation state of the metal also contributes to the size of Δ between the high and low energy levels. As the oxidation state increases for a given metal, the magnitude of Δ increases. A V3+ complex will have a larger Δ than a V2+ complex for a given set of ligands, as the difference in charge density allows the ligands to be closer to a V3+ ion than to a V2+ ion. The smaller distance between the ligand and the metal ion results in a larger Δ, because the ligand and metal electrons are closer together and therefore repel more.
High-spin and low-spin
Ligands which cause a large splitting Δ of the d-orbitals are referred to as strong-field ligands, such as CN− and CO from the spectrochemical series. In complexes with these ligands, it is unfavourable to put electrons into the high energy orbitals. Therefore, the lower energy orbitals are completely filled before population of the upper sets starts according to the Aufbau principle. Complexes such as this are called "low spin". For example, NO2− is a strong-field ligand and produces a large Δ. The octahedral ion [Fe(NO2)6]3−, which has 5 d-electrons, would have the octahedral splitting diagram shown at right with all five electrons in the t2g level. This low spin state therefore does not follow Hund's rule.
Conversely, ligands (like I− and Br−) which cause a small splitting Δ of the d-orbitals are referred to as weak-field ligands. In this case, it is easier to put electrons into the higher energy set of orbitals than it is to put two into the same low-energy orbital, because two electrons in the same orbital repel each other. So, one electron is put into each of the five d-orbitals in accord with Hund's rule, and "high spin" complexes are formed before any pairing occurs. For example, Br− is a weak-field ligand and produces a small Δoct. So, the ion [FeBr6]3−, again with five d-electrons, would have an octahedral splitting diagram where all five orbitals are singly occupied.
In order for low spin splitting to occur, the energy cost of placing an electron into an already singly occupied orbital must be less than the cost of placing the additional electron into an eg orbital at an energy cost of Δ. As noted above, eg refers to the
dz2 and dx2-y2 orbitals, which are higher in energy than the t2g in octahedral complexes. If the energy required to pair two electrons is greater than Δ (the energy cost of placing an electron in an eg orbital), high-spin splitting occurs.
The crystal field splitting energy for tetrahedral metal complexes (four ligands) is referred to as Δtet, and is roughly equal to 4/9Δoct (for the same metal and same ligands). Therefore, the energy required to pair two electrons is typically higher than the energy required for placing electrons in the higher energy orbitals. Thus, tetrahedral complexes are usually high-spin.
The use of these splitting diagrams can aid in the prediction of magnetic properties of co-ordination compounds. A compound that has unpaired electrons in its splitting diagram will be paramagnetic and will be attracted by magnetic fields, while a compound that lacks unpaired electrons in its splitting diagram will be diamagnetic and will be weakly repelled by a magnetic field.
Stabilization energy
The crystal field stabilization energy (CFSE) is the stability that results from placing a transition metal ion in the crystal field generated by a set of ligands. It arises due to the fact that when the d-orbitals are split in a ligand field (as described above), some of them become lower in energy than before with respect to a spherical field known as the barycenter in which all five d-orbitals are degenerate. For example, in an octahedral case, the t2g set becomes lower in energy than the orbitals in the barycenter. As a result of this, if there are any electrons occupying these orbitals, the metal ion is more stable in the ligand field relative to the barycenter by an amount known as the CFSE. Conversely, the eg orbitals (in the octahedral case) are higher in energy than in the barycenter, so putting electrons in these reduces the amount of CFSE.
If the splitting of the d-orbitals in an octahedral field is Δoct, the three t2g orbitals are stabilized relative to the barycenter by 2/5 Δoct, and the eg orbitals are destabilized by 3/5 Δoct. As examples, consider the two d5 configurations shown further up the page. The low-spin (top) example has five electrons in the t2g orbitals, so the total CFSE is 5 x 2/5 Δoct = 2Δoct. In the high-spin (lower) example, the CFSE is (3 x 2/5 Δoct) - (2 x 3/5 Δoct) = 0 - in this case, the stabilization generated by the electrons in the lower orbitals is canceled out by the destabilizing effect of the electrons in the upper orbitals.
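The orbital bookkeeping in the previous paragraph is easy to automate. The sketch below is a simplification that ignores pairing energy and simply fills orbitals in the stated order; it reproduces the two d5 values worked out above:

```python
def octahedral_cfse(d_electrons: int, low_spin: bool) -> float:
    """Crystal field stabilization energy in units of delta_oct (positive = net stabilization).
    Low spin fills the three t2g orbitals (up to 6 electrons) before eg;
    high spin singly occupies all five orbitals first, then pairs in the same order."""
    if low_spin:
        t2g = min(d_electrons, 6)
    else:
        singles = min(d_electrons, 5)
        pairs = d_electrons - singles
        t2g = min(singles, 3) + min(pairs, 3)
    eg = d_electrons - t2g
    return 0.4 * t2g - 0.6 * eg   # each t2g electron: +2/5, each eg electron: -3/5

print(octahedral_cfse(5, low_spin=True))    # 2.0  (the low-spin d5 case above)
print(octahedral_cfse(5, low_spin=False))   # 0.0  (the high-spin d5 case above)
```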
Optical properties
The optical properties (details of absorption and emission spectra) of many coordination complexes can be explained by Crystal Field Theory. Often, however, the deeper colors of metal complexes arise from more intense charge-transfer excitations.
Geometries and splitting diagrams
See also
Schottky anomaly — low temperature spike in heat capacity seen in materials containing high-spin magnetic impurities, often due to crystal field splitting
Ligand field theory
Molecular orbital theory
References
Further reading
External links
Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012,
Condensed matter physics
Inorganic chemistry
Chemical bonding
Coordination chemistry
Transition metals
de:Kristallfeld- und Ligandenfeldtheorie#Kristallfeldtheorie | Crystal field theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,396 | [
"Coordination chemistry",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Matter"
] |
12,431,124 | https://en.wikipedia.org/wiki/Process%20window%20index | Process window index (PWI) is a statistical measure that quantifies the robustness of a manufacturing process, e.g. one which involves heating and cooling, known as a thermal process. In manufacturing industry, PWI values are used to calibrate the heating and cooling of soldering jobs (known as a thermal profile) while baked in a reflow oven.
PWI measures how well a process fits into a user-defined process limit known as the specification limit. The specification limit is the tolerance allowed for the process and may be statistically determined. Industrially, these specification limits are known as the process window, and values that are plotted inside or outside this window are known as the process window index.
Using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
Statistical process control
Process capability is the ability of a process to produce output within specified limits. To help determine whether a manufacturing or business process is in a state of statistical control, process engineers use control charts, which help to predict the future performance of the process based on the current process.
To help determine the capability of a process, statistically determined upper and lower limits are drawn on either side of a process mean on the control chart. The control limits are set at three standard deviations on either side of the process mean, and are known as the upper control limit (UCL) and lower control limit (LCL) respectively. If the process data plotted on the control chart remains within the control limits over an extended period, then the process is said to be stable.
The tolerance values specified by the end-user are known as specification limits – the upper specification limit (USL) and lower specification limit (LSL). If the process data plotted on a control chart remains within these specification limits, then the process is considered a capable process, denoted by Ĉpk.
The manufacturing industry has developed customized specification limits known as process windows. Within this process window, values are plotted. The values relative to the process mean of the window are known as the process window index. By using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
Control limits
Control limits, also known as natural process limits, are horizontal lines drawn on a statistical process control chart, usually at a distance of ±3 standard deviations of the plotted statistic's mean, used to judge the stability of a process.
Control limits should not be confused with tolerance limits or specifications, which are completely independent of the distribution of the plotted sample statistic. Control limits describe what a process is capable of producing (sometimes referred to as the "voice of the process"), while tolerances and specifications describe how the product should perform to meet the customer's expectations (referred to as the "voice of the customer").
Use
Control limits are used to detect signals in process data that indicate that a process is not in control and, therefore, not operating predictably. A value in excess of the control limit indicates a special cause is affecting the process.
To detect signals, one of several rule sets may be used. One specification outlines that a signal is defined as any single point outside of the control limits. A process is also considered out of control if there are seven consecutive points, still inside the control limits but on one single side of the mean.
For normally distributed statistics, the area bracketed by the control limits will on average contain 99.73% of all the plot points on the chart, as long as the process is and remains in statistical control. A false-detection rate of at least 0.27% is therefore expected.
It is often not known whether a particular process generates data that conform to particular distributions, but the Chebyshev's inequality and the Vysochanskij–Petunin inequality allow the inference that for any unimodal distribution at least 95% of the data will be encapsulated by limits placed at 3 sigma.
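A naive sketch of placing control limits at ±3 standard deviations of the mean (real control charts usually estimate the standard deviation from rational subgroups; the sample readings below are invented):

```python
import statistics

def control_limits(samples: list[float]) -> tuple[float, float]:
    """Lower and upper control limits at +/- 3 standard deviations of the mean."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mean - 3 * sigma, mean + 3 * sigma

# Invented peak-temperature readings (deg C) from successive runs of a process.
lcl, ucl = control_limits([218.2, 219.1, 217.8, 218.6, 219.4, 218.0, 218.9])
print(f"LCL = {lcl:.1f} degC, UCL = {ucl:.1f} degC")
```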
PWI in electronics manufacturing
An example of a process to which the PWI concept may be applied is soldering. In soldering, a thermal profile is the set of time-temperature values for a variety of processes such as slope, thermal soak, reflow, and peak.
Each thermal profile is ranked on how it fits in a process window (the specification or tolerance limit). Raw temperature values are normalized in terms of a percentage relative to both the process mean and the window limits. The center of the process window is defined as zero, and the extreme edges of the process window are ±99%. A PWI greater than or equal to 100% indicates that the profile does not process the product within specification. A PWI of 99% indicates that the profile runs at the edge of the process window. For example, if the process mean is set at 200 °C, with the process window calibrated at 180 °C and 220 °C respectively, then a measured value of 188 °C translates to a process window index of −60%. A lower PWI value indicates a more robust profile. For maximum efficiency, separate PWI values are computed for peak, slope, reflow, and soak processes of a thermal profile.
To avoid thermal shock affecting production, the steepest slope in the thermal profile is determined and leveled. Manufacturers use custom-built software to accurately determine and decrease the steepness of the slope. In addition, the software also automatically recalibrates the PWI values for the peak, slope, reflow, and soak processes. By setting PWI values, engineers can ensure that the reflow soldering work does not overheat or cool too quickly.
Formula
The PWI is calculated as the worst case (i.e. highest number) in the set of thermal profile data. For each profile statistic the percentage used of the respective process window is calculated, and the worst case (i.e. highest percentage) is the PWI.
For example, a thermal profile with three thermocouples, with four profile statistics logged for each thermocouple, would have a set of twelve statistics for that thermal profile. In this case, the PWI would be the highest value among the twelve percentages of the respective process windows.
The formula to calculate PWI is:

PWI = 100 × max over i = 1..N, j = 1..M of { |measured value [i, j] − average limits [i, j]| / (range [i, j] / 2) }

where:
i = 1 to N (number of thermocouples)
j = 1 to M (number of statistics per thermocouple)
measured value [i, j] = the [i, j]th statistic's measured value
average limits [i, j] = the average of the high and low (specified) limits of the [i, j]th statistic
range [i, j] = the high limit minus the low limit of the [i, j]th statistic
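A minimal sketch of this worst-case calculation (the list-of-lists layout is an assumption for illustration): for the 180–220 °C window and 188 °C reading used earlier it returns 60, matching the magnitude of the −60% example:

```python
def process_window_index(measured, low_limits, high_limits):
    """PWI (%) as the worst-case fraction of the process window used, taken over
    all thermocouples i and profile statistics j."""
    worst = 0.0
    for i in range(len(measured)):
        for j in range(len(measured[i])):
            centre = (high_limits[i][j] + low_limits[i][j]) / 2
            half_range = (high_limits[i][j] - low_limits[i][j]) / 2
            worst = max(worst, abs(measured[i][j] - centre) / half_range)
    return 100.0 * worst

# One thermocouple, one statistic: 188 degC against a 180-220 degC window.
print(process_window_index([[188.0]], [[180.0]], [[220.0]]))   # 60.0
```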
See also
Acceptable quality limit
Reflow soldering
Wave soldering
References
Electronics manufacturing
Industrial_processes
Brazing and soldering
Statistical charts and diagrams
Statistical process control
Statistical distance
Quality control | Process window index | [
"Physics",
"Engineering"
] | 1,451 | [
"Physical quantities",
"Statistical distance",
"Distance",
"Statistical process control",
"Electronic engineering",
"Engineering statistics",
"Electronics manufacturing"
] |
12,433,418 | https://en.wikipedia.org/wiki/Filtering%20problem%20%28stochastic%20processes%29 | In the theory of stochastic processes, filtering describes the problem of determining the state of a system from an incomplete and potentially noisy set of observations. While originally motivated by problems in engineering, filtering found applications in many fields from signal processing to finance.
The problem of optimal non-linear filtering (even for the non-stationary case) was solved by Ruslan L. Stratonovich (1959, 1960), see also Harold J. Kushner's work and Moshe Zakai's, who introduced a simplified dynamics for the unnormalized conditional law of the filter known as the Zakai equation. The solution, however, is infinite-dimensional in the general case. Certain approximations and special cases are well understood: for example, the linear filters are optimal for Gaussian random variables, and are known as the Wiener filter and the Kalman-Bucy filter. More generally, as the solution is infinite dimensional, it requires finite dimensional approximations to be implemented in a computer with finite memory. A finite dimensional approximated nonlinear filter may be more based on heuristics, such as the extended Kalman filter or the assumed density filters, or more methodologically oriented such as for example the projection filters, some sub-families of which are shown to coincide with the Assumed Density Filters.
Particle filters are another option to attack the infinite dimensional filtering problem and are based on sequential Monte Carlo methods.
In general, if the separation principle applies, then filtering also arises as part of the solution of an optimal control problem. For example, the Kalman filter is the estimation part of the optimal control solution to the linear-quadratic-Gaussian control problem.
The mathematical formalism
Consider a probability space (Ω, Σ, P) and suppose that the (random) state Yt in n-dimensional Euclidean space Rn of a system of interest at time t is a random variable Yt : Ω → Rn given by the solution to an Itō stochastic differential equation of the form

dY_t = b(t, Y_t) dt + σ(t, Y_t) dB_t
where B denotes standard p-dimensional Brownian motion, b : [0, +∞) × Rn → Rn is the drift field, and σ : [0, +∞) × Rn → Rn×p is the diffusion field. It is assumed that observations Ht in Rm (note that m and n may, in general, be unequal) are taken for each time t according to

H_t = c(t, Y_t) + γ(t, Y_t) · (white noise)
Adopting the Itō interpretation of the stochastic differential and setting

Z_t = ∫_0^t H_s ds,
this gives the following stochastic integral representation for the observations Zt:

Z_t = ∫_0^t c(s, Y_s) ds + ∫_0^t γ(s, Y_s) dW_s
where W denotes standard r-dimensional Brownian motion, independent of B and the initial condition Y0, and c : [0, +∞) × Rn → Rn and γ : [0, +∞) × Rn → Rn×r satisfy

|c(t, x)| + |γ(t, x)| ≤ C (1 + |x|)
for all t and x and some constant C.
The filtering problem is the following: given observations Zs for 0 ≤ s ≤ t, what is the best estimate Ŷt of the true state Yt of the system based on those observations?
By "based on those observations" it is meant that Ŷt is measurable with respect to the σ-algebra Gt generated by the observations Zs, 0 ≤ s ≤ t. Denote by K = K(Z, t) the collection of all Rn-valued random variables Y that are square-integrable and Gt-measurable:

K(Z, t) = { Y : Ω → Rn | Y is Gt-measurable and E[|Y|²] < +∞ } = L2(Ω, Gt, P; Rn)
By "best estimate", it is meant that Ŷt minimizes the mean-square distance between Yt and all candidates in K:

E[ |Y_t − Ŷ_t|² ] = inf { E[ |Y_t − Y|² ] : Y ∈ K(Z, t) }.    (M)
Basic result: orthogonal projection
The space K(Z, t) of candidates is a Hilbert space, and the general theory of Hilbert spaces implies that the solution Ŷt of the minimization problem (M) is given by

Ŷ_t = P_{K(Z,t)}( Y_t )

where PK(Z,t) denotes the orthogonal projection of L2(Ω, Σ, P; Rn) onto the linear subspace K(Z, t) = L2(Ω, Gt, P; Rn). Furthermore, it is a general fact about conditional expectations that if F is any sub-σ-algebra of Σ then the orthogonal projection

P_K : L2(Ω, Σ, P; Rn) → L2(Ω, F, P; Rn)

is exactly the conditional expectation operator E[·|F], i.e.,

P_K( X ) = E[ X | F ]

Hence,

Ŷ_t = P_{K(Z,t)}( Y_t ) = E[ Y_t | G_t ].
This elementary result is the basis for the general Fujisaki-Kallianpur-Kunita equation of filtering theory.
More advanced result: nonlinear filtering SPDE
The complete knowledge of the filter at a time t would be given by the probability law of the signal Yt conditional on the sigma-field Gt generated by observations Z up to time t. If this probability law admits a density, informally

p_t(y) dy = P( Y_t ∈ dy | G_t ),
then under some regularity assumptions the density p_t(y) satisfies a non-linear stochastic partial differential equation (SPDE) driven by the observations Z and called the Kushner-Stratonovich equation, or an unnormalized version q_t(y) of the density satisfies a linear SPDE called the Zakai equation.
These equations can be formulated for the above system, but to simplify the exposition one can assume that the unobserved signal Y and the partially observed noisy signal Z satisfy the equations

dY_t = b(t, Y_t) dt + σ(t, Y_t) dB_t
dZ_t = c(t, Y_t) dt + dW_t
In other terms, the system is simplified by assuming that the observation noise W is not state dependent.
One might keep a deterministic time-dependent coefficient in front of dW_t, but we assume this has been taken out by re-scaling.
For this particular system, the Kushner-Stratonovich SPDE for the density reads

d p_t(y) = L_t* p_t(y) dt + p_t(y) [ c(t, y) − E_{p_t}( c(t, ·) ) ]^T [ dZ_t − E_{p_t}( c(t, ·) ) dt ]
where T denotes transposition, E_{p_t}( · ) denotes the expectation with respect to the density p,
and the forward diffusion operator is

L_t* p = − Σ_i ∂/∂y_i [ b_i(t, y) p ] + (1/2) Σ_{i,j} ∂²/(∂y_i ∂y_j) [ a_{ij}(t, y) p ]

where a = σ σ^T.
If we choose the unnormalized density q_t(y), the Zakai SPDE for the same system reads

d q_t(y) = L_t* q_t(y) dt + q_t(y) c(t, y)^T dZ_t
These SPDEs for p and q are written in Ito calculus form. It is possible to write them in Stratonovich calculus form, which turns out to be helpful when deriving filtering approximations based on differential geometry, as in the projection filters.
For example, the Kushner-Stratonovich equation written in Stratonovich calculus reads
From any of the densities p and q one can calculate all statistics of the signal Yt conditional on the sigma-field generated by observations Z up to time t, so that the densities give complete knowledge of the filter. Under the particular linear-constant assumptions with respect to Y, where the systems coefficients b and c are linear functions of Y and where and do not depend on Y, with the initial condition for the signal Y being Gaussian or deterministic, the density is Gaussian and it can be characterized by its mean and variance-covariance matrix, whose evolution is described by the Kalman-Bucy filter, which is finite dimensional. More generally, the evolution of the filter density occurs in an infinite-dimensional function space, and it has to be approximated via a finite dimensional approximation, as hinted above.
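As a concrete illustration of the finite-dimensional linear case, the following sketch runs a one-dimensional Kalman-Bucy filter with an Euler discretisation and arbitrary illustrative coefficients (a simulation sketch only, not part of the theory above):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.001, 5000
a, sigma, c = -0.5, 0.4, 1.0          # dY = a*Y dt + sigma dB,   dZ = c*Y dt + dW

y, m, P = 1.0, 0.0, 1.0               # true state, filter mean, filter variance
for _ in range(steps):
    y += a * y * dt + sigma * np.sqrt(dt) * rng.standard_normal()     # simulate signal
    dZ = c * y * dt + np.sqrt(dt) * rng.standard_normal()             # simulate observation
    # Kalman-Bucy equations, Euler-discretised (unit observation-noise variance):
    m += a * m * dt + P * c * (dZ - c * m * dt)
    P += (2 * a * P + sigma**2 - (P * c)**2) * dt

print(f"true state {y:.3f}, estimate {m:.3f}, filter variance {P:.4f}")
```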
See also
The smoothing problem, closely related to the filtering problem
Filter (signal processing)
Kalman filter, a well-known filtering algorithm for linear systems, related both to the filtering problem and the smoothing problem
Extended Kalman filter, an extension of the Kalman filter to nonlinear systems
Smoothing
Projection filters
Particle filters
References
Further reading
(See Section 6.1)
Control theory
Signal estimation
Stochastic differential equations | Filtering problem (stochastic processes) | [
"Mathematics"
] | 1,459 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
12,435,391 | https://en.wikipedia.org/wiki/Residuated%20Boolean%20algebra | In mathematics, a residuated Boolean algebra is a residuated lattice whose lattice structure is that of a Boolean algebra. Examples include Boolean algebras with the monoid taken to be conjunction, the set of all formal languages over a given alphabet Σ under concatenation, the set of all binary relations on a given set X under relational composition, and more generally the power set of any equivalence relation, again under relational composition. The original application was to relation algebras as a finitely axiomatized generalization of the binary relation example, but there exist interesting examples of residuated Boolean algebras that are not relation algebras, such as the language example.
Definition
A residuated Boolean algebra is an algebraic structure such that
An equivalent signature better suited to the relation algebra application is where the unary operations x\ and x▷ are intertranslatable in the manner of De Morgan's laws via
x\y = ¬(x▷¬y), x▷y = ¬(x\¬y),
and dually /y and ◁y as
x/y = ¬(¬x◁y), x◁y = ¬(¬x/y),
with the residuation axioms in the residuated lattice article reorganized accordingly (replacing z by ¬z) to read
y ≤ ¬(x▷z) ⇔ x•y ≤ ¬z ⇔ x ≤ ¬(z◁y)
This De Morgan dual reformulation is motivated and discussed in more detail in the section below on conjugacy.
Since residuated lattices and Boolean algebras are each definable with finitely many equations, so are residuated Boolean algebras, whence they form a finitely axiomatizable variety.
Examples
Any Boolean algebra, with the monoid multiplication • taken to be conjunction and both residuals taken to be material implication x→y. Of the remaining 15 binary Boolean operations that might be considered in place of conjunction for the monoid multiplication, only five meet the monotonicity requirement, namely 0, 1, x, y, and x∨y. Setting y = z = 0 in the residuation axiom y ≤ x\z ⇔ x•y ≤ z, we have 0 ≤ x\0 ⇔ x•0 ≤ 0, which is falsified by taking x = 1 when x•y = 1, x, or x∨y. The dual argument for z/y rules out x•y = y. This just leaves x•y = 0 (a constant binary operation independent of x and y), which satisfies almost all the axioms when the residuals are both taken to be the constant operation x/y = x\y = 1. The axiom it fails is , for want of a suitable value for . Hence conjunction is the only binary Boolean operation making the monoid multiplication that of a residuated Boolean algebra.
The power set 2X2 made a Boolean algebra as usual with ∩, ∪ and complement relative to X2, and made a monoid with relational composition. The monoid unit is the identity relation {(x,x)|x ∈ X}. The right residual R\S is defined by x(R\S)y if and only if for all z in X, zRx implies zSy. Dually the left residual S/R is defined by y(S/R)x if and only if for all z in X, xRz implies ySz.
The power set 2Σ* made a Boolean algebra as for Example 2, but with language concatenation for the monoid. Here the set Σ is used as an alphabet while Σ* denotes the set of all finite (including empty) words over that alphabet. The concatenation LM of languages L and M consists of all words uv such that u ∈ L and v ∈ M. The monoid unit is the language {ε} consisting of just the empty word ε. The right residual M\L consists of all words w over Σ such that Mw ⊆ L. The left residual L/M is the same with wM in place of Mw.
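The relational residuals of Example 2 can be checked computationally: on a finite set the right residual and relational composition can be enumerated directly, and the residuation law Q ⊆ R\S ⇔ R•Q ⊆ S can be verified over every relation Q. The set X and the relations R and S in the sketch below are arbitrary illustrative choices.

```python
from itertools import product

X = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}                 # illustrative relations on X
S = {(0, 1), (0, 2), (1, 2), (2, 2)}

def compose(R, Q):
    """Relational composition: x (R;Q) y iff there is z with x R z and z Q y."""
    return {(x, y) for (x, z) in R for (w, y) in Q if z == w}

def right_residual(R, S):
    """R\\S as in Example 2: x (R\\S) y iff for all z in X, z R x implies z S y."""
    return {(x, y) for x, y in product(X, X)
            if all((z, y) in S for z in X if (z, x) in R)}

def all_relations(X):
    pairs = list(product(X, X))
    for mask in range(1 << len(pairs)):
        yield {p for i, p in enumerate(pairs) if (mask >> i) & 1}

# Residuation law:  Q <= R\S  iff  R;Q <= S,  checked for every relation Q on X.
print(all((Q <= right_residual(R, S)) == (compose(R, Q) <= S)
          for Q in all_relations(X)))        # True
```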
Conjugacy
The De Morgan duals ▷ and ◁ of residuation arise as follows. Among residuated lattices, Boolean algebras are special by virtue of having a complementation operation ¬. This permits an alternative expression of the three inequalities
y ≤ x\z ⇔ x•y ≤ z ⇔ x ≤ z/y
in the axiomatization of the two residuals in terms of disjointness, via the equivalence x ≤ y ⇔ x∧¬y = 0. Abbreviating x∧y = 0 to x # y as the expression of their disjointness, and substituting ¬z for z in the axioms, they become with a little Boolean manipulation
¬(x\¬z) # y ⇔ x•y # z ⇔ ¬(¬z/y) # x
Now ¬(x\¬z) is reminiscent of De Morgan duality, suggesting that x\ be thought of as a unary operation f, defined by f(y) = x\y, that has a De Morgan dual ¬f(¬y), analogous to ∀xφ(x) = ¬∃x¬φ(x). Denoting this dual operation as x▷, we define x▷z as ¬(x\¬z). Similarly we define another operation z◁y as ¬(¬z/y). By analogy with x\ as the residual operation associated with the operation x•, we refer to x▷ as the conjugate operation, or simply conjugate, of x•. Likewise ◁y is the conjugate of •y. Unlike residuals, conjugacy is an equivalence relation between operations: if f is the conjugate of g then g is also the conjugate of f, i.e. the conjugate of the conjugate of f is f. Another advantage of conjugacy is that it becomes unnecessary to speak of right and left conjugates, that distinction now being inherited from the difference between x• and •x, which have as their respective conjugates x▷ and ◁x. (But this advantage accrues also to residuals when x\ is taken to be the residual operation to x•.)
All this yields (along with the Boolean algebra and monoid axioms) the following equivalent axiomatization of a residuated Boolean algebra.
y # x▷z ⇔ x•y # z ⇔ x # z◁y
With this signature it remains the case that this axiomatization can be expressed as finitely many equations.
Converse
In Examples 2 and 3 it can be shown that . In Example 2 both sides equal the converse x˘ of x, while in Example 3, both sides are when x contains the empty word and 0 otherwise. In the former case x˘ = x. This is impossible for the latter because retains hardly any information about x. Hence in Example 2 we can substitute x˘ for x in and cancel (soundly) to give
.
x˘˘ = x can be proved from these two equations. Tarski's notion of a relation algebra can be defined as a residuated Boolean algebra having an operation x˘ satisfying these two equations.
The cancellation step in the above is not possible for Example 3, which therefore is not a relation algebra, x˘ being uniquely determined as .
Consequences of this axiomatization of converse include x˘˘ = x, ¬(x˘) = (¬x)˘, , and (x•y)˘ = y˘•x˘.
References
Bjarni Jónsson and Constantine Tsinakis, Relation algebras as residuated Boolean algebras, Algebra Universalis, 30 (1993) 469-478.
Peter Jipsen, Computer aided investigations of relation algebras, Ph.D. Thesis, Vanderbilt University, May 1992.
Boolean algebra
Mathematical logic
Fuzzy logic
Algebraic logic | Residuated Boolean algebra | [
"Mathematics"
] | 1,605 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic",
"Algebraic logic"
] |
5,921,769 | https://en.wikipedia.org/wiki/Human%20artificial%20chromosome | A human artificial chromosome (HAC) is a microchromosome that can act as a new chromosome in a population of human cells. That is, instead of 46 chromosomes, the cell could have 47 with the 47th being very small, roughly 6–10 megabases (Mb) in size instead of 50–250 Mb for natural chromosomes, and able to carry new genes introduced by human researchers. Ideally, researchers could integrate different genes that perform a variety of functions, including disease defense.
Alternative methods of creating transgenes, such as utilizing yeast artificial chromosomes and bacterial artificial chromosomes, lead to unpredictable problems. The genetic material introduced by these vectors not only leads to different expression levels, but the inserts also disrupt the original genome. HACs differ in this regard, as they are entirely separate chromosomes. This separation from existing genetic material assumes that no insertional mutants would arise. This stability and accuracy makes HACs preferable to other methods such as viral vectors, YACs, and BACs. HACs allow for delivery of more DNA (including promoters and copy-number variation) than is possible with viral vectors.
Yeast artificial chromosomes and bacterial artificial chromosomes were created before human artificial chromosomes, which were first developed in 1997. HACs are useful in expression studies as gene transfer vectors, as a tool for elucidating human chromosome function, and as a method for actively annotating the human genome.
History
HACs were first constructed de novo in 1997 by adding alpha-satellite DNA to telomeric and genomic DNA in human HT1080 cells. This resulted in an entirely new microchromosome that contained DNA of interest, as well as elements allowing it to be structurally and mitotically stable, such as telomeric and centromeric sequences. Due to the difficulty of de novo HAC formation, this method has largely been abandoned.
Construction methods
There are currently two accepted models for the creation of human artificial chromosome vectors. The first is to create a small minichromosome by altering a natural human chromosome. This is accomplished by truncating the natural chromosome, followed by the introduction of unique genetic material via the Cre-Lox system of recombination. The second method involves the literal creation of a novel chromosome de novo. Progress regarding de novo HAC formation has been limited, as many large genomic fragments will not successfully integrate into de novo vectors. Another factor limiting de novo vector formation is limited knowledge of what elements are required for construction, specifically centromeric sequences. Challenges involving centromeric sequences are being overcome.
Applications
A 2009 study has shown additional benefits of HACs, namely their ability to stably contain extremely large genomic fragments. Researchers incorporated the 2.4Mb dystrophin gene, in which a mutation is a key causal element of Duchenne muscular dystrophy. The resulting HAC was mitotically stable, and correctly expressed dystrophin in chimeric mice. Previous attempts at correctly expressing dystrophin have failed. Due to its large size, it has never before been successfully integrated into a vector.
In 2010, a refined human artificial chromosome called 21HAC was reported. 21HAC is based on a stripped copy of human chromosome 21, producing a chromosome 5Mb in length. Truncation of chromosome 21 resulted in a human artificial chromosome that was mitotically stable. 21HAC was also able to be transferred into cells from a variety of species (mice, chickens, humans). Using 21HAC, researchers were able to insert a herpes simplex virus thymidine kinase coding gene into tumor cells. This "suicide gene" is required to activate many antiviral medications. These targeted tumor cells were successfully, and selectively, terminated by the antiviral drug ganciclovir in a population including healthy cells. This research opens a variety of opportunities for using HACs in gene therapy.
In 2011, researchers formed a human artificial chromosome by truncating chromosome 14. Genetic material was then introduced using the Cre-Lox recombination system. This particular study focused on changes in expression levels by leaving portions of the existing genomic DNA. By leaving existing telomeric and sub-telomeric sequences, researchers were able to amplify expression levels of genes coding for erythropoietin production over 1000-fold. This work also has large gene therapy implications, as erythropoietin controls red blood cell formation.
HACs have been used to create transgenic animals for use as animal models of human disease and for production of therapeutic products.
See also
Plasmid
Cosmid
Fosmid
References
Molecular biology | Human artificial chromosome | [
"Chemistry",
"Biology"
] | 950 | [
"Biochemistry",
"Molecular biology"
] |
5,921,892 | https://en.wikipedia.org/wiki/Infrared%20spectroscopy%20correlation%20table | An infrared spectroscopy correlation table (or table of infrared absorption frequencies) is a list of absorption peaks and frequencies, typically reported in wavenumber, for common types of molecular bonds and functional groups. In physical and analytical chemistry, infrared spectroscopy (IR spectroscopy) is a technique used to identify chemical compounds based on the way infrared radiation is absorbed by the compound.
The absorptions in this range do not apply only to bonds in organic molecules. IR spectroscopy is useful when it comes to analysis of inorganic compounds (such as metal complexes or fluoromanganates) as well.
Group frequencies
Tables of vibrational transitions of stable and transient molecules are also available.
See also
Applied spectroscopy
Absorption spectroscopy
References
Infrared spectroscopy
Chemistry-related lists | Infrared spectroscopy correlation table | [
"Physics",
"Chemistry"
] | 145 | [
"Infrared spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)",
"nan"
] |
5,924,217 | https://en.wikipedia.org/wiki/Hilbert%20symbol | In mathematics, the Hilbert symbol or norm-residue symbol is a function (–, –) from K× × K× to the group of nth roots of unity in a local field K such as the fields of reals or p-adic numbers. It is related to reciprocity laws, and can be defined in terms of the Artin symbol of local class field theory. The Hilbert symbol was introduced by in his Zahlbericht, with the slight difference that he defined it for elements of global fields rather than for the larger local fields.
The Hilbert symbol has been generalized to higher local fields.
Quadratic Hilbert symbol
Over a local field K whose multiplicative group of non-zero elements is K×,
the quadratic Hilbert symbol is the function (–, –) from K× × K× to {−1,1} defined by (a, b) = 1 if the equation z² = ax² + by² has a non-zero solution (x, y, z) in K³, and (a, b) = −1 otherwise.
Equivalently, (a, b) = 1 if and only if b is equal to the norm of an element of the quadratic extension K(√a).
Properties
The following three properties follow directly from the definition, by choosing suitable solutions of the diophantine equation above:
If a is a square, then (a, b) = 1 for all b.
For all a,b in K×, (a, b) = (b, a).
For any a in K× such that a−1 is also in K×, we have (a, 1−a) = 1.
The (bi)multiplicativity, i.e.,
(a, b1b2) = (a, b1)·(a, b2)
for any a, b1 and b2 in K× is, however, more difficult to prove, and requires the development of local class field theory.
The third property shows that the Hilbert symbol is an example of a Steinberg symbol and thus factors over the second Milnor K-group , which is by definition
K× ⊗ K× / (a ⊗ (1−a), a ∈ K× \ {1})
By the first property it even factors over . This is the first step towards the Milnor conjecture.
Interpretation as an algebra
The Hilbert symbol can also be used to denote the central simple algebra over K with basis 1, i, j, k and multiplication rules i² = a, j² = b, ij = −ji = k. In this case the algebra represents an element of order 2 in the Brauer group of K, which is identified with -1 if it is a division algebra and +1 if it is isomorphic to the algebra of 2 by 2 matrices.
Hilbert symbols over the rationals
For a place v of the rational number field and rational numbers a, b we let (a, b)v denote the value of the Hilbert symbol in the corresponding completion Qv. As usual, if v is the valuation attached to a prime number p then the corresponding completion is the p-adic field and if v is the infinite place then the completion is the real number field.
Over the reals, (a, b)∞ is +1 if at least one of a or b is positive, and −1 if both are negative.
Over the p-adics with p odd, writing a = p^α·u and b = p^β·v, where u and v are integers coprime to p, we have
(a, b)p = (−1)^(αβ·ε(p)) · (u/p)^β · (v/p)^α, where ε(p) = (p − 1)/2,
and the expression involves the two Legendre symbols (u/p) and (v/p).
Over the 2-adics, again writing a = 2^α·u and b = 2^β·v, where u and v are odd numbers, we have
(a, b)2 = (−1)^(ε(u)ε(v) + α·ω(v) + β·ω(u)), where ε(u) = (u − 1)/2 and ω(u) = (u² − 1)/8, both taken modulo 2.
It is known that if v ranges over all places, (a, b)v is 1 for almost all places. Therefore, the following product formula
∏v (a, b)v = 1
makes sense. It is equivalent to the law of quadratic reciprocity.
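The local formulas and the product formula can be checked numerically. The sketch below implements the standard expressions for (a, b)p over the rationals (odd p, p = 2, and the real place), using SymPy's legendre_symbol and primefactors, and verifies that the product over the relevant places is 1; the sample values of a and b are arbitrary.

```python
import math
from sympy import legendre_symbol, primefactors

def val_and_unit(n, p):
    """p-adic valuation of a nonzero integer n, together with the unit part n / p**val."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k, n

def hilbert_p(a, b, p):
    """Quadratic Hilbert symbol (a, b)_p for nonzero integers a, b, via the local formulas above."""
    alpha, u = val_and_unit(a, p)
    beta, v = val_and_unit(b, p)
    if p == 2:
        eps = lambda n: ((n - 1) // 2) % 2
        omega = lambda n: ((n * n - 1) // 8) % 2
        e = eps(u) * eps(v) + alpha * omega(v) + beta * omega(u)
        return -1 if e % 2 else 1
    e = alpha * beta * ((p - 1) // 2)
    s = (-1) ** e * legendre_symbol(u, p) ** beta * legendre_symbol(v, p) ** alpha
    return 1 if s > 0 else -1

def hilbert_inf(a, b):
    """The real place: -1 exactly when both arguments are negative."""
    return -1 if (a < 0 and b < 0) else 1

a, b = -3, 10                                   # arbitrary sample values
places = primefactors(2 * abs(a) * abs(b))      # every other place gives +1
values = [hilbert_inf(a, b)] + [hilbert_p(a, b, p) for p in places]
print(values)                                   # [1, -1, 1, -1]  (infinity, 2, 3, 5)
print(math.prod(values))                        # 1, as the product formula requires
```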
Kaplansky radical
The Hilbert symbol on a field F defines a map
where Br(F) is the Brauer group of F. The kernel of this mapping, the elements a such that (a,b)=1 for all b, is the Kaplansky radical of F.
The radical is a subgroup of F*/F*2, identified with a subgroup of F*. The radical is equal to F* if and only if F has u-invariant at most 2. In the opposite direction, a field with radical F*2 is termed a Hilbert field.
The general Hilbert symbol
If K is a local field containing the group of nth roots of unity for some positive integer n prime to the characteristic of K, then the Hilbert symbol (,) is a function from K*×K* to μn. In terms of the Artin symbol it can be defined by
Hilbert originally defined the Hilbert symbol before the Artin symbol was discovered, and his definition (for n prime) used the power residue symbol when K has residue characteristic coprime to n, and was rather complicated when K has residue characteristic dividing n.
Properties
The Hilbert symbol is (multiplicatively) bilinear:
(ab,c) = (a,c)(b,c)
(a,bc) = (a,b)(a,c)
skew symmetric:
(a,b) = (b,a)−1
nondegenerate:
(a,b)=1 for all b if and only if a is in K*n
It detects norms (hence the name norm residue symbol):
(a,b)=1 if and only if a is a norm of an element in K(b^(1/n))
It has the "symbol" properties:
(a,1–a)=1, (a,–a)=1.
Hilbert's reciprocity law
Hilbert's reciprocity law states that if a and b are in an algebraic number field containing the nth roots of unity then
where the product is over the finite and infinite primes p of the number field, and where (,)p is the Hilbert symbol of the completion at p. Hilbert's reciprocity law follows from the Artin reciprocity law and the definition of the Hilbert symbol in terms of the Artin symbol.
Power residue symbol
If K is a number field containing the nth roots of unity, p is a prime ideal not dividing n, π is a prime element of the local field of p, and a is coprime to p, then the power residue symbol () is related to the Hilbert symbol by
The power residue symbol is extended to fractional ideals by multiplicativity, and defined for elements of the number field
by putting ()=() where (b) is the principal ideal generated by b.
Hilbert's reciprocity law then implies the following reciprocity law for the residue symbol, for a and b prime to each other and to n:
See also
Azumaya algebra
External links
HilbertSymbol at Mathworld
References
Class field theory
Quadratic forms
David Hilbert | Hilbert symbol | [
"Mathematics"
] | 1,375 | [
"Quadratic forms",
"Number theory"
] |
5,926,889 | https://en.wikipedia.org/wiki/Atomic%20and%20molecular%20astrophysics | Atomic astrophysics is concerned with performing atomic physics calculations that will be useful to astronomers and using atomic data to interpret astronomical observations. Atomic physics plays a key role in astrophysics as astronomers' only information about a particular object comes through the light that it emits, and this light arises through atomic transitions.
Molecular astrophysics, developed into a rigorous field of investigation by theoretical astrochemist Alexander Dalgarno beginning in 1967, concerns the study of emission from molecules in space. There are 110 currently known interstellar molecules. These molecules have large numbers of observable transitions. Lines may also be observed in absorption—for example the highly redshifted lines seen against the gravitationally lensed quasar PKS1830-211. High energy radiation, such as ultraviolet light, can break the molecular bonds which hold atoms in molecules. In general then, molecules are found in cool astrophysical environments. The most massive objects in our galaxy are giant clouds of molecules and dust known as giant molecular clouds. In these clouds, and smaller versions of them, stars and planets are formed. One of the primary fields of study of molecular astrophysics is star and planet formation. Molecules may be found in many environments, however, from stellar atmospheres to those of planetary satellites. Most of these locations are relatively cool, and molecular emission is most easily studied via photons emitted when the molecules make transitions between low rotational energy states. One molecule, composed of the abundant carbon and oxygen atoms, and very stable against dissociation into atoms, is carbon monoxide (CO). The wavelength of the photon emitted when the CO molecule falls from its lowest excited state to its zero energy, or ground, state is 2.6mm, or 115 gigahertz. This frequency is a thousand times higher than typical FM radio frequencies. At these high frequencies, molecules in the Earth's atmosphere can block transmissions from space, and telescopes must be located in dry (water is an important atmospheric blocker), high sites. Radio telescopes must have very accurate surfaces to produce high fidelity images.
On February 21, 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
See also
Alexander Dalgarno (physicist)
Astrochemistry
Astrophysics
Atomic, molecular, and optical physics
Cosmochemistry
Interstellar medium
Molecular modelling
Quantum dynamics
Spectroscopy
References
National Radio Astronomy: Molecular Astrophysics
Molecular Astrophysics: A volume honouring Alexander Dalgarno
External links
Astrochemistry
Atomic physics
Astronomical sub-disciplines
Astrophysics
Subfields of physics | Atomic and molecular astrophysics | [
"Physics",
"Chemistry",
"Astronomy"
] | 577 | [
"Quantum mechanics",
"Astrophysics",
"Astrochemistry",
" molecular",
"nan",
"Atomic physics",
"Atomic",
"Astronomical sub-disciplines",
" and optical physics"
] |
5,929,448 | https://en.wikipedia.org/wiki/SuperGrid%20%28hydrogen%29 | In lossless power transmission, a supergrid with hydrogen is an idea for combining very long distance electric power transmission with liquid hydrogen distribution, to achieve superconductivity in the cables. The hydrogen is both a distributed fuel and a cryogenic coolant for the power lines, rendering them superconducting. The concept's advocates describe it as being in a "visionary" stage, for which no new scientific breakthrough is required but which requires major technological innovations before it could progress to a practical system. A system for the United States is projected to require "several decades" before it could be fully implemented.
One proposed design for a superconducting cable includes a superconducting bipolar DC line operating at ±50 kV, and 50 kA, transmitting about 2.5 GW for several hundred kilometers at zero resistance and nearly no line loss. High-voltage direct current (HVDC) lines have the capability of transmitting similar wattages, for example a 5 gigawatt HVDC system is being constructed along the southern provinces of China without the use of superconducting cables.
In the United States, a Continental SuperGrid 4,000 kilometers long might carry 40,000 to 80,000 MW in a tunnel shared with long-distance high speed maglev trains, which at low pressure could allow cross continental journeys of one hour. The liquid hydrogen pipeline would both store and deliver hydrogen.
1.5% of the energy transmitted on the British AC Supergrid is lost (transformer, heating and capacitive losses). Of this, a little under two-thirds (or 1% on the British supergrid), represents "DC" (resistive) heating type losses. With superconductive power lines, the capacitive and transformer losses (in the unlikely event the transmission lines were still overhead AC lines) would remain the same. In addition, overhead lines do not lend themselves at all well physically to the incorporation of cryogenic hydrogen piping, due to the likely weight of the transmission medium and the considerable brittleness of supercooled materials. It would probably be necessary for a supercooled hydrogen-carrying transmission line to be subterranean, and this in turn means that for such a cable, if it were of any distance (e.g. over 60 km), the power would have to be converted to DC and transmitted as such, since otherwise the capacitive losses would be too high. In this case, the power electronic losses in the AC/DC converter substations would negate part or all of the power savings from the superconductive line itself.
Even before comprehensive continental and (in the case of the proposed European Super Grid) intercontinental backbones of electrical transmission may be realized, such cables could be used to efficiently interconnect regional power grids of conventional design.
See also
Superconducting cables
High voltage direct current
References
External links
SuperGrid Workshop at University of Illinois at Urbana-Champaign
Hydrogen economy
Electric power transmission systems
Superconductivity
Electric power systems components | SuperGrid (hydrogen) | [
"Physics",
"Materials_science",
"Engineering"
] | 623 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
213,665 | https://en.wikipedia.org/wiki/Enthalpy%20of%20neutralization | In chemistry and thermodynamics, the enthalpy of neutralization () is the change in enthalpy that occurs when one equivalent of an acid and a base undergo a neutralization reaction to form water and a salt. It is a special case of the enthalpy of reaction. It is defined as the energy released with the formation of 1 mole of water.
When a reaction is carried out under standard conditions at the temperature of 298 K (25 degrees Celsius) and 1 atm of pressure and one mole of water is formed, the heat released by the reaction is called the standard enthalpy of neutralization ().
The heat (q) released during a reaction is
q = m·cp·ΔT
where m is the mass of the solution, cp is the specific heat capacity of the solution, and ΔT is the temperature change observed during the reaction. From this, the standard enthalpy change (ΔH) is obtained by division with the amount of substance (in moles) involved.
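As a worked illustration of this formula, the short calculation below runs through a typical calorimetry exercise; the volumes, concentrations and temperature rise are hypothetical numbers chosen only to show the arithmetic, not data from the article.

```python
# Hypothetical calorimetry run: 50.0 mL of 1.0 M HCl neutralized by 50.0 mL of 1.0 M NaOH.
m = 100.0        # g, total mass of solution (density assumed ~1 g/mL)
c = 4.18         # J/(g*K), specific heat capacity, assumed equal to that of water
delta_T = 6.9    # K, observed temperature rise (hypothetical reading)

q = m * c * delta_T              # heat released, in J
n_water = 0.0500 * 1.0           # mol of water formed (0.0500 L x 1.0 mol/L)

delta_H = -(q / n_water) / 1000  # kJ/mol; negative sign because the reaction is exothermic
print(round(delta_H, 1))         # about -57.7 kJ/mol, close to the tabulated -57.62 kJ/mol
```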
When a strong acid, HA, reacts with a strong base, BOH, the reaction that occurs is
H+ + OH− → H2O
as the acid and the base are fully dissociated and neither the cation nor the anion are involved in the neutralization reaction. The enthalpy change for this reaction is -57.62 kJ/mol at 25 °C.
For weak acids or bases, the heat of neutralization is pH-dependent. In the absence of any added mineral acid or alkali, some heat is required for complete dissociation. The total heat evolved during neutralization will be smaller.
e.g. at 25°C
The heat of ionization for this reaction is equal to (–12 + 57.3) = 45.3 kJ/mol at 25 °C.
References
Enthalpy
Thermochemistry
Acid–base chemistry | Enthalpy of neutralization | [
"Physics",
"Chemistry",
"Mathematics"
] | 381 | [
"Acid–base chemistry",
"Thermodynamic properties",
"Thermochemistry",
"Physical quantities",
"Quantity",
"Equilibrium chemistry",
"Enthalpy",
"nan"
] |
214,124 | https://en.wikipedia.org/wiki/Umbral%20calculus | The term umbral calculus has two related but distinct meanings.
In mathematics, before the 1970s, umbral calculus referred to the surprising similarity between seemingly unrelated polynomial equations and certain shadowy techniques used to prove them. These techniques were introduced in 1861 by John Blissard and are sometimes called Blissard's symbolic method. They are often attributed to Édouard Lucas (or James Joseph Sylvester), who used the technique extensively. The use of shadowy techniques was put on a solid mathematical footing starting in the 1970s, and the resulting mathematical theory is also referred to as "umbral calculus".
History
In the 1930s and 1940s, Eric Temple Bell attempted to set the umbral calculus on a rigorous footing; however, his attempts at making this kind of argument logically rigorous were unsuccessful.
The combinatorialist John Riordan in his book Combinatorial Identities published in the 1960s, used techniques of this sort extensively.
In the 1970s, Steven Roman, Gian-Carlo Rota, and others developed the umbral calculus by means of linear functionals on spaces of polynomials. Currently, umbral calculus refers to the study of Sheffer sequences, including polynomial sequences of binomial type and Appell sequences, but may encompass systematic correspondence techniques of the calculus of finite differences.
19th-century umbral calculus
The method is a notational procedure used for deriving identities involving indexed sequences of numbers by pretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty.
An example involves the Bernoulli polynomials. Consider, for example, the ordinary binomial expansion (which contains a binomial coefficient):
and the remarkably similar-looking relation on the Bernoulli polynomials:
Compare also the ordinary derivative
to a very similar-looking relation on the Bernoulli polynomials:
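For reference, the two pairs of parallel identities being compared can be written out; these are the standard statements for binomial expansions and Bernoulli polynomials, restated rather than quoted from the original displays:

```latex
(y + x)^n = \sum_{k=0}^{n} \binom{n}{k} y^{k} x^{n-k},
\qquad
B_n(y + x) = \sum_{k=0}^{n} \binom{n}{k} B_k(y)\, x^{n-k},
\\[4pt]
\frac{d}{dx}\, x^n = n\, x^{n-1},
\qquad
\frac{d}{dx}\, B_n(x) = n\, B_{n-1}(x).
```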
These similarities allow one to construct umbral proofs, which on the surface cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscript n − k is an exponent:
and then differentiating, one gets the desired result:
In the above, the variable b is an "umbra" (Latin for shadow).
See also Faulhaber's formula.
Umbral Taylor series
In differential calculus, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. That is, a real or complex-valued function f (x) that is analytic at can be written as:
Similar relationships were also observed in the theory of finite differences. The umbral version of the Taylor series is given by a similar expression involving the k-th forward differences of a polynomial function f,
where
is the Pochhammer symbol used here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial.
This series is also known as the Newton series or Newton's forward difference expansion.
The analogy to Taylor's expansion is utilized in the calculus of finite differences.
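A short numeric check of the Newton forward-difference expansion: for a polynomial of degree n, the expansion truncated at order n with unit step reproduces the polynomial exactly. The sample cubic below is an arbitrary choice.

```python
from math import factorial

def forward_diffs(f, a, n):
    """Iterated forward differences of f at a, orders 0..n, with unit step."""
    vals = [f(a + i) for i in range(n + 1)]
    diffs = []
    for _ in range(n + 1):
        diffs.append(vals[0])
        vals = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
    return diffs

def falling(x, k):
    """Falling factorial x(x-1)...(x-k+1)."""
    out = 1.0
    for i in range(k):
        out *= x - i
    return out

def newton_series(f, a, x, n):
    """Newton forward-difference expansion of f about a, truncated at order n."""
    d = forward_diffs(f, a, n)
    return sum(d[k] * falling(x - a, k) / factorial(k) for k in range(n + 1))

f = lambda t: 2 * t**3 - 5 * t + 1             # degree 3, so order 3 is exact
print(f(7.5), newton_series(f, 0, 7.5, 3))     # both 807.25
```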
Modern umbral calculus
Another combinatorialist, Gian-Carlo Rota, pointed out that the mystery vanishes if one considers the linear functional L on polynomials in z defined by
Then, using the definition of the Bernoulli polynomials and the definition and linearity of L, one can write
This enables one to replace occurrences of by , that is, move the n from a subscript to a superscript (the key operation of umbral calculus). For instance, we can now prove that:
Rota later stated that much confusion resulted from the failure to distinguish between three equivalence relations that occur frequently in this topic, all of which were denoted by "=".
In a paper published in 1964, Rota used umbral methods to establish the recursion formula satisfied by the Bell numbers, which enumerate partitions of finite sets.
In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of the umbral algebra, defined as the algebra of linear functionals on the vector space of polynomials in a variable x, with a product L1L2 of linear functionals defined by
When polynomial sequences replace sequences of numbers as images of yn under the linear mapping L, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is the umbral calculus by some more modern definitions of the term. A small sample of that theory can be found in the article on polynomial sequences of binomial type. Another is the article titled Sheffer sequence.
Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of the cumulants.
See also
Bernoulli umbra
Umbral composition of polynomial sequences
Calculus of finite differences
Pidduck polynomials
Symbolic method in invariant theory
Narumi polynomials
Notes
References
G.-C. Rota, D. Kahaner, and A. Odlyzko, "Finite Operator Calculus," Journal of Mathematical Analysis and its Applications, vol. 42, no. 3, June 1973. Reprinted in the book with the same title, Academic Press, New York, 1975.
. Reprinted by Dover, 2005.
External links
Roman, S. (1982), The Theory of the Umbral Calculus, I
Combinatorics
Polynomials
Finite differences | Umbral calculus | [
"Mathematics"
] | 1,125 | [
"Mathematical analysis",
"Discrete mathematics",
"Polynomials",
"Finite differences",
"Combinatorics",
"Algebra"
] |
214,137 | https://en.wikipedia.org/wiki/Linear%20form | In mathematics, a linear form (also known as a linear functional, a one-form, or a covector) is a linear map from a vector space to its field of scalars (often, the real numbers or the complex numbers).
If is a vector space over a field , the set of all linear functionals from to is itself a vector space over with addition and scalar multiplication defined pointwise. This space is called the dual space of , or sometimes the algebraic dual space, when a topological dual space is also considered. It is often denoted , or, when the field is understood, ; other notations are also used, such as , or When vectors are represented by column vectors (as is common when a basis is fixed), then linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left).
Examples
The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of ).
Indexing into a vector: The second element of a three-vector is given by the one-form [0, 1, 0]. That is, the second element of (x1, x2, x3) is [0, 1, 0] · (x1, x2, x3) = x2.
Mean: The mean element of an n-vector is given by the one-form [1/n, 1/n, …, 1/n]. That is, it sends (x1, …, xn) to (x1 + … + xn)/n.
Sampling: Sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location.
Net present value of a net cash flow, is given by the one-form where is the discount rate. That is,
Linear functionals in Rn
Suppose that vectors in the real coordinate space are represented as column vectors
For each row vector there is a linear functional defined by
and each linear functional can be expressed in this form.
This can be interpreted as either the matrix product or the dot product of the row vector and the column vector :
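A short numeric illustration of the correspondence between row vectors and linear functionals on R3; the particular vectors and scalars below are arbitrary.

```python
import numpy as np

A = np.array([1.0, -2.0, 3.0])   # a row vector, read as the linear functional f_A on R^3
x = np.array([4.0, 0.5, 2.0])
y = np.array([-1.0, 2.0, 0.0])

print(A @ x)                                            # f_A(x) = 1*4 - 2*0.5 + 3*2 = 9.0

# Linearity: f_A(a*x + b*y) = a*f_A(x) + b*f_A(y)
a, b = 2.0, -3.0
print(A @ (a * x + b * y), a * (A @ x) + b * (A @ y))   # both 33.0
```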
Trace of a square matrix
The trace of a square matrix is the sum of all elements on its main diagonal. Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make a vector space from the set of all matrices. The trace is a linear functional on this space because and for all scalars and all matrices
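The linearity of the trace is easy to verify numerically; the matrices and scalars in this quick check are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
s, t = 2.5, -0.75

# tr(s*A + t*B) == s*tr(A) + t*tr(B)
print(np.allclose(np.trace(s * A + t * B), s * np.trace(A) + t * np.trace(B)))   # True
```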
(Definite) Integration
Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral
is a linear functional from the vector space of continuous functions on the interval to the real numbers. The linearity of follows from the standard facts about the integral:
Evaluation
Let denote the vector space of real-valued polynomial functions of degree defined on an interval If then let be the evaluation functional
The mapping is linear since
If are distinct points in then the evaluation functionals form a basis of the dual space of ( proves this last fact using Lagrange interpolation).
Non-example
A function whose graph is a line with a nonzero constant term, f(x) = a + rx with a ≠ 0, is not a linear functional on R, since it is not linear. It is, however, affine-linear.
Visualization
In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by .
Applications
Application to quadrature
If are distinct points in , then the linear functionals defined above form a basis of the dual space of , the space of polynomials of degree The integration functional is also a linear functional on , and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients for which
for all This forms the foundation of the theory of numerical quadrature.
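The coefficients described here can be computed by solving a small linear system: requiring exactness on the monomials 1, x, …, x^n determines the weights. The sketch below does this for three equally spaced nodes on [0, 1] (recovering Simpson's rule); the nodes, the interval and the test polynomial are illustrative choices.

```python
import numpy as np

def quadrature_weights(nodes, a, b):
    """Weights w_i such that integral_a^b p(x) dx = sum_i w_i p(x_i) for deg(p) <= len(nodes) - 1."""
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T                    # row k holds x_i**k
    moments = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, moments)

nodes = [0.0, 0.5, 1.0]
w = quadrature_weights(nodes, 0.0, 1.0)
print(w)                                        # [1/6, 2/3, 1/6]: Simpson's rule on [0, 1]

p = lambda x: 3 * x**2 - x + 2                  # degree 2, so the rule is exact
print(sum(wi * p(xi) for wi, xi in zip(w, nodes)), 1.0 - 0.5 + 2.0)   # both 2.5
```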
In quantum mechanics
Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti–isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation.
Distributions
In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
Dual vectors and bilinear forms
Every non-degenerate bilinear form on a finite-dimensional vector space induces an isomorphism such that
where the bilinear form on is denoted (for instance, in Euclidean space, is the dot product of and ).
The inverse isomorphism is , where is the unique element of such that
for all
The above defined vector is said to be the dual vector of
In an infinite dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping from V into its continuous dual space.
Relationship to bases
Basis of the dual space
Let the vector space have a basis , not necessarily orthogonal. Then the dual space has a basis called the dual basis defined by the special property that
Or, more succinctly,
where is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.
A linear functional belonging to the dual space can be expressed as a linear combination of basis functionals, with coefficients ("components") ,
Then, applying the functional to a basis vector yields
due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then
So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
The dual basis and inner product
When the space carries an inner product, then it is possible to write explicitly a formula for the dual basis of a given basis. Let have (not necessarily orthogonal) basis In three dimensions (), the dual basis can be written explicitly
for where ε is the Levi-Civita symbol and the inner product (or dot product) on .
In higher dimensions, this generalizes as follows
where is the Hodge star operator.
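In coordinates, the dual basis of a given basis can also be computed by a matrix inversion: if the basis vectors are the columns of a matrix E, then the dual-basis functionals, written as row vectors, are the rows of the inverse of E. The basis and the functional in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

# A non-orthogonal basis of R^3, stored as the columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

E_inv = np.linalg.inv(E)
print(np.round(E_inv @ E, 12))      # identity: row i of E_inv applied to column j of E is delta_ij

# A functional f (a row vector) has components f(e_j) in the dual basis:
f = np.array([2.0, -1.0, 3.0])
components = f @ E                   # components[j] = f(e_j)
print(np.allclose(components @ E_inv, f))   # True: f is the sum of components[j] times the j-th dual functional
```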
Over a ring
Modules over a ring are generalizations of vector spaces, which removes the restriction that coefficients belong to a field. Given a module over a ring , a linear form on is a linear map from to , where the latter is considered as a module over itself. The space of linear forms is always denoted , whether is a field or not. It is a right module if is a left module.
The existence of "enough" linear forms on a module is equivalent to projectivity.
Change of field
Suppose that V is a vector space over the complex numbers. Restricting scalar multiplication to the real numbers gives rise to a real vector space called the realification of V.
Any vector space over is also a vector space over endowed with a complex structure; that is, there exists a real vector subspace such that we can (formally) write as -vector spaces.
Real versus complex linear functionals
Every linear functional on is complex-valued while every linear functional on is real-valued. If then a linear functional on either one of or is non-trivial (meaning not identically ) if and only if it is surjective (because if then for any scalar ), where the image of a linear functional on is while the image of a linear functional on is
Consequently, the only function on that is both a linear functional on and a linear function on is the trivial functional; in other words, where denotes the space's algebraic dual space.
However, every -linear functional on is an -linear (meaning that it is additive and homogeneous over ), but unless it is identically it is not an -linear on because its range (which is ) is 2-dimensional over Conversely, a non-zero -linear functional has range too small to be a -linear functional as well.
Real and imaginary parts
If then denote its real part by and its imaginary part by
Then and are linear functionals on and
The fact that for all implies that for all
and consequently, that and
The assignment defines a bijective -linear operator whose inverse is the map defined by the assignment that sends to the linear functional defined by
The real part of is and the bijection is an -linear operator, meaning that and for all and
Similarly for the imaginary part, the assignment induces an -linear bijection whose inverse is the map defined by sending to the linear functional on defined by
This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray), and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described.
Properties and relationships
Suppose is a linear functional on with real part and imaginary part
Then if and only if if and only if
Assume that is a topological vector space. Then is continuous if and only if its real part is continuous, if and only if 's imaginary part is continuous. That is, either all three of and are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "bounded". In particular, if and only if where the prime denotes the space's continuous dual space.
Let If for all scalars of unit length (meaning ) then
Similarly, if denotes the complex part of then implies
If is a normed space with norm and if is the closed unit ball then the supremums above are the operator norms (defined in the usual way) of and so that
This conclusion extends to the analogous statement for polars of balanced sets in general topological vector spaces.
If is a complex Hilbert space with a (complex) inner product that is antilinear in its first coordinate (and linear in the second) then becomes a real Hilbert space when endowed with the real part of Explicitly, this real inner product on is defined by for all and it induces the same norm on as because for all vectors Applying the Riesz representation theorem to (resp. to ) guarantees the existence of a unique vector (resp. ) such that (resp. ) for all vectors The theorem also guarantees that and It is readily verified that Now and the previous equalities imply that which is the same conclusion that was reached above.
In infinite dimensions
Below, all vector spaces are over either the real numbers or the complex numbers
If V is a topological vector space, the space of continuous linear functionals — the continuous dual space — is often simply called the dual space. If V is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.
A linear functional on a (not necessarily locally convex) topological vector space is continuous if and only if there exists a continuous seminorm on such that
Characterizing closed subspaces
Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed, and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.
Hyperplanes and maximal subspaces
A vector subspace of is called maximal if (meaning and ) and there does not exist a vector subspace of such that A vector subspace of is maximal if and only if it is the kernel of some non-trivial linear functional on (that is, for some linear functional on that is not identically ). An affine hyperplane in is a translate of a maximal vector subspace. By linearity, a subset of is an affine hyperplane if and only if there exists some non-trivial linear functional on such that
If is a linear functional and is a scalar then This equality can be used to relate different level sets of Moreover, if then the kernel of can be reconstructed from the affine hyperplane by
Relationships between multiple linear functionals
Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other).
This fact can be generalized to the following theorem.
If is a non-trivial linear functional on with kernel , satisfies and is a balanced subset of , then if and only if for all
Hahn–Banach theorem
Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of R. However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done.
Equicontinuity of families of linear functionals
Let be a topological vector space (TVS) with continuous dual space
For any subset of the following are equivalent:
is equicontinuous;
is contained in the polar of some neighborhood of in ;
the (pre)polar of is a neighborhood of in ;
If is an equicontinuous subset of then the following sets are also equicontinuous:
the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull.
Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of is weak-* compact (and thus that every equicontinuous subset weak-* relatively compact).
See also
Notes
Footnotes
Proofs
References
Bibliography
Functional analysis
Linear algebra
Linear operators
Linear functionals | Linear form | [
"Mathematics"
] | 2,765 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Linear operators",
"Mathematical relations",
"Linear algebra",
"Algebra"
] |
214,572 | https://en.wikipedia.org/wiki/Anabolism | Anabolism () is the set of metabolic pathways that construct macromolecules like DNA or RNA from smaller units. These reactions require energy, known also as an endergonic process. Anabolism is the building-up aspect of metabolism, whereas catabolism is the breaking-down aspect. Anabolism is usually synonymous with biosynthesis.
Pathway
Polymerization, an anabolic pathway used to build macromolecules such as nucleic acids, proteins, and polysaccharides, uses condensation reactions to join monomers. Macromolecules are created from smaller molecules using enzymes and cofactors.
Energy source
Anabolism is powered by catabolism, where large molecules are broken down into smaller parts and then used up in cellular respiration. Many anabolic processes are powered by the cleavage of adenosine triphosphate (ATP). Anabolism usually involves reduction and decreases entropy, making it unfavorable without energy input. The starting materials, called the precursor molecules, are joined using the chemical energy made available from hydrolyzing ATP, reducing the cofactors NAD+, NADP+, and FAD, or performing other favorable side reactions. Occasionally it can also be driven by entropy without energy input, in cases like the formation of the phospholipid bilayer of a cell, where hydrophobic interactions aggregate the molecules.
Cofactors
The reducing agents NADH, NADPH, and FADH2, as well as metal ions, act as cofactors at various steps in anabolic pathways. NADH, NADPH, and FADH2 act as electron carriers, while charged metal ions within enzymes stabilize charged functional groups on substrates.
Substrates
Substrates for anabolism are mostly intermediates taken from catabolic pathways during periods of high energy charge in the cell.
Functions
Anabolic processes build organs and tissues. These processes produce growth and differentiation of cells and increase in body size, a process that involves synthesis of complex molecules. Examples of anabolic processes include the growth and mineralization of bone and increases in muscle mass.
Anabolic hormones
Endocrinologists have traditionally classified hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The classic anabolic hormones are the anabolic steroids, which stimulate protein synthesis and muscle growth, and insulin.
Photosynthetic carbohydrate synthesis
Photosynthetic carbohydrate synthesis in plants and certain bacteria is an anabolic process that produces glucose, cellulose, starch, lipids, and proteins from CO2. It uses the energy produced from the light-driven reactions of photosynthesis, and creates the precursors to these large molecules via carbon assimilation in the photosynthetic carbon reduction cycle, a.k.a. the Calvin cycle.
Amino acid biosynthesis
All amino acids are formed from intermediates in the catabolic processes of glycolysis, the citric acid cycle, or the pentose phosphate pathway. From glycolysis, glucose 6-phosphate is a precursor for histidine; 3-phosphoglycerate is a precursor for glycine and cysteine; phosphoenol pyruvate, combined with the 3-phosphoglycerate-derivative erythrose 4-phosphate, forms tryptophan, phenylalanine, and tyrosine; and pyruvate is a precursor for alanine, valine, leucine, and isoleucine. From the citric acid cycle, α-ketoglutarate is converted into glutamate and subsequently glutamine, proline, and arginine; and oxaloacetate is converted into aspartate and subsequently asparagine, methionine, threonine, and lysine.
Glycogen storage
During periods of high blood sugar, glucose 6-phosphate from glycolysis is diverted to the glycogen-storing pathway. It is changed to glucose-1-phosphate by phosphoglucomutase and then to UDP-glucose by UTP--glucose-1-phosphate uridylyltransferase. Glycogen synthase adds this UDP-glucose to a glycogen chain.
Gluconeogenesis
Glucagon is traditionally a catabolic hormone, but also stimulates the anabolic process of gluconeogenesis by the liver, and to a lesser extent the kidney cortex and intestines, during starvation to prevent low blood sugar. It is the process of converting pyruvate into glucose. Pyruvate can come from the breakdown of glucose, lactate, amino acids, or glycerol. The gluconeogenesis pathway has many reversible enzymatic processes in common with glycolysis, but it is not the process of glycolysis in reverse. It uses different irreversible enzymes to ensure the overall pathway runs in one direction only.
Regulation
Anabolism operates with separate enzymes from catabolism, which undergo irreversible steps at some point in their pathways. This allows the cell to regulate the rate of production and prevent an infinite loop, also known as a futile cycle, from forming with catabolism.
The balance between anabolism and catabolism is sensitive to ADP and ATP, otherwise known as the energy charge of the cell. High amounts of ATP cause cells to favor the anabolic pathway and slow catabolic activity, while excess ADP slows anabolism and favors catabolism. These pathways are also regulated by circadian rhythms, with processes such as glycolysis fluctuating to match an animal's normal periods of activity throughout the day.
Etymology
The word anabolism is from Neo-Latin, with roots from , "upward" and , "to throw".
References
Metabolism | Anabolism | [
"Chemistry",
"Biology"
] | 1,236 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
214,573 | https://en.wikipedia.org/wiki/Carbohydrate%20catabolism | Digestion is the breakdown of carbohydrates to yield an energy-rich compound called ATP. The production of ATP is achieved through the oxidation of glucose molecules. In oxidation, the electrons are stripped from a glucose molecule to reduce NAD+ and FAD. NAD+ and FAD possess a high energy potential to drive the production of ATP in the electron transport chain. ATP production occurs in the mitochondria of the cell. There are two methods of producing ATP: aerobic and anaerobic.
In aerobic respiration, oxygen is required. Using oxygen increases ATP production from 4 ATP molecules to about 30 ATP molecules.
In anaerobic respiration, oxygen is not required. When oxygen is absent, the generation of ATP continues through fermentation. There are two types of fermentation: alcohol fermentation and lactic acid fermentation.
There are several different types of carbohydrates: polysaccharides (e.g., starch, amylopectin, glycogen, cellulose), monosaccharides (e.g., glucose, galactose, fructose, ribose) and the disaccharides (e.g., sucrose, maltose, lactose).
Monosaccharides, also known as simple sugars, are the most basic, fundamental unit of a carbohydrate. These are simple sugars with the general chemical structure of C6H12O6.
Disaccharides are a type of carbohydrate. Disaccharides consist of compound sugars containing two monosaccharides with the elimination of a water molecule with the general chemical structure C12H22O11.
Oligosaccharides are carbohydrates that consist of a polymer that contains three to ten monosaccharides linked together by glycosidic bonds.
Glucose reacts with oxygen in the following reaction, C6H12O6 + 6O2 → 6CO2 + 6H2O. Carbon dioxide and water are waste products, and the overall reaction is exothermic.
The reaction of glucose with oxygen releasing energy in the form of molecules of ATP is therefore one of the most important biochemical pathways found in living organisms.
Glycolysis
Glycolysis, which means “sugar splitting,” is the initial process in the cellular respiration pathway. Glycolysis can be either an aerobic or anaerobic process. When oxygen is present, glycolysis continues along the aerobic respiration pathway. If oxygen is not present, then ATP production is restricted to anaerobic respiration. The location where glycolysis, aerobic or anaerobic, occurs is in the cytosol of the cell. In glycolysis, a six-carbon glucose molecule is split into two three-carbon molecules called pyruvate. These carbon molecules are oxidized into NADH and ATP. For the glucose molecule to oxidize into pyruvate, an input of ATP molecules is required. This is known as the investment phase, in which a total of two ATP molecules are consumed. At the end of glycolysis, the total yield of ATP is four molecules, but the net gain is two ATP molecules. Even though ATP is synthesized, the two ATP molecules produced are few compared to the second and third pathways, Krebs cycle and oxidative phosphorylation.
Fermentation
Even if there is no oxygen present, glycolysis can continue to generate ATP. However, for glycolysis to continue to produce ATP, there must be NAD+ present, which is responsible for oxidizing glucose. This is achieved by recycling NADH back to NAD+. When NAD+ is reduced to NADH, the electrons from NADH are eventually transferred to a separate organic molecule, transforming NADH back to NAD+. This process of renewing the supply of NAD+ is called fermentation, which falls into two categories.
Alcohol Fermentation
In alcohol fermentation, when a glucose molecule is oxidized, ethanol (ethyl alcohol) and carbon dioxide are byproducts. The organic molecule that is responsible for renewing the NAD+ supply in this type of fermentation is the pyruvate from glycolysis. Each pyruvate releases a carbon dioxide molecule, turning into acetaldehyde. The acetaldehyde is then reduced by the NADH produced from glycolysis, forming the alcohol waste product, ethanol, and forming NAD+, thereby replenishing its supply for glycolysis to continue producing ATP.
Lactic Acid Fermentation
In lactic acid fermentation, each pyruvate molecule is directly reduced by NADH. The only byproduct from this type of fermentation is lactate. Lactic acid fermentation is used by human muscle cells as a means of generating ATP during strenuous exercise where oxygen consumption is higher than the supplied oxygen. As this process progresses, the surplus of lactate is brought to the liver, which converts it back to pyruvate.
Respiration
The Citric acid cycle (also known as the Krebs cycle)
If oxygen is present, then following glycolysis, the two pyruvate molecules are brought into the mitochondrion itself to go through the Krebs cycle. In this cycle, the pyruvate molecules from glycolysis are further broken down to harness the remaining energy. Each pyruvate goes through a series of reactions that converts it to acetyl coenzyme A. From here, only the acetyl group participates in the Krebs cycle—in which it goes through a series of redox reactions, catalyzed by enzymes, to further harness the energy from the acetyl group. The energy from the acetyl group, in the form of electrons, is used to reduce NAD+ and FAD to NADH and FADH2, respectively. NADH and FADH2 contain the stored energy harnessed from the initial glucose molecule and is used in the electron transport chain where the bulk of the ATP is produced.
Oxidative phosphorylation
The last process in aerobic respiration is oxidative phosphorylation, also known as the electron transport chain. Here NADH and FADH2 deliver their electrons to oxygen and protons at the inner membranes of the mitochondrion, facilitating the production of ATP. Oxidative phosphorylation contributes the majority of the ATP produced, compared to glycolysis and the Krebs cycle. While glycolysis and the Krebs cycle each yield two ATP molecules, the electron transport chain contributes, at most, twenty-eight ATP molecules. One contributing factor is the energy potentials of NADH and FADH2. A second contributing factor is that cristae, the inner membranes of mitochondria, increase the surface area and therefore the amount of proteins in the membrane that assist in the synthesis of ATP. Along the electron transport chain, there are separate compartments, each with their own concentration gradient of H+ ions, which are the power source of ATP synthesis. To convert ADP to ATP, energy must be provided. That energy is provided by the H+ gradient. On one side of the membrane compartment, there is a high concentration of H+ ions compared to the other. The shuttling of H+ to one side of the membrane is driven by the exergonic flow of electrons through the membrane. These electrons are supplied by NADH and FADH2 as they transfer their potential energy. Once the H+ concentration gradient is established, it creates a proton-motive force, which provides the energy to convert ADP to ATP. The H+ ions that were initially forced to one side of the mitochondrion membrane now naturally flow through a membrane protein called ATP synthase, a protein that converts ADP to ATP with the help of H+ ions.
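The per-glucose ATP bookkeeping described above can be summarized with a short tally. The sketch below simply adds the figures quoted in this article (a net of two ATP from glycolysis, two from the Krebs cycle, and at most twenty-eight from oxidative phosphorylation); actual yields vary between cell types and depend on how the cost of shuttling NADH is counted.

```python
# Approximate ATP bookkeeping for one glucose molecule, using the
# per-pathway figures quoted in the text above. Real yields vary by
# cell type and by how the cost of shuttling NADH is counted.

atp_yield = {
    "glycolysis (net)": 2,            # 4 produced minus 2 invested
    "citric acid (Krebs) cycle": 2,
    "oxidative phosphorylation": 28,  # upper estimate from NADH/FADH2
}

total = sum(atp_yield.values())
for pathway, atp in atp_yield.items():
    print(f"{pathway:>28}: {atp:2d} ATP")
print(f"{'approximate maximum total':>28}: {total:2d} ATP per glucose")
```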
See also
cellular respiration
References
Metabolism | Carbohydrate catabolism | [
"Chemistry",
"Biology"
] | 1,682 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
215,038 | https://en.wikipedia.org/wiki/Enhancer%20%28genetics%29 | In genetics, an enhancer is a short (50–1500 bp) region of DNA that can be bound by proteins (activators) to increase the likelihood that transcription of a particular gene will occur. These proteins are usually referred to as transcription factors. Enhancers are cis-acting. They can be located up to 1 Mbp (1,000,000 bp) away from the gene, upstream or downstream from the start site. There are hundreds of thousands of enhancers in the human genome. They are found in both prokaryotes and eukaryotes. Active enhancers typically get transcribed as enhancer or regulatory non-coding RNA, whose expression levels correlate with mRNA levels of target genes.
The first discovery of a eukaryotic enhancer was in the immunoglobulin heavy chain gene in 1983. This enhancer, located in the large intron, provided an explanation for the transcriptional activation of rearranged Vh gene promoters while unrearranged Vh promoters remained inactive. Lately, enhancers have been shown to be involved in certain medical conditions, for example, myelosuppression. Since 2022, scientists have used artificial intelligence to design synthetic enhancers and applied them in animal systems, first in a cell line, and one year later also in vivo.
Locations
In eukaryotic cells the structure of the chromatin complex of DNA is folded in a way that functionally mimics the supercoiled state characteristic of prokaryotic DNA, so although the enhancer DNA may be far from the gene in a linear way, it is spatially close to the promoter and gene. This allows it to interact with the general transcription factors and RNA polymerase II. The same mechanism holds true for silencers in the eukaryotic genome. Silencers are antagonists of enhancers that, when bound to its proper transcription factors called repressors, repress the transcription of the gene. Silencers and enhancers may be in close proximity to each other or may even be in the same region only differentiated by the transcription factor the region binds to.
An enhancer may be located upstream or downstream of the gene it regulates. Furthermore, an enhancer does not need to be located near the transcription initiation site to affect transcription, as some have been found located several hundred thousand base pairs upstream or downstream of the start site. Enhancers do not act on the promoter region itself, but are bound by activator proteins as first shown by in vivo competition experiments. Subsequently, molecular studies showed direct interactions with transcription factors and cofactors, including the mediator complex, which recruits polymerase II and the general transcription factors which then begin transcribing the genes. Enhancers can also be found within introns. An enhancer's orientation may even be reversed without affecting its function; additionally, an enhancer may be excised and inserted elsewhere in the chromosome, and still affect gene transcription. That is one reason that intronic polymorphisms may have effects even though introns are not translated. Enhancers can also be found in the exonic region of an unrelated gene, and they may act on genes on another chromosome.
Enhancers are bound by p300-CBP and their location can be predicted by ChIP-seq against this family of coactivators.
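As a rough illustration of how p300/CBP ChIP-seq data can be used to nominate candidate enhancers, the sketch below keeps only peaks that lie far from every annotated transcription start site, since distal coactivator-bound regions are more likely to behave as enhancers than as promoters. The coordinates and the 2 kb distance cutoff are illustrative placeholders, not values from any particular study.

```python
# Minimal sketch: nominate candidate enhancers as coactivator (p300/CBP)
# ChIP-seq peaks that are distal to every annotated transcription start
# site (TSS). The coordinates and cutoff below are illustrative only.

def candidate_enhancers(peaks, tss_positions, min_distance=2_000):
    """Return peaks whose midpoint is at least min_distance bp from all TSSs."""
    candidates = []
    for chrom, start, end in peaks:
        midpoint = (start + end) // 2
        distances = [abs(midpoint - pos) for c, pos in tss_positions if c == chrom]
        if not distances or min(distances) >= min_distance:
            candidates.append((chrom, start, end))
    return candidates

# Hypothetical p300 peaks (chromosome, start, end) and TSSs (chromosome, position).
peaks = [("chr1", 10_000, 10_500), ("chr1", 50_000, 50_400), ("chr2", 7_000, 7_300)]
tss = [("chr1", 10_200), ("chr2", 40_000)]

print(candidate_enhancers(peaks, tss))  # only the TSS-distal peaks remain
```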
Role in gene expression
Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes. Core promoters are sufficient to direct transcription initiation, but generally have low basal activity. Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Among this constellation of elements, enhancers and their associated transcription factors have a leading role in the regulation of gene expression. An enhancer localized in a DNA region distant from the promoter of a gene can have a very large effect on gene expression, with some genes undergoing up to 100-fold increased expression due to an activated enhancer.
Enhancers are regions of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. While there are hundreds of thousands of enhancer DNA regions, for a particular type of tissue only specific enhancers are brought into proximity with the promoters that they regulate. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to their target promoters. Multiple enhancers, each often at tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and can coordinate with each other to control the expression of their common target gene.
The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factors (there are about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern level of transcription of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two Enhancer RNAs (eRNAs) as illustrated in the Figure. Like mRNAs, these eRNAs are usually protected by their 5′ cap. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.
Theories
There are two different theories on the information processing that occurs on enhancers:
Enhanceosomes – rely on highly cooperative, coordinated action and can be disabled by single point mutations that move or remove the binding sites of individual proteins.
Flexible billboards – less integrative, multiple proteins independently regulate gene expression and their sum is read in by the basal transcriptional machinery.
Examples in the human genome
HACNS1
HACNS1 (also known as CENTG2 and located in the Human Accelerated Region 2) is a gene enhancer "that may have contributed to the evolution of the uniquely opposable human thumb, and possibly also modifications in the ankle or foot that allow humans to walk on two legs". Evidence to date shows that of the 110,000 gene enhancer sequences identified in the human genome, HACNS1 has undergone the most change during the evolution of humans following the split with the ancestors of chimpanzees.
GADD45G
An enhancer near the gene GADD45g has been described that may regulate brain growth in chimpanzees and other mammals, but not in humans. The GADD45G regulator in mice and chimps is active in regions of the brain where cells that form the cortex, ventral forebrain, and thalamus are located and may suppress further neurogenesis. Loss of the GADD45G enhancer in humans may contribute to an increase of certain neuronal populations and to forebrain expansion in humans.
In developmental biology
The development, differentiation and growth of cells and tissues require precisely regulated patterns of gene expression. Enhancers work as cis-regulatory elements to mediate both spatial and temporal control of development by turning on transcription in specific cells and/or repressing it in other cells. Thus, the particular combination of transcription factors and other DNA-binding proteins in a developing tissue controls which genes will be expressed in that tissue. Enhancers allow the same gene to be used in diverse processes in space and time.
Identification and characterization
Traditionally, enhancers were identified by enhancer trap techniques using a reporter gene or by comparative sequence analysis and computational genomics. In genetically tractable models such as the fruit fly Drosophila melanogaster, for example, a reporter construct such as the lacZ gene can be randomly integrated into the genome using a P element transposon. If the reporter gene integrates near an enhancer, its expression will reflect the expression pattern driven by that enhancer. Thus, staining the flies for LacZ expression or activity and cloning the sequence surrounding the integration site allows the identification of the enhancer sequence.
The development of genomic and epigenomic technologies, however, has dramatically changed the outlook for cis-regulatory module (CRM) discovery. Next-generation sequencing (NGS) methods now enable high-throughput functional CRM discovery assays, and the vastly increasing amounts of available data, including large-scale libraries of transcription factor-binding site (TFBS) motifs, collections of annotated, validated CRMs, and extensive epigenetic data across many cell types, are making accurate computational CRM discovery an attainable goal. One NGS-based approach, DNase-seq, has enabled the identification of nucleosome-depleted, or open chromatin, regions, which can contain CRMs. More recently, techniques such as ATAC-seq have been developed that require less starting material. Nucleosome-depleted regions can be identified in vivo through expression of Dam methylase, allowing for greater control of cell-type-specific enhancer identification.
Computational methods include comparative genomics, clustering of known or predicted TF-binding sites, and supervised machine-learning approaches trained on known CRMs.
All of these methods have proven effective for CRM discovery, but each has its own considerations and limitations, and each is subject to a greater or lesser number of false-positive identifications.
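To give a very simplified flavour of the motif-clustering and machine-learning approaches mentioned above, the toy sketch below scores a sequence by counting occurrences of a few short motif-like k-mers and applies a threshold. Real CRM discovery methods use curated training sets, position weight matrices or learned models, and careful validation; the motifs and threshold here are purely illustrative and not drawn from any binding-site database.

```python
# Toy illustration of motif-count-based CRM scoring. The motifs and the
# threshold are illustrative only; real methods rely on validated TFBS
# libraries, trained statistical models, and independent validation.

MOTIFS = ["TAATCC", "GGGCGG", "CACGTG"]  # hypothetical TF-binding k-mers

def motif_hits(sequence, motifs=MOTIFS):
    sequence = sequence.upper()
    return sum(sequence.count(m) for m in motifs)

def looks_like_crm(sequence, threshold=3):
    return motif_hits(sequence) >= threshold

example = "ATTAATCCGGGCGGTTTCACGTGAATAATCCGG"
print(motif_hits(example), looks_like_crm(example))  # 4 True
```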
In the comparative genomics approach, sequence conservation of non-coding regions can be indicative of enhancers. Sequences from multiple species are aligned, and conserved regions are identified computationally. Identified sequences can then be attached to a reporter gene such as green fluorescent protein or lacZ to determine the in vivo pattern of gene expression produced by the enhancer when injected into an embryo. mRNA expression of the reporter can be visualized by in situ hybridization, which provides a more direct measure of enhancer activity, since it is not subjected to the complexities of translation and protein folding. Although much evidence has pointed to sequence conservation for critical developmental enhancers, other work has shown that the function of enhancers can be conserved with little or no primary sequence conservation. For example, the RET enhancers in humans have very little sequence conservation to those in zebrafish, yet both species' sequences produce nearly identical patterns of reporter gene expression in zebrafish. Similarly, in highly diverged insects (separated by around 350 million years), similar gene expression patterns of several key genes was found to be regulated through similarly constituted CRMs although these CRMs do not show any appreciable sequence conservation detectable by standard sequence alignment methods such as BLAST.
In segmentation of insects
The enhancers determining early segmentation in Drosophila melanogaster embryos are among the best characterized developmental enhancers. In the early fly embryo, the gap gene transcription factors are responsible for activating and repressing a number of segmentation genes, such as the pair rule genes. The gap genes are expressed in blocks along the anterior-posterior axis of the fly along with other maternal effect transcription factors, thus creating zones within which different combinations of transcription factors are expressed. The pair-rule genes are separated from one another by non-expressing cells. Moreover, the stripes of expression for different pair-rule genes are offset by a few cell diameters from one another. Thus, unique combinations of pair-rule gene expression create spatial domains along the anterior-posterior axis to set up each of the 14 individual segments. The 480 bp enhancer responsible for driving the sharp stripe two of the pair-rule gene even-skipped (eve) has been well-characterized. The enhancer contains 12 different binding sites for maternal and gap gene transcription factors. Activating and repressing sites overlap in sequence. Eve is only expressed in a narrow stripe of cells that contain high concentrations of the activators and low concentration of the repressors for this enhancer sequence. Other enhancer regions drive eve expression in 6 other stripes in the embryo.
In vertebrate patterning
Establishing body axes is a critical step in animal development. During mouse embryonic development, Nodal, a transforming growth factor-beta superfamily ligand, is a key gene involved in patterning both the anterior-posterior axis and the left-right axis of the early embryo. The Nodal gene contains two enhancers: the Proximal Epiblast Enhancer (PEE) and the Asymmetric Enhancer (ASE). The PEE is upstream of the Nodal gene and drives Nodal expression in the portion of the primitive streak that will differentiate into the node (also referred to as the primitive node). The PEE turns on Nodal expression in response to a combination of Wnt signaling plus a second, unknown signal; thus, a member of the LEF/TCF transcription factor family likely binds to a TCF binding site in the cells in the node. Diffusion of Nodal away from the node forms a gradient which then patterns the extending anterior-posterior axis of the embryo. The ASE is an intronic enhancer bound by the fork head domain transcription factor Fox1. Early in development, Fox1-driven Nodal expression establishes the visceral endoderm. Later in development, Fox1 binding to the ASE drives Nodal expression on the left side of the lateral plate mesoderm, thus establishing left-right asymmetry necessary for asymmetric organ development in the mesoderm.
Establishing three germ layers during gastrulation is another critical step in animal development. Each of the three germ layers has unique patterns of gene expression that promote their differentiation and development. The endoderm is specified early in development by Gata4 expression, and Gata4 goes on to direct gut morphogenesis later. Gata4 expression is controlled in the early embryo by an intronic enhancer that binds another forkhead domain transcription factor, FoxA2. Initially the enhancer drives broad gene expression throughout the embryo, but the expression quickly becomes restricted to the endoderm, suggesting that other repressors may be involved in its restriction. Late in development, the same enhancer restricts expression to the tissues that will become the stomach and pancreas. An additional enhancer is responsible for maintaining Gata4 expression in the endoderm during the intermediate stages of gut development.
Multiple enhancers promote developmental robustness
Some genes involved in critical developmental processes contain multiple enhancers of overlapping function. Secondary enhancers, or "shadow enhancers", may be found many kilobases away from the primary enhancer ("primary" usually refers to the first enhancer discovered, which is often closer to the gene it regulates). On its own, each enhancer drives nearly identical patterns of gene expression. Are the two enhancers truly redundant? Recent work has shown that multiple enhancers allow fruit flies to survive environmental perturbations, such as an increase in temperature. When raised at an elevated temperature, a single enhancer sometimes fails to drive the complete pattern of expression, whereas the presence of both enhancers permits normal gene expression.
Evolution of developmental mechanisms
One theme of research in evolutionary developmental biology ("evo-devo") is investigating the role of enhancers and other cis-regulatory elements in producing morphological changes via developmental differences between species.
Stickleback Pitx1
Recent work has investigated the role of enhancers in morphological changes in threespine stickleback fish. Sticklebacks exist in both marine and freshwater environments, but sticklebacks in many freshwater populations have completely lost their pelvic fins (appendages homologous to the posterior limb of tetrapods). Pitx1 is a homeobox gene involved in posterior limb development in vertebrates. Preliminary genetic analyses indicated that changes in the expression of this gene were responsible for pelvic reduction in sticklebacks. Fish expressing only the freshwater allele of Pitx1 do not have pelvic spines, whereas fish expressing a marine allele retain pelvic spines. A more thorough characterization showed that a 500 base pair enhancer sequence is responsible for turning on Pitx1 expression in the posterior fin bud. This enhancer is located near a chromosomal fragile site—a sequence of DNA that is likely to be broken and thus more likely to be mutated as a result of imprecise DNA repair. This fragile site has caused repeated, independent losses of the enhancer responsible for driving Pitx1 expression in the pelvic spines in isolated freshwater population, and without this enhancer, freshwater fish fail to develop pelvic spines.
In Drosophila wing pattern evolution
Pigmentation patterns provide one of the most striking and easily scored differences between different species of animals. Pigmentation of the Drosophila wing has proven to be a particularly amenable system for studying the development of complex pigmentation phenotypes. The Drosophila guttifera wing has 12 dark pigmentation spots and 4 lighter gray intervein patches. Pigment spots arise from expression of the yellow gene, whose product produces black melanin. Recent work has shown that two enhancers in the yellow gene produce gene expression in precisely this pattern – the vein spot enhancer drives reporter gene expression in the 12 spots, and the intervein shade enhancer drives reporter expression in the 4 distinct patches. These two enhancers are responsive to the Wnt signaling pathway, which is activated by wingless expression at all of the pigmented locations. Thus, in the evolution of the complex pigmentation phenotype, the yellow pigment gene evolved enhancers responsive to the wingless signal and wingless expression evolved at new locations to produce novel wing patterns.
In inflammation and cancer
Each cell typically contains several hundred members of a special class of enhancers that stretch over DNA sequences many kilobases long, called "super-enhancers". These enhancers contain a large number of binding sites for sequence-specific, inducible transcription factors, and regulate expression of genes involved in cell differentiation. During inflammation, the transcription factor NF-κB facilitates remodeling of chromatin in a manner that selectively redistributes cofactors from high-occupancy enhancers, thereby repressing the genes involved in maintaining cellular identity whose expression those enhancers support; at the same time, this NF-κB-driven remodeling and redistribution activates other enhancers that guide changes in cellular function during inflammation. As a result, inflammation reprograms cells, altering their interactions with the rest of the tissue and with the immune system. In cancer, proteins that control NF-κB activity are dysregulated, permitting malignant cells to decrease their dependence on interactions with local tissue, and hindering their surveillance by the immune system.
Designing enhancers in synthetic biology
Synthetic regulatory elements such as enhancers promise to be a powerful tool to direct gene products to particular cell types in order to treat disease by activating beneficial genes or by halting aberrant cell states.
Since 2022, artificial intelligence and transfer learning strategies have led to a better understanding of the features of regulatory DNA sequences, the prediction, and the design of synthetic enhancers.
Building on work in cell culture, synthetic enhancers were successfully applied to entire living organisms in 2023. Using deep neural networks, scientists simulated the evolution of DNA sequences to analyze the emergence of features that underly enhancer function. This allowed the design and production of a range of functioning synthetic enhancers for different cell types of the fruit fly brain. A second approach trained artificial intelligence models on single-cell DNA accessibility data and transferred the learned models towards the prediction of enhancers for selected tissues in the fruit fly embryo. These enhancer prediction models were used to design synthetic enhancers for the nervous system, brain, muscle, epidermis and gut.
See also
Shadow enhancers
References
External links
TFSEARCH
JASPAR
ReMap
ENCODE threads explorer Enhancer discovery and characterization. Nature
Gene expression | Enhancer (genetics) | [
"Chemistry",
"Biology"
] | 4,289 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
215,050 | https://en.wikipedia.org/wiki/Peroxyacetyl%20nitrate | Peroxyacetyl nitrate is a peroxyacyl nitrate. It is a secondary pollutant present in photochemical smog. It is thermally unstable and decomposes into peroxyethanoyl radicals and nitrogen dioxide gas. It is a lachrymatory substance, meaning that it irritates the lungs and eyes.
Peroxyacetyl nitrate, or PAN, is an oxidant that is more stable than ozone. Hence, it is more capable of long-range transport than ozone. It serves as a carrier for oxides of nitrogen (NOx) into rural regions and causes ozone formation in the global troposphere.
Atmospheric chemistry
PAN is produced in the atmosphere via photochemical oxidation of hydrocarbons to peroxyacetyl radicals, which react reversibly with nitrogen dioxide (NO2) to form PAN. Night-time reaction of acetaldehyde with nitrogen trioxide is another possible source. Since there are no direct emissions, it is a secondary pollutant. Next to ozone and hydrogen peroxide (H2O2), it is an important component of photochemical smog.
Further peroxyacyl nitrates in the atmosphere are peroxypropionyl nitrate (PPN), peroxybutyryl nitrate (PBN), and peroxybenzoyl nitrate (PBzN). Chlorinated forms have also been observed. PAN is the most important peroxyacyl nitrate. PAN and its homologues reach about 5 to 20 percent of the concentration of ozone in urban areas. At lower temperatures, it is stable and can be transported over long distances, providing nitrogen oxides to otherwise unpolluted areas. At higher temperatures, it decomposes into NO2 and the peroxyacetyl radical.
The decay of PAN in the atmosphere is mainly thermal. Thus, the long-range transport occurs through cold regions of the atmosphere, whereas the decomposition takes place at warmer levels. PAN can also be photolysed by UV radiation. It is a reservoir gas that serves both as a source and a sink of ROx- and NOx radicals. Nitrogen oxides from PAN decomposition enhance ozone production in the lower troposphere.
The natural concentration of PAN in the atmosphere is below 0.1 μg/m3. Measurements in German cities showed values up to 25 μg/m3. Peak values above 200 μg/m3 have been measured in Los Angeles in the second half of the 20th century (1 ppm of PAN corresponds to 4370 μg/m3). Due to the complexity of the measurement setup, only sporadic measurements are available.
PAN is a greenhouse gas.
Synthesis
PAN can be produced in a lipophilic solvent from peroxyacetic acid. For the synthesis, concentrated sulfuric acid is added to degassed n-tridecane and peroxyacetic acid in an ice bath. Next, concentrated nitric acid is added.
As an alternative, PAN can also be synthesized in the gas phase via photolysis of acetone and NO2 with a mercury lamp. Methyl nitrate (CH3ONO2) is created as a by-product.
Toxicity
The toxicity of PAN is higher than that of ozone. Eye irritation from photochemical smog is caused more by PAN and other trace gases than by ozone, which is only sparingly soluble. PAN is a mutagen, and is considered a potential contributor to the development of skin cancer.
References
Organic peroxides
Nitrate esters
Organic peroxide explosives
Explosive chemicals
Acetyl compounds
Pollutants
Smog | Peroxyacetyl nitrate | [
"Physics",
"Chemistry"
] | 740 | [
"Visibility",
"Physical quantities",
"Smog",
"Organic compounds",
"Explosive chemicals",
"Organic peroxide explosives",
"Organic peroxides"
] |
215,051 | https://en.wikipedia.org/wiki/Ground-level%20ozone | Ground-level ozone (O3), also known as surface-level ozone and tropospheric ozone, is a trace gas in the troposphere (the lowest level of the Earth's atmosphere), with an average concentration of 20–30 parts per billion by volume (ppbv), with close to 100 ppbv in polluted areas. Ozone is also an important constituent of the stratosphere, where the ozone layer (2 to 8 parts per million ozone) is located between 10 and 50 kilometers above the Earth's surface. The troposphere extends from the ground up to a variable height of approximately 14 kilometers above sea level. Ozone is least concentrated in the ground layer (or planetary boundary layer) of the troposphere. Ground-level or tropospheric ozone is created by chemical reactions between NOx gases (oxides of nitrogen produced by combustion) and volatile organic compounds (VOCs). The combination of these chemicals in the presence of sunlight forms ozone. Its concentration increases as height above sea level increases, with a maximum concentration at the tropopause. About 90% of total ozone in the atmosphere is in the stratosphere, and 10% is in the troposphere. Although tropospheric ozone is less concentrated than stratospheric ozone, it is of concern because of its health effects. Ozone in the troposphere is considered a greenhouse gas, and as such contributes to global warming, as reported in IPCC reports. Indeed, tropospheric ozone is considered the third most important greenhouse gas after CO2 and CH4, as indicated by estimates of its radiative forcing.
Photochemical and chemical reactions involving ozone drive many of the chemical processes that occur in the troposphere by day and by night. At abnormally high concentrations (the largest source being emissions from combustion of fossil fuels), it is a pollutant, and a constituent of smog. Its levels have increased significantly since the industrial revolution, as NOx gases and VOCs are some of the byproducts of combustion. With more heat and sunlight in the summer months, more ozone is formed, which is why regions often experience higher levels of pollution in the summer months. Although it is the same molecule, ground-level ozone can be harmful to human health, unlike stratospheric ozone, which protects the earth from excess UV radiation.
Photolysis of ozone occurs at wavelengths below approximately 310–320 nanometres. This reaction initiates a chain of chemical reactions that remove carbon monoxide, methane, and other hydrocarbons from the atmosphere via oxidation. Therefore, the concentration of tropospheric ozone affects how long these compounds remain in the air. If the oxidation of carbon monoxide or methane occur in the presence of nitrogen monoxide (NO), this chain of reactions has a net product of ozone added to the system.
Measurement
Ozone in the atmosphere can be measured by remote sensing technology, or by in-situ monitoring technology. Because ozone absorbs light in the UV spectrum, the most common way to measure ozone is to measure how much of this light spectrum is absorbed in the atmosphere. Because the stratosphere has higher ozone concentration than the troposphere, it is important for remote sensing instruments to be able to determine altitude along with the concentration measurements. A total ozone mapping spectrometer-earth probe (TOMS-EP) aboard a satellite from NASA is an example of an ozone layer measuring satellite, and the tropospheric emission spectrometer (TES) is an example of an ozone measuring satellite that is specifically for the troposphere. LIDAR is a common ground-based remote sensing technique that uses laser to measure ozone. The Tropospheric Ozone Lidar Network (TOLNet) is the network of ozone observing lidars across the United States.
Ozonesondes are a form of in situ, or local ozone measuring instruments. An ozonesonde is attached to a meteorological balloon, so that the instrument can directly measure ozone concentration at the varying altitudes along the balloon's upward path. The information collected from the instrument attached to the balloon is transmitted back using radiosonde technology. NOAA has worked to create a global network of tropospheric ozone measurements using ozonesondes.
Ozone is also measured in air quality environmental monitoring networks. In these networks, in-situ ozone monitors based on ozone's UV-absorption properties are used to measure ppb-levels in ambient air.
Total atmospheric ozone (sometimes seen in weather reports) is measured in a column from the surface to the top of the atmosphere, and is dominated by high concentrations of stratospheric ozone. Typical units of measure for this purpose include the Dobson unit and millimoles per square meter (mmol/m2).
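The two column units mentioned above are related by a fixed factor: one Dobson unit corresponds to 2.687 × 10^20 ozone molecules per square metre, or roughly 0.45 mmol/m². A minimal conversion sketch (the 300 DU example is simply a typical mid-latitude total column, used here only for illustration):

```python
# Convert a total-column ozone amount from Dobson units (DU) to
# millimoles per square metre. 1 DU corresponds to 2.687e20 molecules
# of O3 per square metre of column.

AVOGADRO = 6.02214076e23            # molecules per mole
MOLECULES_PER_M2_PER_DU = 2.687e20  # definition of the Dobson unit

def dobson_to_mmol_per_m2(dobson_units):
    moles_per_m2 = dobson_units * MOLECULES_PER_M2_PER_DU / AVOGADRO
    return moles_per_m2 * 1e3

print(f"300 DU ≈ {dobson_to_mmol_per_m2(300):.1f} mmol/m^2")
```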
Formation
The majority of tropospheric ozone formation occurs when nitrogen oxides (NOx), carbon monoxide (CO), and volatile organic compounds (VOCs), react in the atmosphere in the presence of sunlight, specifically the UV spectrum. NOx, CO, and VOCs are considered ozone precursors. Motor vehicle exhaust, industrial emissions, and chemical solvents are the major anthropogenic sources of these ozone precursors. Although the ozone precursors often originate in urban areas, winds can carry NOx hundreds of kilometers, causing ozone formation to occur in less populated regions as well. Methane, a VOC whose atmospheric concentration has increased tremendously during the last century, contributes to ozone formation but on a global scale rather than in local or regional photochemical smog episodes. In situations where this exclusion of methane from the VOC group of substances is not obvious, the term Non-Methane VOC (NMVOC) is often used.
Indoors ozone is produced by certain high-voltage electric devices (such as air ionizers), and as a by-product of other types of pollution. Outdoor air used for ventilation may have sufficient ozone to react with common indoor pollutants as well as skin oils and other common indoor air chemicals or surfaces. Particular concern is warranted when using "green" cleaning products based on citrus or terpene extracts, because these chemicals react very quickly with ozone to form toxic and irritating chemicals as well as fine and ultrafine particles.
The chemical reactions involved in tropospheric ozone formation are a series of complex cycles in which carbon monoxide and VOCs are oxidised to water vapour and carbon dioxide. The reactions involved in this process are illustrated here with CO, but similar reactions occur for VOCs as well. The oxidation begins with the reaction of CO with the hydroxyl radical (•OH), and the radical intermediate formed by this reacts rapidly with oxygen to give a peroxy radical (HO2•).
An outline of the chain reaction that occurs in oxidation of CO, producing O3:
The reaction begins with the oxidation of CO by the hydroxyl radical (•OH). The radical adduct (•HOCO) is unstable and reacts rapidly with oxygen to give a peroxy radical, HO2•:
•OH + CO → •HOCO
•HOCO + O2 → HO2• + CO2
Peroxy-radicals then go on to react with NO to produce NO2, which is photolysed by UV-A radiation to give a ground-state atomic oxygen, which then reacts with molecular oxygen to form ozone.
HO2• + NO → •OH + NO2
NO2 + hν → NO + O(3P), λ<400 nm
O(3P) + O2 → O3
Note that these three reactions form the ozone molecule, and they occur in the same way whether CO or VOCs are being oxidized.
The net reaction in this case is then:
CO + 2 O2 → CO2 + O3
The amount of ozone produced through these reactions in ambient air can be estimated using a modified Leighton relationship. The limit on these interrelated cycles producing ozone is the reaction of •OH with NO2 to form nitric acid at high NOx levels. If nitrogen monoxide (NO) is instead present at very low levels in the atmosphere (less than approximately 10 ppt), the peroxy radicals (HO2•) formed from the oxidation will instead react with themselves to form peroxides, and not produce ozone.
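The simple (unmodified) Leighton relationship gives the flavour of such an estimate: in the daytime photostationary state, the ozone concentration is approximately j(NO2)·[NO2] / (k·[NO]), where j(NO2) is the NO2 photolysis frequency and k is the rate constant of the NO + O3 reaction; the modified form adds terms for peroxy radicals. The sketch below uses representative midday magnitudes chosen only for illustration, not measured values.

```python
# Simple Leighton photostationary-state estimate of ozone:
#     [O3] ≈ j(NO2) * [NO2] / (k * [NO])
# The constants below are representative midday magnitudes used only for
# illustration; the modified relationship adds peroxy-radical corrections.

J_NO2 = 8.0e-3                # s^-1, illustrative NO2 photolysis frequency
K_NO_O3 = 1.8e-14             # cm^3 molecule^-1 s^-1, NO + O3 near 298 K
AIR_NUMBER_DENSITY = 2.46e19  # molecules cm^-3 at roughly 298 K and 1 atm

def ozone_ppb(no2_ppb, no_ppb):
    """Ozone mixing ratio (ppb) implied by the simple Leighton relationship."""
    o3_number_density = J_NO2 * (no2_ppb / no_ppb) / K_NO_O3  # molecules cm^-3
    return o3_number_density / AIR_NUMBER_DENSITY * 1e9

print(f"NO2/NO = 4 gives O3 ≈ {ozone_ppb(40.0, 10.0):.0f} ppb")
```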
Health effects
Health effects depend on ozone precursors, which are a group of pollutants primarily generated during the combustion of fossil fuels. Ground-level ozone is created by nitrogen oxides reacting with organic compounds in the presence of sunlight. There are many man-made sources of these organic compounds, including vehicle and industrial emissions, along with several other sources. The reaction of these precursors with daylight ultraviolet (UV) rays creates ground-level ozone pollution (tropospheric ozone). Ozone is known to have the following health effects at concentrations common in urban air:
Irritation of the respiratory system, causing coughing, throat irritation, and/or an uncomfortable sensation in the chest. Ozone affects people with underlying respiratory conditions such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer as well those who spend a lot of time being active outdoors.
Reduced lung function, making it more difficult to breathe deeply and vigorously. Breathing may become more rapid and more shallow than normal, and a person's ability to engage in vigorous activities may be limited. Ozone causes the muscles in the airways to constrict which traps air in the alveoli leading to wheezing and shortness of breath.
Aggravation of asthma. When ozone levels are high, more people with asthma have attacks that require a doctor's attention or use of medication. One reason this happens is that ozone makes people more sensitive to allergens, which in turn trigger asthma attacks.
Increased susceptibility to respiratory infections. Examples of these respiratory complications include bronchitis, emphysema, and asthma.
Inflammation and damage to the lining of the lungs. Within a few days, the damaged cells are shed and replaced much like the skin peels after a sunburn. Animal studies suggest that if this type of inflammation happens repeatedly over a long time period (months, years, a lifetime), lung tissue may become permanently scarred, resulting in permanent loss of lung function and a lower quality of life.
More recent data suggests that ozone can also have harmful effects via the inflammatory pathway leading to heart disease, type 2 diabetes, and other metabolic disorders.
It was observed in the 1990s that ground-level ozone can advance death by a few days in predisposed and vulnerable populations. A statistical study of 95 large urban communities in the United States found significant association between ozone levels and premature death. The study estimated that a one-third reduction in urban ozone concentrations would save roughly 4000 lives per year (Bell et al., 2004). Tropospheric ozone causes approximately 22,000 premature deaths per year in 25 countries in the European Union. (WHO, 2008)
Problem areas
The United States Environmental Protection Agency has developed an Air Quality index to help explain air pollution levels to the general public. 8-hour average ozone mole fractions of 76 to 95 nmol/mol are described as "Unhealthy for Sensitive Groups", 96 nmol/mol to 115 nmol/mol as unhealthy and 116 nmol/mol to 404 nmol/mol as very unhealthy. The EPA has designated over 300 counties of the United States, clustered around the most heavily populated areas (especially in California and the Northeast), as failing to comply with the National Ambient Air Quality Standards.
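The breakpoints quoted above can be encoded as a small lookup, shown in the sketch below. Values below 76 nmol/mol fall into the "Good" and "Moderate" bands, which are not broken out in the text, and values above 404 nmol/mol lie beyond the range quoted here.

```python
# Map an 8-hour average ozone mole fraction (nmol/mol, i.e. ppb) onto the
# descriptive categories quoted above. Bands outside the quoted range are
# reported generically rather than guessed at.

def ozone_category(nmol_per_mol):
    if nmol_per_mol < 76:
        return "below the quoted thresholds (Good/Moderate bands)"
    if nmol_per_mol <= 95:
        return "Unhealthy for Sensitive Groups"
    if nmol_per_mol <= 115:
        return "Unhealthy"
    if nmol_per_mol <= 404:
        return "Very Unhealthy"
    return "above the range quoted here"

for value in (60, 80, 100, 150):
    print(value, "nmol/mol ->", ozone_category(value))
```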
In 2000, the Ozone Annex was added to the U.S.–Canada Air Quality Agreement. The Ozone Annex addresses transboundary air pollution that contributes to ground-level ozone, which contributes to smog. The main goal was to attain proper ozone air quality standards in both countries. The North Front Range of Colorado has been out of compliance with the Federal Air Quality standards. The U.S. EPA designated Fort Collins as part of the ozone non-attainment area in November 2007. This means that the U.S.’s environmental law considers the air quality to be worse than the National Ambient Air Quality Standards, which are defined in the Clean Air Act Amendments. In 2024, the Lung Association ranked Fort Collins 16th in the nation for high ozone days out of 228 metropolitan areas, 38 for 24-hour particle pollution out of 223 metropolitan areas, and 136 for annual particle pollution out of 204 metropolitan areas.
In monitoring air quality, Boulder County, Colorado is classified by the EPA as part of a nine-county group that includes the Denver metro area and North Front Range region. This nine-county zone has recorded ozone levels that exceed the EPA's ozone standard since 2004. Attempts have been made under the Early Action Compact to bring the area's air quality up to the EPA's standards. However, since 2004 ozone pollution in Boulder County has regularly failed to meet federal standards set by the Environmental Protection Agency. The County of Boulder continues trying to alleviate some of the ozone pollution through programming that encourages people to drive less, and stop ozone polluting activities during the heat of the day.
Ozone and the climate
Ground-level ozone is both naturally occurring and anthropogenically formed. It is the primary constituent of urban smog, forming naturally as a secondary pollutant through photochemical reactions involving nitrogen oxides and volatile organic compounds in the presence of bright sunshine with high temperatures.
Regardless of whether it occurs naturally or is anthropogenically formed, the change in ozone concentrations in the upper troposphere will:
exert a considerable impact on global warming, because it is a key air pollutant and greenhouse gas, and
impact the production of surface level ozone (contributing again to climate change).
As a result, photochemical smog pollution at the earth's surface, as well as stratospheric ozone depletion, have received a lot of attention in recent years. The disruptions in the "free troposphere" are likely to be the focus of the next cycle of scientific concern. In several parts of the northern hemisphere, tropospheric ozone levels have been rising. On various scales, this may have an impact on moisture levels, cloud volume and dispersion, precipitation, and atmospheric dynamics. A rising environment, on the other hand, favours ozone synthesis and accumulation in the atmosphere, owing to two physicochemical mechanisms. First, a warming climate alters humidity and wind conditions in some parts of the world, resulting in a reduction in the frequency of surface cyclones.
Climate change impacts on processes that affect ozone
Changes in air temperature and water content affect the air's chemistry and the rates of chemical reactions that create and remove ozone. Many chemical reaction rates increase with temperature and lead to increased ozone production. Climate change projections show that rising temperatures and water vapour in the atmosphere will likely increase surface ozone in polluted areas like the eastern United States. In particular, the degradation of the pollutant peroxyacetylnitrate (PAN), which is a significant reservoir species for long-range transport of ozone precursors, is accelerated by rising temperatures. As a result, as the temperature rises, the lifetime of PAN reduces, changing the long-range transport of ozone pollution. Second, the same radiative forcing that causes global warming would chill the stratosphere. This cooling is projected to result in a relative rise in ozone (O3) depletion in the polar region, as well as an increase in the frequency of ozone holes.
Ozone depletion, on the other hand, is a radiative forcing of the climate system. Two opposite effects exist: Reduced ozone causes the stratosphere to absorb less solar radiation, cooling it while warming the troposphere; as a result, the stratosphere emits less long-wave radiation downward, cooling the troposphere. The IPCC believes that "measured stratospheric O3 losses over the past two decades have generated a negative forcing of the surface-troposphere system" of around 0.15 ± 0.10 watts per square metre (W/m2). Furthermore, rising air temperatures often enhance ozone-forming processes, which in turn affects the climate as well.
Also, as climate change causes sea ice to melt, the sea ice releases molecular chlorine, which reacts with UV radiation to produce chlorine radicals. Because chlorine radicals are highly reactive, they can expedite the degradation of methane and tropospheric ozone and the oxidation of mercury to more toxic forms. Ozone production rises during heat waves, because plants absorb less ozone. It is estimated that curtailed ozone absorption by plants could be responsible for the loss of 460 lives in the UK in the hot summer of 2006. A similar investigation to assess the joint effects of ozone and heat during the European heat waves in 2003 concluded that these effects appear to be additive.
See also
Atmospheric chemistry
National Ambient Air Quality Standards (USA)
Ozone
Photochemical smog
Troposphere
Tropospheric ozone depletion events
References
Further reading
External links
European Air Quality Index, European Environment Agency
Ozoneweb - near real-time ozone conditions across Europe, The European Environment Agency (ozoneweb) (defunct)
Ground-level Ozone Pollution, U.S. Environmental Protection Agency
Ground-level Ozone, U.S. Environmental Protection Agency (November 2015 archived)
Ground-level Ozone, U.S. Environmental Protection Agency (November 2014 archived)
US Live Ozone Map, U.S. Environmental Protection Agency
Air Quality Designations for Ozone, U.S. Environmental Protection Agency
Tropospheric Ozone, the Polluter UCAR (University Corporation for Atmospheric Research) (archived 2017)
Ozone and Air Quality map, NASA
Total Ozone Mapping Spectrometer (satellite monitoring 1999–2011) (archived)
WHO-Europe reports: Health Aspects of Air Pollution (2002) (PDF) and "Answer to follow-up questions from CAFE (2003) (PDF)
Air Quality: Surface-Level Ozone, NASA
Ambient Air Monitoring and Quality Assurance/Quality Control Guidelines: National Air Pollution Surveillance Program, Canadian Council of Ministers of the Environment, 2019 (PDF)
Airborne pollutants
Atmosphere of Earth
Atmosphere
Ozone
Smog | Ground-level ozone | [
"Physics",
"Chemistry"
] | 3,748 | [
"Visibility",
"Physical quantities",
"Smog",
"Oxidizing agents",
"Ozone"
] |
215,226 | https://en.wikipedia.org/wiki/Thermionic%20emission | Thermionic emission is the liberation of charged particles from a hot electrode whose thermal energy gives some particles enough kinetic energy to escape the material's surface. The particles, sometimes called thermions in early literature, are now known to be ions or electrons. Thermal electron emission specifically refers to emission of electrons and occurs when thermal energy overcomes the material's work function.
After emission, an opposite charge of equal magnitude to the emitted charge is initially left behind in the emitting region. But if the emitter is connected to a battery, that remaining charge is neutralized by charge supplied by the battery as particles are emitted, so the emitter will have the same charge it had before emission. This facilitates additional emission to sustain an electric current. In 1880, while developing his incandescent light bulb, Thomas Edison noticed this current, so subsequent scientists referred to it as the Edison effect, though it was not until after the 1897 discovery of the electron that scientists understood that electrons were emitted and why.
Thermionic emission is crucial to the operation of a variety of electronic devices and can be used for electricity generation (such as thermionic converters and electrodynamic tethers) or cooling. Thermionic vacuum tubes emit electrons from a hot cathode into an enclosed vacuum and may steer those emitted electrons with applied voltage. The hot cathode can be a metal filament, a coated metal filament, or a separate structure of metal or carbides or borides of transition metals. Vacuum emission from metals tends to become significant only at high temperatures, typically above about 1000 K. Charge flow increases dramatically with temperature.
The term thermionic emission is now also used to refer to any thermally-excited charge emission process, even when the charge is emitted from one solid-state region into another.
History
Because the electron was not identified as a separate physical particle until the work of J. J. Thomson in 1897, the word "electron" was not used when discussing experiments that took place before this date.
The phenomenon was initially reported in 1853 by Edmond Becquerel. It was observed again in 1873 by Frederick Guthrie in Britain. While doing work on charged objects, Guthrie discovered that a red-hot iron sphere with a negative charge would lose its charge (by somehow discharging it into air). He also found that this did not happen if the sphere had a positive charge. Other early contributors included Johann Wilhelm Hittorf (1869–1883), Eugen Goldstein (1885), and Julius Elster and Hans Friedrich Geitel (1882–1889).
Edison effect
Thermionic emission was observed again by Thomas Edison in 1880 while his team was trying to discover the reason for breakage of carbonized bamboo filaments and undesired blackening of the interior surface of the bulbs in his incandescent lamps. This blackening was carbon deposited from the filament and was darkest near the positive end of the filament loop, which apparently cast a light shadow on the glass, as if negatively-charged carbon emanated from the negative end and was attracted towards and sometimes absorbed by the positive end of the filament loop. This projected carbon was deemed "electrical carrying" and initially ascribed to an effect in Crookes tubes where negatively-charged cathode rays from ionized gas move from a negative to a positive electrode. To try to redirect the charged carbon particles to a separate electrode instead of the glass, Edison did a series of experiments (a first inconclusive one is in his notebook on 13 February 1880), eventually succeeding with bulbs that contained an extra electrode sealed inside the envelope.
This effect had many applications. Edison found that the current emitted by the hot filament increased rapidly with voltage, and filed a patent for a voltage-regulating device using the effect on 15 November 1883, notably the first US patent for an electronic device. He found that sufficient current would pass through the device to operate a telegraph sounder, which was exhibited at the International Electrical Exhibition of 1884 in Philadelphia. Visiting British scientist William Preece received several bulbs from Edison to investigate. Preece's 1885 paper on them referred to the one-way current through the partial vacuum as the Edison effect, although that term is occasionally used to refer to thermionic emission itself. British physicist John Ambrose Fleming, working for the British Wireless Telegraphy Company, discovered that the Edison effect could be used to detect radio waves. Fleming went on to develop a two-element thermionic vacuum tube diode called the Fleming valve (patented 16 November 1904). Thermionic diodes can also be configured to convert a heat difference to electric power directly without moving parts as a device called a thermionic converter, a type of heat engine.
Richardson's law
Following J. J. Thomson's identification of the electron in 1897, the British physicist Owen Willans Richardson began work on the topic that he later called "thermionic emission". He received a Nobel Prize in Physics in 1928 "for his work on the thermionic phenomenon and especially for the discovery of the law named after him".
From band theory, there are one or two electrons per atom in a solid that are free to move from atom to atom. This is sometimes collectively referred to as a "sea of electrons". Their velocities follow a statistical distribution, rather than being uniform, and occasionally an electron will have enough velocity to exit the metal without being pulled back in. The minimum amount of energy needed for an electron to leave a surface is called the work function. The work function is characteristic of the material and for most metals is on the order of several electronvolts (eV). Thermionic currents can be increased by decreasing the work function. This often-desired goal can be achieved by applying various oxide coatings to the wire.
In 1901 Richardson published the results of his experiments: the current from a heated wire seemed to depend exponentially on the temperature of the wire, with a mathematical form similar to a modified Arrhenius equation. Later, he proposed that the emission law should have the mathematical form
J = A_G T^2 e^(−W/(k T))
where J is the emission current density, T is the temperature of the metal, W is the work function of the metal, k is the Boltzmann constant, and AG is a parameter discussed next.
In the period 1911 to 1930, as physical understanding of the behaviour of electrons in metals increased, various theoretical expressions (based on different physical assumptions) were put forward for AG, by Richardson, Saul Dushman, Ralph H. Fowler, Arnold Sommerfeld and Lothar Wolfgang Nordheim. Over 60 years later, there is still no consensus among interested theoreticians as to the exact expression of AG, but there is agreement that AG must be written in the form:
A_G = λ_R A_0
where λR is a material-specific correction factor that is typically of order 0.5, and A0 is a universal constant given by
A_0 = 4π m_e k^2 q_e / h^3 ≈ 1.2 × 10^6 A m^−2 K^−2
where m_e and q_e are the mass and charge of an electron, respectively, and h is the Planck constant.
In fact, by about 1930 there was agreement that, due to the wave-like nature of electrons, some proportion rav of the outgoing electrons would be reflected as they reached the emitter surface, so the emission current density would be reduced, and λR would have the value (1 − rav). Thus, one sometimes sees the thermionic emission equation written in the form:
J = (1 − r_av) A_0 T^2 e^(−W/(k T)).
However, a modern theoretical treatment by Modinos assumes that the band-structure of the emitting material must also be taken into account. This would introduce a second correction factor λB into λR, giving A_G = λ_B (1 − r_av) A_0. Experimental values for the "generalized" coefficient AG are generally of the order of magnitude of A0, but do differ significantly as between different emitting materials, and can differ as between different crystallographic faces of the same material. At least qualitatively, these experimental differences can be explained as due to differences in the value of λR.
Considerable confusion exists in the literature of this area because: (1) many sources do not distinguish between AG and A0, but just use the symbol A (and sometimes the name "Richardson constant") indiscriminately; (2) equations with and without the correction factor here denoted by λR are both given the same name; and (3) a variety of names exist for these equations, including "Richardson equation", "Dushman's equation", "Richardson–Dushman equation" and "Richardson–Laue–Dushman equation". In the literature, the elementary equation is sometimes given in circumstances where the generalized equation would be more appropriate, and this in itself can cause confusion. To avoid misunderstandings, the meaning of any "A-like" symbol should always be explicitly defined in terms of the more fundamental quantities involved.
Because of the exponential function, the current increases rapidly with temperature when kT is less than W. (For essentially every material, melting occurs well before kT reaches W.)
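As a numerical illustration of how steeply the Richardson law depends on temperature, the sketch below evaluates J = λR·A0·T²·exp(−W/kT) for a tungsten-like emitter. The work function of 4.5 eV, the correction factor λR = 0.5, and the temperatures are representative round numbers chosen only for illustration, not measured data.

```python
import math

# Richardson–Dushman emission current density:
#     J = lambda_R * A0 * T^2 * exp(-W / (k T))
# with representative values for a tungsten-like emitter. The numbers are
# illustrative only and show the steep temperature dependence of J.

K_B_EV = 8.617333e-5   # Boltzmann constant, eV/K
A0 = 1.20173e6         # universal Richardson constant, A m^-2 K^-2

def emission_current_density(temperature_K, work_function_eV, lambda_R=0.5):
    """Thermionic current density in A/m^2."""
    exponent = -work_function_eV / (K_B_EV * temperature_K)
    return lambda_R * A0 * temperature_K**2 * math.exp(exponent)

for T in (2000, 2500, 3000):
    J = emission_current_density(T, work_function_eV=4.5)
    print(f"T = {T} K: J ≈ {J:.2e} A/m^2 ({J / 1e4:.2e} A/cm^2)")
```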
The thermionic emission law has been recently revised for 2D materials in various models.
Schottky emission
In electron emission devices, especially electron guns, the thermionic electron emitter will be biased negative relative to its surroundings. This creates an electric field of magnitude E at the emitter surface. Without the field, the surface barrier seen by an escaping Fermi-level electron has height W equal to the local work-function. The electric field lowers the surface barrier by an amount ΔW, and increases the emission current. This is known as the Schottky effect (named for Walter H. Schottky) or field enhanced thermionic emission. It can be modeled by a simple modification of the Richardson equation, by replacing W by (W − ΔW). This gives the equation
J(E, T, W) = A_G T^2 e^(−(W − ΔW)/(k T))
ΔW = sqrt(q_e^3 E / (4π ε0))
where ε0 is the electric constant (also called the vacuum permittivity).
Electron emission that takes place in the field-and-temperature-regime where this modified equation applies is often called Schottky emission. This equation is relatively accurate at low to moderate electric field strengths. At higher field strengths, so-called Fowler–Nordheim (FN) tunneling begins to contribute significant emission current. In this regime, the combined effects of field-enhanced thermionic and field emission can be modeled by the Murphy-Good equation for thermo-field (T-F) emission. At even higher fields, FN tunneling becomes the dominant electron emission mechanism, and the emitter operates in the so-called "cold field electron emission (CFE)" regime.
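The size of the Schottky barrier lowering can be estimated directly from the expression above, ΔW = sqrt(q_e^3 E / (4π ε0)). The sketch below evaluates it for a few field strengths together with a representative 4.5 eV work function; the chosen values are illustrative only.

```python
import math

# Schottky barrier lowering Delta_W = sqrt(e^3 * E / (4 pi eps0)), written
# here in electronvolts as sqrt(e * E / (4 pi eps0)). The field strengths
# and the 4.5 eV work function are representative values for illustration.

E_CHARGE = 1.602176634e-19  # C
EPS0 = 8.8541878128e-12     # F/m

def schottky_lowering_eV(field_V_per_m):
    """Barrier lowering in eV for a surface electric field E in V/m."""
    return math.sqrt(E_CHARGE * field_V_per_m / (4 * math.pi * EPS0))

work_function = 4.5  # eV
for field in (1e6, 1e7, 1e8):
    dW = schottky_lowering_eV(field)
    print(f"E = {field:.0e} V/m: Delta_W ≈ {dW:.3f} eV, "
          f"effective barrier ≈ {work_function - dW:.2f} eV")
```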
Thermionic emission can also be enhanced by interaction with other forms of excitation such as light. For example, excited Cesium (Cs) vapors in thermionic converters form clusters of Cs-Rydberg matter which yield a decrease of collector emitting work function from 1.5 eV to 1.0–0.7 eV. Due to long-lived nature of Rydberg matter this low work function remains low which essentially increases the low-temperature converter's efficiency.
Photon-enhanced thermionic emission
Photon-enhanced thermionic emission (PETE) is a process developed by scientists at Stanford University that harnesses both the light and heat of the sun to generate electricity and increases the efficiency of solar power production by more than twice the current levels. The device developed for the process reaches peak efficiency above 200 °C, while most silicon solar cells become inert after reaching 100 °C. Such devices work best in parabolic dish collectors, which reach temperatures up to 800 °C. Although the team used a gallium nitride semiconductor in its proof-of-concept device, it claims that the use of gallium arsenide can increase the device's efficiency to 55–60 percent, nearly triple that of existing systems, and 12–17 percent more than existing 43 percent multi-junction solar cells.
See also
Space charge
Thermal ionization
Nottingham effect
References
External links
How vacuum tubes really work with a section on thermionic emission, with equations, john-a-harper.com.
Thermionic Phenomena and the Laws which Govern Them, Owen Richardson's Nobel lecture on thermionics. nobelprize.org. December 12, 1929. (PDF)
Derivations of thermionic emission equations from an undergraduate lab, csbsju.edu.
Atomic physics
Electricity
Energy conversion
Vacuum tubes
Thomas Edison | Thermionic emission | [
"Physics",
"Chemistry"
] | 2,509 | [
"Vacuum tubes",
"Vacuum",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Matter",
" and optical physics"
] |
215,706 | https://en.wikipedia.org/wiki/Supermassive%20black%20hole | A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass being on the order of hundreds of thousands, or millions to billions, of times the mass of the Sun (). Black holes are a class of astronomical objects that have undergone gravitational collapse, leaving behind spheroidal regions of space from which nothing can escape, including light. Observational evidence indicates that almost every large galaxy has a supermassive black hole at its center. For example, the Milky Way galaxy has a supermassive black hole at its center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars.
Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way's center (Sagittarius A*).
Description
Supermassive black holes are classically defined as black holes with a mass above 100,000 () solar masses (); some have masses of . Supermassive black holes have physical properties that clearly distinguish them from lower-mass classifications. First, the tidal forces in the vicinity of the event horizon are significantly weaker for supermassive black holes. The tidal force on a body at a black hole's event horizon is inversely proportional to the square of the black hole's mass: a person at the event horizon of a black hole experiences about the same tidal force between their head and feet as a person on the surface of the Earth. Unlike with stellar-mass black holes, one would not experience significant tidal force until very deep into the black hole's event horizon.
It is somewhat counterintuitive to note that the average density of a SMBH within its event horizon (defined as the mass of the black hole divided by the volume of space within its Schwarzschild radius) can be smaller than the density of water. This is because the Schwarzschild radius () is directly proportional to its mass. Since the volume of a spherical object (such as the event horizon of a non-rotating black hole) is directly proportional to the cube of the radius, the density of a black hole is inversely proportional to the square of the mass, and thus higher mass black holes have a lower average density.
The Schwarzschild radius of the event horizon of a nonrotating and uncharged supermassive black hole of around is comparable to the semi-major axis of the orbit of planet Uranus, which is about 19 AU.
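A quick order-of-magnitude check of both statements above is sketched below; the one-billion-solar-mass figure is an illustrative assumption chosen only to show the scaling.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2GM/c^2 of a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

def mean_density(mass_kg):
    """Mass divided by the volume of a sphere of radius r_s."""
    r_s = schwarzschild_radius(mass_kg)
    return mass_kg / ((4.0 / 3.0) * math.pi * r_s**3)

mass = 1e9 * M_SUN   # illustrative 10^9 solar-mass supermassive black hole
print(f"r_s ~ {schwarzschild_radius(mass) / AU:.1f} AU")
print(f"mean density ~ {mean_density(mass):.0f} kg/m^3 (water is ~1000 kg/m^3)")
```

For the assumed mass, the Schwarzschild radius comes out near 20 AU and the mean density near 20 kg/m³, well below that of water, consistent with the inverse-square dependence of density on mass.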
Some astronomers refer to black holes of greater than as ultramassive black holes (UMBHs or UBHs), but the term is not broadly used. Possible examples include the black holes at the cores of TON 618, NGC 6166, ESO 444-46 and NGC 4889, which are among the most massive black holes known.
Some studies have suggested that the maximum natural mass that a black hole can reach, while being a luminous accretor (featuring an accretion disk), is typically on the order of about . However, a 2020 study suggested even larger black holes, dubbed stupendously large black holes (SLABs), with masses greater than , could exist based on the models used; some studies place the black hole at the core of Phoenix A in this category.
History of research
The story of how supermassive black holes were found began with the investigation by Maarten Schmidt of the radio source 3C 273 in 1963. Initially this was thought to be a star, but the spectrum proved puzzling. It was determined to be hydrogen emission lines that had been redshifted, indicating the object was moving away from the Earth. Hubble's law showed that the object was located several billion light-years away, and thus must be emitting the energy equivalent of hundreds of galaxies. The rate of light variations of the source dubbed a quasi-stellar object, or quasar, suggested the emitting region had a diameter of one parsec or less. Four such sources had been identified by 1964.
In 1963, Fred Hoyle and W. A. Fowler proposed the existence of hydrogen-burning supermassive stars (SMS) as an explanation for the compact dimensions and high energy output of quasars. These would have a mass of about . However, Richard Feynman noted stars above a certain critical mass are dynamically unstable and would collapse into a black hole, at least if they were non-rotating. Fowler then proposed that these supermassive stars would undergo a series of collapse and explosion oscillations, thereby explaining the energy output pattern. Appenzeller and Fricke (1972) built models of this behavior, but found that the resulting star would still undergo collapse, concluding that a non-rotating SMS "cannot escape collapse to a black hole by burning its hydrogen through the CNO cycle".
Edwin E. Salpeter and Yakov Zeldovich made the proposal in 1964 that matter falling onto a massive compact object would explain the properties of quasars. It would require a mass of around to match the output of these objects. Donald Lynden-Bell noted in 1969 that the infalling gas would form a flat disk that spirals into the central "Schwarzschild throat". He noted that the relatively low output of nearby galactic cores implied these were old, inactive quasars. Meanwhile, in 1967, Martin Ryle and Malcolm Longair suggested that nearly all sources of extra-galactic radio emission could be explained by a model in which particles are ejected from galaxies at relativistic velocities, meaning they are moving near the speed of light. Martin Ryle, Malcolm Longair, and Peter Scheuer then proposed in 1973 that the compact central nucleus could be the original energy source for these relativistic jets.
Arthur M. Wolfe and Geoffrey Burbidge noted in 1970 that the large velocity dispersion of the stars in the nuclear region of elliptical galaxies could only be explained by a large mass concentration at the nucleus; larger than could be explained by ordinary stars. They showed that the behavior could be explained by a massive black hole with up to , or a large number of smaller black holes with masses below . Dynamical evidence for a massive dark object was found at the core of the active elliptical galaxy Messier 87 in 1978, initially estimated at . Discovery of similar behavior in other galaxies soon followed, including the Andromeda Galaxy in 1984 and the Sombrero Galaxy in 1988.
Donald Lynden-Bell and Martin Rees hypothesized in 1971 that the center of the Milky Way galaxy would contain a massive black hole. Sagittarius A* was discovered and named on February 13 and 15, 1974, by astronomers Bruce Balick and Robert Brown using the Green Bank Interferometer of the National Radio Astronomy Observatory. They discovered a radio source that emits synchrotron radiation; it was found to be dense and immobile because of its gravitation. This was, therefore, the first indication that a supermassive black hole exists in the center of the Milky Way.
The Hubble Space Telescope, launched in 1990, provided the resolution needed to perform more refined observations of galactic nuclei. In 1994 the Faint Object Spectrograph on the Hubble was used to observe Messier 87, finding that ionized gas was orbiting the central part of the nucleus at a velocity of ±500 km/s. The data indicated a concentrated mass of lay within a span, providing strong evidence of a supermassive black hole.
Using the Very Long Baseline Array to observe Messier 106, Miyoshi et al. (1995) were able to demonstrate that the emission from an H2O maser in this galaxy came from a gaseous disk in the nucleus that orbited a concentrated mass of , which was constrained to a radius of 0.13 parsecs. Their ground-breaking research noted that a swarm of solar mass black holes within a radius this small would not survive for long without undergoing collisions, making a supermassive black hole the sole viable candidate. Accompanying this observation which provided the first confirmation of supermassive black holes was the discovery of the highly broadened, ionised iron Kα emission line (6.4 keV) from the galaxy MCG-6-30-15. The broadening was due to the gravitational redshift of the light as it escaped from just 3 to 10 Schwarzschild radii from the black hole.
On April 10, 2019, the Event Horizon Telescope collaboration released the first horizon-scale image of a black hole, in the center of the galaxy Messier 87. In March 2020, astronomers suggested that additional subrings should form the photon ring, proposing a way of better detecting these signatures in the first black hole image.
Formation
The origin of supermassive black holes remains an active field of research. Astrophysicists agree that black holes can grow by accretion of matter and by merging with other black holes. There are several hypotheses for the formation mechanisms and initial masses of the progenitors, or "seeds", of supermassive black holes. Independently of the specific formation channel for the black hole seed, given sufficient mass nearby, it could accrete to become an intermediate-mass black hole and possibly a SMBH if the accretion rate persists.
Distant and early supermassive black holes, such as J0313–1806, and ULAS J1342+0928, are hard to explain so soon after the Big Bang. Some postulate they might come from direct collapse of dark matter with self-interaction. A small minority of sources argue that they may be evidence that the Universe is the result of a Big Bounce, instead of a Big Bang, with these supermassive black holes being formed before the Big Bounce.
First stars
The early progenitor seeds may be black holes of that are left behind by the explosions of massive stars and grow by accretion of matter. Another model involves a dense stellar cluster undergoing core collapse as the negative heat capacity of the system drives the velocity dispersion in the core to relativistic speeds.
Before the first stars, large gas clouds could collapse into a "quasi-star", which would in turn collapse into a black hole of around . These stars may have also been formed by dark matter halos drawing in enormous amounts of gas by gravity, which would then produce supermassive stars with . The "quasi-star" becomes unstable to radial perturbations because of electron-positron pair production in its core and could collapse directly into a black hole without a supernova explosion (which would eject most of its mass, preventing the black hole from growing as fast).
A more recent theory proposes that SMBH seeds were formed in the very early universe each from the collapse of a supermassive star with mass of around .
Direct-collapse and primordial black holes
Large, high-redshift clouds of metal-free gas, when irradiated by a sufficiently intense flux of Lyman–Werner photons, can avoid cooling and fragmenting, thus collapsing as a single object due to self-gravitation. The core of the collapsing object reaches extremely large values of matter density, of the order of about , and triggers a general relativistic instability. Thus, the object collapses directly into a black hole, without passing from the intermediate phase of a star, or of a quasi-star. These objects have a typical mass of about and are named direct collapse black holes.
A 2022 computer simulation showed that the first supermassive black holes can arise in rare turbulent clumps of gas, called primordial halos, that were fed by unusually strong streams of cold gas. The key simulation result was that cold flows suppressed star formation in the turbulent halo until the halo's gravity was finally able to overcome the turbulence and formed two direct-collapse black holes of and . The birth of the first SMBHs can therefore be a result of standard cosmological structure formation — contrary to what had been thought for almost two decades.
Primordial black holes (PBHs) could have been produced directly from external pressure in the first moments after the Big Bang. These black holes would then have more time than any of the above models to accrete, allowing them sufficient time to reach supermassive sizes. Formation of black holes from the deaths of the first stars has been extensively studied and corroborated by observations. The other models for black hole formation listed above are theoretical.
The formation of a supermassive black hole requires a relatively small volume of highly dense matter having small angular momentum. Normally, the process of accretion involves transporting a large initial endowment of angular momentum outwards, and this appears to be the limiting factor in black hole growth. This is a major component of the theory of accretion disks. Gas accretion is both the most efficient and the most conspicuous way in which black holes grow. The majority of the mass growth of supermassive black holes is thought to occur through episodes of rapid gas accretion, which are observable as active galactic nuclei or quasars.
Observations reveal that quasars were much more frequent when the Universe was younger, indicating that supermassive black holes formed and grew early. A major constraining factor for theories of supermassive black hole formation is the observation of distant luminous quasars, which indicate that supermassive black holes of had already formed when the Universe was less than one billion years old. This suggests that supermassive black holes arose very early in the Universe, inside the first massive galaxies.
Maximum mass limit
There is a natural upper limit to how large supermassive black holes can grow. Supermassive black holes in any quasar or active galactic nucleus (AGN) appear to have a theoretical upper limit of physically around for typical parameters, as anything above this slows growth down to a crawl (the slowdown tends to start around ) and causes the unstable accretion disk surrounding the black hole to coalesce into stars that orbit it. A study concluded that the radius of the innermost stable circular orbit (ISCO) for SMBH masses above this limit exceeds the self-gravity radius, making disc formation no longer possible.
A larger upper limit of around has been proposed as the absolute maximum mass limit for an accreting SMBH in extreme cases, for example at its maximal prograde spin with a dimensionless spin parameter of a = 1, although the maximum limit for a black hole's spin parameter is very slightly lower at a = 0.9982. At masses just below the limit, the disc luminosity of a field galaxy is likely to be below the Eddington limit and not strong enough to trigger the feedback underlying the M–sigma relation, so SMBHs close to the limit can evolve above this.
It was noted that black holes close to this limit are likely to be even rarer, as they would require the accretion disc to be almost permanently prograde while the black hole grows, because the spin-down effect of retrograde accretion is larger than the spin-up by prograde accretion, due to its ISCO and therefore its lever arm. This would require the hole spin to be permanently correlated with a fixed direction of the potential controlling gas flow, within the black hole's host galaxy, and thus would tend to produce a spin axis and hence AGN jet direction, which is similarly aligned with the galaxy. Current observations do not support this correlation.
The so-called 'chaotic accretion' presumably has to involve multiple small-scale events, essentially random in time and orientation if it is not controlled by a large-scale potential in this way. This would lead the accretion statistically to spin-down, due to retrograde events having larger lever arms than prograde, and occurring almost as often. There are also other interactions with large SMBHs that tend to reduce their spin, particularly mergers with other black holes, which can statistically decrease the spin. All of these considerations suggested that SMBHs usually cross the critical theoretical mass limit at modest values of their spin parameters, so that in all but rare cases.
Although modern UMBHs within quasars and galactic nuclei cannot grow beyond around through the accretion disk given the current age of the universe, some of these monster black holes are predicted to keep growing, up to stupendously large masses of perhaps during the collapse of superclusters of galaxies in the extremely far future of the universe.
Activity and galactic evolution
Gravitation from supermassive black holes in the center of many galaxies is thought to power active objects such as Seyfert galaxies and quasars, and the relationship between the mass of the central black hole and the mass of the host galaxy depends upon the galaxy type. An empirical correlation between the size of supermassive black holes and the stellar velocity dispersion of a galaxy bulge is called the M–sigma relation.
An AGN is now considered to be a galactic core hosting a massive black hole that is accreting matter and displays a sufficiently strong luminosity. The nuclear region of the Milky Way, for example, lacks sufficient luminosity to satisfy this condition. The unified model of AGN is the concept that the large range of observed properties of the AGN taxonomy can be explained using just a small number of physical parameters. For the initial model, these values consisted of the angle of the accretion disk's torus to the line of sight and the luminosity of the source. AGN can be divided into two main groups: a radiative mode AGN in which most of the output is in the form of electromagnetic radiation through an optically thick accretion disk, and a jet mode in which relativistic jets emerge perpendicular to the disk.
Mergers and recoiled SMBHs
The interaction of a pair of SMBH-hosting galaxies can lead to merger events. Dynamical friction on the hosted SMBH objects causes them to sink toward the center of the merged mass, eventually forming a pair with a separation of under a kiloparsec. The interaction of this pair with surrounding stars and gas will then gradually bring the SMBHs together as a gravitationally bound binary system with a separation of ten parsecs or less. Once the pair draw as close as 0.001 parsecs, gravitational radiation will cause them to merge. By the time this happens, the resulting galaxy will have long since relaxed from the merger event, with the initial starburst activity and AGN having faded away.
The gravitational waves from this coalescence can give the resulting SMBH a velocity boost of up to several thousand km/s, propelling it away from the galactic center and possibly even ejecting it from the galaxy. This phenomenon is called a gravitational recoil. The other possible way to eject a black hole is the classical slingshot scenario, also called slingshot recoil. In this scenario first a long-lived binary black hole forms through a merger of two galaxies. A third SMBH is introduced in a second merger and sinks into the center of the galaxy. Due to the three-body interaction one of the SMBHs, usually the lightest, is ejected. Due to conservation of linear momentum the other two SMBHs are propelled in the opposite direction as a binary. All SMBHs can be ejected in this scenario. An ejected black hole is called a runaway black hole.
There are different ways to detect recoiling black holes. Often a displacement of a quasar/AGN from the center of a galaxy or a spectroscopic binary nature of a quasar/AGN is seen as evidence for a recoiled black hole.
Candidate recoiling black holes include NGC 3718, SDSS1133, 3C 186, E1821+643 and SDSSJ0927+2943. Candidate runaway black holes are HE0450–2958, CID-42 and objects around RCP 28. Runaway supermassive black holes may trigger star formation in their wakes. A linear feature near the dwarf galaxy RCP 28 was interpreted as the star-forming wake of a candidate runaway black hole.
Hawking radiation
Hawking radiation is black-body radiation that is predicted to be released by black holes, due to quantum effects near the event horizon. This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If black holes evaporate via Hawking radiation, a non-rotating and uncharged stupendously large black hole with a mass of will evaporate in around . Black holes formed during the predicted collapse of superclusters of galaxies in the far future with would evaporate over a timescale of up to .
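For scale, the standard evaporation-time estimate for a non-rotating, uncharged black hole emitting only massless particles, quoted here as a textbook sketch rather than a figure from the sources above, is:

```latex
t_{\mathrm{ev}} \simeq \frac{5120\,\pi\,G^{2} M^{3}}{\hbar\, c^{4}}
\;\approx\; 2\times10^{67}\ \mathrm{yr} \left(\frac{M}{M_{\odot}}\right)^{3}
```

The cubic dependence on mass is what pushes the evaporation timescales for supermassive and stupendously large black holes to such enormous values.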
Evidence
Doppler measurements
Some of the best evidence for the presence of black holes is provided by the Doppler effect whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. For matter very close to a black hole the orbital speed must be comparable with the speed of light, so receding matter will appear very faint compared with advancing matter, which means that systems with intrinsically symmetric discs and rings will acquire a highly asymmetric visual appearance. This effect has been allowed for in modern computer-generated images such as the example presented here, based on a plausible model for the supermassive black hole in Sgr A* at the center of the Milky Way. However, the resolution provided by presently available telescope technology is still insufficient to confirm such predictions directly.
What already has been observed directly in many systems are the lower non-relativistic velocities of matter orbiting further out from what are presumed to be black holes. Direct Doppler measures of water masers surrounding the nuclei of nearby galaxies have revealed a very fast Keplerian motion, only possible with a high concentration of matter in the center. Currently, the only known objects that can pack enough matter in such a small space are black holes, or things that will evolve into black holes within astrophysically short timescales. For active galaxies farther away, the width of broad spectral lines can be used to probe the gas orbiting near the event horizon. The technique of reverberation mapping uses variability of these lines to measure the mass and perhaps the spin of the black hole that powers active galaxies.
In the Milky Way
Evidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because:
The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours ( or 120 AU) from the center of the central object.
From the motion of star S2, the object's mass can be estimated as , or about ; a rough Keplerian estimate is sketched after this list.
The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit.
No known astronomical object other than a black hole can contain in this volume of space.
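As a rough illustration of how a mass of this kind is obtained from the orbit in the list above, the sketch below applies Kepler's third law; the semi-major axis of about 970 AU is an assumed value introduced only for this example (the list quotes the pericenter distance, not the semi-major axis).

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m
YEAR = 3.156e7      # seconds per year

semi_major_axis = 970 * AU   # assumed semi-major axis of S2's orbit
period = 15.2 * YEAR         # orbital period from the list above

# Kepler's third law: enclosed mass M = 4 pi^2 a^3 / (G P^2)
enclosed_mass = 4 * math.pi**2 * semi_major_axis**3 / (G * period**2)
print(f"enclosed mass ~ {enclosed_mass / M_SUN:.1e} solar masses")
```

With these inputs the enclosed mass comes out at roughly four million solar masses.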
Infrared observations of bright flare activity near Sagittarius A* show orbital motion of plasma with a period of at a separation of six to ten times the gravitational radius of the candidate SMBH. This emission is consistent with a circularized orbit of a polarized "hot spot" on an accretion disk in a strong magnetic field. The radiating matter is orbiting at 30% of the speed of light just outside the innermost stable circular orbit.
On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers.
Outside the Milky Way
Unambiguous dynamical evidence for supermassive black holes exists only for a handful of galaxies; these include the Milky Way, the Local Group galaxies M31 and M32, and a few galaxies beyond the Local Group, such as NGC 4395. In these galaxies, the root mean square (or rms) velocities of the stars or gas rise proportionally to 1/ near the center, indicating a central point mass. In all other galaxies observed to date, the rms velocities are flat, or even falling, toward the center, making it impossible to state with certainty that a supermassive black hole is present.
Nevertheless, it is commonly accepted that the center of nearly every galaxy contains a supermassive black hole. The reason for this assumption is the M–sigma relation, a tight (low scatter) relation between the mass of the hole in the 10 or so galaxies with secure detections, and the velocity dispersion of the stars in the bulges of those galaxies. This correlation, although based on just a handful of galaxies, suggests to many astronomers a strong connection between the formation of the black hole and the galaxy itself.
On March 28, 2011, a supermassive black hole was seen tearing a mid-size star apart. That is the only likely explanation of the observations that day of sudden X-ray radiation and the follow-up broad-band observations. The source was previously an inactive galactic nucleus, and from study of the outburst the galactic nucleus is estimated to be a SMBH with mass of the order of a . This rare event is assumed to be a relativistic outflow (material being emitted in a jet at a significant fraction of the speed of light) from a star tidally disrupted by the SMBH. A significant fraction of a solar mass of material is expected to have accreted onto the SMBH. Subsequent long-term observation will allow this assumption to be confirmed if the emission from the jet decays at the expected rate for mass accretion onto a SMBH.
Individual studies
The nearby Andromeda Galaxy, 2.5 million light-years away, contains a central black hole, significantly larger than the Milky Way's. The largest supermassive black hole in the Milky Way's vicinity appears to be that of Messier 87 (i.e., M87*), at a mass of at a distance of 48.92 million light-years. The supergiant elliptical galaxy NGC 4889, at a distance of 336 million light-years away in the Coma Berenices constellation, contains a black hole measured to be .
Masses of black holes in quasars can be estimated via indirect methods that are subject to substantial uncertainty. The quasar TON 618 is an example of an object with an extremely large black hole, estimated at . Its redshift is 2.219. Other examples of quasars with large estimated black hole masses are the hyperluminous quasar APM 08279+5255, with an estimated mass of , and the quasar SMSS J215728.21-360215.1, with a mass of , or nearly 10,000 times the mass of the black hole at the Milky Way's Galactic Center.
Some galaxies, such as the galaxy 4C +37.11, appear to have two supermassive black holes at their centers, forming a binary system. If they collided, the event would create strong gravitational waves. Binary supermassive black holes are believed to be a common consequence of galactic mergers. The binary pair in OJ 287, 3.5 billion light-years away, contains the most massive black hole in a pair, with a mass estimated at . In 2011, a super-massive black hole was discovered in the dwarf galaxy Henize 2-10, which has no bulge. The precise implications for this discovery on black hole formation are unknown, but may indicate that black holes formed before bulges.
In 2012, astronomers reported an unusually large mass of approximately for the black hole in the compact, lenticular galaxy NGC 1277, which lies 220 million light-years away in the constellation Perseus. The putative black hole has approximately 59 percent of the mass of the bulge of this lenticular galaxy (14 percent of the total stellar mass of the galaxy). Another study reached a very different conclusion: this black hole is not particularly overmassive, estimated at between with being the most likely value. On February 28, 2013, astronomers reported on the use of the NuSTAR satellite to accurately measure the spin of a supermassive black hole for the first time, in NGC 1365, reporting that the event horizon was spinning at almost the speed of light.
In September 2014, data from different X-ray telescopes have shown that the extremely small, dense, ultracompact dwarf galaxy M60-UCD1 hosts a 20 million solar mass black hole at its center, accounting for more than 10% of the total mass of the galaxy. The discovery is quite surprising, since the black hole is five times more massive than the Milky Way's black hole despite the galaxy being less than five-thousandths the mass of the Milky Way.
Some galaxies lack any supermassive black holes in their centers. Although most galaxies with no supermassive black holes are very small, dwarf galaxies, one discovery remains mysterious: The supergiant elliptical cD galaxy A2261-BCG has not been found to contain an active supermassive black hole of at least , despite the galaxy being one of the largest galaxies known; over six times the size and one thousand times the mass of the Milky Way. Despite that, several studies gave very large mass values for a possible central black hole inside A2261-BCG, such as about as large as or as low as . Since a supermassive black hole will only be visible while it is accreting, a supermassive black hole can be nearly invisible, except in its effects on stellar orbits. This implies that either A2261-BCG has a central black hole that is accreting at a low level or has a mass rather below .
In December 2017, astronomers reported the detection of the most distant quasar known by this time, ULAS J1342+0928, containing the most distant supermassive black hole, at a reported redshift of z = 7.54, surpassing the redshift of 7 for the previously known most distant quasar ULAS J1120+0641.
In February 2020, astronomers reported the discovery of the Ophiuchus Supercluster eruption, the most energetic event in the Universe ever detected since the Big Bang. It occurred in the Ophiuchus Cluster in the galaxy NeVe 1, caused by the accretion of nearly of material by its central 7 billion supermassive black hole. The eruption lasted for about 100 million years and released 5.7 million times more energy than the most powerful gamma-ray burst known. The eruption released shock waves and jets of high-energy particles that punched through the intracluster medium, creating a cavity about 1.5 million light-years wide – ten times the Milky Way's diameter.
In February 2021, astronomers released, for the first time, a very high-resolution image of 25,000 active supermassive black holes, covering four percent of the Northern celestial hemisphere, based on ultra-low radio wavelengths, as detected by the Low-Frequency Array (LOFAR) in Europe.
See also
Notes
References
Further reading
External links
Black Holes: Gravity's Relentless Pull Interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute
Images of supermassive black holes
NASA images of supermassive black holes
The black hole at the heart of the Milky Way
ESO video clip of stars orbiting a galactic black hole
Star Orbiting Massive Milky Way Centre Approaches to within 17 Light-Hours ESO, October 21, 2002
Images, Animations, and New Results from the UCLA Galactic Center Group
Washington Post article on Supermassive black holes
Video (2:46) – Simulation of stars orbiting Milky Way's central massive black hole
Video (2:13) – Simulation reveals supermassive black holes (NASA, October 2, 2018)
From Super to Ultra: Just How Big Can Black Holes Get?
Concepts in astronomy
Galaxies
Articles containing video clips | Supermassive black hole | [
"Physics",
"Astronomy"
] | 6,706 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Galaxies",
"Unsolved problems in physics",
"Supermassive black holes",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
215,791 | https://en.wikipedia.org/wiki/Erythrocyte%20sedimentation%20rate | The erythrocyte sedimentation rate (ESR or sed rate) is the rate at which red blood cells in anticoagulated whole blood descend in a standardized tube over a period of one hour. It is a common hematology test, and is a non-specific measure of inflammation.
To perform the test, anticoagulated blood is traditionally placed in an upright tube, known as a Westergren tube, and the distance which the red blood cells fall is measured and reported in millimetres at the end of one hour.
Since the introduction of automated analyzers into the clinical laboratory, the ESR test has been automatically performed.
The ESR is influenced by the aggregation of red blood cells: blood plasma proteins, mainly fibrinogen, promote the formation of red cell clusters called rouleaux or larger structures (interconnected rouleaux, irregular clusters). Because, according to Stokes' law, the sedimentation velocity varies as the square of the object's diameter, larger aggregates settle faster. While aggregation already takes place at normal physiological fibrinogen levels, fibrinogen levels tend to increase when an inflammatory process is present, leading to increased ESR.
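For reference, one common statement of Stokes' terminal settling velocity for a small sphere of radius a (so that the velocity scales with the square of the diameter), written as a sketch with ρp and ρf the particle and fluid densities, g the gravitational acceleration and μ the fluid viscosity:

```latex
v = \frac{2}{9}\,\frac{(\rho_{p} - \rho_{f})\, g\, a^{2}}{\mu}
```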
The ESR is increased in inflammation, pregnancy, anemia, autoimmune disorders (such as rheumatoid arthritis and lupus), infections, some kidney diseases and some cancers (such as lymphoma and multiple myeloma). The ESR is decreased in polycythemia, hyperviscosity, sickle cell anemia, leukemia, chronic fatigue syndrome, low plasma protein (due to liver or kidney disease) and congestive heart failure. Although increases in immunoglobulins usually increase the ESR, very high levels can reduce it again due to hyperviscosity of the plasma. This is especially likely with IgM-class paraproteins, and to a lesser extent, IgA-class. The basal ESR is slightly higher in females.
Stages
Erythrocyte sedimentation rate (ESR) is a measure of the ability of erythrocytes (red blood cells) to fall through the blood plasma and accumulate together at the base of the container in one hour.
There are three stages in erythrocyte sedimentation:
Rouleaux formation
Sedimentation or settling stage
Packing stage - 10 minutes (sedimentation slows and cells start to pack at the bottom of the tube)
In normal conditions, the red blood cells are negatively charged and therefore repel each other rather than stacking. ESR is also reduced by high blood viscosity, which slows the rate of fall.
Causes of elevation
The rate of erythrocyte sedimentation is affected by both inflammatory and non-inflammatory conditions.
Inflammation
In inflammatory conditions, fibrinogen, other clotting proteins, and alpha globulin are positively charged, thus increasing the ESR. ESR begins to rise at 24 to 48 hours after the onset of acute self-limited inflammation, decreases slowly as inflammation resolves, and can take weeks to months to return to normal levels. For ESR values more than 100 mm/hour, there is a 90% probability that an underlying cause would be found upon investigation.
Non-inflammatory conditions
In non-inflammatory conditions, plasma albumin concentration, size, shape, and number of red blood cells, and the concentration of immunoglobulin can affect the ESR. Non-inflammatory conditions that can cause raised ESR include anemia, kidney failure, obesity, ageing, and female sex. ESR is also higher in women during menstruation and pregnancy. The value of ESR does not change whether dialysis is performed or not. Therefore, ESR is not a reliable measure of inflammation in those with kidney injuries as the ESR value is already elevated.
Causes of reduction
An increased number of red blood cells (polycythemia) causes reduced ESR as blood viscosity increases. Hemoglobinopathy such as sickle-cell disease can have low ESR due to an improper shape of red blood cells that impairs stacking.
Medical uses
Diagnosis
ESR can sometimes be useful in diagnosing diseases, such as multiple myeloma, temporal arteritis, polymyalgia rheumatica, various autoimmune diseases, systemic lupus erythematosus, rheumatoid arthritis, inflammatory bowel disease and chronic kidney diseases. In many of these cases, the ESR may exceed 100 mm/hour.
It is commonly used in the differential diagnosis of Kawasaki's disease (distinguishing it from Takayasu's arteritis, which would have a markedly elevated ESR), and it may be increased in some chronic infective conditions like tuberculosis and infective endocarditis. It is also elevated in subacute thyroiditis, also known as De Quervain's thyroiditis.
In markedly increased ESR of over 100 mm/h, infection is the most common cause (33% of cases in an American study), followed by cancer (17%), kidney disease (17%) and noninfectious inflammatory disorders (14%). Yet, in pneumonia the ESR stays under 100.
The usefulness of the ESR in current practice has been questioned by some, as it is a relatively imprecise and non-specific test compared to other available diagnostic tests. Current literature suggests that an ESR should be "obtained on all patients over the age of 50" who have an intense headache.
Disease severity
It is a component of the PCDAI (pediatric Crohn's disease activity index), an index for assessment of the severity of inflammatory bowel disease in children.
Monitoring response to therapy
The clinical usefulness of ESR is limited to monitoring the response to therapy in certain inflammatory diseases such as temporal arteritis, polymyalgia rheumatica and rheumatoid arthritis. It can also be used as a crude measure of response in Hodgkin's lymphoma. Additionally, ESR levels are used to define one of the several possible adverse prognostic factors in the staging of Hodgkin's lymphoma.
Normal values
Note: mm/h. = millimeters per hour.
Westergren's original normal values (men 3 mm/h and women 7 mm/h) made no allowance for a person's age. Later studies from 1967 confirmed that ESR values tend to rise with age and to be generally higher in women.
Values of the ESR also appear to be slightly higher in normal populations of African-Americans than Caucasians of both genders. Values also appear to be higher in anemic individuals than non-anemic individuals.
Adults
The widely used rule for calculating the normal maximum ESR value in adults (98% confidence limit) is given by a formula devised in 1983 from a study of ≈1000 individuals over the age of 20: the normal value of ESR in men is the age (in years) divided by 2; for women, the normal value is the age (in years) plus 10, divided by 2.
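A direct transcription of that rule is sketched below; handling sex as a simple string flag is an implementation choice rather than anything prescribed by the study.

```python
def esr_upper_limit(age_years, sex):
    """Approximate upper reference limit for ESR in mm/h (1983 rule)."""
    if sex.lower() in ("m", "male"):
        return age_years / 2
    if sex.lower() in ("f", "female"):
        return (age_years + 10) / 2
    raise ValueError("sex must be 'male' or 'female'")

print(esr_upper_limit(60, "male"))    # 30.0 mm/h
print(esr_upper_limit(60, "female"))  # 35.0 mm/h
```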
Other studies confirm a dependence of ESR on age and gender, as seen in the following:
ESR reference ranges from a large 1996 study of 3,910 healthy adults (NB. these use 95% confidence intervals rather than the 98% intervals used in the study used to derive the formula above, and because of the skewness of the data, these values appear to be less than expected from the above formula):
Children
Normal values of ESR have been quoted as 1 to 2 mm/h at birth, rising to 4 mm/h 8 days after delivery, and then to 17 mm/h by day 14.
Typical normal ranges quoted are:
Newborn: 0 to 2 mm/h
Neonatal to puberty: 3 to 13 mm/h, but other laboratories place an upper limit of 20.
Relation to C-reactive protein
C-reactive protein (CRP) is an acute phase protein. Therefore, it is a better marker for the acute phase reaction than ESR. While ESR and CRP generally correlate with the degree of inflammation, this is not always the case and results may be discordant in 12.5% of cases. Cases with raised CRP but normal ESR may reflect a combination of infection and some other tissue damage, such as myocardial infarction or venous thromboembolism. Such inflammation may not be enough to raise the ESR. Conversely, those with a high ESR but normal CRP usually do not have demonstrable inflammation. However, in cases of low-grade bacterial infections of bone and joints, such as those caused by coagulase-negative staphylococci (CoNS), and in systemic lupus erythematosus (SLE), ESR can be a good marker for the inflammatory process. This may be due to the production of type I interferon, which inhibits CRP production in liver cells during SLE. CRP is a better marker for other conditions such as polymyalgia rheumatica, giant cell arteritis, post-operative sepsis, and neonatal sepsis. ESR may be reduced in those who are taking statins and non-steroidal anti-inflammatory drugs (NSAIDs).
History
The test was invented in 1897 by the Polish pathologist Edmund Biernacki. In some parts of the world the test continues to be referred to as Biernacki's Reaction (, OB). In 1918, Dr Robert Fåhræus noted that ESR differed only during pregnancy. Therefore, he suggested that ESR could be used as an indicator of pregnancy. In 1921, Dr Alf Vilhelm Albertsson Westergren used ESR to measure the disease outcome of tuberculosis. He defined the measurement standards of ESR which is still being used today. Robert Fåhræus and Alf Vilhelm Albertsson Westergren are eponymously remembered for the 'Fahraeus-Westergren test' (abbreviated as FW test; in the UK, usually termed Westergren test), which uses sodium citrate-anti-coagulated specimens.
Research
According to a study released in 2015, a stop-gain mutation in the HBB gene (p.Gln40stop) was shown to be associated with ESR values in the Sardinian population. The red blood cell count, whose values are inversely related to ESR, is affected in carriers of this SNP. This mutation is almost exclusive to the inhabitants of Sardinia and is a common cause of beta thalassemia.
According to a 2010 study, there is an inverse correlation between ESR and general intelligence (IQ) in Swedish males aged 18–20.
References
External links
Mediscuss on ESR
ESR at Lab Tests Online
Blood tests
Temporal rates | Erythrocyte sedimentation rate | [
"Physics",
"Chemistry"
] | 2,234 | [
"Temporal quantities",
"Blood tests",
"Physical quantities",
"Temporal rates",
"Chemical pathology"
] |
216,021 | https://en.wikipedia.org/wiki/Periscope | A periscope is an instrument for observation over, around or through an object, obstacle or condition that prevents direct line-of-sight observation from an observer's current position.
In its simplest form, it consists of an outer case with mirrors at each end set parallel to each other at a 45° angle. This form of periscope, with the addition of two simple lenses, served for observation purposes in the trenches during World War I. Military personnel also use periscopes in some gun turrets and in armoured vehicles.
More complex periscopes using prisms or advanced fiber optics instead of mirrors and providing magnification operate on submarines and in various fields of science. The overall design of the classical submarine periscope is very simple: two telescopes pointed into each other. If the two telescopes have different individual magnification, the difference between them causes an overall magnification or reduction.
Early examples
Johannes Hevelius described an early periscope (which he called a "polemoscope") with lenses in 1647 in his work Selenographia, sive Lunae descriptio [Selenography, or an account of the Moon]. Hevelius saw military applications for his invention.
Mikhail Lomonosov invented an "optical tube" which was similar to a periscope. In 1834, it was used in a submarine, designed by Karl Andreevich Schilder.
In 1854, Hippolyte Marié-Davy invented the first naval periscope, consisting of a vertical tube with two small mirrors fixed at each end at 45°. Simon Lake used periscopes in his submarines in 1902. Sir Howard Grubb perfected the device in World War I. Morgan Robertson (1861–1915) claimed to have tried to patent the periscope: he described a submarine using a periscope in his fictional works.
Periscopes, in some cases fixed to rifles, served in World War I (1914–1918) to enable soldiers to see over the tops of trenches, thus avoiding exposure to enemy fire (especially from snipers). The periscope rifle also saw use during the war – this was an infantry rifle sighted by means of a periscope, so the shooter could aim and fire the weapon from a safe position below the trench parapet.
During World War II (1939–1945), artillery observers and officers used specifically manufactured periscope binoculars with different mountings. Some of them also allowed estimating the distance to a target, as they were designed as stereoscopic rangefinders.
Armored vehicle periscopes
Tanks and armoured vehicles use periscopes: they enable drivers, tank commanders, and other vehicle occupants to inspect their situation through the vehicle roof. Prior to periscopes, direct vision slits were cut in the armour for occupants to see out. Periscopes permit view outside of the vehicle without needing to cut these weaker vision openings in the front and side armour, better protecting the vehicle and occupants.
A protectoscope is a related periscopic vision device designed to provide a window in armoured plate, similar to a direct vision slit. A compact periscope inside the protectoscope allows the vision slit to be blanked off with spaced armoured plate. This prevents a potential ingress point for small arms fire, with only a small difference in vision height, but still requires the armour to be cut.
In the context of armoured fighting vehicles, such as tanks, a periscopic vision device may also be referred to as an episcope. In this context a periscope refers to a device that can rotate to provide a wider field of view (or is fixed into an assembly that can), while an episcope is fixed into position.
Periscopes may also be referred to by slang, e.g. "shufti-scope".
Gundlach and Vickers 360-degree periscopes
An important development, the Gundlach rotary periscope, incorporated a rotating top with a selectable additional prism which reversed the view. This allowed a tank commander to obtain a 360-degree field of view without moving his seat, including rear vision by engaging the extra prism. This design, patented by Rudolf Gundlach in 1936, first saw use in the Polish 7-TP light tank (produced from 1935 to 1939).
As a part of Polish–British pre-World War II military cooperation, the patent was sold to Vickers-Armstrong where it saw further development for use in British tanks, including the Crusader, Churchill, Valentine, and Cromwell models as the Vickers Tank Periscope MK.IV.
The Gundlach-Vickers technology was shared with the American Army for use in its tanks including the Sherman, built to meet joint British and US requirements. This saw post-war controversy through legal action: "After the Second World War and a long court battle, in 1947 he, Rudolf Gundlach, received a large payment for his periscope patent from some of its producers."
The USSR also copied the design and used it extensively in its tanks, including the T-34 and T-70. The copies were based on Lend-Lease British vehicles, and many parts remain interchangeable. Germany also made and used copies.
Periscopic gun-sights
Periscopic sights were also introduced during the Second World War. In British use, the Vickers periscope was provided with sighting lines, enabling front and rear prisms to be directly aligned to gain an accurate direction. On later tanks such as the Churchill and Cromwell, a similarly marked episcope provided a backup sighting mechanism aligned with a vane sight on the turret roof.
Later, US-built Sherman tanks and British Centurion and Charioteer tanks replaced the main telescopic sight with a true periscopic sight in the primary role. The periscopic sight was linked to the gun itself, allowing elevation to be captured (rotation being fixed as part of rotating turret). The sights formed part of the overall periscope, providing the gunner with greater overall vision than previously possible with the telescopic sight. The FV4201 Chieftain used the TESS (TElescopic Sighting System) developed in the early 1980s that was later sold as surplus for use on the RAF Phantom aircraft.
Modern specialised AFV periscopes
In modern use, specialised periscopes can also provide night vision. The Embedded Image Periscope (EIP) designed and patented by Kent Periscopes provides standard unity vision periscope functionality for normal daytime viewing of the vehicle surroundings plus the ability to display digital images from a range of on-vehicle sensors and cameras (including thermal and low light) such that the resulting image appears "embedded" internally within the unit and projected at a comfortable viewing position.
Naval use
Periscopes allow a submarine, when submerged at a relatively shallow depth, to search visually for nearby targets and threats on the surface of the water and in the air. When not in use, a submarine's periscope retracts into the hull. A submarine commander in tactical conditions must exercise discretion when using his periscope, since it creates a visible wake (and may also become detectable by radar), giving away the submarine's position.
Marie-Davey built a simple, fixed naval periscope using mirrors in 1854. Thomas H. Doughty of the United States Navy later invented a prismatic version for use in the American Civil War of 1861–1865.
Submarines adopted periscopes early. Captain Arthur Krebs adapted two on the experimental French submarine in 1888 and 1889. The Spanish inventor Isaac Peral equipped his submarine (developed in 1886 but launched on September 8, 1888) with a fixed, non-retractable periscope that used a combination of prisms to relay the image to the submariner. (Peral also developed a primitive gyroscope for submarine navigation and pioneered the ability to fire live torpedoes while submerged.)
The invention of the collapsible periscope for use in submarine warfare is usually credited to Simon Lake in 1902. Lake called his device the "omniscope" or "skalomniscope".
Modern submarine periscopes incorporate lenses for magnification and function as telescopes. They typically employ prisms and total internal reflection instead of mirrors, because prisms, which do not require coatings on the reflecting surface, are much more rugged than mirrors. They may have additional optical capabilities such as range-finding and targeting. The mechanical systems of submarine periscopes typically use hydraulics and need to be quite sturdy to withstand the drag through water. The periscope chassis may also support a radio or radar antenna.
Submarines traditionally had two periscopes; a navigation or observation periscope and a targeting, or commander's, periscope. Navies originally mounted these periscopes in the conning tower, one forward of the other in the narrow hulls of diesel-electric submarines. In the much wider hulls of US Navy submarines the two operate side-by-side. The observation scope, used to scan the sea surface and sky, typically had a wide field of view and no magnification or low-power magnification. The targeting or "attack" periscope, by comparison, had a narrower field of view and higher magnification. In World War II and earlier submarines it was the only means of gathering target data to accurately fire a torpedo, since sonar was not yet sufficiently advanced for this purpose (ranging with sonar required emission of an acoustic "ping" that gave away the location of the submarine) and most torpedoes were unguided.
Twenty-first-century submarines do not necessarily have periscopes. The United States Navy's s and the Royal Navy's s instead use photonics masts, pioneered by the Royal Navy's , which lift an electronic imaging sensor-set above the water. Signals from the sensor-set travel electronically to workstations in the submarine's control center. While the cables carrying the signal must penetrate the submarine's hull, they use a much smaller and more easily sealed—and therefore less expensive and safer—hull opening than those required by periscopes. Eliminating the telescoping tube running through the conning tower also allows greater freedom in designing the pressure hull and in placing internal equipment.
Aircraft use
Periscopes have also been used on aircraft for sections with limited view. The first known use of an aircraft periscope was on the Spirit of St. Louis. The Vickers VC10 had a periscope that could be used at four locations on the aircraft fuselage; V-bombers such as the Avro Vulcan and Handley Page Victor, and the Nimrod MR1, used a periscope as the "on top sight". Various US bomber aircraft such as the B-52 used sextant periscopes for celestial navigation before the introduction of GPS. This also allowed the aircrew to navigate without the use of an astrodome in the fuselage. An emergency periscope, found under "Seat D" behind the overwing exit row, was fitted to Boeing 737 models manufactured before 1997 to check the landing gear. High speed and hypersonic aircraft such as the North American X-15 used a periscope.
See also
Aquascope
Coincidence rangefinder
Relay lens
Rangefinder
Vickers Tank Periscope MK.IV
References
External links
The Fleet Type Submarine Online: Submarine Periscope Manual United States Navy Navpers 16165, June 1979
Simulation of a Periscope at NTNUJAVA Virtual Physics Laboratory
Periscope used for Celestial Navigation in Petan.net
Air Facts
THE V-FORCE
Air Navigation Periscope Sextants
Periscope Sextant in a Douglas DC-8
Optical devices
British inventions
1902 introductions
Submarine components | Periscope | [
"Materials_science",
"Engineering"
] | 2,410 | [
"Glass engineering and science",
"Optical devices"
] |
216,049 | https://en.wikipedia.org/wiki/Adaptive%20optics | Adaptive optics (AO) is a technique of precisely deforming a mirror in order to compensate for light distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, optical fabrication and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors such as a deformable mirror or a liquid crystal array.
Adaptive optics should not be confused with active optics, which work on a longer timescale to correct the primary mirror geometry.
Other methods can achieve resolving power exceeding the limit imposed by atmospheric distortion, such as speckle imaging, aperture synthesis, and lucky imaging, or by moving outside the atmosphere with space telescopes, such as the Hubble Space Telescope.
History
Adaptive optics was first envisioned by Horace W. Babcock in 1953, and was also considered in science fiction, as in Poul Anderson's novel Tau Zero (1970), but it did not come into common usage until advances in computer technology during the 1990s made the technique practical.
Some of the initial development work on adaptive optics was done by the US military during the Cold War and was intended for use in tracking Soviet satellites.
Microelectromechanical systems (MEMS) deformable mirrors and magnetics concept deformable mirrors are currently the most widely used technology in wavefront shaping applications for adaptive optics given their versatility, stroke, maturity of technology, and the high-resolution wavefront correction that they afford.
Tip–tilt correction
The simplest form of adaptive optics is tip–tilt correction, which corresponds to correction of the tilts of the wavefront in two dimensions (equivalent to correction of the position offsets for the image). This is performed using a rapidly moving tip–tilt mirror that makes small rotations around two of its axes. A significant fraction of the aberration introduced by the atmosphere can be removed in this way.
Tip–tilt mirrors are effectively segmented mirrors having only one segment which can tip and tilt, rather than having an array of multiple segments that can tip and tilt independently. Due to the relative simplicity of such mirrors and having a large stroke, meaning they have large correcting power, most AO systems use these, first, to correct low-order aberrations. Higher-order aberrations may then be corrected with deformable mirrors.
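To make 'a significant fraction' concrete, the sketch below uses Noll's (1976) residual-variance coefficients for Kolmogorov turbulence; the coefficients, aperture and Fried parameter are assumed textbook values rather than figures from this article.

```python
# Residual phase variance after correction, in units of (D/r0)^(5/3):
# 1.030 with no correction (piston removed), 0.134 after tip and tilt (Noll 1976).
DELTA_UNCORRECTED = 1.030
DELTA_AFTER_TIP_TILT = 0.134

d_over_r0 = 8.0 / 0.15   # assumed: 8 m aperture, 15 cm Fried parameter

total_variance = DELTA_UNCORRECTED * d_over_r0 ** (5 / 3)
residual_variance = DELTA_AFTER_TIP_TILT * d_over_r0 ** (5 / 3)
removed_fraction = 1 - DELTA_AFTER_TIP_TILT / DELTA_UNCORRECTED

print(f"uncorrected variance : {total_variance:8.1f} rad^2")
print(f"after tip-tilt       : {residual_variance:8.1f} rad^2")
print(f"variance removed     : {removed_fraction:.0%}")
```

Under these assumptions, roughly 87% of the wavefront variance is removed by tip–tilt correction alone, which is why it is usually applied first.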
In astronomy
Atmospheric seeing
When light from a star or another astronomical object enters the Earth's atmosphere, atmospheric turbulence (introduced, for example, by different temperature layers and different wind speeds interacting) can distort and move the image in various ways. Visual images produced by any telescope larger than approximately are blurred by these distortions.
Wavefront sensing and correction
An adaptive optics system tries to correct these distortions, using a wavefront sensor which takes some of the astronomical light, a deformable mirror that lies in the optical path, and a computer that receives input from the detector. The wavefront sensor measures the distortions the atmosphere has introduced on the timescale of a few milliseconds; the computer calculates the optimal mirror shape to correct the distortions and the surface of the deformable mirror is reshaped accordingly. For example, an 8–10 m telescope (like the VLT or Keck) can produce AO-corrected images with an angular resolution of 30–60 milliarcseconds (mas) at infrared wavelengths, while the resolution without correction is of the order of 1 arcsecond.
In order to perform adaptive optics correction, the shape of the incoming wavefronts must be measured as a function of position in the telescope aperture plane. Typically the circular telescope aperture is split up into an array of pixels in a wavefront sensor, either using an array of small lenslets (a Shack–Hartmann wavefront sensor), or using a curvature or pyramid sensor which operates on images of the telescope aperture. The mean wavefront perturbation in each pixel is calculated. This pixelated map of the wavefronts is fed into the deformable mirror and used to correct the wavefront errors introduced by the atmosphere. It is not necessary for the shape or size of the astronomical object to be known – even Solar System objects which are not point-like can be used in a Shack–Hartmann wavefront sensor, and time-varying structure on the surface of the Sun is commonly used for adaptive optics at solar telescopes. The deformable mirror corrects incoming light so that the images appear sharp.
Using guide stars
Natural guide stars
Because a science target is often too faint to be used as a reference star for measuring the shape of the optical wavefronts, a nearby brighter guide star can be used instead. The light from the science target has passed through approximately the same atmospheric turbulence as the reference star's light and so its image is also corrected, although generally to a lower accuracy.
The necessity of a reference star means that an adaptive optics system cannot work everywhere on the sky, but only where a guide star of sufficient luminosity (for current systems, about magnitude 12–15) can be found very near to the object of the observation. This severely limits the application of the technique for astronomical observations. Another major limitation is the small field of view over which the adaptive optics correction is good. As the angular distance from the guide star increases, the image quality degrades. A technique known as "multiconjugate adaptive optics" uses several deformable mirrors to achieve a greater field of view.
Artificial guide stars
An alternative is the use of a laser beam to generate a reference light source (a laser guide star, LGS) in the atmosphere. There are two kinds of LGSs: Rayleigh guide stars and sodium guide stars. Rayleigh guide stars work by propagating a laser, usually at near ultraviolet wavelengths, and detecting the backscatter from air in the lower atmosphere. Sodium guide stars use laser light at 589 nm to resonantly excite sodium atoms higher in the mesosphere and thermosphere, which then appear to "glow". The LGS can then be used as a wavefront reference in the same way as a natural guide star – except that (much fainter) natural reference stars are still required for image position (tip/tilt) information. The lasers are often pulsed, with measurement of the atmosphere being limited to a window occurring a few microseconds after the pulse has been launched. This allows the system to ignore most scattered light at ground level; only light which has travelled for several microseconds high up into the atmosphere and back is actually detected.
In retinal imaging
Ocular aberrations are distortions in the wavefront passing through the pupil of the eye. These optical aberrations diminish the quality of the image formed on the retina, sometimes necessitating the wearing of spectacles or contact lenses. In the case of retinal imaging, light passing out of the eye carries similar wavefront distortions, leading to an inability to resolve the microscopic structure (cells and capillaries) of the retina. Spectacles and contact lenses correct "low-order aberrations", such as defocus and astigmatism, which tend to be stable in humans for long periods of time (months or years). While correction of these is sufficient for normal visual functioning, it is generally insufficient to achieve microscopic resolution. Additionally, "high-order aberrations", such as coma, spherical aberration, and trefoil, must also be corrected in order to achieve microscopic resolution. High-order aberrations, unlike low-order, are not stable over time, and may change over time scales of 0.1s to 0.01s. The correction of these aberrations requires continuous, high-frequency measurement and compensation.
Measurement of ocular aberrations
Ocular aberrations are generally measured using a wavefront sensor, and the most commonly used type of wavefront sensor is the Shack–Hartmann. Ocular aberrations are caused by spatial phase nonuniformities in the wavefront exiting the eye. In a Shack-Hartmann wavefront sensor, these are measured by placing a two-dimensional array of small lenses (lenslets) in a pupil plane conjugate to the eye's pupil, and a CCD chip at the back focal plane of the lenslets. The lenslets cause spots to be focused onto the CCD chip, and the positions of these spots are calculated using a centroiding algorithm. The positions of these spots are compared with the positions of reference spots, and the displacements between the two are used to determine the local curvature of the wavefront allowing one to numerically reconstruct the wavefront information—an estimate of the phase nonuniformities causing aberration.
Correction of ocular aberrations
Once the local phase errors in the wavefront are known, they can be corrected by placing a phase modulator such as a deformable mirror at yet another plane in the system conjugate to the eye's pupil. The phase errors can be used to reconstruct the wavefront, which can then be used to control the deformable mirror. Alternatively, the local phase errors can be used directly to calculate the deformable mirror instructions.
Open loop vs. closed loop operation
If the wavefront error is measured before it has been corrected by the wavefront corrector, then operation is said to be "open loop".
If the wavefront error is measured after it has been corrected by the wavefront corrector, then operation is said to be "closed loop". In the latter case the measured wavefront errors will be small, and errors in the measurement and correction are more likely to be removed. Closed-loop correction is the norm.
Applications
Adaptive optics was first applied to flood-illumination retinal imaging to produce images of single cones in the living human eye. It has also been used in conjunction with scanning laser ophthalmoscopy to produce (also in living human eyes) the first images of retinal microvasculature and associated blood flow and retinal pigment epithelium cells in addition to single cones. Combined with optical coherence tomography, adaptive optics has allowed the first three-dimensional images of living cone photoreceptors to be collected.
In microscopy
In microscopy, adaptive optics is used to correct for sample-induced aberrations. The required wavefront correction is either measured directly using a wavefront sensor or estimated using sensorless AO techniques.
Other uses
Besides its use for improving nighttime astronomical imaging and retinal imaging, adaptive optics technology has also been used in other settings. Adaptive optics is used for solar astronomy at observatories such as the Swedish 1-m Solar Telescope, Dunn Solar Telescope, and Big Bear Solar Observatory. It is also expected to play a military role by allowing ground-based and airborne laser weapons to reach and destroy targets at a distance including satellites in orbit. The Missile Defense Agency Airborne Laser program is the principal example of this.
Adaptive optics has been used to enhance the performance of classical and quantum free-space optical communication systems, and to control the spatial output of optical fibers.
Medical applications include imaging of the retina, where it has been combined with optical coherence tomography. The development of the adaptive optics scanning laser ophthalmoscope (AOSLO) has also made it possible to correct for the aberrations of the wavefront reflected from the human retina and to take diffraction-limited images of the human rods and cones. Adaptive and active optics are also being developed for use in glasses to achieve better than 20/20 vision, initially for military applications.
After propagation of a wavefront, parts of it may overlap, leading to interference and preventing adaptive optics from correcting it. Propagation of a curved wavefront always leads to amplitude variation. This needs to be considered if a good beam profile is to be achieved in laser applications. In material processing using lasers, adjustments can be made on the fly to allow for variation of focus depth during piercing and for changes in focal length across the working surface. Beam width can also be adjusted to switch between piercing and cutting modes. This eliminates the need to switch the optics of the laser head, cutting down on overall processing time and allowing more dynamic modifications.
Adaptive optics, especially wavefront-coding spatial light modulators, are frequently used in optical trapping applications to multiplex and dynamically reconfigure laser foci that are used to micro-manipulate biological specimens.
Beam stabilization
A rather simple example is the stabilization of the position and direction of a laser beam between modules in a large free-space optical communication system. Fourier optics is used to control both direction and position. The actual beam is measured by photodiodes. This signal is fed into analog-to-digital converters and then into a microcontroller which runs a PID controller algorithm. The controller then drives digital-to-analog converters which drive stepper motors attached to mirror mounts.
If the beam is to be centered onto 4-quadrant diodes, no analog-to-digital converter is needed. Operational amplifiers are sufficient.
See also
Active optics
Adjustable-focus eyeglasses
Angular diameter
Angular size
Atmospheric correction (for satellite imaging of the Earth)
Claire Max, adaptive optics pioneer
Deformable mirror
Greenwood frequency
Holography: real-time holography
Image stabilization
List of telescope parts and construction
Nonlinear optics: optical phase conjugation
Van Cittert–Zernike theorem#Adaptive optics
Wavefront
Wavefront sensor
William Happer, adaptive optics pioneer
References
Bibliography
External links
10th International Workshop on Adaptive Optics for Industry and Medicine, Padova (Italy), 15–19 June 2015
Adaptive Optics Tutorial at CTIO A. Tokovinin
Research groups and companies with interests in Adaptive Optics
Space-based vs. Ground-based telescopes with Adaptive Optics
Ten Years of VLT Adaptive Optics (ESO : ann11078 : 25 November 2011)
Center for Adaptive Optics
Telescopes
Astronomical imaging
Optical devices
Articles containing video clips | Adaptive optics | [
"Materials_science",
"Astronomy",
"Engineering"
] | 2,899 | [
"Glass engineering and science",
"Telescopes",
"Optical devices",
"Astronomical instruments"
] |
216,102 | https://en.wikipedia.org/wiki/Genetically%20modified%20food | Genetically modified foods (GM foods), also known as genetically engineered foods (GE foods), or bioengineered foods are foods produced from organisms that have had changes introduced into their DNA using various methods of genetic engineering. Genetic engineering techniques allow for the introduction of new traits as well as greater control over traits when compared to previous methods, such as selective breeding and mutation breeding.
The discovery of DNA and the improvement of genetic technology in the 20th century played a crucial role in the development of transgenic technology. In 1988, genetically modified microbial enzymes were first approved for use in food manufacture. Recombinant rennet was used in a few countries in the 1990s. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its unsuccessful Flavr Savr delayed-ripening tomato. Food modifications have primarily focused on cash crops in high demand by farmers, such as soybean, maize/corn, canola, and cotton. Genetically modified crops have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. The production of golden rice in 2000 marked a further improvement in the nutritional value of genetically modified food. GM livestock have been developed, although none were yet on the market. As of 2015, the AquAdvantage salmon was the only animal approved for commercial production, sale and consumption by the FDA. It is the first genetically modified animal to be approved for human consumption.
Genes encoding desired features, for instance improved nutrient levels, pesticide and herbicide resistance, or the production of therapeutic substances, are extracted and transferred to the target organisms, providing them with superior survival and production capacity. The improved traits in turn can benefit consumers in specific ways.
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation that vary due to geographical, religious, social, and other factors.
Definition
Genetically modified foods are foods produced from organisms that have had changes introduced into their DNA using the methods of genetic engineering as opposed to traditional cross breeding. In the U.S., the Department of Agriculture (USDA) and the Food and Drug Administration (FDA) favor the use of the term genetic engineering over genetic modification as being more precise; the USDA defines genetic modification to include "genetic engineering or other more traditional methods".
According to the World Health Organization, "Foods produced from or using GM organisms are often referred to as GM foods."
What constitutes a genetically modified organism (GMO) is not clear and varies widely between countries, international bodies and other communities, has changed significantly over time, and is subject to numerous exceptions based on "convention", such as the exclusion of mutation breeding from the EU definition.
Even greater inconsistency and confusion is associated with various "Non-GMO" or "GMO-free" labelling schemes in food marketing, where even products such as water or salt, which contain no organic substances or genetic material (and thus cannot be genetically modified by definition), are labelled to create an impression of being "healthier".
History
Human-directed genetic manipulation of food began with the domestication of plants and animals through artificial selection at about 10,500 to 10,100 BC. The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification (GM). With the discovery of DNA in the early 1900s and various advancements in genetic techniques through the 1970s it became possible to directly alter the DNA and genes within food.
Genetically modified microbial enzymes were the first application of genetically modified organisms in food production and were approved in 1988 by the US Food and Drug Administration. In the early 1990s, recombinant chymosin was approved for use in several countries. Cheese had typically been made using the enzyme complex rennet that had been extracted from cows' stomach lining. Scientists modified bacteria to produce chymosin, which was also able to clot milk, resulting in cheese curds.
The first genetically modified food approved for release was the Flavr Savr tomato in 1994. Developed by Calgene, it was engineered to have a longer shelf life by inserting an antisense gene that delayed ripening. China was the first country to commercialize a transgenic crop in 1993 with the introduction of virus-resistant tobacco. In 1995, Bacillus thuringiensis (Bt) Potato was approved for cultivation, making it the first pesticide producing crop to be approved in the US. Other genetically modified crops receiving marketing approval in 1995 were: canola with modified oil composition, Bt maize/corn, cotton resistant to the herbicide bromoxynil, Bt cotton, glyphosate-tolerant soybeans, virus-resistant squash, and another delayed ripening tomato.
With the creation of golden rice in 2000, scientists had genetically modified food to increase its nutrient value for the first time.
By 2010, 29 countries had planted commercialized biotech crops and a further 31 countries had granted regulatory approval for transgenic crops to be imported. The US was the leading country in the production of GM foods in 2011, with twenty-five GM crops having received regulatory approval. In 2015, 92% of corn, 94% of soybeans, and 94% of cotton produced in the US were genetically modified varieties.
The first genetically modified animal to be approved for food use was AquAdvantage salmon in 2015. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout enabling it to grow year-round instead of only during spring and summer.
A GM white button mushroom (Agaricus bisporus) has been approved in the United States since 2016. See §Mushroom below.
The most widely planted GMOs are designed to tolerate herbicides. The use of herbicides presents a strong selection pressure on treated weeds to gain resistance to the herbicide. Widespread planting of GM crops resistant to glyphosate has led to the use of glyphosate to control weeds and many weed species, such as Palmer amaranth, acquiring resistance to the herbicide.
In 2021, the first CRISPR-edited food went on public sale in Japan: tomatoes genetically modified to contain around five times the normal amount of possibly calming GABA. CRISPR was first applied in tomatoes in 2014. Shortly afterwards, the first CRISPR-gene-edited marine animal/seafood and second set of CRISPR-edited food went on public sale in Japan: two fish, of which one species grows to twice the size of natural specimens due to disruption of leptin, which controls appetite, and the other grows to 1.2 times the natural average size with the same amount of food due to disabled myostatin, which inhibits muscle growth.
Process
Creating genetically modified food is a multi-step process. The first step is to identify a useful gene from another organism to be added. The gene can be taken from a cell or artificially synthesised, and then combined with other genetic elements, including a promoter and terminator region and a selectable marker. The genetic elements are then inserted into the target genome. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors. In plants the DNA is often inserted using Agrobacterium-mediated recombination, biolistics or electroporation. As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through tissue culture. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene.
Traditionally the new genetic material was inserted randomly within the host genome. Gene targeting techniques, which create double-stranded breaks and take advantage of the cell's natural homologous recombination repair systems, have been developed to target insertion to exact locations. Genome editing uses artificially engineered nucleases that create breaks at specific points. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient.
By organism
Crops
Genetically modified crops (GM crops) are genetically modified plants that are used in agriculture. The first crops developed were used for animal or human food and provide resistance to certain pests, diseases, environmental conditions, spoilage or chemical treatments (e.g. resistance to a herbicide). The second generation of crops aimed to improve the quality, often by altering the nutrient profile. Third generation genetically modified crops could be used for non-food purposes, including the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. GM crops have been produced to improve harvests through reducing insect pressure, increase nutrient value and tolerate different abiotic stresses. As of 2018, the commercialised crops are limited mostly to cash crops like cotton, soybean, maize/corn and canola and the vast majority of the introduced traits provide either herbicide tolerance or insect resistance.
The majority of GM crops have been modified to be resistant to selected herbicides, usually a glyphosate- or glufosinate-based one. Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties. Most currently available genes used to engineer insect resistance come from the Bacillus thuringiensis (Bt) bacterium and code for delta endotoxins. A few use the genes that encode for vegetative insecticidal proteins. The only gene commercially used to provide insect protection that does not originate from B. thuringiensis is the Cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Less than one percent of GM crops contain other traits, which include providing virus resistance, delaying senescence and altering the plant's composition.
Adoption by farmers has been rapid: between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100. Geographically, though, the spread has been uneven, with strong growth in the Americas and parts of Asia and little in Europe and Africa. In 2013 only 10% of world cropland was planted with GM crops, with the US, Canada, Brazil, and Argentina accounting for 90% of that. Its socioeconomic spread has been more even, with approximately 54% of worldwide GM crops grown in developing countries in 2013. Although doubts have been raised, most studies have found growing GM crops to be beneficial to farmers through decreased pesticide use as well as increased crop yield and farm profit.
Fruits and vegetables
Long before humans began using transgenics, the sweet potato emerged naturally about 8000 years ago through the embedding of genes from bacteria that increased its sugar content. Kyndt et al. (2015) found Agrobacterium tumefaciens DNA from this natural transgenic event still in the crop's genome today.
Papaya was genetically modified to resist the ringspot virus (PRSV). "SunUp" is a transgenic red-fleshed Sunset papaya cultivar that is homozygous for the PRSV coat protein gene; "Rainbow" is a yellow-fleshed F1 hybrid developed by crossing 'SunUp' and nontransgenic yellow-fleshed "Kapoho". The GM cultivar was approved in 1998 and by 2010 80% of Hawaiian papaya was genetically engineered. The New York Times stated, "without it, the state's papaya industry would have collapsed". In China, a transgenic PRSV-resistant papaya was developed by South China Agricultural University and was first approved for commercial planting in 2006; as of 2012 95% of the papaya grown in Guangdong province and 40% of the papaya grown in Hainan province was genetically modified. In Hong Kong, where there is an exemption on growing and releasing any varieties of GM papaya, more than 80% of grown and imported papayas were transgenic.
The New Leaf potato, a GM food developed using Bacillus thuringiensis (Bt), was made to provide in-plant protection from the yield-robbing Colorado potato beetle. The New Leaf potato, brought to market by Monsanto in the late 1990s, was developed for the fast food market. It was withdrawn in 2001 after retailers rejected it and food processors ran into export problems. In 2011, BASF requested the European Food Safety Authority's approval for cultivation and marketing of its Fortuna potato as feed and food. The potato was made resistant to late blight by adding resistant genes blb1 and blb2 that originate from the Mexican wild potato Solanum bulbocastanum. In February 2013, BASF withdrew its application. In 2014, the USDA approved a genetically modified potato developed by J. R. Simplot Company that contained ten genetic modifications that prevent bruising and produce less acrylamide when fried. The modifications eliminate specific proteins from the potatoes, via RNA interference, rather than introducing novel proteins.
As of 2005, about 13% of the zucchini grown in the US was genetically modified to resist three viruses; that variety is also grown in Canada.
In 2013, the USDA approved the import of a GM pineapple that is pink in color and that "overexpresses" a gene derived from tangerines and suppresses other genes, increasing production of lycopene. The plant's flowering cycle was changed to provide for more uniform growth and quality. The fruit "does not have the ability to propagate and persist in the environment once they have been harvested", according to USDA APHIS. According to Del Monte's submission, the pineapples are commercially grown in a "monoculture" that prevents seed production, as the plant's flowers are not exposed to compatible pollen sources. Importation into Hawaii is banned for "plant sanitation" reasons. Del Monte launched sales of its pink pineapples in October 2020, marketed under the name "Pinkglow".
In February 2015 Arctic Apples were approved by the USDA, becoming the first genetically modified apple approved for sale in the US. Gene silencing is used to reduce the expression of polyphenol oxidase (PPO), thus preventing the fruit from browning.
Maize/corn
Maize/corn used for food and ethanol has been genetically modified to tolerate various herbicides and to express a protein from Bacillus thuringiensis (Bt) that kills certain insects. About 90% of the corn grown in the US was genetically modified in 2010. In the US in 2015, 81% of corn acreage contained the Bt trait and 89% of corn acreage contained the glyphosate-tolerant trait. Corn can be processed into grits, meal and flour as an ingredient in pancakes, muffins, doughnuts, breadings and batters, as well as baby foods, meat products, cereals and some fermented products. Corn-based masa flour and masa dough are used in the production of taco shells, corn chips and tortillas.
Soy
Soybeans accounted for half of all genetically modified crops planted in 2014. Genetically modified soybean has been modified to tolerate herbicides and produce healthier oils. In 2015, 94% of soybean acreage in the U.S. was genetically modified to be glyphosate-tolerant.
Rice
Golden rice is the most well known GM crop that is aimed at increasing nutrient value. It has been engineered with three genes that biosynthesise beta-carotene, a precursor of vitamin A, in the edible parts of rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which each year is estimated to kill 670,000 children under the age of 5 and cause an additional 500,000 cases of irreversible childhood blindness. The original golden rice produced 1.6μg/g of the carotenoids, with further development increasing this 23 times. In 2018 it gained its first approvals for use as food.
Wheat
As of December 2017, genetically modified wheat has been evaluated in field trials, but has not been released commercially.
Mushroom
In April 2016, a white button mushroom (Agaricus bisporus) modified using the CRISPR technique received de facto approval in the United States, after the USDA said it would not have to go through the agency's regulatory process. The agency considers the mushroom exempt because the editing process did not involve the introduction of foreign DNA; rather, several base pairs were deleted from a duplicated gene coding for an enzyme that causes browning, causing a 30% reduction in the level of that enzyme.
Livestock
Genetically modified livestock are organisms from the group of cattle, sheep, pigs, goats, birds, horses and fish kept for human consumption, whose genetic material (DNA) has been altered using genetic engineering techniques. In some cases, the aim is to introduce a new trait to the animals which does not occur naturally in the species, i.e. transgenesis.
A 2003 review published on behalf of Food Standards Australia New Zealand examined transgenic experimentation on terrestrial livestock species as well as aquatic species such as fish and shellfish. The review covered the molecular techniques used for experimentation, techniques for tracing the transgenes in animals and products, and issues regarding transgene stability.
Some mammals typically used for food production have been modified to produce non-food products, a practice sometimes called Pharming.
Salmon
A GM salmon, awaiting regulatory approval since 1997, was approved for human consumption by the American FDA in November 2015, to be raised in specific land-based hatcheries in Canada and Panama.
Microbes
Bacteriophages are an economically significant cause of culture failure in cheese production. Various culture microbes - especially Lactococcus lactis and Streptococcus thermophilus - have been studied for genetic analysis and modification to improve phage resistance. This has especially focused on plasmid and recombinant chromosomal modifications.
Derivative products
Lecithin
Lecithin is a naturally occurring lipid. It can be found in egg yolks and oil-producing plants. It is an emulsifier and thus is used in many foods. Corn, soy and safflower oil are sources of lecithin, though the majority of lecithin commercially available is derived from soy. Sufficiently processed lecithin is often undetectable with standard testing practices. According to the FDA, no evidence shows or suggests hazard to the public when lecithin is used at common levels. Lecithin added to foods amounts to only 2 to 10 percent of the 1 to 5 g of phosphoglycerides consumed daily on average. Nonetheless, consumer concerns about GM food extend to such products. This concern led to policy and regulatory changes in Europe in 2000, when Regulation (EC) 50/2000 was passed which required labelling of food containing additives derived from GMOs, including lecithin. Because of the difficulty of detecting the origin of derivatives like lecithin with current testing practices, European regulations require those who wish to sell lecithin in Europe to employ a comprehensive system of Identity preservation (IP).
Sugar
The US imports 10% of its sugar, while the remaining 90% is extracted from sugar beet and sugarcane. After deregulation in 2005, glyphosate-resistant sugar beet was extensively adopted in the United States. 95% of beet acres in the US were planted with glyphosate-resistant seed in 2011. GM sugar beets are approved for cultivation in the US, Canada and Japan; the vast majority are grown in the US. GM beets are approved for import and consumption in Australia, Canada, Colombia, EU, Japan, Korea, Mexico, New Zealand, Philippines, the Russian Federation and Singapore. Pulp from the refining process is used as animal feed. The sugar produced from GM sugar beets contains no DNA or protein – it is just sucrose that is chemically indistinguishable from sugar produced from non-GM sugar beets. Independent analyses conducted by internationally recognized laboratories found that sugar from Roundup Ready sugar beets is identical to the sugar from comparably grown conventional (non-Roundup Ready) sugar beets.
Vegetable oil
Most vegetable oil used in the US is produced from GM crops canola, maize/corn, cotton and soybeans. Vegetable oil is sold directly to consumers as cooking oil, shortening and margarine and is used in prepared foods. There is a vanishingly small amount of protein or DNA from the original crop in vegetable oil. Vegetable oil is made of triglycerides extracted from plants or seeds and then refined and may be further processed via hydrogenation to turn liquid oils into solids. The refining process removes all, or nearly all non-triglyceride ingredients.
Other uses
Animal feed
Livestock and poultry are raised on animal feed, much of which is composed of the leftovers from processing crops, including GM crops. For example, approximately 43% of a canola seed is oil. What remains after oil extraction is a meal that becomes an ingredient in animal feed and contains canola protein. Likewise, the bulk of the soybean crop is grown for oil and meal. The high-protein defatted and toasted soy meal becomes livestock feed and dog food. 98% of the US soybean crop goes for livestock feed. In 2011, 49% of the US maize/corn harvest was used for livestock feed (including the percentage of waste from distillers grains). "Despite methods that are becoming more and more sensitive, tests have not yet been able to establish a difference in the meat, milk, or eggs of animals depending on the type of feed they are fed. It is impossible to tell if an animal was fed GM soy just by looking at the resulting meat, dairy, or egg products. The only way to verify the presence of GMOs in animal feed is to analyze the origin of the feed itself."
A 2012 literature review of studies evaluating the effect of GM feed on the health of animals did not find evidence that animals were adversely affected, although small biological differences were occasionally found. The studies included in the review ranged from 90 days to two years, with several of the longer studies considering reproductive and intergenerational effects.
Enzymes produced by genetically modified microorganisms are also integrated into animal feed to enhance availability of nutrients and overall digestion. These enzymes may also provide benefit to the gut microbiome of an animal, as well as hydrolyse antinutritional factors present in the feed.
Proteins
The foundation of genetic engineering is DNA, which directs the production of proteins. Proteins are also the common source of human allergens. When new proteins are introduced they must be assessed for potential allergenicity.
Rennet is a mixture of enzymes used to coagulate milk into cheese. Originally it was available only from the fourth stomach of calves, and was scarce and expensive, or was available from microbial sources, which often produced unpleasant tastes. Genetic engineering made it possible to extract rennet-producing genes from animal stomachs and insert them into bacteria, fungi or yeasts to make them produce chymosin, the key enzyme. The modified microorganism is killed after fermentation. Chymosin is isolated from the fermentation broth, so that the Fermentation-Produced Chymosin (FPC) used by cheese producers has an amino acid sequence that is identical to bovine rennet. The majority of the applied chymosin is retained in the whey. Trace quantities of chymosin may remain in cheese.
FPC was the first artificially produced enzyme to be approved by the US Food and Drug Administration. FPC products have been on the market since 1990 and as of 2015 had yet to be surpassed in commercial markets. In 1999, about 60% of US hard cheese was made with FPC. Its global market share approached 80%. By 2008, approximately 80% to 90% of commercially made cheeses in the US and Britain were made using FPC.
In some countries, recombinant (GM) bovine somatotropin (also called rBST, or bovine growth hormone or BGH) is approved for administration to increase milk production. rBST may be present in milk from rBST treated cows, but it is destroyed in the digestive system and even if directly injected into the human bloodstream, has no observable effect on humans. The FDA, World Health Organization, American Medical Association, American Dietetic Association and the National Institutes of Health have independently stated that dairy products and meat from rBST-treated cows are safe for human consumption. On 30 September 2010, the United States Court of Appeals, Sixth Circuit, analyzing submitted evidence, found a "compositional difference" between milk from rBGH-treated cows and milk from untreated cows. The court stated that milk from rBGH-treated cows has: increased levels of the hormone Insulin-like growth factor 1 (IGF-1); higher fat content and lower protein content when produced at certain points in the cow's lactation cycle; and more somatic cell counts, which may "make the milk turn sour more quickly".
Benefits
Genetically modified foods are usually engineered to have desired characteristics, including the ability to survive extreme environments, enhanced nutrition, the production of therapeutic substances, and resistance to pesticides and herbicides. These characteristics can benefit humans and the environment in certain ways.
Prepare for extreme weather
Plants that have undergone genetic modification are capable of surviving extreme weather. Genetically modified (GM) food crops can sometimes be cultivated in locations with unfavorable climatic conditions. The quality and yield of genetically modified foods are often improved, and these crops tend to grow more quickly than conventionally cultivated ones. Genetic modification can also help crops tolerate drought and poor soil.
Nutritional enhancement
Increased levels of specific nutrients in food crops can be achieved by genetic engineering. The study of this technique, sometimes known as nutritional enhancement, is already well advanced. Foods can be engineered to have specific qualities, for example concentrated nutraceutical levels and health-promoting chemicals, making them a desirable component of a varied diet. Among the notable breakthroughs of genetic modification is golden rice, whose genome is altered by inserting a gene from the daffodil plant that conditions provitamin A production. This increases the activity of phytoene synthase, which synthesizes a higher amount of beta-carotene; further modification has improved the level and bioavailability of iron. These changes affect the rice's color and vitamin content, which is beneficial in places where vitamin A shortage is common. Increased mineral, vitamin A, and protein content has played a critical role in preventing childhood blindness and iron deficiency anemia.
Lipid composition can also be manipulated to produce desirable traits and essential nutrients. Scientific evidence has shown that inadequate consumption of omega-3 polyunsaturated fatty acids is generally associated with the development of chronic diseases and developmental aberrations. Alimentary lipids can be modified to alter the balance of saturated and polyunsaturated fatty acids. Genes coding for the synthesis of unsaturated fatty acids can be introduced into plant cells, increasing the synthesis of polyunsaturated omega-3 fatty acids. Omega-3 polyunsaturated fatty acids help to lower LDL cholesterol and triglyceride levels as well as the incidence of cardiovascular disease.
Production of therapeutic substances
Genetically modified organisms, including potato, tomato, and spinach, have been used to produce substances that stimulate the immune system to respond to specific pathogens. With the help of recombinant DNA techniques, genes encoding viral or bacterial antigens can be transcribed and translated in plant cells. Antibodies are then produced in response to the introduced antigens, priming the immune system against the corresponding pathogens. The transgenic organisms are usually intended for use as oral vaccines, which allow the active substances to enter the human digestive system and stimulate a mucosal immune response in the alimentary tract. This technique has been applied in vaccine production using rice, maize, and soybeans. Additionally, transgenic plants are widely used as bioreactors in the production of pharmaceutical proteins and peptides, including vaccines, hormones, and human serum albumin (HSA). The suitability of transgenic plants can help meet the demand created by the rapid growth of therapeutic antibodies. All this has given new impetus to the development of medicine.
Health and safety
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.
Opponents claim that long-term health risks have not been adequately assessed and propose various combinations of additional testing, labeling or removal from the market.
There are no certifications for foods that have been verified to both be genetically modified – in particular in a way that is ensured to be well-understood, safe and environmentally friendly – as well as otherwise organic (i.e. produced without the use of chemical pesticides) in the U.S. and possibly the world, giving consumers the binary choice of either genetically modified food or organic food.
Testing
The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. Countries such as the United States, Canada, Lebanon and Egypt use substantial equivalence to determine if further testing is required, while many countries such as those in the European Union, Brazil and China only authorize GMO cultivation on a case-by-case basis. In the U.S. the FDA determined that GMOs are "generally recognized as safe" (GRAS) and therefore do not require additional testing if the GMO product is substantially equivalent to the non-modified product. If new substances are found, further testing may be required to satisfy concerns over potential toxicity, allergenicity, possible gene transfer to humans or genetic outcrossing to other organisms.
Some studies purporting to show harm have been discredited, in some cases leading to academic condemnation against the researchers such as the Pusztai affair and the Séralini affair.
Regulation
Government regulation of GMO development and release varies widely between countries. Marked differences separate GMO regulation in the U.S. and GMO regulation in the European Union. Regulation also varies depending on the intended product's use. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. European and EU regulation has been far more restrictive than anywhere else in the world: In 2013 only 1 cultivar of maize/corn and 1 cultivar of potato were approved, and eight EU member states did not allow even those.
United States regulations
In the U.S., three government organizations regulate GMOs. The FDA checks the chemical composition of organisms for potential allergens. The United States Department of Agriculture (USDA) supervises field testing and monitors the distribution of GM seeds. The United States Environmental Protection Agency (EPA) is responsible for monitoring pesticide usage, including plants modified to contain proteins toxic to insects. Like USDA, EPA also oversees field testing and the distribution of crops that have had contact with pesticides to ensure environmental safety. In 2015 the Obama administration announced that it would update the way the government regulated GM crops.
In 1992 FDA published "Statement of Policy: Foods derived from New Plant Varieties". This statement is a clarification of FDA's interpretation of the Food, Drug, and Cosmetic Act with respect to foods produced from new plant varieties developed using recombinant deoxyribonucleic acid (rDNA) technology. FDA encouraged developers to consult with the FDA regarding any bioengineered foods in development. The FDA says developers routinely do reach out for consultations. In 1996 FDA updated consultation procedures.
The StarLink corn recalls occurred in the autumn of 2000, when over 300 food products were found to contain a genetically modified maize/corn that had not been approved for human consumption. It was the first-ever recall of a genetically modified food.
European regulations
The European Union's control of genetically modified organisms illustrates both the promise and the limitations of debate as a framework for supranational regulation. The issues posed by the EU's GMO regulation have caused major problems in agriculture, politics, society, and other fields. EU law regulates the development and use of GMOs by allocating responsibilities to different authorities, public and private, accompanied by limited recognition of public information, consultation, and participation rights. The European Convention on Human Rights (ECHR) provides certain rights and protections relevant to GM biotechnology in the EU. However, the values of human dignity, liberty, equality, and solidarity, as well as the principles of democracy and the rule of law, as emphasized in the European Charter of Fundamental Rights, are considered the ethical framework governing scientific and technological research and development.
Due to the political, religious, and social differences between EU countries, the EU's position on GM has been divided geographically, including more than 100 "GM-free" regions. Different regional attitudes to GM foods make it nearly impossible to reach a common agreement on them. In recent years, however, the sense of crisis that this has generated for the European Union has intensified. Some member states, including Germany, France, Austria, Italy, and Luxembourg, have even banned the planting of certain GM crops in their countries in response to public resistance to GM foods. This is set against a backdrop of consumers who regard GM foods as harmful to both the environment and human health and who have organized against them in an anti-biotech coalition. The current political deadlock over GM foods is also a consequence of these bans and has yet to be resolved by scientific methods and processes. Public opinion tends to politicize the GM issue, which is the main obstacle to an agreement in the EU.
In the United Kingdom, the Food Standards Agency assesses GM foods for their toxicity, nutritional value, and potential to cause allergic reactions. GM foods can be authorised for sale where they present no risk to health, do not mislead consumers, and have nutritional value at least equivalent to non-modified counterparts. The Genetic Technology (Precision Breeding) Act passed into law on 23 March 2023. The UK government said it would allow farmers to "grow crops which are drought and disease resistant, reduce use of fertilisers and pesticides, and help breed animals that are protected from catching harmful diseases".
Labeling
As of 2015, 64 countries require labeling of GMO products in the marketplace.
US and Canadian national policy is to require a label only given significant composition differences or documented health impacts, although some individual US states (Vermont, Connecticut and Maine) enacted laws requiring them. In July 2016, Public Law 114-214 was enacted to regulate labeling of GMO food on a national basis.
In some jurisdictions, the labeling requirement depends on the relative quantity of GMO in the product. A study that investigated voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%.
In the European Union all food (including processed food) or feed that contains greater than 0.9% GMOs must be labelled.
At the same time, due to the lack of a single, clear definition of GMO, a number of foods created using genetic engineering techniques (such as mutation breeding) are excluded from labelling and regulation based on "convention" and traditional usage.
The Non-GMO Project is the sole U.S. organization that does verifiable testing and places seals on labels for presence of GMO in products. The "Non-GMO Project Seal" indicates that the product contains 0.9% or less GMO ingredients, which is the European Union's standard for labeling.
Efforts across the world to help restrict and label GMOs in food involve anti-genetic-engineering campaigns; in America, the "Just Label It" movement brings organizations together to call for mandatory labeling.
Detection
Testing on GMOs in food and feed is routinely done using molecular techniques such as PCR and bioinformatics.
In a January 2010 paper, the extraction and detection of DNA along a complete industrial soybean oil processing chain was described to monitor the presence of Roundup Ready (RR) soybean: "The amplification of soybean lectin gene by end-point polymerase chain reaction (PCR) was successfully achieved in all the steps of extraction and refining processes, until the fully refined soybean oil. The amplification of RR soybean by PCR assays using event-specific primers was also achieved for all the extraction and refining steps, except for the intermediate steps of refining (neutralisation, washing and bleaching) possibly due to sample instability. The real-time PCR assays using specific probes confirmed all the results and proved that it is possible to detect and quantify genetically modified organisms in the fully refined soybean oil. To our knowledge, this has never been reported before and represents an important accomplishment regarding the traceability of genetically modified organisms in refined oils."
According to Thomas Redick, detection and prevention of cross-pollination is possible through the suggestions offered by the Farm Service Agency (FSA) and Natural Resources Conservation Service (NRCS). Suggestions include educating farmers on the importance of coexistence, providing farmers with tools and incentives to promote coexistence, conducting research to understand and monitor gene flow, providing assurance of quality and diversity in crops, and providing compensation for actual economic losses for farmers.
Controversies
The genetically modified foods controversy consists of a set of disputes over the use of food made from genetically modified crops. The disputes involve consumers, farmers, biotechnology companies, governmental regulators, non-governmental organizations, environmental and political activists and scientists. The major disagreements include whether GM foods can be safely consumed, harm the human body and the environment and/or are adequately tested and regulated. The objectivity of scientific research and publications has been challenged. Farming-related disputes include the use and impact of pesticides, seed production and use, side effects on non-GMO crops/farms, and potential control of the GM food supply by seed companies.
The conflicts have continued since GM foods were invented. They have occupied the media, the courts, local, regional, national governments, and international organizations.
"GMO-free" labelling schemes are causing controversies in farming community due to lack of clear definition, inconsistency of their application and are described as "deceptive".
Allergenicity
New allergies could be introduced inadvertently, according to scientists, community groups, and members of the public concerned about the genetic alteration of foods. An example involves the production of methionine-rich soybeans. To increase methionine, an amino acid abundant in Brazil nut proteins, a gene from the Brazil nut, a known allergen, was inserted into soybeans during laboratory trials. Because it was discovered that people allergic to Brazil nuts could also be allergic to the genetically modified soybeans, the project was stopped. In vitro assays such as RAST, using serum from people allergic to the original source organism, can be applied to test the allergenicity of GM products when the source of the gene is known. This was established in GM soybeans that expressed Brazil nut 2S proteins and GM potatoes that expressed cod protein genes. The expression and synthesis of new proteins that were previously absent from the parental cells are achieved by transferring genes from the cells of one organism to the nuclei of another. The potential allergy risks of transgenic food stem from the amino acid sequences of the newly formed proteins. However, there have been no reports of allergic reactions to currently approved GM foods for human consumption, and experiments showed no measurable difference in allergenicity between GM and non-GM soybeans.
Resistance genes
Scientists suggest that consumers should also pay attention to the health issues associated with the use of pesticide-resistant and herbicide-resistant plants. 'Bt' genes confer insect resistance in today's GM crops; however, other methods to confer insect resistance are in the works. The Bt genes are usually obtained from the soil bacterium Bacillus thuringiensis, and they encode a protein that breaks down in the insect's gut, releasing a toxin called delta-endotoxin, which causes paralysis and death. Concerns about resistance and off-target effects of crops expressing Bt toxins, the consequences of herbicide use on transgenic herbicide-tolerant plants, and the transfer of gene expression from GM crops via vertical and horizontal gene transfer are all related to the expression of transgenic material.
Environmental impacts
Another concern raised by ecologists is the possible spread of pest-resistance genes to wildlife. This is an example of gene pollution, which is often associated with a decrease in biodiversity, the proliferation of resistant weeds, and the formation of new pests and pathogens.
Studies have shown that herbicide-resistant pollen from transgenic rapeseed can spread up to 3 km; the average gene spread of transgenic crops is about 2 km and can reach a maximum of 21 km. The competitiveness of these GM crops could harm traditional crops by competing with them for water, light, and nutrients. Cross-breeding of spreading pollen with surrounding organisms can introduce the modified resistance genes into other plants. Genetic contamination by undesired seeds, documented in an international database, has become a major problem with the expansion of field trials and commercially viable cultivation of GM crops around the world. Even a decrease in the number of one pest under the impact of a pest-resistant crop could increase the population of other pests that compete with it. Beneficial insects, so named because they prey on crop pests, have also been exposed to dangerous doses of Bt.
Other concerns
The introduction of GM crops in place of more locally adapted varieties could lead to long-term negative effects on the entire agricultural system. Much of the concern with GM technology involves introduced genes that increase or decrease the levels of particular biochemicals. A newly introduced enzyme might, for example, consume its substrate and cause its products to form and accumulate.
In terms of socioeconomics, GM crops usually depend on high levels of external inputs, for example pesticides and herbicides, which restricts GM crops to high-input agriculture. This, coupled with the widespread patents held on GM crops, limits farmers' rights to trade or replant the harvested seed without paying royalties. Other arguments against GM crops held by some opponents are based on the high cost of segregating GM crops from non-GM crops during distribution.
Consumers can be categorized based on their attitudes regarding genetically modified foods. Attitudinal differences among US consumers can be explained in part by cognitive characteristics that are not always observable. Individual characteristics and values, for example, can play a role in shaping consumer acceptance of biotechnology. The concept of transplanting animal DNA into plants is unsettling for many people. Studies have shown that consumers' attitudes towards GM technology are positively correlated with their knowledge about it: greater acceptance of genetic modification is usually associated with a higher education level, whereas high levels of perceived risk are associated with the opposite. People tend to worry about unpredictable dangers when they lack sufficient knowledge to predict or avoid negative impacts.
Consumer attitudes towards genetically modified foods have also been shown to be closely related to socioeconomic and demographic characteristics, for example age, ethnicity, residence, and level of consumption. Opposition to genetically modified foods can also come from religious and cultural groups, because the nature of GM foods conflicts with what they regard as natural products. On the one hand, consumers in most European countries, especially in northern Europe, the UK and Germany, have been found to believe that the benefits of GM foods do not outweigh the potential risks. On the other hand, consumers in the United States and some other countries generally hold the view that the risks of GM foods are far smaller than the benefits they bring. GM foods are therefore expected to need the support of more appropriate policies and clearer regulations.
See also
List of genetically modified crops
Genetically modified crops
Genetically modified food controversies
Genetically modified organisms
California Proposition 37 (2012) - rejected labeling initiative
Pharming (genetics) – use of genetically modified mammals to produce drugs
Regulation of the release of genetic modified organisms
StarLink corn recall in 2000
References
External links
Food industry
Genetic engineering
Genetically modified organisms in agriculture
Molecular biology | Genetically modified food | [
"Chemistry",
"Engineering",
"Biology"
] | 9,466 | [
"Biochemistry",
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
216,104 | https://en.wikipedia.org/wiki/Protein%20engineering | Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much research taking place into the understanding of protein folding and recognition for protein design principles. It has been used to improve the function of many enzymes for industrial catalysis. It is also a product and services market, with an estimated value of $168 billion by 2017.
There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the abilities of protein engineering. Eventually, even unnatural amino acids may be included, via newer methods, such as expanded genetic code, that allow encoding novel amino acids in genetic code.
The applications, in numerous fields including medicine and industrial bioprocessing, are vast.
Approaches
Rational design
In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well-developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable and, even when it is available, it can be very difficult to predict the effects of various mutations, since structural information most often provides only a static picture of a protein structure. Programs such as Folding@home and Foldit have nevertheless utilized crowdsourcing techniques to gain insight into the folding motifs of proteins.
Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.
Multiple sequence alignment
Without structural information about a protein, sequence analysis is often useful in elucidating information about the protein. These techniques involve alignment of target protein sequences with other related protein sequences. This alignment can show which amino acids are conserved between species and are important for the function of the protein. These analyses can help to identify hot spot amino acids that can serve as the target sites for mutations. Multiple sequence alignment utilizes databases such as PREFAB, SABMARK, OXBENCH, IRMBASE, and BALIBASE in order to cross-reference target protein sequences with known sequences. Multiple sequence alignment techniques are listed below.
Clustal W
This method begins by performing pairwise alignment of sequences using k-tuple or Needleman–Wunsch methods. These methods calculate a matrix that depicts the pairwise similarity among the sequence pairs. Similarity scores are then transformed into distance scores that are used to produce a guide tree using the neighbor joining method. This guide tree is then employed to yield a multiple sequence alignment.
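The pairwise alignment step can be illustrated with a minimal Needleman–Wunsch implementation. This is only a sketch: the match/mismatch/gap scores below are assumed for illustration, whereas real alignment tools use substitution matrices such as BLOSUM and affine gap penalties. The final matrix value is the similarity score that feeds the distance matrix, and the traceback recovers one optimal alignment.

```python
# Minimal Needleman-Wunsch global alignment (illustrative sketch only).
# The match/mismatch/gap scores are assumptions; real alignment tools use
# substitution matrices such as BLOSUM and affine gap penalties.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # Fill the (n+1) x (m+1) dynamic-programming score matrix.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    # Trace back to recover one optimal alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), F[n][m]

print(needleman_wunsch("HEAGAWGHEE", "PAWHEAE"))
```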
Clustal omega
This method is capable of aligning up to 190,000 sequences by utilizing the k-tuple method. Next, sequences are clustered using the mBed and k-means methods. A guide tree is then constructed using the UPGMA method, and this guide tree is used by the HHalign package to generate the multiple sequence alignment.
MAFFT
This method utilizes fast Fourier transform (FFT) that converts amino acid sequences into a sequence composed of volume and polarity values for each amino acid residue. This new sequence is used to find homologous regions.
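The FFT step can be sketched as follows. The volume and polarity values below are illustrative placeholders rather than MAFFT's actual tables, and the correlation peak only suggests an offset at which homologous segments may line up.

```python
# Sketch of the FFT idea used by MAFFT: residues are encoded by physico-
# chemical values (volume, polarity) and the correlation of two encoded
# sequences is computed via FFT; peaks suggest candidate offsets of
# homologous regions. The property values are invented placeholders.
import numpy as np

VOLUME = {"A": 0.1, "G": 0.0, "L": 0.6, "K": 0.7, "W": 1.0, "S": 0.2}    # assumed values
POLARITY = {"A": 0.0, "G": 0.1, "L": 0.0, "K": 1.0, "W": 0.2, "S": 0.6}  # assumed values

def encode(seq, table):
    v = np.array([table.get(r, 0.5) for r in seq], dtype=float)
    return v - v.mean()  # remove the mean so correlation reflects shape, not offset

def correlation(seq1, seq2, table):
    x, y = encode(seq1, table), encode(seq2, table)
    n = len(x) + len(y) - 1
    # Cross-correlation computed via FFT; c[k] is large when seq2 resembles
    # the segment of seq1 starting near offset k.
    return np.fft.irfft(np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n)), n)

seq1 = "GAKLWSSAKLW"
seq2 = "AKLWSS"
corr = correlation(seq1, seq2, VOLUME) + correlation(seq1, seq2, POLARITY)
print("candidate offset of a homologous region:", int(np.argmax(corr)))
```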
K-Align
This method utilizes the Wu-Manber approximate string matching algorithm to generate multiple sequence alignments.
Multiple sequence comparison by log expectation (MUSCLE)
This method utilizes Kmer and Kimura distances to generate multiple sequence alignments.
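A k-mer distance in the spirit of this approach can be sketched as the fraction of k-mers that two unaligned sequences do not share; the exact MUSCLE formula, and the Kimura correction applied to aligned sequences, differ from this toy version.

```python
# Illustrative k-mer distance between two unaligned sequences, in the spirit
# of MUSCLE's fast distance estimate (not the exact MUSCLE formula).
from collections import Counter

def kmer_distance(a, b, k=3):
    ka = Counter(a[i:i + k] for i in range(len(a) - k + 1))
    kb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
    shared = sum(min(ka[x], kb[x]) for x in ka)       # k-mers common to both sequences
    total = min(sum(ka.values()), sum(kb.values()))   # k-mers in the shorter sequence
    return 1.0 - shared / total                       # 0 = identical k-mer content

print(kmer_distance("MKTAYIAKQR", "MKTAYIAKQL"))   # closely related -> small distance
print(kmer_distance("MKTAYIAKQR", "GGGGGGGGGG"))   # unrelated -> distance 1.0
```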
T-Coffee
This method utilizes tree-based consistency objective functions for alignment evaluation. This method has been shown to be 5–10% more accurate than Clustal W.
Coevolutionary analysis
Coevolutionary analysis is also known as correlated mutation, covariation, or co-substitution. This type of rational design involves reciprocal evolutionary changes at evolutionarily interacting loci. Generally this method begins with the generation of a curated multiple sequence alignments for the target sequence. This alignment is then subjected to manual refinement that involves removal of highly gapped sequences, as well as sequences with low sequence identity. This step increases the quality of the alignment. Next, the manually processed alignment is utilized for further coevolutionary measurements using distinct correlated mutation algorithms. These algorithms result in a coevolution scoring matrix. This matrix is filtered by applying various significance tests to extract significant coevolution values and wipe out background noise. Coevolutionary measurements are further evaluated to assess their performance and stringency. Finally, the results from this coevolutionary analysis are validated experimentally.
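One widely used correlated-mutation score is the mutual information between two alignment columns. The sketch below computes it over a small invented alignment; real pipelines add corrections (for example the average-product correction) and the significance filtering described above.

```python
# Mutual information (MI) between columns of a multiple sequence alignment,
# one common coevolution score. The toy alignment is invented for illustration.
import math
from collections import Counter

msa = ["AKLDE",
       "AKLDE",
       "ARLNE",
       "ARLNE",
       "AKIDE"]  # hypothetical aligned sequences

def column(j):
    return [seq[j] for seq in msa]

def mutual_information(i, j):
    n = len(msa)
    pi, pj = Counter(column(i)), Counter(column(j))
    pij = Counter(zip(column(i), column(j)))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

L = len(msa[0])
scores = {(i, j): mutual_information(i, j) for i in range(L) for j in range(i + 1, L)}
# Column pair (1, 3) covaries perfectly (K<->D, R<->N) and scores highest.
print(max(scores, key=scores.get), max(scores.values()))
```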
Structural prediction
De novo generation of protein benefits from knowledge of existing protein structures. This knowledge of existing protein structure assists with the prediction of new protein structures. Methods for protein structure prediction fall under one of the four following classes: ab initio, fragment based methods, homology modeling, and protein threading.
Ab initio
These methods involve free modeling without using any structural information about the template. Ab initio methods are aimed at predicting the native structures of proteins, corresponding to the global minimum of their free energy. Some examples of ab initio methods are AMBER, GROMOS, GROMACS, CHARMM, OPLS, and ENCEPP12. General steps for ab initio methods begin with the geometric representation of the protein of interest. Next, a potential energy function model for the protein is developed. This model can be created using either molecular mechanics potentials or potential functions derived from known protein structures. Following the development of a potential model, energy search techniques including molecular dynamics simulations, Monte Carlo simulations and genetic algorithms are applied to the protein.
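The energy-search step can be illustrated with a Metropolis Monte Carlo loop over a toy one-dimensional energy function; a real ab initio calculation would instead minimize a molecular-mechanics force field over all atomic coordinates.

```python
# Minimal Metropolis Monte Carlo energy search over a toy energy landscape.
# The one-dimensional "energy" stands in for a molecular-mechanics force field
# (bond, angle, torsion and non-bonded terms) evaluated over atomic coordinates.
import math
import random

def energy(x):
    # Toy rugged landscape with a global minimum near x = 3.
    return (x - 3.0) ** 2 + 1.5 * math.sin(5.0 * x)

def metropolis(steps=20000, temperature=1.0, step_size=0.5, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    e = energy(x)
    best_x, best_e = x, e
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        e_new = energy(x_new)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

print(metropolis())
```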
Fragment based
These methods use database information regarding structures to match homologous structures to the created protein sequences. These homologous structures are assembled to give compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score. Webservers for fragment information are I-TASSER, ROSETTA, Rosetta@home, FRAGFOLD, CABS fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR.
Homology modeling
These methods are based upon the homology of proteins. These methods are also known as comparative modeling. The first step in homology modeling is generally the identification of template sequences of known structure which are homologous to the query sequence. Next the query sequence is aligned to the template sequence. Following the alignment, the structurally conserved regions are modeled using the template structure. This is followed by the modeling of side chains and loops that are distinct from the template. Finally the modeled structure undergoes refinement and assessment of quality. Servers that are available for homology modeling data are listed here: SWISS MODEL, MODELLER, ReformAlign, PyMOD, TIP-STRUCTFAST, COMPASS, 3d-PSSM, SAMT02, SAMT99, HHPRED, FAGUE, 3D-JIGSAW, META-PP, ROSETTA, and I-TASSER.
Protein threading
Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over known template structures. These candidate models are scored using scoring functions. These are scored based upon potential energy models of both query and template sequence. The match with the lowest potential energy model is then selected. Methods and servers for retrieving threading data and performing calculations are listed here: GenTHREADER, pGenTHREADER, pDomTHREADER, ORFEUS, PROSPECT, BioShell-Threading, FFASO3, RaptorX, HHPred, LOOPP server, Sparks-X, SEGMER, THREADER2, ESYPRED3D, LIBRA, TOPITS, RAPTOR, COTH, MUSTER.
For more information on rational design see site-directed mutagenesis.
Multivalent binding
Multivalent binding can be used to increase the binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood of other interactions to occur via individual binding events. Avidity or effective affinity can be much higher than the sum of the individual affinities providing a cost and time-effective tool for targeted binding.
Multivalent proteins
Multivalent proteins are relatively easy to produce by post-translational modifications or by multiplying the protein-coding DNA sequence. The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity of a known protein for its target. In the case of an inhomogeneous target, using a combination of proteins that results in multispecific binding can increase specificity, which has high applicability in protein therapeutics.
The most common example of multivalent binding is the antibody, and there is extensive research on bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy.
Directed evolution
In directed evolution, random mutagenesis, e.g. by error-prone PCR or sequence saturation mutagenesis, is applied to a protein, and a selection regime is used to select variants having desired traits. Further rounds of mutation and selection are then applied. This method mimics natural evolution and, in general, produces superior results to rational design. An added process, termed DNA shuffling, mixes and matches pieces of successful variants to produce better results. Such processes mimic the recombination that occurs naturally during sexual reproduction. Advantages of directed evolution are that it requires no prior structural knowledge of a protein, nor is it necessary to be able to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising in that desired changes are often caused by mutations that were not expected to have any effect. The drawback is that directed evolution requires high-throughput screening, which is not feasible for all proteins. Large amounts of recombinant DNA must be mutated and the products screened for desired traits. The large number of variants often requires expensive robotic equipment to automate the process. Further, not all desired activities can be screened for easily.
Natural Darwinian evolution can be effectively imitated in the lab toward tailoring protein properties for diverse applications, including catalysis. Many experimental technologies exist to produce large and diverse protein libraries and for screening or selecting folded, functional variants. Folded proteins arise surprisingly frequently in random sequence space, an occurrence exploitable in evolving selective binders and catalysts. While more conservative than direct selection from deep sequence space, redesign of existing proteins by random mutagenesis and selection/screening is a particularly robust method for optimizing or altering extant properties. It also represents an excellent starting point for achieving more ambitious engineering goals. Allying experimental evolution with modern computational methods is likely the broadest, most fruitful strategy for generating functional macromolecules unknown to nature.
The design of high quality mutant libraries has seen significant progress in the recent past. This progress has been in the form of better descriptions of the effects of mutational loads on protein traits. Computational approaches have also made large advances in reducing the innumerably large sequence space to more manageable, screenable sizes, thus creating smart libraries of mutants. Library sizes have also been reduced to more screenable sizes by the identification of key beneficial residues using algorithms for systematic recombination. Finally, a significant step forward toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functions.
Generally, directed evolution may be summarized as an iterative two-step process involving the generation of protein mutant libraries and high-throughput screening to select for variants with improved traits. This technique does not require prior knowledge of the protein's structure-function relationship. Directed evolution utilizes random or focused mutagenesis to generate libraries of mutant proteins. Random mutations can be introduced using either error-prone PCR or site saturation mutagenesis. Mutants may also be generated by recombination of multiple homologous genes. Nature has evolved only a limited number of beneficial sequences; directed evolution makes it possible to identify undiscovered protein sequences which have novel functions. This ability is contingent on the protein's capacity to tolerate amino acid residue substitutions without compromising folding or stability.
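The iterative cycle can be caricatured in a few lines of code: build a mutant library from the current best variant, "screen" it with a fitness function, and carry the winner into the next round. The fitness function here (similarity to an arbitrary target sequence) is purely illustrative and stands in for a real laboratory screen.

```python
# Toy simulation of the iterative directed-evolution cycle: mutate the current
# best sequence to build a library, score the library with a stand-in fitness
# function, and carry the best variant into the next round.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQRQISFVK"          # stand-in for an "improved trait"; purely illustrative
rng = random.Random(42)

def fitness(seq):
    # Placeholder screen: number of positions matching the target.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, n_mut=1):
    s = list(seq)
    for _ in range(n_mut):
        s[rng.randrange(len(s))] = rng.choice(AMINO_ACIDS)
    return "".join(s)

parent = "".join(rng.choice(AMINO_ACIDS) for _ in TARGET)
for generation in range(30):
    library = [mutate(parent) for _ in range(200)]   # random mutagenesis
    parent = max(library + [parent], key=fitness)    # high-throughput "screen"
print(parent, fitness(parent), "of", len(TARGET))
```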
Directed evolution methods can be broadly categorized into two strategies, asexual and sexual methods.
Asexual methods
Asexual methods do not generate any cross links between parental genes. Single genes are used to create mutant libraries using various mutagenic techniques. These asexual methods can produce either random or focused mutagenesis.
Random mutagenesis
Random mutagenic methods produce mutations at random throughout the gene of interest. Random mutagenesis can introduce the following types of mutations: transitions, transversions, insertions, deletions, inversion, missense, and nonsense. Examples of methods for producing random mutagenesis are below.
Error prone PCR
Error-prone PCR utilizes the fact that Taq DNA polymerase lacks 3' to 5' exonuclease activity. This results in an error rate of 0.001–0.002% per nucleotide per replication. This method begins with choosing the gene, or the region within a gene, that one wishes to mutate. Next, the extent of error required is calculated based upon the type and extent of activity one wishes to generate; this extent of error determines the error-prone PCR strategy to be employed. Following PCR, the genes are cloned into a plasmid and introduced into competent cell systems. These cells are then screened for desired traits. Plasmids are then isolated from colonies which show improved traits and used as templates for the next round of mutagenesis. Error-prone PCR shows biases for certain mutations relative to others, such as a bias for transitions over transversions.
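The "extent of error" calculation can be illustrated with a back-of-the-envelope estimate of mutational load; the gene length and number of duplications below are assumed, illustrative values.

```python
# Back-of-the-envelope estimate of the mutational load in error-prone PCR:
# expected mutations per gene copy = (error rate per nucleotide per replication)
# x (gene length) x (number of effective duplications). Gene length and
# duplication number are assumed values, not from the text.
import math

error_rate = 0.002 / 100   # 0.002% per nucleotide per replication (upper figure quoted above)
gene_length = 900          # nucleotides (assumed)
duplications = 20          # effective template duplications during the PCR (assumed)

expected_mutations = error_rate * gene_length * duplications
fraction_mutated = 1 - math.exp(-expected_mutations)  # Poisson estimate of clones with >= 1 mutation

print(f"~{expected_mutations:.2f} mutations expected per gene copy")
print(f"~{fraction_mutated:.0%} of clones expected to carry at least one mutation")
```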
Rates of error in PCR can be increased in the following ways:
Increase the concentration of magnesium chloride, which stabilizes non-complementary base pairing.
Add manganese chloride to reduce base pair specificity.
Increased and unbalanced addition of dNTPs.
Addition of base analogs like dITP, 8-oxo-dGTP, and dPTP.
Increase concentration of Taq polymerase.
Increase extension time.
Increase cycle time.
Use less accurate Taq polymerase.
Also see polymerase chain reaction for more information.
Rolling circle error-prone PCR
This PCR method is based upon rolling circle amplification, which is modeled on the method that bacteria use to amplify circular DNA. This method results in linear DNA duplexes. These fragments contain tandem repeats of circular DNA called concatemers, which can be transformed into bacterial strains. Mutations are introduced by first cloning the target sequence into an appropriate plasmid. Next, the amplification process begins using random hexamer primers and Φ29 DNA polymerase under error-prone rolling circle amplification conditions. Additional conditions to produce error-prone rolling circle amplification are 1.5 pM of template DNA, 1.5 mM MnCl2 and a 24 hour reaction time. MnCl2 is added to the reaction mixture to promote random point mutations in the DNA strands. Mutation rates can be increased by increasing the concentration of MnCl2, or by decreasing the concentration of the template DNA. Error-prone rolling circle amplification is advantageous relative to error-prone PCR because of its use of universal random hexamer primers, rather than specific primers. Also, the reaction products of this amplification do not need to be treated with ligases or endonucleases. This reaction is isothermal.
Chemical mutagenesis
Chemical mutagenesis involves the use of chemical agents to introduce mutations into genetic sequences. Examples of chemical mutagens follow.
Sodium bisulfite is effective at mutating G/C rich genomic sequences. This is because sodium bisulfite catalyses deamination of unmethylated cytosine to uracil.
Ethyl methane sulfonate alkylates guanine residues. This alteration causes errors during DNA replication.
Nitrous acid causes transition mutations by deamination of adenine and cytosine.
The dual approach to random chemical mutagenesis is an iterative two-step process. First, it involves the in vivo chemical mutagenesis of the gene of interest via EMS. Next, the treated gene is isolated and cloned into an untreated expression vector in order to prevent mutations in the plasmid backbone. This technique preserves the plasmid's genetic properties.
Targeting glycosylases to embedded arrays for mutagenesis (TaGTEAM)
This method has been used to create targeted in vivo mutagenesis in yeast. This method involves the fusion of a 3-methyladenine DNA glycosylase to a tetR DNA-binding domain. This has been shown to increase mutation rates by more than 800-fold in regions of the genome containing tetO sites.
Mutagenesis by random insertion and deletion
This method involves altering the length of the sequence via simultaneous deletion and insertion of chunks of bases of arbitrary length. This method has been shown to produce proteins with new functionalities via the introduction of new restriction sites, specific codons, and four-base codons for non-natural amino acids.
Transposon based random mutagenesis
Recently, many methods for transposon based random mutagenesis have been reported. These methods include, but are not limited to, the following: PERMUTE-random circular permutation, random protein truncation, random nucleotide triplet substitution, random domain/tag/multiple amino acid insertion, codon scanning mutagenesis, and multicodon scanning mutagenesis. These aforementioned techniques all require the design of mini-Mu transposons. Thermo Scientific manufactures kits for the design of these transposons.
Random mutagenesis methods altering the target DNA length
These methods involve altering gene length via insertion and deletion mutations. An example is the tandem repeat insertion (TRINS) method. This technique results in the generation of tandem repeats of random fragments of the target gene via rolling circle amplification and concurrent incorporation of these repeats into the target gene.
Mutator strains
Mutator strains are bacterial cell lines which are deficient in one or more DNA repair mechanisms. An example of a mutator strain is E. coli XL1-RED. This strain of E. coli is deficient in the MutS, MutD and MutT DNA repair pathways. The use of mutator strains is useful for introducing many types of mutation; however, these strains show progressive sickness of culture because of the accumulation of mutations in the strain's own genome.
Focused mutagenesis
Focused mutagenic methods produce mutations at predetermined amino acid residues. These techniques require an understanding of the sequence-function relationship for the protein of interest. Understanding of this relationship allows for the identification of residues which are important in stability, stereoselectivity, and catalytic efficiency. Examples of methods that produce focused mutagenesis are below.
Site saturation mutagenesis
Site saturation mutagenesis is a PCR based method used to target amino acids with significant roles in protein function. The two most common techniques for performing this are whole plasmid single PCR, and overlap extension PCR.
Whole plasmid single PCR is also referred to as site directed mutagenesis (SDM). SDM products are subjected to DpnI endonuclease digestion. This digestion results in cleavage of only the parental strand, because the parental strand contains a GmATC which is methylated at N6 of adenine. SDM does not work well for large plasmids of over ten kilobases. Also, this method is only capable of replacing two nucleotides at a time.
Overlap extension PCR requires the use of two pairs of primers. One primer in each set contains a mutation. A first round of PCR using these primer sets is performed and two double stranded DNA duplexes are formed. A second round of PCR is then performed in which these duplexes are denatured and annealed with the primer sets again to produce heteroduplexes, in which each strand has a mutation. Any gaps in these newly formed heteroduplexes are filled with DNA polymerases and further amplified.
Sequence saturation mutagenesis (SeSaM)
Sequence saturation mutagenesis results in the randomization of the target sequence at every nucleotide position. This method begins with the generation of variable-length DNA fragments tailed with universal bases via the use of terminal transferases at the 3' termini. Next, these fragments are extended to full length using a single stranded template. The universal bases are replaced with a random standard base, causing mutations. There are several modified versions of this method such as SeSAM-Tv-II, SeSAM-Tv+, and SeSAM-III.
Single primer reactions in parallel (SPRINP)
This site saturation mutagenesis method involves two separate PCR reactions. The first uses only forward primers, while the second reaction uses only reverse primers. This avoids the formation of primer dimers.
Mega primed and ligase free focused mutagenesis
This site saturation mutagenic technique begins with one mutagenic oligonucleotide and one universal flanking primer. These two reactants are used for an initial PCR cycle. Products from this first PCR cycle are used as mega primers for the next PCR.
Ω-PCR
This site saturation mutagenic method is based on overlap extension PCR. It is used to introduce mutations at any site in a circular plasmid.
PFunkel-ominchange-OSCARR
This method utilizes user defined site directed mutagenesis at single or multiple sites simultaneously. OSCARR is an acronym for one pot simple methodology for cassette randomization and recombination. This randomization and recombination results in randomization of desired fragments of a protein. Omnichange is a sequence independent, multisite saturation mutagenesis which can saturate up to five independent codons on a gene.
Trimer-dimer mutagenesis
This method removes redundant codons and stop codons.
Cassette mutagenesis
This is a PCR based method. Cassette mutagenesis begins with the synthesis of a DNA cassette containing the gene of interest, which is flanked on either side by restriction sites. The endonuclease which cleaves these restriction sites also cleaves sites in the target plasmid. The DNA cassette and the target plasmid are both treated with endonucleases to cleave these restriction sites and create sticky ends. Next, the products from this cleavage are ligated together, resulting in the insertion of the gene into the target plasmid. An alternative form of cassette mutagenesis called combinatorial cassette mutagenesis is used to identify the functions of individual amino acid residues in the protein of interest. Recursive ensemble mutagenesis then utilizes information from previous combinatorial cassette mutagenesis. Codon cassette mutagenesis allows the insertion or replacement of a single codon at a particular site in double stranded DNA.
Sexual methods
Sexual methods of directed evolution involve in vitro recombination which mimics natural in vivo recombination. Generally, these techniques require high sequence homology between parental sequences. These techniques are often used to recombine two different parental genes, and these methods create crossovers between these genes.
In vitro homologous recombination
Homologous recombination can be categorized as either in vivo or in vitro. In vitro homologous recombination mimics natural in vivo recombination. These in vitro recombination methods require high sequence homology between parental sequences. These techniques exploit the natural diversity in parental genes by recombining them to yield chimeric genes. The resulting chimera show a blend of parental characteristics.
DNA shuffling
This in vitro technique was one of the first techniques in the era of recombination. It begins with the digestion of homologous parental genes into small fragments by DNase I. These small fragments are then purified from undigested parental genes. Purified fragments are then reassembled using primer-less PCR. This PCR involves homologous fragments from different parental genes priming for each other, resulting in chimeric DNA. The chimeric DNA of parental size is then amplified using end terminal primers in regular PCR.
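The outcome of shuffling can be pictured with a highly simplified in silico model in which a chimeric gene switches between homologous parents at random crossover points; the sketch below only illustrates the resulting chimeras, not the DNase I fragmentation and primer-less PCR reassembly themselves, and the parent sequences are invented examples.

```python
# Highly simplified in silico picture of DNA shuffling: each chimera is modeled
# as switching between two homologous parents at random crossover points.
import random

rng = random.Random(1)
parent_a = "ATGGCTAAAGGTGAAGAACTGTTCACCGGT"   # invented example sequence
parent_b = "ATGGCGAAAGGCGAAGAGCTGTTTACCGGC"   # homologous variant of parent_a

def shuffle_once(parents, n_crossovers=3):
    length = len(parents[0])
    points = sorted(rng.sample(range(1, length), n_crossovers))
    segments, start, current = [], 0, rng.randrange(len(parents))
    for p in points + [length]:
        segments.append(parents[current][start:p])   # copy a segment from the current parent
        start = p
        current = rng.randrange(len(parents))        # template switch at the crossover point
    return "".join(segments)

library = [shuffle_once([parent_a, parent_b]) for _ in range(5)]
for chimera in library:
    print(chimera)
```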
Random priming in vitro recombination (RPR)
This in vitro homologous recombination method begins with the synthesis of many short gene fragments exhibiting point mutations using random sequence primers. These fragments are reassembled to full length parental genes using primer-less PCR. These reassembled sequences are then amplified using PCR and subjected to further selection processes. This method is advantageous relative to DNA shuffling because there is no use of DNase1, thus there is no bias for recombination next to a pyrimidine nucleotide. This method is also advantageous due to its use of synthetic random primers which are uniform in length, and lack biases. Finally this method is independent of the length of DNA template sequence, and requires a small amount of parental DNA.
Truncated metagenomic gene-specific PCR
This method generates chimeric genes directly from metagenomic samples. It begins with isolation of the desired gene by functional screening from metagenomic DNA sample. Next, specific primers are designed and used to amplify the homologous genes from different environmental samples. Finally, chimeric libraries are generated to retrieve the desired functional clones by shuffling these amplified homologous genes.
Staggered extension process (StEP)
This in vitro method is based on template switching to generate chimeric genes. This PCR based method begins with an initial denaturation of the template, followed by annealing of primers and a short extension time. All subsequent cycles generate annealing between the short fragments generated in previous cycles and different parts of the template. These short fragments and the templates anneal together based on sequence complementarity. This process of fragments annealing to template DNA is known as template switching. These annealed fragments will then serve as primers for further extension. This method is carried out until the parental-length chimeric gene sequence is obtained. Execution of this method only requires flanking primers to begin. There is also no need for the DNase I enzyme.
Random chimeragenesis on transient templates (RACHITT)
This method has been shown to generate chimeric gene libraries with an average of 14 crossovers per chimeric gene. It begins by aligning fragments from a parental top strand onto the bottom strand of a uracil-containing template from a homologous gene. 5' and 3' overhang flaps are cleaved and gaps are filled by the exonuclease and endonuclease activities of Pfu and Taq DNA polymerases. The uracil-containing template is then removed from the heteroduplex by treatment with a uracil DNA glycosylase, followed by further amplification using PCR. This method is advantageous because it generates chimeras with relatively high crossover frequency. However, it is somewhat limited due to the complexity and the need for generation of single stranded DNA and uracil-containing single stranded template DNA.
Synthetic shuffling
Shuffling of synthetic degenerate oligonucleotides adds flexibility to shuffling methods, since oligonucleotides containing optimal codons and beneficial mutations can be included.
In vivo homologous recombination
Cloning performed in yeast involves PCR dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced to, and cloned in yeast. Using yeast to clone the vector avoids toxicity and counter-selection that would be introduced by ligation and propagation in E. coli.
Mutagenic organized recombination process by homologous in vivo grouping (MORPHING)
This method introduces mutations into specific regions of genes while leaving other parts intact by utilizing the high frequency of homologous recombination in yeast.
Phage-assisted continuous evolution (PACE)
This method utilizes a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed in such a way that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene.
In vitro non-homologous recombination methods
These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology.
Exon shuffling
Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns. Orthologous exon shuffling involves combining exons from orthologous genes from different species. Orthologous domain shuffling involves shuffling of entire protein domains from orthologous genes from different species. Paralogous exon shuffling involves shuffling of exons from different genes from the same species. Paralogous domain shuffling involves shuffling of entire protein domains from paralogous proteins from the same species. Functional homolog shuffling involves shuffling of non-homologous domains which are functionally related. All of these processes begin with amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. These amplification products are then reassembled into full length genes using primer-less PCR. During these PCR cycles the fragments act as templates and primers. This results in chimeric full length genes, which are then subjected to screening.
Incremental truncation for the creation of hybrid enzymes (ITCHY)
Fragments of parental genes are created using controlled digestion by exonuclease III. These fragments are blunted using endonuclease, and are ligated to produce hybrid genes. THIOITCHY is a modified ITCHY technique which utilized nucleotide triphosphate analogs such as α-phosphothioate dNTPs. Incorporation of these nucleotides blocks digestion by exonuclease III. This inhibition of digestion by exonuclease III is called spiking. Spiking can be accomplished by first truncating genes with exonuclease to create fragments with short single stranded overhangs. These fragments then serve as templates for amplification by DNA polymerase in the presence of small amounts of phosphothioate dNTPs. These resulting fragments are then ligated together to form full length genes. Alternatively the intact parental genes can be amplified by PCR in the presence of normal dNTPs and phosphothioate dNTPs. These full length amplification products are then subjected to digestion by an exonuclease. Digestion will continue until the exonuclease encounters an α-pdNTP, resulting in fragments of different length. These fragments are then ligated together to generate chimeric genes.
SCRATCHY
This method generates libraries of hybrid genes exhibiting multiple crossovers by combining DNA shuffling and ITCHY. This method begins with the construction of two independent ITCHY libraries, the first with gene A on the N-terminus and the other with gene B on the N-terminus. These hybrid gene fragments are separated using either restriction enzyme digestion or PCR with terminus primers via agarose gel electrophoresis. These isolated fragments are then mixed together and further digested using DNase I. Digested fragments are then reassembled by primerless PCR with template switching.
Recombined extension on truncated templates (RETT)
This method generates libraries of hybrid genes by template switching of uni-directionally growing polynucleotides in the presence of single stranded DNA fragments as templates for chimeras. This method begins with the preparation of single stranded DNA fragments by reverse transcription from target mRNA. Gene specific primers are then annealed to the single stranded DNA. These genes are then extended during a PCR cycle. This cycle is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single stranded DNA fragments. This process is repeated until full length single stranded DNA is obtained.
Sequence homology-independent protein recombination (SHIPREC)
This method generates recombination between genes with little to no sequence homology. These chimeras are fused via a linker sequence containing several restriction sites. This construct is then digested using DNase I. Fragments are made blunt-ended using S1 nuclease. These blunt-end fragments are put together into a circular sequence by ligation. This circular construct is then linearized using restriction enzymes for which the restriction sites are present in the linker region. This results in a library of chimeric genes in which the contribution of genes to the 5' and 3' end will be reversed as compared to the starting construct.
Sequence independent site directed chimeragenesis (SISDC)
This method results in a library of genes with multiple crossovers from several parental genes. This method does not require sequence identity among the parental genes. This does require one or two conserved amino acids at every crossover position. It begins with alignment of parental sequences and identification of consensus regions which serve as crossover sites. This is followed by the incorporation of specific tags containing restriction sites followed by the removal of the tags by digestion with Bac1, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries.
Degenerate homo-duplex recombination (DHR)
This method begins with alignment of homologous genes, followed by identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides. The bottom strand is also digested into oligonucleotides to serve as scaffolds. These fragments are combined in solution, and top strand oligonucleotides are assembled onto the bottom strand oligonucleotides. Gaps between these fragments are filled with polymerase and ligated.
Random multi-recombinant PCR (RM-PCR)
This method involves the shuffling of plural DNA fragments without homology, in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units.
User friendly DNA recombination (USERec)
This method begins with the amplification of gene fragments which need to be recombined, using uracil dNTPs. This amplification solution also contains primers, PfuTurbo, and Cx Hotstart DNA polymerase. Amplified products are next incubated with USER enzyme. This enzyme catalyzes the removal of uracil residues from DNA, creating single base pair gaps. The USER enzyme treated fragments are mixed and ligated using T4 DNA ligase and subjected to DpnI digestion to remove the template DNA. The resulting single stranded fragments are subjected to amplification using PCR, and are transformed into E. coli.
Golden Gate shuffling (GGS) recombination
This method allows the recombination of at least 9 different fragments in an acceptor vector by using a type IIS restriction enzyme, which cuts outside of its recognition site. It begins with subcloning of the fragments into separate vectors to create BsaI flanking sequences on both sides. These vectors are then cleaved using the type IIS restriction enzyme BsaI, which generates four-nucleotide single strand overhangs. Fragments with complementary overhangs are hybridized and ligated using T4 DNA ligase. Finally, these constructs are transformed into E. coli cells, which are screened for expression levels.
Phosphoro thioate-based DNA recombination method (PRTec)
This method can be used to recombine structural elements or entire protein domains. This method is based on phosphorothioate chemistry which allows the specific cleavage of phosphorothiodiester bonds. The first step in the process begins with amplification of fragments that need to be recombined along with the vector backbone. This amplification is accomplished using primers with phosphorothiolated nucleotides at 5' ends. Amplified PCR products are cleaved in an ethanol-iodine solution at high temperatures. Next these fragments are hybridized at room temperature and transformed into E. coli which repair any nicks.
Integron
This system is based upon a natural site specific recombination system in E. coli. This system is called the integron system, and produces natural gene shuffling. This method was used to construct and optimize a functional tryptophan biosynthetic operon in trp-deficient E. coli by delivering individual recombination cassettes or trpA-E genes along with regulatory elements with the integron system.
Y-Ligation based shuffling (YLBS)
This method generates single stranded DNA strands which encompass a single block sequence either at the 5' or 3' end, complementary sequences in a stem loop region, and a D branch region serving as a primer binding site for PCR. Equivalent amounts of both 5' and 3' half strands are mixed and form a hybrid due to the complementarity in the stem region. Hybrids with a free phosphorylated 5' end in the 3' half strands are then ligated to free 3' ends in the 5' half strands using T4 DNA ligase in the presence of 0.1 mM ATP. Ligated products are then amplified by two types of PCR to generate pre-5' half and pre-3' half PCR products. These PCR products are converted to single strands via avidin-biotin binding to the 5' end of the primers containing stem sequences that were biotin labeled. Next, biotinylated 5' half strands and non-biotinylated 3' half strands are used as 5' and 3' half strands for the next Y-ligation cycle.
Semi-rational design
Semi-rational design uses information about a protein's sequence, structure and function, in tandem with predictive algorithms. Together these are used to identify target amino acid residues which are most likely to influence protein function. Mutations of these key amino acid residues create libraries of mutant proteins that are more likely to have enhanced properties.
Advances in semi-rational enzyme engineering and de novo enzyme design provide researchers with powerful and effective new strategies to manipulate biocatalysts. Integration of sequence and structure based approaches in library design has proven to be a great guide for enzyme redesign. Generally, current computational de novo and redesign methods do not compare to evolved variants in catalytic performance. Although experimental optimization may be produced using directed evolution, further improvements in the accuracy of structure predictions and greater catalytic ability will be achieved with improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics.
Biochemical and biophysical studies, along with fine-tuning of predictive frameworks will be useful to experimentally evaluate the functional significance of individual design features. Better understanding of these functional contributions will then give feedback for the improvement of future designs.
Directed evolution is unlikely to be replaced as the method of choice for protein engineering, although computational protein design has fundamentally changed the way protein engineering can manipulate bio-macromolecules. Smaller, more focused and functionally-rich libraries may be generated by using methods which incorporate predictive frameworks for hypothesis-driven protein engineering. New design strategies and technical advances have begun a departure from traditional protocols, such as directed evolution, which represents the most effective strategy for identifying top-performing candidates in focused libraries. Whole-gene library synthesis is replacing shuffling and mutagenesis protocols for library preparation. Also, highly specific low-throughput screening assays are increasingly applied in place of monumental screening and selection efforts involving millions of candidates. Together, these developments are poised to take protein engineering beyond directed evolution and towards practical, more efficient strategies for tailoring biocatalysts.
Screening and selection techniques
Once a protein has undergone directed evolution, rational design or semi-rational design, the libraries of mutant proteins must be screened to determine which mutants show enhanced properties. Phage display methods are one option for screening proteins. This method involves the fusion of genes encoding the variant polypeptides with phage coat protein genes. Protein variants expressed on phage surfaces are selected by binding with immobilized targets in vitro. Phages with selected protein variants are then amplified in bacteria, followed by the identification of positive clones by enzyme linked immunosorbent assay. These selected phages are then subjected to DNA sequencing.
Cell surface display systems can also be utilized to screen mutant polypeptide libraries. The library mutant genes are incorporated into expression vectors which are then transformed into appropriate host cells. These host cells are subjected to further high throughput screening methods to identify the cells with desired phenotypes.
Cell free display systems have been developed to exploit in vitro protein translation or cell free translation. These methods include mRNA display, ribosome display, covalent and non covalent DNA display, and in vitro compartmentalization.
Enzyme engineering
Enzyme engineering is the application of modifying an enzyme's structure (and, thus, its function) or modifying the catalytic activity of isolated enzymes to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur, or to convert certain compounds into others (biotransformation). These products are useful as chemicals, pharmaceuticals, fuel, food, or agricultural additives.
An enzyme reactor consists of a vessel containing a reaction medium that is used to perform a desired conversion by enzymatic means. The enzymes used in this process are free in the solution. Microorganisms are also one of the important sources of such enzymes.
Examples of engineered proteins
Computing methods have been used to design a protein with a novel fold, such as Top7, and sensors for unnatural molecules. The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured Food and Drug Administration (FDA) approval for treating cryopyrin-associated periodic syndrome.
Another computing method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase. Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or give specificity to native or novel substrates and cofactors. This is done by repeatedly randomly perturbing the structure of the proteins around specified design positions, identifying the lowest energy combination of rotamers, and determining whether the new design has a lower binding energy than prior ones.
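The iterative loop described for IPRO can be summarized schematically. The functions below are placeholders standing in for real structure-perturbation, rotamer-repacking and energy routines; this is not the actual IPRO implementation or its API.

```python
# Schematic of an IPRO-style iterative redesign loop: perturb the structure
# around chosen design positions, "repack" to a low-energy combination, and
# keep the design if its (assumed) binding energy improves. All routines are
# toy placeholders, not IPRO's real structure-modeling code.
import random

rng = random.Random(0)

def perturb(structure, design_positions):
    new = dict(structure)
    for p in design_positions:
        new[p] = new[p] + rng.uniform(-0.5, 0.5)   # nudge the region around a design position
    return new

def repack_lowest_energy(structure):
    # Placeholder for rotamer optimization: a simple deterministic "relaxation".
    return {p: round(v, 1) for p, v in structure.items()}

def binding_energy(structure):
    # Placeholder energy; IPRO evaluates a physics-based binding energy instead.
    return sum((v - 1.0) ** 2 for v in structure.values())

structure = {p: rng.uniform(-2, 2) for p in range(10)}   # toy "structure": one value per residue
design_positions = [2, 5, 7]
best = repack_lowest_energy(structure)
best_e = binding_energy(best)
for iteration in range(200):
    candidate = repack_lowest_energy(perturb(best, design_positions))
    e = binding_energy(candidate)
    if e < best_e:              # keep only designs with a lower binding energy than prior ones
        best, best_e = candidate, e
print(best_e)
```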
Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly. A protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and an incomplete self-assembly behavior by populating two oligomerization states, is the model protein in this study. Through computational analysis and comparison to its homologs, it has been found that this protein has a smaller-than-average dimeric interface on its two-fold symmetry axis due mainly to the existence of an interfacial water pocket centered on two water-bridged asparagine residues. To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method is used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to the wild type EcBfr. This computational study also converges on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids results in proteins that fold into alpha-helical monomers and assemble into cages as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation confirm that, all redesigned proteins, in agreement with the calculations, possess increased stability. One of the three mutations shifts the population in favor of the higher order oligomerization state in solution as shown by both size exclusion chromatography and native gel electrophoresis.
An in silico method, PoreDesigner, was developed to redesign the bacterial channel protein OmpF to reduce its 1 nm pore size to any desired sub-nm dimension. Transport experiments on the narrowest designed pores revealed complete salt rejection when assembled in biomimetic block-polymer matrices.
See also
Display:
Bacterial display
Phage display
mRNA display
Ribosome display
Yeast display
Biomolecular engineering
Enzymology
Expanded genetic code
Fast parallel proteolysis (FASTpp)
Gene synthesis
Genetic engineering
In situ cyclization of proteins
Nucleic acid analogues
Protein structure prediction software
Proteomics
Proteome
SCOPE (protein engineering)
Structural biology
Synthetic biology
References
External links
servers for protein engineering and related topics based on the WHAT IF software
Enzymes Built from Scratch – Researchers engineer never-before-seen catalysts using a new computational technique, Technology Review, March 10, 2008
Biochemistry
Enzymes
Biological engineering
Biotechnology
Chemical biology | Protein engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 9,444 | [
"Biological engineering",
"Biochemistry",
"Biotechnology",
"nan",
"Chemical biology"
] |
216,187 | https://en.wikipedia.org/wiki/Incineration | Incineration is a waste treatment process that involves the combustion of substances contained in waste materials. Industrial plants for waste incineration are commonly referred to as waste-to-energy facilities. Incineration and other high-temperature waste treatment systems are described as "thermal treatment". Incineration of waste materials converts the waste into ash, flue gas and heat. The ash is mostly formed by the inorganic constituents of the waste and may take the form of solid lumps or particulates carried by the flue gas. The flue gases must be cleaned of gaseous and particulate pollutants before they are dispersed into the atmosphere. In some cases, the heat that is generated by incineration can be used to generate electric power.
Incineration with energy recovery is one of several waste-to-energy technologies such as gasification, pyrolysis and anaerobic digestion. While incineration and gasification technologies are similar in principle, the energy produced from incineration is high-temperature heat whereas combustible gas is often the main energy product from gasification. Incineration and gasification may also be implemented without energy and materials recovery.
In several countries, there are still concerns from experts and local communities about the environmental effect of incinerators (see arguments against incineration).
In some countries, incinerators built just a few decades ago often did not include a materials separation to remove hazardous, bulky or recyclable materials before combustion. These facilities tended to risk the health of the plant workers and the local environment due to inadequate levels of gas cleaning and combustion process control. Most of these facilities did not generate electricity.
Incinerators reduce the solid mass of the original waste by 80–85% and the volume (already compressed somewhat in garbage trucks) by 95–96%, depending on composition and degree of recovery of materials such as metals from the ash for recycling. This means that while incineration does not completely replace landfilling, it significantly reduces the necessary volume for disposal. Garbage trucks often reduce the volume of waste in a built-in compressor before delivery to the incinerator. Alternatively, at landfills, the volume of the uncompressed garbage can be reduced by approximately 70% by using a stationary steel compressor, albeit with a significant energy cost. In many countries, simpler waste compaction is a common practice for compaction at landfills.
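For a rough sense of scale, the quoted reductions can be applied to an assumed one-tonne, one-cubic-metre batch of compacted waste (the starting quantities are illustrative assumptions, not figures from the text):

```python
# Quick arithmetic for the reductions quoted above, applied to an assumed
# 1,000 kg / 1 m^3 batch of compacted municipal waste (illustrative values).
mass_in_kg = 1000.0
volume_in_m3 = 1.0

mass_after_incineration = mass_in_kg * (1 - 0.85)       # 80-85% mass reduction -> ~150-200 kg of ash
volume_after_incineration = volume_in_m3 * (1 - 0.96)   # 95-96% volume reduction -> ~0.04-0.05 m^3
volume_after_compactor = volume_in_m3 * (1 - 0.70)      # ~70% reduction by landfill compaction alone

print(mass_after_incineration, volume_after_incineration, volume_after_compactor)
```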
Incineration has particularly strong benefits for the treatment of certain waste types in niche areas such as clinical wastes and certain hazardous wastes where pathogens and toxins can be destroyed by high temperatures. Examples include chemical multi-product plants with diverse toxic or very toxic wastewater streams, which cannot be routed to a conventional wastewater treatment plant.
Waste combustion is particularly popular in countries such as Japan, Singapore and the Netherlands, where land is a scarce resource. Denmark and Sweden have been leaders by using the energy generated from incineration for more than a century, in localised combined heat and power facilities supporting district heating schemes. In 2005, waste incineration produced 4.8% of the electricity consumption and 13.7% of the total domestic heat consumption in Denmark. A number of other European countries rely heavily on incineration for handling municipal waste, in particular Luxembourg, the Netherlands, Germany, and France.
History
The first UK incinerators for waste disposal were built in Nottingham by Manlove, Alliott & Co. Ltd. in 1874 to a design patented by Alfred Fryer. They were originally known as destructors.
The first US incinerator was built in 1885 on Governors Island in New York, NY. The first facility in Austria-Hungary was built in 1905 in Brunn.
Technology
An incinerator is a furnace for burning waste. Modern incinerators include pollution mitigation equipment such as flue gas cleaning. There are various types of incinerator plant design: moving grate, fixed grate, rotary-kiln, and fluidised bed.
Burn pile
The burn pile or the burn pit is one of the simplest and earliest forms of waste disposal, essentially consisting of a mound of combustible materials piled on the open ground and set on fire, leading to pollution.
Burn piles can and have spread uncontrolled fires, for example, if the wind blows burning material off the pile into surrounding combustible grasses or onto buildings. As interior structures of the pile are consumed, the pile can shift and collapse, spreading the burn area. Even in a situation of no wind, small lightweight ignited embers can lift off the pile via convection, and waft through the air into grasses or onto buildings, igniting them. Burn piles often do not result in full combustion of waste and therefore produce particulate pollution.
Burn barrel
The burn barrel is a somewhat more controlled form of private waste incineration, containing the burning material inside a metal barrel, with a metal grating over the exhaust. The barrel prevents the spread of burning material in windy conditions, and as the combustibles are reduced they can only settle down into the barrel. The exhaust grating helps to prevent the spread of burning embers. Typically steel drums are used as burn barrels, with air vent holes cut or drilled around the base for air intake. Over time, the very high heat of incineration causes the metal to oxidize and rust, and eventually the barrel itself is consumed by the heat and must be replaced.
The private burning of dry cellulosic/paper products is generally clean-burning, producing no visible smoke, but plastics in the household waste can cause private burning to create a public nuisance, generating acrid odors and fumes that make eyes burn and water. A two-layered design enables secondary combustion, reducing smoke. Most urban communities ban burn barrels and certain rural communities may have prohibitions on open burning, especially those home to many residents not familiar with this common rural practice.
In the United States, private rural household or farm waste incineration of small quantities is typically permitted so long as it is not a nuisance to others, does not pose a risk of fire such as in dry conditions, and the fire does not produce dense, noxious smoke. A handful of states, such as New York, Minnesota, and Wisconsin, have laws or regulations either banning or strictly regulating open burning due to health and nuisance effects. People intending to burn waste may be required to contact a state agency in advance to check current fire risk and conditions, and to alert officials of the controlled fire that will occur.
Moving grate
The typical incineration plant for municipal solid waste is a moving grate incinerator. The moving grate enables the movement of waste through the combustion chamber to be optimized to allow a more efficient and complete combustion. A single moving grate boiler can handle up to of waste per hour, and can operate 8,000 hours per year with only one scheduled stop for inspection and maintenance of about one month's duration. Moving grate incinerators are sometimes referred to as municipal solid waste incinerators (MSWIs).
The waste is introduced by a waste crane through the "throat" at one end of the grate, from where it moves down over the descending grate to the ash pit in the other end. Here the ash is removed through a water lock.
Part of the combustion air (primary combustion air) is supplied through the grate from below. This air flow also has the purpose of cooling the grate itself. Cooling is important for the mechanical strength of the grate, and many moving grates are also water-cooled internally.
Secondary combustion air is supplied into the boiler at high speed through nozzles over the grate. It facilitates complete combustion of the flue gases by introducing turbulence for better mixing and by ensuring a surplus of oxygen. In multiple/stepped hearth incinerators, the secondary combustion air is introduced in a separate chamber downstream the primary combustion chamber.
According to the European Waste Incineration Directive, incineration plants must be designed to ensure that the flue gases reach a temperature of at least for 2 seconds in order to ensure proper breakdown of toxic organic substances. In order to comply with this at all times, it is required to install backup auxiliary burners (often fueled by oil), which are fired into the boiler in case the heating value of the waste becomes too low to reach this temperature alone.
The flue gases are then cooled in the superheaters, where the heat is transferred to steam, heating the steam to typically at a pressure of for the electricity generation in the turbine. At this point, the flue gas has a temperature of around , and is passed to the flue gas cleaning system.
In Scandinavia, scheduled maintenance is always performed during summer, where the demand for district heating is low. Often, incineration plants consist of several separate 'boiler lines' (boilers and flue gas treatment plants), so that waste can continue to be received at one boiler line while the others are undergoing maintenance, repair, or upgrading.
Fixed grate
The older and simpler kind of incinerator was a brick-lined cell with a fixed metal grate over a lower ash pit, with one opening in the top or side for loading and another opening in the side for removing incombustible solids called clinkers. Many small incinerators formerly found in apartment houses have now been replaced by waste compactors.
Rotary-kiln
The rotary-kiln incinerator is used by municipalities and by large industrial plants. This design of incinerator has two chambers: a primary chamber and secondary chamber. The primary chamber in a rotary kiln incinerator consists of an inclined refractory lined cylindrical tube. The inner refractory lining serves as sacrificial layer to protect the kiln structure. This refractory layer needs to be replaced from time to time. Movement of the cylinder on its axis facilitates movement of waste. In the primary chamber, there is conversion of solid fraction to gases, through volatilization, destructive distillation and partial combustion reactions. The secondary chamber is necessary to complete gas phase combustion reactions.
The clinkers spill out at the end of the cylinder. A tall flue-gas stack, fan, or steam jet supplies the needed draft. Ash drops through the grate, but many particles are carried along with the hot gases. The particles and any combustible gases may be combusted in an "afterburner".
Fluidized bed
A strong airflow is forced through a sandbed. The air seeps through the sand until a point is reached where the sand particles separate to let the air through and mixing and churning occurs, thus a fluidized bed is created and fuel and waste can now be introduced. The sand with the pre-treated waste and/or fuel is kept suspended on pumped air currents and takes on a fluid-like character. The bed is thereby violently mixed and agitated keeping small inert particles and air in a fluid-like state. This allows all of the mass of waste, fuel and sand to be fully circulated through the furnace.
Specialized incinerator
Furniture factory sawdust incinerators require particular attention because they have to handle resin powder and many flammable substances. Controlled combustion and burn-back prevention systems are essential, since suspended dust can ignite in a manner similar to liquefied petroleum gas.
Use of heat
The heat produced by an incinerator can be used to generate steam which may then be used to drive a turbine in order to produce electricity. The typical amount of net energy that can be produced per tonne of municipal waste is about 2/3 MWh of electricity and 2 MWh of district heating. Thus, incinerating about per day of waste will produce about 400 MWh of electrical energy per day (17 MW of electrical power continuously for 24 hours) and 1200 MWh of district heating energy each day.
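The arithmetic behind these figures can be sketched in a few lines of Python; the per-tonne yields come from the text above, while the daily throughput of 600 tonnes/day is an assumed value chosen so that the stated daily outputs are reproduced.

```python
# Back-of-the-envelope check of the energy figures quoted above.
# The per-tonne yields come from the text; the 600 t/day throughput
# is an assumption chosen to match the stated daily outputs.
ELECTRICITY_PER_TONNE_MWH = 2 / 3   # net electricity per tonne of MSW
HEAT_PER_TONNE_MWH = 2.0            # district heating per tonne of MSW

throughput_tonnes_per_day = 600     # hypothetical plant throughput

electricity_mwh_per_day = throughput_tonnes_per_day * ELECTRICITY_PER_TONNE_MWH
heat_mwh_per_day = throughput_tonnes_per_day * HEAT_PER_TONNE_MWH
average_power_mw = electricity_mwh_per_day / 24  # continuous electrical output

print(f"Electricity: {electricity_mwh_per_day:.0f} MWh/day (~{average_power_mw:.0f} MW continuous)")
print(f"District heat: {heat_mwh_per_day:.0f} MWh/day")
# Output: Electricity: 400 MWh/day (~17 MW continuous); District heat: 1200 MWh/day
```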
Pollution
Incineration has a number of outputs such as the ash and the emission to the atmosphere of flue gas. Before the flue gas cleaning system, if installed, the flue gases may contain particulate matter, heavy metals, dioxins, furans, sulfur dioxide, and hydrochloric acid. If plants have inadequate flue gas cleaning, these outputs may add a significant pollution component to stack emissions.
In a study from 1997, Delaware Solid Waste Authority found that, for same amount of produced energy, incineration plants emitted fewer particles, hydrocarbons and less SO2, HCl, CO and NOx than coal-fired power plants, but more than natural gas–fired power plants. According to Germany's Ministry of the Environment, waste incinerators reduce the amount of some atmospheric pollutants by substituting power produced by coal-fired plants with power from waste-fired plants.
Gaseous emissions
Dioxin and furans
The most publicized concerns about the incineration of municipal solid wastes (MSW) involve the fear that it produces significant amounts of dioxin and furan emissions. Dioxins and furans are considered by many to be serious health hazards. The EPA announced in 2012 that the safe limit for human oral consumption is 0.7 picograms Toxic Equivalence (TEQ) per kilogram bodyweight per day, which works out to 17 billionths of a gram for a 150 lb person per year.
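As a quick check of the figure above, the yearly intake implied by the EPA limit can be computed directly; the 150 lb bodyweight is the one used in the text.

```python
# Reproducing the "17 billionths of a gram per year" figure from the
# EPA limit of 0.7 pg TEQ per kg bodyweight per day for a 150 lb person.
LIMIT_PG_PER_KG_PER_DAY = 0.7
bodyweight_kg = 150 * 0.4536                 # 150 lb in kilograms
yearly_intake_pg = LIMIT_PG_PER_KG_PER_DAY * bodyweight_kg * 365
yearly_intake_g = yearly_intake_pg * 1e-12   # 1 picogram = 1e-12 gram

print(f"{yearly_intake_g * 1e9:.0f} billionths of a gram per year")  # -> 17
```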
In 2005, the Ministry of the Environment of Germany, where there were 66 incinerators at that time, estimated that "...whereas in 1990 one third of all dioxin emissions in Germany came from incineration plants, for the year 2000 the figure was less than 1%. Chimneys and tiled stoves in private households alone discharge approximately 20 times more dioxin into the environment than incineration plants."
According to the United States Environmental Protection Agency, the combustion percentages of the total dioxin and furan inventory from all known and estimated sources in the U.S. (not only incineration) for each type of incineration are as follows: 35.1% backyard barrels; 26.6% medical waste; 6.3% municipal wastewater treatment sludge; 5.9% municipal waste combustion; 2.9% industrial wood combustion. Thus, the controlled combustion of waste accounted for 41.7% of the total dioxin inventory.
In 1987, before the governmental regulations required the use of emission controls, there was a total of Toxic Equivalence (TEQ) of dioxin emissions from US municipal waste combustors. Today, the total emissions from the plants are TEQ annually, a reduction of 99%.
Backyard barrel burning of household and garden wastes, still allowed in some rural areas, generates of dioxins annually.
Studies conducted by the US-EPA demonstrated that one family using a burn barrel produced more emissions than an incineration plant disposing of of waste per day by 1997 and five times that by 2007 due to increased chemicals in household trash and decreased emission by municipal incinerators using better technology.
Most of the improvement in U.S. dioxin emissions has been for large-scale municipal waste incinerators. As of 2000, although small-scale incinerators (those with a daily capacity of less than 250 tons) processed only 9% of the total waste combusted, these produced 83% of the dioxins and furans emitted by municipal waste combustion.
Dioxin cracking methods and limitations
The breakdown of dioxin requires exposure of the molecular ring to a sufficiently high temperature so as to trigger thermal breakdown of the strong molecular bonds holding it together. Small pieces of fly ash may be somewhat thick, and too brief an exposure to high temperature may only degrade dioxin on the surface of the ash. For a large volume air chamber, too brief an exposure may also result in only some of the exhaust gases reaching the full breakdown temperature. For this reason there is also a time element to the temperature exposure to ensure heating completely through the thickness of the fly ash and the volume of waste gases.
There are trade-offs between increasing either the temperature or exposure time. Generally where the molecular breakdown temperature is higher, the exposure time for heating can be shorter, but excessively high temperatures can also cause wear and damage to other parts of the incineration equipment. Likewise the breakdown temperature can be lowered to some degree but then the exhaust gases would require a greater lingering period of perhaps several minutes, which would require large/long treatment chambers that take up a great deal of treatment plant space.
A side effect of breaking the strong molecular bonds of dioxin is the potential for breaking the bonds of nitrogen gas (N2) and oxygen gas (O2) in the supply air. As the exhaust flow cools, these highly reactive detached atoms spontaneously reform bonds into reactive oxides such as NOx in the flue gas, which can result in smog formation and acid rain if they were released directly into the local environment. These reactive oxides must be further neutralized with selective catalytic reduction (SCR) or selective non-catalytic reduction (see below).
Dioxin cracking in practice
The temperatures needed to break down dioxin are typically not reached when burning plastics outdoors in a burn barrel or garbage pit, causing high dioxin emissions as mentioned above. While plastic does usually burn in an open-air fire, the dioxins remain after combustion and either float off into the atmosphere, or may remain in the ash where it can be leached down into groundwater when rain falls on the ash pile. Fortunately, dioxin and furan compounds bond very strongly to solid surfaces and are not dissolved by water, so leaching processes are limited to the first few millimeters below the ash pile. The gas-phase dioxins can be substantially destroyed using catalysts, some of which can be present as part of the fabric filter bag structure.
Modern municipal incinerator designs include a high-temperature zone, where the flue gas is sustained at a temperature above for at least 2 seconds before it is cooled down. They are equipped with auxiliary heaters to ensure this at all times. These are often fueled by oil or natural gas, and are normally only active for a very small fraction of the time. Further, most modern incinerators utilize fabric filters (often with Teflon membranes to enhance collection of sub-micron particles) which can capture dioxins present in or on solid particles.
For very small municipal incinerators, the required temperature for thermal breakdown of dioxin may be reached using a high-temperature electrical heating element, plus a selective catalytic reduction stage.
Although dioxins and furans may be destroyed by combustion, their reformation by a process known as 'de novo synthesis' as the emission gases cool is a probable source of the dioxins measured in emission stack tests from plants that have high combustion temperatures held at long residence times.
CO2
As for other complete combustion processes, nearly all of the carbon content in the waste is emitted as CO2 to the atmosphere. MSW contains approximately the same mass fraction of carbon as CO2 itself (27%), so incineration of 1 ton of MSW produces approximately 1 ton of CO2.
If the waste was landfilled without prior stabilization (typically via anaerobic digestion), 1 ton of MSW would produce approximately methane via the anaerobic decomposition of the biodegradable part of the waste. Since the global warming potential of methane is 34 and the weight of 62 cubic meters of methane at 25 degrees Celsius is 40.7 kg, this is equivalent to 1.38 tons of CO2, which is more than the 1 ton of CO2 which would have been produced by incineration. In some countries, large amounts of landfill gas are collected. Still, the global warming potential of the landfill gas emitted to the atmosphere is significant. In the US it was estimated that the global warming potential of the emitted landfill gas in 1999 was approximately 32% higher than the amount of CO2 that would have been emitted by incineration. Since this study, the global warming potential estimate for methane has been increased from 21 to 35, which alone would increase this estimate to nearly triple the GWP effect of incinerating the same waste.
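The comparison in this paragraph can be reproduced with a short calculation; the 62 cubic meters of methane per tonne of landfilled MSW is the volume implied by the text, and the other figures are taken directly from it.

```python
# Climate impact of landfilling vs. incinerating one tonne of MSW,
# using the figures quoted in the paragraph above.
methane_volume_m3 = 62     # methane from 1 t of landfilled MSW (per text)
methane_mass_kg = 40.7     # mass of 62 m^3 of methane at 25 degrees C (per text)
gwp_methane = 34           # global warming potential used in the text

landfill_co2e_t = methane_mass_kg * gwp_methane / 1000   # tonnes CO2-equivalent
incineration_co2_t = 1.0                                 # ~1 t CO2 per t MSW combusted

print(f"Landfilling:  {landfill_co2e_t:.2f} t CO2e")   # -> 1.38
print(f"Incineration: {incineration_co2_t:.2f} t CO2")
```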
In addition, nearly all biodegradable waste has biological origin. This material has been formed by plants using atmospheric CO2 typically within the last growing season. If these plants are regrown the CO2 emitted from their combustion will be taken out from the atmosphere once more.
Such considerations are the main reason why several countries administrate incineration of biodegradable waste as renewable energy. The rest – mainly plastics and other oil and gas derived products – is generally treated as non-renewables.
Different results for the CO2 footprint of incineration can be reached with different assumptions. Local conditions (such as limited local district heating demand, no fossil fuel generated electricity to replace or high levels of aluminium in the waste stream) can decrease the CO2 benefits of incineration.
The methodology and other assumptions may also influence the results significantly. For example, the methane emissions from landfills occurring at a later date may be neglected or given less weight, or biodegradable waste may not be considered CO2 neutral. A study by Eunomia Research and Consulting in 2008 on potential waste treatment technologies in London demonstrated that by applying several of these (according to the authors) unusual assumptions the average existing incineration plants performed poorly for CO2 balance compared to the theoretical potential of other emerging waste treatment technologies.
Other emissions
Other gaseous emissions in the flue gas from incinerator furnaces include nitrogen oxides, sulfur dioxide, hydrochloric acid, heavy metals, and fine particles. Of the heavy metals, mercury is a major concern due to its toxicity and high volatility, as essentially all mercury in the municipal waste stream may exit in emissions if not removed by emission controls.
The steam content in the flue may produce visible fume from the stack, which can be perceived as a visual pollution. It may be avoided by decreasing the steam content by flue-gas condensation and reheating, or by increasing the flue gas exit temperature well above its dew point. Flue-gas condensation allows the latent heat of vaporization of the water to be recovered, subsequently increasing the thermal efficiency of the plant.
Flue-gas cleaning
The quantity of pollutants in the flue gas from incineration plants may or may not be reduced by several processes, depending on the plant.
Particulate is collected by particle filtration, most often electrostatic precipitators (ESP) and/or baghouse filters. The latter are generally very efficient for collecting fine particles. In an investigation by the Ministry of the Environment of Denmark in 2006, the average particulate emissions per energy content of incinerated waste from 16 Danish incinerators were below 2.02 g/GJ (grams per energy content of the incinerated waste). Detailed measurements of fine particles with sizes below 2.5 micrometres (PM2.5) were performed on three of the incinerators: One incinerator equipped with an ESP for particle filtration emitted 5.3 g/GJ fine particles, while two incinerators equipped with baghouse filters emitted 0.002 and 0.013 g/GJ PM2.5. For ultra fine particles (PM1.0), the numbers were 4.889 g/GJ PM1.0 from the ESP plant, while emissions of 0.000 and 0.008 g/GJ PM1.0 were measured from the plants equipped with baghouse filters.
Acid gas scrubbers are used to remove hydrochloric acid, nitric acid, hydrofluoric acid, mercury, lead and other heavy metals. The efficiency of removal will depend on the specific equipment, the chemical composition of the waste, the design of the plant, the chemistry of reagents, and the ability of engineers to optimize these conditions, which may conflict for different pollutants. For example, mercury removal by wet scrubbers is considered coincidental and may be less than 50%. Basic scrubbers remove sulfur dioxide, forming gypsum by reaction with lime.
Waste water from scrubbers must subsequently pass through a waste water treatment plant.
Sulfur dioxide may also be removed by dry desulfurisation by injecting limestone slurry into the flue gas before the particle filtration.
NOx is either reduced by catalytic reduction with ammonia in a catalytic converter (selective catalytic reduction, SCR) or by a high-temperature reaction with ammonia in the furnace (selective non-catalytic reduction, SNCR). Urea may be substituted for ammonia as the reducing reagent but must be supplied earlier in the process so that it can hydrolyze into ammonia. Substitution of urea can reduce costs and potential hazards associated with storage of anhydrous ammonia.
Heavy metals are often adsorbed on injected active carbon powder, which is collected by particle filtration.
Solid outputs
Incineration produces fly ash and bottom ash just as is the case when coal is combusted. The total amount of ash produced by municipal solid waste incineration ranges from 4 to 10% by volume and 15–20% by weight of the original quantity of waste, and the fly ash amounts to about 10–20% of the total ash. The fly ash, by far, constitutes more of a potential health hazard than does the bottom ash because the fly ash often contains high concentrations of heavy metals such as lead, cadmium, copper and zinc as well as small amounts of dioxins and furans. The bottom ash seldom contains significant levels of heavy metals. Although some historic samples tested by the incinerator operators' group would meet the criteria for being classified as ecotoxic, the Environment Agency (EA) says it has agreed to regard incinerator bottom ash as "non-hazardous" until the testing programme is complete.
Other pollution issues
Odor pollution can be a problem with old-style incinerators, but odors and dust are extremely well controlled in newer incineration plants. They receive and store the waste in an enclosed area with a negative pressure with the airflow being routed through the boiler which prevents unpleasant odors from escaping into the atmosphere. A study found that the strongest odor at an incineration facility in Eastern China occurred at its waste tipping port.
An issue that affects community relationships is the increased road traffic of waste collection vehicles to transport municipal waste to the incinerator. Due to this reason, most incinerators are located in industrial areas. This problem can be avoided to an extent through the transport of waste by rail from transfer stations.
Health effects
Scientific researchers have investigated the human health effects of pollutants produced by waste incineration. Many studies have examined health impacts from exposure to pollutants utilizing U.S. EPA modeling guidelines. Exposure through inhalation, ingestion, soil, and dermal contact are incorporated in these models. Research studies have also assessed exposure to pollutants through blood or urine samples of residents and workers who live near waste incinerators. Findings from a systematic review of previous research identified a number of symptoms and diseases related to incinerator pollution exposure. These include neoplasia, respiratory issues, congenital anomalies, and infant deaths or miscarriages. Populations near old, inadequately maintained incinerators experience a higher degree of health issues. Some studies also identified possible cancer risk. However, difficulties in separating incinerator pollution exposure from combined industry, motor vehicle, and agriculture pollution limits these conclusions on health risks.
Many communities have advocated for the improvement or removal of waste incinerator technology. Specific pollutant exposures, such as high levels of nitrogen dioxide, have been cited in community-led complaints relating to increased emergency room visits for respiratory issues. Potential health effects of waste incineration technology have been publicized, notably when located in communities already facing disproportionate health burdens. For example Wheelabrator Baltimore in Maryland has been investigated due to increased rates of asthma in its neighboring community, which is predominantly occupied by low-income, people of color. Community-led efforts have suggested a need for future research to address a lack of real-time pollution data. These sources have also cited a need for academic, government, and non-profit partnerships to better determine the health impacts of incineration.
Debate
Use of incinerators for waste management is controversial. The debate over incinerators typically involves business interests (representing both waste generators and incinerator firms), government regulators, environmental activists and local citizens who must weigh the economic appeal of local industrial activity with their concerns over health and environmental risk.
People and organizations professionally involved in this issue include the U.S. Environmental Protection Agency and a great many local and national air quality regulatory agencies worldwide.
Arguments for incineration
The concerns over the health effects of dioxin and furan emissions have been significantly lessened by advances in emission control designs and very stringent new governmental regulations that have resulted in large reductions in the amount of dioxins and furans emissions.
The U.K. Health Protection Agency concluded in 2009 that "Modern, well managed incinerators make only a small contribution to local concentrations of air pollutants. It is possible that such small additions could have an impact on health but such effects, if they exist, are likely to be very small and not detectable."
Incineration plants can generate electricity and heat that can substitute power plants powered by other fuels at the regional electric and district heating grid, and steam supply for industrial customers. Incinerators and other waste-to-energy plants generate at least partially biomass-based renewable energy that offsets greenhouse gas pollution from coal-, oil- and gas-fired power plants. The E.U. considers energy generated from biogenic waste (waste with biological origin) by incinerators as non-fossil renewable energy under its emissions caps. These greenhouse gas reductions are in addition to those generated by the avoidance of landfill methane.
The bottom ash residue remaining after combustion has been shown to be a non-hazardous solid waste that can be safely put into landfills or recycled as construction aggregate. Samples are tested for ecotoxic metals.
In densely populated areas, finding space for additional landfills is becoming increasingly difficult.
Fine particles can be efficiently removed from the flue gases with baghouse filters. Even though approximately 40% of the incinerated waste in Denmark was incinerated at plants with no baghouse filters, estimates based on measurements by the Danish Environmental Research Institute showed that incinerators were only responsible for approximately 0.3% of the total domestic emissions of particulate smaller than 2.5 micrometres (PM2.5) to the atmosphere in 2006.
Incineration of municipal solid waste avoids the release of methane. Every ton of MSW incinerated, prevents about one ton of carbon dioxide equivalents from being released to the atmosphere.
Most municipalities that operate incineration facilities have higher recycling rates than neighboring cities and countries that do not send their waste to incinerators. In a country overview from 2016 by the European Environment Agency the top recycling performing countries are also the ones having the highest penetration of incineration, even though all material recovery from waste sent to incineration (e.g. metals and construction aggregate) is by definition not counted as recycling in European targets. The recovery of glass, stone and ceramic materials reused in construction, as well as ferrous and in some cases non-ferrous metals recovered from combustion residue thus adds further to the actual recycled amounts. Metals recovered from ash would typically be difficult or impossible to recycle through conventional means, as the removal of attached combustible material through incineration provides an alternative to labor- or energy-intensive mechanical separation methods.
Volume of combusted waste is reduced by approximately 90%, increasing the life of landfills. Ash from modern incinerators is vitrified at temperatures of to , reducing the leachability and toxicity of residue. As a result, special landfills are generally no longer required for incinerator ash from municipal waste streams, and existing landfills can see their life dramatically increased by combusting waste, reducing the need for municipalities to site and construct new landfills.
Arguments against incineration
The Scottish Environment Protection Agency's (SEPA) comprehensive health effects research, published in October 2009, reached no firm conclusion on health effects. The authors stress that even though no conclusive evidence of non-occupational health effects from incinerators was found in the existing literature, "small but important effects might be virtually impossible to detect". The report highlights epidemiological deficiencies in previous UK health studies and suggests areas for future studies. The U.K. Health Protection Agency produced a less comprehensive summary in September 2009. Many toxicologists criticise and dispute this report as not being epidemiologically comprehensive, thin on peer review, and giving little weight to the effects of fine particles on health.
Combustion concentrates ecotoxic heavy metals from the waste into the ash, mostly the fly ash component. This ash must be stored in specialized landfills. The less toxic bottom ash (incinerator bottom ash, IBA) can be encased in concrete as a building material, but there is a risk of hydrogen gas explosion due to its aluminum content. The UK Highways Agency put the use of IBA in foam concrete on hold while it investigated a series of explosions in 2009. Recovery of useful metals from ash is a newer but even less mature approach.
The health effects of dioxin and furan emissions from old incinerators; especially during start up and shut down, or where filter bypass is required continue to be a problem.
Incinerators emit varying levels of heavy metals such as vanadium, manganese, chromium, nickel, arsenic, mercury, lead and cadmium, which can be toxic at very minute levels.
Alternative technologies are available or in development such as mechanical biological treatment, anaerobic digestion (MBT/AD), autoclaving or mechanical heat treatment (MHT) using steam or plasma arc gasification (PGP), which is incineration using electrically produced extreme high temperatures, or combinations of these treatments.
Erection of incinerators compete with the development and introduction of other emerging technologies. A UK government WRAP report, August 2008 found that in the UK median incinerator costs per ton were generally higher than those for MBT treatments by £18 per metric ton; and £27 per metric ton for most modern (post 2000) incinerators.
Building and operating waste processing plants such as incinerators requires long contract periods to recover initial investment costs, causing a long-term lock-in. Incinerator lifetimes normally range from 25 to 30 years. This was highlighted by Peter Jones, OBE, the Mayor of London's waste representative in April 2009.
Incinerators produce fine particles in the furnace. Even with modern particle filtering of the flue gases, a small part of these is emitted to the atmosphere. PM2.5 is not separately regulated in the European Waste Incineration Directive, even though they are repeatedly correlated spatially to infant mortality in the UK (M. Ryan's ONS data based maps around the EfW/CHP waste incinerators at Edmonton, Coventry, Chineham, Kirklees and Sheffield). Under WID there is no requirement to monitor stack top or downwind incinerator PM2.5 levels. Several European doctors associations (including cross discipline experts such as physicians, environmental chemists and toxicologists) in June 2008 representing over 33,000 doctors wrote a keynote statement directly to the European Parliament citing widespread concerns on incinerator particle emissions and the absence of specific fine and ultrafine particle size monitoring or in depth industry/government epidemiological studies of these minute and invisible incinerator particle size emissions.
Local communities are often opposed to the idea of locating waste processing plants such as incinerators in their vicinity (the Not in My Back Yard phenomenon). Studies in Andover, Massachusetts correlated 10% property devaluations with close incinerator proximity.
Prevention, waste minimisation, reuse and recycling of waste should all be preferred to incineration according to the waste hierarchy. Supporters of zero waste consider incinerators and other waste treatment technologies as barriers to recycling and separation beyond particular levels, and that waste resources are sacrificed for energy production.
A 2008 Eunomia report found that under some circumstances and assumptions, incineration causes less CO2 reduction than other emerging EfW and CHP technology combinations for treating residual mixed waste. The authors found that CHP incinerator technology without waste recycling ranked 19 out of 24 combinations (where all alternatives to incineration were combined with advanced waste recycling plants), being 228% less efficient than the first-ranked advanced MBT maturation technology and 211% less efficient than the second-ranked plasma gasification/autoclaving combination.
Some incinerators are visually undesirable. In many countries they require a visually intrusive chimney stack.
If reusable waste fractions were handled in waste processing plants such as incinerators in developing nations, it would cut out viable work for local economies. It is estimated that about 1 million people make a livelihood from collecting waste.
The reduced levels of emissions from municipal waste incinerators and waste to energy plants from historical peaks are largely the product of the proficient use of emission control technology. Emission controls add to the initial and operational expenses. It should not be assumed that all new plants will employ the best available control technology if not required by law.
Waste that has been deposited on a landfill can be mined even decades and centuries later, and recycled with future technologies – which is not the case with incineration.
Trends in incinerator use
The history of municipal solid waste (MSW) incineration is linked intimately to the history of landfills and other waste treatment technology. The merits of incineration are inevitably judged in relation to the alternatives available. Since the 1970s, recycling and other prevention measures have changed the context for such judgements. Since the 1990s alternative waste treatment technologies have been maturing and becoming viable.
Incineration is a key process in the treatment of hazardous wastes and clinical wastes. It is often imperative that medical waste be subjected to the high temperatures of incineration to destroy pathogens and toxic contamination it contains.
In North America
The first incinerator in the U.S. was built in 1885 on Governors Island in New York.
In 1949, Robert C. Ross founded one of the first hazardous waste management companies in the U.S. He began Robert Ross Industrial Disposal because he saw an opportunity to meet the hazardous waste management needs of companies in northern Ohio. In 1958, the company built one of the first hazardous waste incinerators in the U.S.
The first full-scale, municipally operated incineration facility in the U.S. was the Arnold O. Chantland Resource Recovery Plant built in 1975 in Ames, Iowa. The plant is still in operation and produces refuse-derived fuel that is sent to local power plants for fuel. The first commercially successful incineration plant in the U.S. was built in Saugus, Massachusetts, in October 1975 by Wheelabrator Technologies, and is still in operation today.
There are several environmental or waste management corporations that transport waste, ultimately, to an incinerator or cement kiln treatment center. Currently (2009), there are three main businesses that incinerate waste: Clean Harbors, WTI-Heritage, and Ross Incineration Services. Clean Harbors has acquired many of the smaller, independently run facilities, accumulating 5–7 incinerators in the process across the U.S. WTI-Heritage has one incinerator, located in the southeastern corner of Ohio across the Ohio River from West Virginia.
Several old generation incinerators have been closed; of the 186 MSW incinerators in 1990, only 89 remained by 2007, and of the 6200 medical waste incinerators in 1988, only 115 remained in 2003.
No new incinerators were built between 1996 and 2007. The main reasons for lack of activity have been:
Economics. With the increase in the number of large inexpensive regional landfills and, up until recently, the relatively low price of electricity, incinerators were not able to compete for the 'fuel', i.e., waste in the U.S.
Tax policies. Tax credits for plants producing electricity from waste were rescinded in the U.S. between 1990 and 2004.
There has been renewed interest in incineration and other waste-to-energy technologies in the U.S. and Canada. In the U.S., incineration was granted qualification for renewable energy production tax credits in 2004. Projects to add capacity to existing plants are underway, and municipalities are once again evaluating the option of building incineration plants rather than continue landfilling municipal wastes. However, many of these projects have faced continued political opposition in spite of renewed arguments for the greenhouse gas benefits of incineration and improved air pollution control and ash recycling.
In Europe
In Europe, as a result of a ban on landfilling untreated waste, many incinerators have been built in the last decade, with more under construction. Recently, a number of municipal governments have begun the process of contracting for the construction and operation of incinerators. In Europe, some of the electricity generated from waste is deemed to be from a 'Renewable Energy Source' (RES) and is thus eligible for tax credits if privately operated. Also, some incinerators in Europe are equipped with waste recovery, allowing the reuse of ferrous and non-ferrous materials found in the burned waste. A prominent example is the AEB Waste Fired Power Plant, Amsterdam.
In Sweden, about 50% of the generated waste is burned in waste-to-energy facilities, producing electricity and supplying local cities' district heating systems. The importance of waste in Sweden's electricity generation scheme is reflected in the 2,700,000 tons of waste imported per year (as of 2014) to supply waste-to-energy facilities.
Due to increasing targets for municipal solid waste recycling in the EU, at least 55% by 2025 up to 65% by 2035, several traditional incineration countries are at risk of not meeting them, since at most 35% will remain available for thermal treatment and disposal. Denmark has since decided to reduce its incineration capacity by 30% by 2030.
Incineration of non-hazardous waste was not included as a form of green investment in the EU taxonomy for sustainable activities due to concerns about harming the circularity agenda, effectively stopping future EU funding to the municipal solid waste incineration sector.
In the United Kingdom
The technology employed in the UK waste management industry has been greatly lagging behind that of Europe due to the wide availability of landfills. The Landfill Directive set down by the European Union led to the Government of the United Kingdom imposing waste legislation including the landfill tax and Landfill Allowance Trading Scheme. This legislation is designed to reduce the release of greenhouse gases produced by landfills through the use of alternative methods of waste treatment. It is the UK Government's position that incineration will play an increasingly large role in the treatment of municipal waste and supply of energy in the UK.
In 2008, plans for potential incinerator locations existed for approximately 100 sites. These have been interactively mapped by UK NGOs.
Under a new plan in June 2012, a DEFRA-backed grant scheme (the Farming and Forestry Improvement Scheme) was set up to encourage the use of low-capacity incinerators on agricultural sites to improve their biosecurity.
A permit has recently been granted for what would be the UK's largest waste incinerator in the centre of the Cambridge – Milton Keynes – Oxford corridor, in Bedfordshire. Following the construction of a large incinerator at Greatmoor in Buckinghamshire, and plans to construct a further one near Bedford, the Cambridge – Milton Keynes – Oxford corridor will become a major incineration hub in the UK.
Mobile incinerators
Incineration units for emergency use
Emergency incineration systems exist for the urgent and biosecure disposal of animals and their by-products following a mass mortality or disease outbreak. Public pressure and significant economic exposure have forced an increase in regulation and enforcement by governments and institutions worldwide.
Contagious animal disease has cost governments and industry $200 billion over the 20 years to 2012 and is responsible for over 65% of infectious disease outbreaks worldwide in the past sixty years. One-third of global meat exports (approximately 6 million tonnes) are affected by trade restrictions at any time, and as such the focus of governments, public bodies and commercial operators is on cleaner, safer and more robust methods of animal carcass disposal to contain and control disease.
Large-scale incineration systems are available from niche suppliers and are often bought by governments as a safety net in case of contagious outbreak. Many are mobile and can be quickly deployed to locations requiring biosecure disposal.
Small incinerator units
Small-scale incinerators exist for special purposes. For example, mobile small-scale incinerators are aimed at the hygienically safe destruction of medical waste in developing countries. Inciner8, a UK-based company, is one example of a mobile incinerator manufacturer, with its I8-M50 and I8-M70 models. Small incinerators can be quickly deployed to remote areas where an outbreak has occurred to dispose of infected animals quickly and without the risk of cross-contamination.
Containerised incinerator units
Containerised incinerators are a type of incinerator specifically designed to function in remote locations where traditional infrastructure may not be available. They are typically built within a shipping container for easy transport and installation.
See also
Burn pit
Cremation
Exposure assessment
Gasification
Incinerating toilet
List of solid waste treatment technologies
Plasma gasification
Pyrolysis
Thermal oxidizer
Thermal treatment
Waste Incineration Directive
Waste management
Waste-to-energy
Zero waste
References
External links
Anti-incineration groups
EU information
BREF Drafts & Papers, eippcb.jrc.es
English inventions
Occupational safety and health
Waste management
Waste treatment technology | Incineration | [
"Chemistry",
"Engineering"
] | 9,512 | [
"Water treatment",
"Combustion engineering",
"Incineration",
"Environmental engineering",
"Waste treatment technology"
] |
216,238 | https://en.wikipedia.org/wiki/Value%20of%20life | The value of life is an economic value used to quantify the benefit of avoiding a fatality. It is also referred to as the cost of life, value of preventing a fatality (VPF), implied cost of averting a fatality (ICAF), and value of a statistical life (VSL). In social and political sciences, it is the marginal cost of death prevention in a certain class of circumstances. In many studies the value also includes the quality of life, the expected life time remaining, as well as the earning potential of a given person especially for an after-the-fact payment in a wrongful death claim lawsuit.
As such, it is a statistical term, the value of reducing the average number of deaths by one. It is an important issue in a wide range of disciplines including economics, health care, adoption, political economy, insurance, worker safety, environmental impact assessment, globalization, and process safety.
The motivation for placing a monetary value on life is to enable policy and regulatory analysts to allocate the limited supply of resources, infrastructure, labor, and tax revenue. Estimates for the value of a life are used to compare the life-saving and risk-reduction benefits of new policies, regulations, and projects against a variety of other factors, often using a cost-benefit analysis.
Estimates for the statistical value of life are published and used in practice by various government agencies. In Western countries and other liberal democracies, estimates for the value of a statistical life typically range from –; for example, the United States FEMA estimated the value of a statistical life at in 2020.
Treatment in economics and methods of calculation
There is no standard concept for the value of a specific human life in economics. However, when looking at risk/reward trade-offs that people make with regard to their health, economists often consider the value of a statistical life (VSL). The VSL is very different from the value of an actual life. It is the value placed on changes in the likelihood of death, not the price someone would pay to avoid certain death. This is best explained by way of an example. From the EPA's website:Suppose each person in a sample of 100,000 people were asked how much he or she would be willing to pay for a reduction in their individual risk of dying by 1 in 100,000, or 0.001%, over the next year. Since this reduction in risk would mean that we would expect one fewer death among the sample of 100,000 people over the next year on average, this is sometimes described as "one statistical life saved.” Now suppose that the average response to this hypothetical question was $100. Then the total dollar amount that the group would be willing to pay to save one statistical life in a year would be $100 per person × 100,000 people, or $10 million. This is what is meant by the "value of a statistical life”.
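The arithmetic in the EPA example can be written out explicitly; the figures below are exactly those quoted above.

```python
# Aggregate willingness to pay (WTP) for a risk reduction expected to
# avert one death, i.e. the value of a statistical life in the EPA example.
sample_size = 100_000
risk_reduction = 1 / 100_000      # 0.001% lower chance of dying next year
mean_wtp = 100                    # average stated WTP per person, in dollars

expected_deaths_averted = sample_size * risk_reduction        # = 1.0
vsl = sample_size * mean_wtp / expected_deaths_averted

print(f"Value of a statistical life: ${vsl:,.0f}")  # -> $10,000,000
```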
This again emphasizes that VSL is more of an estimate of willingness to pay for small reductions in mortality risks rather than how much a human life is worth. Using government spending to see how much is spent to save lives in order to estimate the average individual VSL is a popular method of calculation. The United States government does not have an official value of life threshold, but different values are used in different agencies. It might be that the government values lives quite highly or that calculation standard are not applied uniformly. Using the EPA as an example, the Agency uses estimates of how much people are willing to pay for small reductions in their risks of dying from adverse health conditions that may be caused by environmental pollution in their cost-benefit analyses.
Economists often estimate the VSL by looking at the risks that people are voluntarily willing to take and how much they must be paid for taking them. This method is known as revealed preference, where the actions of the individual reveal how much they value something. In this context, economists would look at how much individuals are willing to pay for something that reduces their chance of dying. Similarly, compensating differentials, which are the reduced or additional wage payments that are intended to compensate workers for conveniences or downsides of a job, can be used for VSL calculations. For example, a job that is more dangerous for a worker's health might require that the worker be compensated more. The compensating differentials method has several weaknesses. One issue is that the approach assumes that people have information, which is not always available. Another issue is that people may have higher or lower perceptions of risk they are facing that do not equate to actual statistical risk. In general, it is difficult for people to accurately understand and assess risk. It is also hard to control for other aspects of a job or different types of work when using this method. Overall, revealed preference may not represent population preferences as a whole because of the differences between individuals.
One method that can be used to calculate VSL is summing the total present discounted value of lifetime earnings. There are a couple of problems using this method. One potential source of variability is that different discount rates can be used in this calculation, resulting in dissimilar VSL estimates. Another potential issue when using wages to value life is that the calculation does not take into account the value of time that is not spent working, such as vacation or leisure. As a result, VSL estimates may be inaccurate because time spent on leisure could be valued at a higher rate than an individual's wage.
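A minimal sketch of the lifetime-earnings method makes the discount-rate sensitivity concrete; the earnings stream, working horizon, and discount rates below are illustrative assumptions rather than figures from the text.

```python
# Present discounted value of a constant lifetime earnings stream,
# evaluated at several discount rates to show how sensitive the
# resulting estimate is to that choice. All inputs are hypothetical.
def pdv_of_earnings(annual_earnings: float, years: int, discount_rate: float) -> float:
    return sum(annual_earnings / (1 + discount_rate) ** t for t in range(1, years + 1))

annual_earnings = 50_000   # hypothetical constant annual wage
working_years = 40         # hypothetical remaining working life

for rate in (0.03, 0.05, 0.07):
    value = pdv_of_earnings(annual_earnings, working_years, rate)
    print(f"discount rate {rate:.0%}: ${value:,.0f}")
# Lower discount rates produce substantially larger estimates.
```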
Another method used to estimate VSL is contingent valuation. Contingent valuation asks individuals to value an option that they either have not chosen or are currently unable to choose. Economists might estimate the VSL by simply asking people (e.g. through questionnaires) how much they would be willing to pay for a reduction in the likelihood of dying, perhaps by purchasing safety improvements. These types of studies are referred to as stated preference studies. However, contingent valuation has some flaws. The first problem is known as the isolation of issues, where participants may give different values when asked to value something alone versus when they are asked to value multiple things. The order of how these issues are presented to people matters as well. Another potential issue is the "embedding effect" identified by Diamond and Hausman (1994). All of these methods might result in a VSL that is overstated or understated.
When calculating the value of a statistical life, it is important to discount and adjust it for inflation and real income growth over the years. An example of a formula used to adjust the VSL to a given year is:

VSL_T = VSL_O × (P_T / P_O) × (I_T / I_O)^ε

where

VSL_O = VSL in the original base year, VSL_T = VSL updated to year T, P_O and P_T = price index in the base year and in year T, I_O and I_T = real incomes in the base year and in year T, and ε = income elasticity of the VSL.
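A small sketch of this adjustment is shown below; the base-year VSL, index values, and elasticity are illustrative assumptions, not official figures.

```python
# Updating a base-year VSL for inflation and real income growth,
# following the formula above. All numeric inputs are hypothetical.
def adjust_vsl(vsl_base, price_index_base, price_index_t,
               income_base, income_t, income_elasticity):
    inflation_factor = price_index_t / price_index_base
    income_factor = (income_t / income_base) ** income_elasticity
    return vsl_base * inflation_factor * income_factor

updated_vsl = adjust_vsl(vsl_base=9_600_000,
                         price_index_base=100.0, price_index_t=118.0,
                         income_base=100.0, income_t=110.0,
                         income_elasticity=0.6)   # illustrative elasticity value
print(f"Updated VSL: ${updated_vsl:,.0f}")
```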
Value of preventing a casualty
The value of preventing a casualty (VPC) is a more general concept than the value of preventing a fatality: it means the value of preventing either a fatality or a serious injury. According to the Economic and Social Council's provisional agenda for review and analysis of the economic costs of level crossing accidents, "the value of preventing a casualty should be established by either Willingness-To-Pay or Human Capital/Lost Output approaches. It is essential to consider not only fatal injuries, but also serious (or even minor injuries) in this statistical life valuation exercise."
Comparisons to other methods
The value of statistical life (VSL) estimates are often used in the transport sector and in process safety (where it may be coupled with the ALARP concept). In health economics and in the pharmaceutical sector, however, the value of a quality-adjusted life-year (QALY) is used more often than the VSL. Both of these measures are used in cost-benefit analyses as a method of assigning a monetary value of bettering or worsening one's life conditions. While QALY measures the quality of life ranging from 0–1, VSL monetizes the values using willingness-to-pay.
Researchers first attempted to monetize the QALY in the 1970s, with countless studies being done to standardize values between and within countries. However, as with the QALY, VSL estimates have also had a history of vastly differing ranges within countries, let alone any standardization among countries. One of the biggest efforts to standardize them was the EuroVaQ project, which used a sample of 40,000 individuals to develop WTP estimates for several European countries.
Policy applications
Value of life estimates are frequently used to estimate the benefits added due to a new policy or act passed by the government. One example is the 6-year retroactive study on the benefits and costs of the 1970 Clean Air Act in the period from 1970 to 1990. This study was commissioned by the U.S. Environmental Protection Agency (EPA), Office of Air and Radiation and Office of Policy, Planning and Evaluation, but was carried out by an independent board of public health experts, economists, and scientists headed by Dr. Richard Schmalensee of MIT.
In conducting the benefit-cost analysis, the team measured the dollar value of each environmental benefit by estimating how many dollars a person is willing to pay in order to decrease or eliminate a current threat to their health, otherwise known as their "willingness-to-pay" (WTP). The WTP of the U.S. population was estimated and summed for separate categories including mortality, chronic bronchitis, hypertension, IQ changes, and strokes. Thus, the individual WTPs were added to get the value of a statistical life (VSL) for each category considered in the valuation of the act's benefits. Each valuation was the product of several studies, which compiled both solicited WTP information from individuals and WTP estimates inferred from risk compensation demanded in the labor market, and these were averaged to find a single VSL. Such labor market data were taken from the Census of Fatal Occupational Injuries collected by the Bureau of Labor Statistics.
For example, the valuation estimates used for mortality were divided by the typical life expectancy of each survey sample in order to get a dollar estimate per life-year lost or saved which was discounted with a 5 percent discount rate.
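One common way to implement the per-life-year step described above is to annuitize a VSL over remaining life expectancy at the stated 5 percent discount rate; the sketch below illustrates that approach with assumed inputs, and the study's exact procedure may have differed.

```python
# Converting a VSL into a value per discounted life-year by annuitizing
# it over remaining life expectancy. Inputs are illustrative assumptions.
def value_per_life_year(vsl: float, remaining_years: float, discount_rate: float = 0.05) -> float:
    annuity_factor = (1 - (1 + discount_rate) ** -remaining_years) / discount_rate
    return vsl / annuity_factor

print(f"${value_per_life_year(vsl=4_800_000, remaining_years=35):,.0f} per life-year")
```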
Using these estimates, the paper concluded that the benefits, ranging from $5.6 to $49.4 trillion in 1990 dollars, of implementing the Clean Air Act from 1970 to 1990 outweighed the economic costs of $523 billion in 1990 dollars.
Estimates of the value of life
Equivalent parameters are used in many countries, with significant variation in the value assigned.
European Union
Sweden
In Sweden, the value of a statistical life has been estimated from 9 to 98 million SEK (€0.9 - 10.6 million).
34.6 million SEK (€3.7 million) mean of studies in Sweden from 1995 and on
23 million SEK (€2.5 million) median of studies in Sweden from 1995 and on
22 million SEK (€2.4 million) recommended by official authorities
Australia
In Australia, the value of a statistical life has been set at:
AU$5.4 million (2023)
AU$235,000 per year (2023)
India
Using a hedonic wage approach, the VSL in India among blue-collar male workers in manufacturing industries of Ahmedabad, Gujarat has been estimated to be 44.69 million INR ($0.64 million) in 2018.
New Zealand
In New Zealand, the value of a statistical life has been set at:
NZ$2 million (1991) by NZTA
NZ$3.85 million (2013) by The Treasury
NZ$4.14 million (2016) by NZTA
NZ$4.53 million (June 2019) by Ministry of Transport
NZ$150,000 per year (2022) by Ministry of Health
NZ$12.5 million (April 2023) by NZTA
Singapore
The value of statistical life (VSL) in Singapore was estimated in 2007 via a contingent valuation survey that elicits willingness-to-pay (WTP) for mortality risk reductions, which interviewed 801 Singaporeans and Singapore Permanent Residents aged 40 and above, entailing a value of statistical life of approximately S$850,000 to S$2.05 million (in 2007 S$, which is approximately 1.36 S$ in 2022). Mean WTP was also shown to have an inverse relationship with age, and is about 20% lower for persons aged 70 and older. Consistent with existing literature, the study also finds that mean WTP is not affected by physical health; but is affected by mental health. In addition, mean WTP is not affected by covariates such as gender, race, and personal income, but is affected by covariates such as household income, age, occupation and level of education.
For traffic accidents, the WTP-based VSL was estimated in 2008 at S$1.87 million (in 2008 S$, which is approximately 1.27 S$ in 2022). This was also compared against WTP-based VSL estimates in other countries, including 4.63 million for the US, 3.11 million for Sweden, 2.41 million for the UK, 2.38 million for New Zealand and 1.76 million for the EU (in 2008 S$).
The VSL obtained by other methods may differ significantly. For instance, if the VSL is estimated from the World Bank VSL adjusted to country-specific gross domestic product, which reflects a human capital approach, then the VSL in Singapore would be calculated to be US$8.96 million in 2014 (S$11.3 million in 2014, in 2014 S$, which is approximately 1.09 S$ in 2022).
Turkey
Studies by Hacettepe University estimated the VSL at about half a million purchasing power parity adjusted 2012 US dollars, the value of a healthier and longer life (VHLL) for Turkey at about 42,000 lira (about $27,600 in PPP-adjusted 2012 USD), and the value of a life year (VOLY) at about 10,300 TL (about $6,800 in PPP-adjusted 2012 USD).
The estimated economic value produced over a lifetime in Turkey was US$59,000, which was 5.4 times GDP per capita.
Russia
According to different estimates, the value of life in Russia varies from $40,000 up to $2 million. Based on the results of an opinion poll, the value of life (measured as the financial compensation for a death) was about $71,500 at the beginning of 2015.
United Kingdom
As of 2013, the value of preventing a fatal casualty was £1.7m (2013 prices) in UK.
United States
The following estimates have been applied to the value of life. The estimates are either for one year of additional life or for the statistical value of a single life.
$50,000 per year of quality life (the "dialysis standard", a de facto international standard that most private and government-run health insurance plans worldwide have used to determine whether to cover a new medical procedure)
$129,000 per year of quality life (an update to the "dialysis standard")
$7.5 million (Federal Emergency Management Agency, Jul. 2020)
$9.1 million (Environmental Protection Agency, 2010)
$9.2 million (Department of Transportation, 2014)
$9.6 million (Department of Transportation, Aug. 2016)
$12.5 million (Department of Transportation, 2022)
$13.2 million (Department of Transportation, 2023)
The income elasticity of the value of statistical life has been estimated at 0.5 to 0.6. Developing markets have a smaller statistical value of life, and the statistical value of life also decreases with age.
Historically, children were valued little monetarily, but changes in cultural norms have resulted in a substantial increase as evinced by trends in damage compensation from wrongful death lawsuits.
Uses
Knowing the value of life is helpful when performing a cost-benefit analysis, especially in regard to public policy. In order to decide whether or not a policy is worth undertaking, it is important to accurately measure costs and benefits. Public programs that deal with things like safety (i.e. highways, disease control, housing) require accurate valuations in order to budget spending.
Since resources are finite, trade-offs are inevitable, even regarding potential life-or-death decisions. The assignment of a value to individual life is one possible approach to attempting to make rational decisions about these trade-offs.
When deciding on the appropriate level of health care spending, a typical method is to equate the marginal cost of the health care to the marginal benefits received. In order to obtain a marginal benefit amount, some estimation of the dollar value of life is required. One notable example was found by Stanford professor Stefanos Zenios, whose team calculated the cost-effectiveness of kidney dialysis. His team found that the VSL implied by then current dialysis practice averages about US$129,000 per quality-adjusted life year (QALY). This calculation has important implications for health care as Zenios explained: "That means that if Medicare paid an additional $129,000 to treat a group of patients, on average, group members would get one more quality-adjusted life year."
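A minimal sketch of the kind of cost-per-QALY arithmetic described above is shown below; the treatment cost and QALY gain are invented placeholders, not figures from the Stanford study.

```python
# Illustrative implied cost-per-QALY calculation. The numbers are made-up
# placeholders used only to show the arithmetic.

def cost_per_qaly(incremental_cost, qalys_gained):
    """Incremental cost-effectiveness ratio: dollars per quality-adjusted life year."""
    return incremental_cost / qalys_gained

# e.g. a therapy costing an extra $450,000 that yields 3.5 additional QALYs
print(round(cost_per_qaly(450_000, 3.5)))   # ~128,600 $/QALY, near the $129,000 figure
```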
In risk management activities such as workplace safety and insurance, it is often useful to put a precise economic value on a given life. The Occupational Safety and Health Administration, under the Department of Labor, sets penalties and regulations that require companies to comply with safety standards intended to prevent workplace injuries and deaths. It can be argued that these high penalties are intended to act as a deterrent, giving companies an incentive to avoid them; for the deterrent to work, the size of the fines would have to be roughly equivalent to the value of a human life. However, studies of the effectiveness of fines as a deterrent have found mixed results.
To make transportation more sustainable, it is important to consider the external cost, which is paid by society but not reflected in market prices. Although this external cost includes impacts on climate, crops, and public health among others, it is largely determined by impacts on the mortality rate.
Criticisms
The value of a statistical life has come under criticism from a range of sources both in economics and philosophy. These criticisms range from concerns with the specific methodology used, to value a statistical life to the very prospect of valuing life and using it in cost benefit analyses.
Concerns with aggregation
Some economists have argued that the value of a statistical life should be "disaggregated" to better capture the differences in mortality risk reduction preferences. Cass Sunstein and others have argued that the value of a statistical life should vary by type of risks, as people are more concerned about some risks than others, and by individuals, as some people are more risk seeking than others. This is proposed to ensure the accuracy of the measurement, as using an average may force some people to pay more than they are willing to for risk reduction, and prevent policies from being enacted for people who are willing to pay more than average for mortality risk reduction.
Concerns with valuing life
Some philosophers and policymakers have concerns about the underlying idea of valuing a statistical life at all. While some of these concerns represent a misunderstanding of what is meant by the value of a statistical life, many express concerns with the project of valuing lives. Elizabeth Anderson and other philosophers have argued that the methods for measuring the value of a statistical life are insufficiently accurate as they rely on wage studies that are conducted in non-competitive labor markets where workers have insufficient information about their working conditions to accurately determine the risk of death from taking a particular job. Further, these philosophers contend that some goods (including mortality risk, as well as environmental goods) are simply incommensurable: they cannot meaningfully be compared, and therefore cannot be monetized and placed on a single scale, which makes the very practice of valuing a statistical life problematic.
Economists have responded to the more superficial concerns by advocating renaming or rebranding the value of a statistical life as a "micromort" or the amount someone would be willing to pay to reduce a one in one million risk of death, though philosophers contend that this does not resolve the underlying issues.
See also
ALARP
Disability-adjusted life year
Hedonic damages
Intrinsic value (ethics)
Psychological significance and value in life
Rational choice theory
Utilitarianism
Value (personal and cultural)
References
Further reading
Viscusi, W. Kip (2003). The Value of Life: Estimates with Risks by Occupation and Industry (PDF). Discussion Paper No. 422. Cambridge, Mass.: Harvard Law School. Retrieved 17 July 2023.
External links
Demographic economics
Philosophy of life
Valuation (finance)
Business ethics
Welfare economics
Utilitarianism
Safety analysis
Actuarial science | Value of life | [
"Mathematics"
] | 4,266 | [
"Applied mathematics",
"Actuarial science"
] |
19,111,851 | https://en.wikipedia.org/wiki/Shear%20velocity | Shear velocity, also called friction velocity, is a form by which a shear stress may be re-written in units of velocity. It is useful as a method in fluid mechanics to compare true velocities, such as the velocity of a flow in a stream, to a velocity that relates shear between layers of flow.
Shear velocity is used to describe shear-related motion in moving fluids. It is used to describe:
Diffusion and dispersion of particles, tracers, and contaminants in fluid flows
The velocity profile near the boundary of a flow (see Law of the wall)
Transport of sediment in a channel
Shear velocity also helps in thinking about the rate of shear and dispersion in a flow. Shear velocity scales well to rates of dispersion and bedload sediment transport. A general rule is that the shear velocity is between 5% and 10% of the mean flow velocity.
For the river base case, the shear velocity can be calculated from Manning's equation:
u∗ = ⟨u⟩ n √g / (a Rh^(1/6))
where:
⟨u⟩ is the mean flow velocity (L/T; ft/s, m/s) and g is the gravitational acceleration;
n is the Gauckler–Manning coefficient. Units for values of n are often left off; however, it is not dimensionless, having units of T/[L^(1/3)] (s/[ft^(1/3)]; s/[m^(1/3)]).
Rh is the hydraulic radius (L; ft, m);
the role of a is a dimension correction factor. Thus a = 1 m^(1/3)/s = 1.49 ft^(1/3)/s.
Instead of determining these parameters for the specific river of interest, the range of possible values can be examined; for most rivers, u∗ is between 5% and 10% of ⟨u⟩.
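As a rough numerical sketch of the Manning-based estimate and the 5–10% rule of thumb, consider the following; the mean velocity, roughness coefficient and hydraulic radius are assumed example values for a generic river, not data for any particular channel.

```python
# Sketch of the Manning-based shear velocity estimate and the 5-10% rule of thumb.
# Input values are illustrative assumptions (SI units, a = 1 m^(1/3)/s).
import math

def shear_velocity_manning(mean_velocity, n, hydraulic_radius, a=1.0, g=9.81):
    """u* = <u> * n * sqrt(g) / (a * Rh**(1/6))."""
    return mean_velocity * n * math.sqrt(g) / (a * hydraulic_radius ** (1.0 / 6.0))

u_mean, n, Rh = 1.5, 0.035, 2.0                 # m/s, s/m^(1/3), m (assumed)
u_star = shear_velocity_manning(u_mean, n, Rh)
print(u_star)                                   # ~0.15 m/s
print(0.05 * u_mean, 0.10 * u_mean)             # compare with the 5-10% rule
```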
For the general case,
u∗ = √(τ / ρ)
where τ is the shear stress in an arbitrary layer of fluid and ρ is the density of the fluid.
Typically, for sediment transport applications, the shear velocity is evaluated at the lower boundary of an open channel:
u∗ = √(τb / ρ)
where τb is the shear stress given at the boundary.
Shear velocity is linked to the Darcy friction factor by equating wall shear stress, giving:
u∗ = ⟨u⟩ √(f / 8)
where f is the friction factor.
Shear velocity can also be defined in terms of the local velocity and shear stress fields (as opposed to whole-channel values, as given above).
Friction velocity in turbulence
The friction velocity is often used as a scaling parameter for the fluctuating component of velocity in turbulent flows. One method of obtaining the shear velocity is through non-dimensionalization of the turbulent equations of motion. For example, in a fully developed turbulent channel flow or turbulent boundary layer, the streamwise momentum equation in the very near wall region reduces to:
0 = ν ∂²ū/∂y² − ∂(u′v′)/∂y .
By integrating in the y-direction once, then non-dimensionalizing with an unknown velocity scale u∗ and the viscous length scale ν/u∗, the equation reduces down to:
ν ∂ū/∂y − u′v′ = τw/ρ
or, in non-dimensional form,
∂ū⁺/∂y⁺ − (u′v′)⁺ = τw/(ρ u∗²) .
Since the right hand side is in non-dimensional variables, it must be of order 1. This results in the left hand side also being of order one, which in turn gives us a velocity scale for the turbulent fluctuations (as seen above):
u∗ = √(τw/ρ) .
Here, τw refers to the local shear stress at the wall.
Planetary boundary layer
Within the lowest portion of the planetary boundary layer a semi-empirical log wind profile is commonly used to describe the vertical distribution of horizontal mean wind speeds.
The simplified equation that describes it is
u(z) = (u∗/κ) ln((z − d)/z0)
where u(z) is the mean wind speed at height z, κ is the von Kármán constant (~0.41), d is the zero plane displacement (in metres), and z0 is the surface roughness length (in metres).
The zero-plane displacement () is the height in meters above the ground at which zero wind speed is achieved as a result of flow obstacles such as trees or buildings. It can be approximated as 2/3 to 3/4 of the average height of the obstacles. For example, if estimating winds over a forest canopy of height 30 m, the zero-plane displacement could be estimated as d = 20 m.
Thus, you can extract the friction velocity by knowing the wind velocity at two levels (z).
Due to the limitation of observation instruments and the theory of mean values, the levels (z) should be chosen where there is enough difference between the measurement readings. If one has more than two readings, the measurements can be fit to the above equation to determine the shear velocity.
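A minimal sketch of the two-level method is given below, following the log wind profile quoted above (the roughness length cancels when two levels are used); the measurement heights, wind speeds and zero-plane displacement are illustrative assumptions.

```python
# Extract friction velocity from wind speeds measured at two heights using the
# log wind profile: u2 - u1 = (u*/kappa) * ln((z2 - d)/(z1 - d)).
# All measurement values below are assumed, for illustration only.
import math

def friction_velocity(u1, z1, u2, z2, d=0.0, kappa=0.41):
    """u* = kappa * (u2 - u1) / ln((z2 - d) / (z1 - d)); heights in metres."""
    return kappa * (u2 - u1) / math.log((z2 - d) / (z1 - d))

print(friction_velocity(u1=4.0, z1=2.0, u2=5.5, z2=10.0))   # ~0.38 m/s
```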
References
Fluid mechanics
Geophysics
Geomorphology
Sedimentology | Shear velocity | [
"Physics",
"Engineering"
] | 861 | [
"Civil engineering",
"Applied and interdisciplinary physics",
"Fluid mechanics",
"Geophysics"
] |
19,114,012 | https://en.wikipedia.org/wiki/Ursell%20number | In fluid dynamics, the Ursell number indicates the nonlinearity of long surface gravity waves on a fluid layer. This dimensionless parameter is named after Fritz Ursell, who discussed its significance in 1953.
The Ursell number is derived from the Stokes wave expansion, a perturbation series for nonlinear periodic waves, in the long-wave limit of shallow water – when the wavelength is much larger than the water depth. Then the Ursell number U is defined as:
U = H λ² / h³
which is, apart from a constant 3/(32π²), the ratio of the amplitudes of the second-order to the first-order term in the free surface elevation.
The used parameters are:
H : the wave height, i.e. the difference between the elevations of the wave crest and trough,
h : the mean water depth, and
λ : the wavelength, which has to be large compared to the depth, λ ≫ h.
So the Ursell parameter U is the relative wave height H / h times the relative wavelength λ / h squared.
For long waves (λ ≫ h) with small Ursell number, U ≪ 32π²/3 ≈ 100, linear wave theory is applicable. Otherwise (and most often) a non-linear theory for fairly long waves (λ > 7 h) – like the Korteweg–de Vries equation or Boussinesq equations – has to be used.
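As a quick numerical sketch, the following computes the Ursell number for an assumed long wave and applies the criterion above; the wave height, water depth and wavelength are illustrative values only.

```python
# Compute the Ursell number U = H * lambda**2 / h**3 and apply the
# U << 32*pi**2/3 (roughly 100) criterion for linear wave theory.
import math

def ursell_number(H, h, wavelength):
    """Ursell number for long waves (wavelength >> depth)."""
    return H * wavelength ** 2 / h ** 3

U = ursell_number(H=0.5, h=4.0, wavelength=60.0)   # assumed wave parameters
threshold = 32 * math.pi ** 2 / 3                   # roughly 100, as noted above
print(U, "linear theory applicable" if U < threshold else "use nonlinear long-wave theory")
```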
The parameter, with different normalisation, was already introduced by George Gabriel Stokes in his historical paper on surface gravity waves of 1847.
Notes
References
Dimensionless numbers of fluid mechanics
Fluid dynamics
Water waves | Ursell number | [
"Physics",
"Chemistry",
"Engineering"
] | 335 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
10,050,588 | https://en.wikipedia.org/wiki/Amanita%20parvipantherina | Amanita parvipantherina, also known as the Asian small panther amanita, is a Chinese species of agaric which fruits in July and August. It has a brown cap up to wide covered with whitish remnants of the universal veil. The stem is up to 9 cm tall. The similar A. pantherina is usually larger and less fragile, with fainter striations around the cap margin.
The species is restricted to Yunnan province in China, where it is strongly associated with Pinus yunnanensis (the Yunnan pine).
See also
List of Amanita species
References
Yang ZL, Weiss M & Oberwinkler F. (2004) New species of Amanita from the eastern Himalaya and adjacent regions
parvipantherina
Fungi of Asia
Fungi described in 2004
Fungus species | Amanita parvipantherina | [
"Biology"
] | 167 | [
"Fungi",
"Fungus species"
] |
10,050,972 | https://en.wikipedia.org/wiki/Ginsberg%27s%20theorem | Ginsberg's theorem is an epigrammatic paraphrase and parody "theorem" which restates the consequences of the four laws of thermodynamics of physics in terms of a person playing a game. It has various formulations, but it can be more or less expressed as:
The theorem is named after the poet Allen Ginsberg, though there does not appear to be any concrete evidence that Ginsberg himself coined the theorem. The phrase is sometimes stated as a general adage without specific reference to the laws of thermodynamics.
History
A comprehensive history and etymology of the epigrammatic phrase can also be found from the etymologist Barry Popik.
The phrase is often attributed to the British scientist C. P. Snow, who apparently was credited by his students for using it to help learn the laws of thermodynamics in the 1950s. However this claim appears to be without a source.
A semblance of the phrase appears to have been first printed in a 1953 issue of the science fiction magazine Astounding Science Fiction, whose editor, John Wood Campbell Jr., referenced acoustic engineer and professor Dwight Wayne Batteau of Harvard University:
In a 1956 issue of the same magazine, Batteau himself expanded it further in what appears to have been the first complete mention of the epigrammatic phrase in print:
It was later presented in the literary magazine The Kenyon Review in a 1960 short story titled "Entropy" from widely-regarded novelist Thomas Pynchon, who was still then an engineering physics undergraduate at Cornell University:
Physicist William R. Corliss also partly wrote about the phrase in a 1964 educational booklet freely distributed by the United States Atomic Energy Commission to disseminate knowledge about atomic energy to the American public:
Science writer Isaac Asimov stated at least the first two laws in a 1970 article, and was being credited with the paraphrased version by the end of the decade.
The phrase then appeared in a non-scientific setting in the opening lines of the popular song "You Can't Win" originally written by songwriter Charlie Smalls for the stage musical The Wiz:
The song was written by Smalls in 1974 and performed during the 1974 Baltimore run of the musical. The song later reached number 81 on the Billboard Hot 100. Though the song was formally released in 1979 as part of a musical soundtrack album, it was originally written and copyrighted by Smalls in 1974.
Remarkably, Allen Ginsberg appears to have only ever written about the laws of thermodynamics once, in his 1973 poem "Yes and It's Hopeless", though not in any connection to the original epigrammatic phrase:
Thus Ginsberg was seemingly, at the very least, cognizant of the laws of thermodynamics by 1973. Ginsberg is claimed to have mentioned the epigrammatic phrase as a fun fact during a poetry session in or around 1974. In 1975, someone (possibly Ginsberg's gay partner, the poet Peter Orlovsky; his poetry associate William Burroughs; or Philip Whalen) compiled a collection of quirky laws, including a "Ginsberg's Theorem" based on Ginsberg's prior musings.
In 1975, Ginsberg's theorem formally appeared by name, with no association to thermodynamics, in a listing of parody-like proverb laws by Conrad Schneiker in the counterculture magazine The CoEvolution Quarterly:
It may be possible that this appearance originated from a slight misstatement of the lines in the earlier 1974 song by Charlie Smalls.
Writer Arthur Bloch, in his popular 1977 book "Murphy's Law and Other Reasons Why Things Go Wrong!" which popularized Murphy's law, conflated the Ginsberg's theorem with the science of thermodynamics:
Notably, the book's acknowledgements mention Conrad Schneiker, who had written about Ginsberg's theorem in The CoEvolution Quarterly just two years prior in 1975. The theorem may have also been relayed to Bloch in conversation with his acquaintance Harris Freeman, whom he knew from the University of California, Santa Cruz, and who had found a collection of "laws", including Murphy's Law, Ginsberg's Theorem, and many others, somewhere on the ARPANET (a precursor of the Internet) in the mid-1970s while working as a systems administrator for ILLIAC IV (the world's first massively parallel computer) at the NASA Ames Research Center near Mountain View, California. With the publication of Bloch's book, Ginsberg's theorem seemingly thereafter became much more widely known.
References
External links
Laws of thermodynamics
Adages | Ginsberg's theorem | [
"Physics",
"Chemistry"
] | 969 | [
"Thermodynamics",
"Laws of thermodynamics"
] |
10,051,447 | https://en.wikipedia.org/wiki/Hydrophobic%20concrete | Hydrophobic concrete is concrete that repels water. It meets the standards outlined in the definition of waterproof concrete. Developed in Australia in the mid-20th century, millions of cubic yards of hydrophobic concrete have been laid in Australia, Asia, and Europe, and in the United States since 1999. Its effective use in hundreds of structures has contributed to its large acceptance and growing use.
Structure
Typical concrete is quite hydrophilic. This comes from its intricate system of tiny capillaries, which suck water through the microcrack network within a concrete slab. This hardened matrix creates a continuous "source to sink" cycle, meaning water from above is constantly pulled to an area of lower elevation. Darcy's coefficient refers to the ability of liquid water under pressure to flow through any pores and capillaries that are present. A lower Darcy's coefficient corresponds to a lower permeability and hence a higher quality material.
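To make the role of the Darcy coefficient concrete, the following sketch applies Darcy's law to pressure-driven seepage through a slab; the permeability, viscosity and geometry figures are assumptions for illustration, not measured properties of any particular product.

```python
# Darcy's law sketch: volumetric flow rate through a slab, Q = k * A * dP / (mu * L).
# All numerical values are illustrative assumptions.

def darcy_flow_rate(permeability, area, pressure_drop, viscosity, thickness):
    """Return the seepage rate (m^3/s) through a slab of thickness `thickness`."""
    return permeability * area * pressure_drop / (viscosity * thickness)

# A lower permeability (smaller Darcy coefficient) gives proportionally less seepage.
Q_ordinary    = darcy_flow_rate(1e-17, 1.0, 50_000, 1e-3, 0.2)
Q_hydrophobic = darcy_flow_rate(1e-19, 1.0, 50_000, 1e-3, 0.2)
print(Q_ordinary, Q_hydrophobic)
```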
Commercial companies use different approaches to modify a regular concrete mixture in order to create hydrophobic concrete, all of which involve somehow filling the porous concrete mixture. Some of the most commonly used methods include polymer formation, small speck infusion, and crystalline formations, the latter being the most widely used.
Polymer formation works by having a water-soluble pre-polymer polymerize via ion exchange with di-valent metal ions such as Ca and Fe ions to form rubbery insoluble particles. These small particles migrate and concentrate in the small fissures and capillaries formed in the concrete as it dries. As polymerization proceeds, rubber plugs form and permanently seal these water pathways, greatly reducing both water absorption and water permeability.
Crystalline technology is used to create hydrophobic concrete by causing crystal structures to form in the tiny capillaries, pores and other air pockets left behind in the concrete curing process. During this formation, by-products are left behind in the capillaries and pores of the freshly cured concrete, typically calcium hydroxide, sulfates, sodium carbonates, potassium, calcium, and hydrated and unhydrated cement particles. These crystal structures then plug the pores and capillaries, preventing water from flowing through them. Once the crystalline chemicals are added to the concrete mixture, through either an admixture or coating, they react with the by-products in the presence of water. This reaction then forms an insoluble crystal structure that clogs the pores. This process continues until all the chemicals have reacted. When applied as a coating, the chemical reaction proceeds through the process of chemical diffusion. This is a process of a high chemical density solution migrating towards the low density chemical solution until the two come into equilibrium. Soaking the concrete in water creates a low chemical density in the pores, and applying the crystalline chemical as a coating then creates a high chemical density. These two fluids diffuse through the inner structure of the concrete until they reach equilibrium throughout the inner structure. When this process is finished, the hydrophobic concrete's crystal structure is complete.
Properties
The ultimate goal when forming a hydrophobic material is to reduce the polarity of the molecules. Because water molecules are very polar, they are easily attracted to partially positive or partially negative charges. On a neutral surface, water molecules bunch up and attract each other, creating a spherical droplet of water. These droplets can then evaporate off the concrete surface rather than be absorbed into the capillaries of the concrete. The exact structure and composition of the crystals used in hydrophobic concrete are not public information; given the material's behaviour, however, the crystals can be assumed to be non-polar.
The property to repel water gives hydrophobic concrete the ability to avoid contamination by particles dissolved in water drops. Because the crystals themselves are not polar, there is little interaction between the crystals and dissolved oxygen. This allows the concrete to withstand the rebar rusting that so often compromises the strength of concrete that has iron bars running through it. Standard commercial concrete has an average water absorption of 4-10%. In contrast, hydrophobic concrete has an average of 0.3-1%.
An overlooked property of hydrophobic concrete is its ability to repel humidity in the air as well. In contrast to liquid water, water molecules in the air move with higher kinetic energy and exist in a gas-like form. The crystal structures in hydrophobic concrete are compact enough to prevent humidity from moving through the capillaries of the concrete.
Processing
Hydrophobic concrete is produced in a variety of ways that fall under two categories; coatings or admixtures. Both allow the crystal structures to form in the presence of water.
When creating hydrophobic concrete through a coating process, a coating is sprayed or brushed onto a porous surfaces. In most cases, it is applied to a regular concrete slab that then undergoes a corrosive process to expose more of the concrete's capillaries. This can be achieved by water blasting the surface at about 3,000-4,000 psi. Sandblasting and acid etching are also suitable processes. The addition of water is the next step. It can be applied either vertically or horizontally, but temperatures should not go below 33 degrees Fahrenheit to prevent freezing. Excessive evaporation should also be avoided. In areas with high evaporation rates, this process often takes place overnight when temperatures are cooler. Once the pores are saturated as much as possible with water, the coating is applied. Hydrophobic chemicals are in a powder form and mixed with water at a ratio of five parts powder to two parts water for application by brush. For spray application, the ratio is five parts powder to three parts water. The coating is applied between 1.25-1.5 lb per square yard and continues until the whole surface is covered. If the surface requires another coat, it must be applied within forty eight hours of the initial application of the hydrophobic mixture. Once applied, the concrete must cure in a moist environment two to three hours after the application. This is achieved by spraying the surface with water at least three times a day for a few days. Evaporation retardants are also occasionally used. Depending on the climate, the curing process may take longer and require more frequent wetting. Once the concrete is cured, it sits for two to three weeks before the process is complete.
When hydrophobic concrete is made through the use of an admixture, a powder with the hydrophobic chemicals is added during the batching process. In other words, it is added to the concrete mixture itself when the concrete is laid. The usual dosage is two to three percent of the concrete mixture. Because water is a part of the batching process, an additional curing process is not required. This approach is easier and less labor-intensive, but it can only be used when new concrete is laid.
Uses
Hydrophobic concrete can be used in the same applications as regular concrete, most often where regular concrete is dangerous to repair or the cost of structural damage would be highly detrimental. Tunnel work is a major application of hydrophobic concrete as underground repairs are difficult and costly. It is also a favorite choice for laying foundations for buildings and sidewalks in locations below the water table.
Underwater use of hydrophobic concrete is a major application in marine facilities. Is often used to hold water to create pools and ponds. NASA used hydrophobic concrete to build the swimming pool used to train astronauts for walking on the Moon. Hydrophobic concrete is also used in applications that are exposed to rain or rain puddling, such as green roofs, other kinds of roofs, parking structures, and plazas.
Advantages
Amongst the many benefits of using hydrophobic concrete, it reduces installation time and lowers costs. Use of hydrophobic concrete can reduce the labor time of industrial project because normal concrete involves a corrosion proofing period as well as a waterproofing period. With hydrophobic concrete, both corrosion proofing and waterproofing are done at the same time.
Likewise, time reduction reduces installation costs. Regular, membrane-backed concrete can cost around US$5 per square foot, although prices can vary based on the application. The one-step installation process of hydrophobic concrete brings the cost down to about US$3.20 per square foot. Such savings can quickly add up over the course of a project, as reported by the Hycrete company of southern California.
An estimated five billion dollars were spent in Western Europe alone on hydrophilic cement repairs in 1998. Most of the repairs were necessary due to the damage of water corrosion in urban areas. Because there is little or no water corrosion, hydrophobic concrete is better preserved than regular concrete, which typically looks worn and aged after a few years.
From an environmental standpoint, hydrophobic concrete is also beneficial because it is "green". Its ability to be re-crushed makes it easily reusable. Although regular concrete can be re-crushed, it involves a very costly process, which often means that the concrete ends up in a landfill. This advantage of hydrophobic concrete enables its cost-efficient reuse in future projects.
Disadvantages
Some other cons to hydrophobic concrete come from the application process. When applied as a coating, it can only penetrate up to 12 inches into the material. Also, the coating process itself is extremely labor-intensive. If the structure is thicker than 12 inches, or it is a large-area project, an admixture approach would have better results.
Using the alternative crystalline technology to produce hydrophobic concrete is only possible when water is present, since the surface must be carefully wetted before the coating is applied.
References
Concrete | Hydrophobic concrete | [
"Engineering"
] | 1,965 | [
"Structural engineering",
"Concrete"
] |
10,052,417 | https://en.wikipedia.org/wiki/Polymer%20concrete | Polymer concrete is a type of concrete that uses a polymer to replace lime-type cements as a binder. One specific type is epoxy granite, where the polymer used is exclusively epoxy. In some cases the polymer is used in addition to portland cement to form Polymer Cement Concrete (PCC) or Polymer Modified Concrete (PMC). Polymers in concrete have been overseen by Committee 548 of the American Concrete Institute since 1971.
Composition
In polymer concrete, thermoplastic polymers are often used, but more typically thermosetting resins are used as the principal polymer component due to their high thermal stability and resistance to a wide variety of chemicals.
Polymer concrete is also composed of aggregates that include silica, quartz, granite, limestone, or other material. The aggregate should be of good quality, free of dust and other debris, and dry. Failure to fulfill these criteria can reduce the bond strength between the polymer binder and the aggregate.
Uses
Polymer concrete may be used for new construction or repairing of old concrete. The adhesive properties of polymer concrete allow repair of both polymer and conventional cement-based concretes. The corrosion resistance and low permeability of polymer concrete allows it to be used in swimming pools, sewer structure applications, drainage channels, electrolytic cells for base metal recovery, and other structures that contain liquids or corrosive chemicals. It is especially suited to the construction and rehabilitation of manholes due to their ability to withstand toxic and corrosive sewer gases and bacteria commonly found in sewer systems. Unlike traditional concrete structures, polymer concrete requires no coating or welding of PVC-protected seams. It can also be used as a bonded wearing course for asphalt pavement, for higher durability and higher strength upon a concrete substrate, and in skate parks, as it is a very smooth surface.
Polymer concrete has historically not been widely adopted due to the high costs and difficulty associated with traditional manufacturing techniques. However, recent progress has led to significant reductions in cost, meaning that the use of polymer concrete is gradually becoming more widespread.
Polymer concrete in the form of epoxy granite is becoming more widely used in the construction of machine tool bases (such as mills and metal lathes) in place of cast iron due to its superior mechanical properties and a high chemical resistance.
Properties
The exact properties depend on the mixture, polymer, aggregate used etc. Generally speaking with mixtures used:
The binder is more expensive than cement
Significantly greater tensile strength than unreinforced Portland concrete (since polymer plastic is 'stickier' than cement and has reasonable tensile strength)
Similar or greater compressive strength to Portland concrete
Faster curing
Good adhesion to most surfaces, including to reinforcements
Good long-term durability with respect to freeze and thaw cycles
Low permeability to water and aggressive solutions
Improved chemical resistance
Good resistance against corrosion
Lighter weight (slightly less dense than traditional concrete, depending on the resin content of the mix)
May be vibrated to fill voids in forms
Allows use of regular form-release agents (in some applications)
The product is hard to machine with conventional tools such as drills and presses due to its density; obtaining a pre-modified product from the manufacturer is recommended.
Small boxes are more costly than their precast concrete counterparts; however, the stacking or steel covers that precast concrete requires quickly bridge the gap.
Specifications
Following are some specification examples of the features of polymer concrete:
References
Further reading
External links
Concrete
Pavements | Polymer concrete | [
"Engineering"
] | 698 | [
"Structural engineering",
"Concrete"
] |
10,053,774 | https://en.wikipedia.org/wiki/Herman%20Francis%20Mark | Herman Francis Mark (born Hermann Franz Mark; May 3, 1895, Vienna – April 6, 1992, Austin, Texas) was an Austrian-American chemist regarded for his contributions to the development of polymer science. Mark's X-ray diffraction work on the molecular structure of fibers provided important evidence for the macromolecular theory of polymer structure. Together with Houwink he formulated an equation, now called the Mark–Houwink or Mark–Houwink–Sakurada equation, describing the dependence of the intrinsic viscosity of a polymer on its relative molecular mass (molecular weight). He was a long-time faculty at Polytechnic Institute of Brooklyn. In 1946, he established the Journal of Polymer Science.
Biography
Early life
Mark was born in Vienna in 1895, the son of Hermann Carl Mark, a physician, and Lili Mueller. Mark's father was Jewish, but converted to Christianity (Lutheran Church) upon marriage.
Several early stimuli apparently steered Herman Mark to science. He was greatly influenced by a teacher, Franz Hlawaty, who made mathematics and physics understandable. At the age of twelve, he and his friend toured the laboratories of the University of Vienna. His friend's father, who taught science, arranged the tour. The visit excited both boys and before long they turned their bedrooms into laboratories. Both had access to chemicals through their fathers, and they were soon performing experiments.
World War I
Mark served as an Officer in the elite k.k. Kaiserschützen Regiment Nr. II of the Austro-Hungarian Army during World War I. He was highly decorated and the Austrian hero of the alpine Battle of Mount Ortigara in June 1917.
X-ray diffraction
Mark worked on X-ray diffraction caused by passage through gases along with physicist Raimund Wierl. This led to the computation of intermolecular distances. Linus Pauling learned X-ray diffraction from Mark, and that knowledge led to Pauling's seminal work on the structure of proteins.
Albert Einstein asked Mark and his colleagues to use the intense and powerful X-ray tubes available at their laboratory to verify the Compton Effect; this work provided the strongest confirmation yet of Einstein's light quantum theory for which he won the Nobel Prize in Physics.
IG Farben
In 1926, chemist Kurt Meyer of IG Farben offered Mark the assistant directorship of research at one of the company's laboratories. In his years at Farben, Mark worked on the first serious attempts at the commercialization of polystyrene, polyvinyl chloride, polyvinyl alcohol, and the first synthetic rubbers. Mark helped make Farben a leader in manufacturing and distribution of new polymers and co-polymers.
Mark's son Hans Mark (1929–2021) later became an American government official.
With the rise of Nazi power, Mark's plant manager recognised that as a foreigner and the son of a Jewish father he would be most vulnerable. Mark took his manager's advice and accepted a position as professor of physical chemistry at the University of Vienna, which brought him back to the city where he grew up. Mark's stay in Vienna lasted six very successful years during which he designed a new curriculum in polymer chemistry and continued research in the field of macromolecules.
In September 1937, Mark met C.B. Thorne, an official with the Canadian International Pulp and Paper Company, in Dresden. At the meeting, Thorne offered Mark a position as research manager with the company in Hawkesbury, Ontario, Canada, with the goal of modernizing its production of wood pulp for the purpose of making rayon, cellulose acetate, and cellophane. Mark replied that he was busy but that he would try to visit Canada the following year to help reorganize the company's research facilities.
Escape from Nazi Europe
In early 1938 Mark began preparing to leave Austria by delegating his administrative duties to colleagues. At the same time he clandestinely started to buy platinum wire, worth roughly $50,000, which he bent into coat hangers while his wife knitted covers so that the hangers could be taken out of the country.
When Hitler's troops invaded Austria and declared the Anschluss (the political union of Germany and Austria), Mark was arrested and thrown into a Gestapo prison. He was released with a warning not to contact anyone Jewish. He was also stripped of his passport. He retrieved his passport by paying a bribe equal to a year's salary, and he obtained a visa to enter Canada and transit visas through Switzerland, France, and England. At the end of April, Mark and his family mounted a Nazi flag on the radiator of their car, strapped ski equipment on the roof, and drove across the border, reaching Zurich the next day. From there, the family traveled to England via France, and in September, Mark, temporarily leaving his family behind, boarded a boat to Montreal.
United States
From Canada, Mark went to the United States, where he joined the Polytechnic Institute of Brooklyn. There he established a strong polymer program which included not only research but the first undergraduate polymer education in the United States.
Some of Mark's earliest work at the Brooklyn Polytechnic involved experiments with reinforcing ice by mixing water with wood pulp or cotton wool before freezing. In 1942, the results of these experiments were later passed to Max Perutz who had been a student of Mark in Vienna, but was now in the UK. Max Perutz's work would lead to the development of Pykrete.
In 1946, Mark established the Polymer Research Institute at Polytechnic Institute of Brooklyn, the first research facility in the United States dedicated to polymer research. Mark is also recognized as a pioneer in establishing curriculum and pedagogy for the field of polymer science. In 1950, the POLY division of the American Chemical Society was formed, and has since grown to the second-largest division in this association with nearly 8,000 members.
In 2003, the American Chemical Society designated the Polymer Research Institute as a National Historic Chemical Landmark.
Decorations and awards
1960: William H. Nichols Medal
1965: Austrian Decoration for Science and Art
1966: Foreign member of the Soviet Academy of Sciences
1966: Elliott Cresson Medal
1972: Chemical Pioneer Award
1975: Willard Gibbs Award
1975: Aachen and Munich Prize for Technology and Applied Sciences
1976: Harvey Prize
1979: National Medal of Science (United States)
1979: Wolf Prize in Chemistry
1980: Colwyn medal
1980: Perkin Medal
1988: Charles Goodyear Medal
Books
Giant Molecules (Series: LIFE Science Library) (1966)
Encyclopedia of Polymer Science and Technology 1st. Ed. 1964 4th. Ed. 2007
References
Notes
General references
External links
Interview (in German) with Hermann Mark in the online archive of the Österreichische Mediathek
Scientists from Vienna
1895 births
1992 deaths
20th-century American chemists
Austrian chemists
Austrian Lutherans
Jewish American scientists
Jewish chemists
Jewish emigrants from Austria after the Anschluss to the United States
Polymer scientists and engineers
Foreign members of the USSR Academy of Sciences
Foreign members of the Russian Academy of Sciences
National Medal of Science laureates
Recipients of the Austrian Decoration for Science and Art
Wolf Prize in Chemistry laureates
Polytechnic Institute of New York University faculty
20th-century Lutherans
20th-century American Jews
Austro-Hungarian military personnel of World War I
Fellows of the American Physical Society
Presidents of the Society of Rheology | Herman Francis Mark | [
"Chemistry",
"Materials_science"
] | 1,508 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
10,055,024 | https://en.wikipedia.org/wiki/Gravitational%20compression | In astrophysics, gravitational compression is a phenomenon in which gravity, acting on the mass of an object, compresses it, reducing its size and increasing the object's density.
At the center of a planet or star, gravitational compression produces heat by the Kelvin–Helmholtz mechanism. This is the mechanism that explains how Jupiter continues to radiate heat produced by its gravitational compression.
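As a rough numerical sketch of the Kelvin–Helmholtz mechanism, the contraction timescale t ≈ GM²/(RL) can be evaluated for the Sun; this is a standard order-of-magnitude estimate rather than a figure taken from this article.

```python
# Order-of-magnitude Kelvin-Helmholtz (gravitational contraction) timescale for the Sun,
# t ~ G * M**2 / (R * L). Constants are standard approximate solar values.

G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m
L_sun = 3.828e26     # W
year = 3.156e7       # s

t_kh = G * M_sun ** 2 / (R_sun * L_sun)
print(t_kh / year)   # roughly 3e7 years
```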
The most common reference to gravitational compression is stellar evolution. The Sun and other main-sequence stars are produced by the initial gravitational collapse of a molecular cloud. Assuming the mass of the material is large enough, gravitational compression reduces the size of the core, increasing its temperature until hydrogen fusion can begin. This hydrogen-to-helium fusion reaction releases energy that balances the inward gravitational pressure and the star becomes stable for millions of years. No further gravitational compression occurs until the hydrogen is nearly used up, reducing the thermal pressure of the fusion reaction. At the end of the Sun's life, gravitational compression will turn it into a white dwarf.
At the other end of the scale are massive stars. These stars burn their fuel very quickly, ending their lives as supernovae, after which further gravitational compression will produce either a neutron star or a black hole from the remnants.
For planets and moons, equilibrium is reached when gravitational compression is balanced by a pressure gradient acting in the opposite direction. This pressure gradient arises from the strength of the material, and once the balance is established, further gravitational compression ceases.
References
Astrophysics | Gravitational compression | [
"Physics",
"Astronomy"
] | 301 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
10,058,495 | https://en.wikipedia.org/wiki/Admittance%20parameters | Admittance parameters or Y-parameters (the elements of an admittance matrix or Y-matrix) are properties used in many areas of electrical engineering, such as power, electronics, and telecommunications. These parameters are used to describe the electrical behavior of linear electrical networks. They are also used to describe the small-signal (linearized) response of non-linear networks. Y parameters are also known as short circuited admittance parameters. They are members of a family of similar parameters used in electronic engineering, other examples being: S-parameters, Z-parameters, H-parameters, T-parameters or ABCD-parameters.
The Y-parameter matrix
A Y-parameter matrix describes the behaviour of any linear electrical network that can be regarded as a black box with a number of ports. A port in this context is a pair of electrical terminals carrying equal and opposite currents into and out of the network, and having a particular voltage between them. The Y-matrix gives no information about the behaviour of the network when the currents at any port are not balanced in this way (should this be possible), nor does it give any information about the voltage between terminals not belonging to the same port. Typically, it is intended that each external connection to the network is between the terminals of just one port, so that these limitations are appropriate.
For a generic multi-port network definition, it is assumed that each of the ports is allocated an integer n ranging from 1 to N, where N is the total number of ports. For port n, the associated Y-parameter definition is in terms of the port voltage and port current, Vn and In respectively.
For all ports the currents may be defined in terms of the Y-parameter matrix and the voltages by the following matrix equation:
I = Y V
where Y is an N × N matrix the elements of which can be indexed using conventional matrix notation. In general the elements of the Y-parameter matrix are complex numbers and functions of frequency. For a one-port network, the Y-matrix reduces to a single element, being the ordinary admittance measured between the two terminals.
Two-port networks
The Y-parameter matrix for the two-port network is probably the most common. In this case the relationship between the port voltages, port currents and the Y-parameter matrix is given by:
I1 = Y11 V1 + Y12 V2
I2 = Y21 V1 + Y22 V2
where
Y11 = I1/V1 evaluated with V2 = 0, Y12 = I1/V2 with V1 = 0, Y21 = I2/V1 with V2 = 0, and Y22 = I2/V2 with V1 = 0.
For the general case of an N-port network, Ynm = In/Vm evaluated with Vk = 0 for all k ≠ m, that is, with all other ports short-circuited.
Admittance relations
The input admittance of a two-port network is given by:
Yin = Y11 − Y12 Y21 / (Y22 + YL)
where YL is the admittance of the load connected to port two.
Similarly, the output admittance is given by:
Yout = Y22 − Y12 Y21 / (Y11 + YS)
where YS is the admittance of the source connected to port one.
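A small numerical sketch of these relations, using an assumed Y-matrix and assumed load and source admittances, might look like this:

```python
# Input/output admittance of a two-port from its Y-parameters.
# The Y-matrix and the load/source admittances are arbitrary example values (siemens).
import numpy as np

def input_admittance(Y, Y_load):
    return Y[0, 0] - Y[0, 1] * Y[1, 0] / (Y[1, 1] + Y_load)

def output_admittance(Y, Y_source):
    return Y[1, 1] - Y[0, 1] * Y[1, 0] / (Y[0, 0] + Y_source)

Y = np.array([[0.02 + 0.001j, -0.001j],
              [0.05 + 0.002j,  0.01 + 0.003j]])
print(input_admittance(Y, Y_load=0.02))
print(output_admittance(Y, Y_source=0.02))
```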
Relation to S-parameters
The Y-parameters of a network are related to its S-parameters by
Y = √y (1N − S) (1N + S)^(−1) √y
and
Y = √y (1N + S)^(−1) (1N − S) √y
where 1N is the identity matrix, √y is a diagonal matrix having the square root of the characteristic admittance (the reciprocal of the characteristic impedance) at each port as its non-zero elements,
and √z is the corresponding diagonal matrix of square roots of characteristic impedances. In these expressions the matrices represented by the bracketed factors commute and so, as shown above, may be written in either order.
Two port
In the special case of a two-port network, with the same and real characteristic admittance at each port, the above expressions reduce to
where the denominator is common to all four expressions.
The above expressions will generally use complex numbers for the S- and Y-parameters. Note that the denominator can become 0 for specific values of the S-parameters, so the division in the calculations of the Y-parameters may lead to a division by 0.
The two-port S-parameters may also be obtained from the equivalent two-port Y-parameters by means of the following expressions.
where the denominator is again common to all four expressions, and Z0 is the characteristic impedance at each port (assumed to be the same for the two ports).
Relation to Z-parameters
Conversion from Z-parameters to Y-parameters is much simpler, as the Y-parameter matrix is just the inverse of the Z-parameter matrix. The following expressions show the applicable relations:
where |Z| is the determinant of the Z-parameter matrix.
Vice versa, the Y-parameters can be used to determine the Z-parameters, essentially using the same expressions, since
Y = Z⁻¹
and
Z = Y⁻¹.
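A minimal numerical sketch of this conversion, using an arbitrary example Z-matrix, is:

```python
# Z <-> Y conversion by matrix inversion, as described above.
# The example Z-matrix values are arbitrary (ohms).
import numpy as np

Z = np.array([[50.0 + 10j, 5.0 - 2j],
              [5.0 - 2j, 30.0 + 5j]])

Y = np.linalg.inv(Z)          # admittance matrix, siemens
Z_back = np.linalg.inv(Y)     # inverting twice recovers the original Z
print(np.allclose(Z, Z_back))  # True
```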
See also
Nodal admittance matrix
Scattering parameters
Impedance parameters
Two-port network
Hybrid-pi model
Power gain
Notes
References
Two-port networks
Transfer functions
de:Zweitor#Zweitorgleichungen und Parameter | Admittance parameters | [
"Engineering"
] | 868 | [
"Two-port networks",
"Electronic engineering"
] |
10,058,792 | https://en.wikipedia.org/wiki/Backlash%20%28engineering%29 | In mechanical engineering, backlash, sometimes called lash, play, or slop, is a clearance or lost motion in a mechanism caused by gaps between the parts. It can be defined as "the maximum distance or angle through which any part of a mechanical system may be moved in one direction without applying appreciable force or motion to the next part in mechanical sequence."p. 1-8 An example, in the context of gears and gear trains, is the amount of clearance between mated gear teeth. It can be seen when the direction of movement is reversed and the slack or lost motion is taken up before the reversal of motion is complete. It can be heard from the railway couplings when a train reverses direction. Another example is in a valve train with mechanical tappets, where a certain range of lash is necessary for the valves to work properly.
Depending on the application, backlash may or may not be desirable. Some amount of backlash is unavoidable in nearly all reversing mechanical couplings, although its effects can be negated or compensated for. In many applications, the theoretical ideal would be zero backlash, but in actual practice some backlash must be allowed to prevent jamming. Reasons for specifying a requirement for backlash include allowing for lubrication, manufacturing errors, deflection under load, and thermal expansion. A principal cause of undesired backlash is wear.
Gears
Factors affecting the amount of backlash required in a gear train include errors in profile, pitch, tooth thickness, helix angle and center distance, and run-out. The greater the accuracy the smaller the backlash needed. Backlash is most commonly created by cutting the teeth deeper into the gears than the ideal depth. Another way of introducing backlash is by increasing the center distances between the gears.
Backlash due to tooth thickness changes is typically measured along the pitch circle and is defined by:
b = ti − ta
where:
b is the backlash due to tooth thickness modifications, ti is the tooth thickness for ideal (zero-backlash) gearing, and ta is the actual tooth thickness, both measured along the pitch circle.
Backlash, measured on the pitch circle, due to operating center modifications is defined by:
b = 2 (ΔC) tan(φ)
where:
ΔC is the change in operating center distance and φ is the pressure angle. The amount of backlash required also depends on the speed of the machine and on the material in the machine.
Standard practice is to make allowance for half the backlash in the tooth thickness of each gear. However, if the pinion (the smaller of the two gears) is significantly smaller than the gear it is meshing with then it is common practice to account for all of the backlash in the larger gear. This maintains as much strength as possible in the pinion's teeth. The amount of additional material removed when making the gears depends on the pressure angle of the teeth. For a 14.5° pressure angle the extra distance the cutting tool is moved in equals the amount of backlash desired. For a 20° pressure angle the distance equals 0.73 times the amount of backlash desired.
As a rule of thumb, the average backlash is defined as 0.04 divided by the diametral pitch; the minimum is 0.03 divided by the diametral pitch and the maximum 0.05 divided by the diametral pitch. In metric units, the same values are simply multiplied by the module instead: an average backlash of 0.04 times the module, a minimum of 0.03 times the module, and a maximum of 0.05 times the module (in millimetres).
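A small sketch of these rule-of-thumb figures, for both diametral pitch (inch units) and module (millimetres), might look like this; the example gear sizes are arbitrary.

```python
# Rule-of-thumb backlash ranges quoted above. These are guidelines, not a standard.

def backlash_from_diametral_pitch(dp):
    """Return (minimum, average, maximum) backlash in inches for diametral pitch dp."""
    return 0.03 / dp, 0.04 / dp, 0.05 / dp

def backlash_from_module(m):
    """Return (minimum, average, maximum) backlash in millimetres for module m."""
    return 0.03 * m, 0.04 * m, 0.05 * m

print(backlash_from_diametral_pitch(10))   # e.g. 0.003-0.005 in for 10 DP
print(backlash_from_module(2.5))           # e.g. 0.075-0.125 mm for module 2.5
```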
In a gear train, backlash is cumulative. When a gear-train is reversed the driving gear is turned a short distance, equal to the total of all the backlashes, before the final driven gear begins to rotate. At low power outputs, backlash results in inaccurate calculation from the small errors introduced at each change of direction; at large power outputs backlash sends shocks through the whole system and can damage teeth and other components.
Anti-backlash designs
In certain applications, backlash is an undesirable characteristic and should be minimized.
Gear trains where positioning is key but power transmission is light
The best example here is an analog radio tuner dial where one may make precise tuning movements both forwards and backwards. Specialized gear designs allow this. One of the more common designs splits the gear into two gears, each half the thickness of the original.
One half of the gear is fixed to its shaft while the other half of the gear is allowed to turn on the shaft, but pre-loaded in rotation by small coil springs that rotate the free gear relative to the fixed gear. In this way, the spring compression rotates the free gear until all of the backlash in the system has been taken out; the teeth of the fixed gear press against one side of the teeth of the pinion while the teeth of the free gear press against the other side of the teeth on the pinion. Loads smaller than the force of the springs do not compress the springs and with no gaps between the teeth to be taken up, backlash is eliminated.
Leadscrews where positioning and power are both important
Another area where backlash matters is in leadscrews. Again, as with the gear train example, the culprit is lost motion when reversing a mechanism that is supposed to transmit motion accurately. Instead of gear teeth, the context is screw threads. The linear sliding axes (machine slides) of machine tools are an example application.
Most machine slides for many decades, and many even today, have been simple (but accurate) cast-iron linear bearing surfaces, such as a dovetail- or box-slide, with an Acme leadscrew drive. With just a simple nut, some backlash is inevitable. On manual (non-CNC) machine tools, a machinist's means for compensating for backlash is to approach all precise positions using the same direction of travel, that is, if they have been dialing left, and next want to move to a rightward point, they will move rightward past it, then dial leftward back to it; the setups, tool approaches, and toolpaths must in that case be designed within this constraint.
The next-more complex method than the simple nut is a split nut, whose halves can be adjusted and locked with screws so that one side rides against the leftward-facing thread faces while the other side rides against the rightward-facing faces. Notice the analogy here with the radio dial example using split gears, where the split halves are pushed in opposing directions. Unlike in the radio dial example, the spring tension idea is not useful here, because machine tools taking a cut put too much force against the screw. Any spring light enough to allow slide movement at all would allow cutter chatter at best and unwanted slide movement at worst. These screw-adjusted split-nut-on-an-Acme-leadscrew designs cannot eliminate all backlash on a machine slide unless they are adjusted so tight that the travel starts to bind. Therefore, this idea can't totally obviate the always-approach-from-the-same-direction concept; nevertheless, backlash can be held to a small amount (1 or 2 thousandths of an inch), which is more convenient, and in some non-precise work is enough to allow one to "ignore" the backlash, i.e., to design as if there were none.
CNCs can be programmed to use the always-approach-from-the-same-direction concept, but that is not the normal way they are used today, because hydraulic anti-backlash split nuts, and newer forms of leadscrew than Acme/trapezoidal -- such as recirculating ball screws -- effectively eliminate the backlash. The axis can move in either direction without the go-past-and-come-back motion.
The simplest CNCs, such as microlathes or manual-to-CNC conversions, which use nut-and-Acme-screw drives can be programmed to correct for the total backlash on each axis, so that the machine's control system will automatically move the extra distance required to take up the slack when it changes directions. This programmatic "backlash compensation" is a cheap solution, but professional grade CNCs use the more expensive backlash-eliminating drives mentioned above. This allows them to do 3D contouring with a ball-nosed endmill, for example, where the endmill travels around in many directions with constant rigidity and without delays.
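A toy sketch of this kind of programmatic backlash compensation is shown below; it is an illustration of the idea, with an assumed backlash value, not code for any real CNC controller.

```python
# Toy programmatic backlash compensation: when the commanded direction of an axis
# reverses, the controller adds the measured backlash to the move so the table
# actually travels the requested distance.

class Axis:
    def __init__(self, backlash):
        self.backlash = backlash      # measured lost motion, in mm (assumed value)
        self.last_direction = 0       # +1, -1, or 0 (no move yet)

    def compensated_move(self, delta):
        """Return the motor move needed so the slide actually travels `delta`."""
        direction = (delta > 0) - (delta < 0)
        extra = 0.0
        if direction != 0 and self.last_direction != 0 and direction != self.last_direction:
            extra = self.backlash * direction   # take up the slack on reversal
        if direction != 0:
            self.last_direction = direction
        return delta + extra

x = Axis(backlash=0.05)
print(x.compensated_move(+10.0))   # 10.0   (first move, no compensation)
print(x.compensated_move(-4.0))    # -4.05  (reversal: slack taken up)
print(x.compensated_move(-1.0))    # -1.0   (same direction, no extra)
```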
In mechanical computers a more complex solution is required, namely a frontlash gearbox. This works by turning slightly faster when the direction is reversed to 'use up' the backlash slack.
Some motion controllers include backlash compensation. Compensation may be achieved by simply adding extra compensating motion (as described earlier) or by sensing the load's position in a closed loop control scheme. The dynamic response of backlash itself, essentially a delay, makes the position loop less stable and thus more prone to oscillation.
Minimum backlash
Minimum backlash is calculated as the minimum transverse backlash at the operating pitch circle allowable when the gear teeth with the greatest allowable functional tooth thickness are in mesh with the pinion teeth with their greatest allowable functional tooth thickness, at the smallest allowable center distance, under static conditions.
Backlash variation is defined as the difference between the maximum and minimum backlash occurring in a whole revolution of the larger of a pair of mating gears.
Applications
Backlash in gear couplings allows for slight angular misalignment.
There can be significant backlash in unsynchronized transmissions because of the intentional gap between the dogs in dog clutches. The gap is necessary to engage dogs when input shaft (engine) speed and output shaft (driveshaft) speed are imperfectly synchronized. If there was a smaller clearance, it would be nearly impossible to engage the gears because the dogs would interfere with each other in most configurations. In synchronized transmissions, synchromesh solves this problem.
However, backlash is undesirable in precision positioning applications such as machine tool tables. It can be minimized by choosing ball screws or leadscrews with preloaded nuts, and mounting them in preloaded bearings. A preloaded bearing uses a spring and/or a second bearing to provide a compressive axial force that maintains bearing surfaces in contact despite reversal of the load direction.
See also
Bauschinger effect
Harmonic drive
Hysteresis
List of gear nomenclature
References
Gears
Screws
Mechanical engineering | Backlash (engineering) | [
"Physics",
"Engineering"
] | 2,020 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
10,059,041 | https://en.wikipedia.org/wiki/Nociceptin%20receptor | The nociceptin opioid peptide receptor (NOP), also known as the nociceptin/orphanin FQ (N/OFQ) receptor or kappa-type 3 opioid receptor, is a protein that in humans is encoded by the OPRL1 (opioid receptor-like 1) gene. The nociceptin receptor is a member of the opioid subfamily of G protein-coupled receptors whose natural ligand is the 17 amino acid neuropeptide known as nociceptin (N/OFQ). This receptor is involved in the regulation of numerous brain activities, particularly instinctive and emotional behaviors. Antagonists targeting NOP are under investigation for their role as treatments for depression and Parkinson's disease, whereas NOP agonists have been shown to act as powerful, non-addictive painkillers in non-human primates.
Although NOP shares high sequence identity (~60%) with the ‘classical’ opioid receptors μ-OP (MOP), κ-OP (KOP), and δ-OP (DOP), it possesses little or no affinity for opioid peptides or morphine-like compounds. Likewise, classical opioid receptors possess little affinity towards NOP's endogenous ligand nociceptin, which is structurally related to dynorphin A.
Discovery
In 1994, Mollereau et al. cloned a receptor that was highly homologous to the classical opioid receptors (OPs) μ-OR (MOP), κ-OR (KOP), and δ-OR (DOP) that came to be known as the Nociceptin Opioid Peptide receptor (NOP). As these “classical” opioid receptors were identified 30 years earlier in the mid-1960s, the physiological and pharmacological characterization of NOP as well as therapeutic development targeting this receptor remain decades behind. Although research on NOP has blossomed into its own sub-field, the lack of widespread knowledge of NOP's existence means that it is commonly omitted from studies that investigate the OP family, despite its promising role as a therapeutic target.
Mechanism and pharmacology
NOP cellular signalling partners
Like most G-protein coupled receptors, NOP signals through canonical G proteins upon activation. G proteins are heterotrimeric complexes consisting of α, β, and γ subunits. NOP signals through a variety of Gα subtypes that trigger diverse downstream signaling cascades. NOP coupling to Gαi or Gαo subunits leads to an inhibition of adenylyl cyclase (AC) causing an intracellular decrease in cyclic adenosine monophosphate(cAMP) levels, an important second messenger for many signal transduction pathways. NOP acting through Gαi/o pathways has also been shown to activate Phospholipase A2 (PLA2), thereby initiating Mitogen-activated protein kinase (MAPK) signaling cascades. In contrast to classical OPs, NOP also couples to Pertussis toxin (PTX)-insensitive subtypes Gαz, Gα14, and Gα16, as well as potentially to Gα12 and Gαs. Activation of NOP's canonical β-arrestin pathway causes receptor phosphorylation, internalization, and eventual downregulation and recycling. NOP activation also causes indirect inhibition of opioid receptors MOP and KOP, resulting in anti-opioid activity in certain tissues. Additionally, NOP activation leads to the activation of potassium channels and inhibition of calcium channels which collectively inhibit neuronal firing.
Neuroanatomy
Nociceptin controls a wide range of biological functions ranging from nociception to food intake, from memory processes to cardiovascular and renal functions, from spontaneous locomotor activity to gastrointestinal motility, from anxiety to the control of neurotransmitter release at peripheral and central sites.
Pain circuitry
The outcome of NOP activation on the brain's pain circuitry is site-specific. Within the central nervous system its action can be either similar or opposite to those of opioids depending on their location. In animal models, activation of NOP in the brain stem and higher brain regions has mixed action, resulting in overall anti-opioid activity. NOP activation at the spinal cord and peripheral nervous system results in morphine-comparable analgesia in non-human primates.
Reward circuitry
NOP is highly expressed in every node of the mesocorticolimbic reward circuitry. Unlike MOP agonists such as codeine and morphine, NOP agonists do not have reinforcing effects. Nociceptin is thought to be an endogenous antagonist of dopamine transport that may act either directly on dopamine or by inhibiting GABA to affect dopamine levels. In animal models, the result of NOP activation in the central nervous system has been shown to eliminate conditioned place preference induced by morphine, cocaine, alcohol, and methamphetamine.
Therapeutic potential
Analgesia and abuse liability
Recent studies indicate that targeting NOP is a promising alternative route to relieving pain without the deleterious side effects of traditional MOP-activating opioid therapies. In primates, specifically activating NOP through systemic or intrathecal administration induces long-lasting, morphine-comparable analgesia without causing itch, respiratory depression, or the reinforcing effects that lead to addiction in an intravenous self-administration paradigm; thus eliminating all of the serious side-effects of current opioid therapies.
Several commonly used opioid drugs including etorphine and buprenorphine have been demonstrated to bind to nociceptin receptors, but this binding is relatively insignificant compared to their activity at other opioid receptors in the acute setting (however the non-analgesic NOPr antagonist SB-612,111 was demonstrated to potentiate the therapeutic benefits of morphine). Chronic administration of nociceptin receptor agonists results in an attenuation of the analgesic and anti-allodynic effects of opiates; this mechanism inhibits the action of endogenous opioids as well, resulting in an increase in pain severity, depression, and both physical and psychological opiate dependence following chronic NOPr agonist administration. Administration of the NOPr antagonist SB-612,111 has been shown to inhibit this process. More recently a range of selective ligands for NOP have been developed, which show little or no affinity to other opioid receptors and so allow NOP-mediated responses to be studied in isolation.
Agonists
AT-121 (Experimental agonist of both the μ-opioid and nociceptin receptors, showing promising results in non-human primates.)
Buprenorphine (partial agonist, not selective for NOP, also partial agonist of μ-opioid receptors, and competitive antagonist of δ-opioid and κ-opioid receptors)
BU08028 (Analogue of buprenorphine, partial agonist, agonist of μ-opioid receptor, has analgesic properties without physical dependence.)
Cebranopadol (full agonist at NOP, μ-opioid and δ-opioid receptors, partial agonist at κ-opioid receptor)
Etorphine
MCOPPB (full agonist)
MT-7716
Nociceptin
Norbuprenorphine (full agonist; non-selective (also full agonist at the MOR and DOR and partial agonist at the KOR); peripherally-selective)
NNC 63-0532
Ro64-6198
Ro65-6570
SCH-221,510
SR-8993
SR-16435 (mixed MOR / NOP partial agonist)
TH-030418
Antagonists
AT-076 (non-selective)
JTC-801
J-113,397
LY-2940094
SB-612,111
SR-16430
Thienorphine
Applications
NOP agonists are being studied as treatments for heart failure and migraine while nociceptin antagonists such as JTC-801 may have analgesic and antidepressant qualities.
References
Further reading
External links
G protein-coupled receptors | Nociceptin receptor | [
"Chemistry"
] | 1,757 | [
"G protein-coupled receptors",
"Signal transduction"
] |
10,059,094 | https://en.wikipedia.org/wiki/%CE%94-opioid%20receptor | The δ-opioid receptor, also known as delta opioid receptor or simply delta receptor, abbreviated DOR or DOP, is an inhibitory 7-transmembrane G-protein coupled receptor coupled to the G protein Gi/G0 and has enkephalins as its endogenous ligands. The regions of the brain where the δ-opioid receptor is largely expressed vary from species model to species model. In humans, the δ-opioid receptor is most heavily expressed in the basal ganglia and neocortical regions of the brain.
Function
The endogenous system of opioid receptors is well known for its analgesic potential; however, the exact role of δ-opioid receptor activation in pain modulation is largely up for debate. This also depends on the model at hand, since receptor activity is known to change from species to species. Activation of delta receptors produces analgesia, perhaps by acting as a significant potentiator of μ-opioid receptor agonists. Delta agonism appears to provide strong potentiation of any mu agonism, so even selective mu agonists can cause analgesia under the right conditions, whereas under other conditions they may produce none at all. It is also suggested, however, that the pain modulated by the μ-opioid receptor and that modulated by the δ-opioid receptor are distinct types, with the assertion that DOR modulates the nociception of chronic pain, while MOR modulates acute pain.
Evidence for whether delta agonists produce respiratory depression is mixed; high doses of the delta agonist peptide DPDPE produced respiratory depression in sheep. In contrast both the peptide delta agonist Deltorphin II and the non-peptide delta agonist (+)-BW373U86 actually stimulated respiratory function and blocked the respiratory depressant effect of the potent μ-opioid agonist alfentanil, without affecting pain relief. It thus seems likely that while δ-opioid agonists can produce respiratory depression at very high doses, at lower doses they have the opposite effect, a fact that may make mixed mu/delta agonists such as DPI-3290 potentially very useful drugs that might be much safer than the μ agonists currently used for pain relief. Many delta agonists may also cause seizures at high doses, although not all delta agonists produce this effect.
Of additional interest is the potential for delta agonists to be developed for use as a novel class of antidepressant drugs, following robust evidence of both antidepressant effects and also upregulation of BDNF production in the brain in animal models of depression. These antidepressant effects have been linked to endogenous opioid peptides acting at δ- and μ-opioid receptors, and so can also be produced by enkephalinase inhibitors such as RB-101. However, in human models the data for antidepressant effects remains inconclusive. In the 2008 Phase 2 clinical trial by Astra Zeneca, NCT00759395, 15 patients were treated with the selective delta agonist AZD 2327. The results showed no significant effect on mood suggesting that δ-opioid receptor modulation might not participate in the regulation of mood in humans. However, doses were administered at low doses and the pharmacological data also remains inconclusive. Further trials are required.
Another interesting aspect of δ-opioid receptor function is the suggestion of μ/δ-opioid receptor interactions. At the extremes of this suggestion lies the possibility of a μ/δ opioid receptor oligomer. The evidence for this stems from the different binding profiles of typical mu and delta agonists, such as morphine and DAMGO respectively, in cells that coexpress both receptors compared to those in cells that express them individually. In addition, work by Fan and coworkers shows the restoration of the binding profiles when distal carboxyl termini are truncated at either receptor, suggesting that the termini play a role in the oligomerization. While this is exciting, a rebuttal by Javitch and coworkers suggests the idea of oligomerization may be overplayed. Relying on RET (resonance energy transfer), Javitch and coworkers showed that RET signals were more characteristic of random proximity between receptors than of an actual bond formation between receptors, suggesting that discrepancies in binding profiles may be the result of downstream interactions rather than novel effects due to oligomerization. Nevertheless, coexpression of the receptors remains unique and potentially useful in the treatment of mood disorders and pain.
Recent work indicates that exogenous ligands that activate the delta receptors mimic the phenomenon known as ischemic preconditioning. Experimentally, if short periods of transient ischemia are induced, the downstream tissues are robustly protected if a longer-duration interruption of the blood supply is then effected. Opiates and opioids with DOR activity mimic this effect. In the rat model, introduction of DOR ligands results in significant cardioprotection.
Ligands
Until comparatively recently, there were few pharmacological tools for the study of δ receptors. As a consequence, our understanding of their function is much more limited than those of the other opioid receptors for which selective ligands have long been available.
However, there are now several selective δ-opioid receptor agonists available, including peptides such as DPDPE and deltorphin II, and non-peptide drugs such as SNC-80, the more potent (+)-BW373U86, a newer drug DPI-287, which does not produce the problems with convulsions seen with the earlier agents, and the mixed μ/δ agonist DPI-3290, which is a much more potent analgesic than the more highly selective δ agonists. Selective antagonists for the δ receptor are also available, with the best known being the opiate derivative naltrindole.
Agonists
Peptides
Leu-enkephalin
Met-enkephalin
Deltorphins
DADLE
DPDPE
Non-peptides
ADL-5859
BU-48
BW373U86
DPI-221
DPI-287
DPI-3290
RWJ-394674
SNC-80
TAN-67
Amoxapine (partial agonist)
Cannabidiol (allosteric modulator, non-selective)
Desmethylclozapine
Mitragynine
Mitragynine pseudoindoxyl
Norbuprenorphine (peripherally restricted)
N-Phenethyl-14-ethoxymetopon
7-Spiroindanyloxymorphone
Tetrahydrocannabinol (allosteric modulator, non-selective)
Xorphanol
Antagonists
Buprenorphine
Naltriben
Naltrindole
Mitragynine
7-Hydroxymitragynine
Interactions
δ-opioid receptors have been shown to interact with β2 adrenergic receptors, arrestin β1 and GPRASP1.
See also
κ-opioid receptor
μ-opioid receptor
References
Further reading
External links
G protein-coupled receptors
Opioid receptors | Δ-opioid receptor | [
"Chemistry"
] | 1,521 | [
"G protein-coupled receptors",
"Opioid receptors",
"Signal transduction"
] |
2,415,863 | https://en.wikipedia.org/wiki/Differential%20graded%20algebra | In mathematics – particularly in homological algebra, algebraic topology, and algebraic geometry – a differential graded algebra (or DGA, or DG algebra) is an algebraic structure often used to capture information about a topological or geometric space. Explicitly, a differential graded algebra is a graded associative algebra with a chain complex structure that is compatible with the algebra structure.
In geometry, the de Rham algebra of differential forms on a manifold has the structure of a differential graded algebra, and it encodes the de Rham cohomology of the manifold. In algebraic topology, the singular cochains of a topological space form a DGA encoding the singular cohomology. Moreover, American mathematician Dennis Sullivan developed a DGA to encode the rational homotopy type of topological spaces.
Definitions
Let A be a Z-graded algebra, with product ·, equipped with a map d : A → A of degree −1 (homologically graded) or degree +1 (cohomologically graded). We say that (A, d) is a differential graded algebra if d is a differential, giving A the structure of a chain complex or cochain complex (depending on the degree), and if d satisfies a graded Leibniz rule. In what follows, we will denote the "degree" of a homogeneous element a by |a|. Explicitly, the map d satisfies the conditions
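In the usual convention these two conditions read as follows (a sketch, written for the homologically graded case where d has degree −1):

```latex
d \circ d = 0, \qquad d(a \cdot b) = (da)\cdot b + (-1)^{|a|}\, a \cdot (db)
```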
Often one omits the differential and multiplication and simply writes A or (A, d) to refer to the DGA (A, ·, d).
A linear map f : V → W between graded vector spaces is said to be of degree n if f(V_i) ⊆ W_{i+n} for all i. When considering (co)chain complexes, we restrict our attention to chain maps, that is, maps of degree 0 that commute with the differentials. The morphisms in the category of DGAs are chain maps that are also algebra homomorphisms.
Categorical Definition
One can also define DGAs more abstractly using category theory. There is a category of chain complexes over a ring R, often denoted Ch(R), whose objects are chain complexes and whose morphisms are chain maps. We define the tensor product of chain complexes A and B by (A ⊗ B)_n = ⊕_{i+j=n} A_i ⊗ B_j,
with differential d(a ⊗ b) = (da) ⊗ b + (−1)^{|a|} a ⊗ (db).
This operation makes Ch(R) into a symmetric monoidal category. Then, we can equivalently define a differential graded algebra as a monoid object in Ch(R). Heuristically, it is an object in Ch(R) with an associative and unital multiplication.
Homology and Cohomology
Associated to any chain complex (A, d) is its homology. Since d ∘ d = 0, it follows that the image of d is a subobject of the kernel of d. Thus, we can form the quotient H_n(A) = ker(d : A_n → A_{n−1}) / im(d : A_{n+1} → A_n).
This is called the nth homology group, and all together these groups form a graded vector space H(A). In fact, the homology groups form a DGA with zero differential. Analogously, one can define the cohomology groups of a cochain complex, which also form a graded algebra with zero differential.
Every chain map f of complexes induces a map on (co)homology, often denoted f_* (respectively f^*). If this induced map is an isomorphism on all (co)homology groups, the map is called a quasi-isomorphism. In many contexts, this is the natural notion of equivalence one uses for (co)chain complexes. We say a morphism of DGAs is a quasi-isomorphism if the chain map on the underlying (co)chain complexes is.
Properties of DGAs
Commutative Differential Graded Algebras
A commutative differential graded algebra (or CDGA) is a differential graded algebra, (A, d), which satisfies a graded version of commutativity. Namely,
a · b = (−1)^{|a||b|} b · a for homogeneous elements a and b. Many of the DGAs commonly encountered in math happen to be CDGAs, like the de Rham algebra of differential forms.
Differential graded Lie algebras
A differential graded Lie algebra (or DGLA) is a differential graded analogue of a Lie algebra. That is, it is a differential graded vector space, (L, d), together with a bracket operation [·,·] : L ⊗ L → L, satisfying the following graded analogues of the Lie algebra axioms.
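In the most common convention, these axioms can be sketched as graded antisymmetry, the graded Jacobi identity, and compatibility of the bracket with the differential (the notation |·| for degree is the one used above):

```latex
[a,b] = -(-1)^{|a|\,|b|}\,[b,a], \qquad
[a,[b,c]] = [[a,b],c] + (-1)^{|a|\,|b|}\,[b,[a,c]], \qquad
d[a,b] = [da,b] + (-1)^{|a|}\,[a,db]
```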
An example of a DGLA is the de Rham algebra tensored with a Lie algebra , with the bracket given by the exterior product of the differential forms and Lie bracket; elements of this DGLA are known as Lie algebra–valued differential forms. DGLAs also arise frequently in the study of deformations of algebraic structures where, over a field of characteristic 0, "nice" deformation problems are described by the space of Maurer-Cartan elements of some suitable DGLA.
Formal DGAs
A (co)chain complex is called formal if there is a chain map to its homology (respectively cohomology), thought of as a complex with zero differential, that is a quasi-isomorphism. We say that a DGA is formal if there exists such a morphism of DGAs that is a quasi-isomorphism. This notion is important, for instance, when one wants to consider quasi-isomorphic chain complexes or DGAs as being equivalent, as in the derived category.
Examples
Trivial DGAs
Notice that any graded algebra has the structure of a DGA with trivial differential, i.e., d = 0. In particular, as noted above, the (co)homology of any DGA forms a trivial DGA, since it is a graded algebra.
The de-Rham algebra
Let M be a manifold. Then, the differential forms on M, denoted by Ω(M), naturally have the structure of a (cohomologically graded) DGA. The graded vector space is Ω(M) = ⊕_{k≥0} Ω^k(M), where the grading is given by form degree. This vector space has a product, given by the exterior product, which makes it into a graded algebra. Finally, the exterior derivative d satisfies d ∘ d = 0 and the graded Leibniz rule. In fact, the exterior product is graded-commutative, which makes the de Rham algebra an example of a CDGA.
Singular Cochains
Let X be a topological space. Recall that we can associate to X its complex of singular cochains with coefficients in a ring R, denoted C*(X; R), whose cohomology is the singular cohomology of X. On C*(X; R), one can define the cup product of cochains, which gives this cochain complex the structure of a DGA. In the case where X is a smooth manifold and R is the field of real numbers, the de Rham theorem states that the singular cohomology is isomorphic to the de Rham cohomology and, moreover, the cup product and exterior product of differential forms induce the same operation on cohomology.
Note, however, that while the cup product induces a graded-commutative operation on cohomology, it is not graded commutative directly on cochains. This is an important distinction, and the failure of a DGA to be commutative is referred to as the "commutative cochain problem". This problem is important because if, to any topological space X, one can associate a commutative DGA whose cohomology is the singular cohomology of X with rational coefficients, then this CDGA determines the rational homotopy type of X.
The Free DGA
Let V be a (non-graded) vector space over a field k. The tensor algebra T(V) is defined to be the graded algebra T(V) = ⊕_{n≥0} V^{⊗n},
where, by convention, we take V^{⊗0} = k. This vector space can be made into a graded algebra with the multiplication given by the tensor product. This is the free algebra on V, and can be thought of as the algebra of all non-commuting polynomials in the elements of V.
One can give the tensor algebra the structure of a DGA as follows. Let f be any linear map from V to the ground field. Then, this extends uniquely to a derivation of T(V) of degree −1 (homologically graded) by the formula sketched below.
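A sketch of this extension, under the assumption that f takes values in the ground field k (so that applying f to a tensor factor produces a scalar and that factor is omitted); the symbol δ for the extended derivation is chosen here only for illustration, and the hat denotes an omitted factor:

```latex
\delta(v_1 \otimes v_2 \otimes \cdots \otimes v_n)
  \;=\; \sum_{i=1}^{n} (-1)^{\,i-1}\, f(v_i)\;
        v_1 \otimes \cdots \otimes \widehat{v_i} \otimes \cdots \otimes v_n
```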
One can think of the minus signs on the right-hand side as coming from "jumping" the map f over the elements v_1, …, v_{i−1}, which are all of degree 1 in T(V). This is commonly referred to as the Koszul sign rule.
One can extend this construction to differential graded vector spaces. Let (V, d) be a differential graded vector space, i.e., d lowers degree by one and d ∘ d = 0. Here we work with a homologically graded DG vector space, but this construction works equally well for a cohomologically graded one. Then, we can endow the tensor algebra T(V) with a DGA structure which extends the DG structure on V. The differential is given by d(v_1 ⊗ ⋯ ⊗ v_n) = Σ_{i} (−1)^{|v_1| + ⋯ + |v_{i−1}|} v_1 ⊗ ⋯ ⊗ (dv_i) ⊗ ⋯ ⊗ v_n.
This is similar to the previous case, except that now the elements of V can have different degrees, and T(V) is no longer graded by the number of tensor factors but instead by the sum of the degrees of the elements of V, i.e., |v_1 ⊗ ⋯ ⊗ v_n| = |v_1| + ⋯ + |v_n|.
The Free CDGA
Similar to the previous case, one can also construct the free CDGA. Given a graded vector space V, we define the free graded commutative algebra on it by Λ(V) = Sym(V^even) ⊗ ⋀(V^odd),
where Sym denotes the symmetric algebra (on the even-degree part) and ⋀ denotes the exterior algebra (on the odd-degree part). If we begin with a DG vector space V (either homologically or cohomologically graded), then we can extend the differential to Λ(V) such that (Λ(V), d) is a CDGA in a unique way.
Models for DGAs
As mentioned previously, oftentimes one is most interested in the (co)homology of a DGA. As such, the specific (co)chain complex we use is less important, as long as it has the right (co)homology. Given a DGA A, we say that another DGA M is a model for A if it comes with a surjective DGA morphism M → A that is a quasi-isomorphism.
Minimal Models
Since one could form arbitrarily large (co)chain complexes with the same cohomology, it is useful to consider the "smallest" possible model of a DGA. We say that a DGA M is minimal if it satisfies the following conditions.
Note that some conventions, often used in algebraic topology, additionally require that M be simply connected, meaning that its degree-0 component is the ground field and its degree-1 component vanishes. This condition on the 0th and 1st degree components of M mirrors the (co)homology groups of a simply connected space.
Finally, we say that M is a minimal model for a DGA A if it is both minimal and a model for A. The fundamental theorem of minimal models states that if A is simply connected then it admits a minimal model, and that if a minimal model exists it is unique up to (non-unique) isomorphism.
The Sullivan minimal model
Minimal models were used with great success by Dennis Sullivan in his work on rational homotopy theory. Given a simplicial complex X, one can define a rational analogue of the (real) de Rham algebra: the DGA of "piecewise polynomial" differential forms with rational coefficients. This DGA has the structure of a CDGA over the field of rational numbers, and in fact its cohomology is isomorphic to the singular cohomology of X. In particular, if X is a simply connected topological space then this CDGA is simply connected as a DGA, thus there exists a minimal model.
Moreover, since it is a CDGA whose cohomology is the singular cohomology of X with rational coefficients, it is a solution to the commutative cochain problem. Thus, if X is a simply connected CW complex with finite dimensional rational homology groups, the minimal model of this CDGA captures entirely the rational homotopy type of X.
See also
Differential graded Lie algebra
Rational homotopy theory
Homotopy associative algebra
Notes
References
Algebras
Homological algebra
Algebraic topology
Algebraic geometry
Commutative algebra
Differential algebra | Differential graded algebra | [
"Mathematics"
] | 2,273 | [
"Differential algebra",
"Mathematical structures",
"Algebras",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Algebraic structures",
"Category theory",
"Algebraic geometry",
"Commutative algebra",
"Homological algebra"
] |
2,416,422 | https://en.wikipedia.org/wiki/Cross-linked%20polyethylene | Cross-linked polyethylene, commonly abbreviated PEX, XPE or XLPE, is a form of polyethylene with cross-links. It is used predominantly in building services pipework systems, hydronic radiant heating and cooling systems, domestic water piping, insulation for high tension (high voltage) electrical cables, and baby play mats. It is also used for natural gas and offshore oil applications, chemical transportation, and transportation of sewage and slurries. PEX is an alternative to polyvinyl chloride (PVC), chlorinated polyvinyl chloride (CPVC) or copper tubing for use as residential water pipes.
Properties
Low-temperature impact strength, abrasion resistance and environmental stress cracking resistance can be increased significantly by crosslinking, whereas hardness and rigidity are somewhat reduced. Compared to thermoplastic polyethylene, PEX does not melt (analogous to elastomers) and is thermally resistant (over longer periods of up to 120 °C, for short periods without electrical or mechanical load up to 250 °C). With increasing crosslinking density also the maximum shear modulus increases (even at higher temperatures). PEX has significantly enhanced properties compared with ordinary PE.
Almost all PEX used for pipe and tubing is made from high-density polyethylene (HDPE). PEX contains cross-linked bonds in the polymer structure, changing the thermoplastic to a thermoset. Cross-linking is accomplished during or after the extrusion of the tubing. The required degree of cross-linking, according to ASTM Standard F876, is between 65% and 89%. A higher degree of cross-linking could result in brittleness and stress cracking of the material, while a lower degree of cross-linking could result in product with poorer physical properties.
PEX has significantly enhanced properties compared to ordinary PE. This is due to the introduction of crosslinks in the system, which can significantly improve the chemical, thermal, and mechanical properties of the polymer. While HDPE and PEX both display increases in the initial tangent modulus and yield stress under temperature or strain-rate increases when undergoing compression, HDPE tends to exhibit flow behavior after reaching a higher yield stress and PEX tends to exhibit strain-hardening after reaching its slightly lower yield stress. The latter exhibits some flow behavior but only after reaching higher true strains. The behavior observed in PEX is also mimicked by the thermoplastic ultra-high molecular weight polyethylene (UHMWPE). However, PEX displays a stronger temperature and strain-rate dependence than UHMWPE. Additionally, PEX is notable for its high thermal stability. It displays improved creep behavior (i.e. resists creep deformation) and maintains high strength and hardness at very high temperatures compared to thermoplastic polyethylene.
The type of initial polymer structure and amount of crosslinking can have a large impact on the resulting mechanical properties of PEX. One study looked at the effect of crosslinking low-density polyethylene (LDPE) with different amounts of dicumyl peroxide (DCP). It was found that increasing the weight percent of the peroxide crosslinker resulted in a lower degree of crystallinity, as observed via differential scanning calorimetry (DSC). The degree to which a polymer crystallizes and crosslinks can have a significant impact on its properties, and it was indeed found that the increase in crosslinking degree and corresponding decrease in crystallinity correlated to a lower elongation at break. It was suggested that this was due to the higher presence of chemical crosslinks (the peroxides) compared to the physical crosslinks (formed by the crystallites), as chemical crosslinks tend to inhibit the elongation behavior of polymers. Additionally, it was found that the maximum tensile strength tended to increase since the intermolecular forces between chains increases with additional crosslinks. Similar results have been found with the addition of silane crosslinkers. In another study, the amount of silane crosslinker added to linear low-density polyethylene (LLDPE) was varied. The resulting Young's modulus and maximum tensile strength increased with crosslinker concentration but the elongation at break decreased due to decreases in crystallinity. The presence of fillers can further strengthen PEX's mechanical properties. In the same study, the researchers looked at the effect of adding a filler known as montmorillonite (MMT) nanoclay and observed even higher Young's moduli and tensile strengths, indicating a strong interfacial interaction between the silane crosslinked LLDPE and the MMT.
Almost all cross-linkable polyethylene compounds (XLPE) for wire and cable applications are based on LDPE. XLPE-insulated cables have a rated maximum conductor temperature of 90 °C and an emergency rating up to 140 °C, depending on the standard used. They have a conductor short-circuit rating of 250 °C. XLPE has excellent dielectric properties, making it useful for medium voltage—1 to 69 kV AC, and high-voltage cables—up to 380 kV AC-voltage, and several hundred kV DC.
Numerous modifications in the basic polymer structure can be made to maximize productivity during the manufacturing process. For medium voltage applications, reactivity can be boosted significantly. This results in higher line speeds in cases where limitations in either the curing or cooling processes within the continuous vulcanization (CV) tubes used to cross-link the insulation. This is particularly useful for high-voltage cable and extra-high voltage cable applications, where degassing requirements can significantly lengthen cable manufacturing time.
Preparation methods
Various methods can be used to prepare PEX from thermoplastic polyethylene (PE-LD, PE-LLD or PE-HD). The first PEX material was prepared in the 1930s, by irradiating the extruded tube with an electron beam. The electron beam processing method was made feasible in the 1970s, but was still expensive. In the 1960s, Engel cross-linking was developed. In this method, a peroxide is mixed with the HDPE before extruding. In 1968, the Sioplas process using silicon hydride (silane) was patented, followed by another silane-based process, Monosil, in 1974. A process using vinylsilane followed in 1986.
Types of crosslinking
A basic distinction is made between peroxide crosslinking (PE-Xa), silane crosslinking (PE-Xb), electron beam crosslinking (PE-Xc) and azo crosslinking (PE-Xd).
Shown are the peroxide, the silane and irradiation crosslinking. In each method, a hydrogen atom is removed from the polyethylene chain (top center), either by radiation or by peroxides (R-O-O-R), forming a radical. Then, two radical chains can crosslink, either directly (bottom left) or indirectly via silane compounds (bottom right).
Peroxide crosslinking (PE-Xa): The crosslinking of polyethylene using peroxides (e.g. dicumyl peroxide or di-tert-butyl peroxide) is still of major importance. In the so-called Engel process, a mixture of HDPE and 2% peroxide is at first mixed at low temperatures in an extruder and then crosslinked at high temperatures (between 200 °C and 250 °C). The peroxide decomposes to peroxide radicals (RO•), which abstract (remove) hydrogen atoms from the polymer chain, leading to radicals. When these combine, a crosslinked network is formed. The resulting polymer network is uniform, of low tension and high flexibility, whereby it is softer and tougher than (the irradiated) PE-Xc. The same process is used for LDPE as well, though the temperature may vary from 160 °C to 220 °C.
Silane crosslinking (PE-Xb): In the presence of silanes (e.g. trimethoxyvinylsilane), polyethylene can initially be Si-functionalized by irradiation or by a small amount of a peroxide. Later, Si-OH groups can be formed in a water bath by hydrolysis; these then condense and crosslink the PE by the formation of Si-O-Si bridges.[16] Catalysts such as dibutyltin dilaurate may accelerate the reaction.
Irradiation crosslinking (PE-Xc): The crosslinking of polyethylene is also possible by a downstream radiation source (usually an electron accelerator, occasionally an isotopic radiator). PE products are crosslinked below the crystalline melting point by splitting off hydrogen atoms. β-radiation possesses a penetration depth of 10 mm, ɣ-radiation 100 mm. Thereby the interior or specific areas can be excluded from the crosslinking. However, due to high capital and operating costs radiation crosslinking plays only a minor role compared with the peroxide crosslinking. In contrast to peroxide crosslinking, the process is carried out in the solid state. Thereby, the cross-linking takes place primarily in the amorphous regions, while the crystallinity remains largely intact.
Azo crosslinking (PE-Xd): In the so-called Lubonyl process, polyethylene is crosslinked by pre-added azo compounds after extrusion in a hot salt bath.
Degree of crosslinking
A low degree of crosslinking leads initially only to a multiplication of the molecular weight. The individual macromolecules are not linked and no covalent network is formed yet. Polyethylene that consists of those large molecules behaves similar to polyethylene of ultra high molecular weight (PE-UHMW), i.e. like a thermoplastic elastomer.
Upon further crosslinking (crosslinking degree about 80%), the individual macromolecules are eventually connected to a network. This crosslinked polyethylene (PE-X) is chemically seen a thermoset, it shows above the melting point rubber-elastic behavior and cannot be processed in the melt anymore.
The degree of crosslinking (and hence the extent of the change) is different in intensity depending on the process. According to DIN 16892 (a quality requirement for pipes made of PE-X) at least the following degree of crosslinking must be achieved:
in peroxide crosslinking (PE-Xa): 75%
with silane crosslinking (PE-Xb): 65%
with electron beam crosslinking (PE-Xc): 60%
in azo crosslinking (PE-Xd): 60%
Classification
North America
All PEX pipe is manufactured with its design specifications listed directly on the pipe. These specifications are listed to explain the pipe's many standards as well as giving specific detailing about the manufacturer. The reason that all these specifications are given, are so that the installer is aware if the product is meeting standards for the necessary local codes. The labeling ensures the user that the tubing is up to all the standards listed.
Materials used in PEX pipes in North America are defined by cell classifications that are described in ASTM standards, the most common being ASTM F876. Cell classifications for PEX include 0006, 0008, 1006, 1008, 3006, 3008, 5006 and 5008, the most common being 5006. Classifications 0306, 3306, 5206 and 5306 are also common, these materials containing ultraviolet blockers and/or inhibitors for limited UV resistance. In North America all PEX tubing products are manufactured to ASTM, NSF and CSA product standards, among them the aforementioned ASTM standard F876 as well as F877, NSF International standards NSF 14 and NSF 61 ("NSF-pw"), and Canadian Standards Association standard B137.5, to which the pipes are tested, certified and listed. The listings and certifications met by each product appear on the printline of the pipe or tubing to ensure the product is used in the proper applications for which it was designed.
Europe
In European standards, there are three classifications, referred to as PEX-A, -B, and -C. The classes are not related to any type of rating system.
PEX-A (PE-Xa, PEXa)
PEX-A is produced by the peroxide (Engel) method. This method performs "hot" cross-linking, above the crystal melting point. However, the process takes slightly longer than the other two methods as the polymer has to be kept at high temperature and pressure for long periods during the extrusion process. The cross-linked bonds are between carbon atoms.
PEX-B (PE-Xb, PEXb)
The silane method, also called the "moisture cure" method, results in PEX-B. In this method, cross-linking is performed in a secondary post-extrusion process, producing cross-links between a cross-linking agent. The process is accelerated with heat and moisture. The cross-linked bonds are formed through silanol condensation between two grafted vinyltrimethoxysilane (VTMS) units, connecting the polyethylene chains with C-C-Si-O-Si-C-C bridges.
PEX-C (PE-Xc, PEXc)
PEX-C is produced through electron beam processing, in a "cold" cross-linking process (below the crystal melting point). It provides less uniform, lower-degree cross-linking than the Engel method, especially at tube diameters over one inch (2.5 cm). When the process is not controlled properly, the outer layer of the tube may become brittle. However, it is the cleanest, most environmentally friendly method of the three, since it does not involve other chemicals and uses only high-energy electrons to split the carbon-hydrogen bonds and facilitate cross-linking.
Plumbing
PEX tubing is widely used to replace copper in plumbing applications. One estimate from 2006 was that residential use of PEX for delivering drinking water to home faucets was increasing by 40% annually. In 2006, The Philadelphia Inquirer recommended that plumbing installers switch from copper pipes to PEX.
In the early to mid 20th century, mass-produced plumbing pipes were made from galvanized steel. As users experienced problems with the internal build-up of rust, which reduced water volume, these were replaced by copper pipes in the late 1960s. Plastic pipes with fittings using glue were used as well in later decades. Initially PEX tubing was the most popular way to transport water in hydronic radiant heating systems, and it was used first in hydronic systems from the 1960s onwards. Hydronic systems circulate water from a boiler or heater to places in the house needing heat, such as baseboard heaters or radiators. PEX is suitable for recirculating hot water.
Gradually, PEX became more accepted for more indoor plumbing uses, such as carrying pressurized water to fixtures throughout the house. Increasingly, since the 2000s, copper pipes as well as plastic PVC pipes are being replaced with PEX. PEX can be used for underground purposes, although one report suggested that appropriate "sleeves" be used for such applications.
Benefits
Benefits of using PEX in plumbing include:
Flexibility. PEX is a popular solution for residential water plumbing in new construction due to its flexibility. PEX tubing can easily bend without buckling or cracking, so pipe runs do not need to be straight. PEX is often sold in long rolls, which eliminates the need to couple individual lengths of straight pipe together for long runs. For shallow bends, PEX tubing can be bent and supported with a metal or hard plastic brace, so elbow fittings are only required for sharp corners. By contrast, other common indoor plumbing materials—namely PVC, CPVC and copper—are rigid and require angled fittings to accommodate any significant bend in a pipe run.
Direct routing of pipes. Since PEX tubing does not require elbow joints in most cases, it is often possible to run a supply line directly from a distribution point to an outlet fixture without any splices or connections in the line. This eliminates the potential structural weakness or cost associated with joints.
Less pressure drop due to turbulence. Since PEX pipe lines typically have fewer sharp turns and splices than lines constructed from rigid tube materials, less pressure loss can be expected between the distribution point and outlet fixtures. Less pressure drop translates to extra water pressure at sinks, showers, and toilets for a given supply pressure. Conversely, PEX may allow for a weaker (and less expensive) pump than alternative piping to achieve the equivalent pressure at the outlet fixtures.
Lower materials cost. Cost of materials for PEX tubing is approximately 25% of alternatives. By contrast, the inflation-adjusted price of copper more than quadrupled in the two decades between 2002 and 2022.
Easier installation. Installing PEX is much less labor-intensive than copper or PVC pipes, since there is no need to solder or glue pipes together. Builders installing radiant heating systems found that PEX pipes "made installation easy and operation problem-free". PEX connections can be made by pushing two matching parts together using a compression fitting, or by using an adjustable wrench or a special crimping tool. Generally, fewer connections and fittings are needed in a PEX installation.
Non-corrosive. Unlike copper, PEX is not subject to corrosion when exposed to minerals or moisture.
No fire risk during installation. The oldest and most common method for joining copper piping is to solder pieces together using a torch. PEX eliminates the risk associated with this open flame.
Ability to merge new PEX with existing copper and PVC systems. Fittings that allow installers to join a copper pipe on one end with a PEX line at the other are widely available. These couplings allow the installer to reduce or expand the diameter of the pipes at the transition to PEX if desired.
Suitable for hot and cold pipes. A convenient arrangement is to use color-coding to lessen the possibility of confusion. Typically, red PEX tubing is used for hot water and blue PEX tubing is used for cold water.
Less likely to burst from freezing. PEX, due to its flexibility, is typically understood to be more burst-resistant in freezing conditions than copper or PVC pipe. One account suggested that PEX water-filled pipes, frozen over time, will swell and tear; in contrast, copper pipe "rips" and PVC "shatters". Home expert Steve Maxwell suggested in 2007 that PEX water-filled pipes could endure "five or six freeze-thaw cycles without splitting" while copper would split apart promptly on the first freeze. In new unheated seasonal homes, it is still recommended to drain pipes during an unheated cold season or take other measures to prevent pipes from bursting because of the cold. In new construction, it is recommended that all water pipes be sloped slightly to permit drainage, if necessary.
Pipe insulation possible. Conventional foam wrap insulation materials can easily be added to PEX piping to reduce heat loss from hot water lines, reduce heat transfer into cold water lines, and mitigate the risk of freezing in outdoor environments.
Drawbacks
Degradation from sunlight. PEX tubing cannot be used in applications exposed to sunlight, as it degrades fairly rapidly. Prior to installation it must be stored away from sunlight, and needs to be shielded from daylight after installation. Leaving it exposed to direct sunlight for as little as 30 days may result in premature failure of the tubing due to embrittlement.
Perforation by insects. PEX tubing is vulnerable to being perforated by the mouthparts of plant-feeding insects; in particular, the Western conifer seed bug (Leptoglossus occidentalis) is known to sometimes pierce through PEX tubing, resulting in leakage.
Problems with yellow brass fittings. There have been some claimed PEX systems failures in the U.S., Canada and Europe resulting in several pending class action lawsuits. The failures are claimed to be a result of the brass fittings used in the PEX system. Generally, builders and manufacturers have learned from these experiences and have found the best materials for use in fittings used to connect pipe with connectors, valves and other fittings. But there were problems reported with a specific type of brass fitting used in connection with installations in Nevada that caused a negative interaction between its mineral-rich hard water and so-called "yellow brass" fittings. Zinc in the fittings leached into the pipe material in a chemical reaction known as dezincification, causing some leaks or blockages. A solution was to replace the yellow brass fittings, which had 30% zinc, with red brass fittings, which had 5–10% zinc. It led California building authorities to insist on fittings made from "red brass" which typically has a lower zinc content, and is unlikely to cause problems in the future since problems with these specific fittings have become known.
Initial adjustment to a new plumbing system. There were a few reported problems in the early stages as plumbers and homeowners learned to adjust to the new fittings, and when connections were poorly or improperly made, but home inspectors have generally not noticed any problems with PEX since 2000.
Limited adhesives for pipe insulation. Some pipe insulation applied to PEX using certain adhesives could have a detrimental effect causing the pipe to age prematurely; however, other insulating materials can be used, such as conventional foam wrap insulation, without negative effects.
Fitting expenses. Generally, PEX fittings, particularly the do-it-yourself push-fit ones, are more expensive than copper ones, although there is no soldering required. Due to the flexibility of PEX, it generally requires fewer fittings, which tends to offset the higher cost per fitting.
Potential problems for PEX radiant heating with iron-based components. If plain PEX tubing is used in a radiant heating system that has ferrous radiators or other parts, meaning they are made out of iron or its alloys, then there is the possibility of rust developing over time; if this is the case, then one solution is to have an "oxygen barrier" in these systems to prevent rust from developing. Most modern installations of PEX for heating use oxygen barrier coated PEX.
Odors, chemical taste, and possible health effects. There was controversy in California during the 2000s about health concerns. Several groups blocked adoption of PEX for concerns about chemicals getting into the water, either from chemicals outside the pipes, or from chemicals inside the pipes such as methyl tertiary butyl ether and tertiary butyl alcohol. These concerns delayed statewide adoption of PEX for almost a decade. After substantial "back-and-forth legal wrangling", which was described as a "judicial rollercoaster", the disputing groups came to a consensus, and California permitted use of PEX in all occupancies. An environmental impact report and subsequent studies determined there were no cause for concerns about public health from use of PEX piping.
Government approvals
PEX has been approved for use in all fifty states of the United States as well as Canada, including the state of California, which approved its use in 2009. California allowed the use of PEX for domestic water systems on a case-by-case basis only in 2007. This was due mostly to concerns about corrosion of the manifolds (rather than the tubing itself) and California allowed PEX to be used for hydronic radiant heating systems but not potable water. In 2009, the Building Standards Commission approved PEX plastic pipe and tubing to the California Plumbing Code (CPC), allowing its use in hospitals, clinics, residences, and commercial construction throughout the state. Formal adoption of PEX into the CPC occurred on August 1, 2009, allowing local jurisdictions to approve its general use, although there were additional issues, and new approvals were issued in 2010 with revised wordings to the 2007 act.
Alternative materials
Alternative plumbing choices include
Aluminum plastic composite are aluminum tubes laminated on the interior and exterior with plastic layers for protection.
Corrugated stainless steel tubing, continuous flexible pipes made out of stainless steel with a PVC interior and are air-tested for leaks.
Polypropylene Pipe, similar in application to CPVC but a chemically inert material containing no harmful substances and reduced dangerous emissions when consumed by fire. It is primarily utilized in radiant floor systems but is gaining popularity as a leach-free domestic potable water pipe, primarily in commercial applications.
Polybutylene (PB) Pipe is a form of plastic polymer that was used in the manufacture of potable water piping from late 1970s until 1995. However, it was discovered that the polyoxymethylene (POM or Acetal) connectors originally used to connect the polybutylene tubes were susceptible to stress enhanced chemical attack by hypochlorite additions (a common chemical used to sanitize water). Degraded connectors can crack and leak in highly stressed crimped areas, causing damage to the surrounding building structure. Later systems containing copper fittings do not appear to have issues with hypochlorite attack, but polybutylene has still fallen out of favor due to costly structural damage caused by earlier issues and is not accepted in Canada and U.S.
PEX-AL-PEX
PEX-AL-PEX pipes, or AluPEX, or PEX/Aluminum/PEX, or Multilayer pipes are made of a layer of aluminum sandwiched between two layers of PEX. The metal layer serves as an oxygen barrier, stopping the oxygen diffusion through the polymer matrix, so it cannot dissolve into the water in the tube and corrode the metal components of the system. The aluminium layer is thin, typically 1 or 2 mm, and provides some rigidity to the tube such that when bent it retains the shape formed (normal PEX tube will spring back to straight). The aluminium layer also provides additional structural rigidity such that the tube will be suitable for higher safe operating temperatures and pressures.
The use of AluPex tubing has grown greatly since 2010. It is easy to work and position. Curves may be easily formed by hand. Tube exists for use with both hot and cold water and also for gas.
In Canada this product has been discontinued due to water infiltrating between the layers, resulting in premature failures.
PEX tools
There are two types of fitting that may be used: crimped or compression. Crimped connectors are less expensive but require a specialised crimping tool. Compression fittings are tightened with normal spanners and are designed to allow sections of the system to be easily disassembled; they are also popular for small works, especially DIY, avoiding the need for extra tools.
A PEX tool kit includes a number of basic tools required for making fittings and connections with PEX tubing. In most cases, such kits are either bought at a local hardware store, plumbing supply store or assembled by either a home owner or a contractor. PEX tools kits range from under $100 and can go up to $300+. A typical PEX tool kit includes crimp tools, an expander tool for joining, clamp tools, PEX cutters, rings, boards, and staplers.
Other uses
Artificial joints: Highly cross-linked polyethylene is used in artificial joints as a wear-resistant material. Cross-linked polyethylene is preferred in hip replacement because of its resistance to abrasive wear. Knee replacement, however, requires PE made with different parameters because cross-linking may affect mechanical strength and there is greater stress-concentration in knee joints due to lower geometric congruency of the bearing surfaces. Manufacturers start with ultra high molecular weight polyethylene, and crosslink with either electron beam or gamma irradiation.
Dental applications: Some application of PEX has also been seen in dental restoration as a composite filling material.
Watercraft: PEX is also used in many canoes and kayaks. The PEX is listed by the name Ram-X, and other brand specific names. Because of the properties of cross-linked polyethylene, repair of any damage to the hull is rather difficult. Some adhesives, such as 3M's DP-8005, are able to bond to PEX, while larger repairs require melting and mixing more Polyethylene into the canoe/kayak to form a solid bond and fill the damaged area.
Power cable insulation: Cross-linked polyethylene is widely used as electrical insulation in power cables of all voltage ranges but it is especially well suited to medium voltage applications. It is the most common polymeric insulation material. The acronym XLPE is commonly used to denote cross-linked polyethylene insulation.
Automotive ducts and housings: PEX, also referred to as XLPE, is widely used in the aftermarket automotive industry for cold air intake systems and filter housings. Its properties include high heat deflection temperature, good impact resistance, chemical resistance, low flexural modulus and good environmental stress crack resistance. This form of XLPE is most commonly used in rotational molding; the XLPE resin comes in the form of a 35 mesh (500 μm) resin powder.
Domestic appliances: Washing machines and dishwashers from Asko use a PEX inlet hose instead of using a double-walled rubber/plastic safety hose.
See also
High-density polyethylene (HDPE)
Linear low-density polyethylene (LLDPE)
Low-density polyethylene (LDPE)
Medium-density polyethylene (MDPE)
Polyolefin and cross-linked polyolefin (XLPO), used as insulator
Stretch wrap
Ultra-high-molecular-weight polyethylene (UHMWPE)
References
External links
Analytical techniques to characterize crosslinked polyethylene
Plastics
Plumbing
Polyolefins | Cross-linked polyethylene | [
"Physics",
"Engineering"
] | 6,317 | [
"Plumbing",
"Unsolved problems in physics",
"Construction",
"Amorphous solids",
"Plastics"
] |
2,416,505 | https://en.wikipedia.org/wiki/Pourbaix%20diagram | In electrochemistry, and more generally in solution chemistry, a Pourbaix diagram, also known as a potential/pH diagram, EH–pH diagram or a pE/pH diagram, is a plot of possible thermodynamically stable phases (i.e., at chemical equilibrium) of an aqueous electrochemical system. Boundaries (50 %/50 %) between the predominant chemical species (aqueous ions in solution, or solid phases) are represented by lines. As such, a Pourbaix diagram can be read much like a standard phase diagram with a different set of axes. Similarly to phase diagrams, they do not allow for reaction rate or kinetic effects. Besides potential and pH, the equilibrium concentrations also depend on, e.g., temperature, pressure, and concentration. Pourbaix diagrams are commonly given at room temperature, atmospheric pressure, and concentrations of 10−6 M, and changing any of these parameters will yield a different diagram.
The diagrams are named after Marcel Pourbaix (1904–1998), the Belgian engineer who invented them.
Naming
Pourbaix diagrams are also known as EH-pH diagrams due to the labeling of the two axes.
Diagram
The vertical axis is labeled EH for the voltage potential with respect to the standard hydrogen electrode (SHE), as calculated by the Nernst equation. The "H" stands for hydrogen, although other reference standards may be used; the diagrams are for room temperature only.
For a reversible redox reaction described by a chemical equilibrium between an oxidized species A and a reduced species B exchanging z electrons:
a A + z e^- <=> b B
with the corresponding equilibrium constant K, the Nernst equation can be written in terms of the reaction quotient Q = {B}^b / {A}^a as:
Eh = E° − (VT / z) ln Q
or, more simply, directly expressed numerically in base-10 logarithms as:
Eh = E° − (λ VT / z) log Q
where:
VT = RT/F ≈ 0.02569 volt is the thermal voltage or the "Nernst slope" at standard temperature
λ = ln(10) ≈ 2.30, so that λVT ≈ 0.05916 volt (a short numerical check is sketched below).
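A minimal numerical check of these constants, assuming CODATA values for the gas constant R and the Faraday constant F (the helper name nernst_slope is purely illustrative):

```python
# Sketch: numerical values of the "Nernst slope" quoted above, at 25 °C (298.15 K).
import math

R = 8.314462618   # J mol^-1 K^-1, molar gas constant
F = 96485.33212   # C mol^-1, Faraday constant

def nernst_slope(T=298.15):
    """Return (V_T, lambda * V_T) in volts at absolute temperature T (kelvin)."""
    V_T = R * T / F                       # thermal voltage
    return V_T, math.log(10) * V_T        # lambda = ln(10)

V_T, decade_slope = nernst_slope()
print(f"V_T        = {V_T:.5f} V")        # ~0.02569 V
print(f"lambda*V_T = {decade_slope:.5f} V")  # ~0.05916 V per pH unit or per decade
```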
The horizontal axis is labeled pH for the −log function of the H+ ion activity.
The lines in the Pourbaix diagram show the equilibrium conditions, that is, where the activities are equal, for the species on each side of that line. On either side of the line, one form of the species will instead be said to be predominant.
In order to draw the position of the lines with the Nernst equation, the activity of the chemical species at equilibrium must be defined. Usually, the activity of a species is approximated as equal to the concentration (for soluble species) or partial pressure (for gases). The same values should be used for all species present in the system.
For soluble species, the lines are often drawn for concentrations of 1 M or 10−6 M. Sometimes additional lines are drawn for other concentrations.
If the diagram involves the equilibrium between a dissolved species and a gas, the pressure is usually set to P0 = 1 atm = 101 325 Pa, the minimum pressure required for gas evolution from an aqueous solution at standard conditions.
In addition, changes in temperature and concentration of solvated ions in solution will shift the equilibrium lines in accordance with the Nernst equation.
The diagrams also do not take kinetic effects into account, meaning that species shown as unstable might not react to any significant degree in practice.
A simplified Pourbaix diagram indicates regions of "immunity", "corrosion" and "passivity", instead of the stable species. They thus give a guide to the stability of a particular metal in a specific environment. Immunity means that the metal is not attacked, while corrosion shows that general attack will occur. Passivation occurs when the metal forms a stable coating of an oxide or other salt on its surface, the best example being the relative stability of aluminium because of the alumina layer formed on its surface when exposed to air.
Applicable chemical systems
While such diagrams can be drawn for any chemical system, it is important to note that the addition of a metal binding agent (ligand) will often modify the diagram. For instance, carbonate (CO32−) has a great effect upon the diagram for uranium. The presence of trace amounts of certain species such as chloride ions can also greatly affect the stability of certain species by destroying passivating layers.
Limitations
Even though Pourbaix diagrams are useful for a metal corrosion potential estimation they have, however, some important limitations:
Equilibrium is always assumed, though in practice it may differ.
The diagram does not provide information on actual corrosion rates.
Does not apply to alloys.
Does not indicate whether passivation (in the form of oxides or hydroxides) is protective or not. Diffusion of oxygen ions through thin oxide layers is possible.
Excludes corrosion by chloride ions (Cl−, etc.).
Usually applicable only to a temperature of 25 °C, which is assumed by default. Pourbaix diagrams for higher temperatures exist.
Expression of the Nernst equation as a function of pH
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram. Eh explicitly denotes the electrode potential expressed versus the standard hydrogen electrode (SHE). For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):
a A + h H+ + z e^- <=> b B + c H2O
The equilibrium constant K of this reduction reaction is:
K = {B}^b {H2O}^c / ({A}^a {H+}^h)
where curly braces { } indicate activities, rectangle braces [ ] denote molar or molal concentrations, γ represents the activity coefficients, and the stoichiometric coefficients are shown as exponents.
Activities correspond to thermodynamic concentrations and take into account the electrostatic interactions between ions present in solution. When the concentrations are not too high, the activity ({X}) of a species X can be related to its measurable concentration ([X]) by a linear relationship with the activity coefficient (γ): {X} = γ [X].
The half-cell standard reduction potential E° is given by E° = −ΔG°/(zF),
where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is the Faraday constant. The Nernst equation relates pH and Eh as follows:
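A sketch of the resulting relation, written for the generic reduction half-reaction and conventions above (the exact grouping of the activity terms is an assumption consistent with the surrounding text):

```latex
E_h = E^{\circ}
      - \frac{\lambda V_T\, h}{z}\,\mathrm{pH}
      - \frac{\lambda V_T}{z}\,
        \log_{10}\!\left(\frac{\{B\}^{b}\,\{\mathrm{H_2O}\}^{c}}{\{A\}^{a}}\right)
```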
In the following, the Nernst slope (or thermal voltage) is used, which has a value of 0.02569... V at STP. When base-10 logarithms are used, VT λ = 0.05916... V at STP where λ = ln[10] = 2.3026.
This equation is the equation of a straight line for Eh as a function of pH with a slope of −λ VT (h/z) volt (pH has no units).
This equation predicts lower Eh at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. Eh is then often noted as Eh(SHE) to indicate that it refers to the standard hydrogen electrode (SHE), whose E° = 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 °F, Pgas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0).
Calculation of a Pourbaix diagram
When the activities (a) can be considered as equal to the molar, or molal, concentrations (C) at sufficiently dilute concentrations, i.e. when the activity coefficients (γ) tend to one, the term regrouping all the activity coefficients is equal to one, and the Nernst equation can be written simply with the concentrations (C), denoted here with square brackets [ ]:

Eh = E° − (λ VT h / z) pH − (λ VT / z) log( [B]^b [H2O]^c / [A]^a )
There are three types of line boundaries in a Pourbaix diagram: Vertical, horizontal, and sloped.
Vertical boundary line
When no electrons are exchanged (z = 0), the equilibrium between A, B, H+, and H2O only depends on pH and is not affected by the electrode potential. In this case, the reaction is a classical acid-base reaction involving only protonation/deprotonation of dissolved species. The boundary line will be a vertical line at a particular value of pH. The reaction equation may be written:

a A + h H+ <=> b B + c H2O
and the energy balance is written as ΔG° = −RT ln K, where K is the equilibrium constant:

K = {B}^b {H2O}^c / ({A}^a {H+}^h)
Thus:

ln K = −ΔG° / (RT)

or, in base-10 logarithms,

log K = −ΔG° / (λ RT) = log( {B}^b {H2O}^c / {A}^a ) + h pH

which may be solved for the particular value of pH.
For example, consider the iron and water system, and the equilibrium line between the ferric ion Fe3+ and hematite Fe2O3. The reaction equation is:
2 Fe^{3+}(aq) + 3 H_2 O (l) <=> Fe_2 O_3 (s) + 6 H^+ (aq)
which has log K = 1.43 at 25 °C. The pH of the vertical line on the Pourbaix diagram can then be calculated:
Because the activities (or the concentrations) of the solid phases and water are equal to unity: [Fe2O3] = [H2O] = 1, the pH only depends on the concentration of dissolved Fe3+:

pH = −(log K + 2 log[Fe3+]) / 6
At STP, for [Fe3+] = 10−6, this yields pH = 1.76.
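As a quick check of the figure quoted just above (using log K = 1.43, a value back-calculated here from the stated result rather than taken from a table), substituting into the relation gives:

pH = −(1.43 + 2 × (−6)) / 6 = 10.57 / 6 ≈ 1.76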
Horizontal boundary line
When H+ and OH− ions are not involved in the reaction, the boundary line is horizontal and independent of pH. The reaction equation is thus written:

a A + z e^- <=> b B
As above, the standard Gibbs free energy change is related to the equilibrium constant by:

ΔG° = −RT ln K
Using the definition of the electrode potential ∆G = -zFE, where F is the Faraday constant, this may be rewritten as a Nernst equation:

Eh = E° − (RT / zF) ln( [B]^b / [A]^a )

or, using base-10 logarithms:

Eh = E° − (λ VT / z) log( [B]^b / [A]^a )
For the equilibrium Fe3+/Fe2+, taken as example here, considering the boundary line between Fe2+ and Fe3+, the half-reaction equation is:
Fe^3+ (aq) + e^- <=> Fe^2+ (aq)
Since H+ ions are not involved in this redox reaction, it is independent of pH.
E° = 0.771 V, with only one electron involved in the redox reaction.
The potential Eh is a function of temperature via the thermal voltage VT and directly depends on the ratio of the concentrations of the Fe2+ and Fe3+ ions:

Eh = 0.771 − λ VT log( [Fe2+] / [Fe3+] )

For both ionic species at the same concentration (e.g., 10−6 M) at STP, log 1 = 0, so Eh = E°, and the boundary will be a horizontal line at Eh = 0.771 volts. The potential will vary with temperature.
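For illustration (the concentration ratio here is chosen arbitrarily, not taken from the article), the same relation gives the potential when the two ions are not equally concentrated, e.g. with a hundredfold excess of Fe2+ over Fe3+ at 25 °C:

Eh = 0.771 − 0.05916 × log(100) = 0.771 − 0.118 ≈ 0.653 V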
Sloped boundary line
In this case, both electrons and H+ ions are involved and the electrode potential is a function of pH. The reaction equation may be written:

a A + h H+ + z e^- <=> b B + c H2O
Using the expressions for the free energy in terms of potentials, the energy balance is given by a Nernst equation:

Eh = E° − (λ VT h / z) pH − (λ VT / z) log( [B]^b [H2O]^c / [A]^a )
For the iron and water example, considering the boundary line between the ferrous ion Fe2+ and hematite Fe2O3, the reaction equation is:
Fe2O3(s) + 6 H+(aq) + 2 e^- <=> 2 Fe^{2+}(aq) + 3 H2O(l)
with E° = 0.728 V.
The equation of the boundary line, expressed in base-10 logarithms is:

Eh = E° − (λ VT 6 / 2) pH − (λ VT / 2) log( [Fe2+]^2 [H2O]^3 / [Fe2O3] )

As the activities, or the concentrations, of the solid phases and of water are always taken equal to unity by convention in the definition of the equilibrium constant K: [Fe2O3] = [H2O] = 1.
The Nernst equation, thus limited to the dissolved species Fe2+, is written as:

Eh = 0.728 − 0.1775 pH − 0.05916 log[Fe2+]

For [Fe2+] = 10−6 M, this yields:

Eh = 0.728 + 0.355 − 0.1775 pH ≈ 1.083 − 0.1775 pH
Note the negative slope (-0.1775) of this line in an Eh–pH diagram.
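The three iron boundary lines worked out above can be evaluated numerically. The sketch below is illustrative only: it assumes log K = 1.43 and E° = 0.728 V for the Fe2O3 equilibria (values back-derived from the results quoted in this section), the tabulated E° = 0.771 V for Fe3+/Fe2+, and a trace concentration of 10−6 M.

```python
# Minimal sketch (not from the article): evaluating the three Fe/H2O boundary lines.
import math

LAMBDA_VT = 0.05916   # lambda * V_T in volts at 25 degrees Celsius
C = 1e-6              # assumed concentration of the dissolved iron species (mol/L)

# Vertical line: 2 Fe3+ + 3 H2O <=> Fe2O3 + 6 H+  (assumed log K = 1.43)
log_K = 1.43
pH_vertical = -(log_K + 2 * math.log10(C)) / 6

# Horizontal line: Fe3+ + e- <=> Fe2+  (equal concentrations of both ions)
Eh_horizontal = 0.771   # volts, since log([Fe2+]/[Fe3+]) = 0

# Sloped line: Fe2O3 + 6 H+ + 2 e- <=> 2 Fe2+ + 3 H2O  (assumed E0 = 0.728 V)
def Eh_sloped(pH, E0=0.728, h=6, z=2, conc=C):
    return E0 - LAMBDA_VT * h / z * pH - LAMBDA_VT * math.log10(conc)

print(f"vertical boundary at pH = {pH_vertical:.2f}")       # ~1.76
print(f"horizontal boundary at Eh = {Eh_horizontal:.3f} V") # 0.771 V
print(f"sloped boundary: Eh(pH=4) = {Eh_sloped(4):.3f} V")  # ~0.373 V
```

Note that the three lines meet in a single triple point (at pH ≈ 1.76, Eh ≈ 0.771 V for the values above), which is a useful internal consistency check when constructing such a diagram.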
The stability region of water
In many cases, the possible conditions in a system are limited by the stability region of water. In the Pourbaix diagram for uranium presented here above, the limits of stability of water are marked by the two dashed green lines, and the stability region for water falls between these two lines. It is also depicted by the two dashed red lines in the simplified Pourbaix diagram restricted to the water stability region only.
Under highly reducing conditions (low EH), water is reduced to hydrogen according to:
2 H+ + 2e^- -> H2(g) (at low pH)
and,
2 H2O + 2e^- -> H2(g) + 2 OH^- (at high pH)
Using the Nernst equation, setting E0 = 0 V as defined by convention for the standard hydrogen electrode (SHE, serving as reference in the reduction potentials series) and the hydrogen gas fugacity (corresponding to chemical activity for a gas) at 1, the equation for the lower stability line of water in the Pourbaix diagram at standard temperature and pressure is:

Eh = −λ VT pH = −0.05916 pH
Below this line, water is reduced to hydrogen, and it will usually not be possible to pass beyond this line as long as there is still water present in the system to be reduced.
Correspondingly, under highly oxidizing conditions (high EH) water is oxidized into oxygen gas according to:
2 H2O -> 4 H+ + O2(g) + 4e^- (at low pH)
and,
4 OH^- -> O2(g) + 2 H_2O + 4e^- (at high pH)
Using the Nernst equation as above, but with E° = −ΔG°(H2O)/(2F) = 1.229 V for water oxidation, gives an upper stability limit of water as a function of the pH value:

Eh = 1.229 − 0.05916 pH
at standard temperature and pressure. Above this line, water is oxidized to form oxygen gas, and it will usually not be possible to pass beyond this line as long as there is still water present in the system to be oxidized.
As the upper and lower stability lines have the same negative slope (−59 mV per pH unit), they are parallel in a Pourbaix diagram, and the reduction potential decreases with pH.
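As a quick numerical illustration (a simple substitution into the two limits just given, at 25 °C), the water stability window at neutral pH spans about 1.23 V:

lower limit: Eh = −0.05916 × 7 ≈ −0.414 V
upper limit: Eh = 1.229 − 0.05916 × 7 ≈ +0.815 V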
Applications
Pourbaix diagrams have many applications in different fields dealing with e.g., corrosion problems, geochemistry, and environmental sciences. Using the Pourbaix diagram correctly will help shed light not only on the nature of the species present in aqueous solution, or in the solid phases, but may also help to understand the reaction mechanism.
Concept of pe in environmental chemistry
Pourbaix diagrams are widely used to describe the behaviour of chemical species in the hydrosphere. In this context, pe is often used instead of the reduction potential Eh. The main advantage is to directly work with a logarithmic scale.
pe is a dimensionless number and can easily be related to Eh by the equation:

pe = Eh / (λ VT)

where VT = RT/F is the thermal voltage, with R the gas constant (8.314 J K−1 mol−1), T the absolute temperature in kelvin (298.15 K = 25 °C = 77 °F), and F the Faraday constant (96 485 coulomb/mol of e−). Lambda, λ = ln(10) ≈ 2.3026.
Moreover, pe = −log{e−}, an expression with a similar form to that of pH.
pe values in environmental chemistry range from −12 to +25, since at low or high potentials water will be respectively reduced or oxidized. In environmental applications, the concentration of dissolved species is usually set to a value between 10−2 M and 10−5 M for the determination of the equilibrium lines.
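To make the scale concrete (a simple unit conversion of the range quoted above, at 25 °C):

pe = −12 corresponds to Eh = −12 × 0.05916 ≈ −0.71 V, and pe = +25 corresponds to Eh = +25 × 0.05916 ≈ +1.48 V.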
Gallery
See also
Nernst equation
Dependency of reduction potential on pH
Ellingham diagram
Latimer diagram
Frost diagram
Ionic partition diagram
Bjerrum plot
Notes
References
External links
Marcel Pourbaix — Corrosion Doctors
DoITPoMS Teaching and Learning Package- "The Nernst Equation and Pourbaix Diagrams"
Software
ChemEQL Free software for calculation of chemical equilibria from Eawag.
FactSage Commercial thermodynamic databank software, also available in a free web application.
The Geochemist's Workbench Commercial geochemical modeling software from Aqueous Solutions LLC.
GWB Community Edition Free download of the popular geochemical modeling software package.
HYDRA/MEDUSA Free software for creating chemical equilibrium diagrams from the KTH Department of Chemistry.
HSC Chemistry Commercial thermochemical calculation software from Outotec Oy.
PhreePlot Free program for making geochemical plots using the USGS code PHREEQC.
Thermo-Calc Windows Commercial software for thermodynamic calculations from Thermo-Calc Software.
Materials Project Public website that can generate Pourbaix diagrams from a large database of computed material properties, hosted at NERSC.
Electrochemistry
Phase transitions | Pourbaix diagram | [
"Physics",
"Chemistry"
] | 3,313 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Electrochemistry",
"Statistical mechanics",
"Matter"
] |
2,416,901 | https://en.wikipedia.org/wiki/Bio-Rad%20Laboratories | Bio-Rad Laboratories, Inc. is an American developer and manufacturer of specialized technological products for the life science research and clinical diagnostics markets. The company was founded in 1952 in Berkeley, California, by husband and wife team David and Alice Schwartz, both graduates of the University of California, Berkeley. Bio-Rad is based in Hercules, California, and has operations worldwide.
Business segments
Bio-Rad’s life science products primarily include instruments, software, consumables, reagents, and content for the areas of cell biology, gene expression, protein purification, protein quantitation, drug discovery and manufacture, food safety, and science education. These products are based on technologies to separate, purify, identify, analyze, and amplify biological materials such as antibodies, proteins, nucleic acids, cells, and bacteria.
Bio-Rad’s diagnostic products and systems use a range of technologies and provide clinical information in the blood transfusion, diabetes monitoring, autoimmune, and infectious disease testing markets. These products are used to support the diagnosis, monitoring, and treatment of diseases and other medical conditions.
History
Bio-Rad Laboratories was founded in 1952 by David Schwartz and his wife Alice, both recent graduates of the University of California, Berkeley. In 1976, Bio-Rad acquired Environmental Chemical Specialties (ECS), a producer of human control serum.
In 2008, Bio-Rad was notable for being the opening bell ringer at the New York Stock Exchange on 24 October, a date which went down in financial history as 'Bloody Friday', when many of the world's stock exchanges experienced the worst declines in their history, with drops of around 10% in most indices.
In 2011, Bio-Rad acquired a new technology, droplet digital PCR. Droplet digital PCR allows scientists to distinguish rare sequences in tumors and precisely measure copy number variation.
In January 2013, Bio-Rad purchased AbD Serotec, a division of MorphoSys AG. This added Serotec’s more than 15,000 antibodies, kits, and accessories to Bio-Rad’s portfolio of research and clinical diagnostic products.
In 2016, the company had direct distribution channels in over 35 countries outside the United States through subsidiaries whose focus is sales, customer service and product distribution. In some locations outside and inside these 35 countries, sales efforts were supplemented by distributors and agents.
In 2017, Bio-Rad acquired RainDance Technologies, a droplet-based PCR systems manufacturer.
In March 2021, Bio-Rad announced a partnership with Roche.
Vickers Instruments
In 1989, Bio-Rad purchased the British instrument-making firm, Vickers (1828 - 1999), apart from their defense products, which were sold to British Aerospace. This company had been known until 1963 as Cooke, Troughton & Simms. Cooke, Troughton & Simms was formed in 1922 by the merging of T. Cooke & Sons, a York-based instrument maker founded in 1837 by the self-taught schoolmaster Thomas Cooke, and the London instrument-maker, Troughton & Simms founded in 1828 by Edward Troughton who began his apprenticeship in 1773.
Bioradiations
Bioradiations is an online magazine created by Bio-Rad that offers researchers case studies, whitepapers, tips, techniques, and topics related to Bio-Rad products and services. Bioradiations began as a print magazine that was launched in 1965 and was printed until 2011 and replaced with the online publication.
FCPA bribery lawsuits
From 2005 to 2010, Bio-Rad subsidiaries made bribes to foreign government officials, and made other payments intentionally ignoring the high likelihood of bribery. The bribes were estimated to have made $35.1 million profit for Bio-Rad, primarily from sales in Russia, Vietnam, and Thailand:
In Russia, bribes were made as payments to foreign agents with phony Moscow addresses and off-shore bank accounts. These foreign agents aimed to win government contracts by influencing Russia's Ministry of Health.
In Vietnam, Bio-Rad's Singapore subsidiary regularly paid bribes to Vietnamese government officials. A regional sales manager raised concerns about this practice, and in response another employee proposed employing a middleman to pay the bribes instead.
In Thailand, Bio-Rad acquired Diamed Thailand with very little due diligence. Diamed Thailand was running an existing scheme to bribe government officials, which Bio-Rad's Asia Pacific General Manager later found out about in March 2008. They initiated an investigation, which confirmed the bribery was occurring; however, they did not instruct Diamed Thailand to stop the bribery, and the payments continued until 2010.
In 2017, Bio-Rad paid $55 million to settle cases with the Department of Justice and the Securities and Exchange Commission for violating the Foreign Corrupt Practices Act (FCPA). The company was accused of failing to prevent or detect bribes to foreign officials, and of falsifying its books to hide these bribes as legitimate expenses.
Whistleblower retaliation lawsuit
In 2013, Bio-Rad's general counsel of 25 years, Sanford Wadler, followed internal whistleblowing procedures by reporting suspected bribery to Bio-Rad's audit committee, believing Bio-Rad had falsified books and records. The company fired him for making this report, which the courts found to amount to California common law wrongful discharge in violation of public policy.
In 2017, a federal jury awarded Wadler $10.9 million. This was appealed by Bio-Rad, and in 2019 the US appeals court reduced this to $7.96 million, plus $3.5 million in attorneys' fees and costs.
See also
Laboratory equipment
List of S&P 500 companies
References
External links
Companies based in Contra Costa County, California
Companies in the S&P 400
Companies listed on the New York Stock Exchange
1952 establishments in California
History of Berkeley, California
Laboratory equipment manufacturers
Life sciences industry
Research support companies
Manufacturing companies based in California
Manufacturing companies established in 1952
Technology companies established in 1952
Medical technology companies of the United States
Technology companies of the United States
American companies established in 1952
Hercules, California | Bio-Rad Laboratories | [
"Biology"
] | 1,250 | [
"Life sciences industry"
] |
2,419,994 | https://en.wikipedia.org/wiki/Rheometry | Rheometry () generically refers to the experimental techniques used to determine the rheological properties of materials, that is the qualitative and quantitative relationships between stresses and strains and their derivatives. The techniques used are experimental. Rheometry investigates materials in relatively simple flows like steady shear flow, small amplitude oscillatory shear, and extensional flow.
The choice of the adequate experimental technique depends on the rheological property which has to be determined. This can be the steady shear viscosity, the linear viscoelastic properties (complex viscosity respectively elastic modulus), the elongational properties, etc.
For all real materials, the measured property will be a function of the flow conditions during which it is being measured (shear rate, frequency, etc.) even if for some materials this dependence is vanishingly low under given conditions (see Newtonian fluids).
Rheometry is a specific concern for smart fluids such as electrorheological fluids and magnetorheological fluids, as it is the primary method to quantify the useful properties of these materials.
Rheometry is considered useful in the fields of quality control, process control, and industrial process modelling, among others. For some, the techniques, particularly the qualitative rheological trends, can yield the classification of materials based on the main interactions between different possible elementary components and how they qualitatively affect the rheological behavior of the materials. Novel applications of these concepts include measuring cell mechanics in thin layers, especially in drug screening contexts.
Of non-Newtonian fluids
The viscosity of a non-Newtonian fluid is defined by a power law:

η = η0 γ^(n−1)

where η is the viscosity after shear is applied, η0 is the initial viscosity, γ is the shear rate, and n is the power-law index; if
n < 1, the fluid is shear thinning,
n > 1, the fluid is shear thickening,
n = 1, the fluid is Newtonian.
In rheometry, shear forces are applied to non-Newtonian fluids in order to investigate their properties.
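A minimal sketch of the power-law relation given above is shown below. The zero-shear viscosity and the power-law index used here are illustrative assumptions only, not material data from the article.

```python
# Apparent viscosity of an assumed power-law (Ostwald-de Waele) fluid.
eta0 = 2.0   # assumed zero-shear viscosity, Pa.s
n = 0.6      # assumed power-law index (n < 1, i.e. shear thinning)

def apparent_viscosity(shear_rate, eta0=eta0, n=n):
    """Viscosity eta = eta0 * gamma**(n - 1) at a given shear rate gamma (1/s)."""
    return eta0 * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {rate:7.1f} 1/s -> viscosity {apparent_viscosity(rate):.3f} Pa.s")
```

For a shear-thinning fluid such as the one assumed here, the printed viscosity decreases as the shear rate increases; with n > 1 the same code would show the opposite, shear-thickening trend.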
Shear thinning fluids
Due to the shear thinning properties of blood, computational fluid dynamics (CFD) is used to assess the risk of aneurysms. Using high-resolution solution strategies, the differences in the results when using non-Newtonian rather than Newtonian rheology were found to be negligible.
Shear thickening fluids
A method for testing the behavior of shear thickening fluids is stochastic rotation dynamics-molecular dynamics (SRD-MD). The colloidal particles of a shear thickening fluid are simulated, and shear is applied. These particles create hydroclusters which exert a drag force resisting flow.
See also
Continuum mechanics
Dynamic shear rheometer
Electrorheological fluid
Ferrofluid
Fluid mechanics
Magnetorheological fluid
Rheology
Rheometer
Smart fluid
References
Continuum mechanics
Fluid mechanics
Rheology | Rheometry | [
"Physics",
"Chemistry",
"Engineering"
] | 584 | [
"Continuum mechanics",
"Classical mechanics",
"Civil engineering",
"Fluid mechanics",
"Rheology",
"Fluid dynamics"
] |
2,420,418 | https://en.wikipedia.org/wiki/Color%20of%20water | The color of water varies with the ambient conditions in which that water is present. While relatively small quantities of water appear to be colorless, pure water has a slight blue color that becomes deeper as the thickness of the observed sample increases. The hue of water is an intrinsic property and is caused by selective absorption and scattering of blue light. Dissolved elements or suspended impurities may give water a different color.
Intrinsic color
The intrinsic color of liquid water may be demonstrated by looking at a white light source through a long pipe that is filled with purified water and closed at both ends with a transparent window. The light cyan color is caused by weak absorption in the red part of the visible spectrum.
Absorptions in the visible spectrum are usually attributed to excitations of electronic energy states in matter. Water is a simple three-atom molecule, H2O, and all its electronic absorptions occur in the ultraviolet region of the electromagnetic spectrum and are therefore not responsible for the color of water in the visible region of the spectrum. The water molecule has three fundamental modes of vibration. Two stretching vibrations of the O–H bonds in the gaseous state of water occur at ν1 = 3650 cm−1 and ν3 = 3755 cm−1. Absorption due to these vibrations occurs in the infrared region of the spectrum. The absorption in the visible spectrum is due mainly to the harmonic ν1 + 3ν3 = 14,318 cm−1, which is equivalent to a wavelength of 698 nm. In the liquid state, these vibrations are red-shifted by hydrogen bonding, resulting in red absorption at 740 nm, with other harmonics giving red absorption at 660 nm. The absorption curve for heavy water (D2O) is of a similar shape, but is shifted further towards the infrared end of the spectrum, because the vibrational transitions have a lower energy. For this reason, heavy water does not absorb red light and thus large bodies of D2O would lack the characteristic cyan color of the more commonly found light water (H2O).
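As a small unit-conversion aside (a standard relation, not additional data from the article), a wavenumber ν̃ in cm−1 corresponds to a wavelength λ = 10^7 / ν̃ in nm, so:

λ = 10^7 / 14,318 ≈ 698 nm

matching the value quoted above.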
Absorption intensity decreases markedly with each successive overtone, resulting in very weak absorption for the third overtone. For this reason, the pipe needs to have a length of a meter or more and the water must be purified by microfiltration to remove any particles that could produce Mie scattering.
Color of lakes and oceans
Lakes and oceans appear cyan for several reasons. One is that the surface of the water reflects the color of the sky, which ranges from cyan to light azure. It is a common misconception that this reflection is the sole reason bodies of water appear cyan, though it can contribute. This contribution usually makes the body of water appear more a shade of azure rather than cyan depending on how bright the sky is. Water in swimming pools with white-painted sides and bottom will appear cyan, even in indoor pools where there is no sky to be reflected. The deeper the pool, the more intense the cyan color becomes.
Some of the light hitting the surface of the ocean is reflected but most of it penetrates the water surface, interacting with water molecules and other substances in the water. Water molecules can vibrate in three different modes when they interact with light. The red, orange, and yellow wavelengths of light are absorbed so the remaining light seen is composed of green, cyan, and blue wavelengths. This is the main reason the ocean's color is cyan. The relative contribution of reflected skylight and the light scattered back from the depths is strongly dependent on observation angle.
Scattering from suspended particles also plays an important role in the color of lakes and oceans, causing the water to look greener or bluer in different areas. A few tens of meters of water will absorb all light, so without scattering, all bodies of water would appear black. Because most lakes and oceans contain suspended living matter and mineral particles, light from above is scattered and some of it is reflected upwards. Scattering from suspended particles would normally give a white color, as with snow, but because the light first passes through many meters of cyan-colored liquid, the scattered light appears cyan. In extremely pure water—as is found in mountain lakes, where scattering from particles is very low—the scattering from water molecules themselves also contributes a cyan color.
Diffuse sky radiation due to Rayleigh scattering in the atmosphere along one's line of sight gives distant objects a cyan or light azure tint. This is most commonly noticed with distant mountains, but also contributes to the cyanness of the ocean in the distance.
Color of glaciers
Glaciers are large bodies of ice and snow formed in cold climates by processes involving the compaction of fallen snow. While snowy glaciers appear white from a distance, the long path lengths of internal reflected light causes glaciers to appear a deep blue when viewed up close and when shielded from direct ambient light.
Relatively small amounts of regular ice appear white because plenty of air bubbles are present, and also because small quantities of water appear to be colorless. In glaciers, on the other hand, the pressure causes the air bubbles, trapped in the accumulated snow, to be squeezed out increasing the density of the created ice. Large quantities of water appear cyan, therefore a large piece of compressed ice, or a glacier, would also appear cyan.
Color of water samples
Dissolved and particulate material in water can cause it to be appear more green, tan, brown, or red. For instance, dissolved organic compounds called tannins can result in dark brown colors, or algae floating in the water (particles) can impart a green color. Color variations can be measured with reference to a standard color scale. Two examples of standard color scales for natural water bodies are the Forel-Ule scale and the Platinum-Cobalt scale. For example, slight discoloration is measured against the Platinum-Cobalt scale in Hazen units (HU).
The color of a water sample can be reported as:
Apparent color is the color of a body of water being reflected from the surface of the water, and consists of color from both dissolved and suspended components. Apparent color may also be changed by variations in sky color or the reflection of nearby vegetation.
True color is measured after a sample of water has been collected and purified (either by centrifuging or filtration). Pure water tends to look cyan in color and a sample can be compared to pure water with a predetermined color standard or comparing the results of a spectrophotometer.
Testing for color can be a quick and easy test which often reflects the amount of organic material in the water, although certain inorganic components like iron or manganese can also impart color.
Water color can reveal physical, chemical and bacteriological conditions. In drinking water, green can indicate copper leaching from copper plumbing and can also represent algae growth. Blue can also indicate copper, or might be caused by syphoning of industrial cleaners in the tank of commodes, commonly known as backflowing. Reds can be signs of rust from iron pipes or airborne bacteria from lakes, etc. Black water can indicate growth of sulfur-reducing bacteria inside a hot water tank set to too low a temperature. This usually has a strong sulfur or rotten egg (H2S) odor and is easily corrected by draining the water heater and increasing the temperature to or higher. Sulfate reducing bacteria are not known to cause issues in cold water plumbing. Learning the water impurity indication color spectrum can make identifying and solving cosmetic, bacteriological and chemical problems easier.
Water quality and color
The presence of color in water does not necessarily indicate that the water is not drinkable. Water with high water clarity is generally more cyan in color due to low concentrations of particles and/or dissolved substances. Color-causing particulate substances can be easily removed by filtration. Color-causing dissolved substances such as tannins are only toxic to animals in large concentration.
Color from dissolved substances is not removed by typical water filters; however the use of coagulants may succeed in trapping the color-causing compounds within the resulting precipitate.
Other factors can affect the color seen:
Particles and solutes can absorb light, as in tea or coffee. Green algae in rivers and streams often lend a blue-green color. The Red Sea has occasional blooms of red Trichodesmium erythraeum algae.
Particles in water can scatter light. The Colorado River is often muddy red because of suspended reddish silt in the water—this gives the river its name, from Spanish , . Some mountain lakes and streams with finely ground rock, such as glacial flour, are turquoise. Light scattering by suspended matter is required in order that the blue light produced by water's absorption can return to the surface and be observed. Such scattering can also shift the spectrum of the emerging photons toward the green, a color often seen when water laden with suspended particles is observed.
Color names
Various cultures divide the semantic field of colors differently from the English language usage, and some do not distinguish between blue and green in the same way. An example is Welsh, in which glas can mean blue or green, or Vietnamese, in which xanh likewise can mean either. Conversely, in Russian and some other languages, there is no single word for blue, but somewhat different words for light blue (голубой, goluboy) and dark blue (синий, siniy).
Other color names assigned to bodies of water are sea green and ultramarine blue. Unusual oceanic colorings have given rise to the terms red tide and black tide.
The Ancient Greek poet Homer uses the epithet "wine-dark sea"; in addition, he also describes the sea as "grey". William Ewart Gladstone has suggested that this is due to the Ancient Greek classifying colors primarily by luminosity rather than hue, while others believe Homer was color blind.
The Ancient Indian wisdom of Veda considers water's life-giving contributions a part of the divine. It recognizes water as an ancient god, Varuna, and the color of Varuna is described as blue. In the Gayatri associated with Varuna, the phrase "Neela purusha" comes in the second line, which calls the water deity the blue one.
References
Further reading
External links
Water Color, USGS Water Science School
What color is water?—WebExhibits' Causes of Colour
Color
Shades of blue
Water chemistry
Water pollution
Water physics
Water quality indicators | Color of water | [
"Physics",
"Chemistry",
"Materials_science",
"Environmental_science"
] | 2,092 | [
"Water pollution",
"Water quality indicators",
"Condensed matter physics",
"nan",
"Water physics"
] |
2,421,084 | https://en.wikipedia.org/wiki/Isotopomer | Isotopomers or isotopic isomers are isomers which differ by isotopic substitution, and which have the same number of atoms of each isotope but in a different arrangement. For example, CH3OD and CH2DOH are two isotopomers of monodeuterated methanol.
The molecules may be either structural isomers (constitutional isomers) or stereoisomers depending on the location of the isotopes. Isotopomers have applications in areas including nuclear magnetic resonance spectroscopy, reaction kinetics, and biochemistry.
Description
Isotopomers or isotopic isomers are isomers with isotopic atoms, having the same number of each isotope of each element but differing in their positions in the molecule. The result is that the molecules are either constitutional isomers or stereoisomers solely based on isotopic location. The term isotopomer was first proposed by Seeman and Paine in 1992 to distinguish isotopic isomers from isotopologues (isotopic homologues).
Examples
CH3CHDCH3 and CH3CH2CH2D are a pair of structural isotopomers of propane.
(R)- and (S)-CH3CHDOH are isotopic stereoisomers of ethanol.
(Z)- and (E)-CH3CH=CHD are examples of isotopic stereoisomers of propene.
Use
13C-NMR
In nuclear magnetic resonance spectroscopy, the highly abundant 12C isotope does not produce any signal whereas the comparably rare 13C isotope is easily detected. As a result, carbon isotopomers of a compound can be studied by carbon-13 NMR to learn about the different carbon atoms in the structure. Each individual structure that contains a single 13C isotope provides data about the structure in its immediate vicinity. A large sample of a chemical contains a mixture of all such isotopomers, so a single spectrum of the sample contains data about all carbons in it. Nearly all of the carbon in normal samples of carbon-based chemicals is 12C, with only about 1% abundance of 13C, so there is only about a 1% abundance of the total of the singly-substituted isotopologues, and exponentially smaller amounts of structures having two or more 13C in them. The rare case where two adjacent carbon atoms in a single structure are both 13C causes a detectable coupling effect between them as well as signals for each one itself. The INADEQUATE correlation experiment uses this effect to provide evidence for which carbon atoms in a structure are attached to each other, which can be useful for determining the actual structure of an unknown chemical.
Reaction kinetics
In reaction kinetics, a rate effect is sometimes observed between different isotopomers of the same chemical. This kinetic isotope effect can be used to study reaction mechanisms by analyzing how the differently massed atom is involved in the process.
Biochemistry
In biochemistry, differences between the isotopomers of biochemicals such as starches is of practical importance in archaeology. They offer clues to the diet of prehistoric humans that lived as long ago as paleolithic times. This is because naturally occurring carbon dioxide contains both 12C and 13C. Monocots, such as rice and oats, differ from dicots, such as potatoes and tree fruits, in the relative amounts of 12CO2 and 13CO2 that they incorporate into their tissues as products of photosynthesis. When tissues of such subjects are recovered, usually tooth or bone, the relative isotopic content can give useful indications of the main source of the staple foods of the subjects of the investigations.
Cumomer
A cumomer is a set of isotopomers sharing similar properties and is a concept that relates to metabolic flux analysis. The concept was developed in 1999. In a metabolic cascade, many molecules will contain the same pattern of isotope labelling. In order to simplify the analysis of such cascades, molecules with identically labelled atoms are aggregated into a virtual molecule called a cumomer (a conflation of cumulative and isotopomer).
See also
Mass (mass spectrometry)
Isotopocule
References
Further reading
Physical chemistry
Isomerism | Isotopomer | [
"Physics",
"Chemistry"
] | 867 | [
"Applied and interdisciplinary physics",
"Stereochemistry",
"nan",
"Isomerism",
"Physical chemistry"
] |
2,421,241 | https://en.wikipedia.org/wiki/Potassium%20titanyl%20phosphate | Potassium titanyl phosphate (KTP) is an inorganic compound with the formula . It is a white solid. KTP is an important nonlinear optical material that is commonly used for frequency-doubling diode-pumped solid-state lasers such as Nd:YAG and other neodymium-doped lasers.
Synthesis and structure
The compound is prepared by the reaction of titanium dioxide with a mixture of KH2PO4 and K2HPO4 near 1300 K. The potassium salts serve both as reagents and flux.
The material has been characterized by X-ray crystallography. KTP has an orthorhombic crystal structure. It features octahedral Ti(IV) and tetrahedral phosphate sites. Potassium has a high coordination number. All heavy atoms (Ti, P, K) are linked exclusively by oxides, which interconnect these atoms.
Operational aspects
Crystals of KTP are highly transparent for wavelengths between 350 and 2700 nm with a reduced transmission out to 4500 nm, where the crystal is effectively opaque. Its second-harmonic generation (SHG) coefficient is about three times higher than that of KDP. It has a Mohs hardness of about 5.
KTP is also used as an optical parametric oscillator for near IR generation up to 4 μm. It is particularly suited to high power operation as an optical parametric oscillator due to its high damage threshold and large crystal aperture. The high degree of birefringent walk-off between the pump signal and idler beams present in this material limit its use as an optical parametric oscillator for very low power applications.
The material has a relatively high threshold to optical damage (~15 J/cm2), an excellent optical nonlinearity and excellent thermal stability in theory. In practice, KTP crystals need to have stable temperature to operate if they are pumped with 1064 nm (infrared, to output 532 nm green). However, it is prone to photochromic damage (called grey tracking) during high-power 1064 nm second-harmonic generation which tends to limit its use to low- and mid-power systems.
Other such materials include potassium titanyl arsenate (KTiOAsO4).
Some applications
It is used to produce "greenlight" to perform some laser prostate surgery. KTP crystals coupled with Nd:YAG or Nd:YVO4 crystals are commonly found in green laser pointers.
KTP is also used as an electro-optic modulator, optical waveguide material, and in directional couplers.
Periodically poled potassium titanyl phosphate (PPKTP)
Periodically poled potassium titanyl phosphate (PPKTP) consists of KTP with switched domain regions within the crystal for various nonlinear optic applications and frequency conversion. It can be wavelength tailored for efficient second-harmonic generation, sum-frequency generation, and difference frequency generation. The interactions in PPKTP are based upon quasi-phase-matching, achieved by periodic poling of the crystal, whereby a structure of regularly spaced ferroelectric domains with alternating orientations are created in the material.
PPKTP is commonly used for Type 1 & 2 frequency conversions for pump wavelengths of 730–3500 nm.
Other materials used for periodic poling are wide band gap inorganic crystals like lithium niobate (resulting in periodically poled lithium niobate, PPLN), lithium tantalate, and some organic materials.
See also
Other materials used for laser frequency doubling are
Lithium triborate (LBO), used for high output power green or blue DPSS lasers
Beta barium borate (BBO), used for high output power DPSS blue lasers
References
External links
AdvR – Periodically poled potassium titanyl phosphate & KTP waveguides
Potassium titanyl phosphate
Phosphates
Potassium compounds
Titanium(IV) compounds
Nonlinear optical materials
Ferroelectric materials
Crystals
Second-harmonic generation | Potassium titanyl phosphate | [
"Physics",
"Chemistry",
"Materials_science"
] | 803 | [
"Physical phenomena",
"Ferroelectric materials",
"Salts",
"Materials",
"Electrical phenomena",
"Crystallography",
"Crystals",
"Phosphates",
"Hysteresis",
"Matter"
] |
15,166,174 | https://en.wikipedia.org/wiki/Pinsky%20phenomenon | In mathematics, the Pinsky phenomenon is a result in Fourier analysis. This phenomenon was discovered by Mark Pinsky of Northwestern University. It involves the spherical inversion of the Fourier transform.
The phenomenon involves a lack of convergence at a point due to a discontinuity at boundary.
This lack of convergence in the Pinsky phenomenon happens far away from the boundary of the discontinuity, rather than at the discontinuity itself seen in the Gibbs phenomenon. This non-local phenomenon is caused by a lensing effect.
Prototypical example
Let a function g(x) = 1 for |x| < c in 3 dimensions, with g(x) = 0 elsewhere. The jump at |x| = c will cause an oscillatory behavior of the spherical partial sums, which prevents convergence at the center of the ball as well as the possibility of Fourier inversion at x = 0. Stated differently, spherical partial sums of a Fourier integral of the indicator function of a ball are divergent at the center of the ball but convergent elsewhere to the desired indicator function. This prototype example was coined the ”Pinsky phenomenon” by Jean-Pierre Kahane, CRAS, 1995.
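For orientation, the spherical partial sums referred to above are the truncations of the Fourier inversion integral over balls in frequency space (the notation here is chosen for illustration, not taken from the article):

S_R g(x) = (2π)^−3 ∫_{|ξ| ≤ R} ĝ(ξ) e^{i x·ξ} dξ

The Pinsky phenomenon is then the failure of S_R g(0) to converge as R → ∞ for the indicator function g described above, even though S_R g(x) converges to g(x) elsewhere.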
Generalizations
This prototype example can be suitably generalized to Fourier integral expansions in higher dimensions, both in Euclidean space and other non-compact rank-one symmetric spaces.
Also related are eigenfunction expansions on a geodesic ball in a rank-one symmetric space, but one must consider boundary conditions. Pinsky and others also present some results on the asymptotic behavior of the Fejér approximation in one dimension, inspired by work of Bump, Persi Diaconis, and J. B. Keller.
References
Mathematics that describe the Pinsky phenomenon are available on pages 142 to 143, and generalizations on pages 143+, in the book Introduction to Fourier Analysis and Wavelets, by Mark A. Pinsky, 2002, Publisher: Thomson Brooks/Cole.
Real analysis
Fourier series | Pinsky phenomenon | [
"Mathematics"
] | 404 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
14,077,942 | https://en.wikipedia.org/wiki/Delaunay%20refinement | In mesh generation, Delaunay refinements are algorithms for mesh generation based on the principle of adding Steiner points to the geometry of an input to be meshed, in a way that causes the Delaunay triangulation or constrained Delaunay triangulation of the augmented input to meet the quality requirements of the meshing application. Delaunay refinement methods include methods by Chew and by Ruppert.
Chew's second algorithm
Chew's second algorithm takes a piecewise linear system (PLS) and returns a constrained Delaunay triangulation of only quality triangles where quality is defined by the minimum angle in a triangle. Developed by L. Paul Chew for meshing surfaces embedded in three-dimensional space, Chew's second algorithm has been adopted as a two-dimensional mesh generator due to practical advantages over Ruppert's algorithm in certain cases and is the default quality mesh generator implemented in the freely available Triangle package. Chew's second algorithm is guaranteed to terminate and produce a local feature size-graded meshes with minimum angle up to about 28.6 degrees.
The algorithm begins with a constrained Delaunay triangulation of the input vertices. At each step, the circumcenter of a poor-quality triangle is inserted into the triangulation with one exception: If the circumcenter lies on the opposite side of an input segment as the poor quality triangle, the midpoint of the segment is inserted. Moreover, any previously inserted circumcenters inside the diametral ball of the original segment (before it is split) are removed from the triangulation.
Circumcenter insertion is repeated until no poor-quality triangles exist.
Ruppert's algorithm
Ruppert's algorithm takes a planar straight-line graph (or in dimension higher than two a piecewise linear system) and returns a conforming Delaunay triangulation of only quality triangles. A triangle is considered poor-quality if it has a circumradius to shortest edge ratio larger than some prescribed threshold.
Discovered by Jim Ruppert in the early 1990s,
"Ruppert's algorithm for two-dimensional quality mesh generation is perhaps the first theoretically guaranteed meshing algorithm to be truly satisfactory in practice."
Motivation
When doing computer simulations such as computational fluid dynamics, one starts with a model such as a 2D outline of a wing section.
The input to a 2D finite element method needs to be in the form of triangles that fill all space, and each triangle to be filled with one kind of material – in this example, either "air" or "wing".
Long, skinny triangles cannot be simulated accurately.
The simulation time is generally proportional to the number of triangles, and so one wants to minimize the number of triangles, while still using enough triangles to give reasonably accurate results – typically by using an unstructured grid.
The computer uses Ruppert's algorithm (or some similar meshing algorithm) to convert the polygonal model into triangles suitable for the finite element method.
Algorithm
The algorithm begins with a Delaunay triangulation of the input vertices and then consists of two main operations.
The midpoint of a segment with non-empty diametral circles is inserted into the triangulation.
The circumcenter of a poor-quality triangle is inserted into the triangulation, unless this circumcenter lies in the diametral circle of some segment. In this case, the encroached segment is split instead.
These operations are repeated until no poor-quality triangles exist and all segments are not encroached.
Pseudocode
function Ruppert(points, segments, threshold) is
    T := DelaunayTriangulation(points)
    Q := the set of encroached segments and poor quality triangles
    while Q is not empty:                      // The main loop
        if Q contains a segment s:
            insert the midpoint of s into T
        else Q contains poor quality triangle t:
            if the circumcenter of t encroaches a segment s:
                add s to Q;
            else:
                insert the circumcenter of t into T
            end if
        end if
        update Q
    end while
    return T
end Ruppert.
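The key geometric predicate in the pseudocode above is the "encroachment" test: a point encroaches a segment if it lies strictly inside the segment's diametral circle (the circle whose diameter is the segment). The helper below is a small illustrative sketch of that test, written here for clarity; it is not code from the Triangle package or from Ruppert's paper.

```python
import math

def is_encroached(p, a, b):
    """Return True if point p lies strictly inside the diametral circle of segment ab."""
    cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0   # circle centre = segment midpoint
    radius = math.dist(a, b) / 2.0                       # circle radius = half the segment length
    return math.dist(p, (cx, cy)) < radius

# Example: a point close to the unit segment encroaches it, a more distant one does not.
print(is_encroached((0.5, 0.4), (0.0, 0.0), (1.0, 0.0)))   # True
print(is_encroached((0.5, 0.6), (0.0, 0.0), (1.0, 0.0)))   # False
```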
Practical usage
Without modification Ruppert's algorithm is guaranteed to terminate and generate a quality mesh for non-acute input and any poor-quality threshold less than about 20.7 degrees. To relax these restrictions various small improvements have been made. By relaxing the quality requirement near small input angles, the algorithm can be extended to handle any straight-line input. Curved input can also be meshed using similar techniques.
Ruppert's algorithm can be naturally extended to three dimensions, however its output guarantees are somewhat weaker due to the sliver type tetrahedron.
An extension of Ruppert's algorithm in two dimensions is implemented in the freely available Triangle package. Two variants of Ruppert's algorithm in this package are guaranteed to terminate for a poor-quality threshold of about 26.5 degrees. In practice these algorithms are successful for poor-quality thresholds over 30 degrees. However, examples are known which cause the algorithm to fail with a threshold greater than 29.06 degrees.
See also
Local feature size
Polygon mesh
TetGen
Voronoi diagram
References
Further reading
Mesh generation
Triangulation (geometry)
Articles containing video clips | Delaunay refinement | [
"Physics",
"Mathematics"
] | 1,098 | [
"Triangulation (geometry)",
"Mesh generation",
"Tessellation",
"Planar graphs",
"Planes (geometry)",
"Symmetry"
] |
14,087,255 | https://en.wikipedia.org/wiki/Bcl-2-like%20protein%201 | Bcl-2-like protein 1 is a protein encoded in humans by the BCL2L1 gene. Through alternative splicing, the gene encodes both of the human proteins Bcl-xL and Bcl-xS.
Function
The protein encoded by this gene belongs to the Bcl-2 protein family. Bcl-2 family members form hetero- or homodimers and act as anti- or pro-apoptotic regulators that are involved in a wide variety of cellular activities. The proteins encoded by this gene are located at the outer mitochondrial membrane, and have been shown to regulate outer mitochondrial membrane channel (voltage-dependent anion channels (VDACs) opening. VDACs regulate mitochondrial membrane potential, and thus controls the production of reactive oxygen species and release of cytochrome C by mitochondria, both of which are the potent inducers of cell apoptosis. Two alternatively spliced transcript variants, which encode distinct isoforms, have been reported. The longer isoform (Bcl-xL) acts as an apoptotic inhibitor and the shorter form (Bcl-xS) acts as an apoptotic activator.
Interactions
BCL2-like 1 (gene) has been shown to interact with:
APAF1,
BAK1,
BCAP31,
BCL2L11,
BNIP3,
BNIPL,
BAD,
BAX,
BIK,
Bcl-2,
HRK,
IKZF3,
Noxa,
PPP1CA,
PSEN2
RAD9A,
RTN1,
RTN4, and
VDAC1.
References
Further reading
External links
Proteins
Apoptosis | Bcl-2-like protein 1 | [
"Chemistry"
] | 350 | [
"Biomolecules by chemical classification",
"Signal transduction",
"Apoptosis",
"Molecular biology",
"Proteins"
] |
14,087,640 | https://en.wikipedia.org/wiki/Centroidal%20Voronoi%20tessellation | In geometry, a centroidal Voronoi tessellation (CVT) is a special type of Voronoi tessellation in which the generating point of each Voronoi cell is also its centroid (center of mass). It can be viewed as an optimal partition corresponding to an optimal distribution of generators. A number of algorithms can be used to generate centroidal Voronoi tessellations, including Lloyd's algorithm for K-means clustering or Quasi-Newton methods like BFGS.
Proofs
Gersho's conjecture, proven for one and two dimensions, says that "asymptotically speaking, all cells of the optimal CVT, while forming a tessellation, are congruent to a basic cell which depends on the dimension."
In two dimensions, the basic cell for the optimal CVT is a regular hexagon as it is proven to be the most dense packing of circles in 2D Euclidean space.
Its three dimensional equivalent is the rhombic dodecahedral honeycomb, derived from the most dense packing of spheres in 3D Euclidean space.
Applications
Centroidal Voronoi tessellations are useful in data compression, optimal quadrature, optimal quantization, clustering, and optimal mesh generation.
A weighted centroidal Voronoi diagram is a CVT in which each centroid is weighted according to a certain function. For example, a grayscale image can be used as a density function to weight the points of a CVT, as a way to create digital stippling.
Occurrence in nature
Many patterns seen in nature are closely approximated by a centroidal Voronoi tessellation. Examples of this include the Giant's Causeway, the cells of the cornea, and the breeding pits of the male tilapia.
References
Discrete geometry
Geometric algorithms
Diagrams | Centroidal Voronoi tessellation | [
"Mathematics"
] | 373 | [
"Discrete geometry",
"Discrete mathematics"
] |
14,088,354 | https://en.wikipedia.org/wiki/Journal%20of%20Physics%20G | Journal of Physics G: Nuclear and Particle Physics is a peer-reviewed journal that publishes theoretical and experimental research into nuclear physics, particle physics and particle astrophysics, including all interface areas between these fields.
The editor-in-chief is Jacek Dobaczewski, University of York, England.
Scope
The journal publishes research articles on:
theoretical and experimental topics in the physics of elementary particles and fields;
intermediate-energy physics and nuclear physics;
experimental and theoretical research in particle, neutrino, and nuclear astrophysics;
research arising from all interface areas among these fields.
Research is published in the following formats:
Research Papers: Reports of original and high-quality research work;
Research Notes: Contributions from individuals (or small groups) within large collaborations, containing early results of analyses, detector development, simulations, etc. which might not otherwise be published in the wider literature;
Topical Reviews: Specially commissioned review articles on areas of current interest;
LabTalk: Article summaries written by the researchers themselves which introduce the findings, techniques, and possible applications of their research.
Abstracting and indexing information
The journal is indexed in INSPEC Information Services, ISI (Science Citation Index, SciSearch, ISI Alerting Services, Current Contents/Physical, Chemical and Earth Sciences), Article@INIST, and Chemical Abstracts.
References
External links
Journal of Physics G: Nuclear and Particle Physics website
IOP Publishing
IOP Publishing academic journals
Nuclear physics journals
Particle physics journals
Physics journals | Journal of Physics G | [
"Physics"
] | 300 | [
"Nuclear physics",
"Nuclear physics journals",
"Particle physics",
"Particle physics journals"
] |
7,785,512 | https://en.wikipedia.org/wiki/Preferential%20alignment | The preferential alignment is a criterion of an orientation of a molecule or atom. The preferential alignment can be related to the formation of the crystal structure of an amorphous structure.
Polymeric masses with high atomic distances can either be in an oriented or non oriented state. These higher distances (up to 1000 Å) form great regions, where the molecular chains may be preferentially oriented, something which can happen independent to the existence or not of crystallinity.
References
Crystallography | Preferential alignment | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 100 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
7,788,962 | https://en.wikipedia.org/wiki/Roman%20military%20engineering | Roman military engineering was of a scale and frequency far beyond that of its contemporaries. Indeed, military engineering was in many ways endemic in Roman military culture, as demonstrated by each Roman legionary having as part of his equipment a shovel, alongside his gladius (sword) and pila (javelins).
Workers, craftsmen, and artisans, known collectively as fabri, served in the Roman military. Descriptions of early Roman army structure (initially by phalanx, later by legion) attributed to king Servius Tullius state that two centuriae of fabri served under an officer, the praefectus fabrum.
Roman military engineering took both routine and extraordinary forms, the former a part of standard military procedure, and the latter of an extraordinary or reactive nature.
Proactive and routine military engineering
The Roman legionary fortified camp
Each Roman legion had a legionary fort as its permanent base. However, when on the march, particularly in enemy territory, the legion would construct a rudimentary fortified camp or castra, using only earth, turf and timber. Camp construction was the responsibility of engineering units to which specialists of many types belonged, officered by architecti (engineers), from a class of troops known as immunes who were excused from regular duties. These engineers would requisition manual labour from the soldiers at large as required. A legion could throw up a camp under enemy attack in a few hours. The names of the different types of camps apparently represent the amount of investment: tertia castra, quarta castra: "a camp of three days", "four days", etc.
Bridges
The engineers built bridges from timber and stone. Some Roman stone bridges survive. Stone bridges were made possible by the innovative use of keystone arches. One notable example was Julius Caesar's Bridge over the Rhine River. This bridge was completed in only ten days and is conservatively estimated to have been more than 100 m (328 feet) long. The construction was deliberately over-engineered for Caesar's stated purpose of impressing the Germanic tribes. Caesar writes in his War in Gaul that he rejected the idea of simply crossing in boats because it "would not be fitting for my own prestige and that of Rome" (at the time, he did not know that the Germanic tribes, with little knowledge of engineering, had already withdrawn from the area upon his arrival), and because a bridge would emphasize that Rome could travel wherever she wished. Caesar was able to cross over the completed bridge and explore the area uncontested, before crossing back over the subsequently dismantled bridge. Caesar related in War in Gaul that when he "sent messengers to the Sugambri to demand the surrender of those who had made war on me and on Gaul, they replied that the Rhine was the limit of Roman power". The bridge was intended to show otherwise.
Siege machines
Although most Roman siege engines were adaptations of earlier Greek designs, the Romans were adept at engineering them swiftly and efficiently, as well as innovating variations such as the repeating ballista. The 1st century BC army engineer Vitruvius describes in detail many of the Roman siege machines in his manuscript De architectura.
Roads
When invading enemy territories, the Roman army would often construct roads as it went, to allow swift reinforcement and resupply, or for easy retreat if necessary. Roman road-making skills were such that some survive today. Michael Grant credits the Roman building of the Via Appia with winning them the Second Samnite War.
Civilian engineering by military troops
When soldiers were not engaged in military campaigns, the legions had little to do, while costing the Roman state large sums of money. Thus, soldiers were involved in building civilian works to keep them well accustomed to hard physical labour and out of mischief, since it was believed that idle armies were a potential source of mutiny.
Soldiers were put to use in the construction of roads, town walls, the digging of canals, drainage projects, aqueducts, harbours, and even in the cultivation of vineyards.
Mining operations
Soldiers were used in mining operations such as building aqueducts needed for prospecting for metal veins, activities such as hydraulic mining, and building reservoirs to hold water at the minehead.
Reactive and extraordinary engineering
The knowledge and experience learned through routine engineering lent itself readily to extraordinary engineering projects. In such projects, Roman military engineering greatly exceeded that of its contemporaries in imagination and scope.
One notable project was the circumvallation of the entire city of Alesia and its Celtic leader Vercingetorix, within a massive double-wall – one inward-facing to prevent escape or offensive sallies, and one outward-facing to prevent attack by Celtic reinforcements. This wall is estimated to have been over long.
A second example is the massive ramp built using thousands of tons of stones and beaten earth up to the invested city of Masada during the Jewish Revolt. The siege works and the ramp remain in a remarkable state of preservation.
See also
Technological history of the Roman military
List of Roman pontoon bridges
Roman architecture
Roman aqueducts
Roman engineering
Notes
External links
Traianus - Technical investigation of Roman public works
Military engineering | Roman military engineering | [
"Engineering"
] | 1,039 | [
"Construction",
"Military engineering"
] |
7,792,469 | https://en.wikipedia.org/wiki/MRNA%20display | mRNA display is a display technique used for in vitro protein, and/or peptide evolution to create molecules that can bind to a desired target. The process results in translated peptides or proteins that are associated with their mRNA progenitor via a puromycin linkage. The complex then binds to an immobilized target in a selection step (affinity chromatography). The mRNA-protein fusions that bind well are then reverse transcribed to cDNA and their sequence amplified via a polymerase chain reaction. The result is a nucleotide sequence that encodes a peptide with high affinity for the molecule of interest.
Puromycin is an analogue of the 3’ end of a tyrosyl-tRNA: part of its structure mimics a molecule of adenosine, and the other part mimics a molecule of tyrosine. In contrast to the cleavable ester bond in a tyrosyl-tRNA, puromycin has a non-hydrolysable amide bond. As a result, puromycin interferes with translation and causes premature release of translation products.
All mRNA templates used for mRNA display technology have puromycin at their 3’ end. As translation proceeds, the ribosome moves along the mRNA template, and once it reaches the 3’ end of the template, the fused puromycin enters the ribosome’s A site and is incorporated into the nascent peptide. The mRNA-polypeptide fusion is then released from the ribosome (Figure 1).
To synthesize an mRNA-polypeptide fusion, the fused puromycin is not the only modification to the mRNA template. Oligonucleotides and other spacers need to be recruited along with the puromycin to provide flexibility and the proper length for the puromycin to enter the A site. Ideally, the linker between the 3’ end of an mRNA and the puromycin has to be flexible and long enough to allow the puromycin to enter the A site upon translation of the last codon. This enables the efficient production of high-quality, full-length mRNA-polypeptide fusions. Rihe Liu et al. optimized the 3’-puromycin oligonucleotide spacer. They reported that dA25 in combination with a Spacer 9 (Glen Research) and dAdCdC-puromycin at the 3’ terminus worked best for the fusion reaction. They found that linkers longer than 40 nucleotides or shorter than 16 nucleotides showed greatly reduced efficiency of fusion formation. In addition, when the sequence rUrU was present immediately adjacent to the puromycin, fusions did not form efficiently.
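As a rough illustration of these constraints, the sketch below checks a candidate linker design against the 16–40 nucleotide window and the rUrU caveat reported above. It is a hypothetical helper written for this article, not code from Liu et al.; the residue-list representation and the example linker (which omits the flexible spacer units) are simplifications.

```python
# Minimal sketch (hypothetical helper, not a published tool): check a candidate
# puromycin spacer linker against the design constraints described above.
# The linker is written 5'->3' as a list of residue names, with the terminal
# puromycin represented as "Puro".

def check_linker(residues):
    warnings = []
    if not residues or residues[-1] != "Puro":
        warnings.append("the linker should terminate in puromycin at its 3' end")
    n = len(residues) - 1  # nucleotide count, excluding the puromycin itself
    if not (16 <= n <= 40):
        warnings.append(f"{n} nt is outside the 16-40 nt window reported to "
                        "support efficient fusion formation")
    if residues[-3:-1] == ["rU", "rU"]:
        warnings.append("rUrU directly adjacent to the puromycin was reported "
                        "to impair fusion formation")
    return warnings

# A design resembling the optimized linker described above:
# 25 x dA, then (flexible spacer units omitted here) dA dC dC and puromycin.
candidate = ["dA"] * 25 + ["dA", "dC", "dC", "Puro"]
print(check_linker(candidate) or "no warnings")
```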
In addition to providing flexibility and length, the poly dA portion of the linker also allows further purification of the mRNA-polypeptide fusion because of its high affinity for dT cellulose resin. The mRNA-polypeptide fusions can be selected over immobilized selection targets for several rounds with increasing stringency. After each round of selection, non-binders are washed off and the library members that stay bound to the immobilized target are PCR amplified.
Method
The synthesis of an mRNA display library starts from the synthesis of a DNA library. A DNA library for any protein or small peptide of interest can be synthesized by solid-phase synthesis followed by PCR amplification. Usually, each member of this DNA library has a T7 RNA polymerase transcription site and a ribosomal binding site at the 5’ end. The T7 promoter region allows large-scale in vitro T7 transcription of the DNA library into an mRNA library, which provides the templates for the subsequent in vitro translation reaction. The ribosomal binding site in the 5’-untranslated region (5’ UTR) is designed according to the in vitro translation system to be used. There are two popular commercially available in vitro translation systems. One is the E. coli S30 Extract System (Promega), which requires a Shine-Dalgarno sequence in the 5’ UTR as a ribosomal binding site; the other is Red Nova Lysate (Novagen), which needs a ΔTMV ribosomal binding site.
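As a concrete illustration of this template layout, the sketch below assembles one library member in Python. The T7 promoter and Shine-Dalgarno sequences are the standard consensus motifs; the spacer sequences, the NNK codon scheme for the random region, and the helper function itself are illustrative assumptions rather than a published design.

```python
import random

# Illustrative assembly of one DNA library member for mRNA display.
# T7 promoter and Shine-Dalgarno are standard consensus motifs; the spacers and
# the NNK randomization scheme are common but arbitrary choices for this sketch.

T7_PROMOTER = "TAATACGACTCACTATAG"   # T7 RNA polymerase promoter consensus
SHINE_DALGARNO = "AGGAGG"            # ribosome binding site used with E. coli S30 extracts
START_CODON = "ATG"

def random_nnk_codon(rng):
    # NNK codons (K = G or T) cover all 20 amino acids while reducing stop codons.
    return rng.choice("ACGT") + rng.choice("ACGT") + rng.choice("GT")

def make_library_member(n_random_codons=10, seed=None):
    rng = random.Random(seed)
    random_region = "".join(random_nnk_codon(rng) for _ in range(n_random_codons))
    # The short spacers between elements are placeholders, not optimized sequences.
    return (T7_PROMOTER + "GGG" + SHINE_DALGARNO + "AACAAC"
            + START_CODON + random_region)

print(make_library_member(seed=1))
```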
Once the mRNA library is generated, it is Urea-PAGE purified and ligated, using T4 DNA ligase, to the DNA spacer linker containing puromycin at its 3’ end. In this ligation step, a piece of mRNA is joined to a single-stranded DNA with the help of T4 DNA ligase. This is not a standard T4 DNA ligase reaction, in which two pieces of double-stranded DNA are ligated together. To increase the yield of this special ligation, a single-stranded DNA splint may be used to aid the reaction. Written 5’ to 3’, the splint is the reverse complement of the junction: its 3’ portion is complementary to the 3’ end of the mRNA, and its 5’ portion is complementary to the 5’ end of the DNA spacer linker, which usually consists of poly dA nucleotides (Figure 2).
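The splint-design step can be sketched in a few lines of Python. The input sequences below are made-up placeholders, not sequences from any particular study; the only point is the base-pairing geometry across the ligation junction.

```python
# Sketch of designing the single-stranded DNA splint described above. Written
# 5'->3', the splint is the reverse complement of the ligation junction, so its
# 5' half (poly dT) anneals to the linker's poly-dA 5' end and its 3' half
# anneals to the 3' end of the mRNA. Input sequences are placeholders.

COMPLEMENT = str.maketrans("ACGTU", "TGCAA")

def reverse_complement_dna(seq):
    """DNA reverse complement of a DNA or RNA sequence (U pairs with A)."""
    return seq.upper().translate(COMPLEMENT)[::-1]

def design_splint(mrna_3prime, linker_5prime):
    junction = mrna_3prime + linker_5prime   # top strand across the ligation site
    return reverse_complement_dna(junction)  # splint = bottom strand, 5'->3'

mrna_tail = "GGCAGCGGCAGC"   # placeholder: last 12 nt at the mRNA 3' end
linker_head = "AAAAAAAAAA"   # placeholder: first 10 dA residues of the linker
print(design_splint(mrna_tail, linker_head))  # -> TTTTTTTTTTGCTGCCGCTGCC
```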
The ligated mRNA-DNA-puromycin library is translated in Red Nova Lysate (Novagen) or the E. coli S30 Extract System (Promega), resulting in polypeptides covalently linked in cis to their encoding mRNA. The in vitro translation can also be done in a PURE (protein synthesis using recombinant elements) system. PURE is an E. coli cell-free translation system in which only the essential translation components are present. Some components, such as amino acids and aminoacyl-tRNA synthetases (AARSs), can be omitted from the system and chemically acylated tRNAs added instead. It has been shown that some unnatural amino acids, such as N-methyl amino acids delivered on chemically acylated tRNAs, can be incorporated into peptides or mRNA-polypeptide fusions in a PURE system.
After translation, the single-stranded mRNA portion of each fusion is converted into an RNA/DNA heteroduplex by reverse transcriptase, which eliminates unwanted RNA secondary structure and renders the nucleic acid portion of the fusion more stable. This step is a standard reverse transcription reaction; for instance, it can be done using Superscript II (GIBCO-BRL) following the manufacturer’s protocol.
The mRNA/DNA-polypeptide fusions can be selected over immobilized selection targets for several rounds (Figure 3). There may be a relatively high background in the first few rounds of selection; this can be minimized by increasing selection stringency, for example by adjusting the salt concentration, the amount of detergent, and/or the temperature during the target/fusion binding step. Following binding selection, the library members that stay bound to the immobilized target are PCR amplified. The PCR amplification step enriches the population of the mRNA-display library that has higher affinity for the immobilized target. Error-prone PCR can also be performed between rounds of selection to further increase the diversity of the mRNA-display library and reduce background in the selection.
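A toy calculation makes the effect of repeated rounds concrete. All numbers in the sketch below (initial binder fraction, capture rate, background carry-over) are invented for illustration; real recoveries depend on the target, wash stringency, and other conditions.

```python
# Toy model of round-by-round enrichment in an mRNA display selection.
# Binders are captured with probability p_bind, non-binders carry over at a
# background rate p_background; PCR after each round restores the pool size,
# so only the binder *fraction* is tracked.

def simulate_selection(binder_frac, p_bind, p_background, rounds):
    fractions = [binder_frac]
    f = binder_frac
    for _ in range(rounds):
        kept_binders = f * p_bind
        kept_background = (1.0 - f) * p_background
        f = kept_binders / (kept_binders + kept_background)
        fractions.append(f)
    return fractions

# Example: 1 true binder per 10^9 sequences, 20% capture of binders per round,
# 0.01% carry-over of non-binders per round.
for i, f in enumerate(simulate_selection(1e-9, 0.20, 1e-4, rounds=6)):
    print(f"after round {i}: binder fraction ~ {f:.3e}")
```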
A less time-consuming protocol for mRNA display was recently published.
Advantages
Although there are many other molecular display technologies, such as phage display, bacterial display, yeast display, and ribosome display, mRNA display technology has several advantages over them. The first three display systems listed express polypeptides or proteins on the surface of the respective microorganism, and the accompanying coding information for each polypeptide or protein is retrievable from the microorganism’s genome. However, the library size for these three in vivo display systems is limited by the transformation efficiency of each organism. For example, the library size for phage and bacterial display is limited to about 1-10 × 10^9 different members, and the library size for yeast display is smaller still. Moreover, these cell-based display systems only allow the screening and enrichment of peptides/proteins containing natural amino acids. In contrast, mRNA display and ribosome display are in vitro selection methods that allow library sizes of up to about 10^15 different members. The large library size increases the probability of selecting very rare sequences and also improves the diversity of the selected sequences. In addition, in vitro selection methods remove unwanted selection pressures, such as poor protein expression and rapid protein degradation, which may otherwise reduce the diversity of the selected sequences. Finally, in vitro selection methods allow the application of in vitro mutagenesis and recombination techniques throughout the selection process.
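The practical significance of the difference in library size can be shown with a back-of-the-envelope calculation. The sketch below assumes each library member is an independent uniform draw from the 20^L possible peptides of length L; this is a deliberately simplified model used only for illustration.

```python
import math

# Expected fraction of a random-peptide sequence space sampled at least once,
# assuming each library member is an independent uniform draw from 20**L
# possible sequences (a deliberately simplified model).

def coverage(library_size, peptide_length):
    space = 20.0 ** peptide_length
    return 1.0 - math.exp(-library_size / space)

for name, size in [("phage/bacterial display, ~10^9", 1e9),
                   ("mRNA display, ~10^15", 1e15)]:
    for L in (6, 8, 10):
        print(f"{name}: {L}-mer space = {20**L:.2e}, "
              f"coverage ~ {coverage(size, L):.2%}")
```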
Although both ribosome display and mRNA display are in vitro selection methods, mRNA display has some advantages over ribosome display. mRNA display utilizes covalent mRNA-polypeptide complexes linked through puromycin, whereas ribosome display utilizes stalled, noncovalent ribosome-mRNA-polypeptide complexes. Because the ribosome-mRNA-polypeptide complexes are noncovalent, the selection stringency in ribosome display is limited by the need to keep the complexes intact, which can make it difficult to reduce background binding during the selection cycle. Also, the polypeptides under selection in a ribosome display system are attached to an enormous rRNA-protein complex, the ribosome, which has a molecular weight of more than 2,000,000 Da. There may be unpredictable interactions between the selection target and the ribosome, which can lead to a loss of potential binders during the selection cycle. In contrast, the puromycin DNA spacer linker used in mRNA display technology is much smaller than a ribosome and is less likely to interact with an immobilized selection target. Thus, mRNA display technology is more likely to give less biased results.
Application
In 1997, Roberts and Szostak showed that fusions between a synthetic mRNA and its encoded myc epitope could be enriched from a pool of random sequence mRNA-polypeptide fusions by immunoprecipitation.
Nine years later, Fukuda and colleagues chose the mRNA display method for in vitro evolution of single-chain Fv (scFv) antibody fragments. They selected six different scFv mutants with five consensus mutations. Kinetic analysis of these mutants showed that their antigen specificity remained similar to that of the wild type. However, two of the five consensus mutations were within the complementarity-determining regions (CDRs), and the authors concluded that mRNA display has the potential for rapid artificial evolution of high-affinity diagnostic and therapeutic antibodies by optimizing their CDRs.
Roberts and coworkers have demonstrated that unnatural peptide oligomers consisting of N-substituted amino acids can be synthesized as mRNA-polypeptide fusions. Peptides containing N-substituted amino acids are associated with good proteolytic stability and improved pharmacokinetic properties. This work indicates that mRNA display technology has the potential to select protease-resistant, drug-like peptides for therapeutic use.
See also
Ribosome display
Protein engineering
Protein–protein interaction screening
References
Molecular biology
Display techniques | MRNA display | [
"Chemistry",
"Biology"
] | 2,353 | [
"Biochemistry methods",
"Biochemistry",
"Display techniques",
"Molecular biology"
] |