Fourier-transform spectroscopy (FTS) is a measurement technique whereby spectra are collected based on measurements of the coherence of a radiative source, using time-domain or space-domain measurements of the radiation, electromagnetic or not. It can be applied to a variety of types of spectroscopy including optical spectroscopy, infrared spectroscopy (FTIR, FT-NIRS), nuclear magnetic resonance (NMR) and magnetic resonance spectroscopic imaging (MRSI), mass spectrometry and electron spin resonance spectroscopy.
There are several methods for measuring the temporal coherence of the light (see: field-autocorrelation), including the continuous-wave and the pulsed Fourier-transform spectrometer or Fourier-transform spectrograph.
The term "Fourier-transform spectroscopy" reflects the fact that in all these techniques a Fourier transform is required to turn the raw data into the actual spectrum; in many of the cases in optics involving interferometers, the technique relies on the Wiener–Khinchin theorem.
== Conceptual introduction ==
=== Measuring an emission spectrum ===
One of the most basic tasks in spectroscopy is to characterize the spectrum of a light source: how much light is emitted at each different wavelength. The most straightforward way to measure a spectrum is to pass the light through a monochromator, an instrument that blocks all of the light except the light at a certain wavelength (the un-blocked wavelength is set by a knob on the monochromator). Then the intensity of this remaining (single-wavelength) light is measured. The measured intensity directly indicates how much light is emitted at that wavelength. By varying the monochromator's wavelength setting, the full spectrum can be measured. This simple scheme in fact describes how some spectrometers work.
Fourier-transform spectroscopy is a less intuitive way to get the same information. Rather than allowing only one wavelength at a time to pass through to the detector, this technique lets through a beam containing many different wavelengths of light at once, and measures the total beam intensity. Next, the beam is modified to contain a different combination of wavelengths, giving a second data point. This process is repeated many times. Afterwards, a computer takes all this data and works backwards to infer how much light there is at each wavelength.
To be more specific, between the light source and the detector, there is a certain configuration of mirrors that allows some wavelengths to pass through but blocks others (due to wave interference). The beam is modified for each new data point by moving one of the mirrors; this changes the set of wavelengths that can pass through.
As mentioned, computer processing is required to turn the raw data (light intensity for each mirror position) into the desired result (light intensity for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform (hence the name, "Fourier-transform spectroscopy"). The raw data is sometimes called an "interferogram". Because of the existing computer equipment requirements, and the ability of light to analyze very small amounts of substance, it is often beneficial to automate many aspects of the sample preparation. The sample can be better preserved and the results are much easier to replicate. Both of these benefits are important, for instance, in testing situations that may later involve legal action, such as those involving drug specimens.
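As a toy illustration of this "working backwards" step (a NumPy sketch; the mirror positions, wavenumbers, and amplitudes are invented for the example), the Fourier transform of a simulated interferogram directly reveals which wavelengths the beam contained:

```python
import numpy as np

# Simulated interferogram: total intensity recorded at many mirror
# positions, for a hypothetical source containing two wavenumbers.
x = np.linspace(0.0, 1.0, 2048)              # mirror displacement (arb. units)
nu1, nu2 = 50.0, 120.0                       # wavenumbers present in the source
interferogram = (1 + np.cos(2*np.pi*nu1*x)) + 0.5*(1 + np.cos(2*np.pi*nu2*x))

# "Working backwards": a Fourier transform of the recorded intensities
# recovers how much light is present at each wavenumber.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(x.size, d=x[1] - x[0])

# The two strongest spectral points sit at the two source wavenumbers.
peaks = sorted(wavenumbers[np.argsort(spectrum)[-2:]])
print(peaks)   # close to [50.0, 120.0]
```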
=== Measuring an absorption spectrum ===
The method of Fourier-transform spectroscopy can also be used for absorption spectroscopy. The primary example is "FTIR Spectroscopy", a common technique in chemistry.
In general, the goal of absorption spectroscopy is to measure how well a sample absorbs or transmits light at each different wavelength. Although absorption spectroscopy and emission spectroscopy are different in principle, they are closely related in practice; any technique for emission spectroscopy can also be used for absorption spectroscopy. First, the emission spectrum of a broadband lamp is measured (this is called the "background spectrum"). Second, the emission spectrum of the same lamp shining through the sample is measured (this is called the "sample spectrum"). The sample will absorb some of the light, causing the spectra to be different. The ratio of the "sample spectrum" to the "background spectrum" is directly related to the sample's absorption spectrum.
Accordingly, the technique of "Fourier-transform spectroscopy" can be used both for measuring emission spectra (for example, the emission spectrum of a star), and absorption spectra (for example, the absorption spectrum of a liquid).
== Continuous-wave Michelson or Fourier-transform spectrograph ==
The Michelson spectrograph is similar to the instrument used in the Michelson–Morley experiment. Light from the source is split into two beams by a half-silvered mirror; one beam is reflected off a fixed mirror and one off a movable mirror, which introduces a time delay. The Fourier-transform spectrometer is simply a Michelson interferometer with a movable mirror. The beams interfere, allowing the temporal coherence of the light to be measured at each different time-delay setting, effectively converting the time domain into a spatial coordinate. By measuring the signal at many discrete positions of the movable mirror, the spectrum can be reconstructed using a Fourier transform of the temporal coherence of the light. Michelson spectrographs are capable of very high spectral resolution observations of very bright sources.
The Michelson or Fourier-transform spectrograph was popular for infra-red applications at a time when infra-red astronomy only had single-pixel detectors. Imaging Michelson spectrometers are a possibility, but in general have been supplanted by imaging Fabry–Pérot instruments, which are easier to construct.
=== Extracting the spectrum ===
The intensity as a function of the path length difference (also denoted as retardation) in the interferometer {\displaystyle p} and wavenumber {\displaystyle {\tilde {\nu }}=1/\lambda } is

{\displaystyle I(p,{\tilde {\nu }})=I({\tilde {\nu }})[1+\cos \left(2\pi {\tilde {\nu }}p\right)],}

where {\displaystyle I({\tilde {\nu }})} is the spectrum to be determined. Note that it is not necessary for {\displaystyle I({\tilde {\nu }})} to be modulated by the sample before the interferometer. In fact, most FTIR spectrometers place the sample after the interferometer in the optical path. The total intensity at the detector is

{\displaystyle {\begin{aligned}I(p)&=\int _{0}^{\infty }I(p,{\tilde {\nu }})\,d{\tilde {\nu }}\\&=\int _{0}^{\infty }I({\tilde {\nu }})[1+\cos(2\pi {\tilde {\nu }}p)]\,d{\tilde {\nu }}.\end{aligned}}}

This is just a Fourier cosine transform. The inverse gives us our desired result in terms of the measured quantity {\displaystyle I(p)}:

{\displaystyle I({\tilde {\nu }})=4\int _{0}^{\infty }\left[I(p)-{\frac {1}{2}}I(p=0)\right]\cos(2\pi {\tilde {\nu }}p)\,dp.}
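These relations can be checked numerically (a NumPy sketch; the Gaussian test spectrum and the integration grids are invented for the example): build I(p) from a known spectrum with the forward relation, then recover that spectrum with the inverse cosine transform.

```python
import numpy as np

# Invented test spectrum: a Gaussian band centred at wavenumber 100 (arb. units).
nu = np.linspace(0.0, 300.0, 1501)          # wavenumber grid, cm^-1
S = np.exp(-0.5*((nu - 100.0)/5.0)**2)      # I(nu~), the spectrum to recover

# Forward relation: I(p) = integral of I(nu~)[1 + cos(2 pi nu~ p)] d(nu~)
p = np.linspace(0.0, 2.0, 2001)             # retardation grid, cm
phase = 2*np.pi*np.outer(p, nu)             # shape (len(p), len(nu))
I_p = np.trapz(S*(1 + np.cos(phase)), nu, axis=1)

# Inverse: I(nu~) = 4 * integral of [I(p) - I(0)/2] cos(2 pi nu~ p) dp
recovered = 4*np.trapz((I_p - 0.5*I_p[0])*np.cos(phase.T), p, axis=1)

# The recovery matches the original Gaussian up to small discretization
# and truncation error (the integrals are over finite grids, not to infinity).
err = np.max(np.abs(recovered - S))
```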
== Pulsed Fourier-transform spectrometer ==
A pulsed Fourier-transform spectrometer does not employ transmittance techniques. In the most general description of pulsed FT spectrometry, a sample is exposed to an energizing event which causes a periodic response. The frequency of the periodic response, as governed by the field conditions in the spectrometer, is indicative of the measured properties of the analyte.
=== Examples of pulsed Fourier-transform spectrometry ===
In magnetic spectroscopy (EPR, NMR), a microwave pulse (EPR) or a radio frequency pulse (NMR) in a strong ambient magnetic field is used as the energizing event. This turns the magnetic particles at an angle to the ambient field, resulting in gyration. The gyrating spins then induce a periodic current in a detector coil. Each spin exhibits a characteristic frequency of gyration (relative to the field strength) which reveals information about the analyte.
In Fourier-transform mass spectrometry, the energizing event is the injection of the charged sample into the strong electromagnetic field of a cyclotron. These particles travel in circles, inducing a current in a fixed coil on one point in their circle. Each traveling particle exhibits a characteristic cyclotron frequency-field ratio revealing the masses in the sample.
=== Free induction decay ===
Pulsed FT spectrometry has the advantage of requiring a single, time-dependent measurement that can easily deconvolute a set of similar but distinct signals. The resulting composite signal is called a free induction decay, because typically the signal decays due to inhomogeneities in sample frequency, or simply unrecoverable loss of signal due to entropic loss of the property being measured.
=== Nanoscale spectroscopy with pulsed sources ===
Pulsed sources allow for the utilization of Fourier-transform spectroscopy principles in scanning near-field optical microscopy techniques. Particularly in nano-FTIR, where the scattering from a sharp probe-tip is used to perform spectroscopy of samples with nanoscale spatial resolution, a high-power illumination from pulsed infrared lasers makes up for a relatively small scattering efficiency (often < 1%) of the probe.
== Stationary forms of Fourier-transform spectrometers ==
In addition to the scanning forms of Fourier-transform spectrometers, there are a number of stationary or self-scanned forms. While the analysis of the interferometric output is similar to that of the typical scanning interferometer, significant differences apply, as shown in the published analyses. Some stationary forms retain the Fellgett multiplex advantage, and their use in the spectral region where detector noise limits apply is similar to the scanning forms of the FTS. In the photon-noise limited region, the application of stationary interferometers is dictated by specific consideration for the spectral region and the application.
== Fellgett advantage ==
One of the most important advantages of Fourier-transform spectroscopy was shown by P. B. Fellgett, an early advocate of the method. The Fellgett advantage, also known as the multiplex principle, states that when a spectrum is obtained under conditions where the measurement noise is dominated by detector noise (which is independent of the power of radiation incident on the detector), a multiplex spectrometer such as a Fourier-transform spectrometer will produce a relative improvement in signal-to-noise ratio, compared to an equivalent scanning monochromator, of the order of the square root of m, where m is the number of sample points comprising the spectrum. However, if the detector is shot-noise dominated, the noise is proportional to the square root of the power, so for a broad boxcar spectrum (continuous broadband source) the noise is also proportional to the square root of m, precisely offsetting the Fellgett advantage. For line emission sources the situation is even worse and there is a distinct "multiplex disadvantage", as the shot noise from a strong emission component will overwhelm the fainter components of the spectrum. Shot noise is the main reason Fourier-transform spectrometry was never popular for ultraviolet (UV) and visible spectra.
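The square-root-of-m scaling under detector-limited noise can be illustrated with a toy Monte-Carlo simulation (NumPy; all numbers are invented, and an idealized ±1 Hadamard encoding stands in for the interferometer's cosine modulation, which cannot realize negative weights, so this is a statistical sketch rather than a model of a real instrument):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 64                        # number of spectral elements
sigma = 1.0                   # detector noise per reading, independent of flux
s = rng.uniform(1, 2, m)      # invented spectrum

# Sylvester construction of a 64x64 Hadamard (+1/-1) encoding matrix.
H = np.array([[1]])
while H.shape[0] < m:
    H = np.block([[H, H], [H, -H]])

trials = 2000
err_scan, err_mux = [], []
for _ in range(trials):
    # Scanning monochromator: one element per reading, noise adds directly.
    s_scan = s + rng.normal(0, sigma, m)
    # Multiplexed: each reading observes a combination of all elements.
    y = H @ s + rng.normal(0, sigma, m)
    s_mux = (H.T @ y) / m     # exact inverse, since H H^T = m I
    err_scan.append(np.std(s_scan - s))
    err_mux.append(np.std(s_mux - s))

ratio = np.mean(err_scan) / np.mean(err_mux)
print(ratio)   # close to sqrt(64) = 8
```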
== See also ==
== References ==
== External links ==
Description of how a Fourier transform spectrometer works
The Michelson or Fourier transform spectrograph
Internet Journal of Vibrational Spectroscopy – How FTIR works
Fourier Transform Spectroscopy Topical Meeting and Tabletop Exhibit
Fourier transform infrared spectroscopy (FTIR) is a technique used to obtain an infrared spectrum of absorption or emission of a solid, liquid, or gas. An FTIR spectrometer simultaneously collects high-resolution spectral data over a wide spectral range. This confers a significant advantage over a dispersive spectrometer, which measures intensity over a narrow range of wavelengths at a time.
The term Fourier transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical process) is required to convert the raw data into the actual spectrum.
== Conceptual introduction ==
The goal of absorption spectroscopy techniques (FTIR, ultraviolet-visible ("UV-vis") spectroscopy, etc.) is to measure how much light a sample absorbs at each wavelength. The most straightforward way to do this, the "dispersive spectroscopy" technique, is to shine a monochromatic light beam at a sample, measure how much of the light is absorbed, and repeat for each different wavelength. (This is how some UV–vis spectrometers work, for example.)
Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a monochromatic beam of light (a beam composed of only a single wavelength) at the sample, this technique shines a beam containing many frequencies of light at once and measures how much of that beam is absorbed by the sample. Next, the beam is modified to contain a different combination of frequencies, giving a second data point. This process is rapidly repeated many times over a short time span. Afterwards, a computer takes all this data and works backward to infer what the absorption is at each wavelength.
The beam described above is generated by starting with a broadband light source—one containing the full spectrum of wavelengths to be measured. The light shines into a Michelson interferometer—a certain configuration of mirrors, one of which is moved by a motor. As this mirror moves, each wavelength of light in the beam is periodically blocked, transmitted, blocked, transmitted, by the interferometer, due to wave interference. Different wavelengths are modulated at different rates, so that at each moment or mirror position the beam coming out of the interferometer has a different spectrum.
As mentioned, computer processing is required to turn the raw data (light absorption for each mirror position) into the desired result (light absorption for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform. The Fourier transform converts one domain (in this case displacement of the mirror in cm) into its inverse domain (wavenumbers in cm−1). The raw data is called an "interferogram".
== History ==
The first low-cost spectrophotometer capable of recording an infrared spectrum was the Perkin-Elmer Infracord produced in 1957. This instrument covered the wavelength range from 2.5 μm to 15 μm (wavenumber range 4,000 cm−1 to 660 cm−1). The lower wavelength limit was chosen to encompass the highest known vibration frequency due to a fundamental molecular vibration. The upper limit was imposed by the fact that the dispersing element was a prism made from a single crystal of rock-salt (sodium chloride), which becomes opaque at wavelengths longer than about 15 μm; this spectral region became known as the rock-salt region. Later instruments used potassium bromide prisms to extend the range to 25 μm (400 cm−1) and caesium iodide prisms to extend it to 50 μm (200 cm−1).
The region beyond 50 μm (200 cm−1) became known as the far-infrared region; at very long wavelengths it merges into the microwave region. Measurements in the far infrared needed the development of accurately ruled diffraction gratings to replace the prisms as dispersing elements, since salt crystals are opaque in this region. More sensitive detectors than the bolometer were required because of the low energy of the radiation. One such was the Golay detector. An additional issue is the need to exclude atmospheric water vapour, because water vapour has an intense pure rotational spectrum in this region. Far-infrared spectrophotometers were cumbersome, slow and expensive.
The advantages of the Michelson interferometer were well known, but considerable technical difficulties had to be overcome before a commercial instrument could be built. An electronic computer was also needed to perform the required Fourier transform, and this only became practicable with the advent of minicomputers, such as the PDP-8, which became available in 1965. Digilab pioneered the world's first commercial FTIR spectrometer (Model FTS-14) in 1969.
Digilab FTIRs are now a part of Agilent Technologies's molecular product line after Agilent acquired spectroscopy business from Varian.
== Michelson interferometer ==
In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a black-body radiator, is collimated and directed to a beam splitter. Ideally 50% of the light is reflected towards the fixed mirror and 50% is transmitted towards the moving mirror. Light is reflected from the two mirrors back to the beam splitter, and some fraction of the original light passes into the sample compartment. There, the light is focused on the sample. On leaving the sample compartment the light is refocused on to the detector. The difference in optical path length between the two arms of the interferometer is known as the retardation or optical path difference (OPD). An interferogram is obtained by varying the OPD and recording the signal from the detector for various values of the OPD. The form of the interferogram when no sample is present depends on factors such as the variation of source intensity and splitter efficiency with wavelength. This results in a maximum at zero OPD, when there is constructive interference at all wavelengths, followed by a series of "wiggles". The position of zero OPD is determined accurately by finding the point of maximum intensity in the interferogram. When a sample is present the background interferogram is modulated by the presence of absorption bands in the sample.
Commercial spectrometers use Michelson interferometers with a variety of scanning mechanisms to generate the path difference. Common to all these arrangements is the need to ensure that the two beams recombine exactly as the system scans. The simplest systems have a plane mirror that moves linearly to vary the path of one beam. In this arrangement the moving mirror must not tilt or wobble as this would affect how the beams overlap as they recombine. Some systems incorporate a compensating mechanism that automatically adjusts the orientation of one mirror to maintain the alignment. Arrangements that avoid this problem include using cube corner reflectors instead of plane mirrors as these have the property of returning any incident beam in a parallel direction regardless of orientation.
Systems where the path difference is generated by a rotary movement have proved very successful. One common system incorporates a pair of parallel mirrors in one beam that can be rotated to vary the path without displacing the returning beam. Another is the double pendulum design where the path in one arm of the interferometer increases as the path in the other decreases.
A quite different approach involves moving a wedge of an IR-transparent material such as KBr into one of the beams. Increasing the thickness of KBr in the beam increases the optical path because the refractive index is higher than that of air. One limitation of this approach is that the variation of refractive index over the wavelength range limits the accuracy of the wavelength calibration.
== Measuring and processing the interferogram ==
The interferogram has to be measured from zero path difference to a maximum length that depends on the resolution required. In practice the scan can be on either side of zero resulting in a double-sided interferogram. Mechanical design limitations may mean that for the highest resolution the scan runs to the maximum OPD on one side of zero only.
The interferogram is converted to a spectrum by Fourier transformation. This requires it to be stored in digital form as a series of values at equal intervals of the path difference between the two beams. To measure the path difference a laser beam is sent through the interferometer, generating a sinusoidal signal where the separation between successive maxima is equal to the wavelength of the laser (typically a 633 nm HeNe laser is used). This can trigger an analog-to-digital converter to measure the IR signal each time the laser signal passes through zero. Alternatively, the laser and IR signals can be measured synchronously at smaller intervals with the IR signal at points corresponding to the laser signal zero crossing being determined by interpolation. This approach allows the use of analog-to-digital converters that are more accurate and precise than converters that can be triggered, resulting in lower noise.
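The interpolation scheme can be sketched as follows (NumPy; the signals, frequencies, and sampling rate are invented for the example): both the laser fringe and the IR signal are digitized on a common clock, the laser's zero crossings are located by linear interpolation, and the IR signal is resampled at those instants, giving points equally spaced in optical path difference.

```python
import numpy as np

# Common-clock samples (invented units): laser fringe and IR interferogram.
t = np.linspace(0.0, 1.0, 20000)
laser = np.sin(2*np.pi*500*t)                # sinusoidal reference fringe
ir = np.cos(2*np.pi*180*t) * np.exp(-3*t)    # stand-in for the IR interferogram

# Locate the laser zero crossings, refining each by linear interpolation
# between the two samples that bracket the sign change.
idx = np.where(np.sign(laser[:-1]) != np.sign(laser[1:]))[0]
frac = laser[idx] / (laser[idx] - laser[idx + 1])
t_zero = t[idx] + frac * (t[1] - t[0])

# Resample the IR signal at the zero-crossing instants: these points are
# equally spaced in optical path difference (half a laser wavelength apart).
ir_resampled = np.interp(t_zero, t, ir)
```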
The result of Fourier transformation is a spectrum of the signal at a series of discrete wavelengths. The range of wavelengths that can be used in the calculation is limited by the separation of the data points in the interferogram. The shortest wavelength that can be recognized is twice the separation between these data points. For example, with one point per wavelength of a HeNe reference laser at 0.633 μm (15800 cm−1) the shortest wavelength would be 1.266 μm (7900 cm−1). Because of aliasing, any energy at shorter wavelengths would be interpreted as coming from longer wavelengths and so has to be minimized optically or electronically. The spectral resolution, i.e. the separation between wavelengths that can be distinguished, is determined by the maximum OPD. The wavelengths used in calculating the Fourier transform are such that an exact number of wavelengths fit into the length of the interferogram from zero to the maximum OPD as this makes their contributions orthogonal. This results in a spectrum with points separated by equal frequency intervals.
For a maximum path difference d, adjacent wavelengths λ1 and λ2 will have n and (n + 1) cycles, respectively, in the interferogram. The corresponding wavenumbers are ν1 = n/d and ν2 = (n + 1)/d, so their separation is ν2 − ν1 = 1/d.
The separation is the inverse of the maximum OPD. For example, a maximum OPD of 2 cm results in a separation of 0.5 cm−1. This is the spectral resolution in the sense that the value at one point is independent of the values at adjacent points. Most instruments can be operated at different resolutions by choosing different OPDs. Instruments for routine analyses typically have a best resolution of around 0.5 cm−1, while spectrometers have been built with resolutions as high as 0.001 cm−1, corresponding to a maximum OPD of 10 m. The point in the interferogram corresponding to zero path difference has to be identified, commonly by assuming it is where the maximum signal occurs. This so-called centerburst is not always symmetrical in real-world spectrometers, so a phase correction may have to be calculated. The interferogram signal decays as the path difference increases, the rate of decay being inversely related to the width of features in the spectrum. If the OPD is not large enough to allow the interferogram signal to decay to a negligible level there will be unwanted oscillations or sidelobes associated with the features in the resulting spectrum. To reduce these sidelobes the interferogram is usually multiplied by a function that approaches zero at the maximum OPD. This so-called apodization reduces the amplitude of any sidelobes and also the noise level at the expense of some reduction in resolution.
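The sidelobe suppression from apodization can be demonstrated numerically (a NumPy sketch; the line position, scan length, and triangular apodization function are invented for the example): a truncated cosine interferogram produces sinc-like sidelobes, which a triangular weighting strongly reduces.

```python
import numpy as np

# Double-sided interferogram of a monochromatic line at 1000 cm^-1,
# scanned from -1 cm to +1 cm of optical path difference.
opd_max = 1.0                                  # cm
p = np.linspace(-opd_max, opd_max, 8192)
interferogram = np.cos(2*np.pi*1000.0*p)

def spectrum(signal):
    # Magnitude spectrum, zero-filled for a smooth line shape, peak-normalized.
    s = np.abs(np.fft.rfft(signal, n=8*signal.size))
    return s / s.max()

freqs = np.fft.rfftfreq(8*p.size, d=p[1] - p[0])   # wavenumber axis, cm^-1

plain = spectrum(interferogram)
# Triangular apodization: unit weight at zero OPD, zero at the scan ends.
apodized = spectrum(interferogram * (1 - np.abs(p)/opd_max))

peak = np.argmax(plain)
# Compare sidelobe levels well away from the main lobe.
sidelobe_plain = plain[peak + 100:].max()
sidelobe_apod = apodized[peak + 100:].max()
```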
For rapid calculation the number of points in the interferogram has to equal a power of two. A string of zeroes may be added to the measured interferogram to achieve this. More zeroes may be added in a process called zero filling to improve the appearance of the final spectrum although there is no improvement in resolution. Alternatively, interpolation after the Fourier transform gives a similar result.
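The zero-filling step can be sketched as follows (NumPy; the 1000-point signal is invented for the example): padding to the next power of two enables the FFT, and further padding interpolates the spectrum onto a finer grid without changing the true resolution.

```python
import numpy as np

# A 1000-point interferogram (invented), zero-filled to the next power of
# two (1024) for the FFT, then further zero-filled (4096) to interpolate
# the displayed spectrum.
x = np.linspace(0, 1, 1000)
sig = np.cos(2*np.pi*40*x)

n_fft = 1 << (sig.size - 1).bit_length()   # 1024, next power of two
spec_1024 = np.abs(np.fft.rfft(sig, n=n_fft))
spec_4096 = np.abs(np.fft.rfft(sig, n=4096))

# More zeroes give more, finer-spaced points on the same underlying
# instrument line shape; the resolution itself does not improve.
print(len(spec_1024), len(spec_4096))
```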
== Advantages ==
There are three principal advantages for an FT spectrometer compared to a scanning (dispersive) spectrometer.
The multiplex or Fellgett's advantage (named after Peter Fellgett). This arises from the fact that information from all wavelengths is collected simultaneously. It results in a higher signal-to-noise ratio for a given scan-time for observations limited by a fixed detector noise contribution (typically in the thermal infrared spectral region where a photodetector is limited by generation-recombination noise). For a spectrum with m resolution elements, this increase is equal to the square root of m. Alternatively, it allows a shorter scan-time for a given resolution. In practice multiple scans are often averaged, increasing the signal-to-noise ratio by the square root of the number of scans.
The throughput or Jacquinot's advantage (named after Pierre Jacquinot). This results from the fact that in a dispersive instrument, the monochromator has entrance and exit slits which restrict the amount of light that passes through it. The interferometer throughput is determined only by the diameter of the collimated beam coming from the source. Although no slits are needed, FTIR spectrometers do require an aperture to restrict the convergence of the collimated beam in the interferometer. This is because convergent rays are modulated at different frequencies as the path difference is varied. Such an aperture is called a Jacquinot stop. For a given resolution and wavelength this circular aperture allows more light through than a slit, resulting in a higher signal-to-noise ratio.
The wavelength accuracy or Connes' advantage (named after Janine Connes). The wavelength scale is calibrated by a laser beam of known wavelength that passes through the interferometer. This is much more stable and accurate than in dispersive instruments where the scale depends on the mechanical movement of diffraction gratings. In practice, the accuracy is limited by the divergence of the beam in the interferometer which depends on the resolution.
Another minor advantage is less sensitivity to stray light, that is radiation of one wavelength appearing at another wavelength in the spectrum. In dispersive instruments, this is the result of imperfections in the diffraction gratings and accidental reflections. In FT instruments there is no direct equivalent as the apparent wavelength is determined by the modulation frequency in the interferometer.
=== Resolution ===
The interferogram is measured in the length domain. The Fourier transform (FT) inverts the dimension, so the FT of the interferogram belongs to the reciprocal-length domain ([L−1]), that is, the dimension of wavenumber. The spectral resolution in cm−1 is equal to the reciprocal of the maximal OPD in cm. Thus a 4 cm−1 resolution will be obtained if the maximal OPD is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximal OPD. This is not easy, as the moving mirror must travel in a near-perfect straight line. The use of corner-cube mirrors in place of flat mirrors is helpful, as an outgoing ray from a corner-cube mirror is parallel to the incoming ray regardless of the orientation of the mirror about axes perpendicular to the axis of the light beam.
A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits.
In 1966 Janine Connes measured the temperature of the atmosphere of Venus by recording the vibration-rotation spectrum of Venusian CO2 at 0.1 cm−1 resolution. Michelson himself attempted to resolve the hydrogen Hα emission line into its two components by using his interferometer.
== Motivation ==
FTIR is a method of measuring infrared absorption and emission spectra. For a discussion of why people measure infrared absorption and emission spectra, i.e. why and how substances absorb and emit infrared light, see the article: Infrared spectroscopy.
== Components ==
=== IR sources ===
FTIR spectrometers are mostly used for measurements in the mid and near IR regions. For the mid-IR region, 2–25 μm (5,000–400 cm−1), the most common source is a silicon carbide (SiC) element heated to about 1,200 K (930 °C; 1,700 °F) (Globar). The output is similar to a blackbody. Shorter wavelengths of the near-IR, 1–2.5 μm (10,000–4,000 cm−1), require a higher temperature source, typically a tungsten-halogen lamp. The long wavelength output of these is limited to about 5 μm (2,000 cm−1) by the absorption of the quartz envelope. For the far-IR, especially at wavelengths beyond 50 μm (200 cm−1), a mercury discharge lamp gives higher output than a thermal source.
=== Detectors ===
Far-IR spectrometers commonly use pyroelectric detectors that respond to changes in temperature as the intensity of IR radiation falling on them varies. The sensitive elements in these detectors are either deuterated triglycine sulfate (DTGS) or lithium tantalate (LiTaO3). These detectors operate at ambient temperatures and provide adequate sensitivity for most routine applications. To achieve the best sensitivity the time for a scan is typically a few seconds. Cooled photoelectric detectors are employed for situations requiring higher sensitivity or faster response. Liquid nitrogen cooled mercury cadmium telluride (MCT) detectors are the most widely used in the mid-IR. With these detectors an interferogram can be measured in as little as 10 milliseconds. Uncooled indium gallium arsenide photodiodes or DTGS are the usual choices in near-IR systems. Very sensitive liquid-helium-cooled silicon or germanium bolometers are used in the far-IR where both sources and beamsplitters are inefficient.
=== Beam splitter ===
An ideal beam-splitter transmits and reflects 50% of the incident radiation. However, as any material has a limited range of optical transmittance, several beam-splitters may be used interchangeably to cover a wide spectral range.
In a simple Michelson interferometer, one beam passes twice through the beamsplitter but the other passes through only once. To correct for this, an additional compensator plate of equal thickness is incorporated.
For the mid-IR region, the beamsplitter is usually made of KBr with a germanium-based coating that makes it semi-reflective. KBr absorbs strongly at wavelengths beyond 25 μm (400 cm−1), so CsI or KRS-5 are sometimes used to extend the range to about 50 μm (200 cm−1). ZnSe is an alternative where moisture vapour can be a problem, but is limited to about 20 μm (500 cm−1).
CaF2 is the usual material for the near-IR, being both harder and less sensitive to moisture than KBr, but cannot be used beyond about 8 μm (1,200 cm−1).
Far-IR beamsplitters are mostly based on polymer films, and cover a limited wavelength range.
=== Attenuated total reflectance ===
Attenuated total reflectance (ATR) is an accessory for FTIR spectrophotometers that measures the surface properties of solid or thin-film samples rather than their bulk properties. Generally, ATR has a penetration depth of around 1 or 2 micrometres, depending on sample conditions.
=== Fourier transform ===
The interferogram in practice consists of a set of intensities measured for discrete values of OPD. The difference between successive OPD values is constant. Thus, a discrete Fourier transform is needed. The fast Fourier transform (FFT) algorithm is used.
== Spectral range ==
=== Far-infrared ===
The first FTIR spectrometers were developed for the far-infrared range. The reason for this has to do with the mechanical tolerance needed for good optical performance, which is related to the wavelength of the light being used. For the relatively long wavelengths of the far infrared, tolerances of ~10 μm are adequate, whereas for the rock-salt region tolerances have to be better than 1 μm. A typical instrument was the cube interferometer developed at the NPL and marketed by Grubb Parsons. It used a stepper motor to drive the moving mirror, recording the detector response after each step was completed.
=== Mid-infrared ===
With the advent of cheap microcomputers it became possible to have a computer dedicated to controlling the spectrometer, collecting the data, doing the Fourier transform and presenting the spectrum. This provided the impetus for the development of FTIR spectrometers for the rock-salt region. The problems of manufacturing ultra-high precision optical and mechanical components had to be solved. A wide range of instruments are now available commercially. Although instrument design has become more sophisticated, the basic principles remain the same. Nowadays, the moving mirror of the interferometer moves at a constant velocity, and sampling of the interferogram is triggered by finding zero-crossings in the fringes of a secondary interferometer lit by a helium–neon laser. In modern FTIR systems the constant mirror velocity is not strictly required, as long as the laser fringes and the original interferogram are recorded simultaneously with higher sampling rate and then re-interpolated on a constant grid, as pioneered by James W. Brault. This confers very high wavenumber accuracy on the resulting infrared spectrum and avoids wavenumber calibration errors.
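The laser-fringe clock described above can be sketched numerically. Everything here is synthetic (a mirror drive with 5% velocity ripple, one invented IR line, and a rounded He-Ne wavenumber), and real instruments interpolate more carefully, but it shows why the reference fringes remove the effect of non-uniform mirror velocity:

```python
import numpy as np

# Synthetic signals: the mirror moves with a 5% velocity ripple, and both
# channels are recorded on a fast, uniform time grid.
n = 20000
t = np.linspace(0.0, 1.0, n)
velocity = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t)   # imperfect mirror motion
opd = np.cumsum(velocity) * (0.2 / n)               # true OPD in cm

laser_k = 15798.0                                   # He-Ne wavenumber, cm^-1
fringes = np.cos(2 * np.pi * laser_k * opd)         # reference interferogram
interferogram = np.cos(2 * np.pi * 2000.0 * opd)    # IR channel, one line

# Each zero-crossing of the laser fringes marks a known OPD step of half a
# laser wavelength, regardless of how unevenly the mirror moved in time.
crossings = np.nonzero(np.diff(np.signbit(fringes)))[0]
resampled = interferogram[crossings]                # now uniform in OPD

spectrum = np.abs(np.fft.rfft(resampled))
wavenumber = np.fft.rfftfreq(len(resampled), d=0.5 / laser_k)
peak = wavenumber[np.argmax(spectrum[1:]) + 1]      # recovered line position
```

Because the resampled grid is tied to the laser wavelength rather than to time, the recovered line lands at its true wavenumber even though the mirror velocity was not constant.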
=== Near-infrared ===
The near-infrared region spans the wavelength range between the rock-salt region and the start of the visible region at about 750 nm. Overtones of fundamental vibrations can be observed in this region. It is used mainly in industrial applications such as process control and chemical imaging.
== Applications ==
FTIR can be used in all applications where a dispersive spectrometer was used in the past (see external links). In addition, the improved sensitivity and speed have opened up new areas of application. Spectra can be measured in situations where very little energy reaches the detector. Fourier transform infrared spectroscopy is used in geology, chemistry, materials, botany and biology research fields.
=== Nano and biological materials ===
FTIR is also used to investigate various nanomaterials and proteins in hydrophobic membrane environments. Studies show the ability of FTIR to directly determine the polarity at a given site along the backbone of a transmembrane protein. The bond features involved with various organic and inorganic nanomaterials and their quantitative analysis can be done with the help of FTIR.
=== Microscopy and imaging ===
An infrared microscope allows samples to be observed and spectra measured from regions as small as 5 microns across. Images can be generated by combining a microscope with linear or 2-D array detectors. The spatial resolution can approach 5 microns with tens of thousands of pixels. The images contain a spectrum for each pixel and can be viewed as maps showing the intensity at any wavelength or combination of wavelengths. This allows the distribution of different chemical species within the sample to be seen. This technique has been applied in various biological applications including the analysis of tissue sections as an alternative to conventional histopathology, examining the homogeneity of pharmaceutical tablets, and for differentiating morphologically-similar pollen grains.
=== Nanoscale and spectroscopy below the diffraction limit ===
The spatial resolution of FTIR can be further improved below the micrometre scale by integrating it into a scanning near-field optical microscopy platform. The corresponding technique is called nano-FTIR and allows broadband spectroscopy to be performed on ultra-small quantities of material (single viruses and protein complexes) with 10 to 20 nm spatial resolution.
=== FTIR as detector in chromatography ===
The speed of FTIR allows spectra to be obtained from compounds as they are separated by a gas chromatograph. However, this technique is little used compared to GC-MS (gas chromatography–mass spectrometry), which is more sensitive. The GC-IR method is particularly useful for identifying isomers, which by their nature have identical masses. Liquid chromatography fractions are more difficult to analyse because of the solvent present. One notable exception is measuring chain branching as a function of molecular size in polyethylene using gel permeation chromatography, which is possible using chlorinated solvents that have no absorption in the region in question.
=== TG-IR (thermogravimetric analysis-infrared spectrometry) ===
Measuring the gas evolved as a material is heated allows qualitative identification of the species to complement the purely quantitative information provided by measuring the weight loss.
=== Water content determination in plastics and composites ===
FTIR analysis is used to determine the water content of fairly thin plastic and composite parts, most commonly in the laboratory. Such FTIR methods have long been used for plastics and were extended to composite materials in 2018, when the method was introduced by Krauklis, Gagani and Echtermeyer. The method uses the maximum of the absorbance band at about 5,200 cm−1, which correlates with the true water content in the material.
== See also ==
Discrete Fourier transform – for computing periodicity in evenly spaced data
Fourier transform – mathematical transform that expresses a function of time as a function of frequency
Fourier transform spectroscopy – spectroscopy based on time- or space-domain data
Least-squares spectral analysis – for computing periodicity in unevenly spaced data
== References ==
== External links ==
Infracord spectrometer photograph
The Grubb-Parsons-NPL cube interferometer Spectroscopy, part 2 by Dudley Williams, page 81
Infrared materials Properties of many salt crystals and useful links.
University FTIR lab example from the University of Bristol
Saturated absorption spectroscopy measures the transition frequency of an atom or molecule between its ground state and an excited state. In saturated absorption spectroscopy, two counter-propagating, overlapped laser beams are sent through a sample of atomic gas. One of the beams stimulates photon emission in excited atoms or molecules when the laser's frequency matches the transition frequency. By changing the laser frequency until these extra photons appear, one can find the exact transition frequency. This method enables precise measurements at room temperature because it is insensitive to Doppler broadening. Ordinary absorption spectroscopy measures the Doppler-broadened transition, so the atoms would have to be cooled to millikelvin temperatures to achieve the same sensitivity as saturated absorption spectroscopy.
== Principle of saturated absorption spectroscopy ==
To overcome the problem of Doppler broadening without cooling down the sample to millikelvin temperatures, a classical pump–probe scheme is used. A laser with a relatively high intensity is sent through the atomic vapor, known as the pump beam. Another counter-propagating weak beam is also sent through the atoms at the same frequency, known as the probe beam. The absorption of the probe beam is recorded on a photodiode for various frequencies of the beams.
Although the two beams are at the same frequency, they address different atoms due to natural thermal motion. If the beams are red-detuned with respect to the atomic transition frequency, then the pump beam will be absorbed by atoms moving towards the beam source, while the probe beam will be absorbed by atoms moving away from that source at the same speed in the opposite direction. If the beams are blue-detuned, the opposite occurs.
If, however, the laser is approximately on resonance, these two beams address the same atoms, those with velocity vectors nearly perpendicular to the direction of laser propagation. In the two-state approximation of an atomic transition, the strong pump beam will cause many of the atoms to be in the excited state; when the number of atoms in the ground state and the excited state are approximately equal, the transition is said to be saturated. When a photon from the probe beam passes through the atoms, there is a good chance that, if it encounters an atom, the atom will be in the excited state and will thus undergo stimulated emission, with the photon passing through the sample. Thus, as the laser frequency is swept across the resonance, a small dip in the absorption feature will be observed at each atomic transition (generally hyperfine resonances). The stronger the pump beam, the wider and deeper the dips in the Gaussian Doppler-broadened absorption feature become. Under perfect conditions, the width of the dip can approach the natural linewidth of the transition.
A consequence of this method of counter-propagating beams on a system with more than two states is the presence of crossover lines. When two transitions lie within a single Doppler-broadened feature and share a common ground state, a crossover peak can occur at the frequency exactly midway between the two transitions. This is the result of moving atoms seeing the pump and probe beams resonant with two separate transitions: the pump beam depopulates the ground state, saturating one transition, while the probe beam finds far fewer atoms in the ground state because of this saturation, so its absorption falls. These crossover peaks can be quite strong, often stronger than the main saturated absorption peaks.
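A toy model illustrates the resulting lineshape. All numbers here are invented (detunings, widths, and dip depths are not fitted to any real atom): Lorentzian Lamb dips at two transitions and a crossover dip midway between them, carved out of the Doppler-broadened Gaussian envelope.

```python
import numpy as np

def gaussian(f, f0, fwhm):
    # Gaussian profile normalized to unit peak height
    return np.exp(-4 * np.log(2) * (f - f0) ** 2 / fwhm ** 2)

def lorentzian(f, f0, fwhm):
    # Lorentzian profile normalized to unit peak height
    return 1.0 / (1.0 + (2 * (f - f0) / fwhm) ** 2)

detuning = np.linspace(-600.0, 600.0, 12001)   # MHz, hypothetical axis
f1, f2 = -120.0, 180.0                         # two hyperfine transitions (invented)
doppler = gaussian(detuning, 30.0, 500.0)      # Doppler-broadened envelope

# Lamb dips at each transition plus a crossover dip exactly midway between
# them; the crossover is given the largest depth, as is often observed.
dips = (0.3 * lorentzian(detuning, f1, 12.0)
        + 0.3 * lorentzian(detuning, f2, 12.0)
        + 0.5 * lorentzian(detuning, (f1 + f2) / 2, 12.0))
absorption = doppler * (1.0 - dips)

# Locate the deepest dip: it sits at the crossover frequency (f1 + f2) / 2.
deepest = detuning[np.argmin(absorption - doppler)]
```

Sweeping a real laser across the resonance and recording probe absorption produces a curve of this general shape: a broad Gaussian with narrow dips at each transition and at each pairwise crossover.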
== Doppler broadening of the absorption spectrum of an atom ==
According to the description of an atom interacting with the electromagnetic field, the absorption of light by the atom depends on the frequency of the incident photons. More precisely, the absorption is characterized by a Lorentzian of width Γ/2 (for reference, Γ ≈ 2π × 6 MHz for common rubidium D-line transitions). If we have a cell of atomic vapour at room temperature, then the distribution of velocity will follow a Maxwell–Boltzmann distribution
{\displaystyle n(v)\,dv=N{\sqrt {\frac {m}{2\pi k_{B}T}}}e^{-{\frac {mv^{2}}{2k_{B}T}}}\,dv,}
where N is the number of atoms, k_B is the Boltzmann constant, and m is the mass of the atom. According to the Doppler effect formula in the case of non-relativistic speeds,
{\displaystyle \omega _{\text{lab}}=\omega _{0}\left(1\pm {\frac {v}{c}}\right),}
where ω_0 is the frequency of the atomic transition when the atom is at rest (the transition being probed). The value of v as a function of ω_0 and ω_lab can be inserted into the velocity distribution. The absorption profile as a function of angular frequency is therefore proportional to a Gaussian with full width at half maximum
{\displaystyle \Delta \omega _{\text{lab}}=\omega _{0}{\sqrt {\frac {8k_{B}T\ln 2}{mc^{2}}}}.}
For a rubidium atom at room temperature,
{\displaystyle \Delta \omega _{\text{lab}}\approx 2\pi \cdot 500~{\text{MHz}}\gg \Gamma /2\approx 2\pi \cdot 3~{\text{MHz}}.}
Therefore, without any special experimental trick, probing the absorption maximum of an atomic vapour yields a measurement whose uncertainty is limited by Doppler broadening and not by the fundamental width of the resonance.
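The quoted Doppler width of roughly 500 MHz is easy to check numerically. The sketch below evaluates the FWHM formula above for the 87Rb D2 line at 780 nm and room temperature; the constants are rounded for the example.

```python
import math

# Numerical check of the Doppler width for the 87Rb D2 line (780 nm)
# at room temperature.
k_B = 1.380649e-23              # Boltzmann constant, J/K
c = 2.998e8                     # speed of light, m/s
m = 87 * 1.66054e-27            # mass of 87Rb, kg
T = 300.0                       # temperature, K
nu0 = 384.2e12                  # transition frequency, Hz

# FWHM of the Doppler-broadened line, expressed as an ordinary frequency:
fwhm = nu0 * math.sqrt(8 * k_B * T * math.log(2) / (m * c ** 2))
# fwhm comes out near 5.1e8 Hz, i.e. about 500 MHz -- two orders of
# magnitude larger than the ~3 MHz natural half-width of the transition.
```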
== Experimental realization ==
As the pump and probe beams must have exactly the same frequency, the most convenient solution is for them to come from the same laser. The probe beam can be a reflection of the pump beam passed through a neutral-density filter to reduce its intensity. To fine-tune the frequency of the laser, a diode laser with a piezoelectric transducer that controls the cavity wavelength can be used. Due to photodiode noise, the laser frequency can be swept across the transition and the photodiode reading averaged over many sweeps.
In real atoms, there are sometimes more than two relevant transitions within the sample's Doppler profile (e.g. in alkali atoms with hyperfine interactions). These extra resonances give rise to additional dips in the absorption feature, along with crossover resonances.
== References ==
Saturated Absorption Spectroscopy of Rubidium
Force spectroscopy is a set of techniques for the study of the interactions and the binding forces between individual molecules. These methods can be used to measure the mechanical properties of single polymer molecules or proteins, or individual chemical bonds. The name "force spectroscopy", although widely used in the scientific community, is somewhat misleading, because there is no true matter-radiation interaction.
Techniques that can be used to perform force spectroscopy include atomic force microscopy, optical tweezers, magnetic tweezers, acoustic force spectroscopy, microneedles, and biomembranes.
Force spectroscopy measures the behavior of a molecule under stretching or torsional mechanical force. In this way a great deal has been learned in recent years about the mechanochemical coupling in the enzymes responsible for muscle contraction, transport in the cell, energy generation (F1-ATPase), DNA replication and transcription (polymerases), DNA unknotting and unwinding (topoisomerases and helicases).
As a single-molecule technique, as opposed to typical ensemble spectroscopies, it allows a researcher to determine properties of the particular molecule under study. In particular, rare events such as conformational change, which are masked in an ensemble, may be observed.
== Experimental techniques ==
There are many ways to accurately manipulate single molecules. Prominent among these are optical or magnetic tweezers, atomic-force-microscope (AFM) cantilevers and acoustic force spectroscopy. In all of these techniques, a biomolecule, such as protein or DNA, or some other biopolymer has one end bound to a surface or micrometre-sized bead and the other to a force sensor. The force sensor is usually a micrometre-sized bead or a cantilever, whose displacement can be measured to determine the force.
=== Atomic force microscope cantilevers ===
Molecules adsorbed on a surface are picked up by a microscopic tip (nanometres wide) located at the end of an elastic cantilever. In a more sophisticated version of this experiment (chemical force microscopy) the tips are covalently functionalized with the molecules of interest. A piezoelectric controller then pulls up the cantilever. If some force is acting on the elastic cantilever (for example because some molecule is being stretched between the surface and the tip), the cantilever will deflect upward (repulsive force) or downward (attractive force). According to Hooke's law, this deflection will be proportional to the force acting on the cantilever. Deflection is measured by the position of a laser beam reflected by the cantilever. This kind of set-up can measure forces as low as 10 pN (10−11 N); the fundamental resolution limit is set by the cantilever's thermal noise.
The so-called force curve is the graph of force (or more precisely, of cantilever deflection) versus the piezoelectric position on the Z axis. An ideal Hookean spring, for example, would display a straight diagonal force curve.
Typically, the force curves observed in the force spectroscopy experiments consist of a contact (diagonal) region where the probe contacts the sample surface, and a non-contact region where the probe is off the sample surface. When the restoring force of the cantilever exceeds tip-sample adhesion force the probe jumps out of contact, and the magnitude of this jump is often used as a measure of adhesion force or rupture force. In general the rupture of a tip-surface bond is a stochastic process; therefore reliable quantification of the adhesion force requires taking multiple individual force curves. The histogram of the adhesion forces obtained in these multiple measurements provides the main data output for force spectroscopy measurement.
In biophysics, single-molecule force spectroscopy can be used to study the energy landscape underlying the interaction between two bio-molecules, like proteins. Here, one binding partner can be attached to a cantilever tip via a flexible linker molecule (PEG chain), while the other one is immobilized on a substrate surface. In a typical approach, the cantilever is repeatedly approached and retracted from the sample at a constant speed. In some cases, binding between the two partners will occur, which will become visible in the force curve, as the use of a flexible linker gives rise to a characteristic curve shape (see Worm-like chain model) distinct from adhesion. The collected rupture forces can then be analysed as a function of the bond loading rate. The resulting graph of the average rupture force as a function of the loading rate is called the force spectrum and forms the basic dataset for dynamic force spectroscopy.
In the ideal case of a single sharp energy barrier for the tip–sample interaction, the dynamic force spectrum will show a linear increase of the rupture force as a function of the logarithm of the loading rate, as described by a model proposed by Bell et al. Here, the slope of the rupture force spectrum is equal to
{\displaystyle {\frac {k_{B}T}{x_{\beta }}},}
where x_β is the distance from the energy minimum to the transition state. So far, a number of theoretical models exist describing the relationship between loading rate and rupture force, based upon different assumptions and predicting distinct curve shapes.
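In the Bell picture, the slope of mean rupture force versus the logarithm of the loading rate yields x_β directly. The sketch below demonstrates this on synthetic data; the thermal energy is the standard value at 298 K, while the x_β and force-offset values are invented for the example.

```python
import numpy as np

# Synthetic dynamic force spectrum following the Bell model:
# F(r) = F0 + (k_B T / x_beta) * ln(r).  The numbers are invented.
k_B_T = 4.114           # thermal energy at 298 K, in pN*nm
x_beta_true = 0.5       # distance to the transition state, nm
rates = np.logspace(1, 5, 9)                            # loading rates, pN/s
forces = 20.0 + (k_B_T / x_beta_true) * np.log(rates)   # mean rupture forces, pN

# Fit force vs. ln(loading rate); the slope is k_B T / x_beta.
slope, intercept = np.polyfit(np.log(rates), forces, 1)
x_beta = k_B_T / slope   # recovered distance to the transition state, nm
```

With experimental histograms of rupture forces, the mean force at each loading rate would replace the synthetic `forces` array, and the fit proceeds the same way.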
For example, Ma, Gosai et al. used dynamic force spectroscopy along with molecular dynamics simulations to determine the binding force between thrombin, a blood-coagulation protein, and its DNA aptamer.
=== Acoustic force spectroscopy ===
A recently developed technique, acoustic force spectroscopy (AFS), allows the force manipulation of hundreds of single molecules and single cells in parallel, providing high experimental throughput. In this technique, a piezo element resonantly excites planar acoustic waves over a microfluidic chip. The generated acoustic waves exert forces on microspheres whose density differs from that of the surrounding medium. Biomolecules, such as DNA, RNA or proteins, can be individually tethered between the microspheres and a surface and then probed by the acoustic forces. With AFS devices it is possible to apply forces ranging from 0 to several hundred piconewtons on hundreds of microspheres and obtain force-extension curves or histograms of rupture forces of many individual events in parallel.
This technique is mostly used to study DNA-binding proteins. For example, AFS was used to examine bacterial transcription in the presence of antibacterial agents. Viral proteins can also be studied by AFS; for instance, the technique was used to explore DNA compaction along with other single-molecule approaches.
Cells can also be manipulated by the acoustic forces directly, or by using microspheres as handles.
=== Optical tweezers ===
Another technique that has been gaining ground for single molecule experiments is the use of optical tweezers for applying mechanical forces on molecules. A strongly focused laser beam has the ability to catch and hold particles (of dielectric material) in a size range from nanometers to micrometers. The trapping action of optical tweezers results from the dipole or optical gradient force on the dielectric sphere. The technique of using a focused laser beam as an atom trap was first applied in 1984 at Bell laboratories. Until then experiments had been carried out using oppositely directed lasers as a means to trap particles. Later experiments, at the same project at Bell laboratories and others since, showed damage-free manipulation on cells using an infrared laser. Thus, the ground was made for biological experiments with optical trapping.
Each technique has its own advantages and disadvantages. For example, AFM cantilevers can measure angstrom-scale, millisecond events and forces larger than 10 pN. While glass microfibers cannot achieve such fine spatial and temporal resolution, they can measure piconewton forces. Optical tweezers allow the measurement of piconewton forces and nanometer displacements, which is an ideal range for many biological experiments. Magnetic tweezers can measure femtonewton forces, and can additionally be used to apply torsion. AFS devices allow the statistical analysis of the mechanical properties of biological systems by applying piconewton forces to hundreds of individual particles in parallel, with sub-millisecond response time.
== Applications ==
Common applications of force spectroscopy are measurements of polymer elasticity, especially biopolymers such as RNA and DNA. Another biophysical application of polymer force spectroscopy is on protein unfolding. Modular proteins can be adsorbed to a gold or (more rarely) mica surface and then stretched. The sequential unfolding of modules is observed as a very characteristic sawtooth pattern of the force vs elongation graph; every tooth corresponds to the unfolding of a single protein module (apart from the last that is generally the detachment of the protein molecule from the tip). Much information about protein elasticity and protein unfolding can be obtained by this technique. Many proteins in the living cell must face mechanical stress.
Moreover, force spectroscopy can be used to investigate the enzymatic activity of proteins involved in DNA replication, transcription, organization and repair. This is achieved by measuring the position of a bead attached to a DNA-protein complex stalled on a DNA tether that has one end attached to a surface, while keeping the force constant. This technique has been used, for example, to study transcription elongation inhibition by Klebsidin and Acinetodin.
The other main application of force spectroscopy is the study of mechanical resistance of chemical bonds. In this case, generally the tip is functionalized with a ligand that binds to another molecule bound to the surface. The tip is pushed on the surface, allowing for contact between the two molecules, and then retracted until the newly formed bond breaks up. The force at which the bond breaks up is measured. Since mechanical breaking is a kinetic, stochastic process, the breaking force is not an absolute parameter, but it is a function of both temperature and pulling speed. Low temperatures and high pulling speeds correspond to higher breaking forces. By careful analysis of the breaking force at various pulling speeds, it is possible to map the energy landscape of the chemical bond under mechanical force. This is leading to interesting results in the study of antibody-antigen, protein-protein, protein-living cell interaction and catch bonds.
Recently this technique has been used in cell biology to measure the aggregative stochastic forces created by motor proteins that influence the motion of particles within the cytoplasm. In this way, force spectrum microscopy may be used better to understand the many cellular processes that require the motion of particles within cytoplasm.
== References ==
== Further reading == | Wikipedia/Force_spectroscopy |
Conversion electron Mössbauer spectroscopy (CEMS) is a Mössbauer spectroscopy technique based on the detection of internal conversion electrons.
The CEM spectrum can be obtained either by collecting essentially all the electrons leaving the surface (integral technique), or by selecting the ones in a given energy range by means of a beta ray spectrometer (differential or depth selective CEMS).
This method allows the use of simple and inexpensive detecting equipment, mainly flow-type proportional detectors in which large counting rates can be obtained. This last characteristic makes possible the study of samples with the natural abundance of the Mössbauer isotope. The information furnished by the integral measurements can be increased by using various angles of incidence or by depositing thin layers of inert material on the sample.
== Theory ==
In the energy range used in CEMS, the incident radiation can interact with the absorber through two kinds of processes: (a) conventional interactions – photoelectric and Compton effects, and (b) nuclear resonant absorption – Mössbauer effect. Due to conventional interactions the beam is attenuated and electrons are emitted from the sample. The nuclear de-excitation following the resonant absorption takes place by emission of either a gamma ray or an internal conversion (IC) electron. In the latter case, the atom is left in an ‘excited’ state with a hole in an inner shell; the energy excess is given away with emission of Auger electrons and/or X-rays. Thus, the electrons emitted from the sample as a consequence of the Mössbauer absorptions are: (a) primary (IC or Auger) electrons originated in the de-excitations of the nuclei excited by the incident beam, and (b) secondary electrons originated by conventional interactions of photons (or resonant absorption of gamma rays) emitted after resonant absorptions.
== References ==
Nuclear Instruments and Methods in Physics Research B 1 (1984) 70–84
Differential scanning calorimetry (DSC) is a thermoanalytical technique in which the difference in the amount of heat required to increase the temperature of a sample and reference is measured as a function of temperature. Both the sample and reference are maintained at nearly the same temperature throughout the experiment.
Generally, the temperature program for a DSC analysis is designed such that the sample holder temperature increases linearly as a function of time. The reference sample should have a well-defined heat capacity over the range of temperatures to be scanned.
Additionally, the reference sample must be stable, of high purity, and must not experience much change across the temperature scan. Typically, reference standards have been metals such as indium, tin, bismuth, and lead, but other standards such as polyethylene and fatty acids have been proposed to study polymers and organic compounds, respectively.
The technique was developed by E. S. Watson and M. J. O'Neill in 1962, and introduced commercially at the 1963 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy.
The first adiabatic differential scanning calorimeter that could be used in biochemistry was developed by P. L. Privalov and D. R. Monaselidze in 1964 at the Institute of Physics in Tbilisi, Georgia. The term DSC was coined to describe this instrument, which measures energy directly and allows precise measurements of heat capacity.
== Types ==
There are two main types of DSC: heat-flux DSC, which measures the difference in heat flux between the sample and a reference (giving it the alternative name multi-cell DSC), and power-differential DSC, which measures the difference in power supplied to the sample and a reference.
=== Heat-flux DSC ===
With heat-flux DSC, the changes in heat flow are calculated by integrating the curve of the temperature difference between reference and sample (ΔT). For this kind of experiment, a sample and a reference crucible are placed on a sample holder with integrated temperature sensors for temperature measurement of the crucibles. This arrangement is located in a temperature-controlled oven. Unlike the traditional design, the special feature of heat-flux DSC is that it uses flat temperature sensors placed vertically around a flat heater. This setup makes it possible to have a small, light, and low-heat-capacity structure while still working like a regular DSC oven.
=== Power differential DSC ===
For this kind of setup, also known as Power compensating DSC, the sample and reference crucible are placed in thermally insulated furnaces and not next to each other in the same furnace as in heat-flux-DSC experiments. Then the temperature of both chambers is controlled so that the same temperature is always present on both sides. The electrical power that is required to obtain and maintain this state is then recorded rather than the temperature difference between the two crucibles.
=== Fast-scan DSC ===
The 2000s have witnessed the rapid development of fast-scan DSC (FSC), a novel calorimetric technique that employs micromachined sensors. The key advances of this technique are the ultrahigh scanning rate, which can be as high as 10^6 K/s, and the ultrahigh sensitivity, with a heat-capacity resolution typically better than 1 nJ/K.
Nanocalorimetry has attracted much attention in materials science, where it is applied to perform quantitative analysis of rapid phase transitions, particularly on fast cooling. Another emerging area of application of FSC is physical chemistry, with a focus on the thermophysical properties of thermally labile compounds. Quantities like fusion temperature, fusion enthalpy, sublimation and vaporization pressures, and the corresponding enthalpies of such molecules have become available.
=== Temperature Modulated DSC ===
When performing Temperature Modulated DSC (TMDSC, MDSC), the underlying linear heating rate is superimposed by a sinusoidal temperature variation. The benefit of this procedure is the ability to separate overlapping DSC effects by calculating the reversing and the non-reversing signals. The reversing heat flow is related to the changes in specific heat capacity (→ glass transition) while the non-reversing heat flow corresponds to time-dependent phenomena such as curing, dehydration and relaxation.
== Detection of phase transitions ==
The basic principle underlying this technique is that when the sample undergoes a physical transformation such as phase transitions, more or less heat will need to flow to it than the reference to maintain both at the same temperature. Whether less or more heat must flow to the sample depends on whether the process is exothermic or endothermic.
For example, as a solid sample melts to a liquid, it will require more heat flowing to the sample to increase its temperature at the same rate as the reference. This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallization) less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, differential scanning calorimeters are able to measure the amount of heat absorbed or released during such transitions. DSC may also be used to observe more subtle physical changes, such as glass transitions. It is widely used in industrial settings as a quality control instrument due to its applicability in evaluating sample purity and for studying polymer curing.
== DTA ==
An alternative technique, which shares much in common with DSC, is differential thermal analysis (DTA). In this technique it is the heat flow to the sample and reference that remains the same rather than the temperature. When the sample and reference are heated identically, phase changes and other thermal processes cause a difference in temperature between the sample and reference. Both DSC and DTA provide similar information. DSC measures the energy required to keep both the reference and the sample at the same temperature whereas DTA measures the difference in temperature between the sample and the reference when the same amount of energy has been introduced into both.
== DSC curves ==
The result of a DSC experiment is a curve of heat flux versus temperature or versus time. Two sign conventions are in use: exothermic reactions in the sample may be shown with either a positive or a negative peak, depending on the kind of technology used in the experiment. This curve can be used to calculate enthalpies of transitions, by integrating the peak corresponding to a given transition. It can be shown that the enthalpy of transition can be expressed using the following equation:
ΔH = KA

where ΔH is the enthalpy of transition, K is the calorimetric constant, and A is the area under the curve. The calorimetric constant varies from instrument to instrument, and can be determined by analyzing a well-characterized sample with known enthalpies of transition.
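As a minimal sketch of this calibration procedure, the calorimetric constant K can first be determined from a standard with a known enthalpy of fusion and then applied to an unknown peak via ΔH = KA. All numerical values below are illustrative assumptions, not instrument data.

```python
# Calibration: indium has a well-known enthalpy of fusion (~28.6 J/g).
delta_h_indium = 28.6    # J/g, literature value for indium
area_indium = 14.3       # measured peak area for indium (arbitrary units per g)
K = delta_h_indium / area_indium   # calorimetric constant, J/g per area unit

# Unknown sample: integrate its transition peak, then apply Delta_H = K * A.
area_sample = 51.0       # measured peak area for the sample
delta_h_sample = K * area_sample   # enthalpy of transition in J/g
print(delta_h_sample)
```

The same constant can then be reused for every peak measured on that instrument, which is why a well-characterized calibrant such as indium is analyzed first.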
== Applications ==
Differential scanning calorimetry can be used to measure a number of characteristic properties of a sample. Using this technique it is possible to observe fusion and crystallization events as well as glass transition temperatures Tg. DSC can also be used to study oxidation, as well as other chemical reactions.
Glass transitions may occur as the temperature of an amorphous solid is increased. These transitions appear as a step in the baseline of the recorded DSC signal. This is due to the sample undergoing a change in heat capacity; no formal phase change occurs.
As the temperature increases, an amorphous solid will become less viscous. At some point the molecules may obtain enough freedom of motion to spontaneously arrange themselves into a crystalline form. This is known as the crystallization temperature (Tc). This transition from amorphous solid to crystalline solid is an exothermic process, and results in a peak in the DSC signal. As the temperature increases the sample eventually reaches its melting temperature (Tm). The melting process results in an endothermic peak in the DSC curve. The ability to determine transition temperatures and enthalpies makes DSC a valuable tool in producing phase diagrams for various chemical systems.
Differential scanning calorimetry can also be used to obtain valuable thermodynamic information about proteins. Thermodynamic analysis of proteins can reveal important information about their global structure and about protein–ligand interactions. For example, many mutations lower the stability of proteins, while ligand binding usually increases protein stability. Using DSC, this stability can be measured by obtaining Gibbs free energy values at any given temperature. This allows researchers to compare the free energy of unfolding between ligand-free protein and protein–ligand complex, or between wild-type and mutant proteins. DSC can also be used in studying protein–lipid interactions, nucleotides, and drug–lipid interactions. In studying protein denaturation using DSC, the thermal melt should be at least partially reversible, as the thermodynamic calculations rely on chemical equilibrium.
== Experimental considerations ==
There are various experimental and environmental parameters to consider during DSC measurements. Typical potential issues are briefly discussed in the following sections. All statements in these paragraphs are based on the books of Gabbott and Brown.
=== Crucibles ===
DSC measurements without crucibles promote the thermal transfer towards the sample and are possible if the DSC is designed for this purpose. Measurements without crucible should only be conducted with chemically stable materials at low temperatures, as otherwise there may be contamination or damage of the calorimeter. The safer way is to use a crucible, which is specified for the desired temperatures and does not react with the sample material (e.g. alumina, gold or platinum crucibles). If the sample is likely to evolve volatiles or is in the liquid state, the crucible should be sealed to prevent contamination. However, if the crucible is sealed, increasing pressure and possible measurement artefacts due to deformation of the crucible must be considered. In this case, crucibles with very small holes (∅~50 μm) or crucibles that can withstand very high pressures should be used.
=== Sample condition ===
The sample should be in good contact with the crucible surface. Therefore, the contact surface of a solid bulk sample should be plane-parallel. For DSC measurements with powders, a stronger signal might be observed for finer powders due to the enlarged contact surface. The minimum sample mass depends on the transformation to be analyzed. A small sample mass (~10 mg) is sufficient if the heat released or consumed during the transformation is high enough. Heavier samples can be used to detect transformations associated with low heat release or consumption, as larger samples also enlarge the obtained peaks. However, increasing the sample size might worsen the resolution due to thermal gradients that may evolve during heating.
=== Temperature and scan rates ===
If the peaks are very small, it is possible to enlarge them by increasing the scan rate. Due to the faster scan rate, more energy is released or consumed in a shorter time which leads to higher and therefore more distinct peaks. However, faster scan rates lead to poor temperature resolution because of thermal lag. Due to this thermal lag, two phase transformations (or chemical reactions) occurring in a narrow temperature range might overlap. Generally, heating or cooling rates are too high to detect equilibrium transitions, so there is always a shift to higher or lower temperatures compared to phase diagrams representing equilibrium conditions.
=== Purge gas ===
Purge gas is used to control the sample environment, in order to reduce signal noise and to prevent contamination. Nitrogen is used most often; for temperatures above 600 °C, argon can be utilized to minimize heat loss, owing to its low thermal conductivity. Air or pure oxygen can be used for oxidative tests such as oxidative induction time, and helium is used for very low temperatures due to its low boiling temperature (~4.2 K at 101.325 kPa).
== Examples ==
The technique is widely used across a range of applications, both as a routine quality test and as a research tool. The equipment is easy to calibrate, using low melting indium at 156.5985 °C for example, and is a rapid and reliable method of thermal analysis.
=== Polymers ===
DSC is used widely for examining polymeric materials to determine their thermal transitions. Important thermal transitions include the glass transition temperature (Tg), crystallization temperature (Tc), and melting temperature (Tm). The observed thermal transitions can be utilized to compare materials, although the transitions alone do not uniquely identify composition. The composition of unknown materials may be determined using complementary techniques such as IR spectroscopy. Melting points and glass transition temperatures for most polymers are available from standard compilations, and the method can show polymer degradation by the lowering of the expected melting temperature. Tm depends on the molecular weight of the polymer and its thermal history.
The percent crystalline content of a polymer can be estimated from the crystallization/melting peaks of the DSC graph using reference heats of fusion found in the literature. DSC can also be used to study thermal degradation of polymers using an approach such as oxidative onset temperature/time (OOT); however, the user risks contamination of the DSC cell, which can be problematic. Thermogravimetric analysis (TGA) may be more useful for determining decomposition behavior. Impurities in polymers can be determined by examining thermograms for anomalous peaks, and plasticisers can be detected at their characteristic boiling points. In addition, examination of minor events in first-heat thermal analysis data can be useful, as these apparently anomalous peaks can in fact be representative of the process or storage thermal history of the material, or of polymer physical aging. Comparison of first- and second-heat data collected at consistent heating rates allows the analyst to learn about both polymer processing history and material properties (see J. H. Flynn (1993), "Analysis of DSC results by integration", Thermochimica Acta, 217, 129–149).
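A hedged sketch of the percent-crystallinity estimate described above: the measured melting enthalpy (corrected for any cold crystallization observed on heating) is divided by a literature heat of fusion for the 100% crystalline polymer. The reference value used below (~140 J/g, roughly the figure often quoted for PET) is an assumption for illustration only.

```python
def percent_crystallinity(dH_melt, dH_fus_100, dH_cold_cryst=0.0):
    """Estimate % crystallinity from DSC enthalpies (all in J/g).

    dH_melt       -- integrated melting peak enthalpy
    dH_fus_100    -- literature heat of fusion of 100% crystalline polymer
    dH_cold_cryst -- cold-crystallization enthalpy seen on heating, if any
    """
    return 100.0 * (dH_melt - dH_cold_cryst) / dH_fus_100

# Illustrative values only: 42 J/g melting peak, 14 J/g cold crystallization.
print(percent_crystallinity(42.0, 140.0, dH_cold_cryst=14.0))  # -> 20.0
```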
=== Liquid crystals ===
DSC is used in the study of liquid crystals. As some forms of matter go from solid to liquid they go through a third state, which displays properties of both phases. This anisotropic liquid is known as a liquid crystalline or mesomorphous state. Using DSC, it is possible to observe the small energy changes that occur as matter transitions from a solid to a liquid crystal and from a liquid crystal to an isotropic liquid.
=== Oxidative stability ===
Using differential scanning calorimetry to study the stability to oxidation of samples generally requires an airtight sample chamber. It can be used to determine the oxidative-induction time (OIT) of a sample. Such tests are usually done isothermally (at constant temperature) by changing the atmosphere of the sample. First, the sample is brought to the desired test temperature under an inert atmosphere, usually nitrogen. Oxygen is then added to the system. Any oxidation that occurs is observed as a deviation in the baseline. Such analysis can be used to determine the stability and optimum storage conditions for a material or compound. DSC equipment can also be used to determine the oxidative-onset temperature (OOT) of a material. In this test a sample (and a reference) are exposed to an oxygen atmosphere and subjected to a constant rate of heating (typically from 50 to 300 °C). The DSC heat flow curve will deviate when the reaction with oxygen begins (the reaction being either exothermic or endothermic). Both OIT and OOT tests are used as tools for determining the activity of antioxidants.
=== Safety screening ===
DSC makes a reasonable initial safety screening tool. In this mode the sample is housed in a non-reactive crucible (often gold or gold-plated steel) that can withstand pressure (typically up to 100 bar). The presence of an exothermic event can then be used to assess the stability of a substance to heat. However, due to a combination of relatively poor sensitivity, slower-than-normal scan rates (typically 2–3 °C/min, because of the much heavier crucible) and unknown activation energy, it is necessary to deduct about 75–100 °C from the initial start of the observed exotherm to suggest a maximal safe temperature for the material. A much more accurate data set can be obtained from an adiabatic calorimeter, but such a test may take 2–3 days from ambient temperature at a rate of a 3 °C increment per half-hour.
=== Drug analysis ===
DSC is widely used in the pharmaceutical and polymer industries. For the polymer chemist, DSC is a handy tool for studying curing processes, which allows the fine tuning of polymer properties. The cross-linking of polymer molecules that occurs in the curing process is exothermic, resulting in a negative peak in the DSC curve that usually appears soon after the glass transition.
In the pharmaceutical industry it is necessary to have well-characterized drug compounds in order to define processing parameters. For instance, if it is necessary to deliver a drug in the amorphous form, it is desirable to process the drug at temperatures below those at which crystallization can occur.
=== General chemical analysis ===
Freezing-point depression can be used as a purity analysis tool when analysed by differential scanning calorimetry. This is possible because the temperature range over which a mixture of compounds melts is dependent on their relative amounts. Consequently, less pure compounds will exhibit a broadened melting peak that begins at lower temperature than a pure compound.
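As a hedged illustration of this purity analysis, the simplest van't Hoff relation links the observed melting-point depression to the impurity mole fraction. Real DSC purity methods typically use a linearized fit over the whole melting peak; the single-point form below, and all numerical values in it, are simplifying assumptions for illustration.

```python
R = 8.314            # J/(mol*K), gas constant
T0 = 429.75          # K, melting point of the pure compound (assumed)
dH_fus = 28600.0     # J/mol, enthalpy of fusion of the compound (assumed)
T_obs = 429.15       # K, observed (depressed) melting temperature

# van't Hoff melting-point depression:
#   x2 = dH_fus * (T0 - T_obs) / (R * T0**2)
# where x2 is the mole fraction of impurity.
x2 = dH_fus * (T0 - T_obs) / (R * T0**2)
print(round(100 * x2, 2))  # impurity in mol %
```

Consistent with the text, a larger impurity fraction gives a larger depression and a broader melting peak beginning at lower temperature.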
== See also ==
== References ==
== Further reading ==
== External links ==
Differential thermal analysis (DTA) is a thermoanalytic technique that is similar to differential scanning calorimetry. In DTA, the material under study and an inert reference are made to undergo identical thermal cycles, (i.e., same cooling or heating programme) while recording any temperature difference between sample and reference. This differential temperature is then plotted against time, or against temperature (DTA curve, or thermogram). Changes in the sample, either exothermic or endothermic, can be detected relative to the inert reference. Thus, a DTA curve provides data on the transformations that have occurred, such as glass transitions, crystallization, melting and sublimation. The area under a DTA peak is the enthalpy change and is not affected by the heat capacity of the sample.
== Apparatus ==
A DTA apparatus consists of a sample holder comprising thermocouples, sample containers and a ceramic or metallic block; a furnace; a temperature programmer; and a recording system. The key feature is the existence of two thermocouples connected to a voltmeter. One thermocouple is placed in an inert material such as Al2O3, while the other is placed in a sample of the material under study. As the temperature is increased, there will be a brief deflection of the voltmeter if the sample is undergoing a phase transition. This occurs because the input of heat raises the temperature of the inert substance but is incorporated as latent heat in the material changing phase. The measurement is carried out in an inert environment using gases, generally helium or argon, that do not react with the sample or reference.
== Today's instruments ==
In today's market most manufacturers don't make true DTA systems but rather have incorporated this technology into thermogravimetric analysis (TGA) systems, which provide both mass loss and thermal information. With today's advancements in software, even these instruments are being replaced by true TGA-DSC instruments that can provide the temperature and heat flow of the sample, simultaneously with mass loss.
== Applications ==
A DTA curve can be used simply as a fingerprint for identification purposes, but the more common applications of this method are the determination of phase diagrams, heat-change measurements and studies of decomposition in various atmospheres.
DTA is widely used in the pharmaceutical and food industries.
DTA may be used in cement chemistry, mineralogical research and in environmental studies.
DTA curves may also be used to date bone remains or to study archaeological materials.
Using DTA, one can obtain the liquidus and solidus lines of phase diagrams.
== References ==
Collision-induced dissociation (CID), also known as collisionally activated dissociation (CAD), is a mass spectrometry technique to induce fragmentation of selected ions in the gas phase. The selected ions (typically molecular ions or protonated molecules) are usually accelerated by applying an electrical potential to increase the ion kinetic energy and then allowed to collide with neutral molecules (often helium, nitrogen, or argon). In the collision, some of the kinetic energy is converted into internal energy which results in bond breakage and the fragmentation of the molecular ion into smaller fragments. These fragment ions can then be analyzed by tandem mass spectrometry.
CID and the fragment ions produced by CID are used for several purposes. Partial or complete structural determination can be achieved. In some cases, identity can be established based on previous knowledge without determining structure. Another use is in simply achieving more sensitive and specific detection. By detecting a unique fragment ion, the precursor ion can be detected in the presence of other ions of the same m/z value (mass-to-charge ratio), reducing the background and increasing the limit of detection.
== Low-energy CID and high-energy CID ==
Low-energy CID is typically carried out with ion kinetic energies less than approximately 1 kiloelectron volt (1 keV). Low-energy CID is highly efficient in fragmenting the selected precursor ions, but the type of fragment ions observed in low-energy CID is strongly dependent on the ion kinetic energy. Very low collision energies favor ion structure rearrangement, and the probability of direct bond cleavage increases as ion kinetic energy increases, leading to higher ion internal energies. High-energy CID (HECID) is carried out in magnetic sector mass spectrometers or tandem magnetic sector mass spectrometers and in tandem time-of-flight mass spectrometers (TOF/TOF). High-energy CID involves ion kinetic energies in the kilovolt range (typically 1 keV to 20 keV). High-energy CID can produce some types of fragment ions that are not formed in low-energy CID, such as charge-remote fragmentation in molecules with hydrocarbon substructures or sidechain fragmentation in peptides.
== Triple quadrupole mass spectrometers ==
In a triple quadrupole mass spectrometer there are three quadrupoles. The first quadrupole, termed "Q1", can act as a mass filter, transmitting a selected ion and accelerating it towards "Q2", termed the collision cell. The pressure in Q2 is higher, and the ions collide with neutral gas in the collision cell and are fragmented by CID. The fragments are then accelerated out of the collision cell into Q3, which scans through the mass range, analyzing the resulting fragments as they hit a detector. This produces a mass spectrum of the CID fragments, from which structural information or identity can be gained. Many other CID experiments on a triple quadrupole exist, such as precursor-ion scans that determine where a specific fragment came from rather than what fragments are produced by a given molecule.
== Fourier transform ion cyclotron resonance ==
Ions trapped in the ICR cell can be excited by applying pulsed electric fields at their resonant frequency to increase their kinetic energy. The duration and amplitude of the pulse determines the ion kinetic energy. Because a collision gas present at low pressure requires a long time for excited ions to collide with neutral molecules, a pulsed valve can be used to introduce a short burst of collision gas. Trapped fragment ions or their ion-molecule reaction products can be re-excited for multistage mass spectrometry (MSn). If the excitation is not applied on the resonant frequency, but at a slightly off-resonant frequency, the ions will alternately be excited and de-excited, permitting multiple collisions at low collision energy. Sustained off-resonance irradiation collision-induced dissociation (SORI-CID) is a CID technique used in Fourier transform ion cyclotron resonance mass spectrometry which involves accelerating the ions in cyclotron motion (in a circle inside of an ion trap) in the presence of a collision gas.
== Higher-energy collisional dissociation ==
Higher-energy collisional dissociation (HCD) is a CID technique specific to the orbitrap mass spectrometer in which fragmentation takes place external to the trap. HCD was formerly known as higher-energy C-trap dissociation. In HCD, the ions pass through the C-trap and into the HCD cell, an added multipole collision cell, where dissociation takes place. The ions are then returned to the C-trap before injection into the orbitrap for mass analysis. HCD does not suffer from the low mass cutoff of resonant-excitation (CID) and therefore is useful for isobaric tag–based quantification as reporter ions can be observed. Despite the name, the collision energy of HCD is typically in the regime of low energy collision induced dissociation (less than 100 eV).
== Fragmentation mechanisms ==
Homolytic fragmentation is bond dissociation where each of the fragments retains one of the originally-bonded electrons.
Heterolytic fragmentation is bond cleavage where the bonding electrons remain with only one of the fragment species.
In CID, charge remote fragmentation is a type of covalent bond breaking that occurs in a gas phase ion in which the cleaved bond is not adjacent to the location of the charge. This fragmentation can be observed using tandem mass spectrometry.
== See also ==
Electron-capture dissociation (ECD)
Electron-transfer dissociation (ETD)
Infrared multiphoton dissociation (IRMPD)
== References ==
Fourier-transform ion cyclotron resonance mass spectrometry is a type of mass analyzer (or mass spectrometer) for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap (a magnetic field with electric trapping plates), where they are excited (at their resonant cyclotron frequencies) to a larger cyclotron radius by an oscillating electric field orthogonal to the magnetic field. After the excitation field is removed, the ions are rotating at their cyclotron frequency in phase (as a "packet" of ions). These ions induce a charge (detected as an image current) on a pair of electrodes as the packets of ions pass close to them. The resulting signal is called a free induction decay (FID), transient or interferogram that consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum.
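The detection principle described above can be sketched numerically: a transient built from a superposition of decaying sine waves (one per ion packet) is Fourier-transformed to recover the cyclotron frequencies, from which m/z follows. The frequencies, amplitudes and decay constant below are illustrative assumptions, not real instrument parameters.

```python
import numpy as np

fs = 4096.0                          # sampling rate, arbitrary units
t = np.arange(0, 1.0, 1 / fs)        # one "second" of transient
# Two ion packets at 200 and 350 "Hz", with exponential decay (the FID).
fid = (np.sin(2 * np.pi * 200 * t)
       + 0.5 * np.sin(2 * np.pi * 350 * t)) * np.exp(-3 * t)

spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(fid.size, 1 / fs)

# Locate the two strongest peaks (suppress the first peak's wings before
# searching for the second).
i1 = int(np.argmax(spectrum))
masked = spectrum.copy()
masked[max(0, i1 - 5):i1 + 6] = 0
i2 = int(np.argmax(masked))
print(sorted([freqs[i1], freqs[i2]]))  # recovers the two cyclotron frequencies
```

Both components are detected simultaneously in one transient, which is the multiplex (Fellgett) advantage discussed under Instrumentation.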
== History ==
FT-ICR was invented by Melvin B. Comisarow and Alan G. Marshall at the University of British Columbia. The first paper appeared in Chemical Physics Letters in 1974. The inspiration was earlier developments in conventional ICR and Fourier-transform nuclear magnetic resonance (FT-NMR) spectrometry. Marshall has continued to develop the technique at The Ohio State University and Florida State University.
== Theory ==
The physics of FTICR is similar to that of a cyclotron, at least to a first approximation.
In the simplest idealized form, the relationship between the cyclotron frequency and the mass-to-charge ratio is given by
f = qB / (2πm),

where f is the cyclotron frequency, q is the ion charge, B is the magnetic field strength and m is the ion mass.
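Plugging in standard physical constants gives a feel for the magnitudes involved. The sketch below assumes a singly charged ion of m/z 500 in a 7 T magnet (a representative, not prescribed, field strength).

```python
import math

e = 1.602176634e-19   # C, elementary charge
u = 1.66053906660e-27 # kg, unified atomic mass unit
B = 7.0               # T, assumed magnetic field strength
mz = 500.0            # m/z of the ion; for z = 1, m = mz * u

f = e * B / (2 * math.pi * mz * u)   # cyclotron frequency, Hz
print(round(f / 1e3, 1))             # roughly 215 kHz
```

Since f scales as 1/m, heavier ions orbit at proportionally lower frequencies, which is what the Fourier transform of the transient ultimately resolves.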
This is more often represented in angular frequency:
ωc = qB / m,

where ωc is the angular cyclotron frequency, which is related to frequency by the definition f = ω / (2π).
Because of the quadrupolar electrical field used to trap the ions in the axial direction, this relationship is only approximate. The axial electrical trapping results in axial oscillations within the trap with the (angular) frequency
ωt = √(qα / m),

where α is a constant similar to the spring constant of a harmonic oscillator and is dependent on applied voltage, trap dimensions and trap geometry.
The electric field and the resulting axial harmonic motion reduce the cyclotron frequency and introduce a second radial motion, called magnetron motion, that occurs at the magnetron frequency. The cyclotron motion is still the motion whose frequency is measured, but the relationship above is not exact due to this phenomenon. The natural angular frequencies of motion are
ω± = ωc/2 ± √((ωc/2)² − ωt²/2),

where ωt is the axial trapping frequency due to the axial electrical trapping, ω+ is the reduced cyclotron (angular) frequency and ω− is the magnetron (angular) frequency. Again, ω+ is what is typically measured in FTICR. The meaning of this equation can be understood qualitatively by considering the case where ωt is much smaller than ωc, which is typically true in the mass spectrometer. In that case, the value of the radical is just slightly less than ωc/2, so ω+ is just slightly less than ωc (the cyclotron frequency has been slightly reduced). For ω−, the value of the radical is the same (slightly less than ωc/2), but it is subtracted from ωc/2, giving a small number equal to ωc − ω+ (i.e. the amount by which the cyclotron frequency was reduced). In this regime, the frequencies are approximately

ω+ ≈ ωc − ωt²/(2ωc)
ω− ≈ ωt²/(2ωc)
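These approximations can be checked numerically against the exact expressions. The angular frequencies below are illustrative values chosen so that ωt ≪ ωc, the regime discussed above.

```python
import math

w_c = 2 * math.pi * 200e3   # rad/s, unperturbed cyclotron frequency (assumed)
w_t = 2 * math.pi * 5e3     # rad/s, axial trapping frequency (much smaller)

# Exact natural frequencies:
radical = math.sqrt((w_c / 2) ** 2 - w_t ** 2 / 2)
w_plus = w_c / 2 + radical   # reduced cyclotron frequency
w_minus = w_c / 2 - radical  # magnetron frequency

# Small-w_t approximations from the text:
approx_plus = w_c - w_t ** 2 / (2 * w_c)
approx_minus = w_t ** 2 / (2 * w_c)

print(abs(w_plus - approx_plus) / w_plus)    # tiny relative error
print(abs(w_minus - approx_minus) / w_minus) # small relative error
print(w_plus + w_minus == w_c or abs(w_plus + w_minus - w_c) < 1e-6 * w_c)
```

Note the exact sum rule ω+ + ω− = ωc, which follows immediately from the ± form of the solution.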
== Instrumentation ==
FTICR-MS differs significantly from other mass spectrometry techniques in that the ions are not detected by hitting a detector such as an electron multiplier but only by passing near detection plates. Additionally the masses are not resolved in space or time as with other techniques but only by the ion cyclotron resonance (rotational) frequency that each ion produces as it rotates in a magnetic field. Thus, the different ions are not detected in different places as with sector instruments or at different times as with time-of-flight instruments, but all ions are detected simultaneously during the detection interval. This provides an increase in the observed signal-to-noise ratio owing to the principles of Fellgett's advantage. In FTICR-MS, resolution can be improved either by increasing the strength of the magnet (in teslas) or by increasing the detection duration.
=== Cells ===
A review of different cell geometries with their specific electric configurations is available in the literature. However, ICR cells can belong to one of the following two categories: closed cells or open cells.
Several closed ICR cells with different geometries were fabricated and their performance has been characterized. Grids were used as end caps to apply an axial electric field for trapping ions axially (parallel to the magnetic field lines). Ions can be either generated inside the cell or can be injected to the cell from an external ionization source. Nested ICR cells with double pair of grids were also fabricated to trap both positive and negative ions simultaneously.
The most common open-cell geometry is a cylinder that is axially segmented into ring-shaped electrodes. The central ring electrode is commonly used for applying the radial excitation electric field and for detection. A DC voltage is applied to the terminal ring electrodes to trap ions along the magnetic field lines. Open cylindrical cells with ring electrodes of different diameters have also been designed. They proved capable not only of trapping and detecting both ion polarities simultaneously, but also of separating positive from negative ions radially. This produced a large difference in kinetic ion acceleration between positive and negative ions trapped simultaneously inside the cell. Several axial ion acceleration schemes have recently been described for ion–ion collision studies.
=== Stored-waveform inverse Fourier transform ===
Stored-waveform inverse Fourier transform (SWIFT) is a method for the creation of excitation waveforms for FTMS. The time-domain excitation waveform is formed from the inverse Fourier transform of the appropriate frequency-domain excitation spectrum, which is chosen to excite the resonance frequencies of selected ions. The SWIFT procedure can be used to select ions for tandem mass spectrometry experiments.
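The core SWIFT idea can be sketched with a discrete inverse Fourier transform: specify the frequency-domain excitation magnitude (here, a flat band with a notch that leaves one selected ion unexcited) and invert it to obtain the time-domain waveform. Practical SWIFT implementations also smooth the magnitude profile and apply quadratic phase modulation to reduce the waveform's dynamic range; that refinement is omitted in this simplified sketch, and the band/notch positions are arbitrary.

```python
import numpy as np

n = 1024
excite = np.zeros(n // 2 + 1)   # one-sided frequency-domain magnitude
excite[100:400] = 1.0           # excite ions whose frequencies fall in this band
excite[240:260] = 0.0           # ...except a notch isolating one selected m/z

# Inverse FFT gives the time-domain excitation waveform applied to the plates.
waveform = np.fft.irfft(excite, n)

# Forward-transforming the waveform reproduces the requested profile:
recovered = np.abs(np.fft.rfft(waveform))
print(np.allclose(recovered, excite))  # True
```

Ions at the notch frequencies stay near the trap axis while all others are excited, which is how SWIFT isolates precursors for tandem MS experiments.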
== Applications ==
Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry is a high-resolution technique that can be used to determine masses with high accuracy. Many applications of FTICR-MS use this mass accuracy to help determine the composition of molecules based on accurate mass. This is possible due to the mass defect of the elements. FTICR-MS is able to achieve higher levels of mass accuracy than other forms of mass spectrometer, in part, because a superconducting magnet is much more stable than radio-frequency (RF) voltage.
Another area where FTICR-MS is useful is in dealing with complex mixtures, such as biomass or waste liquefaction products, since its resolution (narrow peak width) allows the signals of two ions with similar mass-to-charge ratios (m/z) to be detected as distinct ions. This high resolution is also useful in studying large macromolecules such as proteins with multiple charges, which can be produced by electrospray ionization. For example, attomole-level detection of two peptides has been reported. These large molecules contain a distribution of isotopes that produce a series of isotopic peaks. Because the isotopic peaks are close to each other on the m/z axis, due to the multiple charges, the high resolving power of the FTICR is extremely useful. FTICR-MS is very useful in other proteomics studies as well, achieving exceptional resolution in both top-down and bottom-up proteomics. Electron-capture dissociation (ECD), collision-induced dissociation (CID), and infrared multiphoton dissociation (IRMPD) are all utilized to produce fragment spectra in tandem mass spectrometry experiments. Because CID and IRMPD use vibrational excitation to dissociate peptides by breaking the backbone amide linkages, which are typically low in energy and weak, they may also cause dissociation of post-translational modifications. ECD, on the other hand, allows specific modifications to be preserved. This is quite useful in analyzing phosphorylation states, O- or N-linked glycosylation, and sulfation.
== References ==
== External links ==
What's in an Oil Drop? An Introduction to Fourier Transform Ion Cyclotron Resonance (FT-ICR) for Non-scientists National High Magnetic Field Laboratory
Scottish Instrumentation Resource Centre for Advanced Mass Spectrometry
Fourier-transform Ion Cyclotron Resonance (FT-ICR) FT-ICR Introduction University of Bristol
Biomaterials Science is a peer-reviewed scientific journal that explores the underlying science behind the function, interactions and design of biomaterials. It is published by the Royal Society of Chemistry. The current editor-in-chief is Jianjun Cheng (Westlake University, China), while the executive editor is Maria Southall.
The journal was established in 2013 and since January 2018 has been the official journal of the European Society for Biomaterials. Since the start of 2016 the journal has been online only. It publishes primary research (Communications and full paper articles) and review-type articles (reviews and minireviews).
== Abstracting and indexing ==
The journal is abstracted and indexed in:
Science Citation Index
Index Medicus/MEDLINE/PubMed
Scopus
== See also ==
List of scientific journals in chemistry
Journal of Materials Chemistry B
MedChemComm
== References ==
== External links ==
Official website
Synthetic Reaction Updates was a current awareness bibliographic database from the Royal Society of Chemistry that provided alerts of recently published developments in synthetic organic chemistry.
It covered primary research in general and organic chemistry published in chemistry journals. Each record contains a reaction scheme, as well as bibliographic data and a link to the original article on the publisher's website. Subscribers were able to search by topic and reaction type or register for email alerts of new content based on their search preferences.
== History ==
The database was established in 2015 to replace the two discontinued databases Methods in Organic Synthesis (ISSN 0265-4245) and Catalysts and Catalysed Reactions (ISSN 1474-9173).
Methods in Organic Synthesis was an online database that was established in 1998 and updated weekly with the latest developments in organic synthesis. It was also available as a monthly print bulletin.
Catalysts & Catalysed Reactions was a monthly current-awareness journal that was published from 2002 to 2014. It covered the research areas of catalysed reactions and catalysts.
== References ==
== External links ==
Official website
Methods in Organic Synthesis
Catalysts & Catalysed Reactions | Wikipedia/Methods_in_Organic_Synthesis |
The Journal of Materials Chemistry B is a weekly peer-reviewed scientific journal covering the properties, applications, and synthesis of new materials related to biology and medicine. It is one of the three journals that were created after the Journal of Materials Chemistry was split at the end of 2012. The first issue was published in January 2013. It is published by the Royal Society of Chemistry. The other two parts of the Journal of Materials Chemistry family are Journal of Materials Chemistry A and Journal of Materials Chemistry C, which cover different materials science topics. The editor-in-chief for the Journal of Materials Chemistry family of journals is currently Nazario Martin. The current editor-in-chief for Journal of Materials Chemistry B is Jessica Winter.
== Abstracting and indexing ==
The journal is abstracted and indexed in the Science Citation Index.
== See also ==
List of scientific journals in chemistry
Journal of Materials Chemistry A
Journal of Materials Chemistry C
Soft Matter
Biomaterials Science
== References ==
== External links ==
Official website | Wikipedia/Journal_of_Materials_Chemistry_B |
The Journal of Materials Chemistry C is a weekly peer-reviewed scientific journal covering the properties, applications, and synthesis of new materials related to optical, magnetic and electronic devices. It is one of the three journals created from the splitting of Journal of Materials Chemistry at the end of 2012. Its first issue was published in January 2013. The journal is published by the Royal Society of Chemistry and has two sister journals, Journal of Materials Chemistry A and Journal of Materials Chemistry B. The editor-in-chief for the Journal of Materials Chemistry family of journals is currently Nazario Martin. The deputy editor-in-chief for Journal of Materials Chemistry C is Natalie Stingelin.
== Abstracting and indexing ==
The journal is abstracted and indexed in the Science Citation Index.
== See also ==
List of scientific journals in chemistry
Materials Horizons
Journal of Materials Chemistry A
Journal of Materials Chemistry B
== References ==
== External links ==
Official website | Wikipedia/Journal_of_Materials_Chemistry_C |
Energy & Environmental Science, also known as EES, is a monthly peer-reviewed scientific journal publishing original (primary) research and review articles. The journal covers agenda-setting work of an interdisciplinary nature relating to energy science. Energy & Environmental Science is published by the Royal Society of Chemistry.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 32.4. The editor-in-chief is Jenny Nelson (Imperial College London).
In 2022, the Royal Society of Chemistry launched its first companion journal EES Catalysis, followed by two further new companion journals EES Batteries and EES Solar in 2024. With Energy & Environmental Science as the flagship family journal, these four journals now make up the EES Family.
== Article types ==
Energy & Environmental Science publishes the following types of articles: Research Papers (original scientific work); Review Articles, Perspectives, and Minireviews (feature review-type articles of broad interest); Communications (original scientific work of an urgent nature), Opinions (personal, often speculative, viewpoints or hypotheses on a current topic), and Analysis Articles (in-depth examination of energy and environmental technologies, strategies, policies, and general conceptual frameworks of general interest).
== Abstracting and indexing ==
According to the Thomson Reuters Master Journal List and CASSI, this journal is indexed by the following services:
Science Citation Index Expanded
Current Contents/ Agriculture, Biology & Environmental Sciences
Current Contents/ Physical, Chemical & Earth Sciences
Current Contents/ Engineering, Computing & Technology
Chemical Abstracts Service - CASSI
== References ==
== External links ==
Official website
Catalysis Science & Technology is a peer-reviewed scientific journal that is published monthly by the Royal Society of Chemistry. The editor-in-chief is Bert Weckhuysen (Utrecht University, Netherlands).
The first online articles were published in January 2011, and the first issue of Catalysis Science & Technology appeared in March 2011. All articles published up to the end of 2012 are available free online. According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.177.
== Scope ==
Catalysis Science & Technology covers both the science of catalysis and catalysis technology, including applications addressing global issues. The journal publishes research in the applied, fundamental, experimental and computational areas of catalysis. Contributions are made by the homogeneous, heterogeneous and biocatalysis communities.
== Article types ==
Catalysis Science & Technology publishes the following types of articles:
Full Papers (original scientific work)
Communications (preliminary accounts that merit urgent publication)
Mini-reviews (short accounts of the published articles on the topic of catalysis)
Perspectives (personal accounts or critical analyses of specialist areas)
== Abstracting and indexing ==
Catalysis Science & Technology is abstracted and indexed in the Science Citation Index and Scopus.
== References ==
== External links ==
Official website | Wikipedia/Catalysis_Science_&_Technology |
Environmental Science: Processes & Impacts is a monthly peer-reviewed scientific journal covering all aspects of environmental science. It is published by the Royal Society of Chemistry and Kris McNeill is the editor-in-chief. The journal was established in 1999 as the Journal of Environmental Monitoring and obtained its current title in 2013.
== Article types ==
The journal publishes full research papers, communications, perspectives, critical reviews, frontier reviews, tutorial reviews, comments, and highlights.
== Abstracting and indexing ==
According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.334.
== Sister journals ==
The Royal Society of Chemistry publishes two other journals in the Environmental Science portfolio: Environmental Science: Nano was established in 2014 and Environmental Science: Water Research & Technology in 2015.
== See also ==
List of chemistry journals
== References ==
== External links ==
Official website | Wikipedia/Environmental_Science:_Processes_&_Impacts |
Photochemical & Photobiological Sciences is a monthly peer-reviewed scientific journal covering all areas of photochemistry and photobiology. It was established in 2002 and is published by Springer Science+Business Media on behalf of the European Photochemistry Association and the European Society for Photobiology. The editors-in-chief are Dario Bassani (University of Bordeaux) and Rex Tyrrell (University of Bath).
== Abstracting and indexing ==
According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.328.
== See also ==
Chemical biology
== References ==
== External links ==
Official website
European Photochemistry Association
European Society for Photobiology | Wikipedia/Photochemical_and_Photobiological_Sciences |
The Journal of Materials Chemistry was a weekly peer-reviewed scientific journal covering the applications, properties and synthesis of new materials. It was established in 1991 and published by the Royal Society of Chemistry. At the end of 2012 the journal was split into three independent journals: Journal of Materials Chemistry A (energy and sustainability), Journal of Materials Chemistry B (biology and medicine) and Journal of Materials Chemistry C (optical, magnetic and electronic devices). The editor-in-chief was Liz Dunn.
== See also ==
List of scientific journals in chemistry
Soft Matter
Journal of Materials Chemistry A
Journal of Materials Chemistry B
Journal of Materials Chemistry C
== References ==
== External links ==
Official website | Wikipedia/Journal_of_Materials_Chemistry |
The Journal of Materials Chemistry A is a weekly peer-reviewed scientific journal that covers the synthesis, properties, and applications of novel materials related to energy and sustainability. It is one of three journals created after the Journal of Materials Chemistry was split at the end of 2012. Its first issue was published in January 2013. The journal is published by the Royal Society of Chemistry and has two sister journals, Journal of Materials Chemistry B and Journal of Materials Chemistry C, which cover different materials science topics. The editor-in-chief for the Journal of Materials Chemistry family of journals is currently Nazario Martin. The deputy editor-in-chief for Journal of Materials Chemistry A is Anders Hagfeldt, while the executive editor is Michaela Mühlberg.
== Abstracting and indexing ==
The journal is abstracted and indexed in the Science Citation Index Expanded, Current Contents/Physical, Chemical & Earth Sciences, and Current Contents/Engineering, Computing & Technology.
== See also ==
List of scientific journals in chemistry
Materials Horizons
Journal of Materials Chemistry B
Journal of Materials Chemistry C
Soft Matter
== References ==
== External links ==
Official website | Wikipedia/Journal_of_Materials_Chemistry_A |
Materials Horizons is a bimonthly peer-reviewed scientific journal that covers research across the breadth of materials science at the interface between chemistry, physics, biology and engineering. The current editor-in-chief is Martina Stenzel. The journal was established in 2014. A sister journal Nanoscale Horizons was launched in 2016.
== Article types ==
The journal publishes "communications" (articles for rapid publication), "reviews" (state-of-the-art accounts of a research field), "mini-reviews" (research highlights in an emerging area of materials science, usually from the past 2–3 years) and "focus articles" (educational articles providing an overview of a concept in materials science).
== Abstracting and indexing ==
The journal is indexed in the Science Citation Index. Selective content is also indexed in Polymer Library, Inspec, Biotechnology and Bioengineering Abstracts, METADEX, Mechanical Engineering Abstracts, Solid State and Superconductivity Abstracts, Metal Abstracts and CSA Technology Research Database, and CABI.
== See also ==
List of scientific journals in chemistry
Journal of Materials Chemistry A
Journal of Materials Chemistry B
Journal of Materials Chemistry C
== References ==
== External links ==
Official website | Wikipedia/Materials_Horizons |
A network solid or covalent network solid (also called atomic crystalline solids or giant covalent structures) is a chemical compound (or element) in which the atoms are bonded by covalent bonds in a continuous network extending throughout the material. In a network solid there are no individual molecules, and the entire crystal or amorphous solid may be considered a macromolecule. Formulas for network solids, like those for ionic compounds, are simple ratios of the component atoms represented by a formula unit.
Examples of network solids include diamond with a continuous network of carbon atoms and silicon dioxide or quartz with a continuous three-dimensional network of SiO2 units. Graphite and the mica group of silicate minerals structurally consist of continuous two-dimensional sheets covalently bonded within the layer, with other bond types holding the layers together. Disordered network solids are termed glasses. These are typically formed on rapid cooling of melts so that little time is left for atomic ordering to occur.
== Properties ==
Hardness: Very hard, due to the strong covalent bonds throughout the lattice (deformation can be easier, however, in directions that do not require the breaking of any covalent bonds, as with flexing or sliding of sheets in graphite or mica).
Melting point: High, since melting means breaking covalent bonds (rather than merely overcoming weaker intermolecular forces).
Solid-phase electrical conductivity: Variable, depending on the nature of the bonding: network solids in which all electrons are used for sigma bonds (e.g. diamond, quartz) are poor conductors, as there are no delocalized electrons. However, network solids with delocalized pi bonds (e.g. graphite) or dopants can exhibit metal-like conductivity.
Liquid-phase electrical conductivity: Low, as the macromolecule consists of neutral atoms, meaning that melting does not free up any new charge carriers (as it would for an ionic compound).
Solubility: Generally insoluble in any solvent due to the difficulty of solvating such a large molecule.
== Examples ==
Boron nitride (BN)
Diamond (carbon, C)
Quartz (SiO2)
Rhenium diboride (ReB2)
Silicon carbide (moissanite, carborundum, SiC)
Silicon (Si)
Germanium (Ge)
Aluminium nitride (AlN)
α-tin allotrope (gray tin, Sn)
== See also ==
Molecular solid
== References == | Wikipedia/Network_covalent_bonding |
In mathematics, a matrix (pl.: matrices) is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
For example,
{\displaystyle {\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}}
is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3.
Matrices are commonly used in linear algebra, where they represent linear maps. In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimensions. Matrices are used in most areas of mathematics and scientific fields, either directly, or through their use in geometry and numerical analysis.
Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself defined via a determinant.
Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.
== Definition ==
A matrix is a rectangular array of numbers (or other mathematical objects), called the "entries" of the matrix. Matrices are subject to standard operations such as addition and multiplication. Most commonly, a matrix over a field F is a rectangular array of elements of F. A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:
{\displaystyle \mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.}
The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are respectively called rows and columns.
=== Size ===
The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with m rows and n columns is called an m × n matrix, or m-by-n matrix, where m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix.
Matrices with a single row are called row matrices or row vectors, and those with a single column are called column matrices or column vectors. A matrix with the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.
== Notation ==
The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an m × n matrix A is represented as
{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}={\begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{pmatrix}}.}
This may be abbreviated by writing only a single generic term, possibly along with indices, as in
{\displaystyle \mathbf {A} =\left(a_{ij}\right),\quad \left[a_{ij}\right],\quad {\text{or}}\quad \left(a_{ij}\right)_{1\leq i\leq m,\;1\leq j\leq n}}
or
{\displaystyle \mathbf {A} =(a_{i,j})_{1\leq i,j\leq n}}
in the case that n = m.
Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface Roman (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style, as in {\displaystyle {\underline {\underline {A}}}}.
The entry in the ith row and jth column of a matrix A is sometimes referred to as the i, j or (i, j) entry of the matrix, and commonly denoted by ai,j or aij. Alternative notations for that entry are A[i, j] and Ai,j. For example, the (1, 3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1, 3] or A1,3):
{\displaystyle \mathbf {A} ={\begin{bmatrix}4&-7&\color {red}{5}&0\\-2&0&11&8\\19&1&-3&12\end{bmatrix}}}
Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of the following matrix A is determined by the formula aij = i − j.
{\displaystyle \mathbf {A} ={\begin{bmatrix}0&-1&-2&-3\\1&0&-1&-2\\2&1&0&-1\end{bmatrix}}}
In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as A = [i − j] or A = ((i − j)). If the matrix size is m × n, the above-mentioned formula f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be specified separately or indicated using m × n as a subscript. For instance, the matrix A above is 3 × 4, and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4) or A = [i − j]3×4.
Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1. This article follows the more common convention in mathematical writing where enumeration starts from 1.
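As an illustrative sketch (not part of the article), the gap between the 0-based storage convention of a language like Python and the 1-based mathematical convention can be bridged with a small accessor:

```python
# A 3-by-4 matrix stored as a list of rows; Python indexes from 0,
# so the mathematical entry a_{i,j} (1-based) lives at A[i-1][j-1].
A = [
    [4, -7, 5, 0],
    [-2, 0, 11, 8],
    [19, 1, -3, 12],
]

def entry(A, i, j):
    """Return a_{i,j} using the 1-based convention of this article."""
    return A[i - 1][j - 1]

print(entry(A, 1, 3))  # the (1, 3) entry, which is 5
```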
The set of all m-by-n real matrices is often denoted M(m, n) or Mm×n(R). The set of all m-by-n matrices over another field, or over a ring R, is similarly denoted M(m, n, R) or Mm×n(R). If m = n, such as in the case of square matrices, one does not repeat the dimension: M(n, R) or Mn(R). Often, a plain M or Mat is used in place of the calligraphic M.
== Basic operations ==
Several basic operations can be applied to matrices. Some, such as transposition and taking submatrices, do not depend on the nature of the entries. Others, such as matrix addition, scalar multiplication, matrix multiplication, and row operations, involve operations on matrix entries and therefore require that the entries are numbers or belong to a field or a ring.
In this section, it is supposed that matrix entries belong to a fixed ring, which is typically a field of numbers.
=== Addition ===
Matrix addition and subtraction require matrices of a consistent size, and are calculated entrywise. The sum A + B and the difference A − B of two m×n matrices are:
{\displaystyle {\begin{aligned}({\mathbf {A}}+{\mathbf {B}})_{i,j}={\mathbf {A}}_{i,j}+{\mathbf {B}}_{i,j},\quad 1\leq i\leq m,\quad 1\leq j\leq n.\\({\mathbf {A}}-{\mathbf {B}})_{i,j}={\mathbf {A}}_{i,j}-{\mathbf {B}}_{i,j},\quad 1\leq i\leq m,\quad 1\leq j\leq n.\end{aligned}}}
For example,
{\displaystyle {\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}}
Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A.
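The entrywise definition can be sketched in a few lines of Python (an illustrative sketch, not from the article), using lists of rows:

```python
def mat_add(A, B):
    """Entrywise sum of two equal-size matrices stored as lists of rows."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 3, 1], [1, 0, 0]]
B = [[0, 0, 5], [7, 5, 0]]
print(mat_add(A, B))                    # [[1, 3, 6], [8, 5, 0]]
assert mat_add(A, B) == mat_add(B, A)   # addition is commutative
```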
=== Scalar multiplication ===
The product cA of a number c (also called a scalar in this context) and a matrix A is computed by multiplying each entry of A by c:
{\displaystyle (c{\mathbf {A}})_{i,j}=c\cdot {\mathbf {A}}_{i,j}}
This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product". For example:
{\displaystyle 2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot -3\\2\cdot 4&2\cdot -2&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}}
Matrix subtraction is consistent with composition of matrix addition with scalar multiplication by –1:
{\displaystyle \mathbf {A} -\mathbf {B} =\mathbf {A} +(-1)\cdot \mathbf {B} }
=== Transpose ===
The transpose of an m×n matrix A is the n×m matrix AT (also denoted Atr or tA) formed by turning rows into columns and vice versa:
{\displaystyle \left({\mathbf {A}}^{\rm {T}}\right)_{i,j}={\mathbf {A}}_{j,i}.}
For example:
{\displaystyle {\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}}
The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.
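A minimal Python sketch of transposition (illustrative, not from the article), where turning rows into columns is exactly what `zip(*A)` does:

```python
def transpose(A):
    """(A^T)_{i,j} = A_{j,i}: rows become columns and vice versa."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [0, -6, 7]]
print(transpose(A))                  # [[1, 0], [2, -6], [3, 7]]
assert transpose(transpose(A)) == A  # (A^T)^T = A
```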
=== Matrix multiplication ===
Multiplication of two matrices corresponds to the composition of linear transformations represented by each matrix. It is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m×n matrix and B is an n×p matrix, then their matrix product AB is the m×p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:
{\displaystyle [\mathbf {AB} ]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},}
where 1 ≤ i ≤ m and 1 ≤ j ≤ p. For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:
{\displaystyle {\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}.\end{aligned}}}
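The row-times-column rule translates directly into nested loops; the following Python sketch (illustrative, not from the article) reproduces the worked example, including the underlined entry 2340:

```python
def mat_mul(A, B):
    """[AB]_{i,j} = sum over r of a_{i,r} * b_{r,j}.

    Requires the number of columns of A to equal the number of rows of B.
    """
    n, p = len(B), len(B[0])
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[2, 3, 4], [1, 0, 0]]
B = [[0, 1000], [1, 100], [0, 10]]
print(mat_mul(A, B))  # [[3, 2340], [0, 1000]]
```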
Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product AB may be defined without BA being defined, namely if A and B are m×n and n×k matrices, respectively, and m ≠ k. Even if both products are defined, they generally need not be equal, that is:
{\displaystyle {\mathbf {AB}}\neq {\mathbf {BA}}.}
In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:
{\displaystyle {\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},}
whereas
{\displaystyle {\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.}
Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.
=== Row operations ===
There are three types of row operations:
row addition, that is, adding a row to another;
row multiplication, that is, multiplying all entries of a row by a non-zero constant;
row switching, that is, interchanging two rows of a matrix.
These operations are used in several ways, including solving linear equations and finding matrix inverses with Gauss elimination and Gauss–Jordan elimination, respectively.
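A sketch of Gauss–Jordan elimination built from exactly these three row operations, using exact rational arithmetic (illustrative only; helper names are hypothetical, and the code assumes a square invertible system):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve Ax = b for square invertible A via the three row operations."""
    n = len(A)
    # Augmented matrix [A | b] with exact rational entries.
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for col in range(n):
        # Row switching: bring a nonzero pivot into position.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Row multiplication: scale the pivot row so the pivot equals 1.
        M[col] = [x / M[col][col] for x in M[col]]
        # Row addition: subtract multiples of the pivot row elsewhere.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

print(gauss_solve([[2, 1], [1, 3]], [5, 10]))  # x = 1, y = 3
```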
=== Submatrix ===
A submatrix of a matrix is a matrix obtained by deleting any collection of rows and/or columns. For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:
{\displaystyle \mathbf {A} ={\begin{bmatrix}1&\color {red}{2}&3&4\\5&\color {red}{6}&7&8\\\color {red}{9}&\color {red}{10}&\color {red}{11}&\color {red}{12}\end{bmatrix}}\rightarrow {\begin{bmatrix}1&3&4\\5&7&8\end{bmatrix}}.}
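Deleting rows and columns is a simple filtering operation; this Python sketch (illustrative, not from the article) reproduces the example, deleting row 3 and column 2:

```python
def submatrix(A, del_rows, del_cols):
    """Delete the given sets of 1-based row and column indices."""
    return [[x for j, x in enumerate(row, start=1) if j not in del_cols]
            for i, row in enumerate(A, start=1) if i not in del_rows]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
print(submatrix(A, {3}, {2}))  # [[1, 3, 4], [5, 7, 8]]
```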
The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.
A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain. Other authors define a principal submatrix as one in which the first k rows and columns, for some number k, are the ones that remain; this type of submatrix has also been called a leading principal submatrix.
== Linear equations ==
Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if A is an m×n matrix, x designates a column vector (that is, n×1-matrix) of n variables x1, x2, ..., xn, and b is an m×1-column vector, then the matrix equation
{\displaystyle \mathbf {Ax} =\mathbf {b} }
is equivalent to the system of linear equations
{\displaystyle {\begin{aligned}a_{1,1}x_{1}+a_{1,2}x_{2}+&\cdots +a_{1,n}x_{n}=b_{1}\\&\ \ \vdots \\a_{m,1}x_{1}+a_{m,2}x_{2}+&\cdots +a_{m,n}x_{n}=b_{m}\end{aligned}}}
Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately. If n = m and the equations are independent, then this can be done by writing
{\displaystyle \mathbf {x} =\mathbf {A} ^{-1}\mathbf {b} }
where A−1 is the inverse matrix of A. If A has no inverse, solutions—if any—can be found using its generalized inverse.
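For the 2×2 case the inverse has a well-known closed form, A⁻¹ = (1/det) [[d, −b], [−c, a]] for A = [[a, b], [c, d]], which gives a tiny worked sketch of x = A⁻¹b (illustrative only, with exact rational arithmetic):

```python
from fractions import Fraction

def inverse_2x2(A):
    """A^{-1} = (1/det) [[d, -b], [-c, a]] for A = [[a, b], [c, d]]."""
    (a, b), (c, d) = [[Fraction(x) for x in row] for row in A]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(A, x):
    """The product Ax of a matrix and a column vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

A = [[2, 1], [1, 3]]
b = [5, 10]
x = mat_vec(inverse_2x2(A), b)
print(x)  # x = [1, 3], since A x = b
```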
== Linear transformations ==
Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation Rn → Rm mapping each vector x in Rn to the (matrix) product Ax, which is a vector in Rm. Conversely, each linear transformation f : Rn → Rm arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(ej), where ej = (0, ..., 0, 1, 0, ..., 0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f.
For example, the 2×2 matrix
A
=
[
a
c
b
d
]
{\displaystyle \mathbf {A} ={\begin{bmatrix}a&c\\b&d\end{bmatrix}}}
can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors
{\displaystyle \left[{\begin{smallmatrix}0\\0\end{smallmatrix}}\right]}, {\displaystyle \left[{\begin{smallmatrix}1\\0\end{smallmatrix}}\right]}, {\displaystyle \left[{\begin{smallmatrix}1\\1\end{smallmatrix}}\right]}, and {\displaystyle \left[{\begin{smallmatrix}0\\1\end{smallmatrix}}\right]}
in turn. These vectors define the vertices of the unit square. The following table shows several 2×2 real matrices with the associated linear maps of
{\displaystyle \mathbb {R} ^{2}}
. The blue original is mapped to the green grid and shapes. The origin (0, 0) is marked with a black point.
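The unit-square picture can be reproduced numerically; the particular entries a, b, c, d below are chosen only for illustration:

```python
import numpy as np

# The matrix [[a, c], [b, d]] with columns (a, b) and (c, d)
a, b, c, d = 2.0, 1.0, 0.5, 1.5
A = np.array([[a, c],
              [b, d]])

# Corners of the unit square as column vectors
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

# Multiplying by A maps them to the vertices of the parallelogram
# (0, 0), (a, b), (a + c, b + d), and (c, d)
parallelogram = A @ square
```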
Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps: if a k-by-m matrix B represents another linear map
{\displaystyle g:\mathbb {R} ^{m}\to \mathbb {R} ^{k}}
, then the composition g ∘ f is represented by BA since
{\displaystyle (g\circ f)({\mathbf {x}})=g(f({\mathbf {x}}))=g({\mathbf {Ax}})={\mathbf {B}}({\mathbf {Ax}})=({\mathbf {BA}}){\mathbf {x}}.}
The last equality follows from the above-mentioned associativity of matrix multiplication.
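A quick numerical check of this correspondence, with dimensions chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # represents f: R^2 -> R^3
B = rng.standard_normal((4, 3))   # represents g: R^3 -> R^4
x = rng.standard_normal(2)

# g(f(x)) = B(Ax) equals (BA)x, so the composition g∘f is represented by BA
lhs = B @ (A @ x)
rhs = (B @ A) @ x
assert np.allclose(lhs, rhs)
```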
The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors. Equivalently it is the dimension of the image of the linear map represented by A. The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.
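For instance, the rank–nullity theorem can be verified on a small example (the matrix below is constructed so that one row is dependent on the others):

```python
import numpy as np

# A 3x4 matrix whose third row is the sum of the first two, so its rank is 2
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)

# Rank-nullity: rank + dim(kernel) = number of columns
null_dim = A.shape[1] - rank
```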
== Square matrix ==
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
The entries aii form the main diagonal of a square matrix. They lie on the imaginary line running from the top left corner to the bottom right corner of the matrix.
Square matrices of a given order, together with matrix addition and multiplication, form a ring, which is one of the most common examples of a noncommutative ring.
=== Main types ===
==== Diagonal and triangular matrix ====
If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.
==== Identity matrix ====
The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example,
{\displaystyle {\begin{aligned}\mathbf {I} _{1}&={\begin{bmatrix}1\end{bmatrix}},\\[4pt]\mathbf {I} _{2}&={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\\[4pt]\vdots &\\[4pt]\mathbf {I} _{n}&={\begin{bmatrix}1&0&\cdots &0\\0&1&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &1\end{bmatrix}}\end{aligned}}}
It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:
{\displaystyle {\mathbf {AI}}_{n}={\mathbf {I}}_{m}{\mathbf {A}}={\mathbf {A}}}
for any m-by-n matrix A.
A scalar multiple of an identity matrix is called a scalar matrix.
==== Symmetric or skew-symmetric matrix ====
A square matrix A that is equal to its transpose, that is, A = AT, is a symmetric matrix. If instead, A is equal to the negative of its transpose, that is, A = −AT, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns.
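A minimal numerical illustration of the real spectral theorem, using an arbitrary small symmetric matrix:

```python
import numpy as np

# A real symmetric matrix
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eigh is specialized for symmetric/Hermitian matrices; it
# returns real eigenvalues (in ascending order) and an orthonormal eigenbasis
eigenvalues, eigenvectors = np.linalg.eigh(S)

# The eigenvectors form an orthonormal basis of R^2
assert np.allclose(eigenvectors.T @ eigenvectors, np.eye(2))
```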
==== Invertible matrix and its inverse ====
A square matrix A is called invertible or non-singular if there exists a matrix B such that
{\displaystyle {\mathbf {AB}}={\mathbf {BA}}={\mathbf {I}}_{n},}
where In is the n×n identity matrix with 1 for each entry on the main diagonal and 0 elsewhere. If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
There are many algorithms for testing whether a square matrix is invertible, and, if it is, computing its inverse. One of the oldest, which is still in common use is Gaussian elimination.
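As a sketch (the example matrix is arbitrary; NumPy's inverse routine is itself based on an LU, that is, Gaussian-elimination-style, factorization):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# det(A) = 4*6 - 7*2 = 10 != 0, so A is invertible
A_inv = np.linalg.inv(A)

# AB = BA = I_n characterizes the inverse
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```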
==== Definite matrix ====
A symmetric real matrix A is called positive-definite if the associated quadratic form
{\displaystyle f({\mathbf {x}})={\mathbf {x}}^{\rm {T}}{\mathbf {Ax}}}
has a positive value for every nonzero vector x in
{\displaystyle \mathbb {R} ^{n}}
. If f(x) yields only negative values then A is negative-definite; if f produces both negative and positive values then A is indefinite. If the quadratic form f yields only non-negative values (positive or zero), the symmetric matrix is called positive-semidefinite (or if only non-positive values, then negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible. The table at the right shows two possibilities for 2-by-2 matrices. The eigenvalues of a diagonal matrix are simply the entries along the diagonal, and so in these examples, the eigenvalues can be read directly from the matrices themselves. The first matrix has two eigenvalues that are both positive, while the second has one that is positive and another that is negative.
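The eigenvalue criterion can be turned into a small classifier; the helper function name below is hypothetical, chosen only for this sketch:

```python
import numpy as np

def classify(A):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix are real
    if np.all(w > 0):
        return "positive-definite"
    if np.all(w < 0):
        return "negative-definite"
    if np.all(w >= 0):
        return "positive-semidefinite"
    if np.all(w <= 0):
        return "negative-semidefinite"
    return "indefinite"

# Diagonal matrices: the eigenvalues are simply the diagonal entries
pd = classify(np.diag([1.0, 2.0]))      # both eigenvalues positive
indef = classify(np.diag([1.0, -2.0]))  # one positive, one negative
```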
Allowing as input two different vectors instead yields the bilinear form associated to A:
{\displaystyle B_{\mathbf {A}}({\mathbf {x}},{\mathbf {y}})={\mathbf {x}}^{\rm {T}}{\mathbf {Ay}}.}
In the case of complex matrices, the same terminology and results apply, with symmetric matrix, quadratic form, bilinear form, and transpose xT replaced respectively by Hermitian matrix, Hermitian form, sesquilinear form, and conjugate transpose xH.
==== Orthogonal matrix ====
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:
{\displaystyle \mathbf {A} ^{\mathrm {T} }=\mathbf {A} ^{-1},\,}
which entails
{\displaystyle \mathbf {A} ^{\mathrm {T} }\mathbf {A} =\mathbf {A} \mathbf {A} ^{\mathrm {T} }=\mathbf {I} _{n},}
where In is the identity matrix of size n.
An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation without reflection, i.e., the transformation preserves the orientation of the transformed structure, while every orthogonal matrix with determinant −1 reverses the orientation, i.e., is a composition of a pure reflection and a (possibly null) rotation. The identity matrices have determinant 1 and are pure rotations by an angle zero.
The complex analog of an orthogonal matrix is a unitary matrix.
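These properties can be checked directly on the standard example of a rotation matrix (the rotation angle below is arbitrary):

```python
import numpy as np

theta = 0.3
# A 2x2 rotation matrix: a special orthogonal matrix (determinant +1)
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The transpose equals the inverse, and the determinant is +1
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.isclose(np.linalg.det(Q), 1.0)
```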
=== Main operations ===
==== Trace ====
The trace tr(A) of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:
{\displaystyle \operatorname {tr} (\mathbf {AB} )=\operatorname {tr} (\mathbf {BA} ).}
This is immediate from the definition of matrix multiplication:
{\displaystyle \operatorname {tr} (\mathbf {AB} )=\sum _{i=1}^{m}\sum _{j=1}^{n}a_{ij}b_{ji}=\operatorname {tr} (\mathbf {BA} ).}
It follows that the trace of the product of more than two matrices is independent of cyclic permutations of the matrices; however, this does not in general apply for arbitrary permutations. For example, tr(ABC) ≠ tr(BAC), in general. Also, the trace of a matrix is equal to that of its transpose, that is,
{\displaystyle \operatorname {tr} ({\mathbf {A}})=\operatorname {tr} ({\mathbf {A}}^{\rm {T}}).}
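Both identities are easy to confirm numerically; note that tr(AB) = tr(BA) holds even when AB and BA have different sizes (the dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# tr(AB) = tr(BA), even though AB is 2x2 while BA is 3x3
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# tr(C) = tr(C^T) for any square matrix C
C = rng.standard_normal((3, 3))
assert np.isclose(np.trace(C), np.trace(C.T))
```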
==== Determinant ====
The determinant of a square matrix A (denoted det(A) or |A|) is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in
{\displaystyle \mathbb {R} ^{2}}
) or volume (in
{\displaystyle \mathbb {R} ^{3}}
) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of 2-by-2 matrices is given by
{\displaystyle \det {\begin{bmatrix}a&b\\c&d\end{bmatrix}}=ad-bc.}
The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalizes these two formulae to all dimensions.
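The 2-by-2 formula agrees with a general-purpose determinant routine (the example entries are arbitrary):

```python
import numpy as np

A = np.array([[3.0, 8.0],
              [4.0, 6.0]])

# ad - bc = 3*6 - 8*4 = -14
det_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

# np.linalg.det computes the same value for matrices of any size
assert np.isclose(np.linalg.det(A), det_formula)
```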
The determinant of a product of square matrices equals the product of their determinants:
{\displaystyle \det({\mathbf {AB}})=\det({\mathbf {A}})\cdot \det({\mathbf {B}}),}
or using alternate notation:
{\displaystyle |{\mathbf {AB}}|=|{\mathbf {A}}|\cdot |{\mathbf {B}}|.}
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices, the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.
==== Eigenvalues and eigenvectors ====
A number
{\textstyle \lambda }
and a nonzero vector v satisfying
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} }
are called an eigenvalue and an eigenvector of A, respectively. The number λ is an eigenvalue of an n×n-matrix A if and only if (A − λIn) is not invertible, which is equivalent to
{\displaystyle \det(\mathbf {A} -\lambda \mathbf {I} )=0.}
The polynomial pA in an indeterminate X given by evaluation of the determinant det(XIn − A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, that is, eigenvalues of the matrix. They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its characteristic polynomial yields the zero matrix.
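The Cayley–Hamilton theorem can be checked directly for a 2-by-2 example, where the characteristic polynomial is x² − tr(A)x + det(A):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Characteristic polynomial of a 2x2 matrix: p(x) = x^2 - tr(A) x + det(A)
t, d = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton: substituting A itself into p yields the zero matrix
p_of_A = A @ A - t * A + d * np.eye(2)
assert np.allclose(p_of_A, np.zeros((2, 2)))
```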
== Computational aspects ==
Matrix calculations can often be performed with different techniques. Many problems can be solved by both direct algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors xn converging to an eigenvector when n tends to infinity.
To choose the most appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra. As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability.
Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example, multiplication of matrices. Calculating the matrix product of two n-by-n matrices using the definition given above needs n3 multiplications, since for any of the n2 entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n2.807 multiplications. Theoretically faster but impractical matrix multiplication algorithms have been developed, as have speedups to this problem using parallel algorithms or distributed computation systems such as MapReduce.
In many practical situations, additional information about the matrices involved is known. An important case is sparse matrices, that is, matrices whose entries are mostly zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.
An algorithm is, roughly speaking, numerically stable if little deviations in the input values do not lead to big deviations in the result. For example, one can calculate the inverse of a matrix by computing its adjugate matrix:
{\displaystyle {\mathbf {A}}^{-1}=\operatorname {adj} ({\mathbf {A}})/\det({\mathbf {A}}).}
However, this may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.
== Decomposition ==
There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. These techniques are of interest because they can make computations easier.
The LU decomposition factors matrices as a product of a lower triangular matrix (L) and an upper triangular matrix (U). Once this decomposition is calculated, linear systems can be solved more efficiently by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form. Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix.
The eigendecomposition or diagonalization expresses A as a product VDV−1, where D is a diagonal matrix and V is a suitable invertible matrix. If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ1 to λn of A, placed on the main diagonal and possibly entries equal to one directly above the main diagonal, as shown at the right. Given the eigendecomposition, the nth power of A (that is, n-fold iterated matrix multiplication) can be calculated via
{\displaystyle {\mathbf {A}}^{n}=({\mathbf {VDV}}^{-1})^{n}={\mathbf {VDV}}^{-1}{\mathbf {VDV}}^{-1}\ldots {\mathbf {VDV}}^{-1}={\mathbf {VD}}^{n}{\mathbf {V}}^{-1}}
and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential eA, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices. To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.
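As a sketch of this power-via-diagonalization trick (the matrix below is symmetric, so it is certainly diagonalizable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition A = V D V^{-1}
w, V = np.linalg.eig(A)

# A^5 via the decomposition: only the diagonal entries are exponentiated
A5 = V @ np.diag(w ** 5) @ np.linalg.inv(V)

# Agrees with direct repeated multiplication
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```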
== Abstract algebraic aspects and generalizations ==
Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realized as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers. Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions, matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields.
In general, matrices and their multiplication also form a category, the category of matrices.
=== Matrices with more general entries ===
This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, that is, a set where addition, subtraction, multiplication, and division operations are defined and well-behaved, may be used instead of
{\displaystyle \mathbb {R} } or {\displaystyle \mathbb {C} }
, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the entries of the matrix; for instance, they may be complex in the case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (for example, to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as
{\displaystyle \mathbb {C} ,}
from the outset.
More generally, matrices with entries in a ring R are widely used in mathematics. Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) (also denoted Mn(R)) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module Rn. If the ring R is commutative, that is, its multiplication is commutative, then the ring M(n, R) is also an associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalizing the situation over a field F, where every nonzero element is invertible. Matrices over superrings are called supermatrices.
Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ring; but their sizes must fulfill certain compatibility conditions.
=== Relationship to linear maps ===
Linear maps
{\displaystyle \mathbb {R} ^{n}\to \mathbb {R} ^{m}}
are equivalent to m-by-n matrices, as described above. More generally, any linear map f : V → W between finite-dimensional vector spaces can be described by a matrix A = (aij), after choosing bases v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that
{\displaystyle f(\mathbf {v} _{j})=\sum _{i=1}^{m}a_{i,j}\mathbf {w} _{i}\qquad {\mbox{for}}\ j=1,\ldots ,n.}
In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. The matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices. Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix AT describes the transpose of the linear map given by A, with respect to the dual bases.
These properties can be restated more naturally: the category of matrices with entries in a field
{\displaystyle k}
with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field.
More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rm and Rn for an arbitrary ring R with unity. When n = m composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rn.
=== Matrix groups ===
A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation combining any two objects to a third, subject to certain requirements. A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group. Since every element of a group must be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.
Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller group contained in) their general linear group, called a special linear group. Orthogonal matrices, determined by the condition
{\displaystyle {\mathbf {M}}^{\rm {T}}{\mathbf {M}}={\mathbf {I}},}
form the orthogonal group. Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group.
Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group. General groups can be studied using matrix groups, which are comparatively well understood, using representation theory.
=== Infinite matrices ===
It is also possible to consider matrices with infinitely many rows and/or columns. The basic operations introduced above are defined the same way in this case. Matrix multiplication, however, and all operations stemming therefrom are only meaningful when restricted to certain matrices, since the sum featuring in the above definition of the matrix product will contain an infinity of summands. An easy way to circumvent this issue is to restrict to matrices all of whose rows (or columns) contain only finitely many nonzero terms. As in the finite case (see above), where matrices describe linear maps, infinite matrices can be used to describe operators on Hilbert spaces, where convergence and continuity questions arise. However, the explicit point of view of matrices tends to obfuscate the matter, and the abstract and more powerful tools of functional analysis are used instead, by relating matrices to linear maps (as in the finite case above), but imposing additional convergence and continuity constraints.
=== Empty matrix ===
An empty matrix is a matrix in which the number of rows or columns (or both) is zero. Empty matrices help to deal with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, which follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
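NumPy is one system that supports empty matrices, and (assuming its conventions match the above) its determinant of a 0-by-0 matrix is indeed 1:

```python
import numpy as np

A = np.zeros((3, 0))  # a 3-by-0 matrix
B = np.zeros((0, 3))  # a 0-by-3 matrix

# AB is the 3x3 zero matrix; BA is a 0x0 matrix
assert (A @ B).shape == (3, 3) and np.all(A @ B == 0)
assert (B @ A).shape == (0, 0)

# The determinant of the 0x0 matrix is the empty product, 1
assert np.isclose(np.linalg.det(B @ A), 1.0)
```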
== Applications ==
There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of strategies the players choose. Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.
Complex numbers can be represented by particular real 2-by-2 matrices via
{\displaystyle a+ib\leftrightarrow {\begin{bmatrix}a&-b\\b&a\end{bmatrix}},}
under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions and Clifford algebras in general.
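This correspondence is easy to verify numerically; the helper function name below is chosen only for this sketch:

```python
import numpy as np

def to_matrix(z):
    """Represent the complex number a+ib as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j

# Multiplication of complex numbers corresponds to matrix multiplication...
assert np.allclose(to_matrix(z * w), to_matrix(z) @ to_matrix(w))
# ...and addition to matrix addition
assert np.allclose(to_matrix(z + w), to_matrix(z) + to_matrix(w))
```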
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break. Computer graphics uses matrices to represent objects; to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image convolutions such as sharpening, blurring, edge detection, and more. Matrices over a polynomial ring are important in the study of control theory.
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
=== Graph theory ===
The adjacency matrix of a finite graph is a basic notion of graph theory. It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about the distances of the edges. These concepts can be applied to websites connected by hyperlinks, or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
=== Analysis and geometry ===
The Hessian matrix of a differentiable function
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }
consists of the second derivatives of f with respect to the several coordinate directions, that is,
{\displaystyle H(f)=\left[{\frac {\partial ^{2}f}{\partial x_{i}\,\partial x_{j}}}\right].}
It encodes information about the local growth behavior of the function: given a critical point x = (x1, ..., xn), that is, a point where the first partial derivatives
{\displaystyle \partial f/\partial x_{i}}
of f vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}}
. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as
{\displaystyle J(f)=\left[{\frac {\partial f_{i}}{\partial x_{j}}}\right]_{1\leq i\leq m,1\leq j\leq n}.}
If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.
The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.
=== Probability theory and statistics ===
Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states. A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain—like absorbing states, that is, states that any particle attains eventually—can be read off the eigenvectors of the transition matrices.
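A minimal two-state sketch, with transition probabilities chosen arbitrarily; repeated multiplication by the stochastic matrix converges to the stationary distribution:

```python
import numpy as np

# A 2-state stochastic matrix: each row is a probability distribution
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Starting distribution; iterating pi <- pi P converges to the stationary
# distribution, a left eigenvector of P with eigenvalue 1
pi = np.array([1.0, 0.0])
for _ in range(200):
    pi = pi @ P

assert np.allclose(pi, pi @ P)    # stationary: unchanged by one more step
assert np.isclose(pi.sum(), 1.0)  # still a probability vector
```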
Statistics also makes use of matrices in many different forms. Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables. Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN), by a linear function
{\displaystyle y_{i}\approx ax_{i}+b,\quad i=1,\ldots ,N}
which can be formulated in terms of matrices, related to the singular value decomposition of matrices.
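As a sketch with made-up data (noisy samples near the hypothetical line y = 2x + 1), NumPy's least-squares routine, which is itself based on the singular value decomposition:

```python
import numpy as np

# Illustrative noisy samples near y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Formulate y_i ≈ a x_i + b as the matrix equation X [a, b]^T ≈ y
X = np.column_stack([x, np.ones_like(x)])

# np.linalg.lstsq solves the least-squares problem via the SVD
(a, b), residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
```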
Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.
=== Quantum mechanics and particle physics ===
The first model of quantum mechanics (Heisenberg, 1925) used infinite-dimensional matrices to define the operators that took over the role of variables like position, momentum and energy from classical physics. (This is sometimes referred to as matrix mechanics.) Matrices, both finite and infinite-dimensional, have since been employed for many purposes in quantum mechanics. One particular example is the density matrix, a tool used in calculating the probabilities of the outcomes of measurements performed on physical systems.
Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors. For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to the basic quark states that define particles with specific and distinct masses.
Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics. In collision reactions, such as those that occur in particle accelerators, non-interacting particles head towards each other, collide in a small interaction zone, and emerge as a new set of non-interacting particles. Such a reaction can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.
=== Normal modes ===
A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms. They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.
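As a minimal worked example of the procedure just described (the masses and spring constants are assumed values for illustration): two equal masses coupled by three springs lead to a symmetric stiffness matrix, and diagonalizing it yields the normal-mode frequencies.

```python
import numpy as np

# Two equal masses m joined wall-spring-mass-spring-mass-spring-wall,
# each spring of stiffness k.  Equations of motion: M x'' = -K x.
m, k = 1.0, 4.0
M = m * np.eye(2)                       # mass matrix
K = k * np.array([[2.0, -1.0],
                  [-1.0, 2.0]])         # force (stiffness) matrix

# For M = m*I the generalized eigenproblem K v = omega^2 M v reduces
# to an ordinary symmetric eigenproblem for K/m.
omega_sq, modes = np.linalg.eigh(K / m)
frequencies = np.sqrt(omega_sq)

# Expected normal modes: omega^2 = k/m (masses in phase)
# and omega^2 = 3k/m (masses out of phase).
assert np.allclose(sorted(omega_sq), [k / m, 3 * k / m])
```

The columns of `modes` are the normal modes themselves: `(1, 1)/sqrt(2)` for the in-phase motion and `(1, -1)/sqrt(2)` for the out-of-phase motion.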
=== Geometrical optics ===
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called a ray transfer matrix (the technique is known as ray transfer matrix analysis): the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. There are two kinds of matrices: a refraction matrix, describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies.
The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.
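This composition of element matrices can be sketched in a few lines. The example below (focal length and ray height are arbitrary assumed values) uses the common (height, slope) convention for the ray vector and checks that a ray entering parallel to the axis crosses the axis at the focal point of a thin lens:

```python
import numpy as np

def translation(d):
    """Free propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Refraction by an ideal thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 50.0                         # focal length, arbitrary units
ray_in = np.array([10.0, 0.0])   # height 10 above the axis, parallel to it

# Matrices act right-to-left, in the order the ray meets the elements:
# first the lens, then one focal length of free propagation.
system = translation(f) @ thin_lens(f)
ray_out = system @ ray_in

# The ray crosses the optical axis at the focal point: height ~ 0.
assert abs(ray_out[0]) < 1e-9
```

The whole optical system is just the single matrix `system`, as the text states: the product of the component matrices.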
The Jones calculus models the polarization of a light source as a two-component Jones vector, and the effect of an optical filter on this polarization vector as multiplication by a 2 × 2 matrix.
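A classic Jones-calculus computation (a sketch with idealized, lossless polarizers, not taken from the article) is the three-polarizer experiment: two crossed polarizers block all light, yet inserting a third polarizer at 45° between them transmits a quarter of the intensity.

```python
import numpy as np

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta to the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

# Horizontally polarized light as a two-component Jones vector.
E_in = np.array([1.0, 0.0])

# Crossed polarizers (0 and 90 degrees) block the beam entirely...
blocked = linear_polarizer(np.pi / 2) @ linear_polarizer(0.0) @ E_in
assert np.allclose(blocked, 0.0)

# ...but a 45-degree polarizer between them passes a quarter of the intensity.
E_out = (linear_polarizer(np.pi / 2)
         @ linear_polarizer(np.pi / 4)
         @ linear_polarizer(0.0) @ E_in)
intensity = np.sum(np.abs(E_out) ** 2)
assert np.isclose(intensity, 0.25)
```

As with ray transfer matrices, a chain of optical elements collapses to a single matrix product acting on the polarization vector.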
=== Electronics ===
Electronic circuits that are composed of linear components (such as resistors, inductors and capacitors) obey Kirchhoff's circuit laws, which leads to a system of linear equations, which can be described with a matrix equation that relates the source currents and voltages to the resultant currents and voltages at each point in the circuit, and where the matrix entries are determined by the circuit.
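The matrix equation for such a circuit can be solved directly. The sketch below (component values are illustrative assumptions) applies nodal analysis to a current source driving two resistors in series, building the conductance matrix from Kirchhoff's current law and solving G v = I:

```python
import numpy as np

# A 1 A current source feeds node 1; R1 = 2 ohm connects node 1 to
# node 2; R2 = 4 ohm connects node 2 to ground.
R1, R2 = 2.0, 4.0
g1, g2 = 1.0 / R1, 1.0 / R2

# Conductance matrix: one row per node, from Kirchhoff's current law.
G = np.array([[g1, -g1],
              [-g1, g1 + g2]])
I = np.array([1.0, 0.0])   # currents injected at each node

# Node voltages follow from the matrix equation G v = I.
v = np.linalg.solve(G, I)

# All 1 A flows through R1 and R2 in series, so
# v2 = 1 A * 4 ohm = 4 V and v1 = v2 + 1 A * 2 ohm = 6 V.
assert np.allclose(v, [6.0, 4.0])
```

Larger linear circuits work the same way: the matrix grows with the number of nodes, but the solution is still a single linear solve.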
== History ==
Matrices have a long history of application in solving linear equations, but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art, written between the 10th and 2nd centuries BCE, is the first example of the use of array methods to solve simultaneous equations, including the concept of determinants. In 1545 the Italian mathematician Gerolamo Cardano introduced the method to Europe when he published Ars Magna. The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683. The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves. Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays. Cramer presented his rule in 1750.
The term "matrix" (Latin for "womb", "dam" (a non-human female animal kept for breeding), "source", "origin", "list" and "register", derived from mater, mother) was coined by James Joseph Sylvester in 1850, who understood a matrix as an object giving rise to several determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:
I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered from the womb of a common parent.
Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead, he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition. Early matrix theory had limited the use of arrays almost exclusively to determinants and Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858, Cayley published his A memoir on the theory of matrices in which he proposed and demonstrated the Cayley–Hamilton theorem.
The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices in 1913 and he simultaneously demonstrated the first significant use of the notation A = [ai,j] to represent a matrix where ai,j refers to the ith row and the jth column.
The modern study of determinants sprang from several sources. Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as x² + xy − 2y², and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as the definition of the determinant of a matrix A = [a_{j,k}] the following: replace the powers a_j^k by a_{j,k} in the polynomial
{\displaystyle a_{1}a_{2}\cdots a_{n}\prod _{i<j}(a_{j}-a_{i}),}
where ∏ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real. Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—which can be used to describe geometric transformations at a local (or infinitesimal) level, see above. Kronecker's Vorlesungen über die Theorie der Determinanten and Weierstrass's Zur Determinantentheorie, both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.
Many theorems were first established for small matrices only, for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Wilhelm Jordan. In the early 20th century, matrices attained a central role in linear algebra, partially due to their use in the classification of the hypercomplex number systems of the previous century.
The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns. Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.
=== Other historical usages of the word "matrix" in mathematics ===
The word has been used in unusual ways by at least two authors of historical importance.
Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension:
Let us give the name of matrix to any function, of however many variables, that does not involve any apparent variables. Then, any possible function other than a matrix derives from a matrix using generalization, that is, by considering the proposition that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined.
For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, such as y, by "considering" the function for all possible values of "individuals" ai substituted in place of a variable x. And then the resulting collection of functions of the single variable y, that is, ∀ai: Φ(ai, y), can be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" bi substituted in place of variable y:
{\displaystyle \forall b_{j}\forall a_{i}\colon \phi (a_{i},b_{j}).}
Alfred Tarski in his 1941 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.
== See also ==
List of named matrices
Gram–Schmidt process – Orthonormalization of a set of vectors
Irregular matrix
Matrix calculus – Specialized notation for multivariable calculus
Matrix function – Function that maps matrices to matrices
== Notes ==
== References ==
=== Mathematical references ===
=== Physics references ===
=== Historical references ===
== Further reading ==
"Matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
The Matrix Cookbook (PDF), retrieved 24 March 2014
Brookes, Mike (2005), The Matrix Reference Manual, London: Imperial College, archived from the original on 16 December 2008, retrieved 10 Dec 2008
== External links ==
MacTutor: Matrices and determinants
Matrices and Linear Algebra on the Earliest Uses Pages
Earliest Uses of Symbols for Matrices and Vectors
Medical microbiology, the large subset of microbiology that is applied to medicine, is a branch of medical science concerned with the prevention, diagnosis and treatment of infectious diseases. In addition, this field of science studies various clinical applications of microbes for the improvement of health. There are four kinds of microorganisms that cause infectious disease: bacteria, fungi, parasites and viruses, as well as one type of infectious protein called a prion.
A medical microbiologist studies the characteristics of pathogens, their modes of transmission, and their mechanisms of infection and growth. Qualification as a clinical/medical microbiologist in a hospital or medical research centre generally requires a bachelor's degree, while some countries require a master's degree in microbiology along with a Ph.D. in one of the life sciences (biochemistry, microbiology, biotechnology, genetics, etc.). Medical microbiologists often serve as consultants for physicians, providing identification of pathogens and suggesting treatment options. Using this information, a treatment can be devised.
Other tasks may include the identification of potential health risks to the community or monitoring the evolution of potentially virulent or resistant strains of microbes, educating the community and assisting in the design of health practices. They may also assist in preventing or controlling epidemics and outbreaks of disease.
Not all medical microbiologists study microbial pathology; some study common, non-pathogenic species to determine whether their properties can be used to develop antibiotics or other treatment methods.
Epidemiology, the study of the patterns, causes, and effects of health and disease conditions in populations, is an important part of medical microbiology, although the clinical aspect of the field primarily focuses on the presence and growth of microbial infections in individuals, their effects on the human body, and the methods of treating those infections. In this respect the entire field, as an applied science, can be conceptually subdivided into academic and clinical sub-specialties, although in reality there is a fluid continuum between public health microbiology and clinical microbiology, just as the state of the art in clinical laboratories depends on continual improvements in academic medicine and research laboratories.
== History ==
In 1676, Anton van Leeuwenhoek observed bacteria and other microorganisms, using a single-lens microscope of his own design.
In 1796, Edward Jenner developed a method using cowpox to successfully immunize a child against smallpox. The same principles are used for developing vaccines today.
Following on from this, in 1857 Louis Pasteur also designed vaccines against several diseases such as anthrax, fowl cholera and rabies, and developed pasteurization for food preservation.
Joseph Lister, who introduced antiseptic surgery in 1867, is considered to be the father of the field. By sterilizing instruments with diluted carbolic acid and using it to clean wounds, he reduced post-operative infections, making surgery safer for patients.
In the years between 1876 and 1884 Robert Koch provided much insight into infectious diseases. He was one of the first scientists to focus on the isolation of bacteria in pure culture. This gave rise to the germ theory of disease, the idea that a specific microorganism is responsible for a specific disease. He developed a series of criteria around this idea that have become known as Koch's postulates.
A major milestone in medical microbiology is the Gram stain. In 1884 Hans Christian Gram developed the method of staining bacteria to make them more visible and differentiated under a microscope. This technique is widely used today.
In 1910 Paul Ehrlich tested a series of arsenic-based compounds on rabbits infected with syphilis and found that arsphenamine was effective against the syphilis spirochete. Arsphenamine was made available that year under the name Salvarsan.
In 1929 Alexander Fleming developed one of the most commonly used antibiotic substances both at the time and now: penicillin.
In 1939 Gerhard Domagk found that Prontosil red protected mice from pathogenic streptococci and staphylococci without toxicity. Domagk received the Nobel Prize in Physiology or Medicine for the discovery of this sulfa drug.
DNA sequencing, a method developed by Walter Gilbert and Frederick Sanger in 1977, rapidly changed the development of vaccines, medical treatments and diagnostic methods. Examples include synthetic insulin, produced in 1979 using recombinant DNA, and the first genetically engineered vaccine, created in 1986 for hepatitis B.
In 1995 a team at The Institute for Genomic Research sequenced the first bacterial genome, that of Haemophilus influenzae. A few months later, the first eukaryotic genome was completed. This would prove invaluable for diagnostic techniques.
In 2007, a team at the Danish food company Danisco identified the purpose of the CRISPR-Cas systems as adaptive immunity to phages. The system was then quickly found to be useful in genome editing through its ability to generate double-strand breaks. In July 2019, a patient with sickle cell disease became the first person to be treated for a genetic disorder with CRISPR.
== Commonly treated infectious diseases ==
Bacterial
Streptococcal pharyngitis
Chlamydia
Typhoid fever
Tuberculosis
Viral
Rotavirus
Hepatitis C
Human papillomavirus (HPV)
Parasitic
Malaria
Giardia lamblia
Toxoplasma gondii
Fungal
Candida
Histoplasmosis
Dandruff
Prion - Misfolded proteins that usually occur in the brain and trigger other normal proteins to misfold. They are extremely rare, and these built-up proteins lead to complications.
Transmissible spongiform encephalopathies (TSE)
Creutzfeldt-Jakob disease (CJD)
Fatal familial insomnia
== Causes and transmission of infectious diseases ==
Infections may be caused by bacteria, viruses, fungi, prions, and parasites. The pathogen that causes the disease may be exogenous (acquired from an external source; environmental, animal or other people, e.g. Influenza) or endogenous (from normal flora e.g. Candidiasis).
The site at which a microbe enters the body is referred to as the portal of entry. These include the respiratory tract, gastrointestinal tract, genitourinary tract, skin, parenteral, blood transfusion, congenital, optic, and mucous membranes. The portal of entry for a specific microbe is normally dependent on how it travels from its natural habitat to the host.
There are various ways in which disease can be transmitted between individuals.
These include:
Direct contact - Touching an infected host, including sexual contact
Indirect contact - Touching a contaminated surface
Droplet contact - Coughing or sneezing
Fecal–oral route - Ingesting contaminated food or water sources
Airborne transmission - Pathogen carrying spores
Vector transmission - An organism that does not cause disease itself but transmits infection by conveying pathogens from one host to another
Fomite transmission - An inanimate object or substance capable of carrying infectious germs or parasites
Environmental - Hospital-acquired infection (Nosocomial infections)
Like other pathogens, viruses use these methods of transmission to enter the body, but viruses differ in that they must also enter the host's actual cells. Once the virus has gained access to the host's cells, the virus's genetic material (RNA or DNA) must be introduced to the cell. Replication strategies vary greatly between viruses and depend on the type of genome involved. Most DNA viruses assemble in the nucleus, while most RNA viruses develop solely in the cytoplasm.
The mechanisms for infection, proliferation, and persistence of a virus in cells of the host are crucial for its survival. For example, some diseases such as measles employ a strategy whereby the virus must spread to a series of hosts. In these forms of viral infection, the illness is often cleared by the body's own immune response, and therefore the virus is required to disperse to new hosts before it is destroyed by immunological resistance or host death. In contrast, some infectious agents, such as the feline leukemia virus, are able to withstand immune responses and are capable of achieving long-term residence within an individual host, whilst also retaining the ability to spread into successive hosts.
Virulence refers to the ability of an organism to invade a host and cause disease. Virulence factors are molecules that enable bacteria to attach to and invade the host's cells. These factors can be secreted, featured on the membrane, or located inside the cell (cytosolic). Cytosolic factors help bacteria rapidly adapt their metabolic, physical, and structural characteristics. Membrane-bound factors help bacteria adhere to the host and avoid detection by the host's immune system. Secreted factors assist bacteria to overcome the body's innate and adaptive immune defenses. In extracellular threats, secreted factors work together to destroy host cells.
== Diagnostic tests ==
Identification of an infectious agent for a minor illness can be as simple as clinical presentation; such as gastrointestinal disease and skin infections. In order to make an educated estimate as to which microbe could be causing the disease, epidemiological factors need to be considered; such as the patient's likelihood of exposure to the suspected organism and the presence and prevalence of a microbial strain in a community.
Diagnosis of infectious disease is nearly always initiated by consulting the patient's medical history and conducting a physical examination. More detailed identification techniques involve microbial culture, microscopy, biochemical tests and genotyping. Other less common techniques (such as X-rays, CAT scans, PET scans or NMR) are used to produce images of internal abnormalities resulting from the growth of an infectious agent.
=== Microbial culture ===
Microbiological culture is the primary method used for isolating infectious disease for study in the laboratory. Tissue or fluid samples are tested for the presence of a specific pathogen, which is determined by growth in a selective or differential medium.
The 3 main types of media used for testing are:
Solid culture: A solid surface is created using a mixture of nutrients, salts and agar. A single microbe on an agar plate can then grow into colonies (clones where cells are identical to each other) containing thousands of cells. These are primarily used to culture bacteria and fungi.
Liquid culture: Cells are grown inside a liquid media. Microbial growth is determined by the time taken for the liquid to form a colloidal suspension. This technique is used for diagnosing parasites and detecting mycobacteria.
Cell culture: Human or animal cell cultures are infected with the microbe of interest. These cultures are then observed to determine the effect the microbe has on the cells. This technique is used for identifying viruses.
=== Microscopy ===
Culture techniques will often use a microscopic examination to help in the identification of the microbe. Instruments such as compound light microscopes can be used to assess critical aspects of the organism. This can be performed immediately after the sample is taken from the patient and is used in conjunction with biochemical staining techniques, allowing for resolution of cellular features. Electron microscopes and fluorescence microscopes are also used for observing microbes in greater detail for research. The two main types of electron microscopy are scanning electron microscopy and transmission electron microscopy. Transmission electron microscopy passes electrons through a thin cross-section of the cell of interest, and it then redirects the electrons onto a fluorescent screen. This method is useful for looking at the inside of cells, and the structures within, especially cell walls and membranes. Scanning electron microscopy reads the electrons that are reflected off the surface of the cells. A 3-dimensional image is then made which shows the size and exterior structure of the cells. Both techniques help give more detailed information about the structure of microbes. This makes it useful in many medical fields, such as diagnostics and biopsies of many body parts, hygiene, and virology. They provide critical information about the structure of pathogens, which allow physicians to treat them with more knowledge.
=== Biochemical tests ===
Fast and relatively simple biochemical tests can be used to identify infectious agents. For bacterial identification, the use of metabolic or enzymatic characteristics are common due to their ability to ferment carbohydrates in patterns characteristic of their genus and species. Acids, alcohols and gases are usually detected in these tests when bacteria are grown in selective liquid or solid media, as mentioned above. In order to perform these tests en masse, automated machines are used. These machines perform multiple biochemical tests simultaneously, using cards with several wells containing different dehydrated chemicals. The microbe of interest will react with each chemical in a specific way, aiding in its identification.
Serological methods are highly sensitive, specific and often extremely rapid laboratory tests used to identify different types of microorganisms. The tests are based upon the ability of an antibody to bind specifically to an antigen. The antigen (usually a protein or carbohydrate made by an infectious agent) is bound by the antibody, allowing this type of test to be used for organisms other than bacteria. This binding then sets off a chain of events that can be easily and definitively observed, depending on the test. More complex serological techniques are known as immunoassays. Using a similar basis as described above, immunoassays can detect or measure antigens from either infectious agents or the proteins generated by an infected host in response to the infection.
=== Polymerase chain reaction ===
Polymerase chain reaction (PCR) assays are the most commonly used molecular technique to detect and study microbes. Compared to other methods, sequencing and analysis are definitive, reliable, accurate, and fast. Today, quantitative PCR is the primary technique used, as this method provides faster data compared to a standard PCR assay. For instance, traditional PCR techniques require the use of gel electrophoresis to visualize amplified DNA molecules after the reaction has finished. Quantitative PCR does not require this, as the detection system uses fluorescence and probes to detect the DNA molecules as they are being amplified. In addition, quantitative PCR also removes the risk of contamination that can occur during standard PCR procedures (carrying over PCR product into subsequent PCRs). Another advantage of using PCR to detect and study microbes is that the DNA sequences of newly discovered infectious microbes or strains can be compared to those already listed in databases, which in turn helps to increase understanding of which organism is causing the infectious disease and thus what possible methods of treatment could be used. This technique is the current standard for detecting viral infections such as AIDS and hepatitis.
== Treatments ==
Once an infection has been diagnosed and identified, suitable treatment options must be assessed by the physician and consulting medical microbiologists. Some infections can be dealt with by the body's own immune system, but more serious infections are treated with antimicrobial drugs. Bacterial infections are treated with antibacterials (often called antibiotics) whereas fungal and viral infections are treated with antifungals and antivirals respectively. A broad class of drugs known as antiparasitics are used to treat parasitic diseases.
Medical microbiologists often make treatment recommendations to the patient's physician based on the strain of microbe and its antibiotic resistances, the site of infection, the potential toxicity of antimicrobial drugs and any drug allergies the patient has.
In addition to drugs being specific to a certain kind of organism (bacteria, fungi, etc.), some drugs are specific to a certain genus or species of organism, and will not work on other organisms. Because of this specificity, medical microbiologists must consider the effectiveness of certain antimicrobial drugs when making recommendations. Additionally, strains of an organism may be resistant to a certain drug or class of drug, even when it is typically effective against the species. These strains, termed resistant strains, present a serious public health concern of growing importance to the medical industry as the spread of antibiotic resistance worsens. Antimicrobial resistance is an increasingly problematic issue that leads to millions of deaths every year.
When bacteria adapt to an antibiotic, the medicine can no longer kill them or stop their growth. These bacterial infections can become extremely difficult to treat, since the options for eliminating the bacterium are now slimmer. Antibiotic resistance can be caused by overuse, misuse, spontaneous resistance, and transmitted resistance. Taking antibiotics that were not prescribed allows naturally resistant bacteria to survive and become "superbugs". Misuse of antibiotics includes forgetting to take one or more doses, stopping treatment too soon, or using someone else's medicine. Mutated bacteria become increasingly resistant to medicine.
Whilst drug resistance typically involves microbes chemically inactivating an antimicrobial drug or a cell mechanically stopping the uptake of a drug, another form of drug resistance can arise from the formation of biofilms. Some bacteria are able to form biofilms by adhering to surfaces on implanted devices such as catheters and prostheses and creating an extracellular matrix for other cells to adhere to. This provides them with a stable environment from which the bacteria can disperse and infect other parts of the host. Additionally, the extracellular matrix and dense outer layer of bacterial cells can protect the inner bacteria cells from antimicrobial drugs.
Phage therapy is a technique that was discovered before antibiotics, but fell by the wayside as antibiotics became predominant. It is now being reconsidered as a potential solution to increasing antimicrobial resistance. Bacteriophages, viruses that only infect bacteria, can specifically target the bacteria of interest and inject their genome. This process makes the bacterium halt its own production to make more phages, and this continues until the bacterium lyses itself and releases the phages into the surrounding environment. Phage therapy does not kill the microbiota, since it is specific, and it can help those with antibiotic allergies. Some drawbacks are that it is a time-intensive process, since the specific bacterium needs to be identified, and it does not currently have the body of research supporting its effects and safety that antibiotics do. Bacteria can also eventually become resistant, through systems like CRISPR/Cas9. Many clinical trials have been promising, though, showing that phage therapy could potentially help with the antimicrobial resistance problem. It can also be used in conjunction with antibiotics for a cumulative effect.
Medical microbiology is not only about diagnosing and treating disease, it also involves the study of beneficial microbes. Microbes have been shown to be helpful in combating infectious disease and promoting health. Treatments can be developed from microbes, as demonstrated by Alexander Fleming's discovery of penicillin as well as the development of new antibiotics from the bacterial genus Streptomyces among many others. Not only are microorganisms a source of antibiotics but some may also act as probiotics to provide health benefits to the host, such as providing better gastrointestinal health or inhibiting pathogens.
== References ==
== External links ==
A Bachelor of Applied Science (BAS or BASc) is an undergraduate academic degree of applied sciences.
== Usage ==
In Canada, the Netherlands and other places the Bachelor of Applied Science (BASc) is equivalent to the Bachelor of Engineering, and is classified as a professional degree. In Australia and New Zealand this degree is awarded in various fields of study and is considered a highly specialized professional degree. In the United States, it is also considered a highly specialized professional technical degree; the Bachelor of Applied Science (BAS) is an applied baccalaureate, typically containing advanced technical education in sciences combined with liberal arts education that traditional degrees do not have. Yet, an earned BAS degree includes the same amount of required coursework as traditional bachelor's degree programs.
Compared to the Bachelor of Arts (BA) and Bachelor of Science (BS), a BAS degree combines “theoretical and hands-on knowledge and skills that build on a variety of educational backgrounds”. BAS degrees often enhance occupational/technical education.
In February 2009, the Dutch Minister of Education, Culture and Science, Ronald Plasterk, proposed to replace all the existing degrees offered by Dutch vocational universities, such as the BBA, BEd and BEng, with the BAA and the BASc. Similarly, the United States has taken BAS as an official degree name.
== Fields of study ==
The BAS usually requires a student to take a majority of their courses in the applied sciences, specializing in a specific area such as the following:
Agricultural systems
Applied physics
Applied mathematics
Architectural engineering
General engineering
Automotive engineering
Building arts
Biological engineering
Biochemical engineering
Built environment
Business informatics
Chemical engineering
Civil engineering
Computer science
Computer engineering
Communication
Construction management
Criminal justice
Criminology
Electrical engineering
Environmental engineering
Geomatics
Occupational health and safety
Public health
Engineering management
Engineering physics
Engineering science
Engineering science and mechanics
Geological engineering
Hospitality management
Industrial engineering
Information management
Integrated engineering
Information systems
Information technology
Management engineering
Management of technology
Manufacturing engineering
Manufacturing management
Materials science & engineering
Mechanical engineering
Mechanical engineering technology
Mechatronics engineering
Mining engineering
Microbiology
Nanotechnology engineering
Nutrition and food
Paralegal studies
Forensics
Astrophysics
Professional technical teacher education
Project management
Property and valuation
Software engineering
Sound engineering
Surveying
Sustainable building science technology
Systems engineering
Regional and urban planning
Applied physics & electronic engineering
Business management
Social science
Leadership
== See also ==
Bachelor of Applied Arts
Bachelor of Applied Arts and Sciences
Bachelor of Arts
Bachelor of Science
Bachelor of Science in Information Technology
Bachelor's degree
== References ==
Food science (or bromatology) is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology.
Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example.
Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may also study more fundamental phenomena that are directly linked to the production of food products and their properties.
== Definition ==
The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing".
== Disciplines ==
Some of the subdisciplines of food science are described below.
=== Food chemistry ===
Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk.
It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This discipline also encompasses how products change under certain food processing techniques and ways either to enhance those changes or to prevent them from happening.
==== Food physical chemistry ====
Food physical chemistry is the study of both physical and chemical interactions in foods in terms of physical and chemical principles applied to food systems, as well as the application of physicochemical techniques and instrumentation for the study and analysis of foods.
=== Food engineering ===
Food engineering is the study of the industrial processes used to manufacture food. It involves developing novel approaches to manufacturing, packaging, delivery, quality assurance, and safety, and devising techniques to transform raw ingredients into wholesome food options.
=== Food microbiology ===
Food microbiology is the study of the microorganisms that inhabit, create, or contaminate food, including the study of microorganisms causing food spoilage. "Good" bacteria, however, such as probiotics, are becoming increasingly important in food science. In addition, microorganisms are essential for the production of foods such as cheese, yogurt, bread, beer, wine and, other fermented foods.
=== Food technology ===
Food technology is the technological aspect of food science: the application of food science to the production, preservation, and distribution of food. Early scientific research into food technology concentrated on food preservation. Nicolas Appert's development in 1810 of the canning process was a decisive event. The process was not called canning then, and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques.
=== Foodomics ===
In 2009, Foodomics was defined as "a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge". Foodomics requires the combination of food chemistry, biological sciences, and data analysis.
Foodomics greatly helps scientists in the area of food science and nutrition to gain better access to data, which is used to analyze the effects of food on human health, etc. It is believed to be another step towards a better understanding of the development and application of technology and food. Moreover, the study of foodomics leads to other omics sub-disciplines, including nutrigenomics which is the integration of the study of nutrition, genes, and omics.
=== Molecular gastronomy ===
Molecular gastronomy is a subdiscipline of food science that seeks to investigate the physical and chemical transformations of ingredients that occur in cooking. Its program includes three axes, as cooking was recognized to have three components, which are social, artistic, and technical.
=== Quality control ===
Quality control involves the causes, prevention, and communication dealing with food-borne illness. Quality control also ensures that the product meets specifications, so that the customer receives what they expect, from the packaging to the physical properties of the product itself.
=== Sensory analysis ===
Sensory analysis is the study of how consumers' senses perceive food.
=== Careers in food science ===
The five most common college degrees leading to a career in food science are: Food science/technology (66%), biological sciences (12%), business/marketing (10%), nutrition (9%) and chemistry (8%).
Careers available to food scientists include food technologists, research and development (R&D), quality control, flavor chemistry, laboratory director, food analytical chemist and technical sales.
The five most common positions for food scientists are food scientist/technologist (19%), product developer (12%), quality assurance/control director (8%), other R&D/scientific/technical (7%), and director of research (5%).
== By country ==
=== Australia ===
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is the federal government agency for scientific research in Australia. CSIRO maintains more than 50 sites across Australia and biological control research stations in France and Mexico. It has nearly 6,500 employees.
=== South Korea ===
The Korean Society of Food Science and Technology, or KoSFoST, claims to be the first society in South Korea for food science.
=== United States ===
In the United States, food science is typically studied at land-grant universities. Some of the country's pioneering food scientists were women who attended chemistry programs at land-grant universities, which were state-run and largely under state mandates to allow for sex-blind admission. After graduation, however, they had difficulty finding jobs due to widespread sexism in the chemistry industry in the late 19th and early 20th centuries. Finding conventional career paths blocked, they found alternative employment as instructors in home economics departments and used that as a base to launch the foundation of many modern food science programs.
The main US organization regarding food science and food technology is the Institute of Food Technologists (IFT), headquartered in Chicago, Illinois, which is the US member organisation of the International Union of Food Science and Technology (IUFoST).
== See also ==
== Publications ==
=== Books ===
Food science is an academic topic, so most food science books are textbooks.
=== Journals ===
== Notes and references ==
== Further reading ==
Wanucha, Genevieve (February 24, 2009). "Two Happy Clams: The friendship that forged food science". MIT Technology Review.
== External links ==
Media related to Food science at Wikimedia Commons
Learn about Food Science
Medicine is the science and practice of caring for patients, managing the diagnosis, prognosis, prevention, treatment, palliation of their injury or disease, and promoting their health. Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Contemporary medicine applies biomedical sciences, biomedical research, genetics, and medical technology to diagnose, treat, and prevent injury and disease, typically through pharmaceuticals or surgery, but also through therapies as diverse as psychotherapy, external splints and traction, medical devices, biologics, and ionizing radiation, amongst others.
Medicine has been practiced since prehistoric times, and for most of this time it was an art (an area of creativity and skill), frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science (both basic and applied, under the umbrella of medical science). For example, while stitching technique for sutures is an art learned through practice, knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science.
Prescientific forms of medicine, now known as traditional medicine or folk medicine, remain commonly used alongside, or in place of, scientific medicine and are thus called alternative medicine. Alternative treatments outside of scientific medicine with ethical, safety, and efficacy concerns are termed quackery.
== Etymology ==
Medicine is the science and practice of the diagnosis, prognosis, treatment, and prevention of disease. The word "medicine" is derived from Latin medicus, meaning "a physician". The word "physic" itself, from which "physician" derives, was the old word for what is now called a medicine, and also the field of medicine.
== Clinical practice ==
Medical availability and clinical practice vary across the world due to regional differences in culture and technology. Modern scientific medicine is highly developed in the Western world, while in developing countries such as parts of Africa or Asia, the population may rely more heavily on traditional medicine with limited evidence and efficacy and no required formal training for practitioners.
In the developed world, evidence-based medicine is not universally used in clinical practice; for example, a 2007 survey of literature reviews found that about 49% of the interventions lacked sufficient evidence to support either benefit or harm.
In modern clinical practice, physicians and physician assistants personally assess patients to diagnose, prognose, treat, and prevent disease using clinical judgment. The doctor-patient relationship typically begins with an interaction with an examination of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices (e.g., stethoscope, tongue depressor) are typically used. After examining for signs and interviewing for symptoms, the doctor may order medical tests (e.g., blood tests), take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and specialists follow a similar process. The diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue.
The components of the medical interview and encounter are:
Chief complaint (CC): the reason for the current medical visit. These are the symptoms. They are in the patient's own words and are recorded along with the duration of each one. Also called chief concern or presenting complaint.
Current activity: occupation, hobbies, what the patient actually does.
Family history (FH): listing of diseases in the family that may impact the patient. A family tree is sometimes used.
History of present illness (HPI): the chronological order of events of symptoms and further clarification of each symptom. Distinguishable from history of previous illness, often called past medical history (PMH). Medical history comprises HPI and PMH.
Medications (Rx): what drugs the patient takes including prescribed, over-the-counter, and home remedies, as well as alternative and herbal medicines or remedies. Allergies are also recorded.
Past medical history (PMH/PMHx): concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies.
Review of systems (ROS) or systems inquiry: a set of additional questions to ask, which may be missed on HPI: a general enquiry (have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc.), followed by questions on the body's main organ systems (heart, lungs, digestive tract, urinary tract, etc.).
Social history (SH): birthplace, residences, marital history, social and economic status, habits (including diet, medications, tobacco, alcohol).
The physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. The healthcare provider uses sight, hearing, touch, and sometimes smell (e.g., in infection, uremia, diabetic ketoacidosis). Four actions are the basis of physical examination: inspection, palpation (feel), percussion (tap to determine resonance characteristics), and auscultation (listen), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments.
The clinical examination involves the study of:
Abdomen and rectum
Cardiovascular (heart and blood vessels)
General appearance of the patient and specific indicators of disease (nutritional status, presence of jaundice, pallor or clubbing)
Genitalia (and pregnancy if the patient is or could be pregnant)
Head, eye, ear, nose, and throat (HEENT)
Musculoskeletal (including spine and extremities)
Neurological (consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves)
Psychiatric (orientation, mental state, mood, evidence of abnormal perception or thought).
Respiratory (large airways and lungs)
Skin
Vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation
It is likely to focus on areas of interest highlighted in the medical history and may not include everything listed above.
The treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. A follow-up may be advised. Depending upon the health insurance plan and the managed care system, various forms of "utilization review", such as prior authorization of tests, may place barriers on accessing expensive services.
The medical decision-making (MDM) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses (the differential diagnoses), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient's problem.
On subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations.
== Institutions ==
Contemporary medicine is, in general, conducted within health care systems. Legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have a significant impact on the way medical care is provided.
From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system or compulsory private or cooperative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices, state-owned hospitals and clinics, or charities, most commonly a combination of all three.
Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those who can afford to pay for it, have self-insured it (either directly or as part of an employment contract), or may be covered by care financed directly by the government or tribe.
Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice of patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for its lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other.
The health professionals who provide care in medicine comprise multiple professions, such as medics, nurses, physiotherapists, and psychologists. These professions will have their own ethical standards, professional education, and bodies. The medical profession has been conceptualized from a sociological perspective.
=== Delivery ===
Provision of medical care is classified into primary, secondary, and tertiary care categories.
Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes.
Secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who required the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting.
Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc.
Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means.
In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain.
Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs.
== Branches ==
Working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. Examples include: nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, and bioengineers, medical physicists, surgeons, surgeon's assistant, surgical technologist.
The scope and sciences underpinning human medicine overlap many other fields. A patient admitted to the hospital is usually under the care of a specific team based on their main presenting problem, e.g., the cardiology team, who then may interact with other specialties, e.g., surgical, radiology, to help diagnose or treat the main problem or any subsequent complications/developments.
Physicians have many specializations and subspecializations into certain branches of medicine, which are listed below. There are variations from country to country regarding which specialties certain subspecialties are in.
The main branches of medicine are:
Basic sciences of medicine; this is what every physician is educated in, and some return to in biomedical research.
Interdisciplinary fields, where different medical specialties are mixed to function in certain occasions.
Medical specialties
=== Basic sciences ===
Anatomy is the study of the physical structure of organisms. In contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures.
Biochemistry is the study of the chemistry taking place in living organisms, especially the structure and function of their chemical components.
Biomechanics is the study of the structure and function of biological systems by means of the methods of Mechanics.
Biophysics is an interdisciplinary science that uses the methods of physics and physical chemistry to study biological systems.
Biostatistics is the application of statistics to biological fields in the broadest sense. A knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. It is also fundamental to epidemiology and evidence-based medicine.
Cytology is the microscopic study of individual cells.
Embryology is the study of the early development of organisms.
Endocrinology is the study of hormones and their effect throughout the body of animals.
Epidemiology is the study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics.
Genetics is the study of genes, and their role in biological inheritance.
Gynecology is the study of the female reproductive system.
Histology is the study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry.
Immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example.
Lifestyle medicine is the study of chronic conditions and how to prevent, treat, and reverse them.
Medical physics is the study of the applications of physics principles in medicine.
Microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses.
Molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material.
Neuroscience includes those disciplines of science that are related to the study of the nervous system. A main focus of neuroscience is the biology and physiology of the human brain and spinal cord. Some related clinical specialties include neurology, neurosurgery and psychiatry.
Nutrition science (theoretical focus) and dietetics (practical focus) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. Medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases.
Pathology as a science is the study of disease – the causes, course, progression and resolution thereof.
Pharmacology is the study of drugs and their actions.
Photobiology is the study of the interactions between non-ionizing radiation and living organisms.
Physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms.
Radiobiology is the study of the interactions between ionizing radiation and living organisms.
Toxicology is the study of hazardous effects of drugs and poisons.
=== Specialties ===
In the broadest meaning of "medicine", there are many different specialties. In the UK, most specialities have their own body or college, which has its own entrance examination. These are collectively known as the Royal Colleges, although not all currently use the term "Royal". The development of a speciality is often driven by new technology (such as the development of effective anaesthetics) or ways of working (such as emergency departments); the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination.
Within medical circles, specialities usually fit into one of two broad categories: "Medicine" and "Surgery". "Medicine" refers to the practice of non-operative medicine, and most of its subspecialties require preliminary training in Internal Medicine. In the UK, this was traditionally evidenced by passing the examination for the Membership of the Royal College of Physicians (MRCP) or the equivalent college in Scotland or Ireland. "Surgery" refers to the practice of operative medicine, and most subspecialties in this area require preliminary training in General Surgery, which in the UK leads to membership of the Royal College of Surgeons of England (MRCS). At present, some specialties of medicine do not fit easily into either of these categories, such as radiology, pathology, or anaesthesia. Most of these have branched from one or other of the two camps above; for example, anaesthesia developed first as a faculty of the Royal College of Surgeons (for which MRCS/FRCS would have been required) before becoming the Royal College of Anaesthetists, and membership of the college is attained by sitting the examination for the Fellowship of the Royal College of Anaesthetists (FRCA).
==== Surgical specialty ====
Surgery is an ancient medical specialty that uses operative manual and instrumental techniques on a patient to investigate or treat a pathological condition such as disease or injury, to help improve bodily function or appearance or to repair unwanted ruptured areas (for example, a perforated ear drum). Surgeons must also manage pre-operative, post-operative, and potential surgical candidates on the hospital wards. In some centers, anesthesiology is part of the division of surgery (for historical and logistical reasons), although it is not a surgical discipline. Other medical specialties may employ surgical procedures, such as ophthalmology and dermatology, but are not considered surgical sub-specialties per se.
Surgical training in the U.S. requires a minimum of five years of residency after medical school. Sub-specialties of surgery often require seven or more years. In addition, fellowships can last an additional one to three years. Because post-residency fellowships can be competitive, many trainees devote two additional years to research. Thus in some cases surgical training will not finish until more than a decade after medical school. Furthermore, surgical training can be very difficult and time-consuming.
Surgical subspecialties include those a physician may specialize in after undergoing general surgery residency training as well as several surgical fields with separate residency training. Surgical subspecialties that one may pursue following general surgery residency training:
Bariatric surgery
Cardiovascular surgery – may also be pursued through a separate cardiovascular surgery residency track
Colorectal surgery
Endocrine surgery
General surgery
Hand surgery
Hepatico-Pancreatico-Biliary Surgery
Minimally invasive surgery
Pediatric surgery
Plastic surgery – may also be pursued through a separate plastic surgery residency track
Surgical critical care
Surgical oncology
Transplant surgery
Trauma surgery
Vascular surgery – may also be pursued through a separate vascular surgery residency track
Other surgical specialties within medicine with their own individual residency training:
Dermatology
Neurosurgery
Ophthalmology
Oral and maxillofacial surgery
Orthopedic surgery
Otorhinolaryngology
Podiatric surgery – do not undergo medical school training, but rather separate training in podiatry school
Urology
==== Internal medicine specialty ====
Internal medicine is the medical specialty dealing with the prevention, diagnosis, and treatment of adult diseases. According to some sources, an emphasis on internal structures is implied. In North America, specialists in internal medicine are commonly called "internists". Elsewhere, especially in Commonwealth nations, such specialists are often called physicians. These terms, internist or physician (in the narrow sense, common outside North America), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities.
Because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. Formerly, many internists were not subspecialized; such general physicians would see any complex nonsurgical problem; this style of practice has become much less common. In modern urban practice, most internists are subspecialists: that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. For example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys.
In the Commonwealth of Nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians (or internists) who have subspecialized by age of patient rather than by organ system. Elsewhere, especially in North America, general pediatrics is often a form of primary care.
There are many subspecialities (or subdisciplines) of internal medicine.
Training in internal medicine (as opposed to surgical training) varies considerably across the world: see the articles on medical education for more details. In North America, it requires at least three years of residency training after medical school, which can then be followed by a one- to three-year fellowship in the subspecialties listed above. In general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the US. This difference does not apply in the UK, where all doctors are now required by law to work less than 48 hours per week on average.
==== Diagnostic specialties ====
Clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to the diagnosis and management of patients. In the United States, these services are supervised by a pathologist. The personnel who work in these medical laboratory departments are technically trained staff who usually hold an undergraduate medical technology degree rather than a medical degree; they perform the tests, assays, and procedures needed to provide the specific services. Subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology.
Clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. These kinds of tests can be divided into recordings of: (1) spontaneous or continuously running electrical activity, or (2) stimulus evoked responses. Subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. Sometimes these tests are performed by technicians without a medical degree, but their interpretation is done by a medical professional.
Diagnostic radiology is concerned with imaging of the body, e.g. by x-rays, x-ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. Interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling.
Nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances (radiopharmaceuticals) to the body, which can then be imaged outside the body by a gamma camera or a PET scanner. Each radiopharmaceutical consists of two parts: a tracer that is specific for the function under study (e.g., neurotransmitter pathway, metabolic pathway, blood flow, or other), and a radionuclide (usually either a gamma-emitter or a positron emitter). There is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the PET/CT scanner.
Pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. As a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence-based medicine. Many modern molecular tests such as flow cytometry, polymerase chain reaction (PCR), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization (FISH) fall within the territory of pathology.
==== Other major specialties ====
The following are some major medical specialties that do not directly fit into any of the above-mentioned groups:
Anesthesiology (also known as anaesthetics): concerned with the perioperative management of the surgical patient. The anesthesiologist's role during surgery is to prevent derangement of the vital organs' functions (i.e. brain, heart, kidneys) and to manage postoperative pain. Outside of the operating room, the anesthesiologist serves the same function in the labor and delivery ward, and some specialize in critical care medicine.
Emergency medicine is concerned with the diagnosis and treatment of acute or life-threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies.
Family medicine, family practice, general practice or primary care is, in many countries, the first port-of-call for patients with non-emergency medical problems. Family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care.
Medical genetics is concerned with the diagnosis and management of hereditary disorders.
Neurology is concerned with diseases of the nervous system. In the UK, neurology is a subspecialty of general medicine.
Obstetrics and gynecology (often abbreviated as OB/GYN (American English) or Obs & Gynae (British English)) are concerned respectively with childbirth and the female reproductive and associated organs. Reproductive medicine and fertility medicine are generally practiced by gynecological specialists.
Pediatrics (AE) or paediatrics (BE) is devoted to the care of infants, children, and adolescents. Like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery.
Pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health.
Physical medicine and rehabilitation (or physiatry) is concerned with functional improvement after injury, illness, or congenital disorders.
Podiatric medicine is the study, diagnosis, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back.
Preventive medicine is the branch of medicine concerned with preventing disease.
Community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis.
Psychiatry is the branch of medicine concerned with the bio-psycho-social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. Related fields include psychotherapy and clinical psychology.
=== Interdisciplinary fields ===
Some interdisciplinary sub-specialties of medicine include:
Addiction medicine deals with the treatment of addiction.
Aerospace medicine deals with medical problems related to flying and space travel.
Biomedical engineering is a field dealing with the application of engineering principles to medical practice.
Clinical pharmacology is concerned with how systems of therapeutics interact with patients.
Conservation medicine studies the relationship between human and non-human animal health, and environmental conditions. Also known as ecological medicine, environmental medicine, or medical geology.
Disaster medicine deals with medical aspects of emergency preparedness, disaster mitigation and management.
Diving medicine (or hyperbaric medicine) is the prevention and treatment of diving-related problems.
Evolutionary medicine is a perspective on medicine derived through applying evolutionary theory.
Forensic medicine deals with medical questions in a legal context, such as determination of the time and cause of death, the type of weapon used to inflict trauma, and reconstruction of facial features from the remains of the deceased (skull), thus aiding identification.
Gender-based medicine studies the biological and physiological differences between the human sexes and how that affects differences in disease.
Health informatics is a relatively recent field that deals with the application of computers and information technology to medicine.
Hospice and Palliative Medicine is a relatively modern branch of clinical medicine that deals with pain and symptom relief and emotional support in patients with terminal illnesses including cancer and heart failure.
Hospital medicine is the general medical care of hospitalized patients. Physicians whose primary professional focus is hospital medicine are called hospitalists in the United States and Canada. The term Most Responsible Physician (MRP) or attending physician is also used interchangeably to describe this role.
Laser medicine involves the use of lasers in the diagnostics or treatment of various conditions.
Many other health science fields, e.g. dietetics
Medical ethics deals with ethical and moral principles that apply values and judgments to the practice of medicine.
Medical humanities includes the humanities (literature, philosophy, ethics, history and religion), social science (anthropology, cultural studies, psychology, sociology), and the arts (literature, theater, film, and visual arts) and their application to medical education and practice.
Nosokinetics is the science/subject of measuring and modelling the process of care in health and social care systems.
Nosology is the classification of diseases for various purposes.
Occupational medicine is the provision of health advice to organizations and individuals to ensure that the highest standards of health and safety at work can be achieved and maintained.
Pain management (also called pain medicine, or algiatry) is the medical discipline concerned with the relief of pain.
Pharmacogenomics is a form of individualized medicine.
Podiatric medicine is the study, diagnosis, and medical treatment of disorders of the foot, ankle, lower limb, hip and lower back.
Sexual medicine is concerned with diagnosing, assessing and treating all disorders related to sexuality.
Sports medicine deals with the treatment, prevention, and rehabilitation of sports and exercise injuries such as muscle spasms, muscle tears, and injuries to ligaments (ligament tears or ruptures), and their repair in athletes, both amateur and professional.
Therapeutics is the field, more commonly referenced in earlier periods of history, of the various remedies that can be used to treat disease and promote health.
Travel medicine or emporiatrics deals with health problems of international travelers or travelers across highly different environments.
Tropical medicine deals with the prevention and treatment of tropical diseases. It is studied separately in temperate climates, where those diseases are unfamiliar to medical practitioners and fall outside their local clinical needs.
Urgent care focuses on delivery of unscheduled, walk-in care outside of the hospital emergency department for injuries and illnesses that are not severe enough to require care in an emergency department. In some jurisdictions this function is combined with the emergency department.
Veterinary medicine; veterinarians apply similar techniques as physicians to the care of non-human animals.
Wilderness medicine entails the practice of medicine in the wild, where conventional medical facilities may not be available.
== Education and legal controls ==
Medical education and training varies around the world. It typically involves entry level education at a university medical school, followed by a period of supervised practice or internship, or residency. This can be followed by postgraduate vocational training. A variety of teaching methods have been employed in medical education, still itself a focus of active research. In Canada and the United States of America, a Doctor of Medicine degree, often abbreviated M.D., or a Doctor of Osteopathic Medicine degree, often abbreviated as D.O. and unique to the United States, must be completed in and delivered from a recognized university.
Since knowledge, techniques, and medical technology continue to evolve at a rapid rate, many regulatory authorities require continuing medical education. Medical practitioners upgrade their knowledge in various ways, including medical journals, seminars, conferences, and online programs. A database of objectives covering medical knowledge, as suggested by national societies across the United States, can be searched at http://data.medobjectives.marian.edu/ Archived 4 October 2018 at the Wayback Machine.
In most countries, it is a legal requirement for a medical doctor to be licensed or registered. In general, this entails a medical degree from a university and accreditation by a medical board or an equivalent national organization, which may ask the applicant to pass exams. This restricts the considerable legal authority of the medical profession to physicians that are trained and qualified by national standards. It is also intended as an assurance to patients and as a safeguard against charlatans that practice inadequate medicine for personal gain. While the laws generally require medical doctors to be trained in "evidence based", Western, or Hippocratic Medicine, they are not intended to discourage different paradigms of health.
In the European Union, the profession of doctor of medicine is regulated. A profession is said to be regulated when access and exercise is subject to the possession of a specific professional qualification. The regulated professions database contains a list of regulated professions for doctor of medicine in the EU member states, EEA countries and Switzerland. This list is covered by the Directive 2005/36/EC.
Doctors who are negligent or intentionally harmful in their care of patients can face charges of medical malpractice and be subject to civil, criminal, or professional sanctions.
== Medical ethics ==
Medical ethics is a system of moral principles that apply values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology. Six of the values that commonly apply to medical ethics discussions are:
autonomy – the patient has the right to refuse or choose their treatment. (Latin: Voluntas aegroti suprema lex.)
beneficence – a practitioner should act in the best interest of the patient. (Latin: Salus aegroti suprema lex.)
justice – concerns the distribution of scarce health resources, and the decision of who gets what treatment (fairness and equality).
non-maleficence – "first, do no harm" (Latin: primum non nocere).
respect for persons – the patient (and the person treating the patient) have the right to be treated with dignity.
truthfulness and honesty – the concept of informed consent has increased in importance since the historical events of the Doctors' Trial of the Nuremberg trials, Tuskegee syphilis experiment, and others.
Values such as these do not give answers as to how to handle a particular situation, but provide a useful framework for understanding conflicts. When moral values are in conflict, the result may be an ethical dilemma or crisis. Sometimes, no good solution to a dilemma in medical ethics exists, and occasionally, the values of the medical community (i.e., the hospital and its staff) conflict with the values of the individual patient, family, or larger non-medical community. Conflicts can also arise between health care providers, or among family members. For example, some argue that the principles of autonomy and beneficence clash when patients refuse blood transfusions that physicians consider life-saving; and truth-telling was not emphasized to a large extent before the HIV era.
== History ==
=== Ancient world ===
Prehistoric medicine incorporated plants (herbalism), animal parts, and minerals. In many cases these materials were used ritually as magical substances by priests, shamans, or medicine men. Well-known spiritual systems include animism (the notion of inanimate objects having spirits), spiritualism (an appeal to gods or communion with ancestor spirits); shamanism (the vesting of an individual with mystic powers); and divination (magically obtaining the truth). The field of medical anthropology examines the ways in which culture and society are organized around or impacted by issues of health, health care and related issues.
The earliest known medical texts in the world were found in the ancient Syrian city of Ebla and date back to 2500 BCE. Other early records on medicine have been discovered from ancient Egyptian medicine, Babylonian medicine, Ayurvedic medicine (in the Indian subcontinent), classical Chinese medicine (predecessor to the modern traditional Chinese medicine), and ancient Greek medicine and Roman medicine.
In Egypt, Imhotep (3rd millennium BCE) is the first physician in history known by name. The oldest Egyptian medical text is the Kahun Gynaecological Papyrus from around 2000 BCE, which describes gynaecological diseases. The Edwin Smith Papyrus dating back to 1600 BCE is an early work on surgery, while the Ebers Papyrus dating back to 1500 BCE is akin to a textbook on medicine.
In China, archaeological evidence of medicine in Chinese dates back to the Bronze Age Shang dynasty, based on seeds for herbalism and tools presumed to have been used for surgery. The Huangdi Neijing, the progenitor of Chinese medicine, is a medical text written beginning in the 2nd century BCE and compiled in the 3rd century.
In India, the surgeon Sushruta described numerous surgical operations, including the earliest forms of plastic surgery. The earliest records of dedicated hospitals come from Mihintale in Sri Lanka, where evidence of dedicated medicinal treatment facilities for patients has been found.
In Greece, the ancient Greek physician Hippocrates, the "father of modern medicine", laid the foundation for a rational approach to medicine. Hippocrates introduced the Hippocratic Oath for physicians, which is still relevant and in use today, and was the first to categorize illnesses as acute, chronic, endemic and epidemic, and use terms such as, "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence". The Greek physician Galen was also one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgeries. After the fall of the Western Roman Empire and the onset of the Early Middle Ages, the Greek tradition of medicine went into decline in Western Europe, although it continued uninterrupted in the Eastern Roman (Byzantine) Empire.
Most of our knowledge of ancient Hebrew medicine during the 1st millennium BC comes from the Torah, i.e. the Five Books of Moses, which contain various health related laws and rituals. The Hebrew contribution to the development of modern medicine started in the Byzantine Era, with the physician Asaph the Jew.
=== Middle Ages ===
The concept of the hospital as an institution offering medical care and the possibility of a cure for patients, arising from the ideals of Christian charity rather than merely serving as a place to die, appeared in the Byzantine Empire.
Although the concept of uroscopy was known to Galen, he did not see the importance of using it to localize the disease. It was under the Byzantines, with physicians such as Theophilus Protospatharius, that the potential of uroscopy to determine disease was realized, in a time when no microscope or stethoscope existed. That practice eventually spread to the rest of Europe.
After 750 CE, the Muslim world had the works of Hippocrates, Galen and Sushruta translated into Arabic, and Islamic physicians engaged in some significant medical research. Notable Islamic medical pioneers include the Persian polymath, Avicenna, who, along with Imhotep and Hippocrates, has also been called the "father of medicine". He wrote The Canon of Medicine which became a standard medical text at many medieval European universities, considered one of the most famous books in the history of medicine. Others include Abulcasis, Avenzoar, Ibn al-Nafis, and Averroes. Persian physician Rhazes was one of the first to question the Greek theory of humorism, which nevertheless remained influential in both medieval Western and medieval Islamic medicine. Some volumes of Rhazes's work Al-Mansuri, namely "On Surgery" and "A General Book on Therapy", became part of the medical curriculum in European universities. Additionally, he has been described as a doctor's doctor, the father of pediatrics, and a pioneer of ophthalmology. For example, he was the first to recognize the reaction of the eye's pupil to light. The Persian Bimaristan hospitals were an early example of public hospitals.
In Europe, Charlemagne decreed that a hospital should be attached to each cathedral and monastery, and the historian Geoffrey Blainey likened the activities of the Catholic Church in health care during the Middle Ages to an early version of a welfare state: "It conducted hospitals for the old and orphanages for the young; hospices for the sick of all ages; places for the lepers; and hostels or inns where pilgrims could buy a cheap bed and meal". It supplied food to the population during famine and distributed food to the poor. The church funded this welfare system by collecting taxes on a large scale and by holding large farmlands and estates. The Benedictine order was noted for setting up hospitals and infirmaries in their monasteries, growing medical herbs and becoming the chief medical care givers of their districts, as at the great Abbey of Cluny. The Church also established a network of cathedral schools and universities where medicine was studied. The Schola Medica Salernitana in Salerno, looking to the learning of Greek and Arab physicians, grew to be the finest medical school in medieval Europe.
However, the fourteenth and fifteenth century Black Death devastated both the Middle East and Europe, and it has even been argued that Western Europe was generally more effective in recovering from the pandemic than the Middle East. In the early modern period, important early figures in medicine and anatomy emerged in Europe, including Gabriele Falloppio and William Harvey.
The major shift in medical thinking was the gradual rejection, especially during the Black Death in the 14th and 15th centuries, of what may be called the "traditional authority" approach to science and medicine. This was the notion that because some prominent person in the past said something must be so, then that was the way it was, and anything one observed to the contrary was an anomaly (which was paralleled by a similar shift in European society in general – see Copernicus's rejection of Ptolemy's theories on astronomy). Physicians like Vesalius improved upon or disproved some of the theories from the past. The main tomes used both by medicine students and expert physicians were Materia Medica and Pharmacopoeia.
Andreas Vesalius was the author of De humani corporis fabrica, an important book on human anatomy. Bacteria and microorganisms were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field microbiology. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. Later this was described by Renaldus Columbus and Andrea Cesalpino. Herman Boerhaave is sometimes referred to as a "father of physiology" due to his exemplary teaching in Leiden and textbook 'Institutiones medicae' (1708). Pierre Fauchard has been called "the father of modern dentistry".
=== Modern ===
Veterinary medicine was, for the first time, truly separated from human medicine in 1761, when the French veterinarian Claude Bourgelat founded the world's first veterinary school in Lyon, France. Before this, medical doctors treated both humans and other animals.
Modern scientific biomedical research (where results are testable and reproducible) began to replace early Western traditions based on herbalism, the Greek "four humours" and other such pre-modern notions. The modern era really began with Edward Jenner's discovery of the smallpox vaccine at the end of the 18th century (inspired by the method of variolation originated in ancient China), Robert Koch's discoveries around 1880 of the transmission of disease by bacteria, and then the discovery of antibiotics around 1900.
The post-18th century modernity period brought more groundbreaking researchers from Europe. From Germany and Austria, doctors Rudolf Virchow, Wilhelm Conrad Röntgen, Karl Landsteiner and Otto Loewi made notable contributions. In the United Kingdom, Alexander Fleming, Joseph Lister, Francis Crick and Florence Nightingale are considered important. Spanish doctor Santiago Ramón y Cajal is considered the father of modern neuroscience.
From New Zealand and Australia came Maurice Wilkins, Howard Florey, and Frank Macfarlane Burnet.
Others that did significant work include William Williams Keen, William Coley, James D. Watson (United States); Salvador Luria (Italy); Alexandre Yersin (Switzerland); Kitasato Shibasaburō (Japan); Jean-Martin Charcot, Claude Bernard, Paul Broca (France); Adolfo Lutz (Brazil); Nikolai Korotkov (Russia); Sir William Osler (Canada); and Harvey Cushing (United States).
As science and technology developed, medicine became more reliant upon medications. Throughout history and in Europe right until the late 18th century, not only plant products were used as medicine, but also animal (including human) body parts and fluids. Pharmacology developed in part from herbalism and some drugs are still derived from plants (atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc.). Vaccines were discovered by Edward Jenner and Louis Pasteur.
The first antibiotic was arsphenamine (Salvarsan) discovered by Paul Ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. The first major class of antibiotics was the sulfa drugs, derived by German chemists originally from azo dyes.
Pharmacology has become increasingly sophisticated; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side-effects. Genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision-making.
Evidence-based medicine is a contemporary movement to establish the most effective algorithms of practice (ways of doing things) through the use of systematic reviews and meta-analysis. The movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. The Cochrane Collaboration leads this movement. A 2001 review of 160 Cochrane systematic reviews revealed that, according to two readers, 21.3% of the reviews concluded that the evidence was insufficient, 20% concluded evidence of no effect, and 22.5% concluded a positive effect.
== Quality, efficiency, and access ==
Evidence-based medicine, prevention of medical error (and other "iatrogenesis"), and avoidance of unnecessary health care are priorities in modern medical systems. These topics generate significant political and public policy attention, particularly in the United States, where healthcare is regarded as excessively costly but population health metrics lag behind those of similar nations.
Globally, many developing countries lack access to care and access to medicines. As of 2015, most wealthy developed countries provide health care to all citizens, with a few exceptions such as the United States where lack of health insurance coverage may limit access.
== See also ==
== Notes ==
== References ==
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.).
In many languages, the term technical physics is also used.
The term has been used since 1861 by the German physics teacher J. Frick in his publications.
== Terminology ==
In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, the former specializes in nuclear power research (i.e. nuclear engineering), while the latter is closer to engineering physics proper.
In some universities and their institutions, an engineering physics (or applied physics) major is a discipline or specialization within the scope of engineering science, or applied science.
Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called or contain the phrase "physical technologies" or "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering".
== Expertise ==
Unlike traditional engineering disciplines, engineering science or engineering physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science or engineering physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis.
== Degrees ==
In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics.
== Awards ==
There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field.
== See also ==
Applied physics
Engineering
Engineering science and mechanics
Environmental engineering science
Index of engineering science and mechanics articles
Industrial engineering
== Notes and references ==
== External links ==
"Engineering Physics at Xavier"
"The Engineering Physicist Profession"
"Engineering Physicist Professional Profile"
Society of Engineering Science Inc. Archived 2017-08-07 at the Wayback Machine
In organic chemistry, a vinyl iodide (also known as an iodoalkene) functional group is an alkene bearing one or more iodide substituents. Vinyl iodides are versatile molecules that serve as important building blocks and precursors in organic synthesis. They are commonly used in carbon–carbon bond-forming, transition-metal-catalyzed cross-coupling reactions, such as the Stille reaction, Heck reaction, Sonogashira coupling, and Suzuki coupling. The synthesis of vinyl iodides with well-defined geometry is important in the stereoselective synthesis of natural products and drugs.
== Properties ==
Vinyl iodides are generally stable under nucleophilic conditions. In SN2 reactions, backside attack is difficult because of steric clash with the R groups on the carbon adjacent to the electrophilic center (see figure 1a). In addition, a lone pair on iodine donates into the π* orbital of the alkene, which reduces the electrophilic character of the carbon by decreasing its positive charge. This stereoelectronic effect also strengthens the C–I bond, making removal of the iodide difficult (see figure 1b). In the SN1 case, dissociation is difficult because of the strengthened C–I bond, and loss of the iodide would generate an unstable carbocation (see figure 1c).
In cross-coupling reactions, vinyl iodides typically react faster and under milder conditions than vinyl chlorides and vinyl bromides. This order of reactivity reflects the strength of the carbon–halogen bond: the C–I bond is the weakest, with a bond dissociation energy of 57.6 kcal/mol, while those of the C–F, C–Cl, and C–Br bonds are 115, 83.7, and 72.1 kcal/mol, respectively. Because of this weaker bond, vinyl iodide does not polymerize as easily as its vinyl halide counterparts, but rather decomposes and releases iodide.
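The reactivity ordering described above can be sketched as a quick check using only the bond dissociation energies quoted in the text (the ranking itself is an illustration, not from the source):

```python
# Carbon-halogen bond dissociation energies (kcal/mol), as quoted above.
bde_kcal_mol = {"C-F": 115.0, "C-Cl": 83.7, "C-Br": 72.1, "C-I": 57.6}

# Cross-coupling reactivity tracks inverse bond strength: the weaker the
# C-X bond, the easier the oxidative addition, so iodides react fastest.
reactivity_order = sorted(bde_kcal_mol, key=bde_kcal_mol.get)
print(reactivity_order)  # ['C-I', 'C-Br', 'C-Cl', 'C-F']
```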
It is generally believed that vinyl iodides cannot survive common reduction conditions, which reduce the vinyl iodide to an olefin or saturated alkane. However, there is evidence in the literature of a propargyl alcohol's alkyne being reduced in the presence of a vinyl iodide using hydrogen over Pd/CaCO3 or Crabtree's catalyst.
== Other applications ==
Besides serving as useful substrates in transition-metal cross-coupling reactions, vinyl iodides can also undergo elimination with a strong base to give the corresponding alkyne, and they can be converted to vinyl Grignard reagents. Vinyl iodides are converted to Grignard reagents by magnesium–halogen exchange (see Scheme 1a). The scope of this synthetic method is limited, since it requires higher temperatures and longer reaction times, which affects functional group tolerance. However, an electron-withdrawing group on the vinyl iodide can enhance the rate of exchange (see Scheme 1b). Addition of lithium chloride also enhances magnesium–halogen exchange (see Scheme 1c); lithium chloride is thought to break up aggregates in organomagnesium reagents.
== Methods of synthesis ==
Vinyl iodides are synthesized by methods such as iodination and substitution reactions. Vinyl iodides with well-defined geometry (regiochemistry and stereochemistry) are important in synthesis, since many natural products and drugs have specific structures and dimensions. An example of regiochemistry is whether the iodide is positioned at the alpha or beta position of the olefin. Stereochemistry, such as E-Z or cis-trans alkene geometry, is important since some transition-metal cross-coupling reactions, such as the Suzuki coupling, can retain olefin geometry. In synthesis, it is useful to introduce the vinyl iodide at various positions to set up a coupling reaction in the next synthetic step. Below are various methods for introducing and synthesizing vinyl iodides.
=== Synthesis from alkynes ===
The most common and simplest approach to making a vinyl iodide is addition of one equivalent of HI to an alkyne. This generally gives 2-iodo-1-alkenes, or α-vinyl iodides, by Markovnikov's rule. However, this reaction proceeds neither at a good rate nor with very high stereoselectivity. As a result, most synthetic methods involve a hydrometalation step before addition of an I+ source.
==== α-vinyl iodides ====
Introducing an α-vinyl iodide at the terminal position of an alkyne is a difficult step. In addition, the vinyl metal intermediate (for example, a vinyl aluminum) can be mildly nucleophilic and form C–C bonds under catalytic conditions. However, the Hoveyda group has demonstrated that a nickel-based catalyst (Ni(dppp)Cl2) with DIBAL-H and N-iodosuccinimide (NIS) selectively favors the α-vinyl iodide with little to no byproducts. They also observed reverse (β) selectivity with Ni(PPh3)2Cl2 in their hydroalumination reactions under the same conditions, again with little or no byproducts. The advantages of this method are that it is inexpensive (the reagents are commercially available), scalable, and a one-pot reaction.
Another method, developed by Ogawa's group, avoids hydrometalation altogether and instead uses hydroiodation with an I2/hydrophosphine binary system.
The hydroiodation gives the Markovnikov-type adduct; no reaction is observed without addition of the hydrophosphine. In the plausible mechanism proposed by Ogawa's group, the hydrophosphine reacts with HI to form an intermediate complex that coordinates HI to carry out Markovnikov hydroiodation of the alkyne. The advantages of this system are that the conditions are mild and tolerate a wide range of functional groups.
==== β-vinyl iodides ====
There are generally more methods for making β-vinyl iodides than α-vinyl iodides using hydrometalation (with aluminum via DIBAL-H (hydroalumination), with boron (hydroboration), or with HZrCp2Cl (hydrozirconation)). However, hydrometalation of alkynes bearing various functional groups often proceeds poorly, giving side products. The Chong group has demonstrated hydrostannation using Bu3SnH with a palladium catalyst, with high E stereoselectivity. They observed that sterically bulky ligands gave higher regioselectivity for the β-vinyl iodide. The advantage of this technique is that it tolerates a wide range of functional groups.
Z-selective β-vinyl iodides are somewhat more difficult to introduce than E-β-vinyl iodides, often requiring more than one step. Hydroalumination and hydroboration usually proceed in a syn fashion and therefore selectively favor the E geometry. The Oshima group has demonstrated that hydroindation with HInCl selectively favors the Z geometry. They suggested that the reaction proceeds by a radical mechanism, in which HInCl adds to the alkyne by radical addition in the Z geometry. The product does not isomerize to the E geometry because of the low reactivity of the InCl2 radical with the intermediate complex (no second addition); if a second addition did occur, isomerization would proceed through a diindium intermediate. They confirmed the radical mechanism in a mechanistic study involving alkyne and alkene cyclization.
=== Substitution ===
Substitution is perhaps the most useful method of introducing a vinyl iodide into a molecule. Halogen exchange can be useful since vinyl iodides are more reactive than other vinyl halides. The Buchwald group demonstrated a halogen exchange from vinyl bromide to vinyl iodide with a copper catalyst under mild conditions. This method may tolerate various functional groups, since the conditions were initially tested on aryl halides. The scope of this exchange with respect to regiochemistry and stereochemistry is currently unexplored.
Halogen exchange can also be done with zirconium derivatives that retain the olefin's geometry.
The Marek group has further investigated the use of a zirconium catalyst on E- or Z-vinyl ethers, a reaction selective for the E product. Zirconium's oxophilic nature allows elimination of the alkoxy group at the β position to form an intermediate vinyl zirconium complex. The E selectivity is caused not by sterics but by the fact that the reaction is not concerted. In a mechanistic study, they observed isomerization, which suggests the E-geometry product is more favored than the Z. The difference between the halogen exchange and the E-vinyl ether reaction is that isomerization is observed only in the presence of an oxonium intermediate.
An interesting substitution reaction is the conversion of a vinyl boronic acid to a vinyl iodide, reported by Brown's group. Depending on the order of addition of iodide and base, the vinyl borate can yield different stereoisomers of the vinyl iodide (see scheme 2a). The Whiting group, however, found that Brown's method was not applicable to more sterically hindered boronic esters (no reaction). They proposed that the iodide source was not electropositive enough, so they used ICl, which is more polar than I2, and observed similar results (see scheme 2b).
Radical substitution of a carboxylic acid by iodide is demonstrated by a modified Hunsdiecker reaction. Homolytic cleavage of the O–I bond generates CO2 and a vinyl radical, which recombines with an iodine radical to form the vinyl iodide.
==== Iododesilylation ====
Iododesilylation is a substitution reaction that replaces a silyl group with iodide. Its advantages are that it avoids toxic tin reagents and that the intermediate vinyl silanes are stable, nontoxic, and easily handled and stored. Vinyl silanes can be made from terminal alkynes or by other methods.
Kishi's group reported a mild preparation of vinyl iodides from vinyl silanes using NIS in a mixture of acetonitrile and chloroacetonitrile. They observed retention of olefin geometry in some vinyl silane substrates but inversion in others, and reasoned that the size of the R group affects the geometry of the olefin. If the R group is small, the solvent acetonitrile can participate in the reaction, leading to inversion of the olefin's geometry; if the R group is large, the solvent cannot participate, leading to retention.
Zakarian's group then ran the reaction in HFIP, which gave high retention of olefin geometry. They reasoned that HFIP, unlike acetonitrile, is a solvent of low nucleophilicity. In addition, they observed an accelerated reaction rate because HFIP activates NIS by hydrogen bonding.
Unfortunately, iododesilylation under the above conditions can yield multiple byproducts in highly functionalized molecules bearing oxygen functional groups. Vilarrasa and Costa's group hypothesized that radical reactions producing HI and I2 facilitate cleavage of alcohol protecting groups and may add across other alkene bonds. They experimented with silver additives such as silver acetate and silver carbonate, in which the silver reacts with the excess iodide to form silver iodide, and achieved better conversions under these conditions.
=== Name reactions ===
Some famous vinyl iodide syntheses involve conversion of an aldehyde or ketone to a vinyl iodide. Barton's hydrazone iodination method involves addition of a hydrazine to an aldehyde or ketone to form a hydrazone, which is then converted to the vinyl iodide by addition of iodide and DBU. This method has been used in the natural product syntheses of Taxol by Danishefsky and Cortistatin A by Shair.
Another method is the Takai olefination, which uses iodoform and chromium(II) chloride to make a vinyl iodide from an aldehyde with high stereoselectivity for the E geometry. For high Z stereoselectivity, the Stork–Zhao olefination proceeds by a Wittig-like reaction; high yields and Z stereoselectivity are obtained at low temperature and in the presence of HMPA.
Below is an example employing both the Takai olefination and the Stork–Zhao olefination in the total synthesis of (+)-3-(E)- and (+)-3-(Z)-Pinnatifidenyne.
=== Elimination method ===
Vinyl iodides are rarely made by an elimination reaction of a vicinal diiodide, because it tends to decompose to the alkene and iodine. The Baker group has shown that elimination can occur via decarboxylation.
== See also ==
List of functional groups
Group contribution method
== References == | Wikipedia/Vinyl_iodide_functional_group |
2-Aminoethoxydiphenyl borate (2-APB) is a chemical that acts to inhibit both IP3 receptors and TRP channels (although it activates TRPV1, TRPV2, and TRPV3 at higher concentrations). In research it is used to manipulate intracellular release of calcium ions (Ca2+) and to modify TRP channel activity, although its lack of specificity makes it less than ideal under some circumstances. Additionally, there is evidence that 2-APB acts directly to inhibit gap junctions made of connexin. Increasing evidence shows that 2-APB is a powerful modifier of store-operated calcium channel (SOC) function: low concentrations of 2-APB enhance SOC activity, while high concentrations induce a transient increase followed by complete inhibition.
== References == | Wikipedia/2-Aminoethoxydiphenyl_borate |
A perchlorate is a chemical compound containing the perchlorate ion, ClO−4, the conjugate base of perchloric acid (ionic perchlorate). As counterions, there can be metal cations, quaternary ammonium cations or other ions, for example, nitronium cation (NO+2).
The term perchlorate can also describe perchlorate esters or covalent perchlorates. These are organic compounds that are alkyl or aryl esters of perchloric acid. They are characterized by a covalent bond between an oxygen atom of the ClO4 moiety and an organyl group.
In most ionic perchlorates, the cation is non-coordinating. The majority of ionic perchlorates are commercially produced salts commonly used as oxidizers for pyrotechnic devices and for their ability to control static electricity in food packaging. Additionally, they have been used in rocket propellants, fertilizers, and as bleaching agents in the paper and textile industries.
Perchlorate contamination of food and water endangers human health, primarily affecting the thyroid gland.
Ionic perchlorates are typically colorless solids that exhibit good solubility in water, dissociating into the perchlorate ion and the corresponding cation when dissolved. Many perchlorate salts also exhibit good solubility in non-aqueous solvents. Four perchlorates are of primary commercial interest: ammonium perchlorate (NH4)ClO4, perchloric acid HClO4, potassium perchlorate KClO4 and sodium perchlorate NaClO4.
== Production ==
Very few chemical oxidants are strong enough to convert chlorate to perchlorate. Persulfate, ozone, or lead dioxide are all known to do so, but the reactions are too delicate and low-yielding for commercial viability.
Perchlorate salts are typically manufactured through the process of electrolysis, which involves oxidizing aqueous solutions of corresponding chlorates. This technique is commonly employed in the production of sodium perchlorate, which finds widespread use as a key ingredient in rocket fuel. Perchlorate salts are also commonly produced by reacting perchloric acid with bases, such as ammonium hydroxide or sodium hydroxide. Ammonium perchlorate, which is highly valued, can also be produced via an electrochemical process.
Perchlorate esters are formed in the presence of a nucleophilic catalyst via a perchlorate salt's nucleophilic substitution onto an alkylating agent.
== Uses ==
The dominant use of perchlorates is as oxidizers in propellants for rockets, fireworks and highway flares. Of particular value is ammonium perchlorate composite propellant as a component of solid rocket fuel. In a related but smaller application, perchlorates are used extensively within the pyrotechnics industry and in certain munitions and for the manufacture of matches. Martian perchlorates might also be used to produce fuel on that planet.
Perchlorate is used to control static electricity in food packaging. Sprayed onto containers, it stops statically charged food from clinging to plastic or paper/cardboard surfaces.
Niche uses include lithium perchlorate, which decomposes exothermically to produce oxygen, useful in oxygen "candles" on spacecraft, submarines, and in other situations where a reliable backup oxygen supply is needed.
Potassium perchlorate has, in the past, been used therapeutically to help manage Graves' disease. It impedes production of the thyroid hormones that contain iodine.
As perchlorate is generally a non-complexing anion and its sodium salt is particularly soluble, it is commonly used as a background, or supporting, electrolyte in solution chemistry, electrophoresis, and electrochemistry. Although used as a powerful oxidizer in propulsive powders and explosives, the perchlorate anion is, perhaps surprisingly, a weak oxidant in aqueous solution because of kinetic limitations that severely hinder electron transfer.
== Chemical properties ==
The perchlorate ion is the least redox reactive of the generalized chlorates. Perchlorate contains chlorine in its highest oxidation number (+7). A table of reduction potentials of the four chlorates shows that, contrary to expectation, perchlorate in aqueous solution is the weakest oxidant among the four.
These data show that the perchlorate and chlorate are stronger oxidizers in acidic conditions than in basic conditions.
Gas phase measurements of heats of reaction (which allow computation of ΔfH°) of various chlorine oxides do follow the expected trend wherein Cl2O7 exhibits the largest endothermic value of ΔfH° (238.1 kJ/mol) while Cl2O exhibits the lowest endothermic value of ΔfH° (80.3 kJ/mol).
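As a quick sanity check on the two ΔfH° figures quoted above (only those two values come from the text; the comparison itself is an illustration):

```python
# Gas-phase standard enthalpies of formation (kJ/mol) quoted in the text
# for the two extremes of the chlorine oxide series.
dHf_kJ_mol = {"Cl2O": 80.3, "Cl2O7": 238.1}

# Both values are endothermic (positive), and dHf grows with oxygen
# content, consistent with the expected trend across the series.
assert all(v > 0 for v in dHf_kJ_mol.values())
assert dHf_kJ_mol["Cl2O7"] > dHf_kJ_mol["Cl2O"]
print(dHf_kJ_mol["Cl2O7"] - dHf_kJ_mol["Cl2O"])  # 157.8 kJ/mol spread
```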
=== Weak base and weak coordinating anion ===
As perchloric acid is one of the strongest mineral acids, perchlorate is a very weak base in the sense of Brønsted–Lowry acid–base theory.
As it is also generally a weakly coordinating anion, perchlorate is commonly used as a background, or supporting, electrolyte.
=== Weak oxidant in aqueous solution due to kinetic limitations ===
Perchlorate compounds oxidize organic compounds, especially when the mixture is heated. The explosive decomposition of ammonium perchlorate is catalyzed by metals and heat.
As perchlorate is a weak Lewis base (i.e., a weak electron-pair donor) and a weak nucleophilic anion, it is also a very weakly coordinating anion. This is why it is often used as a supporting electrolyte to study the complexation and chemical speciation of many cations in aqueous solution, or in electroanalytical methods (voltammetry, electrophoresis…). Although the reduction of perchlorate is thermodynamically favorable (∆G < 0; E° > 0) and ClO−4 is thus expected to be a strong oxidant, in aqueous solution it is most often practically inert, behaving as an extremely slow oxidant because of severe kinetic limitations. The metastable character of perchlorate in the presence of reducing cations such as Fe2+ in solution is due to the difficulty of forming an activated complex that facilitates electron transfer and the exchange of oxo groups in the opposite direction. These strongly hydrated cations cannot form a sufficiently stable coordination bridge with one of the four oxo groups of the perchlorate anion. Although thermodynamically a mild reductant, the Fe2+ ion exhibits a stronger tendency to remain coordinated by water molecules as the corresponding hexa-aquo complex in solution. The high activation energy required for the cation to bind perchlorate and form a transient inner-sphere complex more favourable to electron transfer considerably hinders the redox reaction. The redox reaction rate is limited by the formation of a favorable activated complex involving an oxo bridge between the perchlorate anion and the metallic cation. It depends on the molecular orbital rearrangement (HOMO and LUMO orbitals) necessary for a fast oxygen atom transfer (OAT) and the associated electron transfer, as studied experimentally by Henry Taube (1983 Nobel Prize in Chemistry) and theoretically by Rudolph A. Marcus (1992 Nobel Prize in Chemistry), both awarded for their respective work on the mechanisms of electron-transfer reactions with metal complexes and in chemical systems.
In contrast to Fe2+ cations, which remain unoxidized in deaerated perchlorate aqueous solutions free of dissolved oxygen, other cations such as Ru(II) and Ti(III) can form a more stable bridge between the metal centre and one of the oxo groups of ClO−4. For perchlorate reduction to occur via the inner-sphere electron transfer mechanism, the ClO−4 anion must quickly transfer an oxygen atom to the reducing cation. When this is the case, metallic cations can readily reduce perchlorate in solution: Ru(II) can reduce ClO−4 to ClO−3, while V(II), V(III), Mo(III), Cr(II) and Ti(III) can reduce ClO−4 to Cl−.
Some metal complexes, especially those of rhenium, and some metalloenzymes can catalyze the reduction of perchlorate under mild conditions. Perchlorate reductase (see below), a molybdoenzyme, also catalyzes the reduction of perchlorate. Both the Re- and Mo-based catalysts operate via metal-oxo intermediates.
=== Microbiology ===
Over 40 phylogenetically and metabolically diverse microorganisms capable of growth using perchlorate as an electron acceptor have been isolated since 1996. Most originate from the Pseudomonadota, but others include the Bacillota, Moorella perchloratireducens and Sporomusa sp., and the archaeon Archaeoglobus fulgidus. With the exception of A. fulgidus, microbes that grow via perchlorate reduction utilize the enzymes perchlorate reductase and chlorite dismutase, which collectively take perchlorate to chloride. In the process, free oxygen (O2) is generated.
== Natural abundance ==
=== Terrestrial abundance ===
Perchlorate is created by lightning discharges in the presence of chloride. Perchlorate has been detected in rain and snow samples from Florida and Lubbock, Texas. It is also present in Martian soil.
Naturally occurring perchlorate at its most abundant can be found commingled with deposits of sodium nitrate in the Atacama Desert of northern Chile. These deposits have been heavily mined as sources for nitrate-based fertilizers. Chilean nitrate is in fact estimated to be the source of around 81,000 tonnes (89,000 tons) of perchlorate imported to the U.S. (1909–1997). Results from surveys of ground water, ice, and relatively unperturbed deserts have been used to estimate a 100,000 to 3,000,000 tonnes (110,000 to 3,310,000 tons) "global inventory" of natural perchlorate presently on Earth.
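The metric/US unit pairs quoted above can be checked with a one-line conversion; the conversion factor is standard, and the tonnages come from the text (the quoted "tons" figures are rounded):

```python
TONNE_TO_SHORT_TON = 1.10231  # 1 metric tonne ~ 1.10231 US short tons

def tonnes_to_tons(tonnes: float) -> float:
    """Convert metric tonnes to US short tons."""
    return tonnes * TONNE_TO_SHORT_TON

# Figures quoted above: 81,000 t ~ 89,000 tons of imported Chilean
# nitrate perchlorate; 100,000-3,000,000 t ~ 110,000-3,310,000 tons
# estimated global inventory of natural perchlorate.
print(round(tonnes_to_tons(81_000)))     # 89287, quoted as ~89,000
print(round(tonnes_to_tons(100_000)))    # 110231, quoted as ~110,000
print(round(tonnes_to_tons(3_000_000)))  # 3306930, quoted as ~3,310,000
```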
=== On Mars ===
Perchlorate was detected in Martian soil at the level of ~0.6% by weight. It was shown that at the Phoenix landing site it was present as a mixture of 60% Ca(ClO4)2 and 40% Mg(ClO4)2. These salts, formed from perchlorates, act as antifreeze and substantially lower the freezing point of water. Based on the temperature and pressure conditions on present-day Mars at the Phoenix lander site, conditions would allow a perchlorate salt solution to be stable in liquid form for a few hours each day during the summer.
The possibility that the perchlorate was a contaminant brought from Earth was eliminated by several lines of evidence. The Phoenix retro-rockets used ultrapure hydrazine, and the launch propellants consisted of ammonium perchlorate or ammonium nitrate. Sensors on board Phoenix found no traces of ammonia, and thus the perchlorate in the quantities present in all three soil samples is indigenous to the Martian soil. Perchlorate is widespread in Martian soils at concentrations between 0.5 and 1%. At such concentrations, it could be an important source of oxygen, but it could also become a critical chemical hazard to astronauts.
In 2006, a mechanism was proposed for the formation of perchlorates that is particularly relevant to the discovery of perchlorate at the Phoenix lander site. It was shown that soils with high concentrations of chloride converted to perchlorate in the presence of titanium dioxide and sunlight/ultraviolet light. The conversion was reproduced in the lab using chloride-rich soils from Death Valley. Other experiments have demonstrated that the formation of perchlorate is associated with wide band gap semiconducting oxides. In 2014, it was shown that perchlorate and chlorate can be produced from chloride minerals under Martian conditions via UV using only NaCl and silicate.
Further findings of perchlorate and chlorate in the Martian meteorite EETA79001 and by the Mars Curiosity rover in 2012-2013 support the notion that perchlorates are globally distributed throughout the Martian surface. With concentrations approaching 0.5% and exceeding toxic levels on Martian soil, Martian perchlorates would present a serious challenge to human settlement, as well as microorganisms. On the other hand, the perchlorate would provide a convenient source of oxygen for the settlements.
On September 28, 2015, NASA announced that analyses of spectral data from the Compact Reconnaissance Imaging Spectrometer for Mars instrument (CRISM) on board the Mars Reconnaissance Orbiter from four different locations where recurring slope lineae (RSL) are present found evidence for hydrated salts. The hydrated salts most consistent with the spectral absorption features are magnesium perchlorate, magnesium chlorate and sodium perchlorate. The findings strongly support the hypothesis that RSL form as a result of contemporary water activity on Mars.
== Contamination in environment ==
Perchlorates are of concern because of uncertainties about toxicity and health effects at low levels in drinking water, impact on ecosystems, and indirect exposure pathways for humans due to accumulation in vegetables. They are water-soluble, exceedingly mobile in aqueous systems, and can persist for many decades under typical groundwater and surface water conditions.
=== Industrial origin ===
Perchlorates are used mostly in rocket propellants but also in disinfectants, bleaching agents, and herbicides. Perchlorate contamination is caused during both the manufacture and ignition of rockets and fireworks. Fireworks are also a source of perchlorate in lakes. Removal and recovery methods of these compounds from explosives and rocket propellants include high-pressure water washout, which generates aqueous ammonium perchlorate.
=== In U.S. drinking water ===
In 2000, perchlorate contamination beneath the former flare manufacturing plant Olin Corporation Flare Facility, Morgan Hill, California was first discovered several years after the plant had closed. The plant had used potassium perchlorate as one of the ingredients during its 40 years of operation. By late 2003, the State of California and the Santa Clara Valley Water District had confirmed a groundwater plume currently extending over nine miles through residential and agricultural communities.
The California Regional Water Quality Control Board and the Santa Clara Valley Water District have engaged in a major outreach effort, and a water well testing program has been underway for about 1,200 residential, municipal, and agricultural wells. Large ion-exchange treatment units are operating in three public water supply systems, which include seven municipal wells with perchlorate detections. The potentially responsible parties, Olin Corporation and Standard Fuse Incorporated, have been supplying bottled water to nearly 800 households with private wells, and the Regional Water Quality Control Board has been overseeing cleanup efforts.
The source of perchlorate in California was mainly attributed to two manufacturers in the southeast portion of the Las Vegas Valley in Nevada, where perchlorate has been produced for industrial use. This led to perchlorate release into Lake Mead and the Colorado River, affecting regions of Nevada, California, and Arizona, where water from this reservoir is used for consumption, irrigation, and recreation by approximately half the population of these states. Lake Mead has been identified as the source of 90% of the perchlorate in Southern Nevada's drinking water. Based on sampling, perchlorate has been found to affect 20 million people, with the highest detections in Texas, southern California, New Jersey, and Massachusetts, but intensive sampling of the Great Plains and other central regions may lead to revised estimates with additional affected areas. An action level of 18 μg/L has been adopted by several affected states.
In 2001, the chemical was detected at levels as high as 5 μg/L at Joint Base Cape Cod (formerly Massachusetts Military Reservation), over the Massachusetts then state regulation of 2 μg/L.
As of 2009, low levels of perchlorate had been detected in both drinking water and groundwater in 26 states in the U.S., according to the Environmental Protection Agency (EPA).
=== In food ===
In 2004, the chemical was found in cow's milk in California at an average level of 1.3 parts per billion (ppb, or μg/L), which may have entered the cows through feeding on crops exposed to water containing perchlorates.
A 2005 study suggested human breast milk had an average of 10.5 μg/L of perchlorate.
=== From minerals and other natural occurrences ===
In some places, there is no clear source of perchlorate, and it may be naturally occurring. Natural perchlorate on Earth was first identified in the terrestrial nitrate deposits/fertilizers of the Atacama Desert in Chile as early as the 1880s and was for a long time considered a unique perchlorate source. The perchlorate released from historic use of Chilean nitrate-based fertilizer, which the U.S. imported by the hundreds of tons in the early 19th century, can still be found in some groundwater sources of the United States, for example on Long Island, New York. Recent improvements in analytical sensitivity using ion chromatography based techniques have revealed a more widespread presence of natural perchlorate, particularly in subsoils of the southwestern USA, in salt evaporites in California and Nevada, in Pleistocene groundwater in New Mexico, and even in extremely remote places such as Antarctica. The data from these studies and others indicate that natural perchlorate is globally deposited on Earth, with subsequent accumulation and transport governed by local hydrologic conditions.
Despite its importance to environmental contamination, the specific source and processes involved in natural perchlorate production remain poorly understood. Laboratory experiments in conjunction with isotopic studies have implied that perchlorate may be produced on earth by oxidation of chlorine species through pathways involving ozone or its photochemical products. Other studies have suggested that perchlorate can also be formed by lightning activated oxidation of chloride aerosols (e.g., chloride in sea salt sprays), and ultraviolet or thermal oxidation of chlorine (e.g., bleach solutions used in swimming pools) in water.
=== From nitrate fertilizers ===
Although perchlorate as an environmental contaminant is usually associated with the manufacture, storage, and testing of solid rocket motors, attention has also focused on perchlorate contamination as a side effect of the use of natural nitrate fertilizer and its release into groundwater. The use of naturally contaminated nitrate fertilizer contributes to the infiltration of perchlorate anions into groundwater and threatens the water supplies of many regions in the US.
One of the main sources of perchlorate contamination from natural nitrate fertilizer use was found to be fertilizer derived from Chilean caliche, because Chile has a rich source of naturally occurring perchlorate anion. Perchlorate concentration was highest in Chilean nitrate, ranging from 3.3 to 3.98%. Perchlorate in the solid fertilizer ranged from 0.7 to 2.0 mg g−1, a variation of less than a factor of 3, and it is estimated that sodium nitrate fertilizers derived from Chilean caliche contain approximately 0.5–2 mg g−1 of perchlorate anion. The direct ecological effect of perchlorate is not well known; its impact can be influenced by factors including rainfall and irrigation, dilution, natural attenuation, soil adsorption, and bioavailability. Quantification of perchlorate concentrations in nitrate fertilizer components via ion chromatography revealed that horticultural fertilizer components contained perchlorate ranging between 0.1 and 0.46%.
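The "variation of less than a factor of 3" claim above, and the conversion of the mg g−1 range to weight percent, can be verified directly from the quoted figures (only the 0.7–2.0 mg g−1 range comes from the text; the unit conversion is an illustration):

```python
lo, hi = 0.7, 2.0  # mg perchlorate per g of solid fertilizer, as quoted
factor = hi / lo
print(round(factor, 2))  # 2.86 -- indeed less than a factor of 3

# Converting mg/g to weight percent: 1 mg/g = 0.1 wt%,
# so 0.7-2.0 mg/g corresponds to 0.07-0.20 wt%.
print(round(lo * 0.1, 2), round(hi * 0.1, 2))  # 0.07 0.2
```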
== Environmental cleanup ==
There have been many attempts to eliminate perchlorate contamination. Current remediation technologies for perchlorate suffer from high costs and difficulty of operation, so there has been interest in developing systems that offer economical and green alternatives.
=== Treatment ex situ and in situ ===
Several technologies can remove perchlorate, via treatments ex situ (away from the location) and in situ (at the location).
Ex situ treatments include ion exchange using perchlorate-selective or nitrate-specific resins, bioremediation using packed-bed or fluidized-bed bioreactors, and membrane technologies such as electrodialysis and reverse osmosis. In ex situ treatment via ion exchange, contaminant ions are attracted to and adhere to the ion exchange resin because the resin and the contaminant ions carry opposite charges. As a contaminant ion adheres to the resin, another ion of the same charge is released into the water being treated; the released ion is thus exchanged for the contaminant. Ion exchange technology is well suited to perchlorate treatment and handles high volume throughput, but it does not treat chlorinated solvents. In addition, the ex situ technology of liquid-phase carbon adsorption is employed, in which granular activated carbon (GAC) is used to remove low levels of perchlorate; pretreatment may be required to prepare the GAC for perchlorate removal.
In situ treatments, such as bioremediation via perchlorate-reducing microbes and permeable reactive barriers, are also used to treat perchlorate. In situ bioremediation has the advantages of minimal above-ground infrastructure and the ability to treat chlorinated solvents, perchlorate, nitrate, and RDX simultaneously. However, it may negatively affect secondary water quality. The in situ technology of phytoremediation could also be utilized, although the mechanism of perchlorate phytoremediation is not yet fully understood.
Bioremediation using perchlorate-reducing bacteria, which reduce perchlorate ions to harmless chloride, has also been proposed.
== Health effects ==
=== Thyroid inhibition ===
Perchlorate is a potent competitive inhibitor of the thyroid sodium-iodide symporter. Thus, it has been used to treat hyperthyroidism since the 1950s. At very high doses (70,000–300,000 ppb) the administration of potassium perchlorate was considered the standard of care in the United States, and remains the approved pharmacologic intervention for many countries.
In large amounts perchlorate interferes with iodine uptake into the thyroid gland. In adults, the thyroid gland helps regulate metabolism by releasing hormones, while in children, the thyroid helps ensure proper development. The NAS, in its 2005 report Health Implications of Perchlorate Ingestion, emphasized that this effect, also known as iodide uptake inhibition (IUI), is not an adverse health effect. However, in January 2008, California's Department of Toxic Substances Control stated that perchlorate is becoming a serious threat to human health and water resources. In 2010, the EPA's Office of the Inspector General determined that the agency's own perchlorate reference dose (RfD) of 24.5 parts per billion protects against all human biological effects from exposure. This finding was due to a significant shift in policy at the EPA: basing its risk assessment on non-adverse effects such as IUI instead of adverse effects. The Office of the Inspector General also found that, because the EPA's perchlorate reference dose is conservative and protective of human health, further reducing perchlorate exposure below the reference dose does not effectively lower risk.
Because of ammonium perchlorate's adverse effects upon children, Massachusetts set its maximum allowed limit of ammonium perchlorate in drinking water at 2 parts per billion (2 ppb = 2 micrograms per liter).
Perchlorate affects only thyroid hormone. Because it is neither stored nor metabolized, effects of perchlorate on the thyroid gland are reversible, though effects on brain development from lack of thyroid hormone in fetuses, newborns, and children are not.
Toxic effects of perchlorate have been studied in a survey of industrial plant workers who had been exposed to perchlorate, compared to a control group of other industrial plant workers who had no known exposure to perchlorate. After undergoing multiple tests, workers exposed to perchlorate were found to have a significant systolic blood pressure rise compared to the workers who were not exposed to perchlorate, as well as a significant decreased thyroid function compared to the control workers.
A study involving healthy adult volunteers determined that at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate can temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition"; perchlorate is thus a known goitrogen). The EPA converted this dose into a reference dose of 0.0007 mg/(kg·d) by dividing it by the standard intraspecies uncertainty factor of 10. The agency then calculated a "drinking water equivalent level" of 24.5 ppb by assuming a person weighs 70 kg (150 lb) and consumes 2 L (0.44 imp gal; 0.53 US gal) of drinking water per day over a lifetime.
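The dose arithmetic described above is simple enough to check directly. The sketch below reproduces the two conversions (NOEL divided by the uncertainty factor, then the RfD scaled by body mass and daily water intake); the function names are illustrative, not an EPA API.

```python
# Reproduce the EPA derivation quoted in the text.
# NOEL 0.007 mg/(kg*day) -> RfD 0.0007 mg/(kg*day) -> DWEL 24.5 ug/L (ppb).

def reference_dose(noel_mg_per_kg_day, uncertainty_factor):
    """Divide the no-observed-effect level by an uncertainty factor."""
    return noel_mg_per_kg_day / uncertainty_factor

def drinking_water_equivalent_level(rfd_mg_per_kg_day, body_mass_kg=70,
                                    intake_l_per_day=2):
    """mg/(kg*day) * kg / (L/day) gives mg/L; * 1000 converts to ug/L (ppb)."""
    return rfd_mg_per_kg_day * body_mass_kg / intake_l_per_day * 1000

rfd = reference_dose(0.007, 10)              # 0.0007 mg/(kg*day)
dwel = drinking_water_equivalent_level(rfd)  # 24.5 ug/L
print(rfd, dwel)
```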
In 2006, a study reported a statistical association between environmental levels of perchlorate and changes in thyroid hormones of women with low iodine. The study authors were careful to point out that hormone levels in all the study subjects remained within normal ranges. The authors also indicated that they did not originally normalize their findings for creatinine, which would have essentially accounted for fluctuations in the concentrations of one-time urine samples like those used in this study. When the Blount research was re-analyzed with the creatinine adjustment made, the study population limited to women of reproductive age, and results not shown in the original analysis, any remaining association between the results and perchlorate intake disappeared. Soon after the revised Blount Study was released, Robert Utiger, a doctor with the Harvard Institute of Medicine, testified before the US Congress and stated: "I continue to believe that that reference dose, 0.007 milligrams per kilo (24.5 ppb), which includes a factor of 10 to protect those who might be more vulnerable, is quite adequate."
In 2014, a study was published, showing that environmental exposure to perchlorate in pregnant women with hypothyroidism is associated with a significant risk of low IQ in their children.
=== Lung toxicity ===
Some studies suggest that perchlorate has pulmonary toxic effects as well. Studies have been performed on rabbits where perchlorate has been injected into the trachea. The lung tissue was removed and analyzed, and it was found that perchlorate injected lung tissue showed several adverse effects when compared to the control group that had been intratracheally injected with saline. Adverse effects included inflammatory infiltrates, alveolar collapse, subpleural thickening, and lymphocyte proliferation.
=== Aplastic anemia ===
In the early 1960s, potassium perchlorate used to treat Graves' disease was implicated in the development of aplastic anemia—a condition where the bone marrow fails to produce new blood cells in sufficient quantity—in thirteen patients, seven of whom died. Subsequent investigations have indicated the connection between administration of potassium perchlorate and development of aplastic anemia to be "equivocal at best", meaning that the benefit of treatment, if it is the only known treatment, outweighs the risk; it appeared that a contaminant had poisoned the thirteen patients.
== Regulation in the U.S. ==
=== Water ===
In 1998, perchlorate was included in the U.S. EPA Contaminant Candidate List, primarily due to its detection in California drinking water.
In 2002, the EPA completed its draft toxicological review of perchlorate and proposed a reference dose of 0.00003 milligrams per kilogram per day (mg/kg/day) based primarily on studies that identified neurodevelopmental deficits in rat pups. These deficits were linked to maternal exposure to perchlorate.
In 2003, a federal district court in California found that the Comprehensive Environmental Response, Compensation and Liability Act applied, because perchlorate is ignitable, and therefore was a "characteristic" hazardous waste.
Subsequently, the U.S. National Research Council of the National Academy of Sciences (NAS) reviewed the health implications of perchlorate, and in 2005 proposed a much higher reference dose of 0.0007 mg/kg/day based primarily on a 2002 study by Greer et al. During that study, 37 adult human subjects were split into four exposure groups exposed to 0.007 (7 subjects), 0.02 (10 subjects), 0.1 (10 subjects), and 0.5 (10 subjects) mg/kg/day. Significant decreases in iodide uptake were found in the three highest exposure groups. Iodide uptake was not significantly reduced in the lowest exposed group, but four of the seven subjects in this group experienced inhibited iodide uptake. In 2005, the RfD proposed by NAS was accepted by EPA and added to its integrated risk information system (IRIS).
The NAS report described the level of lowest exposure from Greer et al. as a "no-observed-effect level" (NOEL). However, there was actually an effect at that level, although it was not statistically significant, largely due to the small size of the study population (four of seven subjects showed a slight decrease in iodide uptake).
Reduced iodide uptake was not considered an adverse effect, even though it is a precursor to an adverse effect, hypothyroidism. Therefore, additional safety factors would be necessary when extrapolating from the point of departure to the RfD.
Consideration of data uncertainty was insufficient because the Greer et al. study reflected only a 14-day (i.e., acute) exposure in healthy adults, and no additional safety factors were applied to protect sensitive subpopulations such as breastfeeding newborns.
Although there has generally been consensus with the Greer et al. study, there has been no consensus with regard to developing a perchlorate RfD. One of the key differences results from how the point of departure is viewed (i.e., NOEL or "lowest-observed-adverse-effect level", LOAEL), or whether a benchmark dose should be used to derive the RfD. Defining the point of departure as a NOEL or LOAEL has implications when it comes to applying appropriate safety factors to the point of departure to derive the RfD.
In early 2006, EPA issued a "Cleanup Guidance" and recommended a Drinking Water Equivalent Level (DWEL) for perchlorate of 24.5 μg/L. Both DWEL and Cleanup Guidance were based on a 2005 review of the existing research by the National Academy of Sciences (NAS).
Lacking a federal drinking water standard, several states subsequently published their own standards for perchlorate including Massachusetts in 2006 and California in 2007. Other states, including Arizona, Maryland, Nevada, New Mexico, New York, and Texas have established non-enforceable, advisory levels for perchlorate.
In 2008, EPA issued an interim drinking water health advisory for perchlorate and with it a guidance and analysis concerning the impacts on the environment and drinking water. California also issued guidance regarding perchlorate use. Both the Department of Defense and some environmental groups voiced questions about the NAS report, but no credible science has emerged to challenge the NAS findings.
In February 2008, the U.S. Food and Drug Administration (FDA) reported that U.S. toddlers on average were being exposed to more than half of EPA's safe dose from food alone. In March 2009, a Centers for Disease Control study found 15 brands of infant formula contaminated with perchlorate and that combined with existing perchlorate drinking water contamination, infants could be at risk for perchlorate exposure above the levels considered safe by EPA.
In 2010, the Massachusetts Department of Environmental Protection set an RfD (0.07 μg/kg/day) 10-fold lower than the NAS RfD, using a much higher uncertainty factor of 100. They also calculated an infant drinking water value, which neither the US EPA nor CalEPA had done.
On February 11, 2011, EPA determined that perchlorate meets the Safe Drinking Water Act criteria for regulation as a contaminant. The agency found that perchlorate may have an adverse effect on human health and is known to occur in public water systems at frequencies and levels that present a public health concern. Since then EPA has continued to determine what level of contamination is appropriate. EPA prepared extensive responses to submitted public comments.
In 2016, the Natural Resources Defense Council (NRDC) filed a lawsuit to accelerate EPA's regulation of perchlorate.
In 2019, EPA proposed a Maximum Contaminant Level of 0.056 mg/L for public water systems.
On June 18, 2020, EPA announced that it was withdrawing its 2011 regulatory determination and its 2019 proposal, stating that it had taken "proactive steps" with state and local governments to address perchlorate contamination. In September 2020 NRDC filed suit against EPA for its failure to regulate perchlorate, and stated that 26 million people may be affected by perchlorate in their drinking water. On March 31, 2022, the EPA announced that a review confirmed its 2020 decision. Following the NRDC lawsuit, in 2023 the US Court of Appeals for the DC Circuit ordered EPA to develop a perchlorate standard for public water systems. EPA stated that it will publish a proposed standard for perchlorate in 2025, and issue a final rule in 2027.
== Covalent perchlorates ==
Although perchlorate is typically a non-coordinating anion, a few metal complexes are known. Hexaperchloratoaluminate and tetraperchloratoaluminate are strong oxidising agents.
Several perchlorate esters are known. For example, methyl perchlorate is a high energy material that is a strong alkylating agent. Chlorine perchlorate is a covalent inorganic analog.
== Safety ==
As discussed above, perchlorate competes with iodide in the thyroid gland. In the presence of reductants, perchlorate forms potentially explosive mixtures. The PEPCON disaster destroyed an ammonium perchlorate production plant when a fire caused the ammonium perchlorate stored on site to react with the aluminum from which the storage tanks were constructed and explode.
== References ==
== External links ==
NAS Report: The Health Effects of Perchlorate Ingestion
NRDC's criticism of NAS report
Environment California report Archived 2010-06-09 at the Wayback Machine (Executive Summary with link to full text)
Macho Moms: Perchlorate pollutant masculinizes fish: Science News Online, August 12, 2006 Archived February 20, 2008, at the Wayback Machine
New Scientist Space Blog: Phoenix discovery may be bad for Mars life
State Threatening to Sue Military over Water Pollution Archived 2005-11-09 at the Wayback Machine, Associated Press, May 19, 2003.
Health Effects of Perchlorate from Spent Rocket, SpaceDaily.com, July 11, 2002.
Dept of Defense, Dept of Energy, and US Environmental Protection Agency's Strategic Environmental Research and Development Program, Elimination of Perchlorate Oxidizers from Pyrotechnic Flare Compositions, 2009 Archived 2007-08-06 at the Wayback Machine
In organic chemistry, a nitrate ester is an organic functional group with the formula R−ONO2, where R stands for any organyl group. They are the esters of nitric acid and alcohols. A well-known example is nitroglycerin, which is not a nitro compound, despite its name.
== Synthesis and reactions ==
Nitrate esters are typically prepared by condensation of nitric acid and an alcohol. For example, the simplest nitrate ester, methyl nitrate, is formed by reaction of methanol and nitric acid in the presence of sulfuric acid:

CH3OH + HNO3 → CH3ONO2 + H2O
Formation of a nitrate ester is called a nitrooxylation (less commonly, nitroxylation).
Most commonly, "mixed acid" (nitric and sulfuric acids) are used, but in the 1980s production of the nitrocellulose with magnesium nitrate as a dehydrating agent was started in the US. In laboratory, phosphoric acid and phosphorus pentoxide or acetic acid and its anhydride may be used for the same purpose, or the nitroxylation can be conducted in anhydrous conditions (such as dichloromethane or chloroform).
== Explosive properties ==
The thermal decomposition of nitrate esters mainly yields the gases molecular nitrogen (N2) and carbon dioxide. The considerable chemical energy of the detonation is due to the high strength of the bond in molecular nitrogen. This stoichiometry is illustrated by the equation for the detonation of nitroglycerin.
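The stoichiometry described above can be checked with a simple element balance. The sketch below verifies the commonly cited balanced equation for nitroglycerin (C3H5N3O9) detonation, 4 C3H5N3O9 → 12 CO2 + 10 H2O + 6 N2 + O2; the formula dictionaries are written by hand for this illustration.

```python
# Verify that the nitroglycerin detonation equation conserves every element.
from collections import Counter

def atoms(formula_counts, coefficient):
    """Scale an element-count dict by a stoichiometric coefficient."""
    return Counter({el: n * coefficient for el, n in formula_counts.items()})

nitroglycerin = {"C": 3, "H": 5, "N": 3, "O": 9}
reactants = atoms(nitroglycerin, 4)
products = (atoms({"C": 1, "O": 2}, 12)    # CO2
            + atoms({"H": 2, "O": 1}, 10)  # H2O
            + atoms({"N": 2}, 6)           # N2
            + atoms({"O": 2}, 1))          # O2
print(reactants == products)  # True: the equation balances
```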
Illustrative of the highly sensitive nature of some organic nitrates is Si(CH2ONO2)4. A single crystal of this compound detonates even upon contact with a teflon spatula and in fact made full characterization impossible. Another contributor to its exothermic decomposition (inferred from much safer in silico experimentation) is the ability of silicon in its crystal phase to coordinate to two oxygen nitrito groups in addition to regular coordination to the four carbon atoms. This additional coordination would make formation of silicon dioxide (one of the decomposition products) more facile.
== Medical use ==
The nitrate esters isosorbide dinitrate (Isordil) and isosorbide mononitrate (Imdur, Ismo, Monoket, Mononitron) are converted in the body to nitric oxide, a potent natural vasodilator. These esters are used as medicines for angina pectoris (ischemic heart disease).
== Related compounds ==
Acetyl nitrate is a nitrate anhydride, being derived from the condensation of nitric and acetic acids.
== References ==
Retrosynthetic analysis is a technique for solving problems in the planning of organic syntheses. This is achieved by transforming a target molecule into simpler precursor structures regardless of any potential reactivity/interaction with reagents. Each precursor material is examined using the same method. This procedure is repeated until simple or commercially available structures are reached. These simpler/commercially available compounds can be used to form a synthesis of the target molecule. Retrosynthetic analysis was used as early as 1917 in Robinson's Tropinone total synthesis. Important conceptual work on retrosynthetic analysis was published by George Vladutz in 1963.
E.J. Corey formalized and popularized the concept from 1967 onwards in his article General methods for the construction of complex molecules and his book The Logic of Chemical Synthesis.
The power of retrosynthetic analysis becomes evident in the design of a synthesis. The goal of retrosynthetic analysis is a structural simplification. Often, a synthesis will have more than one possible synthetic route. Retrosynthesis is well suited for discovering different synthetic routes and comparing them in a logical and straightforward fashion. A database may be consulted at each stage of the analysis, to determine whether a component already exists in the literature. In that case, no further exploration of that compound would be required. If that compound exists, it can be a jumping point for further steps developed to reach a synthesis.
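The recursive procedure described above (disconnect the target, check availability, recurse on each precursor) can be sketched in a few lines. The reaction table below is a toy placeholder built from the phenylacetic acid example discussed later in this article, not a real reaction database.

```python
# Toy retrosynthesis: walk back from a target until every precursor is in stock.
DISCONNECTIONS = {  # target -> one known set of precursors (placeholder data)
    "phenylacetic acid": ["benzyl cyanide"],
    "benzyl cyanide": ["benzyl bromide", "sodium cyanide"],
}
STOCK = {"benzyl bromide", "sodium cyanide"}  # "commercially available"

def retrosynthesize(target, route=None):
    """Return a list of (product, precursors) steps, simplest-first."""
    route = route if route is not None else []
    if target in STOCK:
        return route                       # nothing left to make
    precursors = DISCONNECTIONS[target]    # apply one transform
    for p in precursors:
        retrosynthesize(p, route)
    route.append((target, precursors))
    return route

print(retrosynthesize("phenylacetic acid"))
# [('benzyl cyanide', ['benzyl bromide', 'sodium cyanide']),
#  ('phenylacetic acid', ['benzyl cyanide'])]
```

Real systems differ mainly in scale: the disconnection table becomes a database of transforms (or a learned model), and many alternative routes are kept in a retrosynthetic tree rather than a single list.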
There are both academic and commercial groups developing retrosynthesis tools. With the growing application of machine learning and artificial intelligence in chemistry, many research groups, such as the Coley Group from MIT, and companies, such as Chemical.AI, Reaxys, etc., have started to integrate deep learning into the conventional rule-based approaches.
== Definitions ==
Disconnection
A retrosynthetic step involving the breaking of a bond to form two (or more) synthons.
Retron
A minimal molecular substructure that enables certain transformations.
Retrosynthetic tree
A directed acyclic graph of several (or all) possible retrosyntheses of a single target.
Synthon
A fragment of a compound that assists in the formation of a synthesis, derived from that target molecule. A synthon and the corresponding commercially available synthetic equivalent are shown below:
Target
The desired final compound.
Transform
The reverse of a synthetic reaction; the formation of starting materials from a single product.
== Example ==
Shown below is a retrosynthetic analysis of phenylacetic acid:
In planning the synthesis, two synthons are identified: a nucleophilic "−COOH" group and an electrophilic "PhCH2+" group. Neither synthon exists as written; synthetic equivalents corresponding to the synthons are reacted to produce the desired product. In this case, the cyanide anion is the synthetic equivalent for the −COOH synthon, while benzyl bromide is the synthetic equivalent for the benzyl synthon.
The synthesis of phenylacetic acid determined by retrosynthetic analysis is thus:
PhCH2Br + NaCN → PhCH2CN + NaBr
PhCH2CN + 2 H2O → PhCH2COOH + NH3
In fact, phenylacetic acid has been synthesized from benzyl cyanide, itself prepared by the analogous reaction of benzyl bromide with sodium cyanide.
== Strategies ==
=== Functional group strategies ===
Manipulation of functional groups can lead to significant reductions in molecular complexity.
=== Stereochemical strategies ===
Numerous chemical targets have distinct stereochemical demands. Stereochemical transformations (such as the Claisen rearrangement and Mitsunobu reaction) can remove or transfer the desired chirality thus simplifying the target.
=== Structure-goal strategies ===
Directing a synthesis toward a desirable intermediate can greatly narrow the focus of analysis. This allows bidirectional search techniques.
=== Transform-based strategies ===
The application of transformations to retrosynthetic analysis can lead to powerful reductions in molecular complexity. Unfortunately, powerful transform-based retrons are rarely present in complex molecules, and additional synthetic steps are often needed to establish their presence.
=== Topological strategies ===
The identification of one or more key bond disconnections may reveal key substructures, or rearrangement transformations that are otherwise difficult to spot.
Disconnections that preserve ring structures are encouraged.
Disconnections that create rings larger than 7 members are discouraged.
Disconnection involves creativity.
== See also ==
Organic synthesis
Total synthesis
== References ==
== External links ==
ChemAIRS, AI-driven retrosynthesis tools by Chemical.AI
Centre for Molecular and Biomolecular Informatics
Presentation on ARChem Route Designer, ACS, Philadelphia, September 2008 for more info on ARChem see the SimBioSys pages.
Manifold, Software freely available for academic users developed by PostEra
Retrosynthesis planning tool: ICSynth by InfoChem
Spaya, Software freely available proposed by Iktos
In organic chemistry, nitro compounds are organic compounds that contain one or more nitro functional groups (−NO2). The nitro group is one of the most common explosophores (functional group that makes a compound explosive) used globally. The nitro group is also strongly electron-withdrawing. Because of this property, C−H bonds alpha (adjacent) to the nitro group can be acidic. For similar reasons, the presence of nitro groups in aromatic compounds retards electrophilic aromatic substitution but facilitates nucleophilic aromatic substitution. Nitro groups are rarely found in nature. They are almost invariably produced by nitration reactions starting with nitric acid.
== Synthesis ==
=== Preparation of aromatic nitro compounds ===
Aromatic nitro compounds are typically synthesized by nitration. Nitration is achieved using a mixture of nitric acid and sulfuric acid, which produces the nitronium ion (NO2+), the active electrophile:
The nitration product produced on the largest scale, by far, is nitrobenzene. Many explosives are produced by nitration including trinitrophenol (picric acid), trinitrotoluene (TNT), and trinitroresorcinol (styphnic acid).
Another but more specialized method for making aryl–NO2 group starts from halogenated phenols, is the Zinke nitration.
=== Preparation of aliphatic nitro compounds ===
Aliphatic nitro compounds can be synthesized by various methods; notable examples include:
Free radical nitration of alkanes. The reaction produces fragments from the parent alkane, creating a diverse mixture of products; for instance, nitromethane, nitroethane, 1-nitropropane, and 2-nitropropane are produced by treating propane with nitric acid in the gas phase (e.g. 350–450 °C and 8–12 atm).
Nucleophilic substitution reactions between halocarbons or organosulfates with silver or alkali nitrite salts.
Nitromethane can be produced in the laboratory by treating sodium chloroacetate with sodium nitrite.
Oxidation of oximes or primary amines.
Reduction of β-nitro alcohols or nitroalkenes.
By decarboxylation of α-nitro carboxylic acids formed from nitriles and ethyl nitrate.
==== Ter Meer Reaction ====
In nucleophilic aliphatic substitution, sodium nitrite (NaNO2) displaces the halide of an alkyl halide. In the so-called ter Meer reaction (1876), named after Edmund ter Meer, the reactant is a 1,1-halonitroalkane:
A reaction mechanism has been proposed in which, in the first, slow step, a proton is abstracted from nitroalkane 1 to give carbanion 2, followed by protonation to the aci-nitro form 3 and finally nucleophilic displacement of chlorine, based on an experimentally observed kinetic hydrogen isotope effect of 3.3. When the same reactant is treated with potassium hydroxide, the reaction product is the 1,2-dinitro dimer.
== Occurrence ==
=== In nature ===
Chloramphenicol is a rare example of a naturally occurring nitro compound. At least some naturally occurring nitro groups arose by the oxidation of amino groups. 2-Nitrophenol is an aggregation pheromone of ticks.
Examples of nitro compounds are rare in nature. 3-Nitropropionic acid is found in fungi and plants (Indigofera). Nitropentadecene is a defense compound found in termites. Aristolochic acids are found in the flowering plant family Aristolochiaceae. Nitrophenylethane is found in Aniba canelilla, as well as in members of the Annonaceae, Lauraceae and Papaveraceae.
=== In pharmaceuticals ===
Despite the occasional use in pharmaceuticals, the nitro group is associated with mutagenicity and genotoxicity and therefore is often regarded as a liability in the drug discovery process.
== Reactions ==
Nitro compounds participate in several organic reactions, the most important being reduction of nitro compounds to the corresponding amines:
RNO2 + 3 H2 → RNH2 + 2 H2O
Virtually all aromatic amines (e.g. aniline) are derived from nitroaromatics through such catalytic hydrogenation. A variation is formation of a dimethylaminoarene with palladium on carbon and formaldehyde:
The α-carbon of nitroalkanes is somewhat acidic. The pKa values of nitromethane and 2-nitropropane are respectively 17.2 and 16.9 in dimethyl sulfoxide (DMSO) solution, suggesting an aqueous pKa of around 11. In other words, these carbon acids can be deprotonated in aqueous solution. The conjugate base is called a nitronate, and behaves similar to an enolate. In the nitroaldol reaction, it adds directly to aldehydes, and, with enones, can serve as a Michael donor. Conversely, a nitroalkene reacts with enols as a Michael acceptor. Nitrosating a nitronate gives a nitrolic acid.
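Taking the aqueous pKa of roughly 11 estimated above, the Henderson–Hasselbalch relation gives the fraction of a nitroalkane present as its nitronate conjugate base at a given pH. This is an illustrative calculation added here, not a figure from the sources.

```python
# Fraction of a monoprotic acid present as its conjugate base at equilibrium,
# applied to a nitroalkane with the pKa ~ 11 quoted in the text.

def fraction_deprotonated(pka, ph):
    """[A-] / ([HA] + [A-]) from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

pka_nitromethane = 11.0  # approximate aqueous value from the text
print(fraction_deprotonated(pka_nitromethane, 7.0))   # ~1e-4 at neutral pH
print(fraction_deprotonated(pka_nitromethane, 13.0))  # ~0.99 in strong base
```

This matches the statement that these carbon acids can be deprotonated in aqueous solution: only strongly basic conditions convert most of the nitroalkane to the nitronate.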
Nitronates are also key intermediates in the Nef reaction: when exposed to acids or oxidants, a nitronate hydrolyzes to a carbonyl and azanone.
Grignard reagents combine with nitro compounds to give a nitrone; but a Grignard reagent with an α hydrogen will then add again to the nitrone to give a hydroxylamine salt.
=== Dye syntheses ===
The Leimgruber–Batcho, Bartoli and Baeyer–Emmerling indole syntheses begin with aromatic nitro compounds. Indigo can be synthesized in a condensation reaction from ortho-nitrobenzaldehyde and acetone in strongly basic conditions in a reaction known as the Baeyer–Drewson indigo synthesis.
=== Biochemical reactions ===
Many flavin-dependent enzymes are capable of oxidizing aliphatic nitro compounds to less-toxic aldehydes and ketones. Nitroalkane oxidase and 3-nitropropionate oxidase oxidize aliphatic nitro compounds exclusively, whereas other enzymes such as glucose oxidase have other physiological substrates.
=== Explosions ===
Explosive decomposition of organo nitro compounds are redox reactions, wherein both the oxidant (nitro group) and the fuel (hydrocarbon substituent) are bound within the same molecule. The explosion process generates heat by forming highly stable products including molecular nitrogen (N2), carbon dioxide, and water. The explosive power of this redox reaction is enhanced because these stable products are gases at mild temperatures. Many contact explosives contain the nitro group.
== See also ==
Functional group
Reduction of nitro compounds
Nitration
Nitrite (also an NO2 group, but bonds differently)
Nitroalkene
Nitroglycerin
== References ==
Ethyl butyrate, also known as ethyl butanoate, or butyric ether, is an ester with the chemical formula CH3CH2CH2COOCH2CH3. It is soluble in propylene glycol, paraffin oil, and kerosene. It has a fruity odor, similar to pineapple, and is a key ingredient used as a flavor enhancer in processed orange juices. It also occurs naturally in many fruits, albeit at lower concentrations.
== Uses ==
It is commonly used as an artificial flavoring resembling orange juice and is hence used in nearly all orange juices sold in the US, including those sold as "fresh" or "concentrated". It is also used in alcoholic beverages (e.g. martinis, daiquiris, etc.), as a solvent in perfumery products, and as a plasticizer for cellulose.
Ethyl butyrate is one of the most common chemicals used in flavors and fragrances. It can be used in a variety of flavors: orange (most common), cherry, pineapple, mango, guava, bubblegum, peach, apricot, fig, and plum. Ethyl butyrate is formed in Jamaican rum through the esterification of butyric acid from muck with ethanol during the distillation process, giving Jamaican rum its pleasant flavour. In industrial use, it is also one of the cheapest chemicals, which only adds to its popularity.
== Production ==
It can be synthesized by reacting ethanol and butyric acid. This is a condensation reaction, meaning water is produced in the reaction as a byproduct. Ethyl butyrate from natural sources can be distinguished from synthetic ethyl butyrate by Stable Isotope Ratio Analysis (SIRA).
== Table of physical properties ==
== See also ==
Butyric acid
Butyrates
Methyl butyrate
== References ==
== External links ==
MSDS sheet
Sorption of ethyl butyrate and octanal constituents of orange essence by polymeric adsorbents Archived 2009-05-01 at the Wayback Machine
Biosynthesis of ethyl butyrate using immobilized lipase: a statistical approach Archived 2009-05-01 at the Wayback Machine
A group-contribution method in chemistry is a technique to estimate and predict thermodynamic and other properties from molecular structures.
== Introduction ==
In today's chemical processes hundreds of thousands of components are used. The Chemical Abstracts Service registry lists 56 million substances, but many of these are only of scientific interest.
Process designers need to know some basic chemical properties of the components and their mixtures. Experimental measurement is often too expensive.
Predictive methods can replace measurements if they provide sufficiently good estimations. The estimated properties cannot be as precise as well-made measurements, but for many purposes the quality of estimated properties is sufficient. Predictive methods can also be used to check the results of experimental work.
== Principles ==
A group-contribution method uses the principle that some simple aspects of the structures of chemical components are always the same in many different molecules. The smallest common constituents are the atoms and the bonds. The vast majority of organic components, for example, are built of carbon, hydrogen, oxygen, nitrogen, halogens (not including astatine), and sometimes sulfur or phosphorus. Together with single, double, and triple bonds, only about ten atom types and three bond types are needed to build thousands of components. The next, slightly more complex building blocks of components are functional groups, which are themselves built from a few atoms and bonds.
A group-contribution method is used to predict properties of pure components and mixtures by using group or atom properties. This reduces the number of needed data dramatically. Instead of needing to know the properties of thousands or millions of compounds, only data for a few dozens or hundreds of groups have to be known.
=== Additive group-contribution method ===
The simplest form of a group-contribution method is the determination of a component property by summing up the group contributions
{\displaystyle G_{i}}:
{\displaystyle T_{\text{b}}[{\text{K}}]=198.2+\sum G_{i}.}
This simple form assumes that the property (the normal boiling point in the example) depends strictly linearly on the number of groups, and additionally no interactions between groups or molecules are assumed. This simple approach is used, for example, in the Joback method for some properties; it works well within a limited range of components and properties, but leads to quite large errors if used outside the applicable ranges.
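The purely additive form can be sketched in a few lines of Python. The two group values below are the first-order Joback normal-boiling-point contributions for −CH3 (23.58 K) and −CH2− (22.88 K); propane illustrates the large error such a first-order scheme can give for small molecules:

```python
# Additive group-contribution estimate of the normal boiling point:
#   Tb [K] = 198.2 + sum(G_i)
# Group values are first-order Joback Tb contributions (in K).
JOBACK_TB = {"-CH3": 23.58, "-CH2-": 22.88}

def boiling_point(groups):
    """groups: dict mapping group name -> count of that group in the molecule."""
    return 198.2 + sum(JOBACK_TB[g] * n for g, n in groups.items())

# Propane = 2 x CH3 + 1 x CH2
tb_propane = boiling_point({"-CH3": 2, "-CH2-": 1})
print(f"estimated Tb(propane) = {tb_propane:.2f} K")   # 268.24 K
# The experimental value is about 231.1 K -- the additive first-order
# scheme overshoots badly for such a small molecule, as noted above.
```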
=== Additive group contributions and correlations ===
This technique uses the purely additive group contributions to correlate the wanted property with an easily accessible property. This is often done for the critical temperature: the Guldberg rule implies that Tc is about 3/2 of the normal boiling point, and the group contributions are used to give a more precise value:
{\displaystyle T_{\text{c}}=T_{\text{b}}\left[0.584+0.965\sum G_{i}-\left(\sum G_{i}\right)^{2}\right]^{-1}.}
This approach often gives better results than pure additive equations because the relation with a known property introduces some knowledge about the molecule. Commonly used additional properties are the molecular weight, the number of atoms, chain length, and ring sizes and counts.
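This boiling-point-corrected correlation can be sketched the same way. The group values below are the Joback critical-temperature contributions for −CH3 (0.0141) and −CH2− (0.0189); fed propane's experimental boiling point, the estimate lands close to the experimental critical temperature of 369.8 K:

```python
# Group-contribution correlation of the critical temperature with the
# normal boiling point (the corrected Guldberg rule, Joback form):
#   Tc = Tb / (0.584 + 0.965*S - S**2),  where S = sum(G_i)
JOBACK_TC = {"-CH3": 0.0141, "-CH2-": 0.0189}

def critical_temperature(tb, groups):
    s = sum(JOBACK_TC[g] * n for g, n in groups.items())
    return tb / (0.584 + 0.965 * s - s ** 2)

# Propane: experimental Tb = 231.1 K; experimental Tc = 369.8 K
tc = critical_temperature(231.1, {"-CH3": 2, "-CH2-": 1})
print(f"estimated Tc(propane) = {tc:.1f} K")   # ~368.4 K, within ~0.4%
```

Anchoring the estimate to a measured boiling point is what injects the extra "knowledge about the molecule" mentioned above.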
=== Group interactions ===
For the prediction of mixture properties it is in most cases not sufficient to use a purely additive method. Instead the property is determined from group-interaction parameters:
{\displaystyle P=f(G_{ij}),}
where P stands for property, and Gij for group-interaction value.
A typical group-contribution method using group-interaction values is the UNIFAC method, which estimates activity coefficients. A big disadvantage of the group-interaction model is the need for many more model parameters: where a simple additive model needs only 10 parameters for 10 groups, a group-interaction model already needs 45 (one per pair of groups). Therefore, a group-interaction model normally does not have parameters for all possible combinations.
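The growth in parameter count can be checked directly. Treating the interaction values as symmetric (Gij = Gji, as the 45-for-10 figure implies), the number of pairwise parameters grows roughly quadratically with the number of groups:

```python
# Parameter count: a purely additive model needs one value per group,
# while a symmetric group-interaction model needs one per pair of groups.
from math import comb

def interaction_parameters(n_groups):
    """Number of pairwise interaction parameters for n groups, Gij = Gji."""
    return comb(n_groups, 2)

print(interaction_parameters(10))   # 45, as stated in the text
print(interaction_parameters(50))   # 1225 -- why full tables are rarely complete
```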
=== Group contributions of higher orders ===
Some newer methods introduce second-order groups. These can be super-groups containing several first-order (standard) groups. This allows the introduction of new parameters for the position of groups. Another possibility is to modify first-order group contributions if specific other groups are also present.
While the majority of group-contribution methods give results for the gas phase, a new method was recently created for estimating the standard Gibbs free energy of formation (ΔfG′°) and of reaction (ΔrG′°) in biochemical systems: aqueous solution at a temperature of 25 °C and pH = 7 (biochemical conditions). This new aqueous-system method is based on the group-contribution method of Mavrovouniotis.
A free-access tool of this new method in aqueous condition is available on the web.
== Determination of group contributions ==
Group contributions are obtained from known experimental data of well-defined pure components and mixtures. Common sources are thermophysical data banks like the Dortmund Data Bank, the Beilstein database, or the DIPPR data bank (from AIChE). The given pure-component and mixture properties are then assigned to the groups by statistical correlations such as (multi-)linear regression.
Important steps during the development of a new method are:
Evaluation of the quality of available experimental data, elimination of wrong data, and identification of outliers.
Construction of groups.
Searching for additional simple and easily accessible properties that can be used to correlate the sum of group contributions with the examined property.
Finding a good but simple mathematical equation for the relation of the group-contribution sum to the wanted property. The critical pressure, for example, is often determined as Pc = f(ΣGi2).
Fitting the group contributions.
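The fitting step can be sketched with a toy example. The data below are synthetic, generated from assumed "true" contributions of 23.58 K and 22.88 K, so the regression recovers them exactly; a real fit would use a large experimental database:

```python
# Sketch of the fitting step: recover group contributions by
# (multi-)linear regression of (Tb - 198.2) on the group counts.
def lstsq_2(rows, y):
    """Ordinary least squares for two unknowns via the normal equations."""
    s11 = sum(a * a for a, b in rows)
    s12 = sum(a * b for a, b in rows)
    s22 = sum(b * b for a, b in rows)
    r1 = sum(a * yi for (a, b), yi in zip(rows, y))
    r2 = sum(b * yi for (a, b), yi in zip(rows, y))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

# (n_CH3, n_CH2) for ethane, propane, n-butane, n-pentane
counts = [(2, 0), (2, 1), (2, 2), (2, 3)]
# Synthetic boiling points built from the assumed contributions:
tb = [198.2 + 23.58 * a + 22.88 * b for a, b in counts]

g_ch3, g_ch2 = lstsq_2(counts, [t - 198.2 for t in tb])
print(f"G(CH3) = {g_ch3:.2f} K, G(CH2) = {g_ch2:.2f} K")
```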
The reliability of a method relies mainly on a comprehensive database in which sufficient source data are available for all groups. A small database may lead to a precise reproduction of the data used, but will lead to significant errors when the model is used to predict other systems.
== Group contribution methods ==
=== Joback method ===
The Joback method was published in 1984 by Kevin G. Joback. It can be used to estimate critical temperature, critical pressure, critical volume, standard ideal gas enthalpy of formation, standard ideal gas Gibbs energy of formation, ideal gas heat capacity, enthalpy of vaporization, enthalpy of fusion, normal boiling point, freezing point, and liquid viscosity. The Joback method is a first-order method, and does not account for molecular interactions.
=== Ambrose method ===
The Ambrose method was published by Douglas Ambrose in 1978 and 1979. It can be used to estimate critical temperature, critical pressure, and critical volume. In addition to the molecular structure, it requires normal boiling point for estimating critical temperature and molecular weight for estimating critical pressure.
=== Nannoolal method ===
The Nannoolal method was published by Yash Nannoolal et al. in 2004. It can be used to estimate the normal boiling point, and it includes first-order and second-order contributions.
== See also ==
UNIFAC
Benson group increment theory
Activity coefficient
== References ==
In chemistry, an amide is a compound with the functional group RnE(=O)xNR2, where x is not zero, E is some element, and each R represents an organic group or hydrogen. It is a derivative of an oxoacid RnE(=O)xOH with a hydroxy group –OH replaced by an amine group –NR2.
Some important subclasses are
carboxamides, or organic amides, where E = carbon, with the general formula RC(=O)NR2.
phosphoramides, where E = phosphorus, such as R2P(=O)NR2
sulfonamides, where E = sulfur, namely RS(=O)2NR2
The term amide may also refer to
amide group, a functional group –C(=O)N= consisting of a carbonyl adjacent to a nitrogen atom.
cyclic amide or lactam, a cyclic compound with the amide group –C(=O)N– in the ring.
metal amide, an ionic compound ("salt") with the azanide anion H2N− (the conjugate base of ammonia) or to a derivative thereof R2N−.
There is also a neutral amino radical (•NH2) and a positively charged NH2+ ion called a nitrenium ion, but both of these are very unstable.
== See also ==
Imide
== References ==
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification. This process is associated with higher costs due to its mode of production. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
== Etymology and pronunciation ==
Chromatography, pronounced , is derived from Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique first used to separate biological pigments.
== History ==
The method was developed by botanist Mikhail Tsvet in 1901–1905 in universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively) they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
== Terms ==
Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture.
Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample.
Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to the concentration of the specific analyte separated.
Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation.
Chromatography – a physical method of separation that distributes components to separate between two phases, one stationary (stationary phase), the other (the mobile phase) moving in a definite direction.
Eluent (sometimes spelled eluant) – the solvent or solvent mixture used in elution chromatography; synonymous with mobile phase.
Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column.
Effluent – the stream flowing out of a chromatographic column. In practice, it is used synonymously with eluate, but the term more precisely refers to the stream independent of whether separation is taking place.
Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column.
Eluotropic series – a list of solvents ranked according to their eluting power.
Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing.
Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase or a polar solvent such as methanol in reverse phase chromatography and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase) where the sample interacts with the stationary phase and is separated.
Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis.
Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions. See also: Kovats' retention index
Sample – the matter analyzed in chromatography. It may consist of a single component or it may be a mixture of components. When the sample is treated in the course of an analysis, the phase or the phases containing the analytes of interest is/are referred to as the sample whereas everything out of interest separated from the sample before or in the course of the analysis is referred to as waste.
Solute – the sample components in partition chromatography.
Solvent – any substance capable of solubilizing another substance, and especially the liquid mobile phase in liquid chromatography.
Stationary phase – the substance fixed in place for the chromatography procedure. Examples include the silica layer in thin-layer chromatography.
Detector – the instrument used for qualitative and quantitative detection of analytes after separation.
Chromatography is based on the concept of the partition coefficient. Any solute partitions between two immiscible solvents. When one solvent is made immobile (by adsorption onto a solid support matrix) and the other mobile, the result is the most common form of chromatography. If the support matrix, or stationary phase, is polar (e.g., cellulose, silica, etc.), the technique is normal (forward) phase chromatography; otherwise it is known as reversed phase, where a non-polar stationary phase (e.g., a non-polar C-18 derivative of silica) is used.
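How the partition coefficient governs retention can be sketched with the standard chromatographic relations tR = tM·(1 + k) and k = K·(Vs/Vm), which are not stated explicitly above; the numbers below are hypothetical:

```python
# Sketch (standard chromatographic theory, hypothetical numbers):
# a solute's partition coefficient K between the stationary and mobile
# phases sets its retention factor k and hence its retention time tR.
def retention_time(K, Vs_over_Vm, t_dead):
    """tR = tM * (1 + k), with k = K * (Vs/Vm).

    K          -- partition coefficient, [solute]stationary/[solute]mobile
    Vs_over_Vm -- phase-volume ratio of the column (assumed known)
    t_dead     -- dead time tM, transit time of an unretained solute (s)
    """
    k = K * Vs_over_Vm
    return t_dead * (1 + k)

# Two solutes that differ only in K elute at different times:
t1 = retention_time(K=4.0,  Vs_over_Vm=0.1, t_dead=60.0)   # 84 s
t2 = retention_time(K=10.0, Vs_over_Vm=0.1, t_dead=60.0)   # 120 s
print(t1, t2)
```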
== Techniques by chromatographic bed shape ==
=== Column chromatography ===
Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall, leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample.
In 1978, W. Clark Still introduced a modified version of column chromatography called flash column chromatography (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage.
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells.
Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein.
=== Planar chromatography ===
Planar chromatography is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific Retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
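The retention factor mentioned above is a simple ratio of migration distances; a minimal sketch with hypothetical plate measurements:

```python
# Retention factor (Rf) in planar chromatography: the distance travelled
# by the compound spot divided by the distance travelled by the solvent
# front, measured from the origin. Always between 0 and 1.
def rf(spot_distance_cm, solvent_front_cm):
    if not 0 <= spot_distance_cm <= solvent_front_cm:
        raise ValueError("spot cannot travel farther than the solvent front")
    return spot_distance_cm / solvent_front_cm

# Hypothetical TLC plate: solvent front at 8.0 cm, spot at 3.2 cm
print(f"Rf = {rf(3.2, 8.0):.2f}")   # Rf = 0.40
```

Because Rf depends on the plate and solvent system, identification is done by comparing against a known standard run under the same conditions.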
==== Paper chromatography ====
Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
==== Thin-layer chromatography (TLC) ====
Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity.
The possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (separation was a separate step).
== Displacement chromatography ==
The basic principle of displacement chromatography is:
A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.
== Techniques by physical state of mobile phase ==
=== Gas chromatography ===
Gas chromatography (GC), also sometimes known as gas–liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and, although more expensive, are becoming widely used, especially for complex mixtures. Capillary columns can be further split into three classes: porous-layer open tubular (PLOT), wall-coated open tubular (WCOT), and support-coated open tubular (SCOT) columns. PLOT columns are unique in that the stationary phase is adsorbed onto the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns combine the two: support particles adhere to the column walls, and a liquid phase is chemically bonded onto those particles. Both types of column are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns, and quartz or fused silica for capillary columns.
Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.
=== Liquid chromatography ===
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography.
In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" and are made up of an unending block of organic or inorganic parts. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 (octadecylsilyl) as the stationary phase) is termed reversed phase liquid chromatography (RPLC).
=== Supercritical fluid chromatography ===
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
== Techniques by separation mechanism ==
=== Affinity chromatography ===
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained.
Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and could be designed specifically for the proteins of interest. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties.
However, liquid chromatography techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on the relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.
=== Ion exchange chromatography ===
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to be retained. There are two types of ion exchange chromatography: cation exchange and anion exchange. In cation-exchange chromatography the stationary phase has a negative charge and the exchangeable ion is a cation, whereas in anion-exchange chromatography the stationary phase has a positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.
=== Size-exclusion chromatography ===
Size-exclusion chromatography (SEC) is also known as gel permeation chromatography (GPC) or gel filtration chromatography and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume).
Smaller molecules are able to enter the pores of the media and, therefore, molecules are trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.
=== Expanded bed adsorption chromatographic separation ===
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top of the bed. The improved distribution of the feedstock liquor added to the expanded bed ensures that the fluid passing through the bed layer displays a state of piston flow, which increases the separation efficiency of the expanded bed.
Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.
== Special techniques ==
=== Reversed-phase chromatography ===
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.
=== Hydrophobic interaction chromatography ===
Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between the analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups. These groups include methyl, ethyl, propyl, butyl, octyl, and phenyl groups. At high salt concentrations, non-polar sidechains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a buffer which is highly polar, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentration, increasing concentration of detergent (which disrupts hydrophobic interactions), or changes in pH. Of critical importance is the type of salt used, with more kosmotropic salts, as defined by the Hofmeister series, providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution.
In general, hydrophobic interaction chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin. The study altered the temperature so as to affect the binding affinity of BSA onto the matrix. It was concluded that cycling the temperature from 40 to 10 °C would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column would only be used a few times. Using temperature to effect the change allows labs to cut costs on buying salt and saves money.
If high salt concentrations and temperature fluctuations are to be avoided, a more hydrophobic competing agent can be used to displace and elute the sample. This so-called salt-independent method of HIC demonstrated direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield, using β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with salt-sensitive samples, since high salt concentrations can precipitate proteins.
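The salt dependence described above can be illustrated with a simple solvophobic-style retention model, in which the logarithm of the retention factor grows roughly linearly with salt concentration. The following Python sketch uses hypothetical, resin-dependent parameters (`ln_k0`, `s`) purely for illustration:

```python
import math

def hic_retention_factor(salt_molar, ln_k0=-2.0, s=1.8):
    """Solvophobic-style model: retention grows exponentially with
    salt concentration. ln_k0 and s are hypothetical values that in
    practice depend on the protein and resin."""
    return math.exp(ln_k0 + s * salt_molar)

# Decreasing ammonium-sulfate gradient -> retention drops, protein elutes.
for m in (1.5, 1.0, 0.5, 0.0):
    print(f"{m:.1f} M salt: k = {hic_retention_factor(m):.2f}")
```

At high salt the protein is strongly retained; as the gradient lowers the salt concentration, the retention factor falls and the protein elutes.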
=== Hydrodynamic chromatography ===
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than 10⁵ daltons. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column.
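The size dependence of HDC elution can be sketched with the commonly quoted relation τ = 1 + 2λ − Cλ², where λ is the ratio of particle radius to channel radius and C (≈ 2.7) accounts for wall effects; the specific radii below are illustrative values, not data from any cited study:

```python
def hdc_relative_velocity(particle_radius_nm, channel_radius_nm, c=2.7):
    """Classic HDC model: tau = 1 + 2*lam - c*lam**2, where lam is the
    particle/channel radius ratio. c ~ 2.7 is a commonly quoted
    wall-effect constant; actual values vary by system."""
    lam = particle_radius_nm / channel_radius_nm
    return 1 + 2 * lam - c * lam ** 2

big = hdc_relative_velocity(55, 500)    # 110 nm diameter particle
small = hdc_relative_velocity(13, 500)  # 26 nm diameter particle
print(big > small)  # larger particles move faster, so they elute first
```

A point particle (λ = 0) travels at the mean carrier velocity (τ = 1); larger particles sample the faster streamlines near the channel center and so elute earlier.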
HDC shares the same order of elution as Size Exclusion Chromatography (SEC) but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel use both methods for polysaccharide characterization and conclude that HDC coupled with multiangle light scattering (MALS) achieves more accurate molar mass distribution when compared to off-line MALS than SEC in significantly less time. This is largely due to SEC being a more destructive technique because of the pores in the column degrading the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is low resolution of analyte peaks, which makes SEC a more viable option when used with chemicals that are not easily degradable and where rapid elution is not important.
HDC plays an especially important role in the field of microfluidics. The first successful apparatus for an HDC-on-a-chip system was proposed by Chmela et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high-resolution, size-based separation with only a 3 mm long channel. Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity-based device.
=== Two-dimensional chromatography ===
In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical properties. Since the mechanism of retention on this new solid support is different from the first-dimension separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation in the second dimension occurs faster than in the first dimension. An example of a two-dimensional planar separation is where the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system.
Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.
=== Simulated moving-bed chromatography ===
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely.
In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, the simulated moving bed technique was proposed. In the simulated moving bed technique, instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed.
True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed, and analyte and waste takeoff, at appropriate locations on any column; it allows the sample entry point to be switched at regular intervals in one direction and the solvent entry point in the opposite direction, while changing the analyte and waste takeoff positions appropriately as well.
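The periodic port switching can be sketched as a rotation of the four ports around a ring of columns. The initial port layout and column count below are hypothetical, chosen only to show how every port advances together at each switch:

```python
def smb_port_positions(n_columns, step):
    """Positions of the four SMB ports after `step` switches, on a ring
    of n_columns. The initial layout (eluent, extract, feed, raffinate
    at columns 0..3) is an illustrative assumption."""
    base = {"eluent": 0, "extract": 1, "feed": 2, "raffinate": 3}
    return {port: (col + step) % n_columns for port, col in base.items()}

print(smb_port_positions(8, 0))  # initial layout
print(smb_port_positions(8, 3))  # all ports advanced 3 columns downstream
```

After `n_columns` switches the layout returns to its starting position, which is what makes the fixed columns behave like an endlessly moving bed.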
=== Pyrolysis gas chromatography ===
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well.
=== Fast protein liquid chromatography ===
Fast protein liquid chromatography (FPLC), is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application.
=== Countercurrent chromatography ===
Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force.
==== Hydrodynamic countercurrent chromatography (CCC) ====
The operating principle of CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution and components of the sample separate in the column due to their partitioning coefficient between the two immiscible liquid phases used. There are many types of CCC available today. These include HSCCC (High Speed CCC) and HPCCC (High Performance CCC). HPCCC is the latest and best-performing version of the instrumentation available currently.
==== Centrifugal partition chromatography (CPC) ====
In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis, creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, a mechanism that can be easily described using the partition coefficients (KD) of solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations, with column sizes ranging from about 10 milliliters to 10 liters in volume.
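Elution governed by the partition coefficient follows the standard relation V_R = V_M + K_D·V_S, where V_M and V_S are the mobile- and stationary-phase volumes held in the column. A minimal sketch, with illustrative column figures:

```python
def cpc_retention_volume(kd, column_volume_ml, stationary_fraction):
    """Standard partition relation V_R = V_M + K_D * V_S used to predict
    elution in CPC/CCC. Column volume and stationary-phase retention
    fraction below are illustrative, not from any specific instrument."""
    vs = column_volume_ml * stationary_fraction  # stationary-phase volume
    vm = column_volume_ml - vs                   # mobile-phase volume
    return vm + kd * vs

# A 250 mL column retaining 60% stationary phase:
print(cpc_retention_volume(0.5, 250, 0.6))  # 175.0 mL
print(cpc_retention_volume(2.0, 250, 0.6))  # 400.0 mL
```

A solute with K_D = 0 elutes at the mobile-phase volume, while higher K_D values mean stronger partitioning into the stationary liquid and later elution.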
=== Periodic counter-current chromatography ===
In contrast to countercurrent chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It is thus much more similar to conventional affinity chromatography than to countercurrent chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for overloading the first column in the series without losing the product that breaks through the column before the resin is fully saturated; the breakthrough product is captured on the subsequent column(s). In the next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion.
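The cyclic column scheduling can be sketched as a simple rotation of the loading train; this is a minimal scheduling illustration, not a description of any particular PCC control system:

```python
from collections import deque

def pcc_cycle(columns):
    """One PCC switch: the lead column leaves the loading train for
    wash/elution, and rejoins at the tail once re-equilibrated.
    Breakthrough from the new lead column is caught downstream."""
    train = deque(columns)
    eluting = train.popleft()  # first column goes off-line to wash/elute
    train.append(eluting)      # re-equilibrated column re-enters last
    return list(train)

print(pcc_cycle(["A", "B", "C"]))  # ['B', 'C', 'A']
```

Repeating the cycle rotates every column through the load, wash/elute, and re-equilibrate roles in turn.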
=== Chiral chromatography ===
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.
Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases nonracemic mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g. HPLC without a chiral mobile phase or stationary phase).
=== Aqueous normal-phase chromatography ===
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.
== Applications ==
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environment analysis, and hospitals.
== External links ==
IUPAC Nomenclature for Chromatography
Overlapping Peaks Program – Learning by Simulations
Chromatography Videos – MIT OCW – Digital Lab Techniques Manual
Chromatography Equations Calculators – MicroSolv Technology Corporation
Gas chromatography–mass spectrometry (GC–MS) is an analytical method that combines the features of gas-chromatography and mass spectrometry to identify different substances within a test sample. Applications of GC–MS include drug detection, fire investigation, environmental analysis, explosives investigation, food and flavor analysis, and identification of unknown samples, including that of material samples obtained from planet Mars during probe missions as early as the 1970s. GC–MS can also be used in airport security to detect substances in luggage or on human beings. Additionally, it can identify trace elements in materials that were previously thought to have disintegrated beyond identification. Like liquid chromatography–mass spectrometry, it allows analysis and detection even of tiny amounts of a substance.
GC–MS has been regarded as a "gold standard" for forensic substance identification because it is used to perform a 100% specific test, which positively identifies the presence of a particular substance. A nonspecific test merely indicates that any of several substances in a category is present. Although a nonspecific test could statistically suggest the identity of the substance, this could lead to false positive identification. However, the high temperatures (around 300 °C) used in the GC–MS injection port (and oven) can result in thermal degradation of injected molecules, thus resulting in the measurement of degradation products instead of the actual molecule(s) of interest.
== History ==
The first on-line coupling of gas chromatography to a mass spectrometer was reported in the late 1950s. An interest in coupling the methods had been suggested as early as December 1954, but conventional recording techniques had too poor temporal resolution. Fortunately, time-of-flight mass spectrometry, developed around the same time, made it possible to measure spectra thousands of times per second.
The development of affordable and miniaturized computers has helped in the simplification of the use of this instrument, as well as allowed great improvements in the amount of time it takes to analyze a sample. In 1964, Electronic Associates, Inc. (EAI), a leading U.S. supplier of analog computers, began development of a computer controlled quadrupole mass spectrometer under the direction of Robert E. Finnigan. By 1966 Finnigan and collaborator Mike Uthe's EAI division had sold over 500 quadrupole residual gas-analyzer instruments. In 1967, Finnigan left EAI to form the Finnigan Instrument Corporation along with Roger Sant, T. Z. Chou, Michael Story, Lloyd Friedman, and William Fies. In early 1968, they delivered the first prototype quadrupole GC/MS instruments to Stanford and Purdue University. When Finnigan Instrument Corporation was acquired by Thermo Instrument Systems (later Thermo Fisher Scientific) in 1990, it was considered "the world's leading manufacturer of mass spectrometers".
== Instrumentation ==
The GC–MS is composed of two major building blocks: the gas chromatograph and the mass spectrometer. The gas chromatograph utilizes a capillary column whose properties regarding molecule separation depend on the column's dimensions (length, diameter, film thickness) as well as the phase properties (e.g. 5% phenyl polysiloxane). The difference in the chemical properties between different molecules in a mixture and their relative affinity for the stationary phase of the column will promote separation of the molecules as the sample travels the length of the column. The molecules are retained by the column and then elute (come off) from the column at different times (called the retention time), and this allows the mass spectrometer downstream to capture, ionize, accelerate, deflect, and detect the ionized molecules separately. The mass spectrometer does this by breaking each molecule into ionized fragments and detecting these fragments using their mass-to-charge ratio.
These two components, used together, allow a much finer degree of substance identification than either unit used separately. It is not possible to make an accurate identification of a particular molecule by gas chromatography or mass spectrometry alone. The mass spectrometry process normally requires a very pure sample while gas chromatography using a traditional detector (e.g. Flame ionization detector) cannot differentiate between multiple molecules that happen to take the same amount of time to travel through the column (i.e. have the same retention time), which results in two or more molecules that co-elute. Sometimes two different molecules can also have a similar pattern of ionized fragments in a mass spectrometer (mass spectrum). Combining the two processes reduces the possibility of error, as it is extremely unlikely that two different molecules will behave in the same way in both a gas chromatograph and a mass spectrometer. Therefore, when an identifying mass spectrum appears at a characteristic retention time in a GC–MS analysis, it typically increases certainty that the analyte of interest is in the sample.
=== Purge and trap GC–MS ===
For the analysis of volatile compounds, a purge and trap (P&T) concentrator system may be used to introduce samples. The target analytes are extracted by mixing the sample with water and purging it with an inert gas (e.g. nitrogen) in an airtight chamber; this is known as purging or sparging. The volatile compounds move into the headspace above the water and are drawn along a pressure gradient (caused by the introduction of the purge gas) out of the chamber. The volatile compounds are drawn along a heated line onto a 'trap', a column of adsorbent material at ambient temperature that holds the compounds by returning them to the liquid phase. The trap is then heated and the sample compounds are introduced to the GC–MS column via a volatiles interface, which is a split inlet system. P&T GC–MS is particularly suited to volatile organic compounds (VOCs) and BTEX compounds (aromatic compounds associated with petroleum).
A faster alternative is the "purge-closed loop" system. In this system the inert gas is bubbled through the water until the concentrations of organic compounds in the vapor phase are at equilibrium with concentrations in the aqueous phase. The gas phase is then analysed directly.
=== Types of mass spectrometer detectors ===
The most common type of mass spectrometer (MS) associated with a gas chromatograph (GC) is the quadrupole mass spectrometer, sometimes referred to by the Hewlett-Packard (now Agilent) trade name "Mass Selective Detector" (MSD). Another relatively common detector is the ion trap mass spectrometer. Additionally one may find a magnetic sector mass spectrometer, however these particular instruments are expensive and bulky and not typically found in high-throughput service laboratories. Other detectors may be encountered such as time of flight (TOF), tandem quadrupoles (MS-MS) (see below), or in the case of an ion trap MSn where n indicates the number mass spectrometry stages.
=== GC–tandem MS ===
When a second phase of mass fragmentation is added, for example using a second quadrupole in a quadrupole instrument, it is called tandem MS (MS/MS). MS/MS can sometimes be used to quantitate low levels of target compounds in the presence of a high sample matrix background.
The first quadrupole (Q1) is connected with a collision cell (Q2) and another quadrupole (Q3). Both quadrupoles can be used in scanning or static mode, depending on the type of MS/MS analysis being performed. Types of analysis include product ion scan, precursor ion scan, selected reaction monitoring (SRM) (sometimes referred to as multiple reaction monitoring (MRM)) and neutral loss scan. For example: When Q1 is in static mode (looking at one mass only as in SIM), and Q3 is in scanning mode, one obtains a so-called product ion spectrum (also called "daughter spectrum"). From this spectrum, one can select a prominent product ion which can be the product ion for the chosen precursor ion. The pair is called a "transition" and forms the basis for SRM. SRM is highly specific and virtually eliminates matrix background.
== Ionization ==
After the molecules travel the length of the column, pass through the transfer line and enter into the mass spectrometer they are ionized by various methods with typically only one method being used at any given time. Once the sample is fragmented it will then be detected, usually by an electron multiplier, which essentially turns the ionized mass fragment into an electrical signal that is then detected.
The ionization technique chosen is independent of using full scan or SIM.
=== Electron ionization ===
By far the most common and perhaps standard form of ionization is electron ionization (EI). The molecules enter the MS (the source is a quadrupole or the ion trap itself in an ion trap MS) where they are bombarded with free electrons emitted from a filament, not unlike the filament one would find in a standard light bulb. The electrons bombard the molecules, causing them to fragment in a characteristic and reproducible way. This "hard ionization" technique results in the creation of many fragments of low mass-to-charge ratio (m/z) and few, if any, ions approaching the mass of the intact molecule. Mass spectrometrists regard hard ionization as the bombardment of molecules with electrons, whereas "soft ionization" imparts charge through collision of the molecule with an introduced gas. The molecular fragmentation pattern is dependent upon the electron energy applied to the system, typically 70 eV (electronvolts). The use of 70 eV facilitates comparison of generated spectra with library spectra using manufacturer-supplied software or software developed by the National Institute of Standards and Technology (NIST). Spectral library searches employ matching algorithms such as probability-based matching and dot-product matching that are used with methods of analysis written by many method standardization agencies. Sources of libraries include NIST, Wiley, the AAFS, and instrument manufacturers.
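The dot-product matching mentioned above can be sketched as a cosine similarity between spectra represented as m/z-to-intensity maps. Real library search scores (e.g. the NIST match factor) additionally weight intensities by m/z, which is omitted here, and the spectra shown are illustrative:

```python
import math

def dot_product_match(spec_a, spec_b):
    """Cosine/dot-product similarity between two EI spectra given as
    {m/z: intensity} dicts. Returns 1.0 for identical relative spectra,
    0.0 for spectra sharing no peaks. Simplified: no m/z weighting."""
    mzs = set(spec_a) | set(spec_b)
    num = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mzs)
    na = math.sqrt(sum(v * v for v in spec_a.values()))
    nb = math.sqrt(sum(v * v for v in spec_b.values()))
    return num / (na * nb) if na and nb else 0.0

unknown = {91: 100, 92: 60, 65: 10}           # illustrative intensities
library_entry = {91: 100, 92: 55, 65: 12}     # hypothetical library spectrum
print(round(dot_product_match(unknown, library_entry), 3))
```

A score near 1.0 indicates a close spectral match; in practice the score is combined with the retention time before an identification is reported.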
==== Cold electron ionization ====
The "hard ionization" process of electron ionization can be softened by the cooling of the molecules before their ionization, resulting in mass spectra that are richer in information. In this method named cold electron ionization (cold-EI) the molecules exit the GC column, mixed with added helium make up gas and expand into vacuum through a specially designed supersonic nozzle, forming a supersonic molecular beam (SMB). Collisions with the make up gas at the expanding supersonic jet reduce the internal vibrational (and rotational) energy of the analyte molecules, hence reducing the degree of fragmentation caused by the electrons during the ionization process. Cold-EI mass spectra are characterized by an abundant molecular ion while the usual fragmentation pattern is retained, thus making cold-EI mass spectra compatible with library search identification techniques. The enhanced molecular ions increase the identification probabilities of both known and unknown compounds, amplify isomer mass spectral effects and enable the use of isotope abundance analysis for the elucidation of elemental formulas.
=== Chemical ionization ===
In chemical ionization (CI) a reagent gas, typically methane or ammonia is introduced into the mass spectrometer. Depending on the technique (positive CI or negative CI) chosen, this reagent gas will interact with the electrons and analyte and cause a 'soft' ionization of the molecule of interest. A softer ionization fragments the molecule to a lower degree than the hard ionization of EI. One of the main benefits of using chemical ionization is that a mass fragment closely corresponding to the molecular weight of the analyte of interest is produced.
In positive chemical ionization (PCI) the reagent gas interacts with the target molecule, most often via proton transfer. This produces the protonated molecule [M+H]+ in relatively high abundance.
In negative chemical ionization (NCI) the reagent gas moderates the energy of the free electrons reaching the target analyte. This decreased energy typically leaves the molecular anion in great supply, with little fragmentation.
== Analysis ==
A mass spectrometer is typically utilized in one of two ways: full scan or selective ion monitoring (SIM). The typical GC–MS instrument is capable of performing both functions either individually or concomitantly, depending on the setup of the particular instrument.
The primary goal of instrument analysis is to quantify an amount of substance. This is done by comparing the relative concentrations among the atomic masses in the generated spectrum. Two kinds of analysis are possible, comparative and original. Comparative analysis essentially compares the given spectrum to a spectrum library to see if its characteristics are present for some sample in the library. This is best performed by a computer because there are a myriad of visual distortions that can take place due to variations in scale. Computers can also simultaneously correlate more data (such as the retention times identified by GC), to more accurately relate certain data. Deep learning was shown to lead to promising results in the identification of VOCs from raw GC–MS data.
Another method of analysis measures the peaks in relation to one another. In this method, the tallest peak is assigned a value of 100%, and the other peaks are assigned values proportional to it; all values above 3% are typically reported. The total mass of the unknown compound is normally indicated by the parent peak. The value of this parent peak can be used to fit a chemical formula containing the various elements which are believed to be in the compound. The isotope pattern in the spectrum, which is unique for elements that have several natural isotopes, can also be used to identify the various elements present. Once a chemical formula has been matched to the spectrum, the molecular structure and bonding can be identified, and must be consistent with the characteristics recorded by GC–MS. Typically, this identification is done automatically by programs which come with the instrument, given a list of the elements which could be present in the sample.
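The base-peak normalization described above can be sketched as follows; the raw intensities are illustrative:

```python
def normalize_to_base_peak(peaks, threshold=3.0):
    """Scale intensities so the tallest (base) peak is 100% and keep
    only peaks above the reporting threshold (3%, as described above)."""
    base = max(peaks.values())
    scaled = {mz: 100.0 * i / base for mz, i in peaks.items()}
    return {mz: round(i, 1) for mz, i in scaled.items() if i > threshold}

raw = {43: 820, 58: 4100, 59: 95}  # hypothetical raw ion counts
print(normalize_to_base_peak(raw))  # {43: 20.0, 58: 100.0}
```

The peak at m/z 59 falls below the 3% threshold after normalization and is dropped from the reported spectrum.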
A "full spectrum" analysis considers all the "peaks" within a spectrum. Conversely, selective ion monitoring (SIM) only monitors selected ions associated with a specific substance. This is done on the assumption that at a given retention time, a set of ions is characteristic of a certain compound. This is a fast and efficient analysis, especially if the analyst has previous information about a sample or is only looking for a few specific substances. When the amount of information collected about the ions in a given gas chromatographic peak decreases, the sensitivity of the analysis increases. So, SIM analysis allows for a smaller quantity of a compound to be detected and measured, but the degree of certainty about the identity of that compound is reduced.
=== Full scan MS ===
When collecting data in the full scan mode, a target range of mass fragments is determined and put into the instrument's method. An example of a typical broad range of mass fragments to monitor would be m/z 50 to m/z 400. The determination of what range to use is largely dictated by what one anticipates being in the sample, while being cognizant of the solvent and other possible interferences. An MS should not be set to look for mass fragments too low, or else one may detect air (found as m/z 28 due to nitrogen), carbon dioxide (m/z 44), or other possible interferences. Additionally, if one uses a large scan range, the sensitivity of the instrument is decreased because fewer scans are performed per second, since each scan has to cover a wide range of mass fragments.
Full scan is useful in determining unknown compounds in a sample. It provides more information than SIM when it comes to confirming or resolving compounds in a sample. During instrument method development it may be common to first analyze test solutions in full scan mode to determine the retention time and the mass fragment fingerprint before moving to a SIM instrument method.
=== Selective ion monitoring ===
In selective ion monitoring (SIM) certain ion fragments are entered into the instrument method and only those mass fragments are detected by the mass spectrometer. The advantages of SIM are that the detection limit is lower since the instrument is only looking at a small number of fragments (e.g. three fragments) during each scan. More scans can take place each second. Since only a few mass fragments of interest are being monitored, matrix interferences are typically lower. To additionally confirm the likelihood of a potentially positive result, it is relatively important to be sure that the ion ratios of the various mass fragments are comparable to a known reference standard.
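Such an ion-ratio check can be sketched as below; the ±20% acceptance window and the m/z values are illustrative assumptions (real methods set tolerances per the applicable guideline):

```python
def ion_ratios_match(sample, reference, tolerance=0.20):
    """Compare qualifier/quantifier ion ratios of a sample against a
    reference standard. Spectra are {m/z: intensity} dicts; the most
    intense reference ion is taken as the quantifier. The 20% relative
    tolerance is an illustrative acceptance window."""
    quant = max(reference, key=reference.get)
    for mz in reference:
        if mz == quant:
            continue
        ref_ratio = reference[mz] / reference[quant]
        sam_ratio = sample.get(mz, 0.0) / sample[quant]
        if abs(sam_ratio - ref_ratio) > tolerance * ref_ratio:
            return False
    return True

ref = {272: 100.0, 274: 65.0, 237: 30.0}  # hypothetical reference ratios
print(ion_ratios_match({272: 5000, 274: 3300, 237: 1450}, ref))  # True
```

If any qualifier ion falls outside the window relative to the quantifier ion, the result would not be reported as a confirmed identification.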
== Applications ==
=== Environmental monitoring and cleanup ===
GC–MS is becoming the tool of choice for tracking organic pollutants in the environment. The cost of GC–MS equipment has decreased significantly, and the reliability has increased at the same time, which has contributed to its increased adoption in environmental studies.
=== Criminal forensics ===
GC–MS can analyze the particles from a human body in order to help link a criminal to a crime. The analysis of fire debris using GC–MS is well established, and there is even an established American Society for Testing and Materials (ASTM) standard for fire debris analysis. GCMS/MS is especially useful here as samples often contain very complex matrices, and results used in court need to be highly accurate.
=== Law enforcement ===
GC–MS is increasingly used for detection of illegal narcotics, and may eventually supplant drug-sniffing dogs. A simple and selective GC–MS method for detecting marijuana usage was developed by the Robert Koch Institute in Germany. It involves identifying an acid metabolite of tetrahydrocannabinol (THC), the active ingredient in marijuana, in urine samples by employing derivatization in the sample preparation. GC–MS is also commonly used in forensic toxicology to find drugs and/or poisons in biological specimens of suspects, victims, or the deceased. In drug screening, GC–MS methods frequently utilize liquid-liquid extraction as a part of sample preparation, in which target compounds are extracted from blood plasma.
=== Sports anti-doping analysis ===
GC–MS is the main tool used in sports anti-doping laboratories to test athletes' urine samples for prohibited performance-enhancing drugs, for example anabolic steroids.
=== Security ===
In a post–September 11 development, explosive detection systems have become a part of all US airports. These systems run on a host of technologies, many of them based on GC–MS. There are only three manufacturers certified by the FAA to provide these systems, one of which is Thermo Detection (formerly Thermedics), which produces the EGIS, a GC–MS-based line of explosives detectors. The other two manufacturers are Barringer Technologies, now owned by Smith's Detection Systems, and Ion Track Instruments, part of General Electric Infrastructure Security Systems.
=== Chemical warfare agent detection ===
As part of the post-September 11 drive towards increased capability in homeland security and public health preparedness, traditional GC–MS units with transmission quadrupole mass spectrometers, as well as those with cylindrical ion trap (CIT-MS) and toroidal ion trap (T-ITMS) mass spectrometers, have been modified for field portability and near real-time detection of chemical warfare agents (CWA) such as sarin, soman, and VX. These complex and large GC–MS systems have been modified and configured with resistively heated low thermal mass (LTM) gas chromatographs that reduce analysis time to less than ten percent of the time required in traditional laboratory systems. Additionally, the systems are smaller and more mobile, including units that are mounted in mobile analytical laboratories (MAL), such as those used by the United States Marine Corps Chemical and Biological Incident Response Force MAL and other similar laboratories, and systems that are hand-carried by two-person teams or individuals, owing to the smaller mass detectors. Depending on the system, the analytes can be introduced via liquid injection, desorbed from sorbent tubes through a thermal desorption process, or with solid-phase microextraction (SPME).
=== Chemical engineering ===
GC–MS is used for the analysis of unknown organic compound mixtures. One critical use of this technology is determining the composition of bio-oils processed from raw biomass. GC–MS is also utilized to identify the continuous-phase component in a smart material, magnetorheological (MR) fluid.
=== Food, beverage and perfume analysis ===
Foods and beverages contain numerous aromatic compounds, some naturally present in the raw materials and some forming during processing. GC–MS is extensively used for the analysis of these compounds, which include esters, fatty acids, alcohols, aldehydes, terpenes, etc. It is also used to detect and measure contaminants from spoilage or adulteration, which may be harmful and are often regulated by governmental agencies, for example pesticides.
=== Astrochemistry ===
Several GC–MS systems have left Earth. Two were brought to Mars by the Viking program. Venera 11 and 12 and Pioneer Venus analysed the atmosphere of Venus with GC–MS. The Huygens probe of the Cassini–Huygens mission landed a GC–MS on Saturn's largest moon, Titan. The MSL Curiosity rover's Sample Analysis at Mars (SAM) instrument contains both a gas chromatograph and a quadrupole mass spectrometer that can be used in tandem as a GC–MS. The material of the comet 67P/Churyumov–Gerasimenko was analysed by the Rosetta mission with a chiral GC–MS in 2014.
=== Medicine ===
Dozens of congenital metabolic diseases, also known as inborn errors of metabolism (IEM), are now detectable by newborn screening tests, especially testing using gas chromatography–mass spectrometry. GC–MS can determine compounds in urine even at minor concentrations. These compounds are normally not present but appear in individuals suffering from metabolic disorders. This is increasingly becoming a common way to diagnose IEM for earlier diagnosis and institution of treatment, eventually leading to a better outcome. It is now possible to test a newborn for over 100 genetic metabolic disorders by a urine test at birth based on GC–MS.
In combination with isotopic labeling of metabolic compounds, GC–MS is used for determining metabolic activity. Most applications are based on the use of 13C labeling and the measurement of 13C/12C ratios with an isotope ratio mass spectrometer (IRMS): a mass spectrometer with a detector designed to measure a few selected ions and return values as ratios.
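The ratio measurement itself is simple arithmetic on the two ion-beam intensities; the following is a minimal sketch with arbitrary, hypothetical intensity values (not real IRMS data):

```python
# Illustrative sketch: compute a 13C/12C ratio and a 13C atom fraction from
# hypothetical ion-beam intensities (arbitrary units).

def carbon_isotope_ratio(i13, i12):
    """13C/12C ratio from the two ion-beam intensities."""
    return i13 / i12

def atom_percent_13c(i13, i12):
    """Fraction of carbon atoms that are 13C, as a percentage."""
    return 100.0 * i13 / (i12 + i13)

r = carbon_isotope_ratio(1.12, 100.0)
ap = atom_percent_13c(1.12, 100.0)
print(round(r, 4), round(ap, 3))  # → 0.0112 1.108
```

Comparing such ratios between labeled and unlabeled samples is what lets the enrichment, and hence the metabolic activity, be quantified.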
== See also ==
Capillary electrophoresis–mass spectrometry
Ion-mobility spectrometry–mass spectrometry
Liquid chromatography–mass spectrometry
Prolate trochoidal mass spectrometer
Pyrolysis–gas chromatography–mass spectrometry
== References ==
== Bibliography ==
== External links ==
Gas+chromatography-mass+spectrometry at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Golm Metabolome Database, a mass spectral reference database of plant metabolites | Wikipedia/Gas_chromatography-mass_spectrometry |
Gas chromatography-olfactometry (GC-O) is a technique that integrates the separation of volatile compounds using a gas chromatograph with the detection of odour using an olfactometer (human assessor). It was first invented and applied in 1964 by Fuller and co-workers. While GC separates volatile compounds from an extract, human olfaction detects the odour activity of each eluting compound. In this olfactometric detection, a human assessor may qualitatively determine whether a compound has odour activity or describe the odour perceived, or quantitatively evaluate the intensity of the odour or the duration of the odour activity. The olfactometric detection of compounds allows the assessment of the relationship between a quantified substance and the human perception of its odour, without instrumental detection limits present in other kinds of detectors. Compound identification still requires use of other detectors, such as mass spectrometry, with analytical standards.
== Olfactory perception ==
The properties of a compound relating to human olfactory perception include its odour quality, threshold and intensity as a function of its concentration.
The odour quality of a (odour-active) compound is assessed using odour descriptors in sensory descriptive analyses. It shows the sensory–chemical relationship in volatile compounds. The odour quality of a compound may change with its concentration.
The absolute threshold of a compound is the minimum concentration at which it can be detected. In a mixture of volatile compounds, only the compounds present at concentrations above their thresholds contribute to the odour. This property can be represented by the odour threshold (OT), the minimum concentration at which the odour is perceived by 50% of a human panel without determining its quality, or the recognition threshold, the minimum concentration at which the odour is perceived and can be described by 50% of a human panel.
The intensity of perception of a compound is positively correlated with its concentration. It is represented by the unique psychometric or concentration-response function of the compound. A psychometric function on a log concentration–perceived intensity plot is characterised by its sigmoidal shape: an initial baseline at concentrations below the threshold, a slow rise in response around the inflection point representing the threshold, an exponential rise in response as the concentration exceeds the threshold, and a deceleration of the response to a flat region, the zone of saturation, at which changes in intensity are no longer perceived. On the other hand, a log concentration–log perceived intensity plot, following Stevens's power law, forms a straight line whose exponent characterises the relationship between the two variables.
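The log–log linearity implied by Stevens's power law can be checked numerically. In the sketch below the constants k and n are arbitrary illustrative values, not measured ones:

```python
import math

# Sketch of Stevens's power law: perceived intensity I = k * C**n for a
# concentration C above threshold; k and n are arbitrary illustrative values.

def perceived_intensity(concentration, k=2.0, n=0.5):
    return k * concentration ** n

# On log-log axes the relationship is a straight line whose slope is the
# exponent n, which we can confirm from any two concentrations:
c1, c2 = 10.0, 1000.0
slope = (math.log10(perceived_intensity(c2)) - math.log10(perceived_intensity(c1))) / (
    math.log10(c2) - math.log10(c1))
print(round(slope, 3))  # → 0.5, i.e. the exponent n
```

An exponent below 1 (as here) corresponds to a compressive response: doubling the concentration less than doubles the perceived intensity.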
== Experimental design ==
The apparatus consists of a gas chromatograph equipped with an odour detection port (ODP), in place of or in addition to conventional detectors, from which human assessors sniff the eluates. The odour port is characterised by its nose-cone design connected to the GC instrument by a transfer line. The odour port is commonly glass or polytetrafluoroethylene. It is generally placed 30–60 cm away from the instrument, extending from the side such that it is not affected by the hot GC oven. The deactivated silica transfer line is generally heated to prevent the condensation of less-volatile compounds. It is flexible so that the assessor can adjust it according to their comfortable sitting position. As traditional warm and dry carrier gases may dehydrate the mucous membrane of the nose, volatiles are delivered via auxiliary gas or humidified carrier gas, with relative humidity (RH) of 50–75%, to limit the dehydration.
The olfactometric detector may be coupled with, or connected in parallel to, a flame ionization detector (FID) or mass spectrometer (MS). Moreover, multiple odour ports may be set up. In these cases, the eluate is generally split evenly between the detectors to allow it to reach the detectors simultaneously.
== Methods of detection ==
In a GC-O analysis, various methods are used to determine the odour contribution of a compound or the relative importance of each odorant. The methods can be categorised as (i) detection frequency, (ii) dilution to threshold and (iii) direct intensity.
=== Detection frequency ===
The GC-O analysis is carried out by a panel of 6–12 assessors to count the number of participants who perceive an odour at each retention time. This frequency is then used to represent the relative importance of an odorant in the extract. It is also presumed to relate to the intensity of the odorant at the particular concentration, based on the assumption that individual detection thresholds are normally distributed.
Two different kinds of data can be reported by this method depending on the data collected. First, if only frequency data is available, it is reported as the nasal impact frequency (NIF), the peak height of the olfactometric signal. It is zero if no assessor senses the odour and is incremented by one each time an assessor senses an odour. Second, if both frequency of detection and duration of odour are collected, the surface of NIF (SNIF), the peak area corresponding to the product of frequency of detection (%) and duration of odour (s), can be interpreted. SNIF allows further interpretation of odour compounds beyond just peak height.
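The NIF and SNIF calculations can be sketched as follows; the panel size and the odour durations are hypothetical illustrative values:

```python
# Hypothetical sketch: compute NIF (% of assessors detecting an odour) and
# SNIF (detection frequency x mean odour duration) for one retention-time event.

def nif(detections, panel_size):
    """Nasal impact frequency: percentage of the panel that detected the odour."""
    return 100.0 * len(detections) / panel_size

def snif(detections, panel_size):
    """Surface of NIF: detection frequency (%) multiplied by mean duration (s)."""
    if not detections:
        return 0.0
    mean_duration = sum(detections) / len(detections)
    return nif(detections, panel_size) * mean_duration

# durations (s) reported by the 4 assessors (out of a panel of 8) who
# detected the odour at this retention time
durations = [3.0, 2.5, 4.0, 3.5]
print(nif(durations, 8), snif(durations, 8))  # → 50.0 162.5
```

The SNIF value grows with both how many assessors detect the odour and how long they perceive it, which is why it carries more information than the NIF peak height alone.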
The detection frequency method benefits from its simplicity and lack of requirement for trained assessors, as the signal recorded is binary (presence/absence of odour). On the other hand, a drawback of this method is its reliance on the assumed relationship between detection frequency and perceived odour intensity. Odour-active compounds in food samples are often present at concentrations above their detection thresholds. This means that a compound may be detected by all assessors and therefore reach the ceiling of 100% detection despite further increases in intensity.
=== Dilution to threshold ===
A dilution series of a sample or extract is prepared and assessed for presence of odour. The result can be described as the odour potency of a compound.
One kind of analysis is to measure the maximum dilution in the series in which odour is still perceived. The resulting value is called the flavour dilution (FD) factor in the aroma extraction dilution analysis (AEDA) developed in 1987 by Schieberle and Grosch. On the other hand, another kind of analysis is to also measure the duration of the perceived odour to compute peak areas. The peak areas are known as Charm values in the CharmAnalysis developed in 1984 by Acree and co-workers. The former can then be interpreted as the peak height of the latter. Because the odour threshold of a compound is intended to be measured from a prepared series of dilution (commonly by a factor of 2–3 with 8–10 dilutions), the precision and variation in data can be determined from the dilution factors used.
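An AEDA-style flavour dilution factor can be computed as a minimal sketch. The dilution factor of 3 and the detection pattern below are illustrative values:

```python
# Illustrative sketch of an AEDA-style flavour dilution (FD) factor. The
# dilution series uses a factor of 3, and each entry in the list records
# whether the odour was still perceived at that dilution step.

DILUTION_FACTOR = 3

def fd_factor(odour_detected):
    """FD factor = dilution at the last step where the odour was perceived.
    Step k corresponds to a dilution of DILUTION_FACTOR**k (step 0 = undiluted)."""
    fd = 0
    for step, detected in enumerate(odour_detected):
        if detected:
            fd = DILUTION_FACTOR ** step
    return fd

# odour perceived up to the fourth step of the series, then lost
print(fd_factor([True, True, True, True, False, False]))  # → 27
```

Because the FD factor can only take values on the dilution grid (1, 3, 9, 27, ...), its precision is inherently limited by the dilution factor chosen, as noted above.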
Due to the time demands of this method and the general requirement for multiple assessors to minimise errors, splitting the column into multiple odour ports is beneficial.
=== Direct intensity ===
This method adds to the dilution to threshold method by considering the perceived intensity of the compounds as well. Assessors can report this based on a predetermined scale.
The posterior intensity method measures the maximum intensity perceived for each eluting compound. A panel of assessors is recommended to obtain an averaged signal. On the other hand, the dynamic time-intensity method measures the intensity at different points in time starting from the time of elution, allowing a continuous measurement of the onset, maximum, and decline of the odour intensity. This is used in the Osme (Greek word for odour) method developed in 1992 by Da Silva. An aromagram can then be constructed in a similar way to an FID chromatogram, whereby intensity is plotted as a function of retention time. The peak height corresponds to the maximum intensity perceived, whereas the peak width corresponds to the duration of the odour perceived.
The time requirement may be high for this method owing to the need for assessor training, as a lack of training may result in inconsistencies in scale usage. However, with a trained panel of assessors, the analysis can be done in a relatively short amount of time with high precision.
== Variations ==
Gas chromatography/mass spectrometry-olfactometry (GC/MS-O)
GC-recomposition-olfactometry (GC-R)
Multi-gas chromatography-olfactometry
== References ==
== External links ==
Gas Chromatography–Olfactometry: Principles, Practical Aspects and Applications in Food Analysis | Wikipedia/Gas_chromatography-olfactometry |
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and it is thus a form of purification; it is associated with higher costs because of the larger scale at which it is run. Analytical chromatography is normally done with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
== Etymology and pronunciation ==
Chromatography is derived from Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique, first used to separate biological pigments.
== History ==
The method was developed by botanist Mikhail Tsvet in 1901–1905 in universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively) they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
== Terms ==
Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture.
Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample.
Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to the concentration of the specific analyte separated.
Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation.
Chromatography – a physical method of separation that distributes components to separate between two phases, one stationary (stationary phase), the other (the mobile phase) moving in a definite direction.
Eluent (sometimes spelled eluant) – the solvent or solvent mixture used in elution chromatography; synonymous with mobile phase.
Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column.
Effluent – the stream flowing out of a chromatographic column. In practice, it is used synonymously with eluate, but the term more precisely refers to the stream independent of any separation taking place.
Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column.
Eluotropic series – a list of solvents ranked according to their eluting power.
Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing.
Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase or a polar solvent such as methanol in reverse phase chromatography and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase) where the sample interacts with the stationary phase and is separated.
Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis.
Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions. See also: Kovats' retention index
Sample – the matter analyzed in chromatography. It may consist of a single component or it may be a mixture of components. When the sample is treated in the course of an analysis, the phase or the phases containing the analytes of interest is/are referred to as the sample whereas everything out of interest separated from the sample before or in the course of the analysis is referred to as waste.
Solute – the sample components in partition chromatography.
Solvent – any substance capable of solubilizing another substance, and especially the liquid mobile phase in liquid chromatography.
Stationary phase – the substance fixed in place for the chromatography procedure. Examples include the silica layer in thin-layer chromatography.
Detector – the instrument used for qualitative and quantitative detection of analytes after separation.
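Several of the terms above (chromatogram, retention time, detector signal) can be tied together in a toy model that renders a chromatogram as a sum of Gaussian peaks, one per analyte. The retention times, peak widths, and amounts below are arbitrary illustrative values:

```python
import math

# Toy chromatogram: the detector signal is modelled as a sum of Gaussian
# peaks, each centred at an analyte's retention time, with peak area
# proportional to the amount of that analyte. All values are illustrative.

PEAKS = [  # (retention time / min, peak width sigma / min, relative amount)
    (2.0, 0.05, 1.0),
    (3.5, 0.06, 0.5),
]

def signal(t):
    """Detector response at time t (arbitrary units)."""
    return sum(a * math.exp(-((t - rt) ** 2) / (2 * s ** 2)) for rt, s, a in PEAKS)

# The response peaks at each analyte's retention time and falls to the
# baseline between well-separated peaks:
print(round(signal(2.0), 3), round(signal(3.5), 3), round(signal(2.75), 3))
```

In an optimal system (as the glossary notes) each peak height or area is proportional to the concentration of the corresponding analyte, which is what makes the chromatogram quantitative.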
Chromatography is based on the concept of the partition coefficient. Any solute partitions between two immiscible solvents. When one solvent is made immobile (by adsorption on a solid support matrix) and the other mobile, the result is the most common applications of chromatography. If the matrix support, or stationary phase, is polar (e.g., cellulose, silica) the technique is normal-phase chromatography. Otherwise the technique is known as reversed phase, where a non-polar stationary phase (e.g., a non-polar C18 derivative) is used.
== Techniques by chromatographic bed shape ==
=== Column chromatography ===
Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall, leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample.
In 1978, W. Clark Still introduced a modified version of column chromatography called flash column chromatography (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage.
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells.
Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein.
=== Planar chromatography ===
Planar chromatography is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography), or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
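The Rf calculation is a simple ratio of distances. In the sketch below the measured distances (in cm) are illustrative values:

```python
# Sketch: the retention factor (Rf) in planar chromatography is the distance
# travelled by the compound divided by the distance travelled by the solvent
# front, measured from the origin spot. Distances are illustrative, in cm.

def retention_factor(compound_distance, solvent_front_distance):
    if not 0 <= compound_distance <= solvent_front_distance:
        raise ValueError("compound cannot travel farther than the solvent front")
    return compound_distance / solvent_front_distance

print(retention_factor(3.2, 8.0))  # → 0.4
```

Because Rf is dimensionless and bounded between 0 and 1, it can be compared across plates run under the same conditions, which is what makes it useful for identifying unknowns against reference compounds.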
==== Paper chromatography ====
Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
==== Thin-layer chromatography (TLC) ====
Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity.
Possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (separation of which was a separate step).
== Displacement chromatography ==
The basic principle of displacement chromatography is:
A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.
== Techniques by physical state of mobile phase ==
=== Gas chromatography ===
Gas chromatography (GC), also sometimes known as gas–liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and, although more expensive, are becoming widely used, especially for complex mixtures. Capillary columns can be further split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT) and support-coated open tubular (SCOT) columns. PLOT columns are unique in that the stationary phase is adsorbed onto the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns combine the two types: they have support particles adhered to the column walls, and those particles have a liquid phase chemically bonded onto them. Both types of column are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns and quartz or fused silica for capillary columns.
Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.
=== Liquid chromatography ===
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography.
In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" and are made up of an unending block of organic or inorganic parts. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 (octadecylsilyl) as the stationary phase) is termed reversed phase liquid chromatography (RPLC).
=== Supercritical fluid chromatography ===
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
== Techniques by separation mechanism ==
=== Affinity chromatography ===
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained.
Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and could be designed specifically for the proteins of interest. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties.
However, liquid chromatography techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on the relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.
=== Ion exchange chromatography ===
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to be retained. There are two types of ion exchange chromatography: cation exchange and anion exchange. In cation-exchange chromatography the stationary phase has a negative charge and the exchangeable ion is a cation, whereas in anion-exchange chromatography the stationary phase has a positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.
=== Size-exclusion chromatography ===
Size-exclusion chromatography (SEC) is also known as gel permeation chromatography (GPC) or gel filtration chromatography and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume).
Smaller molecules are able to enter the pores of the media and, therefore, molecules are trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.
=== Expanded bed adsorption chromatographic separation ===
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top. Improved distribution of the feedstock liquor added to the expanded bed ensures that the fluid passing through the bed layer displays piston (plug) flow, which increases the separation efficiency of the expanded bed.
Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.
== Special techniques ==
=== Reversed-phase chromatography ===
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.
=== Hydrophobic interaction chromatography ===
Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between the analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed-phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups, ranging from methyl, ethyl, propyl, butyl, and octyl to phenyl groups. At high salt concentrations, non-polar side chains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a highly polar buffer, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentration, increasing concentration of detergent (which disrupts hydrophobic interactions), or changes in pH. The type of salt used is of critical importance, with more kosmotropic salts, as defined by the Hofmeister series, providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution.
In general, hydrophobic interaction chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer that is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin. The study varied temperature to affect the binding affinity of BSA for the matrix. It concluded that cycling temperature between 40 and 10 degrees Celsius would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column were only used a few times. Using temperature to drive elution allows laboratories to reduce salt consumption and cut costs.
If both high salt concentrations and temperature fluctuations are to be avoided, a more hydrophobic competing agent can be used to displace the sample and elute it. This so-called salt-independent method of HIC achieved direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield, using β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with salt-sensitive samples, since high salt concentrations precipitate proteins.
=== Hydrodynamic chromatography ===
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than
10^5 daltons. HDC differs from other types of chromatography because the separation takes place only in the interstitial volume, which is the volume surrounding and in between particles in a packed column.
HDC shares the same order of elution as size-exclusion chromatography (SEC), but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel used both methods for polysaccharide characterization and concluded that HDC coupled with multiangle light scattering (MALS) achieves a more accurate molar mass distribution, in significantly less time, than SEC with off-line MALS. This is largely because SEC is a more destructive technique: the pores in the column degrade the analyte during separation, which tends to distort the mass distribution. However, the main disadvantage of HDC is the low resolution of analyte peaks, which makes SEC a more viable option for chemicals that are not easily degradable and when rapid elution is not important.
HDC plays an especially important role in the field of microfluidics. The first successful apparatus for HDC-on-a-chip system was proposed by Chmela, et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high resolution, size based separation with only a 3 mm long channel. Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh, et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity based device.
=== Two-dimensional chromatography ===
In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical properties. Since the mechanism of retention on this new solid support is different from the first-dimension separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation in the second dimension proceeds faster than in the first dimension. An example of a two-dimensional separation is where the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system.
Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.
=== Simulated moving-bed chromatography ===
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely.
In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, the simulated moving bed technique was proposed. In the simulated moving bed technique, instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed.
True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of multiple columns in series and a complex valve arrangement. This valve arrangement provides sample and solvent feed, and analyte and waste takeoff, at appropriate locations along the series of columns. It allows the sample-entry position to be switched at regular intervals in one direction and the solvent-entry position in the opposite direction, while the analyte and waste takeoff positions are changed appropriately as well.
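The periodic switching described above can be sketched as a small simulation. This is an illustrative model, not a description of any particular instrument: it assumes a loop of N identical columns with the four ports (eluent inlet, extract outlet, feed inlet, raffinate outlet) spaced one column apart, all advancing by one column position at every switch interval.

```python
# Minimal sketch of SMB port switching. With N columns connected in a
# loop, every port advances by one column position at each switch,
# simulating a stationary phase that "moves" against the mobile phase.
# The initial port layout (one column per zone) is an assumption.

def smb_port_positions(n_columns, switch_index):
    """Return the column index of each port after `switch_index` switches."""
    base = {"eluent": 0, "extract": 1, "feed": 2, "raffinate": 3}
    return {port: (pos + switch_index) % n_columns for port, pos in base.items()}

# Feed enters column 2 initially; after one switch it has advanced to column 3,
# and after a full cycle (N switches) the layout repeats.
print(smb_port_positions(4, 0))
print(smb_port_positions(4, 1))
```

The modulo arithmetic captures the essential point of the text: the bed never moves, but cycling the valve positions around the closed loop of columns is equivalent to moving it.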
=== Pyrolysis gas chromatography ===
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well.
=== Fast protein liquid chromatography ===
Fast protein liquid chromatography (FPLC), is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application.
=== Countercurrent chromatography ===
Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force.
==== Hydrodynamic countercurrent chromatography (CCC) ====
The operating principle of CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution and components of the sample separate in the column due to their partitioning coefficient between the two immiscible liquid phases used. There are many types of CCC available today. These include HSCCC (High Speed CCC) and HPCCC (High Performance CCC). HPCCC is the latest and best-performing version of the instrumentation available currently.
==== Centrifugal partition chromatography (CPC) ====
In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, which mechanism can be easily described using the partition coefficients (KD) of solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations with different sizes of columns ranging from some 10 milliliters to 10 liters in volume.
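Because CPC retention is governed solely by partitioning, a solute's retention volume follows from the standard countercurrent chromatography relation V_R = V_M + K_D · V_S, where V_M and V_S are the mobile- and stationary-phase volumes in the column. A minimal sketch (the numeric values are illustrative, not instrument data):

```python
# Retention volume in CCC/CPC from the partition coefficient K_D:
#   V_R = V_M + K_D * V_S
# V_M = mobile-phase volume, V_S = retained stationary-phase volume.

def retention_volume(kd, v_mobile, v_stationary):
    """Retention volume (same units as the phase volumes, e.g. mL)."""
    return v_mobile + kd * v_stationary

# A solute with K_D = 1 elutes at exactly one total column volume:
v_r = retention_volume(1.0, v_mobile=40.0, v_stationary=60.0)
print(v_r)  # 100.0
```

A solute with K_D = 0 stays entirely in the mobile phase and elutes at V_M; larger K_D values give proportionally later elution.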
=== Periodic counter-current chromatography ===
In contrast to countercurrent chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It is thus much more similar to conventional affinity chromatography than to countercurrent chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for overloading the first column in this series without losing product, which already breaks through the column before the resin is fully saturated. The breakthrough product is captured on the subsequent column(s). In the next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion.
=== Chiral chromatography ===
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.
Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases nonracemic mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g., HPLC without a chiral mobile phase or stationary phase).
=== Aqueous normal-phase chromatography ===
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.
== Applications ==
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environment analysis, and hospitals.
High-performance liquid chromatography (HPLC), formerly referred to as high-pressure liquid chromatography, is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures can originate from food, chemical, pharmaceutical, biological, environmental, and agricultural sources, among others, which have been dissolved into liquid solutions.
It relies on high-pressure pumps, which deliver mixtures of various solvents, called the mobile phase, through the system; the mobile phase collects the sample mixture on the way and carries it into a cylinder, called the column, packed with solid particles of adsorbent material, called the stationary phase.
Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. These different rates lead to separation as the species flow out of the column into a specific detector such as a UV detector. The output of the detector is a graph, called a chromatogram. Chromatograms are graphical representations of signal intensity versus time or volume, showing peaks, which represent components of the sample. Each component appears at its respective time, called its retention time, with a peak area proportional to its amount.
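Since the peak area is what carries the quantitative information, chromatography software integrates each peak over its retention window. A minimal sketch of that step, using baseline-subtracted trapezoidal integration on illustrative (not measured) data:

```python
# Estimate a component's amount from its chromatogram peak by
# trapezoidal integration of the detector signal over the peak window.

def peak_area(times, signal, baseline=0.0):
    """Baseline-subtracted trapezoidal area under a sampled peak."""
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * ((signal[i] - baseline) + (signal[i - 1] - baseline)) * dt
    return area

times = [0.0, 0.5, 1.0, 1.5, 2.0]          # minutes
signal = [0.0, 2.0, 4.0, 2.0, 0.0]          # a triangular "peak"
print(peak_area(times, signal))             # 4.0 (0.5 * base 2 * height 4)
```

In practice the area is converted to an amount via a calibration curve built from standards of known concentration; the integration itself is the simple step shown here.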
HPLC is widely used for manufacturing (e.g., during the production process of pharmaceutical and biological products), legal (e.g., detecting performance enhancement drugs in urine), research (e.g., separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical (e.g., detecting vitamin D levels in blood serum) purposes.
Chromatography can be described as a mass transfer process involving adsorption and/or partition. As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles (e.g., silica, polymers, etc.), 1.5–50 μm in size, on which various reagents can be bonded. The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents (e.g., water, buffers, acetonitrile and/or methanol) and is referred to as a "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between sample components and adsorbent. These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, most often a combination.
== Operation ==
The liquid chromatograph is a complex instrument built on sophisticated and delicate technology. To operate the system properly, the user needs a basic understanding of how the device acquires and processes data, in order to avoid incorrect data and distorted results.
HPLC is distinguished from traditional ("low pressure") liquid chromatography because operational pressures are significantly higher (around 50–1400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also HPLC columns are made with smaller adsorbent particles (1.5–50 μm in average particle size). This gives HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique.
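The "resolving power" mentioned above is commonly quantified by the resolution Rs between two adjacent peaks. The formula below is the standard textbook definition (not stated in the text), using retention times and baseline peak widths; the numbers are illustrative.

```python
# Chromatographic resolution between two adjacent peaks:
#   Rs = 2 * (t2 - t1) / (w1 + w2)
# t1, t2 = retention times; w1, w2 = baseline peak widths (same units).
# Rs >= 1.5 is conventionally regarded as baseline separation.

def resolution(t1, w1, t2, w2):
    return 2.0 * (t2 - t1) / (w1 + w2)

print(resolution(t1=4.0, w1=0.4, t2=4.9, w2=0.5))  # ≈ 2.0, well resolved
```

Smaller adsorbent particles sharpen peaks (smaller w1, w2), which is exactly why the reduced particle sizes of HPLC columns raise Rs relative to low-pressure columns.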
The schematic of an HPLC instrument typically includes solvent reservoirs, one or more pumps, a solvent degasser, a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation; they pass through the degasser to remove dissolved gasses, are mixed to become the mobile phase, and then flow through the sampler, which brings the sample mixture into the mobile phase stream, which then carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, then directly into a flow cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, hence allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors also provide additional information specific to the analyte's characteristics, such as a UV-VIS spectrum or mass spectrum, which can give insight into its structural features. Commonly used detectors include UV/Vis, photodiode array (PDA)/diode array, and mass spectrometry detectors.
A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios that change over time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows for adjusting the temperature at which the separation is performed.
The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which is a function of specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column) and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte.
Many different types of columns are available, filled with adsorbents varying in particle size, porosity, and surface chemistry. The use of smaller particle size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature.
The most common mode of liquid chromatography is reversed phase, whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common are acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid) or salts to assist in the separation of the sample components. The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, with varying interactions with the stationary and mobile phases. This is the reason why in gradient elution the composition of the mobile phase is varied, typically from low to high eluting strength. The eluting strength of the mobile phase is reflected by analyte retention times, as high eluting strength speeds up the elution (resulting in shorter retention times). For example, a typical gradient profile in reversed-phase chromatography might start at 5% acetonitrile (in water or aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes. Periods of constant mobile phase composition (plateaus) may also be part of a gradient profile. For example, the mobile phase composition may be kept constant at 5% acetonitrile for 1–3 min, followed by a linear change up to 95% acetonitrile.
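A gradient profile of the kind described above (an initial plateau followed by a linear ramp) is just a piecewise-linear function of time. The sketch below uses illustrative segment times consistent with the text (hold 5% acetonitrile for 2 min, ramp to 95% over the following 20 min); real methods are programmed in the instrument software.

```python
# Mobile-phase composition (% acetonitrile) for a plateau-then-ramp
# gradient: hold `start`% until hold_end, ramp linearly to `final`%
# by ramp_end, then hold. Times in minutes; values are illustrative.

def percent_acn(t, hold_end=2.0, ramp_end=22.0, start=5.0, final=95.0):
    if t <= hold_end:
        return start
    if t >= ramp_end:
        return final
    frac = (t - hold_end) / (ramp_end - hold_end)
    return start + frac * (final - start)

print(percent_acn(0.0))   # 5.0  -- initial plateau
print(percent_acn(12.0))  # 50.0 -- midpoint of the linear ramp
print(percent_acn(25.0))  # 95.0 -- final composition
```

The instrument's gradient pump evaluates exactly this kind of schedule continuously, blending the aqueous and organic reservoirs to the computed ratio at each instant.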
The chosen composition of the mobile phase depends on the intensity of interactions between various sample components ("analytes") and stationary phase (e.g., hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction but is continuous, not step-wise.
In the example using a water/acetonitrile gradient, the more hydrophobic components elute (come off the column) later; then, once the mobile phase gets richer in acetonitrile (i.e., becomes a mobile phase of higher eluting strength), their elution speeds up.
The choice of mobile phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation.
== History and development ==
Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient due to the flow rate of solvents being dependent on gravity. Separations took many hours, and sometimes days to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC), however, it was obvious that gas phase separation and analysis of very polar high molecular weight biopolymers was impossible. GC was ineffective for many life science and health applications for biomolecules, because they are mostly non-volatile and thermally unstable at the high temperatures of GC. As a result, alternative methods were hypothesized which would soon result in the development of HPLC.
Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings, Josef Huber, and others in the 1960s that LC could be operated in the high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile phase velocity. These predictions underwent extensive experimentation and refinement from the 1960s through the 1970s, and development continues to this day. Early developmental research began to improve LC particles, for example the historic Zipax, a superficially porous particle.
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make a rudimentary design of an HPLC system. Gas amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. Hardware milestones were made at Dupont IPD (Industrial Polymers Division) such as a low-dwell-volume gradient device being utilized as well as replacing the septum injector with a loop injection valve.
While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology. After the introduction of porous layer particles, there has been a steady trend to reduced particle size to improve efficiency. However, by decreasing particle size, new problems arose. The practical disadvantages stem from the excessive pressure drop needed to force mobile fluid through the column and the difficulty of preparing a uniform packing of extremely fine materials. Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure.
== Types ==
=== Partition chromatography ===
Partition chromatography was one of the first kinds of chromatography that chemists developed, and is rarely used today. The partition-coefficient principle has been applied in paper chromatography, thin-layer chromatography, and gas-phase and liquid–liquid separation applications. The 1952 Nobel Prize in Chemistry was awarded to Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which they used to separate amino acids. Partition chromatography uses a retained solvent on the surface or within the grains or fibers of an "inert" solid supporting matrix, as with paper chromatography, or takes advantage of some coulombic and/or hydrogen-donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile, with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic, basic, and neutral solutes in a single chromatographic run.
The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase), the longer the elution time. The interaction strength depends on the functional groups in the analyte's molecular structure, with more polarized groups (e.g., hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times.
=== Normal–phase chromatography ===
Normal–phase chromatography was one of the first kinds of HPLC that chemists developed, but has decreased in use over the last decades. Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on analyte ability to engage in polar interactions (such as hydrogen-bonding or dipole-dipole type of interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase (e.g., chloroform), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with and is retained by the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors. The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers.
The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents such as traces of water in the mobile phase tend to adsorb to the solid surface of the stationary phase forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal phase chromatography because it is governed almost exclusively by an adsorptive mechanism (i.e., analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still somewhat used for structural isomer separations in both column and thin-layer chromatography formats on activated (dried) silica or alumina supports.
Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any changes in the composition of the mobile phase (e.g., moisture level) causing drifting retention times.
Recently, partition chromatography has become popular again with the development of HILIC bonded phases, which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique.
=== Displacement chromatography ===
The use of displacement chromatography is rather limited, and is mostly used for preparative chromatography. The basic principle is based on a molecule with a high affinity for the chromatography matrix (the displacer) which is used to compete effectively for binding sites, and thus displace all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration.
=== Reversed-phase liquid chromatography (RP-LC) ===
Reversed-phase HPLC (RP-HPLC) is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed-phase methods, substances are retained in the system more strongly the more hydrophobic they are. For the retention of organic materials, the stationary phases, packed inside the columns, consist mainly of porous silica gel granules in various shapes (mainly spherical) and diameters (1.5, 2, 3, 5, 7, or 10 μm), with varying pore diameters (60, 100, 150, or 300 Å), to whose surface various hydrocarbon ligands such as C3, C4, C8, or C18 are chemically bound. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, as well as hybrid silica particles polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components are retained. Most current methods for separating biomedical materials use C18-type columns, sometimes known by trade names such as ODS (octadecylsilane) or RP-18 (reversed phase 18).
The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight chain alkyl group such as C18H37 or C8H17.
With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used by biologists and life-science users that it is often incorrectly referred to as simply "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release.
RP-HPLC operates on the principle of hydrophobic interactions, which originate from the high order of the dipolar water structure and play the most important role in all life-science processes. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by water's drive toward "cavity reduction" around the analyte and the C18 chain, versus the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: 7.3×10−6 J/cm², methanol: 2.2×10−6 J/cm²) and to the hydrophobic surface area of the analyte and the ligand, respectively. Retention can be decreased by adding a less polar solvent (methanol, acetonitrile) to the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis.
Structural properties of the analyte molecule can play an important role in its retention characteristics. In theory, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S–S and others) is retained longer, as it does not interact with the water structure. On the other hand, analytes with a larger polar surface area (due to the presence of polar groups, such as -OH, -NH2, COO− or -NH3+ in their structure) are less retained, as they are better integrated into water. The interactions with the stationary phase can also be affected by steric effects, or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with the surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention.
Retention time increases with the hydrophobic (non-polar) surface area of a molecule. For example, branched-chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly, organic compounds with single C–C bonds frequently elute later than those with a C=C bond or even a triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond.
Another important factor is the mobile phase pH, since it can change the hydrophobic character of an ionizable analyte. For this reason most methods use a buffering agent, such as sodium phosphate, to control the pH. Buffers serve multiple purposes: they control the pH, which affects the ionization state of ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes through the formation of analyte–ammonium adducts. A volatile organic acid such as acetic acid, or most commonly formic acid, is often added to the mobile phase if mass spectrometry is used to analyze the column effluents.
Trifluoroacetic acid (TFA) is widely used as a mobile-phase additive for complex mixtures of biomedical samples, mostly peptides and proteins, mostly with UV-based detectors. It is rarely used in mass spectrometry methods, due to the residues it can leave in the detector and solvent delivery system, which interfere with analysis and detection. However, TFA can be highly effective in improving retention of analytes such as carboxylic acids in applications utilizing other detectors, such as UV-Vis, as it is a fairly strong organic acid. The effects of acids and buffers vary by application, but generally improve chromatographic resolution when dealing with ionizable components.
Reversed-phase columns are quite difficult to damage compared with normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands; however, most reversed-phase columns consist of alkyl-derivatized silica particles and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase. Most types of RP columns should not be used with aqueous bases, as these will hydrolyze the underlying silica particle and dissolve it. Selected brands of RP columns based on hybrid or reinforced silica particles can be used at extreme pH conditions. The use of extremely acidic conditions is also not recommended, as they might hydrolyze the particles as well as corrode the inside walls of the metallic parts of the HPLC equipment.
As a rule, in most cases RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, a test for the metal content of a column is to inject a sample that is a mixture of 2,2'- and 4,4'-bipyridine. Because the 2,2'-bipy can chelate metals, the peak of the 2,2'-bipy will be distorted (tailed) when metal ions are present on the surface of the silica.
=== Size-exclusion chromatography ===
Size-exclusion chromatography (SEC) separates polymer molecules and biomolecules based on differences in their molecular size (actually by a particle's Stokes radius). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres packed inside the column, and depends on the relative size of the analyte molecules and the pore size of the packing. The process also relies on the absence of any interactions with the packing material surface.
Two types of SEC are usually termed:
Gel permeation chromatography (GPC)—separation of synthetic polymers (aqueous or organic soluble). GPC is a powerful technique for polymer characterization using primarily organic solvents.
Gel filtration chromatography (GFC)—separation of water-soluble biopolymers. GFC uses primarily aqueous solvents (typically for aqueous soluble biopolymers, such as proteins, etc.).
The separation principle in SEC is based on the full or partial penetration of the sample's high-molecular-weight substances into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the molecule, the more it is able to penetrate into the pore space, and its movement through the column takes longer. On the other hand, the bigger the molecule, the higher the probability that it will not fully penetrate the pores of the stationary phase, and may even travel around them; thus, it will be eluted earlier. Molecules are separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all and elute together as the first peak in the chromatogram; this elution volume is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules permeate fully through the pores of the stationary-phase particles and are eluted last, marking the end of the chromatogram; they may serve as a total-penetration marker.
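The elution window described above is often summarized by the SEC distribution coefficient. The following sketch (an illustrative helper, not part of any standard library) assumes the usual definitions: V0 is the exclusion volume, Vt the total permeation volume, and Ve the analyte's elution volume:

```python
def sec_distribution_coefficient(ve, v0, vt):
    """SEC distribution coefficient K_d = (V_e - V_0) / (V_t - V_0).

    K_d = 0: the molecule is fully excluded and elutes at the exclusion
    limit V_0; K_d = 1: the molecule fully permeates the pores and
    elutes at the total permeation volume V_t.
    """
    return (ve - v0) / (vt - v0)
```

A fully excluded molecule elutes at V0 (Kd = 0), a fully permeating one at Vt (Kd = 1), and everything separable by SEC elutes in between.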
In the biomedical sciences, SEC is generally considered a low-resolution chromatography and is thus often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins. SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by trapping the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores, as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules: that is, the smaller the molecule, the longer the retention time.
This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by European pharmacopeia) for the molecular weight comparison of different commercially available low-molecular weight heparins.
=== Ion-exchange chromatography ===
Ion-exchange chromatography (IEC) or ion chromatography (IC) is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental and industrial origin, such as those from the metal industry and industrial wastewater, as well as biological systems, pharmaceutical samples, food, etc. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the ions on the column are repelled and elute without retention, while solute ions with a charge opposite to that of the column's charged sites are retained on it. Solute ions that are retained on the column can be eluted from it by changing the mobile-phase composition, for example by increasing its salt concentration or pH, or by increasing the column temperature.
Types of ion exchangers include polystyrene resins, cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel. Polystyrene resins allow cross-linking, which increases the stability of the chains. Higher cross-linking reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and low charge densities, making them suitable for protein separation.
In general, ion exchangers favor the binding of ions of higher charge and smaller radius.
An increase in counter ion (with respect to the functional groups in resins) concentration reduces the retention time, as it creates a strong competition with the solute ions. A decrease in pH reduces the retention time in cation exchange while an increase in pH reduces the retention time in anion exchange. By lowering the pH of the solvent in a cation exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations.
This form of chromatography is widely used in the following applications: water purification, preconcentration of trace components, ligand-exchange chromatography, ion-exchange chromatography of proteins, high-pH anion-exchange chromatography of carbohydrates and oligosaccharides, and others.
=== Bioaffinity chromatography ===
High performance affinity chromatography (HPAC) works by passing a sample solution through a column packed with a stationary phase that contains an immobilized biologically active ligand. The ligand is in fact a substrate that has a specific binding affinity for the target molecule in the sample solution. The target molecule binds to the ligand, while the other molecules in the sample solution pass through the column, having little or no retention. The target molecule is then eluted from the column using a suitable elution buffer.
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction, electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by a simultaneous and concerted action of several of these forces in the complementary binding sites.
=== Aqueous normal-phase chromatography ===
Aqueous normal-phase chromatography (ANP) is also called hydrophilic interaction liquid chromatography (HILIC). It is a chromatographic technique which encompasses the mobile-phase region between reversed-phase chromatography (RP) and organic normal-phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, showing normal-phase elution order while using "reversed-phase solvents", i.e., relatively polar, mostly non-aqueous solvents in the mobile phase. Many biological molecules, especially those found in biological fluids, are small polar compounds that are not well retained by reversed-phase HPLC. This has made hydrophilic interaction LC (HILIC) an attractive alternative and a useful approach for the analysis of polar molecules. Additionally, because HILIC routinely uses aqueous mixtures with polar organic solvents such as ACN and methanol, it can be easily coupled to MS.
== Isocratic and gradient elution ==
A separation in which the mobile phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition). The word was coined by Csaba Horvath who was one of the pioneers of HPLC.
The mobile phase composition does not have to remain constant. A separation in which the mobile phase composition is changed during the separation process is described as a gradient elution. For example, a gradient can start at 10% methanol in water, and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography, solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile, methanol, THF, or isopropanol.
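The 10%-to-90% methanol example above can be written down as a simple linear gradient program. The function below is an illustrative sketch, not an instrument API; its name and defaults are assumptions chosen to mirror the example:

```python
def percent_b(t, t_start=0.0, t_end=20.0, b_start=10.0, b_end=90.0):
    """Percentage of the strong solvent B at time t (minutes) for a
    linear gradient; defaults mirror the example in the text
    (10% -> 90% methanol over 20 minutes)."""
    if t <= t_start:
        return b_start
    if t >= t_end:
        return b_end
    # Linear interpolation between the start and end compositions.
    return b_start + (b_end - b_start) * (t - t_start) / (t_end - t_start)
```

Halfway through the run (t = 10 min), the mobile phase is 50% B; real gradient programs simply chain several such segments, including isocratic holds and steps.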
In isocratic elution, peak width increases linearly with retention time, according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. Using a weaker mobile phase lengthens the runtime and causes the slowly eluting peaks to broaden, reducing sensitivity. A stronger mobile phase would improve the runtime and the broadening of later peaks, but at the cost of diminished peak separation, especially for quickly eluting analytes, which may have insufficient time to fully resolve. This issue is addressed by the changing mobile-phase composition of gradient elution.
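The linear growth of peak width with retention time follows from the standard plate-count relation N = 16 (tR / w)². Solving for the baseline width of a Gaussian peak gives the sketch below, which assumes N stays constant across the run (the isocratic case):

```python
import math

def baseline_width(t_r, n_plates):
    """Baseline width of a Gaussian peak: w = 4 * t_R / sqrt(N).

    With N constant (isocratic conditions), width grows linearly with
    retention time, so late-eluting peaks are broad and short.
    """
    return 4.0 * t_r / math.sqrt(n_plates)
```

On a 10,000-plate column, a peak eluting at 10 min is about 0.4 min wide at the base, while one at 20 min is about 0.8 min wide: doubling the retention time doubles the width.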
By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. This also improves the peak shape for tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. This also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time.
In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if the method is not scaled down or up in proportion to the change.
The driving force in reversed phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component.
== Parameters ==
=== Theoretical ===
The theory of high-performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. This theory has been used as the basis for system-suitability tests, as can be seen in the USP Pharmacopeia: a set of quantitative criteria which test the suitability of the HPLC system for the required analysis at any step of it.
Retention is also represented as a normalized, unitless factor known as the retention factor, or retention parameter, which is the experimental measurement of the capacity ratio, as shown in the Figure of Performance Criteria as well. tR is the retention time of the specific component and t0 is the time it takes for a non-retained substance to elute through the system without any retention; t0 is thus called the void time.
The ratio between the retention factors, k', of every two adjacent peaks in the chromatogram is used in the evaluation of the degree of separation between them, and is called selectivity factor, α, as shown in the Performance Criteria graph.
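The retention factor and selectivity factor defined above can be computed directly from the chromatogram times. A minimal sketch, using the standard definitions k' = (tR − t0)/t0 and α = k'2/k'1:

```python
def retention_factor(t_r, t_0):
    """Retention factor k' = (t_R - t_0) / t_0, where t_0 is the void time."""
    return (t_r - t_0) / t_0

def selectivity(k_1, k_2):
    """Selectivity factor alpha = k'_2 / k'_1 for two adjacent peaks,
    with k_2 the more-retained peak, so that alpha >= 1."""
    return k_2 / k_1
```

For example, with a void time of 1.0 min, peaks at 3.0 and 5.0 min have k' values of 2.0 and 4.0, giving a selectivity factor of 2.0.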
The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile phase composition throughout the run. In gradient conditions, where the mobile phase changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity Pc as a measure of system efficiency. The definition of peak capacity in chromatography is the number of peaks that can be separated within a retention window for a specific pre-defined resolution factor, usually ~1. It can also be envisioned as the runtime measured in units of the average peak width. The equation is shown in the Figure of the performance criteria. In this equation, tg is the gradient time and w(ave) is the average peak width at the base.
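The peak-capacity equation referred to in the figure, Pc = 1 + tg / w(ave), can be sketched as:

```python
def peak_capacity(t_gradient, w_ave):
    """Peak capacity P_c = 1 + t_g / w_ave: the number of peaks of
    average baseline width w_ave that fit into the gradient time t_g."""
    return 1.0 + t_gradient / w_ave
```

For instance, a 30-minute gradient with an average peak width of 0.5 min can hold about 61 resolved peaks.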
The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography) and the rate theory of chromatography (the Van Deemter equation). They are put into practice through analysis of HPLC chromatograms, although rate theory is considered the more accurate theory.
They are analogous to the calculation of the retention factor for a paper-chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are: the efficiency factor (N), the retention factor (kappa prime), and the separation factor (alpha). Together these factors are variables in a resolution equation, which describes how well two components' peaks are separated from, or overlap, each other. These parameters are mostly used only for describing HPLC reversed-phase and HPLC normal-phase separations, since those separations tend to be more subtle than other HPLC modes (e.g., ion exchange and size exclusion).
Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor.
Efficiency factor (N) practically measures how sharp the component peaks on the chromatogram are, as the ratio of a component's retention time to the width of its peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from the mixture; high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with plate number and the 'number of theoretical plates'.
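The plate number is conventionally calculated from the retention time and peak width. The sketch below uses the two standard formulas, one from the baseline width and one from the width at half height:

```python
def plate_count(t_r, w_base):
    """Plate number from the baseline width: N = 16 * (t_R / w)^2."""
    return 16.0 * (t_r / w_base) ** 2

def plate_count_half_height(t_r, w_half):
    """Plate number from the width at half height:
    N = 5.54 * (t_R / w_0.5)^2."""
    return 5.54 * (t_r / w_half) ** 2
```

A peak eluting at 10 min that is 0.5 min wide at the base corresponds to 16 × (10 / 0.5)² = 6400 theoretical plates.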
Retention factor (kappa prime) measures how long a component of the mixture stuck to the column, measured by the retention time of its peak in the chromatogram (since HPLC chromatograms are a function of time). Each chromatogram peak has its own retention factor (e.g., kappa1 for the retention factor of the first peak). This factor may be corrected for by the void volume of the column.
Separation factor (alpha) is a relative comparison of how well two neighboring components of the mixture were separated (i.e., two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for by the void volume of the column. The greater the separation factor value is above 1.0, the better the separation, until about 2.0, beyond which an HPLC method is probably not needed for separation.
Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation.
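One common form of such a resolution equation is the fundamental (Purnell) equation, which combines the three factors named above. A sketch:

```python
import math

def resolution(n_plates, alpha, k_2):
    """Fundamental (Purnell) resolution equation:
    R_s = (sqrt(N) / 4) * ((alpha - 1) / alpha) * (k_2 / (1 + k_2)),
    where k_2 is the retention factor of the later-eluting peak.
    """
    return (math.sqrt(n_plates) / 4.0) * ((alpha - 1.0) / alpha) * (k_2 / (1.0 + k_2))
```

With N = 10000, α = 1.2 and k2 = 4, Rs is about 3.3, comfortably above the ~1.5 usually taken as baseline resolution.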
=== Internal diameter ===
The internal diameter (ID) of an HPLC column is an important parameter. A smaller ID can improve the detection response because of reduced lateral diffusion of the solute band. The ID can also affect separation selectivity when flow rate and injection volumes are not scaled down or up in proportion to the smaller or larger diameter used, in both isocratic and gradient modes. It determines the quantity of analyte that can be loaded onto the column. Larger-diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. Low-ID columns have improved sensitivity and lower solvent consumption, as in the recent ultra-high-performance liquid chromatography (UHPLC).
Larger ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity.
Analytical scale columns (4.6 mm) have been the most common type of columns, though narrower columns are rapidly gaining in popularity. They are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector.
Narrow-bore columns (1–2 mm) are used for applications where more sensitivity is desired, either with special UV-Vis detectors, fluorescence detection, or with other detection methods like liquid chromatography–mass spectrometry.
Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry. They are usually made from fused silica capillaries, rather than the stainless steel tubing that larger columns employ.
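When transferring a method between columns of these different internal diameters, the flow rate (and injection volume) is commonly scaled with the column cross-sectional area so the linear velocity stays constant. A sketch of that scaling rule:

```python
def scaled_flow(flow, id_old, id_new):
    """Scale the flow rate with the column cross-sectional area,
    F_new = F_old * (d_new / d_old)^2, keeping linear velocity constant."""
    return flow * (id_new / id_old) ** 2
```

For example, moving a 1.0 mL/min method from a 4.6 mm analytical column to a 2.1 mm narrow-bore column calls for roughly 0.21 mL/min.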
=== Particle size ===
Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared.
According to the equations for column velocity, efficiency, and backpressure, halving the particle diameter while keeping the column size the same doubles the column velocity and efficiency, but quadruples the backpressure. Small-particle HPLC can also reduce band broadening. Larger particles are used in preparative HPLC (column diameters 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction.
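This trade-off can be expressed with the scaling stated in the text: at fixed column length and flow rate, plate count grows as 1/dp while backpressure grows as 1/dp². A sketch:

```python
def scale_with_particle_size(dp_old, dp_new, n_old, p_old):
    """Rescale plate count (N ~ 1/dp) and backpressure (P ~ 1/dp^2)
    when changing the particle diameter at fixed column length and
    flow rate, per the scaling stated in the text."""
    ratio = dp_old / dp_new
    return n_old * ratio, p_old * ratio ** 2
```

Halving the particles from 5 μm to 2.5 μm doubles the plate count but quadruples the backpressure, which is why each major reduction in particle size has required a new generation of higher-pressure instruments.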
=== Pore size ===
Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area while larger pore size has better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside.
=== Pump pressure ===
Pumps vary in pressure capacity, but their performance is measured on their ability to yield a consistent and reproducible volumetric flow rate. Pressure may reach as high as 60 MPa (6000 lbf/in2), or about 600 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and therefore are able to use much smaller particle sizes in the columns (<2 μm). These "ultra high performance liquid chromatography" systems or UHPLCs, which could also be known as ultra high pressure chromatography systems, can work at up to 120 MPa (17,405 lbf/in2), or about 1200 atmospheres. The term "UPLC" is a trademark of the Waters Corporation, but is sometimes used to refer to the more general technique of UHPLC.
=== Detectors ===
HPLC detectors fall into two main categories: universal or selective. Universal detectors typically measure a bulk property (e.g., refractive index) by measuring a difference in a physical property between the mobile phase alone and the mobile phase with solute, while selective detectors measure a solute property (e.g., UV-Vis absorbance) by responding directly to a physical or chemical property of the solute. HPLC most commonly uses a UV-Vis absorbance detector; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Refractive index detectors are also commonly used; they provide readings by measuring changes in the refractive index of the eluent as it moves through the flow cell. In certain cases it is possible to use multiple detectors; for example, LC-MS normally combines UV-Vis with a mass spectrometer.
When used with an electrochemical detector (ECD), HPLC-ECD selectively detects neurotransmitters such as norepinephrine, dopamine, serotonin, glutamate, GABA, acetylcholine, and others in neurochemical analysis research applications. HPLC-ECD detects neurotransmitters down to the femtomolar range. Other methods to detect neurotransmitters include liquid chromatography-mass spectrometry, ELISA, and radioimmunoassays.
=== Autosamplers ===
Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, HPLC autosamplers apply an injection volume and technique that is exactly the same for each injection; consequently, they provide a high degree of injection volume precision.
It is possible to enable sample stirring within the sampling-chamber, thus promoting homogeneity.
== Applications ==
=== Manufacturing ===
HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. While HPLC can produce extremely high quality (pure) products, it is not always the primary method used in the production of bulk drug materials. According to the European pharmacopoeia, HPLC is used in only 15.5% of syntheses. However, it plays a role in 44% of syntheses in the United States pharmacopoeia. This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. An increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost.
=== Legal ===
This technique is also used for the detection of illicit drugs in various samples. The most common method of drug detection has been the immunoassay, which is much more convenient. However, convenience comes at the cost of specificity and coverage of a wide range of drugs, so HPLC has been used as an alternative method. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate drug concentrations is somewhat insufficient; therefore, HPLC in this context is often performed in conjunction with mass spectrometry. Using liquid chromatography-mass spectrometry (LC-MS) instead of gas chromatography-mass spectrometry (GC-MS) circumvents the necessity of derivatizing with acetylating or alkylating agents, which can be a burdensome extra step. LC-MS has been used to detect a variety of agents like doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, BZDs, ketamine, LSD, cannabis, and pesticides. Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs.
=== Research ===
Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates like anti-fungal and asthma drugs. The technique is also useful for observing multiple species in collected samples, but requires the use of standard solutions when information about species identity is sought. It is used as a method to confirm the results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species.
=== Medical and health sciences ===
Medical use of HPLC typically uses a mass spectrometer (MS) as the detector, so the technique is called LC-MS, or LC-MS/MS for tandem MS, where two types of MS are operated sequentially. When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. Pharmaceutical applications are the major users of HPLC, LC-MS and LC-MS/MS. This includes drug development and pharmacology, which is the scientific study of the effects of drugs and chemicals on living organisms, personalized medicine, public health and diagnostics. While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. One of the most important roles of LC-MS and LC-MS/MS in the clinical lab is Newborn Screening (NBS) for metabolic disorders and follow-up diagnostics. The infants' samples come in the form of dried blood spots (DBS), which are simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally.
Other methods of detection of molecules that are useful for clinical studies have been tested against HPLC, namely immunoassays. In one example of this, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in detection of vitamin D. Useful for diagnosing vitamin D deficiencies in children, it was found that sensitivity and specificity of this CPBA reached only 40% and 60%, respectively, of the capacity of HPLC. While an expensive tool, the accuracy of HPLC is nearly unparalleled.
== See also ==
History of chromatography
Capillary electrochromatography
Column chromatography
Csaba Horváth
Ion chromatography
Micellar liquid chromatography
== References ==
== Further reading ==
L. R. Snyder, J.J. Kirkland, and J. W. Dolan, Introduction to Modern Liquid Chromatography, John Wiley & Sons, New York, 2009.
M.W. Dong, Modern HPLC for practicing scientists. Wiley, 2006.
L. R. Snyder, J.J. Kirkland, and J. L. Glajch, Practical HPLC Method Development, John Wiley & Sons, New York, 1997.
S. Ahuja and H. T. Rasmussen (ed), HPLC Method Development for Pharmaceuticals, Academic Press, 2007.
S. Ahuja and M.W. Dong (ed), Handbook of Pharmaceutical Analysis by HPLC, Elsevier/Academic Press, 2005.
Y. V. Kazakevich and R. LoBrutto (ed.), HPLC for Pharmaceutical Scientists, Wiley, 2007.
U. D. Neue, HPLC Columns: Theory, Technology, and Practice, Wiley-VCH, New York, 1997.
M. C. McMaster, HPLC, a practical user's guide, Wiley, 2007.
== External links ==
HPLC Chromatography Principle, Application [Basic Note] – 2020, at Rxlalit.com
Inverse gas chromatography is a physical characterization analytical technique that is used in the analysis of the surfaces of solids.
Inverse gas chromatography or IGC is a highly sensitive and versatile gas phase technique developed over 40 years ago to study the surface and bulk properties of particulate and fibrous materials. In IGC the roles of the stationary (solid) and mobile (gas or vapor) phases are inverted from traditional analytical gas chromatography (GC); IGC is considered a materials characterization technique (of the solid) rather than an analytical technique (of a gas mixture). In GC, a standard column is used to separate and characterize a mixture of several gases or vapors. In IGC, a single standard gas or vapor (probe molecule) is injected into a column packed with the solid sample under investigation.
During an IGC experiment a pulse or constant concentration of a known gas or vapor (probe molecule) is injected down the column at a fixed carrier gas flow rate. The retention time of the probe molecule is then measured by traditional GC detectors (i.e. flame ionization detector or thermal conductivity detector). Measuring how the retention time changes as a function of probe molecule chemistry, probe molecule size, probe molecule concentration, column temperature, or carrier gas flow rate can elucidate a wide range of physico-chemical properties of the solid under investigation. Several in-depth reviews of IGC have been published previously.
IGC experiments are typically carried out at "infinite dilution", where only small amounts of probe molecule are injected. This region is also called Henry's law region or linear region of the sorption isotherm. At infinite dilution probe-probe interactions are assumed negligible and any retention is only due to probe-solid interactions. The resulting retention volume, VRo, is given by the following equation:
{\displaystyle V_{R}^{\circ }={\frac {j}{m}}F(t_{R}-t_{o}){\frac {T}{273.15}}}
where j is the James–Martin pressure drop correction, m is the sample mass, F is the carrier gas flow rate at standard temperature and pressure, tR is the gross retention time for the injected probe, to is the retention time for a non-interaction probe (i.e. dead-time), and T is the absolute temperature.
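As a sketch, the retention-volume equation can be evaluated directly. The numerical values below are illustrative assumptions, not from a real experiment.

```python
def retention_volume(j, m, F, t_r, t_0, T):
    """Net retention volume V_R deg = (j/m) * F * (t_R - t_0) * T / 273.15.

    j: James-Martin pressure-drop correction (dimensionless)
    m: sample mass (g), F: carrier flow at STP (mL/min)
    t_r, t_0: gross and dead retention times (min), T: column temperature (K)
    """
    return (j / m) * F * (t_r - t_0) * T / 273.15

# Illustrative values: j = 0.95, 0.5 g sample, 10 mL/min carrier flow,
# t_R = 5.2 min, t_0 = 0.4 min, column at 303.15 K
v = retention_volume(0.95, 0.5, 10.0, 5.2, 0.4, 303.15)
print(round(v, 1))  # 101.2 (mL/g)
```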
== Surface energy determination ==
The main application of IGC is to measure the surface energy of solids (fibers, particulates, and films). Surface energy is defined as the amount of energy required to create a unit area of a solid surface; analogous to surface tension of a liquid. Also, the surface energy can be defined as the excess energy at the surface of a material compared to the bulk. The surface energy (γ) is directly related to the thermodynamic work of adhesion (Wadh) between two materials as given by the following equation:
{\displaystyle W_{\mathrm {adh} }=2(\gamma _{1}\gamma _{2})^{1/2}}
where 1 and 2 represent the two components in the composite or blend. When determining if two materials will adhere it is common to compare the work of adhesion with the work of cohesion, Wcoh = 2γ. If the work of adhesion is greater than the work of cohesion, then the two materials are thermodynamically favored to adhere.
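The adhesion-versus-cohesion comparison can be written out directly; the surface-energy values below (in mJ/m²) are illustrative assumptions.

```python
def work_of_adhesion(gamma_1, gamma_2):
    """W_adh = 2 * sqrt(gamma_1 * gamma_2) (geometric-mean rule above)."""
    return 2.0 * (gamma_1 * gamma_2) ** 0.5

def work_of_cohesion(gamma):
    """W_coh = 2 * gamma."""
    return 2.0 * gamma

# Hypothetical pair: filler at 40 mJ/m^2, polymer matrix at 25 mJ/m^2
w_adh = work_of_adhesion(40.0, 25.0)   # ~63.2 mJ/m^2
w_coh = work_of_cohesion(25.0)         # 50.0 mJ/m^2
print(w_adh > w_coh)  # True: adhesion is thermodynamically favored
```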
Surface energies are commonly measured by contact angle methods. However, these methods are ideally suited to flat, uniform surfaces. For contact angle measurements on powders, the powders are typically compressed or adhered to a substrate, which can effectively change the surface characteristics of the powder. Alternatively, the Washburn method can be used, but it has been shown to be affected by column packing, particle size, and pore geometry. IGC is a gas-phase technique and is therefore not subject to the above limitations of the liquid-phase techniques.
To measure the solid surface energy by IGC, a series of injections using different probe molecules is performed at defined column conditions. It is possible to ascertain both the dispersive component of the surface energy and acid-base properties via IGC. For the dispersive surface energy, the retention volumes for a series of n-alkane vapors (i.e. decane, nonane, octane, heptane, etc.) are measured. The Dorris–Gray or Schultz methods can then be used to calculate the dispersive surface energy. Retention volumes for polar probes (i.e. toluene, ethyl acetate, acetone, ethanol, acetonitrile, chloroform, dichloromethane, etc.) can then be used to determine the acid-base characteristics of the solid using either the Gutmann or Good–van Oss theory.
Other parameters accessible by IGC include: heats of sorption, adsorption isotherms, energetic heterogeneity profiles, diffusion coefficients, glass transition temperatures, Hildebrand and Hansen solubility parameters, and crosslink densities.
=== Applications ===
IGC experiments have applications across a wide range of industries. Both surface and bulk properties obtained from IGC can yield vital information for materials ranging from pharmaceuticals to carbon nanotubes. Although surface energy experiments are most common, a wide range of experimental parameters can be controlled in IGC, allowing the determination of a variety of sample parameters. The sections below highlight how IGC experiments are utilized in several industries.
=== Polymers and coatings ===
IGC has been used extensively for the characterization of polymer films, beads, and powders. For instance, IGC was used to study surface properties and interactions among components in paint formulations. IGC has also been used to investigate the degree of crosslinking of ethylene propylene rubber using the Flory–Rehner equation. Additionally, IGC is a sensitive technique for the detection and determination of first- and second-order phase transitions, such as the melting and glass transition temperatures of polymers. Although other techniques like differential scanning calorimetry can measure these transition temperatures, IGC can measure glass transition temperatures as a function of relative humidity.
=== Pharmaceuticals ===
The increasing sophistication of pharmaceutical materials has necessitated the use of more sensitive, thermodynamically based techniques for materials characterization. For these reasons, IGC has seen increased use throughout the pharmaceutical industry. Applications include polymorph characterization, the effect of processing steps such as milling, and drug-carrier interactions for dry powder formulations. In other studies, IGC was used to relate surface energy and acid-base values to triboelectric charging and to differentiate crystalline and amorphous phases.
=== Fibers ===
Surface energy values obtained by IGC have been used extensively on fibrous materials including textiles, natural fibers, glass fibers, and carbon fibers. Most of these and other related studies investigating the surface energy of fibers focus on the use of these fibers in composites. Ultimately, changes in surface energy can be related to composite performance via the works of adhesion and cohesion discussed previously.
=== Nanomaterials ===
Similar to fibers, nanomaterials like carbon nanotubes, nanoclays, and nanosilicas are being used as composite reinforcement agents. Therefore, the surface energy and surface treatment of these materials have been actively studied by IGC. For instance, IGC has been used to study the surface activity of nanosilica, nanohematite, and nanogoethite. Further, IGC was used to characterize the surface of as-received and modified carbon nanotubes.
=== Metakaolins ===
IGC was used to characterize the adsorption surface properties of calcined kaolin (metakaolin) and the grinding effect on this material.
=== Other ===
Other applications for IGC include paper-toner adhesion, wood composites, porous materials, and food materials.
== See also ==
Adhesion
Material characterization
Sessile drop technique
Surface energy
Wetting
Wetting transition
== References ==
The lower critical solution temperature (LCST) or lower consolute temperature is the critical temperature below which the components of a mixture are miscible in all proportions. The word lower indicates that the LCST is a lower bound to a temperature interval of partial miscibility, or miscibility for certain compositions only.
The phase behavior of polymer solutions is an important property involved in the development and design of most polymer-related processes. Partially miscible polymer solutions often exhibit two solubility boundaries, the upper critical solution temperature (UCST) and the LCST, both of which depend on the molar mass and the pressure. At temperatures below LCST, the system is completely miscible in all proportions, whereas above LCST partial liquid miscibility occurs.
In the phase diagram of the mixture components, the LCST is the shared minimum of the concave up spinodal and binodal (or coexistence) curves. It is in general pressure dependent, increasing as a function of increased pressure.
For small molecules, the existence of an LCST is much less common than the existence of an upper critical solution temperature (UCST), but some cases do exist. For example, the system triethylamine-water has an LCST of 19 °C, so that these two substances are miscible in all proportions below 19 °C but not at higher temperatures. The nicotine-water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C.
== Polymer-solvent mixtures ==
Some polymer solutions have an LCST at temperatures higher than the UCST. As shown in the diagram, this means that there is a temperature interval of complete miscibility, with partial miscibility at both higher and lower temperatures.
In the case of polymer solutions, the LCST also depends on polymer degree of polymerization, polydispersity and branching as well as on the polymer's composition and architecture. One of the most studied polymers whose aqueous solutions exhibit LCST is poly(N-isopropylacrylamide). Although it is widely believed that this phase transition occurs at 32 °C (90 °F), the actual temperatures may differ 5 to 10 °C (or even more) depending on the polymer concentration, molar mass of polymer chains, polymer dispersity as well as terminal moieties. Furthermore, other molecules in the polymer solution, such as salts or proteins, can alter the cloud point temperature. Another monomer whose homo- and co-polymers exhibit LCST behavior in solution is 2-(dimethylamino)ethyl methacrylate.
The LCST depends on the polymer preparation and in the case of copolymers, the monomer ratios, as well as the hydrophobic or hydrophilic nature of the polymer.
To date, over 70 examples of non-ionic polymers with an LCST in aqueous solution have been found.
== Physical basis ==
A key physical factor distinguishing the LCST from other mixture behavior is that LCST phase separation is driven by an unfavorable entropy of mixing. Since mixing of the two phases is spontaneous below the LCST and not above, the Gibbs free energy change (ΔG) for the mixing of these two phases is negative below the LCST and positive above, and the entropy change ΔS = –(dΔG/dT) is negative for this mixing process. This is in contrast to the more common and intuitive case in which entropy drives mixing, due to the increased volume accessible to each component upon mixing.
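The sign argument can be made concrete with ΔG = ΔH − TΔS. The numbers below are purely illustrative of an LCST-type system in which both the enthalpy and entropy of mixing are negative.

```python
def delta_g_mix(dH, dS, T):
    """Gibbs free energy of mixing, dG = dH - T*dS (J/mol)."""
    return dH - T * dS

# Illustrative LCST-type system: exothermic mixing (dH < 0, e.g. from
# hydrogen bonding) with unfavorable entropy (dS < 0); mixing is then
# spontaneous only below the crossover temperature T* = dH/dS.
dH, dS = -2000.0, -6.0          # J/mol and J/(mol*K), assumed values
T_star = dH / dS                # ~333 K: LCST-like crossover
print(delta_g_mix(dH, dS, 300.0) < 0)  # True  -> miscible below T*
print(delta_g_mix(dH, dS, 360.0) > 0)  # True  -> demixing above T*
```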
In general, the unfavorable entropy of mixing responsible for the LCST has one of two physical origins. The first is associating interactions between the two components such as strong polar interactions or hydrogen bonds, which prevent random mixing. For example, in the triethylamine-water system, the amine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing which occurs below 19 °C is not due to entropy but due to the enthalpy of formation of the hydrogen bonds. Sufficiently strong, geometrically-informed, associative interactions between solute and solvent(s) have been shown to be sufficient to lead to an LCST.
The second physical factor which can lead to an LCST is compressibility effects, especially in polymer-solvent systems. For nonpolar systems such as polystyrene in cyclohexane, phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy.
== Theory ==
Within statistical mechanics, the LCST may be modeled theoretically via the lattice fluid model, an extension of Flory–Huggins solution theory that incorporates vacancies and thus accounts for variable density and compressibility effects.
Newer extensions of the Flory-Huggins solution theory have shown that the inclusion of only geometrically-informed, associative interactions between solute and solvent are sufficient to observe the LCST.
== Prediction of LCST (θ) ==
There are three groups of methods for correlating and predicting LCSTs. The first group proposes models that are based on a solid theoretical background, using liquid–liquid or vapor–liquid experimental data. These methods require experimental data to adjust the unknown parameters, resulting in limited predictive ability. Another approach uses empirical equations that correlate θ (LCST) with physicochemical properties such as density, critical properties etc., but suffers from the disadvantage that these properties are not always available. A new approach proposed by Liu and Zhong develops linear models for the prediction of θ(LCST) using molecular connectivity indices, which depend only on the solvent and polymer structures. The latter approach has proven to be a very useful technique in quantitative structure–activity/property relationship (QSAR/QSPR) research for polymers and polymer solutions. QSAR/QSPR studies constitute an attempt to reduce the trial-and-error element in the design of compounds with desired activity/properties by establishing mathematical relationships between the activity/property of interest and measurable or computable parameters, such as topological, physicochemical, stereochemical, or electronic indices. More recently, QSPR models for the prediction of θ (LCST) using molecular (electronic, physicochemical etc.) descriptors have been published. Using validated robust QSPR models, experimental time and effort can be reduced significantly, as reliable estimates of θ (LCST) for polymer solutions can be obtained before they are actually synthesized in the laboratory.
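A minimal sketch of the linear-QSPR idea: fit θ(LCST) to structural descriptors by least squares. The descriptor values and LCSTs below are invented for illustration, not taken from the Liu–Zhong work.

```python
import numpy as np

# Rows: hypothetical polymer-solvent systems; columns: an intercept term
# followed by two invented molecular connectivity indices.
X = np.array([[1.0, 2.1, 0.5],
              [1.0, 3.4, 0.9],
              [1.0, 1.8, 0.3],
              [1.0, 2.9, 0.7],
              [1.0, 2.5, 0.6]])
theta = np.array([305.0, 341.0, 298.0, 330.0, 318.0])  # theta(LCST) in K

# Ordinary least squares gives the linear QSPR coefficients
coef, *_ = np.linalg.lstsq(X, theta, rcond=None)
predicted = X @ coef  # linear QSPR estimate of theta(LCST)
```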
== See also ==
Upper critical solution temperature
Coil-globule transition
== References ==
Thin-film drug delivery uses a dissolving film or oral drug strip to administer drugs via absorption in the mouth (buccally or sublingually) and/or via the small intestine (enterically). A film is prepared using hydrophilic polymers and rapidly dissolves on the tongue or in the buccal cavity, delivering the drug to the systemic circulation upon contact with liquid.
Thin-film drug delivery has emerged as an advanced alternative to the traditional tablets, capsules and liquids often associated with prescription and OTC medications. Similar in size, shape and thickness to a postage stamp, thin-film strips are typically designed for oral administration, with the user placing the strip on or under the tongue (sublingual) or along the inside of the cheek (buccal). These delivery routes allow the medication to bypass first-pass metabolism, thereby making it more bioavailable. As the strip dissolves, the drug can enter the bloodstream enterically, buccally or sublingually. In evaluating systemic transmucosal drug delivery, the buccal mucosa is the preferred region as compared to the sublingual mucosa. Oral thin films (oral dissolvable strips) address several of the disadvantages of tablets or capsules, such as dysphagia or the inability to adjust dosing to patient parameters, which often result in a lack of treatment adherence, especially in low-resource settings.
Different buccal delivery products have been marketed or proposed for certain diseases like trigeminal neuralgia, Ménière's disease, diabetes, and addiction. There are many commercial non-drug products that use thin films, such as Mr. Mint and Listerine PocketPaks breath-freshening strips. Since then, thin-film products for other breath fresheners, as well as a number of cold, flu, anti-snoring and gastrointestinal medications, have entered the marketplace. There are currently several projects in development that will deliver prescription drugs using the thin-film dosage form.
Formulation of oral drug strips involves the application of both aesthetic and performance characteristics such as strip-forming polymers, plasticizers, active pharmaceutical ingredient, sweetening agents, saliva stimulating agent, flavoring agents, coloring agents, stabilizing and thickening agents. From the regulatory perspectives, all excipients used in the formulation of oral drug strips should be approved for use in oral pharmaceutical dosage forms.
== Oral drug strip development ==
=== Strip forming polymers ===
The polymer employed should be non-toxic, non-irritant and devoid of leachable impurities. It should have good wetting and spreadability properties. The polymer should exhibit sufficient peel, shear and tensile strengths. It should be readily available and not overly expensive. The film obtained should be tough enough that it is not damaged while being handled or transported. A combination of microcrystalline cellulose and maltodextrin has been used to formulate oral strips of piroxicam made by a hot-melt extrusion technique. Pullulan has been the most widely used film former (used in Listerine PocketPaks, Benadryl, etc.).
=== Plasticizers ===
Plasticizer is a vital ingredient of the OS formulation. It helps to improve the flexibility and reduces the brittleness of the strip. Plasticizer significantly improves the strip properties by reducing the glass transition temperature of the polymer. Glycerol, propylene glycol, low-molecular-weight polyethylene glycols, phthalate derivatives like dimethyl, diethyl and dibutyl phthalate, citrate derivatives such as tributyl, triethyl and acetyl citrate, triacetin and castor oil are some of the commonly used plasticizer excipients.
=== Active pharmaceutical ingredient ===
Since the size of the dosage form is limited, high-dose molecules are difficult to incorporate into an OS. Generally, 5% w/w to 30% w/w of active pharmaceutical ingredient can be incorporated in the oral strip.
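As a back-of-the-envelope sketch of that loading window (the 50 mg strip mass is an assumption, not a standard):

```python
def api_load_range_mg(strip_mass_mg, lo_frac=0.05, hi_frac=0.30):
    """API mass (mg) a strip can carry at the 5-30 % w/w loading range."""
    return strip_mass_mg * lo_frac, strip_mass_mg * hi_frac

# A hypothetical 50 mg strip:
lo, hi = api_load_range_mg(50.0)
print(lo, hi)  # roughly 2.5 to 15.0 mg of drug per strip
```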
=== Sweetening, flavoring and coloring agents ===
An important aspect of thin-film drug technology is its taste and color. A sweet taste is especially important for the pediatric population. Natural as well as artificial sweeteners are used to improve the flavor of mouth-dissolving formulations, since flavor preferences vary from individual to individual. Pigments such as titanium dioxide are incorporated for coloring.
=== Stabilizing and thickening agents ===
The stabilizing and thickening agents are employed to improve the viscosity and consistency of dispersion or solution of the strip preparation solution or suspension before casting. Drug content uniformity is a requirement for all dosage forms, particularly those containing low dose highly potent drugs. To uniquely meet this requirement, thin film formulations contain uniform dispersions of drug throughout the whole manufacturing process. Since this criterion is essential for the quality of the thin film and final pharmaceutical dosage form, the use of Laser Scanning Confocal Microscopy (LSCM) was recommended to follow the manufacturing process.
=== Oral strips in development ===
An increasing number of film-based therapeutics are in development, including:
Sildenafil citrate, indicated for the treatment of erectile dysfunction (ED), is being developed for use by Cure Pharmaceutical.
Montelukast, indicated for the treatment of dementia, asthma and allergy, is being developed variously for use as a film by IntelGenx and Aquestive Therapeutics (formerly known as Monosol Rx).
Midatech, a company specializing in nanotechnology, is partnering with Aquestive Therapeutics to create a film-based insulin. (Sachs Associates. 5th Annual European Life Science CEP Forum for Partnering and Investing. March 6–7, 2012. Zurich, Switzerland.)
Rizatriptan, indicated for the treatment of migraine, is being developed for use as a film by IntelGenx, Aquestive Therapeutics, and Zim Laboratories Ltd.
Aquestive Therapeutics is also developing a testosterone film-based therapeutic for the treatment of male hypogonadism. The product is currently in phase 1.
Undergraduate biomedical engineering students at Johns Hopkins University have created a new drug delivery system based on the thin-film technology used by a breath freshener. Laced with a vaccine against rotavirus, the strips could be used to provide the vaccine to infants in impoverished areas.
Other molecules like sildenafil citrate, tadalafil, methylcobalamin and vitamin D3 are also being developed by IntelGenx and Zim Laboratories Ltd.
Oak Therapeutics, a drug delivery company, has developed an oral thin film (oral disposable strip) for the treatment of xerostomia, and has isoniazid-rifapentine (for the treatment of latent tuberculosis) and abacavir-lamivudine-dolutegravir (ALD, for the treatment of HIV/AIDS) oral dissolvable strips in development, funded by grants from the National Institutes of Health (NIH).
== References ==
Hariharan, Madhu; Bogue, B. Arlie (February 2009). "Orally Dissolving Film Strips (ODFS): The Final Evolution of Orally Dissolving Dosage Forms" (PDF). Drug Delivery Technology. 9 (2). Montville, New Jersey: 24–29. ISSN 1537-2898. OCLC 48060225. Archived (PDF) from the original on 20 February 2025.
Patient-controlled analgesia (PCA) is any method of allowing a person in pain to administer their own pain relief. The infusion is programmable by the prescriber. If it is programmed and functioning as intended, the machine is unlikely to deliver an overdose of medication. Providers must always observe the first administration of any PCA medication which has not already been administered by the provider to respond to allergic reactions.
== Routes of administration ==
=== Oral ===
The most common form of patient-controlled analgesia is self-administration of oral over-the-counter or prescription painkillers. For example, if a headache does not resolve with a small dose of an oral analgesic, more may be taken. As pain is a combination of tissue damage and emotional state, being in control means reducing the emotional component of pain.
=== Intravenous ===
In a hospital setting, an intravenous PCA (IV PCA) refers to an electronically controlled infusion pump that delivers an amount of analgesic when the patient presses a button. IV PCA can be used for both acute and chronic pain patients. It is commonly used for post-operative pain management, and for end-stage cancer patients.
Narcotics are the most common analgesics administered through IV PCAs. It is important for caregivers to monitor patients for the first two to twenty-four hours to ensure they are using the device properly.
With an IV PCA, the patient is protected from overdose by the caregiver programming the PCA to deliver a dose only at set minimum intervals. If the patient presses the button sooner than the prescribed interval, the press does not deliver a dose. (The PCA can be set to emit a beep telling the patient a dose was NOT delivered.) The inability of an obtunded patient to push the button is also considered an inherent safety feature of PCA.
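The lockout logic described above can be sketched in a few lines. This is an illustrative model, not real infusion-pump firmware, and the bolus dose and lockout values are assumptions.

```python
def pca_request(last_dose_min, now_min, lockout_min, dose_mg):
    """Handle one button press. A press inside the lockout interval
    delivers nothing (the pump may simply beep); otherwise deliver
    the programmed bolus and restart the lockout clock."""
    if last_dose_min is None or now_min - last_dose_min >= lockout_min:
        return dose_mg, now_min
    return 0.0, last_dose_min

# Hypothetical program: 1.0 mg bolus, 10-minute lockout
last = None
d1, last = pca_request(last, 0, 10, 1.0)   # first press: delivered
d2, last = pca_request(last, 4, 10, 1.0)   # too soon -> nothing delivered
d3, last = pca_request(last, 12, 10, 1.0)  # lockout elapsed -> delivered
print(d1, d2, d3)  # 1.0 0.0 1.0
```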
=== Epidural ===
Patient-controlled epidural analgesia (PCEA) is a related term describing the patient-controlled administration of analgesic medicine in the epidural space, by way of intermittent boluses or infusion pumps. This can be used by women in labour, terminally ill cancer patients or to manage post-operative pain.
=== Inhaled ===
In 1968, Robert Wexler of Abbott Laboratories developed the Analgizer, a disposable inhaler that allowed the self-administration of methoxyflurane vapor in air for analgesia. The Analgizer consisted of a polyethylene cylinder 5 inches long and 1 inch in diameter with a 1 inch long mouthpiece. The device contained a rolled wick of polypropylene felt which held 15 milliliters of methoxyflurane. Because of the simplicity of the Analgizer and the pharmacological characteristics of methoxyflurane, it was easy for patients to self-administer the drug and rapidly achieve a level of conscious analgesia which could be maintained and adjusted as necessary over a period of time lasting from a few minutes to several hours. The 15 milliliter supply of methoxyflurane would typically last for two to three hours, during which time the user would often be partly amnesic to the sense of pain; the device could be refilled if necessary. The Analgizer was found to be safe, effective, and simple to administer in obstetric patients during childbirth, as well as for patients with bone fractures and joint dislocations, and for dressing changes on burn patients. When used for labor analgesia, the Analgizer allows labor to progress normally and with no apparent adverse effect on Apgar scores. All vital signs remain normal in obstetric patients, newborns, and injured patients. The Analgizer was widely utilized for analgesia and sedation until the early 1970s, in a manner that foreshadowed the patient-controlled analgesia infusion pumps of today. The Analgizer inhaler was withdrawn in 1974, but use of methoxyflurane as a sedative and analgesic continues in Australia and New Zealand in the form of the Penthrox inhaler.
=== Nasal ===
Patient-controlled intranasal analgesia (PCINA or nasal PCA) refers to PCA devices in a nasal spray form with inbuilt features to control the number of sprays that can be delivered in a fixed time period.
=== Transdermal ===
Transdermal PCA systems using iontophoretic technology are also available. The most advanced ones are used for administration of opioids such as fentanyl. An adhesive patch is applied to intact hairless skin, and a small electric current allows the ionized drug to cross the stratum corneum to deliver the analgesic dose when the device is triggered by the patient.
== Advantages and disadvantages ==
Advantages of patient-controlled analgesia include self-delivery of pain medication, faster alleviation of pain because the patient can address pain with medication, and dosage monitoring by medical staff (dosage can be increased or decreased depending on need). With a PCA the patient spends less time in pain and as a corollary to this, patients tend to use less medication than in cases in which medication is given according to a set schedule or on a timer.
Disadvantages include the possibility that a patient will use the pain medication non-medically, self-administering the narcotic for its euphoric properties even though the patient's pain is sufficiently controlled. If a PCA device is not programmed properly for the patient, this can result in an under-dose or overdose of medication. The system may also be inappropriate for certain individuals, for example patients with learning difficulties or confusion. Patients with poor manual dexterity may be unable to press the buttons, as may those who are critically ill. PCA may not be appropriate for younger patients.
== History ==
The PCA pump was developed and introduced by Philip H. Sechzer in the late 1960s and described in 1971.
== References ==
== Further reading ==
Ophthalmic drug administration is the administration of a drug to the eyes, most typically as an eye drop formulation. Topical formulations are used to combat a multitude of diseased states of the eye. These states may include bacterial infections, eye injury, glaucoma, and dry eye. However, there are many challenges associated with topical delivery of drugs to the cornea of the eye.
== Eye drop formulations ==
Two of the largest challenges faced when using topicals to treat pathological states of the eye are patient compliance and ineffective absorption of drugs into the cornea due to short contact times, solution drainage, tear turnover, and dilution or lacrimation. Researchers in this field of drug delivery agree that less than 7% of drugs delivered to the eye reach and penetrate the corneal barrier, which forces an increase in the dosing frequency of topicals. This is one of the fundamental problems associated with using topicals to deliver drugs to the cornea, and it leads to an increased demand for patient compliance. Together, these two factors drive a need in scientific research and engineering for a way to better deliver drugs to the cornea of the eye while decreasing dosing frequency and the demand for patient compliance. Strategies to achieve a prolonged residence time of drug delivery systems on the ocular surface include mucoadhesive and in situ gelling polymers and thiolated cyclodextrins (see thiomers). Besides the logistical problems associated with using topicals, there are also systemic side effects resulting from the administration of some drugs used to combat the pathological states of the eye. With the increased concentration of drugs in topicals and their frequent application to the eye, a majority of the drug is drained from the eye via nasolacrimal drainage. This drainage is thought to be the reason that systemic side effects occur after such administration.
== Contact lenses as delivery devices ==
The U.S. Centers for Disease Control and Prevention (CDC) claims that there were "about 41 million contact lens wearers greater than 18 years old in the United States" in 2018. Of all of these wearers, nearly 90% of them wear contact lenses known as 'soft contact lenses' (SCLs). Contact lenses are regulated by the United States Food and Drug Administration (FDA).
The main approaches that researchers in this field are using today are molecular imprinting, supercritical soaking, solvent impregnation, and nanoparticle loading. Each of these techniques aims to deliver drugs at a lower, more sustained rate that neither demands increased patient compliance nor produces the systemic side effects of topical drug delivery systems. However, each of these loading techniques results in contact lenses with distinct physical and chemical challenges regarding the sustained release and penetration of specific drugs at the molecular level with respect to the cornea of the eye.
=== Molecular imprinting ===
Molecular imprinting is a process in which polymerization of monomers around a template results in a polymer matrix with embedded templates. After the template is removed, a cavity remains, lined with the functionalized monomers. This cavity is the idealized position for drug loading, since the process can be designed to recruit and hold onto specific drugs through chemical specificity. This technique can be better visualized by referring to Figure 3.0. This type of drug loading can be used to create a pH-responsive system, which releases drug(s) as the pH of the biological system changes. Some compounds that have been successfully loaded via this method are timolol, norfloxacin, ketotifen, polyvinylpyrrolidone, and hyaluronic acid. The molecular structures of each of these are shown below in the index of important scientific terminology.
=== Supercritical soaking/solvent impregnation ===
The supercritical soaking method is commonly used in hydrogel-based contact lenses and is the most common of all types of molecular drug loading techniques. Since this technique requires no special equipment or advanced knowledge of polymer-based hydrogels it is the least complex of all loading types. In order to load the hydrogel matrix with a certain drug, contact lenses are simply placed in a solution of the drug and the drug diffuses into the matrix. Since this loading technique is driven solely by the gradient of the drug concentration surrounding the lens relative to the hydrogel matrix, the diffusion rate and amount of drug that is loaded can be controlled solely by the concentration of the drug solution. Since this process allows for specific amounts of a certain drug to be loaded to the hydrogel matrix, this method of loading has become important for patient-specific (personalized) medicine and treatments.
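Because soak loading is driven by the concentration gradient, the equilibrium uptake can be estimated with a simple partition model (a sketch with hypothetical parameter values; the partition coefficient K of the drug between gel and solution is assumed known):

```python
def soak_loaded_drug_mg(c_solution_mg_per_ml: float, gel_volume_ml: float,
                        partition_coeff: float) -> float:
    """Equilibrium drug uptake for soak loading: the drug concentration in
    the hydrogel approaches K * C_solution, so the loaded mass scales
    linearly with the soaking-solution concentration."""
    return partition_coeff * c_solution_mg_per_ml * gel_volume_ml

# Hypothetical values: K = 2.5, lens gel volume = 0.03 mL.
low  = soak_loaded_drug_mg(1.0, 0.03, 2.5)
high = soak_loaded_drug_mg(2.0, 0.03, 2.5)
print(high / low)  # doubling the soak concentration doubles the load
```

This linearity is what makes dose tuning by solution concentration, and hence patient-specific loading, straightforward for this method.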
=== Nanoparticle loading ===
The nanoparticle loading technique includes two major parts. The first part of this process is the creation and conjugation of a specific drug into or onto a nanoparticle or other colloidal particle. Next, the nanoparticle is loaded into the hydrogel matrix of the contact lens. In this case, before the drug can diffuse out of the hydrogel matrix to reach the cornea, it must also diffuse or be released out of the nanoparticle.
== Physical and chemical challenges of loading ==
It is important to recognize the positives and negatives associated with each type of drug loading for using contact lenses as drug delivery devices. In order to seriously address the possibility of clinical translation of these devices, it is important to recognize the physical and chemical barriers. By understanding this better, the mechanism of drug loading and the controlled and sustained release of drugs to a patient's eye can be optimized.
=== Lens transparency ===
Since contact lenses are used on a part of the body that is important for normal daily functioning (sight) it is critical that scientists take into account the transparency of the lens. As larger and more drugs/objects are loaded to a contact lens it begins to physically crowd the space available, making it more difficult for light to penetrate and reach the eye.
Fundamental Concept Understanding: A simple analogy to this is a crowded versus an uncrowded area while it is raining outside. When individuals are packed tightly the rain falls and lands on people, making its way to the ground slowly but surely in a scattered way. In an uncrowded area, the rain can fall and land on the ground easily and without interference from the people. In this analogy, the rain is analogous to light and the people are analogous to drugs being loaded in a contact lens. The more drugs added to the contact lens, the less light that can penetrate without being randomly scattered. Random scattering of the light can result in unclear and unfocused sight.
Researchers have noted that by using the nanoparticle loading technique, the transparency decreases by nearly 10%. Conversely, researchers have confirmed that by using the molecular imprinting and supercritical soaking methods of drug loading, the lens transparency of the contact lenses has stayed at or above the lens transparency of the contact lenses currently approved by the FDA.
=== Oxygen permeability ===
Oxygen permeability is another important feature of all contact lenses and must be optimized to the largest degree possible when creating drug delivery devices for the eye. The contact lens adheres to the external cornea of the eye, which is made up of a layer of cells. Cells, being the basic component of living organisms, require sustained and constant access to oxygen in order to survive. The cornea of the eye is not supplied with blood as are most other tissues in the body, making this a challenging part of the body to which to deliver drugs. Decreasing oxygenation to the eye can result in undesirable side effects. Researchers in this field have noted that different types of contact lenses have varying degrees of oxygen permeability. For example, it has been shown that SCLs have limited oxygen permeability while silicone-based contact lenses have much better oxygen permeability. Silicone-based contact lenses have also been shown to have other very important physical parameters.
Researchers have attempted to increase the thickness of the contact lenses in order to increase their drug loading capacity. However, for silicone-based lenses this parameter is inversely proportional to oxygen permeability (i.e. as the thickness of the contact lens increases, the oxygen permeability decreases). Moreover, it has been shown that as water content increases in silicone-based lenses, the oxygen permeability decreases, another inversely proportional relationship. Surprisingly, as the water content of SCLs increases, the oxygen permeability also increases (a directly proportional relationship).
Whether silicone-based lenses or SCLs are better candidates for ophthalmic drug delivery devices remains an open question that is not uniformly agreed upon in the scientific community. For example, Ciolino et al. claim that silicone-based contact lenses are better candidates for patients who are long-term contact lens wearers. Conversely, Kim et al. suggest that SCLs are better candidates because they show the potential to overcome the difficulty of oxygen permeability as well as the mechanical integrity of the lens. Kim et al. have shown that the mechanical strength of SCLs can be increased by incorporating a nanodiamond (ND) infrastructure into the contact lens matrix.
Additionally, many researchers have investigated the implications of loading vitamin E into the contact lens matrix of SCLs. Although vitamin E incorporation into the matrix has been shown to slow the release of drugs onto the cornea (a desirable trait of an ophthalmic delivery system), it has also been shown to decrease oxygen permeability. Oxygen permeability continues to be an extremely important factor in the development of these devices and is one of the main reasons that much research is beginning to focus on this area of drug delivery.
=== Water content ===
The amount of water that a particular contact lens can retain is another extremely important factor that must be taken into account when these devices are designed. Research in this area suggests that contact lens wearers find it more comfortable to wear lenses that retain water than those that repel it. For SCLs, as the water content of a lens increases, so does the oxygen permeability. Conversely, as the water content increases in silicone-based contact lenses, the oxygen permeability decreases. For SCLs, higher water content also allows for easier loading via the supercritical soaking method. This could be due to the water acting as a lubricant for some drugs, facilitating their entry into the matrix and essentially allowing more drug to be loaded into contact lenses of this type. This increase in drug loading capacity is an important advancement, since it may allow for a longer and more sustained period of drug release.
Furthermore, Guzman-Aranguez et al. have shown that when using the molecular imprinting method for loading drugs such as ketotifen and norfloxacin into the contact lens, the water content is not largely impacted. Additionally, Peng et al. have predicted, using Fickian release kinetic models, that although water content changes once contact lenses are inserted onto the cornea of the eye, this will not pose significant challenges for the release of drugs from SCLs.
=== Drug release kinetics ===
The most important factor that must be taken into account when designing any drug delivery device, and specifically ocular devices, is the release rate of the drug. As discussed previously, the delivery rate and kinetics of drugs to the eye can reach levels that are toxic to the eye or cause undesirable side effects. The rate of release is also important because too slow a release could have no beneficial outcome for the patient, while a release that is too quick could result in negative side effects. Thus, it is important to balance the factors that govern the release of drugs from contact lenses as potential drug delivery devices. Researchers such as C. Alvarez-Lorenzo have tested (with animal models) and obtained data supporting the conclusion that molecularly imprinted contact lenses release drugs in a sustained fashion over a long period of time. It has also been shown that the rate of drug release can be controlled by incorporating vitamin E within the hydrogel matrix.
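One common way to reason about such release profiles is the early-time Fickian solution for a thin slab, which the Peng et al. modelling work cited earlier is based on. A minimal sketch, under the assumptions that the lens behaves as a uniform thin slab releasing from both faces and using purely illustrative parameter values:

```python
import math

def fickian_release_fraction(D: float, t: float, L: float) -> float:
    """Early-time Fickian release fraction M_t/M_inf for a thin slab of
    thickness L releasing from both faces:
        M_t/M_inf ~= 4 * sqrt(D * t / (pi * L**2))
    Valid roughly while the fraction stays below ~0.6. Use consistent
    units (e.g. D in cm^2/s, L in cm, t in s)."""
    frac = 4.0 * math.sqrt(D * t / (math.pi * L * L))
    return min(frac, 1.0)

# Illustrative (not measured) values: D = 1e-10 cm^2/s for a drug in a
# hydrogel lens of thickness L = 0.01 cm (100 micrometres).
for hours in (1, 6, 24):
    t = hours * 3600.0
    print(hours, round(fickian_release_fraction(1e-10, t, 0.01), 3))
```

The square-root time dependence is why unmodified soaked lenses tend to release a burst early on, and why additives such as vitamin E (which lengthen the diffusion path) slow the release.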
=== Systemic side effects ===
Over time, it has been reported that many of the drugs and eye drops used to treat particular eye diseases do, in fact, produce systemic side effects that could be minimized or limited by a slower, more sustained release of the drug. The systemic side effects of glaucoma medications such as latanoprost include increased heart rate resulting in cardiac arrhythmias, bronchoconstriction, and hypotension. These complications could be life-threatening. Some other drugs that help to reduce the effects of glaucoma in the eye result in vomiting, diarrhea, tachycardia and bronchospasm. It has been found that some drugs delivered in the form of eye drops are highly toxic to children, since their total body volume and tissue volumes are much lower than those of the adults for whom the drugs are intended. Some parents are not aware of these implications and could use the same drug they would use themselves to treat their children's bacterial eye infections. Moreover, some drugs administered to the eye have been shown to result in cardiac depression and aggravation of disorders such as asthma. With continued research in this area, it has become known that skin irritation, itching or rash are commonly associated with drugs used to treat ocular bacterial infections.
== Ocular disorders ==
There are currently four main ocular disorders that have been heavily investigated and have shown success with using contact lenses as possible devices for molecular drug delivery.
=== Bacterial infection ===
The drug release rate is extremely important in treating many diseased states of the eye, bacterial infections being one of them. Ciprofloxacin and norfloxacin are drugs normally used to treat bacterial infections of the eye. It is of utmost importance that these drugs stay within the therapeutic window for an extended period of time in order to be fully effective and kill the bacteria. To keep the drug in the therapeutic window using eye drops, the topical must be applied approximately every 30 minutes, which would be nearly impossible for anyone and is not an ideal mechanism for delivering such drugs to the eye. Researchers have gathered data supporting the idea that silicone-based contact lenses loaded with ciprofloxacin could release the drug within the therapeutic window for approximately one month. Ana Guzman-Aranguez et al. also confirmed that the contact lens used retained important properties such as transparency, oxygen permeability, mechanical strength, and zero-order release pharmacokinetics.
=== Corneal injury ===
Many factors can result in corneal injury and cause the deterioration or death of cells that make up the cornea of the eye. The epithelial cells that make up the cornea are important in order for normal vision. These cells play a role in creating a physical environment that can correctly bend light rays to help project images to the retina of the eye. There have been successful human clinical trials with using SCLs infused with epidermal growth factor (EGF) that showed increased rate of healing of the epithelial cell layer of the cornea.
=== Glaucoma ===
Glaucoma is the leading cause of blindness in the world and is a progressive and irreversible disease of the eye. A poly(lactic-co-glycolic acid)-based contact lens was shown to release latanoprost at a sustained release rate of up to a month in animal models by Ciolino et al. at Harvard Medical School and Massachusetts Institute of Technology. Latanoprost is one of the drug interventions used to treat patients with glaucoma, generally in the form of topicals such as eye drops.
=== Dry eye ===
More than 50% of all contact lens wearers report that they experience dry eye. In order to help combat this issue and be assured that this does not occur in people that will one day be using drug eluting contact lenses, it is important to make sure that this complication is highly investigated. However, these investigations will not only be beneficial for contact lenses as drug delivery devices, but it will also have positive implications on contact lens wearers who use lenses for vision correction and appearance.
== References ==
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property depends on many other variables, such as the physical form of the two substances and the manner and intensity of mixing.
The concept and measure of solubility are extremely important in many sciences besides chemistry, such as geology, biology, physics, and oceanography, as well as in engineering, medicine, agriculture, and even in non-technical activities like painting, cleaning, cooking, and brewing. Most chemical reactions of scientific, industrial, or practical interest only happen after the reagents have been dissolved in a suitable solvent. Water is by far the most common such solvent.
The term "soluble" is sometimes used for materials that can form colloidal suspensions of very fine solid particles in a liquid. The quantitative solubility of such substances is generally not well-defined, however.
== Quantification of solubility ==
The solubility of a specific solute in a specific solvent is generally expressed as the concentration of a saturated solution of the two. Any of the several ways of expressing concentration of solutions can be used, such as the mass, volume, or amount in moles of the solute for a specific mass, volume, or mole amount of the solvent or of the solution.
=== Per quantity of solvent ===
In particular, chemical handbooks often express the solubility as grams of solute per 100 millilitres of solvent (g/(100 mL), often written as g/100 ml), or as grams of solute per decilitre of solvent (g/dL); or, less commonly, as grams of solute per litre of solvent (g/L). The quantity of solvent can instead be expressed in mass, as grams of solute per 100 grams of solvent (g/(100 g), often written as g/100 g), or as grams of solute per kilogram of solvent (g/kg). The number may be expressed as a percentage in this case, and the abbreviation "w/w" may be used to indicate "weight per weight". (The values in g/L and g/kg are similar for water, but that may not be the case for other solvents.)
Alternatively, the solubility of a solute can be expressed in moles instead of mass. For example, if the quantity of solvent is given in kilograms, the value is the molality of the solution (mol/kg).
=== Per quantity of solution ===
The solubility of a substance in a liquid may also be expressed as the quantity of solute per quantity of solution, rather than of solvent. For example, following the common practice in titration, it may be expressed as moles of solute per litre of solution (mol/L), the molarity of the latter.
In more specialized contexts the solubility may be given by the mole fraction (moles of solute per total moles of solute plus solvent) or by the mass fraction at equilibrium (mass of solute per mass of solute plus solvent). Both are dimensionless numbers between 0 and 1 which may be expressed as percentages (%).
=== Liquid and gaseous solutes ===
For solutions of liquids or gases in liquids, the quantities of both substances may be given volume rather than mass or mole amount; such as litre of solute per litre of solvent, or litre of solute per litre of solution. The value may be given as a percentage, and the abbreviation "v/v" for "volume per volume" may be used to indicate this choice.
=== Conversion of solubility values ===
Conversion between these various ways of measuring solubility may not be trivial, since it may require knowing the density of the solution — which is often not measured, and cannot be predicted. While the total mass is conserved by dissolution, the final volume may be different from both the volume of the solvent and the sum of the two volumes.
Moreover, many solids (such as acids and salts) will dissociate in non-trivial ways when dissolved; conversely, the solvent may form coordination complexes with the molecules or ions of the solute. In those cases, the sum of the moles of molecules of solute and solvent is not really the total moles of independent particles in the solution. To sidestep that problem, the solubility per mole of solution is usually computed and quoted as if the solute did not dissociate or form complexes; that is, by pretending that the mole amount of solution is the sum of the mole amounts of the two substances.
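The conversions above can be made concrete with a short sketch. Given a solubility quoted per 100 g of solvent, the molar mass of the solute, and a separately measured density of the saturated solution, the other common measures follow directly (the NaCl figures used below are approximate literature values, for illustration only):

```python
def convert_solubility(g_per_100g_solvent: float, molar_mass: float,
                       solution_density: float):
    """Convert solubility given as g solute per 100 g solvent into mass
    fraction, molality (mol/kg solvent) and molarity (mol/L solution).
    solution_density is the saturated-solution density in g/mL; it must
    be measured, since it cannot be derived from the other inputs."""
    s = g_per_100g_solvent
    mass_fraction = s / (100.0 + s)                 # g solute / g solution
    molality = (s / molar_mass) / 0.100             # mol solute / kg solvent
    molarity = mass_fraction * solution_density * 1000.0 / molar_mass  # mol/L
    return mass_fraction, molality, molarity

# NaCl in water at ~25 degC: about 36 g per 100 g water; the saturated
# brine has a density of roughly 1.20 g/mL.
w, b, c = convert_solubility(36.0, 58.44, 1.20)
print(f"mass fraction {w:.3f}, molality {b:.2f} mol/kg, molarity {c:.2f} mol/L")
```

Note that only the molarity needs the density, which is why handbook values quoted per mass of solvent are easier to tabulate.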
== Qualifiers used to describe extent of solubility ==
The extent of solubility ranges widely, from infinitely soluble (without limit, i.e. miscible) such as ethanol in water, to essentially insoluble, such as titanium dioxide in water. A number of other descriptive terms are also used to qualify the extent of solubility for a given application. For example, the U.S. Pharmacopoeia gives the following terms, according to the mass m_sv of solvent required to dissolve one unit of mass m_su of solute (the solubilities of the examples are approximate, for water at 20–25 °C):
* Very soluble: m_sv/m_su < 1
* Freely soluble: 1 to 10
* Soluble: 10 to 30
* Sparingly soluble: 30 to 100
* Slightly soluble: 100 to 1,000
* Very slightly soluble: 1,000 to 10,000
* Practically insoluble or insoluble: 10,000 and over
The thresholds to describe something as insoluble, or similar terms, may depend on the application. For example, one source states that substances are described as "insoluble" when their solubility is less than 0.1 g per 100 mL of solvent.
== Molecular view ==
Solubility occurs under dynamic equilibrium, which means that solubility results from the simultaneous and opposing processes of dissolution and phase joining (e.g. precipitation of solids). A stable state of the solubility equilibrium occurs when the rates of dissolution and re-joining are equal, meaning the relative amounts of dissolved and non-dissolved materials are constant. If the solvent is removed, all of the substance that had dissolved is recovered.
The term solubility is also used in some fields where the solute is altered by solvolysis. For example, many metals and their oxides are said to be "soluble in hydrochloric acid", although in fact the aqueous acid irreversibly degrades the solid to give soluble products. Most ionic solids dissociate when dissolved in polar solvents. In those cases where the solute is not recovered upon evaporation of the solvent, the process is referred to as solvolysis. The thermodynamic concept of solubility does not apply straightforwardly to solvolysis.
When a solute dissolves, it may form several species in the solution. For example, an aqueous solution of cobalt(II) chloride can afford [Co(H2O)6]2+, [CoCl(H2O)5]+, CoCl2(H2O)2, each of which interconverts.
== Factors affecting solubility ==
Solubility is defined for specific phases. For example, the solubility of aragonite and calcite in water are expected to differ, even though they are both polymorphs of calcium carbonate and have the same chemical formula.
The solubility of one substance in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility.
Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of solutions. The last two effects can be quantified using the equation for solubility equilibrium.
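The common-ion effect lends itself to a back-of-the-envelope calculation for silver chloride, a sketch that neglects activity coefficients and chloro-complex formation (the Ksp value is the commonly quoted room-temperature figure):

```python
import math

Ksp_AgCl = 1.8e-10  # (mol/L)^2 at ~25 degC, commonly quoted value

# In pure water, AgCl -> Ag+ + Cl- gives [Ag+] = [Cl-] = s, so s^2 = Ksp.
s_pure = math.sqrt(Ksp_AgCl)

# In 0.10 M NaCl, the chloride from the salt dominates: [Cl-] ~ 0.10 M,
# so the AgCl solubility collapses to s = Ksp / [Cl-].
s_brine = Ksp_AgCl / 0.10

print(f"{s_pure:.2e} mol/L in pure water")
print(f"{s_brine:.2e} mol/L in 0.10 M NaCl")
```

The roughly four-order-of-magnitude drop illustrates how strongly a common ion can suppress solubility, as described above.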
For a solid that dissolves in a redox reaction, solubility is expected to depend on the potential (within the range of potentials under which the solid remains the thermodynamically stable phase). For example, solubility of gold in high-temperature water is observed to be almost an order of magnitude higher (i.e. about ten times higher) when the redox potential is controlled using a highly oxidizing Fe3O4-Fe2O3 redox buffer than with a moderately oxidizing Ni-NiO buffer.
Solubility (metastable, at concentrations approaching saturation) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific surface area or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal. The last two effects, although often difficult to measure, are of practical importance. For example, they provide the driving force for precipitate aging (the crystal size spontaneously increasing with time).
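The size dependence mentioned above can be sketched with the Ostwald–Freundlich relation (the parameter values below are purely illustrative, and activity effects are ignored):

```python
import math

def size_dependent_solubility(s_bulk: float, gamma: float, Vm: float,
                              r: float, T: float) -> float:
    """Ostwald-Freundlich relation:
        s(r) = s_bulk * exp(2 * gamma * Vm / (r * R * T))
    giving the enhanced solubility of a small crystal or droplet of
    radius r. gamma: surface energy (J/m^2), Vm: molar volume (m^3/mol),
    r: radius (m), T: temperature (K)."""
    R = 8.314  # J/(mol K)
    return s_bulk * math.exp(2.0 * gamma * Vm / (r * R * T))

# Illustrative values: gamma = 0.1 J/m^2, Vm = 4e-5 m^3/mol, T = 298 K.
for r_nm in (1000, 100, 10):
    s = size_dependent_solubility(1.0, 0.1, 4e-5, r_nm * 1e-9, 298.0)
    print(r_nm, round(s, 3))
```

Because small crystals are more soluble than large ones, material dissolves from fine particles and redeposits on coarse ones, which is the driving force for the precipitate aging (Ostwald ripening) described above.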
=== Temperature ===
The solubility of a given solute in a given solvent is a function of temperature. Depending on the change in enthalpy (ΔH) of the dissolution reaction, i.e., on the endothermic (ΔH > 0) or exothermic (ΔH < 0) character of the dissolution reaction, the solubility of a given compound may increase or decrease with temperature. The van 't Hoff equation relates the change of the solubility equilibrium constant (Ksp) to the temperature change and to the reaction enthalpy change.
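A minimal numerical sketch of this relation, under the crude assumptions that ΔH is constant over the temperature interval and that the solubility scales like the equilibrium constant (neither holds exactly for real solutions):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def solubility_at_T2(s1: float, T1: float, T2: float, dH: float) -> float:
    """Estimate solubility at T2 from solubility s1 at T1 using the
    integrated van 't Hoff relation ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1),
    with the crude assumption that solubility tracks the equilibrium
    constant and that dH is constant over the interval."""
    return s1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

# Endothermic dissolution (dH > 0): solubility rises with temperature.
s_25 = 10.0  # arbitrary illustrative value, g per 100 g solvent
s_60 = solubility_at_T2(s_25, 298.15, 333.15, dH=20e3)
print(s_60 > s_25)      # True

# Exothermic dissolution (dH < 0): retrograde solubility, as for Ca(OH)2.
s_60_exo = solubility_at_T2(s_25, 298.15, 333.15, dH=-15e3)
print(s_60_exo < s_25)  # True
```

The sign of ΔH alone determines the direction of the trend, which matches the endothermic/exothermic distinction drawn in the text.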
For most solids and liquids, solubility increases with temperature because the dissolution reaction is endothermic (ΔH > 0). In liquid water at high temperatures (e.g., approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of the properties and structure of liquid water; the lower dielectric constant results in a less polar solvent and in a change of hydration energy affecting the ΔG of the dissolution reaction.
Gaseous solutes exhibit more complex behavior with temperature. As the temperature is raised, gases usually become less soluble in water (exothermic dissolution reaction related to their hydration) (to a minimum, which is below 120 °C for most permanent gases), but more soluble in organic solvents (endothermic dissolution reaction related to their solvation).
The chart shows solubility curves for some typical solid inorganic salts in liquid water (temperature is in degrees Celsius, i.e. kelvins minus 273.15). Many salts behave like barium nitrate and disodium hydrogen arsenate, and show a large increase in solubility with temperature (ΔH > 0). Some solutes (e.g. sodium chloride in water) exhibit solubility that is fairly independent of temperature (ΔH ≈ 0). A few, such as calcium sulfate (gypsum) and cerium(III) sulfate, become less soluble in water as temperature increases (ΔH < 0). This is also the case for calcium hydroxide (portlandite), whose solubility at 70 °C is about half of its value at 25 °C. The dissolution of calcium hydroxide in water is also an exothermic process (ΔH < 0). As dictated by the van 't Hoff equation and Le Chatelier's principle, low temperatures favor dissolution of Ca(OH)2. Portlandite solubility increases at low temperature. This temperature dependence is sometimes referred to as "retrograde" or "inverse" solubility. Occasionally, a more complex pattern is observed, as with sodium sulfate, where the less soluble decahydrate crystal (mirabilite) loses water of crystallization at 32 °C to form a more soluble anhydrous phase (thenardite) with a smaller change in Gibbs free energy (ΔG) in the dissolution reaction.
The solubility of organic compounds nearly always increases with temperature. The technique of recrystallization, used for purification of solids, depends on a solute's different solubilities in hot and cold solvent. A few exceptions exist, such as certain cyclodextrins.
=== Pressure ===
For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:
{\displaystyle \left({\frac {\partial \ln N_{i}}{\partial P}}\right)_{T}=-{\frac {V_{i,aq}-V_{i,cr}}{RT}}}
where the index i runs over the components, N_i is the mole fraction of the i-th component in the solution, P is the pressure, the index T refers to constant temperature, V_{i,aq} is the partial molar volume of the i-th component in the solution, V_{i,cr} is the partial molar volume of the i-th component in the dissolving solid, and R is the universal gas constant.
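As a rough numerical sketch of the relation above (the partial molar volumes below are assumed, illustrative values):

```python
R = 8.314  # J/(mol*K), universal gas constant

def dlnN_dP(v_aq, v_cr, temp):
    """Pressure derivative of ln(mole fraction), (d ln N_i / dP)_T, in 1/Pa.

    v_aq, v_cr: partial molar volumes (m^3/mol) in solution and in the crystal.
    """
    return -(v_aq - v_cr) / (R * temp)

# If the partial molar volume in solution exceeds that in the crystal
# (v_aq > v_cr), the slope is negative: pressure suppresses solubility.
slope = dlnN_dP(v_aq=45e-6, v_cr=40e-6, temp=298.15)
assert slope < 0
# The magnitude is tiny (on the order of 1e-9 per Pa), which is why the
# pressure effect is usually neglected for condensed phases.
assert abs(slope) < 1e-8
```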
The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time.
== Solubility of gases ==
Henry's law is used to quantify the solubility of gases in solvents. The solubility of a gas in a solvent is directly proportional to the partial pressure of that gas above the solvent. This relationship is similar to Raoult's law and can be written as:
{\displaystyle p=k_{\rm {H}}\,c}
where k_H is a temperature-dependent constant (for example, 769.2 L·atm/mol for dioxygen (O2) in water at 298 K), p is the partial pressure (in atm), and c is the concentration of the dissolved gas in the liquid (in mol/L).
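A minimal sketch of Henry's law, using the k_H value quoted above for O2 in water at 298 K:

```python
K_H = 769.2  # L*atm/mol, Henry's law constant for O2 in water at 298 K

def dissolved_conc(p_atm):
    """Equilibrium concentration (mol/L) of dissolved gas: c = p / kH."""
    return p_atm / K_H

# Air is roughly 21% O2, so at 1 atm total pressure the O2 partial
# pressure is about 0.21 atm:
c_o2 = dissolved_conc(0.21)
assert 2.5e-4 < c_o2 < 3.0e-4  # about 0.27 mmol/L of dissolved O2
```

Doubling the partial pressure doubles the dissolved concentration, the defining proportionality of the law.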
The solubility of gases is sometimes also quantified using the Bunsen solubility coefficient.
In the presence of small bubbles, the solubility of the gas does not depend on the bubble radius in any other way than through the effect of the radius on pressure (i.e. the solubility of gas in the liquid in contact with small bubbles is increased due to pressure increase by Δp = 2γ/r; see Young–Laplace equation).
Henry's law is valid for gases that do not undergo change of chemical speciation on dissolution. Sieverts' law shows a case when this assumption does not hold.
The solubility of carbon dioxide in seawater is also affected by temperature, by the pH of the solution, and by the carbonate buffer. The decrease in the solubility of carbon dioxide in seawater as temperature increases is also an important positive-feedback factor exacerbating past and future climate changes, as observed in ice cores from the Vostok site in Antarctica. At the geological time scale, because of the Milankovitch cycles, the astronomical parameters of the Earth's orbit and rotation axis progressively change and modify the solar irradiance at the Earth's surface, and temperature starts to increase. When a deglaciation period is initiated, the progressive warming of the oceans releases CO2 into the atmosphere because of its lower solubility in warmer seawater. In turn, higher levels of CO2 in the atmosphere increase the greenhouse effect, so carbon dioxide acts as an amplifier of the general warming.
== Polarity ==
A popular aphorism used for predicting solubility is "like dissolves like" also expressed in the Latin language as "Similia similibus solventur". This statement indicates that a solute will dissolve best in a solvent that has a similar chemical structure to itself, based on favorable entropy of mixing. This view is simplistic, but it is a useful rule of thumb. The overall solvation capacity of a solvent depends primarily on its polarity. For example, a very polar (hydrophilic) solute such as urea is very soluble in highly polar water, less soluble in fairly polar methanol, and practically insoluble in non-polar solvents such as benzene. In contrast, a non-polar or lipophilic solute such as naphthalene is insoluble in water, fairly soluble in methanol, and highly soluble in non-polar benzene.
In simpler terms, an ionic compound (with positive and negative ions) such as sodium chloride (common salt) is readily soluble in a highly polar solvent such as water, in which the covalent molecules carry partially separated positive (δ+) and negative (δ−) charges. This is why the sea is salty: it has accumulated dissolved salts since early geological ages.
The solubility is favored by entropy of mixing (ΔS) and depends on enthalpy of dissolution (ΔH) and the hydrophobic effect. The free energy of dissolution (Gibbs energy) depends on temperature and is given by the relationship: ΔG = ΔH – TΔS. Smaller ΔG means greater solubility.
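The ΔG = ΔH – TΔS balance can be illustrated with a short sketch; the enthalpy and entropy values are illustrative assumptions, not data for any particular solute.

```python
def delta_g(dh, ds, temp):
    """Gibbs free energy of dissolution: dG = dH - T*dS.

    dh in J/mol, ds in J/(mol*K), temp in K; smaller (more negative) dG
    means greater solubility.
    """
    return dh - temp * ds

# Endothermic dissolution (dH > 0) can still be spontaneous (dG < 0)
# when the entropy-of-mixing term T*dS is large enough:
assert delta_g(dh=10_000.0, ds=50.0, temp=298.15) < 0
# At lower temperature the T*dS term shrinks and dissolution becomes
# unfavorable for the same assumed solute:
assert delta_g(dh=10_000.0, ds=50.0, temp=150.0) > 0
```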
Chemists often exploit differences in solubilities to separate and purify compounds from reaction mixtures, using the technique of liquid-liquid extraction. This applies in vast areas of chemistry from drug synthesis to spent nuclear fuel reprocessing.
== Rate of dissolution ==
Dissolution is not an instantaneous process. The rate of solubilization (in kg/s) is related to the solubility product and the surface area of the material. The speed at which a solid dissolves may depend on its crystallinity (or lack thereof, in the case of amorphous solids), on its surface area (crystallite size), and on the presence of polymorphism. Many practical systems illustrate this effect, for example in the design of methods for controlled drug delivery. In some cases, solubility equilibria can take a long time to establish (hours, days, months, or many years, depending on the nature of the solute and other factors).
The rate of dissolution can often be expressed by the Noyes–Whitney equation or the Nernst–Brunner equation, of the form:
{\displaystyle {\frac {\mathrm {d} m}{\mathrm {d} t}}=A{\frac {D}{d}}(C_{\mathrm {s} }-C_{\mathrm {b} })}
where:
m = mass of dissolved material
t = time
A = surface area of the interface between the dissolving substance and the solvent
D = diffusion coefficient
d = thickness of the boundary layer of the solvent at the surface of the dissolving substance
C_s = mass concentration of the substance on the surface
C_b = mass concentration of the substance in the bulk of the solvent
For dissolution limited by diffusion (or by mass transfer, if mixing is present), C_s is equal to the solubility of the substance. When the dissolution rate of a pure substance is normalized to the surface area of the solid (which usually changes with time during the dissolution process), it is expressed in kg/(m²·s) and referred to as the "intrinsic dissolution rate", a quantity defined by the United States Pharmacopeia.
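The Noyes–Whitney rate law above can be sketched numerically; all parameter values below are assumptions chosen only to be physically plausible.

```python
def dissolution_rate(area, diff_coeff, boundary_layer, c_surface, c_bulk):
    """Noyes-Whitney / Nernst-Brunner rate: dm/dt = A * (D/d) * (Cs - Cb).

    Returns the mass dissolution rate in kg/s for diffusion-limited
    dissolution; for a pure substance, c_surface equals the solubility.
    """
    return area * (diff_coeff / boundary_layer) * (c_surface - c_bulk)

rate = dissolution_rate(
    area=1e-4,             # m^2, solid/solvent interface
    diff_coeff=1e-9,       # m^2/s, typical small-molecule diffusivity in water
    boundary_layer=30e-6,  # m, assumed diffusion-layer thickness
    c_surface=10.0,        # kg/m^3, saturation concentration at the surface
    c_bulk=0.0,            # kg/m^3, well-stirred "sink" conditions
)
assert rate > 0
# The driving force (Cs - Cb) vanishes as the bulk approaches saturation:
assert dissolution_rate(1e-4, 1e-9, 30e-6, 10.0, 10.0) == 0.0
```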
Dissolution rates vary by orders of magnitude between different systems. Typically, very low dissolution rates parallel low solubilities, and substances with high solubilities exhibit high dissolution rates, as suggested by the Noyes–Whitney equation.
== Theories of solubility ==
=== Solubility product ===
Solubility constants are used to describe saturated solutions of ionic compounds of relatively low solubility (see solubility equilibrium). The solubility constant is a special case of an equilibrium constant. Since it is a product of ion concentrations in equilibrium, it is also known as the solubility product. It describes the balance between dissolved ions from the salt and undissolved salt. The solubility constant is also "applicable" (i.e. useful) to precipitation, the reverse of the dissolving reaction. As with other equilibrium constants, temperature can affect the numerical value of solubility constant. While the solubility constant is not as simple as solubility, the value of this constant is generally independent of the presence of other species in the solvent.
=== Other theories ===
The Flory–Huggins solution theory is a theoretical model describing the solubility of polymers. The Hansen solubility parameters and the Hildebrand solubility parameters are empirical methods for the prediction of solubility. It is also possible to predict solubility from other physical constants such as the enthalpy of fusion.
The octanol-water partition coefficient, usually expressed as its logarithm (Log P), is a measure of differential solubility of a compound in a hydrophobic solvent (1-octanol) and a hydrophilic solvent (water). The logarithm of these two values enables compounds to be ranked in terms of hydrophilicity (or hydrophobicity).
The energy change associated with dissolving is usually given per mole of solute as the enthalpy of solution.
== Applications ==
Solubility is of fundamental importance in a large number of scientific disciplines and practical applications, ranging from ore processing and nuclear reprocessing to the use of medicines, and the transport of pollutants.
Solubility is often said to be one of the "characteristic properties of a substance", which means that solubility is commonly used to describe the substance, to indicate a substance's polarity, to help to distinguish it from other substances, and as a guide to applications of the substance. For example, indigo is described as "insoluble in water, alcohol, or ether but soluble in chloroform, nitrobenzene, or concentrated sulfuric acid".
Solubility of a substance is useful when separating mixtures. For example, a mixture of salt (sodium chloride) and silica may be separated by dissolving the salt in water, and filtering off the undissolved silica. The synthesis of chemical compounds, by the milligram in a laboratory, or by the ton in industry, both make use of the relative solubilities of the desired product, as well as unreacted starting materials, byproducts, and side products to achieve separation.
Another example of this is the synthesis of benzoic acid from phenylmagnesium bromide and dry ice. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with this organic solvent in a separatory funnel, will preferentially dissolve in the organic layer. The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. This process, known as liquid–liquid extraction, is an important technique in synthetic chemistry. Recycling is used to ensure maximum extraction.
=== Differential solubility ===
In flowing systems, differences in solubility often determine the dissolution-precipitation driven transport of species. This happens when different parts of the system experience different conditions. Even slightly different conditions can result in significant effects, given sufficient time.
For example, relatively low solubility compounds are found to be soluble in more extreme environments, resulting in geochemical and geological effects of the activity of hydrothermal fluids in the Earth's crust. These are often the source of high quality economic mineral deposits and precious or semi-precious gems. In the same way, compounds with low solubility will dissolve over extended time (geological time), resulting in significant effects such as extensive cave systems or Karstic land surfaces.
== Solubility of ionic compounds in water ==
Some ionic compounds (salts) dissolve in water, which arises because of the attraction between positive and negative charges (see: solvation). For example, the salt's positive ions (e.g. Ag+) attract the partially negative oxygen atom in H2O. Likewise, the salt's negative ions (e.g. Cl−) attract the partially positive hydrogens in H2O. Note: the oxygen atom is partially negative because it is more electronegative than hydrogen, and vice versa (see: chemical polarity).
AgCl(s) ⇌ Ag+(aq) + Cl−(aq)
However, there is a limit to how much salt can be dissolved in a given volume of water. This concentration is the solubility and related to the solubility product, Ksp. This equilibrium constant depends on the type of salt (AgCl vs. NaCl, for example), temperature, and the common ion effect.
One can calculate the amount of AgCl that will dissolve in 1 liter of pure water as follows:
Ksp = [Ag+] × [Cl−] / M2 (definition of solubility product; M = mol/L)
Ksp = 1.8 × 10−10 (from a table of solubility products)
[Ag+] = [Cl−], in the absence of other silver or chloride salts, so
[Ag+]2 = 1.8 × 10−10 M2
[Ag+] = 1.34 × 10−5 mol/L
The result: 1 liter of water can dissolve 1.34 × 10−5 moles of AgCl at room temperature. Compared with other salts, AgCl is poorly soluble in water. For instance, table salt (NaCl) has a much higher Ksp = 36 and is, therefore, more soluble. The following table gives an overview of solubility rules for various ionic compounds.
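The worked AgCl calculation above, expressed as code (for a 1:1 salt with no common ions, [Ag+] = [Cl−] = √Ksp):

```python
import math

KSP_AGCL = 1.8e-10  # solubility product of AgCl at room temperature

# With no other silver or chloride salts present, [Ag+] = [Cl-], so
# Ksp = [Ag+]^2 and the molar solubility is the square root of Ksp.
solubility = math.sqrt(KSP_AGCL)  # mol/L
assert abs(solubility - 1.34e-5) < 0.01e-5  # matches the 1.34e-5 mol/L above
```

Note that this square-root shortcut applies only to 1:1 salts; a salt such as Ag2CrO4 would require solving Ksp = (2s)²·s instead.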
== Solubility of organic compounds ==
The principle outlined above under polarity, that like dissolves like, is the usual guide to solubility with organic systems. For example, petroleum jelly will dissolve in gasoline because both petroleum jelly and gasoline are non-polar hydrocarbons. It will not, on the other hand, dissolve in ethyl alcohol or water, since the polarity of these solvents is too high. Sugar will not dissolve in gasoline, since sugar is too polar in comparison with gasoline. A mixture of gasoline and sugar can therefore be separated by filtration or extraction with water.
== Solid solution ==
This term is often used in the field of metallurgy to refer to the extent that an alloying element will dissolve into the base metal without forming a separate phase. The solvus or solubility line (or curve) is the line (or lines) on a phase diagram that give the limits of solute addition. That is, the lines show the maximum amount of a component that can be added to another component and still be in solid solution. In the solid's crystalline structure, the 'solute' element can either take the place of the matrix within the lattice (a substitutional position; for example, chromium in iron) or take a place in a space between the lattice points (an interstitial position; for example, carbon in iron).
In microelectronic fabrication, solid solubility refers to the maximum concentration of impurities one can place into the substrate.
In solid compounds (as opposed to elements), the solubility of a solute element can also depend on the phases separating out in equilibrium. For example, the amount of Sn soluble in the ZnSb phase can depend significantly on whether the phases separating out in equilibrium are (Zn4Sb3 + Sn(L)) or (ZnSnSb2 + Sn(L)). Beyond these, the ZnSb compound with Sn as a solute can separate out into other combinations of phases once the solubility limit is reached, depending on the initial chemical composition during synthesis, and each combination produces a different solubility of Sn in ZnSb. Hence, solubility studies in compounds that are concluded upon the first observation of secondary phases separating out may underestimate solubility. While the maximum number of phases separating out at once in equilibrium is set by the Gibbs phase rule, for chemical compounds there is no limit on the number of such phase-separating combinations. Establishing the "maximum solubility" in solid compounds experimentally can therefore be difficult, requiring equilibration of many samples. If the dominant crystallographic defect involved in the solid solution (mostly an interstitial or substitutional point defect) can be chemically intuited beforehand, some simple thermodynamic guidelines can considerably reduce the number of samples required to establish the maximum solubility.
== Incongruent dissolution ==
Many substances dissolve congruently (i.e. the composition of the solid and the dissolved solute stoichiometrically match). However, some substances may dissolve incongruently, whereby the composition of the solute in solution does not match that of the solid. This solubilization is accompanied by alteration of the "primary solid" and possibly formation of a secondary solid phase. However, in general, some primary solid also remains and a complex solubility equilibrium establishes. For example, dissolution of albite may result in formation of gibbsite.
NaAlSi3O8(s) + H+ + 7H2O ⇌ Na+ + Al(OH)3(s) + 3H4SiO4.
In this case, the solubility of albite is expected to depend on the solid-to-solvent ratio. This kind of solubility is of great importance in geology, where it results in formation of metamorphic rocks.
In principle, both congruent and incongruent dissolution can lead to the formation of secondary solid phases in equilibrium. So, in the field of Materials Science, the solubility for both cases is described more generally on chemical composition phase diagrams.
== Solubility prediction ==
Solubility is a property of interest in many aspects of science, including but not limited to environmental predictions, biochemistry, pharmacy, drug design, agrochemical design, and protein–ligand binding. Aqueous solubility is of fundamental interest owing to the vital biological and transport functions played by water. In addition to this clear scientific interest in water solubility and solvent effects, accurate predictions of solubility are important industrially. The ability to accurately predict a molecule's solubility represents potentially large financial savings in many chemical product development processes, such as pharmaceuticals. In the pharmaceutical industry, solubility predictions form part of the early-stage lead optimisation process for drug candidates, and solubility remains a concern all the way to formulation. A number of methods have been applied to such predictions, including quantitative structure–activity relationships (QSAR), quantitative structure–property relationships (QSPR), and data mining. These models provide efficient predictions of solubility and represent the current standard; their drawback is that they can lack physical insight. A method founded in physical theory, capable of achieving similar levels of accuracy at a sensible cost, would be a powerful tool scientifically and industrially.
Methods founded in physical theory tend to use thermodynamic cycles, a concept from classical thermodynamics. The two common thermodynamic cycles used involve either the calculation of the free energy of sublimation (solid to gas, without going through a liquid state) and the free energy of solvating a gaseous molecule (gas to solution), or the free energy of fusion (solid to a molten phase) and the free energy of mixing (molten to solution). These two processes are represented in the following diagrams.
These cycles have been used for attempts at first principles predictions (solving using the fundamental physical equations) using physically motivated solvent models, to create parametric equations and QSPR models and combinations of the two. The use of these cycles enables the calculation of the solvation free energy indirectly via either gas (in the sublimation cycle) or a melt (fusion cycle). This is helpful as calculating the free energy of solvation directly is extremely difficult. The free energy of solvation can be converted to a solubility value using various formulae, the most general case being shown below, where the numerator is the free energy of solvation, R is the gas constant and T is the temperature in kelvins.
{\displaystyle \log S(V_{m})={\frac {\Delta G_{\text{solvation}}}{-2.303RT}}}
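A minimal sketch of this conversion from solvation free energy to log-solubility (the ΔG values used are illustrative assumptions):

```python
R = 8.314  # J/(mol*K), universal gas constant

def log_s(dg_solvation, temp=298.15):
    """log S = dG_solvation / (-2.303 * R * T).

    dg_solvation in J/mol, temp in K; returns the base-10 log of solubility.
    """
    return dg_solvation / (-2.303 * R * temp)

# A more negative (more favorable) solvation free energy predicts a
# higher solubility:
assert log_s(-20_000.0) > log_s(-10_000.0)
# A zero solvation free energy gives log S = 0:
assert log_s(0.0) == 0.0
```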
Well-known fitted equations for solubility prediction are the general solubility equations. These equations stem from the work of Yalkowsky et al. The original formula is given first, followed by a revised formula which makes a different assumption, namely complete miscibility in octanol.
{\displaystyle \log _{10}(S)=0.8-\log _{10}(P)-0.01({\text{melting point}}-25)}
{\displaystyle \log _{10}(S)=0.5-\log _{10}(P)-0.01({\text{melting point}}-25)}
These equations are founded on the principles of the fusion cycle.
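Both general solubility equations can be captured in one small function, differing only in the intercept; the example solute below (log P = 2, melting point 125 °C) is hypothetical:

```python
def gse_log_s(log_p, melting_point_c, intercept=0.8):
    """General solubility equation: log10 S = intercept - log10 P - 0.01*(mp - 25).

    intercept=0.8 gives the original Yalkowsky form; intercept=0.5 gives
    the revised form assuming complete miscibility in octanol.
    """
    return intercept - log_p - 0.01 * (melting_point_c - 25.0)

# Hypothetical solute: log P = 2, melting point 125 C.
assert abs(gse_log_s(log_p=2.0, melting_point_c=125.0) - (-2.2)) < 1e-9
# The revised form shifts every prediction down by 0.3 log units:
assert abs(gse_log_s(2.0, 125.0, intercept=0.5) - (-2.5)) < 1e-9
```

Note that the melting-point term vanishes for solutes that are liquid at 25 °C, leaving solubility governed by log P alone.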
== See also ==
Apparent molar property – Difference in properties of one mole of substance in a mixture vs. an ideal solution
Biopharmaceutics Classification System – System to differentiate drugs on the basis of their solubility and permeability
Dühring's rule – Linear relationship between the temperatures at which two solutions exert the same vapour pressure
Fajans–Paneth–Hahn Law – chemistry rule concerning co-precipitation and adsorption
Flexible SPC water model – Aspect of computational chemistry
Henry's law – Gas law regarding proportionality of dissolved gas
Hot water extraction – method of carpet cleaning
Hydrotrope – chemical substance
Micellar solubilization – Process of incorporating the solubilizate into or onto micelles
Raoult's law – Law of thermodynamics for vapour pressure of a mixture
Rate of solution – Capacity of a substance to dissolve in a homogeneous way
Solubility equilibrium – Thermodynamic equilibrium between a solid and a solution of the same compound
Van 't Hoff equation – Relation between temperature and the equilibrium constant of a chemical reaction
== Notes ==
== References ==
== External links ==
The osmotic-controlled release oral delivery system (OROS) is an advanced controlled release oral drug delivery system in the form of a rigid tablet with a semi-permeable outer membrane and one or more small laser drilled holes in it. As the tablet passes through the body, water is absorbed through the semipermeable membrane via osmosis, and the resulting osmotic pressure is used to push the active drug through the laser drilled opening(s) in the tablet and into the gastrointestinal tract. OROS is a trademarked name owned by ALZA Corporation, which pioneered the use of osmotic pumps for oral drug delivery.
== Rationale ==
=== Pros and cons ===
Osmotic release systems have a number of major advantages over other controlled-release mechanisms. They are significantly less affected by factors such as pH, food intake, GI motility, and differing intestinal environments. Using an osmotic pump to deliver drugs has additional inherent advantages regarding control over drug delivery rates. This allows for much more precise drug delivery over an extended period of time, which results in much more predictable pharmacokinetics. However, osmotic release systems are relatively complicated, somewhat difficult to manufacture, and may cause irritation or even blockage of the GI tract due to prolonged release of irritating drugs from the non-deformable tablet.
== Oral osmotic release systems ==
=== Single-layer ===
The Elementary Osmotic Pump (EOP) was developed by ALZA in 1974, and was the first practical example of an osmotic pump based drug release system for oral use. It was introduced to the market in the early 1980s in Osmosin (indomethacin) and Acutrim (phenylpropanolamine), but unexpectedly severe issues with GI irritation and cases of GI perforation led to the withdrawal of Osmosin.
Merck & Co. later developed the Controlled-Porosity Osmotic Pump (CPOP) with the intention of addressing some of the issues that led to Osmosin's withdrawal via a new approach to the final stage of the release mechanism. Unlike the EOP, the CPOP had no pre-formed hole in the outer shell for the drug to be expelled out of. Instead, the CPOP's semipermeable membrane was designed to form numerous small pores upon contact with water through which the drug would be expelled via osmotic pressure. The pores were formed via the use of a pH insensitive leachable or dissolvable additive such as sorbitol.
=== Multi-layer ===
Both the EOP and CPOP were relatively simple designs, and were limited by their inability to deliver poorly soluble drugs. This led to the development of an additional internal "push layer" composed of material (a swellable polymer) that would expand as it absorbed water, which then pushed the drug layer (which incorporates a viscous polymer for suspension of poorly soluble drugs) out of the exit hole at a controlled rate. Osmotic agents such as sodium chloride, potassium chloride, or xylitol are added to both the drug and push layers to increase the osmotic pressure. The initial design developed in 1982 by ALZA researchers was designated the Push-Pull Osmotic Pump (PPOP), and Procardia XL (nifedipine) was one of the first drugs to utilize this PPOP design.
In the early 1990s, an ALZA-funded research program began to develop a new dosage form of methylphenidate for the treatment of children with attention deficit hyperactivity disorder (ADHD). Methylphenidate's short half-life required multiple doses to be administered each day to attain long-lasting coverage, which made it an ideal candidate for the OROS technology. Multiple candidate pharmacokinetic profiles were evaluated and tested in an attempt to determine the optimal way to deliver the drug, which was especially important given the puzzling failure of an existing extended-release formulation of methylphenidate (Ritalin SR) to act as expected. The zero-order (flat) release profile that the PPOP was optimal at delivering failed to maintain its efficacy over time, which suggested that acute tolerance to methylphenidate formed over the course of the day. This explained why Ritalin SR was inferior to twice-daily Ritalin IR, and led to the hypothesis that an ascending pattern of drug delivery was necessary to maintain clinical effect. Trials designed to test this hypothesis were successful, and ALZA subsequently developed a modified PPOP design that utilized an overcoat of methylphenidate designed to release immediately and rapidly raise serum levels, followed by 10 hours of first-order (ascending) drug delivery from the modified PPOP design. This design was called the Push-Stick Osmotic Pump (PSOP), and utilized two separate drug layers with different concentrations of methylphenidate in addition to the (now quite robust) push layer.
== List of OROS medications ==
OROS medications include:
== References ==
Heated humidified high-flow therapy, often simply called high-flow therapy, is a medical treatment providing respiratory support by delivering a flow of oxygen of up to 60 liters per minute to a patient through a large-bore or high-flow nasal cannula. Primarily studied in neonates, it has also been found effective in some adults for treating hypoxemia and work-of-breathing issues. Its key components are a gas blender, a heated humidifier, a heated circuit, and a cannula.
== History ==
The development of heated humidified high flow started in 1999 with Vapotherm introducing the concept of high flow use with race horses.
High flow was approved by the U.S. Food and Drug Administration in the early 2000s and used as an alternative to positive airway pressure for the treatment of apnea of prematurity in neonates. The term "high flow" is relative to the size of the patient, which is why the flow rate used in children is set by weight: just a few liters can meet the inspiratory demands of a neonate, unlike in adults. It has since become popular for use in adults with respiratory failure.
== Mechanism ==
The traditional low-flow system used for medical gas delivery is the nasal cannula, which is limited to the delivery of 1–6 L/min of oxygen, or up to 15 L/min in certain types. This is because, even with quiet breathing, the inspiratory flow rate at the nares of an adult usually exceeds 30 L/min, so the oxygen provided is diluted with room air during inspiration. Being a high-flow system means that it meets or exceeds the flow demands of the patient.
=== Oxygenation ===
Since it is a high-flow system, it is able to maintain the wearer's fraction of inspired oxygen (FiO2) at the set rate, because the wearer should not be entraining ambient air. However, this may not be the case in patients who are poorly compliant with the therapy and are actively breathing through their mouth.
=== Ventilation ===
The flow can wash out some of the dead space in the upper airway. This can reduce slightly the amount of carbon dioxide rebreathed.
There is a correlation of the flow rate to mean airway pressure, and in some subjects there has been an increase in lung volumes and a decrease in respiratory rate. However, positive end-expiratory pressure has only been measured at less than 3 cmH2O, meaning it cannot provide close to what a closed ventilatory system could. In neonates, however, it has been found that with a good fit and the mouth closed, it can provide end-expiratory pressure comparable to nasal continuous positive airway pressure.
=== Humidification ===
The higher the flow, the more important proper humidification and heating of the flow become, to prevent tissue irritation and mucus drying. It has been found that long-term use of flows of 20–25 L/min can help reduce symptoms of chronic obstructive pulmonary disease, because heat and humidity aid mucociliary clearance. This is why high-flow therapy is assumed to help with mucus clearance better than other, less humidified methods.
== Medical use ==
High-flow therapy is useful in patients who are spontaneously breathing but in some type of respiratory failure. Hypoxemic respiratory failure, and certain cases of hypercapnic respiratory failure stemming from exacerbations of asthma and chronic obstructive pulmonary disease, bronchiolitis, pneumonia, and congestive heart failure, are all situations where high-flow therapy may be indicated.
=== Newborn babies ===
High-flow therapy has been shown to be useful in neonatal intensive care settings for premature infants with infant respiratory distress syndrome, as it prevents many infants from needing more invasive ventilatory treatments.
Due to the decreased stress of effort needed to breathe, the neonatal body is able to spend more time utilizing metabolic efforts elsewhere, which causes decreased days on a mechanical ventilator, faster weight gain, and overall decreased hospital stay entirely.
High-flow therapy has been successfully implemented in infants and older children. The cannula improves the respiratory distress, the oxygen saturation, and the patient's comfort. Its mechanism of action is the application of mild positive airway pressure and lung volume recruitment.
=== Hypoxemic respiratory failure ===
In high-flow therapy, clinicians can deliver higher FiO2 than is possible with typical oxygen delivery therapy without the use of a non-rebreather mask or tracheal intubation. Some patients requiring respiratory support for bronchospasm benefit from air delivered by high-flow therapy without additional oxygen. Patients can speak during use of high-flow therapy. As this is a non-invasive therapy, it avoids the risk of ventilator-associated pneumonia.
Use of nasal high flow in acute hypoxemic respiratory failure does not affect mortality or length of stay either in hospital or in the intensive care unit. It can however reduce the need for tracheal intubation and escalation of oxygenation and respiratory support.
=== Hypercapnic respiratory failure ===
Stable patients with hypercapnia on high-flow therapy have been found to have their carbon dioxide levels decrease similar amounts to noninvasive treatment, but evidence is still limited as to its efficacy and currently the practice guideline is still to use noninvasive ventilation for those with exacerbations of chronic obstructive pulmonary disease and acidosis.
=== Other uses ===
Heated humidified high-flow therapy has been used in spontaneously breathing patients during general anesthesia to facilitate surgery for airway obstruction.
High flow therapy is useful in the treatment of sleep apnea.
== References == | Wikipedia/Heated_humidified_high-flow_therapy |
In physical chemistry, supersaturation occurs with a solution when the concentration of a solute exceeds the concentration specified by the value of solubility at equilibrium. Most commonly the term is applied to a solution of a solid in a liquid, but it can also be applied to liquids and gases dissolved in a liquid. A supersaturated solution is in a metastable state; it may return to equilibrium by separation of the excess of solute from the solution, by dilution of the solution by adding solvent, or by increasing the solubility of the solute in the solvent.
== History ==
Early studies of the phenomenon were conducted with sodium sulfate, also known as Glauber's salt because, unusually, the solubility of this salt in water can decrease with increasing temperature. Early studies have been summarised by Tomlinson. It was shown that the crystallization of a supersaturated solution does not simply come from its agitation (the previous belief), but from solid matter entering and acting as a "starting" site for crystals to form, now called "seeds" (for more information, see nucleation). Expanding upon this, Gay-Lussac brought attention to the kinematics of salt ions and to the characteristics of the container having an impact on the supersaturation state. He was also able to expand the number of salts with which a supersaturated solution can be obtained. Later Henri Löwel came to the conclusion that both nuclei of the solution and the walls of the container have a catalyzing effect on the solution that causes crystallization. Explaining and providing a model for this phenomenon has been a task taken on by more recent research. Désiré Gernez contributed to this research by discovering that nuclei must be of the same salt that is being crystallized in order to promote crystallization.
Furthermore, in 1950, Victor K. LaMer proposed another theory for nucleation, in which he described the nucleation and growth of sulfur nuclei in a solution where a chemical reaction provided a constant inflow of molecularly dissolved sulfur. This theory, however, is not confined to this specific case and can be generalised as shown in LaMer’s diagram, provided in the second figure of this section.
In section (I), the concentration of solute grows linearly as it is formed (or added) to the solution. Upon reaching the equilibrium solubility {\displaystyle c_{L}^{eq}}, it will become saturated, but it won't start depositing solute right away. Instead, it will keep absorbing solute, becoming supersaturated.
In section (II), the concentration reaches the critical saturation level {\displaystyle c_{min}}, at which solute crystals begin nucleating. The appearance of nuclei partially relieves the supersaturation, rapidly enough that the rate of nucleation falls almost immediately to zero. The system rapidly reaches a balance between the solute supply and the consumption rate for nucleation and growth, slowing down the increase in its concentration. After reaching the peak, the curve declines owing to the increasing consumption of the solute for the growth of nuclei and again reaches the critical level of nucleation, {\displaystyle c_{min}}, ending the nucleation stage. Given optimal conditions, with the solute introduced to the solution very steadily while keeping the system free from perturbations and nucleation seeds, the maximum concentration that can be achieved in this way is defined as {\displaystyle c_{max}}.
In section (III), the supersaturation becomes too low for any more crystals to nucleate, so no new crystals are formed. However, as the solution is still supersaturated, the existing crystals grow by solute diffusion. As time passes, the growth rate of the crystals equals the rate of solute supply, so the concentration converges to the saturation value {\displaystyle c_{L}^{eq}}.
== Occurrence and examples ==
=== Solid precipitate, liquid solvent ===
A solution of a chemical compound in a liquid will become supersaturated when the temperature of the saturated solution is changed. In most cases solubility decreases with decreasing temperature; in such cases the excess of solute will rapidly separate from the solution as crystals or an amorphous powder.
In a few cases the opposite effect occurs. The example of sodium sulfate in water is well-known and this was why it was used in early studies of solubility.
Recrystallization is a process used to purify chemical compounds. A mixture of the impure compound and solvent is heated until the compound has dissolved. If there is some solid impurity remaining it is removed by filtration. When the temperature of the solution is subsequently lowered it briefly becomes supersaturated and then the compound crystallizes out until chemical equilibrium at the lower temperature is achieved. Impurities remain in the supernatant liquid. In some cases crystals do not form quickly and the solution remains supersaturated after cooling. This is because there is a thermodynamic barrier to the formation of a crystal in a liquid medium. Commonly this is overcome by adding a tiny crystal of the solute compound to the supersaturated solution, a process known as "seeding". Another process in common use is to rub a rod on the side of a glass vessel containing the solution to release microscopic glass particles which can act as nucleation centres. In industry, centrifugation is used to separate the crystals from the supernatant liquid.
Some compounds and mixtures of compounds can form long-living supersaturated solutions. Carbohydrates are a class of such compounds; The thermodynamic barrier to formation of crystals is rather high because of extensive and irregular hydrogen bonding with the solvent, water. For example, although sucrose can be recrystallised easily, its hydrolysis product, known as "invert sugar" or "golden syrup" is a mixture of glucose and fructose that exists as a viscous, supersaturated, liquid. Clear honey contains carbohydrates which may crystallize over a period of weeks.
Supersaturation may be encountered when attempting to crystallize a protein.
=== Gaseous solute, liquid solvent ===
The solubility of a gas in a liquid increases with increasing gas pressure. When the external pressure is reduced, the excess gas comes out of solution.
Fizzy drinks are made by subjecting the liquid to carbon dioxide, under pressure. In champagne the CO2 is produced naturally in the final stage of fermentation. When the bottle or can is opened some gas is released in the form of bubbles.
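The pressure dependence described above is commonly modelled by Henry's law: at equilibrium, the dissolved concentration is proportional to the gas's partial pressure. A minimal sketch, using an approximate Henry's-law constant for CO2 in water near 25 °C (~0.034 mol/(L·atm); the constant and function name are illustrative assumptions):

```python
K_H_CO2 = 0.034  # mol/(L·atm), rough Henry's law constant for CO2 in water at 25 °C

def dissolved_gas(pressure_atm, k_h=K_H_CO2):
    """Henry's law: equilibrium concentration proportional to partial pressure."""
    return k_h * pressure_atm

sealed = dissolved_gas(3.0)    # fizzy drink sealed under ~3 atm of CO2
ambient = dissolved_gas(4e-4)  # atmospheric CO2 partial pressure, ~0.0004 atm
print(round(sealed, 3))        # 0.102 mol/L
print(ambient)                 # far lower equilibrium value
```

When the bottle is opened, the liquid carbonated at ~3 atm is suddenly supersaturated by orders of magnitude relative to the ambient equilibrium value, and the excess gas escapes as bubbles.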
Release of gas from supersaturated tissues can cause an underwater diver to suffer from decompression sickness (a.k.a. the bends) when returning to the surface. This can be fatal if the released gas obstructs critical blood supplies causing ischaemia in vital tissues.
Dissolved gases can be released during oil exploration when a strike is made. This occurs because the oil in oil-bearing rock is under considerable pressure from the over-lying rock, allowing the oil to be supersaturated with respect to dissolved gases.
=== Liquid formation from a mixture of gases ===
A cloudburst is an extreme form of production of liquid water from a supersaturated mixture of air and water vapour in the atmosphere. Supersaturation in the vapour phase is related to the surface tension of liquids through the Kelvin equation, the Gibbs–Thomson effect and the Poynting effect.
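The Kelvin equation mentioned above can be used to estimate how much vapour supersaturation a droplet of a given radius needs to survive: p/p_sat = exp(2γV_m/(rRT)). A sketch with rough values for water near 20 °C (surface tension and molar volume are assumed, illustrative numbers only):

```python
from math import exp

R = 8.314  # J/(mol·K), molar gas constant

def kelvin_saturation_ratio(radius_m, gamma=0.072, v_molar=1.8e-5, temp=293.0):
    """Kelvin equation: saturation ratio p/p_sat at which a droplet of the
    given radius is in (unstable) equilibrium with its vapour.
    Defaults are rough values for water near 20 degrees C."""
    return exp(2 * gamma * v_molar / (radius_m * R * temp))

for r in (1e-9, 1e-8, 1e-7):
    print(f"r = {r:.0e} m -> p/p_sat = {kelvin_saturation_ratio(r):.3f}")
```

Smaller droplets require a larger saturation ratio, which is why significant supersaturation can persist in clean air until nuclei or surfaces are available.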
The International Association for the Properties of Water and Steam (IAPWS) provides a special equation for the Gibbs free energy in the metastable-vapor region of water in its Revised Release on the IAPWS Industrial Formulation 1997 for the Thermodynamic Properties of Water and Steam. All thermodynamic properties for the metastable-vapor region of water can be derived from this equation by means of the appropriate relations of thermodynamic properties to the Gibbs free energy.
== Measurement ==
When measuring the concentration of a solute in a supersaturated gaseous or liquid mixture, the pressure inside the cuvette may be greater than the ambient pressure. When this is so, a specialized cuvette must be used. The choice of analytical technique will depend on the characteristics of the analyte.
== Applications ==
The characteristics of supersaturation have practical applications in pharmaceuticals. By creating a supersaturated solution of a certain drug, it can be ingested in liquid form. The drug can be driven into a supersaturated state through any normal mechanism and then prevented from precipitating out by adding precipitation inhibitors. Drugs in this state are referred to as "supersaturating drug delivery systems", or "SDDS". Oral consumption of a drug in this form is simple and allows for the measurement of very precise dosages. Primarily, it provides a means for drugs with very low solubility to be made into aqueous solutions. In addition, some drugs can undergo supersaturation inside the body despite being ingested in a crystalline form. This phenomenon is known as in vivo supersaturation.
The identification of supersaturated solutions can be used as a tool for marine ecologists to study the activity of organisms and populations. Photosynthetic organisms release O2 gas into the water. Thus, an area of the ocean supersaturated with O2 gas can likely be determined to be rich with photosynthetic activity. Though some O2 will naturally be found in the ocean due to simple physical chemical properties, upwards of 70% of all oxygen gas found in supersaturated regions can be attributed to photosynthetic activity.
Supersaturation in the vapor phase is usually present in the expansion through steam nozzles that operate with superheated steam at the inlet and transition to the saturated state at the outlet. Supersaturation thus becomes an important factor in the design of steam turbines, as it results in an actual mass flow of steam through the nozzle about 1 to 3% greater than the theoretically calculated value expected if the expanding steam underwent a reversible adiabatic process through equilibrium states. In these cases supersaturation occurs because the expansion develops so rapidly, and in such a short time, that the expanding vapor cannot reach its equilibrium state and behaves as if it were superheated. Hence the determination of the expansion ratio, relevant to the calculation of the mass flow through the nozzle, must be done using an adiabatic index of approximately 1.3, like that of superheated steam, instead of 1.135, the value that would apply to a quasi-static adiabatic expansion in the saturated region.
The study of supersaturation is also relevant to atmospheric studies. Since the 1940s, the presence of supersaturation in the atmosphere has been known. When water is supersaturated in the troposphere, the formation of ice lattices is frequently observed. In a state of saturation, water particles will not form ice under tropospheric conditions. It is not enough for molecules of water to form an ice lattice at saturation pressures; they require a surface to condense onto, or conglomerations of liquid water molecules, to freeze. For these reasons, relative humidities over ice in the atmosphere can be found above 100%, meaning supersaturation has occurred. Supersaturation of water is actually very common in the upper troposphere, occurring between 20% and 40% of the time. This can be determined using satellite data from the Atmospheric Infrared Sounder.
== References == | Wikipedia/Supersaturated |
The upper critical solution temperature (UCST) or upper consolute temperature is the critical temperature above which the components of a mixture are miscible in all proportions. The word upper indicates that the UCST is an upper bound to a temperature range of partial miscibility, or miscibility for certain compositions only. For example, hexane-nitrobenzene mixtures have a UCST of 19 °C (66 °F), so that these two substances are miscible in all proportions above 19 °C (66 °F) but not at lower temperatures.: 185 Examples at higher temperatures are the aniline-water system at 168 °C (334 °F) (at pressures high enough for liquid water to exist at that temperature),: 230 and the lead-zinc system at 798 °C (1,468 °F) (a temperature where both metals are liquid).: 232
A solid state example is the palladium-hydrogen system which has a solid solution phase (H2 in Pd) in equilibrium with a hydride phase (PdHn) below the UCST at 300 °C. Above this temperature there is a single solid solution phase.: 186
In the phase diagram of the mixture components, the UCST is the shared maximum of the concave down spinodal and binodal (or coexistence) curves. The UCST is in general dependent on pressure.
The phase separation at the UCST is in general driven by unfavorable energetics; in particular, interactions between components favor a partially demixed state.
== Polymer-solvent mixtures ==
Some polymer solutions also have a lower critical solution temperature (LCST) or lower bound to a temperature range of partial miscibility. As shown in the diagram, for polymer solutions the LCST is higher than the UCST, so that there is a temperature interval of complete miscibility, with partial miscibility at both higher and lower temperatures.
The UCST and LCST of polymer mixtures generally depend on polymer degree of polymerization and polydispersity.
The seminal statistical mechanical model for the UCST of polymers is the Flory–Huggins solution theory.
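In the Flory–Huggins model, the critical point of the miscibility gap has a closed form: χ_c = ½(1 + 1/√N)² and φ_c = 1/(1 + √N), where N is the number of chain segments. A minimal sketch (function name is an illustrative assumption):

```python
from math import sqrt

def flory_huggins_critical(n_segments):
    """Critical interaction parameter chi_c and polymer volume fraction phi_c
    for a monodisperse polymer of N segments in a small-molecule solvent."""
    chi_c = 0.5 * (1 + 1 / sqrt(n_segments)) ** 2
    phi_c = 1 / (1 + sqrt(n_segments))
    return chi_c, phi_c

# As chain length grows, the critical point moves toward chi = 1/2 and phi = 0:
# long polymers demix at weaker unfavorable interactions, consistent with the
# degree-of-polymerization dependence of the UCST noted above.
for n in (1, 100, 10000):
    chi_c, phi_c = flory_huggins_critical(n)
    print(f"N={n:>5}: chi_c={chi_c:.3f}, phi_c={phi_c:.3f}")
```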
Adding soluble impurities raises the upper critical solution temperature and lowers the lower critical solution temperature.
== See also ==
Lower critical solution temperature – Critical temperature below which components of a mixture are miscible for all compositions
Coil–globule transition – Collapse of a macromolecule from an expanded coil state to a collapsed globule state
== References == | Wikipedia/Upper_critical_solution_temperature |
Molar concentration (also called molarity, amount concentration or substance concentration) is the number of moles of solute per liter of solution. Specifically, it is a measure of the concentration of a chemical species, in particular of a solute in a solution, in terms of amount of substance per unit volume of solution. In chemistry, the most commonly used unit for molarity is moles per liter, with the unit symbol mol/L or mol/dm3 (equal to 1000 mol/m3 in SI units). A solution with a concentration of 1 mol/L is said to be 1 molar, commonly designated as 1 M. Molarity is often depicted with square brackets around the substance of interest; for example, the molarity of the hydrogen ion is depicted as [H+].
== Definition ==
Molar concentration or molarity is most commonly expressed in units of moles of solute per litre of solution. For use in broader applications, it is defined as amount of substance of solute per unit volume of solution, or per unit volume available to the species, represented by lowercase c:
{\displaystyle c={\frac {n}{V}}={\frac {N}{N_{\text{A}}\,V}}={\frac {C}{N_{\text{A}}}}.}
Here, n is the amount of the solute in moles, N is the number of constituent particles present in volume V (in litres) of the solution, and N_A is the Avogadro constant, since 2019 defined as exactly 6.02214076×1023 mol−1. The ratio N/V is the number density C.
In thermodynamics, the use of molar concentration is often not convenient because the volume of most solutions slightly depends on temperature due to thermal expansion. This problem is usually resolved by introducing temperature correction factors, or by using a temperature-independent measure of concentration such as molality.
The reciprocal quantity represents the dilution (volume) which can appear in Ostwald's law of dilution.
=== Formality or analytical concentration ===
If a molecule or salt dissociates in solution, and the concentration refers to the original chemical formula in solution, the molar concentration is sometimes called the formal concentration or formality (FA) or analytical concentration (cA). For example, if a sodium carbonate solution (Na2CO3) has a formal concentration of c(Na2CO3) = 1 mol/L, the molar concentrations are c(Na+) = 2 mol/L and c(CO2−3) = 1 mol/L, because the salt dissociates into these ions.
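The bookkeeping for a fully dissociating salt can be sketched as a small helper (a hypothetical function, for illustration only: each ion's molar concentration is its stoichiometric coefficient times the formal concentration):

```python
def ion_concentrations(formal_conc, stoichiometry):
    """Molar ion concentrations (mol/L) from the formal (analytical)
    concentration of a fully dissociating salt, given the number of each
    ion produced per formula unit."""
    return {ion: nu * formal_conc for ion, nu in stoichiometry.items()}

# Na2CO3 -> 2 Na+ + CO3^2-, at a formal concentration of 1 mol/L
print(ion_concentrations(1.0, {"Na+": 2, "CO3^2-": 1}))
# {'Na+': 2.0, 'CO3^2-': 1.0}
```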
== Units ==
In the International System of Units (SI), the coherent unit for molar concentration is mol/m3. However, most chemical literature traditionally uses mol/dm3, which is the same as mol/L. This traditional unit is often called a molar and denoted by the letter M, for example:
1 mol/m3 = 10−3 mol/dm3 = 10−3 mol/L = 10−3 M = 1 mM = 1 mmol/L.
The SI prefix "mega" has the same symbol, M. However, the prefix is never used alone, so "M" unambiguously denotes molar.
Sub-multiples, such as "millimolar" (mM) and "nanomolar" (nM), consist of the unit preceded by an SI prefix:
== Related quantities ==
=== Number concentration ===
The conversion to number concentration C_i is given by
{\displaystyle C_{i}=c_{i}N_{\text{A}},}
where N_A is the Avogadro constant.
=== Mass concentration ===
The conversion to mass concentration ρ_i is given by
{\displaystyle \rho _{i}=c_{i}M_{i},}
where M_i is the molar mass of constituent i.
=== Mole fraction ===
The conversion to mole fraction x_i is given by
{\displaystyle x_{i}=c_{i}{\frac {\overline {M}}{\rho }},}
where {\displaystyle {\overline {M}}} is the average molar mass of the solution and ρ is the density of the solution.
A simpler relation can be obtained by considering the total molar concentration, namely, the sum of molar concentrations of all the components of the mixture:
{\displaystyle x_{i}={\frac {c_{i}}{c}}={\frac {c_{i}}{\sum _{j}c_{j}}}.}
=== Mass fraction ===
The conversion to mass fraction w_i is given by
{\displaystyle w_{i}=c_{i}{\frac {M_{i}}{\rho }}.}
=== Molality ===
For binary mixtures, the conversion to molality b_2 is
{\displaystyle b_{2}={\frac {c_{2}}{\rho -c_{2}M_{2}}},}
where the solvent is substance 1, and the solute is substance 2; the denominator is the mass of solvent per unit volume of solution.
For solutions with more than one solute, the conversion is
{\displaystyle b_{i}={\frac {c_{i}}{\rho -\sum _{j}c_{j}M_{j}}},}
where the sum runs over all solutes, so that the denominator is again the mass of solvent per unit volume of solution.
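These conversions are easy to check numerically. A sketch for a binary mixture (moles of solute per kilogram of solvent, obtained by subtracting the solute mass from the solution density), with assumed illustrative values for 1.00 mol/L aqueous NaCl: molar mass 58.44 g/mol and solution density roughly 1038 g/L:

```python
def molarity_to_molality(c, molar_mass, density):
    """Binary mixture: b = c / (rho - c*M), moles of solute per kg of solvent.
    Units: c in mol/L, molar_mass in g/mol, density in g/L -> result in mol/kg."""
    return 1000 * c / (density - c * molar_mass)

def molarity_to_mass_fraction(c, molar_mass, density):
    """w = c*M / rho (same units as above; dimensionless result)."""
    return c * molar_mass / density

# Illustrative numbers (assumed, not from the article): 1.00 mol/L aqueous NaCl
print(round(molarity_to_molality(1.00, 58.44, 1038), 3))      # 1.021 mol/kg
print(round(molarity_to_mass_fraction(1.00, 58.44, 1038), 4)) # 0.0563
```

The molality (1.021 mol/kg) is slightly larger than the molarity (1.00 mol/L) because a litre of solution contains a bit less than a kilogram of water once the dissolved salt is accounted for.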
== Properties ==
=== Sum of molar concentrations – normalizing relations ===
The sum of molar concentrations gives the total molar concentration, namely the density of the mixture divided by the molar mass of the mixture or by another name the reciprocal of the molar volume of the mixture. In an ionic solution, ionic strength is proportional to the sum of the molar concentration of salts.
=== Sum of products of molar concentrations and partial molar volumes ===
The sum of products between these quantities equals one:
{\displaystyle \sum _{i}c_{i}{\overline {V_{i}}}=1.}
=== Dependence on volume ===
The molar concentration depends on the variation of the volume of the solution, due mainly to thermal expansion. Over small temperature intervals, the dependence is
{\displaystyle c_{i}={\frac {c_{i,T_{0}}}{1+\alpha \Delta T}},}
where c_{i,T_0} is the molar concentration at a reference temperature and α is the thermal expansion coefficient of the mixture.
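As a numerical illustration of this temperature correction, with an assumed volumetric expansion coefficient typical of dilute aqueous solutions (illustrative values, not from the article):

```python
def molarity_at_temperature(c_ref, alpha, delta_t):
    """c(T) = c_ref / (1 + alpha*dT): concentration falls as the solution expands."""
    return c_ref / (1 + alpha * delta_t)

# Assumed values: a 1.000 mol/L aqueous solution with a volumetric thermal
# expansion coefficient of ~2.1e-4 1/K, warmed by 25 K.
print(round(molarity_at_temperature(1.000, 2.1e-4, 25), 4))  # 0.9948
```

The change is small (about 0.5% here), which is why it is often handled with correction factors, or avoided entirely by using molality.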
== Examples ==
== See also ==
Molality
Normality
Orders of magnitude (molar concentration)
== References ==
== External links ==
Molar Solution Concentration Calculator
Experiment to determine the molar concentration of vinegar by titration | Wikipedia/Molar_solution |
In thermochemistry, the enthalpy of solution (heat of solution or enthalpy of solvation) is the enthalpy change associated with the dissolution of a substance in a solvent at constant pressure resulting in infinite dilution.
The enthalpy of solution is most often expressed in kJ/mol at constant temperature. The energy change can be regarded as being made up of three parts: the endothermic breaking of bonds within the solute and within the solvent, and the formation of attractions between the solute and the solvent. An ideal solution has a null enthalpy of mixing. For a non-ideal solution, it is an excess molar quantity.
== Energetics ==
Dissolution of most gases is exothermic. That is, when a gas dissolves in a liquid solvent, energy is released as heat, warming both the system (i.e. the solution) and the surroundings.
The temperature of the solution eventually decreases to match that of the surroundings. The equilibrium, between the gas as a separate phase and the gas in solution, will by Le Châtelier's principle shift to favour the gas going into solution as the temperature is decreased (decreasing the temperature increases the solubility of a gas).
When a saturated solution of a gas is heated, gas comes out of the solution.
== Steps in dissolution ==
Dissolution can be viewed as occurring in three steps:
Breaking solute–solute attractions (endothermic), for instance, the lattice energy U_latt in salts.
Breaking solvent–solvent attractions (endothermic), for instance, that of hydrogen bonding.
Forming solvent–solute attractions (exothermic), in solvation.
The value of the enthalpy of solvation is the sum of these individual steps:
{\displaystyle \Delta H_{\text{solv}}=\Delta H_{\text{diss}}+U_{\text{latt}}.}
Dissolving ammonium nitrate in water is endothermic. The energy released by the solvation of the ammonium ions and nitrate ions is less than the energy absorbed in breaking up the ammonium nitrate ionic lattice and the attractions between water molecules. Dissolving potassium hydroxide is exothermic, as more energy is released during solvation than is used in breaking up the solute and solvent.
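The sign of the overall enthalpy follows from summing the steps above: an endothermic lattice-breakup term and an exothermic solvation term. A sketch with rough, literature-style magnitudes in kJ/mol (the specific numbers are assumed for illustration, not taken from the article):

```python
def enthalpy_of_solution(lattice_breaking, solvation):
    """Sum of the endothermic lattice-breakup step (positive) and the
    exothermic solvation step (negative), in kJ/mol."""
    return lattice_breaking + solvation

# Rough illustrative values (kJ/mol):
nacl = enthalpy_of_solution(+786, -783)  # slightly endothermic, like NaCl
koh = enthalpy_of_solution(+789, -846)   # strongly exothermic, like KOH
print(nacl, koh)  # 3 -57
```

Two large terms of opposite sign nearly cancel, so the net heat of solution is small compared with either step, and its sign can go either way.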
== Expressions in differential or integral form ==
The expressions of the enthalpy change of dissolution can be differential or integral, as a function of the ratio of amounts of solute-solvent.
The molar differential enthalpy change of dissolution is
{\displaystyle \Delta _{\text{diss}}^{\text{d}}H=\left({\frac {\partial \Delta _{\text{diss}}H}{\partial \Delta n_{i}}}\right)_{T,p,n_{B}},}
where ∂Δn_i is the infinitesimal variation, or differential, of the mole number of the solute during dissolution.
The integral heat of dissolution is defined as a process of obtaining a certain amount of solution with a final concentration. The enthalpy change in this process, normalized by the mole number of solute, is evaluated as the molar integral heat of dissolution. Mathematically, the molar integral heat of dissolution is denoted as
{\displaystyle \Delta _{\text{diss}}^{\text{i}}H={\frac {\Delta _{\text{diss}}H}{n_{B}}}.}
The prime heat of dissolution is the differential heat of dissolution for obtaining an infinitely diluted solution.
== Dependence on the nature of the solution ==
The enthalpy of mixing of an ideal solution is zero by definition, but the enthalpy of dissolution of nonelectrolytes has the value of the enthalpy of fusion or vaporisation. For non-ideal solutions of electrolytes it is connected to the activity coefficient of the solute(s) and the temperature derivative of the relative permittivity through the following formula:
{\displaystyle H_{\text{dil}}=\sum _{i}\nu _{i}RT\ln \gamma _{i}\left(1+{\frac {T}{\epsilon }}{\frac {\partial \epsilon }{\partial T}}\right).}
== See also ==
Apparent molar property
Enthalpy of mixing
Heat of dilution
Heat of melting
Hydration energy
Lattice energy
Law of dilution
Solvation
Thermodynamic activity
Solubility equilibrium
== References ==
== External links ==
phase diagram | Wikipedia/Enthalpy_of_solution |
Ascalaph Designer is a computer program for general-purpose molecular modelling, molecular design, and simulations. It provides a graphical environment for common programs of quantum and classical molecular modelling: ORCA, NWChem, Firefly, CP2K and MDynaMix. The molecular mechanics calculations cover model building, energy optimizations and molecular dynamics. Firefly (formerly named PC GAMESS) covers a wide range of quantum chemistry methods. Ascalaph Designer is free and open-source software, released under the GNU General Public License, version 2 (GPLv2).
== Key features ==
== Uses ==
== See also ==
List of software for molecular mechanics modeling
Molecular design software
Molecule editor
Abalone
== References ==
== External links ==
Official website
SourceForge
Twitter | Wikipedia/Ascalaph_Designer |
MacroModel is a computer program for molecular modelling of organic compounds and biopolymers. It features various chemistry force fields, plus energy minimizing algorithms, to predict geometry and relative conformational energies of molecules. MacroModel is maintained by Schrödinger, LLC.
It performs simulations in the framework of classical mechanics, also termed molecular mechanics, and can perform molecular dynamics simulations to model systems at finite temperatures using stochastic dynamics and mixed Monte Carlo algorithms. MacroModel supports Windows, Linux, macOS, Silicon Graphics (SGI) IRIX, and IBM AIX.
The MacroModel software package was first described in the scientific literature in 1990, and was subsequently acquired by Schrödinger, Inc. in 2000.
== Key features ==
== Known version history ==
2013: version 10.0
2012: version 9.9.2
2011: version 9.9.1
2010: version 9.8
2009: version 9.7
2008: version 9.6
2007: version 9.5
2006: version 9.1
2005: version 9.0
2004: version 8.5
2003: version 8.1
== See also ==
== References ==
== External links ==
Official website | Wikipedia/MacroModel |
Biochemical and Organic Simulation System (BOSS) is a general-purpose molecular modeling program that performs molecular mechanics calculations, Metropolis Monte Carlo statistical mechanics simulations, and semiempirical Austin Model 1 (AM1), PM3, and PDDG/PM3 quantum mechanics calculations. The molecular mechanics calculations cover energy minimizations, normal mode analysis and conformational searching with the Optimized Potentials for Liquid Simulations (OPLS) force fields. BOSS is developed by Prof. William L. Jorgensen at Yale University, and distributed commercially by Cemcomco, LLC and Schrödinger, Inc.
== Key features ==
OPLS force field inventor
Geometry optimization
Semiempirical quantum chemistry
MC simulations for pure liquids, solutions, clusters or gas-phase systems
Free energies are computed from statistical perturbation (free energy perturbation (FEP)) theory
TIP3P, TIP4P, and TIP5P water models
== See also ==
== References ==
== External links ==
Official website | Wikipedia/BOSS_(molecular_mechanics) |
Internal Coordinate Mechanics (ICM) is a software program and algorithm to predict low-energy conformations of molecules by sampling the space of internal coordinates (bond lengths, bond angles and dihedral angles) defining molecular geometry. In ICM each molecule is constructed as a tree from an entry atom, where each next atom is built iteratively from the preceding three atoms via three internal variables. Rings are kept rigid or imposed via additional restraints. ICM is used for modelling peptides and their interactions with substrates and coenzymes.
== Software ==
ICM also is a programming environment for various tasks in computational chemistry and computational structural biology, sequence analysis and rational drug design. The original goal was to develop algorithms for energy optimization of several biopolymers with respect to an arbitrary subset of internal coordinates such as bond lengths, bond angles torsion angles and phase angles. The efficient and general global optimization method which evolved from the original ICM method is still the central piece of the program. It is this basic algorithm which is used for peptide prediction, homology modeling and loop simulations, flexible macromolecular docking and energy refinement. However the complexity of problems related to structure prediction and analysis, as well as the desire for perfection, compactness and consistency, led to the program's expansion into neighboring areas such as graphics, chemistry, sequence analysis and database searches, mathematics, statistics and plotting.
The original meaning became too narrow, but the name was kept. The current integrated ICM shell contains hundreds of variables, functions, commands, database and web tools, novel algorithms for structure prediction and analysis into a powerful, yet compact program which is still called ICM. The seven principal areas are centered on a general core of shell-language and data analysis and visualization.
== References ==
Abagyan, R.A. and Totrov, M.M. Biased Probability Monte Carlo Conformational Searches and Electrostatic Calculations For Peptides and Proteins J. Mol. Biol., 235, 983–1002, 1994. PMID 8289329
Abagyan, R.A., Totrov, M.M., and Kuznetsov, D.A. ICM: A New Method For Protein Modeling and Design: Applications To Docking and Structure Prediction From The Distorted Native Conformation. J. Comput. Chem., 15, 488–506, 1994. doi:10.1002/jcc.540150503
Totrov, M.M. and Abagyan, R.A. Efficient Parallelization of The Energy, Surface and Derivative Calculations For internal Coordinate Mechanics. J. Comput. Chem., 15, 1105–1112, 1994. doi:10.1002/jcc.540151006
== External links ==
www.molsoft.com
iSee (interactive Structurally enhanced experience)
EDS (Electron Density Server) Archived 2017-07-02 at the Wayback Machine (ICM supports electron density visualization) | Wikipedia/Internal_Coordinate_Mechanics |
In the context of chemistry, molecular physics, physical chemistry, and molecular modelling, a force field is a computational model that is used to describe the forces between atoms (or collections of atoms) within molecules or between molecules as well as in crystals. Force fields are a variety of interatomic potentials. More precisely, the force field refers to the functional form and parameter sets used to calculate the potential energy of a system on the atomistic level. Force fields are usually used in molecular dynamics or Monte Carlo simulations. The parameters for a chosen energy function may be derived from classical laboratory experiment data, calculations in quantum mechanics, or both. Force fields utilize the same concept as force fields in classical physics, with the main difference being that the force field parameters in chemistry describe the energy landscape on the atomistic level. From a force field, the acting forces on every particle are derived as a gradient of the potential energy with respect to the particle coordinates.
A large number of different force field types exist today (e.g. for organic molecules, ions, polymers, minerals, and metals). Depending on the material, different functional forms are usually chosen for the force fields since different types of atomistic interactions dominate the material behavior.
There are various criteria that can be used for categorizing force field parametrization strategies. An important differentiation is between 'component-specific' and 'transferable' force fields. For a component-specific parametrization, the considered force field is developed solely for describing a single given substance (e.g. water). For a transferable force field, all or some parameters are designed as building blocks and become transferable/applicable to different substances (e.g. methyl groups in alkane transferable force fields). A different important differentiation addresses the physical structure of the models: All-atom force fields provide parameters for every type of atom in a system, including hydrogen, while united-atom interatomic potentials treat the hydrogen and carbon atoms in methyl groups and methylene bridges as one interaction center. Coarse-grained potentials, which are often used in long-time simulations of macromolecules such as proteins, nucleic acids, and multi-component complexes, sacrifice chemical details for higher computing efficiency.
== Force fields for molecular systems ==
The basic functional form of potential energy for modeling molecular systems includes intramolecular interaction terms for interactions of atoms that are linked by covalent bonds, and intermolecular (i.e. nonbonded, also termed noncovalent) terms that describe the long-range electrostatic and van der Waals forces. The specific decomposition of the terms depends on the force field, but a general form for the total energy in an additive force field can be written as
{\displaystyle E_{\text{total}}=E_{\text{bonded}}+E_{\text{nonbonded}}}
where the components of the covalent and noncovalent contributions are given by the following summations:
{\displaystyle E_{\text{bonded}}=E_{\text{bond}}+E_{\text{angle}}+E_{\text{dihedral}}}
{\displaystyle E_{\text{nonbonded}}=E_{\text{electrostatic}}+E_{\text{van der Waals}}}
The bond and angle terms are usually modeled by quadratic energy functions that do not allow bond breaking. A more realistic description of a covalent bond at higher stretching is provided by the more expensive Morse potential. The functional form for dihedral energy is variable from one force field to another. Additionally, "improper torsional" terms may be added to enforce the planarity of aromatic rings and other conjugated systems, as well as "cross-terms" that describe the coupling of different internal variables, such as angles and bond lengths. Some force fields also include explicit terms for hydrogen bonds.
The nonbonded terms are computationally most intensive. A popular choice is to limit interactions to pairwise energies. The van der Waals term is usually computed with a Lennard-Jones potential or the Mie potential and the electrostatic term with Coulomb's law. However, both can be buffered or scaled by a constant factor to account for electronic polarizability. A large number of force fields based on this or similar energy expressions have been proposed in the past decades for modeling different types of materials such as molecular substances, metals, glasses etc. - see below for a comprehensive list of force fields.
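As a concrete illustration of this pairwise energy expression, the sketch below sums 12-6 Lennard-Jones and Coulomb contributions over all unique atom pairs. The parameter values and function names are illustrative, not taken from any particular force field; real parameter sets tabulate epsilon, sigma, and charges per atom type.

```python
import math

COULOMB_K = 332.0636  # 1/(4*pi*eps0) in kcal*Angstrom/(mol*e^2), MD-style units

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy (van der Waals term)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def coulomb_energy(r, qi, qj):
    """Coulomb pair energy between two point charges (in units of e)."""
    return COULOMB_K * qi * qj / r

def nonbonded_energy(coords, charges, epsilon, sigma):
    """E_nonbonded = E_electrostatic + E_vdW, summed over unique pairs."""
    n = len(coords)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            e += lj_energy(r, epsilon, sigma) + coulomb_energy(r, charges[i], charges[j])
    return e
```

Production codes avoid the O(N²) double loop shown here via cutoffs, neighbor lists, and Ewald-type methods for the long-range electrostatics.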
=== Bond stretching ===
As it is rare for bonds to deviate significantly from their equilibrium values, the most simplistic approaches utilize a Hooke's law formula:
{\displaystyle E_{\text{bond}}={\frac {k_{ij}}{2}}(l_{ij}-l_{0,ij})^{2},}
where {\displaystyle k_{ij}} is the force constant, {\displaystyle l_{ij}} is the bond length, and {\displaystyle l_{0,ij}} is the value for the bond length between atoms {\displaystyle i} and {\displaystyle j} when all other terms in the force field are set to 0. The term {\displaystyle l_{0,ij}} is at times differently defined or taken at different thermodynamic conditions.
The bond stretching constant {\displaystyle k_{ij}} can be determined from the experimental infrared spectrum, Raman spectrum, or high-level quantum-mechanical calculations. The constant {\displaystyle k_{ij}} determines vibrational frequencies in molecular dynamics simulations. The stronger the bond is between atoms, the higher is the value of the force constant, and the higher the wavenumber (energy) in the IR/Raman spectrum.
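The link between force constant and observed wavenumber can be made concrete with the harmonic-oscillator relation for a diatomic bond, wavenumber = (1/2*pi*c)*sqrt(k/mu). The numbers below are illustrative, chosen to roughly match a C-H stretch:

```python
import math

C_CM_PER_S = 2.99792458e10   # speed of light in cm/s
AMU_KG = 1.66053907e-27      # atomic mass unit in kg
NEWTON_PER_M_PER_MDYN_A = 100.0  # 1 mdyn/Angstrom = 100 N/m

def wavenumber(k_mdyn_per_a, m1_amu, m2_amu):
    """Harmonic vibrational wavenumber (cm^-1) of a diatomic bond.

    A stronger bond (larger k) or lighter atoms (smaller reduced mass)
    give a higher wavenumber, as stated in the text.
    """
    k = k_mdyn_per_a * NEWTON_PER_M_PER_MDYN_A          # force constant, N/m
    mu = m1_amu * m2_amu / (m1_amu + m2_amu) * AMU_KG   # reduced mass, kg
    return math.sqrt(k / mu) / (2.0 * math.pi * C_CM_PER_S)
```

With an assumed k of about 5 mdyn/Å for a C-H bond, this formula lands near the ~3000 cm⁻¹ region where C-H stretches are observed in IR spectra.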
Though the formula of Hooke's law provides a reasonable level of accuracy at bond lengths near the equilibrium distance, it is less accurate as one moves away. In order to model the Morse curve better, one could employ cubic and higher powers. However, for most practical applications these differences are negligible, and inaccuracies in predictions of bond lengths are on the order of a thousandth of an angstrom, which is also the limit of reliability for common force fields. A Morse potential can be employed instead to enable bond breaking and higher accuracy, even though it is less efficient to compute. For reactive force fields, bond breaking and bond orders are additionally considered.
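The two functional forms can be compared directly. With the curvature-matching choice k = 2·D_e·a² (the second derivative of the Morse potential at the equilibrium length), harmonic and Morse energies agree near equilibrium, while only the Morse form plateaus at the dissociation energy. All parameter values below are illustrative:

```python
import math

def harmonic_bond(l, k, l0):
    """Hooke's-law bond energy (k/2)(l - l0)^2; grows without bound,
    so the bond can never break."""
    return 0.5 * k * (l - l0) ** 2

def morse_bond(l, d_e, a, l0):
    """Morse potential D_e*(1 - exp(-a*(l - l0)))^2; plateaus at the
    dissociation energy D_e, allowing bond breaking, at the cost of
    an exponential evaluation per bond."""
    return d_e * (1.0 - math.exp(-a * (l - l0))) ** 2
```

For a small stretch the two curves differ by only a few percent; at large separation the harmonic energy keeps growing quadratically while the Morse energy stays below D_e.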
=== Electrostatic interactions ===
Electrostatic interactions are represented by a Coulomb energy, which utilizes atomic charges {\displaystyle q_{i}} to represent chemical bonding ranging from covalent to polar covalent and ionic bonding. The typical formula is the Coulomb law:
{\displaystyle E_{\text{Coulomb}}={\frac {1}{4\pi \varepsilon _{0}}}{\frac {q_{i}q_{j}}{r_{ij}}},}
where {\displaystyle r_{ij}} is the distance between two atoms {\displaystyle i} and {\displaystyle j}. The total Coulomb energy is a sum over all pairwise combinations of atoms and usually excludes pairs of atoms separated by one bond (1,2 pairs), by two bonds (1,3 pairs), and often by three bonds (1,4 pairs).
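These bonded-pair exclusions can be derived from the bond list alone. The sketch below (a simplified illustration, not any particular simulation package's implementation) walks the bond graph to collect the 1,2-, 1,3-, and 1,4-pairs:

```python
def exclusions(n_atoms, bonds):
    """Collect 1,2-, 1,3- and 1,4-pairs from a covalent bond list.

    bonds: iterable of (i, j) atom index pairs. Returns a set of
    frozensets; these pairs are excluded from (or, for 1,4-pairs,
    often only scaled in) the Coulomb and van der Waals sums.
    """
    adj = {i: set() for i in range(n_atoms)}
    for i, j in bonds:
        adj[i].add(j)
        adj[j].add(i)

    excl = set()
    for i in range(n_atoms):
        for j in adj[i]:                     # 1,2-neighbours
            excl.add(frozenset((i, j)))
            for k in adj[j] - {i}:           # 1,3-neighbours
                excl.add(frozenset((i, k)))
                for l in adj[k] - {i, j}:    # 1,4-neighbours
                    excl.add(frozenset((i, l)))
    return excl
```

In a butane-like chain 0-1-2-3, every pair falls into one of the three classes, so all six pairs are excluded; a fifth atom introduces a 1,5-pair that stays in the nonbonded sum. Many force fields scale rather than fully remove the 1,4 interactions.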
Atomic charges can make dominant contributions to the potential energy, especially for polar molecules and ionic compounds, and are critical to simulate the geometry, interaction energy, and the reactivity. The assignment of charges usually uses some heuristic approach, with different possible solutions.
== Force fields for crystal systems ==
Atomistic interactions in crystal systems significantly deviate from those in molecular systems, e.g. of organic molecules. In crystal systems, multi-body interactions in particular are important and cannot be neglected if high accuracy of the force field is the aim. For crystal systems with covalent bonding, bond order potentials are usually used, e.g. Tersoff potentials. For metal systems, usually embedded atom potentials are used. Additionally, Drude model potentials have been developed, which describe a form of attachment of electrons to nuclei.
== Parameterization ==
In addition to the functional form of the potentials, a force field consists of the parameters of these functions. Together, they specify the interactions on the atomistic level. The parametrization, i.e. determining the parameter values, is crucial for the accuracy and reliability of the force field. Different parametrization procedures have been developed for different substances, e.g. metals, ions, and molecules, and for different material types, usually different parametrization strategies are used. In general, two main types of parametrization can be distinguished: either using data/information from the atomistic level, e.g. from quantum mechanical calculations or spectroscopic data, or using data on macroscopic properties, e.g. the hardness or compressibility of a given material. Often a combination of these routes is used. Hence, one way or the other, the force field parameters are always determined in an empirical way. Nevertheless, the term 'empirical' is often used in the context of force field parameters when macroscopic material property data was used for the fitting. Experimental data (microscopic and macroscopic) included in the fit comprise, for example, the enthalpy of vaporization, the enthalpy of sublimation, dipole moments, and various spectroscopic properties such as vibrational frequencies. Often, for molecular systems, quantum mechanical calculations in the gas phase are used for parametrizing intramolecular interactions, while intermolecular dispersive interactions are parametrized using macroscopic properties such as liquid densities. The assignment of atomic charges often follows quantum mechanical protocols with some heuristics, which can lead to significant deviations in representing specific properties.
A large number of workflows and parametrization procedures have been employed in the past decades using different data and optimization strategies for determining the force field parameters. They differ significantly, which is also due to different focuses of different developments. The parameters for molecular simulations of biological macromolecules such as proteins, DNA, and RNA were often derived/transferred from observations for small organic molecules, which are more accessible for experimental studies and quantum calculations.
Atom types are defined for different elements as well as for the same elements in sufficiently different chemical environments. For example, oxygen atoms in water and oxygen atoms in a carbonyl functional group are classified as different force field types. Typical molecular force field parameter sets include values for atomic mass, atomic charge, Lennard-Jones parameters for every atom type, as well as equilibrium values of bond lengths, bond angles, and dihedral angles. The bonded terms refer to pairs, triplets, and quadruplets of bonded atoms, and include values for the effective spring constant for each potential.
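Such a parameter set can be pictured as a small collection of typed records. The classes and numeric values below are an illustrative sketch, loosely in the style of common biomolecular force fields, not an authoritative parameter file:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomType:
    name: str        # force field atom type label
    mass: float      # atomic mass, amu
    charge: float    # atomic charge, e
    epsilon: float   # Lennard-Jones well depth, kcal/mol
    sigma: float     # Lennard-Jones size, Angstrom

@dataclass(frozen=True)
class BondType:
    types: tuple     # pair of atom type names
    k: float         # effective spring constant, kcal/(mol*Angstrom^2)
    l0: float        # equilibrium bond length, Angstrom

# The same element gets distinct types in different chemical environments:
# a water oxygen and a carbonyl oxygen carry different charges and LJ terms.
O_WATER = AtomType("OW", 15.999, -0.834, 0.152, 3.151)   # illustrative values
O_CARBONYL = AtomType("O", 15.999, -0.500, 0.210, 2.960)  # illustrative values
CO_BOND = BondType(("C", "O"), 570.0, 1.229)              # illustrative values
```

A real parameter file additionally lists angle, dihedral, and improper terms keyed by triplets and quadruplets of these type names.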
Heuristic force field parametrization procedures have been very successful for many years, but recently criticized since they are usually not fully automated and therefore subject to some subjectivity of the developers, which also brings problems regarding the reproducibility of the parametrization procedure.
Efforts to provide open source codes and methods include openMM and openMD. The use of semi-automation or full automation, without input from chemical knowledge, is likely to increase inconsistencies at the level of atomic charges and in the assignment of the remaining parameters, and to dilute the interpretability and performance of the parameters.
== Force field databases ==
A large number of force fields have been published in the past decades - mostly in scientific publications. In recent years, some databases have attempted to collect, categorize and make force fields digitally available. Therein, different databases focus on different types of force fields. For example, the openKim database focuses on interatomic functions describing the individual interactions between specific elements. The TraPPE database focuses on transferable force fields of organic molecules (developed by the Siepmann group). The MolMod database focuses on molecular and ionic force fields (both component-specific and transferable).
== Transferability and mixing function types ==
Functional forms and parameter sets have been defined by the developers of interatomic potentials and feature variable degrees of self-consistency and transferability. When functional forms of the potential terms vary or are mixed, the parameters from one interatomic potential function can typically not be used together with another interatomic potential function. In some cases, modifications can be made with minor effort, for example, from 9-6 Lennard-Jones potentials to 12-6 Lennard-Jones potentials. Transfers from Buckingham potentials to harmonic potentials, or from Embedded Atom Models to harmonic potentials, on the contrary, would require many additional assumptions and may not be possible.
In many cases, force fields can be straightforwardly combined. Yet, often, additional specifications and assumptions are required.
== Limitations ==
All interatomic potentials are based on approximations and experimental data, therefore often termed empirical. The performance varies from higher accuracy than density functional theory (DFT) calculations, with access to million times larger systems and time scales, to random guesses depending on the force field. The use of accurate representations of chemical bonding, combined with reproducible experimental data and validation, can lead to lasting interatomic potentials of high quality with much fewer parameters and assumptions in comparison to DFT-level quantum methods.
Possible limitations include atomic charges, also called point charges. Most force fields rely on point charges to reproduce the electrostatic potential around molecules, which works less well for anisotropic charge distributions. The remedy is that point charges have a clear interpretation and virtual electrons can be added to capture essential features of the electronic structure, such additional polarizability in metallic systems to describe the image potential, internal multipole moments in π-conjugated systems, and lone pairs in water. Electronic polarization of the environment may be better included by using polarizable force fields or using a macroscopic dielectric constant. However, application of one value of dielectric constant is a coarse approximation in the highly heterogeneous environments of proteins, biological membranes, minerals, or electrolytes.
All types of van der Waals forces are also strongly environment-dependent because these forces originate from interactions of induced and "instantaneous" dipoles (see Intermolecular force). The original Fritz London theory of these forces applies only in a vacuum. A more general theory of van der Waals forces in condensed media was developed by A. D. McLachlan in 1963 and included the original London approach as a special case. The McLachlan theory predicts that van der Waals attractions in media are weaker than in vacuum and follow the like dissolves like rule, which means that different types of atoms interact more weakly than identical types of atoms. This is in contrast to the combinatorial rules or the Slater-Kirkwood equation applied for the development of the classical force fields. The combinatorial rules state that the interaction energy of two dissimilar atoms (e.g., C...N) is an average of the interaction energies of the corresponding identical atom pairs (i.e., C...C and N...N). According to McLachlan's theory, the interactions of particles in media can even be fully repulsive, as observed for liquid helium; however, the lack of vaporization and the presence of a freezing point contradict a theory of purely repulsive interactions. Measurements of attractive forces between different materials (Hamaker constant) have been explained by Jacob Israelachvili. For example, "the interaction between hydrocarbons across water is about 10% of that across vacuum". Such effects are represented in molecular dynamics through pairwise interactions that are spatially more dense in the condensed phase relative to the gas phase and reproduced once the parameters for all phases are validated to reproduce chemical bonding, density, and cohesive/surface energy.
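The combinatorial rules mentioned above are simple averages. The widely used Lorentz-Berthelot form takes the arithmetic mean of the sizes and the geometric mean of the well depths, so the unlike-pair well depth always lies between the two like-pair values, in contrast to the weaker unlike interactions that McLachlan theory predicts in media. A minimal sketch:

```python
import math

def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j):
    """Lorentz-Berthelot combination rules for an unlike Lennard-Jones pair:
    geometric mean for the well depth epsilon, arithmetic mean for the
    size parameter sigma."""
    return math.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)
```

The parameter values in the test are hypothetical; the point is the structural property that the combined epsilon never falls below both pure-component values, which is exactly what McLachlan-type condensed-media effects would require.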
Limitations have been strongly felt in protein structure refinement. The major underlying challenge is the huge conformation space of polymeric molecules, which grows beyond current computational feasibility when containing more than ~20 monomers. Participants in the Critical Assessment of protein Structure Prediction (CASP) did not try to refine their models to avoid "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Force fields have been applied successfully for protein structure refinement in different X-ray crystallography and NMR spectroscopy applications, especially using the program XPLOR. However, the refinement is driven mainly by a set of experimental constraints, and the interatomic potentials serve mainly to remove interatomic hindrances. The results of calculations were practically the same with rigid sphere potentials implemented in the program DYANA (calculations from NMR data), or with programs for crystallographic refinement that use no energy functions at all. These shortcomings are related to interatomic potentials and to the inability to sample the conformation space of large molecules effectively. The development of parameters to tackle such large-scale problems therefore also requires new approaches. A specific problem area is homology modeling of proteins. Meanwhile, alternative empirical scoring functions have been developed for ligand docking, protein folding, homology model refinement, computational protein design, and modeling of proteins in membranes.
It was also argued that some protein force fields operate with energies that are irrelevant to protein folding or ligand binding. The parameters of protein force fields reproduce the enthalpy of sublimation, i.e., the energy of evaporation of molecular crystals. However, protein folding and ligand binding are thermodynamically closer to crystallization, or liquid-solid transitions, as these processes represent freezing of mobile molecules in condensed media. Thus, free energy changes during protein folding or ligand binding are expected to represent a combination of an energy similar to the heat of fusion (energy absorbed during melting of molecular crystals), a conformational entropy contribution, and solvation free energy. The heat of fusion is significantly smaller than the enthalpy of sublimation. Hence, the potentials describing protein folding or ligand binding need more consistent parameterization protocols, e.g., as described for IFF. Indeed, the energies of H-bonds in proteins are ~ −1.5 kcal/mol when estimated from protein engineering or alpha helix to coil transition data, but the same energies estimated from the sublimation enthalpy of molecular crystals were −4 to −6 kcal/mol, which is related to re-forming existing hydrogen bonds and not forming hydrogen bonds from scratch. The depths of modified Lennard-Jones potentials derived from protein engineering data were also smaller than in typical potential parameters and followed the like dissolves like rule, as predicted by McLachlan theory.
== Force fields available in literature ==
Different force fields are designed for different purposes:
=== Classical ===
AMBER (Assisted Model Building and Energy Refinement) – widely used for proteins and DNA.
CFF (Consistent Force Field) – a family of force fields adapted to a broad variety of organic compounds, includes force fields for polymers, metals, etc. CFF was developed by Arieh Warshel, Lifson, and coworkers as a general method for unifying studies of energies, structures, and vibration of general molecules and molecular crystals. The CFF program, developed by Levitt and Warshel, is based on the Cartesian representation of all the atoms, and it served as the basis for many subsequent simulation programs.
CHARMM (Chemistry at HARvard Molecular Mechanics) – originally developed at Harvard, widely used for both small molecules and macromolecules
COSMOS-NMR – hybrid QM/MM force field adapted to various inorganic compounds, organic compounds, and biological macromolecules, including semi-empirical calculation of atomic charges NMR properties. COSMOS-NMR is optimized for NMR-based structure elucidation and implemented in COSMOS molecular modelling package.
CVFF – also used broadly for small molecules and macromolecules.
ECEPP – first force field for polypeptide molecules - developed by F.A. Momany, H.A. Scheraga and colleagues. ECEPP was developed specifically for the modeling of peptides and proteins. It uses fixed geometries of amino acid residues to simplify the potential energy surface. Thus, the energy minimization is conducted in the space of protein torsion angles. Both MM2 and ECEPP include potentials for H-bonds and torsion potentials for describing rotations around single bonds. ECEPP/3 was implemented (with some modifications) in Internal Coordinate Mechanics and FANTOM.
GROMOS (GROningen MOlecular Simulation) – a force field that comes as part of the GROMOS software, a general-purpose molecular dynamics computer simulation package for the study of biomolecular systems. GROMOS force field A-version has been developed for application to aqueous or apolar solutions of proteins, nucleotides, and sugars. A B-version to simulate gas phase isolated molecules is also available.
IFF (Interface Force Field) – covers metals, minerals, 2D materials, and polymers. It uses 12-6 LJ and 9-6 LJ interactions. IFF was developed for compounds across the periodic table. It assigns consistent charges, utilizes standard conditions as a reference state, reproduces structures, energies, and energy derivatives, and quantifies limitations for all included compounds. The Interface force field (IFF) assumes a single energy expression for all compounds across the periodic table (with 9-6 and 12-6 LJ options). IFF is in most parts non-polarizable, but also comprises polarizable parts, e.g. for some metals (Au, W) and pi-conjugated molecules.
MMFF (Merck Molecular Force Field) – developed at Merck for a broad range of molecules.
MM2 was developed by Norman Allinger mainly for conformational analysis of hydrocarbons and other small organic molecules. It is designed to reproduce the equilibrium covalent geometry of molecules as precisely as possible. It implements a large set of parameters that is continuously refined and updated for many different classes of organic compounds (MM3 and MM4).
OPLS (Optimized Potential for Liquid Simulations) (variants include OPLS-AA, OPLS-UA, OPLS-2001, OPLS-2005, OPLS3e, OPLS4) – developed by William L. Jorgensen at the Yale University Department of Chemistry.
QCFF/PI – A general force field for conjugated molecules.
UFF (Universal Force Field) – A general force field with parameters for the full periodic table up to and including the actinoids, developed at Colorado State University. The reliability is known to be poor due to lack of validation and interpretation of the parameters for nearly all claimed compounds, especially metals and inorganic compounds.
=== Polarizable ===
Several force fields explicitly capture polarizability, where a particle's effective charge can be influenced by electrostatic interactions with its neighbors. Core-shell models are common, which consist of a positively charged core particle, representing the polarizable atom, and a negatively charged particle attached to the core atom through a spring-like harmonic oscillator potential. Recent examples include polarizable models with virtual electrons that reproduce image charges in metals and polarizable biomolecular force fields.
AMBER – polarizable force field developed by Jim Caldwell and coworkers.
AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) – force field developed by Pengyu Ren (University of Texas at Austin) and Jay W. Ponder (Washington University). AMOEBA force field is gradually moving to more physics-rich AMOEBA+.
CHARMM – polarizable force field developed by S. Patel (University of Delaware) and C. L. Brooks III (University of Michigan). Based on the classical Drude oscillator developed by Alexander MacKerell (University of Maryland, Baltimore) and Benoit Roux (University of Chicago).
CFF/ind and ENZYMIX – The first polarizable force field which has subsequently been used in many applications to biological systems.
COSMOS-NMR (Computer Simulation of Molecular Structure) – developed by Ulrich Sternberg and coworkers. Hybrid QM/MM force field enables explicit quantum-mechanical calculation of electrostatic properties using localized bond orbitals with fast BPT formalism. Atomic charge fluctuation is possible in each molecular dynamics step.
DRF90 – developed by P. Th. van Duijnen and coworkers.
NEMO (Non-Empirical Molecular Orbital) – procedure developed by Gunnar Karlström and coworkers at Lund University (Sweden)
PIPF – The polarizable intermolecular potential for fluids is an induced point-dipole force field for organic liquids and biopolymers. The molecular polarization is based on Thole's interacting dipole (TID) model and was developed by Jiali Gao at the University of Minnesota.
Polarizable Force Field (PFF) – developed by Richard A. Friesner and coworkers.
SP-basis Chemical Potential Equalization (CPE) – approach developed by R. Chelli and P. Procacci.
PHAST – polarizable potential developed by Chris Cioce and coworkers.
ORIENT – procedure developed by Anthony J. Stone (Cambridge University) and coworkers.
Gaussian Electrostatic Model (GEM) – a polarizable force field based on Density Fitting developed by Thomas A. Darden and G. Andrés Cisneros at NIEHS; and Jean-Philip Piquemal at Paris VI University.
Atomistic Polarizable Potential for Liquids, Electrolytes, and Polymers (APPLE&P) – developed by Oleg Borodin, Dmitry Bedrov and coworkers, and distributed by Wasatch Molecular Incorporated.
Polarizable procedure based on the Kim-Gordon approach developed by Jürg Hutter and coworkers (University of Zürich)
GFN-FF (Geometry, Frequency, and Noncovalent Interaction Force-Field) – a completely automated partially polarizable generic force-field for the accurate description of structures and dynamics of large molecules across the periodic table developed by Stefan Grimme and Sebastian Spicher at the University of Bonn.
WASABe v1.0 PFF (for Water, orgAnic Solvents, And Battery electrolytes) – an isotropic atomic dipole polarizable force field for the accurate description of battery electrolytes in terms of thermodynamic and dynamic properties at high lithium salt concentrations in sulfonate solvent, developed by Oleg Starovoytov.
XED (eXtended Electron Distribution) - a polarizable force-field created as a modification of an atom-centered charge model, developed by Andy Vinter. Partially charged monopoles are placed surrounding atoms to simulate more geometrically accurate electrostatic potentials at a fraction of the expense of using quantum mechanical methods. Primarily used by software packages supplied by Cresset Biomolecular Discovery.
=== Reactive ===
EVB (Empirical valence bond) – reactive force field introduced by Warshel and coworkers for use in modeling chemical reactions in different environments. The EVB facilitates calculating activation free energies in condensed phases and in enzymes.
ReaxFF – reactive force field (interatomic potential) developed by Adri van Duin, William Goddard and coworkers. It is slower than classical MD (50x), needs parameter sets with specific validation, and has no validation for surface and interfacial energies. Parameters are non-interpretable. It can be used for atomistic-scale dynamical simulations of chemical reactions. Parallelized ReaxFF allows reactive simulations on >>1,000,000 atoms on large supercomputers.
=== Coarse-grained ===
DPD (Dissipative particle dynamics) – This is a method commonly applied in chemical engineering. It is typically used for studying the hydrodynamics of various simple and complex fluids which require consideration of time and length scales larger than those accessible to classical Molecular dynamics. The potential was originally proposed by Hoogerbrugge and Koelman, with later modifications by Español and Warren. The current state of the art was well documented in a CECAM workshop in 2008. Recently, work has been undertaken to capture some of the chemical subtleties relevant to solutions. This has led to work considering automated parameterisation of the DPD interaction potentials against experimental observables.
MARTINI – a coarse-grained potential developed by Marrink and coworkers at the University of Groningen, initially developed for molecular dynamics simulations of lipids, later extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parameterized with the aim of reproducing thermodynamic properties.
SAFT – A top-down coarse-grained model developed in the Molecular Systems Engineering group at Imperial College London fitted to liquid phase densities and vapor pressures of pure compounds by using the SAFT equation of state.
SIRAH – a coarse-grained force field developed by Pantano and coworkers of the Biomolecular Simulations Group, Institut Pasteur of Montevideo, Uruguay; developed for molecular dynamics of water, DNA, and proteins. Freely available for the AMBER and GROMACS packages.
VAMM (Virtual atom molecular mechanics) – a coarse-grained force field developed by Korkut and Hendrickson for molecular mechanics calculations such as large scale conformational transitions based on the virtual interactions of C-alpha atoms. It is a knowledge based force field and formulated to capture features dependent on secondary structure and on residue-specific contact information in proteins.
=== Machine learning ===
MACE (Multi Atomic Cluster Expansion) is a highly accurate machine learning force field architecture that combines the rigorous many-body expansion of the total potential energy with rotationally equivariant representations of the system.
ANI (Artificial Narrow Intelligence) is a transferable neural network potential, built from atomic environment vectors, and able to provide DFT accuracy in terms of energies.
FFLUX (originally QCTFF) – a set of trained Kriging models which operate together to provide a molecular force field trained on Atoms in molecules or Quantum chemical topology energy terms including electrostatic, exchange and electron correlation.
TensorMol – a mixed model in which a neural network provides a short-range potential, whilst more traditional potentials add screened long-range terms.
Δ-ML not a force field method but a model that adds learnt correctional energy terms to approximate and relatively computationally cheap quantum chemical methods in order to provide an accuracy level of a higher order, more computationally expensive quantum chemical model.
SchNet a Neural network utilising continuous-filter convolutional layers, to predict chemical properties and potential energy surfaces.
PhysNet is a Neural Network-based energy function to predict energies, forces and (fluctuating) partial charges.
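The Δ-learning idea above can be sketched numerically: learn the correction ΔE = E_expensive − E_cheap from molecular descriptors, then predict E_expensive ≈ E_cheap + ΔE_learned. The descriptors, the linear least-squares "model", and the synthetic energies below are toy assumptions for illustration, not any published Δ-ML implementation.

```python
import numpy as np

# Toy Δ-ML sketch: fit the difference between an expensive and a cheap
# method, then add the learned correction to the cheap method's output.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # hypothetical molecular descriptors

e_cheap = X @ np.array([1.0, 0.2, -0.4])      # synthetic cheap-method energies
w_true = np.array([0.5, -1.2, 0.3])           # hidden structure of the correction
e_expensive = e_cheap + X @ w_true + 0.01 * rng.normal(size=200)

# Fit the correction term by least squares (a stand-in for a trained ML model)
w, *_ = np.linalg.lstsq(X, e_expensive - e_cheap, rcond=None)

e_pred = e_cheap + X @ w                      # Δ-ML estimate of the expensive energies
rmse = float(np.sqrt(np.mean((e_pred - e_expensive) ** 2)))
print(f"RMSE against the expensive method: {rmse:.4f}")
```

The correction is typically much smoother than the total energy, which is why a small model trained on few expensive reference calculations can suffice.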
=== Water ===
The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW. Other solvents and methods of solvent representation are also applied within computational chemistry and physics; these are termed solvent models.
=== Modified amino acids ===
Forcefield_PTM – An AMBER-based forcefield and webtool for modeling common post-translational modifications of amino acids in proteins developed by Chris Floudas and coworkers. It uses the ff03 charge model and has several side-chain torsion corrections parameterized to match the quantum chemical rotational surface.
Forcefield_NCAA – an AMBER-based forcefield and webtool for modeling common non-natural amino acids in proteins in condensed-phase simulations using the ff03 charge model. The charges have been reported to be correlated with hydration free energies of corresponding side-chain analogs.
=== Other ===
LFMM (Ligand Field Molecular Mechanics) – functions for the coordination sphere around transition metals based on the angular overlap model (AOM). Implemented in the Molecular Operating Environment (MOE) as DommiMOE and in Tinker.
VALBOND – a function for angle bending that is based on valence bond theory and works for large angular distortions, hypervalent molecules, and transition metal complexes. It can be incorporated into other force fields such as CHARMM and UFF.
Clinical Biochemistry is a peer-reviewed scientific journal covering the analytical and clinical investigation of laboratory tests in humans used for diagnosis, molecular biology and genetics, prognosis, treatment and therapy, and monitoring of disease; that is, the discipline of clinical biochemistry. It is the official journal of the Canadian Society of Clinical Chemists.
== Abstracting and indexing ==
The journal is abstracted and indexed in BIOSIS, Chemical Abstracts, Current Contents/Life Sciences, EMBASE, MEDLINE, and Scopus.
== Most cited articles ==
According to Scopus, the following three articles have been cited most often (more than 70 times each):
Herget-Rosenthal, S.; Bökenkamp, A.; Hofmann, W. (2007). "How to estimate GFR-serum creatinine, serum cystatin C or equations?". Clinical Biochemistry. 40 (3–4): 153–161. doi:10.1016/j.clinbiochem.2006.10.014. PMID 17234172.
Juliana F. Roos; Jenny Doust; Susan E. Tett; Carl M.J. Kirkpatrick (2007). "Diagnostic accuracy of cystatin C compared to serum creatinine for the estimation of renal dysfunction in adults and children-A meta-analysis". Clinical Biochemistry. 40 (5–6): 383–391. doi:10.1016/j.clinbiochem.2006.10.026. PMID 17316593.
Atta, H.M.; Mahfouz, S.; Fouad, H.H.; Roshdy, N.K.; Ahmed, H.H.; Rashed, L.A.; Sabry, D.; Hassouna, A.A.; Hasan, N.M (2007). "Therapeutic potential of bone marrow-derived mesenchymal stem cells on experimental liver fibrosis". Clinical Biochemistry. 40 (12): 893–899. doi:10.1016/j.clinbiochem.2007.04.017. PMID 17543295.
== Baby wash products and false-positive cannabinoid screening ==
Researchers at the University of North Carolina published an article in Clinical Biochemistry reporting that baby wash products could cause false drug-test results. Newborn drug screening has significant implications in both the healthcare and legal domains, on occasion resulting in involvement by social services or false child-abuse allegations, so the accuracy of screening results is essential. The research highlights reasons why false-positive cannabinoid (THC) screening results may occur, identifying commonly used soap and wash products used for newborn and infant care as potential causes.
Substance misuse, also known as drug misuse or, in older vernacular, substance abuse, is the use of a drug in amounts or by methods that are harmful to the individual or others. It is a form of substance-related disorder; differing definitions of drug misuse are used in public health, medical, and criminal justice contexts. In some cases, criminal or anti-social behavior occurs when a person is under the influence of a drug, and long-term personality changes may also result. In addition to possible physical, social, and psychological harm, the use of some drugs may also lead to criminal penalties, although these vary widely depending on the local jurisdiction.
Drugs most often associated with this term include alcohol, amphetamines, barbiturates, benzodiazepines, cannabis, cocaine, hallucinogens, methaqualone, and opioids. The exact cause of substance abuse is not always clear, but there are two predominant theories: a genetic predisposition, or a habit learned from others, which, if addiction develops, manifests as a chronic debilitating disease. It is not easy to determine why a person misuses drugs, as there are multiple environmental factors to consider: not only inherited biological influences (genes), but also mental-health stressors such as overall quality of life, physical or mental abuse, luck and circumstance in life, and early exposure to drugs all play a large part in how people respond to drug use.
In 2010, about 5% of adults (230 million) used an illicit substance. Of these, 27 million have high-risk drug use (otherwise known as recurrent drug use) causing harm to their health, psychological problems, or social problems that put them at risk of those dangers. In 2015, substance use disorders resulted in 307,400 deaths, up from 165,000 deaths in 1990. Of these, the highest numbers are from alcohol use disorders at 137,500 deaths, opioid use disorders at 122,100 deaths, amphetamine use disorders at 12,200 deaths, and cocaine use disorders at 11,100 deaths.
== Classification ==
=== Public health definitions ===
Public health practitioners have attempted to look at substance use from a broader perspective than the individual, emphasizing the role of society, culture, and availability. Some health professionals choose to avoid the terms alcohol or drug "abuse" in favor of language considered more objective, such as "substance and alcohol type problems" or "harmful/problematic use" of drugs. The Health Officers Council of British Columbia — in their 2005 policy discussion paper, A Public Health Approach to Drug Control in Canada — has adopted a public health model of psychoactive substance use that challenges the simplistic black-and-white construction of the binary (or complementary) antonyms "use" vs. "abuse". This model explicitly recognizes a spectrum of use, ranging from beneficial use to chronic dependence.
=== Medical definitions ===
'Drug abuse' is no longer a current medical diagnosis in either of the most used diagnostic tools in the world, the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM), and the World Health Organization's International Classification of Diseases (ICD). According to the DSM, substance abuse is the abuse of one of the 10 classes of drugs which include cannabis, alcohol, caffeine, hallucinogens, hypnotics, opioids, anxiolytics, inhalants, tobacco, and sedatives as well as other, possibly unknown, substances.
=== Value judgment ===
History professor Philip Jenkins suggests that there are two issues with the term "drug abuse". First, what constitutes a drug is debatable. For instance, GHB, a naturally occurring substance in the central nervous system, is considered a drug and is illegal in many countries, while nicotine is not officially considered a "drug" in most countries.
Second, the word "abuse" implies a recognized standard of use for any substance. Drinking an occasional glass of wine is considered acceptable in most Western countries, while drinking several bottles is seen as abuse. Strict temperance advocates, who may or may not be religiously motivated, would see drinking even one glass as abuse. Some groups (Mormons, as prescribed in "the Word of Wisdom") even condemn caffeine use in any quantity. Similarly, adopting the view that any (recreational) use of cannabis or substituted amphetamines constitutes drug abuse implies a decision made that the substance is harmful, even in minute quantities. In the U.S., drugs have been legally classified into five categories; these are schedule I, II, III, IV, or V in the Controlled Substances Act. The drugs are classified according to their deemed potential for abuse.
The usage of some drugs is strongly correlated. For example, the consumption of seven illicit drugs (amphetamines, cannabis, cocaine, ecstasy, legal highs, LSD, and magic mushrooms) is correlated, with a Pearson correlation coefficient r > 0.4 for every pair of them; consumption of cannabis is strongly correlated (r > 0.5) with the usage of nicotine (tobacco); and heroin is correlated with cocaine (r > 0.4) and methadone (r > 0.45), and strongly correlated with crack (r > 0.5).
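The pairwise correlations quoted above can be computed with the standard Pearson formula. The sketch below uses synthetic data rather than the survey data cited, so the drug-named variables are purely illustrative.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient: r = cov(x, y) / (std(x) * std(y))."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Synthetic usage scores; 'nicotine' is constructed to correlate with
# 'cannabis' (population r = 0.6), 'cocaine' is independent of both.
rng = np.random.default_rng(1)
n = 1000
cannabis = rng.normal(size=n)
nicotine = 0.6 * cannabis + 0.8 * rng.normal(size=n)
cocaine = rng.normal(size=n)

print(pearson_r(cannabis, nicotine))  # clearly above 0.5 by construction
print(pearson_r(cannabis, cocaine))   # near zero
```

In practice `numpy.corrcoef` computes the full pairwise matrix in one call; the explicit function above just makes the formula visible.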
=== Drug misuse ===
Drug misuse is a term used commonly when prescription medication with sedative, anxiolytic, analgesic, or stimulant properties is used for mood alteration or intoxication, ignoring the fact that overdose of such medicines can sometimes have serious adverse effects. It sometimes involves drug diversion from the individual for whom it was prescribed.
Prescription misuse has been defined differently and rather inconsistently based on the status of drug prescription, the uses without a prescription, intentional use to achieve intoxicating effects, route of administration, co-ingestion with alcohol, and the presence or absence of dependence symptoms. Chronic use of certain substances leads to a change in the central nervous system known as a "tolerance" to the medicine such that more of the substance is needed in order to produce desired effects. With some substances, stopping or reducing use can cause withdrawal symptoms to occur, but this is highly dependent on the specific substance in question.
The rate of prescription drug misuse is fast overtaking illegal drug use in the United States. According to the National Institute on Drug Abuse, 7 million people were taking prescription drugs for nonmedical use in 2010. Among 12th graders, nonmedical prescription drug use is now second only to cannabis. In 2011, "Nearly 1 in 12 high school seniors reported nonmedical use of Vicodin; 1 in 20 reported such use of OxyContin." Both of these drugs contain opioids. Fentanyl is an opioid that is 100 times more potent than morphine and 50 times more potent than heroin. A 2017 survey of 12th graders in the United States found misuse of OxyContin at 2.7 percent, compared to 5.5 percent at its peak in 2005. Misuse of the combination hydrocodone/paracetamol was at its lowest since a peak of 10.5 percent in 2003. This decrease may be related to public health initiatives and decreased availability.
Avenues of obtaining prescription drugs for misuse are varied: sharing between family and friends, illegally buying medications at school or work, and often "doctor shopping" to find multiple physicians to prescribe the same medication, without the knowledge of other prescribers.
Increasingly, law enforcement is holding physicians responsible for prescribing controlled substances without fully establishing patient controls, such as a patient "drug contract". Concerned physicians are educating themselves on how to identify medication-seeking behavior in their patients, and are becoming familiar with "red flags" that would alert them to potential prescription drug abuse.
== Signs and symptoms ==
Depending on the actual compound, drug abuse including alcohol may lead to health problems, social problems, morbidity, injuries, unprotected sex, violence, deaths, motor vehicle accidents, homicides, suicides, physical dependence or psychological addiction.
There is a high rate of suicide in alcoholics and other drug abusers. The reasons believed to cause the increased risk of suicide include long-term abuse of alcohol and other drugs causing physiological distortion of brain chemistry, as well as social isolation. Another factor is that the acute intoxicating effects of the drugs may make suicide more likely to occur. Suicide is also very common in adolescent alcohol abusers, with 1 in 4 suicides in adolescents being related to alcohol abuse. In the US, approximately 30% of suicides are related to alcohol abuse. Alcohol abuse is also associated with increased risks of committing criminal offences, including child abuse, domestic violence, rape, burglary and assault.
Drug abuse, including alcohol and prescription drugs, can induce symptomatology which resembles mental illness. This can occur both in the intoxicated state and during withdrawal. In some cases, substance-induced psychiatric disorders can persist long after detoxification, such as prolonged psychosis or depression after amphetamine or cocaine abuse. A protracted withdrawal syndrome can also occur, with symptoms persisting for months after cessation of use. Benzodiazepines are the most notable drug for inducing prolonged withdrawal effects, with symptoms sometimes persisting for years after cessation of use. Alcohol, barbiturate, and benzodiazepine withdrawal can all potentially be fatal. Abuse of hallucinogens, although extremely unlikely, may in some individuals trigger delusional and other psychotic phenomena long after cessation of use. This is mainly a risk with deliriants, and most unlikely with psychedelics and dissociatives.
Cannabis may trigger panic attacks during intoxication, and with continued use it may cause a state similar to dysthymia. Researchers have found that daily cannabis use and the use of high-potency cannabis are independently associated with a higher chance of developing schizophrenia and other psychotic disorders.
Severe anxiety and depression are often induced by sustained alcohol abuse. Even sustained moderate alcohol use may increase anxiety and depression levels in some individuals. In most cases, these drug-induced psychiatric disorders fade away with prolonged abstinence. Similarly, although substance abuse induces many changes to the brain, there is evidence that many of these alterations are reversed following periods of prolonged abstinence.
=== Impulsivity ===
Impulsivity is characterized by actions based on sudden desires, whims, or inclinations rather than careful thought. Individuals with substance abuse have higher levels of impulsivity, and individuals who use multiple drugs tend to be more impulsive. A number of studies using the Iowa gambling task as a measure for impulsive behavior found that drug using populations made more risky choices compared to healthy controls. There is a hypothesis that the loss of impulse control may be due to impaired inhibitory control resulting from drug induced changes that take place in the frontal cortex. The neurodevelopmental and hormonal changes that happen during adolescence may modulate impulse control that could possibly lead to the experimentation with drugs and may lead to addiction. Impulsivity is thought to be a facet trait in the neuroticism personality domain (overindulgence/negative urgency) which is prospectively associated with the development of substance abuse.
== Screening and assessment ==
The screening and assessment process of substance use behavior is important for the diagnosis and treatment of substance use disorders. Screening identifies individuals who have, or may be at risk for, a substance use disorder; screeners are usually brief to administer. Assessments are used to clarify the nature of the substance use behavior to help determine appropriate treatment. Assessments usually require specialized skills and take longer to administer than screeners.
Given that addiction manifests in structural changes to the brain, it is possible that non-invasive magnetic resonance imaging could help diagnose addiction in the future.
=== Targeted assessments ===
There are several screening tools that have been validated for use with adolescents, such as the CRAFFT Screening Test, and with adults, such as the CAGE questionnaire. Recommendations for substance-misuse screening tools in pregnancy include that they take less than 10 minutes, be used routinely, and include an educational component. Tools suitable for pregnant women include, among others, the 4Ps, T-ACE, TWEAK, TQDH (Ten-Question Drinking History), and AUDIT.
== Treatment ==
=== Psychological ===
From the applied behavior analysis literature, behavioral psychology, and randomized clinical trials, several evidence-based interventions have emerged: behavioral marital therapy, motivational interviewing, the community reinforcement approach, exposure therapy, and contingency management. These help suppress cravings and mental anxiety, improve focus on treatment and the learning of new behavioral skills, ease withdrawal symptoms, and reduce the chances of relapse.
In children and adolescents, cognitive behavioral therapy (CBT) and family therapy currently have the most research evidence for the treatment of substance abuse problems. Well-established treatments also include ecological family-based treatment and group CBT. These treatments can be administered in a variety of formats, each with varying levels of research support. Research has shown that what makes group CBT most effective is that it promotes the development of social skills, developmentally appropriate emotional regulatory skills, and other interpersonal skills. A few integrated treatment models, which combine parts of various types of treatment, have also been judged well-established or probably effective. A study on maternal alcohol and other drug use has shown that integrated treatment programs produce significant results, including higher rates of negative toxicology screens. Additionally, brief school-based interventions have been found to be effective in reducing adolescent alcohol and cannabis use and abuse. Motivational interviewing can also be effective in treating substance use disorder in adolescents.
Alcoholics Anonymous and Narcotics Anonymous are widely known self-help organizations in which members support each other in abstaining from substances. Social skills are significantly impaired in people with alcoholism due to the neurotoxic effects of alcohol on the brain, especially the prefrontal cortex. It has been suggested that social skills training adjunctive to inpatient treatment of alcohol dependence is probably efficacious, including managing the social environment.
=== Medication ===
A number of medications have been approved for the treatment of substance abuse. These include replacement therapies such as buprenorphine and methadone, as well as antagonist medications like disulfiram and naltrexone, in either short-acting or the newer long-acting forms. Several other medications, often ones originally used in other contexts, have also been shown to be effective, including bupropion and modafinil. Methadone and buprenorphine are sometimes used to treat opiate addiction. These drugs are used as substitutes for other opioids and still cause withdrawal symptoms, but they facilitate tapering off in a controlled fashion. When a person who uses fentanyl every day stops using it entirely, the body must adjust to the absence of the substance; this adjustment is called withdrawal.
Antipsychotic medications have not been found to be useful. Acamprosate is a glutamatergic NMDA antagonist that helps with alcohol withdrawal symptoms, because alcohol withdrawal is associated with a hyperglutamatergic system.
=== Heroin-assisted treatment ===
Three countries in Europe have active HAT programs, namely England, the Netherlands, and Switzerland. Despite critical voices from conservative think-tanks regarding these harm-reduction strategies, significant progress in the reduction of drug-related deaths has been achieved in those countries. For example, the US, devoid of such measures, has seen large increases in drug-related deaths since 2000 (mostly related to heroin use), while Switzerland has seen large decreases. In 2018, approximately 60,000 people died of drug overdoses in America, while in the same period Switzerland recorded 260 drug deaths. Relative to the population of these countries, the US has 10 times more drug-related deaths than the Swiss Confederation, which illustrates the efficacy of HAT in reducing fatal outcomes in opiate/opioid addiction.
=== Dual diagnosis ===
It is common for individuals with drug use disorders to have other psychological problems. The terms "dual diagnosis" and "co-occurring disorders" refer to having a mental health disorder and a substance use disorder at the same time. According to the British Association for Psychopharmacology (BAP), "symptoms of psychiatric disorders such as depression, anxiety and psychosis are the rule rather than the exception in patients misusing drugs and/or alcohol."
Individuals who have a comorbid psychological disorder often have a poor prognosis if either disorder is untreated. Historically, most individuals with dual diagnosis either received treatment for only one of their disorders or did not receive any treatment at all. However, since the 1980s there has been a push towards integrating mental health and addiction treatment. In this method, neither condition is considered primary, and both are treated simultaneously by the same provider.
== Epidemiology ==
The initiation of drug use, including alcohol, is most likely to occur during adolescence, and some experimentation with substances by older adolescents is common. For example, results from the 2010 Monitoring the Future survey, a nationwide study of rates of substance use in the United States, show that 48.2% of 12th graders report having used an illicit drug at some point in their lives. In the 30 days prior to the survey, 41.2% of 12th graders had consumed alcohol and 19.2% had smoked tobacco cigarettes. In 2009, about 21% of high school students in the United States had taken prescription drugs without a prescription. In 2002, the World Health Organization estimated that around 140 million people were alcohol-dependent and another 400 million had alcohol-related problems.
Studies have shown that the large majority of adolescents will phase out of drug use before it becomes problematic. Thus, although rates of overall use are high, the percentage of adolescents who meet criteria for substance abuse is significantly lower (close to 5%). According to UN estimates, there are "more than 50 million regular users of morphine diacetate (heroin), cocaine and synthetic drugs".
More than 70,200 Americans died from drug overdoses in 2017. Among these, the sharpest increase occurred among deaths related to fentanyl and other synthetic opioids (28,466 deaths).
== History ==
=== APA, AMA, and NCDA ===
In 1966, the American Medical Association's Committee on Alcoholism and Addiction defined abuse of stimulants (amphetamines, primarily) in terms of 'medical supervision':
...'use' refers to the proper place of stimulants in medical practice; 'misuse' applies to the physician's role in initiating a potentially dangerous course of therapy; and 'abuse' refers to self-administration of these drugs without medical supervision and particularly in large doses that may lead to psychological dependency, tolerance and abnormal behavior.
In 1972, the American Psychiatric Association created a definition that used legality, social acceptability, and cultural familiarity as qualifying factors:
...as a general rule, we reserve the term drug abuse to apply to the illegal, nonmedical use of a limited number of substances, most of them drugs, which have properties of altering the mental state in ways that are considered by social norms and defined by statute to be inappropriate, undesirable, harmful, threatening, or, at minimum, culture-alien.
In 1973, the National Commission on Marijuana and Drug Abuse stated:
...drug abuse may refer to any type of drug or chemical without regard to its pharmacologic actions. It is an eclectic concept having only one uniform connotation: societal disapproval. ... The Commission believes that the term drug abuse must be deleted from official pronouncements and public policy dialogue. The term has no functional utility and has become no more than an arbitrary codeword for that drug use which is presently considered wrong.
=== DSM ===
The first edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (published in 1952) grouped alcohol and other drug abuse under "sociopathic personality disturbances", which were thought to be symptoms of deeper psychological disorders or moral weakness. The third edition, published in 1980, was the first to recognize substance abuse (including drug abuse) and substance dependence as distinct conditions, bringing in social and cultural factors. The definition of dependence emphasised tolerance to drugs and withdrawal from them as key components of diagnosis, whereas abuse was defined as "problematic use with social or occupational impairment" but without withdrawal or tolerance.
In 1987, the DSM-III-R category "psychoactive substance abuse", which includes former concepts of drug abuse, was defined as "a maladaptive pattern of use indicated by...continued use despite knowledge of having a persistent or recurrent social, occupational, psychological or physical problem that is caused or exacerbated by the use (or by) recurrent use in situations in which it is physically hazardous". It is a residual category, with dependence taking precedence when applicable. It was the first definition to give equal weight to behavioural and physiological factors in diagnosis. In 1994, the DSM-IV defined substance dependence as "a syndrome involving compulsive use, with or without tolerance and withdrawal", whereas substance abuse is "problematic use without compulsive use, significant tolerance, or withdrawal". Substance abuse can be harmful to health and may even be deadly in certain scenarios. In 2000, the text revision of the fourth edition, the DSM-IV-TR, defined substance dependence as "when an individual persists in use of alcohol or other drugs despite problems related to use of the substance, substance dependence may be diagnosed", along with criteria for the diagnosis.
The DSM-IV-TR defines substance abuse as:
A. A maladaptive pattern of substance use leading to clinically significant impairment or distress, as manifested by one (or more) of the following, occurring within a 12-month period:
Recurrent substance use resulting in a failure to fulfill major role obligations at work, school, or home (e.g., repeated absences or poor work performance related to substance use; substance-related absences, suspensions or expulsions from school; neglect of children or household)
Recurrent substance use in situations in which it is physically hazardous (e.g., driving an automobile or operating a machine when impaired by substance use)
Recurrent substance-related legal problems (e.g., arrests for substance-related disorderly conduct)
Continued substance use despite having persistent or recurrent social or interpersonal problems caused or exacerbated by the effects of the substance (e.g., arguments with spouse about consequences of intoxication, physical fights)
B. The symptoms have never met the criteria for substance dependence for this class of substance.
The fifth edition of the DSM (DSM-5), released in 2013, revisited this terminology. The principal change was a transition away from the abuse/dependence terminology. In the DSM-IV era, abuse was seen as an early or less hazardous form of the disease, with the more severe form characterized by the dependence criteria. However, the APA's dependence term does not mean that physiologic dependence is present, but rather that a disease state is present, one that most would likely refer to as an addicted state. Many involved recognize that the terminology has often led to confusion, both within the medical community and with the general public. The American Psychiatric Association requested input as to how the terminology of this illness should be altered as it moved forward with DSM-5 discussions. In the DSM-5, substance abuse and substance dependence have been merged into the category of substance use disorders and no longer exist as individual concepts. While substance abuse and dependence were either present or not, substance use disorder has three levels of severity: mild, moderate, and severe.
== Society and culture ==
=== Legal approaches ===
Related articles: Drug control law, Prohibition (drugs), Arguments for and against drug prohibition, Harm reduction
Most governments have designed legislation to criminalize certain types of drug use. These drugs are often called "illegal drugs" but generally what is illegal is their unlicensed production, distribution, and possession. These drugs are also called "controlled substances". Even for simple possession, legal punishment can be quite severe (including the death penalty in some countries). Laws vary across countries, and even within them, and have fluctuated widely throughout history.
Attempts by government-sponsored drug control policy to interdict drug supply and eliminate drug abuse have been largely unsuccessful. In spite of the huge efforts by the U.S., drug supply and purity has reached an all-time high, with the vast majority of resources spent on interdiction and law enforcement instead of public health. In the United States, the number of nonviolent drug offenders in prison exceeds by 100,000 the total incarcerated population in the EU, despite the fact that the EU has 100 million more citizens.
Despite drug legislation (or perhaps because of it), large, organized criminal drug cartels operate worldwide. Advocates of decriminalization argue that drug prohibition makes drug dealing a lucrative business, leading to much of the associated criminal activity.
Some U.S. states have recently focused on facilitating safe use rather than eradicating it. For example, in 2022 New Jersey moved to expand needle exchange programs throughout the state, passing a bill that gives the state's department of health control over decisions regarding such programs. The bill is significant beyond New Jersey, as it could serve as a model for other states. It was partly a reaction to issues at the local level: the Atlantic City government was sued a year earlier, after deciding that July to shut down such operations in the city. Residents, whose views strongly influenced the bill's passage through the legislature, demonstrated in front of Atlantic City's city hall in support of the programs. The bill was signed into law by New Jersey Governor Phil Murphy just days after it passed the legislature.
=== Cost ===
Policymakers try to understand the relative costs of drug-related interventions. An appropriate drug policy relies on the assessment of drug-related public expenditure based on a classification system where costs are properly identified.
Labelled drug-related expenditures are defined as the direct planned spending that reflects the voluntary engagement of the state in the field of illicit drugs. Direct public expenditures explicitly labeled as drug-related can be easily traced back by exhaustively reviewing official accountancy documents such as national budgets and year-end reports. Unlabelled expenditure refers to unplanned spending and is estimated through modeling techniques, based on a top-down budgetary procedure. Starting from overall aggregated expenditures, this procedure estimates the proportion causally attributable to substance abuse (Unlabelled Drug-related Expenditure = Overall Expenditure × Attributable Proportion). For example, to estimate the prison drug-related expenditures in a given country, two elements would be necessary: the overall prison expenditures in the country for a given period, and the attributable proportion of inmates due to drug-related issues. The product of the two will give a rough estimate that can be compared across different countries.
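The top-down estimate described above is a single multiplication, which can be sketched as follows. The function name and the prison figures are hypothetical, chosen only to illustrate the formula from the text:

```python
def unlabelled_drug_expenditure(overall_expenditure, attributable_proportion):
    """Top-down estimate: overall aggregated spending multiplied by the
    proportion causally attributable to substance abuse."""
    if not 0.0 <= attributable_proportion <= 1.0:
        raise ValueError("attributable_proportion must be between 0 and 1")
    return overall_expenditure * attributable_proportion

# Hypothetical figures: 1.2 billion EUR of overall prison expenditure,
# with 30% of inmates incarcerated for drug-related issues.
estimate = unlabelled_drug_expenditure(1_200_000_000, 0.30)
print(estimate)  # 360000000.0
```

As the text notes, such products are only rough estimates, but computing them the same way in each country makes the results comparable across countries.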
==== Europe ====
As part of the reporting exercise for 2005, the European Monitoring Centre for Drugs and Drug Addiction's network of national focal points, set up in the 27 European Union (EU) member states, Norway, and the EU candidate countries, was asked to identify labeled drug-related public expenditure at the national level.
This was reported by 10 countries categorized according to the functions of government, amounting to a total of EUR 2.17 billion. Overall, the highest proportion of this total came within the government functions of health (66%) (e.g. medical services), and public order and safety (POS) (20%) (e.g. police services, law courts, prisons). By country, the average share of GDP was 0.023% for health, and 0.013% for POS. However, these shares varied considerably across countries, ranging from 0.00033% in Slovakia, up to 0.053% of GDP in Ireland in the case of health, and from 0.003% in Portugal, to 0.02% in the UK, in the case of POS; almost a 161-fold difference between the highest and the lowest countries for health, and a six-fold difference for POS.
To respond to these findings and to make a comprehensive assessment of drug-related public expenditure across countries, this study compared health and POS spending and GDP in the 10 reporting countries. Results suggest GDP to be a major determinant of the health and POS drug-related public expenditures of a country. Labeled drug-related public expenditure showed a positive association with the GDP across the countries considered: r = 0.81 in the case of health, and r = 0.91 for POS. The percentage change in health and POS expenditures due to a one percent increase in GDP (the income elasticity of demand) was estimated to be 1.78% and 1.23% respectively.
Being highly income elastic, health and POS expenditures can be considered luxury goods; as a nation becomes wealthier, it spends proportionately more on drug-related health and public order and safety interventions.
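The two statistics in this analysis, the Pearson correlation and the income elasticity (the slope of log spending regressed on log GDP), can be sketched as below. The function names and data are hypothetical; the synthetic series is built so that the elasticity is recovered exactly:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def income_elasticity(gdp, spending):
    """Slope of log(spending) regressed on log(GDP): the percentage change
    in spending associated with a one percent increase in GDP."""
    lx = [math.log(g) for g in gdp]
    ly = [math.log(s) for s in spending]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
            / sum((x - mx) ** 2 for x in lx))

# Synthetic data following spending = GDP ** 1.78 recovers the elasticity exactly.
gdp = [100.0, 200.0, 400.0, 800.0]
spending = [g ** 1.78 for g in gdp]
print(round(income_elasticity(gdp, spending), 2))  # 1.78
```

An elasticity above 1, as estimated for both health (1.78%) and POS (1.23%) spending, is what marks these expenditures as income elastic.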
==== United Kingdom ====
The UK Home Office estimated that the social and economic cost of drug abuse to the UK economy in terms of crime, absenteeism and sickness is in excess of £20 billion a year.
However, the UK Home Office does not estimate what portion of those crimes are unintended consequences of drug prohibition (crimes to sustain expensive drug consumption, risky production and dangerous distribution), nor what is the cost of enforcement. Those aspects are necessary for a full analysis of the economics of prohibition.
==== United States ====
These figures represent overall economic costs, which can be divided into three major components: health costs, productivity losses and non-health direct expenditures.
Health-related costs were projected to total $16 billion in 2002.
Productivity losses were estimated at $128.6 billion. In contrast to the other costs of drug abuse (which involve direct expenditures for goods and services), this value reflects a loss of potential resources: work in the labor market and in household production that was never performed, but could reasonably be expected to have been performed absent the impact of drug abuse.
Included are estimated productivity losses due to premature death ($24.6 billion), drug abuse-related illness ($33.4 billion), incarceration ($39.0 billion), crime careers ($27.6 billion) and productivity losses of victims of crime ($1.8 billion).
The non-health direct expenditures primarily concern costs associated with the criminal justice system and crime victim costs, but also include a modest level of expenses for administration of the social welfare system. The total for 2002 was estimated at $36.4 billion. The largest detailed component of these costs is for state and federal corrections at $14.2 billion, which is primarily for the operation of prisons. Another $9.8 billion was spent on state and local police protection, followed by $6.2 billion for federal supply reduction initiatives.
According to a report from the Agency for Healthcare Research and Quality (AHRQ), Medicaid was billed for a significantly higher number of hospital stays for opioid drug overuse than Medicare or private insurance in 1993. By 2012, the differences had diminished. Over the same period, Medicare had the most rapid growth in number of hospital stays.
==== Canada ====
Substance abuse takes a financial toll on Canada's hospitals and on the country as a whole. In 2011, around $267 million in hospital services was attributed to dealing with substance abuse problems, the majority related to alcohol. In 2014, Canada also allocated almost $45 million, extending into 2019, towards combating prescription drug abuse. Many of Canada's financial decisions on substance abuse draw on research by the Canadian Centre on Substance Abuse (CCSA), which produces both broad and focused reports and has been largely responsible for documenting the scale of Canada's substance abuse problems. Examples of CCSA reports include a 2013 report on drug use during pregnancy and a 2015 report on adolescents' use of cannabis.
== Special populations ==
=== Immigrants and refugees ===
Immigrants and refugees have often been under great stress: physical trauma, and depression and anxiety due to separation from loved ones, often characterize the pre-migration and transit phases, followed by "cultural dissonance", language barriers, racism, discrimination, economic adversity, overcrowding, social isolation, loss of status, difficulty obtaining work, and fears of deportation. Refugees frequently experience concerns about the health and safety of loved ones left behind and uncertainty about the possibility of returning to their country of origin. For some, substance abuse functions as a coping mechanism for these stressors.
Immigrants and refugees may bring the substance use and abuse patterns and behaviors of their country of origin, or adopt the attitudes, behaviors, and norms regarding substance use and abuse that exist within the dominant culture into which they are entering.
Another factor that can contribute to substance abuse among immigrants is the lack of support that they receive. With few social and economic resources available to them, some turn to drugs as a way to cope with the stress they are experiencing. An assimilation model suggests that as immigrants settle into and adapt to a new culture, their patterns of substance use begin to match those of their new environment; in some cases, use decreases as they adapt to their new society.
=== Street children ===
Street children in many developing countries are a high-risk group for substance misuse, in particular solvent abuse. Drawing on research in Kenya, Cottrell-Boyce argues that "drug use amongst street children is primarily functional—dulling the senses against the hardships of life on the street—but can also provide a link to the support structure of the 'street family' peer group as a potent symbol of shared experience."
=== Musicians ===
In order to maintain high-quality performance, some musicians take chemical substances; some also take drugs such as alcohol to deal with the stress of performing. As a group, musicians have a higher rate of substance abuse. The chemical substance most commonly abused by pop musicians is cocaine, because of its neurological effects: stimulants like cocaine increase alertness and cause feelings of euphoria, and can therefore make performers feel as though they in some way 'own the stage'. Substance abuse is particularly harmful for performers, musicians especially, when the substance is inhaled. The lungs are an important organ for singers, and addiction to cigarettes may seriously harm the quality of their performance, since smoking damages the alveoli, which are responsible for absorbing oxygen.
=== Veterans ===
Substance abuse can be a factor that affects the physical and mental health of veterans. Substance abuse may also harm personal and familial relationships, leading to financial difficulty. There is evidence to suggest that substance abuse disproportionately affects the homeless veteran population. A 2015 Florida study, which compared causes of homelessness between veterans and non-veteran populations in a self-reporting questionnaire, found that 17.8% of the homeless veteran participants attributed their homelessness to alcohol and other drug-related problems compared to just 3.7% of the non-veteran homeless group.
A 2003 study found that homelessness was correlated with access to support from family/friends and services. However, this correlation did not hold for homeless participants with a current substance-use disorder. The U.S. Department of Veterans Affairs provides a summary of treatment options for veterans with substance-use disorder. For treatments that do not involve medication, it offers therapeutic options that focus on finding outside support groups and "looking at how substance use problems may relate to other problems such as PTSD and depression".
=== Sex and gender ===
There are many sex differences in substance abuse. Men and women express differences in the short- and long-term effects of substance abuse. These differences can be credited to sexual dimorphisms in the brain, endocrine and metabolic systems. Social and environmental factors that tend to disproportionately affect women, such as child and elder care and the risk of exposure to violence, are also factors in the gender differences in substance abuse. Women report greater impairment in areas such as employment, family and social functioning when abusing substances, but have a similar response to treatment. Co-occurring psychiatric disorders are more common among women than men who abuse substances; women more frequently use substances to reduce the negative effects of these co-occurring disorders. Substance abuse puts both men and women at higher risk for perpetration and victimization of sexual violence. Men are more likely than women to take drugs for the first time in order to be part of a group and fit in. At first interaction, women may experience more pleasure from drugs than men do. Women tend to progress more rapidly from first experience to addiction than men. Physicians, psychiatrists and social workers have believed for decades that women escalate alcohol use more rapidly once they start. Once the addictive behavior is established, women stabilize at higher doses of drugs than men do. When withdrawing from smoking, women experience a greater stress response; men experience greater symptoms when withdrawing from alcohol. There are also gender differences in rehabilitation and relapse rates. For alcohol, relapse rates were very similar for men and women. For women, marriage and marital stress were risk factors for alcohol relapse; for men, being married lowered the risk of relapse. This difference may be a result of gendered differences in excessive drinking.
Alcoholic women are much more likely to be married to partners that drink excessively than are alcoholic men. As a result of this, men may be protected from relapse by marriage while women are at higher risk when married. However, women are less likely than men to experience relapse to substance use. When men experience a relapse to substance use, they more than likely had a positive experience prior to the relapse. On the other hand, when women relapse to substance use, they were more than likely affected by negative circumstances or interpersonal problems.
== See also ==
== References ==
== External links ==
"The Science of Drug Use: A Resource for the Justice Sector". North Bethesda, Maryland: National Institute on Drug Abuse. 26 May 2020. Archived from the original on 1 September 2022. Retrieved 23 December 2021.
School-Based Drug Abuse Prevention: Promising and Successful Programs (PDF). Ottawa, Ontario: Public Safety Canada. 31 January 2018. ISBN 978-1-100-12181-9. Archived (PDF) from the original on 19 May 2021. Retrieved 23 December 2021.
Adverse Childhood Experiences: Risk Factors for Substance Misuse and Mental Health. 6 March 2013. Archived from the original on 29 June 2019 – via YouTube. Dr. Robert Anda of the U.S. Centers for Disease Control describes the relation between childhood adversity and later ill-health, including substance abuse (video) | Wikipedia/Drugs_of_abuse |
Proteinuria is the presence of excess proteins in the urine. In healthy persons, urine contains very little protein, less than 150 mg/day; an excess is suggestive of illness. Excess protein in the urine often causes the urine to become foamy (although this symptom may also be caused by other conditions). Severe proteinuria can cause nephrotic syndrome in which there is worsening swelling of the body.
== Signs and symptoms ==
Proteinuria often causes no symptoms and it may only be discovered incidentally.
Foamy urine is considered a cardinal sign of proteinuria, but only a third of people with foamy urine have proteinuria as the underlying cause. It may also be caused by bilirubin in the urine (bilirubinuria), retrograde ejaculation, pneumaturia (air bubbles in the urine) due to a fistula, or drugs such as pyridium.
== Causes ==
There are three main mechanisms to cause proteinuria:
Due to disease in the glomerulus
Because of increased quantity of proteins in serum (overflow proteinuria)
Due to low reabsorption at proximal tubule (Fanconi syndrome)
Proteinuria can also be caused by certain biological agents, such as bevacizumab (Avastin) used in cancer treatment. Excessive fluid intake (drinking in excess of 4 litres of water per day) is another cause.
=== Conditions with proteinuria ===
Proteinuria may be a feature of the following conditions:
Nephrotic syndromes (i.e. intrinsic kidney failure)
Pre-eclampsia
Eclampsia
Toxic lesions of kidneys
Amyloidosis
Collagen vascular diseases (e.g. systemic lupus erythematosus)
Dehydration
Glomerular diseases, such as membranous glomerulonephritis, focal segmental glomerulonephritis, minimal change disease (lipoid nephrosis)
Strenuous exercise
Stress
Benign orthostatic (postural) proteinuria
Focal segmental glomerulosclerosis (FSGS)
IgA nephropathy (i.e. Berger's disease)
IgM nephropathy
Membranoproliferative glomerulonephritis
Membranous nephropathy
Minimal change disease
Sarcoidosis
Alport syndrome
Diabetes mellitus (diabetic nephropathy)
Drugs (e.g. NSAIDs, nicotine, penicillamine, lithium carbonate, gold and other heavy metals, ACE inhibitors, antibiotics, or opiates (especially heroin)
Fabry disease
Infections (e.g. HIV, syphilis, hepatitis, poststreptococcal infection, urinary schistosomiasis)
Aminoaciduria
Fanconi syndrome in association with Wilson disease
Hypertensive nephrosclerosis
Interstitial nephritis
Sickle cell disease
Hemoglobinuria
Multiple myeloma
Myoglobinuria
Organ rejection:
Ebola virus disease
Nail–patella syndrome
Familial Mediterranean fever
HELLP syndrome
Systemic lupus erythematosus
Granulomatosis with polyangiitis
Rheumatoid arthritis
Glycogen storage disease type 1
Goodpasture syndrome
Henoch–Schönlein purpura
A urinary tract infection which has spread to the kidney(s)
Sjögren syndrome
Post-infectious glomerulonephritis
Living kidney donor
Polycystic kidney disease
=== Bence–Jones proteinuria ===
Amyloidosis
Pre-malignant plasma cell dyscrasias:
Monoclonal gammopathy of undetermined significance
Smoldering multiple myeloma
Malignant plasma cell dyscrasias
Multiple myeloma
Waldenström's macroglobulinemia
Other malignancies
Chronic lymphocytic leukemia
Rare cases of other Lymphoid leukemias
Rare cases of Lymphomas
== Pathophysiology ==
Protein is a building block of all living organisms. When the kidneys are functioning properly, they filter the blood and separate proteins from the wastes that circulate together in it. The kidneys then retain or reabsorb the filtered proteins and return them to the circulating blood, while removing wastes by excreting them in the urine. When the kidneys are compromised, their ability to distinguish protein from waste during filtration, or to reabsorb the filtered protein and return it to the body, is damaged. As a result, a significant amount of protein is discharged along with waste in the urine, making the concentration of protein in the urine high enough to be detected by laboratory testing.
Medical testing equipment has improved over time, and as a result tests are better able to detect smaller quantities of protein. Protein in urine is considered normal as long as the value remains within the normal reference range. Variation exists between healthy patients, and it is generally considered harmless for the kidney to fail to retain a few proteins in the blood, letting those protein discharge from the body through urine.
=== Albumin and immunoglobulins ===
Albumin is a protein produced by the liver that makes up roughly 50–60% of the total protein in the blood; the other 40–50% consists of proteins other than albumin, such as immunoglobulins. This is why the concentration of albumin in the urine is one of the most sensitive indicators of kidney disease, particularly for those with diabetes or hypertension, compared with routine proteinuria examination.
As the loss of proteins from the body progresses, the affected person will gradually become symptomatic.
The exception is when there is an overproduction of proteins in the body, in which case the kidney is not at fault.
== Diagnosis ==
Conventionally, proteinuria is diagnosed by a simple dipstick test, although it is possible for the test to give a false negative reading, even with nephrotic range proteinuria if the urine is dilute. False negatives may also occur if the protein in the urine is composed mainly of globulins or Bence Jones proteins because the reagent on the test strips, bromophenol blue, is highly specific for albumin. Traditionally, dipstick protein tests would be quantified by measuring the total quantity of protein in a 24-hour urine collection test, and abnormal globulins by specific requests for protein electrophoresis.
More recently developed technology detects human serum albumin (HSA) through the use of liquid crystals (LCs). The presence of HSA molecules disrupts the LCs supported on the AHSA-decorated slides thereby producing bright optical signals which are easily distinguishable. Using this assay, concentrations of HSA as low as 15 μg/mL can be detected.
Alternatively, the concentration of protein in the urine may be compared to the creatinine level in a spot urine sample. This is termed the protein/creatinine ratio. The 2005 UK Chronic Kidney Disease guidelines state that protein/creatinine ratio is a better test than 24-hour urinary protein measurement. Proteinuria is defined as a protein/creatinine ratio greater than 45 mg/mmol (which is equivalent to albumin/creatinine ratio of greater than 30 mg/mmol or approximately 300 mg/g) with very high levels of proteinuria having a ratio greater than 100 mg/mmol.
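The ratio thresholds quoted above lend themselves to a simple classification. The sketch below is illustrative only; the function name and sample figures are hypothetical, and only the cut-offs (45 and 100 mg/mmol) come from the guideline cited in the text:

```python
def classify_protein_creatinine_ratio(protein_mg, creatinine_mmol):
    """Classify a spot-urine sample against the protein/creatinine ratio
    thresholds quoted above (mg protein per mmol creatinine)."""
    ratio = protein_mg / creatinine_mmol
    if ratio > 100:
        return ratio, "very high proteinuria"
    if ratio > 45:
        return ratio, "proteinuria"
    return ratio, "within normal limits"

# Hypothetical sample: 600 mg/L protein, 8 mmol/L creatinine -> 75 mg/mmol.
ratio, label = classify_protein_creatinine_ratio(600, 8)
print(ratio, label)  # 75.0 proteinuria
```

Because both analytes are measured on the same spot sample, the concentration units cancel and the ratio is independent of how dilute the urine is, which is why the guidelines prefer it to a 24-hour collection.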
Protein dipstick measurements should not be confused with the amount of protein detected on a test for microalbuminuria, which denotes values in mg/day, whereas urine protein dipstick values denote values in mg/dL. That is, there is a basal level of proteinuria, below 30 mg/day, that is considered non-pathological. Values between 30 and 300 mg/day are termed microalbuminuria, which is considered pathologic. Urine protein lab values for microalbumin of >30 mg/day correspond to a detection level within the "trace" to "1+" range of a urine dipstick protein assay. Therefore, a positive indication of any protein on a urine dipstick assay obviates any need to perform a urine microalbumin test, as the upper limit for microalbuminuria has already been exceeded.
=== Analysis ===
It is possible to analyze urine samples in determining albumin, hemoglobin and myoglobin with an optimized MEKC method.
== Treatment ==
The most common cause is diabetic nephropathy; in this case, proper glycemic control may slow the progression. Medical management consists of angiotensin converting enzyme (ACE) inhibitors, which are typically first-line therapy for proteinuria. In patients whose proteinuria is not controlled with ACE inhibitors, the addition of an aldosterone antagonist (i.e., spironolactone) or angiotensin receptor blocker (ARB) may further reduce protein loss.
Atrasentan (Vanrafia) was approved for medical use in the United States in April 2025.
== See also ==
List of terms associated with diabetes
== References ==
== External links == | Wikipedia/Proteinuria |
A urine test is any medical test performed on a urine specimen. The analysis of urine is a valuable diagnostic tool because its composition reflects the functioning of many body systems, particularly the kidneys and urinary system, and specimens are easy to obtain. Common urine tests include the routine urinalysis, which examines the physical, chemical, and microscopic properties of the urine; urine drug screening; and urine pregnancy testing.
== Background ==
The value of urine for diagnostic purposes has been recognized since ancient times. Urine examination was practiced in Sumer and Babylonia as early as 4000 BC, and is described in ancient Greek and Sanskrit texts. Contemporary urine testing uses a range of methods to investigate the physical and biochemical properties of the urine. For instance, the results of the routine urinalysis can provide information about the functioning of the kidneys and urinary system; suggest the presence of a urinary tract infection (UTI); and screen for possible diabetes or liver disease, among other conditions. A urine culture can be performed to identify the bacterial species involved in a UTI. Simple point-of-care tests can detect pregnancy by identifying the presence of beta-hCG in the urine and indicate the use of recreational drugs by detecting excreted drugs or their metabolites. Analysis of abnormal cells in urine (urine cytology) can help to diagnose some cancers, and testing for organic acids or amino acids in urine can be used to screen for some genetic disorders.
== Specimen collection ==
The techniques used to collect urine specimens vary based on the desired test. A random urine, meaning a specimen that is collected at any time, can be used for many tests. However, a sample collected during the first urination of the morning (first morning specimen) is preferred for tests like urinalysis and pregnancy screening because it is typically more concentrated, making the test more sensitive. Because the concentration of many substances in the urine varies throughout the day, some tests require timed urine collections, in which the patient collects all of their urine into a container for a given period of time (commonly 24 hours). A small amount of the specimen is then removed for testing. Timed collections are commonly used to measure creatinine, urea, urine protein, hormones and electrolytes.
If urine is needed for microbiological culture, it is important that the sample is not contaminated. In this case, the proper collection procedure involves cleaning the genital area, beginning to urinate into the toilet, and then filling the specimen container before completing the urination into the toilet. This is called a "midstream clean catch" collection. Research has shown many women are unsure of how to take a midstream sample or why it is needed.
If the subject is not able to urinate voluntarily, samples can be obtained using a urinary catheter or by inserting a needle through the abdomen and into the bladder (suprapubic aspiration). In infants and young children, urine can be collected into a bag attached to the genital region, but this is associated with a high risk of contamination.
== Types ==
Some examples of urine tests include:
=== Chemistry ===
Urinalysis — assessment of the visual properties of the urine, chemical evaluation using urine test strips, and microscopic examination
Urine creatinine, creatinine clearance — used to assess kidney function
Albumin/creatinine ratio — used to diagnose microalbuminuria
Urine osmolality — measure of the solute concentration of urine
Urine specific gravity ― another measure of urine concentration
Urine electrolyte levels — measurement of electrolytes such as sodium and potassium in urine
Urine anion gap — used to distinguish between some causes of metabolic acidosis
=== Hormones ===
Urine pregnancy test ― detects human chorionic gonadotropin in urine
Urine cortisol ― used to investigate disorders of the adrenal glands
Urine metanephrines ― used to help diagnose some rare tumours
=== Microbiology ===
Urine culture — microbiological culture of urine samples, used to identify bacteria causing urinary tract infections
=== Miscellaneous ===
Urine drug screen — screen for usage of recreational drugs
Urine cytology — cytopathological examination of cells in the urine, used to screen for cancer
Urine protein electrophoresis — classification and measurement of different proteins in the urine; used to help diagnose monoclonal gammopathies
Urine organic acids, urine amino acids — used to test for some inborn errors of metabolism
== References ==
== Works cited == | Wikipedia/Clinical_urine_tests |
Gastrointestinal diseases (abbrev. GI diseases or GI illnesses) refer to diseases involving the gastrointestinal tract, namely the esophagus, stomach, small intestine, large intestine and rectum; and the accessory organs of digestion, the liver, gallbladder, and pancreas.
== Oral disease ==
The oral cavity is part of the gastrointestinal system, and as such the presence of alterations in this region can be the first sign of both systemic and gastrointestinal diseases. By far the most common oral conditions are plaque-induced diseases (e.g., gingivitis, periodontitis, dental caries). Oral symptoms can be similar to lesions occurring elsewhere in the digestive tract, with a pattern of swelling, inflammation, ulcers, and fissures. If these signs are present, then patients are more likely to also have anal and esophageal lesions and experience other extra-intestinal disease manifestations. Some diseases which involve other parts of the GI tract can manifest in the mouth, alone or in combination, including:
Gastroesophageal reflux disease can cause acid erosion of the teeth and halitosis.
Gardner's syndrome can be associated with failure of tooth eruption, supernumerary teeth, and dentigerous cysts.
Peutz–Jeghers syndrome can cause dark spots on the oral mucosa or on the lips or the skin around the mouth.
Several GI diseases, especially those associated with malabsorption, can cause recurrent mouth ulcers, atrophic glossitis, and angular cheilitis (e.g., Crohn's disease is sometimes termed orofacial granulomatosis when it involves the mouth alone).
Sideropenic dysphagia can cause glossitis, angular cheilitis.
== Oesophageal disease ==
Oesophageal diseases include a spectrum of disorders affecting the oesophagus. The most common condition of the oesophagus in Western countries is gastroesophageal reflux disease, which in chronic forms is thought to result in changes to the epithelium of the oesophagus, known as Barrett's oesophagus.: 863–865
Acute disease might include infections such as oesophagitis, trauma caused by the ingestion of corrosive substances, or rupture of veins such as oesophageal varices, Boerhaave syndrome or Mallory-Weiss tears. Chronic diseases might include congenital diseases such as Zenker's diverticulum and esophageal webbing, and oesophageal motility disorders including the nutcracker oesophagus, achalasia, diffuse oesophageal spasm, and oesophageal stricture.: 853, 863–868
Oesophageal disease may result in a sore throat, throwing up blood, difficulty swallowing or vomiting. Chronic or congenital diseases might be investigated using barium swallows, endoscopy and biopsy, whereas acute diseases such as reflux may be investigated and diagnosed based on symptoms and a medical history alone.: 863–867
== Gastric disease ==
Gastric diseases refer to diseases affecting the stomach. Inflammation of the stomach by infection from any cause is called gastritis, and when including other parts of the gastrointestinal tract called gastroenteritis. When gastritis persists in a chronic state, it is associated with several diseases, including atrophic gastritis, pyloric stenosis, and gastric cancer. Another common condition is gastric ulceration, peptic ulcers. Ulceration erodes the gastric mucosa, which protects the tissue of the stomach from the stomach acids. Peptic ulcers are most commonly caused by a bacterial Helicobacter pylori infection. Epstein–Barr virus infection is another factor to induce gastric cancer.
As well as peptic ulcers, vomiting blood may result from abnormal arteries or veins that have ruptured, including Dieulafoy's lesion and Gastric antral vascular ectasia. Congenital disorders of the stomach include pernicious anaemia, in which a targeted immune response against parietal cells results in an inability to absorb vitamin B12. Other common symptoms that stomach disease might cause include indigestion or dyspepsia, vomiting, and in chronic disease, digestive problems leading to forms of malnutrition. : 850–853 In addition to routine tests, an endoscopy might be used to examine or take a biopsy from the stomach. : 848
== Intestinal disease ==
The small and large intestines may be affected by infectious, autoimmune, and physiological states. Inflammation of the intestines is called enterocolitis, which may lead to diarrhea.
Acute conditions affecting the bowels include infectious diarrhea and mesenteric ischaemia. Causes of constipation may include faecal impaction and bowel obstruction, which may in turn be caused by ileus, intussusception, volvulus. Inflammatory bowel disease is a condition of unknown aetiology, classified as either Crohn's disease or ulcerative colitis, that can affect the intestines and other parts of the gastrointestinal tract. Other causes of illness include intestinal pseudoobstruction, and necrotizing enterocolitis.: 850–862, 895–903
Diseases of the intestine may cause vomiting, diarrhoea or constipation, and altered stool, such as with blood in stool. Colonoscopy may be used to examine the large intestine, and a person's stool may be sent for culture and microscopy. Infectious disease may be treated with targeted antibiotics, and inflammatory bowel disease with immunosuppression. Surgery may also be used to treat some causes of bowel obstruction.: 850–862
The normal thickness of the small intestinal wall is 3–5 mm, and 1–5 mm in the large intestine. Focal, irregular and asymmetrical gastrointestinal wall thickening on CT scan suggests a malignancy. Segmental or diffuse gastrointestinal wall thickening is most often due to ischemic, inflammatory or infectious disease. Though less common, medications such as ACE inhibitors can cause angioedema and small bowel thickening.
=== Small intestine ===
The small intestine consists of the duodenum, jejunum and ileum. Inflammation of the small intestine is called enteritis, which if localised to just one part is called duodenitis, jejunitis or ileitis, respectively. Peptic ulcers are also common in the duodenum.: 879–884
Chronic diseases of malabsorption may affect the small intestine, including the autoimmune coeliac disease, infective tropical sprue, and congenital or surgical short bowel syndrome. Other rarer diseases affecting the small intestine include Curling's ulcer, blind loop syndrome, Milroy disease and Whipple's disease. Tumours of the small intestine include gastrointestinal stromal tumours, lipomas, hamartomas and carcinoid syndromes.: 879–887
Diseases of the small intestine may present with symptoms such as diarrhoea, malnutrition, fatigue and weight loss. Investigations pursued may include blood tests to monitor nutrition, such as iron levels, folate and calcium, endoscopy and biopsy of the duodenum, and barium swallow. Treatments may include renutrition and antibiotics for infections.: 879–887
=== Large intestine ===
Diseases that affect the large intestine may affect it in whole or in part. Appendicitis is one such disease, caused by inflammation of the appendix. Generalised inflammation of the large intestine is referred to as colitis, which when caused by the bacterium Clostridioides difficile is referred to as pseudomembranous colitis. Diverticulitis is a common cause of abdominal pain resulting from outpouchings that particularly affect the colon. Functional colonic diseases refer to disorders without a known cause, including irritable bowel syndrome and intestinal pseudoobstruction. Constipation may result from lifestyle factors, impaction of a rigid stool in the rectum, or in neonates, Hirschsprung's disease.: 913–915
Diseases affecting the large intestine may cause blood to be passed with stool, may cause constipation, or may result in abdominal pain or a fever. Tests that specifically examine the function of the large intestine include barium enemas, abdominal x-rays, and colonoscopy.: 913–915
=== Rectum and anus ===
Diseases affecting the rectum and anus are extremely common, especially in older adults. Hemorrhoids, vascular outpouchings of skin, are very common, as is pruritus ani, referring to anal itchiness. Other conditions, such as anal cancer may be associated with ulcerative colitis or with sexually transmitted infections such as HIV. Inflammation of the rectum is known as proctitis, one cause of which is radiation damage associated with radiotherapy to other sites such as the prostate. Faecal incontinence can result from mechanical and neurological problems, and when associated with a lack of voluntary voiding ability is described as encopresis. Pain on passing stool may result from anal abscesses, small inflamed nodules, anal fissures, and anal fistulas.: 915–916
Rectal and anal disease may be asymptomatic, or may present with pain when passing stools, fresh blood in stool, a feeling of incomplete emptying, or pencil-thin stools. In addition to regular tests, medical tests used to investigate the anus and rectum include the digital rectal exam and proctoscopy.
== Accessory digestive gland disease ==
=== Hepatic ===
Hepatic diseases refers to those affecting the liver. Hepatitis refers to inflammation of liver tissue, and may be acute or chronic. Infectious viral hepatitis, such as hepatitis A, B and C, affect in excess of (X) million people worldwide. Liver disease may also be a result of lifestyle factors, such as fatty liver and NASH. Alcoholic liver disease may also develop as a result of chronic alcohol use, which may also cause alcoholic hepatitis. Cirrhosis may develop as a result of chronic hepatic fibrosis in a chronically inflamed liver, such as one affected by alcohol or viral hepatitis.: 947–958
Liver abscesses are often acute conditions, with common causes being pyogenic and amoebic. Chronic liver disease, such as cirrhosis, may be a cause of liver failure, a state where the liver is unable to compensate for chronic damage, and unable to meet the metabolic demands of the body. In the acute setting, this may be a cause of hepatic encephalopathy and hepatorenal syndrome. Other causes of chronic liver disease are genetic or autoimmune disease, such as hemochromatosis, Wilson's disease, autoimmune hepatitis, and primary biliary cirrhosis.: 959–963, 971
Acute liver disease rarely results in pain, but may result in jaundice. Infectious liver disease may cause a fever. Chronic liver disease may result in a buildup of fluid in the abdomen, yellowing of the skin or eyes, easy bruising, immunosuppression, and feminization. Portal hypertension is often present, and this may lead to the development of prominent veins in many parts of the body, such as oesophageal varices, and haemorrhoids.: 959–963, 971–973
In order to investigate liver disease, a medical history, including regarding a person's family history, travel to risk-prone areas, alcohol use and food consumption, may be taken. A medical examination may be conducted to investigate for symptoms of liver disease. Blood tests may be used, particularly liver function tests, and other blood tests may be used to investigate the presence of the Hepatitis viruses in the blood, and ultrasound used. If ascites is present, abdominal fluid may be tested for protein levels.: 921, 926–927
=== Pancreatic ===
Pancreatic diseases that affect digestion refer to disorders affecting the exocrine pancreas, which is the part of the pancreas involved in digestion.
One of the most common conditions of the exocrine pancreas is acute pancreatitis, which in the majority of cases relates to gallstones that have impacted in the pancreatic part of the biliary tree, or due to acute or chronic hazardous alcohol use or as a side-effect of ERCP. Other forms of pancreatitis include chronic and hereditary forms. Chronic pancreatitis may predispose to pancreatic cancer and is strongly linked to alcohol use. Other rarer diseases affecting the pancreas may include pancreatic pseudocysts, exocrine pancreatic insufficiency, and pancreatic fistulas.: 888–891
Pancreatic disease may present with or without symptoms. When symptoms occur, such as in acute pancreatitis, a person may experience acute-onset, severe mid-abdominal pain, nausea and vomiting. In severe cases, pancreatitis may lead to rapid blood loss and systemic inflammatory response syndrome. When the pancreas is unable to secrete digestive enzymes, such as when a pancreatic cancer occludes the pancreatic duct, the result may be jaundice. Pancreatic disease might be investigated using abdominal x-rays, MRCP or ERCP, CT scans, and through blood tests such as measurement of the amylase and lipase enzymes.: 888–894
=== Gallbladder and biliary tract ===
Diseases of the hepatobiliary system affect the biliary tract (also known as the biliary tree), which secretes bile in order to aid digestion of fats. Diseases of the gallbladder and bile ducts are commonly diet-related, and may include the formation of gallstones that impact in the gallbladder (cholecystolithiasis) or in the common bile duct (choledocholithiasis).: 977–978
Gallstones are a common cause of inflammation of the gallbladder, called cholecystitis. Inflammation of the biliary duct is called cholangitis, which may be associated with autoimmune disease, such as primary sclerosing cholangitis, or a result of bacterial infection, such as ascending cholangitis.: 977–978, 963–968
Disease of the biliary tree may cause pain in the upper right abdomen, particularly when pressed. Disease might be investigated using ultrasound or ERCP, and might be treated with drugs such as antibiotics or UDCA, or by the surgical removal of the gallbladder.: 977–979
=== Cancer ===
Gastrointestinal cancers are the specific malignant conditions of the gastrointestinal tract. In general, a significant factor in the aetiology of gastrointestinal cancers appears to be excessive exposure of the digestive organs to bile acids.
== See also ==
Functional gastrointestinal disorder
Gastrointestinal malformations
Gastrointestinal bleeding
Neurotherapy
== References ==
== External links == | Wikipedia/Gastrointestinal_disease |
C-reactive protein (CRP) is an annular (ring-shaped) pentameric protein found in blood plasma, whose circulating concentrations rise in response to inflammation. It is an acute-phase protein of hepatic origin that increases following interleukin-6 secretion by macrophages and T cells. Its physiological role is to bind to lysophosphatidylcholine expressed on the surface of dead or dying cells (and some types of bacteria) in order to activate the complement system via C1q.
CRP is synthesized by the liver in response to factors released by macrophages, T cells and fat cells (adipocytes). It is a member of the pentraxin family of proteins. It is not related to C-peptide (insulin) or protein C (blood coagulation). C-reactive protein was the first pattern recognition receptor (PRR) to be identified.
== History and etymology ==
CRP was discovered by Tillett and Francis in 1930. It was initially thought that CRP might be a pathogenic secretion, since it was elevated in a variety of illnesses, including cancer. The later discovery that it is synthesized by the liver demonstrated that it is a native protein. Initially, CRP was measured using the quellung reaction, which gave a positive or a negative result. More precise methods nowadays use dynamic light scattering after reaction with CRP-specific antibodies.
CRP was so named because it was first identified as a substance in the serum of patients with acute inflammation that reacted with the cell wall polysaccharide (C-polysaccharide) of pneumococcus.
== Genetics and structure ==
It is a member of the small pentraxin family (also known as short pentraxins). The polypeptide encoded by this gene has 224 amino acids. The full-length polypeptide is not present in the body in significant quantities because of its signal peptide, which is removed by signal peptidase before translation is completed. The complete protein, composed of five monomers, has a total mass of approximately 120,000 Da. In serum, it assembles into a stable pentameric structure with a discoid shape.
== Function ==
CRP binds to the phosphocholine expressed on the surface of bacterial cells such as pneumococcus bacteria. This activates the complement system, promoting phagocytosis by macrophages, which clears necrotic and apoptotic cells and bacteria. With this mechanism, CRP also binds to ischemic/hypoxic cells, which could regenerate with more time. However, the binding of CRP causes them to be disposed of prematurely. CRP binds to the Fc-gamma receptor IIa, to which IgG isotype antibodies also bind. In addition, CRP activates the classical complement pathway via C1q binding. CRP thus forms immune complexes in the same way as IgG antibodies.
This so-called acute phase response occurs as a result of increasing concentrations of interleukin-6 (IL-6), which is produced by macrophages as well as adipocytes in response to a wide range of acute and chronic inflammatory conditions such as bacterial, viral, or fungal infections; rheumatic and other inflammatory diseases; malignancy; and tissue injury and necrosis. These conditions cause release of IL-6 and other cytokines that trigger the synthesis of CRP and fibrinogen by the liver.
CRP binds to phosphocholine on micro-organisms. It is thought to assist in complement binding to foreign and damaged cells and enhances phagocytosis by macrophages (opsonin-mediated phagocytosis), which express a receptor for CRP. It plays a role in innate immunity as an early defense system against infections.
== Serum levels ==
=== Measurement methods ===
Traditional CRP measurement only detected CRP in the range of 10 to 1,000 mg/L, whereas high sensitivity CRP (hs-CRP) detects CRP in the range of 0.5 to 10 mg/L. hs-CRP can detect cardiovascular disease risk when in excess of 3 mg/L, whereas below 1 mg/L would be low risk. Traditional CRP measurement is faster and less costly than hs-CRP, and can be adequate for some applications, such as monitoring hemodialysis patients. Current immunoassay methods for CRP have similar precision to hsCRP performed by nephelometry and could probably replace hsCRP for cardiovascular risk assessment, however, in the United States this would represent off-label use, making it a laboratory-developed test under FDA regulations.
=== Normal ===
In healthy adults, the normal concentration of CRP varies between 0.8 mg/L and 3.0 mg/L, although some healthy adults show elevated CRP of up to 10 mg/L. CRP concentrations also increase with age, possibly due to subclinical conditions. There are no seasonal variations in CRP concentrations. Gene polymorphisms of the interleukin-1 family and interleukin-6, and a polymorphic GT repeat in the CRP gene, affect the usual CRP concentrations in people without any medical illness.
=== Acute inflammation ===
When there is a stimulus, the CRP level can increase 10,000-fold from less than 50 μg/L to more than 500 mg/L. Its concentration can increase to 5 mg/L by 6 hours and peak at 48 hours. The plasma half-life of CRP is 19 hours, and is constant in all medical conditions. Therefore, the only factor that affects the blood CRP concentration is its production rate, which increases with inflammation, infection, trauma, necrosis, malignancy, and allergic reactions. Other inflammatory mediators that can increase CRP are TGF beta 1, and tumor necrosis factor alpha. In acute inflammation, CRP can increase as much as 50 to 100 mg/L within 4 to 6 hours in mild to moderate inflammation or an insult such as skin infection, cystitis, or bronchitis. It can double every 8 hours and reaches its peak at 36 to 50 hours following injury or inflammation. CRP between 100 and 500 mg/L is considered highly predictive of inflammation due to bacterial infection. Once inflammation subsides, CRP level falls quickly because of its relatively short half-life.
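The rise described above (doubling roughly every 8 hours until a peak) can be sketched as a simple exponential model. This is illustrative only: the starting value and doubling time are taken from the text, real CRP kinetics vary between patients, and the function name is hypothetical.

```python
def crp_concentration(t_hours, c0_mg_l=5.0, doubling_hours=8.0):
    """Illustrative exponential rise in CRP during acute inflammation.

    c0_mg_l is the concentration at a reference time (the text cites
    ~5 mg/L by 6 hours); the level then doubles every doubling_hours.
    Valid only during the rising phase, before the 36-50 hour peak.
    """
    return c0_mg_l * 2 ** (t_hours / doubling_hours)

# Three doubling periods (24 h) after reaching 5 mg/L:
print(crp_concentration(24))  # 40.0 mg/L
```

Starting from 5 mg/L, three doublings give 40 mg/L, consistent with the 50 to 100 mg/L range the text describes for mild to moderate inflammation within the first couple of days.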
=== Metabolic inflammation ===
CRP concentrations between 2 and 10 mg/L are considered to reflect metabolic inflammation: activation of metabolic pathways that contribute to arteriosclerosis and type II diabetes mellitus.
== Clinical significance ==
=== Diagnostic use ===
CRP is used mainly as an inflammation marker. Apart from liver failure, there are few known factors that interfere with CRP production. Interferon alpha inhibits CRP production from liver cells, which may explain the relatively low levels of CRP found during viral infections compared to bacterial infections.
Measuring and charting CRP values can prove useful in determining disease progress or the effectiveness of treatments. ELISA and radial immunodiffusion methods are available for research use, while immunoturbidimetry is used clinically for CRP and nephelometry is typically used for hsCRP. Cutoffs for cardiovascular risk assessment have included:
low: hs-CRP level under 1.0 mg/L
average: between 1.0 and 3.0 mg/L
high: above 3.0 mg/L
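These cutoffs amount to a simple three-band classification. A minimal sketch, using the thresholds quoted above (the function name and the handling of the exact boundary values 1.0 and 3.0, which the text leaves ambiguous, are assumptions):

```python
def hs_crp_cv_risk(hs_crp_mg_l):
    """Classify cardiovascular risk from an hs-CRP value in mg/L,
    per the low/average/high cutoffs quoted above.

    Boundary handling (exactly 1.0 or 3.0 falling in the "average"
    band) is an assumption, not specified by the source.
    """
    if hs_crp_mg_l < 1.0:
        return "low"
    elif hs_crp_mg_l <= 3.0:
        return "average"
    else:
        return "high"

print(hs_crp_cv_risk(0.5))  # low
print(hs_crp_cv_risk(2.0))  # average
print(hs_crp_cv_risk(4.2))  # high
```

As the following paragraphs note, such a value would not be used in isolation: hs-CRP is interpreted alongside lipid and glucose levels and other risk factors.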
Normal levels increase with aging. Higher levels are found in late pregnancy, mild inflammation and viral infections (10–40 mg/L), active inflammation and bacterial infection (40–200 mg/L), and severe bacterial infections and burns (>200 mg/L).
CRP cut-off levels distinguishing bacterial from non-bacterial illness can vary due to co-morbidities such as malaria, HIV and malnutrition, and the stage of disease presentation. In patients presenting to the emergency department with suspected sepsis, a CRP/albumin ratio of less than 32 has a negative predictive value of 89% for ruling out sepsis.
CRP is a more sensitive and accurate reflection of the acute phase response than the ESR (erythrocyte sedimentation rate). ESR may be normal while CRP is elevated. CRP returns to normal more quickly than ESR in response to therapy.
=== Cardiovascular disease ===
Recent research suggests that patients with elevated basal levels of CRP are at an increased risk of diabetes, hypertension and cardiovascular disease. A study of over 700 nurses showed that those in the highest quartile of trans fat consumption had blood levels of CRP that were 73% higher than those in the lowest quartile. Although one group of researchers indicated that CRP may be only a moderate risk factor for cardiovascular disease, this study (known as the Reykjavik Study) was found to have some problems for this type of analysis related to the characteristics of the population studied, and there was an extremely long follow-up time, which may have attenuated the association between CRP and future outcomes. Others have shown that CRP can exacerbate ischemic necrosis in a complement-dependent fashion and that CRP inhibition can be a safe and effective therapy for myocardial and cerebral infarcts; this has been demonstrated in animal models and humans.
It has been hypothesized that patients with high CRP levels might benefit from use of statins. This is based on the JUPITER trial, which found that patients with elevated CRP levels but without hyperlipidemia benefited from statin therapy. Statins were selected because they have been proven to reduce levels of CRP. Studies comparing the effect of various statins on hs-CRP revealed similar effects of different statins. A subsequent trial, however, failed to find that CRP was useful for determining statin benefit.
In a meta-analysis of 20 studies involving 1,466 patients with coronary artery disease, CRP levels were found to be reduced after exercise interventions. Among those studies, higher CRP concentrations or poorer lipid profiles before beginning exercise were associated with greater reductions in CRP.
To clarify whether CRP is a bystander or active participant in atherogenesis, a 2008 study compared people with various genetic CRP variants. Those with a high CRP due to genetic variation had no increased risk of cardiovascular disease compared to those with a normal or low CRP. A study published in 2011 shows that CRP is associated with lipid responses to low-fat and high-polyunsaturated fat diets.
=== Coronary heart disease risk ===
Arterial damage results from white blood cell invasion and inflammation within the wall. CRP is a general marker for inflammation and infection, so it can be used as a very rough proxy for heart disease risk. Since many things can cause elevated CRP, this is not a very specific prognostic indicator. Nevertheless, a level above 2.4 mg/L has been associated with a doubled risk of a coronary event compared to levels below 1 mg/L; however, the study group in this case consisted of patients who had been diagnosed with unstable angina pectoris; whether elevated CRP has any predictive value of acute coronary events in the general population of all age ranges remains unclear. Currently, C-reactive protein is not recommended as a cardiovascular disease screening test for average-risk adults without symptoms.
The American Heart Association and U.S. Centers for Disease Control and Prevention have defined risk groups as follows:
Low Risk: less than 1.0 mg/L
Average risk: 1.0 to 3.0 mg/L
High risk: above 3.0 mg/L
But hs-CRP is not to be used alone and should be combined with elevated levels of cholesterol, LDL-C, triglycerides, and glucose level. Smoking, hypertension and diabetes also increase the risk level of cardiovascular disease.
=== Fibrosis and inflammation ===
Scleroderma, polymyositis, and dermatomyositis elicit little or no CRP response. CRP levels also tend to remain low despite inflammatory activity in systemic lupus erythematosus (SLE) unless serositis or synovitis is present. This may be explained by increased levels of type I IFN in SLE, since type I IFN (i.e. IFN-alpha) inhibits hepatic CRP production. Polymorphisms of the CRP gene that cause lower CRP levels are also more frequent in SLE patients than in controls. Elevations of CRP in the absence of clinically significant inflammation can occur in kidney failure. CRP level is an independent risk factor for atherosclerotic disease. Patients with high CRP concentrations are more likely to develop stroke, myocardial infarction, and severe peripheral vascular disease. Elevated levels of CRP can also be observed in inflammatory bowel disease (IBD), including Crohn's disease and ulcerative colitis.
High levels of CRP have been associated with the point mutation Cys130Arg in the APOE gene, which codes for apolipoprotein E, establishing a link between lipid values and the modulation of inflammatory markers.
=== Cancer ===
The role of inflammation in cancer is not well understood. Some organs of the body show greater risk of cancer when they are chronically inflamed. While there is an association between increased levels of C-reactive protein and risk of developing cancer, there is no association between genetic polymorphisms influencing circulating levels of CRP and cancer risk.
In a 2004 prospective cohort study on colon cancer risk associated with CRP levels, people with colon cancer had higher average CRP concentrations than people without colon cancer. It can be noted that the average CRP levels in both groups were well within the range of CRP levels usually found in healthy people. However, these findings may suggest that low inflammation level can be associated with a lower risk of colon cancer, concurring with previous studies that indicate anti-inflammatory drugs could lower colon cancer risk.
=== Obstructive sleep apnea ===
C-reactive protein (CRP), a marker of systemic inflammation, is also increased in obstructive sleep apnea (OSA). CRP and interleukin-6 (IL-6) levels were significantly higher in patients with OSA compared to obese control subjects. Patients with OSA have higher plasma CRP concentrations that increased corresponding to the severity of their apnea-hypopnea index score. Treatment of OSA with CPAP (continuous positive airway pressure) significantly alleviated the effect of OSA on CRP and IL-6 levels.
=== Rheumatoid arthritis ===
In the context of rheumatoid arthritis (RA), CRP is one of the acute phase reactants, whose assessment is defined as part of the joint 2010 ACR/EULAR classification criteria for RA with abnormal levels accounting for a single point within the criteria. Higher levels of CRP are associated with more severe disease and a higher likelihood of radiographic progression. Rheumatoid arthritis associated antibodies together with 14-3-3η YWHAH have been reported to complement CRP in predicting clinical and radiographic outcomes in patients with recent onset inflammatory polyarthritis. Elevated levels of CRP appear to be associated with common comorbidities including cardiovascular disease, metabolic syndrome, diabetes and interstitial lung (pulmonary) disease. Mechanistically, CRP also appears to influence osteoclast activity leading to bone resorption and also stimulates RANKL expression in peripheral blood monocytes.
It has previously been speculated that single-nucleotide polymorphisms in the CRP gene may affect clinical decision-making based on CRP in rheumatoid arthritis, e.g. DAS28 (Disease Activity Score 28 joints). A recent study showed that CRP genotype and haplotype were only marginally associated with serum CRP levels and showed no association with the DAS28 score. Thus DAS28, which is the core parameter for inflammatory activity in RA, can be used for clinical decision-making without adjustment for CRP gene variants.
=== Viral infections ===
Blood CRP levels were higher in people with avian influenza H7N9 than in those with the more common H1N1 influenza, with a review reporting that severe H1N1 influenza was also associated with elevated CRP. In 2020, people infected with COVID-19 in Wuhan, China, had elevated CRP.
== Additional images ==
== References ==
== External links ==
MedlinePlus Encyclopedia: C-reactive protein
Inflammation, Heart Disease and Stroke: The Role of C-Reactive Protein (American Heart Association)
C-Reactive+Protein at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
CRP: analyte monograph - The Association for Clinical Biochemistry and Laboratory Medicine
George Vrousgos, N.D. - Southern Cross University
Human CRP genome location and CRP gene details page in the UCSC Genome Browser.
Overview of all the structural information available in the PDB for UniProt: P02741 (C-reactive protein) at the PDBe-KB. | Wikipedia/C-reactive_protein |
Clinical Chemistry is a peer-reviewed medical journal covering the field of clinical chemistry. It is the official journal of the American Association for Clinical Chemistry. The journal was first published in 1955 on a bi-monthly basis "to raise the level at which chemistry is practiced in the clinical laboratory"; monthly publication commenced in 1964. The editor-in-chief is Nader Rifai (Harvard Medical School).
== Abstracting and indexing ==
The journal is abstracted and indexed in PubMed/MEDLINE and the Science Citation Index. According to the Journal Citation Reports, the journal has a 2020 impact factor of 8.327.
== References ==
== External links ==
Official website | Wikipedia/Clinical_Chemistry_(journal) |
Thyroid disease is a medical condition that affects the structure and/or function of the thyroid gland. The thyroid gland is located at the front of the neck and produces thyroid hormones that travel through the blood to help regulate many other organs, meaning that it is an endocrine organ. These hormones normally act in the body to regulate energy use, infant development, and childhood development.
There are five general types of thyroid disease, each with their own symptoms. A person may have one or several different types at the same time. The five groups are:
Hypothyroidism (low function) caused by not having enough free thyroid hormones
Hyperthyroidism (high function) caused by having too many free thyroid hormones
Structural abnormalities, most commonly a goiter (enlargement of the thyroid gland)
Tumors which can be benign (not cancerous) or cancerous
Abnormal thyroid function tests without any clinical symptoms (subclinical hypothyroidism or subclinical hyperthyroidism).
In the US, hypothyroidism and hyperthyroidism were found in 4.6% and 1.3%, respectively, of the population over 12 years old (2002).
In some types, such as subacute thyroiditis or postpartum thyroiditis, symptoms may go away after a few months and laboratory tests may return to normal. However, most types of thyroid disease do not resolve on their own. Common hypothyroid symptoms include fatigue, low energy, weight gain, inability to tolerate the cold, slow heart rate, dry skin and constipation. Common hyperthyroid symptoms include irritability, anxiety, weight loss, fast heartbeat, inability to tolerate the heat, diarrhea, and enlargement of the thyroid. Structural abnormalities may not produce symptoms; however, some people may have hyperthyroid or hypothyroid symptoms related to the structural abnormality or notice swelling of the neck. Rarely, goiters can cause compression of the airway, compression of the vessels in the neck, or difficulty swallowing. Tumors, often called thyroid nodules, can also have many different symptoms ranging from hyperthyroidism to hypothyroidism to swelling in the neck and compression of the structures in the neck.
Diagnosis starts with a history and physical examination. Screening for thyroid disease in patients without symptoms is a debated topic although commonly practiced in the United States. If dysfunction of the thyroid is suspected, laboratory tests can help support or rule out thyroid disease. Initial blood tests often include thyroid-stimulating hormone (TSH) and free thyroxine (T4). Total and free triiodothyronine (T3) levels are less commonly used. If autoimmune disease of the thyroid is suspected, blood tests looking for Anti-thyroid autoantibodies can also be obtained. Procedures such as ultrasound, biopsy and a radioiodine scanning and uptake study may also be used to help with the diagnosis, particularly if a nodule is suspected.
Thyroid diseases are highly prevalent worldwide, and treatment varies based on the disorder. Levothyroxine is the mainstay of treatment for people with hypothyroidism, while people with hyperthyroidism caused by Graves' disease can be managed with iodine therapy, antithyroid medication, or surgical removal of the thyroid gland. Thyroid surgery may also be performed to remove a thyroid nodule or to reduce the size of a goiter if it obstructs nearby structures or for cosmetic reasons.
== Signs and symptoms ==
Symptoms of the condition vary with type: hypo- vs. hyperthyroidism, which are further described below.
Possible symptoms of hypothyroidism are:
Possible symptoms of hyperthyroidism are:
Note: certain symptoms and physical changes can be seen in both hypothyroidism and hyperthyroidism: fatigue, fine or thinning hair, menstrual cycle irregularities, muscle weakness or aches (myalgia), and different forms of myxedema.
== Diseases ==
=== Low function ===
Hypothyroidism is a state in which the body is not producing enough thyroid hormones, or is not able to respond to / utilize existing thyroid hormones properly. The main categories are:
Thyroiditis: an inflammation of the thyroid gland
Hashimoto's thyroiditis / Hashimoto's disease
Ord's thyroiditis
Postpartum thyroiditis
Silent thyroiditis
Acute thyroiditis
Riedel's thyroiditis (the majority of cases do not affect thyroid function, but approximately 30% of cases lead to hypothyroidism)
Iatrogenic hypothyroidism
Postoperative hypothyroidism
Medication- or radiation-induced hypothyroidism
Thyroid hormone resistance
Euthyroid sick syndrome
Congenital hypothyroidism: a deficiency of thyroid hormone from birth, which untreated can lead to cretinism
=== High function ===
Hyperthyroidism is a state in which the body is producing too much thyroid hormone. The main hyperthyroid conditions are:
Graves' disease
Toxic thyroid nodule
Thyroid storm
Toxic nodular struma (Plummer's disease)
Hashitoxicosis: transient hyperthyroidism that can occur in Hashimoto's thyroiditis
=== Structural abnormalities ===
Goiter: an abnormal enlargement of the thyroid gland
Endemic goiter
Diffuse goiter
Multinodular goiter
Lingual thyroid
Thyroglossal duct cyst
=== Tumors ===
Thyroid cancer
Papillary
Follicular
Medullary
Anaplastic
Lymphomas are usually malignant
Thyroid adenomas are benign tumors
=== Medication side effects ===
Certain medications can have the unintended side effect of affecting thyroid function. While some medications can lead to significant hypothyroidism or hyperthyroidism and those at risk will need to be carefully monitored, some medications may affect thyroid hormone lab tests without causing any symptoms or clinical changes, and may not require treatment. The following medications have been linked to various forms of thyroid disease:
Amiodarone (more commonly can lead to hypothyroidism, but can be associated with some types of hyperthyroidism)
Lithium salts (hypothyroidism)
Some types of interferon and IL-2 (thyroiditis)
Glucocorticoids, dopamine agonists, and somatostatin analogs (block TSH, which can lead to hypothyroidism)
== Pathophysiology ==
Most thyroid disease in the United States stems from a condition where the body's immune system attacks itself. In other instances, thyroid disease comes from the body trying to adapt to environmental conditions like iodine deficiency or to new physiologic conditions like pregnancy.
=== Autoimmune Thyroid Disease ===
Autoimmune thyroid disease is a general category of disease that occurs due to the immune system targeting its own body. It is not fully understood why this occurs, but it is thought to be partially genetic as these diseases tend to run in families. In one of the most common types, Graves' Disease, the body produces antibodies against the TSH receptor on thyroid cells. This causes the receptor to activate even without TSH being present and causes the thyroid to produce and release excess thyroid hormone (hyperthyroidism). Another common form of autoimmune thyroid disease is Hashimoto's thyroiditis where the body produces antibodies against different normal components of the thyroid gland, most commonly thyroglobulin, thyroid peroxidase, and the TSH receptor. These antibodies cause the immune system to attack the thyroid cells and cause inflammation (lymphocytic infiltration) and destruction (fibrosis) of the gland.
=== Goiter ===
Goiter is a general enlargement of the thyroid that can be associated with many thyroid diseases. It usually results from increased signaling through the TSH receptor, which drives the gland to produce more thyroid hormone and causes increased vascularity and growth (hypertrophy). In hypothyroid states or iodine deficiency, the body recognizes that it is not producing enough thyroid hormone and secretes more TSH to stimulate production; this stimulation causes the gland to increase in size. In hyperthyroidism caused by Graves' disease or toxic multinodular goiter, the TSH receptor is overstimulated even when thyroid hormone levels are normal. In Graves' disease, autoantibodies (thyroid-stimulating immunoglobulins) bind to and activate the TSH receptor in place of TSH; in toxic multinodular goiter, a mutation in the TSH receptor often causes it to activate without receiving any signal from TSH. More rarely, the thyroid may enlarge because it becomes filled with thyroid hormone or thyroid hormone precursors that it cannot release, because of congenital abnormalities, or because of increased intake of iodine from supplementation or medication.
=== Pregnancy ===
There are many changes to the body during pregnancy. One of the major changes that supports fetal development is the production of human chorionic gonadotropin (hCG). This hormone, produced by the placenta, has a structure similar to TSH and can bind to the maternal TSH receptor to stimulate thyroid hormone production. During pregnancy there is also an increase in estrogen, which causes the mother to produce more thyroxine-binding globulin, the protein that carries most of the thyroid hormone in the blood. These normal hormonal changes can make pregnancy look like a hyperthyroid state while remaining within the normal range for pregnancy, so it is necessary to use trimester-specific reference ranges for TSH and free T4. True hyperthyroidism in pregnancy is most often caused by an autoimmune mechanism from Graves' disease. New diagnosis of hypothyroidism in pregnancy is rare because hypothyroidism often makes it difficult to become pregnant in the first place. When hypothyroidism is seen in pregnancy, it is usually because an individual already has hypothyroidism and needs to increase their levothyroxine dose to account for the increased thyroxine-binding globulin of pregnancy.
== Diagnosis ==
Diagnosis of thyroid disease depends on symptoms and whether or not a thyroid nodule is present. Most patients will receive a blood test. Others might need an ultrasound, biopsy or a radioiodine scanning and uptake study.
=== Blood tests ===
==== Thyroid function tests ====
There are several hormones that can be measured in the blood to determine how the thyroid gland is functioning. These include the thyroid hormones triiodothyronine (T3) and its precursor thyroxine (T4), which are produced by the thyroid gland. Thyroid-stimulating hormone (TSH) is another important hormone that is secreted by the anterior pituitary cells in the brain. Its primary function is to increase the production of T3 and T4 by the thyroid gland.
The most useful marker of thyroid gland function is serum thyroid-stimulating hormone (TSH) levels. TSH levels are determined by a classic negative feedback system in which high levels of T3 and T4 suppress the production of TSH, and low levels of T3 and T4 increase the production of TSH. TSH levels are thus often used by doctors as a screening test, where the first approach is to determine whether TSH is elevated, suppressed, or normal.
Elevated TSH levels can signify inadequate thyroid hormone production (hypothyroidism)
Suppressed TSH levels can point to excessive thyroid hormone production (hyperthyroidism)
Because a single abnormal TSH level can be misleading, T3 and T4 levels must be measured in the blood to further confirm the diagnosis. When circulating in the body, T3 and T4 are bound to transport proteins. Only a small fraction of the circulating thyroid hormones are unbound or free, and thus biologically active. T3 and T4 levels can thus be measured as free T3 and T4, or total T3 and T4, which takes into consideration the free hormones in addition to the protein-bound hormones. Free T3 and T4 measurements are important because certain drugs and illnesses can affect the concentrations of transport proteins, resulting in differing total and free thyroid hormone levels. There are differing guidelines for T3 and T4 measurements.
Free T4 levels should be measured in the evaluation of hypothyroidism, and low free T4 establishes the diagnosis. T3 levels are generally not measured in the evaluation of hypothyroidism.
Free T4 and total T3 can be measured when hyperthyroidism is of high suspicion as it will improve the accuracy of the diagnosis. Free T4, total T3 or both are elevated and serum TSH is below normal in hyperthyroidism. If the hyperthyroidism is mild, only serum T3 may be elevated and serum TSH can be low or may not be detected in the blood.
Free T4 levels may also be tested in patients who have convincing symptoms of hyper- and hypothyroidism, despite a normal TSH.
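The screening logic described above (TSH first, then free T4 and T3 to confirm) can be sketched as a simple decision procedure. This is an illustration only: the reference ranges below are placeholder values, not clinical ones, and real laboratories use assay- and population-specific ranges with clinical judgment overriding any such rule of thumb.

```python
def interpret_thyroid_tests(tsh, free_t4, total_t3=None,
                            tsh_range=(0.4, 4.0),
                            ft4_range=(0.8, 1.8),
                            t3_range=(80, 180)):
    """Illustrative first-pass interpretation of thyroid function tests.

    All reference ranges here are hypothetical placeholders.
    """
    tsh_low, tsh_high = tsh_range
    ft4_low, ft4_high = ft4_range

    if tsh > tsh_high:
        # Elevated TSH suggests inadequate thyroid hormone production.
        if free_t4 < ft4_low:
            return "overt hypothyroidism"
        return "subclinical hypothyroidism (elevated TSH, normal free T4)"
    if tsh < tsh_low:
        # Suppressed TSH suggests excessive thyroid hormone production.
        # In mild hyperthyroidism, only serum T3 may be elevated.
        if free_t4 > ft4_high or (total_t3 is not None and total_t3 > t3_range[1]):
            return "hyperthyroidism"
        return "subclinical hyperthyroidism (suppressed TSH, normal free T4/T3)"
    return "normal screening TSH"
```

The case where only T3 is elevated with a suppressed TSH mirrors the mild hyperthyroidism pattern described above.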
==== Antithyroid antibodies ====
Autoantibodies to the thyroid gland may be detected in various disease states. There are several anti-thyroid antibodies, including anti-thyroglobulin antibodies (TgAb), anti-microsomal/anti-thyroid peroxidase antibodies (TPOAb), and TSH receptor antibodies (TSHRAb).
Elevated anti-thyroglobulin (TgAb) and anti-thyroid peroxidase antibodies (TPOAb) can be found in patients with Hashimoto's thyroiditis, the most common autoimmune type of hypothyroidism. TPOAb levels have also been found to be elevated in patients who present with subclinical hypothyroidism (where TSH is elevated, but free T4 is normal), and can help predict progression to overt hypothyroidism. The American Thyroid Association thus recommends measuring TPOAb levels when evaluating subclinical hypothyroidism or when trying to identify whether nodular thyroid disease is due to autoimmune thyroid disease.
When the etiology of hyperthyroidism is not clear after initial clinical and biochemical evaluation, measurement of TSH receptor antibodies (TSHRAb) can help make the diagnosis. In Graves' disease, TSHRAb levels are elevated as they are responsible for activating the TSH receptor and causing increased thyroid hormone production.
==== Other markers ====
There are two markers for thyroid-derived cancers.
Thyroglobulin (TG) levels can be elevated in well-differentiated papillary or follicular adenocarcinoma. It is often used to provide information on residual, recurrent or metastatic disease in patients with differentiated thyroid cancer. However, serum TG levels can be elevated in most thyroid diseases. Routine measurement of serum TG for evaluation of thyroid nodules is therefore currently not recommended by the American Thyroid Association.
Elevated calcitonin levels in the blood have been shown to be associated with the rare medullary thyroid cancer. However, the measurement of calcitonin levels as a diagnostic tool is currently controversial due to falsely high or low calcitonin levels in a variety of diseases other than medullary thyroid cancer.
Very infrequently, thyroxine-binding globulin (TBG) and transthyretin levels may be abnormal; these are not routinely tested.
To differentiate between different types of hypothyroidism, a specific test may be used. Thyrotropin-releasing hormone (TRH) is injected into the body through a vein. This hormone is naturally secreted by the hypothalamus and stimulates the pituitary gland. The pituitary responds by releasing thyroid-stimulating hormone (TSH). Large amounts of externally administered TRH can suppress the subsequent release of TSH. This amount of release-suppression is exaggerated in primary hypothyroidism, major depression, cocaine dependence, amphetamine dependence and chronic phencyclidine abuse. There is a failure to suppress in the manic phase of bipolar disorder.
=== Ultrasound ===
Many people may develop a thyroid nodule at some point in their lives. Although many who experience this worry that it is thyroid cancer, there are many causes of nodules that are benign and not cancerous. If a possible nodule is present, a doctor may order thyroid function tests to determine if the thyroid gland's activity is being affected. If more information is needed after a clinical exam and lab tests, medical ultrasonography can help determine the nature of thyroid nodule(s). There are some notable differences in typical benign vs. cancerous thyroid nodules that can particularly be detected by the high-frequency sound waves in an ultrasound scan. The ultrasound may also locate nodules that are too small for a doctor to feel on a physical exam, and can demonstrate whether a nodule is primarily solid, liquid (cystic), or a mixture of both. It is an imaging process that can often be done in a doctor's office, is painless, and does not expose the individual to any radiation.
Several characteristics on ultrasound can help distinguish a benign from a malignant (cancerous) thyroid nodule.
Although ultrasonography is a very important diagnostic tool, this method is not always able to separate benign from malignant nodules with certainty. In suspicious cases, a tissue sample is often obtained by biopsy for microscopic examination.
=== Radioiodine scanning and uptake ===
Thyroid scintigraphy, in which the thyroid is imaged with the aid of radioactive iodine (usually iodine-123, which does not harm thyroid cells, or rarely, iodine-131), is performed in the nuclear medicine department of a hospital or clinic. Radioiodine collects in the thyroid gland before being excreted in the urine. While in the thyroid, the radioactive emissions can be detected by a camera, producing a rough image of the shape (a radioiodine scan) and tissue activity (a radioiodine uptake) of the thyroid gland.
A normal radioiodine scan shows even uptake and activity throughout the gland. Irregular uptake can reflect an abnormally shaped or abnormally located gland, or it can indicate that a portion of the gland is overactive or underactive. For example, a nodule that is overactive ("hot") to the point of suppressing the activity of the rest of the gland is usually a thyrotoxic adenoma, a surgically curable form of hyperthyroidism that is rarely malignant. In contrast, finding that a substantial section of the thyroid is inactive ("cold") may indicate an area of non-functioning tissue, such as thyroid cancer.
The amount of radioactivity can be quantified and serves as an indicator of the metabolic activity of the gland. A normal quantitation of radioiodine uptake demonstrates that about 8-35% of the administered dose can be detected in the thyroid 24 hours later. Overactivity or underactivity of the gland, as may occur with hyperthyroidism or hypothyroidism, is usually reflected in increased or decreased radioiodine uptake. Different patterns may occur with different causes of hypo- or hyperthyroidism.
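The uptake quantification described above is essentially a ratio of counts. A minimal sketch, assuming the common formulation in which thigh counts approximate non-thyroidal background and a counted standard represents the administered dose (protocol details such as decay correction and counting geometry vary and are omitted here):

```python
def radioiodine_uptake_percent(neck_counts, thigh_counts, standard_counts,
                               standard_background=0.0):
    """Percent radioiodine uptake at a given time point.

    Uses the conventional ratio
        uptake = (neck - thigh) / (standard - background) * 100
    where the thigh measurement approximates non-thyroidal background.
    """
    net_thyroid = neck_counts - thigh_counts
    net_standard = standard_counts - standard_background
    return 100.0 * net_thyroid / net_standard

# Hypothetical counts from a 24-hour measurement.
uptake = radioiodine_uptake_percent(neck_counts=5200, thigh_counts=400,
                                    standard_counts=20000)
# uptake == 24.0; a value between roughly 8% and 35% at 24 hours
# would fall within the normal range quoted above.
```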
=== Biopsy ===
A medical biopsy refers to the obtaining of a tissue sample for examination under the microscope or other testing, usually to distinguish cancer from noncancerous conditions. Thyroid tissue may be obtained for biopsy by fine needle aspiration (FNA) or by surgery.
Fine needle aspiration has the advantage of being a brief, safe, outpatient procedure that is safer and less expensive than surgery and does not leave a visible scar. Needle biopsies became widely used in the 1980s, though it was recognized that the accuracy of identification of cancer was good, but not perfect. The accuracy of the diagnosis depends on obtaining tissue from all of the suspicious areas of an abnormal thyroid gland. The reliability of fine needle aspiration is increased when sampling can be guided by ultrasound, and over the last 15 years, this has become the preferred method for thyroid biopsy in North America.
== Treatment ==
=== Medication ===
Levothyroxine is a stereoisomer of thyroxine (T4) which is degraded much more slowly and can be administered once daily in patients with hypothyroidism. Natural thyroid hormone from pigs is sometimes also used, especially for people who cannot tolerate the synthetic version. Hyperthyroidism caused by Graves' disease may be treated with the thioamide drugs propylthiouracil, carbimazole or methimazole, or rarely with Lugol's solution. Additionally, hyperthyroidism and thyroid tumors may be treated with radioactive iodine. Ethanol injections for the treatment of recurrent thyroid cysts and metastatic thyroid cancer in lymph nodes can also be an alternative to surgery.
=== Surgery ===
Thyroid surgery is performed for a variety of reasons. A nodule or lobe of the thyroid is sometimes removed for biopsy or because of the presence of an autonomously functioning adenoma causing hyperthyroidism. A large majority of the thyroid may be removed (subtotal thyroidectomy) to treat the hyperthyroidism of Graves' disease, or to remove a goiter that is unsightly or impinges on vital structures.
A complete thyroidectomy of the entire thyroid, including associated lymph nodes, is the preferred treatment for thyroid cancer. Removal of the bulk of the thyroid gland usually produces hypothyroidism unless the person takes thyroid hormone replacement. Consequently, individuals who have undergone a total thyroidectomy are typically placed on thyroid hormone replacement (e.g. levothyroxine) for the remainder of their lives. Higher than normal doses are often administered to prevent recurrence.
If the thyroid gland must be removed surgically, care must be taken to avoid damage to adjacent structures, the parathyroid glands and the recurrent laryngeal nerve. Both are susceptible to accidental removal and/or injury during thyroid surgery.
The parathyroid glands produce parathyroid hormone (PTH), a hormone needed to maintain adequate amounts of calcium in the blood. Removal results in hypoparathyroidism and a need for supplemental calcium and vitamin D each day. In the event that the blood supply to any one of the parathyroid glands is endangered through surgery, the parathyroid gland(s) involved may be re-implanted in surrounding muscle tissue.
The recurrent laryngeal nerves, which run along the posterior thyroid, provide motor control for all external muscles of the larynx except the cricothyroid muscle. Accidental laceration of one or both recurrent laryngeal nerves may cause paralysis of the vocal cords and their associated muscles, changing the voice quality. A 2019 systematic review concluded that the available evidence shows no difference between visually identifying the nerve and using intraoperative neuromonitoring during surgery in preventing injury to the recurrent laryngeal nerve during thyroid surgery.
=== Radioiodine ===
Radioiodine therapy with iodine-131 can be used to shrink the thyroid gland (for instance, in the case of large goiters that cause symptoms but do not harbor cancer, after evaluation and biopsy of suspicious nodules has been done), or to destroy hyperactive thyroid cells (for example, in cases of thyroid cancer). The iodine uptake can be high in countries with iodine deficiency, but low in iodine-sufficient countries. To enhance iodine-131 uptake by the thyroid and allow for more successful treatment, TSH is raised prior to therapy in order to stimulate the existing thyroid cells. This is done either by withdrawal of thyroid hormone medication or by injections of recombinant human TSH (Thyrogen), released in the United States in 1999. Thyrogen injections can reportedly boost uptake up to 50-60%. Radioiodine treatment can also cause hypothyroidism (which is sometimes the end goal of treatment) and, although rare, a pain syndrome (due to radiation thyroiditis).
== Epidemiology ==
In the United States, autoimmune inflammation is the most common form of thyroid disease, while worldwide, hypothyroidism and goiter due to dietary iodine deficiency are the most common. According to the American Thyroid Association in 2015, approximately 20 million people in the United States alone are affected by thyroid disease. Hypothyroidism affects 3-10% of adults, with a higher incidence in women and the elderly. An estimated one-third of the world's population currently lives in areas of low dietary iodine levels. In regions of severe iodine deficiency, the prevalence of goiter is as high as 80%. In areas where iodine deficiency is not found, the most common type of hypothyroidism is an autoimmune subtype called Hashimoto's thyroiditis, with a prevalence of 1-2%. As for hyperthyroidism, Graves' disease, another autoimmune condition, is the most common type, with a prevalence of 0.5% in males and 3% in females. Although thyroid nodules are common, thyroid cancer is rare. Thyroid cancer accounts for less than 1% of all cancer in the UK, though it is the most common endocrine tumor and makes up greater than 90% of all cancers of the endocrine glands.
== See also ==
Hyperthyroidism
Graves' disease
Hypothyroidism
Hashimoto's thyroiditis
Thyroid nodule
Thyroid disease in pregnancy
== References ==
== External links ==
Medline Plus Medical Encyclopedia entry for Thyroid Disease
National Institutes of Health Archived 2015-02-25 at the Wayback Machine
Protein electrophoresis is a method for analysing the proteins in a fluid or an extract. The electrophoresis may be performed with a small volume of sample in a number of alternative ways with or without a supporting medium, namely agarose or polyacrylamide. Variants of gel electrophoresis include SDS-PAGE, free-flow electrophoresis, electrofocusing, isotachophoresis, affinity electrophoresis, immunoelectrophoresis, counterelectrophoresis, and capillary electrophoresis. Each variant has many subtypes with individual advantages and limitations. Gel electrophoresis is often performed in combination with electroblotting or immunoblotting to give additional information about a specific protein.
== Denaturing gel methods ==
=== SDS-PAGE ===
SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis, describes a collection of related techniques to separate proteins according to their electrophoretic mobility (a function of the molecular weight of a polypeptide chain) while in the denatured (unfolded) state. In most proteins, the binding of SDS to the polypeptide chain imparts an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis.
SDS is a strong detergent used to denature native proteins into unfolded, individual polypeptides. When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. In this process, the intrinsic charges of the polypeptides become negligible compared to the negative charges contributed by SDS. The treated polypeptides thus become rod-like structures with a uniform charge density, that is, the same net negative charge per unit length. The electrophoretic mobilities of these proteins will be a linear function of the logarithms of their molecular weights.
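The linear relation between mobility and the logarithm of molecular weight is what makes molecular-weight estimation from a marker ladder possible: a standard curve of log(MW) versus relative migration (Rf) is fitted, then interpolated for unknown bands. A minimal least-squares sketch, using a hypothetical marker ladder (the Rf and weight values below are made up for illustration):

```python
import math

def fit_mw_standard_curve(rf_values, molecular_weights):
    """Least-squares fit of log10(MW) versus relative migration (Rf),
    implementing the linear relation described above:
        log10(MW) = slope * Rf + intercept
    """
    ys = [math.log10(mw) for mw in molecular_weights]
    n = len(rf_values)
    mean_x = sum(rf_values) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in rf_values)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(rf_values, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def estimate_mw(rf, slope, intercept):
    """Estimate the molecular weight of an unknown band from its Rf."""
    return 10 ** (slope * rf + intercept)

# Hypothetical marker ladder: Rf values paired with weights in daltons.
slope, intercept = fit_mw_standard_curve(
    [0.2, 0.4, 0.6, 0.8], [97000, 45000, 21000, 10000])
```

Because smaller polypeptides migrate farther, the fitted slope is negative: larger Rf maps to lower estimated molecular weight.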
== Native gel methods ==
Native gels, also known as non-denaturing gels, analyze proteins that are still in their folded state. Thus, the electrophoretic mobility depends not only on the charge-to-mass ratio, but also on the physical shape and size of the protein.
=== Blue native PAGE ===
BN-PAGE is a native PAGE technique in which the Coomassie brilliant blue dye provides the necessary charges to the protein complexes for electrophoretic separation. The disadvantage of Coomassie is that, in binding to proteins, it can act like a detergent and cause complexes to dissociate. Another drawback is the potential quenching of chemiluminescence (e.g. in subsequent western blot detection or activity assays) or of the fluorescence of proteins with prosthetic groups (e.g. heme or chlorophyll) or labelled with fluorescent dyes.
=== Clear native PAGE ===
CN-PAGE (commonly referred to as Native PAGE) separates acidic water-soluble and membrane proteins in a polyacrylamide gradient gel. It uses no charged dye so the electrophoretic mobility of proteins in CN-PAGE (in contrast to the charge shift technique BN-PAGE) is related to the intrinsic charge of the proteins. The migration distance depends on the protein charge, its size and the pore size of the gel. In many cases this method has lower resolution than BN-PAGE, but CN-PAGE offers advantages whenever Coomassie dye would interfere with further analytical techniques, for example it has been described as a very efficient microscale separation technique for FRET analyses. Additionally, as CN-PAGE does not require the harsh conditions of BN-PAGE, it can retain the supramolecular assemblies of membrane protein complexes that would be dissociated in BN-PAGE.
=== Preparative native PAGE ===
The folded protein complexes of interest separate cleanly and predictably without the risk of denaturation due to the specific properties of the polyacrylamide gel, electrophoresis buffer solution, electrophoretic equipment and standard parameters used. The separated proteins are continuously eluted into a physiological eluent and transported to a fraction collector. In four to five PAGE fractions each the different metal cofactors can be identified and absolutely quantified by high-resolution ICP-MS. The associated structures of the isolated metalloproteins in these fractions can be specifically determined by solution NMR spectroscopy.
== Buffer systems ==
Most protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band. The formation of the ion gradient is achieved by choosing a pH value at which the ions of the buffer are only moderately charged compared to the SDS-coated proteins. These conditions provide an environment in which Kohlrausch's reactions determine the molar conductivity. As a result, SDS-coated proteins are concentrated severalfold into a thin zone on the order of 19 μm within a few minutes. At this stage all proteins migrate at the same speed by isotachophoresis. This occurs in a region of the gel that has larger pores, so that the gel matrix does not retard the migration during the focusing or "stacking" event. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins. At the same time, the separating part of the gel also has a pH value at which the buffer ions on average carry a greater charge, causing them to "outrun" the SDS-covered proteins and eliminate the ion gradient and thereby the stacking effect.
A very widespread discontinuous buffer system is the tris-glycine or "Laemmli" system that stacks at a pH of 6.8 and resolves at a pH of ~8.3-9.0. A drawback of this system is that these pH values may promote disulfide bond formation between cysteine residues in the proteins, because the pKa of cysteine ranges from 8 to 9 and because the reducing agent present in the loading buffer does not co-migrate with the proteins. Recent advances in buffering technology alleviate this problem by resolving the proteins at a pH well below the pKa of cysteine (e.g., bis-tris, pH 6.5) and by including reducing agents (e.g. sodium bisulfite) that move into the gel ahead of the proteins to maintain a reducing environment. An additional benefit of using buffers with lower pH values is that the acrylamide gel is more stable at lower pH, so the gels can be stored for long periods of time before use.
=== SDS gradient gel electrophoresis of proteins ===
As voltage is applied, the anions (and negatively charged sample molecules) migrate toward the positive electrode (anode) in the lower chamber; the leading ion is Cl− (high mobility and high concentration), and glycinate is the trailing ion (low mobility and low concentration). SDS-protein particles do not migrate freely at the border between the Cl− of the gel buffer and the Gly− of the cathode buffer. Friedrich Kohlrausch found that Ohm's law also applies to dissolved electrolytes. Because of the voltage drop between the Cl− and glycine buffers, proteins are compressed (stacked) into micrometer-thin layers. The boundary moves through a pore gradient and the protein stack gradually disperses due to the increasing frictional resistance of the gel matrix. Stacking and unstacking occur continuously in the gradient gel, for every protein at a different position. For complete protein unstacking the polyacrylamide gel concentration must exceed 16% T. The two-gel system of "Laemmli" is a simple gradient gel. The pH discontinuity of the buffers is of no significance for the separation quality, and a "stacking gel" with a different pH is not needed.
== Visualization ==
The most popular protein stain is Coomassie brilliant blue. It is an anionic dye, which non-specifically binds to proteins. Proteins in the gel are fixed by acetic acid and simultaneously stained. The excess dye incorporated into the gel can be removed by destaining with the same solution without the dye. The proteins are detected as blue bands on a clear background.
When a more sensitive method than Coomassie staining is needed, silver staining is usually used. Silver staining is a sensitive procedure for detecting trace amounts of proteins in gels, but it can also visualize nucleic acids or polysaccharides.
Visualization methods that do not use a dye such as Coomassie or silver are available on the market. For example, Bio-Rad Laboratories markets "stain-free" gels for SDS-PAGE gel electrophoresis. Alternatively, reversible fluorescent dyes from Azure Biosystems, such as AzureRed or Azure TotalStain Q, can be used.
As in nucleic acid gel electrophoresis, a tracking dye is often used: anionic dyes of known electrophoretic mobility are included in the sample buffer. A very common tracking dye is bromophenol blue. This dye is coloured at alkaline and neutral pH and is a small, negatively charged molecule that moves towards the anode. Being highly mobile, it moves ahead of most proteins.
== Medical applications ==
In medicine, protein electrophoresis is a method of analysing the proteins mainly in blood serum. Before the widespread use of gel electrophoresis, protein electrophoresis was performed as free-flow electrophoresis (on paper) or as immunoelectrophoresis.
Traditionally, two classes of blood proteins are considered: serum albumin and globulin. They are generally equal in proportion, but albumin is a much smaller molecule and is lightly negatively charged, leading to an accumulation of albumin on the electrophoretic gel. A small band before albumin represents transthyretin (also named prealbumin). Some forms of medication or body chemicals can cause their own band, but it is usually small. Abnormal bands (spikes) are seen in monoclonal gammopathy of undetermined significance and multiple myeloma, and are useful in the diagnosis of these conditions.
The globulins are classified by their banding pattern (with their main representatives):
The alpha (α) band consists of two parts, 1 and 2:
α1 - α1-antitrypsin, α1-acid glycoprotein.
α2 - haptoglobin, α2-macroglobulin, α2-antiplasmin, ceruloplasmin.
The beta (β) band - transferrin, LDL, complement
The gamma (γ) band - immunoglobulin (IgA, IgD, IgE, IgG and IgM). Paraproteins (in multiple myeloma) usually appear in this band.
== See also ==
Affinity electrophoresis
Electroblotting
Electrofocusing
Fast parallel proteolysis (FASTpp)
Gel electrophoresis
Immunoelectrophoresis
Immunofixation
Native gel electrophoresis
Paraprotein
QPNC-PAGE
SDD-AGE
== References ==
== External links ==
Educational resource for protein electrophoresis
Gel electrophoresis of proteins Archived 2021-01-26 at the Wayback Machine
Bone disease refers to medical conditions that affect the bones.
== Terminology ==
A bone disease is also called an "osteopathy", but because the term osteopathy is often used to refer to an alternative health-care philosophy, use of the term can cause some confusion.
== Bone and cartilage disorders ==
Osteochondrodysplasia is a general term for a disorder of the development of bone and cartilage.
== List ==
=== A ===
Ambe
Avascular necrosis or Osteonecrosis
Arthritis
=== B ===
Bone spur (Osteophytes)
=== C ===
Craniosynostosis
Coffin–Lowry syndrome
Copenhagen disease
=== F ===
Fibrodysplasia ossificans progressiva
Fibrous dysplasia
Fong disease (or Nail–patella syndrome)
Fracture
=== G ===
Giant cell tumor of bone
Greenstick fracture
Gout
=== H ===
Hypophosphatasia
Hereditary multiple exostoses
=== K ===
Klippel–Feil syndrome
=== M ===
Metabolic bone disease
Multiple myeloma
=== N ===
Nail–patella syndrome
=== O ===
Osteitis
Osteitis deformans (or Paget's disease of bone)
Osteitis fibrosa cystica (or Osteitis fibrosa, or Von Recklinghausen's disease of bone)
Osteitis pubis
Condensing osteitis (or Osteitis condensas)
Osteochondritis dissecans
Osteochondroma (bone tumor)
Osteogenesis imperfecta
Osteomalacia
Osteomyelitis
Osteopenia
Osteopetrosis
Osteoporosis
=== P ===
Porotic hyperostosis
Primary hyperparathyroidism
=== R ===
Renal osteodystrophy
=== S ===
Salter–Harris fracture
Scoliosis
=== W ===
Water on the knee
== See also ==
Osteoimmunology
== References ==
== External links ==
https://www.nlm.nih.gov/medlineplus/bonediseases.html
Laboratory quality control is designed to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior to the release of patient results, in order to improve the quality of the results reported by the laboratory. Quality control (QC) is a measure of precision, or how well the measurement system reproduces the same result over time and under varying operating conditions. Laboratory quality control material is usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, after equipment calibration, and whenever patient results seem inappropriate. Quality control material should approximate the same matrix as patient specimens, taking into account properties such as viscosity, turbidity, composition, and color. It should be stable for long periods of time, and available in large enough quantities for a single batch to last at least one year. Liquid controls are more convenient than lyophilized (freeze-dried) controls because they do not have to be reconstituted, minimizing pipetting error. Dried tube specimen (DTS) is slightly cumbersome as a QC material, but it is very low-cost, stable over long periods, and efficient, making it especially useful for resource-restricted settings in under-developed and developing countries. DTS can be manufactured in-house by a laboratory or blood bank for its own use.
== Interpretation ==
Interpretation of quality control data involves both graphical and statistical methods. Quality control data is most easily visualized using a Levey–Jennings chart. The dates of analyses are plotted along the x-axis and control values are plotted along the y-axis. The pattern of plotted points provides a simple way to detect increased random error and shifts or trends in calibration.
== The control charts ==
Control charts are a statistical approach to the study of manufacturing process variation for the purpose of improving the economic effectiveness of the process. These methods are based on continuous monitoring of process variation. The control chart, also known as the Shewhart chart or process-behavior chart, is a statistical tool intended to assess the nature of variation in a process and to facilitate forecasting and management. A control chart is a more specific kind of run chart. The control chart is one of the seven basic tools of quality control, which also include the histogram, pareto chart, check sheet, cause and effect diagram, flowchart and scatter diagram. Control charts prevent unnecessary process adjustments, provide information about process capability, provide diagnostic information, and are a proven technique for improving productivity.
== Levey–Jennings chart ==
A Levey–Jennings chart is a graph on which quality control data are plotted to give a visual indication of whether a laboratory test is working well. The distance from the mean is measured in standard deviations. It is named after Stanley Levey and E. R. Jennings, pathologists who suggested in 1950 that Shewhart's individuals control chart could be used in the clinical laboratory. The date and time, or more often the number of the control run, is plotted on the x-axis. A mark is made indicating how far the actual result was from the mean, which is the expected value for the control. Lines run across the graph at the mean, as well as one, two and three standard deviations to either side of the mean. This makes it easy to see how far off the result was.
Rules such as the Westgard rules can be applied to see whether the results from the patient samples analysed alongside the control can be released, or whether they need to be rerun. The formulation of the Westgard rules was based on statistical methods. Westgard rules are commonly used to analyse data in Shewhart control charts. They define specific performance limits for a particular assay (test) and can be used to detect both random and systematic errors. Westgard rules are programmed into automated analyzers to determine when an analytical run should be rejected. These rules need to be applied carefully so that true errors are detected while false rejections (of valid results that are outside of range) are minimized. The rules applied to high-volume chemistry and hematology instruments should produce low false rejection rates.
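Two of the most commonly applied Westgard rules, 1-3s (one control exceeding 3 SD from the mean, suggesting random error) and 2-2s (two consecutive controls beyond 2 SD on the same side, suggesting systematic error), can be sketched in a few lines. The control mean, SD and run values below are hypothetical illustrations, not data from any real assay:

```python
def z_scores(values, mean, sd):
    """Convert control measurements to standard-deviation distances from the mean."""
    return [(v - mean) / sd for v in values]

def westgard_flags(values, mean, sd):
    """Flag control runs violating the 1-3s and 2-2s Westgard rules.

    1-3s: one control exceeds the mean by more than 3 SD (random error).
    2-2s: two consecutive controls exceed 2 SD on the same side (systematic error).
    """
    z = z_scores(values, mean, sd)
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s"))
        if i > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2 and zi * z[i - 1] > 0:
            flags.append((i, "2-2s"))
    return flags

# Hypothetical control target: mean 100, SD 5.
runs = [101, 99, 111, 111, 100, 117]
print(westgard_flags(runs, 100.0, 5.0))  # [(3, '2-2s'), (5, '1-3s')]
```

In practice the full rule set (1-2s warning, R-4s, 4-1s, 10x, and so on) is applied in combination, and violation of a rejection rule triggers rerunning of the affected analytical run.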
The Levey–Jennings chart differs from the Shewhart individuals control chart in how the standard deviation (σ, "sigma") is estimated: the Levey–Jennings chart uses the long-term (i.e., population) estimate of sigma, whereas the Shewhart chart uses the short-term (i.e., within the rational subgroup) estimate.
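The difference can be made concrete with a short sketch: the long-term estimate is the ordinary sample standard deviation of all control values, while the short-term (individuals-chart) estimate divides the average moving range by the standard d2 constant of about 1.128 for subgroups of size two. The data below are hypothetical:

```python
import statistics

def sigma_long_term(xs):
    """Levey-Jennings style: ordinary sample standard deviation of all points."""
    return statistics.stdev(xs)

def sigma_short_term(xs):
    """Shewhart individuals style: average moving range divided by d2 (1.128 for n=2)."""
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    return (sum(moving_ranges) / len(moving_ranges)) / 1.128

data = [4.1, 4.3, 4.0, 4.4, 4.2, 4.5]
print(sigma_long_term(data))   # ~0.187
print(sigma_short_term(data))  # ~0.248
```

On stable data the two estimates are close; drift or shifts inflate the long-term estimate relative to the short-term one.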
== Validation and verification ==
Validation and verification of medical devices ensure that they fulfil their intended purpose. Validation or verification is generally needed when a health facility acquires a new device to perform medical tests.
The main difference between the two is that validation is focused on ensuring that the device meets the needs and requirements of its intended users and the intended use environment, whereas verification is focused on ensuring that the device meets its specified design requirements.
== Analytical sensitivity and specificity ==
"Analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymously to detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others. These definitions are different from diagnostic sensitivity and diagnostic specificity, which are measures of how well a test can identify true positives and true negatives, respectively.
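For contrast with the analytical definitions above, diagnostic sensitivity and specificity are computed directly from the counts of true and false positives and negatives; the counts below are hypothetical:

```python
def diagnostic_sensitivity(tp, fn):
    """Fraction of true disease cases the test correctly identifies."""
    return tp / (tp + fn)

def diagnostic_specificity(tn, fp):
    """Fraction of disease-free cases the test correctly identifies."""
    return tn / (tn + fp)

# Hypothetical evaluation: 90 true positives, 10 false negatives,
# 95 true negatives, 5 false positives.
print(diagnostic_sensitivity(90, 10))  # 0.9
print(diagnostic_specificity(95, 5))   # 0.95
```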
== See also ==
Quality control
Quality assurance
External quality assessment
== References ==
== External links ==
Westgard.com | Wikipedia/Laboratory_quality_control |
Therapeutic drug monitoring (TDM) is a branch of clinical chemistry and clinical pharmacology that specializes in the measurement of medication levels in blood. Its main focus is on drugs with a narrow therapeutic range, i.e. drugs that can easily be under- or overdosed. TDM aims at improving patient care by individually adjusting the dose of drugs for which clinical experience or clinical trials have shown that this improves outcome in the general or in special populations. It can be based on a priori pharmacogenetic, demographic and clinical information, and/or on the a posteriori measurement of blood concentrations of drugs (pharmacokinetic monitoring) or of biological surrogate or end-point markers of effect (pharmacodynamic monitoring).
There are numerous variables that influence the interpretation of drug concentration data: time, route and dose of drug given, time of blood sampling, handling and storage conditions, precision and accuracy of the analytical method, validity of pharmacokinetic models and assumptions, co-medications and, last but not least, clinical status of the patient (i.e. disease, renal/hepatic status, biologic tolerance to drug therapy, etc.).
Many different professionals (physicians, clinical pharmacists, nurses, medical laboratory scientists, etc.) are involved with the various elements of drug concentration monitoring, which is a truly multidisciplinary process. Because failure to properly carry out any one of the components can severely affect the usefulness of using drug concentrations to optimize therapy, an organized approach to the overall process is critical.
== A priori therapeutic drug monitoring ==
A priori TDM consists of determining the initial dose regimen to be given to a patient, based on clinical endpoint and on established population pharmacokinetic-pharmacodynamic (PK/PD) relationships. These relationships help to identify sub-populations of patients with different dosage requirements, by utilizing demographic data, clinical findings, clinical chemistry results, and/or, when appropriate, pharmacogenetic characteristics.
== A posteriori therapeutic drug monitoring ==
The concept of a posteriori TDM corresponds to the usual meaning of TDM in medical practice, which refers to the readjustment of the dosage of a given treatment in response to the measurement of an appropriate marker of drug exposure or effect. TDM encompasses all aspects of this feedback control, namely:
it includes pre-analytical, analytical and post-analytical phases, each with the same importance;
it is most often based on the specific, accurate, precise and timely determination of the active and/or toxic forms of drugs in biological samples collected at the appropriate times in the correct containers (PK monitoring), or can employ the measurement of a biological parameter as a surrogate or end-point marker of effect (PD monitoring), e.g. concentration of an endogenous compound, enzymatic activity, gene expression, etc., either as a complement to PK monitoring or as the main TDM tool;
it requires interpretation of the results, taking into account pre-analytical conditions, clinical information and the clinical efficiency of the current dosage regimen; this can be achieved by the application of PK-PD modeling;
it can potentially benefit from population PK/PD models possibly combined with individual pharmacokinetic forecasting techniques, or pharmacogenetic data.
== Characteristics of drugs that are candidates for therapeutic drug monitoring ==
In pharmacotherapy, many medications are used without monitoring of blood levels, as their dosage can generally be varied according to the clinical response that a patient has to that substance. For certain drugs, this is impracticable: insufficient levels will lead to undertreatment or resistance, while excessive levels can lead to toxicity and tissue damage.
Indications in favor of therapeutic drug monitoring include:
consistent, clinically established pharmacodynamic relationships between plasma drug concentrations and pharmacological efficacy and/or toxicity;
significant between-patient pharmacokinetic variability, making a standard dosage achieve different concentration levels among patients (while the drug disposition remains relatively stable in a given patient);
narrow therapeutic window of the drug, which forbids giving high doses in all patients to ensure overall efficacy;
drug dosage optimization not achievable based on clinical observation alone;
duration of the treatment and criticality for patient's condition justifying dosage adjustment efforts;
potential patient compliance problems that might be remedied through concentration monitoring.
TDM determinations are also used to detect and diagnose poisoning with drugs, should the suspicion arise.
Examples of drugs widely analysed for therapeutic drug monitoring:
Aminoglycoside antibiotics (gentamicin)
Antiepileptics (such as carbamazepine, phenytoin and valproic acid)
Mood stabilisers, especially lithium citrate
Antipsychotics (such as pimozide and clozapine)
Digoxin
Ciclosporin, tacrolimus in organ transplant recipients
TDM is increasingly proposed for a number of therapeutic drugs, e.g. many antibiotics, small-molecule tyrosine kinase inhibitors and other targeted anticancer agents, TNF inhibitors and other biological agents, antifungal agents, antiretroviral agents used in HIV infection, psychiatric drugs, etc.
== Practice of therapeutic drug monitoring ==
Automated analytical methods such as enzyme multiplied immunoassay technique or fluorescence polarization immunoassay are widely available in medical laboratories for drugs frequently measured in practice. Nowadays, most other drugs can be readily measured in blood or plasma using versatile methods such as liquid chromatography–mass spectrometry or gas chromatography–mass spectrometry, which have progressively replaced high-performance liquid chromatography. Yet TDM is not limited to the provision of precise and accurate concentration measurements; it also involves appropriate medical interpretation, based on robust scientific knowledge.
In order to guarantee the quality of this clinical interpretation, it is essential that the sample be taken under good conditions: i.e., preferably under a stable dosage, at a standardized sampling time (often at the end of a dosing interval), excluding any source of bias (sample contamination or dilution, analytical interferences) and having carefully recorded the sampling time, the last dose intake time, the current dosage and the influential patient's characteristics.
The interpretation of a drug concentration result goes through the following stages:
Determine whether the observed concentration is in the “normal range” expected under the dosage administered, taking into account the patient's individual characteristics. This requires referring to population pharmacokinetic studies of the drug in consideration.
Determine whether the patient's concentration profile is close to the “exposure target” associated with the best trade-off between probability of therapeutic success and risk of toxicity. This refers to clinical pharmacodynamic knowledge describing dose-concentration-response relationships among treated patients.
If the observed concentration is plausible but far from the suitable level, determine how to adjust the dosage to drive the concentration curve close to target. Several approaches exist for this, from the easiest “rule of three” to sophisticated computer-assisted calculations implementing Bayesian inference algorithms based on population pharmacokinetics.
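The simplest "rule of three" adjustment is plain proportionality, which assumes linear pharmacokinetics (concentration scales with dose at steady state); the dose and concentrations below are hypothetical illustrations, not clinical recommendations:

```python
def adjust_dose(current_dose_mg, observed_conc, target_conc):
    """Proportional ("rule of three") dose adjustment under the assumption of
    linear pharmacokinetics: steady-state concentration scales with dose.
    Real dosing decisions require clinical judgment and often PK modeling."""
    return current_dose_mg * target_conc / observed_conc

# Hypothetical trough of 4 mg/L observed on 200 mg/day, target 6 mg/L.
print(adjust_dose(200, 4.0, 6.0))  # 300.0 mg/day
```

Bayesian approaches replace this single ratio with a population PK model updated by the patient's measured concentrations, which handles nonlinearity and sampling-time uncertainty far better.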
Ideally, the usefulness of a TDM strategy should be confirmed through an evidence-based approach involving the performance of well-designed controlled clinical trials. In practice however, TDM has undergone formal clinical evaluation only for a limited number of drugs to date, and much of its development rests on empirical foundations.
Point-of-care tests enabling easy performance of TDM in medical practice are under development.
== Integration with precision medicine: model-informed precision dosing ==
The evolution of information technology holds great promise for using the methods and knowledge of pharmacometrics to bring patient treatment closer to the ideal of precision medicine (which is not just about adjusting treatments to genetic factors, but encompasses all aspects of therapeutic individualization). Model-informed precision dosing (MIPD) should enable significant progress to be made in taking into account the many factors influencing drug response, in order to optimize therapies (a priori TDM). It should also make it possible to take optimal account of TDM results to individualize drug dosage (a posteriori TDM). Ideally, these computational approaches will be integrated into the electronic patient record in the form of clinical decision support systems (CDSS), automatically and seamlessly linking laboratory resources and pharmacological expertise with clinicians and caregivers at the patient's bedside. Software tools already exist and continue to be developed for this purpose. They are set to play an increasingly important role in patient treatment, in the same way that instrument piloting has become the norm in aviation.
== References ==
== External links ==
International Association of Therapeutic Drug Monitoring and Clinical Toxicology | Wikipedia/Therapeutic_drug_monitoring |
DNA ligase is a type of enzyme that facilitates the joining of DNA strands together by catalyzing the formation of a phosphodiester bond. It plays a role in repairing single-strand breaks in duplex DNA in living organisms, but some forms (such as DNA ligase IV) may specifically repair double-strand breaks (i.e. a break in both complementary strands of DNA). Single-strand breaks are repaired by DNA ligase using the complementary strand of the double helix as a template, with DNA ligase creating the final phosphodiester bond to fully repair the DNA.
DNA ligase is used in both DNA repair and DNA replication (see Mammalian ligases). In addition, DNA ligase has extensive use in molecular biology laboratories for recombinant DNA experiments (see Research applications). Purified DNA ligase is used in gene cloning to join DNA molecules together to form recombinant DNA.
== Enzymatic mechanism ==
The mechanism of DNA ligase is to form two covalent phosphodiester bonds between 3' hydroxyl ends of one nucleotide ("acceptor"), with the 5' phosphate end of another ("donor"). Two ATP molecules are consumed for each phosphodiester bond formed. AMP is required for the ligase reaction, which proceeds in four steps:
Reorganization of the active site, such as at nicks in DNA segments or between Okazaki fragments;
Adenylylation (addition of AMP) of a lysine residue in the active center of the enzyme, pyrophosphate is released;
Transfer of the AMP to the 5' phosphate of the so-called donor, formation of a pyrophosphate bond;
Formation of a phosphodiester bond between the 5' phosphate of the donor and the 3' hydroxyl of the acceptor.
Ligase will also work with blunt ends, although higher enzyme concentrations and different reaction conditions are required.
== Types ==
=== E. coli ===
The E. coli DNA ligase is encoded by the lig gene. DNA ligase in E. coli, as well as most prokaryotes, uses energy gained by cleaving nicotinamide adenine dinucleotide (NAD) to create the phosphodiester bond. It does not ligate blunt-ended DNA except under conditions of molecular crowding with polyethylene glycol, and cannot join RNA to DNA efficiently.
The activity of E. coli DNA ligase can be enhanced by DNA polymerase I at the right concentrations. The enhancement only works when the concentration of DNA polymerase I is much lower than that of the DNA fragments to be ligated. At higher concentrations, DNA polymerase I has an adverse effect on E. coli DNA ligase.
=== T4 ===
T4 DNA ligase comes from bacteriophage T4 (a bacteriophage that infects Escherichia coli bacteria) and is the ligase most commonly used in laboratory research. It can ligate either cohesive or blunt ends of DNA, oligonucleotides, as well as RNA and RNA-DNA hybrids, but not single-stranded nucleic acids. It can also ligate blunt-ended DNA with much greater efficiency than E. coli DNA ligase. Unlike E. coli DNA ligase, T4 DNA ligase cannot utilize NAD and has an absolute requirement for ATP as a cofactor. Some engineering has been done to improve the in vitro activity of T4 DNA ligase; one successful approach, for example, tested T4 DNA ligase fused to several alternative DNA binding proteins and found that the constructs with either p50 or NF-kB as fusion partners were over 160% more active in blunt-end ligations for cloning purposes than wild-type T4 DNA ligase. A typical reaction for inserting a fragment into a plasmid vector would use about 0.01 (sticky ends) to 1 (blunt ends) units of ligase. The optimal incubation temperature for T4 DNA ligase is 37 °C, a temperature at which T4 enzymes are most active. However, it is not uncommon to set up ligation reactions at 16 °C, a trade-off temperature at which the ligase is active as well as one that is suitable for base-pairing of sticky ends.
Bacteriophage T4 ligase mutants have increased sensitivity to both UV irradiation and the alkylating agent methyl methanesulfonate indicating that DNA ligase is employed in the repair of the DNA damages caused by these agents.
=== Mammalian ===
In mammals, there are four specific types of ligase.
DNA ligase 1: ligates the nascent DNA of the lagging strand after the Ribonuclease H has removed the RNA primer from the Okazaki fragments.
DNA ligase 3: complexes with DNA repair protein XRCC1 to aid in sealing DNA during nucleotide excision repair and the joining of recombinant fragments. Of all known mammalian DNA ligases, only ligase 3 has been found to be present in mitochondria.
DNA ligase 4: complexes with XRCC4. It catalyzes the final step in the non-homologous end joining DNA double-strand break repair pathway. It is also required for V(D)J recombination, the process that generates diversity in immunoglobulin and T-cell receptor loci during immune system development.
DNA ligase 2: a purification artifact resulting from proteolytic degradation of DNA ligase 3. It was initially recognized as another DNA ligase, which is the reason for the unusual nomenclature of DNA ligases.
DNA ligase from eukaryotes and some microbes uses adenosine triphosphate (ATP) rather than NAD.
=== Thermostable ===
Thermostable DNA ligase, derived from a thermophilic bacterium, is stable and active at much higher temperatures than conventional DNA ligases. Its half-life is 48 hours at 65 °C and greater than 1 hour at 95 °C. Ampligase DNA Ligase has been shown to be active for at least 500 thermal cycles (94 °C/80 °C) or 16 hours of cycling. This exceptional thermostability permits extremely high hybridization stringency and ligation specificity.
== Measurement of activity ==
There are at least three different units used to measure the activity of DNA ligase:
Weiss unit - the amount of ligase that catalyzes the exchange of 1 nmole of 32P from inorganic pyrophosphate to ATP in 20 minutes at 37 °C. This is the unit most commonly used.
Modrich-Lehman unit - this is rarely used, and one unit is defined as the amount of enzyme required to convert 100 nmoles of d(A-T)n to an exonuclease-III resistant form in 30 minutes under standard conditions.
Many commercial suppliers of ligases use an arbitrary unit based on the ability of ligase to ligate cohesive ends. These units are often more subjective than quantitative and lack precision.
== Research applications ==
DNA ligases have become indispensable tools in modern molecular biology research for generating recombinant DNA sequences. For example, DNA ligases are used with restriction enzymes to insert DNA fragments, often genes, into plasmids.
Controlling the temperature is a vital aspect of performing efficient recombination experiments involving the ligation of cohesive-ended fragments. Most experiments use T4 DNA ligase (isolated from bacteriophage T4), which is most active at 37 °C. However, for optimal ligation efficiency with cohesive-ended fragments ("sticky ends"), the optimal enzyme temperature needs to be balanced with the melting temperature Tm of the sticky ends being ligated: above the Tm, the homologous pairing of the sticky ends is not stable because the high temperature disrupts hydrogen bonding. A ligation reaction is most efficient when the sticky ends are already stably annealed, and disruption of the annealing ends would therefore result in low ligation efficiency. The shorter the overhang, the lower the Tm.
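A rough estimate of how easily a short overhang melts can be obtained from the Wallace rule, a standard approximation for very short oligonucleotides that assigns about 2 °C per A/T and 4 °C per G/C:

```python
def wallace_tm(overhang):
    """Wallace rule estimate for short oligonucleotides:
    Tm ~ 2 degC per A/T plus 4 degC per G/C. Only meaningful for
    very short sequences such as restriction-site overhangs."""
    s = overhang.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

# EcoRI leaves the 4-base overhang AATT; NotI leaves GGCC.
print(wallace_tm("AATT"))  # 8  (degC)
print(wallace_tm("GGCC"))  # 16 (degC)
```

The single-digit Tm of a four-base A/T overhang is one reason sticky-end ligations are often incubated well below 37 °C.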
Since blunt-ended DNA fragments have no cohesive ends to anneal, the melting temperature is not a factor to consider within the normal temperature range of the ligation reaction. The limiting factor in blunt end ligation is not the activity of the ligase but rather the number of alignments between DNA fragment ends that occur. The most efficient ligation temperature for blunt-ended DNA would therefore be the temperature at which the greatest number of alignments can occur. The majority of blunt-ended ligations are carried out at 14-25 °C overnight. The absence of stably annealed ends also means that the ligation efficiency is lowered, requiring a higher ligase concentration to be used.
A novel use of DNA ligase can be seen in the field of nanochemistry, specifically in DNA origami. DNA-based self-assembly principles have proven useful for organizing nanoscale objects such as biomolecules, nanomachines, and nanoelectronic and photonic components. Assembly of such nanostructures requires the creation of an intricate mesh of DNA molecules. Although DNA self-assembly is possible without outside help using different substrates, such as provision of a cationic surface of aluminium foil, DNA ligase can provide the enzymatic assistance required to make DNA lattice structures from DNA overhangs.
== History ==
The first DNA ligase was purified and characterized in 1967 by the Gellert, Lehman, Richardson, and Hurwitz laboratories. It was first purified and characterized by Weiss and Richardson using a six-step chromatographic-fractionation process beginning with elimination of cell debris and addition of streptomycin, followed by several Diethylaminoethyl (DEAE)-cellulose column washes and a final phosphocellulose fractionation. The final extract contained 10% of the activity initially recorded in the E. coli media; along the process it was discovered that ATP and Mg++ were necessary to optimize the reaction. The common commercially available DNA ligases were originally discovered in bacteriophage T4, E. coli and other bacteria.
== Disorders ==
Genetic deficiencies in human DNA ligases have been associated with clinical syndromes marked by immunodeficiency, radiation sensitivity, and developmental abnormalities. LIG4 syndrome (Ligase IV syndrome) is a rare disease associated with mutations in DNA ligase 4 that interferes with dsDNA break-repair mechanisms. Ligase IV syndrome causes immunodeficiency in individuals and is commonly associated with microcephaly and marrow hypoplasia. A list of prevalent diseases caused by lack or malfunction of DNA ligase follows.
=== Xeroderma pigmentosum ===
Xeroderma pigmentosum, which is commonly known as XP, is an inherited condition characterized by an extreme sensitivity to ultraviolet (UV) rays from sunlight. This condition mostly affects the eyes and areas of skin exposed to the sun. Some affected individuals also have problems involving the nervous system.
=== Ataxia-telangiectasia ===
Mutations in the ATM gene cause ataxia–telangiectasia. The ATM gene provides instructions for making a protein that helps control cell division and is involved in DNA repair. This protein plays an important role in the normal development and activity of several body systems, including the nervous system and immune system. The ATM protein assists cells in recognizing damaged or broken DNA strands and coordinates DNA repair by activating enzymes that fix the broken strands. Efficient repair of damaged DNA strands helps maintain the stability of the cell's genetic information. Affected children typically develop difficulty walking, problems with balance and hand coordination, involuntary jerking movements (chorea), muscle twitches (myoclonus), and disturbances in nerve function (neuropathy). The movement problems typically cause people to require wheelchair assistance by adolescence. People with this disorder also have slurred speech and trouble moving their eyes to look side-to-side (oculomotor apraxia).
=== Fanconi Anemia ===
Fanconi anemia (FA) is a rare, inherited blood disorder that leads to bone marrow failure. FA prevents bone marrow from making enough new blood cells for the body to work normally. FA also can cause the bone marrow to make many faulty blood cells. This can lead to serious health problems, such as leukemia.
=== Bloom syndrome ===
Bloom syndrome results in skin that is sensitive to sun exposure, and usually the development of a butterfly-shaped patch of reddened skin across the nose and cheeks. A skin rash can also appear on other areas that are typically exposed to the sun, such as the back of the hands and the forearms. Small clusters of enlarged blood vessels (telangiectases) often appear in the rash; telangiectases can also occur in the eyes. Other skin features include patches of skin that are lighter or darker than the surrounding areas (hypopigmentation or hyperpigmentation respectively). These patches appear on areas of the skin that are not exposed to the sun, and their development is not related to the rashes.
== As a drug target ==
In recent studies, human DNA ligase I was used in computer-aided drug design to identify DNA ligase inhibitors as possible therapeutic agents to treat cancer. Since excessive cell growth is a hallmark of cancer development, targeted chemotherapy that disrupts the functioning of DNA ligase can impede cancer cell proliferation. Furthermore, it has been shown that DNA ligases can be broadly divided into two categories, namely, ATP- and NAD+-dependent. Previous research has shown that although NAD+-dependent DNA ligases have been discovered in sporadic cellular or viral niches outside the bacterial domain of life, there is no instance in which a NAD+-dependent ligase is present in a eukaryotic organism. Their presence solely in non-eukaryotic organisms, unique substrate specificity, and distinctive domain structure compared with ATP-dependent human DNA ligases together make NAD+-dependent ligases ideal targets for the development of new antibacterial drugs.
== See also ==
DNA end
Lagging strand
DNA replication
Okazaki fragment
DNA polymerase
Sequencing by ligation
== References ==
== External links ==
DNA Ligase: PDB molecule of the month
Davidson College General Information on Ligase
OpenWetWare DNA Ligation Protocol
Overview of all the structural information available in the PDB for UniProt: P00970 (DNA ligase) at the PDBe-KB.
Overview of all the structural information available in the PDB for UniProt: P18858 (DNA ligase 1) at the PDBe-KB.
Overview of all the structural information available in the PDB for UniProt: P49916 (DNA ligase 3) at the PDBe-KB. | Wikipedia/DNA_ligase |
A DNA polymerase is a member of a family of enzymes that catalyze the synthesis of DNA molecules from nucleoside triphosphates, the molecular precursors of DNA. These enzymes are essential for DNA replication and usually work in groups to create two identical DNA duplexes from a single original DNA duplex. During this process, DNA polymerase "reads" the existing DNA strands to create two new strands that match the existing ones.
These enzymes catalyze the chemical reaction
deoxynucleoside triphosphate + DNA(n) ⇌ pyrophosphate + DNA(n+1).
DNA polymerase adds nucleotides to the three prime (3')-end of a DNA strand, one nucleotide at a time. Every time a cell divides, DNA polymerases are required to duplicate the cell's DNA, so that a copy of the original DNA molecule can be passed to each daughter cell. In this way, genetic information is passed down from generation to generation.
Before replication can take place, an enzyme called helicase unwinds the DNA molecule from its tightly woven form, in the process breaking the hydrogen bonds between the nucleotide bases. This opens up or "unzips" the double-stranded DNA to give two single strands of DNA that can be used as templates for replication in the above reaction.
== History ==
In 1956, Arthur Kornberg and colleagues discovered DNA polymerase I (Pol I), in Escherichia coli. They described the DNA replication process by which DNA polymerase copies the base sequence of a template DNA strand. Kornberg was later awarded the Nobel Prize in Physiology or Medicine in 1959 for this work. DNA polymerase II was discovered by Thomas Kornberg (the son of Arthur Kornberg) and Malcolm E. Gefter in 1970 while further elucidating the role of Pol I in E. coli DNA replication. Three more DNA polymerases have been found in E. coli, including DNA polymerase III (discovered in the 1970s) and DNA polymerases IV and V (discovered in 1999). From 1983 on, DNA polymerases have been used in the polymerase chain reaction (PCR), and from 1988 thermostable DNA polymerases were used instead, as they do not need to be added in every cycle of a PCR.
== Function ==
The main function of DNA polymerase is to synthesize DNA from deoxyribonucleotides, the building blocks of DNA. The DNA copies are created by the pairing of nucleotides to bases present on each strand of the original DNA molecule. This pairing always occurs in specific combinations, with cytosine pairing with guanine and thymine pairing with adenine, forming two separate base pairs. By contrast, RNA polymerases synthesize RNA from ribonucleotides, using either RNA or DNA as a template.
When synthesizing new DNA, DNA polymerase can add free nucleotides only to the 3' end of the newly forming strand. This results in elongation of the newly forming strand in a 5'–3' direction.
It is important to note that the directionality of the newly forming strand (the daughter strand) is opposite to the direction in which DNA polymerase moves along the template strand. Since DNA polymerase requires a free 3' OH group for initiation of synthesis, it can synthesize in only one direction by extending the 3' end of the preexisting nucleotide chain. Hence, DNA polymerase moves along the template strand in a 3'–5' direction, and the daughter strand is formed in a 5'–3' direction. This difference enables the resultant double-strand DNA formed to be composed of two DNA strands that are antiparallel to each other.
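The antiparallel relationship can be illustrated with a toy sketch that reads a template strand in the 3'->5' direction and emits the complementary daughter strand in the 5'->3' direction:

```python
# Watson-Crick base-pairing rules.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template_3to5):
    """Walk the template strand 3'->5' (the direction DNA polymerase reads it)
    and emit the complementary daughter strand, written 5'->3'.
    Input and output are plain strings over the alphabet A, C, G, T."""
    return "".join(COMPLEMENT[base] for base in template_3to5)

# Template read 3'->5'; the daughter strand comes out 5'->3' and antiparallel.
print(synthesize("TACGGT"))  # ATGCCA
```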
The function of DNA polymerase is not quite perfect, with the enzyme making about one mistake for every billion base pairs copied. Error correction is a property of some, but not all DNA polymerases. This process corrects mistakes in newly synthesized DNA. When an incorrect base pair is recognized, DNA polymerase moves backwards by one base pair of DNA. The 3'–5' exonuclease activity of the enzyme allows the incorrect base pair to be excised (this activity is known as proofreading). Following base excision, the polymerase can re-insert the correct base and replication can continue forwards. This preserves the integrity of the original DNA strand that is passed onto the daughter cells.
Fidelity is very important in DNA replication. Mismatches in DNA base pairing can potentially result in dysfunctional proteins and could lead to cancer. Many DNA polymerases contain an exonuclease domain, which detects base-pair mismatches and removes the incorrect nucleotide so that it can be replaced by the correct one. The shape of, and the interactions accommodating, the Watson–Crick base pair are what primarily contribute to the detection of error. Hydrogen bonds play a key role in base-pair binding and interaction. The loss of an interaction, which occurs at a mismatch, is said to shift the balance of template-primer binding from the polymerase domain to the exonuclease domain. In addition, incorporation of a wrong nucleotide causes a delay in DNA polymerization. This delay gives time for the DNA to be switched from the polymerase site to the exonuclease site. Different conformational changes and losses of interaction occur at different mismatches. In a purine:pyrimidine mismatch there is a displacement of the pyrimidine towards the major groove and the purine towards the minor groove. Relative to the shape of DNA polymerase's binding pocket, steric clashes occur between the purine and residues in the minor groove, and important van der Waals and electrostatic interactions are lost by the pyrimidine. Pyrimidine:pyrimidine and purine:purine mismatches present less notable changes, since the bases are displaced towards the major groove and less steric hindrance is experienced. However, although the different mismatches result in different steric properties, DNA polymerase is still able to detect and differentiate them uniformly and maintain fidelity in DNA replication. DNA polymerization is also critical for many mutagenesis processes and is widely employed in biotechnologies.
=== Structure ===
The known DNA polymerases have highly conserved structure, which means that their overall catalytic subunits vary very little from species to species, independent of their domain structures. Conserved structures usually indicate important, irreplaceable functions of the cell, the maintenance of which provides evolutionary advantages. The shape can be described as resembling a right hand with thumb, finger, and palm domains. The palm domain appears to function in catalyzing the transfer of phosphoryl groups in the phosphoryl transfer reaction. DNA is bound to the palm when the enzyme is active. This reaction is believed to be catalyzed by a two-metal-ion mechanism. The finger domain functions to bind the nucleoside triphosphates with the template base. The thumb domain plays a potential role in the processivity, translocation, and positioning of the DNA.
=== Processivity ===
DNA polymerase's rapid catalysis is due to its processive nature. Processivity is a characteristic of enzymes that function on polymeric substrates. In the case of DNA polymerase, the degree of processivity refers to the average number of nucleotides added each time the enzyme binds a template. The average DNA polymerase requires about one second to locate and bind a primer/template junction. Once it is bound, a nonprocessive DNA polymerase adds nucleotides at a rate of one nucleotide per second.: 207–208  Processive DNA polymerases, however, add multiple nucleotides per second, drastically increasing the rate of DNA synthesis. The degree of processivity is directly proportional to the rate of DNA synthesis. The rate of DNA synthesis in a living cell was first determined as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second.
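The practical impact of processivity can be illustrated with the rates quoted above. A back-of-the-envelope sketch, assuming a T4 genome length of roughly 169,000 bp (an approximate figure not stated in the text):

```python
# Comparison of nonprocessive vs processive synthesis rates from the text.
# The T4 genome length is an assumed approximate value.
genome_bp = 169_000          # approximate phage T4 genome size (assumption)
nonprocessive_rate = 1       # nucleotides per second (from the text)
processive_rate = 749        # nt/s measured for T4 in E. coli at 37 °C

t_slow = genome_bp / nonprocessive_rate   # seconds: roughly two days
t_fast = genome_bp / processive_rate      # seconds: a few minutes
print(f"nonprocessive: {t_slow / 3600:.1f} h, processive: {t_fast / 60:.1f} min")
```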
DNA polymerase's ability to slide along the DNA template allows increased processivity. There is a dramatic increase in processivity at the replication fork. This increase is facilitated by the DNA polymerase's association with proteins known as the sliding DNA clamp. The clamps are multiple protein subunits associated in the shape of a ring. Using the hydrolysis of ATP, a class of proteins known as the sliding clamp loading proteins open up the ring structure of the sliding DNA clamps allowing binding to and release from the DNA strand. Protein–protein interaction with the clamp prevents DNA polymerase from diffusing from the DNA template, thereby ensuring that the enzyme binds the same primer/template junction and continues replication.: 207–208 DNA polymerase changes conformation, increasing affinity to the clamp when associated with it and decreasing affinity when it completes the replication of a stretch of DNA to allow release from the clamp.
DNA polymerase processivity has been studied with in vitro single-molecule experiments (namely, optical tweezers and magnetic tweezers), which have revealed synergies between DNA polymerases and other molecules of the replisome (helicases and SSBs) and with the DNA replication fork. These results have led to the development of synergetic kinetic models of DNA replication that describe the resulting increase in DNA polymerase processivity.
== Variation across species ==
Based on sequence homology, DNA polymerases can be further subdivided into seven different families: A, B, C, D, X, Y, and RT.
Some viruses also encode special DNA polymerases, such as Hepatitis B virus DNA polymerase. These may selectively replicate viral DNA through a variety of mechanisms. Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp). It polymerizes DNA from a template of RNA.
=== Prokaryotic polymerase ===
Prokaryotic polymerases exist in two forms: core polymerase and holoenzyme. Core polymerase synthesizes DNA from the DNA template but it cannot initiate the synthesis alone or accurately. Holoenzyme accurately initiates synthesis.
==== Pol I ====
Prokaryotic family A polymerases include the DNA polymerase I (Pol I) enzyme, which is encoded by the polA gene and is ubiquitous among prokaryotes. This repair polymerase is involved in excision repair, with both 3'–5' and 5'–3' exonuclease activity, and in the processing of Okazaki fragments generated during lagging strand synthesis. Pol I is the most abundant polymerase, accounting for >95% of polymerase activity in E. coli; yet cells lacking Pol I have been found, suggesting that Pol I activity can be replaced by the other four polymerases. Pol I adds ~15–20 nucleotides per second, thus showing poor processivity. Pol I starts adding nucleotides at the RNA primer:template junction, known as the origin of replication (ori). Approximately 400 bp downstream from the origin, the Pol III holoenzyme is assembled and takes over replication with high processivity and speed.
Taq polymerase is a heat-stable enzyme of this family that lacks proofreading ability.
==== Pol II ====
DNA polymerase II is a family B polymerase encoded by the polB gene. Pol II has 3'–5' exonuclease activity and participates in DNA repair and in replication restart to bypass lesions; its abundance can jump from ~30–50 copies per cell to ~200–300 during SOS induction. Pol II is also thought to be a backup to Pol III, as it can interact with holoenzyme proteins and assume a high level of processivity. The main role of Pol II is thought to be the ability to direct polymerase activity at the replication fork and to help stalled Pol III bypass terminal mismatches.
Pfu DNA polymerase is a heat-stable enzyme of this family found in the hyperthermophilic archaeon Pyrococcus furiosus. Detailed classification divides family B in archaea into B1, B2, and B3, in which B2 is a group of pseudoenzymes. Pfu belongs to family B3. Other PolBs found in archaea are part of "Casposons", Cas1-dependent transposons. Some viruses (including Φ29 DNA polymerase) and mitochondrial plasmids carry polB as well.
==== Pol III ====
DNA polymerase III holoenzyme is the primary enzyme involved in DNA replication in E. coli and belongs to family C polymerases. It consists of three assemblies: the pol III core, the beta sliding clamp processivity factor, and the clamp-loading complex. The core consists of three subunits: α, the polymerase activity hub, ɛ, exonucleolytic proofreader, and θ, which may act as a stabilizer for ɛ. The beta sliding clamp processivity factor is also present in duplicate, one for each core, to create a clamp that encloses DNA allowing for high processivity. The third assembly is a seven-subunit (τ2γδδ′χψ) clamp loader complex.
The old textbook "trombone model" depicts an elongation complex with two equivalents of the core enzyme at each replication fork (RF), one for each strand, the lagging and the leading. However, recent evidence from single-molecule studies indicates an average of three stoichiometric equivalents of core enzyme at each RF for both Pol III and its counterpart in B. subtilis, PolC. In-cell fluorescent microscopy has revealed that leading strand synthesis may not be completely continuous, and that Pol III* (i.e., the holoenzyme α, ε, τ, δ and χ subunits without the β2 sliding clamp) has a high frequency of dissociation from active RFs. In these studies, the replication fork turnover rate was about 10 s for Pol III*, 47 s for the β2 sliding clamp, and 15 min for the DnaB helicase. This suggests that the DnaB helicase may remain stably associated at RFs and serve as a nucleation point for the competent holoenzyme. In vitro single-molecule studies have shown that Pol III* has a high rate of RF turnover when in excess, but remains stably associated with replication forks when its concentration is limiting. Another single-molecule study showed that DnaB helicase activity and strand elongation can proceed with decoupled, stochastic kinetics.
==== Pol IV ====
In E. coli, DNA polymerase IV (Pol IV) is an error-prone DNA polymerase involved in non-targeted mutagenesis. Pol IV is a Family Y polymerase expressed from the dinB gene, which is switched on via SOS induction caused by stalled polymerases at the replication fork. During SOS induction, Pol IV production is increased tenfold, and one of its functions during this time is to interfere with Pol III holoenzyme processivity. This creates a checkpoint, stops replication, and allows time to repair DNA lesions via the appropriate repair pathway. Another function of Pol IV is to perform translesion synthesis at the stalled replication fork, for example bypassing N2-deoxyguanine adducts at a faster rate than traversing undamaged DNA. Cells lacking the dinB gene have a higher rate of mutagenesis caused by DNA-damaging agents.
==== Pol V ====
DNA polymerase V (Pol V) is a Y-family DNA polymerase that is involved in SOS response and translesion synthesis DNA repair mechanisms. Transcription of Pol V via the umuDC genes is highly regulated, so that Pol V is produced only when damaged DNA is present in the cell, generating an SOS response. Stalled polymerases cause RecA to bind to the ssDNA, which causes the LexA protein to autodigest. LexA then loses its ability to repress the transcription of the umuDC operon. The same RecA–ssDNA nucleoprotein posttranslationally modifies the UmuD protein into UmuD' protein. UmuD and UmuD' form a heterodimer that interacts with UmuC, which in turn activates UmuC's polymerase catalytic activity on damaged DNA. In E. coli, a polymerase "tool belt" model for switching Pol III with Pol IV at a stalled replication fork, where both polymerases bind simultaneously to the β-clamp, has been proposed. However, the involvement of more than one TLS polymerase working in succession to bypass a lesion has not yet been shown in E. coli. Moreover, Pol IV can catalyze both insertion and extension with high efficiency, whereas Pol V is considered the major SOS TLS polymerase. One example is the bypass of an intra-strand guanine–thymine cross-link, where it was shown, on the basis of the difference in the mutational signatures of the two polymerases, that Pol IV and Pol V compete for TLS of the intra-strand crosslink.
==== Family D ====
In 1998, family D of DNA polymerase was discovered in Pyrococcus furiosus and Methanococcus jannaschii. The PolD complex is a heterodimer of two chains, encoded by DP1 (small, proofreading) and DP2 (large, catalytic). Unlike other DNA polymerases, the structure and mechanism of the DP2 catalytic core resemble those of multi-subunit RNA polymerases. The DP1–DP2 interface resembles that of the eukaryotic Class B polymerase zinc finger and its small subunit. DP1, an Mre11-like exonuclease, is likely the precursor of the small subunit of Pol α and ε, providing proofreading capabilities now lost in eukaryotes. Its N-terminal HSH domain is similar in structure to AAA proteins, especially Pol III subunit δ and RuvB. DP2 has a Class II KH domain. Pyrococcus abyssi polD is more heat-stable and more accurate than Taq polymerase, but has not yet been commercialized. It has been proposed that family D DNA polymerase was the first to evolve in cellular organisms and that the replicative polymerase of the Last Universal Cellular Ancestor (LUCA) belonged to family D.
=== Eukaryotic DNA polymerase ===
==== Polymerases β, λ, σ, μ (beta, lambda, sigma, mu) and TdT ====
Family X polymerases contain the well-known eukaryotic polymerase pol β (beta), as well as other eukaryotic polymerases such as Pol σ (sigma), Pol λ (lambda), Pol μ (mu), and Terminal deoxynucleotidyl transferase (TdT). Family X polymerases are found mainly in vertebrates, and a few are found in plants and fungi. These polymerases have highly conserved regions that include two helix-hairpin-helix motifs that are imperative in the DNA-polymerase interactions. One motif is located in the 8 kDa domain that interacts with downstream DNA and one motif is located in the thumb domain that interacts with the primer strand. Pol β, encoded by POLB gene, is required for short-patch base excision repair, a DNA repair pathway that is essential for repairing alkylated or oxidized bases as well as abasic sites. Pol λ and Pol μ, encoded by the POLL and POLM genes respectively, are involved in non-homologous end-joining, a mechanism for rejoining DNA double-strand breaks due to hydrogen peroxide and ionizing radiation, respectively. TdT is expressed only in lymphoid tissue, and adds "n nucleotides" to double-strand breaks formed during V(D)J recombination to promote immunological diversity.
==== Polymerases α, δ and ε (alpha, delta, and epsilon) ====
Pol α (alpha), Pol δ (delta), and Pol ε (epsilon) are members of Family B polymerases and are the main polymerases involved with nuclear DNA replication. The Pol α complex (pol α-DNA primase complex) consists of four subunits: the catalytic subunit POLA1, the regulatory subunit POLA2, and the small and large primase subunits PRIM1 and PRIM2 respectively. Once primase has created the RNA primer, Pol α starts replication, elongating the primer with ~20 nucleotides. Due to its high processivity, Pol δ takes over the leading and lagging strand synthesis from Pol α.: 218–219  Pol δ is expressed from the genes POLD1, creating the catalytic subunit, and POLD2, POLD3, and POLD4, creating the other subunits that interact with Proliferating Cell Nuclear Antigen (PCNA), a DNA clamp that allows Pol δ to possess processivity. Pol ε is encoded by the genes POLE1 (the catalytic subunit), POLE2, and POLE3. It has been reported that the function of Pol ε is to extend the leading strand during replication, while Pol δ primarily replicates the lagging strand; however, recent evidence suggests that Pol δ might have a role in replicating the leading strand of DNA as well. Pol ε's C-terminus "polymerase relic" region, despite being unnecessary for polymerase activity, is thought to be essential to cell vitality. The C-terminus region is thought to provide a checkpoint before entering anaphase, provide stability to the holoenzyme, and add proteins to the holoenzyme necessary for initiation of replication. Pol ε has a larger "palm" domain that provides high processivity independently of PCNA.
Compared to other Family B polymerases, the DEDD exonuclease family responsible for proofreading is inactivated in Pol α. Pol ε is unique in that it has two zinc finger domains and an inactive copy of another family B polymerase in its C-terminal. The presence of this zinc finger has implications in the origins of Eukaryota, which in this case is placed into the Asgard group with archaeal B3 polymerase.
==== Polymerases η, ι and κ (eta, iota, and kappa) ====
Pol η (eta), Pol ι (iota), and Pol κ (kappa) are Family Y DNA polymerases involved in DNA repair by translesion synthesis, encoded by the genes POLH, POLI, and POLK respectively. Members of Family Y have five common motifs to aid in binding the substrate and primer terminus, and they all include the typical right-hand thumb, palm, and finger domains, with added domains such as the little finger (LF), polymerase-associated domain (PAD), or wrist. The active site, however, differs between family members owing to the different lesions being repaired. Polymerases in Family Y are low-fidelity polymerases, but do more good than harm, as mutations that affect these polymerases can cause various diseases, such as skin cancer and Xeroderma Pigmentosum Variant (XPV). The importance of these polymerases is evidenced by the fact that the gene encoding DNA polymerase η is referred to as XPV, because loss of this gene results in the disease Xeroderma Pigmentosum Variant. Pol η is particularly important for allowing accurate translesion synthesis of DNA damage resulting from ultraviolet radiation. The functionality of Pol κ is not completely understood, but researchers have found two probable functions. Pol κ is thought to act as an extender or an inserter of a specific base at certain DNA lesions. All three translesion synthesis polymerases, along with Rev1, are recruited to damaged lesions via stalled replicative DNA polymerases. There are two pathways of damage repair, leading researchers to conclude that the chosen pathway depends on which strand contains the damage, the leading or the lagging strand.
==== Polymerases Rev1 and ζ (zeta) ====
Pol ζ, another B family polymerase, is made of two subunits: Rev3, the catalytic subunit, and Rev7 (MAD2L2), which increases the catalytic function of the polymerase; it is involved in translesion synthesis. Pol ζ lacks 3' to 5' exonuclease activity and is unique in that it can extend primers with terminal mismatches. Rev1 has three regions of interest: the BRCT domain, the ubiquitin-binding domain, and the C-terminal domain. It also has dCMP transferase activity, which adds deoxycytidine opposite lesions that would stall the replicative polymerases Pol δ and Pol ε. These stalled polymerases activate ubiquitin complexes that, in turn, disassociate replication polymerases and recruit Pol ζ and Rev1. Together, Pol ζ and Rev1 add deoxycytidine, and Pol ζ extends past the lesion. Through a yet undetermined process, Pol ζ disassociates, and replication polymerases reassociate and continue replication. Pol ζ and Rev1 are not required for replication, but loss of the REV3 gene in budding yeast can cause increased sensitivity to DNA-damaging agents due to collapse of replication forks where replication polymerases have stalled.
==== Telomerase ====
Telomerase is a ribonucleoprotein that functions to replicate the ends of linear chromosomes, since normal DNA polymerase cannot replicate the ends, or telomeres. The single-strand 3' overhang of the double-strand chromosome with the sequence 5'-TTAGGG-3' recruits telomerase. Telomerase acts like other DNA polymerases by extending the 3' end, but, unlike other DNA polymerases, telomerase does not require a template. The TERT subunit, an example of a reverse transcriptase, uses the RNA subunit to form the primer–template junction that allows telomerase to extend the 3' end of chromosome ends. The gradual decrease in size of telomeres as the result of many replications over a lifetime is thought to be associated with the effects of aging.: 248–249
==== Polymerases γ, θ and ν (gamma, theta and nu) ====
Pol γ (gamma), Pol θ (theta), and Pol ν (nu) are Family A polymerases. Pol γ, encoded by the POLG gene, was long thought to be the only mitochondrial polymerase. However, recent research shows that at least Pol β (beta), a Family X polymerase, is also present in mitochondria. Any mutation that leads to limited or non-functioning Pol γ has a significant effect on mtDNA and is the most common cause of autosomal inherited mitochondrial disorders. Pol γ contains a C-terminus polymerase domain and an N-terminus 3'–5' exonuclease domain that are connected via the linker region, which binds the accessory subunit. The accessory subunit binds DNA and is required for processivity of Pol γ. Point mutation A467T in the linker region is responsible for more than one-third of all Pol γ-associated mitochondrial disorders. While many homologs of Pol θ, encoded by the POLQ gene, are found in eukaryotes, its function is not clearly understood. The sequence of amino acids in the C-terminus is what classifies Pol θ as Family A polymerase, although the error rate for Pol θ is more closely related to Family Y polymerases. Pol θ extends mismatched primer termini and can bypass abasic sites by adding a nucleotide. It also has Deoxyribophosphodiesterase (dRPase) activity in the polymerase domain and can show ATPase activity in close proximity to ssDNA. Pol ν (nu) is considered to be the least effective of the polymerase enzymes. However, DNA polymerase nu plays an active role in homology repair during cellular responses to crosslinks, fulfilling its role in a complex with helicase.
Plants use two Family A polymerases to copy both the mitochondrial and plastid genomes. They are more similar to bacterial Pol I than they are to mammalian Pol γ.
==== Reverse transcriptase ====
Retroviruses encode an unusual DNA polymerase called reverse transcriptase, which is an RNA-dependent DNA polymerase (RdDp) that synthesizes DNA from a template of RNA. The reverse transcriptase family contains both DNA polymerase functionality and RNase H functionality, which degrades RNA base-paired to DNA. An example of a retrovirus is HIV. Reverse transcriptase is commonly employed in the amplification of RNA for research purposes. Starting from an RNA template, reverse transcriptase creates a complementary DNA template, which can then be used for typical PCR amplification. The products of such an experiment are thus amplified PCR products derived from RNA.
Each HIV retrovirus particle contains two RNA genomes, but, after an infection, each virus generates only one provirus. After infection, reverse transcription is accompanied by template switching between the two genome copies (copy choice recombination). From 5 to 14 recombination events per genome occur at each replication cycle. Template switching (recombination) appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes.
=== Bacteriophage T4 DNA polymerase ===
Bacteriophage (phage) T4 encodes a DNA polymerase that catalyzes DNA synthesis in a 5' to 3' direction. The phage polymerase also has an exonuclease activity that acts in a 3' to 5' direction, and this activity is employed in the proofreading and editing of newly inserted bases. A phage mutant with a temperature sensitive DNA polymerase, when grown at permissive temperatures, was observed to undergo recombination at frequencies that are about two-fold higher than that of wild-type phage.
It was proposed that a mutational alteration in the phage DNA polymerase can stimulate template strand switching (copy choice recombination) during replication.
== See also ==
Biological machines
DNA sequencing
Enzyme catalysis
Genetic recombination
Molecular cloning
Polymerase chain reaction
Protein domain dynamics
Reverse transcription
RNA polymerase
Taq DNA polymerase
== References ==
== Further reading ==
== External links ==
DNA+polymerases at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
PDB Molecule of the Month DNA polymerase
Unusual repair mechanism in DNA polymerase lambda, Ohio State University, July 25, 2006.
A great animation of DNA Polymerase from WEHI at 1:45 minutes in Archived 2014-12-05 at the Wayback Machine
3D macromolecular structures of DNA polymerase from the EM Data Bank (EMDB)
In molecular biology, the term double helix refers to the structure formed by double-stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The structure was discovered by
Rosalind Franklin and her student Raymond Gosling, Maurice Wilkins, James Watson, and Francis Crick, while the term "double helix" entered popular culture with the 1968 publication of Watson's The Double Helix: A Personal Account of the Discovery of the Structure of DNA.
The DNA double helix biopolymer of nucleic acid is held together by nucleotides which base pair together. In B-DNA, the most common double helical structure found in nature, the double helix is right-handed with about 10–10.5 base pairs per turn. The double helix structure of DNA contains a major groove and minor groove. In B-DNA the major groove is wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to B-DNA do so through the wider major groove.
== History ==
The double-helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953, (X,Y,Z coordinates in 1954) based on the work of Rosalind Franklin and her student Raymond Gosling, who took the crucial X-ray diffraction image of DNA labeled as "Photo 51", and Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and base-pairing chemical and biochemical information by Erwin Chargaff. Before this, Linus Pauling—who had already accurately characterised the conformation of protein secondary structure motifs—and his collaborator Robert Corey had posited, erroneously, that DNA would adopt a triple-stranded conformation.
The realization that the structure of DNA is that of a double-helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one-third of the 1962 Nobel Prize in Physiology or Medicine for their contributions to the discovery.
== Nucleic acid hybridization ==
Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or mechanical force. Melting occurs preferentially at certain points in the nucleic acid. T and A rich regions are more easily melted than C and G rich regions. Some base pair steps are also particularly susceptible to DNA melting, such as TA and TG steps. These mechanical features are reflected by the use of sequences such as TATA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
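The effect of base composition on melting can be sketched with the Wallace rule, a standard rough approximation for the melting temperature of short oligonucleotides (roughly under 14 nt); this formula is an addition here, not taken from the text above.

```python
# Wallace rule: Tm ≈ 2 °C per A/T plus 4 °C per G/C.
# A rough approximation valid only for short oligonucleotides.
def wallace_tm(seq: str) -> int:
    """Estimated melting temperature (°C) of a short oligonucleotide."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("TATATATATATA"))  # AT-rich 12-mer -> 24 °C
print(wallace_tm("GCGCGCGCGCGC"))  # GC-rich 12-mer -> 48 °C
```

The two GC base pairs' extra hydrogen bond and stronger stacking show up as the higher estimate for the GC-rich sequence, mirroring why TATA-like regions melt first.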
Strand separation by gentle heating, as used in polymerase chain reaction (PCR), is simple, providing the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase.
== Base pair geometry ==
The geometry of a base pair, or of a base-pair step, can be characterized by six coordinates: shift, slide, rise, tilt, roll, and twist. These values precisely define the location and orientation in space of every base or base pair in a nucleic acid molecule relative to its predecessor along the axis of the helix. Together, they characterize the helical structure of the molecule. In regions of DNA or RNA where the normal structure is disrupted, the change in these values can be used to describe such disruption.
For each base pair, considered relative to its predecessor, there are the following base pair geometries to consider:
Shear
Stretch
Stagger
Buckle
Propeller: rotation of one base with respect to the other in the same base pair.
Opening
Shift: displacement along an axis in the base-pair plane perpendicular to the first, directed from the minor to the major groove.
Slide: displacement along an axis in the plane of the base pair directed from one strand to the other.
Rise: displacement along the helix axis.
Tilt: rotation around the shift axis.
Roll: rotation around the slide axis.
Twist: rotation around the rise axis.
x-displacement
y-displacement
inclination
tip
pitch: the height per complete turn of the helix.
Rise and twist determine the handedness and pitch of the helix. The other coordinates, by contrast, can be zero. Slide and shift are typically small in B-DNA, but are substantial in A- and Z-DNA. Roll and tilt make successive base pairs less parallel, and are typically small.
"Tilt" has often been used differently in the scientific literature, referring to the deviation of the first, inter-strand base-pair axis from perpendicularity to the helix axis. This corresponds to slide between a succession of base pairs, and in helix-based coordinates is properly termed "inclination".
== Helix geometries ==
At least three DNA conformations are believed to be found in nature, A-DNA, B-DNA, and Z-DNA. The B form described by James Watson and Francis Crick is believed to predominate in cells. It is 23.7 Å wide and extends 34 Å per 10 bp of sequence. The double helix has a right-hand twist that makes one complete turn about its axis every 10.4–10.5 base pairs in solution. This frequency of twist (termed the helical pitch) depends largely on stacking forces that each base exerts on its neighbours in the chain.
A-DNA and Z-DNA differ significantly in their geometry and dimensions to B-DNA, although still form helical structures. It was long thought that the A form only occurs in dehydrated samples of DNA in the laboratory, such as those used in crystallographic experiments, and in hybrid pairings of DNA and RNA strands, but DNA dehydration does occur in vivo, and A-DNA is now known to have biological functions. Segments of DNA that cells have methylated for regulatory purposes may adopt the Z geometry, in which the strands turn about the helical axis the opposite way to A-DNA and B-DNA. There is also evidence of protein-DNA complexes forming Z-DNA structures.
Other conformations are possible; A-DNA, B-DNA, C-DNA, E-DNA, L-DNA (the enantiomeric form of D-DNA), P-DNA, S-DNA, Z-DNA, etc. have been described so far. In fact, only the letters F, Q, U, V, and Y are now available to describe any new DNA structure that may appear in the future. However, most of these forms have been created synthetically and have not been observed in naturally occurring biological systems. There are also triple-stranded DNA forms and quadruplex forms such as the G-quadruplex and the i-motif.
=== Grooves ===
Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.
=== Non-double helical forms ===
Alternative non-helical models were briefly considered in the late 1970s as a potential solution to problems in DNA replication in plasmids and chromatin. However, the models were set aside in favor of the double-helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes and later the nucleosome core particle, and the discovery of topoisomerases. Also, the non-double-helical models are not currently accepted by the mainstream scientific community.
== Bending ==
DNA is a relatively rigid polymer, typically modelled as a worm-like chain. It has three significant degrees of freedom: bending, twisting, and compression, each of which causes certain limits on what is possible with DNA within a cell. Twisting (torsional) stiffness is important for the circularisation of DNA and the orientation of DNA-bound proteins relative to each other, and bending (axial) stiffness is important for DNA wrapping, circularisation, and protein interactions. Compression-extension is relatively unimportant in the absence of high tension.
=== Persistence length, axial stiffness ===
DNA in solution does not take a rigid structure but is continually changing conformation due to thermal vibration and collisions with water molecules, which makes classical measures of rigidity impossible to apply. Hence, the bending stiffness of DNA is measured by the persistence length, defined as:
Bending flexibility of a polymer is conventionally quantified in terms of its persistence length, Lp, a length scale below which the polymer behaves more or less like a rigid rod. Specifically, Lp is defined as the length of the polymer segment over which the time-averaged orientation of the polymer becomes uncorrelated...
This value may be measured directly by using an atomic force microscope to image DNA molecules of various lengths. In aqueous solution, the average persistence length has been found to be around 50 nm (about 150 base pairs); more broadly, measured values fall between 45 and 60 nm, or 132–176 base pairs (the diameter of DNA is 2 nm). The persistence length can vary significantly with temperature, aqueous solution conditions and DNA length. This makes DNA a moderately stiff molecule.
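To make these numbers concrete, here is a minimal sketch relating base pairs to contour length and showing how orientation decorrelates over one persistence length. It assumes the standard B-DNA axial rise of 0.34 nm per base pair and the worm-like-chain tangent-correlation relation ⟨cos θ(s)⟩ = exp(−s/Lp); the specific values are illustrative.

```python
import math

RISE_PER_BP_NM = 0.34   # assumed axial rise per base pair in B-form DNA
L_P_NM = 50.0           # persistence length in aqueous solution (from the text)

def bp_to_nm(base_pairs: float) -> float:
    """Convert a base-pair count to contour length in nanometres."""
    return base_pairs * RISE_PER_BP_NM

def tangent_correlation(contour_nm: float, lp_nm: float = L_P_NM) -> float:
    """Worm-like chain: <cos theta(s)> = exp(-s / Lp)."""
    return math.exp(-contour_nm / lp_nm)

# 150 bp is about 51 nm, roughly one persistence length, so the
# tangent correlation has decayed to about 1/e.
print(round(bp_to_nm(150), 2))                         # 51.0
print(round(tangent_correlation(bp_to_nm(150)), 2))    # 0.36
```

So at separations of one persistence length, the chain has largely "forgotten" its initial direction, which is what makes 50 nm the natural stiffness scale.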
The persistence length of a section of DNA is somewhat dependent on its sequence, and this can cause significant variation. The variation is largely due to base stacking energies and the residues which extend into the minor and major grooves.
=== Models for DNA bending ===
At length-scales larger than the persistence length, the entropic flexibility of DNA is remarkably consistent with standard polymer physics models, such as the Kratky-Porod worm-like chain model. Consistent with the worm-like chain model is the observation that bending DNA is also described by Hooke's law at very small (sub-piconewton) forces. For DNA segments less than the persistence length, the bending force is approximately constant and behaviour deviates from the worm-like chain predictions.
This effect results in unusual ease in circularising small DNA molecules and a higher probability of finding highly bent sections of DNA.
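The Kratky-Porod worm-like-chain behaviour described above is often summarised by the Marko-Siggia interpolation formula for force versus extension. The sketch below uses that standard formula; the persistence length and thermal-energy values are illustrative assumptions, not measurements from the text.

```python
KBT_PN_NM = 4.1   # thermal energy at room temperature, in pN*nm (assumed)
LP_NM = 50.0      # assumed persistence length of double-stranded DNA

def wlc_force_pN(extension_nm: float, contour_nm: float,
                 lp_nm: float = LP_NM) -> float:
    """Marko-Siggia interpolation: force needed to hold a worm-like
    chain at a given end-to-end extension."""
    x = extension_nm / contour_nm
    if not 0.0 <= x < 1.0:
        raise ValueError("relative extension must lie in [0, 1)")
    return (KBT_PN_NM / lp_nm) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)

# Sub-piconewton, near-Hookean response at small extension; the force
# diverges as the molecule approaches its full contour length.
print(round(wlc_force_pN(100, 1000), 3))   # ~0.013 pN at 10% extension
print(round(wlc_force_pN(950, 1000), 1))   # ~8.3 pN at 95% extension
```

The linear (Hookean) regime at sub-piconewton forces and the steep rise near full extension both fall out of this single interpolation.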
=== Bending preference ===
DNA molecules often have a preferred direction to bend, i.e., anisotropic bending. This is, again, due to the properties of the bases which make up the DNA sequence - a random sequence will have no preferred bend direction, i.e., isotropic bending.
Preferred DNA bend direction is determined by the stability of stacking each base on top of the next. If unstable base stacking steps are always found on one side of the DNA helix then the DNA will preferentially bend away from that direction. As the bend angle increases, steric hindrance and the ability to roll the residues relative to each other also play a role, especially in the minor groove. A and T residues will preferentially be found in the minor grooves on the inside of bends. This effect is particularly seen in DNA-protein binding where tight DNA bending is induced, such as in nucleosome particles. See base step distortions above.
DNA molecules with exceptional bending preference can become intrinsically bent. This was first observed in trypanosomatid kinetoplast DNA. Typical sequences which cause this contain stretches of 4-6 T and A residues separated by G and C rich sections which keep the A and T residues in phase with the minor groove on one side of the molecule. For example:
The intrinsically bent structure is induced by the 'propeller twist' of base pairs relative to each other, allowing unusual bifurcated hydrogen bonds between base steps. At higher temperatures this structure is denatured, and so the intrinsic bend is lost.
All DNA which bends anisotropically has, on average, a longer persistence length and greater axial stiffness. This increased rigidity is required to prevent random bending which would make the molecule act isotropically.
=== Circularization ===
DNA circularization depends on both the axial (bending) stiffness and torsional (rotational) stiffness of the molecule. For a DNA molecule to successfully circularize it must be long enough to easily bend into the full circle and must have the correct number of bases so the ends are in the correct rotation to allow bonding to occur. The optimum length for circularization of DNA is around 400 base pairs (136 nm), with an integral number of turns of the DNA helix, i.e., multiples of 10.4 base pairs. Having a non-integral number of turns presents a significant energy barrier for circularization; for example, a 10.4 x 30 = 312 base pair molecule will circularize hundreds of times faster than a 10.4 x 30.5 ≈ 317 base pair molecule.
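The phasing argument can be sketched numerically. The hypothetical helper below computes how far a given length is from an integral number of helical turns, assuming the 10.4 bp/turn repeat quoted above:

```python
HELICAL_REPEAT_BP = 10.4  # base pairs per helical turn, as quoted above

def twist_mismatch(length_bp: int, repeat: float = HELICAL_REPEAT_BP) -> float:
    """Fraction of a turn by which the two ends are rotationally
    misaligned: 0 means in phase, 0.5 means maximally out of phase."""
    frac = (length_bp / repeat) % 1.0
    return min(frac, 1.0 - frac)

print(round(twist_mismatch(312), 3))  # 0.0   -> integral turns, circularizes readily
print(round(twist_mismatch(317), 3))  # 0.481 -> nearly half a turn off
```

The 317 bp molecule's ends arrive almost half a turn out of register, so closing the circle requires twisting the helix, which is the energy barrier described above.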
The bending of short circularized DNA segments is non-uniform. Rather, for circularized DNA segments less than the persistence length, DNA bending is localised to 1-2 kinks that form preferentially in AT-rich segments. If a nick is present, bending will be localised to the nick site.
== Stretching ==
=== Elastic stretching regime ===
Longer stretches of DNA are entropically elastic under tension. When DNA is in solution, it undergoes continuous structural variation driven by the thermal bath of the solvent: thermal vibration of the molecule combined with continual collisions with water molecules. For entropic reasons, more compact relaxed states are thermally accessible than stretched-out states, and so DNA molecules are almost universally found in tangled, relaxed layouts. For this reason, a single molecule of DNA will straighten out under a stretching force. Using optical tweezers, the entropic stretching behavior of DNA has been studied and analyzed from a polymer physics perspective, and it has been found that DNA behaves largely like the Kratky-Porod worm-like chain model under physiologically accessible energy scales.
=== Phase transitions under stretching ===
Under sufficient tension and positive torque, DNA is thought to undergo a phase transition with the bases splaying outwards and the phosphates moving to the middle. This proposed structure for overstretched DNA has been called P-form DNA, in honor of Linus Pauling who originally presented it as a possible structure of DNA.
Evidence from mechanical stretching of DNA in the absence of imposed torque points to a transition or transitions leading to further structures which are generally referred to as S-form DNA. These structures have not yet been definitively characterised due to the difficulty of carrying out atomic-resolution imaging in solution while under applied force, although many computer simulation studies have been made.
Proposed S-DNA structures include those which preserve base-pair stacking and hydrogen bonding (GC-rich), while releasing extension by tilting, as well as structures in which partial melting of the base-stack takes place, while base-base association is nonetheless overall preserved (AT-rich).
Periodic fracture of the base-pair stack with a break occurring once per three bp (therefore one out of every three bp-bp steps) has been proposed as a regular structure which preserves planarity of the base-stacking and releases the appropriate amount of extension, with the term "Σ-DNA" introduced as a mnemonic, with the three right-facing points of the Sigma character serving as a reminder of the three grouped base pairs. The Σ form has been shown to have a sequence preference for GNC motifs which are believed under the GNC hypothesis to be of evolutionary importance.
== Supercoiling and topology ==
The B form of the DNA helix twists 360° per 10.4-10.5 bp in the absence of torsional strain. But many molecular biological processes can induce torsional strain. A DNA segment with excess or insufficient helical twisting is referred to, respectively, as positively or negatively supercoiled. DNA in vivo is typically negatively supercoiled, which facilitates the unwinding (melting) of the double-helix required for RNA transcription.
Within the cell most DNA is topologically restricted. DNA is typically found in closed loops (such as plasmids in prokaryotes) which are topologically closed, or as very long molecules whose diffusion coefficients produce effectively topologically closed domains. Linear sections of DNA are also commonly bound to proteins or physical structures (such as membranes) to form closed topological loops.
Francis Crick was one of the first to propose the importance of linking numbers when considering DNA supercoils. In a paper published in 1976, Crick outlined the problem as follows:
In considering supercoils formed by closed double-stranded molecules of DNA certain mathematical concepts, such as the linking number and the twist, are needed. The meaning of these for a closed ribbon is explained and also that of the writhing number of a closed curve. Some simple examples are given, some of which may be relevant to the structure of chromatin.
Analysis of DNA topology uses three values:
L = linking number - the number of times one DNA strand wraps around the other. It is an integer for a closed loop and constant for a closed topological domain.
T = twist - total number of turns in the double stranded DNA helix. This will normally tend to approach the number of turns that a topologically open double stranded DNA helix makes free in solution: number of bases/10.5, assuming there are no intercalating agents (e.g., ethidium bromide) or other elements modifying the stiffness of the DNA.
W = writhe - number of turns of the double stranded DNA helix around the superhelical axis
L = T + W and ΔL = ΔT + ΔW
Any change of T in a closed topological domain must be balanced by a change in W, and vice versa. This results in higher order structure of DNA. A circular DNA molecule with a writhe of 0 will lie flat in a plane. If the twist of this molecule is subsequently increased or decreased by supercoiling, the writhe will be appropriately altered, making the molecule undergo plectonemic or toroidal superhelical coiling.
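The bookkeeping implied by L = T + W can be illustrated with a small sketch. The plasmid size and degree of supercoiling below are made-up illustrative values:

```python
def relaxed_twist(length_bp: float, bp_per_turn: float = 10.5) -> float:
    """Twist of a torsionally relaxed double helix, in turns."""
    return length_bp / bp_per_turn

def writhe(linking_number: float, twist: float) -> float:
    """W = L - T, rearranged from the invariant L = T + W."""
    return linking_number - twist

T0 = relaxed_twist(5250)   # a hypothetical 5250 bp plasmid: 500 turns when relaxed
L = T0 - 25                # remove 25 links (negative supercoiling)
# If the twist stays at its relaxed value, the deficit must appear as writhe:
print(writhe(L, T0))       # -25.0
```

Since L is fixed for a closed loop, the 25 missing links must be absorbed as some combination of undertwisting (ΔT) and superhelical coiling (ΔW); the sketch shows the extreme case where twist is unchanged and all the strain becomes writhe.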
When the ends of a piece of double stranded helical DNA are joined so that it forms a circle the strands are topologically knotted. This means the single strands cannot be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes termed topoisomerases. These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double or single stranded segment can pass through. This un-knotting is required for the replication of circular DNA and various types of recombination in linear DNA which have similar topological constraints.
=== The linking number paradox ===
For many years, the origin of residual supercoiling in eukaryotic genomes remained unclear. This topological puzzle was referred to by some as the "linking number paradox". However, when experimentally determined structures of the nucleosome displayed an over-twisted left-handed wrap of DNA around the histone octamer, this paradox was considered to be solved by the scientific community.
== See also ==
Comparison of nucleic acid simulation software
DNA nanotechnology
G-quadruplex
Molecular models of DNA
Molecular structure of Nucleic Acids (publication)
Non-B database
Triple-stranded DNA
== References == | Wikipedia/B-DNA |
Chemotherapy (often abbreviated chemo, sometimes CTX or CTx) is a type of cancer treatment that uses one or more anti-cancer drugs (chemotherapeutic agents or alkylating agents) as part of a standardized regimen. Chemotherapy may be given with a curative intent (which almost always involves combinations of drugs), or it may aim only to prolong life or to reduce symptoms (palliative chemotherapy). Chemotherapy is one of the major categories of the medical discipline specifically devoted to pharmacotherapy for cancer, which is called medical oncology.
The term chemotherapy now means the non-specific use of intracellular poisons to inhibit mitosis (cell division) or to induce DNA damage (which is why inhibition of DNA repair can augment chemotherapy). This meaning excludes the more-selective agents that block extracellular signals (signal transduction). Therapies with specific molecular or genetic targets, which inhibit growth-promoting signals from classic endocrine hormones (primarily estrogens for breast cancer and androgens for prostate cancer), are now called hormonal therapies. Other inhibitions of growth-signals, such as those associated with receptor tyrosine kinases, are targeted therapy.
The use of drugs (whether chemotherapy, hormonal therapy, or targeted therapy) is systemic therapy for cancer: they are introduced into the blood stream (the system) and therefore can treat cancer anywhere in the body. Systemic therapy is often used with other, local therapy (treatments that work only where they are applied), such as radiation, surgery, and hyperthermia.
Traditional chemotherapeutic agents are cytotoxic by means of interfering with cell division (mitosis) but cancer cells vary widely in their susceptibility to these agents. To a large extent, chemotherapy can be thought of as a way to damage or stress cells, which may then lead to cell death if apoptosis is initiated. Many of the side effects of chemotherapy can be traced to damage to normal cells that divide rapidly and are thus sensitive to anti-mitotic drugs: cells in the bone marrow, digestive tract and hair follicles. This results in the most common side-effects of chemotherapy: myelosuppression (decreased production of blood cells, and hence also immunosuppression), mucositis (inflammation of the lining of the digestive tract), and alopecia (hair loss). Because of the effect on immune cells (especially lymphocytes), chemotherapy drugs often find use in a host of diseases that result from harmful overactivity of the immune system against self (so-called autoimmunity). These include rheumatoid arthritis, systemic lupus erythematosus, multiple sclerosis, vasculitis and many others.
== Treatment strategies ==
There are a number of strategies in the administration of chemotherapeutic drugs used today. Chemotherapy may be given with a curative intent or it may aim to prolong life or to palliate symptoms.
Induction chemotherapy is the first line treatment of cancer with a chemotherapeutic drug. This type of chemotherapy is used for curative intent.
Combined modality chemotherapy is the use of drugs with other cancer treatments, such as surgery, radiation therapy, or hyperthermia therapy.
Consolidation chemotherapy is given after remission in order to prolong the overall disease-free time and improve overall survival. The drug that is administered is the same as the drug that achieved remission.
Intensification chemotherapy is identical to consolidation chemotherapy but a different drug than the induction chemotherapy is used.
Combination chemotherapy involves treating a person with a number of different drugs simultaneously. The drugs differ in their mechanism and side-effects. The biggest advantage is minimising the chances of resistance developing to any one agent. Also, the drugs can often be used at lower doses, reducing toxicity.
Neoadjuvant chemotherapy is given prior to a local treatment such as surgery, and is designed to shrink the primary tumor. It is also given for cancers with a high risk of micrometastatic disease.
Adjuvant chemotherapy is given after a local treatment (radiotherapy or surgery). It can be used when there is little evidence of cancer present, but there is risk of recurrence. It is also useful in killing any cancerous cells that have spread to other parts of the body. These micrometastases can be treated with adjuvant chemotherapy and can reduce relapse rates caused by these disseminated cells.
Maintenance chemotherapy is a repeated low-dose treatment to prolong remission.
Salvage chemotherapy or palliative chemotherapy is given without curative intent, but simply to decrease tumor load and increase life expectancy. For these regimens, in general, a better toxicity profile is expected.
All chemotherapy regimens require that the recipient be capable of undergoing the treatment. Performance status is often used as a measure to determine whether a person can receive chemotherapy, or whether dose reduction is required. Because only a fraction of the cells in a tumor die with each treatment (fractional kill), repeated doses must be administered to continue to reduce the size of the tumor. Current chemotherapy regimens apply drug treatment in cycles, with the frequency and duration of treatments limited by toxicity.
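The fractional-kill argument for repeated cycles can be sketched with a simple log-kill model. The cell count and kill fraction below are hypothetical illustrative numbers:

```python
def cells_remaining(n0: float, kill_fraction: float, cycles: int) -> float:
    """Log-kill model: each chemotherapy cycle kills a fixed fraction
    of the remaining tumor cells."""
    return n0 * (1.0 - kill_fraction) ** cycles

# Hypothetical numbers: a 10^9-cell tumor and a 99% kill per cycle.
for n in (1, 3, 6):
    print(n, cells_remaining(1e9, 0.99, n))
# One cycle still leaves ~10^7 cells; repeated cycles are needed to
# drive the expected survivor count below one cell.
```

Because each cycle removes a fixed fraction rather than a fixed number of cells, a single treatment cannot eliminate a large tumor, which is why regimens are delivered in repeated cycles limited mainly by toxicity.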
=== Effectiveness ===
The effectiveness of chemotherapy depends on the type of cancer and the stage. The overall effectiveness ranges from being curative for some cancers, such as some leukemias, to being ineffective, such as in some brain tumors, to being needless in others, like most non-melanoma skin cancers.
=== Dosage ===
Dosage of chemotherapy can be difficult: If the dose is too low, it will be ineffective against the tumor, whereas, at excessive doses, the toxicity (side-effects) will be intolerable to the person receiving it. The standard method of determining chemotherapy dosage is based on calculated body surface area (BSA). The BSA is usually calculated with a mathematical formula or a nomogram, using the recipient's weight and height, rather than by direct measurement of body area. This formula was originally derived in a 1916 study and attempted to translate medicinal doses established with laboratory animals to equivalent doses for humans. The study only included nine human subjects. When chemotherapy was introduced in the 1950s, the BSA formula was adopted as the official standard for chemotherapy dosing for lack of a better option.
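BSA is commonly estimated with the Du Bois formula, which traces back to the 1916 study mentioned above. The sketch below applies it to an illustrative 70 kg, 175 cm patient (those patient values are assumptions for demonstration):

```python
def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Du Bois & Du Bois body-surface-area estimate, in square metres."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def dose_mg(dose_mg_per_m2: float, weight_kg: float, height_cm: float) -> float:
    """BSA-based dose: a prescribed mg/m^2 scaled by the estimated BSA."""
    return dose_mg_per_m2 * bsa_du_bois(weight_kg, height_cm)

# Illustrative 70 kg, 175 cm adult: BSA is about 1.85 m^2, so a
# 100 mg/m^2 prescription comes to roughly 185 mg.
print(round(bsa_du_bois(70, 175), 2))   # 1.85
```

Note that the formula's only inputs are weight and height, which is exactly the limitation criticized in the following paragraphs: it ignores the pharmacokinetic factors that determine the actual drug concentration in the bloodstream.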
The validity of this method in calculating uniform doses has been questioned because the formula only takes into account the individual's weight and height. Drug absorption and clearance are influenced by multiple factors, including age, sex, metabolism, disease state, organ function, drug-to-drug interactions, genetics, and obesity, which have major impacts on the actual concentration of the drug in the person's bloodstream. As a result, there is high variability in the systemic chemotherapy drug concentration in people dosed by BSA, and this variability has been demonstrated to be more than ten-fold for many drugs. In other words, if two people receive the same dose of a given drug based on BSA, the concentration of that drug in the bloodstream of one person may be 10 times higher or lower than that of the other person. This variability is typical of many chemotherapy drugs dosed by BSA, and was demonstrated in a study of 14 common chemotherapy drugs.
The result of this pharmacokinetic variability among people is that many people do not receive the right dose to achieve optimal treatment effectiveness with minimized toxic side effects. Some people are overdosed while others are underdosed. For example, in a randomized clinical trial, investigators found 85% of metastatic colorectal cancer patients treated with 5-fluorouracil (5-FU) did not receive the optimal therapeutic dose when dosed by the BSA standard—68% were underdosed and 17% were overdosed.
There has been controversy over the use of BSA to calculate chemotherapy doses for people who are obese. Because of their higher BSA, clinicians often arbitrarily reduce the dose prescribed by the BSA formula for fear of overdosing. In many cases, this can result in sub-optimal treatment.
Several clinical studies have demonstrated that when chemotherapy dosing is individualized to achieve optimal systemic drug exposure, treatment outcomes are improved and toxic side effects are reduced. In the 5-FU clinical study cited above, people whose dose was adjusted to achieve a pre-determined target exposure realized an 84% improvement in treatment response rate and a six-month improvement in overall survival (OS) compared with those dosed by BSA.
In the same study, investigators compared the incidence of common 5-FU-associated grade 3/4 toxicities between the dose-adjusted people and people dosed per BSA. The incidence of debilitating grades of diarrhea was reduced from 18% in the BSA-dosed group to 4% in the dose-adjusted group and serious hematologic side effects were eliminated. Because of the reduced toxicity, dose-adjusted patients were able to be treated for longer periods of time. BSA-dosed people were treated for a total of 680 months while people in the dose-adjusted group were treated for a total of 791 months. Completing the course of treatment is an important factor in achieving better treatment outcomes.
Similar results were found in a study involving people with colorectal cancer who have been treated with the popular FOLFOX regimen. The incidence of serious diarrhea was reduced from 12% in the BSA-dosed group of patients to 1.7% in the dose-adjusted group, and the incidence of severe mucositis was reduced from 15% to 0.8%.
The FOLFOX study also demonstrated an improvement in treatment outcomes. Positive response increased from 46% in the BSA-dosed group to 70% in the dose-adjusted group. Median progression free survival (PFS) and overall survival (OS) both improved by six months in the dose adjusted group.
One approach that can help clinicians individualize chemotherapy dosing is to measure the drug levels in blood plasma over time and adjust dose according to a formula or algorithm to achieve optimal exposure. With an established target exposure for optimized treatment effectiveness with minimized toxicities, dosing can be personalized to achieve target exposure and optimal results for each person. Such an algorithm was used in the clinical trials cited above and resulted in significantly improved treatment outcomes.
Oncologists are already individualizing the dosing of some cancer drugs based on exposure. Carboplatin and busulfan dosing rely upon results from blood tests to calculate the optimal dose for each person. Simple blood tests are also available for dose optimization of methotrexate, 5-FU, paclitaxel, and docetaxel.
The serum albumin level immediately prior to chemotherapy administration is an independent prognostic predictor of survival in various cancer types.
=== Types ===
==== Alkylating agents ====
Alkylating agents are the oldest group of chemotherapeutics in use today. Originally derived from mustard gas used in World War I, there are now many types of alkylating agents in use. They are so named because of their ability to alkylate many molecules, including proteins, RNA and DNA. This ability to bind covalently to DNA via their alkyl group is the primary cause for their anti-cancer effects. DNA is made of two strands and the molecules may either bind twice to one strand of DNA (intrastrand crosslink) or may bind once to both strands (interstrand crosslink). If the cell tries to replicate crosslinked DNA during cell division, or tries to repair it, the DNA strands can break. This leads to a form of programmed cell death called apoptosis. Alkylating agents will work at any point in the cell cycle and thus are known as cell cycle-independent drugs. For this reason, the effect on the cell is dose dependent; the fraction of cells that die is directly proportional to the dose of drug.
The subtypes of alkylating agents are the nitrogen mustards, nitrosoureas, tetrazines, aziridines, cisplatins and derivatives, and non-classical alkylating agents. Nitrogen mustards include mechlorethamine, cyclophosphamide, melphalan, chlorambucil, ifosfamide and busulfan. Nitrosoureas include N-Nitroso-N-methylurea (MNU), carmustine (BCNU), lomustine (CCNU) and semustine (MeCCNU), fotemustine and streptozotocin. Tetrazines include dacarbazine, mitozolomide and temozolomide. Aziridines include thiotepa, mitomycin and diaziquone (AZQ). Cisplatin and derivatives include cisplatin, carboplatin and oxaliplatin. They impair cell function by forming covalent bonds with the amino, carboxyl, sulfhydryl, and phosphate groups in biologically important molecules. Non-classical alkylating agents include procarbazine and hexamethylmelamine.
==== Antimetabolites ====
Anti-metabolites are a group of molecules that impede DNA and RNA synthesis. Many of them have a similar structure to the building blocks of DNA and RNA. The building blocks are nucleotides; a molecule comprising a nucleobase, a sugar and a phosphate group. The nucleobases are divided into purines (guanine and adenine) and pyrimidines (cytosine, thymine and uracil). Anti-metabolites resemble either nucleobases or nucleosides (a nucleotide without the phosphate group), but have altered chemical groups. These drugs exert their effect by either blocking the enzymes required for DNA synthesis or becoming incorporated into DNA or RNA. By inhibiting the enzymes involved in DNA synthesis, they prevent mitosis because the DNA cannot duplicate itself. Also, after misincorporation of the molecules into DNA, DNA damage can occur and programmed cell death (apoptosis) is induced. Unlike alkylating agents, anti-metabolites are cell cycle dependent. This means that they only work during a specific part of the cell cycle, in this case S-phase (the DNA synthesis phase). For this reason, at a certain dose, the effect plateaus and proportionally no more cell death occurs with increased doses. Subtypes of the anti-metabolites are the anti-folates, fluoropyrimidines, deoxynucleoside analogues and thiopurines.
The anti-folates include methotrexate and pemetrexed. Methotrexate inhibits dihydrofolate reductase (DHFR), an enzyme that regenerates tetrahydrofolate from dihydrofolate. When the enzyme is inhibited by methotrexate, the cellular levels of folate coenzymes diminish. These are required for thymidylate and purine production, which are both essential for DNA synthesis and cell division. Pemetrexed is another anti-metabolite that affects purine and pyrimidine production, and therefore also inhibits DNA synthesis. It primarily inhibits the enzyme thymidylate synthase, but also has effects on DHFR, aminoimidazole carboxamide ribonucleotide formyltransferase and glycinamide ribonucleotide formyltransferase. The fluoropyrimidines include fluorouracil and capecitabine. Fluorouracil is a nucleobase analogue that is metabolised in cells to form at least two active products: 5-fluorouridine monophosphate (FUMP) and 5-fluoro-2'-deoxyuridine 5'-phosphate (FdUMP). FUMP becomes incorporated into RNA and FdUMP inhibits the enzyme thymidylate synthase; both effects lead to cell death. Capecitabine is a prodrug of 5-fluorouracil that is broken down in cells to produce the active drug. The deoxynucleoside analogues include cytarabine, gemcitabine, decitabine, azacitidine, fludarabine, nelarabine, cladribine, clofarabine, and pentostatin. The thiopurines include thioguanine and mercaptopurine.
==== Anti-microtubule agents ====
Anti-microtubule agents are plant-derived chemicals that block cell division by preventing microtubule function. Microtubules are an important cellular structure composed of two proteins, α-tubulin and β-tubulin. They are hollow, rod-shaped structures that are required for cell division, among other cellular functions. Microtubules are dynamic structures, which means that they are permanently in a state of assembly and disassembly. Vinca alkaloids and taxanes are the two main groups of anti-microtubule agents, and although both of these groups of drugs cause microtubule dysfunction, their mechanisms of action are completely opposite: Vinca alkaloids prevent the assembly of microtubules, whereas taxanes prevent their disassembly. By doing so, they can induce mitotic catastrophe in the cancer cells. Following this, cell cycle arrest occurs, which induces programmed cell death (apoptosis). These drugs can also affect blood vessel growth, an essential process that tumours utilise in order to grow and metastasise.
Vinca alkaloids are derived from the Madagascar periwinkle, Catharanthus roseus, formerly known as Vinca rosea. They bind to specific sites on tubulin, inhibiting the assembly of tubulin into microtubules. The original vinca alkaloids are natural products that include vincristine and vinblastine. Following the success of these drugs, semi-synthetic vinca alkaloids were produced: vinorelbine (used in the treatment of non-small-cell lung cancer), vindesine, and vinflunine. These drugs are cell cycle-specific. They bind to the tubulin molecules in S-phase and prevent proper microtubule formation required for M-phase.
Taxanes are natural and semi-synthetic drugs. The first drug of their class, paclitaxel, was originally extracted from Taxus brevifolia, the Pacific yew. Now this drug and another in this class, docetaxel, are produced semi-synthetically from a chemical found in the bark of another yew tree, Taxus baccata.
Podophyllotoxin is an antineoplastic lignan obtained primarily from the American mayapple (Podophyllum peltatum) and Himalayan mayapple (Sinopodophyllum hexandrum). It has anti-microtubule activity, and its mechanism is similar to that of vinca alkaloids in that they bind to tubulin, inhibiting microtubule formation. Podophyllotoxin is used to produce two other drugs with different mechanisms of action: etoposide and teniposide.
==== Topoisomerase inhibitors ====
Topoisomerase inhibitors are drugs that affect the activity of two enzymes: topoisomerase I and topoisomerase II. When the DNA double-strand helix is unwound, during DNA replication or transcription, for example, the adjacent unopened DNA winds tighter (supercoils), like opening the middle of a twisted rope. The stress caused by this effect is relieved in part by the topoisomerase enzymes. They produce single- or double-strand breaks in DNA, reducing the tension in the DNA strand. This allows the normal unwinding of DNA to occur during replication or transcription. Inhibition of topoisomerase I or II interferes with both of these processes.
Two topoisomerase I inhibitors, irinotecan and topotecan, are semi-synthetically derived from camptothecin, which is obtained from the Chinese ornamental tree Camptotheca acuminata. Drugs that target topoisomerase II can be divided into two groups. The topoisomerase II poisons cause increased levels of enzymes bound to DNA. This prevents DNA replication and transcription, causes DNA strand breaks, and leads to programmed cell death (apoptosis). These agents include etoposide, doxorubicin, mitoxantrone and teniposide. The second group, catalytic inhibitors, are drugs that block the activity of topoisomerase II, and therefore prevent DNA synthesis and translation because the DNA cannot unwind properly. This group includes novobiocin, merbarone, and aclarubicin, which also have other significant mechanisms of action.
==== Cytotoxic antibiotics ====
The cytotoxic antibiotics are a varied group of drugs that have various mechanisms of action. The common theme that they share in their chemotherapy indication is that they interrupt cell division. The most important subgroup is the anthracyclines and the bleomycins; other prominent examples include mitomycin C and actinomycin.
Among the anthracyclines, doxorubicin and daunorubicin were the first, and were obtained from the bacterium Streptomyces peucetius. Derivatives of these compounds include epirubicin and idarubicin. Other clinically used drugs in the anthracycline group are pirarubicin, aclarubicin, and mitoxantrone. The mechanisms of anthracyclines include DNA intercalation (molecules insert between the two strands of DNA), generation of highly reactive free radicals that damage intracellular molecules, and topoisomerase inhibition.
Actinomycin is a complex molecule that intercalates DNA and prevents RNA synthesis.
Bleomycin, a glycopeptide isolated from Streptomyces verticillus, also intercalates DNA, but produces free radicals that damage DNA. This occurs when bleomycin binds to a metal ion, becomes chemically reduced and reacts with oxygen.
Mitomycin is a cytotoxic antibiotic with the ability to alkylate DNA.
=== Delivery ===
Most chemotherapy is delivered intravenously, although a number of agents can be administered orally (e.g., melphalan, busulfan, capecitabine). According to a 2016 systematic review, oral therapies present additional challenges for patients and care teams to maintain and support adherence to treatment plans.
There are many intravenous methods of drug delivery, known as vascular access devices. These include the winged infusion device, peripheral venous catheter, midline catheter, peripherally inserted central catheter (PICC), central venous catheter and implantable port. The devices have different applications regarding duration of chemotherapy treatment, method of delivery and types of chemotherapeutic agent.
Depending on the person, the cancer, the stage of cancer, the type of chemotherapy, and the dosage, intravenous chemotherapy may be given on either an inpatient or an outpatient basis. For continuous, frequent or prolonged intravenous chemotherapy administration, various systems may be surgically inserted into the vasculature to maintain access. Commonly used systems are the Hickman line, the Port-a-Cath, and the PICC line. These have a lower infection risk, are much less prone to phlebitis or extravasation, and eliminate the need for repeated insertion of peripheral cannulae.
Isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used to treat some tumors. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumor sites without causing overwhelming systemic damage. These approaches can help control solitary or limited metastases, but they are by definition not systemic, and, therefore, do not treat distributed metastases or micrometastases.
Topical chemotherapies, such as 5-fluorouracil, are used to treat some cases of non-melanoma skin cancer.
If the cancer has central nervous system involvement, or with meningeal disease, intrathecal chemotherapy may be administered.
== Adverse effects ==
Chemotherapeutic techniques have a range of side effects that depend on the type of medications used. The most common medications affect mainly the fast-dividing cells of the body, such as blood cells and the cells lining the mouth, stomach, and intestines. Chemotherapy-related iatrogenic toxicities can occur acutely after administration, within hours or days, or chronically, from weeks to years.
=== Immunosuppression and myelosuppression ===
Virtually all chemotherapeutic regimens can cause depression of the immune system, often by paralysing the bone marrow and leading to a decrease of white blood cells, red blood cells, and platelets.
Anemia and thrombocytopenia may require blood transfusion. Neutropenia (a decrease of the neutrophil granulocyte count below 0.5 × 10⁹/litre) can be improved with synthetic G-CSF (granulocyte colony-stimulating factor, e.g., filgrastim, lenograstim, efbemalenograstim alfa).
In very severe myelosuppression, which occurs in some regimens, almost all the bone marrow stem cells (cells that produce white and red blood cells) are destroyed, meaning allogeneic or autologous bone marrow cell transplants are necessary. (In autologous BMTs, cells are removed from the person before the treatment, multiplied and then re-injected afterward; in allogeneic BMTs, the source is a donor.) However, some people still develop diseases because of this interference with bone marrow.
Although people receiving chemotherapy are encouraged to wash their hands, avoid sick people, and take other infection-reducing steps, about 85% of infections are due to naturally occurring microorganisms in the person's own gastrointestinal tract (including oral cavity) and skin. This may manifest as systemic infections, such as sepsis, or as localized outbreaks, such as Herpes simplex, shingles, or other members of the Herpesviridae. The risk of illness and death can be reduced by taking common antibiotics such as quinolones or trimethoprim/sulfamethoxazole before any fever or sign of infection appears. Quinolones show effective prophylaxis mainly with hematological cancer. However, in general, for every five people who are immunosuppressed following chemotherapy who take an antibiotic, one fever can be prevented; for every 34 who take an antibiotic, one death can be prevented. Sometimes, chemotherapy treatments are postponed because the immune system is suppressed to a critically low level.
In Japan, the government has approved the use of some medicinal mushrooms like Trametes versicolor, to counteract depression of the immune system in people undergoing chemotherapy.
Trilaciclib is an inhibitor of cyclin-dependent kinase 4/6 approved for the prevention of myelosuppression caused by chemotherapy. The drug is given before chemotherapy to protect bone marrow function.
=== Neutropenic enterocolitis ===
Due to immune system suppression, neutropenic enterocolitis (typhlitis) is a "life-threatening gastrointestinal complication of chemotherapy." Typhlitis is an intestinal infection which may manifest itself through symptoms including nausea, vomiting, diarrhea, a distended abdomen, fever, chills, or abdominal pain and tenderness.
Typhlitis is a medical emergency. It has a very poor prognosis and is often fatal unless promptly recognized and aggressively treated. Successful treatment hinges on early diagnosis provided by a high index of suspicion and the use of CT scanning, nonoperative treatment for uncomplicated cases, and sometimes elective right hemicolectomy to prevent recurrence.
=== Gastrointestinal distress ===
Nausea, vomiting, anorexia, diarrhea, abdominal cramps, and constipation are common side-effects of chemotherapeutic medications that kill fast-dividing cells. Malnutrition and dehydration can result when the recipient does not eat or drink enough, or when the person vomits frequently, because of gastrointestinal damage. This can result in rapid weight loss, or occasionally in weight gain, if the person eats too much in an effort to allay nausea or heartburn. Weight gain can also be caused by some steroid medications. These side-effects can frequently be reduced or eliminated with antiemetic drugs. Low-certainty evidence also suggests that probiotics may have a preventive and treatment effect on diarrhoea related to chemotherapy alone and in combination with radiotherapy. However, a high index of suspicion is appropriate, since diarrhoea and bloating are also symptoms of typhlitis, a very serious and potentially life-threatening medical emergency that requires immediate treatment.
=== Anemia ===
Anemia can be a combined outcome caused by myelosuppressive chemotherapy, and possible cancer-related causes such as bleeding, blood cell destruction (hemolysis), hereditary disease, kidney dysfunction, nutritional deficiencies or anemia of chronic disease. Treatments to mitigate anemia include hormones to boost blood production (erythropoietin), iron supplements, and blood transfusions. Myelosuppressive therapy can cause a tendency to bleed easily, leading to anemia. Medications that kill rapidly dividing cells or blood cells can reduce the number of platelets in the blood, which can result in bruises and bleeding. Extremely low platelet counts may be temporarily boosted through platelet transfusions and new drugs to increase platelet counts during chemotherapy are being developed. Sometimes, chemotherapy treatments are postponed to allow platelet counts to recover.
Fatigue may be a consequence of the cancer or its treatment, and can last for months to years after treatment. One physiological cause of fatigue is anemia, which can be caused by chemotherapy, surgery, radiotherapy, primary and metastatic disease or nutritional depletion. Aerobic exercise has been found to be beneficial in reducing fatigue in people with solid tumours.
=== Nausea and vomiting ===
Nausea and vomiting are two of the most feared cancer treatment-related side-effects for people with cancer and their families. In 1983, Coates et al. found that people receiving chemotherapy ranked nausea and vomiting as the first and second most severe side-effects, respectively. Up to 20% of people receiving highly emetogenic agents in this era postponed, or even refused, potentially curative treatments. Chemotherapy-induced nausea and vomiting (CINV) are common with many treatments and some forms of cancer. Since the 1990s, several novel classes of antiemetics have been developed and commercialized, becoming a nearly universal standard in chemotherapy regimens and helping to successfully manage these symptoms in many people. Effective management of these unpleasant and sometimes debilitating symptoms results in increased quality of life for the recipient and more efficient treatment cycles, as patients are less likely to avoid or refuse treatment.
=== Hair loss ===
Hair loss (alopecia) can be caused by chemotherapy that kills rapidly dividing cells; other medications may cause hair to thin. These are most often temporary effects: hair usually starts to regrow a few weeks after the last treatment, but sometimes with a change in color, texture, thickness or style. Sometimes hair has a tendency to curl after regrowth, resulting in "chemo curls." Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, paclitaxel, docetaxel, cyclophosphamide, ifosfamide and etoposide. Permanent thinning or hair loss can result from some standard chemotherapy regimens.
Chemotherapy induced hair loss occurs by a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium, or less often alopecia areata. It is usually associated with systemic treatment due to the high mitotic rate of hair follicles, and more reversible than androgenic hair loss, although permanent cases can occur. Chemotherapy induces hair loss in women more often than men.
Scalp cooling offers a means of preventing both permanent and temporary hair loss; however, concerns about this method have been raised.
=== Secondary neoplasm ===
Development of secondary neoplasia after successful chemotherapy or radiotherapy treatment can occur. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors. Survivors of childhood cancer are more than 13 times as likely to get a secondary neoplasm during the 30 years after treatment than the general population. Not all of this increase can be attributed to chemotherapy.
=== Infertility ===
Some types of chemotherapy are gonadotoxic and may cause infertility. Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil.
Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles.
People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years.
Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs.
In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia came to the result that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years.
=== Teratogenicity ===
Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression.
Female patients of reproductive potential should use effective contraception during chemotherapy and for a few months after the last dose (e.g., 6 months for doxorubicin).
In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened.
=== Peripheral neuropathy ===
Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment.
=== Cognitive impairment ===
Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" in popular and social media.
=== Tumor lysis syndrome ===
In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated.
=== Organ damage ===
Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine.
Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis.
Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also by direct effects of drug clearance by the kidneys. Different drugs will affect different parts of the kidney, and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury.
Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss.
=== Other side-effects ===
Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions.
Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease).
Hand-foot syndrome is another side effect to cytotoxic chemotherapy.
Nutritional problems are also frequently seen in cancer patients at diagnosis and through chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this leading to weight gain and increased calorie and protein intake, when compared to enteral nutrition.
== Limitations ==
Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer.
The blood–brain barrier poses an obstacle to delivery of chemotherapy to the brain. This is because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump drugs out of the brain and the brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for treatment of brain tumors. Only small lipophilic alkylating agents such as lomustine or temozolomide are able to cross this blood–brain barrier.
Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this they then signal for new blood vessels to grow. The newly formed tumor vasculature is poorly formed and does not deliver an adequate blood supply to all areas of the tumor. This leads to issues with drug delivery because many drugs will be delivered to the tumor by the circulatory system.
== Resistance ==
Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult.
Another mechanism of resistance is gene amplification, a process in which cancer cells produce multiple copies of a gene. This overcomes the effect of drugs that reduce the expression of genes involved in replication: with more copies of the gene, the drug cannot prevent all expression of the gene, and the cell can therefore restore its proliferative ability.
Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair; upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Finally, drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cell stress can induce changes in gene expression that enable resistance to several types of drugs.
In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways.
== Cytotoxics and targeted therapies ==
Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecules and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs. They will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than those seen with cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, the targeted therapeutics were supposed to be solely selective for one protein. Now it is clear that there is often a range of protein targets that the drug can bind. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small molecule drug.
== Mechanism of action ==
Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth.
In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis.
As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor.
Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause cancer immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy.
== Other uses ==
Some chemotherapy drugs are used in diseases other than cancer, such as autoimmune disorders and noncancerous plasma cell dyscrasias. In some cases they are used at lower doses, which means that the side effects are minimized, while in other cases doses similar to those used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomib in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma, such as lenalidomide, have shown promise in treating AL amyloidosis.
Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced-intensity conditioning). When used in a non-cancer setting, the treatment is still called "chemotherapy", and is often done in the same treatment centers used for people with cancer.
== Occupational exposure and safe handling ==
In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) has since then introduced the concept of hazardous drugs after publishing a recommendation in 1983 regarding handling hazardous drugs. The adaptation of federal regulations came when the U.S. Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006.
The National Institute for Occupational Safety and Health (NIOSH) has been conducting workplace assessments regarding these drugs since then. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported by the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist did not have any other risk factor for cancer, and therefore, her cancer was attributed to the exposure to the antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. In another case, a malfunction in biosafety cabinetry is believed to have exposed nursing personnel to antineoplastic drugs, and investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure.
=== Routes of exposure ===
Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or with cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing antineoplastic drugs in health care facilities are also at risk of exposure.
Dermal exposure is thought to be the main route of exposure, because significant amounts of antineoplastic agents have been found on the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route of exposure. Ingestion by hand-to-mouth contact is a route of exposure that is less likely compared to others because of enforced hygiene standards in health institutions. However, it remains a potential route, especially in workplaces outside of a health institution. One can also be exposed to these hazardous drugs through injection by needle sticks. Research in this area has established that occupational exposure occurs, by examining evidence in multiple urine samples from health care workers.
=== Hazards ===
Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs could have many side effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are exposed to antineoplastic drugs on many occasions have adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances.
Moreover, these drugs have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs. Similarly, there have been research studies that linked alkylating agents with humans developing leukemias. Studies have reported elevated risk of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations revealed that there is a potential genotoxic effect from anti-neoplastic drugs to workers in health care settings.
=== Safe handling in health care settings ===
As of 2018, no occupational exposure limits had been set for antineoplastic drugs; neither OSHA nor the American Conference of Governmental Industrial Hygienists (ACGIH) has set workplace safety guidelines.
==== Preparation ====
NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, implementing an initial evaluation of the technique of the safety program, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container.
The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag.
The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and put them in a bag for their disposal inside the ventilated cabinet.
==== Administration ====
Drugs should only be administered using protective medical devices such as needleless and closed systems, and with techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs.
Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing in the system, they should make sure the tubing has been thoroughly flushed. After removing the IV bag, the workers should place it together with other disposable items directly in the yellow chemotherapy waste container with the lid closed. Protective equipment should be removed and put into a disposable chemotherapy waste container. After this has been done, one should double bag the chemotherapy waste before or after removing one's inner gloves. Moreover, one must always wash one's hands with soap and water before leaving the drug administration site.
==== Employee training ====
All employees whose jobs in health care facilities expose them to hazardous drugs must receive training. Training should include shipping and receiving personnel, housekeepers, pharmacists, assistants, and all individuals involved in the transportation and storage of antineoplastic drugs. These individuals should receive information and training to inform them of the hazards of the drugs present in their areas of work. They should be informed and trained on operations and procedures in their work areas where they can encounter hazards, different methods used to detect the presence of hazardous drugs and how the hazards are released, and the physical and health hazards of the drugs, including their reproductive and carcinogenic hazard potential. Additionally, they should be informed and trained on the measures they should take to avoid and protect themselves from these hazards. This information ought to be provided when health care workers come into contact with the drugs, that is, perform the initial assignment in a work area with hazardous drugs. Moreover, training should also be provided when new hazards emerge as well as when new drugs, procedures, or equipment are introduced.
==== Housekeeping and waste disposal ====
When performing cleaning and decontaminating the work area where antineoplastic drugs are used, one should make sure that there is sufficient ventilation to prevent the buildup of airborne drug concentrations. When cleaning the work surface, hospital workers should use deactivation and cleaning agents before and after each activity as well as at the end of their shifts. Cleaning should always be done using double protective gloves and disposable gowns. After employees finish up cleaning, they should dispose of the items used in the activity in a yellow chemotherapy waste container while still wearing protective gloves. After removing the gloves, they should thoroughly wash their hands with soap and water. Anything that comes into contact or has a trace of the antineoplastic drugs, such as needles, empty vials, syringes, gowns, and gloves, should be put in the chemotherapy waste container.
==== Spill control ====
A written policy needs to be in place in case of a spill of antineoplastic products. The policy should address the possibility of various sizes of spills as well as the procedure and personal protective equipment required for each size. A trained worker should handle a large spill and always dispose of all cleanup materials in the chemical waste container according to EPA regulations, not in a yellow chemotherapy waste container.
=== Occupational monitoring ===
A medical surveillance program must be established. In case of exposure, occupational health professionals need to ask for a detailed history and do a thorough physical exam. They should test the urine of the potentially exposed worker by doing a urine dipstick or microscopic examination, mainly looking for blood, as several antineoplastic drugs are known to cause bladder damage.
Urinary mutagenicity is a marker of exposure to antineoplastic drugs that was first used by Falck and colleagues in 1979 and uses bacterial mutagenicity assays. Apart from being nonspecific, the test can be influenced by extraneous factors such as dietary intake and smoking and is, therefore, used sparingly. However, the test played a significant role in changing the use of horizontal flow cabinets to vertical flow biological safety cabinets during the preparation of antineoplastic drugs because the former exposed health care workers to high levels of drugs. This changed the handling of drugs and effectively reduced workers' exposure to antineoplastic drugs.
Biomarkers of exposure to antineoplastic drugs commonly include urinary platinum, methotrexate, urinary cyclophosphamide and ifosfamide, and the urinary metabolite of 5-fluorouracil. In addition, there are assays that measure the drugs directly in the urine, although they are rarely used. Direct measurement of these drugs in a worker's urine is a sign of high exposure levels and of drug uptake occurring either through inhalation or dermally.
== Available agents ==
There is an extensive list of antineoplastic agents. Several classification schemes have been used to subdivide the medicines used for cancer into several different types.
== History ==
The first use of small-molecule drugs to treat cancer was in the early 20th century, although the specific chemicals first used were not originally intended for that purpose. Mustard gas was used as a chemical warfare agent during World War I and was discovered to be a potent suppressor of hematopoiesis (blood production). A similar family of compounds known as nitrogen mustards were studied further during World War II at the Yale School of Medicine. It was reasoned that an agent that damaged the rapidly growing white blood cells might have a similar effect on cancer. Therefore, in December 1942, several people with advanced lymphomas (cancers of the lymphatic system and lymph nodes) were given the drug by vein, rather than by breathing the irritating gas. Their improvement, although temporary, was remarkable. Concurrently, during a military operation in World War II, following a German air raid on the Italian harbour of Bari, several hundred people were accidentally exposed to mustard gas, which had been transported there by the Allied forces to prepare for possible retaliation in the event of German use of chemical warfare. The survivors were later found to have very low white blood cell counts. After WWII was over and the reports declassified, the experiences converged and led researchers to look for other substances that might have similar effects against cancer. The first chemotherapy drug to be developed from this line of research was mustine. Since then, many other drugs have been developed to treat cancer, and drug development has exploded into a multibillion-dollar industry, although the principles and limitations of chemotherapy discovered by the early researchers still apply.
=== The term chemotherapy ===
The word chemotherapy without a modifier usually refers to cancer treatment, but its historical meaning was broader. The term was coined in the early 1900s by Paul Ehrlich as meaning any use of chemicals to treat any disease (chemo- + -therapy), such as the use of antibiotics (antibacterial chemotherapy). Ehrlich was not optimistic that effective chemotherapy drugs would be found for the treatment of cancer. The first modern chemotherapeutic agent was arsphenamine, an arsenic compound discovered in 1907 and used to treat syphilis. This was later followed by sulfonamides (sulfa drugs) and penicillin. In today's usage, the sense "any treatment of disease with drugs" is often expressed with the word pharmacotherapy.
== Research ==
=== Targeted delivery vehicles ===
Specially targeted delivery vehicles aim to increase effective levels of chemotherapy for tumor cells while reducing effective levels for other cells. This should result in an increased tumor kill or reduced toxicity or both.
==== Antibody-drug conjugates ====
Antibody-drug conjugates (ADCs) comprise an antibody, drug and a linker between them. The antibody will be targeted at a preferentially expressed protein in the tumour cells (known as a tumor antigen) or on cells that the tumor can utilise, such as blood vessel endothelial cells. They bind to the tumor antigen and are internalised, where the linker releases the drug into the cell. These specially targeted delivery vehicles vary in their stability, selectivity, and choice of target, but, in essence, they all aim to increase the maximum effective dose that can be delivered to the tumor cells. Reduced systemic toxicity means that they can also be used in people who are sicker and that they can carry new chemotherapeutic agents that would have been far too toxic to deliver via traditional systemic approaches.
The first approved drug of this type was gemtuzumab ozogamicin (Mylotarg), released by Wyeth (now Pfizer). The drug was approved to treat acute myeloid leukemia. Two other drugs, trastuzumab emtansine and brentuximab vedotin, are both in late clinical trials, and the latter has been granted accelerated approval for the treatment of refractory Hodgkin's lymphoma and systemic anaplastic large cell lymphoma.
==== Nanoparticles ====
Nanoparticles are 1–1000 nanometer (nm) sized particles that can promote tumor selectivity and aid in delivering low-solubility drugs. Nanoparticles can be targeted passively or actively. Passive targeting exploits the difference between tumor blood vessels and normal blood vessels. Blood vessels in tumors are "leaky" because they have gaps from 200 to 2000 nm, which allow nanoparticles to escape into the tumor. Active targeting uses biological molecules (antibodies, proteins, DNA and receptor ligands) to preferentially target the nanoparticles to the tumor cells. There are many types of nanoparticle delivery systems, such as silica, polymers, liposomes and magnetic particles. Nanoparticles made of magnetic material can also be used to concentrate agents at tumor sites using an externally applied magnetic field. They have emerged as a useful vehicle in magnetic drug delivery for poorly soluble agents such as paclitaxel.
=== Electrochemotherapy ===
Electrochemotherapy is a combined treatment in which injection of a chemotherapeutic drug is followed by local application of high-voltage electric pulses to the tumor. The treatment enables chemotherapeutic drugs that otherwise cannot cross the cell membrane, or do so poorly (such as bleomycin and cisplatin), to enter the cancer cells, achieving greater antitumor effectiveness.
Clinical electrochemotherapy has been used successfully for treatment of cutaneous and subcutaneous tumors irrespective of their histological origin, and all reports on its clinical use describe the method as safe, simple and highly effective. Within the ESOPE project (European Standard Operating Procedures of Electrochemotherapy), Standard Operating Procedures (SOP) for electrochemotherapy were prepared, based on the experience of the leading European cancer centres in electrochemotherapy. Recently, new electrochemotherapy modalities have been developed for treatment of internal tumors, using surgical procedures, endoscopic routes or percutaneous approaches to gain access to the treatment area.
=== Hyperthermia therapy ===
Hyperthermia therapy is heat treatment for cancer that can be a powerful tool when used in combination with chemotherapy (thermochemotherapy) or radiation for the control of a variety of cancers. The heat can be applied locally to the tumor site, which will dilate blood vessels to the tumor, allowing more chemotherapeutic medication to enter the tumor. Additionally, the tumor cell membrane will become more porous, further allowing more of the chemotherapeutic medicine to enter the tumor cell.
Hyperthermia has also been shown to help prevent or reverse chemoresistance, which sometimes develops over time as tumors adapt to and overcome the toxicity of the chemotherapy medication. "Overcoming chemoresistance has been extensively studied within the past, especially using CDDP-resistant cells. In regard to the potential benefit that drug-resistant cells can be recruited for effective therapy by combining chemotherapy with hyperthermia, it was important to show that chemoresistance against several anticancer drugs (e.g. mitomycin C, anthracyclines, BCNU, melphalan) including CDDP could be reversed at least partially by the addition of heat."
== Other animals ==
Chemotherapy is used in veterinary medicine in much the same way as in human medicine.
== See also ==
== References ==
== External links ==
Chemotherapy, American Cancer Society
Hazardous Drug Exposures in Health Care, National Institute for Occupational Safety and Health
NIOSH List of Antineoplastic and Other Hazardous Drugs in Healthcare Settings, 2016, National Institute for Occupational Safety and Health
Wikiversity page for the International Ototoxicity Management Group: https://en.wikiversity.org/wiki/International_Ototoxicity_Management_Group_(IOMG)
Mitochondrial DNA (mtDNA or mDNA) is the DNA located in mitochondria, the organelles in a eukaryotic cell that convert chemical energy from food into adenosine triphosphate (ATP). Mitochondrial DNA is only a small portion of the DNA in a eukaryotic cell; most of the DNA is in the cell nucleus and, in plants and algae, also in plastids such as chloroplasts. Mitochondrial DNA encodes 13 essential subunits of the oxidative phosphorylation (OXPHOS) system, which has a role in cellular energy conversion.
Human mitochondrial DNA was the first significant part of the human genome to be sequenced. This sequencing revealed that human mtDNA has 16,569 base pairs and encodes 13 proteins. As in other vertebrates, the human mitochondrial genetic code differs slightly from nuclear DNA.
Since animal mtDNA evolves faster than nuclear genetic markers, it represents a mainstay of phylogenetics and evolutionary biology. It also permits tracing the relationships of populations, and so has become important in anthropology and biogeography.
== Origin ==
Nuclear and mitochondrial DNA are thought to have separate evolutionary origins, with the mtDNA derived from the circular genomes of bacteria engulfed by the ancestors of modern eukaryotic cells. This theory is called the endosymbiotic theory. In the cells of extant organisms, the vast majority of the proteins in the mitochondria (numbering approximately 1500 different types in mammals) are coded by nuclear DNA, but the genes for some, if not most, of them are thought to be of bacterial origin, having been transferred to the eukaryotic nucleus during evolution.
The reasons mitochondria have retained some genes are debated. The existence in some species of mitochondrion-derived organelles lacking a genome suggests that complete gene loss is possible, and transferring mitochondrial genes to the nucleus has several advantages. The difficulty of targeting remotely produced hydrophobic protein products to the mitochondrion is one hypothesis for why some genes are retained in mtDNA; colocalisation for redox regulation is another, citing the desirability of localised control over mitochondrial machinery. Recent analysis of a wide range of mtDNA genomes suggests that both these features may dictate mitochondrial gene retention.
== Genome structure and diversity ==
Across all organisms, there are six main mitochondrial genome types, classified by structure (i.e., circular versus linear), size, presence of introns or plasmid-like structures, and whether the genetic material is a single molecule or a collection of homogeneous or heterogeneous molecules.
In many unicellular organisms (e.g., the ciliate Tetrahymena and the green alga Chlamydomonas reinhardtii), and in rare cases also in multicellular organisms (e.g. in some species of Cnidaria), the mtDNA is linear DNA. Most of these linear mtDNAs possess telomerase-independent telomeres (i.e., the ends of the linear DNA) with different modes of replication, which have made them interesting objects of research because many of these unicellular organisms with linear mtDNA are known pathogens.
=== Animals ===
Most (bilaterian) animals have a circular mitochondrial genome; however, the Medusozoa and Calcarea clades include species with linear mitochondrial chromosomes. With a few exceptions, animals have 37 genes in their mitochondrial DNA: 13 for proteins, 22 for tRNAs, and 2 for rRNAs.
Mitochondrial genomes for animals average about 16,000 base pairs in length. The anemone Isarachnanthus nocturnus has the largest mitochondrial genome of any animal at 80,923 bp. The smallest known mitochondrial genome in animals belongs to the comb jelly Vallicula multiformis, which consists of 9,961 bp.
In February 2020, a jellyfish-related parasite, Henneguya salminicola, was discovered that lacks a mitochondrial genome but retains structures deemed mitochondrion-related organelles. Moreover, nuclear DNA genes involved in aerobic respiration and in mitochondrial DNA replication and transcription were either absent or present only as pseudogenes. This is the first multicellular organism known to lack aerobic respiration and to live entirely free of oxygen dependency.
=== Plants and fungi ===
There are three different mitochondrial genome types in plants and fungi. The first type is a circular genome that has introns (type 2) and may range from 19 to 1000 kbp in length. The second genome type is a circular genome (about 20–1000 kbp) that also has a plasmid-like structure (1 kb) (type 3). The final genome type found in plants and fungi is a linear genome made up of homogeneous DNA molecules (type 5).
Great variation in mtDNA gene content and size exists among fungi and plants, although there appears to be a core subset of genes present in all eukaryotes (except for the few that have no mitochondria at all). In fungi, however, there is no single gene shared among all mitogenomes.
Some plant species have enormous mitochondrial genomes, with Silene conica mtDNA containing as many as 11,300,000 base pairs. Surprisingly, even those huge mtDNAs contain the same number and kinds of genes as related plants with much smaller mtDNAs.
The genome of the mitochondrion of the cucumber (Cucumis sativus) consists of three circular chromosomes (lengths 1556, 84 and 45 kilobases), which are entirely or largely autonomous with regard to their replication.
=== Protists ===
Protists contain the most diverse mitochondrial genomes, with five different types found in this kingdom. Type 2, type 3, and type 5 of the plant and fungal genomes also exist in some protists, as do two unique genome types. One of these unique types is a heterogeneous collection of circular DNA molecules (type 4) while the other is a heterogeneous collection of linear molecules (type 6). Genome types 4 and 6 each range from 1–200 kbp in size.
The smallest mitochondrial genome sequenced to date is the 5,967 bp mtDNA of the parasite Plasmodium falciparum.
Endosymbiotic gene transfer, the process by which genes that were coded in the mitochondrial genome are transferred to the cell's main genome, likely explains why more complex organisms such as humans have smaller mitochondrial genomes than simpler organisms such as protists.
== Replication ==
The two strands of the human mitochondrial DNA are distinguished as the heavy strand and the light strand. The regulation of mitochondrial DNA replication and transcription initiation is located in a single intergenic noncoding region (NCR). In humans, the roughly 1,100-base-pair NCR contains three promoters: two L-strand promoters (LSP and LSP2) and one H-strand promoter (HSP). Unlike the bidirectional, specific-origin initiation of nuclear DNA replication, mitochondrial DNA has two strand-specific, unidirectional origins of replication: that of the leading H strand (OH), located in the NCR, and that of the lagging L strand (OL), located in a tRNA gene cluster.
Mitochondrial DNA is replicated by the DNA polymerase gamma complex which is composed of a 140 kDa catalytic DNA polymerase encoded by the POLG gene and two 55 kDa accessory subunits encoded by the POLG2 gene. The replisome machinery is formed by DNA polymerase, TWINKLE and mitochondrial SSB proteins. TWINKLE is a helicase, which unwinds short stretches of dsDNA in the 5' to 3' direction. All these polypeptides are encoded in the nuclear genome.
During embryogenesis, replication of mtDNA is strictly down-regulated from the fertilized oocyte through the preimplantation embryo. The resulting reduction in per-cell copy number of mtDNA plays a role in the mitochondrial bottleneck, exploiting cell-to-cell variability to ameliorate the inheritance of damaging mutations. According to Justin St. John and colleagues, "At the blastocyst stage, the onset of mtDNA replication is specific to the cells of the trophectoderm. In contrast, the cells of the inner cell mass restrict mtDNA replication until they receive the signals to differentiate to specific cell types."
== DNA repair ==
Although several DNA repair pathways have been reported to occur in the mitochondria, currently the base excision repair pathway is the pathway most comprehensively described. Proteins that are employed in the maintenance of mitochondrial DNA are encoded by nuclear genes and translocated to the mitochondria. The mitochondria of human cells are capable of repairing DNA base pair mismatches by a pathway that is distinct from the DNA mismatch repair pathway of the nucleus. This distinct mitochondrial pathway includes the activity of the Y box binding protein 1 (designated YB-1 or YBX1), that likely acts in the mismatch binding and recognition steps of mismatch repair. DNA repair mechanisms specific to the mitochondria may reflect the proximity of the mitochondrial DNA to the oxidative phosphorylation system and consequently to the DNA-damaging reactive oxygen species formed during ATP production.
== Genes on the human mtDNA and their transcription ==
The two strands of the human mitochondrial DNA are distinguished as the heavy strand and the light strand. The heavy strand is rich in guanine and encodes 12 subunits of the oxidative phosphorylation system, two ribosomal RNAs (12S and 16S), and 14 transfer RNAs (tRNAs). The light strand encodes one subunit and 8 tRNAs. So, altogether mtDNA encodes for two rRNAs, 22 tRNAs, and 13 protein subunits, all of which are involved in the oxidative phosphorylation process.
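The strand-by-strand tallies above can be cross-checked with a trivial arithmetic sketch (the counts are those stated in this section; the code itself is purely illustrative):

```python
# Gene content of human mtDNA by strand, as described above.
heavy = {"protein": 12, "rRNA": 2, "tRNA": 14}  # guanine-rich heavy strand
light = {"protein": 1, "rRNA": 0, "tRNA": 8}    # light strand

totals = {kind: heavy[kind] + light[kind] for kind in heavy}
print(totals)                 # {'protein': 13, 'rRNA': 2, 'tRNA': 22}
print(sum(totals.values()))   # 37, the usual animal mtDNA gene count
```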
The complete sequence of the human mitochondrial DNA in graphic form
Between most (but not all) protein-coding regions, tRNAs are present (see the human mitochondrial genome map). During transcription, the tRNAs acquire their characteristic L-shape, which is recognized and cleaved by specific enzymes. During mitochondrial RNA processing, individual mRNA, rRNA, and tRNA sequences are released from the primary transcript. The folded tRNAs therefore act as punctuation marks in the secondary structure.
Transcription is carried out by the single-subunit mitochondrial RNA polymerase (POLRMT). In association with two accessory factors, mitochondrial transcription factor A (TFAM) and mitochondrial transcription factor B2 (TFB2M), the POLRMT complex recognizes promoters and initiates transcription. Transcription yields polycistronic transcripts that are processed in discrete mitochondrial RNA granules into individual mRNAs, tRNAs, and rRNAs.
=== Regulation of transcription ===
The promoters for the initiation of the transcription of the heavy and light strands are located in the main non-coding region of the mtDNA called the displacement loop, the D-loop. There is evidence that the transcription of the mitochondrial rRNAs is regulated by the heavy-strand promoter 1 (HSP1), and the transcription of the polycistronic transcripts coding for the protein subunits are regulated by HSP2.
Measurement of the levels of the mtDNA-encoded RNAs in bovine tissues has shown that there are major differences in the expression of the mitochondrial RNAs relative to total tissue RNA. Among the 12 tissues examined the highest level of expression was observed in the heart, followed by brain and steroidogenic tissue samples.
As demonstrated by the effect of the trophic hormone ACTH on adrenal cortex cells, the expression of the mitochondrial genes may be strongly regulated by external factors, apparently to enhance the synthesis of mitochondrial proteins necessary for energy production. Interestingly, while the expression of protein-encoding genes was stimulated by ACTH, the levels of the mitochondrial 16S rRNA showed no significant change.
== Mitochondrial inheritance ==
In most multicellular organisms, mtDNA is inherited from the mother (maternally inherited). Mechanisms for this include simple dilution (an egg contains on average 200,000 mtDNA molecules, whereas a healthy human sperm has been reported to contain on average 5 molecules), degradation of sperm mtDNA in the male genital tract and the fertilized egg; and, at least in a few organisms, failure of sperm mtDNA to enter the egg. Whatever the mechanism, this single parent (uniparental inheritance) pattern of mtDNA inheritance is found in most animals, most plants, and also in fungi.
In a study published in 2018, human babies were reported to inherit mtDNA from both their fathers and their mothers resulting in mtDNA heteroplasmy, a finding that has been rejected by other scientists.
=== Female inheritance ===
In sexual reproduction, mitochondria are normally inherited exclusively from the mother; the mitochondria in mammalian sperm are usually destroyed by the egg cell after fertilization. Also, mitochondria are present solely in the midpiece, which is used for propelling the sperm cells, and sometimes the midpiece, along with the tail, is lost during fertilization. In 1999 it was reported that paternal sperm mitochondria (containing mtDNA) are marked with ubiquitin to select them for later destruction inside the embryo. Some in vitro fertilization techniques, particularly injecting a sperm into an oocyte, may interfere with this.
The fact that mitochondrial DNA is mostly maternally inherited enables genealogical researchers to trace maternal lineage far back in time. (Y-chromosomal DNA, paternally inherited, is used in an analogous way to determine the patrilineal history.) This is usually accomplished on human mitochondrial DNA by sequencing the hypervariable control regions (HVR1 or HVR2), and sometimes the complete molecule of the mitochondrial DNA, as a genealogical DNA test. HVR1, for example, consists of about 440 base pairs. These 440 base pairs are compared to the same regions of other individuals (either specific people or subjects in a database) to determine maternal lineage. Most often, the comparison is made with the revised Cambridge Reference Sequence. Vilà et al. have published studies tracing the matrilineal descent of domestic dogs from wolves.
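The comparison step can be illustrated with a short sketch. The sequences below are made-up 20-bp fragments, not the actual revised Cambridge Reference Sequence, and real analyses also handle insertions and deletions rather than only aligned substitutions:

```python
def hvr1_differences(sample, reference):
    """List positions where a sequenced HVR1 region differs from a
    reference of the same length (substitutions only)."""
    if len(sample) != len(reference):
        raise ValueError("regions must be aligned to equal length")
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]

# Toy 20-bp fragments standing in for ~440-bp HVR1 regions
# (hypothetical sequences, not real mtDNA data).
reference = "CTGTTCTTTCATGGGGAAGC"
sample    = "CTGTTCTTCCATGGGGAAGT"
diffs = hvr1_differences(sample, reference)
print(diffs)   # [(8, 'T', 'C'), (19, 'C', 'T')]
```

Two individuals sharing few or no such differences relative to each other are candidates for a common maternal lineage.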
The concept of the Mitochondrial Eve is based on the same type of analysis, attempting to discover the origin of humanity by tracking the lineage back in time.
=== The mitochondrial bottleneck ===
Entities subject to uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this through a developmental process known as the mtDNA bottleneck. The bottleneck exploits random processes in the cell to increase the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo in which different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilisation or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of the random partitioning of mtDNAs at cell divisions and the random turnover of mtDNA molecules within the cell.
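The combination of random partitioning and random turnover described above can be sketched with a small simulation. This is a minimal toy model under assumed parameters (a fixed per-cell copy number, an arbitrary starting mutant load), not a quantitative model of the real bottleneck:

```python
import random
import statistics

COPIES = 500  # mtDNA molecules per cell (an illustrative value)

def binom(n, p):
    """Number of successes in n independent coin flips with bias p."""
    return sum(random.random() < p for _ in range(n))

def divide(mut_frac):
    """One cell division: partition each molecule to a daughter at
    random, then let each daughter replicate its pool back up to
    COPIES by random turnover (sampling with replacement)."""
    mut = round(mut_frac * COPIES)
    wt = COPIES - mut
    d1_mut = binom(mut, 0.5)   # mutant copies going to daughter 1
    d1_wt = binom(wt, 0.5)     # wild-type copies going to daughter 1
    fractions = []
    for m, w in ((d1_mut, d1_wt), (mut - d1_mut, wt - d1_wt)):
        frac = m / (m + w) if (m + w) else 0.0
        fractions.append(binom(COPIES, frac) / COPIES)
    return fractions

random.seed(1)
cells = [0.30]                 # egg with 30% mutant mtDNA
for _ in range(8):             # eight rounds of division
    cells = [f for c in cells for f in divide(c)]

print(round(statistics.mean(cells), 2))    # mean load stays near 0.30
print(round(statistics.pstdev(cells), 3))  # spread between cells has grown
```

The mean mutant load is roughly conserved while the cell-to-cell spread widens, which is exactly what gives cell-level selection something to act on.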
=== Male inheritance ===
Male mitochondrial DNA inheritance has been discovered in Plymouth Rock chickens. Evidence supports rare instances of male mitochondrial inheritance in some mammals as well. Specifically, documented occurrences exist for mice, where the male-inherited mitochondria were subsequently rejected. It has also been found in sheep, and in cloned cattle. Rare cases of male mitochondrial inheritance have been documented in humans. Although many of these cases involve cloned embryos or subsequent rejection of the paternal mitochondria, others document in vivo inheritance and persistence under lab conditions.
Doubly uniparental inheritance of mtDNA is observed in bivalve mollusks. In those species, females have only one type of mtDNA (F), whereas males have F-type mtDNA in their somatic cells, but M-type mtDNA (which can be as much as 30% divergent) in germline cells. Paternally inherited mitochondria have additionally been reported in some insects such as fruit flies, honeybees, and periodical cicadas.
=== Mitochondrial donation ===
An IVF technique known as mitochondrial donation or mitochondrial replacement therapy (MRT) results in offspring containing mtDNA from a donor female, and nuclear DNA from the mother and father. In the spindle transfer procedure, the nucleus of an egg is inserted into the cytoplasm of an egg from a donor female which has had its nucleus removed but still contains the donor female's mtDNA. The composite egg is then fertilized with the male's sperm. The procedure is used when a woman with genetically defective mitochondria wishes to procreate and produce offspring with healthy mitochondria. The first known child to be born as a result of mitochondrial donation was a boy born to a Jordanian couple in Mexico on 6 April 2016.
== Mutations and disease ==
=== Susceptibility ===
The concept that mtDNA is particularly susceptible to reactive oxygen species generated by the respiratory chain due to its proximity remains controversial. mtDNA does not accumulate any more oxidative base damage than nuclear DNA. It has been reported that at least some types of oxidative DNA damage are repaired more efficiently in mitochondria than they are in the nucleus. mtDNA is packaged with proteins which appear to be as protective as proteins of the nuclear chromatin. Moreover, mitochondria evolved a unique mechanism which maintains mtDNA integrity through degradation of excessively damaged genomes followed by replication of intact/repaired mtDNA. This mechanism is not present in the nucleus and is enabled by multiple copies of mtDNA present in mitochondria. The outcome of mutation in mtDNA may be an alteration in the coding instructions for some proteins, which may have an effect on organism metabolism and/or fitness.
=== Genetic illness ===
Mutations of mitochondrial DNA can lead to a number of illnesses including exercise intolerance and Kearns–Sayre syndrome (KSS), which causes a person to lose full function of heart, eye, and muscle movements. Some evidence suggests that they might be major contributors to the aging process and age-associated pathologies. Particularly in the context of disease, the proportion of mutant mtDNA molecules in a cell is termed heteroplasmy. The within-cell and between-cell distributions of heteroplasmy dictate the onset and severity of disease and are influenced by complicated stochastic processes within the cell and during development.
Mutations in mitochondrial tRNAs can be responsible for severe diseases like the MELAS and MERRF syndromes.
Mutations in nuclear genes that encode proteins that mitochondria use can also contribute to mitochondrial diseases. These diseases do not follow mitochondrial inheritance patterns but instead follow Mendelian inheritance patterns.
=== Use in disease diagnosis ===
Recently, mutations in mtDNA have been used to help diagnose prostate cancer in patients with a negative prostate biopsy.
mtDNA alterations can be detected in the bio-fluids of patients with cancer. mtDNA is characterized by a high rate of polymorphisms and mutations. Some of these are increasingly recognized as important causes of human pathologies such as oxidative phosphorylation (OXPHOS) disorders, maternally inherited diabetes and deafness (MIDD), type 2 diabetes mellitus, neurodegenerative disease, heart failure, and cancer.
=== Relationship with ageing ===
Though the idea is controversial, some evidence suggests a link between aging and mitochondrial genome dysfunction. In essence, mutations in mtDNA upset a careful balance of reactive oxygen species (ROS) production and enzymatic ROS scavenging (by enzymes like superoxide dismutase, catalase, glutathione peroxidase and others). However, some mutations that increase ROS production (e.g., by reducing antioxidant defenses) in worms increase, rather than decrease, their longevity. Also, naked mole rats, rodents about the size of mice, live about eight times longer than mice despite having reduced antioxidant defenses and increased oxidative damage to biomolecules compared with mice. Once, there was thought to be a positive feedback loop at work (a 'vicious cycle'): as mitochondrial DNA accumulates genetic damage caused by free radicals, the mitochondria lose function and leak free radicals into the cytosol, and the resulting decrease in mitochondrial function reduces overall metabolic efficiency. However, this concept was conclusively disproved when it was demonstrated that mice genetically altered to accumulate mtDNA mutations at an accelerated rate do age prematurely, but their tissues do not produce more ROS, as the 'vicious cycle' hypothesis predicts. Supporting a link between longevity and mitochondrial DNA, some studies have found correlations between biochemical properties of the mitochondrial DNA and the longevity of species. The application of a mitochondria-targeted ROS scavenger, which led to significantly extended longevity in the mice studied, suggests that mitochondria may nevertheless be implicated in ageing. Extensive research is being conducted to further investigate this link and methods to combat ageing; presently, gene therapy and nutraceutical supplementation are popular areas of ongoing research. Bjelakovic et al.
analyzed the results of 78 studies between 1977 and 2012, involving a total of 296,707 participants, and concluded that antioxidant supplements do not reduce all-cause mortality nor extend lifespan, while some of them, such as beta carotene, vitamin E, and higher doses of vitamin A, may actually increase mortality.
In a recent study, it was shown that dietary restriction can reverse ageing alterations by affecting the accumulation of mtDNA damage in several organs of rats. For example, dietary restriction prevented age-related accumulation of mtDNA damage in the cortex and decreased it in the lung and testis.
=== Neurodegenerative diseases ===
Increased mtDNA damage is a feature of several neurodegenerative diseases.
The brains of individuals with Alzheimer's disease have elevated levels of oxidative DNA damage in both nuclear DNA and mtDNA, but the mtDNA has approximately 10-fold higher levels than nuclear DNA. It has been proposed that mitochondrial ageing is a critical factor in the origin of neurodegeneration in Alzheimer's disease. Analysis of the brains of AD patients has suggested impaired function of the DNA repair pathway, which would reduce the overall quality of mtDNA.
In Huntington's disease, mutant huntingtin protein causes mitochondrial dysfunction involving inhibition of mitochondrial electron transport, higher levels of reactive oxygen species and increased oxidative stress. Mutant huntingtin protein promotes oxidative damage to mtDNA, as well as nuclear DNA, that may contribute to Huntington's disease pathology.
The DNA oxidation product 8-oxoguanine (8-oxoG) is a well-established marker of oxidative DNA damage. In persons with amyotrophic lateral sclerosis (ALS), the enzymes that normally repair 8-oxoG DNA damages in the mtDNA of spinal motor neurons are impaired. Thus oxidative damage to mtDNA of motor neurons may be a significant factor in the etiology of ALS.
=== Correlation of the mtDNA base composition with animal life spans ===
Over the past decade, an Israeli research group led by Professor Vadim Fraifeld has shown that strong and significant correlations exist between the mtDNA base composition and animal species-specific maximum life spans. As demonstrated in their work, higher mtDNA guanine + cytosine content (GC%) strongly associates with longer maximum life spans across animal species. An additional observation is that the mtDNA GC% correlation with the maximum life spans is independent of the well-known correlation between animal species' metabolic rate and maximum life spans. The mtDNA GC% and resting metabolic rate explain the differences in animal species' maximum life spans in a multiplicative manner (i.e., species maximum life span = their mtDNA GC% * metabolic rate). To support the scientific community in carrying out comparative analyses between mtDNA features and longevity across animals, a dedicated database was built named MitoAge.
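The multiplicative relationship described above can be sketched in a few lines of Python. The species names, GC% values, metabolic-rate terms and the helper `lifespan_score` below are all invented for illustration (real values and fitted relationships live in the MitoAge literature); this is a toy model of the stated product, not a fitted one.

```python
def lifespan_score(gc_percent: float, metabolic_rate: float, k: float = 1.0) -> float:
    """Unitless score under the multiplicative model described in the text:
    maximum life span ~ k * (mtDNA GC%) * (metabolic-rate term)."""
    return k * gc_percent * metabolic_rate

# Hypothetical species values -- for illustration only.
species = {
    "species_a": (44.0, 0.8),  # (mtDNA GC%, metabolic-rate term)
    "species_b": (38.0, 0.8),
}

scores = {name: lifespan_score(gc, mr) for name, (gc, mr) in species.items()}
# With equal metabolic-rate terms, the higher-GC% species scores higher,
# mirroring the reported GC%-longevity correlation.
assert scores["species_a"] > scores["species_b"]
```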
=== mtDNA mutational spectrum is sensitive to species-specific life-history traits ===
De novo mutations arise either from mistakes during DNA replication or from unrepaired damage caused by endogenous and exogenous mutagens. It has long been believed that mtDNA may be particularly sensitive to damage caused by reactive oxygen species (ROS); however, G>T substitutions, the hallmark of oxidative damage in the nuclear genome, are very rare in mtDNA and do not increase with age. Comparing the mtDNA mutational spectra of hundreds of mammalian species, it has recently been demonstrated that species with extended lifespans have an increased rate of A>G substitutions on the single-stranded heavy chain. This discovery led to the hypothesis that A>G is a mitochondria-specific marker of age-associated oxidative damage. It also provides a mutational (as opposed to a selective) explanation for the observation that long-lived species have GC-rich mtDNA: long-lived species become GC-rich simply because of their biased process of mutagenesis. The association between the mtDNA mutational spectrum and species-specific life-history traits in mammals opens the possibility of linking these factors together and discovering new life-history-specific mutagens in different groups of organisms.
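Mutational-spectrum comparisons of this kind rest on tallying substitution types between aligned sequences. A minimal sketch, with hypothetical sequences and simple pairwise counting rather than a real ancestral-state reconstruction:

```python
from collections import Counter

def substitution_spectrum(ancestral: str, derived: str) -> Counter:
    """Tally single-nucleotide substitutions (e.g. 'A>G') between two
    aligned, equal-length sequences; non-ACGT symbols (gaps etc.) are skipped."""
    if len(ancestral) != len(derived):
        raise ValueError("sequences must be aligned to equal length")
    spectrum = Counter()
    for a, d in zip(ancestral, derived):
        if a != d and a in "ACGT" and d in "ACGT":
            spectrum[f"{a}>{d}"] += 1
    return spectrum

# Hypothetical aligned fragment pair:
anc = "ATGAAACGT"
der = "ATGGAACAT"
print(substitution_spectrum(anc, der))  # Counter({'A>G': 1, 'G>A': 1})
```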
=== Relationship with non-B (non-canonical) DNA structures ===
Deletion breakpoints frequently occur within or near regions showing non-canonical (non-B) conformations, namely hairpins, cruciforms, and cloverleaf-like elements. Moreover, data supports the involvement of helix-distorting intrinsically curved regions and long G-tetrads in eliciting instability events. In addition, higher breakpoint densities were consistently observed within GC-skewed regions and in the close vicinity of the degenerate sequence motif YMMYMNNMMHM.
== Use in forensics ==
Unlike nuclear DNA, which is inherited from both parents and in which genes are rearranged in the process of recombination, there is usually no change in mtDNA from parent to offspring. Although mtDNA also recombines, it does so with copies of itself within the same mitochondrion. Because of this and because the mutation rate of animal mtDNA is higher than that of nuclear DNA, mtDNA is a powerful tool for tracking ancestry through females (matrilineage) and has been used in this role to track the ancestry of many species back hundreds of generations.
mtDNA testing can be used by forensic scientists in cases where nuclear DNA is severely degraded. A cell has only two copies of its nuclear DNA but can have hundreds of copies of mtDNA due to the multiple mitochondria present in each cell. This means highly degraded evidence that would not be beneficial for STR analysis could be used in mtDNA analysis. mtDNA may be present in bones, teeth, or hair, which could be the only remains left in the case of severe degradation. In contrast to STR analysis, mtDNA sequencing uses Sanger sequencing. The known sequence and questioned sequence are both compared to the revised Cambridge Reference Sequence (rCRS) to generate their respective haplotypes. If the known sample sequence and questioned sequence originated from the same matriline, one would expect to see identical sequences and identical differences from the rCRS. Cases arise where there are no known samples to collect and the unknown sequence can be searched in a database such as EMPOP. The Scientific Working Group on DNA Analysis Methods recommends three conclusions for describing the differences between a known mtDNA sequence and a questioned mtDNA sequence: exclusion for two or more differences between the sequences, inconclusive if there is one nucleotide difference, or inability to exclude if there are no nucleotide differences between the two sequences.
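The three SWGDAM reporting categories lend themselves to a direct sketch. The haplotype notation and positions below are illustrative, and the helper deliberately ignores the factors real casework must weigh (heteroplasmy, point versus length differences):

```python
def mtdna_comparison(known_diffs: set, questioned_diffs: set) -> str:
    """Apply the three SWGDAM reporting categories described above.

    Each argument is the set of differences from the rCRS (e.g. {"263G"})
    for one haplotype. Simplification: each differing annotation is counted
    independently; real interpretation guidelines are more nuanced.
    """
    n = len(known_diffs ^ questioned_diffs)  # annotations not shared
    if n >= 2:
        return "exclusion"
    if n == 1:
        return "inconclusive"
    return "cannot exclude"

print(mtdna_comparison({"263G", "16189C"}, {"263G", "16189C"}))  # cannot exclude
print(mtdna_comparison({"263G"}, {"263G", "16189C"}))            # inconclusive
print(mtdna_comparison({"263G"}, {"73G", "16189C"}))             # exclusion
```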
The rapid mutation rate (in animals) makes mtDNA useful for assessing the genetic relationships of individuals or groups within a species and also for identifying and quantifying the phylogeny (evolutionary relationships; see phylogenetics) among different species. To do this, biologists determine and then compare the mtDNA sequences from different individuals or species. Data from the comparisons is used to construct a network of relationships among the sequences, which provides an estimate of the relationships among the individuals or species from which the mtDNAs were taken. mtDNA can be used to estimate the relationship between both closely related and distantly related species. Due to the high mutation rate of mtDNA in animals, the 3rd positions of the codons change relatively rapidly and thus provide information about the genetic distances among closely related individuals or species. On the other hand, the substitution rate of mt-proteins is very low, thus amino acid changes accumulate slowly (with corresponding slow changes at 1st and 2nd codon positions) and thus they provide information about the genetic distances of distantly related species. Statistical models that treat substitution rates among codon positions separately can thus be used to simultaneously estimate phylogenies that contain both closely and distantly related species.
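The codon-position argument can be illustrated with a simple per-position p-distance. The sequences are hypothetical, and real analyses use model-based distances rather than raw proportions; this sketch only shows how 3rd-position differences dominate between close relatives:

```python
def p_distance_by_codon_position(seq1: str, seq2: str):
    """Proportion of differing sites at codon positions 1, 2 and 3 for two
    aligned, in-frame coding sequences (equal lengths, multiple of 3)."""
    assert len(seq1) == len(seq2) and len(seq1) % 3 == 0
    diffs, codons = [0, 0, 0], len(seq1) // 3
    for i, (a, b) in enumerate(zip(seq1, seq2)):
        if a != b:
            diffs[i % 3] += 1
    return [d / codons for d in diffs]

# Hypothetical aligned coding fragments: changes concentrated at 3rd positions.
s1 = "ATGGCTAAAGGT"
s2 = "ATAGCGAAAGGC"
print(p_distance_by_codon_position(s1, s2))  # [0.0, 0.0, 0.75]
```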
Mitochondrial DNA was admitted into evidence for the first time ever in a United States courtroom in 1996 during State of Tennessee v. Paul Ware.
In the 1998 United States court case Commonwealth of Pennsylvania v. Patricia Lynne Rorrer, mitochondrial DNA was admitted into evidence in the state of Pennsylvania for the first time. The case was featured in episode 55 of season 5 of the true crime series Forensic Files.
Mitochondrial DNA was first admitted into evidence in California, United States, in the successful prosecution of David Westerfield for the 2002 kidnapping and murder of 7-year-old Danielle van Dam in San Diego: it was used for both human and dog identification. This was the first trial in the U.S. to admit canine DNA.
The remains of King Richard III, who died in 1485, were identified by comparing his mtDNA with that of two matrilineal descendants of his sister who were alive in 2013, 527 years after he died.
== Use in evolutionary biology and systematic biology ==
mtDNA is conserved across eukaryotic organisms given the critical role of mitochondria in cellular respiration. However, due to less efficient DNA repair (compared to nuclear DNA), it has a relatively high mutation rate (but slow compared to other DNA regions such as microsatellites) which makes it useful for studying the evolutionary relationships—phylogeny—of organisms. Biologists can determine and then compare mtDNA sequences among different species and use the comparisons to build an evolutionary tree for the species examined.
For instance, while most nuclear genes are nearly identical between humans and chimpanzees, their mitochondrial genomes are 9.8% different. Human and gorilla mitochondrial genomes are 11.8% different, suggesting that humans may be more closely related to chimpanzees than to gorillas.
== mtDNA in nuclear DNA ==
Whole genome sequences of more than 66,000 people revealed that most of them had some mitochondrial DNA inserted into their nuclear genomes. More than 90% of these nuclear-mitochondrial segments (NUMTs) were inserted after humans diverged from the other apes. Results indicate such transfers currently occur as frequently as once in every ≈4,000 human births.
It appears that organellar DNA is transferred to nuclear DNA much more often than previously thought. This observation also supports the endosymbiont theory, according to which eukaryotes evolved from endosymbionts that turned into organelles while transferring most of their DNA to the nucleus, so that the organellar genome shrank in the process.
== History ==
Mitochondrial DNA was discovered in the 1960s by Margit M. K. Nass and Sylvan Nass by electron microscopy as DNase-sensitive threads inside mitochondria, and by Ellen Haslbrunner, Hans Tuppy and Gottfried Schatz by biochemical assays on highly purified mitochondrial fractions.
== Mitochondrial sequence databases ==
Several specialized databases have been founded to collect mitochondrial genome sequences and other information. Although most of them focus on sequence data, some of them include phylogenetic or functional information.
AmtDB: a database of ancient human mitochondrial genomes.
InterMitoBase: an annotated database and analysis platform of protein-protein interactions for human mitochondria. (apparently last updated in 2010, but still available)
MitoBreak: the mitochondrial DNA breakpoints database.
MitoFish and MitoAnnotator: a mitochondrial genome database of fish. See also Cawthorn et al.
Mitome: a database for comparative mitochondrial genomics in metazoan animals (no longer available)
MitoRes: a resource of nuclear-encoded mitochondrial genes and their products in metazoa (apparently no longer being updated)
MitoSatPlant: Mitochondrial microsatellites database of viridiplantae.
MitoZoa 2.0: a database for comparative and evolutionary analyses of mitochondrial genomes in Metazoa. (no longer available)
== MtDNA-phenotype association databases ==
Genome-wide association studies can reveal associations of mtDNA genes and their mutations with phenotypes including lifespan and disease risks. In 2021, the largest genome-wide association study of mitochondrial DNA to date, based on the UK Biobank, unveiled 260 new associations with phenotypes including lifespan and the risk of diseases such as type 2 diabetes.
=== Mitochondrial mutation databases ===
Several specialized databases exist that report polymorphisms and mutations in the human mitochondrial DNA, together with the assessment of their pathogenicity.
MitImpact: a collection of pre-computed pathogenicity predictions for all nucleotide changes that cause non-synonymous substitutions in human mitochondrial protein-coding genes.
MITOMAP: a compendium of polymorphisms and mutations in human mitochondrial DNA.
== See also ==
== References ==
== External links ==
Media related to Mitochondrial DNA at Wikimedia Commons
DNA ends refer to the properties of the ends of linear DNA molecules, which in molecular biology are described as "sticky" or "blunt" based on the shape of the complementary strands at the terminus. In sticky ends, one strand is longer than the other (typically by at least a few nucleotides), such that the longer strand has bases which are left unpaired. In blunt ends, both strands are of equal length – i.e. they end at the same base position, leaving no unpaired bases on either strand.
The concept is used in molecular biology, in cloning, or when subcloning insert DNA into vector DNA. Such ends may be generated by restriction enzymes that break the molecule's phosphodiester backbone at specific locations; these enzymes belong to the larger class of nucleases, specifically the endonucleases. A restriction enzyme that cuts the backbones of both strands at non-adjacent locations leaves a staggered cut, generating two overlapping sticky ends, while an enzyme that makes a straight cut (at locations directly across from each other on both strands) generates two blunt ends.
== Single-stranded DNA molecules ==
A single-stranded non-circular DNA molecule has two non-identical ends, the 3' end and the 5' end (usually pronounced "three prime end" and "five prime end"). The numbers refer to the numbering of carbon atoms in the deoxyribose, which is a sugar forming an important part of the backbone of the DNA molecule. In the backbone of DNA the 5' carbon of one deoxyribose is linked to the 3' carbon of another by a phosphodiester bond linkage.
== Variations in double-stranded molecules ==
When a molecule of DNA is double stranded, as DNA usually is, the two strands run in opposite directions. Therefore, one end of the molecule will have the 3' end of strand 1 and the 5' end of strand 2, and vice versa at the other end. However, the fact that the molecule is double-stranded allows numerous different variations.
=== Blunt ends ===
The simplest DNA end of a double stranded molecule is called a blunt end. Blunt ends are also known as non-cohesive ends. In a blunt-ended molecule, both strands terminate in a base pair. Blunt ends are not always desired in biotechnology, since the yield of a DNA ligase reaction joining two molecules into one is significantly lower with blunt ends. When performing subcloning, blunt ends also have the disadvantage of potentially inserting the insert DNA in an orientation opposite to the one desired. On the other hand, blunt ends are always compatible with each other. Here is an example of a small piece of blunt-ended DNA:
5'-GATCTGACTGATGCGTATGCTAGT-3'
3'-CTAGACTGACTACGCATACGATCA-5'
=== Overhangs and sticky ends ===
Non-blunt ends are created by various overhangs. An overhang is a stretch of unpaired nucleotides in the end of a DNA molecule. These unpaired nucleotides can be in either strand, creating either 3' or 5' overhangs. These overhangs are in most cases palindromic.
The simplest case of an overhang is a single nucleotide. This is most often adenine and is created as a 3' overhang by some DNA polymerases. Most commonly this is used in cloning PCR products created by such an enzyme. The product is joined with a linear DNA molecule with a 3' thymine overhang. Since adenine and thymine form a base pair, this facilitates the joining of the two molecules by a ligase, yielding a circular molecule. Here is an example of an A-overhang:
5'-ATCTGACTA-3'
3'-TAGACTGA -5'
Longer overhangs are called cohesive ends or sticky ends. They are most often created by restriction endonucleases when they cut DNA. Very often they cut the two DNA strands four base pairs from each other, creating a four-base overhang in one molecule and a complementary overhang in the other (5' or 3', depending on the enzyme). These ends are called cohesive since they are easily joined back together by a ligase.
For example, these two "sticky" ends (five-base 5' overhangs) are compatible:
5'-ATCTGACT GATGCGTATGCT-3'
3'-TAGACTGACTACG CATACGA-5'
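A staggered cut like the one above can be sketched programmatically. The helper `staggered_cut` is hypothetical, not a real library function; the cut offsets in the example encode the EcoRI pattern (cleavage after the G of GAATTC on each strand, leaving complementary four-base 5' AATT overhangs):

```python
def staggered_cut(top: str, cut_top: int, cut_bottom: int):
    """Cut a duplex at different positions on each strand, returning the two
    fragments as (top, bottom) string pairs. `top` is the 5'->3' strand; the
    bottom strand is derived as its complement, written 3'->5' so the two
    strings stay column-aligned. Positions count from the top strand's 5' end."""
    comp = str.maketrans("ACGT", "TGCA")
    bottom = top.translate(comp)          # aligned complement, read 3'->5'
    left = (top[:cut_top], bottom[:cut_bottom])
    right = (top[cut_top:], bottom[cut_bottom:])
    return left, right

# EcoRI-style cut on a short hypothetical duplex containing GAATTC.
duplex_top = "CCGAATTCGG"
site = duplex_top.find("GAATTC")
left, right = staggered_cut(duplex_top, site + 1, site + 5)
print(left)   # ('CCG', 'GGCTTAA')  -- TTAA unpaired on the bottom strand
print(right)  # ('AATTCGG', 'GCC')  -- AATT unpaired on the top strand
```

Both fragments end in the same four-base 5' overhang (AATT), which is why any two EcoRI-cut ends can be ligated back together.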
Also, since different restriction endonucleases usually create different overhangs, it is possible to create a plasmid by excising a piece of DNA (using a different enzyme for each end) and then joining it to another DNA molecule with ends trimmed by the same enzymes. Since the overhangs have to be complementary in order for the ligase to work, the two molecules can only join in one orientation. This is often highly desirable in molecular biology.
Sticky ends can be converted to blunt ends by a process known as blunting, which involves filling in the sticky end with complementary nucleotides. Since sticky ends are often preferable for cloning, the main use of this method is to label DNA, by using radiolabeled nucleotides to fill the gap. Blunt ends can in turn be converted to sticky ends in two ways: by adding double-stranded linker sequences containing recognition sequences for restriction endonucleases that create sticky ends and then applying the restriction enzyme, or by homopolymer tailing, in which the molecule's 3' ends are extended with a run of a single nucleotide, allowing specific pairing with a complementary homopolymer tail (e.g. poly-C with poly-G).
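The fill-in reaction can be sketched for the simple case of a fragment whose strands are left-aligned, with the overhang at the right-hand end. `fill_in` is a hypothetical helper, not a real library API; a polymerase performing the fill-in is what it imitates:

```python
def fill_in(top: str, bottom: str) -> tuple:
    """Blunt a fragment by extending the recessed 3' end with nucleotides
    complementary to the overhang (the 'fill-in' reaction). Strands are
    given left-aligned, top 5'->3' and bottom 3'->5', with any unpaired
    bases at the right-hand end; the shorter strand is extended."""
    comp = str.maketrans("ACGT", "TGCA")
    width = max(len(top), len(bottom))
    if len(top) < width:        # top strand is recessed
        top = top + bottom[len(top):].translate(comp)
    elif len(bottom) < width:   # bottom strand is recessed
        bottom = bottom + top[len(bottom):].translate(comp)
    return top, bottom

# A hypothetical fragment whose bottom strand carries a four-base overhang
# (TTAA in this aligned 3'->5' representation):
print(fill_in("CCG", "GGCTTAA"))  # ('CCGAATT', 'GGCTTAA') -- now blunt
```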
=== Frayed ends ===
Across from each single strand of DNA, we typically see adenine pair with thymine and cytosine pair with guanine to form an antiparallel complementary strand, as shown below. Two nucleotide sequences which correspond to each other in this manner are referred to as complementary:
5'-ATCTGACT-3'
3'-TAGACTGA-5'
A frayed end refers to a region of a double stranded (or other multi-stranded) DNA molecule near the end with a significant proportion of non-complementary sequences; that is, a sequence where nucleotides on the adjacent strands do not match up correctly:
5'-ATCTGACTAGGCA-3'
3'-TAGACTGACTACG-5'
The term "frayed" is used because the incorrectly matched nucleotides tend to avoid bonding, thus appearing similar to the strands in a fraying piece of rope.
Although non-complementary sequences are also possible in the middle of double stranded DNA, mismatched regions away from the ends are not referred to as "frayed".
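Checking complementarity, and hence detecting a frayed end, is mechanical. A sketch using the frayed example shown above (bottom strand written 3'->5', aligned column by column with the top strand):

```python
PAIRS = ("AT", "TA", "GC", "CG")  # Watson-Crick base pairs

def complementary(top: str, bottom: str) -> bool:
    """True if every aligned column forms a Watson-Crick pair
    (top given 5'->3', bottom given 3'->5')."""
    return len(top) == len(bottom) and all(t + b in PAIRS for t, b in zip(top, bottom))

def frayed_length(top: str, bottom: str) -> int:
    """Number of consecutive mismatched columns at the right-hand end
    of an aligned duplex."""
    n = 0
    for t, b in zip(reversed(top), reversed(bottom)):
        if t + b in PAIRS:
            break
        n += 1
    return n

print(complementary("ATCTGACT", "TAGACTGA"))            # True
print(frayed_length("ATCTGACTAGGCA", "TAGACTGACTACG"))  # 5 mismatched pairs
```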
== Discovery ==
Ronald W. Davis first discovered sticky ends as the product of the action of the restriction endonuclease EcoRI.
== Strength ==
Sticky-end joints differ in their stability, which can be estimated from their free energy of formation. Free-energy approximations can be made for different sequences from data on oligonucleotide UV thermal denaturation (melting) curves. Predictions from molecular dynamics simulations also show that some sticky-end joints withstand stretching much better than others.
== References ==
Sambrook, Joseph; Russell, David (2001). Molecular Cloning: A Laboratory Manual. New York: Cold Spring Harbor Laboratory Press. ISBN 0879695765.
Molecular models of DNA structures are representations of the molecular geometry and topology of deoxyribonucleic acid (DNA) molecules using one of several means, with the aim of simplifying and presenting the essential physical and chemical properties of DNA molecular structures either in vivo or in vitro. These representations include closely packed spheres (CPK models) made of plastic, metal wires for skeletal models, graphic computations and animations by computers, and artistic renderings. Computer molecular models also allow animations and molecular dynamics simulations that are very important for understanding how DNA functions in vivo.
The more advanced, computer-based molecular models of DNA involve molecular dynamics simulations and quantum mechanics computations of vibro-rotations, delocalized molecular orbitals (MOs), electric dipole moments, hydrogen bonding, and so on. DNA molecular dynamics modeling involves simulating changes in the molecular geometry and topology of DNA over time as a result of both intra- and inter-molecular interactions. Whereas physical molecular models such as closely packed spheres (CPK models) made of plastic or metal-wire skeletal models are useful representations of static DNA structures, their usefulness is very limited for representing complex DNA dynamics.
== History ==
From the very early stages of structural studies of DNA by X-ray diffraction and biochemical means, molecular models such as the Watson–Crick nucleic acid double helix model were successfully employed to solve the 'puzzle' of DNA structure and to find how the latter relates to its key functions in living cells. The first high-quality X-ray diffraction patterns of A-DNA were reported by Rosalind Franklin and Raymond Gosling in 1953. Franklin made the critical observation that DNA exists in two distinct forms, A and B, and produced the sharpest pictures of both through X-ray diffraction. The first calculations of the Fourier transform of an atomic helix were reported one year earlier by Cochran, Crick and Vand, and were followed in 1953 by Crick's computation of the Fourier transform of a coiled coil.
Structural information is generated from X-ray diffraction studies of oriented DNA fibers with the help of molecular models of DNA that are combined with crystallographic and mathematical analysis of the X-ray patterns.
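The Cochran–Crick–Vand result mentioned above can be sketched numerically: the Fourier transform of a continuous helix of radius r is confined to layer lines, with the amplitude on layer line n proportional to the Bessel function J_n(2πRr), where R is the reciprocal-space radial coordinate. The sketch below uses plain NumPy with purely illustrative parameters (not fitted to real DNA data) and reproduces the characteristic X-shaped "helix cross": higher-order layer lines peak farther from the meridian.

```python
import numpy as np

def bessel_j(n, x, m=2001):
    """J_n(x) via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    evaluated here as a simple mean over an evenly spaced grid."""
    t = np.linspace(0.0, np.pi, m)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    integrand = np.cos(n * t[None, :] - np.outer(x, np.sin(t)))
    return integrand.mean(axis=1)

r = 1.0                               # helix radius (arbitrary units)
R = np.linspace(0.0, 3.0, 301)        # reciprocal-space radial coordinate

# Amplitude on layer line n at radius R is proportional to J_n(2*pi*R*r).
layer_lines = {n: bessel_j(n, 2 * np.pi * R * r) for n in range(4)}

# Higher-order layer lines peak farther from the meridian (R = 0), which
# produces the X-shaped "helix cross" seen in fiber diffraction patterns.
peak_positions = {n: R[int(np.argmax(np.abs(a)))] for n, a in layer_lines.items()}
assert peak_positions[0] == 0.0
assert peak_positions[1] < peak_positions[2] < peak_positions[3]
```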
The first reports of a double helix molecular model of B-DNA structure were made by James Watson and Francis Crick in 1953. That same year, Maurice F. Wilkins, A. Stokes and H. R. Wilson reported the first X-ray patterns of in vivo B-DNA in partially oriented salmon sperm heads.
The development of the first correct double helix molecular model of DNA by Crick and Watson may not have been possible without the biochemical evidence for the nucleotide base-pairing ([A---T]; [C---G]), or Chargaff's rules. Although such initial studies of DNA structures with the help of molecular models were essentially static, their consequences for explaining the in vivo functions of DNA were significant in the areas of protein biosynthesis and the quasi-universality of the genetic code. Epigenetic transformation studies of DNA in vivo were however much slower to develop despite their importance for embryology, morphogenesis and cancer research. Such chemical dynamics and biochemical reactions of DNA are much more complex than the molecular dynamics of DNA physical interactions with water, ions and proteins/enzymes in living cells.
== Importance ==
A long-standing dynamic problem is how DNA 'self-replication' takes place in living cells, which involves transient uncoiling of supercoiled DNA fibers. Although DNA consists of relatively rigid, very large elongated biopolymer molecules called fibers or chains (made of repeating nucleotide units of four basic types, attached to deoxyribose and phosphate groups), its molecular structure in vivo undergoes dynamic configuration changes that involve dynamically attached water molecules and ions. Supercoiling, packing with histones in chromosome structures, and other such supramolecular aspects also involve in vivo DNA topology, which is even more complex than DNA molecular geometry, thus turning molecular modeling of DNA into an especially challenging problem for both molecular biologists and biotechnologists. Like other large molecules and biopolymers, DNA often exists in multiple stable geometries (that is, it exhibits conformational isomerism) and configurational quantum states which are close to each other in energy on the potential energy surface of the DNA molecule.
Such varying molecular geometries can also be computed, at least in principle, by employing ab initio quantum chemistry methods that can attain high accuracy for small molecules, although claims that acceptable accuracy can also be achieved for polynucleotides and DNA conformations have recently been made on the basis of vibrational circular dichroism (VCD) spectral data. Such quantum geometries define an important class of ab initio molecular models of DNA whose exploration has barely started, especially in connection with results obtained by VCD in solutions. More detailed comparisons with such ab initio quantum computations are in principle obtainable through 2D-FT NMR spectroscopy and relaxation studies of polynucleotide solutions or of specifically labeled DNA, for example with deuterium labels.
In an interesting twist of roles, the DNA molecule was proposed to be used for quantum computing via DNA. Both DNA nanostructures and DNA computing biochips have been built.
== Fundamental concepts ==
The chemical structure of DNA alone is insufficient to understand the complexity of its 3D structures. In contrast, animated molecular models, such as wire (skeletal) models, allow one to visually explore the three-dimensional (3D) structure of DNA. Another type of DNA model is the space-filling, or CPK, model of the DNA double helix.
Hydrogen-bonding dynamics and proton exchange differ by many orders of magnitude between the two systems of fully hydrated DNA and water molecules in ice. Thus, DNA dynamics is complex, involving nanosecond and several-tens-of-picosecond time scales, whereas the dynamics of liquid water is on the picosecond time scale and that of proton exchange in ice is on the millisecond time scale. Proton exchange rates in DNA and attached proteins may vary from picoseconds to nanoseconds, minutes or years, depending on the exact locations of the exchanged protons in these large biopolymers.
A simple harmonic oscillator 'vibration' is only an oversimplified dynamic representation of the longitudinal vibrations of the DNA intertwined helices which were found to be anharmonic rather than harmonic as often assumed in quantum dynamic simulations of DNA.
=== DNA structure ===
The structure of DNA shows a variety of forms, both double-stranded and single-stranded. The mechanical properties of DNA, which are directly related to its structure, are a significant problem for cells. Every process which binds or reads DNA is able to use or modify the mechanical properties of DNA for purposes of recognition, packaging and modification. The extreme length (a chromosome may contain a 10 cm long DNA strand), relative rigidity and helical structure of DNA have led to the evolution of histones and of enzymes such as topoisomerases and helicases to manage a cell's DNA. The properties of DNA are closely related to its molecular structure and sequence, particularly the weakness of the hydrogen bonds and electronic interactions that hold strands of DNA together compared to the strength of the bonds within each strand.
Experimental methods which can directly measure the mechanical properties of DNA are relatively new, and high-resolution visualization in solution is often difficult. Nevertheless, scientists have uncovered a large amount of data on the mechanical properties of this polymer, and the implications of DNA's mechanical properties for cellular processes are a topic of active current research.
The DNA found in many cells can be macroscopic in length: a few centimetres long for each human chromosome. Consequently, cells must compact or package DNA to carry it within them. In eukaryotes this is carried out by spool-like proteins named histones, around which DNA winds. It is the further compaction of this DNA-protein complex which produces the well-known mitotic eukaryotic chromosomes.
In the late 1970s, alternate non-helical models of DNA structure were briefly considered as a potential solution to problems in DNA replication in plasmids and chromatin. However, the models were set aside in favor of the double-helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes, and later the nucleosome core particle, and the discovery of topoisomerases. Such non-double-helical models are not currently accepted by the mainstream scientific community.
== DNA structure determination using molecular modeling and DNA X-ray patterns ==
After DNA has been separated and purified by standard biochemical methods, one has a sample in a jar much like in the figure at the top of this article. Below are the main steps involved in generating structural information from X-ray diffraction studies of oriented DNA fibers, drawn from the hydrated DNA sample, with the help of molecular models of DNA combined with crystallographic and mathematical analysis of the X-ray patterns.
== Paracrystalline lattice models of B-DNA structures ==
A paracrystalline lattice, or paracrystal, is a molecular or atomic lattice with significant amounts (e.g., larger than a few percent) of partial disordering of molecular arrangements. Limiting cases of the paracrystal model are nanostructures, such as glasses, liquids, etc., that may possess only local ordering and no global order. A simple example of a paracrystalline lattice is shown in the following figure for a silica glass:
Liquid crystals also have paracrystalline rather than crystalline structures.
Highly hydrated B-DNA occurs naturally in living cells in such a paracrystalline state, which is a dynamic one despite the relatively rigid DNA double helix stabilized by parallel hydrogen bonds between the nucleotide base-pairs in the two complementary, helical DNA chains (see figures). For simplicity most DNA molecular models omit both water and ions dynamically bound to B-DNA, and are thus less useful for understanding the dynamic behaviors of B-DNA in vivo. The physical and mathematical analysis of X-ray and spectroscopic data for paracrystalline B-DNA is thus far more complex than that of crystalline, A-DNA X-ray diffraction patterns. The paracrystal model is also important for DNA technological applications such as DNA nanotechnology. Novel methods that combine X-ray diffraction of DNA with X-ray microscopy in hydrated living cells are now also being developed.
== Genomic and biotechnology applications of DNA molecular modeling ==
DNA molecular modeling has various uses in genomics and biotechnology, with research applications ranging from DNA repair to PCR and DNA nanostructures; two-dimensional DNA junction arrays, for example, have been visualized by atomic force microscopy. Computer molecular models include molecules as varied as RNA polymerase and an E. coli bacterial DNA primase template, suggesting very complex dynamics at the interfaces between the enzymes and the DNA template, as well as models of the mutagenic chemical interaction of potent carcinogen molecules with DNA. These are all represented in the gallery below.
Technological applications include a DNA biochip and DNA nanostructures designed for DNA computing and other dynamic applications of DNA nanotechnology.
The image at right is of self-assembled DNA nanostructures. The DNA "tile" structure in this image consists of four branched junctions oriented at 90° angles. Each tile consists of nine DNA oligonucleotides as shown; such tiles serve as the primary "building block" for the assembly of the DNA nanogrids shown in the AFM micrograph.
Quadruplex DNA may be involved in certain cancers. Images of quadruplex DNA are in the gallery below.
== Gallery of DNA models ==
== See also ==
== References ==
== Further reading ==
Baianu, I. C., Lozano, P. R., Prisecaru, V. I. and Lin, H. C. Applications of Novel Techniques to Health Foods, Medical and Agricultural Biotechnology. (June 2004) q-bio/0406047.
F. Bessel, Untersuchung des Theils der planetarischen Störungen, Berlin Abhandlungen (1824), article 14.
Sir Lawrence Bragg, FRS. The Crystalline State, A General Survey. London: G. Bell and Sons, Ltd., vols. 1 and 2., 1966., 2024 pages.
Cantor, C. R. and Schimmel, P.R. Biophysical Chemistry, Parts I and II., San Francisco: W.H. Freeman and Co. 1980. 1,800 pages.
Voet, D. and J.G. Voet. Biochemistry, 2nd Edn., New York, Toronto, Singapore: John Wiley & Sons, Inc., 1995, ISBN 0-471-58651-X., 1361 pages.
Watson, G. N. A Treatise on the Theory of Bessel Functions., (1995) Cambridge University Press. ISBN 0-521-48391-3.
Watson, James D. Molecular Biology of the Gene. New York and Amsterdam: W.A. Benjamin, Inc. 1965., 494 pages.
Wentworth, W.E. Physical Chemistry. A short course., Malden ( Mass.): Blackwell Science, Inc. 2000.
Herbert R. Wilson, FRS. Diffraction of X-rays by proteins, Nucleic Acids and Viruses., London: Edward Arnold (Publishers) Ltd. 1966.
Kurt Wüthrich. NMR of Proteins and Nucleic Acids., New York, Brisbane, Chichester, Toronto, Singapore: J. Wiley & Sons. 1986., 292 pages.
Hallin PF, Ussery DW (2004). "CBS Genome Atlas Database: A dynamic storage for bioinformatic results and DNA sequence data". Bioinformatics. 20 (18): 3682–6. doi:10.1093/bioinformatics/bth423. PMID 15256401.
Zhang CT, Zhang R, Ou HY (2003). "The Z curve database: a graphic representation of genome sequences". Bioinformatics. 19 (5): 593–599. doi:10.1093/bioinformatics/btg041. PMID 12651717.
== External links ==
DNA the Double Helix Game From the official Nobel Prize website
MDDNA: Structural Bioinformatics of DNA
Double Helix 1953–2003 National Centre for Biotechnology Education
DNAlive: a web interface to compute DNA physical properties. Also allows cross-linking of the results with the UCSC Genome browser and DNA dynamics.
Further details of mathematical and molecular analysis of DNA structure based on X-ray data
Bessel functions corresponding to Fourier transforms of atomic or molecular helices.
Overview of STM/AFM/SNOM principles with educational videos
=== Databases for DNA molecular models and sequences ===
X-ray diffraction
NDB ID: UD0017 Database
X-ray Atlas database
PDB files of coordinates for nucleic acid structures from X-ray diffraction by NA (incl. DNA) crystals
Structure factors downloadable files in CIF format
Neutron scattering
ISIS pulsed neutron source: a world centre for science with neutrons & muons at Harwell, near Oxford, UK.
X-ray microscopy
Electron microscopy
DNA under electron microscope
NMR databases
NMR Atlas database
mmcif downloadable coordinate files of nucleic acids in solution from 2D-FT NMR data
NMR constraints files for NAs in PDB format
Genomic and structural databases
CBS Genome Atlas Database — contains examples of base skews.
The Z curve database of genomes — a 3-dimensional visualization and analysis tool of genomes.
DNA and other nucleic acids' molecular models: Coordinate files of nucleic acids molecular structure models in PDB and CIF formats
Atomic force microscopy
How SPM Works
SPM image gallery: AFM STM SEM MFM NSOM, more
Chloroplast DNA (cpDNA), also known as plastid DNA (ptDNA), is the DNA located in chloroplasts, which are photosynthetic organelles within the cells of some eukaryotic organisms. Chloroplasts, like other types of plastid, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. The first complete chloroplast genome sequences were published in 1986: Nicotiana tabacum (tobacco) by Sugiura and colleagues, and Marchantia polymorpha (liverwort) by Ozeki et al. Since then, tens of thousands of chloroplast genomes from various species have been sequenced.
== Molecular structure ==
Chloroplast DNAs are circular, and are typically 120,000–170,000 base pairs long. They can have a contour length of around 30–60 micrometers, and have a mass of about 80–130 million daltons.
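These size, length and mass figures are mutually consistent. As a rough sketch, they can be cross-checked from the base-pair count alone, assuming two textbook approximations that are not from this article: an average mass of about 650 daltons per base pair, and a helical rise of about 0.34 nm per base pair in B-form DNA.

```python
# Rough consistency check for the chloroplast genome figures above.
# Assumed approximations (not from the article): ~650 Da per base pair,
# ~0.34 nm of contour length per base pair.
DA_PER_BP = 650      # average molecular mass of one base pair, daltons
NM_PER_BP = 0.34     # helical rise per base pair, nanometres

def genome_mass_mda(bp: int) -> float:
    """Approximate genome mass in megadaltons."""
    return bp * DA_PER_BP / 1e6

def contour_length_um(bp: int) -> float:
    """Approximate contour length in micrometres."""
    return bp * NM_PER_BP / 1e3

for bp in (120_000, 170_000):
    print(bp, "bp:", genome_mass_mda(bp), "MDa,",
          round(contour_length_um(bp), 1), "um")
```

For 120,000–170,000 bp this gives roughly 78–110 MDa and 41–58 μm, in line with the quoted 80–130 million daltons and 30–60 micrometres.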
Most chloroplasts have their entire chloroplast genome combined into a single large ring, though those of dinophyte algae are a notable exception: their genome is broken up into about forty small minicircles, each 2,000–10,000 base pairs long. Each minicircle contains one to three genes, but empty minicircles, with no coding DNA, have also been found.
Chloroplast DNA has long been thought to have a circular structure, but some evidence suggests that chloroplast DNA more commonly takes a linear shape. Over 95% of the chloroplast DNA in corn chloroplasts has been observed to be in branched linear form rather than individual circles.
=== Inverted repeats ===
Many chloroplast DNAs contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC).
The inverted repeats vary wildly in length, ranging from 4,000 to 25,000 base pairs long each. Inverted repeats in plants tend to be at the upper end of this range, each being 20,000–25,000 base pairs long.
The inverted repeat regions usually contain three ribosomal RNA and two tRNA genes, but they can be expanded or reduced to contain as few as four or as many as over 150 genes.
While a given pair of inverted repeats is rarely completely identical, the two copies are always very similar to each other, apparently resulting from concerted evolution.
The inverted repeat regions are highly conserved among land plants, and accumulate few mutations. Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceæ), suggesting that they predate the chloroplast, though some chloroplast DNAs like those of peas and a few red algae have since lost the inverted repeats. Others, like the red alga Porphyra, have flipped one of their inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast DNAs which have lost some of the inverted repeat segments tend to get rearranged more.
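The quadripartite arrangement described above (long single copy, inverted repeat, short single copy, second inverted repeat) can be illustrated with a toy sketch. All sequences below are invented and far shorter than real regions, which run tens of kilobases; the point is only that the second repeat is the reverse complement of the first.

```python
# Toy illustration of the quadripartite plastid genome layout:
# LSC - IRa - SSC - IRb, where IRb is the reverse complement of IRa.
# All sequences are invented; real LSCs and IRs are tens of kilobases.

def reverse_complement(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

lsc = "ATGCGTACGT"                 # long single copy (toy)
ira = "GGATAC"                     # inverted repeat A (toy)
ssc = "TTAACG"                     # short single copy (toy)
irb = reverse_complement(ira)      # inverted repeat B

genome = lsc + ira + ssc + irb
# The two repeats read the same on opposite strands:
assert reverse_complement(irb) == ira
print(genome, len(genome))
```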
=== Nucleoids ===
Each chloroplast contains around 100 copies of its DNA in young leaves, declining to 15–20 copies in older leaves. They are usually packed into nucleoids which can contain several identical chloroplast DNA rings. Many nucleoids can be found in each chloroplast.
Though chloroplast DNA is not associated with true histones, a histone-like chloroplast protein (HC), coded by the chloroplast DNA in red algae, has been found that tightly packs each chloroplast DNA ring into a nucleoid.
In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of a chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma.
== Gene content and plastid gene expression ==
More than 33,000 chloroplast genomes have been sequenced and are accessible via the NCBI organelle genome database. The first chloroplast genomes were sequenced in 1986, from tobacco (Nicotiana tabacum) and liverwort (Marchantia polymorpha). Comparison of the gene sequences of the cyanobacteria Synechocystis to those of the chloroplast genome of Arabidopsis provided confirmation of the endosymbiotic origin of the chloroplast. It also demonstrated the significant extent of gene transfer from the cyanobacterial ancestor to the nuclear genome.
In most plant species, the chloroplast genome encodes approximately 120 genes. The genes primarily encode core components of the photosynthetic machinery and factors involved in their expression and assembly. Across species of land plants, the set of genes encoded by the chloroplast genome is fairly conserved. This includes four ribosomal RNAs, approximately 30 tRNAs, 21 ribosomal proteins, and 4 subunits of the plastid-encoded RNA polymerase complex that are involved in plastid gene expression. The large Rubisco subunit and 28 photosynthetic thylakoid proteins are encoded within the chloroplast genome.
=== Chloroplast genome reduction and gene transfer ===
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer.
As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes whereas cyanobacteria often have more than 1500 genes in their genome. The parasitic Pilostyles have even lost their plastid genes for tRNA. Conversely, there are only a few known instances where genes have been transferred to the chloroplast from various donors, including bacteria.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many chromalveolate lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provide evidence that the diatom ancestor (probably the ancestor of all chromalveolates too) had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
=== Proteins encoded by the chloroplast ===
Of the approximately three thousand proteins found in chloroplasts, some 95% are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also send signals that regulate gene expression in the nucleus, a process called retrograde signaling.
=== Protein synthesis ===
Protein synthesis within chloroplasts relies on an RNA polymerase coded by the chloroplast's own genome, which is related to RNA polymerases found in bacteria. Chloroplasts also contain a second RNA polymerase that is encoded by the plant's nuclear genome. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
=== RNA editing in plastids ===
RNA editing is the insertion, deletion, and substitution of nucleotides in an mRNA transcript prior to translation to protein. The highly oxidative environment inside chloroplasts increases the rate of mutation, so post-transcriptional repairs are needed to conserve functional sequences. The chloroplast editosome substitutes C → U and U → C at very specific locations on the transcript. This can change the codon for an amino acid or restore a non-functional pseudogene by adding an AUG start codon or removing a premature UAA stop codon.
The editosome recognizes and binds to a cis-acting sequence upstream of the editing site. The distance between the binding site and the editing site varies by gene and by the proteins involved in the editosome. Hundreds of different PPR proteins from the nuclear genome are involved in the RNA editing process. These proteins consist of repeated 35-amino-acid motifs, the sequence of which determines the cis binding site for the edited transcript.
Basal land plants such as liverworts, mosses and ferns have hundreds of different editing sites while flowering plants typically have between thirty and forty. Parasitic plants such as Epifagus virginiana show a loss of RNA editing resulting in a loss of function for photosynthesis genes.
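The substitution step itself can be sketched as a position-specific C ↔ U swap. In the sketch below the transcript and edit position are invented for illustration; in vivo, editing sites are selected by PPR-protein binding, not by a lookup table.

```python
# Sketch of plastid C<->U RNA editing as position-specific substitution.
# The transcript and edit positions are invented for illustration.

def edit_transcript(rna: str, edits: dict) -> str:
    """Apply C->U / U->C edits at the given 0-based positions."""
    bases = list(rna)
    for pos, new in edits.items():
        old = bases[pos]
        # Plastid editing only interconverts C and U.
        assert {old, new} == {"C", "U"}
        bases[pos] = new
    return "".join(bases)

# Restoring a start codon: an ACG codon edited to AUG (C->U at position 1).
transcript = "ACGGAUUGA"
edited = edit_transcript(transcript, {1: "U"})
print(edited)
```

The edited transcript begins with AUG, illustrating how a C → U edit can restore a functional start codon.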
== DNA replication ==
=== Leading model of cpDNA replication ===
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to replicate the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination, the loss of an amino group, is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine (H). Hypoxanthine can bind to cytosine, and when the HC base pair is replicated, it becomes a GC (thus, an A → G base change).
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
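The two replication steps behind a fixed A → G change can be sketched as follows. This is a simplification of the chemistry: hypoxanthine is written as "H", strand orientation is ignored, and the example sequence is invented.

```python
# Sketch of how adenine deamination becomes a fixed A -> G change:
# A deaminates to hypoxanthine (H); H pairs with C; replicating the
# H:C pair yields G:C, i.e. an A -> G substitution on the original strand.

PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G", "H": "C"}

def deaminate(strand: str, pos: int) -> str:
    """Deaminate the adenine at `pos` (single-stranded, e.g. at a fork)."""
    assert strand[pos] == "A"
    return strand[:pos] + "H" + strand[pos + 1:]

def replicate(strand: str) -> str:
    """Synthesize the complementary strand (antiparallel orientation ignored)."""
    return "".join(PAIRING[b] for b in strand)

original = "GATC"
damaged = deaminate(original, 1)        # A at position 1 becomes H
daughter = replicate(damaged)           # H templates a C
granddaughter = replicate(daughter)     # the C templates a G
print(original, "->", granddaughter)
```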
=== Alternative model of replication ===
One of the main competing models for cpDNA asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to bacteriophage T4. It has been established that some plants have linear cpDNA, such as maize, and that more still contain complex structures that scientists do not yet understand; however, the predominant view today is that most cpDNA is circular. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. This shortcoming is one of the biggest for the linear structure theory.
== Protein targeting and import ==
The movement of so many chloroplast genes to the nucleus means that many chloroplast proteins that were formerly translated in the chloroplast are now synthesized in the cytoplasm. These proteins must therefore be directed back to the chloroplast and imported through at least two chloroplast membranes.
Around half of the protein products of transferred genes are not even targeted back to the chloroplast. Many became exaptations, taking on new functions such as participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome; most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway. Many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane, and are therefore topologically outside of the cell: to reach such a chloroplast from the cytosol, a protein must cross the cell membrane, just as if it were headed for the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway.
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
=== Cytoplasmic translation and N-terminal transit sequences ===
Polypeptides, the precursors of proteins, are chains of amino acids. The two ends of a polypeptide are called the N-terminus, or amino end, and the C-terminus, or carboxyl end. For many (but not all) chloroplast proteins encoded by nuclear genes, cleavable transit peptides are added to the N-termini of the polypeptides, which are used to help direct the polypeptide to the chloroplast for import (N-terminal transit peptides are also used to direct polypeptides to plant mitochondria).
N-terminal transit sequences are also called presequences because they are located at the "front" end of a polypeptide—ribosomes synthesize polypeptides from the N-terminus to the C-terminus.
Chloroplast transit peptides exhibit huge variation in length and amino acid sequence. They can be from 20 to 150 amino acids long—an unusually long length, suggesting that transit peptides are actually collections of domains with different functions. Transit peptides tend to be positively charged, rich in hydroxylated amino acids such as serine, threonine, and proline, and poor in acidic amino acids like aspartic acid and glutamic acid. In an aqueous solution, the transit sequence forms a random coil.
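These compositional tendencies can be expressed as a simple screen. In the sketch below the example sequence is invented for illustration; this is a toy tabulation of the properties described above, not a validated targeting predictor.

```python
# Sketch: tabulate a candidate N-terminal sequence against the
# compositional tendencies of chloroplast transit peptides described
# above. The example sequence is invented for illustration.

HYDROXYLATED = set("ST")   # serine, threonine
POSITIVE = set("KR")       # lysine, arginine
ACIDIC = set("DE")         # aspartate, glutamate

def composition(seq: str) -> dict:
    n = len(seq)
    return {
        "hydroxylated_frac": sum(aa in HYDROXYLATED for aa in seq) / n,
        "proline_frac": seq.count("P") / n,
        "net_charge": sum(aa in POSITIVE for aa in seq)
                      - sum(aa in ACIDIC for aa in seq),
    }

candidate = "MASSMLSSTAVVTSRASRGQSAAV"   # invented example
stats = composition(candidate)
print(stats)
# A transit-peptide-like sequence: hydroxylated-residue-rich,
# net positive, and poor in acidic residues.
```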
Not all chloroplast proteins include an N-terminal cleavable transit peptide, however. Some include the transit sequence within the functional part of the protein itself. A few have their transit sequence appended to their C-terminus instead. Most of the polypeptides that lack N-terminal targeting sequences are those sent to the outer chloroplast membrane, plus at least one sent to the inner chloroplast membrane.
=== Phosphorylation, chaperones, and transport ===
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, ATP energy can be used to phosphorylate, or add a phosphate group to many (but not all) of them in their transit sequences. Serine and threonine (both very common in chloroplast transit sequences—making up 20–30% of the sequence) are often the amino acids that accept the phosphate group. The enzyme that carries out the phosphorylation is specific for chloroplast polypeptides, and ignores ones meant for mitochondria or peroxisomes.
Phosphorylation changes the polypeptide's shape, making it easier for 14-3-3 proteins to attach to the polypeptide. In plants, 14-3-3 proteins bind only to chloroplast preproteins. The preprotein is also bound by the heat shock protein Hsp70, which keeps the polypeptide from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place, the cytosol. At the same time, they must retain just enough shape to be recognized and imported into the chloroplast.
The heat shock protein and the 14-3-3 proteins together form a cytosolic guidance complex that makes it easier for the chloroplast polypeptide to get imported into the chloroplast.
Alternatively, if a chloroplast preprotein's transit peptide is not phosphorylated, a chloroplast preprotein can still attach to a heat shock protein or Toc159. These complexes can bind to the TOC complex on the outer chloroplast membrane using GTP energy.
=== The translocon on the outer chloroplast membrane (TOC) ===
The TOC complex, or translocon on the outer chloroplast membrane, is a collection of proteins that imports preproteins across the outer chloroplast envelope. Five subunits of the TOC complex have been identified—two GTP-binding proteins Toc34 and Toc159, the protein import tunnel Toc75, plus the proteins Toc64 and Toc12.
The first three proteins form a core complex that consists of one Toc159, four to five Toc34s, and four Toc75s that form four holes in a disk 13 nanometers across. The whole core complex weighs about 500 kilodaltons. The other two proteins, Toc64 and Toc12, are associated with the core complex but are not part of it.
==== Toc34 and 33 ====
Toc34 is an integral protein in the outer chloroplast membrane that is anchored into it by its hydrophobic C-terminal tail. Most of the protein, however, including its large guanosine triphosphate (GTP)-binding domain, projects out into the cytosol.
Toc34's job is to catch some chloroplast preproteins in the cytosol and hand them off to the rest of the TOC complex. When GTP, an energy molecule similar to ATP, attaches to Toc34, the protein becomes much more able to bind to many chloroplast preproteins in the cytosol. The chloroplast preprotein's presence causes Toc34 to break GTP into guanosine diphosphate (GDP) and inorganic phosphate. This loss of GTP makes the Toc34 protein release the chloroplast preprotein, handing it off to the next TOC protein. Toc34 then releases the depleted GDP molecule, probably with the help of an unknown GDP exchange factor. A domain of Toc159 might be the exchange factor that carries out the GDP removal. The Toc34 protein can then take up another molecule of GTP and begin the cycle again.
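The GTP-driven hand-off described above can be summarized as a simple state machine. The following sketch is a schematic of the prose, not a biochemical model; the state and event names are invented labels.

```python
# Schematic state machine for the Toc34 GTP cycle described above.
# States and transitions mirror the prose; this is not a kinetic model.

TRANSITIONS = {
    ("empty", "bind_GTP"): "GTP-loaded",
    ("GTP-loaded", "bind_preprotein"): "preprotein-bound",
    ("preprotein-bound", "hydrolyse_GTP"): "GDP-bound",  # preprotein handed off
    ("GDP-bound", "release_GDP"): "empty",               # aided by an exchange factor
}

def run_cycle(state: str, events: list) -> str:
    """Drive the state machine through a sequence of events."""
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

final = run_cycle(
    "empty",
    ["bind_GTP", "bind_preprotein", "hydrolyse_GTP", "release_GDP"],
)
print(final)   # back to the starting state, ready for another round
```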
Toc34 can be turned off through phosphorylation. A protein kinase drifting around on the outer chloroplast membrane can use ATP to add a phosphate group to the Toc34 protein, preventing it from being able to receive another GTP molecule, inhibiting the protein's activity. This might provide a way to regulate protein import into chloroplasts.
Arabidopsis thaliana has two homologous proteins, AtToc33 and AtToc34 (The At stands for Arabidopsis thaliana), which are each about 60% identical in amino acid sequence to Toc34 in peas (called psToc34). AtToc33 is the most common in Arabidopsis, and it is the functional analogue of Toc34 because it can be turned off by phosphorylation. AtToc34 on the other hand cannot be phosphorylated.
==== Toc159 ====
Toc159 is another GTP binding TOC subunit, like Toc34. Toc159 has three domains. At the N-terminal end is the A-domain, which is rich in acidic amino acids and takes up about half the protein length. The A-domain is often cleaved off, leaving an 86 kilodalton fragment called Toc86. In the middle is its GTP binding domain, which is very similar to the homologous GTP-binding domain in Toc34. At the C-terminal end is the hydrophilic M-domain, which anchors the protein to the outer chloroplast membrane.
Toc159 probably works a lot like Toc34, recognizing proteins in the cytosol using GTP. It can be regulated through phosphorylation, but by a different protein kinase than the one that phosphorylates Toc34. Its M-domain forms part of the tunnel that chloroplast preproteins travel through, and seems to provide the force that pushes preproteins through, using the energy from GTP.
Toc159 is not always found as part of the TOC complex—it has also been found dissolved in the cytosol. This suggests that it might act as a shuttle that finds chloroplast preproteins in the cytosol and carries them back to the TOC complex. There isn't a lot of direct evidence for this behavior though.
A family of Toc159 proteins, comprising Toc159, Toc132, Toc120, and Toc90, has been found in Arabidopsis thaliana. They vary in the length of their A-domains, which is completely absent in Toc90. Toc132, Toc120, and Toc90 seem to have specialized functions in importing substrates such as nonphotosynthetic preproteins, and cannot replace Toc159.
==== Toc75 ====
Toc75 is the most abundant protein on the outer chloroplast envelope. It is a transmembrane tube that forms most of the TOC pore itself. Toc75 is a β-barrel channel lined by 16 β-pleated sheets. The hole it forms is about 2.5 nanometers wide at the ends, and shrinks to about 1.4–1.6 nanometers in diameter at its narrowest point—wide enough to allow partially folded chloroplast preproteins to pass through.
Toc75 can also bind to chloroplast preproteins, but with much lower affinity than Toc34 or Toc159.
Arabidopsis thaliana has multiple isoforms of Toc75 that are named by the chromosomal positions of the genes that code for them. AtToc75 III is the most abundant of these.
=== The translocon on the inner chloroplast membrane (TIC) ===
The TIC translocon, or translocon on the inner chloroplast membrane, is another protein complex that imports proteins across the inner chloroplast envelope. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Like the TOC translocon, the TIC translocon has a large core complex surrounded by some loosely associated peripheral proteins like Tic110, Tic40, and Tic21.
The core complex weighs about one million daltons and contains Tic214, Tic100, Tic56, and Tic20 I, possibly three of each.
==== Tic20 ====
Tic20 is an integral protein thought to have four transmembrane α-helices. It is found in the 1 million dalton TIC complex. Because it is similar to bacterial amino acid transporters and the mitochondrial import protein Tim17 (translocase on the inner mitochondrial membrane), it has been proposed to be part of the TIC import channel. There is, however, no in vitro evidence for this. In Arabidopsis thaliana, it is known that for about every five Toc75 proteins in the outer chloroplast membrane, there are two Tic20 I proteins (the main form of Tic20 in Arabidopsis) in the inner chloroplast membrane.
Unlike Tic214, Tic100, or Tic56, Tic20 has homologous relatives in cyanobacteria and nearly all chloroplast lineages, suggesting it evolved before the first chloroplast endosymbiosis. Tic214, Tic100, and Tic56 are unique to chloroplastidan chloroplasts, suggesting that they evolved later.
==== Tic214 ====
Tic214 is another TIC core complex protein, named because it weighs just under 214 kilodaltons. It is 1786 amino acids long and is thought to have six transmembrane domains on its N-terminal end. Tic214 is notable for being coded for by chloroplast DNA, more specifically the first open reading frame ycf1. Tic214 and Tic20 together probably make up the part of the one million dalton TIC complex that spans the entire membrane. Tic20 is buried inside the complex while Tic214 is exposed on both sides of the inner chloroplast membrane.
==== Tic100 ====
Tic100 is a nuclear-encoded protein that is 871 amino acids long. The 871 amino acids collectively weigh slightly less than 100 thousand daltons, and since the mature protein probably does not lose any amino acids when it is imported into the chloroplast (it has no cleavable transit peptide), it was named Tic100. Tic100 is found at the edges of the 1 million dalton complex on the side that faces the chloroplast intermembrane space.
==== Tic56 ====
Tic56 is also a nuclear-encoded protein. The preprotein its gene encodes is 527 amino acids long, weighing close to 62 thousand daltons; the mature form probably undergoes processing that trims it down to 56 thousand daltons when it is imported into the chloroplast. Tic56 is largely embedded inside the 1 million dalton complex.
Tic56 and Tic100 are highly conserved among land plants, but they don't resemble any protein whose function is known. Neither has any transmembrane domains.
== See also ==
List of sequenced plastomes
Mitochondrial DNA
== References ==
Protein primary structure is the linear sequence of amino acids in a peptide or protein. By convention, the primary structure of a protein is reported starting from the amino-terminal (N) end to the carboxyl-terminal (C) end. Protein biosynthesis is most commonly performed by ribosomes in cells. Peptides can also be synthesized in the laboratory. Protein primary structures can be directly sequenced, or inferred from DNA sequences.
== Formation ==
=== Biological ===
Amino acids are polymerised via peptide bonds to form a long backbone, with the different amino acid side chains protruding along it. In biological systems, proteins are produced during translation by a cell's ribosomes. Some organisms can also make short peptides by non-ribosomal peptide synthesis, which often use amino acids other than the encoded 22, and may be cyclised, modified and cross-linked.
=== Chemical ===
Peptides can be synthesised chemically via a range of laboratory methods. Chemical methods typically synthesise peptides in the opposite order (starting at the C-terminus) to biological protein synthesis (starting at the N-terminus).
== Notation ==
Protein sequence is typically notated as a string of letters, listing the amino acids starting at the amino-terminal end through to the carboxyl-terminal end. Either a three letter code or single letter code can be used to represent the 22 naturally encoded amino acids, as well as mixtures or ambiguous amino acids (similar to nucleic acid notation).
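As an illustration of the notation, converting between the three-letter and one-letter codes is a simple lookup. The helper below is a hypothetical sketch (the function and table names are not from any particular library):

```python
# Three-letter to one-letter amino acid codes for the 22 encoded residues
# (the 20 standard ones plus selenocysteine "Sec"/U and pyrrolysine "Pyl"/O).
THREE_TO_ONE = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
    "Sec": "U", "Pyl": "O",
}

def to_one_letter(seq):
    """Convert a dash-separated three-letter sequence, e.g. 'Met-Gly-Ser'."""
    return "".join(THREE_TO_ONE[res] for res in seq.split("-"))

print(to_one_letter("Met-Gly-Ser"))  # -> MGS
```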
Peptides can be directly sequenced, or inferred from DNA sequences. Large sequence databases now exist that collate known protein sequences.
== Modification ==
In general, polypeptides are unbranched polymers, so their primary structure can often be specified by the sequence of amino acids along their backbone. However, proteins can become cross-linked, most commonly by disulfide bonds, and the primary structure also requires specifying the cross-linking atoms, e.g., specifying the cysteines involved in the protein's disulfide bonds. Other crosslinks include desmosine.
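Since cross-links are part of the primary structure, a complete specification pairs the residue sequence with the bonded positions. A minimal sketch of such a record, using the human insulin A chain as an example (its intrachain disulfide is Cys6–Cys11; Cys7 and Cys20 normally bond to the B chain, which is omitted here):

```python
def check_disulfides(sequence, disulfides):
    """Verify that every declared disulfide links two cysteine positions (1-based)."""
    for i, j in disulfides:
        for pos in (i, j):
            if sequence[pos - 1] != "C":
                raise ValueError(f"position {pos} is {sequence[pos - 1]}, not Cys")
    return True

A_CHAIN = "GIVEQCCTSICSLYQLENYCN"  # human insulin A chain, 21 residues
print(check_disulfides(A_CHAIN, [(6, 11)]))  # -> True
```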
=== Isomerisation ===
The chiral centers of a polypeptide chain can undergo racemization. Although it does not change the sequence, it does affect the chemical properties of the sequence. In particular, the L-amino acids normally found in proteins can spontaneously isomerize at the
C
α
{\displaystyle \mathrm {C^{\alpha }} }
atom to form D-amino acids, which cannot be cleaved by most proteases. Additionally, proline can form stable trans-isomers at the peptide bond.
=== Post-translational modification ===
Additionally, the protein can undergo a variety of post-translational modifications, which are briefly summarized here.
The N-terminal amino group of a polypeptide can be modified covalently, e.g.,
acetylation (−C(=O)−CH₃)
The positive charge on the N-terminal amino group may be eliminated by changing it to an acetyl group (N-terminal blocking).
formylation (−C(=O)H)
The N-terminal methionine usually found after translation has an N-terminus blocked with a formyl group. This formyl group (and sometimes the methionine residue itself, if followed by Gly or Ser) is removed by the enzyme deformylase.
pyroglutamate
An N-terminal glutamine can attack itself, forming a cyclic pyroglutamate group.
myristoylation (−C(=O)−(CH₂)₁₂−CH₃)
Similar to acetylation. Instead of a simple methyl group, the myristoyl group has a tail of 14 hydrophobic carbons, which makes it ideal for anchoring proteins to cellular membranes.
The C-terminal carboxylate group of a polypeptide can also be modified, e.g.,
amination (see Figure)
The C-terminus can also be blocked (thus, neutralizing its negative charge) by amination.
glycosyl phosphatidylinositol (GPI) attachment
Glycosyl phosphatidylinositol (GPI) is a large, hydrophobic phospholipid prosthetic group that anchors proteins to cellular membranes. It is attached to the polypeptide C-terminus through an amide linkage that then connects to ethanolamine, thence to sundry sugars and finally to the phosphatidylinositol lipid moiety.
Finally, the peptide side chains can also be modified covalently, e.g.,
phosphorylation
Aside from cleavage, phosphorylation is perhaps the most important chemical modification of proteins. A phosphate group can be attached to the sidechain hydroxyl group of serine, threonine and tyrosine residues, adding a negative charge at that site and producing an unnatural amino acid. Such reactions are catalyzed by kinases and the reverse reaction is catalyzed by phosphatases. The phosphorylated tyrosines are often used as "handles" by which proteins can bind to one another, whereas phosphorylation of Ser/Thr often induces conformational changes, presumably because of the introduced negative charge. The effects of phosphorylating Ser/Thr can sometimes be simulated by mutating the Ser/Thr residue to glutamate.
glycosylation
A catch-all name for a set of very common and very heterogeneous chemical modifications. Sugar moieties can be attached to the sidechain hydroxyl groups of Ser/Thr or to the sidechain amide groups of Asn. Such attachments can serve many functions, ranging from increasing solubility to complex recognition. All glycosylation can be blocked with certain inhibitors, such as tunicamycin.
deamidation (succinimide formation)
In this modification, an asparagine or aspartate side chain attacks the following peptide bond, forming a symmetrical succinimide intermediate. Hydrolysis of the intermediate produces either aspartate or the β-amino acid, iso(Asp). For asparagine, either product results in the loss of the amide group, hence "deamidation".
hydroxylation
Proline residues may be hydroxylated at either of two atoms, as can lysine (at one atom). Hydroxyproline is a critical component of collagen, which becomes unstable upon its loss. The hydroxylation reaction is catalyzed by an enzyme that requires ascorbic acid (vitamin C), deficiencies in which lead to many connective-tissue diseases such as scurvy.
methylation
Several protein residues can be methylated, most notably the positive groups of lysine and arginine. Arginine residues interact with the nucleic acid phosphate backbone and commonly form hydrogen bonds with the base residues, particularly guanine, in protein–DNA complexes. Lysine residues can be singly, doubly and even triply methylated. Methylation does not alter the positive charge on the side chain, however.
acetylation
Acetylation of the lysine amino groups is chemically analogous to the acetylation of the N-terminus. Functionally, however, the acetylation of lysine residues is used to regulate the binding of proteins to nucleic acids. The cancellation of the positive charge on the lysine weakens the electrostatic attraction for the (negatively charged) nucleic acids.
sulfation
Tyrosines may become sulfated on their Oη atom. Somewhat unusually, this modification occurs in the Golgi apparatus, not in the endoplasmic reticulum. Similar to phosphorylated tyrosines, sulfated tyrosines are used for specific recognition, e.g., in chemokine receptors on the cell surface. As with phosphorylation, sulfation adds a negative charge to a previously neutral site.
prenylation and palmitoylation (−C(=O)−(CH₂)₁₄−CH₃ for the palmitoyl group)
The hydrophobic isoprene (e.g., farnesyl, geranyl, and geranylgeranyl groups) and palmitoyl groups may be added to the Sγ atom of cysteine residues to anchor proteins to cellular membranes. Unlike the GPI and myristoyl anchors, these groups are not necessarily added at the termini.
carboxylation
A relatively rare modification that adds an extra carboxylate group (and, hence, a double negative charge) to a glutamate side chain, producing a Gla residue. This is used to strengthen the binding to "hard" metal ions such as calcium.
ADP-ribosylation
The large ADP-ribosyl group can be transferred to several types of side chains within proteins, with heterogeneous effects. This modification is a target for the powerful toxins of disparate bacteria, e.g., Vibrio cholerae, Corynebacterium diphtheriae and Bordetella pertussis.
ubiquitination and SUMOylation
Various full-length, folded proteins can be attached at their C-termini to the sidechain ammonium groups of lysines of other proteins. Ubiquitin is the most common of these, and usually signals that the ubiquitin-tagged protein should be degraded.
Most of the polypeptide modifications listed above occur post-translationally, i.e., after the protein has been synthesized on the ribosome, typically occurring in the endoplasmic reticulum, a subcellular organelle of the eukaryotic cell.
Many other chemical reactions (e.g., cyanylation) have been applied to proteins by chemists, although they are not found in biological systems.
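The Ser/Thr → Glu phosphomimetic substitution mentioned under phosphorylation above amounts to a one-residue edit of the sequence string; a hypothetical helper:

```python
def phosphomimetic(seq, pos):
    """Mutate a Ser or Thr at 1-based position `pos` to Glu, crudely mimicking
    the negative charge introduced by phosphorylation."""
    res = seq[pos - 1]
    if res not in ("S", "T"):
        raise ValueError(f"position {pos} is {res}, not Ser/Thr")
    return seq[: pos - 1] + "E" + seq[pos:]

print(phosphomimetic("MKSTR", 3))  # -> MKETR
```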
=== Cleavage and ligation ===
In addition to those listed above, the most important modification of primary structure is peptide cleavage (by chemical hydrolysis or by proteases). Proteins are often synthesized in an inactive precursor form; typically, an N-terminal or C-terminal segment blocks the active site of the protein, inhibiting its function. The protein is activated by cleaving off the inhibitory peptide.
Some proteins even have the power to cleave themselves. Typically, the hydroxyl group of a serine (rarely, threonine) or the thiol group of a cysteine residue will attack the carbonyl carbon of the preceding peptide bond, forming a tetrahedrally bonded intermediate [classified as a hydroxyoxazolidine (Ser/Thr) or hydroxythiazolidine (Cys) intermediate]. This intermediate tends to revert to the amide form, expelling the attacking group, since the amide form is usually favored by free energy (presumably due to the strong resonance stabilization of the peptide group). However, additional molecular interactions may render the amide form less stable; the amino group is expelled instead, resulting in an ester (Ser/Thr) or thioester (Cys) bond in place of the peptide bond. This chemical reaction is called an N-O acyl shift.
The ester/thioester bond can be resolved in several ways:
Simple hydrolysis will split the polypeptide chain, where the displaced amino group becomes the new N-terminus. This is seen in the maturation of glycosylasparaginase.
A β-elimination reaction also splits the chain, but results in a pyruvoyl group at the new N-terminus. This pyruvoyl group may be used as a covalently attached catalytic cofactor in some enzymes, especially decarboxylases such as S-adenosylmethionine decarboxylase (SAMDC) that exploit the electron-withdrawing power of the pyruvoyl group.
Intramolecular transesterification, resulting in a branched polypeptide. In inteins, the new ester bond is broken by an intramolecular attack by the soon-to-be C-terminal asparagine.
Intermolecular transesterification can transfer a whole segment from one polypeptide to another, as is seen in the Hedgehog protein autoprocessing.
== Sequence compression ==
The compression of amino acid sequences is a comparatively challenging task. The compression ratios achieved by existing specialized amino acid sequence compressors are low compared with those of DNA sequence compressors, mainly because of the characteristics of the data. For example, modeling inversions is harder because of the reverse information loss (from amino acids back to the DNA sequence). The current lossless data compressor that provides the highest compression is AC2. AC2 mixes various context models using neural networks and encodes the data using arithmetic encoding.
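AC2 itself mixes many neural-network-weighted context models with arithmetic coding. As a much simpler illustration of why context models are the relevant tool, the sketch below estimates the empirical conditional entropy of a sequence under a single order-k context model; the function name and test sequence are made up for this example:

```python
import math
from collections import defaultdict

def order_k_entropy(seq, k):
    """Empirical conditional entropy (bits/symbol) under an order-k context model:
    a crude proxy for the per-symbol cost an arithmetic coder driven by that
    model would pay."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        counts[seq[i - k:i]][seq[i]] += 1  # tally each symbol per context
    total, bits = 0, 0.0
    for followers in counts.values():
        n = sum(followers.values())
        for c in followers.values():
            bits -= c * math.log2(c / n)  # -log2 p, weighted by frequency
        total += n
    return bits / total

# A perfectly periodic sequence is fully predictable from one symbol of context:
print(order_k_entropy("ABABABAB" * 5, 1))  # -> 0.0
```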
== History ==
The proposal that proteins were linear chains of α-amino acids was made nearly simultaneously by two scientists at the same conference in 1902, the 74th meeting of the Society of German Scientists and Physicians, held in Karlsbad. Franz Hofmeister made the proposal in the morning, based on his observations of the biuret reaction in proteins. Hofmeister was followed a few hours later by Emil Fischer, who had amassed a wealth of chemical details supporting the peptide-bond model. For completeness, the proposal that proteins contained amide linkages was made as early as 1882 by the French chemist E. Grimaux.
Despite these data and later evidence that proteolytically digested proteins yielded only oligopeptides, the idea that proteins were linear, unbranched polymers of amino acids was not accepted immediately. Some scientists such as William Astbury doubted that covalent bonds were strong enough to hold such long molecules together; they feared that thermal agitations would shake such long molecules asunder. Hermann Staudinger faced similar prejudices in the 1920s when he argued that rubber was composed of macromolecules.
Thus, several alternative hypotheses arose. The colloidal protein hypothesis stated that proteins were colloidal assemblies of smaller molecules. This hypothesis was disproved in the 1920s by ultracentrifugation measurements by Theodor Svedberg that showed that proteins had a well-defined, reproducible molecular weight and by electrophoretic measurements by Arne Tiselius that indicated that proteins were single molecules. A second hypothesis, the cyclol hypothesis advanced by Dorothy Wrinch, proposed that the linear polypeptide underwent a chemical cyclol rearrangement C=O + HN → C(OH)−N that crosslinked its backbone amide groups, forming a two-dimensional fabric. Other primary structures of proteins were proposed by various researchers, such as the diketopiperazine model of Emil Abderhalden and the pyrrol/piperidine model of Troensegaard in 1942. Although never given much credence, these alternative models were finally disproved when Frederick Sanger successfully sequenced insulin and by the crystallographic determination of myoglobin and hemoglobin by Max Perutz and John Kendrew.
== Primary structure in other molecules ==
Any linear-chain heteropolymer can be said to have a "primary structure" by analogy to the usage of the term for proteins, but this usage is rare compared to the extremely common usage in reference to proteins. In RNA, which also has extensive secondary structure, the linear chain of bases is generally just referred to as the "sequence" as it is in DNA (which usually forms a linear double helix with little secondary structure). Other biological polymers such as polysaccharides can also be considered to have a primary structure, although the usage is not standard.
== Relation to secondary and tertiary structure ==
The primary structure of a biological polymer to a large extent determines the three-dimensional shape (tertiary structure). Protein sequence can be used to predict local features, such as segments of secondary structure, or trans-membrane regions. However, the complexity of protein folding currently prohibits predicting the tertiary structure of a protein from its sequence alone. Knowing the structure of a similar homologous sequence (for example a member of the same protein family) allows highly accurate prediction of the tertiary structure by homology modeling. If the full-length protein sequence is available, it is possible to estimate its general biophysical properties, such as its isoelectric point.
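Estimating the isoelectric point from sequence alone is a standard exercise: sum the Henderson–Hasselbalch charge contributions of the ionizable groups and bisect for the pH of zero net charge. The pKa values below are one common textbook set (published tables differ), so this is a sketch rather than a reference implementation:

```python
def net_charge(seq, ph):
    """Approximate net charge at a given pH via Henderson–Hasselbalch,
    using one assumed set of pKa values (tables vary by source)."""
    pos = {"K": 10.5, "R": 12.5, "H": 6.0}           # basic side chains
    neg = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}  # acidic side chains
    charge = 1 / (1 + 10 ** (ph - 9.0))   # free N-terminus (pKa ~9.0)
    charge -= 1 / (1 + 10 ** (2.0 - ph))  # free C-terminus (pKa ~2.0)
    for aa in seq:
        if aa in pos:
            charge += 1 / (1 + 10 ** (ph - pos[aa]))
        elif aa in neg:
            charge -= 1 / (1 + 10 ** (neg[aa] - ph))
    return charge

def isoelectric_point(seq):
    """Bisect for the pH at which the net charge crosses zero."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 2)

print(isoelectric_point("DDDDKK"))  # acidic-leaning peptide: pI well below 7
```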
Sequence families are often determined by sequence clustering, and structural genomics projects aim to produce a set of representative structures to cover the sequence space of possible non-redundant sequences.
== See also ==
Protein sequencing
Nucleic acid primary structure
Translation
Pseudo amino acid composition
== Notes and references ==
DNA origami is the nanoscale folding of DNA to create arbitrary two- and three-dimensional shapes. The specificity of the interactions between complementary base pairs makes DNA a useful construction material, through design of its base sequences. DNA is a well-understood material that is suitable for creating scaffolds that hold other molecules in place or for creating structures on its own.
DNA origami was the cover story of Nature on March 16, 2006. Since then, DNA origami has progressed past an art form and has found a number of applications from drug delivery systems to uses as circuitry in plasmonic devices; however, most commercial applications remain in a concept or testing phase.
== Overview ==
The idea of using DNA as a construction material was first introduced in the early 1980s by Nadrian Seeman. The method of DNA origami was developed by Paul Rothemund at the California Institute of Technology. In contrast to common top-down fabrication methods such as 3D printing or lithography, which involve depositing or removing material through a tool, DNA nanotechnology, with DNA origami as a subset, is a bottom-up fabrication method. By rationally designing the constituent subunits of the DNA polymer, DNA can self-assemble into a variety of shapes. The process of constructing DNA origami involves the folding of a long single strand of viral DNA (typically the 7,249-nucleotide genomic DNA of the M13 bacteriophage) aided by multiple smaller "staple" strands. These shorter strands bind the longer strand in various places, resulting in the formation of a pre-defined two- or three-dimensional shape. Examples include a smiley face and a coarse map of China and the Americas, along with many three-dimensional structures such as cubes.
There are several DNA properties that make the molecule an ideal building material for DNA origami. DNA strands have a natural tendency to bind to their complementary sequences through Watson–Crick base pairing. This allows staple strands to locate the position on the scaffold strand without any external manipulation, leading to self-assembly of the desired structure.
The specific sequence of bases in DNA gives the material an element of programmability by determining its binding behavior. Carefully designing the sequences of the staple strands enables scientists to precisely direct the scaffold strand's folding into a predetermined shape with high precision.
On a chemical level, the hydrogen bonds that exist between the complementary base pairs provide strength and stability to the folded DNA origami structures. Additionally, DNA is a relatively stable molecule, offering resilience in physiological conditions.
One of the advantages of using a DNA Origami nanostructure over an otherwise classified DNA nanostructure is the ease of defining finite structures. In the design of some other DNA nanostructures, it can be impractical to design the extremely large number of individualized strands if the entire structure is composed of smaller strands. One method of bypassing the need for a huge number of different strands is to use repeating units, which comes with the disadvantage of a distribution of sizes and sometimes shapes. DNA Origami, however, forms discrete structures.
Applications for DNA Origami are primarily focused around the ability to exert fine control on systems, especially by constraining positions of molecules, typically by attachment to the DNA Origami nanostructures. Current applications are primarily focused around sensing and drug delivery, but many additional applications have been investigated.
== Fabrication ==
Fabrication of DNA origami objects requires a preliminary intuition of 3-dimensional DNA structural design. This can be difficult to grasp due to the complexity of exclusively using adenine-thymine pairings and guanine-cytosine pairings to both fold and unravel double helical DNA molecules such that the output strands produce uniquely desired shapes.
The design software and the choice of base-pair sequences become crucial for creating intricate 2D or even 3D shapes as the key to DNA origami lies in the precise base-pairing between the technique's two building blocks: staple strands and the scaffold. This ensures specific binding and accurate folding. A scaffold strand is a long, single-stranded DNA molecule, often sourced from a virus. Staple strands are shorter DNA strands designed to bind to specific sequences on the scaffold strand, dictating its folding.
To produce a desired shape, images are drawn with a raster fill of a single long DNA molecule. This design is then fed into a computer program that calculates the placement of individual staple strands. Each staple binds to a specific region of the DNA template, and thus due to Watson–Crick base pairing, the necessary sequences of all staple strands are known and displayed. The DNA is mixed, then heated and cooled. As the DNA cools, the various staples pull the long strand into the desired shape. Designs are directly observable via several methods, including electron microscopy, atomic force microscopy, or fluorescence microscopy when DNA is coupled to fluorescent materials.
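The core of the staple calculation is taking reverse complements of scaffold regions. The toy version below tiles a linear scaffold with fixed-length staples; real tools such as caDNAno additionally route staples across helices via crossovers, which this sketch ignores:

```python
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(s):
    """Watson-Crick complement of each base, read in the opposite direction."""
    return "".join(COMP[b] for b in reversed(s))

def design_staples(scaffold, staple_len):
    """Toy staple design: tile the scaffold with fixed-length staples, each the
    reverse complement of the region it binds."""
    return [reverse_complement(scaffold[i:i + staple_len])
            for i in range(0, len(scaffold), staple_len)]

scaffold = "ATGCGTACGTTAGCAT"  # made-up 16-nt stand-in for the M13 scaffold
print(design_staples(scaffold, 8))  # -> ['GTACGCAT', 'ATGCTAAC']
```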
Bottom-up self-assembly methods are considered promising alternatives that offer cheap, parallel synthesis of nanostructures under relatively mild conditions.
Since the creation of this method, software was developed to assist the process using CAD software. This allows researchers to use a computer to determine the way to create the correct staples needed to form a certain shape. One such software called caDNAno is an open source software for creating such structures from DNA. The use of software has not only increased the ease of the process but has also drastically reduced the errors made by manual calculations.
After meticulously planning the sequence of the staple strands with software to ensure they bind the scaffold strand at the intended points, the designed staple strand sequences are synthesized in a lab using techniques like automated DNA synthesis. Finally, the scaffold strand and staple strands are mixed in a buffer solution and subjected to a specific temperature cycle. This cycle allows the staple strands to find their complementary sequences on the scaffold strand and bind through hydrogen bonding, causing the scaffold to fold into the desired shape.
== Dynamic Structures and Modifications ==
As in the broader field of DNA nanotechnology, DNA Origami may be made dynamic in nature through the use of a variety of methods. The three primary methods of creating a dynamic DNA Origami machine are toehold mediated strand displacement, enzymatic reactions, and base stacking. While these methods are most commonly used, additional methods for creating dynamic DNA Origami machines exist, such as designing a directional component and using Brownian motion to drive rotational movement of structures or leveraging less commonly used DNA self-assembly phenomena like G-quadruplexes or i-motifs which can be pH sensitive.
Modifications can be otherwise used to affect structural properties, to impart unique chemistry to the nanostructures, or to add stimuli responses to the nanostructures. Modifications to structures can be made through conjugation of molecules such as proteins, or through chemical modification of the DNA bases themselves. pH dependent responses, light dependent responses, and more have been shown through modified systems.
One example application of creating dynamic structures is the ability to have a stimulus response result in drug release, which has been presented by several groups. Other, less common applications come in sensing moving mechanisms in vivo, such as the unwinding of helicase.
== Biomedical Applications ==
DNA Origami, being made of a natural biological polymer, is well suited to the biological environment when salt concentrations allow, and offers fine control over the positioning of molecules and structures in the system. This allows DNA Origami to be applicable to a number of scenarios in biomedical engineering. Current biomedical applications include drug release with zero-order mechanisms, vaccines, cell signaling, and sensing applications.
In one design, DNA is folded into an octahedron and coated with a single bilayer of phospholipid, mimicking the envelope of a virus particle. The DNA nanoparticles, each about the size of a virion, are able to remain in circulation for hours after being injected into mice. They also elicit a much lower immune response than the uncoated particles. This presents a potential use in drug delivery, as reported by researchers at the Wyss Institute at Harvard University.
Researchers at the Harvard University Wyss Institute reported self-assembling and self-destructing drug delivery vessels using DNA origami in lab tests. The DNA nanorobot they created is an open DNA tube with a hinge on one side which can be clasped shut. The drug-filled DNA tube is held shut by a DNA aptamer, configured to identify and seek certain disease-related proteins. Once the origami nanobots reach the diseased cells, the aptamers break apart and release the drug. The first disease models the researchers used were leukemia and lymphoma.
Researchers at the National Center for Nanoscience and Technology in Beijing and Arizona State University reported a DNA origami delivery vehicle for doxorubicin, a well-known anti-cancer drug. The drug was non-covalently attached to DNA origami nanostructures through intercalation, and a high drug load was achieved. The DNA–doxorubicin complex was taken up by human breast adenocarcinoma cancer cells (MCF-7) via cellular internalization with much higher efficiency than doxorubicin in free form. The enhancement of cell-killing activity was observed not only in regular MCF-7 cells but, more importantly, also in doxorubicin-resistant cells. The scientists theorized that the doxorubicin-loaded DNA origami inhibits lysosomal acidification, resulting in cellular redistribution of the drug to action sites, thus increasing the cytotoxicity against the tumor cells. Further in vivo testing in mice suggests that over a 12-day period, doxorubicin was more effective at reducing tumor sizes when contained in DNA origami nanostructures (DONs).
Researchers from the Massachusetts Institute of Technology are developing a method to attach various viral antigens to virus-shaped DNA particles that mimic the virus, for use in developing new vaccines. This began in 2016 when Bathe's lab created an algorithm known as DAEDALUS (DNA Origami Sequence Design Algorithm for User-defined Structures) to generate precision-controlled three-dimensional shapes of DNA. Using the tool, they designed virus-shaped scaffolding that can modularly attach different antigens to the surface of the DNA scaffold. Currently, MIT is working to develop optimal geometries for B cells to recognize HIV antigens. Further research has attempted to replace the HIV antigens with SARS-CoV-2 antigens and is testing whether the vaccines show a proper immune response in isolated B cells and in mice.
Similarly, researchers from the Technical University of Munich have developed a method to have T-cells target tumor cells by using antigen-coated DNA origami. The researchers developed a method to create chassis known as programmable T-cell engagers (PTEs), which are DNA origami structures that can be configured to bind to user-defined target cells and T-cells based on which antigens are coated on the surfaces of the nanostructure. The in vitro results show that after 24 hours of exposure 90% of the tumor cells were destroyed. Meanwhile, in vivo testing showed that their PTEs were capable of binding to the target proteins for several hours, which validates the mechanism they designed.
== Nanotechnology Applications ==
Many potential applications have been suggested in literature, including enzyme immobilization, drug delivery systems, and nanotechnological self-assembly of materials. Though DNA is not the natural choice for building active structures for nanorobotic applications, due to its lack of structural and catalytic versatility, several papers have examined the possibility of molecular walkers on origami and switches for algorithmic computing. The following paragraphs list some of the reported applications conducted in the laboratories with clinical potential.
In a study conducted by a group of scientists from iNANO center and CDNA Center at Aarhus university, researchers were able to construct a small multi-switchable 3D DNA Box Origami. The proposed nanoparticle was characterized by AFM, TEM and FRET. The constructed box was shown to have a unique reclosing mechanism, which enabled it to repeatedly open and close in response to a unique set of DNA or RNA keys. The authors proposed that this "DNA device can potentially be used for a broad range of applications such as controlling the function of single molecules, controlled drug delivery, and molecular computing."
Nanorobots made of DNA origami that demonstrated computing capacities and completed pre-programmed tasks inside a living organism were reported by a team of bioengineers at the Wyss Institute at Harvard University and the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University. As a proof of concept, the team injected various kinds of nanobots (curled DNA encasing molecules with fluorescent markers) into live cockroaches. By tracking the markers inside the cockroaches, the team found that the accuracy of delivery of the molecules (released by the uncurled DNA) to target cells, and the interactions among the nanobots, could be controlled in a manner equivalent to a computer system. The complexity of the logic operations, decisions and actions increases with the number of nanobots. The team estimated that the computing power in the cockroach could be scaled up to that of an 8-bit computer.
A research group at the Indian Institute of Science used nanostructures to develop a platform to elucidate the coaxial stacking between DNA bases. This approach utilized DNA-PAINT based super-resolution microscopy for visualizing these DNA nanostructures and performed DNA binding kinetics analysis to elucidate the fundamental force of base-stacking that helps stabilize the DNA double helical structure. They went on to assemble multimeric DNA origami nanostructures, termed a 'three-point star', into a tetrahedral 3D origami structure. The assembly relied chiefly on base-stacking interactions between each subunit. The group further showed that the knowledge of such interactions can be used to predict and thus tune the relative stabilities of these multimeric DNA nanostructures.
== Similar approaches ==
The idea of using protein design to accomplish the same goals as DNA origami has surfaced as well. Researchers at the National Institute of Chemistry in Slovenia are working on using rational design of protein folding to create structures much like those seen with DNA origami. The main focus of current research in protein folding design is in the drug delivery field, using antibodies attached to proteins as a way to create a targeted vehicle.
== See also ==
RNA origami
DNA nanotechnology
Molecular self-assembly
Folding@home
Origami
== References ==
== Further reading ==
Kube, Massimo; Kohler, Fabian; Feigl, Elija; Nagel-Yüksel, Baki; Willner, Elena M.; Funke, Jonas J.; Gerling, Thomas; Stömmer, Pierre; Honemann, Maximilian N.; Martin, Thomas G.; Scheres, Sjors H. W.; Dietz, Hendrik (December 2020). "Revealing the structures of megadalton-scale DNA complexes with nucleotide resolution". Nature Communications. 11 (1): 6229. Bibcode:2020NatCo..11.6229K. doi:10.1038/s41467-020-20020-7. PMC 7718922. PMID 33277481.
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of letters (A, C, G and T for DNA; A, C, G and U for RNA) that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure.
The sequence represents genetic information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism.
Nucleic acids also have secondary and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However, there is no parallel concept of secondary or tertiary sequence.
== Nucleotides ==
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix.
The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA.
One sequence can be complementary to another sequence, meaning that the base at each position is complementary (i.e., A to T, C to G) and the order is reversed. For example, the complementary sequence to TTAC is GTAA. If one strand of the double-stranded DNA is considered the sense strand, then the other strand, considered the antisense strand, will have the complementary sequence to the sense strand.
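The reverse-complement rule described above can be sketched in a few lines of code; this is an illustrative implementation, not taken from any particular bioinformatics library.

```python
# Sketch: computing the reverse complement of a DNA sequence.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Complement each base, then reverse the order (reads 5'->3' on the other strand)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("TTAC"))  # -> GTAA, matching the example above
```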
=== Notation ===
While A, T, C, and G represent a particular nucleotide at a position, there are also letters that represent ambiguity, used when more than one kind of nucleotide could occur at that position. The rules of the International Union of Pure and Applied Chemistry (IUPAC) are as follows:
R = A or G (purine)
Y = C or T (pyrimidine)
S = G or C (strong)
W = A or T (weak)
K = G or T (keto)
M = A or C (amino)
B = C, G or T (not A)
D = A, G or T (not C)
H = A, C or T (not G)
V = A, C or G (not T)
N = any base
For example, W means that either an adenine or a thymine could occur in that position without impairing the sequence's functionality.
These symbols are also valid for RNA, except with U (uracil) replacing T (thymine).
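One practical use of the ambiguity codes is enumerating the concrete sequences an ambiguous pattern can stand for. The sketch below assumes the IUPAC table given above; the `expand` helper is illustrative, not a standard library function.

```python
from itertools import product

# IUPAC DNA ambiguity codes mapped to the concrete bases they allow.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand(seq):
    """List every unambiguous sequence matching the IUPAC pattern."""
    return ["".join(bases) for bases in product(*(IUPAC[c] for c in seq))]

print(expand("AWT"))  # W = A or T -> ['AAT', 'ATT']
```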
Apart from adenine (A), cytosine (C), guanine (G), thymine (T) and uracil (U), DNA and RNA also contain bases that have been modified after the nucleic acid chain has been formed. In DNA, the most common modified base is 5-methylcytosine (m5C). In RNA, there are many modified bases, including pseudouridine (Ψ), dihydrouridine (D), inosine (I), ribothymidine (rT) and 7-methylguanosine (m7G). Hypoxanthine and xanthine are two of the many bases created through the action of mutagens, both of them through deamination (replacement of an amine group with a carbonyl group). Hypoxanthine is produced from adenine, and xanthine is produced from guanine. Similarly, deamination of cytosine results in uracil.
Example of comparing and determining the % difference between two nucleotide sequences
AATCCGCTAG
AAACCCTTAG
Given the two 10-nucleotide sequences, line them up and compare the differences between them. Calculate the percent difference by dividing the number of differing bases by the total number of nucleotides. In this case there are three differences in the 10-nucleotide sequences, giving a 30% difference.
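The calculation above is a simple position-by-position comparison, sketched below for equal-length sequences (alignment with gaps would require more machinery):

```python
def percent_difference(seq1, seq2):
    """Percentage of positions that differ between two equal-length sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be the same length")
    differences = sum(a != b for a, b in zip(seq1, seq2))
    return 100 * differences / len(seq1)

print(percent_difference("AATCCGCTAG", "AAACCCTTAG"))  # -> 30.0
```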
== Biological significance ==
In biological systems, nucleic acids contain information which is used by a living cell to construct specific proteins. The sequence of nucleobases on a nucleic acid strand is translated by cell machinery into a sequence of amino acids making up a protein strand. Each group of three bases, called a codon, corresponds to a single amino acid, and there is a specific genetic code by which each possible combination of three bases corresponds to a specific amino acid.
The central dogma of molecular biology outlines the mechanism by which proteins are constructed using information contained in nucleic acids. DNA is transcribed into mRNA molecules, which travel to the ribosome where the mRNA is used as a template for the construction of the protein strand. Since nucleic acids can bind to molecules with complementary sequences, there is a distinction between "sense" sequences which code for proteins, and the complementary "antisense" sequence, which is by itself nonfunctional, but can bind to the sense strand.
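The codon-reading step of translation can be sketched as a table lookup. Only a handful of codons are included here for illustration; the real genetic code has 64 entries.

```python
# Partial genetic code table (illustrative subset of the 64 codons).
GENETIC_CODE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Read the mRNA in groups of three, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = GENETIC_CODE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUAA"))  # -> ['Met', 'Phe', 'Gly']
```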
== Sequence determination ==
DNA sequencing is the process of determining the nucleotide sequence of a given DNA fragment. The sequence of the DNA of a living thing encodes the necessary information for that living thing to survive and reproduce. Therefore, determining the sequence is useful in fundamental research into why and how organisms live, as well as in applied subjects. Because of the importance of DNA to living things, knowledge of a DNA sequence may be useful in practically any biological research. For example, in medicine it can be used to identify, diagnose and potentially develop treatments for genetic diseases. Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services.
RNA is not sequenced directly. Instead, it is copied into DNA by reverse transcriptase, and this DNA is then sequenced.
Current sequencing methods rely on the discriminatory ability of DNA polymerases, and therefore can only distinguish four bases. An inosine (created from adenosine during RNA editing) is read as a G, and 5-methyl-cytosine (created from cytosine by DNA methylation) is read as a C. With current technology, it is difficult to sequence small amounts of DNA, as the signal is too weak to measure. This is overcome by polymerase chain reaction (PCR) amplification.
=== Digital representation ===
Once a nucleic acid sequence has been obtained from an organism, it is stored in silico in digital format. Digital genetic sequences may be stored in sequence databases, be analyzed (see Sequence analysis below), be digitally altered and be used as templates for creating new actual DNA using artificial gene synthesis.
== Sequence analysis ==
Digital genetic sequences may be analyzed using the tools of bioinformatics to attempt to determine their function.
=== Genetic testing ===
The DNA in an organism's genome can be analyzed to diagnose vulnerabilities to inherited diseases, and can also be used to determine a child's paternity (genetic father) or a person's ancestry. Normally, every person carries two variations of every gene, one inherited from their mother, the other inherited from their father. The human genome is believed to contain around 20,000–25,000 genes. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders.
Genetic testing identifies changes in chromosomes, genes, or proteins. Usually, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use, and more are being developed.
=== Sequence alignment ===
In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be due to functional, structural, or evolutionary relationships between the sequences. If two sequences in an alignment share a common ancestor, mismatches can be interpreted as point mutations and gaps as insertion or deletion mutations (indels) introduced in one or both lineages in the time since they diverged from one another. In sequence alignments of proteins, the degree of similarity between amino acids occupying a particular position in the sequence can be interpreted as a rough measure of how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have similar biochemical properties) in a particular region of the sequence, suggest that this region has structural or functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino acids, the conservation of base pairs can indicate a similar functional or structural role.
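Global alignment of two sequences is classically scored by dynamic programming (the Needleman–Wunsch algorithm). The sketch below uses a deliberately simple scheme (+1 match, −1 mismatch, −1 gap); real aligners use substitution matrices and affine gap penalties.

```python
# Minimal Needleman-Wunsch scoring sketch (score only, no traceback).
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    # prev holds the previous row of the dynamic-programming matrix.
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * gap]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(nw_score("GATTACA", "GCATGCU"))  # -> 0 under this scoring scheme
```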
Computational phylogenetics makes extensive use of sequence alignments in the construction and interpretation of phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The degree to which sequences in a query set differ is qualitatively related to the sequences' evolutionary distance from one another. Roughly speaking, high sequence identity suggests that the sequences in question have a comparatively young most recent common ancestor, while low identity suggests that the divergence is more ancient. This approximation, which reflects the "molecular clock" hypothesis that a roughly constant rate of evolutionary change can be used to extrapolate the elapsed time since two genes first diverged (that is, the coalescence time), assumes that the effects of mutation and selection are constant across sequence lineages. Therefore, it does not account for possible differences among organisms or species in the rates of DNA repair or the possible functional conservation of specific regions in a sequence. (In the case of nucleotide sequences, the molecular clock hypothesis in its most basic form also discounts the difference in acceptance rates between silent mutations that do not alter the meaning of a given codon and other mutations that result in a different amino acid being incorporated into the protein.) More statistically accurate methods allow the evolutionary rate on each branch of the phylogenetic tree to vary, thus producing better estimates of coalescence times for genes.
=== Sequence motifs ===
Frequently the primary structure encodes motifs that are of functional importance. Some examples of sequence motifs are: the C/D
and H/ACA boxes
of snoRNAs, Sm binding site found in spliceosomal RNAs such as U1, U2, U4, U5, U6, U12 and U3, the Shine-Dalgarno sequence,
the Kozak consensus sequence
and the RNA polymerase III terminator.
=== Sequence entropy ===
In bioinformatics, a sequence entropy, also known as sequence complexity or information profile, is a numerical sequence providing a quantitative measure of the local complexity of a DNA sequence, independently of the direction of processing. Manipulating these information profiles enables the analysis of sequences using alignment-free techniques, for example in motif and rearrangement detection.
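A toy version of such a profile is the Shannon entropy of base frequencies in a sliding window; real complexity profiles use more elaborate, direction-independent estimators, but this conveys the idea of a per-position complexity measure.

```python
import math

def entropy(window):
    """Shannon entropy (bits) of the base composition of a window."""
    counts = {base: window.count(base) for base in set(window)}
    total = len(window)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def entropy_profile(seq, k=4):
    """Entropy of every length-k window; low values mark low-complexity runs."""
    return [round(entropy(seq[i:i + k]), 3) for i in range(len(seq) - k + 1)]

# Low entropy over the A-run, rising toward the mixed 3' end.
print(entropy_profile("AAAAACGT"))
```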
== See also ==
Gene structure
Nucleic acid structure determination
Quaternary numeral system
Single-nucleotide polymorphism (SNP)
== References ==
== External links ==
A bibliography on features, patterns, correlations in DNA and protein texts | Wikipedia/DNA_sequence |
Some chemical authorities define an organic compound as a chemical compound that contains a carbon–hydrogen or carbon–carbon bond; others consider an organic compound to be any chemical compound that contains carbon. For example, carbon-containing compounds such as alkanes (e.g. methane CH4) and their derivatives are universally considered organic, but many others are sometimes considered inorganic, such as certain compounds of carbon with nitrogen and oxygen (e.g. cyanide ion CN−, hydrogen cyanide HCN, chloroformic acid ClCO2H, carbon dioxide CO2, and carbonate ion CO32−).
Due to carbon's ability to catenate (form chains with other carbon atoms), millions of organic compounds are known. The study of the properties, reactions, and syntheses of organic compounds comprises the discipline known as organic chemistry. For historical reasons, a few classes of carbon-containing compounds (e.g., carbonate salts and cyanide salts), along with a few other exceptions (e.g., carbon dioxide, and even hydrogen cyanide despite the fact that it contains a carbon–hydrogen bond), are generally considered inorganic. Other than those just named, little consensus exists among chemists on precisely which carbon-containing compounds are excluded, making any rigorous definition of an organic compound elusive.
Although organic compounds make up only a small percentage of Earth's crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically-produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high pressure and temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically.
In chemical nomenclature, an organyl group, frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom.
== Definition ==
For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates (excluding carbonate esters), simple oxides of carbon (for example, CO and CO2) and cyanides are generally considered inorganic compounds. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes and carbon nanotubes are also excluded because they are simple substances composed of a single element and so not generally considered chemical compounds. The word "organic" in this context does not mean "natural".
== History ==
=== Vitalism ===
Vitalism was a widespread conception that substances found in organic nature are formed from the chemical elements by the action of a "vital force" or "life-force" (vis vitalis) that only living organisms possess.
In the 1810s, Jöns Jacob Berzelius argued that a regulative force must exist within living bodies. Berzelius also contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). Vitalism taught that the formation of these "organic" compounds was fundamentally different from that of the "inorganic" compounds that could be obtained from the elements by chemical manipulations in laboratories.
Vitalism survived for a short period after the formulation of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism, thus disproving vitalism.
=== Modern classification and ambiguities ===
Although vitalism has been discredited, scientific nomenclature retains the distinction between organic and inorganic compounds. The modern meaning of organic compound is any compound that contains a significant amount of carbon—even though many of the organic compounds known today have no connection to any substance found in living organisms. The term carbogenic has been proposed by E. J. Corey as a modern alternative to organic, but this neologism remains relatively obscure.
The organic compound L-isoleucine molecule presents some features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, as well as covalent bonds from carbon to oxygen and to nitrogen.
As described in detail below, any definition of organic compound that uses simple, broadly-applicable criteria turns out to be unsatisfactory, to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered "inorganic". The list of substances so excluded varies from author to author. Still, it is generally agreed upon that there are (at least) a few carbon-containing compounds that should not be considered organic. For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, Fe3C), as well as other metal and semimetal carbides (including "ionic" carbides, e.g, Al4C3 and CaC2 and "covalent" carbides, e.g. B4C and SiC, and graphite intercalation compounds, e.g. KC8). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides of carbon (CO, CO2, and arguably, C3O2), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, (CN)2, BrCN, cyanate anion OCN−, etc.), and heavier analogs thereof (e.g., cyaphide anion CP−, CSe2, COS; although carbon disulfide CS2 is often classed as an organic solvent). Halides of carbon without hydrogen (e.g., CF4 and CClF3), phosgene (COCl2), carboranes, metal carbonyls (e.g., nickel tetracarbonyl), mellitic anhydride (C12O9), and other exotic oxocarbons are also considered inorganic by some authorities.
Nickel tetracarbonyl (Ni(CO)4) and other metal carbonyls are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen, and are often prepared directly from metal and carbon monoxide. Nickel tetracarbonyl is typically classified as an organometallic compound as it satisfies the broad definition that organometallic chemistry covers all compounds that contain at least one carbon to metal covalent bond; it is unknown whether organometallic compounds form a subset of organic compounds. For example, the evidence of covalent Fe-C bonding in cementite, a major component of steel, places it within this broad definition of organometallic, yet steel and other carbon-containing alloys are seldom regarded as organic compounds. Thus, it is unclear whether the definition of organometallic should be narrowed, whether these considerations imply that organometallic compounds are not necessarily organic, or both.
Metal complexes with organic ligands but no carbon-metal bonds (e.g., (CH3CO2)2Cu) are not considered organometallic; instead, they are called metal-organic compounds (and might be considered organic).
The relatively narrow definition of organic compounds as those containing C–H bonds excludes compounds that are (historically and practically) considered organic. Neither urea CO(NH2)2 nor oxalic acid (COOH)2 are organic by this definition, yet they were two key compounds in the vitalism debate. However, the IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid as organic compounds. Other compounds lacking C–H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C–H bonds, is considered a possible organic compound in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite (Al2C6(COO)6·16H2O).
A slightly broader definition of the organic compound includes all compounds bearing C–H or C–C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, CF4 and CCl4 would be considered by this rule to be "inorganic", whereas CHF3, CHCl3, and C2Cl6 would be organic, though these compounds share many physical and chemical properties.
== Classification ==
Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also be classified or subdivided by the presence of heteroatoms, e.g., organometallic compounds, which feature bonds between carbon and a metal, and organophosphorus compounds, which feature bonds between carbon and a phosphorus.
Another distinction, based on the size of organic compounds, distinguishes between small molecules and polymers.
=== Natural compounds ===
Natural compounds refer to those that are produced by plants or animals. Many of these are still extracted from natural sources because they would be more expensive to produce artificially. Examples include most sugars, some alkaloids and terpenoids, certain nutrients such as vitamin B12, and, in general, those natural products with large or stereoisometrically complicated molecules present in reasonable concentrations in living organisms.
Further compounds of prime importance in biochemistry are antigens, carbohydrates, enzymes, hormones, lipids and fatty acids, neurotransmitters, nucleic acids, proteins, peptides and amino acids, lectins, vitamins, and fats and oils.
=== Synthetic compounds ===
Compounds that are prepared by reaction of other compounds are known as "synthetic". They may be either compounds that are already found in plants/animals or those artificial compounds that do not occur naturally.
Most polymers (a category that includes all plastics and rubbers) are organic synthetic or semi-synthetic compounds.
=== Biotechnology ===
Many organic compounds—two examples are ethanol and insulin—are manufactured industrially using organisms such as bacteria and yeast. Typically, the DNA of an organism is altered to express compounds not ordinarily produced by the organism. Many such biotechnology-engineered compounds did not previously exist in nature.
== Databases ==
The CAS database is the most comprehensive repository for data on organic compounds. It is accessed through the search tool SciFinder.
The Beilstein database contains information on 9.8 million substances, covers the scientific literature from 1771 to the present, and is today accessible via Reaxys. Structures and a large diversity of physical and chemical properties are available for each substance, with reference to original literature.
PubChem contains 18.4 million entries on compounds and especially covers the field of medicinal chemistry.
A great number of more specialized databases exist for diverse branches of organic chemistry.
== Structure determination ==
The main tools are proton and carbon-13 NMR spectroscopy, IR spectroscopy, mass spectrometry, UV-Vis spectroscopy and X-ray crystallography.
== See also ==
Inorganic compound – Chemical compound without any carbon-hydrogen bonds
List of chemical compounds
List of organic compounds
Organometallic chemistry – Study of organic compounds containing metal(s)
== References ==
== External links ==
Organic Compounds Database
Organic Materials Database | Wikipedia/Organic_molecules |
Genomic deoxyribonucleic acid (abbreviated as gDNA) is chromosomal DNA, in contrast to extra-chromosomal DNAs like plasmids. Most organisms have the same genomic DNA in every cell; however, only certain genes are active in each cell to allow for cell function and differentiation within the body. gDNA predominantly resides in the cell nucleus packed into dense chromosome structures. Chromatin refers to the combination of DNA and proteins that make up chromosomes. When a cell is not dividing, chromosomes exist as loosely packed chromatin mesh.
The genome of an organism (encoded by the genomic DNA) is the (biological) information of heredity which is passed from one generation of organism to the next. That genome is transcribed to produce various RNAs, which are necessary for the function of the organism. Precursor mRNA (pre-mRNA) is transcribed by RNA polymerase II in the nucleus. pre-mRNA is then processed by splicing to remove introns, leaving the exons in the mature messenger RNA (mRNA). Additional processing includes the addition of a 5' cap and a poly(A) tail to the pre-mRNA. The mature mRNA may then be transported to the cytosol and translated by the ribosome into a protein. Other types of RNA include ribosomal RNA (rRNA) and transfer RNA (tRNA). These types are transcribed by RNA polymerase I and RNA polymerase III, respectively, and are essential for protein synthesis. However, 5S rRNA is the only rRNA transcribed by RNA polymerase III.
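The processing steps above (transcription, splicing, capping, polyadenylation) can be sketched as string operations. The sequence and intron coordinates below are invented purely for illustration; real splicing is directed by splice-site signals, not explicit coordinates.

```python
# Toy sketch of pre-mRNA processing.
def transcribe(coding_strand):
    """The mRNA matches the coding strand, with U in place of T."""
    return coding_strand.replace("T", "U")

def splice(pre_mrna, introns):
    """Remove (start, end) intron spans; go right-to-left so coordinates stay valid."""
    mature = pre_mrna
    for start, end in sorted(introns, reverse=True):
        mature = mature[:start] + mature[end:]
    return mature

pre = transcribe("ATGGTAAGTCCC")                    # -> AUGGUAAGUCCC
mature = "m7G-" + splice(pre, [(3, 9)]) + "-AAAA"   # cap, splice, poly(A) tail
print(mature)  # -> m7G-AUGCCC-AAAA
```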
== References == | Wikipedia/Genomic_DNA |
The ribosomal DNA (rDNA) consists of a group of ribosomal RNA encoding genes and related regulatory elements, and is widespread in similar configuration across all domains of life. The ribosomal DNA encodes the non-coding ribosomal RNA, an integral structural element in the assembly of ribosomes; this importance makes ribosomal RNA the most abundant RNA found in the cells of eukaryotes. Additionally, these segments include regulatory sections, such as a promoter specific to RNA polymerase I, as well as both transcribed and non-transcribed spacer segments.
Due to their high importance in the assembly of ribosomes for protein biosynthesis, the rDNA genes are generally highly conserved in molecular evolution. The number of copies can vary considerably per species. Ribosomal DNA is widely used for phylogenetic studies.
== Structure ==
The ribosomal DNA includes all genes coding for the non-coding structural ribosomal RNA molecules. Across all domains of life, these are the structural sequences of the small subunit (16S or 18S rRNA) and the large subunit (23S or 28S rRNA). The assembly of the latter also includes the 5S rRNA, as well as the additional 5.8S rRNA in eukaryotes.
The rDNA genes are commonly present in multiple copies in the genome, where they are organized in linked groups in most species, separated by an internal transcribed spacer (ITS) and preceded by an external transcribed spacer (ETS). The 5S rRNA is also linked to this rDNA region in prokaryotes, while it is located in separate repeating regions in most eukaryotes. The genes are transcribed together into a precursor RNA which is then processed into equal amounts of each rRNA.
=== Prokaryotes ===
The primary structural rRNA molecules in Bacteria and Archaea are smaller than their counterparts in eukaryotes, and are grouped as 16S rRNA and 23S rRNA. The 5S rRNA, also present in prokaryotes, is of a similar size to its eukaryotic counterpart.
In most bacteria and archaea, the rDNA operon is "linked": read from 5' to 3', one sees a continuous tract of 16S–23S–5S. The part between 16S and 23S is called the internal transcribed spacer (ITS) and often includes a tRNA. The part between 23S and 5S, though technically also a spacer that is internal and transcribed, does not have a name of its own.
A notable number of bacteria and archaea diverge from the canonical structure of the operon containing the rDNA genes, instead carrying them as "unlinked" 16S and 23S–5S genes in different places of their genome. Archaea also show some other forms of divergence: the Thermoproteati have a eukaryote-style operon without 5S, and some archaeons have all three segments at different sites.
==== Plastids ====
Ribosomal DNA in typical chloroplasts follows the canonical structure of prokaryotic ribosomal DNA and is arranged 16S–trnI–trnA–23S–4.5S–5S. The 4.5S corresponds to the 3' end fragment of 23S in bacteria.
The human mitochondrial DNA shows a typical 12S–tRNAVal–16S organization, where 12S and 16S are greatly reduced versions of the bacterial 16S and 23S. Most vertebrates have the same organization of the rDNA operon, as do ticks. Some eukaryotes such as snails have a split structure in which the 16S and 12S genes are separate.
=== Eukaryotes ===
The 45S rDNA gene cluster of eukaryotes consists of the genes for the 18S, 5.8S and 28S rRNA, separated by the two spacers ITS-1 and ITS-2. The active genome of eukaryotes contains several hundred copies of the polycistronic rDNA transcriptional unit as tandem repeats; these are organized in nucleolus organizer regions (NORs), which in turn can be present at multiple loci in the genome.
Similar to the structure in prokaryotes, the 5S rRNA is appended to the rDNA cluster in Saccharomycetes such as Saccharomyces cerevisiae. Most eukaryotes, however, carry the gene for the 5S rRNA in separate gene repeats at different loci in the genome. 5S rDNA is also present in independent tandem repeats, as in Drosophila.
As repetitive DNA regions often undergo recombination events, the rDNA repeats have many regulatory mechanisms that keep the DNA from undergoing mutations, thus keeping the rDNA conserved.
In the nucleus, the nucleolus organizer regions give rise to the nucleolus, where the rDNA regions of the chromosome form expanded chromosomal loops that are accessible for transcription of rRNA. The rDNA tandem repeats are mostly found in the nucleolus, but heterochromatic rDNA is found outside of it; transcriptionally active rDNA, however, resides inside the nucleolus itself.
==== Humans ====
The human genome contains a total of 560 copies of the 45S rDNA transcriptional unit, spread across five chromosomes with nucleolus organizer regions. The repeat clusters are located on the acrocentric chromosomes 13 (RNR1), 14 (RNR2), 15 (RNR3), 21 (RNR4) and 22 (RNR5).
Human 5S ribosomal DNA is located on chromosome 1. There are 17 copies in the human reference genome, in loci coded RNA5S1 through RNA5S17. An average haploid human genome, however, has 98 copies of 5S.
==== Ciliates ====
In ciliates, the presence of a generative micronucleus alongside the vegetative macronucleus allows for the reduction of rDNA genes in the germline, with the exact number of copies in the micronuclear genome ranging from several in Paramecium down to a single copy in Tetrahymena thermophila and other Tetrahymena species. During macronucleus formation, the regions containing the rDNA gene clusters are amplified, dramatically increasing the number of available templates for transcription, up to several thousand copies. In some ciliate genera, such as Tetrahymena or the hypotrich genus Oxytricha, extensive fragmentation of the amplified DNA leads to the formation of microchromosomes centered on the rDNA transcriptional unit. Similar processes are reported from Glaucoma chattoni and, to a lesser extent, from Paramecium.
== Sequence homogeneity ==
In the large rDNA array, polymorphisms between rDNA repeat units are very low, indicating that rDNA tandem arrays are evolving through concerted evolution. However, the mechanism of concerted evolution is imperfect, such that polymorphisms between repeats within an individual can occur at significant levels and may confound phylogenetic analyses for closely related organisms.
5S tandem repeat sequences in several Drosophila species were compared with each other; the results revealed that insertions and deletions occurred frequently between species and were often flanked by conserved sequences. They could occur by slippage of the newly synthesized strand during DNA replication or by gene conversion.
== Sequence divergence ==
The rDNA transcription tracts have a low rate of polymorphism among species, which allows interspecific comparison to elucidate phylogenetic relationships using only a few specimens. Coding regions of rDNA are highly conserved among species, but the ITS regions are variable due to insertions, deletions, and point mutations. Between remote species such as human and frog, comparison of sequences at the ITS tracts is not appropriate. Conserved sequences in the coding regions of rDNA, however, allow comparisons of remote species, even between yeast and human: human 5.8S rRNA has 75% identity with yeast 5.8S rRNA.
For sibling species, comparison of the rDNA segment including the ITS tracts, together with phylogenetic analysis, can be carried out satisfactorily.
The different coding regions of the rDNA repeats usually show distinct evolutionary rates. As a result, this DNA can provide phylogenetic information about species belonging to a wide range of systematic levels.
== Recombination-stimulating activity in yeast ==
A fragment of yeast rDNA containing the 5S gene, non-transcribed spacer DNA, and part of the 25S (yeast version of 28S) gene has localized cis-acting mitotic recombination-stimulating activity. This DNA fragment contains a mitotic recombination hotspot, referred to as HOT1, which expresses recombination-stimulating activity when inserted into novel locations in the yeast genome. HOT1 includes an RNA polymerase I (PolI) transcription promoter that drives transcription of the 35S ribosomal RNA (yeast version of 45S) gene. In a PolI-defective mutant, the recombination-stimulating activity of the HOT1 hotspot is abolished. The level of PolI transcription in HOT1 appears to determine the level of recombination.
== Clinical significance ==
Diseases can be associated with DNA mutations where DNA can be expanded, such as Huntington's disease, or lost due to deletion mutations. The same is true for mutations that occur in rDNA repeats: it has been found that if the genes associated with the synthesis of ribosomes are disrupted or mutated, various diseases of the skeleton or bone marrow can result. Likewise, damage to or disruption of the enzymes that protect the tandem repeats of the rDNA can lower the synthesis of ribosomes, which can also lead to other defects in the cell. Neurological diseases can also arise from mutations in the rDNA tandem repeats; for example, Bloom syndrome occurs when the number of tandem repeats increases to nearly a hundred-fold the normal number. Various types of cancer can also arise from mutations of the tandem repeats in the ribosomal DNA: cell lines can become malignant through either a rearrangement of the tandem repeats or an expansion of the repeats in the rDNA.
== References == | Wikipedia/Ribosomal_DNA |
Cell-free fetal DNA (cffDNA) is fetal DNA that circulates freely in the maternal blood. Maternal blood is sampled by venipuncture. Analysis of cffDNA is a method of non-invasive prenatal diagnosis frequently ordered for pregnant women of advanced age. Two hours after delivery, cffDNA is no longer detectable in maternal blood.
== Background ==
cffDNA originates from placental trophoblasts. Fetal DNA is fragmented when placental microparticles are shed into the maternal blood circulation.
cffDNA fragments are approximately 200 base pairs (bp) in length. They are significantly smaller than maternal DNA fragments. The difference in size allows cffDNA to be distinguished from maternal DNA fragments.
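The size-based distinction described above can be sketched as a simple length filter. This is an illustrative sketch only: the single 200 bp cutoff and the function name are assumptions, since real workflows use calibrated size distributions rather than one threshold.

```python
# Illustrative sketch (not a clinical method): labeling cell-free DNA
# fragments by length, using the ~200 bp cffDNA size noted above as an
# assumed cutoff.

def classify_fragment(length_bp, cutoff_bp=200):
    """Label a cell-free DNA fragment as likely fetal or maternal by length."""
    return "likely fetal" if length_bp <= cutoff_bp else "likely maternal"

# Hypothetical fragment lengths in base pairs:
for length in [166, 198, 250, 320]:
    print(length, classify_fragment(length))
```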
Approximately 11 to 13.4 percent of the cell-free DNA in maternal blood is of fetal origin. The amount varies widely from one pregnant woman to another. cffDNA is present after five to seven weeks gestation. The amount of cffDNA increases as the pregnancy progresses. The quantity of cffDNA in maternal blood diminishes rapidly after childbirth. Two hours after delivery, cffDNA is no longer detectable in maternal blood.
Analysis of cffDNA may provide earlier diagnosis of fetal conditions than current techniques. As cffDNA is found in maternal blood, sampling carries no associated risk of spontaneous abortion. cffDNA analysis has the same ethical and practical issues as other techniques such as amniocentesis and chorionic villus sampling.
Some disadvantages of sampling cffDNA include a low concentration of cffDNA in maternal blood; variation in the quantity of cffDNA between individuals; a high concentration of maternal cell free DNA compared to the cffDNA in maternal blood.
New evidence shows that in IVF pregnancies, compared with those conceived spontaneously, the cffDNA test failure rate is higher, the fetal fraction (the proportion of fetal versus maternal DNA in the maternal blood sample) is lower, and the positive predictive value (PPV) for trisomies 18 and 13 and for sex chromosome aneuploidies (SCA) is decreased.
== Laboratory methods ==
A number of laboratory methods for cell-free fetal DNA screening for genetic defects have been developed. The main ones are (1) massively parallel shotgun sequencing (MPSS), (2) targeted massively parallel sequencing (t-MPS) and (3) the single nucleotide polymorphism (SNP) based approach.
A maternal peripheral blood sample is taken by venesection at about ten weeks gestation.
=== Separation of cffDNA ===
Blood plasma is separated from the maternal blood sample using a laboratory centrifuge. The cffDNA is then isolated and purified. A standardized protocol for doing this was written through an evaluation of the scientific literature. The highest yield in cffDNA extraction was obtained with the "QIAamp DSP Virus Kit".
Addition of formaldehyde to maternal blood samples increases the yield of cffDNA. Formaldehyde stabilizes intact cells, and therefore inhibits the further release of maternal DNA. With the addition of formaldehyde, the percentage of cffDNA recovered from a maternal blood sample varies between 0.32 percent and 40 percent with a mean of 7.7 percent. Without the addition of formaldehyde, the mean percentage of cffDNA recovered has been measured at 20.2 percent. However, other figures vary between 5 and 96 percent.
Recovery of cffDNA may be related to the length of the DNA fragments. Another way to enrich fetal DNA is based on the physical length of the DNA fragments. Smaller fragments can represent up to seventy percent of the total cell-free DNA in the maternal blood sample.
=== Analysis of cffDNA ===
In real-time PCR, fluorescent probes are used to monitor the accumulation of amplicons. The reporter fluorescent signal is proportional to the number of amplicons generated. The most appropriate real-time PCR protocol is designed according to the particular mutation or genotype to be detected. Point mutations are analyzed with qualitative real-time PCR using allele-specific probes. Insertions and deletions are analyzed by dosage measurements using quantitative real-time PCR.
cffDNA may be detected by finding paternally inherited DNA sequences via polymerase chain reaction (PCR).
==== Quantitative real-time PCR ====
The sex-determining region Y gene (SRY) and the Y chromosome short tandem repeat DYS14 in cffDNA from 511 pregnancies were analyzed using quantitative real-time PCR (RT-qPCR). In 401 of 403 pregnancies where maternal blood was drawn at seven weeks gestation or more, both segments of DNA were found.
==== Nested PCR ====
The use of nested polymerase chain reaction (nested PCR) was evaluated to determine sex by detecting a Y chromosome specific signal in the cffDNA from maternal plasma. Nested PCR detected 53 of 55 male fetuses. The cffDNA from the plasma of 3 of 25 women with female fetuses contained the Y chromosome-specific signal. The sensitivity of nested PCR in this experiment was 96 percent. The specificity was 88 percent.
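The reported figures can be reproduced directly from the raw counts in the study above: 53 of 55 male fetuses gave a Y signal (two false negatives), and 3 of 25 female-fetus samples gave a spurious Y signal (three false positives). A minimal check:

```python
# Recomputing nested-PCR sensitivity and specificity from the counts above.

def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

sens = sensitivity(tp=53, fn=2)   # 53 of 55 male fetuses detected
spec = specificity(tn=22, fp=3)   # 22 of 25 female-fetus samples negative
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 96%, 88%
```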
==== Digital PCR ====
Microfluidic devices allow the quantification of cffDNA segments in maternal plasma with accuracy beyond that of real-time PCR. Point mutations, loss of heterozygosity and aneuploidy can be detected in a single PCR step. Digital PCR can differentiate between maternal blood plasma and fetal DNA in a multiplex fashion.
==== Shotgun sequencing ====
High-throughput shotgun sequencing using platforms such as Solexa or Illumina yields approximately 5 million sequence tags per sample of maternal serum. Aneuploid pregnancies such as trisomy were identified when testing at the fourteenth week of gestation. Fetal whole-genome mapping by parental haplotype analysis was completed using sequencing of cffDNA from maternal serum. Pregnant women were studied using 2-plex massively parallel maternal plasma DNA sequencing, and trisomy was diagnosed with a z-score greater than 3. The sequencing gave a sensitivity of 100 percent, a specificity of 97.9 percent, a positive predictive value of 96.6 percent and a negative predictive value of 100 percent.
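The z-score call mentioned above compares the fraction of sequencing reads mapping to the chromosome of interest against a reference distribution from euploid pregnancies. A sketch of the calculation, where the reference mean and standard deviation are hypothetical placeholder values rather than figures from the source:

```python
# Illustrative z-score test for trisomy 21 from shotgun-sequencing counts.
# In practice the reference mean/SD come from a panel of euploid pregnancies;
# the numbers below are hypothetical. A z-score > 3 is the calling threshold
# described above.

def z_score(chr21_fraction, euploid_mean, euploid_sd):
    return (chr21_fraction - euploid_mean) / euploid_sd

# Hypothetical sample: 1.45% of reads map to chromosome 21, versus an
# assumed euploid reference of 1.30% +/- 0.04%.
z = z_score(0.0145, 0.0130, 0.0004)
print("trisomy 21 call" if z > 3 else "no call")
```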
==== Mass spectrometry ====
Matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) combined with single-base extension after PCR allows cffDNA detection with single-base specificity and single-DNA-molecule sensitivity. DNA is amplified by PCR. Then, a linear amplification base extension reaction is performed with a third primer designed to anneal to the region upstream of the mutation site. One or two bases are added to the extension primer to produce two extension products from wild-type DNA and mutant DNA. Single-base specificity provides advantages over hybridization-based techniques using TaqMan hydrolysis probes. When assessing the technique, no false positives or negatives were found when looking for cffDNA to determine fetal sex in sixteen maternal plasma samples. The sexes of ninety-one male fetuses were correctly detected using MALDI-TOF mass spectrometry. The technique had accuracy, sensitivity and specificity of over 99 percent.
==== Epigenetic modifications ====
Differences in gene activation between maternal and fetal DNA can be exploited. Epigenetic modifications (heritable modifications that change gene function without changing DNA sequence) can be used to detect cffDNA. The hypermethylated RASSF1A promoter is a universal fetal marker used to confirm the presence of cffDNA. A technique was described where cffDNA was extracted from maternal plasma and then digested with methylation-sensitive and insensitive restriction enzymes. Then, real-time PCR analysis of RASSF1A, SRY, and DYS14 was done. The procedure detected 79 out of 90 (88 percent) maternal blood samples where hypermethylated RASSF1A was present.
==== mRNA ====
mRNA transcripts from genes expressed in the placenta are detectable in maternal plasma. In this procedure, plasma is centrifuged so an aqueous layer appears. This layer is transferred and from it RNA is extracted. RT-PCR is used to detect a selected expression of RNA. For example, Human placental lactogen (hPL) and beta-hCG mRNA are stable in maternal plasma and can be detected. (Ng et al. 2002). This can help to confirm the presence of cffDNA in maternal plasma.
== Applications ==
=== Prenatal sex discernment ===
The analysis of cffDNA from a sample of maternal plasma allows for prenatal sex discernment. Applications of prenatal sex discernment include:
Disease testing: knowing whether the fetus is male or female allows determination of the risk of a particular X-linked recessive genetic disorder in a given pregnancy, especially where the mother is a genetic carrier of the disorder.
Preparation for any sex-dependent aspects of parenting.
Sex selection, which after preimplantation genetic diagnosis may be performed by selecting only embryos of the preferred sex, or, with post-implantation methods, by performing sex-selective abortion depending on the test result and personal preference.
In comparison to obstetric ultrasonography which is unreliable for sex determination in the first trimester and amniocentesis which carries a small risk of miscarriage, sampling of maternal plasma for analysis of cffDNA is without risk. The main targets in the cffDNA analysis are the gene responsible for the sex-determining region Y protein (SRY) on the Y chromosome and the DYS14 sequence.
=== Congenital adrenal hyperplasia ===
In congenital adrenal hyperplasia, the adrenal cortex lacks appropriate corticosteroid synthesis, leading to an excess of adrenal androgens that affects female fetuses, causing external masculinization of the genitalia. Mothers of at-risk fetuses are given dexamethasone at 6 weeks gestation to suppress the pituitary-driven release of androgens.
If analysis of cffDNA obtained from a sample of maternal plasma lacks genetic markers found only on the Y chromosome, it is suggestive of a female fetus. However, it might also indicate a failure of the analysis itself (a false negative result). Paternal genetic polymorphisms and sex-independent markers may be used to detect cffDNA. A high degree of heterozygosity of these markers must be present for this application.
=== Paternity testing ===
Prenatal DNA paternity testing is commercially available. The test can be performed at nine weeks gestation.
=== Single gene disorders ===
Autosomal dominant and recessive single gene disorders which have been diagnosed prenatally by analysing paternally inherited DNA include cystic fibrosis, beta thalassemia, sickle cell anemia, spinal muscular atrophy, and myotonic dystrophy. Prenatal diagnosis of single gene disorders which are due to an autosomal recessive mutation, a maternally inherited autosomal dominant mutation or large sequence mutations that include duplication, expansion or insertion of DNA sequences is more difficult.
In cffDNA, fragments of 200–300 bp in length involved in single gene disorders are more difficult to detect.
For example, the autosomal dominant condition achondroplasia is caused by a point mutation in the FGFR3 gene. In two pregnancies with an achondroplastic fetus, a paternally inherited G1138A mutation was found in cffDNA from a maternal plasma sample in one case, and a de novo G1138A mutation in the other.
In studies of the genetics of Huntington's chorea using qRT-PCR of cffDNA from maternal plasma samples, CAG repeats have been detected at normal levels (17, 20 and 24).
cffDNA may also be used to diagnose single gene disorders. Developments in laboratory processes using cffDNA may allow prenatal diagnosis of aneuploidies such as trisomy 21 (Down's syndrome) in the fetus.
=== Hemolytic disease of the fetus and newborn ===
Incompatibility of fetal and maternal RhD antigens is the main cause of hemolytic disease of the newborn. Approximately 15 percent of Caucasian women, 3 to 5 percent of black African women and less than 3 percent of Asian women are RhD negative.
Accurate prenatal diagnosis is important because the disease can be fatal to the newborn and because treatment including intramuscular immunoglobulin (Anti-D) or intravenous immunoglobulin can be administered to mothers at risk.
PCR to detect RHD gene exons 5 and 7 from cffDNA obtained from maternal plasma between 9 and 13 weeks gestation gives a high degree of specificity, sensitivity and diagnostic accuracy (>90 percent) when compared to RhD determination from newborn cord blood serum. Similar results were obtained targeting exons 7 and 10. Droplet digital PCR for fetal RhD determination was comparable to a routine real-time PCR technique.
Routine determination of fetal RhD status from cffDNA in maternal serum allows early management of at risk pregnancies while decreasing unnecessary use of Anti-D by over 25 percent.
=== Aneuploidy ===
==== Sex chromosomes ====
Analysis of maternal serum cffDNA by high-throughput sequencing can detect common fetal sex chromosome aneuploidies such as Turner's syndrome, Klinefelter's syndrome and triple X syndrome but the procedure's positive predictive value is low.
==== Trisomy 21 ====
Fetal trisomy of chromosome 21 is the cause of Down's syndrome. This trisomy can be detected by analysis of cffDNA from maternal blood by massively parallel shotgun sequencing (MPSS). Another technique is digital analysis of selected regions (DANSR). Such tests show a sensitivity of about 99% and a specificity of more than 99.9%. Therefore, they cannot be regarded as diagnostic procedures but may be used to confirm a positive maternal screening test such as a first trimester screening or ultrasound markers of the condition.
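The reason such a test confirms rather than replaces diagnosis follows from Bayes' rule: at low prevalence, even 99.9% specificity leaves a meaningful false-positive fraction. A sketch of the positive predictive value calculation, where the 0.5% prevalence figure is an illustrative assumption, not a value from the source:

```python
# Positive predictive value from sensitivity, specificity and prevalence.

def ppv(sens, spec, prevalence):
    true_pos = sens * prevalence                  # P(test+, affected)
    false_pos = (1 - spec) * (1 - prevalence)     # P(test+, unaffected)
    return true_pos / (true_pos + false_pos)

# 99% sensitivity, 99.9% specificity, assumed 0.5% prevalence:
print(f"PPV = {ppv(0.99, 0.999, 0.005):.0%}")  # about 83%
```

The result, roughly 83 percent with these assumed inputs, is in the same range as the ~80 percent positive predictive value reported for one US study later in this section.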
==== Trisomy 13 and 18 ====
Analysis of cffDNA from maternal plasma with MPSS looking for trisomy 13 or 18 is possible.
Factors limiting sensitivity and specificity include the levels of cffDNA in the maternal plasma; maternal chromosomes may have mosaicism.
A number of fetal nucleic acid molecules derived from aneuploid chromosomes can be detected, including SERPINB2 mRNA and hypomethylated SERPINB5 (clade B serpins) from chromosome 18, and placenta-specific 4 (PLAC4), hypermethylated holocarboxylase synthetase (HLCS) and c21orf105 mRNA from chromosome 21. With a complete trisomy, the mRNA alleles in maternal plasma are present not at the normal 1:1 ratio but at a 2:1 ratio. Allelic ratios determined by epigenetic markers can also be used to detect complete trisomies. Massively parallel sequencing and digital PCR can be used for fetal aneuploidy detection without restriction to fetal-specific nucleic acid molecules. MPSS is estimated to have a sensitivity of between 96 and 100%, and a specificity between 94 and 100%, for detecting Down syndrome. It can be performed at 10 weeks of gestational age. One study in the United States estimated a false positive rate of 0.3% and a positive predictive value of 80% when using cffDNA to detect Down syndrome.
=== Preeclampsia ===
Preeclampsia is a complex condition of pregnancy involving hypertension and proteinuria, usually after 20 weeks gestation. It is associated with poor cytotrophoblastic invasion of the myometrium. Onset of the condition between 20 and 34 weeks gestation is considered "early". Maternal plasma samples in pregnancies complicated by preeclampsia have significantly higher levels of cffDNA than those in normal pregnancies. This holds true for early-onset preeclampsia.
== History ==
In 1997, Hong Kong molecular biologist Dennis Lo and his team first employed the Y-PCR assay to identify fetal Y chromosome sequences (because Y-specific sequences are genetic sequences of the fetus not in the maternal genome) in maternal plasma samples. For this groundbreaking work, Lo was honored with the 2022 Lasker DeBakey Clinical Medical Research Award.
== Future perspectives ==
New generation sequencing may be used to yield a whole genome sequence from cffDNA. This raises ethical questions. However, the utility of the procedure may increase as clear associations between specific genetic variants and disease states are discovered.
== See also ==
Microchimerism
Quad test
Triple test
== References == | Wikipedia/Cell-free_fetal_DNA |
Chromatin remodeling is the dynamic modification of chromatin architecture to allow access of condensed genomic DNA to the regulatory transcription machinery proteins, and thereby control gene expression. Such remodeling is principally carried out by 1) covalent histone modifications by specific enzymes, e.g., histone acetyltransferases (HATs), deacetylases, methyltransferases, and kinases, and 2) ATP-dependent chromatin remodeling complexes which either move, eject or restructure nucleosomes. Besides actively regulating gene expression, dynamic remodeling of chromatin imparts an epigenetic regulatory role in several key biological processes, e.g., DNA replication and repair, apoptosis, and chromosome segregation, as well as development and pluripotency. Aberrations in chromatin remodeling proteins are associated with human diseases, including cancer. Targeting chromatin remodeling pathways is currently evolving as a major therapeutic strategy in the treatment of several cancers.
== Overview ==
The transcriptional regulation of the genome is controlled primarily at the preinitiation stage by binding of the core transcriptional machinery proteins (namely, RNA polymerase, transcription factors, and activators and repressors) to the core promoter sequence on the coding region of the DNA. However, DNA is tightly packaged in the nucleus with the help of packaging proteins, chiefly histone proteins, to form repeating units of nucleosomes, which further bundle together to form a condensed chromatin structure. Such a condensed structure occludes many DNA regulatory regions, preventing them from interacting with transcriptional machinery proteins and regulating gene expression. To overcome this issue and allow dynamic access to condensed DNA, a process known as chromatin remodeling alters nucleosome architecture to expose or hide regions of DNA for transcriptional regulation.
By definition, chromatin remodeling is the enzyme-assisted process to facilitate access of nucleosomal DNA by remodeling the structure, composition and positioning of nucleosomes.
== Classification ==
Access to nucleosomal DNA is governed by two major classes of protein complexes:
Covalent histone-modifying complexes.
ATP-dependent chromatin remodeling complexes.
=== Covalent histone-modifying complexes ===
Specific protein complexes, known as histone-modifying complexes, catalyze the addition or removal of various chemical elements on histones. These enzymatic modifications include acetylation, methylation, phosphorylation, and ubiquitination, and primarily occur at the N-terminal histone tails. Such modifications affect the binding affinity between histones and DNA, thus loosening or tightening the condensed DNA wrapped around histones. For example, methylation of specific lysine residues in H3 and H4 causes further condensation of DNA around histones, thereby preventing the binding of transcription factors to the DNA and leading to gene repression. In contrast, histone acetylation relaxes chromatin condensation and exposes DNA for transcription factor (TF) binding, leading to increased gene expression.
==== Known modifications ====
Well characterized modifications to histones include:
Methylation
Both lysine and arginine residues are known to be methylated. Methylated lysines are the best understood marks of the histone code, as specific methylated lysines match well with gene expression states. Methylation of lysines H3K4 and H3K36 is correlated with transcriptional activation, while demethylation of H3K4 is correlated with silencing of the genomic region. Methylation of lysines H3K9 and H3K27 is correlated with transcriptional repression. In particular, H3K9me3 is highly correlated with constitutive heterochromatin.
Acetylation - by HAT (histone acetyl transferase); deacetylation - by HDAC (histone deacetylase)
Acetylation tends to define the 'openness' of chromatin as acetylated histones cannot pack as well together as deacetylated histones.
Phosphorylation
Ubiquitination
However, there are many more histone modifications, and sensitive mass spectrometry approaches have recently greatly expanded the catalog.
==== Histone code hypothesis ====
The histone code is a hypothesis that the transcription of genetic information encoded in DNA is in part regulated by chemical modifications to histone proteins, primarily on their unstructured ends. Together with similar modifications such as DNA methylation it is part of the epigenetic code.
Cumulative evidence suggests that such a code is written by specific enzymes that can (for example) methylate or acetylate DNA ('writers'), erased by other enzymes having demethylase or deacetylase activity ('erasers'), and finally readily identified by proteins ('readers') that are recruited to such histone modifications and bind via specific domains, e.g., bromodomains and chromodomains. This triple action of 'writing', 'reading' and 'erasing' establishes the favorable local environment for transcriptional regulation, DNA-damage repair, etc.
The critical concept of the histone code hypothesis is that the histone modifications serve to recruit other proteins by specific recognition of the modified histone via protein domains specialized for such purposes, rather than through simply stabilizing or destabilizing the interaction between histone and the underlying DNA. These recruited proteins then act to alter chromatin structure actively or to promote transcription.
A very basic summary of the histone code relates specific marks to gene expression status:
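That summary can be captured as a minimal lookup table relating the marks discussed in this section to their typical expression states. This is a deliberate simplification of the histone code, and the dictionary keys are illustrative:

```python
# Simplified histone-code lookup based on the marks described above.
# Real chromatin states depend on combinations of marks, not single entries.

HISTONE_MARK_STATES = {
    "H3K4me":  "transcriptional activation",
    "H3K36me": "transcriptional activation",
    "H3K9me":  "transcriptional repression",
    "H3K27me": "transcriptional repression",
    "H3K9me3": "constitutive heterochromatin",
    "acetylation": "open chromatin / activation",
}

def expression_state(mark):
    return HISTONE_MARK_STATES.get(mark, "unknown")

print(expression_state("H3K4me"))   # transcriptional activation
```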
=== ATP-dependent chromatin remodeling ===
ATP-dependent chromatin-remodeling complexes regulate gene expression by moving, ejecting or restructuring nucleosomes. These protein complexes share a common ATPase domain, and energy from the hydrolysis of ATP allows them to reposition nucleosomes (often referred to as "nucleosome sliding") along the DNA, eject or assemble histones on or off the DNA, or facilitate the exchange of histone variants, thus creating nucleosome-free regions of DNA for gene activation. In addition, several remodelers have DNA-translocation activity to carry out specific remodeling tasks.
All ATP-dependent chromatin-remodeling complexes possess an ATPase subunit that belongs to the SNF2 superfamily of proteins. Based on this subunit's identity, these proteins are classified into two main groups: the SWI2/SNF2 group and the imitation SWI (ISWI) group. A third class of ATP-dependent complexes, described more recently, contains a Snf2-like ATPase and also demonstrates deacetylase activity.
==== Known chromatin remodeling complexes ====
There are at least four families of chromatin remodelers in eukaryotes: SWI/SNF, ISWI, NuRD/Mi-2/CHD, and INO80, with the first two being the best studied so far, especially in the yeast model. Although all remodelers share a common ATPase domain, their functions are specialized for particular biological processes (DNA repair, apoptosis, etc.). This is because each remodeler complex has unique protein domains (helicase, bromodomain, etc.) in its catalytic ATPase region and also has different recruited subunits.
==== Specific functions ====
Several in vitro experiments suggest that ISWI remodelers organize nucleosomes into properly spaced arrays, creating equal spacing between nucleosomes, whereas SWI/SNF remodelers disorder nucleosomes.
The ISWI-family remodelers have been shown to play central roles in chromatin assembly after DNA replication and maintenance of higher-order chromatin structures.
INO80 and SWI/SNF-family remodelers participate in DNA double-strand break (DSB) repair and nucleotide-excision repair (NER), and thereby play a crucial role in the TP53-mediated DNA-damage response.
NuRD/Mi-2/CHD remodeling complexes primarily mediate transcriptional repression in the nucleus and are required for the maintenance of pluripotency of embryonic stem cells.
== Significance ==
=== In normal biological processes ===
Chromatin remodeling plays a central role in the regulation of gene expression by providing the transcription machinery with dynamic access to an otherwise tightly packaged genome. Further, nucleosome movement by chromatin remodelers is essential to several important biological processes, including chromosome assembly and segregation, DNA replication and repair, embryonic development and pluripotency, and cell-cycle progression. Deregulation of chromatin remodeling causes loss of transcriptional regulation at these critical check-points required for proper cellular functions, and thus causes various disease syndromes, including cancer.
=== Response to DNA damage ===
Chromatin relaxation is one of the earliest cellular responses to DNA damage. Several experiments have been performed on the recruitment kinetics of proteins involved in the response to DNA damage. The relaxation appears to be initiated by PARP1, whose accumulation at DNA damage is half complete by 1.6 seconds after DNA damage occurs. This is quickly followed by accumulation of chromatin remodeler Alc1, which has an ADP-ribose–binding domain, allowing it to be quickly attracted to the product of PARP1. The maximum recruitment of Alc1 occurs within 10 seconds of DNA damage. About half of the maximum chromatin relaxation, presumably due to action of Alc1, occurs by 10 seconds. PARP1 action at the site of a double-strand break allows recruitment of the two DNA repair enzymes MRE11 and NBS1. Half maximum recruitment of these two DNA repair enzymes takes 13 seconds for MRE11 and 28 seconds for NBS1.
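As a rough illustration of the half-time figures above, recruitment can be modeled as exponential saturation, f(t) = 1 - 2^(-t / t_half), so that exactly half of the maximum is reached at t_half. The exponential form is a simplifying assumption for illustration, not a model claimed by the source:

```python
# Toy saturation model for protein recruitment to a DNA damage site.
# f(t) = 1 - 2**(-t / t_half) guarantees f(t_half) = 0.5 by construction.

def fraction_recruited(t_seconds, t_half_seconds):
    return 1.0 - 2.0 ** (-t_seconds / t_half_seconds)

# Half-times reported above: PARP1 ~1.6 s, MRE11 ~13 s, NBS1 ~28 s.
for protein, t_half in [("PARP1", 1.6), ("MRE11", 13.0), ("NBS1", 28.0)]:
    print(f"{protein}: {fraction_recruited(10.0, t_half):.0%} of maximum at t = 10 s")
```

Under this toy model, PARP1 accumulation is essentially complete within ten seconds while NBS1 is still early in its recruitment, matching the ordering described above.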
Another process of chromatin relaxation, after formation of a DNA double-strand break, employs γH2AX, the phosphorylated form of the H2AX protein. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (phosphorylated on serine 139 of H2AX) was detected at 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurred in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break.
γH2AX does not, by itself, cause chromatin decondensation, but within seconds of irradiation the protein "Mediator of the DNA damage checkpoint 1" (MDC1) specifically attaches to γH2AX. This is accompanied by simultaneous accumulation of RNF8 protein and the DNA repair protein NBS1 which bind to MDC1 as MDC1 attaches to γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4 protein, a component of the nucleosome remodeling and deacetylase complex NuRD. CHD4 accumulation at the site of the double-strand break is rapid, with half-maximum accumulation occurring by 40 seconds after irradiation.
The fast initial chromatin relaxation upon DNA damage (with rapid initiation of DNA repair) is followed by a slow recondensation, with chromatin recovering a compaction state close to its pre-damage level in about 20 minutes.
=== Cancer ===
Chromatin remodeling provides fine-tuning at crucial cell growth and division steps, like cell-cycle progression, DNA repair and chromosome segregation, and therefore exerts tumor-suppressor function. Mutations in such chromatin remodelers and deregulated covalent histone modifications potentially favor self-sufficiency in cell growth and escape from growth-regulatory cell signals - two important hallmarks of cancer.
Inactivating mutations in SMARCB1, formerly known as hSNF5/INI1 and a component of the human SWI/SNF remodeling complex, have been found in a large number of rhabdoid tumors, which commonly affect the pediatric population. Similar mutations are also present in other childhood cancers, such as choroid plexus carcinoma, medulloblastoma and some acute leukemias. Furthermore, mouse knock-out studies strongly support SMARCB1 as a tumor suppressor protein. Since the original observation of SMARCB1 mutations in rhabdoid tumors, several more subunits of the human SWI/SNF chromatin remodeling complex have been found mutated in a wide range of neoplasms.
The SWI/SNF ATPase BRG1 (or SMARCA4) is the most frequently mutated chromatin remodeling ATPase in cancer. Mutations in this gene were first recognized in human cancer cell lines derived from lung. In cancer, mutations in BRG1 show an unusually high preference for missense mutations that target the ATPase domain. Mutations are enriched at highly conserved ATPase sequences, which lie on important functional surfaces such as the ATP pocket or DNA-binding surface. These mutations act in a genetically dominant manner to alter chromatin regulatory function at enhancers and promoters.
Inactivating mutations in BCL7A have been reported in diffuse large B-cell lymphoma (DLBCL) and in other haematological malignancies.
The PML-RARA fusion protein in acute myeloid leukemia recruits histone deacetylases. This leads to repression of genes responsible for myelocyte differentiation, leading to leukemia.
The tumor suppressor Rb protein functions by recruiting the human homologs of the SWI/SNF enzyme BRG1, histone deacetylase and DNA methyltransferase. Mutations in BRG1 are reported in several cancers, causing loss of the tumor suppressor action of Rb.
Recent reports indicate DNA hypermethylation in the promoter regions of major tumor suppressor genes in several cancers. Although few mutations have been reported in histone methyltransferases so far, a correlation between DNA hypermethylation and histone H3 lysine-9 methylation has been reported in several cancers, mainly colorectal and breast cancers.
Mutations in the histone acetyltransferase (HAT) p300 (missense and truncating types) are most commonly reported in colorectal, pancreatic, breast and gastric carcinomas. Loss of heterozygosity in the coding region of p300 (chromosome 22q13) is present in a large number of glioblastomas.
Further, HATs play diverse roles in transcription beyond their histone acetylase activity; for example, the HAT subunit hADA3 may act as an adaptor protein linking transcription factors with other HAT complexes. In the absence of hADA3, TP53 transcriptional activity is significantly reduced, suggesting a role for hADA3 in activating TP53 function in response to DNA damage.
Similarly, TRRAP, the human homolog to yeast Tra1, has been shown to directly interact with c-Myc and E2F1, known oncoproteins.
==== Cancer genomics ====
Rapid advances in cancer genomics and high-throughput ChIP-chip, ChIP-seq and bisulfite sequencing methods are providing more insight into the role of chromatin remodeling in transcriptional regulation and in cancer.
==== Therapeutic intervention ====
Epigenetic instability caused by deregulation of chromatin remodeling has been studied in several cancers, including breast, colorectal and pancreatic cancer. Such instability largely causes widespread silencing of genes, with the primary impact on tumor-suppressor genes. Hence, strategies are now being tried to overcome epigenetic silencing with synergistic combinations of HDAC inhibitors (HDIs) and DNA-demethylating agents.
HDIs are primarily used as adjunct therapy in several cancer types. HDAC inhibitors can induce p21 (WAF1) expression, a regulator of p53's tumor suppressor activity. HDACs are involved in the pathway by which the retinoblastoma protein (pRb) suppresses cell proliferation. Estrogen is well established as a mitogenic factor implicated in the tumorigenesis and progression of breast cancer via its binding to the estrogen receptor alpha (ERα). Recent data indicate that chromatin inactivation mediated by HDAC and DNA methylation is a critical component of ERα silencing in human breast cancer cells.
Approved usage:
Vorinostat was licensed by the U.S. FDA in October 2006 for the treatment of cutaneous T cell lymphoma (CTCL).
Romidepsin (trade name Istodax) was licensed by the US FDA in Nov 2009 for cutaneous T-cell lymphoma (CTCL).
Phase III Clinical trials:
Panobinostat (LBH589) is in clinical trials for various cancers including a phase III trial for cutaneous T cell lymphoma (CTCL).
Valproic acid (as Mg valproate) in phase III trials for cervical cancer and ovarian cancer.
Pivotal phase II clinical trials:
Belinostat (PXD101) has had a phase II trial for relapsed ovarian cancer, and reported good results for T cell lymphoma.
Current front-runner candidates for new drug targets are Histone Lysine Methyltransferases (KMT) and Protein Arginine Methyltransferases (PRMT).
=== Other disease syndromes ===
ATRX-syndrome (α-thalassemia X-linked mental retardation) and α-thalassemia myelodysplasia syndrome are caused by mutations in ATRX, a SNF2-related ATPase with a PHD finger domain.
CHARGE syndrome, an autosomal dominant disorder, has been linked recently to haploinsufficiency of CHD7, which encodes the CHD family ATPase CHD7.
=== Senescence ===
Chromatin architectural remodeling is implicated in the process of cellular senescence, which is related to, and yet distinct from, organismal aging. Replicative cellular senescence refers to a permanent cell cycle arrest in which post-mitotic cells continue to exist as metabolically active cells but fail to proliferate. Senescence can arise due to age-associated degradation, telomere attrition, progerias, pre-malignancies, and other forms of damage or disease. Senescent cells undergo distinct repressive phenotypic changes, potentially to prevent the proliferation of damaged or cancerous cells, with modified chromatin organization, fluctuations in remodeler abundance, and changes in epigenetic modifications. Senescent cells undergo chromatin landscape modifications as constitutive heterochromatin migrates to the center of the nucleus and displaces euchromatin and facultative heterochromatin to regions at the edge of the nucleus. This disrupts chromatin-lamin interactions and inverts the pattern typically seen in a mitotically active cell. Individual lamin-associated domains (LADs) and topologically associating domains (TADs) are disrupted by this migration, which can affect cis interactions across the genome. Additionally, there is a general pattern of canonical histone loss, particularly of the nucleosome histones H3 and H4 and the linker histone H1. Histone variants with two exons are upregulated in senescent cells, producing modified nucleosome assembly that contributes to chromatin permissiveness to senescent changes. Although transcription of variant histone proteins may be elevated, canonical histone proteins are not expressed, as they are only made during the S phase of the cell cycle and senescent cells are post-mitotic. During senescence, portions of chromosomes can be exported from the nucleus for lysosomal degradation, which results in greater organizational disarray and disruption of chromatin interactions.
Chromatin remodeler abundance may be implicated in cellular senescence, as knockdown or knockout of ATP-dependent remodelers such as NuRD, ACF1, and SWI/SNF can result in DNA damage and senescent phenotypes in yeast, C. elegans, mice, and human cell cultures. ACF1 and NuRD are downregulated in senescent cells, which suggests that chromatin remodeling is essential for maintaining a mitotic phenotype. Genes involved in signaling for senescence can be silenced by chromatin conformation and polycomb repressive complexes, as seen in PRC1/PRC2 silencing of p16. Specific remodeler depletion results in activation of proliferative genes through a failure to maintain silencing. Some remodelers act on enhancer regions of genes rather than the specific loci to prevent re-entry into the cell cycle by forming regions of dense heterochromatin around regulatory regions.
Senescent cells undergo widespread fluctuations in epigenetic modifications in specific chromatin regions compared to mitotic cells. Human and murine cells undergoing replicative senescence experience a general global decrease in methylation; however, specific loci can differ from the general trend. Specific chromatin regions, especially those around the promoters or enhancers of proliferative loci, may exhibit elevated methylation states with an overall imbalance of repressive and activating histone modifications. Proliferative genes may show increases in the repressive mark H3K27me3, while genes involved in silencing or aberrant histone products may be enriched with the activating modification H3K4me3. Additionally, upregulating histone deacetylases, such as members of the sirtuin family, can delay senescence by removing acetyl groups that contribute to greater chromatin accessibility. General loss of methylation, combined with the addition of acetyl groups, results in a more accessible chromatin conformation with a propensity towards disorganization when compared to mitotically active cells. General loss of histones precludes the addition of histone modifications and contributes to changes in enrichment in some chromatin regions during senescence.
== See also ==
Epigenetics
Histone
Nucleosomes
Chromatin
Histone acetyltransferase
Transcription factors
CAF-1 (chromatin assembly factor 1) - a histone chaperone that plays a coordinating role in chromatin remodeling.
== References ==
== Further reading ==
Chen T, Dent SY (February 2014). "Chromatin modifiers and remodellers: regulators of cellular differentiation". Nature Reviews Genetics. 15 (2): 93–106. doi:10.1038/nrg3607. PMC 3999985. PMID 24366184.
== External links ==
MBInfo - Chromatin
MBInfo - DNA Packaging
YouTube - Chromatin, Histones and Modifications
YouTube - Epigenetics Overview
Chromatin+remodeling at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/Chromatin_remodeling |
Nucleic acid structure refers to the structure of nucleic acids such as DNA and RNA. Chemically speaking, DNA and RNA are very similar. Nucleic acid structure is often divided into four different levels: primary, secondary, tertiary, and quaternary.
== Primary structure ==
Primary structure consists of a linear sequence of nucleotides that are linked together by phosphodiester bonds. It is this linear sequence of nucleotides that makes up the primary structure of DNA or RNA. Nucleotides consist of three components:
Nitrogenous base
Adenine
Guanine
Cytosine
Thymine (present in DNA only)
Uracil (present in RNA only)
A 5-carbon sugar, called deoxyribose in DNA and ribose in RNA.
One or more phosphate groups.
The nitrogenous bases adenine and guanine are purines and form a glycosidic bond between their N9 nitrogen and the 1'-OH group of the deoxyribose. Cytosine, thymine, and uracil are pyrimidines, hence the glycosidic bond forms between their N1 nitrogen and the 1'-OH of the deoxyribose. For both the purine and pyrimidine bases, the phosphate group forms a bond with the deoxyribose sugar through an ester bond between one of its negatively charged oxygen groups and the 5'-OH of the sugar. The polarity in DNA and RNA is derived from the oxygen and nitrogen atoms in the backbone. Nucleic acids are formed when nucleotides come together through phosphodiester linkages between the 5' and 3' carbon atoms.
A nucleic acid sequence is the order of nucleotides within a DNA (GACT) or RNA (GACU) molecule, denoted by a series of letters. Sequences are written from the 5' to the 3' end and determine the covalent structure of the entire molecule. A sequence is complementary to another sequence when the base at each position pairs with its counterpart and the two are read in reverse order. For example, a complementary sequence to AGCT is TCGA. DNA is double-stranded, containing both a sense strand and an antisense strand; the complementary sequence is therefore to the sense strand.
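The complementarity rule described above can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library; the function names are chosen for clarity only.

```python
# Base-wise complementarity for DNA: A<->T and G<->C. Sequences are
# conventionally written 5'->3', so the opposite strand's sequence is
# the complement read in reverse order.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq: str) -> str:
    """Complement each base, keeping the original orientation."""
    return "".join(COMPLEMENT[b] for b in seq)

def reverse_complement(seq: str) -> str:
    """The opposite strand written 5'->3': complement, then reverse."""
    return complement(seq)[::-1]

print(complement("AGCT"))          # TCGA, as in the example above
print(reverse_complement("AGCT"))  # AGCT -- this short sequence happens
                                   # to be its own reverse complement
```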
=== Complexes with alkali metal ions ===
There are three potential metal binding groups on nucleic acids: the phosphate, sugar, and base moieties. Solid-state structures of complexes with alkali metal ions have been reviewed.
== Secondary structure ==
=== DNA ===
Secondary structure is the set of interactions between bases, i.e., which parts of strands are bound to each other. In the DNA double helix, the two strands are held together by hydrogen bonds; the nucleotides on one strand base-pair with the nucleotides on the other strand. The secondary structure is responsible for the shape that the nucleic acid assumes. The bases in DNA are classified as purines and pyrimidines. The purines, adenine and guanine, have a double-ring structure: a six-membered and a five-membered ring containing nitrogen. The pyrimidines, cytosine and thymine, have a single-ring structure: a six-membered ring containing nitrogen. A purine base always pairs with a pyrimidine base (guanine (G) pairs with cytosine (C), and adenine (A) pairs with thymine (T) or uracil (U)). DNA's secondary structure is predominantly determined by base-pairing of the two polynucleotide strands wrapped around each other to form a double helix. Although the two strands are aligned by hydrogen bonds in base pairs, the stronger forces holding the two strands together are stacking interactions between the bases. These stacking interactions are stabilized by Van der Waals forces and hydrophobic interactions, and show a large amount of local structural variability. There are also two grooves in the double helix, called the major groove and the minor groove based on their relative size.
=== RNA ===
The secondary structure of RNA consists of a single polynucleotide. Base pairing in RNA occurs when RNA folds between complementary regions. Both single- and double-stranded regions are often found in RNA molecules.
The four basic elements in the secondary structure of RNA are:
Helices
Bulges
Loops
Junctions
The antiparallel strands form a helical shape. Bulges and internal loops are formed by separation of the double helical tract on either one strand (bulge) or on both strands (internal loops) by unpaired nucleotides.
The stem-loop or hairpin loop is the most common element of RNA secondary structure. A stem-loop is formed when the RNA chain folds back on itself to form a double-helical tract called the 'stem'; the unpaired nucleotides form a single-stranded region called the 'loop'. A tetraloop is a hairpin RNA structure with a four-nucleotide loop. There are three common families of tetraloop in ribosomal RNA: UNCG, GNRA, and CUUG (N is any of the four nucleotides and R is a purine). UNCG is the most stable tetraloop.
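The three tetraloop families above can be expressed as patterns using the IUPAC degeneracy codes (N = any nucleotide, R = purine, i.e. A or G). This is a hedged sketch for illustration; the function name and dictionary are not from any particular library.

```python
# Classify a 4-nt RNA loop sequence into one of the three common
# ribosomal tetraloop families, translating the family names into
# regular expressions via IUPAC codes: N -> [ACGU], R -> [AG].

import re
from typing import Optional

TETRALOOP_FAMILIES = {
    "UNCG": re.compile(r"^U[ACGU]CG$"),
    "GNRA": re.compile(r"^G[ACGU][AG]A$"),
    "CUUG": re.compile(r"^CUUG$"),
}

def classify_tetraloop(loop: str) -> Optional[str]:
    """Return the family name of a 4-nt loop, or None if it matches none."""
    for name, pattern in TETRALOOP_FAMILIES.items():
        if pattern.match(loop):
            return name
    return None

print(classify_tetraloop("UUCG"))  # UNCG -- the most stable tetraloop
print(classify_tetraloop("GAAA"))  # GNRA
print(classify_tetraloop("ACGU"))  # None
```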
A pseudoknot is an RNA secondary structure first identified in turnip yellow mosaic virus. It is minimally composed of two helical segments connected by single-stranded regions or loops. H-type fold pseudoknots are the best characterized. In the H-type fold, nucleotides in the hairpin loop pair with bases outside the hairpin stem, forming a second stem and loop and thus a pseudoknot with two stems and two loops. Pseudoknots are functional elements in RNA structure with diverse functions and are found in most classes of RNA.
The secondary structure of RNA can be predicted from experimental data on the secondary structure elements: helices, loops, and bulges. The DotKnot-PW method is used for comparative pseudoknot prediction; its main point is scoring the similarities found in stems, secondary elements and H-type pseudoknots.
== Tertiary structure ==
Tertiary structure refers to the locations of the atoms in three-dimensional space, taking into consideration geometrical and steric constraints. It is a higher order than the secondary structure, in which large-scale folding in a linear polymer occurs and the entire chain is folded into a specific 3-dimensional shape. There are 4 areas in which the structural forms of DNA can differ.
Handedness – right or left
Length of the helix turn
Number of base pairs per turn
Difference in size between the major and minor grooves
The tertiary arrangement of DNA's double helix in space includes B-DNA, A-DNA, and Z-DNA. Triple-stranded DNA structures have been demonstrated in repetitive polypurine:polypyrimidine Microsatellite sequences and Satellite DNA.
B-DNA is the most common form of DNA in vivo and is a narrower, more elongated helix than A-DNA. Its wide major groove makes it more accessible to proteins. On the other hand, it has a narrow minor groove. B-DNA's favored conformations occur at high water concentrations; hydration of the minor groove appears to favor B-DNA. B-DNA base pairs are nearly perpendicular to the helix axis. The sugar pucker, which determines whether the helix exists in the A-form or the B-form, is C2'-endo in B-DNA.
A-DNA is a form of the DNA duplex observed under dehydrating conditions. It is shorter and wider than B-DNA. RNA adopts this double-helical form, and RNA-DNA duplexes are mostly A-form, but B-form RNA-DNA duplexes have been observed. In localized single-strand dinucleotide contexts, RNA can also adopt the B-form without pairing to DNA. A-DNA has a deep, narrow major groove which does not make it easily accessible to proteins. On the other hand, its wide, shallow minor groove makes it accessible to proteins, but with lower information content than the major groove. Its favored conformation occurs at low water concentrations. A-DNA's base pairs are tilted relative to the helix axis and are displaced from the axis. The sugar pucker is C3'-endo, and in RNA the 2'-OH inhibits the C2'-endo conformation. Long considered little more than a laboratory artifice, A-DNA is now known to have several biological functions.
Z-DNA is a relatively rare left-handed double helix. Given the proper sequence and superhelical tension, it can be formed in vivo, but its function is unclear. Its helix is narrower and more elongated than that of A- or B-DNA. Z-DNA's major groove is not really a groove, and it has a narrow minor groove. The most favored conformation occurs at high salt concentrations. There are some base substitutions, but they require an alternating purine-pyrimidine sequence. The N2-amino of G hydrogen-bonds to the 5' phosphate, which explains the slow exchange of protons and the need for the G purine. Z-DNA base pairs are nearly perpendicular to the helix axis. Z-DNA does not contain single base pairs but rather a GpC repeat with P-P distances varying for GpC and CpG. On the GpC stack there is good base overlap, whereas on the CpG stack there is less overlap. Z-DNA's zigzag backbone is due to the C sugar conformation compensating for the G glycosidic bond conformation. The conformation of G is syn, C2'-endo; for C it is anti, C3'-endo.
A linear DNA molecule having free ends can rotate, to adjust to changes of various dynamic processes in the cell, by changing how many times the two chains of its double helix twist around each other. Some DNA molecules are circular and are topologically constrained. More recently circular RNA was described as well to be a natural pervasive class of nucleic acids, expressed in many organisms (see CircRNA).
A covalently closed, circular DNA (also known as cccDNA) is topologically constrained, as the number of times the chains coil around one another cannot change. Such cccDNA can be supercoiled, which is the tertiary structure of DNA. Supercoiling is characterized by the linking number, twist and writhe. The linking number (Lk) for circular DNA is defined as the number of times one strand would have to pass through the other strand to completely separate the two strands. The linking number for circular DNA can only be changed by breaking a covalent bond in one of the two strands. Always an integer, the linking number of a cccDNA is the sum of two components: twist (Tw) and writhe (Wr).
Lk = Tw + Wr
Twist is the number of times the two strands of DNA are twisted around each other. Writhe is the number of times the DNA helix crosses over itself. DNA in cells is negatively supercoiled and has the tendency to unwind. Hence the separation of strands is easier in negatively supercoiled DNA than in relaxed DNA. The two forms of supercoiled DNA are solenoidal and plectonemic. The plectonemic supercoil is found in prokaryotes, while solenoidal supercoiling is mostly seen in eukaryotes.
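The bookkeeping of Lk = Tw + Wr can be illustrated with a toy calculation. The plasmid size and the ~10.5 bp/turn helical repeat of relaxed B-DNA used below are illustrative round numbers, and the function name is not from any library:

```python
# Minimal sketch of the Lk = Tw + Wr relation for a covalently closed
# circular DNA. Lk is a topological invariant, so any change in twist
# must be compensated by writhe, and vice versa.

def writhe(linking_number: int, twist: float) -> float:
    """Wr = Lk - Tw, by rearranging Lk = Tw + Wr."""
    return linking_number - twist

# A hypothetical relaxed 4200-bp plasmid at ~10.5 bp/turn has Tw0 = 400.
tw0 = 4200 / 10.5          # 400.0 helical turns when relaxed
lk_relaxed = 400           # relaxed: Lk = Tw, so Wr = 0
lk_supercoiled = 396       # negative supercoiling: Lk below the relaxed value

print(writhe(lk_relaxed, tw0))      # 0.0 -> no supercoils
print(writhe(lk_supercoiled, tw0))  # -4.0 -> negative writhe (underwound)
```

The negative writhe in the second case corresponds to the negatively supercoiled state typical of DNA in cells, which favors strand separation.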
== Quaternary structure ==
The quaternary structure of nucleic acids is similar to that of protein quaternary structure. Although some of the concepts are not exactly the same, quaternary structure refers to a higher level of organization of nucleic acids, and to interactions of the nucleic acids with other molecules. The most commonly seen form of higher-level organization of nucleic acids is chromatin, which involves interactions of DNA with the small histone proteins. Quaternary structure also refers to the interactions between separate RNA units in the ribosome or spliceosome.
== See also ==
== References == | Wikipedia/DNA_structure |
DNA methylation is a biological process by which methyl groups are added to the DNA molecule. Methylation can change the activity of a DNA segment without changing the sequence. When located in a gene promoter, DNA methylation typically acts to repress gene transcription. In mammals, DNA methylation is essential for normal development and is associated with a number of key processes including genomic imprinting, X-chromosome inactivation, repression of transposable elements, aging, and carcinogenesis.
As of 2016, two nucleobases have been found on which natural, enzymatic DNA methylation takes place: adenine and cytosine. The modified bases are N6-methyladenine, 5-methylcytosine and N4-methylcytosine.
Cytosine methylation is widespread in both eukaryotes and prokaryotes, even though the rate of cytosine DNA methylation can differ greatly between species: 14% of cytosines are methylated in Arabidopsis thaliana, 4% to 8% in Physarum, 7.6% in Mus musculus, 2.3% in Escherichia coli, 0.03% in Drosophila; methylation is essentially undetectable in Dictyostelium and virtually absent (0.0002 to 0.0003%) from Caenorhabditis and fungi such as Saccharomyces cerevisiae and S. pombe (but not N. crassa). Adenine methylation has been observed in bacterial and plant DNA, and recently also in mammalian DNA, but has received considerably less attention.
Methylation of cytosine to form 5-methylcytosine occurs at the same 5 position on the pyrimidine ring where the DNA base thymine's methyl group is located; the same position distinguishes thymine from the analogous RNA base uracil, which has no methyl group. Spontaneous deamination of 5-methylcytosine converts it to thymine, resulting in a T:G mismatch. Repair mechanisms then correct it back to the original C:G pair; alternatively, they may substitute A for G, turning the original C:G pair into a T:A pair, effectively changing a base and introducing a mutation. This misincorporated base will not be corrected during DNA replication, as thymine is a DNA base. If the mismatch is not repaired and the cell enters the cell cycle, the strand carrying the T will be complemented by an A in one of the daughter cells, such that the mutation becomes permanent. The near-universal use of thymine exclusively in DNA and uracil exclusively in RNA may have evolved as an error-control mechanism, to facilitate the removal of uracils generated by the spontaneous deamination of cytosine. DNA methylation, as well as a number of contemporary DNA methyltransferases, is thought to have evolved from primitive RNA-world methylation activity, a view supported by several lines of evidence.
In plants and other organisms, DNA methylation is found in three different sequence contexts: CG (or CpG), CHG or CHH (where H corresponds to A, T or C). In mammals, however, DNA methylation is almost exclusively found in CpG dinucleotides, with the cytosines on both strands usually being methylated. Non-CpG methylation can however be observed in embryonic stem cells, and has also been indicated in neural development. Furthermore, non-CpG methylation has also been observed in hematopoietic progenitor cells, where it occurred mainly in a CpApC sequence context.
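The three sequence contexts can be counted along one strand with a short sketch. The function name is illustrative, not from any particular bioinformatics package:

```python
# Classify each cytosine on a DNA strand into the methylation contexts
# named above: CG, CHG, or CHH, where H stands for A, C, or T.

def cytosine_contexts(seq: str) -> dict:
    """Count cytosines by sequence context on the given strand (5'->3')."""
    counts = {"CG": 0, "CHG": 0, "CHH": 0}
    s = seq.upper()
    for i, base in enumerate(s):
        if base != "C":
            continue
        nxt, nxt2 = s[i+1:i+2], s[i+2:i+3]
        if nxt == "G":
            counts["CG"] += 1
        elif nxt in "ACT" and nxt2 == "G":
            counts["CHG"] += 1
        elif nxt in "ACT" and nxt2 in "ACT" and nxt2 != "":
            counts["CHH"] += 1
        # cytosines too close to the 3' end to have a full context are skipped
    return counts

print(cytosine_contexts("CCGGCATCAA"))  # {'CG': 1, 'CHG': 1, 'CHH': 2}
```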
== Conserved function of DNA methylation ==
The DNA methylation landscape of vertebrates is distinctive compared to that of other organisms. In mammals, around 75% of CpG dinucleotides are methylated in somatic cells, and DNA methylation appears as a default state that has to be specifically excluded from defined locations. By contrast, the genomes of most plants, invertebrates, fungi, or protists show "mosaic" methylation patterns, where only specific genomic elements are targeted, and they are characterized by the alternation of methylated and unmethylated domains.
High CpG methylation in mammalian genomes has an evolutionary cost because it increases the frequency of spontaneous mutations. Loss of amino-groups occurs with a high frequency for cytosines, with different consequences depending on their methylation. Methylated C residues spontaneously deaminate to form T residues over time; hence CpG dinucleotides steadily deaminate to TpG dinucleotides, which is evidenced by the under-representation of CpG dinucleotides in the human genome (they occur at only 21% of the expected frequency). (On the other hand, spontaneous deamination of unmethylated C residues gives rise to U residues, a change that is quickly recognized and repaired by the cell.)
=== CpG islands ===
In mammals, the only exception to this global CpG depletion resides in a specific category of GC- and CpG-rich sequences termed CpG islands, which are generally unmethylated and have therefore retained the expected CpG content. CpG islands are usually defined as regions with: 1) a length greater than 200 bp, 2) a G+C content greater than 50%, 3) a ratio of observed to expected CpG greater than 0.6, although other definitions are sometimes used. Excluding repeated sequences, there are around 25,000 CpG islands in the human genome, 75% of which are less than 850 bp long. They are major regulatory units: around 50% of CpG islands are located in gene promoter regions, while another 25% lie in gene bodies, often serving as alternative promoters. Reciprocally, around 60-70% of human genes have a CpG island in their promoter region. The majority of CpG islands are constitutively unmethylated and enriched for permissive chromatin modifications such as H3K4 methylation. In somatic tissues, only 10% of CpG islands are methylated, the majority of them located in intergenic and intragenic regions.
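The three-part definition above translates directly into code. In this hedged sketch, the observed/expected estimate G×C/N follows the commonly used Gardiner-Garden and Frommer formulation, and the function name is illustrative:

```python
# Test a sequence against the CpG-island criteria quoted above:
# length > 200 bp, G+C content > 50%, observed/expected CpG > 0.6.

def is_cpg_island(seq: str) -> bool:
    seq = seq.upper()
    n = len(seq)
    if n <= 200:
        return False
    g, c = seq.count("G"), seq.count("C")
    gc_content = (g + c) / n
    observed_cpg = seq.count("CG")
    expected_cpg = (g * c) / n   # Gardiner-Garden & Frommer estimate
    if expected_cpg == 0:
        return False             # no G or C at all -> cannot be an island
    return gc_content > 0.5 and observed_cpg / expected_cpg > 0.6

print(is_cpg_island("CG" * 150))  # True: 300 bp, 100% GC, high obs/exp ratio
print(is_cpg_island("AT" * 150))  # False: no CpG dinucleotides at all
```

Real annotation pipelines typically slide such a test along the genome in windows and merge overlapping hits, rather than scoring one fixed sequence.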
=== Repression of CpG-dense promoters ===
DNA methylation was probably present to some extent in early eukaryote ancestors. In virtually every organism analyzed, methylation in promoter regions correlates negatively with gene expression. CpG-dense promoters of actively transcribed genes are never methylated, but, reciprocally, transcriptionally silent genes do not necessarily carry a methylated promoter. In mouse and human, around 60-70% of genes have a CpG island in their promoter region, and most of these CpG islands remain unmethylated independently of the transcriptional activity of the gene, in both differentiated and undifferentiated cell types. Of note, whereas DNA methylation of CpG islands is unambiguously linked with transcriptional repression, the function of DNA methylation in CG-poor promoters remains unclear, although there is little evidence that it could be functionally relevant.
DNA methylation may affect the transcription of genes in two ways. First, the methylation of DNA itself may physically impede the binding of transcriptional proteins to the gene, and second, and likely more important, methylated DNA may be bound by proteins known as methyl-CpG-binding domain proteins (MBDs). MBD proteins then recruit additional proteins to the locus, such as histone deacetylases and other chromatin remodeling proteins that can modify histones, thereby forming compact, inactive chromatin, termed heterochromatin. This link between DNA methylation and chromatin structure is important. In particular, loss of methyl-CpG-binding protein 2 (MeCP2) has been implicated in Rett syndrome, and methyl-CpG-binding domain protein 2 (MBD2) mediates the transcriptional silencing of hypermethylated genes in cancer.
=== Repression of transposable elements ===
DNA methylation is a powerful transcriptional repressor, at least in CpG dense contexts. Transcriptional repression of protein-coding genes appears essentially limited to specific classes of genes that need to be silent permanently and in almost all tissues. While DNA methylation does not have the flexibility required for the fine-tuning of gene regulation, its stability is perfect to ensure the permanent silencing of transposable elements. Transposon control is one of the most ancient functions of DNA methylation that is shared by animals, plants and multiple protists. It is even suggested that DNA methylation evolved precisely for this purpose.
=== Genome expansion ===
DNA methylation of transposable elements has been known to be related to genome expansion. However, the evolutionary driver for genome expansion remains unknown. There is a clear correlation between genome size and CpG content, suggesting that DNA methylation of transposable elements led to a noticeable increase in the mass of DNA.
=== Methylation of the gene body of highly transcribed genes ===
A function of DNA methylation that appears even more conserved than transposon silencing is its positive correlation with gene expression: in almost all species where DNA methylation is present, it is especially enriched in the body of highly transcribed genes. The function of gene body methylation is not well understood. A body of evidence suggests that it could regulate splicing and suppress the activity of intragenic transcriptional units (cryptic promoters or transposable elements). Gene-body methylation appears closely tied to H3K36 methylation. In yeast and mammals, H3K36 methylation is highly enriched in the body of highly transcribed genes. In yeast at least, H3K36me3 recruits enzymes such as histone deacetylases to condense chromatin and prevent the activation of cryptic start sites. In mammals, the DNMT3a and DNMT3b PWWP domains bind to H3K36me3, and the two enzymes are recruited to the body of actively transcribed genes.
== In mammals ==
=== During embryonic development ===
DNA methylation patterns are largely erased and then re-established between generations in mammals. Almost all of the methylations from the parents are erased, first during gametogenesis, and again in early embryogenesis, with demethylation and remethylation occurring each time. Demethylation in early embryogenesis occurs in the preimplantation period in two stages – initially in the zygote, then during the first few embryonic replication cycles of morula and blastula. A wave of methylation then takes place during the implantation stage of the embryo, with CpG islands protected from methylation. This results in global repression and allows housekeeping genes to be expressed in all cells. In the post-implantation stage, methylation patterns are stage- and tissue-specific, with changes that would define each individual cell type lasting stably over a long period. Studies on rat limb buds during embryogenesis have further illustrated the dynamic nature of DNA methylation in development. In this context, variations in global DNA methylation were observed across different developmental stages and culture conditions, highlighting the intricate regulation of methylation during organogenesis and its potential implications for regenerative medicine strategies.
Whereas DNA methylation is not necessary per se for transcriptional silencing, it is thought nonetheless to represent a "locked" state that definitively inactivates transcription. In particular, DNA methylation appears critical for the maintenance of mono-allelic silencing in the context of genomic imprinting and X chromosome inactivation. In these cases, expressed and silent alleles differ by their methylation status, and loss of DNA methylation results in loss of imprinting and re-expression of Xist in somatic cells. During embryonic development, few genes change their methylation status, with the important exception of a number of genes specifically expressed in the germline. DNA methylation appears absolutely required in differentiated cells, as knockout of any of the three competent DNA methyltransferases results in embryonic or post-partum lethality. By contrast, DNA methylation is dispensable in undifferentiated cell types, such as the inner cell mass of the blastocyst, primordial germ cells or embryonic stem cells. Since DNA methylation appears to directly regulate only a limited number of genes, how precisely the absence of DNA methylation causes the death of differentiated cells remains an open question.
Due to the phenomenon of genomic imprinting, maternal and paternal genomes are differentially marked and must be properly reprogrammed every time they pass through the germline. Therefore, during gametogenesis, primordial germ cells must have their original biparental DNA methylation patterns erased and re-established based on the sex of the transmitting parent. After fertilization, the paternal and maternal genomes are once again demethylated and remethylated (except for differentially methylated regions associated with imprinted genes). This reprogramming is likely required for totipotency of the newly formed embryo and erasure of acquired epigenetic changes.
=== In cancer ===
In multiple disease processes, such as cancer, gene promoter CpG islands acquire abnormal hypermethylation, which results in transcriptional silencing that can be inherited by daughter cells following cell division. Alterations of DNA methylation have been recognized as an important component of cancer development. Hypomethylation, in general, arises earlier and is linked to chromosomal instability and loss of imprinting, whereas hypermethylation is associated with promoters and can arise secondary to gene (oncogene suppressor) silencing, but might be a target for epigenetic therapy. In developmental contexts, dynamic changes in DNA methylation patterns also have significant implications. For instance, in rat limb buds, shifts in methylation status were associated with different stages of chondrogenesis, suggesting a potential link between DNA methylation and the progression of certain developmental processes.
Global hypomethylation has also been implicated in the development and progression of cancer through different mechanisms. Typically, there is hypermethylation of tumor suppressor genes and hypomethylation of oncogenes.
Generally, in progression to cancer, hundreds of genes are silenced or activated. Although silencing of some genes in cancers occurs by mutation, a large proportion of carcinogenic gene silencing is a result of altered DNA methylation (see DNA methylation in cancer). DNA methylation causing silencing in cancer typically occurs at multiple CpG sites in the CpG islands that are present in the promoters of protein coding genes.
Altered expressions of microRNAs also silence or activate multiple genes in progression to cancer (see microRNAs in cancer). Altered microRNA expression occurs through hyper/hypo-methylation of CpG sites in CpG islands in promoters controlling transcription of the microRNAs.
Silencing of DNA repair genes through methylation of CpG islands in their promoters appears to be especially important in progression to cancer (see methylation of DNA repair genes in cancer).
=== In atherosclerosis ===
Epigenetic modifications such as DNA methylation have been implicated in cardiovascular disease, including atherosclerosis. In animal models of atherosclerosis, vascular tissue, as well as blood cells such as mononuclear blood cells, exhibit global hypomethylation with gene-specific areas of hypermethylation. DNA methylation polymorphisms may be used as an early biomarker of atherosclerosis since they are present before lesions are observed, which may provide an early tool for detection and risk prevention.
Two of the cell types targeted for DNA methylation polymorphisms are monocytes and lymphocytes, which experience an overall hypomethylation. One proposed mechanism behind this global hypomethylation is elevated homocysteine levels causing hyperhomocysteinemia, a known risk factor for cardiovascular disease. High plasma levels of homocysteine inhibit DNA methyltransferases, which causes hypomethylation. Hypomethylation of DNA affects genes that alter smooth muscle cell proliferation, cause endothelial cell dysfunction, and increase inflammatory mediators, all of which are critical in forming atherosclerotic lesions. High levels of homocysteine also result in hypermethylation of CpG islands in the promoter region of the estrogen receptor alpha (ERα) gene, causing its down regulation. ERα protects against atherosclerosis due to its action as a growth suppressor, causing the smooth muscle cells to remain in a quiescent state. Hypermethylation of the ERα promoter thus allows intimal smooth muscle cells to proliferate excessively and contribute to the development of the atherosclerotic lesion.
Another gene that experiences a change in methylation status in atherosclerosis is the monocarboxylate transporter (MCT3), which produces a protein responsible for the transport of lactate and other ketone bodies out of a number of cell types, including vascular smooth muscle cells. In atherosclerosis patients, there is an increase in methylation of the CpG islands in exon 2, which decreases MCT3 protein expression. The downregulation of MCT3 impairs lactate transport and significantly increases smooth muscle cell proliferation, which further contributes to the atherosclerotic lesion. An ex vivo experiment using the demethylating agent decitabine (5-aza-2′-deoxycytidine) showed induction of MCT3 expression in a dose-dependent manner, as all hypermethylated sites in the exon 2 CpG island became demethylated after treatment. This may serve as a novel therapeutic agent to treat atherosclerosis, although no human studies have been conducted thus far.
=== In heart failure ===
In addition to atherosclerosis described above, specific epigenetic changes have been identified in the failing human heart. These may vary by disease etiology. For example, in ischemic heart failure, DNA methylation changes have been linked to gene-expression changes that may drive the alterations in heart metabolism known to occur. Additional forms of heart failure (e.g. diabetic cardiomyopathy) and co-morbidities (e.g. obesity) must be explored to see how common these mechanisms are. Most strikingly, in the failing human heart these changes in DNA methylation are associated with racial and socioeconomic status, which further impact how gene expression is altered and may influence how the individual's heart failure should be treated.
=== In aging ===
In humans and other mammals, DNA methylation levels can be used to accurately estimate the age of tissues and cell types, forming an accurate epigenetic clock.
A longitudinal study of twin children showed that, between the ages of 5 and 10, there was divergence of methylation patterns due to environmental rather than genetic influences. There is a global loss of DNA methylation during aging.
A study that analyzed the complete DNA methylomes of CD4+ T cells from a newborn, a 26-year-old individual and a 103-year-old individual found that the loss of methylation is proportional to age. The hypomethylated CpGs observed in the centenarian DNA compared with the neonate covered all genomic compartments (promoters and intergenic, intronic and exonic regions). However, some genes become hypermethylated with age, including genes for the estrogen receptor, p16, insulin-like growth factor 2, ELOVL2 and FHL2.
=== In exercise ===
High intensity exercise has been shown to result in reduced DNA methylation in skeletal muscle. Promoter methylation of PGC-1α and PDK4 was immediately reduced after high intensity exercise, whereas PPAR-γ methylation was not reduced until three hours after exercise. At the same time, six months of exercise in previously sedentary middle-aged men resulted in increased methylation in adipose tissue. One study showed a possible increase in global genomic DNA methylation of white blood cells with more physical activity in non-Hispanics.
=== In B-cell differentiation ===
A study that investigated the methylome of B cells along their differentiation cycle, using whole-genome bisulfite sequencing (WGBS), showed progressive hypomethylation from the earliest to the most differentiated stages. The largest methylation difference is between the germinal center B cell and memory B cell stages. Furthermore, this study showed that B cell tumors and long-lived B cells share similarities in their DNA methylation signatures.
=== In the brain ===
Two reviews summarize evidence that DNA methylation alterations in brain neurons are important in learning and memory. Contextual fear conditioning (a form of associative learning) in animals such as mice and rats is rapid and is extremely robust in creating memories. In mice and rats, contextual fear conditioning is associated, within 1–24 hours, with altered methylation of several thousand DNA cytosines in genes of hippocampus neurons. Twenty-four hours after contextual fear conditioning, 9.2% of the genes in rat hippocampus neurons are differentially methylated. In mice, when examined four weeks after conditioning, the hippocampal methylations and demethylations had been reset to the original naive conditions. The hippocampus is needed to form memories, but memories are not stored there. In such mice, four weeks after contextual fear conditioning, substantial differential CpG methylations and demethylations occurred in cortical neurons during memory maintenance, and there were 1,223 differentially methylated genes in the anterior cingulate cortex. Mechanisms guiding new DNA methylations and demethylations in the hippocampus during memory establishment were summarized in 2022. That review also indicated the mechanisms by which the new patterns of methylation give rise to new patterns of messenger RNA expression. These new messenger RNAs are then transported by messenger RNP particles (neuronal granules) to the synapses of the neurons, where they can be translated into proteins. Active changes in neuronal DNA methylation and demethylation appear to act as controllers of synaptic scaling and glutamate receptor trafficking in learning and memory formation.
== DNA methyltransferases (in mammals) ==
In mammalian cells, DNA methylation occurs mainly at the C5 position of CpG dinucleotides and is carried out by two general classes of enzymatic activities – maintenance methylation and de novo methylation.
Maintenance methylation activity is necessary to preserve DNA methylation after every cellular DNA replication cycle. Without the DNA methyltransferase (DNMT), the replication machinery itself would produce daughter strands that are unmethylated and, over time, would lead to passive demethylation. DNMT1 is the proposed maintenance methyltransferase that is responsible for copying DNA methylation patterns to the daughter strands during DNA replication. Mouse models with both copies of DNMT1 deleted are embryonic lethal at approximately day 9, due to the requirement of DNMT1 activity for development in mammalian cells.
It is thought that DNMT3a and DNMT3b are the de novo methyltransferases that set up DNA methylation patterns early in development. DNMT3L is a protein that is homologous to the other DNMT3s but has no catalytic activity. Instead, DNMT3L assists the de novo methyltransferases by increasing their ability to bind to DNA and stimulating their activity. Mice and rats have a third functional de novo methyltransferase, DNMT3C, which evolved as a paralog of Dnmt3b by tandem duplication in the common ancestor of Muroidea rodents. DNMT3C catalyzes the methylation of promoters of transposable elements during early spermatogenesis, an activity shown to be essential for their epigenetic repression and for male fertility. It is not yet clear whether other mammals that lack DNMT3C (such as humans) rely on DNMT3B or DNMT3A for de novo methylation of transposable elements in the germline. Finally, DNMT2 (TRDMT1) has been identified as a DNA methyltransferase homolog, containing all 10 sequence motifs common to all DNA methyltransferases; however, DNMT2 (TRDMT1) does not methylate DNA but instead methylates cytosine-38 in the anticodon loop of aspartic acid transfer RNA.
Since some tumor suppressor genes are silenced by DNA methylation during carcinogenesis, there have been attempts to re-express these genes by inhibiting the DNMTs. 5-Aza-2'-deoxycytidine (decitabine) is a nucleoside analog that inhibits DNMTs by trapping them in a covalent complex on DNA by preventing the β-elimination step of catalysis, thus resulting in the enzymes' degradation. However, for decitabine to be active, it must be incorporated into the genome of the cell, which can cause mutations in the daughter cells if the cell does not die. In addition, decitabine is toxic to the bone marrow, which limits the size of its therapeutic window. These pitfalls have led to the development of antisense RNA therapies that target the DNMTs by degrading their mRNAs and preventing their translation. However, it is currently unclear whether targeting DNMT1 alone is sufficient to reactivate tumor suppressor genes silenced by DNA methylation.
== In plants ==
Significant progress has been made in understanding DNA methylation in the model plant Arabidopsis thaliana. DNA methylation in plants differs from that in mammals: while DNA methylation in mammals mainly occurs on the cytosine nucleotide in a CpG site, in plants the cytosine can be methylated at CpG, CpHpG, and CpHpH sites, where H represents any nucleotide except guanine. Overall, Arabidopsis DNA is highly methylated; mass spectrometry analysis estimated 14% of cytosines to be modified. Later, bisulfite sequencing data estimated that around 25% of Arabidopsis CG sites are methylated, although these levels vary with the geographic location of Arabidopsis accessions (plants from the north are more highly methylated than southern accessions).
The principal Arabidopsis DNA methyltransferase enzymes, which transfer and covalently attach methyl groups onto DNA, are DRM2, MET1, and CMT3. Both the DRM2 and MET1 proteins share significant homology to the mammalian methyltransferases DNMT3 and DNMT1, respectively, whereas the CMT3 protein is unique to the plant kingdom. There are currently two classes of DNA methyltransferases: 1) the de novo class or enzymes that create new methylation marks on the DNA; 2) a maintenance class that recognizes the methylation marks on the parental strand of DNA and transfers new methylation to the daughter strands after DNA replication. DRM2 is the only enzyme that has been implicated as a de novo DNA methyltransferase. DRM2 has also been shown, along with MET1 and CMT3 to be involved in maintaining methylation marks through DNA replication. Other DNA methyltransferases are expressed in plants but have no known function (see the Chromatin Database).
Genome-wide levels of DNA methylation vary widely between plant species, and Arabidopsis cytosines tend to be less densely methylated than those in other plants. For example, ~92.5% of CpG cytosines are methylated in Beta vulgaris. The patterns of methylation also differ between cytosine sequence contexts; universally, CpG methylation is higher than CHG and CHH methylation, and CpG methylation can be found in both active genes and transposable elements, while CHG and CHH are usually relegated to silenced transposable elements.
It is not clear how the cell determines the locations of de novo DNA methylation, but evidence suggests that for some locations, RNA-directed DNA methylation (RdDM) is involved. In RdDM, specific RNA transcripts are produced from a genomic DNA template, and this RNA forms secondary structures called double-stranded RNA molecules. The double-stranded RNAs, through either the small interfering RNA (siRNA) or microRNA (miRNA) pathways, direct de novo DNA methylation of the original genomic location that produced the RNA. This sort of mechanism is thought to be important in cellular defense against RNA viruses and/or transposons, both of which often form a double-stranded RNA that can be mutagenic to the host genome. By methylating their genomic locations, through an as yet poorly understood mechanism, they are shut off and are no longer active in the cell, protecting the genome from their mutagenic effect. Recently, DNA methylation was described as the main determinant of the formation of embryogenic cultures from explants in woody plants, and it is regarded as the main mechanism explaining the poor response of mature explants to somatic embryogenesis in these plants (Isah 2016).
== In insects ==
Diverse orders of insects show varied patterns of DNA methylation, from almost undetectable levels in flies to low levels in butterflies and higher in true bugs and some cockroaches (up to 14% of all CG sites in Blattella asahinai).
Functional DNA methylation has been discovered in honey bees. DNA methylation marks are found mainly on gene bodies, and the current view is that DNA methylation functions in gene regulation via alternative splicing.
DNA methylation levels in Drosophila melanogaster are nearly undetectable. Sensitive methods applied to Drosophila DNA suggest levels in the range of 0.1–0.3% of total cytosine. A 2014 study found that the low level of methylation in fruit flies appeared "at specific short motifs and is independent of DNMT2 activity." Furthermore, highly sensitive mass spectrometry approaches have now demonstrated the presence of low (0.07%) but significant levels of adenine methylation during the earliest stages of Drosophila embryogenesis.
== In fungi ==
Many fungi have low levels (0.1 to 0.5%) of cytosine methylation, whereas other fungi have as much as 5% of the genome methylated. This value seems to vary both among species and among isolates of the same species. There is also evidence that DNA methylation may be involved in state-specific control of gene expression in fungi. However, using ultra-sensitive mass spectrometry with a detection limit of 250 attomoles, DNA methylation was not confirmed in single-celled yeast species such as Saccharomyces cerevisiae or Schizosaccharomyces pombe, indicating that these yeasts do not possess this DNA modification.
Although brewers' yeast (Saccharomyces), fission yeast (Schizosaccharomyces), and Aspergillus flavus have no detectable DNA methylation, the model filamentous fungus Neurospora crassa has a well-characterized methylation system. Several genes control methylation in Neurospora, and mutation of the DNA methyltransferase dim-2 eliminates all DNA methylation but does not affect growth or sexual reproduction. While the Neurospora genome has little repeated DNA, half of the methylation occurs in repeated DNA, including transposon relics and centromeric DNA. The ability to evaluate other important phenomena in a DNA methylase-deficient genetic background makes Neurospora an important system in which to study DNA methylation.
== In other eukaryotes ==
DNA methylation is largely absent from Dictyostelium discoideum, where it appears to occur at about 0.006% of cytosines. In contrast, DNA methylation is widely distributed in Physarum polycephalum, where 5-methylcytosine makes up as much as 8% of total cytosine.
== In bacteria ==
Adenine or cytosine methylation is mediated by restriction modification systems in a number of bacteria, in which specific DNA sequences are methylated periodically throughout the genome. A methylase is the enzyme that recognizes a specific sequence and methylates one of the bases in or near that sequence. Foreign DNAs (which are not methylated in this manner) introduced into the cell are recognized by sequence-specific restriction enzymes and cleaved. Bacterial genomic DNA is not recognized by these restriction enzymes. The methylation of native DNA acts as a sort of primitive immune system, allowing the bacteria to protect themselves from infection by bacteriophage.
E. coli DNA adenine methyltransferase (Dam) is an enzyme of ~32 kDa that does not belong to a restriction/modification system. The target recognition sequence for E. coli Dam is GATC, as the methylation occurs at the N6 position of the adenine in this sequence (G meATC). The three base pairs flanking each side of this site also influence DNA–Dam binding. Dam plays several key roles in bacterial processes, including mismatch repair, the timing of DNA replication, and gene expression. As a result of DNA replication, the status of GATC sites in the E. coli genome changes from fully methylated to hemimethylated. This is because adenine introduced into the new DNA strand is unmethylated. Re-methylation occurs within two to four seconds, during which time replication errors in the new strand are repaired. Methylation, or its absence, is the marker that allows the repair apparatus of the cell to differentiate between the template and nascent strands. It has been shown that altering Dam activity in bacteria results in an increased spontaneous mutation rate. Bacterial viability is compromised in dam mutants that also lack certain other DNA repair enzymes, providing further evidence for the role of Dam in DNA repair.
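The GATC motif recognized by Dam is palindromic, so a fully methylated site carries methyl-adenine on both strands, and replication of either strand leaves the site hemimethylated until Dam re-methylates it. A minimal sketch (the sequence is invented, and this is only an illustration of site scanning, not of Dam chemistry):

```python
# Toy illustration of Dam target sites: GATC is its own reverse complement,
# so each site found on one strand pairs with a site on the other strand.
# After replication, the adenine in the new strand is unmethylated, leaving
# every site transiently hemimethylated.

def gatc_sites(seq):
    """Return 0-based start positions of GATC sites in the given strand."""
    return [i for i in range(len(seq) - 3) if seq[i:i + 4] == "GATC"]

template = "AAGATCTTGGATCA"  # made-up sequence with two Dam sites
print(gatc_sites(template))  # -> [2, 9]
```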
One region of the DNA that keeps its hemimethylated status for longer is the origin of replication, which has an abundance of GATC sites. This is central to the bacterial mechanism for timing DNA replication. SeqA binds to the origin of replication, sequestering it and thus preventing methylation. Because hemimethylated origins of replication are inactive, this mechanism limits DNA replication to once per cell cycle.
Expression of certain genes, for example, those coding for pilus expression in E. coli, is regulated by the methylation of GATC sites in the promoter region of the gene operon. The cells' environmental conditions just after DNA replication determine whether Dam is blocked from methylating a region proximal to or distal from the promoter region. Once the pattern of methylation has been created, the pilus gene transcription is locked in the on or off position until the DNA is again replicated. In E. coli, these pili operons have important roles in virulence in urinary tract infections. It has been proposed that inhibitors of Dam may function as antibiotics.
On the other hand, DNA cytosine methylase targets CCAGG and CCTGG sites to methylate cytosine at the C5 position (CmeC(A/T)GG). The other methylase enzyme, EcoKI, causes methylation of adenines in the sequences AAC(N6)GTGC and GCAC(N6)GTT.
In Clostridioides difficile, DNA methylation at the target motif CAAAAA was shown to impact sporulation, a key step in disease transmission, as well as cell length, biofilm formation and host colonization.
=== Molecular cloning ===
Most strains used by molecular biologists are derivatives of E. coli K-12 and possess both Dam and Dcm, but strains that are dam-/dcm- (lacking activity of either methylase) are commercially available. In fact, DNA extracted from dam+/dcm+ strains can be demethylated by transforming it into dam-/dcm- strains. This allows the DNA to be digested by methylation-sensitive restriction enzymes that would otherwise fail to cut their methylated recognition sites.
The restriction enzyme DpnI can recognize 5'-GmeATC-3' sites and digest the methylated DNA. Being such a short motif, it occurs frequently in sequences by chance, and as such its primary use for researchers is to degrade template DNA following PCRs (PCR products lack methylation, as no methylases are present in the reaction). Similarly, some commercially available restriction enzymes are sensitive to methylation at their cognate restriction sites and must as mentioned previously be used on DNA passed through a dam-/dcm- strain to allow cutting.
== Detection ==
DNA methylation can be detected by the following assays currently used in scientific research:
Mass spectrometry is a sensitive and reliable analytical method to detect DNA methylation. In general, however, MS is not informative about the sequence context of the methylation and is thus of limited use for studying the function of this DNA modification.
Methylation-Specific PCR (MSP), which is based on a chemical reaction of sodium bisulfite with DNA that converts unmethylated cytosines of CpG dinucleotides to uracil (CpG to UpG), followed by traditional PCR. Methylated cytosines are not converted in this process, and primers designed to overlap the CpG site of interest allow one to determine its status as methylated or unmethylated.
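The conversion logic underlying bisulfite-based methods can be sketched in a few lines. The following toy function (sequence and methylated positions are invented for illustration; this models only the readout, not the chemistry) shows why only unmethylated cytosines appear as thymine after PCR:

```python
# Hedged sketch of the sodium bisulfite conversion step: unmethylated C is
# deaminated to U and amplified as T, while 5-methylcytosine is protected
# and remains C. Input data are hypothetical.

def bisulfite_convert(seq, methylated_positions):
    """Return the post-PCR sequence of a bisulfite-converted top strand.

    seq                  -- DNA string (top strand)
    methylated_positions -- set of 0-based indices of methylated cytosines
    """
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")   # unmethylated C -> U, read as T after PCR
        else:
            out.append(base)  # methylated C and all other bases unchanged
    return "".join(out)

# The methylated CpG cytosine at index 2 survives; the unmethylated
# cytosine at index 6 is converted.
print(bisulfite_convert("ATCGATCG", {2}))  # -> "ATCGATTG"
```

MSP primers exploit exactly this sequence difference: a primer matching the converted (T-containing) sequence amplifies only unmethylated templates, and vice versa.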
Whole genome bisulfite sequencing, also known as BS-Seq, which is a high-throughput genome-wide analysis of DNA methylation. It is based on the aforementioned sodium bisulfite conversion of genomic DNA, which is then sequenced on a Next-generation sequencing platform. The sequences obtained are then re-aligned to the reference genome to determine the methylation status of CpG dinucleotides based on mismatches resulting from the conversion of unmethylated cytosines into uracil.
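After alignment, the per-site methylation call reduces to counting: reads showing C at a reference cytosine were protected (methylated), reads showing T were converted (unmethylated). A minimal sketch, with a made-up toy pileup rather than real aligner output:

```python
# Hedged sketch of the methylation call made after aligning bisulfite reads
# to a reference genome. The pileup string is hypothetical example data.

from collections import Counter

def methylation_level(pileup_bases):
    """Fraction of reads supporting methylation at one reference cytosine.

    pileup_bases -- iterable of bases observed at this position, e.g. "CCT"
    Returns None when the position has no informative coverage.
    """
    counts = Counter(pileup_bases)
    covered = counts["C"] + counts["T"]
    return counts["C"] / covered if covered else None

print(methylation_level("CCCT"))  # 3 of 4 reads methylated -> 0.75
```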
Enzymatic methyl-seq (EM-seq) works similarly to bisulfite sequencing, but uses enzymes, APOBEC and TET2, to deaminate unmethylated cytosine into uracil prior to sequencing. EM-seq libraries are less prone to DNA damage than bisulfite-treated libraries.
Reduced representation bisulfite sequencing, also known as RRBS, exists in several protocol variants. The original RRBS protocol targets around 10% of the methylome and requires a reference genome. Later protocols sequenced a smaller portion of the genome and allowed higher sample multiplexing. EpiGBS was the first protocol that allowed 96 samples to be multiplexed in one lane of Illumina sequencing and dispensed with the need for a reference genome: a de novo reference constructed from the Watson and Crick reads makes simultaneous population screening of SNPs and single methylation polymorphisms (SMPs) possible.
The HELP assay, which is based on restriction enzymes' differential ability to recognize and cleave methylated and unmethylated CpG DNA sites.
GLAD-PCR assay, which is based on a new type of enzymes – site-specific methyl-directed DNA endonucleases, which hydrolyze only methylated DNA.
ChIP-on-chip assays, which are based on the ability of commercially prepared antibodies to bind to DNA methylation-associated proteins like MeCP2.
Restriction landmark genomic scanning, a complicated and now rarely used assay based upon restriction enzymes' differential recognition of methylated and unmethylated CpG sites; the assay is similar in concept to the HELP assay.
Methylated DNA immunoprecipitation (MeDIP), analogous to chromatin immunoprecipitation, immunoprecipitation is used to isolate methylated DNA fragments for input into DNA detection methods such as DNA microarrays (MeDIP-chip) or DNA sequencing (MeDIP-seq).
Pyrosequencing of bisulfite treated DNA. This is the sequencing of an amplicon made by a normal forward primer but a biotinylated reverse primer to PCR the gene of choice. The Pyrosequencer then analyses the sample by denaturing the DNA and adding one nucleotide at a time to the mix according to a sequence given by the user. If there is a mismatch, it is recorded and the percentage of DNA for which the mismatch is present is noted. This gives the user a percentage of methylation per CpG island.
Molecular break light assay for DNA adenine methyltransferase activity – an assay that relies on the specificity of the restriction enzyme DpnI for fully methylated (adenine methylation) GATC sites in an oligonucleotide labeled with a fluorophore and quencher. The adenine methyltransferase methylates the oligonucleotide making it a substrate for DpnI. Cutting of the oligonucleotide by DpnI gives rise to a fluorescence increase.
Methyl Sensitive Southern Blotting is similar to the HELP assay, although uses Southern blotting techniques to probe gene-specific differences in methylation using restriction digests. This technique is used to evaluate local methylation near the binding site for the probe.
MethylCpG Binding Proteins (MBPs) and fusion proteins containing just the Methyl Binding Domain (MBD) are used to separate native DNA into methylated and unmethylated fractions. The percentage methylation of individual CpG islands can be determined by quantifying the amount of the target in each fraction. Extremely sensitive detection can be achieved in FFPE tissues with abscription-based detection.
High Resolution Melt Analysis (HRM or HRMA) is a post-PCR analytical technique. The target DNA is treated with sodium bisulfite, which chemically converts unmethylated cytosines into uracils while methylated cytosines are preserved. PCR amplification is then carried out with primers designed to amplify both methylated and unmethylated templates. After amplification, sequences derived from highly methylated DNA retain more CpG sites than those from unmethylated templates, resulting in a different melting temperature that can be used for quantitative methylation detection.
Ancient DNA methylation reconstruction, a method to reconstruct high-resolution DNA methylation from ancient DNA samples. The method is based on the natural degradation processes that occur in ancient DNA: with time, methylated cytosines are degraded into thymines, whereas unmethylated cytosines are degraded into uracils. This asymmetry in degradation signals was used to reconstruct the full methylation maps of the Neanderthal and the Denisovan. In September 2019, researchers published a novel method to infer morphological traits from DNA methylation data. The authors were able to show that linking down-regulated genes to phenotypes of monogenic diseases, where one or two copies of a gene are perturbed, allows for ~85% accuracy in reconstructing anatomical traits directly from DNA methylation maps.
Methylation Sensitive Single Nucleotide Primer Extension Assay (msSNuPE), which uses internal primers annealing straight 5' of the nucleotide to be detected.
Illumina Methylation Assay measures locus-specific DNA methylation using array hybridization. Bisulfite-treated DNA is hybridized to probes on "BeadChips." Single-base extension with labeled probes is used to determine the methylation status of target sites. In 2016, the Infinium MethylationEPIC BeadChip was released, which interrogates over 850,000 methylation sites across the human genome.
== Differentially methylated regions (DMRs) ==
Differentially methylated regions, which are genomic regions with different methylation statuses among multiple samples (tissues, cells, individuals or others), are regarded as possible functional regions involved in gene transcriptional regulation. The identification of DMRs among multiple tissues (T-DMRs) provides a comprehensive survey of epigenetic differences among human tissues. For example, methylated regions unique to a particular tissue allow investigators to differentiate between tissue types, such as semen and vaginal fluid. Research by Lee et al. showed that DACT1 and USP49 positively identified semen by examining T-DMRs. The use of T-DMRs has proven useful in the identification of various body fluids found at crime scenes. Researchers in the forensic field are currently seeking novel T-DMRs in genes to use as markers in forensic DNA analysis. DMRs between cancer and normal samples (C-DMRs) demonstrate the aberrant methylation in cancers. It is well known that DNA methylation is associated with cell differentiation and proliferation. Multiple DMRs have been found in developmental stages (D-DMRs) and in reprogramming (R-DMRs). In addition, there are intra-individual DMRs (Intra-DMRs) with longitudinal changes in global DNA methylation as an individual ages. There are also inter-individual DMRs (Inter-DMRs) with different methylation patterns among multiple individuals.
QDMR (Quantitative Differentially Methylated Regions) is a quantitative approach to quantify methylation difference and identify DMRs from genome-wide methylation profiles by adapting Shannon entropy. The platform-free and species-free nature of QDMR makes it potentially applicable to various methylation data. This approach provides an effective tool for the high-throughput identification of the functional regions involved in epigenetic regulation. QDMR can be used as an effective tool for the quantification of methylation difference and identification of DMRs across multiple samples.
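The entropy idea behind QDMR can be illustrated with a minimal sketch (this is not the published algorithm, which adapts and normalizes the entropy measure; the methylation values below are invented). Methylation levels for one region across samples are normalized into a distribution and scored with Shannon entropy: uniform methylation across samples gives maximal entropy, while sample-specific differences lower it, flagging a candidate DMR:

```python
# Hedged sketch of entropy-based DMR scoring across samples. A region
# methylated equally in every sample scores maximal entropy; a region
# hypomethylated in most samples scores lower, marking differential
# methylation. Methylation levels are hypothetical.

import math

def shannon_entropy(levels):
    """Entropy (bits) of methylation levels for one region across samples."""
    total = sum(levels)
    probs = [v / total for v in levels if v > 0]
    return -sum(p * math.log2(p) for p in probs)

uniform = [0.8, 0.8, 0.8, 0.8]      # same methylation in all four samples
specific = [0.8, 0.05, 0.05, 0.05]  # hypomethylated in three samples

print(shannon_entropy(uniform) > shannon_entropy(specific))  # -> True
```

Regions whose entropy falls below a chosen threshold would then be reported as DMRs; QDMR itself derives such thresholds from genome-wide methylation profiles.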
Gene-set analysis (a.k.a. pathway analysis; usually performed with tools such as DAVID, GoSeq or GSEA) has been shown to be severely biased when applied to high-throughput methylation data (e.g. MeDIP-seq, MeDIP-ChIP, HELP-seq etc.), and a wide range of studies have thus mistakenly reported hyper-methylation of genes related to development and differentiation; it has been suggested that this can be corrected using sample label permutations or using a statistical model to control for differences in the numbers of CpG probes / CpG sites that target each gene.
== DNA methylation marks ==
DNA methylation marks – genomic regions with specific methylation patterns in a specific biological state such as a tissue, cell type, or individual – are regarded as possible functional regions involved in gene transcriptional regulation. Although various human cell types may have the same genome, these cells have different methylomes. The systematic identification and characterization of methylation marks across cell types are crucial to understanding the complex regulatory network for cell fate determination. Hongbo Liu et al. proposed an entropy-based framework, termed SMART, to integrate whole genome bisulfite sequencing methylomes across 42 human tissues/cells, identifying 757,887 genome segments. Nearly 75% of the segments showed uniform methylation across all cell types. From the remaining 25% of the segments, they identified cell type-specific hypo/hypermethylation marks that were specifically hypo/hypermethylated in a minority of cell types using a statistical approach and presented an atlas of human methylation marks. Further analysis revealed that the cell type-specific hypomethylation marks were enriched for H3K27ac and transcription factor binding sites in a cell type-specific manner. In particular, they observed that cell type-specific hypomethylation marks are associated with the cell type-specific super-enhancers that drive the expression of cell identity genes. This framework provides a complementary, functional annotation of the human genome and helps to elucidate the critical features and functions of cell type-specific hypomethylation.
The entropy-based Specific Methylation Analysis and Report Tool ("SMART") focuses on integrating a large number of DNA methylomes for the de novo identification of cell type-specific methylation marks. The latest version of SMART provides three main functions: de novo identification of differentially methylated regions (DMRs) by genome segmentation, identification of DMRs from predefined regions of interest, and identification of differentially methylated CpG sites.
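The entropy idea behind such frameworks can be illustrated with Shannon entropy over methylation levels across cell types. The exact statistic used by SMART differs, so treat this as a conceptual sketch with illustrative names:

```python
import math

def methylation_entropy(levels):
    """Shannon entropy (in bits) of a region's methylation levels across
    cell types. Low entropy means methylation is concentrated in a few
    cell types (cell type-specific); the maximum, log2(n) for n cell
    types, means uniform methylation. A conceptual sketch, not SMART's
    exact formula."""
    total = sum(levels)
    if total == 0:
        return 0.0
    probs = [x / total for x in levels if x > 0]
    return -sum(p * math.log2(p) for p in probs)
```

For a region uniformly methylated across four cell types the entropy is 2 bits (the maximum for n = 4), while a region methylated in a single cell type scores 0, flagging it as a candidate cell type-specific mark.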
=== In identification and detection of body fluids ===
DNA methylation allows several tissues to be analyzed in one assay, as well as small amounts of body fluid to be identified from extracted DNA. The two usual approaches to DNA methylation analysis are methylation-sensitive restriction enzymes and treatment with sodium bisulphite. Methylation-sensitive restriction enzymes work by cleaving specific CpG (cytosine and guanine separated by only one phosphate group) recognition sites when the CpG is methylated. With bisulphite treatment, unmethylated cytosines are converted to uracil while methylated cytosines remain unchanged. In particular, methylation profiles can provide insight into when or how body fluids were left at crime scenes, identify the kind of body fluid, and approximate the age, gender, and phenotypic characteristics of perpetrators. Research indicates various markers that can be used for DNA methylation. Deciding which marker to use for an assay is one of the first steps in the identification of body fluids. In general, markers are selected by examining prior research; identification markers should give a positive result for only one type of cell. One area of the chromosome that is a focus of DNA methylation analysis is tissue-specific differentially methylated regions (T-DMRs). The degree of methylation of T-DMRs varies depending on the body fluid. One research team developed a two-fold marker system: the first marker is methylated only in the target fluid, while the second is methylated in the rest of the fluids. For instance, if venous blood marker A is un-methylated and venous blood marker B is methylated in a fluid, it indicates the presence of only venous blood. In contrast, if venous blood marker A is methylated and venous blood marker B is un-methylated in some fluid, then that indicates venous blood is in a mixture of fluids. Some examples of DNA methylation markers are Mens1 (menstrual blood), Spei1 (saliva), and Sperm2 (seminal fluid).
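The two-marker decision logic in the venous-blood example can be written out directly. The function and its return labels simply mirror the example as stated and are not from any published assay:

```python
def classify_target_fluid(marker_a_methylated: bool,
                          marker_b_methylated: bool) -> str:
    """Interpret the two-marker system for one target fluid, following the
    venous-blood example verbatim (marker A: methylated only in the target
    fluid; marker B: methylated in all other fluids). Illustrative only."""
    if not marker_a_methylated and marker_b_methylated:
        return "target fluid only"
    if marker_a_methylated and not marker_b_methylated:
        return "target fluid present in a mixture"
    return "inconclusive"
```

Running the two marker states from the example through this function reproduces the stated interpretations; any other combination is left as inconclusive.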
DNA methylation offers relatively good sensitivity for identifying and detecting body fluids. In one study, only ten nanograms of a sample were necessary to obtain successful results. DNA methylation also provides good discernment of mixed samples, since it involves markers that give "on or off" signals. DNA methylation is not impervious to external conditions, but even under degraded conditions the markers are stable enough that there are still noticeable differences between degraded samples and control samples. Specifically, one study found no noticeable changes in methylation patterns over an extensive period of time.
The detection of DNA methylation in cell-free DNA and other body fluids has recently become one of the main approaches to liquid biopsy. In particular, the identification of tissue-specific and disease-specific patterns allows for non-invasive detection and monitoring of diseases such as cancer. Compared to strictly genomic approaches to liquid biopsy, DNA methylation profiling offers a larger number of differentially methylated CpG sites and differentially methylated regions (DMRs), potentially enhancing its sensitivity. Signal deconvolution algorithms based on DNA methylation have been successfully applied to cell-free DNA to nominate the tissue of origin in cancers of unknown primary, and to detect allograft rejection and resistance to hormone therapy.
== Computational prediction ==
DNA methylation can also be detected by computational models through sophisticated algorithms and methods. Computational models can facilitate the global profiling of DNA methylation across chromosomes, and such models are often faster and cheaper to run than biological assays. Up-to-date computational models include those of Bhasin et al., Bock et al., and Zheng et al. Together with biological assays, these methods greatly facilitate DNA methylation analysis.
== See also ==
5-Hydroxymethylcytosine
5-Methylcytosine
7-Methylguanosine
Decrease in DNA Methylation I (DDM1), a plant methylation gene
Demethylating agent
Differentially methylated regions
DNA demethylation
DNA methylation reprogramming
Epigenetics, of which DNA methylation is a significant contributor
Epigenetic clock, a method to calculate age based on DNA methylation
Epigenome
Genome
Genomic imprinting, an inherited repression of an allele, relying on DNA methylation
MethBase DNA Methylation database hosted on the UCSC Genome Browser
MethDB DNA Methylation database
N6-Methyladenosine
Protein methylation
Methylenetetrahydrofolate reductase deficiency
== References ==
== Further reading ==
== External links ==
DNA+Methylation at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
ENCODE threads explorer Non-coding RNA characterization. Nature (journal)
PCMdb Pancreatic Cancer Methylation Database.
SMART Specific Methylation Analysis and Report Tool
Human Methylation Mark Atlas
DiseaseMeth Archived 2020-01-27 at the Wayback Machine Human disease methylation database
EWAS Atlas A knowledgebase of epigenome-wide association studies | Wikipedia/DNA_methylation |
National DNA Day is a United States holiday celebrated on April 25. It commemorates the day in 1953 when James Watson, Francis Crick, Maurice Wilkins, Rosalind Franklin and colleagues published papers in the journal Nature on the structure of DNA. Furthermore, in early April 2003 it was declared that the Human Genome Project was very close to completion, and "the remaining tiny gaps were considered too costly to fill."
In the United States, DNA Day was first celebrated on April 25, 2003, by proclamation of both the Senate and the House of Representatives. However, they only declared a one-time celebration, not an annual holiday. Every year from 2003 onward, annual DNA Day celebrations have been organized by the National Human Genome Research Institute (NHGRI), starting as early as April 23 in 2010, April 15 in 2011 and April 20 in 2012. April 25 has since been declared "International DNA Day" and "World DNA Day" by several groups.
Genealogical DNA testing companies and genetic genealogy publishers run annual sales around DNA Day, seeking interest from the public and promoting their services.
== References ==
== External links ==
National DNA Day (official website)
National DNA Day on Facebook | Wikipedia/DNA_Day |
DNA condensation refers to the process of compacting DNA molecules in vitro or in vivo. Mechanistic details of DNA packing are essential for its functioning in the process of gene regulation in living systems. Condensed DNA often has surprising properties, which one would not predict from classical concepts of dilute solutions. Therefore, DNA condensation in vitro serves as a model system for many processes of physics, biochemistry and biology. In addition, DNA condensation has many potential applications in medicine and biotechnology.
DNA diameter is about 2 nm, while the length of a stretched single molecule may be up to several dozens of centimetres depending on the organism. Many features of the DNA double helix contribute to its large stiffness, including the mechanical properties of the sugar-phosphate backbone, electrostatic repulsion between phosphates (DNA bears on average one elementary negative charge per 0.17 nm of the double helix), stacking interactions between the bases of each individual strand, and strand-strand interactions. DNA is one of the stiffest natural polymers, yet it is also one of the longest molecules. The persistence length of double-stranded DNA (dsDNA) is a measure of its stiffness or flexibility, which depends on the DNA sequence and the surrounding environment, including factors like salt concentration, pH, and temperature. Under physiological conditions (e.g., near-neutral pH and physiological salt concentrations), the persistence length of dsDNA is generally around 50 nm, which corresponds to approximately 150 base pairs. This means that at large distances DNA can be considered a flexible rope, and on a short scale a stiff rod. Like a garden hose, unpacked DNA would randomly occupy a much larger volume than when it is orderly packed. Mathematically, for a non-interacting flexible chain randomly diffusing in 3D, the end-to-end distance scales as the square root of the polymer length. For real polymers such as DNA, this gives only a very rough estimate; what is important is that the space available for the DNA in vivo is much smaller than the space that it would occupy in the case of free diffusion in solution. To cope with volume constraints, DNA can pack itself in the appropriate solution conditions with the help of ions and other molecules. Usually, DNA condensation is defined as "the collapse of extended DNA chains into compact, orderly particles containing only one or a few molecules".
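The square-root scaling of the end-to-end distance can be checked with a toy freely-jointed-chain simulation, an idealized stand-in for DNA coarse-grained at the persistence length (all names and parameters here are illustrative):

```python
import math
import random

def end_to_end_distance(n_segments, segment_length=1.0, seed=0):
    """End-to-end distance of one realization of an ideal (non-interacting)
    3D freely-jointed chain: each segment points in a uniformly random
    direction on the unit sphere."""
    rng = random.Random(seed)
    x = y = z = 0.0
    for _ in range(n_segments):
        theta = math.acos(1 - 2 * rng.random())  # uniform in cos(theta)
        phi = 2 * math.pi * rng.random()
        x += segment_length * math.sin(theta) * math.cos(phi)
        y += segment_length * math.sin(theta) * math.sin(phi)
        z += segment_length * math.cos(theta)
    return math.sqrt(x * x + y * y + z * z)

def mean_end_to_end(n_segments, trials=500):
    """Average end-to-end distance over many chain realizations."""
    return sum(end_to_end_distance(n_segments, seed=s)
               for s in range(trials)) / trials
```

Quadrupling the chain length roughly doubles the mean end-to-end distance, the square-root scaling stated above; a condensed DNA particle is dramatically smaller than this random-coil size.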
This definition applies to many situations in vitro and is also close to the definition of DNA condensation in bacteria as "adoption of relatively concentrated, compact state occupying a fraction of the volume available". In eukaryotes, the DNA size and the number of other participating players are much larger, and a DNA molecule forms millions of ordered nucleoprotein particles, the nucleosomes, which is just the first of many levels of DNA packing.
== In life ==
=== In viruses ===
In viruses and bacteriophages, the DNA or RNA is surrounded by a protein capsid, sometimes further enveloped by a lipid membrane. Double-stranded DNA is stored inside the capsid in the form of a spool, which can have different types of coiling leading to different types of liquid-crystalline packing. This packing can change from hexagonal to cholesteric to isotropic at different stages of the phage functioning. Although the double helices are always locally aligned, the DNA inside viruses does not represent real liquid crystals, because it lacks fluidity. On the other hand, DNA condensed in vitro, e.g., with the help of polyamines also present in viruses, is both locally ordered and fluid.
=== In bacteria ===
Bacterial DNA is packed with the help of polyamines and proteins called nucleoid-associated proteins. Protein-associated DNA occupies about 1/4 of the intracellular volume, forming a concentrated viscous phase with liquid-crystalline properties called the nucleoid. Other research has indicated that the bacterial genome occupies approximately 10–15% of the bacterial cell volume. Similar DNA packaging exists also in chloroplasts and mitochondria. Bacterial DNA is sometimes referred to as the bacterial chromosome. Evolutionarily, the bacterial nucleoid represents an intermediate engineering solution between the protein-free DNA packing in viruses and the protein-determined packing in eukaryotes.
Sister chromosomes in the bacterium Escherichia coli are induced by stressful conditions to condense and undergo pairing. Stress-induced condensation occurs by a non-random, zipper-like convergence of sister chromosomes. This convergence appears to depend on the ability of identical double-stranded DNA molecules to specifically identify each other, a process that culminates in the proximity of homologous sites along the paired chromosomes. Diverse stress conditions appear to prime bacteria to effectively cope with severe DNA damages such as double-strand breaks. The apposition of homologous sites associated with stress-induced chromosome condensation helps explain how repair of double-strand breaks and other damages occurs.
=== In eukaryotes ===
Eukaryotic DNA, with a typical length of dozens of centimeters, must be orderly packed to be readily accessible inside the micrometer-sized nucleus. In most eukaryotes, DNA is arranged in the cell nucleus with the help of histones. In this case, the basic level of DNA compaction is the nucleosome, where the double helix is wrapped around the histone octamer containing two copies each of the histones H2A, H2B, H3 and H4. Linker histone H1 binds the DNA between nucleosomes and facilitates packaging of the 10 nm "beads on the string" nucleosomal chain into a more condensed 30 nm fiber. Most of the time, between cell divisions, chromatin is optimized to allow easy access of transcription factors to active genes, which are characterized by a less compact structure called euchromatin, and to restrict protein access in more tightly packed regions called heterochromatin. During cell division, chromatin compaction increases even more to form chromosomes, which can cope with the large mechanical forces dragging them into each of the two daughter cells. Many aspects of transcription are controlled by chemical modifications on the histone proteins, known as the histone code.
The chromosome scaffold plays an important role in holding the chromatin in the compact chromosome. The chromosome scaffold is made of proteins including condensin, topoisomerase IIα and kinesin family member 4 (KIF4).
Dinoflagellates are very divergent eukaryotes in terms of how they package their DNA. Their chromosomes are packed in a liquid-crystalline state. They have lost many of the conserved histone genes, using mostly dinoflagellate viral nucleoproteins (DVNPs) or bacteria-derived dinoflagellate histone-like proteins (HLPs) for packaging instead. It is unknown how they control access to genes; those that do retain histones have a special histone code.
=== In archaea ===
Depending on the organism, an archaeon may use a bacteria-like HU system or a eukaryote-like nucleosome system for packaging.
== In vitro ==
DNA condensation can be induced in vitro either by applying external force to bring the double helices together, or by inducing attractive interactions between the DNA segments. The former can be achieved e.g. with the help of the osmotic pressure exerted by crowding neutral polymers in the presence of monovalent salts. In this case, the forces pushing the double helices together are coming from entropic random collisions with the crowding polymers surrounding DNA condensates, and salt is required to neutralize DNA charges and decrease DNA-DNA repulsion. The second possibility can be realized by inducing attractive interactions between the DNA segments by multivalent cationic charged ligands (multivalent metal ions, inorganic cations, polyamines, protamines, peptides, lipids, liposomes and proteins).
== Physics ==
Condensation of long double-helical DNAs is a sharp phase transition, which takes place within a narrow interval of condensing agent concentrations. Since the double helices come very close to each other in the condensed phase, this leads to the restructuring of water molecules, which gives rise to the so-called hydration forces. To understand the attraction between negatively charged DNA molecules, one must also account for correlations between counterions in the solution. DNA condensation by proteins can exhibit hysteresis, which can be explained using a modified Ising model.
== Role in gene regulation ==
Current descriptions of gene regulation are based on approximations of equilibrium binding in dilute solutions, although it is clear that these assumptions are in fact violated in chromatin. The dilute-solution approximation is violated for two reasons. First, the chromatin content is far from dilute, and second, the numbers of participating molecules are sometimes so small that it does not make sense to talk about bulk concentrations. Further differences from dilute solutions arise due to the different binding affinities of proteins to condensed and uncondensed DNA. Thus, in condensed DNA both the reaction rates can be changed and their dependence on the concentrations of reactants may become nonlinear.
== See also ==
G-quadruplex
Structural motif
== References ==
== Further reading ==
Gelbart W. M.; Bruinsma R.; Pincus P. A.; Parsegian V. A. (2000). "DNA-Inspired Electrostatics". Physics Today. 53 (9): 38. Bibcode:2000PhT....53i..38G. doi:10.1063/1.1325230.
Strey H. H.; Podgornik R.; Rau D. C.; Parsegian V. A. (1998). "DNA-DNA interactions". Current Opinion in Structural Biology. 8 (3): 309–313. doi:10.1016/s0959-440x(98)80063-8. PMID 9666326.
Schiessel H (2003). "The physics of chromatin". J. Phys.: Condens. Matter. 15 (19): R699 – R774. arXiv:cond-mat/0303455. Bibcode:2003JPCM...15R.699S. doi:10.1088/0953-8984/15/19/203. S2CID 250893202.
Vijayanathan V.; Thomas T.; Thomas T. J. (2002). "DNA nanoparticles and development of DNA delivery vehicles for gene therapy". Biochemistry. 41 (48): 14085–14094. doi:10.1021/bi0203987. PMID 12450371.
Yoshikawa K (2001). "Controlling the higher-order structure of giant DNA molecules". Advanced Drug Delivery Reviews. 52 (3): 235–244. doi:10.1016/s0169-409x(01)00210-1. PMID 11718948.
Hud N. V.; Vilfan I. D. (2005). "Toroidal DNA condensates: unraveling the fine structure and the role of nucleation in determining size". Annu Rev Biophys Biomol Struct. 34: 295–318. doi:10.1146/annurev.biophys.34.040204.144500. PMID 15869392.
Yoshikawa, K., and Y. Yoshikawa. 2002. Compaction and condensation of DNA. In Pharmaceutical perspectives of nucleic acid-based therapeutics. R. I. Mahato, and S. W. Kim, editors. Taylor & Francis. 137–163. | Wikipedia/DNA_condensation |
A protein complex or multiprotein complex is a group of two or more associated polypeptide chains. Protein complexes are distinct from multidomain enzymes, in which multiple catalytic domains are found in a single polypeptide chain.
Protein complexes are a form of quaternary structure. Proteins in a protein complex are linked by non-covalent protein–protein interactions. These complexes are a cornerstone of many (if not most) biological processes. The cell is seen to be composed of modular supramolecular complexes, each of which performs an independent, discrete biological function.
Through proximity, the speed and selectivity of binding interactions between an enzymatic complex and its substrates can be vastly improved, leading to higher cellular efficiency. However, many of the techniques used to enter cells and isolate proteins are inherently disruptive to such large complexes, complicating the task of determining the components of a complex.
Examples of protein complexes include the proteasome for molecular degradation and most RNA polymerases. In stable complexes, large hydrophobic interfaces between proteins typically bury surface areas larger than 2500 Å².
== Function ==
Protein complex formation can activate or inhibit one or more of the complex members and in this way, protein complex formation can be similar to phosphorylation. Individual proteins can participate in a variety of protein complexes. Different complexes perform different functions, and the same complex can perform multiple functions depending on various factors. Factors include:
Cell compartment location
Cell cycle stage
Cell nutritional status
Many protein complexes are well understood, particularly in the model organism Saccharomyces cerevisiae (yeast). For this relatively simple organism, the study of protein complexes is now performed genome-wide, and the elucidation of most of its protein complexes is ongoing. In 2021, researchers used the deep learning software RoseTTAFold along with AlphaFold to solve the structures of 712 eukaryotic complexes. They compared 6000 yeast proteins to those from 2026 other fungi and 4325 other eukaryotes.
== Types of protein complexes ==
=== Obligate vs non-obligate protein complex ===
If a protein can form a stable well-folded structure on its own (without any other associated protein) in vivo, then the complexes formed by such proteins are termed "non-obligate protein complexes". However, some proteins cannot form a stable well-folded structure on their own, but can be found as part of a protein complex that stabilizes the constituent proteins. Such protein complexes are called "obligate protein complexes".
=== Transient vs permanent/stable protein complex ===
Transient protein complexes form and break down transiently in vivo, whereas permanent complexes have a relatively long half-life. Typically, the obligate interactions (protein–protein interactions in an obligate complex) are permanent, whereas non-obligate interactions have been found to be either permanent or transient. Note that there is no clear distinction between obligate and non-obligate interactions; rather, there exists a continuum between them which depends on various conditions, e.g. pH, protein concentration, etc. However, there are important distinctions between the properties of transient and permanent/stable interactions: stable interactions are highly conserved whereas transient interactions are far less conserved; interacting proteins on the two sides of a stable interaction have a greater tendency to be co-expressed than those of a transient interaction (in fact, the co-expression probability between two transiently interacting proteins is no higher than that between two random proteins); and transient interactions are much less co-localized than stable interactions. Though transient by nature, transient interactions are very important for cell biology: the human interactome is enriched in such interactions, these interactions are the dominating players of gene regulation and signal transduction, and proteins with intrinsically disordered regions (IDRs: regions in proteins that show dynamic inter-converting structures in the native state) are found to be enriched in transient regulatory and signaling interactions.
=== Fuzzy complex ===
Fuzzy protein complexes have more than one structural form or dynamic structural disorder in the bound state. This means that proteins may not fold completely in either transient or permanent complexes. Consequently, specific complexes can have ambiguous interactions, which vary according to the environmental signals. Hence different ensembles of structures result in different (even opposite) biological functions. Post-translational modifications, protein interactions or alternative splicing modulate the conformational ensembles of fuzzy complexes, to fine-tune affinity or specificity of interactions. These mechanisms are often used for regulation within the eukaryotic transcription machinery.
== Essential proteins in protein complexes ==
Although some early studies suggested a strong correlation between essentiality and protein interaction degree (the "centrality-lethality" rule), subsequent analyses have shown that this correlation is weak for binary or transient interactions (e.g., yeast two-hybrid). However, the correlation is robust for networks of stable co-complex interactions. In fact, a disproportionate number of essential genes belong to protein complexes. This led to the conclusion that essentiality is a property of molecular machines (i.e. complexes) rather than of individual components. Wang et al. (2009) noted that larger protein complexes are more likely to be essential, explaining why essential genes are more likely to have high co-complex interaction degree. Ryan et al. (2013) referred to the observation that entire complexes appear essential as "modular essentiality". These authors also showed that complexes tend to be composed of either essential or non-essential proteins rather than showing a random distribution (see Figure). However, this is not an all-or-nothing phenomenon: only about 26% (105/401) of yeast complexes consist of solely essential or solely non-essential subunits.
In humans, genes whose protein products belong to the same complex are more likely to result in the same disease phenotype.
== Homomultimeric and heteromultimeric proteins ==
The subunits of a multimeric protein may be identical, as in a homomultimeric (homooligomeric) protein, or different, as in a heteromultimeric protein. Many soluble and membrane proteins form homomultimeric complexes in a cell; indeed, the majority of proteins in the Protein Data Bank are homomultimeric. Homooligomers are responsible for the diversity and specificity of many pathways, and may mediate and regulate gene expression, the activity of enzymes, ion channels, receptors, and cell adhesion processes.
The voltage-gated potassium channels in the plasma membrane of a neuron are heteromultimeric proteins composed of four of forty known alpha subunits. Subunits must be of the same subfamily to form the multimeric protein channel. The tertiary structure of the channel allows ions to flow through the hydrophobic plasma membrane. Connexons are an example of a homomultimeric protein composed of six identical connexins. A cluster of connexons forms the gap-junction in two neurons that transmit signals through an electrical synapse.
=== Intragenic complementation ===
When multiple copies of a polypeptide encoded by a gene form a complex, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation (also called inter-allelic complementation). Intragenic complementation has been demonstrated in many different genes in a variety of organisms including the fungi Neurospora crassa, Saccharomyces cerevisiae and Schizosaccharomyces pombe; the bacterium Salmonella typhimurium; the virus bacteriophage T4, an RNA virus and humans. In such studies, numerous mutations defective in the same gene were often isolated and mapped in a linear order on the basis of recombination frequencies to form a genetic map of the gene. Separately, the mutants were tested in pairwise combinations to measure complementation. An analysis of the results from such studies led to the conclusion that intragenic complementation, in general, arises from the interaction of differently defective polypeptide monomers to form a multimer. Genes that encode multimer-forming polypeptides appear to be common. One interpretation of the data is that polypeptide monomers are often aligned in the multimer in such a way that mutant polypeptides defective at nearby sites in the genetic map tend to form a mixed multimer that functions poorly, whereas mutant polypeptides defective at distant sites tend to form a mixed multimer that functions more effectively. The intermolecular forces likely responsible for self-recognition and multimer formation were discussed by Jehle.
== Structure determination ==
The molecular structure of protein complexes can be determined by experimental techniques such as X-ray crystallography, single particle analysis or nuclear magnetic resonance. Increasingly, the theoretical option of protein–protein docking is also becoming available. One method that is commonly used for identifying these complexes is immunoprecipitation. Recently, Raicu and coworkers developed a method to determine the quaternary structure of protein complexes in living cells. This method is based on the determination of pixel-level Förster resonance energy transfer (FRET) efficiency in conjunction with spectrally resolved two-photon microscopy. The distribution of FRET efficiencies is simulated against different models to obtain the geometry and stoichiometry of the complexes.
== Assembly ==
Proper assembly of multiprotein complexes is important, since misassembly can lead to disastrous consequences. In order to study pathway assembly, researchers look at intermediate steps in the pathway. One such technique that allows one to do that is electrospray mass spectrometry, which can identify different intermediate states simultaneously. This has led to the discovery that most complexes follow an ordered assembly pathway. In the cases where disordered assembly is possible, the change from an ordered to a disordered state leads to a transition from function to dysfunction of the complex, since disordered assembly leads to aggregation.
The structure of proteins play a role in how the multiprotein complex assembles. The interfaces between proteins can be used to predict assembly pathways. The intrinsic flexibility of proteins also plays a role: more flexible proteins allow for a greater surface area available for interaction.
While assembly is a different process from disassembly, the two are reversible in both homomeric and heteromeric complexes. Thus, the overall process can be referred to as (dis)assembly.
=== Evolutionary significance of multiprotein complex assembly ===
In homomultimeric complexes, the homomeric proteins assemble in a way that mimics evolution. That is, an intermediate in the assembly process is present in the complex's evolutionary history.
The opposite phenomenon is observed in heteromultimeric complexes, where gene fusion occurs in a manner that preserves the original assembly pathway.
== See also ==
Heterotetramer
Biomolecular complex
Protein subunit
== References ==
== External links ==
Multiprotein+Complexes at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/Protein_complex |
Repeated sequences (also known as repetitive elements, repeating units or repeats) are short or long patterns that occur in multiple copies throughout the genome. In many organisms, a significant fraction of the genomic DNA is repetitive, with over two-thirds of the sequence consisting of repetitive elements in humans. Some of these repeated sequences are necessary for maintaining important genome structures such as telomeres or centromeres.
Repeated sequences are categorized into different classes depending on features such as structure, length, location, origin, and mode of multiplication. The disposition of repetitive elements throughout the genome can consist either in directly adjacent arrays called tandem repeats or in repeats dispersed throughout the genome called interspersed repeats. Tandem repeats and interspersed repeats are further categorized into subclasses based on the length of the repeated sequence and/or the mode of multiplication.
While some repeated DNA sequences are important for cellular functioning and genome maintenance, other repetitive sequences can be harmful. Many repetitive DNA sequences have been linked to human diseases such as Huntington's disease and Friedreich's ataxia. Some repetitive elements are neutral and occur when there is an absence of selection for specific sequences depending on how transposition or crossing over occurs. However, an abundance of neutral repeats can still influence genome evolution as they accumulate over time. Overall, repeated sequences are an important area of focus because they can provide insight into human diseases and genome evolution.
== History ==
In the 1950s, Barbara McClintock first observed DNA transposition and illustrated the functions of the centromere and telomere at the Cold Spring Harbor Symposium. McClintock's work set the stage for the discovery of repeated sequences because transposition, centromere structure, and telomere structure are all possible through repetitive elements, yet this was not fully understood at the time. The term "repeated sequence" was first used by Roy John Britten and D. E. Kohne in 1968; through their experiments on the reassociation of DNA, they found that more than half of the eukaryotic genome was repetitive DNA. Although the repetitive DNA sequences were conserved and ubiquitous, their biological role was yet unknown. In the 1990s, more research was conducted to elucidate the evolutionary dynamics of minisatellite and microsatellite repeats because of their importance in DNA-based forensics and molecular ecology. Dispersed DNA repeats were increasingly recognized as a potential source of genetic variation and regulation. Discoveries of deleterious repetitive DNA-related diseases stimulated further interest in this area of study. In the 2000s, data from full eukaryotic genome sequencing enabled the identification of different promoters, enhancers, and regulatory RNAs, which are all coded by repetitive regions. Today, the structural and regulatory roles of repetitive DNA sequences remain an active area of research.
== Types and functions ==
Many repeat sequences are likely to be non-functional, decaying remnants of transposable elements; such sequences have been labelled "junk" or "selfish" DNA. Nevertheless, some repeats have occasionally been exapted for other functions.
=== Tandem repeats ===
Tandem repeats are repeated sequences which are directly adjacent to each other in the genome. Tandem repeats may vary in the number of nucleotides comprising the repeated sequence, as well as the number of times the sequence repeats. When the repeating sequence is only 2–10 nucleotides long, the repeat is referred to as a short tandem repeat (STR) or microsatellite. When the repeating sequence is 10–60 nucleotides long, the repeat is referred to as a minisatellite. For minisatellites and microsatellites, the number of times the sequence repeats at a single locus can range from twice to hundreds of times.
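The unit-length thresholds above can be expressed as a short helper. The following Python sketch is purely illustrative: the function names are my own, and since the text's thresholds overlap at 10 nucleotides, the 10-nucleotide case is assigned to STRs here as an arbitrary choice.

```python
def classify_tandem_repeat(unit: str) -> str:
    """Classify a tandem repeat by the length of its repeating unit,
    following the thresholds described above. Sources draw the
    STR/minisatellite boundary slightly differently; 10 nt is
    assigned to STRs here as an arbitrary choice."""
    n = len(unit)
    if 2 <= n <= 10:
        return "short tandem repeat (microsatellite)"
    if 11 <= n <= 60:
        return "minisatellite"
    return "other"

def count_tandem_copies(seq: str, unit: str, start: int = 0) -> int:
    """Count back-to-back copies of `unit` in `seq` beginning at `start`."""
    copies, i = 0, start
    while seq.startswith(unit, i):
        copies += 1
        i += len(unit)
    return copies

# Five adjacent copies of the trinucleotide CAG form a microsatellite locus.
print(classify_tandem_repeat("CAG"))                     # short tandem repeat (microsatellite)
print(count_tandem_copies("CAG" * 5 + "TTAGGG", "CAG"))  # 5
```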
Tandem repeats have a wide variety of biological functions in the genome. For example, minisatellites are often hotspots of meiotic homologous recombination in eukaryotic organisms. Recombination is when two homologous chromosomes align, break, and rejoin to swap pieces. Recombination is important as a source of genetic diversity, as a mechanism for repairing damaged DNA, and a necessary step in the appropriate segregation of chromosomes in meiosis. The presence of repeated sequence DNA makes it easier for areas of homology to align, thereby controlling when and where recombination occurs.
In addition to playing an important role in recombination, tandem repeats also play important structural roles in the genome. For example, telomeres are composed mainly of tandem TTAGGG repeats. These repeats fold into highly organized G quadruplex structures which protect the ends of chromosomal DNA from degradation. Repetitive elements are enriched in the middle of chromosomes as well. Centromeres are the highly compact regions of chromosomes which join sister chromatids together and also allow the mitotic spindle to attach and separate sister chromatids during cell division. Centromeres are composed of a 177 base pair tandem repeat named the α-satellite repeat. Pericentromeric heterochromatin, the DNA which surrounds the centromere and is important for structural maintenance, is composed of a mixture of different satellite subfamilies including the α-, β- and γ-satellites as well as HSATII, HSATIII, and sn5 repeats.
Some repetitive sequences, such as those with structural roles discussed above, play roles necessary for proper biological functioning. Other tandem repeats have deleterious roles which drive diseases. Many other tandem repeats, however, have unknown or poorly understood functions.
=== Interspersed repeats ===
Interspersed repeats are identical or similar DNA sequences which are found in different locations throughout the genome. Interspersed repeats are distinguished from tandem repeats in that the repeated sequences are not directly adjacent to each other but instead may be scattered among different chromosomes or far apart on the same chromosome. Most interspersed repeats are transposable elements (TEs), mobile sequences which can be "cut and pasted" or "copied and pasted" into different places in the genome. TEs were originally called "jumping genes" for their ability to move, yet this term is somewhat misleading as not all TEs are discrete genes.
Transposable elements that are transcribed into RNA, reverse-transcribed into DNA, and then reintegrated into the genome are called retrotransposons. Just as tandem repeats are further subcategorized based on the length of the repeating sequence, there are many different types of retrotransposons. Long interspersed nuclear elements (LINEs) are typically 3–7 kilobases in length. Short interspersed nuclear elements (SINEs) are typically 100–300 base pairs and no longer than 600 base pairs. Long terminal repeat (LTR) retrotransposons are a third major class and are characterized by highly repetitive sequences at the ends of the element. When a transposable element does not proceed through an RNA intermediate, it is called a DNA transposon. Other classification systems refer to retrotransposons as "Class I" and DNA transposons as "Class II" transposable elements.
Transposable elements are estimated to constitute 45% of the human genome. Since uncontrolled propagation of TEs could wreak havoc on the genome, many regulatory mechanisms have evolved to silence their spread, including DNA methylation, histone modifications, non-coding RNAs (ncRNAs) including small interfering RNA (siRNA), chromatin remodelers, histone variants, and other epigenetic factors. However, TEs play a wide variety of important biological functions. When TEs are introduced into a new host, such as from a virus, they increase genetic diversity. In some cases, host organisms find new functions for the proteins which arise from expressing TEs in an evolutionary process called TE exaptation. Recent research also suggests that TEs serve to maintain higher-order chromatin structure and 3D genome organization. Furthermore, TEs contribute to regulating the expression of other genes by serving as distal enhancers and transcription factor binding sites.
The prevalence of interspersed elements in the genome has motivated further research into their origins and functions. Some specific interspersed elements have been characterized, such as the Alu repeat and LINE1.
=== Intrachromosomal recombination ===
Homologous recombination between chromosomal repeated sequences in somatic cells of Nicotiana tabacum was found to be increased by exposure to mitomycin C, a bifunctional alkylating agent that crosslinks DNA strands. This increase in recombination was attributed to increased intrachromosomal recombinational repair, by which mitomycin C-damaged DNA in one sequence is repaired using intact information from the other repeated sequence.
=== Direct and inverted repeats ===
While tandem and interspersed repeats are distinguished based on their location in the genome, direct and inverted repeats are distinguished based on the ordering of the nucleotide bases. Direct repeats occur when a nucleotide sequence is repeated with the same directionality. Inverted repeats occur when a nucleotide sequence is repeated in the inverse direction. For example, a direct repeat of "CATCAT" would be another repetition of "CATCAT". In contrast, the inverted repeat would be "ATGATG". When there are no nucleotides separating the inverted repeat, as in "CATCATATGATG", the sequence is called a palindromic repeat. Inverted repeats can play structural roles in DNA and RNA by forming stem-loops and cruciforms.
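The "CATCAT"/"ATGATG" example corresponds to taking the reverse complement of the sequence, and a palindromic repeat is then a sequence that equals its own reverse complement. A minimal Python sketch (the helper names are illustrative, not from any standard library):

```python
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(_COMPLEMENT)[::-1]

def is_palindromic_repeat(seq: str) -> bool:
    """A palindromic repeat reads the same as its own reverse complement."""
    return seq == reverse_complement(seq)

print(reverse_complement("CATCAT"))           # ATGATG
print(is_palindromic_repeat("CATCATATGATG"))  # True
```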
== Evolutionary emergence of meiosis ==
The evolutionary origin of meiotic sexual reproduction is regarded as a long-standing evolutionary enigma. In prokaryotes, lateral gene transfer emerged as an early evolved form of sexual interaction. However, repeat sequences in prokaryotic DNA limit the effectiveness of lateral gene transfer at purging deleterious mutations, as well as limiting the accurate repair of DNA damages by homologous recombination. Colnoghi et al. proposed that such constraints on the beneficial effects of sexual interaction in prokaryotes favored the evolution of meiotic sex and thus the emergence of eukaryotes. It was concluded that the transition to homologous pairing along linear chromosomes that occurs during meiosis was the crucial innovation in meiotic sexual reproduction, and this innovation was instrumental in the evolutionary expansion of eukaryotic genomes that facilitated increased functional and morphological complexity.
== Repeated sequences in human disease ==
For humans, some repeated DNA sequences are associated with diseases. Specifically, tandem repeat sequences underlie several human disease conditions, particularly trinucleotide repeat diseases such as Huntington's disease, fragile X syndrome, several spinocerebellar ataxias, myotonic dystrophy, and Friedreich's ataxia. Trinucleotide repeat expansions in the germline over successive generations can lead to increasingly severe manifestations of the disease. These trinucleotide repeat expansions may occur through strand slippage during DNA replication or during DNA repair synthesis. It has been noted that genes containing pathogenic CAG repeats often encode proteins that themselves have a role in the DNA damage response, and that repeat expansions may impair specific DNA repair pathways. Faulty repair of DNA damage in repeat sequences may cause further expansion of these sequences, thus setting up a vicious cycle of pathology.
=== Huntington's disease ===
Huntington's disease is a neurodegenerative disorder caused by the expansion of the repeated trinucleotide sequence CAG in exon 1 of the huntingtin gene (HTT). This gene encodes the protein huntingtin, which plays a role in preventing apoptosis (programmed cell death) and in the repair of oxidative DNA damage. In Huntington's disease, the expansion of the trinucleotide sequence CAG encodes a mutant huntingtin protein with an expanded polyglutamine domain. This domain causes the protein to form aggregates in nerve cells, preventing normal cellular function and resulting in neurodegeneration.
=== Fragile X syndrome ===
Fragile X syndrome is caused by the expansion of the trinucleotide sequence CGG in the FMR1 gene on the X chromosome. This gene produces the RNA-binding protein FMRP. In fragile X syndrome, the repeated sequence makes the gene unstable and thereby silences FMR1. Because the gene resides on the X chromosome, females, who have two X chromosomes, are less affected than males, who have only one X chromosome and one Y chromosome, because the second X chromosome can compensate for the silencing of the gene on the other.
=== Spinocerebellar ataxias ===
CAG trinucleotide repeat expansions underlie several types of spinocerebellar ataxia (SCA1, SCA2, SCA3, SCA6, SCA7, SCA12, and SCA17). Similar to Huntington's disease, the polyglutamine tail created by this trinucleotide expansion causes aggregation of proteins, preventing normal cellular function and causing neurodegeneration.
=== Friedreich's Ataxia ===
Friedreich's ataxia is a type of ataxia caused by an expanded GAA repeat sequence in the frataxin (FXN) gene. This gene produces frataxin, a mitochondrial protein involved in energy production and cellular respiration. The expanded GAA repeat in the first intron results in silencing of the gene and loss of functional frataxin protein. The loss of a functional FXN gene leads to problems with mitochondrial function as a whole and can present phenotypically in patients as difficulty walking.
=== Myotonic dystrophy ===
Myotonic dystrophy is a disorder that presents as muscle weakness and consists of two main types: DM1 and DM2. Both types are due to expanded DNA sequences: CTG in DM1 and CCTG in DM2. These two sequences are found on different genes, with the expanded sequence in DM1 on the DMPK gene and the expanded sequence in DM2 on the ZNF9 gene. Unlike the repeat in Huntington's disease, these expansions lie in non-coding regions of their genes. It has been shown, however, that there is a link between RNA toxicity and the repeat sequences in DM1 and DM2.
=== Amyotrophic lateral sclerosis and Frontotemporal dementia ===
Not all diseases caused by repeated DNA sequences are trinucleotide repeat diseases. The diseases amyotrophic lateral sclerosis and frontotemporal dementia are caused by hexanucleotide GGGGCC repeat sequences in the C9orf72 gene, causing RNA toxicity that leads to neurodegeneration.
== Biotechnology ==
Repetitive DNA is hard to sequence using next-generation sequencing techniques because sequence assembly from short reads cannot determine the length of a repetitive region. This issue is particularly serious for microsatellites, which are made of tiny 1–6 bp repeat units. Although they are difficult to sequence, these short repeats have great value in DNA fingerprinting and evolutionary studies. Many researchers have historically left out repetitive sequences when analyzing and publishing whole-genome data due to these technical limitations.
Bustos et al. proposed one method of sequencing long stretches of repetitive DNA. The method combines the use of a linear vector for stabilization with exonuclease III digestion of simple sequence repeat (SSR)-rich regions. First, SSR-rich fragments are cloned into a linear vector that can stably incorporate tandem repeats up to 30 kb. Expression of the repeats is prevented by transcriptional terminators in the vector. The second step uses exonuclease III, which deletes nucleotides from the 3' end, producing unidirectional deletions of the SSR fragments. Finally, the deletion products are amplified and analyzed by colony PCR, and the sequence is assembled by ordered sequencing of a set of clones containing different deletions.
== See also ==
== References ==
== External links ==
Function of Repetitive DNA
DNA+Repetitious+Region at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/Repeated_sequence_(DNA) |
Z-DNA is one of the many possible double helical structures of DNA. It is a left-handed double helical structure in which the helix winds to the left in a zigzag pattern, instead of to the right, like the more common B-DNA form. Z-DNA is thought to be one of three biologically active double-helical structures along with A-DNA and B-DNA.
== History ==
Left-handed DNA was first proposed by Robert Wells and colleagues, as the structure of a repeating polymer of inosine–cytosine. They observed a "reverse" circular dichroism spectrum for such DNAs, and interpreted this incorrectly to mean that the strands wrapped around one another in a left-handed fashion. The relationship between Z-DNA and the more familiar B-DNA was indicated by the work of Pohl and Jovin, who showed that the ultraviolet circular dichroism of poly(dG-dC) was nearly inverted in 4 M sodium chloride solution and that the structure of poly d(I–C)·poly d(I–C) was in fact a right-handed D-DNA conformation. The suspicion that this was the result of a conversion from B-DNA to Z-DNA was confirmed by examining the Raman spectra of these solutions and the Z-DNA crystals. Subsequently, a crystal structure of "Z-DNA" was published which turned out to be the first single-crystal X-ray structure of a DNA fragment (a self-complementary DNA hexamer d(CG)3). It was resolved as a left-handed double helix with two antiparallel chains that were held together by Watson–Crick base pairs (see X-ray crystallography). It was solved by Andrew H. J. Wang, Alexander Rich, and coworkers in 1979 at MIT. The crystallisation of a B- to Z-DNA junction in 2005 provided a better understanding of the potential role Z-DNA plays in cells. Whenever a segment of Z-DNA forms, there must be B–Z junctions at its two ends, interfacing it to the B-form of DNA found in the rest of the genome.
In 2007, the RNA version of Z-DNA, Z-RNA, was described as a transformed version of an A-RNA double helix into a left-handed helix. The transition from A-RNA to Z-RNA, however, was already described in 1984.
== Structure ==
Z-DNA is quite different from the right-handed forms. In fact, Z-DNA is often compared against B-DNA in order to illustrate the major differences. The Z-DNA helix is left-handed and has a structure that repeats every other base pair. The major and minor grooves, unlike in A- and B-DNA, show little difference in width. Formation of this structure is generally unfavourable, although certain conditions can promote it, such as an alternating purine–pyrimidine sequence (especially poly(dGC)2), negative DNA supercoiling, or high salt and some cations (all at physiological temperature, 37 °C, and pH 7.3–7.4). Z-DNA can form a junction with B-DNA (called a "B-to-Z junction box") in a structure which involves the extrusion of a base pair. The Z-DNA conformation has been difficult to study because it does not exist as a stable feature of the double helix. Instead, it is a transient structure that is occasionally induced by biological activity and then quickly disappears.
=== Predicting Z-DNA structure ===
It is possible to predict the likelihood of a DNA sequence forming a Z-DNA structure. An algorithm for predicting the propensity of DNA to flip from the B-form to the Z-form, ZHunt, was written by P. Shing Ho in 1984 at MIT. This algorithm was later developed by Tracy Camp, P. Christoph Champ, Sandor Maurice, and Jeffrey M. Vargason for genome-wide mapping of Z-DNA (with Ho as the principal investigator).
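ZHunt itself uses a thermodynamic model; the sketch below is not that algorithm, only a naive proxy based on the observation in the Structure section that alternating purine–pyrimidine sequences favour the Z-form. The function name and scoring rule are illustrative assumptions:

```python
_PURINES = frozenset("AG")  # adenine and guanine; C and T are pyrimidines

def alternation_score(seq: str) -> float:
    """Fraction of adjacent bases that switch between purine and
    pyrimidine: a crude stand-in for Z-forming propensity, NOT ZHunt."""
    if len(seq) < 2:
        return 0.0
    flips = sum((a in _PURINES) != (b in _PURINES)
                for a, b in zip(seq, seq[1:]))
    return flips / (len(seq) - 1)

print(alternation_score("GCGCGCGC"))  # 1.0, since poly(dGC) alternates perfectly
print(alternation_score("AAAATTTT"))  # low: only one purine/pyrimidine switch
```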
=== Pathway of formation of Z-DNA from B-DNA ===
Since the discovery and crystallization of Z-DNA in 1979, the pathway and mechanism of the transition from the B-DNA configuration to the Z-DNA configuration puzzled scientists. The conformational change was unknown at the atomic level until 2010, when computer simulations by Lee et al. showed that step-wise propagation of a B-to-Z transition would provide a lower energy barrier than the previously hypothesized concerted mechanism. Because this was demonstrated computationally, the pathway still needed experimental confirmation; as Lee et al. noted in their article, "The current [computational] result could be tested by single-molecule FRET (smFRET) experiments in the future." In 2018, the pathway from B-DNA to Z-DNA was demonstrated experimentally using smFRET assays. This was performed by measuring the intensity values between donor and acceptor fluorescent dyes (fluorophores) tagged onto a DNA molecule as they transfer energy. The distances between the fluorophores could be used to quantitatively calculate changes in the proximity of the dyes and thus conformational changes in the DNA. A Z-DNA high-affinity binding protein, hZαADAR1, was used at varying concentrations to induce the transformation from B-DNA to Z-DNA. The smFRET assays revealed a B* transition state, which formed as hZαADAR1 accumulated on the B-DNA structure and stabilized it. This step avoids a high junction energy, allowing the B-DNA structure to undergo the conformational change to Z-DNA without a major, disruptive change in energy. This result coincides with the computational results of Lee et al., confirming that the mechanism is step-wise and that it serves to provide a lower energy barrier for the conformational change from the B-DNA to Z-DNA configuration. Contrary to the previous notion, the binding proteins do not stabilize the Z-DNA conformation after it is formed; rather, they promote the formation of Z-DNA directly from the B* conformation, which forms when the B-DNA structure is bound by high-affinity proteins.
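The distance readout in such smFRET experiments rests on the Förster relation, in which transfer efficiency falls off with the sixth power of the donor-acceptor separation. A minimal sketch of that relation (the 6 nm Förster radius is an illustrative value, not taken from the study described above):

```python
def fret_efficiency(r: float, r0: float) -> float:
    """Forster transfer efficiency for donor-acceptor distance r and
    Forster radius r0 (same units): E = 1 / (1 + (r/r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def distance_from_efficiency(e: float, r0: float) -> float:
    """Invert the Forster relation to recover distance from efficiency."""
    return r0 * (1.0 / e - 1.0) ** (1.0 / 6.0)

r0 = 6.0  # nm, illustrative value
# At r = r0 the efficiency is exactly 0.5, whatever r0 is.
print(fret_efficiency(6.0, r0))           # 0.5
print(distance_from_efficiency(0.5, r0))  # 6.0
```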
== Biological significance ==
A biological role for Z-DNA in the regulation of type I interferon responses has been confirmed in studies of three well-characterized rare Mendelian diseases: dyschromatosis symmetrica hereditaria (OMIM: 127400), Aicardi-Goutières syndrome (OMIM: 615010), and bilateral striatal necrosis/dystonia. Families with a haploid ADAR transcriptome enabled the mapping of Zα variants directly to disease, showing that genetic information is encoded in DNA by both shape and sequence. A role in regulating type I interferon responses in cancer is also supported by findings that 40% of a panel of tumors were dependent on the ADAR enzyme for survival.
In previous studies, Z-DNA was linked to both Alzheimer's disease and systemic lupus erythematosus. To showcase this, a study was conducted on the DNA found in the hippocampus of brains that were normal, moderately affected with Alzheimer's disease, and severely affected with Alzheimer's disease. Through the use of circular dichroism, this study showed the presence of Z-DNA in the DNA of those severely affected. In this study it was also found that major portions of the moderately affected DNA were in the B-Z intermediate conformation. This is significant because from these findings it was concluded that the transition from B-DNA to Z-DNA is dependent on the progression of Alzheimer's disease. Additionally, Z-DNA is associated with systemic lupus erythematosus (SLE) through the presence of naturally occurring antibodies. Significant amounts of anti-Z-DNA antibodies were found in SLE patients and were not present in other rheumatic diseases. There are two types of these antibodies. Through radioimmunoassay, it was found that one interacts with the bases exposed on the surface of Z-DNA and denatured DNA, while the other exclusively interacts with the zig-zag backbone of only Z-DNA. Similar to that found in Alzheimer's disease, the antibodies vary depending on the stage of the disease, with maximal antibodies in the most active stages of SLE.
=== Z-DNA in transcription ===
Z-DNA is commonly believed to provide torsional strain relief during transcription, and it is associated with negative supercoiling. However, while supercoiling is associated with both DNA transcription and replication, Z-DNA formation is primarily linked to the rate of transcription.
A study of human chromosome 22 showed a correlation between Z-DNA forming regions and promoter regions for nuclear factor I. This suggests that transcription in some human genes may be regulated by Z-DNA formation and nuclear factor I activation.
Z-DNA sequences upstream of promoter regions have been shown to stimulate transcription. The greatest increase in activity is observed when the Z-DNA sequence is placed three helical turns after the promoter sequence. Furthermore, experiments using a micrococcal nuclease crosslinking technique showed that Z-DNA is unlikely to form nucleosomes, which are instead often located before and/or after a Z-DNA-forming sequence. Because of this property, Z-DNA is hypothesized to code for the boundary in nucleosome positioning. Since the placement of nucleosomes influences the binding of transcription factors, Z-DNA is thought to regulate the rate of transcription.
Because Z-DNA forms behind the advancing RNA polymerase as a result of negative supercoiling, Z-DNA formed via active transcription has been shown to increase genetic instability, creating a propensity towards mutagenesis near promoters. A study on Escherichia coli found that gene deletions spontaneously occur in plasmid regions containing Z-DNA-forming sequences. In mammalian cells, the presence of such sequences was found to produce large genomic fragment deletions due to chromosomal double-strand breaks. Both of these genetic modifications have been linked to the gene translocations found in cancers such as leukemia and lymphoma, since breakage regions in tumor cells have been plotted around Z-DNA-forming sequences. However, the smaller deletions in bacterial plasmids have been associated with replication slippage, while the larger deletions associated with mammalian cells are caused by non-homologous end-joining repair, which is known to be prone to error.
The toxic effect of ethidium bromide (EtBr) on trypanosomes is caused by a shift of their kinetoplast DNA to the Z-form. The shift is caused by intercalation of EtBr and subsequent loosening of the DNA structure, which leads to unwinding of the DNA, a shift to the Z-form, and inhibition of DNA replication.
=== Discovery of the Zα domain ===
The first domain to bind Z-DNA with high affinity was discovered in ADAR1 using an approach developed by Alan Herbert. Crystallographic and NMR studies confirmed the biochemical findings that this domain bound Z-DNA in a non-sequence-specific manner. Related domains were identified in a number of other proteins through sequence homology. The identification of the Zα domain provided a tool for other crystallographic studies that led to the characterization of Z-RNA and the B–Z junction. Biological studies suggested that the Z-DNA binding domain of ADAR1 may localize this enzyme, which modifies the sequence of newly formed RNA, to sites of active transcription. A role for Zα, Z-DNA, and Z-RNA in defending the genome against the invasion of Alu retro-elements in humans has evolved into a mechanism for the regulation of innate immune responses to dsRNA. Mutations in Zα are causal for human interferonopathies such as the Mendelian Aicardi-Goutières syndrome. Additionally, Zα domains have been demonstrated to localize to stress granules because of their innate ability to bind nucleic acids. Furthermore, different Zα domains bind the Z conformation of nucleic acids differently, providing important avenues for specific targeting in drug discovery.
=== Consequences of Z-DNA binding to vaccinia E3L protein ===
As Z-DNA has been researched more thoroughly, it has been discovered that the structure of Z-DNA can bind to Z-DNA binding proteins through van der Waals forces and hydrogen bonding. One example of a Z-DNA binding protein is the vaccinia E3L protein, which is a product of the E3L gene and mimics a mammalian protein that binds Z-DNA. Not only does the E3L protein have affinity for Z-DNA, it has also been found to play a role in the severity of virulence caused in mice by vaccinia virus, a type of poxvirus. Two critical components of the E3L protein that determine virulence are the N-terminus and the C-terminus. The N-terminus is made up of a sequence similar to that of the Zα domain, also called the adenosine deaminase Z-alpha domain, while the C-terminus is composed of a double-stranded RNA binding motif. Research by Kim, Y. et al. at the Massachusetts Institute of Technology showed that replacing the N-terminus of the E3L protein with a Zα domain sequence containing 14 Z-DNA binding residues similar to those of E3L had little to no effect on the pathogenicity of the virus in mice. In contrast, Kim, Y. et al. also found that deleting all 83 residues of the E3L N-terminus resulted in decreased virulence. This supports their claim that the N-terminus containing the Z-DNA binding residues is necessary for virulence. Overall, these findings show that the similar Z-DNA binding residues within the N-terminus of the E3L protein and the Zα domain are the most important structural factors determining virulence caused by the vaccinia virus, while amino acid residues not involved in Z-DNA binding have little to no effect. A future implication of these findings is reducing Z-DNA binding of E3L in vaccines containing the vaccinia virus so that negative reactions to the virus can be minimized in humans.
Furthermore, Alexander Rich and Jin-Ah Kwon found that E3L acts as a transactivator for the human IL-6, NF-AT, and p53 genes. Their results show that HeLa cells containing E3L had increased expression of the human IL-6, NF-AT, and p53 genes, and that point mutations or deletions of certain Z-DNA binding amino acid residues decreased that expression. Specifically, mutations in Tyr 48 and Pro 63 were found to reduce transactivation of the previously mentioned genes, as a result of loss of hydrogen bonding and London dispersion forces between E3L and the Z-DNA. Overall, these results show that decreasing the bonds and interactions between Z-DNA and Z-DNA binding proteins decreases both virulence and gene expression, hence showing the importance of having bonds between Z-DNA and the E3L binding protein.
== Comparison geometries of some DNA forms ==
== See also ==
== References == | Wikipedia/Z-DNA |
Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids, but in certain organisms the genetic code can include selenocysteine and, in certain archaea, pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can work together to achieve a particular function, and they often associate to form stable protein complexes.
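The mapping from gene sequence to amino acid sequence described above can be illustrated with a toy translation routine. The codon table below is deliberately partial (the real standard code has 64 codons), so this is a sketch rather than a working translator:

```python
# A few entries from the standard genetic code (RNA codons). The full
# table has 64 codons mapping to the 20 standard amino acids plus stop
# signals; UGA can also encode selenocysteine in a special context.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UGG": "Trp", "GGC": "Gly",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna: str) -> list:
    """Translate an mRNA string codon by codon until a stop codon."""
    residues = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "Stop":
            break
        residues.append(aa)
    return residues

print(translate("AUGUUUUGGUAA"))  # ['Met', 'Phe', 'Trp']
```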
Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.
Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Some proteins have structural or mechanical functions, such as actin and myosin in muscle, and the cytoskeleton's scaffolding proteins that maintain cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use.
== History and etymology ==
=== Discovery and early studies ===
Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C400H620N100O120P1S1. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (proteios), meaning "primary", "in the lead", or "standing in front", + -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine for which he found a (nearly correct) molecular weight of 131 Da.
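Mulder's figure for leucine can be checked against its modern formula, C6H13NO2, by summing average atomic masses. The helper below is only an illustration of that arithmetic:

```python
# Modern average atomic masses in daltons.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_weight(composition: dict) -> float:
    """Sum atomic masses over a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

# Leucine, C6H13NO2: close to the ~131 Da value Mulder reported.
leucine = {"C": 6, "H": 13, "N": 1, "O": 2}
print(round(molecular_weight(leucine), 1))  # 131.2
```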
Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Osborne, alongside Lafayette Mendel, established several nutritionally essential amino acids in feeding experiments with laboratory rats. Diets lacking an essential amino acid stunted the rats' growth, consistent with Liebig's law of the minimum. The final essential amino acid to be discovered, threonine, was identified by William Cumming Rose.
The difficulty in purifying proteins impeded work by early protein biochemists. Proteins could be obtained in large quantities from blood, egg whites, and keratin, but individual proteins were unavailable. In the 1950s, the Armour Hot Dog Company purified 1 kg of bovine pancreatic ribonuclease A and made it freely available to scientists. This gesture helped ribonuclease A become a major target for biochemical study for the following decades.
=== Polypeptides ===
The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms that catalyzed reactions was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein.
Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.
The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.
=== Structure ===
With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences. The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. The use of computers and increasing computing power has supported the study of complex proteins. In 1999, Roger Kornberg determined the highly complex structure of RNA polymerase using high intensity X-rays from synchrotrons.
Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has helped researchers to approach atomic-level resolution of protein structures.
As of April 2024, the Protein Data Bank contains 181,018 X-ray, 19,809 EM and 12,697 NMR protein structures.
== Classification ==
Proteins are primarily classified by sequence and structure, although other classifications are commonly used. For enzymes in particular, the EC number system provides a functional classification scheme. Similarly, gene ontology classifies both genes and proteins by their biological and biochemical function, and by their intracellular location.
Sequence similarity is used to classify proteins in terms of both evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains).
== Biochemistry ==
Most proteins consist of linear polymers built from a series of up to 20 different L-α-amino acids. All proteinogenic amino acids share a common structure in which an α-carbon is bonded to an amino group, a carboxyl group, and a variable side chain. Only proline differs from this basic structure, as its side chain is cyclic and bonds to the amino group, limiting protein chain flexibility. The side chains of the standard amino acids have a variety of chemical structures and properties, and it is the combined effect of all of a protein's amino acids that determines its three-dimensional structure and chemical reactivity.
The amino acids in a polypeptide chain are linked by peptide bonds between the amino and carboxyl groups of adjacent residues. An individual amino acid in a chain is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone.: 19 The peptide bond has two resonance forms that confer some double-bond character to the backbone. The alpha carbons are roughly coplanar with the nitrogen and the carbonyl (C=O) group. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. One consequence of the N-C(O) double bond character is that proteins are somewhat rigid.: 31 A polypeptide chain ends with a free amino group, known as the N-terminus or amino terminus, and a free carboxyl group, known as the C-terminus or carboxy terminus. By convention, peptide sequences are written N-terminus to C-terminus, matching the order in which proteins are synthesized by ribosomes.
The words protein, polypeptide, and peptide are somewhat ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers often lacking a stable 3D structure. But the boundary between the two is not well defined and usually lies near 20–30 residues.
Proteins can interact with many types of molecules and ions, including with other proteins, with lipids, with carbohydrates, and with DNA.
=== Abundance in cells ===
A typical bacterial cell, such as E. coli or Staphylococcus aureus, is estimated to contain about 2 million protein molecules. Smaller bacteria, such as Mycoplasma or spirochetes, contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million proteins and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all protein-coding genes are expressed in most cells; the number expressed depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells. The most abundant protein in nature is thought to be RuBisCO, an enzyme that catalyzes the incorporation of carbon dioxide into organic matter in photosynthesis. Plants can consist of as much as 1% by weight of this enzyme.
== Synthesis ==
=== Biosynthesis ===
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is read in three-nucleotide units called codons, each of which designates an amino acid; for example, AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon.: 1002–42 Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second.
The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.: 1002–42
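The codon-by-codon reading described above can be sketched in a few lines of Python. This is a minimal illustration, not a real translation tool: the tiny `CODON_TABLE` covers only a handful of the 64 codons, and the names are purely illustrative.

```python
# Minimal sketch of codon-to-amino-acid decoding, using an
# illustrative subset of the standard genetic code (64 codons total).
CODON_TABLE = {
    "AUG": "Met",                # start codon, methionine (as in the text)
    "UUU": "Phe", "UUC": "Phe",  # redundancy: two codons, one amino acid
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three nucleotides (one codon) at a time."""
    residues = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        residues.append(aa)
    return residues

print(translate("AUGUUUGGAUAA"))  # ['Met', 'Phe', 'Gly']
```

Note how the table itself exhibits the redundancy mentioned above: UUU and UUC both encode phenylalanine, and all four GGx codons encode glycine.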
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to a bigger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.
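The residue-count and kilodalton figures quoted above can be related by a back-of-the-envelope calculation. This sketch assumes the common rule of thumb of roughly 110 Da average mass per residue; the helper name `approx_mass_kda` is illustrative, not from any library.

```python
# Rough protein mass estimate from residue count, assuming the
# common ~110 Da average residue mass approximation.
AVG_RESIDUE_MASS_DA = 110.0  # approximate average; exact masses vary per residue
WATER_MASS_DA = 18.0         # one water's worth of mass remains at the chain ends

def approx_mass_kda(n_residues: int) -> float:
    return (n_residues * AVG_RESIDUE_MASS_DA + WATER_MASS_DA) / 1000.0

# Average yeast protein (~466 residues) -> roughly 51 kDa,
# consistent with the ~53 kDa figure quoted in the text.
print(round(approx_mass_kda(466), 1))
```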
=== Chemical synthesis ===
Short proteins can be synthesized chemically by a family of peptide synthesis methods. These rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, the opposite of the biological direction of synthesis.
== Structure ==
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation.: 36 Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states.: 37 Biochemists often refer to four distinct aspects of a protein's structure:: 30–34
Primary structure: the amino acid sequence. A protein is a polyamide.
Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of distinct secondary structure can be present in the same protein molecule.
Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein.
Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex.
Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution, protein structures vary because of thermal vibration and collisions with other molecules.: 368–75
Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane.: 165–85
A special case of intramolecular hydrogen bonds within proteins, those poorly shielded from water attack and hence promoting their own dehydration, is called a dehydron.
=== Protein domains ===
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units.: 134 Domains usually have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules.: 155–156
=== Sequence motif ===
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. two prolines [P] separated by two unspecified amino acids [x]), although the surrounding amino acids may determine the exact binding specificity. Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.
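A PxxP search of the kind described above can be sketched as a regular expression over a one-letter-code sequence. The example sequence and the helper `find_pxxp` are hypothetical, chosen only to illustrate the pattern.

```python
import re

# Find PxxP-style motifs: two prolines (P) separated by two
# arbitrary residues (x), in one-letter amino acid code.
# The lookahead lets overlapping motifs all be reported.
PXXP = re.compile(r"(?=(P..P))")

def find_pxxp(seq: str) -> list[str]:
    return [m.group(1) for m in PXXP.finditer(seq)]

print(find_pxxp("MAPSTPRLPNKP"))  # ['PSTP', 'PRLP', 'PNKP']
```

Real motif scanners (such as those behind the ELM database) add context rules, since, as noted above, surrounding residues influence the actual binding specificity.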
== Cellular functions ==
Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.: 120
The chief characteristic of proteins that allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (> 1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine.
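What a dissociation constant of this magnitude means in practice can be illustrated with the standard 1:1 binding isotherm, fraction bound = [L] / (Kd + [L]). This general textbook formula is brought in for illustration; it is not stated in the text above.

```python
# Fraction of protein in the bound state for simple 1:1 binding
# at ligand concentration L (both in molar units).
def fraction_bound(ligand_conc_m: float, kd_m: float) -> float:
    return ligand_conc_m / (kd_m + ligand_conc_m)

# A sub-femtomolar Kd (1e-15 M) gives essentially complete binding
# even at nanomolar ligand, while a Kd near 1 M leaves binding negligible.
print(fraction_bound(1e-9, 1e-15))  # ~1 (virtually all bound)
print(fraction_bound(1e-9, 1.0))    # ~1e-9 (virtually none bound)
```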
Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can bind to, or be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks.: 830–49
Because interactions between proteins are reversible and depend heavily on the availability of different groups of partner proteins to form aggregates capable of carrying out discrete sets of functions, the study of interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types.
=== Enzymes ===
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous—as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme).
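The orotate decarboxylase figure can be sanity-checked by taking the ratio of the two half-lives given above (78 million years without the enzyme, 18 milliseconds with it):

```python
# Back-of-the-envelope check of the ~1e17-fold rate acceleration
# quoted for orotate decarboxylase.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

uncatalysed_s = 78e6 * SECONDS_PER_YEAR  # 78 million years, in seconds
catalysed_s = 18e-3                      # 18 milliseconds

fold = uncatalysed_s / catalysed_s
print(f"{fold:.1e}")  # on the order of 1e17
```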
The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site.: 389
Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
=== Cell signaling and ligand binding ===
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.: 251–81
Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high.: 275–50
Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, and release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom.: 222–29 Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.
Transmembrane proteins can serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.: 232–34
=== Structural proteins ===
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells.: 178–81 Some globular proteins can play structural functions, for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size.: 490
Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for the cellular motility of single-celled organisms and the sperm of many multicellular organisms which reproduce sexually. They generate the forces exerted by contracting muscles: 258–64, 272 and play essential roles in intracellular transport.: 481, 490
== Methods of study ==
Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry. The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism, and can often provide more information about protein behavior in different contexts. In silico studies use computational methods to study proteins.
=== Protein purification ===
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography;: 21–24 the advent of genetic engineering has made possible a number of methods to facilitate purification.
To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity.: 21–24 The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing.
For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of tags have been developed to help researchers purify specific proteins from complex mixtures.
=== Cellular localization ===
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures is often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy.
Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose.
Other possibilities exist, as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it indicates an increased likelihood.
Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of both ultrastructural details and the protein of interest.
Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.
=== Proteomics ===
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.
=== Structure determination ===
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, e.g. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimuli. Circular dichroism is another laboratory technique for determining the internal β-sheet / α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses;: 340–41 a variant known as electron crystallography can produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.
=== Structure prediction ===
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to computationally predict the molecular formations in theory, instead of detecting structures with laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (in eukaryotes ~33%) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is an important part of protein structure characterisation.
=== In silico simulation of dynamical processes ===
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations have revealed the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical calculations have explored the electronic states of rhodopsins.
Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree method and the hierarchical equations of motion approach, which have been applied to plant cryptochromes and bacteria light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives such as the Folding@home project facilitate the molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
=== Chemical analysis ===
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
== Digestion ==
In the absence of catalysts, proteins are slow to hydrolyze. The breakdown of proteins to small peptides and amino acids (proteolysis) is a step in digestion; these breakdown products are then absorbed in the small intestine. The hydrolysis of proteins relies on enzymes called proteases or peptidases. Proteases, which are themselves proteins, come in several types according to the particular peptide bonds that they cleave as well as their tendency to cleave peptide bonds at the terminus of a protein (exopeptidases) vs. peptide bonds in the interior of the protein (endopeptidases). Pepsin is an endopeptidase in the stomach. Downstream of the stomach, the pancreas secretes other proteases to complete the hydrolysis; these include trypsin and chymotrypsin.
Protein hydrolysis is employed commercially as a means of producing amino acids from bulk sources of protein, such as blood meal, feathers, and keratin. Such materials are treated with hot hydrochloric acid, which effects the hydrolysis of the peptide bonds.
== Mechanical properties ==
The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of their underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of the proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability when compared to synthetic polymers, have made them desirable targets for next-generation materials design.
Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates to biological function. For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison, globular proteins, such as bovine serum albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli.
The Young's modulus of a single protein can be found through molecular dynamics simulation. Using either atomistic force fields, such as CHARMM or GROMOS, or coarse-grained force fields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data. The internal dynamics of proteins involve subtle elastic and plastic deformations induced by viscoelastic forces, which can be probed by nano-rheology techniques. These estimates yield typical spring constants around k ≈ 100 pN/nm, equivalent to Young's moduli of E ≈ 100 MPa, and typical friction coefficients of γ ≈ 0.1 pN·s/nm, corresponding to a viscosity of η ≈ 0.01 pN·s/nm² = 10⁷ cP (that is, 10⁷ times more viscous than water).
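The unit conversions behind these figures can be checked with a short sketch. Turning a spring constant into a modulus requires a geometric assumption not stated in the text: here a nanometre-scale protein contact (length L ≈ 1 nm, cross-section A ≈ 1 nm²), chosen to reproduce the quoted orders of magnitude.

```python
PN = 1e-12  # piconewton in newtons
NM = 1e-9   # nanometre in metres

# Spring constant -> Young's modulus via E = k * L / A, assuming a ~1 nm
# protein contact (L = 1 nm, A = 1 nm^2; this geometry is our assumption).
k = 100 * PN / NM          # 100 pN/nm expressed in N/m
L, A = 1 * NM, 1 * NM**2   # assumed contact geometry
E_pa = k * L / A
print(f"E = {E_pa / 1e6:.0f} MPa")  # -> E = 100 MPa

# Friction coefficient per unit area -> viscosity: 0.01 pN*s/nm^2 in Pa*s,
# then in centipoise (1 cP = 1e-3 Pa*s; water is ~1 cP).
eta_pa_s = 0.01 * PN / NM**2
eta_cp = eta_pa_s / 1e-3
print(f"eta = {eta_cp:.0e} cP")  # about 1e7 cP, i.e. ~1e7 times water
```

The point of the sketch is only that the quoted pN/nm and pN·s/nm² values are mutually consistent with the MPa and cP figures.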
At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing. Experimentally observed values for a few proteins can be seen below.
== See also ==
== References ==
== Further reading ==
Textbooks
History
Tanford C, Reynolds JA (2001). Nature's Robots: A History of Proteins. Oxford New York: Oxford University Press, USA. ISBN 978-0-19-850466-5.
== External links ==
=== Databases and projects ===
NCBI Entrez Protein database
NCBI Protein Structure database
Human Protein Reference Database
Human Proteinpedia
Folding@Home (Stanford University) Archived 2012-09-08 at the Wayback Machine
Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures)
Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month Archived 2020-07-24 at the Wayback Machine, presenting short accounts on selected proteins from the PDB)
Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure.
UniProt the Universal Protein Resource
=== Tutorials and educational websites ===
"An Introduction to Proteins" from HOPES (Huntington's Disease Outreach Project for Education at Stanford)
Proteins: Biogenesis to Degradation – The Virtual Library of Biochemistry and Cell Biology | Wikipedia/Protein_interactions |
A DNA machine is a molecular machine constructed from DNA. Research into DNA machines was pioneered in the late 1980s by Nadrian Seeman and co-workers from New York University. DNA is used because of the numerous biological tools already found in nature that can affect DNA, and the immense knowledge of how DNA works previously researched by biochemists.
DNA machines can be logically designed since DNA assembly of the double helix is based on strict rules of base pairing that allow portions of the strand to be predictably connected based on their sequence. This "selective stickiness" is a key advantage in the construction of DNA machines.
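This predictability can be illustrated with a minimal sketch (Python here purely for illustration) that computes which strand will stick to a designed sequence under Watson–Crick pairing:

```python
# Watson-Crick complementarity: A pairs with T, C pairs with G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that pairs antiparallel with `strand` (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def can_hybridize(a: str, b: str) -> bool:
    """True if strand b is the exact antiparallel complement of strand a."""
    return b == reverse_complement(a)

# A designer can predict which strands stick together: this is the
# "selective stickiness" exploited when assembling DNA machines.
print(reverse_complement("GATTACA"))        # -> TGTAATC
print(can_hybridize("GATTACA", "TGTAATC"))  # -> True
```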
An example of a DNA machine was reported by Bernard Yurke and co-workers at Lucent Technologies in the year 2000, who constructed molecular tweezers out of DNA.
The DNA tweezers contain three strands: A, B and C. Strand A latches onto half of strand B and half of strand C, and so joins them all together. Strand A acts as a hinge so that the two "arms" — AB and AC — can move. The structure floats with its arms open wide. They can be pulled shut by adding a fourth strand of DNA (D) "programmed" to stick to both of the dangling, unpaired sections of strands B and C. The closing of the tweezers was proven by tagging strand A at either end with light-emitting molecules that do not emit light when they are close together. To re-open the tweezers, a further strand (E) with the right sequence is added to pair up with strand D. Once paired up, strands D and E have no connection to the machine BAC, so they float away. The DNA machine can be opened and closed repeatedly by cycling between strands D and E. These tweezers can be used for removing drugs from inside fullerenes as well as from a self-assembled DNA tetrahedron. The state of the device can be determined by measuring the separation between donor and acceptor fluorophores using FRET.
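The FRET readout mentioned above can be sketched numerically. The Förster relation gives the transfer efficiency as a function of donor–acceptor separation r and the Förster radius R0; the R0 and distance values below are typical orders of magnitude, not measured values for this device:

```python
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Forster resonance energy transfer efficiency: E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Closed tweezers bring the fluorophores close (high transfer, quenched
# donor emission); open tweezers separate them (low transfer, bright donor).
closed = fret_efficiency(r_nm=2.0)    # arms shut
opened = fret_efficiency(r_nm=12.0)   # arms wide open
print(f"closed: {closed:.3f}, open: {opened:.4f}")
assert closed > 0.9 and opened < 0.01
```

The sharp r⁻⁶ dependence is what makes FRET a sensitive reporter of whether the tweezers are open or shut.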
DNA walkers are another type of DNA machine.
== See also ==
DNA nanotechnology
== References == | Wikipedia/DNA_machine |
A-DNA is one of the possible double helical structures which DNA can adopt. A-DNA is thought to be one of three biologically active double helical structures along with B-DNA and Z-DNA. It is a right-handed double helix fairly similar to the more common B-DNA form, but with a shorter, more compact helical structure whose base pairs are not perpendicular to the helix-axis as in B-DNA. It was discovered by Rosalind Franklin, who also named the A and B forms. She showed that DNA is driven into the A form when under dehydrating conditions. Such conditions are commonly used to form crystals, and many DNA crystal structures are in the A form. The same helical conformation occurs in double-stranded RNAs, and in DNA-RNA hybrid double helices.
== Structure ==
Like the more common B-DNA, A-DNA is a right-handed double helix with major and minor grooves. However, as shown in the comparison table below, there is a slight increase in the number of base pairs (bp) per turn. This results in a smaller twist angle, and smaller rise per base pair, so that A-DNA is 20-25% shorter than B-DNA. The major groove of A-DNA is deep and narrow, while the minor groove is wide and shallow. A-DNA is broader and more compressed along its axis than B-DNA.
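The quoted shortening can be checked directly from commonly cited rise-per-base-pair values (about 3.32 Å for B-DNA and 2.56 Å for A-DNA; exact figures vary slightly between sources):

```python
RISE_B_ANGSTROM = 3.32  # commonly cited rise per base pair, B-DNA
RISE_A_ANGSTROM = 2.56  # commonly cited rise per base pair, A-DNA

def duplex_length_nm(n_bp: int, rise_angstrom: float) -> float:
    """Contour length of an n-bp duplex (1 nm = 10 Angstrom)."""
    return n_bp * rise_angstrom / 10.0

n = 100
b_len = duplex_length_nm(n, RISE_B_ANGSTROM)  # ~33.2 nm
a_len = duplex_length_nm(n, RISE_A_ANGSTROM)  # ~25.6 nm
shortening = 1.0 - a_len / b_len
print(f"A-form is {shortening:.1%} shorter")  # ~22.9%, inside the 20-25% range
```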
The identifiable characteristic of A-DNA in X-ray crystallography is the hole in the center of the helix when viewed down the axis. A-DNA has a C3'-endo sugar pucker, meaning that the C3' carbon of the furanose ring is displaced out of the sugar plane toward the same side as the C5' atom and the base.
== Comparison geometries of the most common DNA forms ==
== A/B intermediates ==
Research also indicates that A-form DNA can hybridize with the more common B-DNA. These A-B intermediate forms adopt the sugar pucker properties and/or the base conformation of both DNA forms. In one study, the characteristic C3'-endo pucker is found on the first three sugars of the DNA strand, while the last three sugars have a C2'-endo pucker, like B-DNA. These intermediates can form in aqueous solutions when the cytosine bases are methylated or brominated, altering the conformation. Alternatively, guanine and cytosine rich fragments have been shown to be easily converted from B to A-form in aqueous solutions.
== Biological function ==
A-DNA can be derived from a few processes, including dehydration and protein binding. Dehydration of DNA drives it into the A form, which has been shown to protect DNA under conditions such as the extreme desiccation of bacteria. Protein binding can also strip solvent off of DNA and convert it to the A form, as revealed by the structure of several hyperthermophilic archaeal viruses. These viruses include rod-shaped rudiviruses SIRV2 and SSRV1, enveloped filamentous lipothrixviruses AFV1, SFV1 and SIFV, tristromavirus PFV2 as well as icosahedral portoglobovirus SPV1. A-form DNA is believed to be one of the adaptations of hyperthermophilic archaeal viruses to harsh environmental conditions in which these viruses thrive.
It has been proposed that the motors that package double-stranded DNA in bacteriophages exploit the fact that A-DNA is shorter than B-DNA, and that conformational changes in the DNA itself are the source of the large forces generated by these motors. Experimental evidence for A-DNA as an intermediate in viral biomotor packing comes from double dye Förster resonance energy transfer measurements showing that B-DNA is shortened by 24% in a stalled ("crunched") A-form intermediate. In this model, ATP hydrolysis is used to drive protein conformational changes that alternatively dehydrate and rehydrate the DNA, and the DNA shortening/lengthening cycle is coupled to a protein-DNA grip/release cycle to generate the forward motion that moves DNA into the capsid.
== See also ==
Nucleic acid tertiary structure
DNA
B-DNA
Z-DNA
C-DNA
== References ==
== External links ==
Cornell Comparison of DNA structures
Nucleic Acid Nomenclature | Wikipedia/A-DNA |
Ancient DNA (aDNA) is DNA isolated from ancient sources (typically specimens, but also environmental DNA). Due to degradation processes (including cross-linking, deamination and fragmentation) ancient DNA is more degraded in comparison with present-day genetic material. Genetic material has been recovered from paleo/archaeological and historical skeletal material, mummified tissues, archival collections of non-frozen medical specimens, preserved plant remains, ice and from permafrost cores, marine and lake sediments and excavation dirt.
Even under the best preservation conditions, there is an upper boundary of 0.4–1.5 million years for a sample to contain sufficient DNA for sequencing technologies. The oldest DNA sequenced from physical specimens is from Siberian mammoth molars more than one million years old. In 2022, two-million-year-old genetic material was recovered from sediments in Greenland and is currently considered the oldest DNA discovered so far.
== History of ancient DNA studies ==
=== 1980s ===
The first study of what would come to be called aDNA was conducted in 1984, when Russ Higuchi and colleagues at the University of California, Berkeley reported that traces of DNA from a museum specimen of the Quagga not only remained in the specimen over 150 years after the death of the individual, but could be extracted and sequenced. Over the next two years, through investigations into natural and artificially mummified specimens, Svante Pääbo confirmed that this phenomenon was not limited to relatively recent museum specimens but could apparently be replicated in a range of mummified human samples that dated as far back as several thousand years.
The laborious processes that were required at that time to sequence such DNA (through bacterial cloning) were an effective brake on the study of ancient DNA (aDNA) and the field of museomics. However, with the development of the Polymerase Chain Reaction (PCR) in the late 1980s, the field began to progress rapidly. Double primer PCR amplification of aDNA (jumping-PCR) can produce highly skewed and non-authentic sequence artifacts. Multiple primer, nested PCR strategy was used to overcome those shortcomings.
=== 1990s ===
The post-PCR era heralded a wave of publications as numerous research groups claimed success in isolating aDNA. Soon a series of incredible findings had been published, claiming authentic DNA could be extracted from specimens that were millions of years old, into the realms of what Lindahl (1993b) has labelled Antediluvian DNA. The majority of such claims were based on the retrieval of DNA from organisms preserved in amber. Insects such as stingless bees, termites, and wood gnats, as well as plant and bacterial sequences were said to have been extracted from Dominican amber dating to the Oligocene epoch. Still older sources of Lebanese amber-encased weevils, dating to within the Cretaceous epoch, reportedly also yielded authentic DNA. Claims of DNA retrieval were not limited to amber.
Reports of several sediment-preserved plant remains dating to the Miocene were published. Then in 1994, Woodward et al. reported what were at the time called the most exciting results to date: mitochondrial cytochrome b sequences that had apparently been extracted from dinosaur bones dating to more than 80 million years ago. When in 1995 two further studies reported dinosaur DNA sequences extracted from a Cretaceous egg, it seemed that the field would revolutionize knowledge of the Earth's evolutionary past. Even these extraordinary ages were topped by the claimed retrieval of 250-million-year-old halobacterial sequences from halite.
The development of a better understanding of the kinetics of DNA preservation, the risks of sample contamination and other complicating factors led the field to view these results more skeptically. Numerous careful attempts failed to replicate many of the findings, and all of the decade's claims of multi-million year old aDNA would come to be dismissed as inauthentic.
=== 2000s ===
Single primer extension amplification was introduced in 2007 to address postmortem DNA modification damage. Since 2009 the field of aDNA studies has been revolutionized with the introduction of much cheaper research techniques, and since 2010 been able to sequence ancient human DNA, recovering complete genomes. The use of high-throughput Next Generation Sequencing (NGS) techniques in the field of ancient DNA research has been essential for reconstructing the genomes of ancient or extinct organisms. A single-stranded DNA (ssDNA) library preparation method has sparked great interest among ancient DNA (aDNA) researchers.
In addition to these technical innovations, the start of the decade saw the field begin to develop better standards and criteria for evaluating DNA results, as well as a better understanding of the potential pitfalls.
=== 2020s ===
The 2022 Nobel Prize in Physiology or Medicine was awarded to Svante Pääbo "for his discoveries concerning the genomes of extinct hominins and human evolution". A few days later, on 7 December 2022, a study in Nature reported that two-million-year-old genetic material was found in Greenland; it is currently considered the oldest DNA discovered so far.
== Problems and errors ==
=== Degradation processes ===
Due to degradation processes (including cross-linking, deamination and fragmentation), ancient DNA is of lower quality than modern genetic material. The damage characteristics and ability of aDNA to survive through time restricts possible analyses and places an upper limit on the age of successful samples. There is a theoretical correlation between time and DNA degradation, although differences in environmental conditions complicate matters. Samples subjected to different conditions are unlikely to predictably align to a uniform age-degradation relationship. The environmental effects may even matter after excavation, as DNA decay-rates may increase, particularly under fluctuating storage conditions. Even under the best preservation conditions, there is an upper boundary of 0.4 to 1.5 million years for a sample to contain sufficient DNA for contemporary sequencing technologies.
Research into the decay of mitochondrial and nuclear DNA in moa bones has modelled mitochondrial DNA degradation to an average length of 1 base pair after 6,830,000 years at −5 °C. The decay kinetics have been measured by accelerated aging experiments, further displaying the strong influence of storage temperature and humidity on DNA decay. Nuclear DNA degrades at least twice as fast as mtDNA. Early studies that reported recovery of much older DNA, for example from Cretaceous dinosaur remains, may have stemmed from contamination of the sample.
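The moa result can be folded into a back-of-the-envelope random-scission model: if phosphodiester bonds break independently at rate k, the mean surviving fragment length is roughly 1/(k·t). Calibrating k so that the mean length reaches 1 bp at 6.83 million years (the −5 °C figure above) gives a rough, illustrative predictor; this is a simplification for orientation, not the published kinetic model.

```python
# Calibrate a per-bond scission rate from the moa datapoint: mean fragment
# length ~1 bp after 6.83 Myr at -5 C implies k * t = 1 at that age.
T_CALIB_YEARS = 6.83e6
k_per_year = 1.0 / T_CALIB_YEARS  # ~1.5e-7 broken bonds per site per year

def mean_fragment_length_bp(age_years: float) -> float:
    """Mean fragment length under random scission, approximated as 1/(k*t)."""
    return 1.0 / (k_per_year * age_years)

for age in (1e3, 1e4, 1e5, 1e6):
    print(f"{age:>9.0f} yr: ~{mean_fragment_length_bp(age):.0f} bp")
# Even at these idealized -5 C rates, million-year-old samples average only
# a handful of base pairs per fragment, consistent with the practical
# sequencing ceiling of ~0.4-1.5 Myr discussed earlier.
```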
=== Age limit ===
A critical review of ancient DNA literature through the development of the field highlights that few studies have succeeded in amplifying DNA from remains older than several hundred thousand years. A greater appreciation for the risks of environmental contamination and studies on the chemical stability of DNA have raised concerns over previously reported results. The alleged dinosaur DNA was later revealed to be human Y-chromosome. The DNA reported from encapsulated halobacteria has been criticized based on its similarity to modern bacteria, which hints at contamination, or they may be the product of long-term, low-level metabolic activity.
aDNA may contain a large number of postmortem mutations, increasing with time. Some regions of polynucleotide are more susceptible to this degradation, allowing erroneous sequence data to bypass statistical filters used to check the validity of data. Due to sequencing errors, great caution should be applied to interpretation of population size. Substitutions resulting from deamination of cytosine residues are vastly over-represented in the ancient DNA sequences. Miscoding of C to T and G to A accounts for the majority of errors.
=== Contamination ===
Another problem with ancient DNA samples is contamination by modern human DNA and by microbial DNA (most of which is also ancient). New methods have emerged in recent years to prevent possible contamination of aDNA samples, including conducting extractions under extreme sterile conditions, using special adapters to identify endogenous molecules of the sample (distinguished from those introduced during analysis), and applying bioinformatics to resulting sequences based on known reads in order to approximate rates of contamination.
== Authentication of aDNA ==
Development in the aDNA field in the 2000s increased the importance of authenticating recovered DNA to confirm that it is indeed ancient and not the result of recent contamination. As DNA degrades over time, the nucleotides that make up the DNA may change, especially at the ends of the DNA molecules. The deamination of cytosine to uracil at the ends of DNA molecules has become a means of authentication. During DNA sequencing, DNA polymerases incorporate an adenine (A) across from the uracil (U), leading to cytosine (C) to thymine (T) substitutions in the aDNA data. These substitutions increase in frequency as the sample gets older. The frequency of this C-to-T damage can be measured using software such as mapDamage2.0 or PMDtools, or interactively with metaDMG. Due to hydrolytic depurination, DNA fragments into smaller pieces, leading to single-stranded breaks. Combined with the damage pattern, this short fragment length can also help differentiate between modern and ancient DNA.
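The authentication signal reduces to a counting exercise over reads aligned to a reference; the sketch below is a toy version of what tools like mapDamage2.0 compute, and the sequences in it are invented:

```python
def terminal_ct_rate(pairs, position=0):
    """Fraction of reference-C sites at `position` from the 5' end that read
    as T: the deamination damage signal used to authenticate aDNA."""
    c_sites = mismatches = 0
    for ref, read in pairs:
        if ref[position] == "C":
            c_sites += 1
            if read[position] == "T":
                mismatches += 1
    return mismatches / c_sites if c_sites else 0.0

# Toy aligned (reference, read) pairs; C->T at read ends mimics damage.
aligned = [
    ("CGATTG", "TGATTG"),  # terminal C read as T (damage)
    ("CATGCA", "TATGCA"),  # terminal C read as T (damage)
    ("CTTGAC", "CTTGAC"),  # terminal C intact
    ("GATTCA", "GATTCA"),  # terminal G: not a C site
]
print(terminal_ct_rate(aligned))  # 2 of 3 terminal C sites damaged
```

In real data this rate is computed per position, and a profile that rises sharply toward the fragment ends is the expected signature of authentic ancient DNA.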
== Non-human aDNA ==
Despite the problems associated with aDNA, a wide and ever-increasing range of aDNA sequences have now been published from a range of animal and plant taxa. Tissues examined include artificially or naturally mummified animal remains, bone, shells, paleofaeces, alcohol preserved specimens, rodent middens, dried plant remains, and recently, extractions of animal and plant DNA directly from soil samples.
In June 2013, a group of researchers including Eske Willerslev, Marcus Thomas Pius Gilbert and Orlando Ludovic of the Centre for Geogenetics, Natural History Museum of Denmark at the University of Copenhagen, announced that they had sequenced the DNA of a 560–780 thousand year old horse, using material extracted from a leg bone found buried in permafrost in Canada's Yukon territory. A German team also reported in 2013 the reconstructed mitochondrial genome of a bear, Ursus deningeri, more than 300,000 years old, proving that authentic ancient DNA can be preserved for hundreds of thousand years outside of permafrost. The DNA sequence of even older nuclear DNA was reported in 2021 from the permafrost-preserved teeth of two Siberian mammoths, both over a million years old.
Researchers in 2016 measured chloroplast DNA in marine sediment cores, and found diatom DNA dating back 1.4 million years. This DNA had a half-life significantly longer than in previous research: up to 15,000 years. Kirkpatrick's team also found that the DNA decayed at a half-life rate only for about the first 100,000 years, after which it followed a slower, power-law decay rate.
== Human aDNA ==
Due to the considerable anthropological, archaeological, and public interest directed toward human remains, they have received considerable attention from the DNA community. There are also more profound contamination issues, since the specimens belong to the same species as the researchers collecting and evaluating the samples.
=== Sources ===
Due to the morphological preservation in mummies, many studies from the 1990s and 2000s used mummified tissue as a source of ancient human DNA. Examples include both naturally preserved specimens, such as the Ötzi the Iceman frozen in a glacier and bodies preserved through rapid desiccation at high altitude in the Andes, as well as various chemically treated preserved tissue such as the mummies of ancient Egypt. However, mummified remains are a limited resource. The majority of human aDNA studies have focused on extracting DNA from two sources much more common in the archaeological record: bones and teeth. The bone that is most often used for DNA extraction is the petrous ear bone, since its dense structure provides good conditions for DNA preservation. Several other sources have also yielded DNA, including paleofaeces, and hair. Contamination remains a major problem when working on ancient human material.
Ancient pathogen DNA has been successfully retrieved from samples dating to more than 5,000 years old in humans and as long as 17,000 years ago in other species. In addition to the usual sources of mummified tissue, bones and teeth, such studies have also examined a range of other tissue samples, including calcified pleura, tissue embedded in paraffin, and formalin-fixed tissue. Efficient computational tools have been developed for pathogen and microorganism aDNA analyses at small (QIIME) and large (FALCON) scales.
=== Results ===
Taking preventive measures against such contamination, a 2012 study analyzed bone samples from a Neanderthal group in the El Sidrón cave, finding new insights into potential kinship and genetic diversity from the aDNA. In November 2015, scientists reported finding a 110,000-year-old tooth containing DNA from the Denisovan hominin, an extinct species of human in the genus Homo.
The research has added new complexity to the peopling of Eurasia. A study from 2018 showed that a Bronze Age mass migration had greatly impacted the genetic makeup of the British Isles, bringing with it the Bell Beaker culture from mainland Europe.
It has also revealed new information about links between the ancestors of Central Asians and the indigenous peoples of the Americas. In Africa, older DNA degrades quickly due to the warmer tropical climate, although, in September 2017, ancient DNA samples, as old as 8,100 years old, have been reported.
Moreover, ancient DNA has helped researchers to estimate modern human divergence. By sequencing African genomes from three Stone Age hunter gatherers (2000 years old) and four Iron Age farmers (300 to 500 years old), Schlebusch and colleagues were able to push back the date of the earliest divergence between human populations to 350,000 to 260,000 years ago.
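Divergence dates of this kind rest on a molecular-clock calculation, which can be sketched in a few lines; the mutation rate and divergence values below are illustrative, not those of the study:

```python
def divergence_time_years(pairwise_divergence: float,
                          mu_per_site_per_year: float) -> float:
    """Molecular clock: two lineages accumulate differences at a combined
    rate of 2*mu per site per year, so T = d / (2 * mu)."""
    return pairwise_divergence / (2.0 * mu_per_site_per_year)

# Illustrative numbers: 0.03% sequence divergence between two populations
# and a human-like mutation rate of ~0.5e-9 per site per year.
d = 3e-4
mu = 0.5e-9
print(f"~{divergence_time_years(d, mu):,.0f} years")  # ~300,000 years
```

Ancient genomes help this kind of estimate chiefly by calibrating the clock: a dated specimen of known age constrains how many mutations accumulate per unit time.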
As of 2021, the oldest completely reconstructed human genomes are ~45,000 years old. Such genetic data provide insights into migration and genetic history (e.g. of Europe), including interbreeding between archaic and modern humans, such as admixture between early European modern humans and Neanderthals.
== Researchers specializing in ancient DNA ==
== See also ==
== References ==
== Further reading ==
== External links ==
"Ancient DNA". Ancestral journeys. Archived from the original on October 3, 2016.
Famous mtDNA, isogg wiki
Ancient mtDNA, isogg wiki
Ancient DNA Archived 2020-03-06 at the Wayback Machine, y-str.org
Evidence of the Past: A Map and Status of Ancient Remains – samples from the USA; no sequence data here.
"Unravelling the mummy mystery – using DNA". Archived from the original on December 14, 2009 – no data on Y-DNA, only mtDNA | Wikipedia/Ancient_DNA |
A DNA clamp, also known as a sliding clamp, is a protein complex that serves as a processivity-promoting factor in DNA replication. As a critical component of the DNA polymerase III holoenzyme, the clamp protein binds DNA polymerase and prevents this enzyme from dissociating from the template DNA strand. The clamp-polymerase protein–protein interactions are stronger and more specific than the direct interactions between the polymerase and the template DNA strand; because one of the rate-limiting steps in the DNA synthesis reaction is the association of the polymerase with the DNA template, the presence of the sliding clamp dramatically increases the number of nucleotides that the polymerase can add to the growing strand per association event. The presence of the DNA clamp can increase the rate of DNA synthesis up to 1,000-fold compared with a nonprocessive polymerase.
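The processivity effect can be captured in a one-line kinetic model: if the polymerase incorporates nucleotides at rate k_pol and dissociates at rate k_off, the expected number of nucleotides per association event is k_pol/k_off; the clamp acts by lowering k_off. The rates below are illustrative, chosen only to reproduce the ~1,000-fold figure:

```python
def processivity(k_pol_per_s: float, k_off_per_s: float) -> float:
    """Expected nucleotides incorporated per association event."""
    return k_pol_per_s / k_off_per_s

# Illustrative rates: same polymerization speed, but the clamp slows
# dissociation ~1000-fold, giving the ~1000-fold processivity gain.
without_clamp = processivity(k_pol_per_s=500.0, k_off_per_s=50.0)   # ~10 nt
with_clamp    = processivity(k_pol_per_s=500.0, k_off_per_s=0.05)   # ~10,000 nt
print(round(with_clamp / without_clamp))  # -> 1000
```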
== Structure ==
The DNA clamp is an α+β protein that assembles into a multimeric, six-domain ring structure that completely encircles the DNA double helix as the polymerase adds nucleotides to the growing strand. Each domain is in turn made of two β-α-β-β-β structural repeats. The DNA clamp assembles on the DNA at the replication fork and "slides" along the DNA with the advancing polymerase, aided by a layer of water molecules in the central pore of the clamp between the DNA and the protein surface. Because of the toroidal shape of the assembled multimer, the clamp cannot dissociate from the template strand without also dissociating into monomers.
The DNA clamp fold is found in bacteria, archaea, eukaryotes and some viruses. In bacteria, the sliding clamp is a homodimer composed of two identical beta subunits of DNA polymerase III and hence is referred to as the beta clamp. In archaea and eukaryotes, it is a trimer composed of three molecules of PCNA. The T4 bacteriophage also uses a sliding clamp, called gp45 that is a trimer similar in structure to PCNA but lacks sequence homology to either PCNA or the bacterial beta clamp.
=== Bacterial ===
The beta clamp is a specific DNA clamp and a subunit of the DNA polymerase III holoenzyme found in bacteria. Two beta subunits are assembled around the DNA by the gamma subunit and ATP hydrolysis; this assembly is called the pre-initiation complex. After assembly around the DNA, the beta subunits' affinity for the gamma subunit is replaced by an affinity for the alpha and epsilon subunits, which together create the complete holoenzyme. DNA polymerase III is the primary enzyme complex involved in prokaryotic DNA replication.
The gamma complex of DNA polymerase III, composed of γδδ'χψ subunits, uses ATP hydrolysis to chaperone two beta subunits onto the DNA. Once bound to DNA, the beta subunits can slide freely along double-stranded DNA. The beta subunits in turn bind the αε polymerase complex. The α subunit possesses DNA polymerase activity, and the ε subunit is a 3'–5' exonuclease.
The beta chain of bacterial DNA polymerase III is composed of three topologically equivalent domains (N-terminal, central, and C-terminal). Two beta chain molecules are tightly associated to form a closed ring encircling duplex DNA.
==== As a drug target ====
Certain NSAIDs (carprofen, bromfenac, and vedaprofen) exhibit some suppression of bacterial DNA replication by inhibiting bacterial DNA clamp.
=== Eukaryotic and archaeal ===
The sliding clamp in eukaryotes is assembled from a specific subunit of DNA polymerase delta called the proliferating cell nuclear antigen (PCNA). The N-terminal and C-terminal domains of PCNA are topologically identical. Three PCNA molecules are tightly associated to form a closed ring encircling duplex DNA.
The sequence of PCNA is well conserved between plants, animals and fungi, indicating a strong selective pressure for structure conservation, and suggesting that this type of DNA replication mechanism is conserved throughout eukaryotes. In eukaryotes, a homologous, heterotrimeric "9-1-1 clamp" made up of RAD9-RAD1-HUS1 (911) is responsible for DNA damage checkpoint control. This 9-1-1 clamp mounts onto DNA in the opposite direction.
Archaea, the probable evolutionary precursors of eukaryotes, also universally have at least one PCNA gene. This PCNA ring works with PolD, the single eukaryotic-like DNA polymerase in archaea, which is responsible for multiple functions from replication to repair. Some unusual species have two or even three PCNA genes, forming heterotrimers or distinct specialized homotrimers. Archaea also share with eukaryotes the PIP (PCNA-interacting protein) motif, but a wider variety of such proteins performing different functions is found.
PCNA is also appropriated by some viruses. The giant virus genus Chlorovirus, with PBCV-1 as a representative, carries in its genome two PCNA genes (Q84513, O41056) and a eukaryotic-type DNA polymerase. Members of Baculoviridae also encode a PCNA homolog (P11038).
=== Caudoviral ===
The viral gp45 sliding clamp subunit protein contains two domains. Each domain consists of two alpha helices and two beta sheets – the fold is duplicated and has internal pseudo two-fold symmetry. Three gp45 molecules are tightly associated to form a closed ring encircling duplex DNA.
=== Herpesviral ===
Some members of Herpesviridae encode a protein that has a DNA clamp fold but does not associate into a ring clamp. The two-domain protein does, however, associate with the viral DNA polymerase and also acts to increase processivity. As it does not form a ring, it does not need a clamp loader to be attached to DNA.
== Assembly ==
Sliding clamps are loaded onto their associated DNA template strands by specialized proteins known as "sliding clamp loaders", which also disassemble the clamps after replication has completed. The binding sites for these loader proteins overlap with the binding sites for the DNA polymerase, so the clamp cannot simultaneously associate with a clamp loader and with a polymerase. Thus the clamp will not be actively disassembled while the polymerase remains bound. DNA clamps also associate with other factors involved in DNA and genome homeostasis, such as nucleosome assembly factors, Okazaki fragment ligases, and DNA repair proteins. All of these proteins also share a binding site on the DNA clamp that overlaps with the clamp loader site, ensuring that the clamp will not be removed while any enzyme is still working on the DNA. The activity of the clamp loader requires ATP hydrolysis to "close" the clamp around the DNA.
== References ==
== Further reading ==
Clamping down on pathogenic bacteria – how to shut down a key DNA polymerase complex. Quips at PDBe. Archived 2013-08-02 at archive.today
Watson JD, Baker TA, Bell SP, Gann A, Levine M, Losick R (2004). Molecular Biology of the Gene. San Francisco: Pearson/Benjamin Cummings. ISBN 978-0-8053-4635-0.
== External links ==
SCOP DNA clamp fold
CATH box architecture
clamp+protein+DnaN,+E+coli at the U.S. National Library of Medicine Medical Subject Headings (MeSH) | Wikipedia/DNA_clamp |
ScienceDaily is an American website, launched in 1995, that aggregates and publishes lightly edited press releases about science (a practice called churnalism), similar to Phys.org and EurekAlert!.
== History ==
The site was founded by married couple Dan and Michele Hogan in 1995; Dan Hogan formerly worked in the public affairs department of Jackson Laboratory writing press releases. The site makes money from selling advertisements. As of 2010, the site said that it had grown "from a two-person operation to a full-fledged news business with worldwide contributors". At the time, it was run out of the Hogans' home, had no reporters, and only reprinted press releases. In 2012, Quantcast ranked it 614th, with 2.6 million U.S. visitors.
== Sections ==
As of August 2023, ScienceDaily has five main sections: Health, Tech, Enviro, Society, and Quirky, the last of which includes the top news.
== References ==
== External links ==
Official website
Alexa – ScienceDaily, archived February 26, 2020, at the Wayback Machine | Wikipedia/ScienceDaily |
DNA (deoxyribonucleic acid) is a molecule encoding the genetic instructions for life.
DNA may also refer to:
== Companies ==
DNA Films, a British film studio
DNA Oyj, a Finnish telecommunications company
DNA Productions, an American animation studio
DNA Publications, an American publishing company
DNA Studio, an advertising agency
Ginkgo Bioworks, a biotech company (NYSE stock symbol: DNA)
Dan Air (Romania), a Romanian airline (ICAO code: DNA)
== Computing ==
DIGITAL Network Architecture, DECnet's peer-to-peer networking architecture
BitTorrent DNA, a download accelerator
Windows DNA, a defunct predecessor of the Microsoft .NET Framework
Direct Note Access, technology for music editing from Celemony Software
== Film and television ==
Daily News and Analysis, an Indian broadsheet newspaper between 2005 and 2019
DNA (1997 film), an American science fiction action film
DNA (2020 film), a French drama film
DNA (British TV series), a British television crime drama, aired in 2004 and 2006
DNA (Danish TV series), a Danish television crime drama starring Anders W. Berthelsen, aired in 2019 and 2023
"DNA" (Red Dwarf), a 1991 episode of Red Dwarf
DNA (2024 film), an Indian Malayalam-language crime thriller film
== Literature ==
DNA Magazine, an Australian monthly magazine
Les Dernières Nouvelles d'Alsace or Les DNA, a daily French newspaper
DNA, a 2007 play by Dennis Kelly
DnA, the joint pen name of Dan Abnett and Andy Lanning, British comic book writing duo
== Music ==
=== Bands ===
DNA (American band), a No Wave band
DNA (duo), an electronic dance music duo
DNA, a rock band formed in 1983 by Rick Derringer and Carmine Appice
DNA, a Kazakh boy group under Juz Entertainment
=== Albums ===
DNA (Backstreet Boys album) (2019)
DNA (Wanessa Camargo album) (2011)
DNA (Last Live at CBGB's), a 1993 album by DNA
D.N.A. (John Foxx album) (2010)
DNA (Koda Kumi album) (2018)
DNA (Little Mix album) (2012)
D.N.A. (Mario album) (2009)
DNA (Matthew Shipp and William Parker album) (1999)
DNA (Trapt album) (2016)
DNA (Ian Yates album) (2014)
DNA (Ghali album) (2020)
DNA, a 2019 album by Jeanette Biedermann
=== Songs ===
"DNA" (BTS song) (2017)
"DNA" (Empire of the Sun song) (2013)
"D.N.A." (A Flock of Seagulls song) (1982)
"DNA" (Kendrick Lamar song) (2017)
"DNA" (Little Mix song) (2012)
"DNA", a song by Craig David from 22 (2022)
"DNA", a song by Danny Brown from XXX
"DNA", a song by K.Flay from Solutions (2019)
"DNA", a song by the Kills from Blood Pressures
"DNA", a song written by Howard Benson, Lenard Skolnik, Lia Marie Johnson, and Sidnie Tipton and recorded by Lia Marie Johnson (2016)
"DNA", a song by Rye Rye from Go! Pop! Bang!
"DNA", a song by Wale from Shine
"DNA", a song by Earl Sweatshirt featuring Na-Kel from I Don't Like Shit, I Don't Go Outside
"D/N/A", a song from the video game Hatsune Miku: Colorful Stage!
== Politics and government ==
Democratic National Assembly, a political party in Trinidad and Tobago
Det norske Arbeiderparti or Norwegian Labour Party
National Anticorruption Directorate or Direcția Națională Anticorupție, a Romanian anti-corruption agency
The National Assembly or De Nationale Assemblée, the parliament of Suriname
Democratic – Neutral – Authentic, a political party in Austria
== Other uses ==
DNa inscription, an ancient inscription at the tomb of Darius the Great
DNA Lounge, a nightclub in San Francisco, California, U.S.
Dynamic network analysis, a scientific field in sociology and statistics
Dynamic New Athletics, a team competition format in athletics
DNA, an index herbariorum code in the FloraNT database
Did Not Attend, a motorsport term
== See also ==
Corporate DNA, factors underlying and affecting organizational culture
DeNA, mobile provider in Japan
D.N.A.: Dark Native Apostle, a 2001 PS2 action game from Tamsoft
DNA computing, a field of non-silicon computing technologies based on molecular biology
DNA profiling
DNA², a 1993 manga by Masakazu Katsura, subsequently adapted into an anime
D.N.Angel, a 2003 manga/anime franchise by Yukiru Sugisaki
DNAR or "Do not attempt resuscitation", legal order to withhold cardiopulmonary resuscitation (CPR) or cardiac life support
Z-DNA, one of the possible double helical structures of DNA | Wikipedia/DNA_(disambiguation) |