id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
11,148,848 | https://en.wikipedia.org/wiki/NCAR%20LSM%201.0 | The National Center for Atmospheric Research Land Surface Model (LSM) is a one-dimensional computational model developed by Gordon Bonan that combines the ecological processes found in many ecosystem models, the hydrological processes found in hydrological models, and the surface fluxes treated by the land-surface schemes of atmospheric models.
In this way the model examines land-atmosphere interactions, especially the biogeophysics (sensible and latent heat, momentum, albedo, longwave emission) and biogeochemistry (CO2) of the land surface, and thus the effect of the land surface on the climate and composition of the atmosphere.
The model uses a simplified treatment of surface fluxes that reproduces, at minimal computational cost, the essential characteristics of the land-atmosphere interactions relevant to climate simulations.
Because vegetated surfaces come in many types, the model uses a standardized set of land-cover types, which also includes water-covered surfaces such as lakes. The model is run independently for each grid point, with each point receiving the same averaged atmospheric forcing, and it operates on a spatial grid that can range from a single point to global coverage.
References
Bonan, G.B. (1996). A land surface model (LSM version 1.0) for ecological, hydrological, and atmospheric studies: technical description and user's guide. NCAR Technical Note NCAR/TN-417+STR. National Center for Atmospheric Research 1-150.
Bonan, G.B. (1996). Model Documentation: copy technical note
Computational science
Ecosystems
Systems ecology | NCAR LSM 1.0 | [
"Mathematics",
"Biology",
"Environmental_science"
] | 314 | [
"Symbiosis",
"Systems ecology",
"Applied mathematics",
"Computational science",
"Ecosystems",
"Environmental social science"
] |
11,150,549 | https://en.wikipedia.org/wiki/Precise%20Time%20and%20Time%20Interval | Precise Time and Time Interval (PTTI) is a Department of Defense military and Global Positioning System standard which details a mechanism and waveform for distributing highly accurate timing information.
It is similar to pulse per second (PPS) because it indicates the start of each second using a pulse. PTTI also provides the full Time Of Day (TOD) in hours, minutes, and seconds.
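As an illustration only (the actual PTTI waveform and field encoding are defined by the DOD standard and are not reproduced here), the following MATLAB sketch shows the kind of information a PTTI-style message carries alongside the per-second pulse: the full time of day split into hours, minutes, and seconds. The function name and layout are hypothetical.
% Illustrative sketch: split a seconds-of-day count into the
% hours/minutes/seconds fields that PTTI distributes alongside the
% one-pulse-per-second marker. The field layout here is hypothetical.
function tod = secondsToTOD(seconds_of_day)
    tod.hours   = floor(seconds_of_day / 3600);
    tod.minutes = floor(mod(seconds_of_day, 3600) / 60);
    tod.seconds = mod(seconds_of_day, 60);
end
% Example: 45296 s after midnight -> 12 h, 34 min, 56 s
% t = secondsToTOD(45296);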
See also
Clock signal
Inter-Range Instrumentation Group (IRIG) time codes
Pulse Per Second (PPS) (1PPS)
Square wave
Timecode
References
External links
DOD-STD-1399-441 Interface Standard for Shipboard Systems Section 441 Precise Time and Time Interval (PTTI)
Timekeeping | Precise Time and Time Interval | [
"Physics"
] | 140 | [
"Spacetime",
"Timekeeping",
"Physical quantities",
"Time"
] |
11,151,209 | https://en.wikipedia.org/wiki/VALBOND | In molecular mechanics, VALBOND is a method for computing the angle bending energy that is based on valence bond theory. It is based on orbital strength functions, which are maximized when the hybrid orbitals on the atom are orthogonal. The hybridization of the bonding orbitals are obtained from empirical formulas based on Bent's rule, which relates the preference towards p character with electronegativity.
The VALBOND functions are suitable for describing the energy of bond angle distortion not only around the equilibrium angles, but also at very large distortions. This represents an advantage over the simpler harmonic oscillator approximation used by many force fields, and allows the VALBOND method to handle hypervalent molecules and transition metal complexes. The VALBOND energy term has been combined with force fields such as CHARMM and UFF to provide a complete functional form that includes also bond stretching, torsions, and non-bonded interactions.
Functional form
Non-hypervalent molecules
For an angle α between normal (non-hypervalent) bonds involving an sp^m d^n hybrid orbital, the energy contribution is
,
where k is an empirical scaling factor that depends on the elements involved in the bond, and S^max, the maximum strength function, is
and S(α) is the strength function
which depends on the nonorthogonality integral Δ:
The energy contribution is added twice, once per each of the bonding orbitals involved in the angle (which may have different hybridizations and different values for k).
For non-hypervalent p-block atoms, the hybridization value n is zero (no d-orbital contribution), and m is obtained as %p/(1 − %p), where %p is the p character of the orbital obtained from
where the sum over j includes all ligands, lone pairs, and radicals on the atom, and n_p is the "gross hybridization" (for example, for an "sp2" atom, n_p = 2). The weight wt_i depends on the two elements involved in the bond (or just one for lone pairs or radicals), and represents the preference for p character of different elements. The values of the weights are empirical, but can be rationalized in terms of Bent's rule.
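The weighting scheme can be illustrated with a minimal MATLAB sketch. It assumes the commonly cited form of the relation, %p_i = n_p · wt_i / Σ_j wt_j, together with m = %p/(1 − %p) as above; the weight values used below are placeholders, not the published empirical VALBOND parameters.
% Illustrative sketch of the p-character weighting (assumed form
% %p_i = np * wt_i / sum_j wt_j, with m = %p / (1 - %p)).
% The weights below are placeholders, not published VALBOND values.
np  = 3;                    % gross hybridization, e.g. an "sp3" atom
wt  = [1.0 1.0 1.0 1.2];    % one weight per ligand/lone pair (hypothetical)
p_i = np * wt / sum(wt);    % p character assigned to each orbital
m_i = p_i ./ (1 - p_i);     % hybridization m of each sp^m orbital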
Hypervalent molecules
For hypervalent molecules, the energy is represented as a combination of VALBOND configurations, which are akin to resonance structures that place three-center four-electron bonds (3c4e) in different ways. For example, ClF3 is represented as having one "normal" two-center bond and one 3c4e bond. There are three different configurations for ClF3, each one using a different Cl-F bond as the two-center bond. For more complicated systems the number of combinations increases rapidly; SF6 has 45 configurations.
where the sum is over all configurations j, and the coefficient c_j is defined by the function
where "hype" refers to the 3c4e bonds. This function ensures that the configurations where the 3c4e bonds are linear are favored.
The energy terms are modified by multiplying them by a bond order factor, BOF, which is the product of the formal bond orders of the two bonds involved in the angle (for 3c4e bonds, the bond order is 0.5). For 3c4e bonds, the energy is calculated as
where Δ is again the non-orthogonality function, but here the angle α is offset by 180 degrees (π radians).
Finally, to ensure that the axial vs equatorial preference of different ligands in hypervalent compounds is reproduced, an "offset energy" term is subtracted. It has the form
where the EN terms depend on the electronegativity difference between the ligand and the central atom as follows:
where ss is 1 if the electronegativity difference is positive and 2 if it is negative.
For p-block hypervalent molecules, d orbitals are not used, so n = 0. The p contribution m is estimated from ab initio quantum chemistry methods and a natural bond orbital (NBO) analysis.
Extension
More recent extensions, available in the CHARMM suite of codes, include the trans-influence (or trans effect) within VALBOND-TRANS and the possibility to run reactive molecular dynamics with "Multi-state VALBOND".
References
Force fields (chemistry) | VALBOND | [
"Chemistry"
] | 888 | [
"Molecular dynamics",
"Computational chemistry",
"Force fields (chemistry)"
] |
11,151,417 | https://en.wikipedia.org/wiki/Eastern%20Trough%20Area%20Project | The Eastern Trough Area Project, commonly known as ETAP, is a network of nine smaller oil and gas fields in the Central North Sea covering an area up to 35 km in diameter. Six of the fields are operated by BP and the other three by Shell, and together they represent a rich mix of geology, chemistry, technology and equity arrangements.
Development
The ETAP complex was sanctioned for development in 1995 with first hydrocarbons produced in 1998. The original development included Marnock, Mungo, Monan and Machar from BP and Heron, Egret, Skua from Shell. In 2002, BP brought Mirren and Madoes on stream. With these nine fields, the total reserves of ETAP are approximately of oil, of natural gas condensate and of natural gas.
A single central processing facility (CPF) sits over the Marnock field and serves as a hub for all production and operations of the asset, including all processing and export, and a base for trips to the Mungo NUI. The CPF consists of separate platforms for operations and accommodation linked by two 60 m bridges. The Processing, drilling and Riser platform (PdR) contains the process plant and the export lines, a riser area to receive production fluids from the other ETAP fields, and the wellheads of Marnock. The Quarters and Utilities platform (QU) provides accommodation for up to 157 personnel operating this platform or travelling onwards to the Mungo NUI. This partitioning of accommodation and operations into two platforms adds an extra element of safety, a particular concern for the designers coming only a few years after the Cullen report on the Piper Alpha disaster.
Liquids are exported to Kinneil at Grangemouth through the Forties pipeline system. Gas is exported by the Central Area Transmission System to Teesside.
Apart from Mungo, which has surface wellheads on a NUI, all other fields use subsea tie-backs.
A tenth field, Fidditch, was under development by BP, but the project has been put on hold due to the global economic downturn.
ETAP fields
Marnock
The Marnock field is located in UKCS block 22/24 and is named after Saint Marnock. It is a high pressure, high temperature gas condensate field with an initial reservoir pressure of 9,000 psi. Estimated recoverable reserves are 600 billion scf and of condensate. Marnock produces directly to surface wellheads on the CPF. It is operated by BP in partnership with Shell, Esso and AGIP. The holdings in the Marnock field are as follows: BP = 73%, Esso = 13.5%, Shell = 13.5%.
Mungo
The Mungo field is located in UKCS block 23/16 and is named after Saint Mungo. It is an oilfield with a natural gas cap. Water and gas injection are used to manage the reservoir, which necessitated a small normally unmanned installation be built to support these facilities. The NUI is tied back to the CPF. The field is operated by BP in partnership with Nippon Oil, Murphy Oil and Total S.A.
The holdings in Mungo are: BP = 82.35%, Zennor = 12.65%, JX Nippon = 5%
Monan
The Monan Field is located in UKCS block 22/20 and is named after Saint Monan. It is a small turbidite oil and gas field produced under natural depletion using subsea manifolds. Its production fluids are fed into the pipelines connecting Mungo to the CPF. The field is operated by BP in partnership with Nippon Oil, Murphy Oil and Total S.A.
The holdings in Monan are BP = 83.25%, Zennor = 12.65%, JX Nippon = 5%
Machar
The Machar field is located in UKCS block 23/26 and is named after Saint Machar. It is an oil field in a chalk reservoir located on top of a large salt diapir. Originally, the half dozen wells produced under natural depletion, but modifications are being made to include the capacity for gas lift. The field is wholly owned by BP.
Mirren and Madoes
These two were later additions to the ETAP complex. The Mirren field is located in UKCS block 22/25 and is named after Saint Mirren. It is an oil field with a gas cap in the Paleocene structure. The Madoes field is located in UKCS block 22/23 and is named after Saint Madoes. It is a light oil field located in the Eocene rock. Both are subsea tiebacks to the CPF, with the capacity for gas lift in the future to aid production. They are both operated by BP with Nippon Oil, Shell, Esso and AGIP.
The holdings in the Mirren field are as follows: BP = 44.7%, ESSO = 21%, JX Nippon = 13.3%, Shell = 21%.
The holdings in the Madoes field are as follows: ARCO = 31.7%, BP = 6.5%, Esso = 25%, JX Nippon = 12%, Shell = 25%
Heron, Egret and Skua
These fields are high temperature, high pressure oil producing wells. Heron is in UKCS block 22/30a and has a Triassic reservoir. Skua is an extension of the Marnock Field. They are subsea tiebacks to the CPF. All three fields are operated by Shell in partnership with Esso.
Helicopter crash
On 18 February 2009, a Super Puma helicopter ditched in the sea whilst approaching ETAP. All 18 passengers and crew were rescued. Bernard Looney, President of BP's North Sea business based in Aberdeen, credited BP's Project Jigsaw with the safe, quick and efficient recovery of the 16 passengers and 2 crew. Project Jigsaw uses locator beacons on all helicopters, standby vessels and fast rescue craft, connected to a computerised system located in Aberdeen. In this way the locations of all rescue craft and their response times are always known to staff in the BP control centre. In addition, all staff are supplied with wristwatch personal locator beacons (WWPLB) that automatically activate when immersed in water.
See also
Oil industry
Oil fields operated by BP
North Sea oil
References
External links
BP Asset Portfolio (pdf)
Oil fields of the United Kingdom
North Sea energy
Oil platforms
Shell plc
BP | Eastern Trough Area Project | [
"Chemistry",
"Engineering"
] | 1,319 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
11,153,041 | https://en.wikipedia.org/wiki/Saint-Venant%27s%20principle | Saint-Venant's principle, named after Adhémar Jean Claude Barré de Saint-Venant, a French elasticity theorist, may be expressed as follows:
The original statement was published in French by Saint-Venant in 1855. Although this informal statement of the principle is well known among structural and mechanical engineers, more recent mathematical literature gives a rigorous interpretation in the context of partial differential equations. An early such interpretation was made by Richard von Mises in 1945.
Saint-Venant's principle allows elasticians to replace complicated stress distributions or weak boundary conditions with ones that are easier to solve, as long as that boundary is geometrically short. In close analogy with electrostatics, where the product of the distance and the electric field due to the i-th moment of the load (the 0th being the net charge, the 1st the dipole, the 2nd the quadrupole) falls off increasingly rapidly with distance, Saint-Venant's principle states that the high-order moments of a mechanical load (moments of order higher than the torque) decay so fast that they never need to be considered for regions far from the short boundary. Saint-Venant's principle can therefore be regarded as a statement on the asymptotic behavior of the Green's function of a point load.
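The idea can be illustrated numerically through the electrostatic analogy. The MATLAB sketch below (an illustration, not part of the original formulation) compares the potential of a single point charge with that of a self-equilibrated collinear triplet (+1, −2, +1), whose contribution dies off much faster with distance.
% Electrostatic illustration of Saint-Venant-type decay: a net charge
% (monopole) versus a self-equilibrated triplet (+1, -2, +1), whose
% lowest nonvanishing moment is the quadrupole. Units are arbitrary.
a = 0.1;                          % half-width of the triplet
r = [1 2 5 10 20];                % observation distances along the axis
V_mono = 1 ./ r;                  % potential of a unit point charge
V_quad = 1 ./ (r - a) - 2 ./ r + 1 ./ (r + a);   % collinear triplet
disp([r; V_mono; V_quad])         % V_quad falls off roughly as 1/r^3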
See also
Shallow water equations
References
Elasticity (physics)
Principles | Saint-Venant's principle | [
"Physics",
"Materials_science"
] | 277 | [
"Deformation (mechanics)",
"Physical phenomena",
"Physical properties",
"Elasticity (physics)"
] |
11,157,092 | https://en.wikipedia.org/wiki/TCF7L2 | Transcription factor 7-like 2 (T-cell specific, HMG-box), also known as TCF7L2 or TCF4, is a protein acting as a transcription factor that, in humans, is encoded by the TCF7L2 gene. The TCF7L2 gene is located on chromosome 10q25.2–q25.3 and contains 19 exons. As a member of the TCF family, TCF7L2 can form a bipartite transcription factor and influence several biological pathways, including the Wnt signalling pathway.
Single-nucleotide polymorphisms (SNPs) in this gene are especially known to be linked to a higher risk of developing type 2 diabetes, gestational diabetes, multiple neurodevelopmental disorders including schizophrenia and autism spectrum disorder, as well as other diseases. The SNP rs7903146, within the TCF7L2 gene, is, to date, the most significant genetic marker associated with type 2 diabetes risk.
Function
TCF7L2 is a transcription factor influencing the transcription of several genes, thereby exerting a large variety of functions within the cell. It is a member of the TCF family that can form a bipartite transcription factor (β-catenin/TCF) alongside β-catenin. Bipartite transcription factors can have large effects on the Wnt signalling pathway. Stimulation of the Wnt signalling pathway leads to the association of β-catenin with BCL9, translocation to the nucleus, and association with TCF7L2, which in turn results in the activation of Wnt target genes. The activation of the Wnt target genes specifically represses proglucagon synthesis in enteroendocrine cells. The repression of TCF7L2 using the HMG-box repressor HBP1 inhibits Wnt signalling. TCF7L2 is therefore an effector in the Wnt signalling pathway. TCF7L2 plays a role in glucose metabolism in many tissues such as the gut, brain, liver, and skeletal muscle. However, TCF7L2 does not directly regulate glucose metabolism in β-cells, but regulates glucose metabolism in pancreatic and liver tissues. In addition, TCF7L2 directly regulates the expression of multiple transcription factors, axon guidance cues, cell adhesion molecules and ion channels in the thalamus.
The TCF7L2 gene, encoding the TCF7L2 transcription factor, exhibits multiple functions through its polymorphisms and is thus known as a pleiotropic gene. Susceptibility to type 2 diabetes (T2DM) is exhibited in carriers of the TCF7L2 rs7903146C>T and rs290481T>C polymorphisms. The TCF7L2 rs290481T>C polymorphism, however, has shown no significant correlation with susceptibility to gestational diabetes mellitus (GDM) in a Chinese Han population, whereas the T alleles of rs7903146 and rs1799884 increase susceptibility to GDM in the Chinese Han population. The difference in effects of the different polymorphisms of the gene indicates that the gene is indeed pleiotropic.
Structure
The TCF7L2 gene, encoding the TCF7L2 protein, is located on chromosome 10q25.2-q25.3. The gene contains 19 exons. Of the 19 exons, 5 are alternative. The TCF7L2 protein contains 619 amino acids and its molecular mass is 67919 Da. TCF7L2's secondary structure is a helix-turn-helix structure.
Tissue distribution
TCF7L2 is primarily expressed in brain (mainly in the diencephalon, including especially high in the thalamus), liver, intestine and fat cells. It does not primarily operate in the β-cells in the pancreas.
Clinical significance
Type 2 Diabetes
Several single nucleotide polymorphisms within the TCF7L2 gene have been associated with type 2 diabetes. Studies conducted by Ravindranath Duggirala and Michael Stern at The University of Texas Health Science Center at San Antonio were the first to identify strong linkage for type 2 diabetes at a region on chromosome 10 in Mexican Americans. This signal was later refined by Struan Grant and colleagues at DeCODE genetics and isolated to the TCF7L2 gene. The molecular and physiological mechanisms underlying the association of TCF7L2 with type 2 diabetes are under active investigation, but it is likely that TCF7L2 has important biological roles in multiple metabolic tissues, including the pancreas, liver and adipose tissue. TCF7L2 polymorphisms can increase susceptibility to type 2 diabetes by decreasing the production of glucagon-like peptide-1 (GLP-1).
Gestational Diabetes (GDM)
TCF7L2 modulates pancreatic islet β-cell function strongly implicating its significant association with GDM risk. T alleles of rs7903146 and rs1799884 TCF7L2 polymorphisms increase susceptibility to GDM in the Chinese Han population.
Cancer
TCF7L2 plays a role in colorectal cancer. A frameshift mutation of TCF7L2 provided evidence that TCF7L2 is implicated in colorectal cancer. The silencing of TCF7L2 in KM12 colorectal cancer cells provided evidence that TCF7L2 played a role in proliferation and metastasis of cancer cells in colorectal cancer.
Variants of the gene are most likely involved in many other cancer types. TCF7L2 is indirectly involved in prostate cancer through its role in activating the PI3K/Akt pathway, a pathway involved in prostate cancer.
Neurodevelopmental disorders
Single nucleotide polymorphisms (SNPs) in TCF7L2 gene have shown an increase in susceptibility to schizophrenia in Arab, European and Chinese Han populations. In the Chinese Han population, SNP rs12573128 in TCF7L2 is the variant that was associated with an increase in schizophrenia risk. This marker is used as a pre-diagnostic marker for schizophrenia. TCF7L2 has also been reported as a risk gene in autism spectrum disorder and has been linked to it in recent large-scale genetic studies.
The mechanism behind TCF7L2's involvement in the emergence of neurodevelopmental disorders is not fully understood, as there have been few studies characterizing its role in brain development in detail. It was shown that during embryogenesis TCF7L2 is involved in the development of fish-specific habenula asymmetry in Danio rerio, and that the dominant negative TCF7L2 isoform influences cephalic separation in the embryo by inhibiting the posteriorizing effect of the Wnt pathway. It was also shown that in Tcf7l2 knockout mice the number of proliferating cells in cortical neural progenitor cells is reduced. In contrast, no such effect was found in the midbrain.
More recently it was shown that TCF7L2 plays a crucial role in both the embryonic development and postnatal maturation of the thalamus through direct and indirect regulation of many genes previously reported to be important for both processes. In late gestation TCF7L2 regulates the expression of many thalamus-enriched transcription factors (e.g. Foxp2, Rora, Mef2a, Lef1, Prox1), axon guidance molecules (e.g. Epha1, Epha4, Ntng1, Epha8) and cell adhesion molecules (e.g. Cdh6, Cdh8, Cdhr1). Accordingly, a total knockout of Tcf7l2 in mice leads to improper growth of thalamocortical axons, changed anatomy and improper sorting of the cells in the thalamo-habenular region. In the early postnatal period TCF7L2 starts to regulate the expression of many genes necessary for the acquisition of characteristic excitability patterns in the thalamus, mainly ion channels, neurotransmitters and their receptors and synaptic vesicle proteins (e.g. Cacna1g, Kcnc2, Slc17a7, Grin2b), and an early postnatal knockout of Tcf7l2 in mouse thalamus leads to a significant reduction in the number and frequency of action potentials generated by the thalamocortical neurons. The mechanism that leads to the change in TCF7L2 target genes between gestation and the early postnatal period is unknown. It is likely that a perinatal change in the proportion of TCF7L2 isoforms expressed in the thalamus is partially responsible. Abnormalities in the anatomy of the thalamus and the activity of its connections to the cerebral cortex are frequently detected in patients with schizophrenia and autism. Such abnormalities could arise from developmental aberrations in patients with unfavorable mutations of TCF7L2, further strengthening the link between TCF7L2 and neurodevelopmental disorders.
Multiple sclerosis
TCF7L2 is downstream of the WNT/β-catenin pathway. Activation of the WNT/β-catenin pathway has been associated with demyelination in multiple sclerosis. TCF7L2 is upregulated during early remyelination, leading scientists to believe that it is involved in remyelination. TCF7L2 could act either dependently or independently of the WNT/β-catenin pathway.
Model organisms
Model organisms have been used in the study of TCF7L2 function. A conditional knockout mouse line called Tcf7l2tm1a(EUCOMM)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens performed included in-depth immunological phenotyping.
Variations of the protein-encoding gene are found in rats, zebrafish, Drosophila, and budding yeast. Therefore, all of those organisms can be used as model organisms in the study of TCF7L2 function.
Nomenclature
TCF7L2 is the symbol officially approved by the HUGO Gene Nomenclature Committee for the Transcription Factor 7-Like 2 gene.
See also
TCF/LEF family
References
Further reading
External links
TCF7L2 here called TCF4 features on this Wnt pathway web site: Wnt signalling molecules TCFs
Structure determination of TCF7L2: PDB entry 2GL7 and related publication on PubMed
PubMed GeneRIFs (summaries of related scientific publications) -
Weizmann Institute GeneCard for TCF7L2
Transcription factors
Signal transduction
Gene expression | TCF7L2 | [
"Chemistry",
"Biology"
] | 2,311 | [
"Gene expression",
"Signal transduction",
"Molecular genetics",
"Induced stem cells",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Neurochemistry",
"Transcription factors"
] |
11,157,848 | https://en.wikipedia.org/wiki/FLUKA | FLUKA (FLUktuierende KAskade) is a fully integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter.
FLUKA has many applications in particle physics, high energy experimental physics and engineering, shielding, detector and telescope design, cosmic ray studies, dosimetry, medical physics, radiobiology. A recent line of development concerns hadron therapy.
It is the standard tool used in radiation protection studies in the CERN particle accelerator laboratory.
FLUKA software code is used by Epcard, which is a software program for simulating radiation exposure on airline flights.
Comparison with other codes
MCNPX is slower than FLUKA.
Geant4 is slower than FLUKA.
References
Further reading
External links
Official site of FLUKA collaboration
FLUKA on the CERN bulletin
Physics software used to fight cancer
Fortran software
Physics software
Monte Carlo molecular modelling software
Science software for Linux
Linux-only proprietary software
CERN software
Monte Carlo particle physics software
Proprietary commercial software for Linux | FLUKA | [
"Physics"
] | 201 | [
"Physics software",
"Computational physics stubs",
"Computational physics"
] |
9,676,138 | https://en.wikipedia.org/wiki/Fast%20neutron%20therapy | Fast neutron therapy utilizes high energy neutrons typically between 50 and 70 MeV to treat cancer. Most fast neutron therapy beams are produced by reactors, cyclotrons (d+Be) and linear accelerators. Neutron therapy is currently available in Germany, Russia, South Africa and the United States. In the United States, one treatment center is operational, in Seattle, Washington. The Seattle center uses a cyclotron which produces a proton beam impinging upon a beryllium target.
Advantages
Radiation therapy kills cancer cells in two ways depending on the effective energy of the radiative source. The amount of energy deposited as the particles traverse a section of tissue is referred to as the linear energy transfer (LET). X-rays produce low LET radiation, and protons and neutrons produce high LET radiation. Low LET radiation damages cells predominantly through the generation of reactive oxygen species; see free radicals. The neutron is uncharged and damages cells by direct effect on nuclear structures. Malignant tumors tend to have low oxygen levels and thus can be resistant to low LET radiation. This gives an advantage to neutrons in certain situations. One advantage is a generally shorter treatment cycle: to kill the same number of cancerous cells, neutrons require one third the effective dose of protons. Another advantage is the established ability of neutrons to better treat some cancers, such as salivary gland tumors, adenoid cystic carcinomas and certain types of brain tumors, especially high-grade gliomas.
LET
When therapeutic energy X-rays (1 to 25 MeV) interact with cells in human tissue, they do so mainly by Compton interactions, and produce relatively high energy secondary electrons. These high energy electrons deposit their energy at about 1 keV/μm. By comparison, the charged particles produced at a site of a neutron interaction may deliver their energy at a rate of 30–80 keV/μm. The amount of energy deposited as the particles traverse a section of tissue is referred to as the linear energy transfer (LET). X-rays produce low LET radiation, and neutrons produce high LET radiation.
Because the electrons produced from X-rays have high energy and low LET, when they interact with a cell typically only a few ionizations will occur. It is likely then that the low LET radiation will cause only single strand breaks of the DNA helix. Single strand breaks of DNA molecules can be readily repaired, and so the effect on the target cell is not necessarily lethal. By contrast, the high LET charged particles produced from neutron irradiation cause many ionizations as they traverse a cell, and so double-strand breaks of the DNA molecule are possible. DNA repair of double-strand breaks are much more difficult for a cell to repair, and more likely to lead to cell death.
DNA repair mechanisms are quite efficient, and during a cell's lifetime many thousands of single strand DNA breaks will be repaired. A sufficient dose of ionizing radiation, however, delivers so many DNA breaks that it overwhelms the capability of the cellular mechanisms to cope.
Heavy ion therapy (e.g. carbon ions) makes use of the similarly high LET of 12C6+ ions.
Because of the high LET, the relative radiation damage (relative biological effect or RBE) of fast neutrons is 4 times that of X-rays,
meaning 1 rad of fast neutrons is equal to 4 rads of X-rays. The RBE of neutrons is also energy dependent, so neutron beams produced with different energy spectra at different facilities will have different RBE values.
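As a simple worked example of what these numbers imply (illustrative values only, not clinical data), an RBE of 4 converts a neutron absorbed dose into its photon-equivalent dose, and the LET values quoted above translate directly into the energy deposited along a track crossing a cell nucleus:
% Illustrative RBE and LET arithmetic (example numbers, not clinical data).
RBE            = 4;              % relative biological effect of fast neutrons
neutron_dose   = 1;              % rad of fast neutrons
photon_equiv   = RBE * neutron_dose;            % = 4 rad of X-rays
track_length   = 10;             % micrometres across a cell nucleus (assumed)
LET_electron   = 1;              % keV/um, secondary electrons from X-rays
LET_neutron    = 50;             % keV/um, charged particles from neutrons
E_electron_keV = LET_electron * track_length;   % ~10 keV deposited per traversal
E_neutron_keV  = LET_neutron  * track_length;   % ~500 keV deposited per traversal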
Oxygen effect
The presence of oxygen in a cell acts as a radiosensitizer, making the effects of the radiation more damaging. Tumor cells typically have a lower oxygen content than normal tissue. This medical condition is known as tumor hypoxia and therefore the oxygen effect acts to decrease the sensitivity of tumor tissue. The oxygen effect may be quantitatively described by the Oxygen Enhancement Ratio (OER). Generally it is believed that neutron irradiation overcomes the effect of tumor hypoxia, although there are counterarguments.
Clinical uses
The efficacy of neutron beams for use on prostate cancer has been shown through randomized trials.
Fast neutron therapy has been applied successfully against salivary gland tumors.
Adenoid cystic carcinomas have also been treated.
Various other head and neck tumors have been examined.
Side effects
No cancer therapy is without the risk of side effects. Neutron therapy is a very powerful nuclear scalpel that has to be utilized with exquisite care. For instance, some of the most remarkable cures it has been able to achieve are with cancers of the head and neck. Many of these cancers cannot effectively be treated with other therapies. However, neutron damage to nearby vulnerable areas such as the brain and sensory neurons can produce irreversible brain atrophy, blindness, etc. The risk of these side effects can be greatly mitigated by several techniques, but they cannot be eliminated. Moreover, some patients are more susceptible to such side effects than others and this cannot be predicted. The patient ultimately must decide whether the advantages of a possibly lasting cure outweigh the risks of this treatment when faced with an otherwise incurable cancer.
Fast neutron centers
Several centers around the world have used fast neutrons for treating cancer. Due to lack of funding and support, at present only three are active in the USA.
The University of Washington and the Gershenson Radiation Oncology Center operate fast neutron therapy beams and both are equipped with a Multi-Leaf Collimator (MLC) to shape the neutron beam.
University of Washington
The Radiation Oncology Department operates a proton cyclotron that produces fast neutrons by directing 50.5 MeV protons onto a beryllium target.
The UW Cyclotron is equipped with a gantry-mounted delivery system and an MLC to produce shaped fields. The UW Neutron system is referred to as the Clinical Neutron Therapy System (CNTS).
The CNTS is typical of most neutron therapy systems. A large, well shielded building is required to cut down on radiation exposure to the general public and to house the necessary equipment.
A beamline transports the proton beam from the cyclotron to a gantry system. The gantry system contains magnets for deflecting and focusing the proton beam onto the beryllium target. The end of the gantry system is referred to as the head, and contains dosimetry systems to measure the dose, along with the MLC and other beam shaping devices. The advantage of having a beam transport and gantry are that the cyclotron can remain stationary, and the radiation source can be rotated around the patient. Along with varying the orientation of the treatment couch which the patient is positioned on, variation of the gantry position allows radiation to be directed from virtually any angle, allowing sparing of normal tissue and maximum radiation dose to the tumor.
During treatment, only the patient remains inside the treatment room (called a vault) and the therapists will remotely control the treatment, viewing the patient via video cameras. Each delivery of a set neutron beam geometry is referred to as a treatment field or beam. The treatment delivery is planned to deliver the radiation as effectively as possible, and usually results in fields that conform to the shape of the gross target, with any extension to cover microscopic disease.
Karmanos Cancer Center / Wayne State University
The neutron therapy facility at the Gershenson Radiation Oncology Center at Karmanos Cancer Center/Wayne State University (KCC/WSU) in Detroit bears some similarities to the CNTS at the University of Washington, but also has many unique characteristics. This unit was decommissioned in 2011.
While the CNTS accelerates protons, the KCC facility produces its neutron beam by accelerating 48.5 MeV deuterons onto a beryllium target. This method produces a neutron beam with depth dose characteristics roughly similar to those of a 4 MV photon beam. The deuterons are accelerated using a gantry mounted superconducting cyclotron (GMSCC), eliminating the need for extra beam steering magnets and allowing the neutron source to rotate a full 360° around the patient couch.
The KCC facility is also equipped with an MLC beam shaping device, the only other neutron therapy center in the USA besides the CNTS. The MLC at the KCC facility has been supplemented with treatment planning software that allows for the implementation of Intensity Modulated Neutron Radiotherapy (IMNRT), a recent advance in neutron beam therapy which allows for more radiation dose to the targeted tumor site than 3-D neutron therapy.
KCC/WSU has more experience than anyone in the world using neutron therapy for prostate cancer, having treated nearly 1,000 patients during the past 10 years.
Fermilab / Northern Illinois University
The Fermilab neutron therapy center first treated patients in 1976, and since that time has treated over 3,000 patients. In 2004, Northern Illinois University began managing the center. The neutrons produced by the linear accelerator at Fermilab have the highest energies available in the US and are among the highest in the world.
The Fermilab center was decommissioned in 2013.
See also
Boron neutron capture therapy
References
External links
FermiLab Neutron Therapy overview
Neutron
Radiation therapy procedures
Medical physics | Fast neutron therapy | [
"Physics"
] | 1,888 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
9,683,136 | https://en.wikipedia.org/wiki/Effective%20medium%20approximations | In materials science, effective medium approximations (EMA) or effective medium theory (EMT) pertain to analytical or theoretical modeling that describes the macroscopic properties of composite materials. EMAs or EMTs are developed from averaging the multiple values of the constituents that directly make up the composite material. At the constituent level, the values of the materials vary and are inhomogeneous. Precise calculation of the many constituent values is nearly impossible. However, theories have been developed that can produce acceptable approximations which in turn describe useful parameters including the effective permittivity and permeability of the materials as a whole. In this sense, effective medium approximations are descriptions of a medium (composite material) based on the properties and the relative fractions of its components and are derived from calculations, and effective medium theory. There are two widely used formulae.
Effective permittivity and permeability are averaged dielectric and magnetic characteristics of a microinhomogeneous medium. Both were derived in the quasi-static approximation, in which the electric field inside a mixture particle may be considered homogeneous. Hence these formulae cannot describe the particle size effect. Many attempts have been undertaken to improve these formulae.
Applications
There are many different effective medium approximations, each of them being more or less accurate in distinct conditions. Nevertheless, they all assume that the macroscopic system is homogeneous and, typical of all mean field theories, they fail to predict the properties of a multiphase medium close to the percolation threshold due to the absence of long-range correlations or critical fluctuations in the theory.
The properties under consideration are usually the conductivity or the dielectric constant of the medium. These parameters are interchangeable in the formulas in a whole range of models due to the wide applicability of the Laplace equation. The problems that fall outside of this class are mainly in the field of elasticity and hydrodynamics, due to the higher order tensorial character of the effective medium constants.
EMAs can be discrete models, such as applied to resistor networks, or continuum theories as applied to elasticity or viscosity. However, most of the current theories have difficulty in describing percolating systems. Indeed, among the numerous effective medium approximations, only Bruggeman's symmetrical theory is able to predict a threshold. This characteristic feature of the latter theory puts it in the same category as other mean field theories of critical phenomena.
Bruggeman's model
For a mixture of two materials with permittivities and with corresponding volume fractions and , D.A.G. Bruggeman proposed a formula of the following form:
Here the positive sign before the square root must be altered to a negative sign in some cases in order to get the correct imaginary part of effective complex permittivity which is related with electromagnetic wave attenuation. The formula is symmetric with respect to swapping the 'd' and 'm' roles. This formula is based on the equality
where is the jump of electric displacement flux all over the integration surface, is the component of microscopic electric field normal to the integration surface, is the local relative complex permittivity which takes the value inside the picked metal particle, the value inside the picked dielectric particle and the value outside the picked particle, is the normal component of the macroscopic electric field. Formula (4) comes out of Maxwell's equality . Thus only one picked particle is considered in Bruggeman's approach. The interaction with all the other particles is taken into account only in a mean field approximation described by . Formula (3) gives a reasonable resonant curve for plasmon excitations in metal nanoparticles if their size is 10 nm or smaller. However, it is unable to describe the size dependence for the resonant frequency of plasmon excitations that are observed in experiments
Formulas
Without any loss of generality, we shall consider the study of the effective conductivity (which can be either dc or ac) for a system made up of spherical multicomponent inclusions with different arbitrary conductivities. Then the Bruggeman formula takes the form:
Circular and spherical inclusions
In a system of Euclidean spatial dimension that has an arbitrary number of components, the sum is made over all the constituents. and are respectively the fraction and the conductivity of each component, and is the effective conductivity of the medium. (The sum over the 's is unity.)
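As an illustration, the three-dimensional spherical-inclusion form of this rule, assumed here to be sum_i δ_i (σ_i − σ_e)/(σ_i + 2σ_e) = 0 with real positive conductivities, can be solved numerically. The MATLAB sketch below uses illustrative variable names and is not code taken from the references.
% Bruggeman effective conductivity for spherical inclusions in 3-D,
% assuming the standard form sum_i f_i*(s_i - s_e)/(s_i + 2*s_e) = 0.
% Real, positive conductivities only; the fractions f must sum to one.
function sigma_e = bruggemanConductivity(f, sigma)
    g = @(s) sum(f .* (sigma - s) ./ (sigma + 2*s));
    % the root lies between the smallest and largest constituent values
    sigma_e = fzero(g, [min(sigma), max(sigma)]);
end
% Example: 30% of a good conductor (100 S/m) in a poor one (1 S/m)
% s_eff = bruggemanConductivity([0.3 0.7], [100 1]);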
Elliptical and ellipsoidal inclusions
This is a generalization of Eq. (1) to a biphasic system with ellipsoidal inclusions of conductivity into a matrix of conductivity . The fraction of inclusions is and the system is dimensional. For randomly oriented inclusions,
where the 's denote the appropriate doublet/triplet of depolarization factors, which is governed by the ratios between the axes of the ellipse/ellipsoid. For example, the depolarization factors are {1/2, 1/2} in the case of a circle and {1/3, 1/3, 1/3} in the case of a sphere. (The sum over the 's is unity.)
The most general case to which the Bruggeman approach has been applied involves bianisotropic ellipsoidal inclusions.
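The depolarization factors of an arbitrary ellipsoid can be evaluated numerically from the textbook integral L_i = (a1 a2 a3 / 2) ∫ ds / ((s + a_i^2) sqrt((s + a1^2)(s + a2^2)(s + a3^2))), taken from 0 to infinity. The MATLAB sketch below is an illustration based on this standard expression (not code from the references); it recovers 1/3 for each axis of a sphere.
% Depolarization factors of an ellipsoid with semi-axes a(1), a(2), a(3),
% using the standard elliptic-integral expression. The factors sum to one.
function L = depolarizationFactors(a)
    L = zeros(1, 3);
    for i = 1:3
        f = @(s) 1 ./ ((s + a(i)^2) .* ...
            sqrt((s + a(1)^2) .* (s + a(2)^2) .* (s + a(3)^2)));
        L(i) = prod(a) / 2 * integral(f, 0, Inf);
    end
end
% Example: a sphere gives [1/3 1/3 1/3]; a long needle along axis 1
% gives a small L(1) with L(2) and L(3) close to 1/2.
% L = depolarizationFactors([1 1 1]);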
Derivation
The figure illustrates a two-component medium. Consider the cross-hatched volume of conductivity , take it as a sphere of volume and assume it is embedded in a uniform medium with an effective conductivity . If the electric field far from the inclusion is then elementary considerations lead to a dipole moment associated with the volume
This polarization produces a deviation from . If the average deviation is to vanish, the total polarization summed over the two types of inclusion must vanish. Thus
where and are respectively the volume fraction of material 1 and 2. This can be easily extended to a system of dimension that has an arbitrary number of components. All cases can be combined to yield Eq. (1).
Eq. (1) can also be obtained by requiring the deviation in current to vanish.
It has been derived here from the assumption that the inclusions are spherical and it can be modified for shapes with other depolarization factors; leading to Eq. (2).
A more general derivation applicable to bianisotropic materials is also available.
Modeling of percolating systems
The main approximation is that all the domains are located in an equivalent mean field.
Unfortunately, this is not the case close to the percolation threshold, where the system is governed by the largest cluster of conductors, which is a fractal, and by long-range correlations that are totally absent from Bruggeman's simple formula.
The threshold values are in general not correctly predicted. It is 33% in the EMA, in three dimensions, far from the 16% expected from percolation theory and observed in experiments. However, in two dimensions, the EMA gives a threshold of 50% and has been proven to model percolation relatively well.
Maxwell Garnett equation
In the Maxwell Garnett approximation, the effective medium consists of a matrix medium with and inclusions with . Maxwell Garnett was the son of physicist William Garnett, and was named after Garnett's friend, James Clerk Maxwell. He proposed his formula to explain colored pictures that are observed in glasses doped with metal nanoparticles. His formula has a form
where is effective relative complex permittivity of the mixture, is relative complex permittivity of the background medium containing small spherical inclusions of relative permittivity with volume fraction of . This formula is based on the equality
where is the absolute permittivity of free space and is electric dipole moment of a single inclusion induced by the external electric field . However this equality is good only for homogeneous medium and . Moreover, the formula (1) ignores the interaction between single inclusions. Because of these circumstances, formula (1) gives too narrow and too high resonant curve for plasmon excitations in metal nanoparticles of the mixture.
Formula
The Maxwell Garnett equation reads:
where is the effective dielectric constant of the medium, of the inclusions, and of the matrix; is the volume fraction of the inclusions.
The Maxwell Garnett equation is solved by:
so long as the denominator does not vanish. A simple MATLAB calculator using this formula is as follows.
% This simple MATLAB calculator computes the effective dielectric
% constant of a mixture of an inclusion material in a base medium
% according to the Maxwell Garnett theory
% INPUTS:
% eps_base: dielectric constant of base material;
% eps_incl: dielectric constant of inclusion material;
% vol_incl: volume portion of inclusion material;
% OUTPUT:
% eps_mean: effective dielectric constant of the mixture.
function eps_mean = MaxwellGarnettFormula(eps_base, eps_incl, vol_incl)
small_number_cutoff = 1e-6;
if vol_incl < 0 || vol_incl > 1
disp('WARNING: volume portion of inclusion material is out of range!');
end
factor_up = 2 * (1 - vol_incl) * eps_base + (1 + 2 * vol_incl) * eps_incl;
factor_down = (2 + vol_incl) * eps_base + (1 - vol_incl) * eps_incl;
if abs(factor_down) < small_number_cutoff
disp('WARNING: the effective medium is singular!');
eps_mean = 0;
else
eps_mean = eps_base * factor_up / factor_down;
end
end
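For example, with illustrative input values (a glass-like host with dielectric constant 2.25 containing a 5% volume fraction of inclusions with dielectric constant 5.0), the function returns approximately 2.35:
% Example call with illustrative values; returns approximately 2.35
eps_mix = MaxwellGarnettFormula(2.25, 5.0, 0.05);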
Derivation
For the derivation of the Maxwell Garnett equation we start with an array of polarizable particles. By using the Lorentz local field concept, we obtain the Clausius-Mossotti relation:
Where is the number of particles per unit volume. By using elementary electrostatics, we get for a spherical inclusion with dielectric constant and a radius a polarisability :
If we combine this with the Clausius-Mossotti equation, we get:
Where is the effective dielectric constant of the medium, of the inclusions; is the volume fraction of the inclusions.
As the model of Maxwell Garnett is a composition of a matrix medium with inclusions we enhance the equation:
Validity
In general terms, the Maxwell Garnett EMA is expected to be valid at low volume fractions , since it is assumed that the domains are spatially separated and electrostatic interaction between the chosen inclusions and all other neighbouring inclusions is neglected. The Maxwell Garnett formula, in contrast to the Bruggeman formula, ceases to be correct when the inclusions become resonant. In the case of plasmon resonance, the Maxwell Garnett formula is correct only at volume fractions of the inclusions . The applicability of the effective medium approximation to dielectric multilayers and metal-dielectric multilayers has been studied, showing that there are certain cases where the effective medium approximation does not hold and one needs to be cautious in applying the theory.
Generalization of the Maxwell Garnett Equation to describe the nanoparticle size distribution
The Maxwell Garnett equation describes the optical properties of nanocomposites consisting of a collection of perfectly spherical nanoparticles. All these nanoparticles must have the same size. However, due to confinement effects, the optical properties can be influenced by the nanoparticle size distribution. As shown by Battie et al., the Maxwell Garnett equation can be generalized to take this distribution into account.
and are the nanoparticle radius and size distribution, respectively. and are the mean radius and the volume fraction of the nanoparticles, respectively. is the first electric Mie coefficient.
This equation reveals that the classical Maxwell Garnett equation gives a false estimation of the nanoparticle volume fraction when the size distribution cannot be neglected.
Generalization to include shape distribution of nanoparticles
The Maxwell Garnett equation only describes the optical properties of a collection of perfectly spherical nanoparticles. However, the optical properties of nanocomposites are sensitive to the nanoparticle shape distribution. To overcome this limit, Y. Battie et al. developed the shape distributed effective medium theory (SDEMT). This effective medium theory makes it possible to calculate the effective dielectric function of a nanocomposite consisting of a collection of ellipsoidal nanoparticles distributed in shape.
with
The depolarization factors () only depend on the shape of the nanoparticles. is the distribution of depolarization factors. f is the volume fraction of the nanoparticles.
The SDEMT theory was used to extract the shape distribution of nanoparticles from absorption or ellipsometric spectra.
Formula describing size effect
A new formula describing the size effect was proposed. This formula has the form
where is the nanoparticle radius and is the wave number. It is supposed here that the time dependence of the electromagnetic field is given by the factor . In this paper, Bruggeman's approach was used, but the electromagnetic field for the electric-dipole oscillation mode inside the picked particle was computed without applying the quasi-static approximation. Thus the function is due to the field nonuniformity inside the picked particle. In the quasi-static region (, i.e. for Ag), this function becomes constant and formula (5) becomes identical to Bruggeman's formula.
Effective permeability formula
The formula for the effective permeability of mixtures has the form
Here is effective relative complex permeability of the mixture, is relative complex permeability of the background medium containing small spherical inclusions of relative permeability with volume fraction of . This formula was derived in dipole approximation. Magnetic octupole mode and all other magnetic oscillation modes of odd orders were neglected here. When and this formula has a simple form
Effective medium theory for resistor networks
For a network consisting of a high density of random resistors, an exact solution for each individual element may be impractical or impossible. In such case, a random resistor network can be considered as a two-dimensional graph and the effective resistance can be modelled in terms of graph measures and geometrical properties of networks.
Assuming that the edge length is much less than the electrode spacing and that the edges are uniformly distributed, the potential can be considered to drop uniformly from one electrode to the other.
Sheet resistance of such a random network () can be written in terms of edge (wire) density (), resistivity (), width () and thickness () of edges (wires) as:
See also
Constitutive equation
Percolation threshold
References
Further reading
Condensed matter physics
Physical chemistry | Effective medium approximations | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,067 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Physical chemistry",
"Matter"
] |
13,706,553 | https://en.wikipedia.org/wiki/Quantitative%20proteomics | Quantitative proteomics is an analytical chemistry technique for determining the amount of proteins in a sample. The methods for protein identification are identical to those used in general (i.e. qualitative) proteomics, but include quantification as an additional dimension. Rather than just providing lists of proteins identified in a certain sample, quantitative proteomics yields information about the physiological differences between two biological samples. For example, this approach can be used to compare samples from healthy and diseased patients. Quantitative proteomics is mainly performed by two-dimensional gel electrophoresis (2-DE), preparative native PAGE, or mass spectrometry (MS). However, a recently developed method of quantitative dot blot (QDB) analysis is able to measure both the absolute and relative quantities of individual proteins in a sample in a high-throughput format, thus opening a new direction for proteomic research. In contrast to 2-DE, which requires MS for the downstream protein identification, MS technology can both identify and quantify the changes.
Quantification using spectrophotometry
The concentration of a certain protein in a sample may be determined using spectrophotometric procedures. The concentration of a protein can be determined by measuring the OD at 280 nm on a spectrophotometer, which can be used with a standard curve assay to quantify the presence of tryptophan, tyrosine, and phenylalanine. However, this method is not the most accurate because the composition of proteins can vary greatly and this method would not be able to quantify proteins that do not contain the aforementioned amino acids. This method is also inaccurate due to the possibility of nucleic acid contamination. Other more accurate spectrophotometric procedures for protein quantification include the Biuret, Lowry, BCA, and Bradford methods. An alternative method for label free protein quantification in clear liquid is cuvette-based SPR technique, that simultaneously measures the refractive index ranging 1.0 to 1.6 nD and concentration of the protein ranging from 0.5 μL to 2 mL in volume. This system consists of the calibrated optical filter with very high angular resolution and the interaction of light with this crystal forms a resonance at a wavelength which correlates to concentration and refractive index near the crystal.
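All of these colorimetric and absorbance-based assays share the same quantification step: readings from protein standards of known concentration are fitted to a standard curve, and the reading of the unknown sample is interpolated on that curve. A generic MATLAB sketch of this step (with made-up absorbance values, assuming a linear working range) is given below.
% Generic standard-curve quantification (illustrative numbers only).
% A linear fit is assumed, as in a typical Bradford/BCA working range.
std_conc = [0 0.25 0.5 1.0 2.0];          % mg/mL, known standards
std_abs  = [0.02 0.15 0.28 0.55 1.08];    % measured absorbance (made up)
p        = polyfit(std_conc, std_abs, 1); % linear standard curve
sample_abs  = 0.40;                       % absorbance of unknown sample
sample_conc = (sample_abs - p(2)) / p(1); % back-calculated concentration, mg/mL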
Quantification using two dimensional electrophoresis
Two-dimensional gel electrophoresis (2-DE) represents one of the main technologies for quantitative proteomics, with advantages and disadvantages. 2-DE provides information about the protein quantity, charge, and mass of the intact protein. It has limitations for the analysis of proteins larger than 150 kDa or smaller than 5 kDa, and of low-solubility proteins. Quantitative MS has higher sensitivity but does not provide information about the intact protein.
Classical 2-DE based on post-electrophoretic dye staining has limitations: at least three technical replicates are required to verify the reproducibility. Difference gel electrophoresis (DIGE), which uses fluorescence-based labeling of the proteins prior to separation, has increased the precision of quantification as well as the sensitivity of protein detection. Therefore, DIGE represents the current main approach for the 2-DE based study of proteomes.
Quantification using mass spectrometry
Mass spectrometry (MS) represents one of the main technologies for quantitative proteomics with advantages and disadvantages. Quantitative MS has higher sensitivity but can provide only limited information about the intact protein. Quantitative MS has been used for both discovery and targeted proteomic analysis to understand global proteomic dynamics in populations of cells (bulk analysis) or in individual cells (single-cell analysis).
Early approaches developed in the 1990s applied isotope-coded affinity tags (ICAT), which uses two reagents with heavy and light isotopes, respectively, and a biotin affinity tag to modify cysteine containing peptides. This technology has been used to label whole Saccharomyces cerevisiae cells, and, in conjunction with mass spectrometry, helped lay the foundation of quantitative proteomics. This approach has been superseded by isobaric mass tags, which are also used for single-cell protein analysis.
Relative and absolute quantification
Mass spectrometry is not inherently quantitative because of differences in the ionization efficiency and/or detectability of the many peptides in a given sample, which has sparked the development of methods to determine relative and absolute abundance of proteins in samples. The intensity of a peak in a mass spectrum is not a good indicator of the amount of the analyte in the sample, although differences in peak intensity of the same analyte between multiple samples accurately reflect relative differences in its abundance.
Stable isotope labeling in mass spectrometry
Stable isotope labels
An approach for relative quantification that is more costly and time-consuming, though less sensitive to experimental bias than label-free quantification, entails labeling the samples with stable isotope labels that allow the mass spectrometer to distinguish between identical proteins in separate samples. One type of label, isotopic tags, consists of stable isotopes incorporated into protein crosslinkers that cause a known mass shift of the labeled protein or peptide in the mass spectrum. Differentially labeled samples are combined and analyzed together, and the differences in the peak intensities of the isotope pairs accurately reflect differences in the abundance of the corresponding proteins.
Absolute proteomic quantification using isotopic peptides entails spiking known concentrations of synthetic, heavy isotopologues of target peptides into an experimental sample and then performing LC-MS/MS. As with relative quantification using isotopic labels, peptides of equal chemistry co-elute and are analyzed by MS simultaneously. Unlike relative quantification, though, the abundance of the target peptide in the experimental sample is compared to that of the heavy peptide and back-calculated to the initial concentration of the standard using a pre-determined standard curve to yield the absolute quantification of the target peptide.
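A minimal MATLAB sketch of this back-calculation is shown below; all values are hypothetical. Calibrators containing known amounts of the light peptide plus the same heavy spike define a standard curve that maps measured light/heavy ratios back to absolute amounts.
% Absolute quantification against a spiked heavy-isotope standard
% (hypothetical values). The standard curve maps measured light/heavy
% ratios to known analyte amounts, correcting for response differences.
ratio_curve  = [0.1 0.5 1.0 2.0 5.0];   % light/heavy ratios of calibrators
amount_curve = [10 50 100 200 500];     % known light-peptide amounts, fmol
sample_ratio = 1.8;                     % light/heavy ratio measured in sample
sample_fmol  = interp1(ratio_curve, amount_curve, sample_ratio);  % ~180 fmol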
Relative quantification methods include isotope-coded affinity tags (ICAT), isobaric labeling (tandem mass tags (TMT) and isobaric tags for relative and absolute quantification (iTRAQ)), label-free quantification, metal-coded tags (MeCAT), N-terminal labelling, stable isotope labeling with amino acids in cell culture (SILAC), and terminal amine isotopic labeling of substrates (TAILS). A mathematically rigorous approach that integrates peptide intensities and peptide-measurement agreement into confidence intervals for protein ratios has also emerged.
Absolute quantification is performed using selected reaction monitoring (SRM).
Metal-coded tags
The metal-coded tags (MeCAT) method is based on chemical labeling, but rather than using stable isotopes, different lanthanide ions in macrocyclic complexes are used. The quantitative information comes from inductively coupled plasma MS measurements of the labeled peptides. MeCAT can be used in combination with elemental mass spectrometry (ICP-MS), allowing first-time absolute quantification of the metal bound by the MeCAT reagent to a protein or biomolecule. Thus it is possible to determine the absolute amount of protein down to the attomole range using external calibration by a metal standard solution. It is compatible with protein separation by 2D electrophoresis and chromatography in multiplex experiments. Protein identification and relative quantification can be performed by MALDI-MS/MS and ESI-MS/MS.
Mass spectrometers have a limited capacity to detect low-abundance peptides in samples with a high dynamic range. The limited duty cycle of mass spectrometers also restricts the collision rate, resulting in undersampling. Sample preparation protocols also represent sources of experimental bias.
Stable isotope labeling with amino acids in cell culture
Stable isotope labeling with amino acids in cell culture (SILAC) is a method that involves metabolic incorporation of "heavy" 13C- or 15N-labeled amino acids into proteins followed by MS analysis. SILAC requires growing cells in specialized media supplemented with light or heavy forms of the essential amino acids lysine or arginine. One cell population is grown in media containing light amino acids while the experimental condition is grown in the presence of heavy amino acids. The heavy and light amino acids are incorporated into proteins through cellular protein synthesis. Following cell lysis, equal amounts of protein from both conditions are combined and subjected to proteolytic digestion. Arginine and lysine were chosen because trypsin, the predominant enzyme used to generate proteotypic peptides for MS analysis, cleaves at the C-terminus of lysine and arginine. Following digestion with trypsin, all the tryptic peptides from cells grown in SILAC media have at least one labeled amino acid, resulting in a constant mass shift of the labeled sample relative to the non-labeled one. Because the peptides containing heavy and light amino acids are chemically identical, they co-elute during reverse-phase column fractionation and are detected simultaneously during MS analysis. The relative protein abundance is determined by the relative peak intensities of the isotopically distinct peptides.
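The following is an illustrative sketch of how a protein-level SILAC ratio can be summarized from the peak intensities of co-eluting heavy/light peptide pairs; the intensities and the use of a median summary are assumptions for illustration only.

```python
# Illustrative SILAC-style relative quantification (hypothetical intensities).
# Each peptide is observed as a co-eluting light/heavy pair; the protein-level
# ratio is summarized here as the median of the peptide heavy/light ratios.
from statistics import median

peptide_pairs = [
    # (light intensity, heavy intensity) for peptides mapping to one protein
    (2.0e6, 4.1e6),
    (8.5e5, 1.6e6),
    (3.3e6, 6.9e6),
]

ratios = [heavy / light for light, heavy in peptide_pairs]
protein_ratio = median(ratios)   # ~2.0, i.e. the protein is ~2-fold higher in the "heavy" condition
print(f"heavy/light protein ratio: {protein_ratio:.2f}")
```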
Traditionally the level of multiplexing in SILAC was limited due to the number of SILAC isotopes available. Recently, a new technique called NeuCode SILAC, has augmented the level of multiplexing achievable with metabolic labeling (up to 4). The NeuCode amino acid method is similar to SILAC but differs in that the labeling only utilizes heavy amino acids. The use of only heavy amino acids eliminates the need for 100% incorporation of amino acids needed for SILAC. The increased multiplexing capability of NeuCode amino acids is from the use of mass defects from extra neutrons in the stable isotopes. These small mass differences however need to be resolved on high resolution mass spectrometers.
One of the main benefits of SILAC is that the level of quantitation bias from processing errors is low, because heavy and light samples are combined before sample preparation for MS analysis. SILAC and NeuCode SILAC are excellent techniques for detecting small changes in protein levels or post-translational modifications between experimental groups.
Isobaric labeling
Isobaric mass tags (tandem mass tags) are tags that have identical mass and chemical properties, allowing heavy and light isotopologues to co-elute together. All mass tags consist of a mass reporter that has a unique number of 13C substitutions, a mass normalizer that has a unique mass balancing the tag so that all tags are equal in total mass, and a reactive moiety that crosslinks to the peptides. These tags are designed to cleave at a specific linker region upon high-energy CID, yielding different-sized tags that are then quantitated by LC-MS/MS. Protein or peptide samples prepared from cells, tissues or biological fluids are labeled in parallel with the isobaric mass tags and combined for analysis. Protein quantitation is accomplished by comparing the intensities of the reporter ions in the MS/MS spectra. Three types of tandem mass tags are available with different reactivity: (1) a reactive NHS ester which provides high-efficiency, amine-specific labeling (TMTduplex, TMTsixplex, TMT10plex and TMT11plex), (2) a reactive iodoacetyl functional group which labels sulfhydryl (-SH) groups (iodoTMT) and (3) a reactive alkoxyamine functional group which provides covalent labeling of carbonyl-containing compounds (aminoxyTMT).
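As an illustration of reporter-ion-based quantitation (hypothetical intensities, not the output of any specific search engine), reporter intensities can be summed per channel across peptide-spectrum matches and normalized before comparing conditions:

```python
# Illustrative TMT/iTRAQ-style quantification from reporter ion intensities (hypothetical data).
# Each MS/MS spectrum of a labelled peptide yields one reporter intensity per sample channel;
# channel sums are normalized so that differences in total loading do not bias the comparison.

psm_reporter_intensities = [
    # one row per peptide-spectrum match, one column per multiplexed sample channel
    [1.2e5, 1.1e5, 2.4e5, 2.3e5],
    [8.0e4, 7.6e4, 1.5e5, 1.6e5],
]

n_channels = len(psm_reporter_intensities[0])
channel_totals = [sum(row[i] for row in psm_reporter_intensities) for i in range(n_channels)]
grand_mean = sum(channel_totals) / n_channels
normalized = [total / grand_mean for total in channel_totals]   # relative abundance per channel
print(normalized)   # channels 3-4 come out roughly 2x channels 1-2 in this toy example
```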
A key benefit of isobaric labeling over other quantification techniques (e.g. SILAC, ICAT, label-free) is its increased multiplexing capability and thus increased throughput potential. The ability to combine and analyze several samples simultaneously in one LC-MS run eliminates the need to analyze multiple data sets and eliminates run-to-run variation. Multiplexing reduces sample processing variability, improves specificity by quantifying the proteins from each condition simultaneously, and reduces turnaround time for multiple samples. The currently available isobaric chemical tags facilitate the simultaneous analysis of up to 11 experimental samples.
Label-free quantification in mass spectrometry
One approach for relative quantification is to analyze samples separately by MS and compare the spectra to determine peptide abundance in one sample relative to another, as in label-free strategies. It is generally accepted that, while label-free quantification is the least accurate of the quantification paradigms, it is also inexpensive and reliable when put under heavy statistical validation. There are two different methods of quantification in label-free quantitative proteomics: AUC (area under the curve) and spectral counting.
Methods of label-free quantification
AUC is a method by which, for a given peptide spectrum in an LC-MS run, the area under the spectral peak is calculated. AUC peak measurements are linearly proportional to the concentration of protein in a given analyte mixture. Quantification is achieved through ion counts, the measurement of the amount of an ion at a specific retention time. Discretion is required for the standardization of the raw data. High-resolution spectrometers can alleviate problems that arise when trying to make data reproducible; however, much of the work of normalizing the data can be done through software such as OpenMS and MassView.
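A minimal sketch of AUC-based quantification using the trapezoidal rule on a hypothetical extracted-ion chromatogram (the retention times and ion counts are invented for illustration):

```python
# Minimal sketch of AUC-based label-free quantification (hypothetical extracted-ion chromatogram).
# The area under a peptide's chromatographic peak is integrated with the trapezoidal rule;
# the same peptide's areas are then compared across runs to estimate relative abundance.

def peak_area(retention_times, intensities):
    """Trapezoidal integration of an extracted-ion chromatogram."""
    area = 0.0
    for i in range(1, len(retention_times)):
        dt = retention_times[i] - retention_times[i - 1]
        area += 0.5 * (intensities[i] + intensities[i - 1]) * dt
    return area

rt = [30.0, 30.1, 30.2, 30.3, 30.4, 30.5]            # minutes
sample_a = [0, 1.0e5, 4.0e5, 3.8e5, 0.9e5, 0]         # ion counts in run A
sample_b = [0, 0.5e5, 2.1e5, 1.9e5, 0.5e5, 0]         # ion counts in run B

fold_change = peak_area(rt, sample_a) / peak_area(rt, sample_b)
print(f"A vs B fold change: {fold_change:.2f}")        # ~1.9 for these toy numbers
```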
Spectral counting involves counting the spectra of an identified protein and then standardizing using some form of normalization. Typically this is done with an abundant peptide mass selection (MS) that is then fragmented, and the MS/MS spectra are counted. Multiple samplings of the protein peak are required for accurate estimation of the protein abundance because of the complex physiochemical nature of peptides. Thus, optimization of MS/MS experiments is a constant concern. One alternative to get around these problems is to use a data-independent technique that cycles between high and low collision energies. Thus, a large survey of all possible precursor and product ions is collected. This is limited, however, by the mass spectrometry software's ability to recognize and match peptide patterns of associations between the precursor and product ions.
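A simple illustration of spectral counting with one common normalization, the normalized spectral abundance factor (NSAF); the counts and protein lengths below are hypothetical, and NSAF is only one of several normalization schemes in use.

```python
# Sketch of spectral counting with NSAF normalization (hypothetical counts).
# Each protein's spectral count is divided by its length, then scaled so the values
# sum to 1 within a run, making counts comparable across proteins and samples.

proteins = {
    # protein id: (spectral count, protein length in residues)
    "P1": (120, 450),
    "P2": (35, 300),
    "P3": (80, 1200),
}

saf = {pid: count / length for pid, (count, length) in proteins.items()}
total = sum(saf.values())
nsaf = {pid: value / total for pid, value in saf.items()}
print(nsaf)   # e.g. {'P1': 0.59, 'P2': 0.26, 'P3': 0.15} (rounded)
```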
Applications
Biomedical applications
Quantitative proteomics has distinct applications in the medical field, especially in drug and biomarker discovery. LC-MS/MS techniques have started to overtake more traditional methods such as the western blot and ELISA, both because labeling and separating individual proteins with those methods is cumbersome and because mass spectrometry offers a more global analysis of protein quantification. Mass spectrometry methods are more sensitive to differences in protein structure, such as post-translational modifications, and thus can quantify differing modifications to proteins. Quantitative proteomics can circumvent these issues, needing only sequence information to be performed. It can be applied on a global proteome level, or to binding partners specifically isolated in pull-down or affinity purification experiments. Disadvantages in sensitivity and analysis time, however, must be kept in consideration.
Drug discovery
Quantitative proteomics has its largest applications in protein target identification, protein target validation, and toxicity profiling in drug discovery. It has been used to investigate protein-protein interactions and, more recently, the interactions of small drug-like molecules with proteins, a field of study called chemoproteomics. Thus, it has shown great promise in monitoring side-effects of small drug-like molecules and understanding the efficacy and therapeutic effect of one drug target over another. One of the more typical methodologies for absolute protein quantification in drug discovery is the use of LC-MS/MS with multiple reaction monitoring (MRM). The mass spectrometry is typically performed on a triple quadrupole MS.
See also
Pierce Protein Assay
Chemoproteomics
Protein mass spectrometry
References
Mass spectrometry
Proteomics | Quantitative proteomics | [
"Physics",
"Chemistry"
] | 3,305 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
13,713,557 | https://en.wikipedia.org/wiki/Auxostat | An auxostat is a continuous culture device which, while in operation, uses feedback from a measurement taken on the growth chamber to control the media flow rate, maintaining the measurement at a constant value.
Auxo was the Greek goddess of spring growth, and as a prefix the name refers to nutrients. However, the most typical auxostats are pH-auxostats, with feedback between the growth rate and a pH meter.
Other auxostats may measure oxygen tension, ethanol concentrations, and sugar concentrations.
References
Bioreactors | Auxostat | [
"Chemistry",
"Engineering",
"Biology"
] | 103 | [
"Bioreactors",
"Biological engineering",
"Bioengineering stubs",
"Chemical reactors",
"Biotechnology stubs",
"Biochemical engineering",
"Microbiology equipment"
] |
13,714,607 | https://en.wikipedia.org/wiki/Analysis%20of%20molecular%20variance | Analysis of molecular variance (AMOVA) is a statistical model for the molecular variation in a single species, typically biological. The name and model are inspired by ANOVA. The method was developed by Laurent Excoffier, Peter Smouse and Joseph Quattro at Rutgers University in 1992.
Since developing AMOVA, Excoffier has written a program for running such analyses. This program, which runs on Windows, is called Arlequin and is freely available on Excoffier's website. There are also implementations in the R language in the ade4 and pegas packages, both available on CRAN (Comprehensive R Archive Network). Another implementation is in Info-Gen, which also runs on Windows. The student version is free and fully functional. The native language of the application is Spanish, but an English version is also available.
An additional free statistical package, GenAlEx, is geared toward teaching as well as research and allows for complex genetic analyses to be employed and compared within the commonly used Microsoft Excel interface. This software allows for calculation of analyses such as AMOVA, as well as comparisons with other types of closely related statistics including F-statistics and Shannon's index, and more.
References
External links
Arlequin 3 website
Online AMOVA Calculation for Y-STR Data
Info-Gen website
GenAIEx website
Population genetics
Molecular biology
Analysis of variance | Analysis of molecular variance | [
"Chemistry",
"Biology"
] | 280 | [
"Biochemistry",
"Molecular biology"
] |
16,446,139 | https://en.wikipedia.org/wiki/Probabilistic%20roadmap | The probabilistic roadmap planner is a motion planning algorithm in robotics, which solves the problem of determining a path between a starting configuration of the robot and a goal configuration while avoiding collisions.
The basic idea behind PRM is to take random samples from the configuration space of the robot, testing them for whether they are in the free space, and use a local planner to attempt to connect these configurations to other nearby configurations. The starting and goal configurations are added in, and a graph search algorithm is applied to the resulting graph to determine a path between the starting and goal configurations.
The probabilistic roadmap planner consists of two phases: a construction and a query phase. In the construction phase, a roadmap (graph) is built, approximating the motions that can be made in the environment. First, a random configuration is created. Then, it is connected to some neighbors, typically either the k nearest neighbors or all neighbors less than some predetermined distance away. Configurations and connections are added to the graph until the roadmap is dense enough. In the query phase, the start and goal configurations are connected to the graph, and the path is obtained by a Dijkstra shortest-path query.
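The following is a minimal, illustrative 2-D sketch of the two phases (not a production planner): configurations are sampled in a square workspace, connected to their k nearest neighbours with a straight-line local planner, and queried with Dijkstra's algorithm; the obstacle geometry and all parameters are arbitrary assumptions.

```python
# Minimal, illustrative 2-D PRM sketch (assumed geometry and parameters).
import math, random, heapq

OBSTACLES = [((5.0, 5.0), 2.0)]            # circular obstacles: (centre, radius)
LOW, HIGH, K, N_SAMPLES = 0.0, 10.0, 5, 200

def is_free(p):
    """Collision check for a single configuration (a 2-D point)."""
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def edge_is_free(p, q, step=0.1):
    """Straight-line local planner: sample intermediate points along the segment."""
    n = max(1, int(math.dist(p, q) / step))
    return all(is_free((p[0] + (q[0] - p[0]) * t / n,
                        p[1] + (q[1] - p[1]) * t / n)) for t in range(n + 1))

def build_roadmap():
    """Construction phase: sample free configurations and connect k nearest neighbours."""
    nodes = []
    while len(nodes) < N_SAMPLES:
        p = (random.uniform(LOW, HIGH), random.uniform(LOW, HIGH))
        if is_free(p):
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        for j in sorted(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))[1:K + 1]:
            if edge_is_free(p, nodes[j]):
                d = math.dist(p, nodes[j])
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def query(nodes, edges, start, goal):
    """Query phase: connect start and goal to the roadmap, then run Dijkstra."""
    nodes = nodes + [start, goal]
    edges = {k: list(v) for k, v in edges.items()}
    s, g = len(nodes) - 2, len(nodes) - 1
    edges[s], edges[g] = [], []
    for extra in (s, g):
        for j in sorted(range(s), key=lambda j: math.dist(nodes[extra], nodes[j]))[:K]:
            if edge_is_free(nodes[extra], nodes[j]):
                d = math.dist(nodes[extra], nodes[j])
                edges[extra].append((j, d))
                edges[j].append((extra, d))
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:                                  # Dijkstra's shortest-path search
        d, u = heapq.heappop(pq)
        if u == g:
            path = [u]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return [nodes[i] for i in reversed(path)]
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None                                # no path found in this roadmap

random.seed(0)
roadmap_nodes, roadmap_edges = build_roadmap()
print(query(roadmap_nodes, roadmap_edges, (1.0, 1.0), (9.0, 9.0)))
```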
Given certain relatively weak conditions on the shape of the free space, PRM is provably probabilistically complete, meaning that as the number of sampled points increases without bound, the probability that the algorithm will not find a path if one exists approaches zero. The rate of convergence depends on certain visibility properties of the free space, where visibility is determined by the local planner. Roughly, if each point can "see" a large fraction of the space, and also if a large fraction of each subset of the space can "see" a large fraction of its complement, then the planner will find a path quickly.
The invention of the PRM method is credited to Lydia E. Kavraki. There are many variants on the basic PRM method, some quite sophisticated, that vary the sampling strategy and connection strategy to achieve faster performance. See e.g. for a discussion.
References
Robot control
Automated planning and scheduling
Path planning | Probabilistic roadmap | [
"Engineering"
] | 439 | [
"Robotics engineering",
"Robot control"
] |
16,453,417 | https://en.wikipedia.org/wiki/Melting%20curve%20analysis | Melting curve analysis is an assessment of the dissociation characteristics of double-stranded DNA during heating. As the temperature is raised, the double strand begins to dissociate, leading to a rise in the absorbance intensity (hyperchromicity). The temperature at which 50% of the DNA is denatured is known as the melting temperature. Because every organism has a characteristic melting curve, measuring the melting temperature can also help to identify species.
The information gathered can be used to infer the presence and identity of single-nucleotide polymorphisms (SNPs). This is because G-C base pairs are held together by three hydrogen bonds while A-T base pairs have only two. DNA with mutations from either A or T to either C or G will therefore have a higher melting temperature.
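As a rough illustration of the effect of G-C content, the Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C) in °C, a crude estimate valid only for short oligonucleotides) shows how a single A→G substitution raises the estimated melting temperature; the sequences below are hypothetical.

```python
# Illustrative estimate of how G-C content raises the melting temperature, using the
# simple Wallace rule for short oligonucleotides. Real melting temperatures also depend
# on length, salt concentration and sequence context.

def wallace_tm(seq):
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc            # degrees Celsius

wild_type = "ATGCATTTAGC"             # hypothetical short sequence
mutant    = "ATGCATTTGGC"             # A -> G substitution adds one G-C pair
print(wallace_tm(wild_type), wallace_tm(mutant))   # the mutant melts ~2 degrees higher
```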
The information also gives vital clues to a molecule's mode of interaction with DNA. Molecules such as intercalators slot in between base pairs and interact through pi stacking. This has a stabilizing effect on DNA's structure, which leads to a rise in its melting temperature. Likewise, increasing salt concentrations helps diffuse negative repulsions between the phosphates in the DNA's backbone. This also leads to a rise in the DNA's melting temperature. Conversely, pH can have a negative effect on DNA's stability, which may lead to a lowering of its melting temperature.
Implementation
The energy required to break the base-base hydrogen bonding between two strands of DNA is dependent on their length, GC content and their complementarity. By heating a reaction-mixture that contains double-stranded DNA sequences and measuring dissociation against temperature, these attributes can be inferred.
Originally, strand dissociation was observed using UV absorbance measurements, but techniques based on fluorescence measurements are now the most common approach.
The temperature-dependent dissociation between two DNA-strands can be measured using a DNA-intercalating fluorophore such as SYBR green, EvaGreen or fluorophore-labelled DNA probes. In the case of SYBR green (which fluoresces 1000-fold more intensely while intercalated in the minor groove of two strands of DNA), the dissociation of the DNA during heating is measurable by the large reduction in fluorescence that results. Alternatively, juxtapositioned probes (one featuring a fluorophore and the other, a suitable quencher) can be used to determine the complementarity of the probe to the target sequence.
The graph of the negative first derivative of the melting-curve may make it easier to pin-point the temperature of dissociation (defined as 50% dissociation), by virtue of the peaks thus formed.
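A minimal sketch of this peak-picking step on a hypothetical melting curve: the negative first derivative of fluorescence with respect to temperature is computed and the temperature of its maximum is taken as the melting temperature.

```python
# Sketch of locating Tm from a melting curve (hypothetical fluorescence values):
# compute -dF/dT and take the temperature at its peak.

temperatures = [70, 72, 74, 76, 78, 80, 82, 84, 86, 88]            # degrees Celsius
fluorescence = [1000, 990, 970, 930, 820, 600, 350, 200, 150, 140]  # arbitrary units

neg_dF_dT = []
for i in range(1, len(temperatures)):
    dT = temperatures[i] - temperatures[i - 1]
    neg_dF_dT.append(-(fluorescence[i] - fluorescence[i - 1]) / dT)

peak_index = max(range(len(neg_dF_dT)), key=neg_dF_dT.__getitem__)
tm = (temperatures[peak_index] + temperatures[peak_index + 1]) / 2   # midpoint of the steepest drop
print(f"estimated Tm ~ {tm} C")                                      # ~81 C for these toy numbers
```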
SYBR Green enabled product differentiation in the LightCycler in 1997. Hybridization probes (or FRET probes) were also demonstrated to provide very specific melting curves from the single-stranded (ss) probe-to-amplicon hybrid. Idaho Technology and Roche have done much to popularize this use on the LightCycler instrument.
Applications
Since the late 1990s product analysis via SYBR Green, other double-strand specific dyes, or probe-based melting curve analysis has become nearly ubiquitous. The probe-based technique is sensitive enough to detect single-nucleotide polymorphisms (SNP) and can distinguish between homozygous wildtype, heterozygous and homozygous mutant alleles by virtue of the dissociation patterns produced. Without probes, amplicon melting (melting and analysis of the entire PCR product) was not generally successful at finding single base variants through melting profiles. With higher resolution instruments and advanced dyes, amplicon melting analysis of one base variants is now possible with several commercially available instruments. For example: Applied Biosystems 7500 Fast System and the 7900HT Fast Real-Time PCR System, Idaho Technology's LightScanner (the first plate-based high resolution melting device), Qiagen's Rotor-Gene instruments, and Roche's LightCycler 480 instruments.
Many research and clinical examples exist in the literature that show the use of melting curve analysis to obviate or complement sequencing efforts, and thus reduce costs.
While most quantitative PCR machines have the option of melting curve generation and analysis, the level of analysis and software support varies. High Resolution Melt (known as either Hi-Res Melting, or HRM) is the advancement of this general technology and has begun to offer higher sensitivity for SNP detection within an entire dye-stained amplicon. It is less expensive and simpler in design to develop probeless melting curve systems. However, for genotyping applications, where large volumes of samples must be processed, the cost of development may be less important than the total throughput and ease of interpretation, thus favoring probe-based genotyping methods.
Digital High Resolution Melting (dHRM) is also used in conjunction with digital PCR (dPCR) to improve quantitative power by providing additional information on the melting behavior of the amplified DNA, which can help in distinguishing between different genetic variants and in ensuring the accuracy of the quantification. dHRM is enabled by the use of sensitive DNA-binding dyes and digital PCR instrumentation, which allows for the collection of high-density data points to generate detailed melt profiles. These profiles can be used to identify even subtle differences in nucleic acid sequences, making dHRM a powerful tool for genotyping, mutation scanning, and methylation analysis.
dHRM is an advanced molecular technique used for the analysis of genetic variations, such as single nucleotide polymorphisms (SNPs), mutations, and methylations, by monitoring the melting behavior of double-stranded DNA. It is a post-PCR method that involves the gradual heating of PCR-amplified DNA in the presence of intercalating dyes that fluoresce when bound to double-stranded DNA. As the DNA melts, the fluorescence decreases, and the changes in fluorescence are monitored in real time with a digital PCR system. The resulting melting curves are then analyzed to detect genetic differences based on the melting temperatures of the DNA fragments.
The technique has been further advanced by its application on digital microfluidics platforms, which can facilitate the analysis of single-nucleotide polymorphisms (SNPs) with high accuracy and sensitivity. Additionally, massively parallel dHRM has been developed to enable rapid and absolutely quantitative sequence profiling, which can be particularly useful in clinical and industrial settings where accurate quantification of nucleic acids is critical.
See also
High Resolution Melt analysis
Microscale thermophoresis, a method to determine the stability, the length, the conformation and the modifications of DNA and RNA
Nucleic acid thermodynamics
References
External links
Biochemistry | Melting curve analysis | [
"Chemistry",
"Biology"
] | 1,406 | [
"Biochemistry",
"nan"
] |
16,454,056 | https://en.wikipedia.org/wiki/Net%20explosive%20quantity | The net explosive quantity (NEQ), also known as net explosive content (NEC) or net explosive weight (NEW), of a shipment of munitions, fireworks or similar products is the total mass of the contained explosive substances, without the packaging, casings, bullets etc. It also includes the mass of the TNT-equivalent of all contained energetic substances.
The NEQ is often stated on shipment containers for safety purposes.
See also
TNT equivalent
References
Explosives | Net explosive quantity | [
"Chemistry"
] | 93 | [
"Explosives",
"Explosions"
] |
16,454,909 | https://en.wikipedia.org/wiki/Nisoxetine | Nisoxetine (developmental code name LY-94939), originally synthesized in the Lilly research laboratories during the early 1970s, is a potent and selective inhibitor for the reuptake of norepinephrine (noradrenaline) into synapses. It currently has no clinical applications in humans, although it was originally researched as an antidepressant. Nisoxetine is now widely used in scientific research as a standard selective norepinephrine reuptake inhibitor. It has been used to research obesity and energy balance, and exerts some local analgesia effects.
Researchers have attempted to use a carbon-labeled form of nisoxetine for positron emission tomography (PET) imaging of the norepinephrine transporter (NET), with little success. However, it seems that tritium labeled nisoxetine (3H-nisoxetine, 3H-NIS) is a useful radioligand for labeling norepinephrine uptake sites in vitro, which nisoxetine and other antagonists for NET are able to inhibit.
History
In treating depression, it was theorized that substances that could enhance norepinephrine transmission, such as tricyclic antidepressants (TCA), could diminish the symptoms of clinical depression. The origins of nisoxetine can be found within the discovery of fluoxetine (Prozac, by Eli Lilly). In the 1970s, Bryan B. Molloy (a medicinal chemist) and Robert Rathbun (a pharmacologist) began a collaboration to search for potential antidepressant agents that would still retain the therapeutic activity of TCAs without undesirable cardiotoxicity and anticholinergic properties. The antihistamine drug diphenhydramine was found to inhibit monoamine uptake in addition to antagonizing histamine receptors, and this inhibition of monoamine uptake became a potential application for treating depression. As a result, Molloy, along with colleagues Schmiegal and Hauser, synthesized members of the phenoxyphenylpropylamine (PPA) group as analogues of diphenhydramine.
Richard Kattau in the Rathbun laboratory tested the newly created drugs within the series of PPAs for their ability to reverse apomorphine-induced hypothermia in mice (PIHM), a test in which the TCAs were active antagonists. Kattau found that one member of the series, LY94939 (nisoxetine), was as potent and effective as the TCAs in the reversal of PIHM. Nisoxetine was found to be as potent as desipramine in inhibiting norepinephrine uptake in brain synaptosomes while not acting as a potent inhibitor of serotonin (5-HT) or dopamine uptake.
Preclinical studies in humans were also performed in 1976 to evaluate the safety and possible mechanism of nisoxetine. At doses capable of blocking the uptake of norepinephrine and tyramine at nerve terminals, nisoxetine did not produce any substantial side effects. Abnormal electrocardiogram effects were also not observed, indicating it to be a relatively safe compound.
Later, however, researchers considered ways in which subtle chemical differences in the PPA series could selectively inhibit 5-HT uptake, which eventually led to the synthesis of nisoxetine's 4-trifluoremethyl analogue, fluoxetine. Nisoxetine was never marketed as a drug due to a greater interest in pursuing the development of fluoxetine, a selective serotonin reuptake inhibitor (SSRI).
Research
Obesity
Numerous evidence suggests that by altering catecholaminergic signaling (cell communication via norepinephrine and dopamine), food intake and body weight will be affected via classic hypothalamic systems that are involved in the regulation of energy balance. Antidepressants, such as the atypical antidepressant bupropion, can also cause weight loss due to their ability to increase extracellular dopamine and norepinephrine by inhibiting their uptake. Other research has focused on the interaction of serotonin and norepinephrine, leading to serotonin–norepinephrine reuptake inhibitors (SNRIs) as anti-obesity drugs.
The primary forebrain sensor of peripheral cues that relays information about the availability of energy and storage is the arcuate nucleus of the hypothalamus (ARH), and it contains two types of cells that have opposing effects on energy balance. These two types of cells are neuropeptide Y (NPY)-expressing cells, which cause hyperphagia and energy conservation, and cells that express pro-opiomelanocortin (POMC), which are related to hypophagia and increased energy expenditure. NPY and norepinephrine are both localized in select neurons in the brain and periphery. A norepinephrine reuptake inhibitor, such as nisoxetine, could potentially cause anorexia by decreasing activity of cells that express NPY and norepinephrine.
In lean and obese mice, selective and combined norepinephrine and dopamine reuptake inhibition reduces food intake and body weight. Yet selective reuptake inhibitors of norepinephrine and dopamine (nisoxetine and a substance codenamed GBR12783, respectively) independently have no effect on food intake in mice. However, when given in combination, there is profound inhibition of food intake. This demonstrates a synergistic interaction between dopamine and norepinephrine in controlling ingestive behavior, similar to the action of SNRIs. The fact that nisoxetine alone does not affect food intake suggests that norepinephrine alone is insufficient to affect feeding or that the blocked reuptake of norepinephrine by nisoxetine is acting in the wrong place. Unlike nisoxetine, its sulfur analog thionisoxetine reduces food consumption in rodents and is a more promising treatment for obesity and eating disorders.
Analgesia effects
An essential activity of local anesthetics is the blockade of sodium channels. In this way, local anesthetics are able to produce infiltrative cutaneous analgesia, peripheral neural blockades, as well as spinal/epidural anesthesia. Due to nisoxetine's sodium channel blocking effect, it is possible that it may also have a local anesthetic effect. Nisoxetine is able to suppress the nicotine-evoked increase of hippocampal norepinephrine in a dose-dependent manner through effects on the functioning of the nicotinic acetylcholine receptors. It is also able to inhibit tetrodotoxin-sensitive inward sodium currents in rat superior cervical ganglia.
Nisoxetine elicits local (cutaneous) but not systemic analgesia. Compared to lidocaine, a common anesthetic, nisoxetine is more potent (by about four-fold) and exhibits a longer duration of action in producing cutaneous anesthesia. NMDA receptors are not involved in this local anesthetic effect. However, it is unclear whether nisoxetine may cause toxicity to neuronal or subcutaneous tissues, which still needs to be investigated in the future.
3H-nisoxetine
Due to shortcomings of the previously available radioligands for the norepinephrine uptake site, researchers needed to find a better ligand for measuring norepinephrine reuptake sites. These shortcomings also meant that the norepinephrine uptake sites in the brain were less studied than the 5-HT uptake sites. Previous radioligands for the norepinephrine uptake sites, 3H-desipramine (3H-DMI) and 3H-mazindol (3H-MA), did not have specific and selective binding properties for norepinephrine sites.
3H-nisoxetine (3H-NIS), on the other hand, is a potent and selective inhibitor for the uptake of norepinephrine and is now used as a selective marker of the norepinephrine transporter. Most studies using 3H-NIS are conducted in the rat model, and not many have been performed in humans. 3H-NIS can be used to map anatomical sites associated with norepinephrine uptake through the technique of quantitative autoradiography (QAR), where the pattern of 3H-NIS binding is consistent with the pattern of norepinephrine activation. Lesion studies also confirm 3H-NIS's relation to presynaptic norepinephrine terminals.
3H-NIS binds with high affinity (Kd = 0.7 nM) and selectivity to a homogenous population of sites that are associated with norepinephrine uptake in the rat brain. Specific 3H-NIS binding increases as sodium concentration is raised, and binding of 3H-NIS is barely detectable in the absence of sodium. Binding of 3H-NIS is sodium-dependent because sodium ions are necessary for the neuronal uptake of norepinephrine. This binding is also heat-sensitive, where heating rat cerebral cortical membranes reduces the amount of specific binding. Nisoxetine (Ki = 0.7 ± 0.02 nM), as well as other compounds that have a high affinity for norepinephrine uptake sites (DMI, MAZ, maprotiline), act as potent inhibitors of 3H-NIS binding to rat cortical membranes.
In humans, 3H-NIS is used to measure uptake sites in the locus coeruleus (LC). The LC, a source of norepinephrine axons, has been of focus in research due to reports of cell loss in the area that occurs with aging in humans. Decreased binding of 3H-NIS reflects the loss of LC cells.
NET imaging using PET
Researchers are attempting to image the norepinephrine transporter (NET) system using positron emission tomography (PET). Possible ligands to be used for this methodology must possess high affinity and selectivity, high brain penetration, appropriate lipophilicity, reasonable stability in plasma, as well as high plasma free fraction. 11C-labeled nisoxetine, synthesized by Haka and Kilbourn, was one possible candidate that was investigated for being used as a potential PET tracer. However, in vivo, 11C-labeled nisoxetine exhibits nonspecific binding, therefore limiting its effectiveness as a possible ligand for PET.
Pharmacological properties
Nisoxetine is a potent and selective inhibitor of norepinephrine uptake, where it is about 1000-fold more potent in blocking norepinephrine uptake than that of serotonin. It is 400-fold more potent in blocking the uptake of norepinephrine than that of dopamine. The R-isomer of nisoxetine has 20 times greater affinity than its S-isomer for NET. Nisoxetine has little or no affinity for neurotransmitter receptors. The NET Ki for nisoxetine is generally agreed to be 0.8 nM.
In a preclinical study where nisoxetine was administered to volunteers, the average plasma concentration after a single dose was found to be 0.028 microgram/ml, and after the fifteenth dose was 0.049 microgram/ml. The binding of nisoxetine is saturable in human placental NET, with specific binding values being 13.8 ± 0.4 nM for Kd and 5.1 ± 0.1 pmol/mg of protein for Bmax. Sodium and chloride enhance nisoxetine binding by increasing the affinity of the binding site for its ligand, where Kd values increase as the concentration of chloride decreases. Bmax is not affected.
Activity of 3H-NIS on cerebral cortical homogenates in mice shows a Kd of 0.80 ± 0.11 nM and a Bmax of + 12 fmol/mg protein. Density of binding is generally associated with brain regions that exhibit high norepinephrine levels, where the highest specific 3H-NIS binding is in the brainstem (LC) and the thalamus. Specific 3H-NIS binding is dependent on sodium cations, where specific and total binding are raised as the concentration of sodium is increased (Tejani-Butt et al., 1990). This binding occurs with high affinity towards a single class of sites that have similar pharmacological characteristics to the norepinephrine uptake site.
Nisoxetine and other inhibitors of norepinephrine uptake sites are able to inhibit the binding of 3H-NIS. When rats are intravenously injected with nisoxetine and the binding of 3H-NIS is measured, the Ki of nisoxetine is reported to be 0.8 ± 0.1 nM for concentrations of up to 1 μM.
Adverse effects
Norepinephrine, along with dopamine and/or other serotonin reuptake inhibitors, are often prescribed in the treatment of mood disorders and are generally well tolerated.
Preclinical studies in humans using nisoxetine were conducted in the 1970s, and side effects of the drug were examined. Doses ranging from 1 mg to 50 mg did not result in any changes in baseline values in haematologic tests, routine blood chemistries, or coagulation parameters. Larger doses produced some side effects, but no electrocardiographic changes were observed at any dose. Injections with doses of tyramine in humans while receiving nisoxetine resulted in a decreased responsiveness to tyramine with increased duration of administered nisoxetine. Another effect of nisoxetine administration was that subjects required much smaller doses of norepinephrine to produce the same blood pressure responses as those who received a placebo. In other words, subjects exhibited an increased sensitivity to norepinephrine after nisoxetine administration. Preclinical tests concluded that the drug, at the tested doses, appears to be safe for use in humans.
Chemical properties
Nisoxetine is a racemic compound with two isomers.
Tricyclic (three-ring) structures can be found in many different drugs, and for medicinal chemists they allow the conformational mobility of two phenyl rings attached to a common carbon or hetero (non-carbon) atom to be restricted. Small molecular changes, such as substituents or ring flexibility, can cause changes in the pharmacological and physiochemical properties of a drug. The mechanism of action of the phenoxyphenylpropylamines can be explained by the critical role of the type and position of the ring substitution. The unsubstituted molecule is a weak SSRI. A compound that is highly potent and selective for blocking norepinephrine reuptake (a selective norepinephrine reuptake inhibitor) results from 2-substitution of the phenoxy ring.
See also
Reboxetine
Atomoxetine
Fluoxetine
References
Amines
Antidepressants
Catechol ethers
Norepinephrine reuptake inhibitors
Wakefulness-promoting agents
2-Methoxyphenyl compounds | Nisoxetine | [
"Chemistry"
] | 3,326 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
16,455,757 | https://en.wikipedia.org/wiki/ASR-11 | ASR-11 is a Digital Airport Surveillance Radar (DASR), an advanced radar system utilized by the United States as the next generation of terminal air traffic control. The ASR-11 is an upgraded, advanced version of the previous ASR-9 radar. This next-generation radar system has been developed through a joint effort by the Federal Aviation Administration, the Department of Defense and the United States Air Force, which took on most of the lead development tasks.
Operation
Much like the previous ASR-9, the ASR-11 has been deployed around airport terminals across the United States to meet the requirements of a digital, automated air traffic monitoring system. The main purpose of the ASR-11 is to replace aging radar systems at airfields that did not receive the ASR-9, as well as use across the world by the United States military. Many of the advanced parameters such as weather monitoring and digital pinpoint monitoring found on the ASR-9 are also found on the ASR-11. This radar system consists of two separate electronic subsystems, the first being a primary radar and the other a secondary surveillance radar, often referred to as the beacon. Like the ASR-9, the ASR-11 uses a continuously rotating antenna that is mounted on a tower. Transmitted electromagnetic signals reflect off the surface of an aircraft that is within sixty nautical miles of the radar location. The signals are sent to the processing equipment, which measures the echo delay, or the amount of time it takes for the electromagnetic signals to return, and the direction from which they came. The information from the signal is sent to an air traffic control tower, or a Radar Approach Control (RAPCON), with a digital tag that describes the location, heading, and speed at which the aircraft is moving. The overall operation of the ASR-11 is similar to that of the ASR-9, with relatively few differences between the two radar systems. There are only two main areas where the ASR-11 has an advantage over the ASR-9, and it also has some disadvantages in its weather capability.
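As an illustration of the range measurement (generic two-way radar arithmetic, not taken from the ASR-11 specification), the slant range follows directly from the echo delay:

```python
# Illustrative range-from-echo-delay calculation: the signal travels out and back,
# so range = c * delay / 2. Figures here are examples only.

C = 299_792_458.0                       # speed of light, m/s
METRES_PER_NAUTICAL_MILE = 1852.0

def slant_range_nm(echo_delay_seconds):
    return C * echo_delay_seconds / 2 / METRES_PER_NAUTICAL_MILE

# An aircraft near the 60-nautical-mile edge of coverage returns an echo after ~741 microseconds.
print(f"{slant_range_nm(741e-6):.1f} NM")   # ~60.0 NM
```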
Advantages
The first advantage the ASR-11 offers is the use of a low peak-power, solid state transmitter with pulse compression technology, replacing the ASR-9's high peak-power, short pulse power system. This gives the radar the ability to provide the same amount of energy to a target at long range while making the radar less sensitive at shorter ranges. Any aircraft that comes closer than six nautical miles from the radar cannot be located with the long range pulse system built into the ASR-11. Installing additional radar equipment to the same antenna is required for close range location and weather detection. The second advantage of using an ASR-11 is the radar's ability to utilize a pulse sequence diversity. This gives the radar system the capability to limit processing dwells to a small number of pulses. This feature becomes most important when monitoring air traffic is the primary use of the radar. Reducing the number of pulses sent out by the radar system also has a direct effect on the Doppler resolution, resulting in a decreased ability to process live weather conditions.
Disadvantages
The main disadvantage of using an ASR-11 radar system is the reduction in Doppler radar resolution. Like the ASR-9, the ASR-11 has an on-site, dedicated weather reflectivity processor, with six separate levels of precipitation reflectivity. The limited number of pulses sent out by the radar system has a direct effect on its ability to measure weather conditions. Unlike the ASR-9, the ASR-11 is less suited for wind shear detection, Doppler wind measurement, and precipitation reflectivity. The ASR-11 radar system will remain as is, with no further plans to upgrade the current detection system with a Weather Systems Processor (WSP).
References
Airport infrastructure
Ground radars | ASR-11 | [
"Engineering"
] | 789 | [
"Airport infrastructure",
"Aerospace engineering"
] |
16,456,063 | https://en.wikipedia.org/wiki/Interval%20chromatic%20number%20of%20an%20ordered%20graph | In mathematics, the interval chromatic number X<(H) of an ordered graph H is the minimum number of intervals the (linearly ordered) vertex set of H can be partitioned into so that no two vertices belonging to the same interval are adjacent in H.
Difference with chromatic number
An interesting property of the interval chromatic number is that it is easily computable. Indeed, by a simple greedy algorithm one can efficiently find an optimal partition of the vertex set of H into X<(H) independent intervals. This is in sharp contrast with the fact that even approximating the usual chromatic number of a graph is an NP-hard task.
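A sketch of such a greedy partition (illustrative code; the input format, with vertices numbered 0 to n−1 in the given linear order and edges listed as pairs, is an assumption): extend the current interval until the next vertex is adjacent to a vertex already in it, then start a new interval.

```python
# Illustrative greedy partition of an ordered graph into independent intervals.

def interval_partition(n, edge_list):
    adjacent = {frozenset(e) for e in edge_list}
    intervals, current = [], [0]
    for v in range(1, n):
        # start a new interval as soon as v is adjacent to some vertex in the current one
        if any(frozenset((u, v)) in adjacent for u in current):
            intervals.append(current)
            current = [v]
        else:
            current.append(v)
    intervals.append(current)
    return intervals

# An ordered path 0-1-2-3: consecutive vertices are adjacent, so every interval is a single
# vertex and the interval chromatic number is 4, although the ordinary chromatic number is 2.
print(interval_partition(4, [(0, 1), (1, 2), (2, 3)]))   # [[0], [1], [2], [3]]
```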
Let K(H) be the chromatic number of an ordered graph H. Then for any ordered graph H,
X<(H) ≥ K(H).
It should be noted that a particular graph H and its isomorphic copies all have the same chromatic number, but their interval chromatic numbers may differ, since the interval chromatic number depends on the ordering of the vertex set.
References
Graph coloring | Interval chromatic number of an ordered graph | [
"Mathematics"
] | 212 | [
"Graph theory stubs",
"Graph coloring",
"Mathematical relations",
"Graph theory"
] |
5,721,283 | https://en.wikipedia.org/wiki/Journal%20of%20Machine%20Learning%20Research | The Journal of Machine Learning Research is a peer-reviewed open access scientific journal covering machine learning. It was established in 2000 and the first editor-in-chief was Leslie Kaelbling. The current editors-in-chief are Francis Bach (Inria) and David Blei (Columbia University).
History
The journal was established as an open-access alternative to the journal Machine Learning. In 2001, forty editorial board members of Machine Learning resigned, saying that in the era of the Internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. The open access model employed by the Journal of Machine Learning Research allows authors to publish articles for free and retain copyright, while archives are freely available online.
Print editions of the journal were published by MIT Press until 2004 and by Microtome Publishing thereafter. From its inception, the journal received no revenue from the print edition and paid no subvention to MIT Press or Microtome Publishing.
In response to the prohibitive costs of arranging workshop and conference proceedings publication with traditional academic publishing companies, the journal launched a proceedings publication arm in 2007 and now publishes proceedings for several leading machine learning conferences, including the International Conference on Machine Learning, COLT, AISTATS, and workshops held at the Conference on Neural Information Processing Systems.
References
Further reading
External links
Computer science journals
Open access journals
Machine learning
Academic journals established in 2000 | Journal of Machine Learning Research | [
"Engineering"
] | 282 | [
"Artificial intelligence engineering",
"Machine learning"
] |
5,721,403 | https://en.wikipedia.org/wiki/Machine%20Learning%20%28journal%29 | Machine Learning is a peer-reviewed scientific journal, published since 1986.
In 2001, forty editors and members of the editorial board of Machine Learning resigned in order to support the Journal of Machine Learning Research (JMLR), saying that in the era of the internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. Instead, they wrote, they supported the model of JMLR, in which authors retained copyright over their papers and archives were freely available on the internet.
Following the mass resignation, Kluwer changed their publishing policy to allow authors to self-archive their papers online after peer-review.
Selected articles
References
Computer science journals
Machine learning
Delayed open access journals
Springer Science+Business Media academic journals
Academic journals established in 1986 | Machine Learning (journal) | [
"Engineering"
] | 157 | [
"Artificial intelligence engineering",
"Machine learning"
] |
5,721,649 | https://en.wikipedia.org/wiki/Dok-7 | Dok-7 is a non-catalytic cytoplasmic adaptor protein that is expressed specifically in muscle and is essential for the formation of neuromuscular synapses. Further, Dok-7 contains pleckstrin homology (PH) and phosphotyrosine-binding (PTB) domains that are critical for Dok-7 function. Finally, mutations in Dok-7 are commonly found in patients with limb-girdle congenital myasthenia.
Dok-7 regulates neuromuscular synapse formation by activating MuSK
The formation of neuromuscular synapses requires the muscle-specific receptor tyrosine kinase (MuSK). In mice genetically mutant for MuSK, acetylcholine receptors (AChRs) fail to cluster and motor neurons fail to differentiate. Because Dok-7 mutant mice are indistinguishable from MuSK mutant mice, these observations suggest Dok-7 might regulate MuSK activation. Indeed, Dok-7 binds phosphorylated MuSK and activates MuSK in purified protein preparations and in muscle in-vivo by transgenic overexpression. Furthermore, the nerve-derived organizing factor agrin fails to stimulate MuSK activation in muscle cells genetically null for Dok-7. Thus, Dok-7 is both necessary and sufficient for the activation of MuSK.
Dok-7 signaling
The requirement for MuSK in the formation of the NMJ was primarily demonstrated by mouse "knockout" studies. In mice which are deficient for either agrin or MuSK, the neuromuscular junction does not form.
Upon activation by its ligand agrin, MuSK signals via the proteins Dok-7 and rapsyn to induce "clustering" of acetylcholine receptors (AChR). Cell signaling downstream of MuSK requires Dok-7. Mice which lack this protein fail to develop endplates. Further, forced expression of Dok-7 induces the tyrosine phosphorylation, and thus the activation, of MuSK. Dok-7 interacts with MuSK by way of a protein domain called a PTB domain.
In addition to the AChR, MuSK, and Dok-7, other proteins are then gathered to form the endplate of the neuromuscular junction. The nerve terminates onto the endplate, forming the neuromuscular junction, a structure which is required to transmit nerve impulses to the muscle and thus initiate muscle contraction.
Congenital Myasthenia Syndrome
Homozygous mutation of Dok-7 is responsible for a form of congenital myasthenic syndrome (CMS) that is unique among disorders in this category because it affects muscles in the limbs and trunk but mostly spares the face, eyes, and functions of the mouth and pharynx (chewing, swallowing and speech). Salbutamol can be effective in relieving CMS symptoms attributable to Dok-7 mutations.
References
Developmental neuroscience
Proteins | Dok-7 | [
"Chemistry"
] | 637 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
1,065,730 | https://en.wikipedia.org/wiki/Vaccine%20trial | A vaccine trial is a clinical trial that aims at establishing the safety and efficacy of a vaccine prior to it being licensed.
A vaccine candidate drug is first identified through preclinical evaluations that could involve high throughput screening and selecting the proper antigen to invoke an immune response.
Some vaccine trials may take months or years to complete, depending on the time required for the subjects to react to the vaccine and develop the required antibodies.
Preclinical stage
Preclinical development stages are necessary to determine the immunogenicity potential and safety profile for a vaccine candidate.
This is also the stage in which the drug candidate may be first tested in laboratory animals prior to moving to the Phase I trials. Vaccines such as the oral polio vaccine have been first tested for adverse effects and immunogenicity in monkeys as well as non-human primates and lab mice.
Recent scientific advances have helped to use transgenic animals as a part of vaccine preclinical protocol in hopes to more accurately determine drug reactions in humans. Understanding vaccine safety and the immunological response to the vaccine, such as toxicity, are necessary components of the preclinical stage. Other drug trials focus on the pharmacodynamics and pharmacokinetics; however, in vaccine studies it is essential to understand toxic effects at all possible dosage levels and the interactions with the immune system.
Phase I
The Phase I study consists of introducing the vaccine candidate to assess its safety in healthy people. A vaccine Phase I trial involves normal healthy subjects, each tested with either the candidate vaccine or a "control" treatment, typically a placebo or an adjuvant-containing cocktail, or an established vaccine (which might be intended to protect against a different pathogen). The primary observation is for detection of safety (absence of an adverse event) and evidence of an immune response.
After the administration of the vaccine or placebo, the researchers collect data on antibody production, on health outcomes (such as illness due to the targeted infection or to another infection). Following the trial protocol, the specified statistical test is performed to gauge the statistical significance of the observed differences in the outcomes between the treatment and control groups. Side effects of the vaccine are also noted, and these contribute to the decision on whether to advance the candidate vaccine to a Phase II trial.
One typical version of Phase I studies in vaccines involves an escalation study, which is used in mainly medicinal research trials. The drug is introduced into a small cohort of healthy volunteers. Vaccine escalation studies aim to minimize chances of serious adverse effects (SAE) by slowly increasing the drug dosage or frequency. The first level of an escalation study usually has two or three groups of around 10 healthy volunteers. Each subgroup receives the same vaccine dose, which is the expected lowest dose necessary to invoke an immune response (the main goal in a vaccine – to create immunity). New subgroups can be added to experiment with a different dosing regimen as long as the previous subgroup did not experience SAEs. There are variations in the vaccination order that can be used for different studies. For example, the first subgroup could complete the entire regimen before the second subgroup starts or the second can begin before the first ends as long as SAEs were not detected. The vaccination schedule will vary depending on the nature of the drug (i.e. the need for a booster or several doses over the course of short time period). Escalation studies are ideal for minimizing risks for SAEs that could occur with less controlled and divided protocols.
Phase II
The transition to Phase II relies on the immunogenic and toxicity results from Phase I in a small cohort of healthy volunteers. Phase II will consist of more healthy volunteers in the vaccine target population (~ hundreds of people) to determine reactions in a more diverse set of humans and test different schedules.
Phase III
Similarly, Phase III trials continue to monitor toxicity, immunogenicity, and SAEs on a much larger scale. The vaccine must be shown to be safe and effective in natural disease conditions before being submitted for approval and then general production. In the United States, the Food and Drug Administration (FDA) is responsible for approving vaccines.
Phase IV
Phase IV trials are typically monitoring stages that collect information continuously on vaccine usage, adverse effects, and long-term immunity after the vaccine is licensed and marketed. Harmful effects, such as increased risk of liver failure or heart attacks, discovered by Phase IV trials may result in a drug being no longer sold, or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). Further examples include the swine flu vaccine and the rotavirus vaccine, which increased the risk of Guillain-Barré syndrome (GBS) and intussusception respectively. Thus, the fourth phase of clinical trials is used to ensure long-term vaccine safety.
References
External links
Vaccine Research Center Information regarding preventative vaccine research studies
Virology
Trial, Vaccine
Clinical research
Design of experiments | Vaccine trial | [
"Biology"
] | 1,052 | [
"Vaccination"
] |
1,065,888 | https://en.wikipedia.org/wiki/Cephalization | Cephalization is an evolutionary trend in animals in which, over many generations, the special sense organs and nerve ganglia become concentrated towards the front of the body, where the mouth is located, often producing an enlarged head. This is associated with the animal's movement direction and bilateral symmetry. Cephalization of the nervous system has led to the formation of a brain with varying degrees of functional centralization in three phyla of bilaterian animals, namely the arthropods, cephalopod molluscs, and vertebrates.
Animals without bilateral symmetry
Cnidaria, such as the radially symmetrical Hydrozoa, show some degree of cephalization. The Anthomedusae have a head end with their mouth, photoreceptive cells, and a concentration of neural cells.
Bilateria
Cephalization is a characteristic feature of the bilaterians, a large group containing the majority of animal phyla. These have the ability to move, using muscles, and a body plan with a front end that encounters stimuli first as the animal moves forwards, and accordingly has evolved to contain many of the body's sense organs, able to detect light, chemicals, and gravity. There is often also a collection of nerve cells able to process the information from these sense organs, forming a brain in several phyla and one or more ganglia in others.
Acoela
The Acoela are basal bilaterians, part of the Xenacoelomorpha. They are small and simple animals with flat bodies. They have slightly more nerve cells at the head end than elsewhere, not forming a distinct and compact brain. This represents an early stage in cephalization.
Flatworms
The Platyhelminthes (flatworms) have a more complex nervous system than the Acoela, and are lightly cephalized, for instance having an eyespot above the brain, near the front end.
Complex active bodies
The philosopher Michael Trestman noted that three bilaterian phyla, namely the arthropods, the molluscs in the shape of the cephalopods, and the chordates, were distinctive in having "complex active bodies", something that the acoels and flatworms did not have. Any such animal, whether predator or prey, has to be aware of its environment—to catch its prey, or to evade its predators. These groups are exactly those that are most highly cephalized. These groups, however, are not closely related: in fact, they represent widely separated branches of the Bilateria, as shown on the phylogenetic tree; their lineages split hundreds of millions of years ago. Other (less cephalized) phyla are not shown, for clarity.
Arthropods
In arthropods, cephalization progressed with the gradual incorporation of trunk segments into the head region. This was advantageous because it allowed for the evolution of more effective mouth-parts for capturing and processing food. Insects are strongly cephalized, their brain made of three fused ganglia attached to the ventral nerve cord, which in turn has a pair of ganglia in each segment of the thorax and abdomen. The insect head is an elaborate structure made of several segments fused rigidly together, and equipped with both simple and compound eyes, and multiple appendages including sensory antennae and complex mouthparts (maxillae and mandibles).
Cephalopods
Cephalopod molluscs including octopus, squid, cuttlefish and nautilus are the most intelligent and highly cephalized invertebrates, with well-developed senses, including advanced 'camera' eyes and large brains.
Vertebrates
Cephalization in vertebrates, the group that includes mammals, birds, reptiles, amphibians and fishes, has been studied extensively. The heads of vertebrates are complex structures, with distinct sense organs for sight, olfaction, and hearing, and a large, multi-lobed brain protected by a skull of bone or cartilage. Cephalochordates like the lancelet, a small fishlike animal with very little cephalization, are closely related to vertebrates but do not have these structures. In the 1980s, the new head hypothesis proposed that the vertebrate head is an evolutionary novelty resulting from the emergence of neural crest and cranial placodes (thickened areas of ectoderm), which result in the formation of all senses outside of the brain. However, in 2014, a transient larva tissue of the lancelet was found to be virtually indistinguishable from the neural crest-derived cartilage (later bone, in jawed ones) which forms the vertebrate skull, suggesting that persistence of this tissue and expansion into the entire head space could be a viable evolutionary route to formation of the vertebrate head. Advanced vertebrates have increasingly elaborate brains.
Anterior Hox genes
Bilaterians have many more Hox genes controlling development, including that of the front of the body, than do the less cephalized Cnidaria (two Hox clusters) and the Acoelomorpha (three Hox clusters). In the vertebrates, duplication resulted in the four Hox clusters (HoxA to HoxD) of mammals and birds, while a further duplication gave teleost fishes eight Hox clusters. Some of these genes, those responsible for the front (anterior) of the body, helped to create the heads of both arthropods and vertebrates. However, the Hox1-5 genes were already present in ancestral arthropods and vertebrates that did not have complex head structures. The Hox genes therefore most likely assisted in the cephalization of these two bilaterian groups independently by convergent evolution, resulting in similar gene networks.
See also
Organogenesis
Phylogenetics
References
Evolutionary biology
Evolutionary biology terminology | Cephalization | [
"Biology"
] | 1,203 | [
"Evolutionary biology",
"Evolutionary biology terminology"
] |
1,066,314 | https://en.wikipedia.org/wiki/Dunam | A dunam (Ottoman Turkish, Arabic: ; ; ), also known as a donum or dunum and as the old, Turkish, or Ottoman stremma, was the Ottoman unit of area equivalent to the Greek stremma or English acre, representing the amount of land that could be ploughed by a team of oxen in a day. The legal definition was "forty standard paces in length and breadth", but its actual area varied considerably from place to place, from a little more than in Ottoman Palestine to around in Iraq.
The unit is still in use in many areas previously ruled by the Ottomans, although the new or metric dunam has been redefined as exactly one decare (1,000 square metres), which is 1/10 hectare (1/10 × 10,000 square metres), like the modern Greek royal stremma.
History
The name dönüm, from the Ottoman Turkish dönmek (, "to turn"), appears to be a calque of the Byzantine Greek stremma and had the same size. It was likely adopted by the Ottomans from the Byzantines in Mysia-Bithynia.
The Dictionary of Modern Greek defines the old Ottoman stremma as approximately , but Costas Lapavitsas used the value of for the region of Naoussa in the early 20th century.
Definition
Albania, Bosnia and Herzegovina, Serbia, Montenegro
In Bosnia and Herzegovina and also in Serbia, the unit is called dulum (дулум) or dunum (дунум). In Bosnia and Herzegovina a dunum (or dulum) equals . In the region of Leskovac, south Serbia, one dulum is equal to . In Albania it is called dynym or dylym. It is equal to .
Bulgaria
In Bulgaria, the decare (декар) is used, which is an SI unit, literally meaning 10 ares.
Cyprus
In Cyprus, a donum is 14,400 square feet (about 1,338 square metres). In the Republic of Cyprus, older Greek Cypriots also still refer to the donum using the local Greek Cypriot dialect word σκάλες [skales], rather than the mainland Greek word stremma (equivalent to a decare). However, since 1986 Cyprus officially uses the square metre and the hectare.
A donum consists of 4 evleks, each of which is 3,600 square feet (about 334 square metres).
Greece
In Greece, the old dönüm is called a "Turkish stremma", while today, a stremma or "royal stremma" is exactly one decare, like the metric dönüm.
Iraq
In Iraq, the dunam is .
Israel and Turkey
In Israel and Turkey, the dunam is 1,000 square metres, which is 1 decare. From the Ottoman period and through the early years of the British Mandate for Palestine, the size of a dunam differed from the metric value, but in 1928 the metric dunam of 1,000 square metres was adopted, and this is still used today in Israel.
United Arab Emirates
The Dubai Statistics Center and Statistics Centre Abu Dhabi use the metric dunam (spelt as donum) for data relating to agricultural land use. One donum equals 1,000 square metres.
Variations
Other countries using a dunam of some size include Libya and Syria.
Conversions
A metric dunam is equal to the following (a conversion sketch in code follows the list):
1,000 square metres (exactly)
10 ares (exactly)
1 decare (exactly)
0.1 hectares (exactly)
0.001 square kilometres (exactly)
0.247105381 acres (approx)
1,195.99005 square yards (approx)
10,763.9104 square feet (approx)
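The factors above can be applied programmatically; the following sketch converts metric dunams to the listed units via square metres. The conversion constants are the standard international definitions, and the function name is only illustrative.

```python
# A minimal conversion sketch for the metric dunam (exactly 1,000 square metres).
SQUARE_METRES_PER_METRIC_DUNAM = 1_000.0

UNITS_IN_SQUARE_METRES = {
    "are": 100.0,
    "decare": 1_000.0,
    "hectare": 10_000.0,
    "square kilometre": 1_000_000.0,
    "acre": 4_046.8564224,      # international acre
    "square yard": 0.83612736,
    "square foot": 0.09290304,
}

def metric_dunams_to(unit: str, dunams: float = 1.0) -> float:
    """Convert metric dunams to the requested unit via square metres."""
    return dunams * SQUARE_METRES_PER_METRIC_DUNAM / UNITS_IN_SQUARE_METRES[unit]

for unit in UNITS_IN_SQUARE_METRES:
    print(f"1 metric dunam = {metric_dunams_to(unit):.6g} {unit}")
# e.g. about 0.247105 acre, 1195.99 square yard, 10763.9 square foot
```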
Comparable measures
The Byzantine Greek stremma was the probable source of the Turkish unit. The zeugarion (Turkish çift) was a similar unit derived from the area plowed by a team of oxen in a day. The English acre was originally similar to both units in principle, although it developed separately.
See also
Orders of magnitude (area) for further comparisons
Conversion of units
Feddan, a similar non-SI unit of area used in Egypt, Sudan, and Syria
Resm-i dönüm, a land tax based on the area of a farm
References
External links
Foreign Weights and Measures Formerly in Common Use
Dictionary of units
Variable donums in Turkey
Summary based on UN handbook
Units of area
Turkish words and phrases
Metricated units | Dunam | [
"Mathematics"
] | 881 | [
"Metricated units",
"Quantity",
"Units of area",
"Units of measurement"
] |
1,067,210 | https://en.wikipedia.org/wiki/Crystal%20field%20excitation | Crystal field excitation is the electronic transition of an electron between two orbitals of an atom that is situated in a crystal field environment. They are often observed in coordination complexes of transition metals. Some examples of crystal field excitations are dd-transitions on a copper atom that is surrounded by an octahedron of oxygen atoms, or ff-transitions on the uranium atom in uranium antimonide.
References
Crystallography | Crystal field excitation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 88 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
1,067,485 | https://en.wikipedia.org/wiki/Bismanol | Bismanol is a magnetic alloy of bismuth and manganese (manganese bismuthide) developed by the US Naval Ordnance Laboratory.
History
Bismanol, a permanent magnet made from powder metallurgy of manganese bismuthide, was developed by the US Naval Ordnance Laboratory in the early 1950s – at the time of invention it was one of the highest coercive force permanent magnets available, at 3000 oersteds. Coercive force reached 3650 oersteds and magnetic flux density 4800 by the mid 1950s. The material was generally strong, and stable to shock and vibration, but had a tendency to chip. Slow corrosion of the material occurred under normal conditions.
The material was used to make permanent magnets for use in small electric motors.
Bismanol magnets have been replaced by neodymium magnets which are both cheaper and superior in other ways, by samarium-cobalt magnets in more critical applications, and by alnico magnets.
References
Magnetic alloys
Ferromagnetic materials
Bismuth alloys
Manganese alloys | Bismanol | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 220 | [
"Bismuth alloys",
"Ferromagnetic materials",
"Electric and magnetic fields in matter",
"Manganese alloys",
"Materials science",
"Magnetic alloys",
"Materials",
"Alloys",
"Matter"
] |
1,068,768 | https://en.wikipedia.org/wiki/Phytoremediation | Phytoremediation technologies use living plants to clean up soil, air and water contaminated with hazardous contaminants. It is defined as "the use of green plants and the associated microorganisms, along with proper soil amendments and agronomic techniques to either contain, remove or render toxic environmental contaminants harmless". The term is an amalgam of the Greek phyto (plant) and Latin remedium (restoring balance). Although attractive for its cost, phytoremediation has not been demonstrated to redress any significant environmental challenge to the extent that contaminated space has been reclaimed.
Phytoremediation is proposed as a cost-effective plant-based approach of environmental remediation that takes advantage of the ability of plants to concentrate elements and compounds from the environment and to detoxify various compounds without causing additional pollution. The concentrating effect results from the ability of certain plants called hyperaccumulators to bioaccumulate chemicals. The remediation effect is quite different. Toxic heavy metals cannot be degraded, but organic pollutants can be, and are generally the major targets for phytoremediation. Several field trials confirmed the feasibility of using plants for environmental cleanup.
Background
Soil remediation is an expensive and complicated process. Traditional methods involve removal of the contaminated soil followed by treatment and return of the treated soil.
Phytoremediation could in principle be a more cost-effective solution. Phytoremediation may be applied to polluted soil or static water environments. This technology has been increasingly investigated and employed at sites with soils contaminated with heavy metals such as cadmium, lead, aluminum, arsenic and antimony. These metals can cause oxidative stress in plants, destroy cell membrane integrity, interfere with nutrient uptake, inhibit photosynthesis and decrease plant chlorophyll.
Phytoremediation has been used successfully in the restoration of abandoned metal mine workings, and sites where polychlorinated biphenyls have been dumped during manufacture and mitigation of ongoing coal mine discharges reducing the impact of contaminants in soils, water, or air. Contaminants such as metals, pesticides, solvents, explosives, and crude oil and its derivatives, have been mitigated in phytoremediation projects worldwide. Many plants such as mustard plants, alpine pennycress, hemp, and pigweed have proven to be successful at hyperaccumulating contaminants at toxic waste sites.
Not all plants are able to accumulate heavy metals or organic pollutants due to differences in the physiology of the plant. Even cultivars within the same species have varying abilities to accumulate pollutants.
Advantages and limitations
Advantages
the cost of the phytoremediation is lower than that of traditional processes both in situ and ex situ
the possibility of the recovery and re-use of valuable metals (by companies specializing in "phytomining")
it preserves the topsoil, maintaining the fertility of the soil
it increases soil health, yield, and plant phytochemicals
the use of plants also reduces erosion and metal leaching in the soil
Noise, smell and visual disruption are usually less than with alternative methods. The calamine vegetation (Galmeivegetation) of hyperaccumulator plants is even protected by environmental legislation in many areas where it occurs.
Limitations
phytoremediation is limited to the surface area and depth occupied by the roots.
with plant-based systems of remediation, it is not possible to completely prevent the leaching of contaminants into the groundwater (without the complete removal of the contaminated ground, which in itself does not resolve the problem of contamination)
the survival of the plants is affected by the toxicity of the contaminated land and the general condition of the soil
bio-accumulation of contaminants, especially metals, into the plants can affect consumer products like food and cosmetics, and requires the safe disposal of the affected plant material
when taking up heavy metals, sometimes the metal is bound to the soil organic matter, which makes it unavailable for the plant to extract
some plants are too hard to cultivate or too slow growing to make them viable for phytoremediation despite their status as hyperaccumulators. Genetic engineering may improve desirable properties in target species but is controversial in some countries.
Processes
A range of processes mediated by plants or algae have been tested for treating environmental problems:
Phytoextraction
Phytoextraction (or phytoaccumulation or phytosequestration) exploits the ability of plants or algae to remove contaminants from soil or water into harvestable plant biomass. It is also used for the mining of metals such as copper(II) compounds. The roots take up substances from the soil or water and concentrate them above ground in the plant biomass. Organisms that can take up high amounts of contaminants are called hyperaccumulators. Phytoextraction can also be performed by plants (e.g. Populus and Salix) that take up lower levels of pollutants, but due to their high growth rate and biomass production may remove a considerable amount of contaminants from the soil. Phytoextraction has been growing rapidly in popularity worldwide for the last twenty years or so. Typically, phytoextraction is used for heavy metals or other inorganics. At the time of disposal, contaminants are typically concentrated in the much smaller volume of the plant matter than in the initially contaminated soil or sediment. After harvest, a lower level of the contaminant will remain in the soil, so the growth/harvest cycle must usually be repeated through several crops to achieve a significant cleanup. After the process, the soil is remediated.
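As a rough illustration of why several crops are usually needed, the following back-of-the-envelope sketch estimates the number of growth/harvest cycles for one hectare. Every number in it is an assumed, hypothetical value, and the model is deliberately simplified: it ignores the fact that plant uptake generally falls as the soil concentration drops.

```python
# A simplified mass-balance sketch of repeated phytoextraction crops (all values assumed).
soil_mass_per_ha_kg = 2.0e6          # assumed plough-layer soil mass per hectare (~2,000 t)
soil_metal_mg_per_kg = 10.0          # assumed initial contamination (mg metal per kg soil)
target_mg_per_kg = 3.0               # assumed clean-up target
biomass_yield_kg_per_ha = 10_000     # assumed harvestable dry biomass per crop
plant_metal_mg_per_kg = 300.0        # assumed metal concentration in the dry biomass

metal_in_soil_mg = soil_mass_per_ha_kg * soil_metal_mg_per_kg
removed_per_crop_mg = biomass_yield_kg_per_ha * plant_metal_mg_per_kg

crops = 0
while metal_in_soil_mg / soil_mass_per_ha_kg > target_mg_per_kg:
    metal_in_soil_mg -= removed_per_crop_mg   # each harvest exports this much metal
    crops += 1

print(crops)  # 5 crops under these assumptions
```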
Of course many pollutants kill plants, so phytoremediation is not a panacea. For example, chromium is toxic to most higher plants at concentrations above 100 μM·kg−1 dry weight.
Mining of these extracted metals through phytomining is a conceivable way of recovering the material. Hyperaccumulating plants are often metallophytes. Induced or assisted phytoextraction is a process where a conditioning fluid containing a chelator or another agent is added to soil to increase metal solubility or mobilization so that the plants can absorb them more easily. While such additives can increase metal uptake by plants, they can also lead to large amounts of available metals in the soil beyond what the plants are able to translocate, causing potential leaching into the subsoil or groundwater.
Examples of plants that are known to accumulate the following contaminants:
Arsenic, using the sunflower (Helianthus annuus), or the Chinese Brake fern (Pteris vittata).
Cadmium, using willow (Salix viminalis), which as a phytoextractor of cadmium (Cd), zinc (Zn), and copper (Cu).
Cadmium and zinc, using alpine pennycress (Thlaspi caerulescens), a hyperaccumulator of these metals at levels that would be toxic to many plants. Specifically, pennycress leaves accumulate up to 380 mg/kg Cd. On the other hand, the presence of copper seems to impair its growth.
Chromium is toxic to most plants. However, tomatoes (Solanum lycopersicum) show some promise.
Lead, using Indian mustard (Brassica juncea), ragweed (Ambrosia artemisiifolia), hemp dogbane (Apocynum cannabinum), or poplar trees, which sequester lead in their biomass.
Salt-tolerant (moderately halophytic) barley and/or sugar beets are commonly used for the extraction of sodium chloride (common salt) to reclaim fields that were previously flooded by sea water.
Caesium-137 and strontium-90 were removed from a pond using sunflowers after the Chernobyl accident.
Mercury, selenium and organic pollutants such as polychlorinated biphenyls (PCBs) have been removed from soils by transgenic plants containing genes for bacterial enzymes.
Thallium is sequestered by some plants.
Phytostabilization
Phytostabilization reduces the mobility of substances in the environment, for example, by limiting the leaching of substances from the soil. It focuses on the long term stabilization and containment of the pollutant. The plant immobilizes the pollutants by binding them to soil particles making them less available for plant or human uptake. Unlike phytoextraction, phytostabilization focuses mainly on sequestering pollutants in soil near the roots but not in plant tissues. Pollutants become less bioavailable, resulting in reduced exposure. The plants can also excrete a substance that produces a chemical reaction, converting the heavy metal pollutant into a less toxic form. Stabilization results in reduced erosion, runoff and leaching, in addition to reducing the bioavailability of the contaminant. An example application of phytostabilization is using a vegetative cap to stabilize and contain mine tailings. Some soil amendments decrease radiosource mobility, while at some concentrations the same amendments will increase mobility. Vidal et al. (2000) found the root mats of meadow grasses to be effective at demobilising radiosource materials, especially in combination with certain other agricultural practices, and also found that the particular grass mix makes a significant difference.
Phytodegradation
Phytodegradation (also called phytotransformation) uses plants or microorganisms to degrade organic pollutants in the soil or within the body of the plant. The organic compounds are broken down by enzymes that the plant roots secrete and these molecules are then taken up by the plant and released through transpiration. This process works best with organic contaminants like herbicides, trichloroethylene, and methyl tert-butyl ether.
Phytotransformation results in the chemical modification of environmental substances as a direct result of plant metabolism, often resulting in their inactivation, degradation (phytodegradation), or immobilization (phytostabilization). In the case of organic pollutants, such as pesticides, explosives, solvents, industrial chemicals, and other xenobiotic substances, certain plants, such as Cannas, render these substances non-toxic by their metabolism. In other cases, microorganisms living in association with plant roots may metabolize these substances in soil or water. These complex and recalcitrant compounds cannot be broken down to basic molecules (water, carbon-dioxide, etc.) by plant molecules, and, hence, the term phytotransformation represents a change in chemical structure without complete breakdown of the compound.
The term "Green Liver" is used to describe phytotransformation, as plants behave analogously to the human liver when dealing with these xenobiotic compounds (foreign compound/pollutant). After uptake of the xenobiotics, plant enzymes increase the polarity of the xenobiotics by adding functional groups such as hydroxyl groups (-OH).
This is known as Phase I metabolism, similar to the way that the human liver increases the polarity of drugs and foreign compounds (drug metabolism). Whereas in the human liver enzymes such as cytochrome P450s are responsible for the initial reactions, in plants enzymes such as peroxidases, phenoloxidases, esterases and nitroreductases carry out the same role.
In the second stage of phytotransformation, known as Phase II metabolism, plant biomolecules such as glucose and amino acids are added to the polarized xenobiotic to further increase the polarity (known as conjugation). This is again similar to the processes occurring in the human liver where glucuronidation (addition of glucose molecules by the UGT class of enzymes, e.g. UGT1A1) and glutathione addition reactions occur on reactive centres of the xenobiotic.
Phase I and II reactions serve to increase the polarity and reduce the toxicity of the compounds, although many exceptions to the rule are seen. The increased polarity also allows for easy transport of the xenobiotic along aqueous channels.
In the final stage of phytotransformation (Phase III metabolism), a sequestration of the xenobiotic occurs within the plant. The xenobiotics polymerize in a lignin-like manner and develop a complex structure that is sequestered in the plant. This ensures that the xenobiotic is safely stored, and does not affect the functioning of the plant. However, preliminary studies have shown that these plants can be toxic to small animals (such as snails), and, hence, plants involved in phytotransformation may need to be maintained in a closed enclosure.
Hence, the plants reduce toxicity (with exceptions) and sequester the xenobiotics in phytotransformation. Trinitrotoluene phytotransformation has been extensively researched and a transformation pathway has been proposed.
Phytostimulation
Phytostimulation (or rhizodegradation) is the enhancement of soil microbial activity for the degradation of organic contaminants, typically by organisms that associate with roots. This process occurs within the rhizosphere, which is the layer of soil that surrounds the roots. Plants release carbohydrates and acids that stimulate microorganism activity which results in the biodegradation of the organic contaminants. This means that the microorganisms are able to digest and break down the toxic substances into harmless form. Phytostimulation has been shown to be effective in degrading petroleum hydrocarbons, PCBs, and PAHs. Phytostimulation can also involve aquatic plants supporting active populations of microbial degraders, as in the stimulation of atrazine degradation by hornwort.
Phytovolatilization
Phytovolatilization is the removal of substances from soil or water with release into the air, sometimes as a result of phytotransformation to more volatile and/or less polluting substances. In this process, contaminants are taken up by the plant and through transpiration, evaporate into the atmosphere. This is the most studied form of phytovolatilization, where volatilization occurs at the stem and leaves of the plant, however indirect phytovolatilization occurs when contaminants are volatilized from the root zone. Selenium (Se) and Mercury (Hg) are often removed from soil through phytovolatilization. Poplar trees are one of the most successful plants for removing VOCs through this process due to its high transpiration rate.
Rhizofiltration
Rhizofiltration is a process that filters water through a mass of roots to remove toxic substances or excess nutrients. The pollutants remain absorbed in or adsorbed to the roots. This process is often used to clean up contaminated groundwater through planting directly in the contaminated site or through removing the contaminated water and providing it to these plants in an off-site location. In either case though, typically plants are first grown in a greenhouse under precise conditions.
Biological hydraulic containment
Biological hydraulic containment occurs when some plants, like poplars, draw water upwards through the soil into the roots and out through the plant, which decreases the movement of soluble contaminants downwards, deeper into the site and into the groundwater.
Phytodesalination
Phytodesalination uses halophytes (plants adapted to saline soil) to extract salt from the soil to improve its fertility.
Role of genetics
Breeding programs and genetic engineering are powerful methods for enhancing natural phytoremediation capabilities, or for introducing new capabilities into plants. Genes for phytoremediation may originate from a micro-organism or may be transferred from one plant to another variety better adapted to the environmental conditions at the cleanup site. For example, genes encoding a nitroreductase from a bacterium were inserted into tobacco and showed faster removal of TNT and enhanced resistance to the toxic effects of TNT.
Researchers have also discovered a mechanism in plants that allows them to grow even when the pollution concentration in the soil is lethal for non-treated plants. Some natural, biodegradable compounds, such as exogenous polyamines, allow the plants to tolerate concentrations of pollutants 500 times higher than untreated plants, and to absorb more pollutants.
Hyperaccumulators and biotic interactions
A plant is said to be a hyperaccumulator if it can concentrate the pollutants above a minimum threshold concentration which varies according to the pollutant involved (for example: more than 1,000 mg/kg of dry weight for nickel, copper, cobalt, chromium or lead; or more than 10,000 mg/kg for zinc or manganese). This capacity for accumulation is due to hypertolerance, or phytotolerance: the result of adaptative evolution from the plants to hostile environments through many generations. A number of interactions may be affected by metal hyperaccumulation, including protection, interferences with neighbour plants of different species, mutualism (including mycorrhizae, pollen and seed dispersal), commensalism, and biofilm.
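The threshold rule just described can be written as a small lookup; in the sketch below the thresholds are the figures quoted in this paragraph (mg of metal per kg of dry plant weight), and the sample measurements are hypothetical.

```python
# A minimal sketch of the hyperaccumulator threshold rule described above.
HYPERACCUMULATION_THRESHOLDS_MG_PER_KG = {
    "nickel": 1_000, "copper": 1_000, "cobalt": 1_000, "chromium": 1_000, "lead": 1_000,
    "zinc": 10_000, "manganese": 10_000,
}

def is_hyperaccumulator(metal: str, dry_weight_concentration_mg_per_kg: float) -> bool:
    """True if the measured concentration in dry plant tissue exceeds the metal's threshold."""
    return dry_weight_concentration_mg_per_kg > HYPERACCUMULATION_THRESHOLDS_MG_PER_KG[metal]

print(is_hyperaccumulator("nickel", 2_500))  # True  (hypothetical measurement)
print(is_hyperaccumulator("zinc", 4_000))    # False (zinc threshold is 10,000 mg/kg)
```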
Tables of hyperaccumulators
Hyperaccumulators table – 1 : Al, Ag, As, Be, Cr, Cu, Mn, Hg, Mo, Naphthalene, Pb, Pd, Pt, Se, Zn
Hyperaccumulators table – 2 : Nickel
Hyperaccumulators table – 3 : Radionuclides (Cd, Cs, Co, Pu, Ra, Sr, U), Hydrocarbons, Organic Solvents.
Phytoscreening
As plants can translocate and accumulate particular types of contaminants, plants can be used as biosensors of subsurface contamination, thereby allowing investigators to delineate contaminant plumes quickly. Chlorinated solvents, such as trichloroethylene, have been observed in tree trunks at concentrations related to groundwater concentrations. To ease field implementation of phytoscreening, standard methods have been developed to extract a section of the tree trunk for later laboratory analysis, often by using an increment borer. Phytoscreening may lead to more optimized site investigations and reduce contaminated site cleanup costs.
See also
Bioaugmentation
Biodegradation
Bioremediation
Constructed wetland
De Ceuvel
Mycorrhizal bioremediation
Mycoremediation
Phytotreatment
References
Bibliography
"Phytoremediation Website" — Includes reviews, conference announcements, lists of companies doing phytoremediation, and bibliographies.
"An Overview of Phytoremediation of Lead and Mercury" June 6 2000. The Hazardous Waste Clean-Up Information Web Site.
"Enhanced phytoextraction of arsenic from contaminated soil using sunflower" September 22 2004. U.S. Environmental Protection Agency.
"Phytoextraction", February 2000. Brookhaven National Laboratory 2000.
"Phytoextraction of Metals from Contaminated Soil" April 18, 2001. M.M. Lasat
July 2002. Donald Bren School of Environment Science & Management.
"Phytoremediation" October 1997. Department of Civil Environmental Engineering.
"Phytoremediation" June 2001, Todd Zynda.
"Phytoremediation of Lead in Residential Soils in Dorchester, MA" May, 2002. Amy Donovan Palmer, Boston Public Health Commission.
"Technology Profile: Phytoextraction" 1997. Environmental Business Association.
"Ancona V, Barra Caracciolo A, Campanale C, De Caprariis B, Grenni P, Uricchio VF, Borello D, 2019. Gasification Treatment of Poplar Biomass Produced in a Contaminated Area Restored using Plant Assisted Bioremediation. Journal of Environmental Management"
External links
Missouri Botanical Garden (host): Phytoremediation website — Review Articles, Conferences, Phytoremediation Links, Research Sponsors, Books and Journals, and Recent Research.
International Journal of Phytoremediation — devoted to the publication of current laboratory and field research describing the use of plant systems to remediate contaminated environments.
Using Plants To Clean Up Soils — from Agricultural Research magazine
New Alchemy Institute — co-founded by John Todd (Canadian biologist)
Bioremediation
Environmental soil science
Environmental engineering
Environmental terminology
Pollution control technologies
Conservation projects
Ecological restoration
Soil contamination
Biotechnology
Sustainable technologies | Phytoremediation | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 4,323 | [
"Ecological restoration",
"Chemical engineering",
"Phytoremediation plants",
"Environmental chemistry",
"Pollution control technologies",
"Biotechnology",
"Biodegradation",
"Ecological techniques",
"Civil engineering",
"Soil contamination",
"nan",
"Environmental engineering",
"Bioremediation... |
1,069,434 | https://en.wikipedia.org/wiki/Maximum%20takeoff%20weight | The maximum takeoff weight (MTOW) or maximum gross takeoff weight (MGTOW) or maximum takeoff mass (MTOM) of an aircraft, also known as the maximum structural takeoff weight or maximum structural takeoff mass, is the maximum weight at which the pilot is allowed to attempt to take off, due to structural or other limits. The analogous term for rockets is gross lift-off mass, or GLOW. MTOW is usually specified in units of kilograms or pounds.
MTOW is the heaviest weight at which the aircraft has been shown to meet all the airworthiness requirements applicable to it. It refers to the maximum permissible aircraft weight at the start of the takeoff run. MTOW of an aircraft is fixed and does not vary with altitude, air temperature, or the length of the runway to be used for takeoff or landing.
Maximum permissible takeoff weight or "regulated takeoff weight", varies according to flap setting, altitude, air temperature, length of runway and other factors. It is different from one takeoff to the next, but can never be higher than the MTOW.
Certification standards
Certification standards applicable to the airworthiness of an aircraft contain many requirements. Some of these requirements can only be met by specifying a maximum weight for the aircraft, and demonstrating that the aircraft can meet the requirement at all weights up to, and including, the specified maximum. This limit is typically driven by structural requirements – to ensure the aircraft structure is capable of withstanding all the loads likely to be imposed on it during the takeoff, and occasionally by the maximum flight weight.
Multiple MTOW
It is possible to have an aircraft certified with a reduced MTOW, lower than the structural maximum, to take advantage of lower MTOW-based fees; insurance premiums, landing fees and air traffic control fees are often based on MTOW. This is considered a permanent modification.
Alternatively, holders of an Air Operator Certificate (AOC) may vary the Maximum Declared Take-Off Weight (MDTOW) for their aircraft. They can subscribe to a scheme, and then vary the weight for each aircraft without further charge.
An aircraft can have its MTOW increased by reinforcement using additional or stronger materials. For example, the Airbus A330 242-tonne MTOW variant / A330neo uses scandium–aluminium alloy (Scalmalloy) to avoid an empty weight increase.
Maximum permissible takeoff weight or maximum allowed takeoff weight
In many circumstances an aircraft may not be permitted to take off at its MTOW. In these circumstances the maximum weight permitted for takeoff will be determined taking account of the following:
Wing flap setting. See the Spanair Flight 5022 crash
Airfield altitude (height above sea-level) – This affects air pressure which affects maximum engine power or thrust.
Air temperature – This affects air density which affects maximum engine power or thrust.
Length of runway – A short runway means the aircraft has less distance to accelerate to takeoff speed. The length for computation of maximum permitted takeoff weight may be adjusted if the runway has clearways and/or stopways.
Runway wind component – The best condition is a strong headwind straight along the runway. The worst condition is a tailwind. If there is a crosswind it is the wind component along the runway which must be taken into account.
Condition of runway – The best runway for taking off is a dry, paved runway. An unpaved runway or one with traces of snow will provide more rolling friction which will cause the airplane to accelerate more slowly. See the Munich air disaster.
Obstacles – An airplane must be able to take off and gain enough height to clear all obstacles and terrain beyond the end of the runway.
The maximum weight at which a takeoff may be attempted, taking into account the above factors, is called the maximum permissible takeoff weight, maximum allowed takeoff weight or regulated takeoff weight.
Field Limited Weight
The Field Limited Weight is the lowest of the:
Take-Off Distance Limited Weight
Engine-Out accelerate-stop distance limited weight
Engine-Out Take-Off Distance Limited Weight
Runway Limited Weight
The Runway Limited Weight is the lowest of the:
Field Limited Weight
VMCG limited weight
Tyre speed limited weight
Brake energy limited weight
Regulated Take-Off Weight
The Regulated Take-Off Weight is the lowest of the:
Runway limited weight
Obstacle limited weight
Climb limited weight
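The cascade of limits above amounts to taking the minimum of the applicable weights at each level, capped by the structural MTOW. The following sketch illustrates this; every limit value in it is hypothetical and is not taken from any flight manual or certification document.

```python
# A minimal sketch of the limit hierarchy described above.
# All weight limits below are hypothetical example values in kilograms.

def lowest(*limits_kg: float) -> float:
    """Return the most restrictive (lowest) of the given weight limits."""
    return min(limits_kg)

field_limited = lowest(
    79_500,  # take-off distance limited weight
    78_200,  # engine-out accelerate-stop distance limited weight
    80_100,  # engine-out take-off distance limited weight
)
runway_limited = lowest(
    field_limited,
    81_000,  # VMCG limited weight
    82_500,  # tyre speed limited weight
    80_000,  # brake energy limited weight
)
regulated_tow = lowest(
    runway_limited,
    77_900,  # obstacle limited weight
    78_500,  # climb limited weight
)

mtow = 79_000  # structural maximum; the result may never exceed it
allowed_takeoff_weight = min(regulated_tow, mtow)
print(allowed_takeoff_weight)  # 77900
```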
See also
ICAO recommendations on use of the International System of Units
Aircraft gross weight
List of airliners by maximum takeoff weight
Maximum zero-fuel weight
Operating empty weight
Wake turbulence category
References
External links
CAA data terms definition
Aircraft weight measurements | Maximum takeoff weight | [
"Physics",
"Engineering"
] | 909 | [
"Aircraft weight measurements",
"Mass",
"Matter",
"Aerospace engineering"
] |
12,131,888 | https://en.wikipedia.org/wiki/Bloch%20oscillation | Bloch oscillation is a phenomenon from solid state physics. It describes the oscillation of a particle (e.g. an electron) confined in a periodic potential when a constant force is acting on it.
It was first pointed out by Felix Bloch and Clarence Zener while studying the electrical properties of crystals. In particular, they predicted that the motion of electrons in a perfect crystal under the action of a constant electric field would be oscillatory instead of uniform. While in natural crystals this phenomenon is extremely hard to observe due to the scattering of electrons by lattice defects, it has been observed in semiconductor superlattices and in different physical systems such as cold atoms in an optical potential and ultrasmall Josephson junctions.
Derivation
The one-dimensional equation of motion for an electron with wave vector $k$ in a constant electric field $E$ is:
$$\hbar \frac{dk}{dt} = -eE,$$
which has the solution
$$k(t) = k(0) - \frac{eE}{\hbar} t.$$
The group velocity $v$ of the electron is given by
$$v(k) = \frac{1}{\hbar} \frac{d\varepsilon(k)}{dk},$$
where $\varepsilon(k)$ denotes the dispersion relation for the given energy band.
Suppose that the latter has the (tight-binding) form
$$\varepsilon(k) = A \cos(ak),$$
where $a$ is the lattice parameter and $A$ is a constant. Then $v(k)$ is given by
$$v(k) = -\frac{Aa}{\hbar} \sin(ak),$$
and the electron position can be computed as a function of time:
$$x(t) = x(0) + \int_0^t v\bigl(k(t')\bigr)\,dt' = x(0) - \frac{A}{eE}\Bigl[\cos\bigl(ak(t)\bigr) - \cos\bigl(ak(0)\bigr)\Bigr].$$
This shows that the electron oscillates in real space. The angular frequency of the oscillations is given by $\omega_B = aeE/\hbar$.
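To give a sense of scale, the following numerical sketch evaluates this Bloch angular frequency for a semiconductor superlattice; the period and field strength used are illustrative assumptions, not values taken from a specific experiment.

```python
# A rough numerical sketch of the Bloch angular frequency omega_B = a*e*E/hbar.
# The superlattice period and electric field below are assumed, illustrative values.

ELEMENTARY_CHARGE = 1.602176634e-19   # C
HBAR = 1.054571817e-34                # J*s
PI = 3.141592653589793

def bloch_angular_frequency(lattice_period_m: float, field_v_per_m: float) -> float:
    """Angular frequency of Bloch oscillations for lattice period a and field E."""
    return lattice_period_m * ELEMENTARY_CHARGE * field_v_per_m / HBAR

a = 10e-9    # assumed superlattice period of 10 nm
E = 1.0e6    # assumed field of 10 kV/cm
omega_b = bloch_angular_frequency(a, E)
print(f"omega_B ~ {omega_b:.2e} rad/s, f ~ {omega_b / (2 * PI):.2e} Hz")
# -> roughly 1.5e13 rad/s, i.e. a few terahertz
```

With such parameters the oscillation frequency lies in the terahertz range, consistent with the terahertz-emission experiments mentioned in the next section.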
Discovery and experimental realizations
Bloch oscillations were predicted by Nobel laureate Felix Bloch in 1929. However, they were not experimentally observed for a long time, because in natural solid-state bodies the Bloch frequency $\omega_B$ is (even with very high electric field strengths) not large enough to allow full oscillations of the charge carriers within their scattering and tunneling times, owing to the relatively small lattice periods. Developments in semiconductor technology have led to the fabrication of artificial structures with superlattice periods that are sufficiently large. The oscillation period in these structures is shorter than the scattering time of the electrons, so more oscillations can be observed in the time window before scattering occurs. For the first time, the experimental observation of Bloch oscillations in such superlattices at very low temperatures was shown by Jochen Feldmann and Karl Leo in 1992. Other realizations were
the observation of coherent Terahertz radiation of Bloch oscillations by Hartmut Roskos et al. in 1993
the observation of Bloch oscillations at room temperature by Thomas Dekorsy et al. in 1995
the observation of Bloch oscillations in the absence of a lattice
the observation of Bloch oscillations in the classical system of macroscopic pendula
See also
Super Bloch oscillations
References
Oscillation
Condensed matter physics | Bloch oscillation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 556 | [
"Phases of matter",
"Materials science",
"Mechanics",
"Condensed matter physics",
"Oscillation",
"Matter"
] |
12,133,394 | https://en.wikipedia.org/wiki/Testicular%20receptor | The testicular receptor proteins are members of the nuclear receptor family of intracellular transcription factors. There are two forms of the receptor, TR2 and TR4, each encode by a separate gene ( and respectively).
References
External links
Intracellular receptors
Transcription factors | Testicular receptor | [
"Chemistry",
"Biology"
] | 54 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
12,134,138 | https://en.wikipedia.org/wiki/Nuclear%20receptor%204A1 | The nuclear receptor 4A1 (NR4A1 for "nuclear receptor subfamily 4 group A member 1") also known as Nur77, TR3, and NGFI-B is a protein that in humans is encoded by the NR4A1 gene.
Nuclear receptor 4A1 (NR4A1) is a member of the NR4A nuclear receptor family of intracellular transcription factors. NR4A1 is involved in cell cycle mediation, inflammation and apoptosis.
Nuclear receptor 4A1 plays a key role in mediating inflammatory responses in macrophages. In addition, subcellular localization of the NR4A1 protein appears to play a key role in the survival and death of cells.
Expression is inducible by phytohemagglutinin in human lymphocytes and by serum stimulation of arrested fibroblasts. Translocation of the protein from the nucleus to mitochondria induces apoptosis. Multiple alternatively spliced variants, encoding the same protein, have been identified.
Structure
The NR4A1 gene contains seven exons. An amino-terminal transactivation domain is encoded in exon 2, a DNA-binding domain in exons 3 and 4, and a dimerisation and ligand-binding domain in exons 5 to 7.
The protein has an atypical ligand-binding domain that is unlike the classical ligand-binding domain in most nuclear receptors. The classical domain contains a ligand-receiving pocket and co-activator site, both of which are lacking in the NR4A family. Whereas most nuclear receptors have a hydrophobic surface that results in a cleft, NR4A1 has a hydrophilic surface.
Cofactors interact with Nuclear receptor 4A1 at a hydrophobic region between helices 11 and 12 to modulate transcription.
Function
Along with the two other NR4A family members, NR4A1 is expressed in macrophages following inflammatory stimuli. This process is mediated by the NF-κB (nuclear factor-kappa B) complex, a ubiquitous transcription factor involved in cellular response to stress.
Nuclear receptor 4A1 can be induced by many physiological and physical stimuli. These include physiological stimuli such as "fatty acids, stress, prostaglandins, growth factors, calcium, inflammatory cytokines, peptide hormones, phorbol esters, and neurotransmitters" and physical stimuli including "magnetic fields, mechanical agitation (causing fluid shear stress), and membrane depolarization". No endogenous ligands that bind to NR4A1 have yet been identified, so modulation occurs at the level of protein expression and posttranslational modification. Besides these roles, NR4A1 can mediate T cell function: the transcription factor NR4A1 is stably expressed at high levels in tolerant T cells. Overexpression of Nuclear receptor 4A1 inhibits effector T cell differentiation, whereas deletion of NR4A1 overcomes T cell tolerance and exaggerates effector function, as well as enhancing immunity against tumors and chronic viral infection. Mechanistically, NR4A1 is preferentially recruited to binding sites of the transcription factor AP-1, where it represses effector gene expression by inhibiting AP-1 function. NR4A1 binding also promotes acetylation of histone 3 at lysine 27 (H3K27ac), leading to activation of tolerance-related genes.
There are several ligands that directly bind NR4A1, including cytosporone B, celastrol, and certain polyunsaturated fatty acids. These NR4A1 ligands bind at various NR4A1 sites and show activities that are dependent on ligand structure and cell context. These NR4A1 ligands may have relevance to treatment of cancer, metabolic disease, inflammation, and endometriosis. NR4A1 may play a role in Drug-induced gingival overgrowth associated with exposure to phenytoin, nifedipine, and cyclosporine A.
Biochemistry
Nuclear receptor 4A1 binds as a monomer or homodimer to response element NBRE and as a homodimer to NurRE. It is also capable of heterodimerising with COUP-TF (an orphan nuclear receptor) and retinoid X receptor (RXR) in mediating transcription in response to retinoids.
The binding sites on the response elements for NR4A1, which are common to the two other members of the NR4A family, are listed below (a short motif-scanning sketch follows the list):
NBRE - 5’-A/TAAAGGTCA,
NurRE - a AAAT(G/A)(C/T)CA repeat,
RXR - DX, a motif.
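As an illustration of how such a consensus sequence can be used in practice, the sketch below scans a DNA string for the NBRE consensus (5'-A/TAAAGGTCA) with a regular expression; the example promoter sequence is invented for demonstration and the search covers only the given strand.

```python
# A minimal sketch of scanning a sequence for the NBRE consensus quoted above.
import re

NBRE_PATTERN = re.compile(r"[AT]AAAGGTCA")

def find_nbre_sites(sequence: str) -> list[int]:
    """Return 0-based start positions of NBRE consensus matches on the given strand."""
    return [m.start() for m in NBRE_PATTERN.finditer(sequence.upper())]

example_promoter = "GGCTAAAAGGTCACCGTTTAAAGGTCAGG"   # hypothetical sequence
print(find_nbre_sites(example_promoter))              # [4, 18]
```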
Evolution and homology
Nuclear receptor 4A1 has the systematic HUGO gene symbol NR4A1. It belongs to a group of three closely related orphan receptors, the NR4A family (NR4A). The other two members are Nuclear receptor 4A2 (NR4A2) and Nuclear receptor 4A3 (NR4A3).
Nuclear receptor 4A1 has a high degree of structural similarity with other family members at the DNA-binding domain with 91-95% sequence conservation. The C-terminal ligand-binding domain is conserved to a lesser extent at 60% and the N-terminal AB region is not conserved, differing in each member.
The three members are similar in biochemistry and function. They are immediate early genes activated in a ligand-independent manner that bind at the homologous sites on response elements.
Nuclear receptor 4A1 and the rest of the NR4A family are structurally similar to other nuclear receptor superfamily members, but contain an extra intron. The DNA-binding domain at exons 3 and 4 of the NR4A1 gene is conserved among all members of the nuclear receptor superfamily.
NR4A1 has homologous genes in a range of species including neuronal growth factor-induced clone B in rats, Nur77 in mice and TR3 in humans.
Pathology
Along with 16 other genes, NR4A1 is a signature gene in the metastasis of some primary solid tumours. It is downregulated in this process.
Interactions
Nuclear receptor 4A1 has been shown to interact with:
AKT1,
Bcl-2,
HIF1A,
Nuclear receptor co-repressor 2,
Promyelocytic leukemia protein,
Retinoid X receptor alpha, and
Von Hippel-Lindau tumor suppressor.
References
Further reading
External links
Intracellular receptors
Transcription factors | Nuclear receptor 4A1 | [
"Chemistry",
"Biology"
] | 1,327 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
12,134,485 | https://en.wikipedia.org/wiki/Nuclear%20receptor%204A2 | The nuclear receptor 4A2 (NR4A2) (nuclear receptor subfamily 4 group A member 2) also known as nuclear receptor related 1 protein (NURR1) is a protein that in humans is encoded by the NR4A2 gene. NR4A2 is a member of the nuclear receptor family of intracellular transcription factors.
NR4A2 plays a key role in the maintenance of the dopaminergic system of the brain. Mutations in this gene have been associated with disorders related to dopaminergic dysfunction, including Parkinson's disease and schizophrenia. Misregulation of this gene may be associated with rheumatoid arthritis. Four transcript variants encoding four distinct isoforms have been identified for this gene. Additional alternate splice variants may exist, but their full-length nature has not been determined.
This protein is thought to be critical to development of the dopaminergic phenotype in the midbrain, as mice without NR4A2 are lacking expression of this phenotype. This is further confirmed by studies showing that forced NR4A2 expression in naïve precursor cells leads to complete dopaminergic phenotype gene expression.
While NR4A2 is a key protein in inducing this phenotype, there are other factors required, as expressing NR4A2 in isolation fails to produce it. One of these suggested factors is winged-helix transcription factor 2 (Foxa2). Studies have found these two factors to be within the same region of developing dopaminergic neurons, and both were required to have expression for the dopaminergic phenotype.
Structure
One structural investigation found that NR4A2 does not contain a ligand-binding cavity but rather a patch filled with hydrophobic side chains. Non-polar amino acid residues of NR4A2's co-regulators, SMRT and NCoR, bind to this hydrophobic patch. Analysis of the tertiary structure has shown that the binding surface of the ligand-binding domain is located on the grooves of the 11th and 12th alpha helices. This study also found that three amino acid residues, F574, F592 and L593, are essential structural components of this hydrophobic patch; mutation of any of these three inhibits LBD activity.
Clinical significance
Role in disease
Mutations in NR4A2 have been associated with various disorders, including Parkinson's disease, schizophrenia, manic depression, and autism. De novo gene deletions that affect NR4A2 have been identified in some individuals with intellectual disability and language impairment, some of whom meet DSM-5 criteria for an autism diagnosis.
Inflammation
Research has been conducted on NR4A2’s role in inflammation, and may provide important information in treating disorders caused by dopaminergic neuron disease. Inflammation in the central nervous system can result from activated microglia (macrophage analogs for the central nervous system) and other pro-inflammatory factors, such as bacterial lipopolysaccharide (LPS). LPS binds to toll-like receptors (TLR), which induces inflammatory gene expression by promoting signal-dependent transcription factors. To determine which cells are dopaminergic, experiments measured the enzyme tyrosine hydroxylase (TH), which is needed for dopamine synthesis. It has been shown that NR4A2 protects dopaminergic neurons from LPS-induced inflammation by reducing inflammatory gene expression in microglia and astrocytes. When a short hairpin RNA for NR4A2 was expressed in microglia and astrocytes, these cells produced inflammatory mediators such as TNF-alpha, nitric oxide synthase, and interleukin-1 beta (IL-1β), supporting the conclusion that reduced NR4A2 promotes inflammation and leads to cell death of dopaminergic neurons. NR4A2 interacts with the transcription factor complex NF-κB-p65 on the inflammatory gene promoters. However, NR4A2 is dependent on other factors to be able to participate in these interactions. NR4A2 needs to be sumoylated and its co-regulating factor, glycogen synthase kinase 3, needs to be phosphorylated for these interactions to occur. Sumolyated NR4A2 recruits CoREST, a complex made of several proteins that assembles chromatin remodeling enzymes. The NR4A2/CoREST complex inhibits transcription of inflammatory genes.
Applications
NR4A2 induces tyrosine hydroxylase (TH) expression, which eventually leads to differentiation into dopaminergic neurons. NR4A2 has been demonstrated to induce differentiation in CNS precursor cells in vitro, but they require additional factors to reach full maturity and dopaminergic differentiation. Therefore, NR4A2 modulation may be promising for generation of dopaminergic neurons for Parkinson's disease research, yet implantation of these induced cells as therapeutic treatments has had limited results.
NR4A2 mRNA may be a useful biomarker for Parkinson's disease in combination with inflammatory cytokines.
Knockout studies
Studies have shown that heterozygous knockout mice for the NR4A2 gene demonstrate reduced dopamine release. Initially this was compensated for by a decrease in the rate of dopamine reuptake; however, over time this reuptake could not make up for the reduced amount of dopamine being released. Coupled with the loss of dopamine receptor neurons, this can result in the onset of symptoms for Parkinson's disease.
Interactions
NR4A2 has been shown to interact with:
Beta-catenin,
Pituitary homeobox 3,
Retinoic acid receptor alpha, and
Retinoic acid receptor beta.
References
Further reading
External links
Intracellular receptors
Transcription factors | Nuclear receptor 4A2 | [
"Chemistry",
"Biology"
] | 1,206 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
12,134,677 | https://en.wikipedia.org/wiki/Nuclear%20receptor%204A3 | The nuclear receptor 4A3 (NR4A3) (nuclear receptor subfamily 4, group A, member 3) also known as neuron-derived orphan receptor 1 (NOR1) is a protein that in humans is encoded by the NR4A3 gene. NR4A3 is a member of the nuclear receptor family of intracellular transcription factors.
NR4A3 plays a central regulatory role in cell proliferation, differentiation, mitochondrial respiration, metabolism and apoptosis.
Interactions
NR4A3 has been shown to interact with SIX3.
See also
NUR nuclear receptor family
References
Further reading
External links
Intracellular receptors
Transcription factors | Nuclear receptor 4A3 | [
"Chemistry",
"Biology"
] | 127 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
12,138,221 | https://en.wikipedia.org/wiki/Molecular%20switch | A molecular switch is a molecule that can be reversibly shifted between two or more stable states. The molecules may be shifted between the states in response to environmental stimuli, such as changes in pH, light, temperature, an electric current, microenvironment, or in the presence of ions and other ligands. In some cases, a combination of stimuli is required. The oldest forms of synthetic molecular switches are pH indicators, which display distinct colors as a function of pH. Currently synthetic molecular switches are of interest in the field of nanotechnology for application in molecular computers or responsive drug delivery systems. Molecular switches are also important in biology because many biological functions are based on it, for instance allosteric regulation and vision. They are also one of the simplest examples of molecular machines.
Biological molecular switches
In cellular biology, proteins act as intracellular signaling molecules by activating another protein in a signaling pathway. In order to do this, proteins can switch between active and inactive states, thus acting as molecular switches in response to another signal. For example, phosphorylation of proteins can be used to activate or inactivate proteins. The external signal flipping the molecular switch could be a protein kinase, which adds a phosphate group to the protein, or a protein phosphatase, which removes phosphate groups.
Acidochromic molecular switches
The capacity of some compounds to change colour as a function of pH has been known since the sixteenth century; this effect was known even before the development of acid-base theory. Such compounds are found in a wide range of plants, such as roses, cornflowers, primroses and violets. Robert Boyle was the first person to describe this effect, employing plant juices (in the form of solutions and impregnated paper).
Molecular switches are most commonly used as pH indicators, which are molecules with acidic or basic properties. Their acidic and basic forms have different colors. When an acid or a base is added, the equilibrium between the two forms is displaced.
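The displacement of that equilibrium can be made quantitative with the Henderson-Hasselbalch relation. The sketch below computes the fraction of an indicator present in its basic (differently coloured) form as a function of pH; the pKa used is a generic assumed value rather than that of any particular indicator.

```python
# A minimal sketch of how an indicator's acid/base equilibrium shifts with pH.

def base_fraction(ph: float, pka: float) -> float:
    """Fraction of the indicator in its basic (deprotonated) form at the given pH."""
    ratio = 10 ** (ph - pka)      # [base]/[acid] from the Henderson-Hasselbalch relation
    return ratio / (1.0 + ratio)

PKA = 7.0  # assumed, generic indicator pKa
for ph in (4, 6, 7, 8, 10):
    print(f"pH {ph}: {base_fraction(ph, PKA):.1%} in the basic-colour form")
# Near pH = pKa the two coloured forms coexist; far from it, one form dominates.
```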
Photochromic molecular switches
A widely studied class are photochromic compounds which are able to switch between electronic configurations when irradiated by light of a specific wavelength. Each state has a specific absorption maximum which can then be read out by UV-VIS spectroscopy. Members of this class include azobenzenes, diarylethenes, dithienylethenes, fulgides, stilbenes, spiropyrans and phenoxynaphthacene quinones.
Chiroptical molecular switches are a specific subgroup with photochemical switching taking place between the members of an enantiomeric pair. In these compounds the readout is by circular dichroism rather than by ordinary spectroscopy. Hindered alkenes such as the one depicted below change their helicity (see: planar chirality) in response to irradiation with right- or left-handed circularly polarized light.
Chiroptical molecular switches that show directional motion are considered synthetic molecular motors: When attached to the end of a helical poly (isocyanate) polymer, they can switch the helical sense of the polymer.
Host–guest molecular switches
In host–guest chemistry the bistable states of molecular switches differ in their affinity for guests. Many early examples of such systems are based on crown ether chemistry. The first switchable host is described in 1978 by Desvergne & Bouas-Laurent who create a crown ether via photochemical anthracene dimerization. Although not strictly speaking switchable the compound is able to take up cations after a photochemical trigger and exposure to acetonitrile gives back the open form.
In 1980 Yamashita et al. construct a crown ether already incorporating the anthracene units (an anthracenophane) and also study ion uptake vs photochemistry.
Also in 1980 Shinkai throws out the anthracene unit as photoantenna in favor of an azobenzene moiety and for the first time envisions the existence of molecules with an on-off switch. In this molecule light triggers a trans-cis isomerization of the azo group which results in ring expansion. Thus in the trans form the crown binds preferentially to ammonium, lithium and sodium ions while in the cis form the preference is for potassium and rubidium (both larger ions in same alkali metal group). In the dark the reverse isomerization takes place.
Shinkai employs this devices in actual ion transport mimicking the biochemical action of monensin and nigericin: in a biphasic system ions are taken up triggered by light in one phase and deposited in the other phase in absence of light.
Mechanically-interlocked molecular switches
Some of the most advanced molecular switches are based on mechanically-interlocked molecular architectures where the bistable states differ in the position of the macrocycle. In 1991 Stoddart devises a molecular shuttle based on a rotaxane in which a molecular bead is able to shuttle between two docking stations situated on a molecular thread. Stoddart predicts that when the stations are dissimilar, with each of the stations addressed by a different external stimulus, the shuttle becomes a molecular machine. In 1993 Stoddart is scooped by supramolecular chemistry pioneer Fritz Vögtle, who actually delivers a switchable molecule based not on a rotaxane but on a related catenane.
This compound is based on two ring systems: one ring holds the photoswitchable azobenzene unit and two paraquat docking stations, and the other ring is a polyether with two arene rings with binding affinity for the paraquat units. In this system NMR spectroscopy shows that in the azo trans-form the polyether ring is free to rotate around its partner ring, but when a light trigger activates the cis azo form this rotation mode is stopped.
Kaifer and Stoddart in 1994 modify their molecular shuttle in such a way that an electron-poor tetracationic cyclophane bead now has a choice between two docking stations: one biphenol and one benzidine unit. In solution at room temperature NMR spectroscopy reveals that the bead shuttles at a rate comparable to the NMR timescale; reducing the temperature to 229 K resolves the signals, with 84% of the population favoring the benzidine station. However, on addition of trifluoroacetic acid, the benzidine nitrogen atoms are protonated and the bead is fixed permanently on the biphenol station. The same effect is obtained by electrochemical oxidation (forming the benzidine radical ion) and significantly both processes are reversible.
In 2007 molecular shuttles were utilized in an experimental DRAM circuit. The device consists of 400 bottom silicon nanowire electrodes (16 nanometer (nm) wide at 33 nm intervals) crossed by another 400 titanium top-nanowires with similar dimensions sandwiching a monolayer of a bistable rotaxane depicted below:
Each bit in the device consists of a silicon and a titanium crossbar with around 100 rotaxane molecules filling in the space between them at perpendicular angles. The hydrophilic diethylene glycol stopper on the left (gray) is specifically designed to anchor to the silicon wire (made hydrophilic by phosphorus doping) while the hydrophobic tetraarylmethane stopper on the right does the same to the likewise hydrophobic titanium wire. In the ground state of the switch, the paraquat ring is located around a tetrathiafulvalene unit (in red) but it moves to the dioxynaphthyl unit (in green) when the fulvalene unit is oxidized by application of a current. When the fulvalene is reduced back a metastable high conductance '1' state is formed which relaxes back to the ground state with a chemical half-life of around one hour. The problem of defects is circumvented by adopting a defect-tolerant architecture also found in the Teramac project. In this way a circuit is obtained consisting of 160,000 bits on an area the size of a white blood cell translating into 1011 bits per square centimeter.
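The quoted density can be checked with simple arithmetic from the dimensions given above, as in the following sketch; it assumes one bit per wire crossing and ignores edge effects and peripheral circuitry.

```python
# A quick arithmetic check of the stated bit density of the rotaxane crossbar memory.
wires_per_side = 400
pitch_nm = 33.0                               # nanowire spacing quoted above

bits = wires_per_side * wires_per_side        # one bit per crossing point
side_cm = wires_per_side * pitch_nm * 1e-7    # 1 nm = 1e-7 cm
area_cm2 = side_cm ** 2

print(f"{bits} bits over {area_cm2:.2e} cm^2 -> {bits / area_cm2:.1e} bits/cm^2")
# roughly 1e11 bits per square centimetre, in line with the figure quoted above
```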
References
Further reading
Supramolecular chemistry
Molecular machines | Molecular switch | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 1,697 | [
"Machines",
"Molecular machines",
"Physical systems",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
12,139,197 | https://en.wikipedia.org/wiki/Transfer%20DNA%20binary%20system | A transfer DNA (T-DNA) binary system is a pair of plasmids consisting of a T-DNA binary vector and a vir helper plasmid. The two plasmids are used together (thus binary) to produce genetically modified plants. They are artificial vectors that have been derived from the naturally occurring Ti plasmid found in bacterial species of the genus Agrobacterium, such as A. tumefaciens. The binary vector is a shuttle vector, so-called because it is able to replicate in multiple hosts (e.g. Escherichia coli and Agrobacterium).
Systems in which T-DNA and vir genes are located on separate replicons are called T-DNA binary systems. T-DNA is located on the binary vector (the non-T-DNA region of this vector containing origin(s) of replication that could function both in E. coli and Agrobacterium, and antibiotic resistance genes used to select for the presence of the binary vector in bacteria, became known as vector backbone sequences). The replicon containing the vir genes became known as the vir helper plasmid. The vir helper plasmid is considered disarmed if it does not contain oncogenes that could be transferred to a plant.
Binary system components
T-DNA binary vector
There are several binary vectors that replicate in Agrobacterium and can be used for delivery of T-DNA from Agrobacterium into plant cells. The T-DNA portion of the binary vector is flanked by left and right border sequences and may include a transgene as well as a plant selectable marker. Outside of the T-DNA, the binary vector also contains a bacterial selectable marker and an origin of replication (ori) for bacteria.
Representative series of binary vectors are listed below.
Vir helper plasmid
The vir helper plasmid contains the vir genes that originated from the Ti plasmid of Agrobacterium. These genes code for a series of proteins that cut the binary vector at the left and right border sequences, and facilitate transfer and integration of T-DNA to the plant's cells and genomes, respectively.
Several vir helper plasmids have been reported, and common Agrobacterium strains that include vir helper plasmids are:
EHA101
EHA105
AGL-1
LBA4404
GV2260
Development of T-DNA binary vectors
The pBIN19 vector was developed in the 1980s and is one of the first and most widely used binary vectors. The pGreen vector, which was developed in 2000, is a newer version of the binary vector that allows for a choice of promoters, selectable markers and reporter genes. Another distinguishing feature of pGreen is its large reduction in size (from about 11.7 kbp to 4.6 kbp) relative to pBIN19, which increases its transformation efficiency.
Along with higher transformation efficiency, pGreen has been engineered to ensure transformation integrity. Both pBIN19 and pGreen usually use the same selectable marker, nptII, but pBIN19 has the selectable marker next to the right border, while pGreen has it close to the left border. Because of a polarity difference between the left and right borders, the right border of the T-DNA enters the host plant first. If the selectable marker is near the right border (as is the case with pBIN19) and the transformation process is interrupted, the resulting plant may express the selectable marker but contain no T-DNA, giving a false positive. The pGreen vector has the selectable marker entering the host last (because of its location next to the left border), so any expression of the marker indicates full transgene integration.
The pGreen-based vectors are not autonomous and they will not replicate in Agrobacterium if pSoup is not present. Series of small binary vectors that autonomously replicate in E. coli and Agrobacterium include:
pCB
pLSU
pLX
References
Genetics
Biotechnology
Synthetic biology | Transfer DNA binary system | [
"Engineering",
"Biology"
] | 866 | [
"Synthetic biology",
"Biological engineering",
"Genetics",
"Biotechnology",
"Bioinformatics",
"Molecular genetics",
"nan"
] |
12,139,474 | https://en.wikipedia.org/wiki/Histone-modifying%20enzymes | Histone-modifying enzymes are enzymes involved in the modification of histone substrates after protein translation and affect cellular processes including gene expression. To safely store the eukaryotic genome, DNA is wrapped around four core histone proteins (H3, H4, H2A, H2B), which then join to form nucleosomes. These nucleosomes further fold together into highly condensed chromatin, which renders the organism's genetic material far less accessible to the factors required for gene transcription, DNA replication, recombination and repair. Subsequently, eukaryotic organisms have developed intricate mechanisms to overcome this repressive barrier imposed by the chromatin through histone modification, a type of post-translational modification which typically involves covalently attaching certain groups to histone residues. Once added to the histone, these groups (directly or indirectly) elicit either a loose and open histone conformation, euchromatin, or a tight and closed histone conformation, heterochromatin. Euchromatin marks active transcription and gene expression, as the light packing of histones in this way allows entry for proteins involved in the transcription process. As such, the tightly packed heterochromatin marks the absence of current gene expression.
While there exist several distinct post-translational modifications for histones, the four most common histone modifications include acetylation, methylation, phosphorylation and ubiquitination. Histone-modifying enzymes that induce a modification (e.g., add a functional group) are dubbed writers, while enzymes that revert modifications are dubbed erasers. Furthermore, there are many uncommon histone modifications including O-GlcNAcylation, sumoylation, ADP-ribosylation, citrullination and proline isomerization. For a detailed example of histone modifications in transcription regulation see RNA polymerase control by chromatin structure and table "Examples of histone modifications in transcriptional regulation".
Common histone modifications
The four common histone modifications and their respective writer and eraser enzymes are shown in the table below.
Acetylation
Histone acetylation, or the addition of an acetyl group to histones, is facilitated by histone acetyltransferases (HATs) which target lysine (K) residues on the N-terminal histone tail. Histone deacetylases (HDACs) facilitate the removal of such groups. The positive charge on a histone is always neutralized upon acetylation, creating euchromatin which increases transcription and expression of the target gene. Lysine residues 9, 14, 18, and 23 of core histone H3 and residues 5, 8, 12, and 16 of H4 are all targeted for acetylation.
Methylation
Histone methylation involves adding methyl groups to histones, primarily on lysine (K) or arginine (R) residues. The addition and removal of methyl groups is carried out by histone methyltransferases (HMTs) and histone demethylases (KDMs) respectively. Histone methylation is responsible for either activation or repression of genes, depending on the target site, and plays an important role in development and learning.
Phosphorylation
Histone phosphorylation occurs when a phosphoryl group is added to a histone. Protein kinases (PTKs) catalyze the phosphorylation of histones and protein phosphatases (PPs) catalyze the dephosphorylation of histones. Much like histone acetylation, histone phosphorylation neutralizes the positive charge on histones which induces euchromatin and increases gene expression. Histone phosphorylation occurs on serine (S), threonine (T) and tyrosine (Y) amino-acid residues mainly in the N-terminal histone tails.
Additionally, the phosphorylation of histones has been found to play a role in DNA repair and chromatin condensation during cell division. One such example is the phosphorylation of S139 on H2AX histones, which is needed to repair double-stranded breaks in the DNA.
Ubiquitination
Ubiquitination is a post-translational modification involving the addition of ubiquitin proteins onto target proteins. Histones are often ubiquitinated with one ubiquitin molecule (monoubiquitination), but can also be modified with ubiquitin chains (polyubiquitination), both of which can have variable effects on gene transcription. Ubiquitin ligases add these ubiquitin proteins while deubiquitinating enzymes (DUBs) remove these groups. Ubiquitination of the H2A core histone typically represses gene expression as it prevents methylation at H3K4, while H2B ubiquitination is necessary for H3K4 methylation and can lead to both gene activation or repression. Additionally, histone ubiquitination is related to genomic maintenance, as ubiquitination of histone H2AX is involved in DNA damage recognition of DNA double-strand breaks.
Uncommon histone modifications
Additional infrequent histone modifications and their effects are listed in the table below.
O-GlcNAcylation
The presence of O-GlcNAcylation (O-GlcNAc) on serine (S) and threonine (T) histone residues is known to mediate other post-transcriptional histone modifications. The addition and removal of GlcNAc groups are performed by O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA) respectively. While our understanding of these processes is limited, GlcNAcylation of S112 on core histone H2B has been found to promote monoubiquitination of K120. Similarly, OGT associates with the HCF1 complex which interacts with BAP1 to mediate deubiquitination of H2A. OGT is also involved in the trimethylation of H3K27 and creates a co-repressor complex to promote histone deacetylation upon binding to SIN3A.
Sumoylation
SUMOylation refers to the addition of Small Ubiquitin-like Modifier (SUMO) proteins onto histones. SUMOylation involves covalent attachments between SUMO proteins and lysine (K) residues on histones and is carried out in three main steps by three respective enzymes: activation via SUMO E1, conjugation via SUMO E2, and ligation via SUMO E3. In humans, SUMO E1 has been identified as the heterodimer SAE1/SAE2, SUMO E2 is known as UBE2I, and the SUMO E3 role may be a multi-protein complex played by a handful of different enzymes.
SUMOylation affects the chromatin status (looseness) of the histone and influences the assembly of transcription factors on genetic promoters, leading to either transcriptional repression or activation depending on the substrate. SUMOylation also plays a role in the major DNA repair pathways of base excision repair, nucleotide excision repair, non-homologous end joining and homologous recombination repair. Additionally, SUMOylation facilitates error prone translesion synthesis.
ADP-ribosylation
ADP-ribosylation (ADPr) defines the addition of one or more adenosine diphosphate ribose (ADP-ribose) groups to a protein. ADPr is an important mechanism in gene regulation that affects chromatin organization, the binding of transcription factors, and mRNA processing through poly-ADP ribose polymerase (PARP) enzymes. There are multiple types of PARP proteins, but the subclass of DNA-dependent PARP proteins including PARP-1, PARP-2, and PARP-3 interact with the histone. The PARP-1 enzyme is the most prominent of these three proteins in the context of gene regulation and interacts with all five histone proteins.
Like PARPs 2 and 3, the catalytic activity of PARP-1 is activated by discontinuous DNA fragments, DNA fragments with single-stranded breaks. PARP-1 binds histones near the axis where DNA enters and exits the nucleosome and additionally interacts with numerous chromatin-associated proteins which allow for indirect association with chromatin. Upon binding to chromatin, PARP-1 produces repressive histone marks that can alter the conformational state of histones and inhibit gene expression so that DNA repair can take place. Other avenues of transcription regulation by PARP-1 include acting as a transcription coregulator, regulation of RNA and modulation of DNA methylation via inhibiting the DNA methyltransferase Dnmt1.
Citrullination
Citrullination, or deimination, is the process by which the amino acid arginine (R) is converted into citrulline. Protein arginine deiminases (PADs) replace the ketimine group of arginine with a ketone group to form citrulline. PAD4 is the deiminase involved in histone modification and converts arginine to citrulline on histones H3 and H4; because arginine methylation on these histones is important for transcriptional activation, citrullination of certain residues can cause the eventual loss of methylation, leading to decreased gene transcription. Specific citrullination of the H3R2, H3R8, H3R17, and H3R26 residues has been identified in breast cancer cells. As of research conducted in 2019, this process is thought to be irreversible.
Proline Isomerization
Isomerization involves transforming a molecule so that it adopts a different structural conformation; proline isomerization plays an integral role in the modification of histone tails. Fpr4 is the prolyl isomerase enzyme (PPIase) which converts the amino acid proline (P) on histones between the cis and trans conformations. While Fpr4 has catalytic activity on a number of prolines on the N-terminal region of core histone H3 (P16, P30 and P38), it most readily binds to P38.
H3P38 lies near the lysine (K) residue H3K36, and changes in P38 can affect the methylation status of K36. The two possible P38 isomers available, cis and trans, cause differential effects that are opposite of each other. The cis position induces compact histones and decreases the ability of proteins to bind to the DNA, thus preventing methylation of K36 and decreasing gene transcription. Conversely, the trans position of P38 promotes a more open histone conformation, allowing for K36 methylation and leading to an increase gene transcription.
Role in research
Cancer
Alterations in the functions of histone-modifying enzymes deregulate the control of chromatin-based processes, ultimately leading to oncogenic transformation and cancer. Both DNA methylation and histone modifications show altered patterns of distribution in cancer cells. These epigenetic alterations may occur at different stages of tumourigenesis and thus contribute to the development and/or progression of cancer.
Other Research
Vitamin B12 deficiency in mice has been shown to alter the expression of histone-modifying enzymes in the brain, leading to behavioral changes and epigenetic reprogramming. Evidence also shows the importance of HDACs in the regulation of lipid metabolism and other metabolic pathways that play a role in the pathophysiology of metabolic disorders.
See also
DNA
Histone
Nucleosome
Chromatin
Euchromatin
Heterochromatin
Histone acetylation and deacetylation
Histone methylation
Protein phosphorylation
Ubiquitin
O-GlcNAc
Sumoylation
ADP-ribosylation
Citrullination
Proline isomerization in epigenetics
References
Epigenetics
Proteins | Histone-modifying enzymes | [
"Chemistry"
] | 2,554 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
12,141,074 | https://en.wikipedia.org/wiki/Harmonic%20measure | In mathematics, especially potential theory, harmonic measure is a concept related to the theory of harmonic functions that arises from the solution of the classical Dirichlet problem. In probability theory, the harmonic measure of a subset of the boundary of a bounded domain in Euclidean space , is the probability that a Brownian motion started inside a domain hits that subset of the boundary. More generally, harmonic measure of an Itō diffusion X describes the distribution of X as it hits the boundary of D. In the complex plane, harmonic measure can be used to estimate the modulus of an analytic function inside a domain D given bounds on the modulus on the boundary of the domain; a special case of this principle is Hadamard's three-circle theorem. On simply connected planar domains, there is a close connection between harmonic measure and the theory of conformal maps.
The term harmonic measure was introduced by Rolf Nevanlinna in 1928 for planar domains, although Nevanlinna notes the idea appeared implicitly in earlier work by Johansson, F. Riesz, M. Riesz, Carleman, Ostrowski and Julia (original order cited). The connection between harmonic measure and Brownian motion was first identified by Kakutani ten years later in 1944.
Definition
Let D be a bounded, open domain in n-dimensional Euclidean space Rn, n ≥ 2, and let ∂D denote the boundary of D. Any continuous function f : ∂D → R determines a unique harmonic function Hf that solves the Dirichlet problem

ΔHf = 0 in D,  Hf = f on ∂D.
If a point x ∈ D is fixed, by the Riesz–Markov–Kakutani representation theorem and the maximum principle Hf(x) determines a probability measure ω(x, D) on ∂D by

Hf(x) = ∫∂D f(y) dω(x, D)(y).
The measure ω(x, D) is called the harmonic measure (of the domain D with pole at x).
Properties
For any Borel subset E of ∂D, the harmonic measure ω(x, D)(E) is equal to the value at x of the solution to the Dirichlet problem with boundary data equal to the indicator function of E.
For fixed D and E ⊆ ∂D, ω(x, D)(E) is a harmonic function of x ∈ D and

0 ≤ ω(x, D)(E) ≤ 1.
Hence, for each x and D, ω(x, D) is a probability measure on ∂D.
If ω(x, D)(E) = 0 at even a single point x of D, then x ↦ ω(x, D)(E) is identically zero on D, in which case E is said to be a set of harmonic measure zero. This is a consequence of Harnack's inequality.
Since explicit formulas for harmonic measure are not typically available, we are interested in determining conditions which guarantee a set has harmonic measure zero.
F. and M. Riesz Theorem: If Ω is a simply connected planar domain bounded by a rectifiable curve (i.e. if H^1(∂Ω) < ∞), then harmonic measure is mutually absolutely continuous with respect to arc length: for all E ⊆ ∂Ω, ω(X, Ω)(E) = 0 if and only if H^1(E) = 0.
Makarov's theorem: Let D be a simply connected planar domain. If E ⊆ ∂D and H^s(E) = 0 for some s < 1, then ω(X, D)(E) = 0. Moreover, harmonic measure on D is mutually singular with respect to t-dimensional Hausdorff measure for all t > 1.
Dahlberg's theorem: If D ⊂ Rn is a bounded Lipschitz domain, then harmonic measure and (n − 1)-dimensional Hausdorff measure are mutually absolutely continuous: for all E ⊆ ∂D, ω(X, D)(E) = 0 if and only if H^{n−1}(E) = 0.
Examples
If D is the unit disk, then the harmonic measure of D with pole at the origin is length measure on the unit circle normalized to be a probability, i.e. ω(0, D)(E) = |E|/(2π) for all E ⊆ ∂D, where |E| denotes the length of E.
If D is the unit disk and X ∈ D, then for all E ⊆ ∂D

ω(X, D)(E) = (1/2π) ∫_E (1 − |X|²)/|X − e^{iθ}|² dθ,

where dθ denotes length measure on the unit circle. The Radon–Nikodym derivative of ω(X, D) with respect to length measure is called the Poisson kernel.
More generally, if n ≥ 2 and D is the n-dimensional unit ball, then the harmonic measure with pole at X ∈ D is

ω(X, D)(E) = (1/σ(∂D)) ∫_E (1 − |X|²)/|X − ζ|^n dσ(ζ)

for all E ⊆ ∂D, where σ denotes surface measure ((n − 1)-dimensional Hausdorff measure) on the unit sphere and σ(∂D) is its total mass.
If D is a simply connected planar domain bounded by a Jordan curve and X ∈ D, then ω(X, D)(E) = ω(0, 𝔻)(f⁻¹(E)) for all E ⊆ ∂D, where f is a Riemann map from the unit disk 𝔻 onto D which sends the origin to X, i.e. f(0) = X. See Carathéodory's theorem.
If D is the domain bounded by the Koch snowflake, then there exists a subset E of the Koch snowflake curve such that E has zero length (H^1(E) = 0) and full harmonic measure (ω(X, D)(E) = 1).
The harmonic measure of a diffusion
Consider an Rn-valued Itō diffusion X starting at some point x in the interior of a domain D, with law Px. Suppose that one wishes to know the distribution of the points at which X exits D. For example, canonical Brownian motion B on the real line starting at 0 exits the interval (−1, +1) at −1 with probability 1/2 and at +1 with probability 1/2, so B stopped at the first exit time of (−1, +1) is uniformly distributed on the set {−1, +1}.
In general, if G is compactly embedded within Rn, then the harmonic measure (or hitting distribution) of X on the boundary ∂G of G is the measure μGx defined by

μGx(F) = Px[X at the first exit time from G lies in F]

for x ∈ G and F ⊆ ∂G.
Returning to the earlier example of Brownian motion, one can show that if B is a Brownian motion in Rn starting at x ∈ Rn and D ⊂ Rn is an open ball centred on x, then the harmonic measure of B on ∂D is invariant under all rotations of D about x and coincides with the normalized surface measure on ∂D.
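The hitting-distribution interpretation lends itself to direct simulation. The following sketch (illustrative only and not part of the original formulation; the time step, sample size and starting point are arbitrary choices made here) estimates, by discretized Brownian motion, the harmonic measure of the endpoint {+1} for the interval (−1, +1) seen from a starting point x₀. The exact value is (1 + x₀)/2, since that is the harmonic (here linear) function of x₀ with boundary values 0 at −1 and 1 at +1.

```python
import numpy as np

rng = np.random.default_rng(3)

def exit_probability(x0=0.3, n_paths=200_000, dt=1e-3):
    """Fraction of discretized Brownian paths started at x0 that leave (-1, 1) through +1."""
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    hit_right = np.zeros(n_paths, dtype=bool)
    while alive.any():
        x[alive] += np.sqrt(dt) * rng.normal(size=alive.sum())
        hit_right |= alive & (x >= 1.0)    # record exits through the right endpoint
        alive &= (-1.0 < x) & (x < 1.0)    # kill paths that have left the interval
    return hit_right.mean()

x0 = 0.3
print("simulated:", exit_probability(x0), " exact:", (1 + x0) / 2)
```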
General references
(See Sections 7, 8 and 9)
References
P. Jones and T. Wolff, Hausdorff dimension of harmonic measure in the plane, Acta Math. 161 (1988), 131–144. MR962097 (90j:31001)
C. Kenig and T. Toro, Free boundary regularity for harmonic measures and Poisson kernels, Ann. of Math. 150 (1999), 369–454. MR1726699 (2001d:31004)
C. Kenig, D. Preiss and T. Toro, Boundary structure and size in terms of interior and exterior harmonic measures in higher dimensions, J. Amer. Math. Soc. 22 (2009), no. 3, 771–796.
S. G. Krantz, The Theory and Practice of Conformal Geometry, Dover Publications, Mineola, New York (2016), esp. Ch. 6.
External links
Measures (measure theory)
Potential theory | Harmonic measure | [
"Physics",
"Mathematics"
] | 1,350 | [
"Functions and mappings",
"Physical quantities",
"Measures (measure theory)",
"Quantity",
"Mathematical objects",
"Potential theory",
"Size",
"Mathematical relations"
] |
12,141,201 | https://en.wikipedia.org/wiki/Particle%20therapy | Particle therapy is a form of external beam radiotherapy using beams of energetic neutrons, protons, or other heavier positive ions for cancer treatment. The most common type of particle therapy as of August 2021 is proton therapy.
In contrast to X-rays (photon beams) used in older radiotherapy, particle beams exhibit a Bragg peak in energy loss through the body, delivering their maximum radiation dose at or near the tumor and minimizing damage to surrounding normal tissues.
Particle therapy is also referred to more technically as hadron therapy, excluding photon and electron therapy. Neutron capture therapy, which depends on a secondary nuclear reaction, is also not considered here. Muon therapy, a rare type of particle therapy not within the categories above, has also been studied theoretically; however, muons are still most commonly used for imaging, rather than therapy.
Method
Particle therapy works by aiming energetic ionizing particles at the target tumor. These particles damage the DNA of tissue cells, ultimately causing their death. Because of their reduced ability to repair DNA, cancerous cells are particularly vulnerable to such damage.
Beams of electrons, X-rays or protons of different energies (expressed in MeV) penetrate human tissue in characteristically different ways. Electrons have a short range and are therefore only of interest close to the skin (see electron therapy). Bremsstrahlung X-rays penetrate more deeply, but the dose absorbed by the tissue then shows the typical exponential decay with increasing thickness. For protons and heavier ions, on the other hand, the dose increases while the particle penetrates the tissue and loses energy continuously. Hence the dose increases with increasing thickness up to the Bragg peak that occurs near the end of the particle's range. Beyond the Bragg peak, the dose drops to zero (for protons) or almost zero (for heavier ions).
The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue. This enables higher dose prescription to the tumor, theoretically leading to a higher local control rate, as well as achieving a low toxicity rate.
The ions are first accelerated by means of a cyclotron or synchrotron. The final energy of the emerging particle beam defines the depth of penetration, and hence, the location of the maximum energy deposition. Since it is easy to deflect the beam by means of electro-magnets in a transverse direction, it is possible to employ a raster scan method, i.e., to scan the target area quickly, as the electron beam scans a TV tube. If, in addition, the beam energy and hence the depth of penetration is varied, an entire target volume can be covered in three dimensions, providing an irradiation exactly following the shape of the tumor. This is one of the great advantages compared to conventional X-ray therapy.
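The relationship between beam energy and depth of the Bragg peak can be illustrated with the empirical Bragg–Kleeman rule, R ≈ αE^p. The short sketch below is illustrative only; the fitted constants (α ≈ 0.0022 cm·MeV⁻ᵖ and p ≈ 1.77 for protons in water) are typical literature values chosen here for illustration and are not taken from the text above.

```python
def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
    """Approximate depth of the Bragg peak in water via the Bragg-Kleeman rule R = alpha * E^p."""
    return alpha * energy_mev ** p

for e in (70, 150, 230):
    print(f"{e} MeV -> Bragg peak at roughly {proton_range_cm(e):.0f} cm depth in water")
```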
At the end of 2008, 28 treatment facilities were in operation worldwide and over 70,000 patients had been treated by means of pions, protons and heavier ions. Most of this therapy has been conducted using protons.
At the end of 2013, 105,000 patients had been treated with proton beams, and approximately 13,000 patients had received carbon-ion therapy.
As of April 1, 2015, for proton beam therapy, there are 49 facilities in the world, including 14 in the US with another 29 facilities under construction. For Carbon-ion therapy, there are eight centers operating and four under construction. Carbon-ion therapy centers exist in Japan, Germany, Italy, and China. Two US federal agencies are hoping to stimulate the establishment of at least one US heavy-ion therapy center.
Proton therapy
Proton therapy is a type of particle therapy that uses a beam of protons to irradiate diseased tissue, most often to treat cancer. The chief advantage of proton therapy over other types of external beam radiotherapy (e.g., radiation therapy, or photon therapy) is that the dose of protons is deposited over a narrow range of depth, which results in minimal entry, exit, or scattered radiation dose to healthy nearby tissues. High dose rates are key in cancer treatment advancements. PSI demonstrated that for a cyclotron-based proton therapy facility using momentum cooling, it is possible to achieve remarkable dose rates of 952 Gy/s and 2105 Gy/s at the Bragg peak (in water) for 70 MeV and 230 MeV beams, respectively. When combined with field-specific ridge filters, Bragg peak-based FLASH proton therapy becomes feasible.
Fast-neutron therapy
Fast neutron therapy utilizes high-energy neutrons, typically between 50 and 70 MeV, to treat cancer. Most fast neutron therapy beams are produced by reactors, cyclotrons (d+Be) and linear accelerators. Neutron therapy is currently available in Germany, Russia, South Africa and the United States. In the United States, the only treatment center still operational is in Seattle, Washington. The Seattle center uses a cyclotron which produces a proton beam impinging upon a beryllium target.
Carbon ion radiotherapy
Carbon ion therapy (C-ion RT) was pioneered at the National Institute of Radiological Sciences (NIRS) in Chiba, Japan, which began treating patients with carbon ion beams in 1994. This facility was the first to utilize carbon ions clinically, marking a significant advancement in particle therapy for cancer treatment. The therapeutic advantages of carbon ions were recognized earlier, but NIRS was instrumental in establishing its clinical application.
C-ion RT uses particles more massive than protons or neutrons. Carbon ion radiotherapy has increasingly garnered scientific attention as technological delivery options have improved and clinical studies have demonstrated its treatment advantages for many cancers such as prostate, head and neck, lung, and liver cancers, bone and soft tissue sarcomas, locally recurrent rectal cancer, and pancreatic cancer, including locally advanced disease. It also has clear advantages to treat otherwise intractable hypoxic and radio-resistant cancers while opening the door for substantially hypo-fractionated treatment of normal and radio-sensitive disease.
By mid-2017, more than 15,000 patients had been treated worldwide at more than eight operational centers. Japan has been a conspicuous leader in this field. There are five heavy-ion radiotherapy facilities in operation and plans exist to construct several more facilities in the near future. In Germany this type of treatment is available at the Heidelberg Ion-Beam Therapy Center (HIT) and at the Marburg Ion-Beam Therapy Center (MIT). In Italy the National Centre of Oncological Hadrontherapy (CNAO) provides this treatment. Austria was scheduled to open a CIRT center in 2017, with centers in South Korea, Taiwan, and China soon to open. No CIRT facility currently operates in the United States, but several are in various stages of development.
Biological advantages of heavy-ion radiotherapy
From a radiation biology standpoint, there is considerable rationale to support use of heavy-ion beams in treating cancer patients. All proton and other heavy ion beam therapies exhibit a defined Bragg peak in the body so they deliver their maximum lethal dosage at or near the tumor. This minimizes harmful radiation to the surrounding normal tissues. However, carbon-ions are heavier than protons and so provide a higher relative biological effectiveness (RBE), which increases with depth to reach the maximum at the end of the beam's range. Thus the RBE of a carbon ion beam increases as the ions advance deeper into the tumor-lying region. CIRT provides the highest linear energy transfer (LET) of any currently available form of clinical radiation. This high energy delivery to the tumor results in many double-strand DNA breaks which are very difficult for the tumor to repair. Conventional radiation produces principally single strand DNA breaks which can allow many of the tumor cells to survive. The higher outright cell mortality produced by CIRT may also provide a clearer antigen signature to stimulate the patient's immune system.
Particle therapy of moving targets
The precision of particle therapy of tumors situated in thorax and abdominal region is strongly affected by the target motion. The mitigation of its negative influence requires advanced techniques of tumor position monitoring (e.g., fluoroscopic imaging of implanted radio-opaque fiducial markers or electromagnetic detection of inserted transponders) and irradiation (gating, rescanning, gated rescanning and tumor tracking).
References
External links
Touro University announces first combined particle therapy center in U.S.
PTCOG annual conference
Radiation therapy procedures
Medical physics
Particle physics | Particle therapy | [
"Physics"
] | 1,726 | [
"Applied and interdisciplinary physics",
"Particle physics",
"Medical physics"
] |
8,987,340 | https://en.wikipedia.org/wiki/Variational%20Monte%20Carlo | In computational physics, variational Monte Carlo (VMC) is a quantum Monte Carlo method that applies the variational method to approximate the ground state of a quantum system.
The basic building block is a generic wave function ψ(X; a) depending on some parameters a. The optimal values of the parameters are then found upon minimizing the total energy of the system.
In particular, given the Hamiltonian H, and denoting with X a many-body configuration, the expectation value of the energy can be written as:

E(a) = ⟨ψ(a)|H|ψ(a)⟩ / ⟨ψ(a)|ψ(a)⟩ = ∫ |ψ(X; a)|² (H ψ(X; a) / ψ(X; a)) dX / ∫ |ψ(X; a)|² dX.
Following the Monte Carlo method for evaluating integrals, we can interpret |ψ(X; a)|² / ∫ |ψ(X; a)|² dX as a probability distribution function, sample it, and evaluate the energy expectation value E(a) as the average of the so-called local energy E_loc(X) = ψ(X; a)⁻¹ H ψ(X; a). Once E(a) is known for a given set of variational parameters a, optimization is performed in order to minimize the energy and obtain the best possible representation of the ground-state wave-function.
VMC is no different from any other variational method, except that the many-dimensional integrals are evaluated numerically. Monte Carlo integration is particularly crucial in this problem since the dimension of the many-body Hilbert space, comprising all the possible values of the configurations X, typically grows exponentially with the size of the physical system. Other approaches to the numerical evaluation of the energy expectation values would therefore, in general, limit applications to much smaller systems than those analyzable thanks to the Monte Carlo approach.
The accuracy of the method then largely depends on the choice of the variational state. The simplest choice typically corresponds to a mean-field form, where the state is written as a factorization over the Hilbert space. This particularly simple form is typically not very accurate since it neglects many-body effects. One of the largest gains in accuracy over writing the wave function separably comes from the introduction of the so-called Jastrow factor. In this case the wave function is written as ψ(X) = exp(Σ u(r_ij)), where r_ij is the distance between a pair of quantum particles and u is a variational function to be determined. With this factor, we can explicitly account for particle-particle correlation, but the many-body integral becomes unseparable, so Monte Carlo is the only way to evaluate it efficiently. In chemical systems, slightly more sophisticated versions of this factor can obtain 80–90% of the correlation energy (see electronic correlation) with less than 30 parameters. In comparison, a configuration interaction calculation may require around 50,000 parameters to reach that accuracy, although it depends greatly on the particular case being considered. In addition, VMC usually scales as a small power of the number of particles in the simulation, usually something like N² to N⁴ for calculation of the energy expectation value, depending on the form of the wave function.
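As a concrete illustration, the following minimal sketch (not part of the original description) assumes a one-dimensional harmonic oscillator with ħ = m = ω = 1 and a Gaussian trial state ψ_a(x) = exp(−a x²), for which the local energy is a + x²(1/2 − 2a²). It samples |ψ_a|² with the Metropolis algorithm and averages the local energy for a few values of the variational parameter a.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, a):
    # E_loc = psi^-1 H psi for psi_a(x) = exp(-a x^2) and H = -1/2 d^2/dx^2 + x^2/2
    return a + x**2 * (0.5 - 2.0 * a**2)

def vmc_energy(a, n_steps=200_000, step=1.0):
    """Metropolis sampling of |psi_a|^2 and averaging of the local energy."""
    x = 0.0
    samples = []
    for i in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # acceptance ratio |psi(x_new)/psi(x)|^2 = exp(-2a (x_new^2 - x^2))
        if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):
            x = x_new
        if i > n_steps // 10:          # discard equilibration steps
            samples.append(local_energy(x, a))
    return np.mean(samples), np.var(samples)

for a in (0.3, 0.4, 0.5, 0.6):
    e, var = vmc_energy(a)
    print(f"a = {a:.1f}:  <E> = {e:.4f},  var(E_loc) = {var:.4f}")
# Both the energy and the variance of the local energy are minimal at a = 0.5,
# where psi_a coincides with the exact ground state and E = 0.5 exactly.
```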
Wave function optimization in VMC
QMC calculations crucially depend on the quality of the trial-function, and so it is essential to have an optimized wave-function as close as possible to the ground state.
The problem of function optimization is a very important research topic in numerical simulation. In QMC, in addition to the usual difficulties of finding the minimum of a multidimensional parametric function, statistical noise is present in the estimate of the cost function (usually the energy) and its derivatives, which are required for an efficient optimization.
Different cost functions and different strategies were used to optimize a many-body trial-function. Usually three cost functions are used in QMC optimization: the energy, the variance, or a linear combination of them. The variance optimization method has the advantage that the exact wavefunction's variance is known. (Because the exact wavefunction is an eigenfunction of the Hamiltonian, the variance of the local energy is zero.) This means that variance optimization is ideal in that it is bounded from below, it is positive definite and its minimum is known. Energy minimization may ultimately prove more effective, however, as different authors recently showed that the energy optimization is more effective than the variance one.
There are different motivations for this: first, usually one is interested in the lowest energy rather than in the lowest variance in both variational and diffusion Monte Carlo; second, variance optimization takes many iterations to optimize determinant parameters and often the optimization can get stuck in multiple local minima and suffers from the "false convergence" problem; third, energy-minimized wave functions on average yield more accurate values of other expectation values than variance-minimized wave functions do.
The optimization strategies can be divided into three categories. The first strategy is based on correlated sampling together with deterministic optimization methods. Even if this idea yielded very accurate results for the first-row atoms, this procedure can have problems if parameters affect the nodes, and moreover the density ratio of the current and initial trial-function increases exponentially with the size of the system. In the second strategy one uses a large bin to evaluate the cost function and its derivatives in such a way that the noise can be neglected and deterministic methods can be used.
The third approach is based on an iterative technique that handles noisy functions directly. The first example of these methods is the so-called Stochastic Gradient Approximation (SGA), which was also used for structure optimization. More recently, an improved and faster approach of this kind was proposed, the so-called Stochastic Reconfiguration (SR) method.
VMC and deep learning
In 2017, Giuseppe Carleo and Matthias Troyer used a VMC objective function to train an artificial neural network to find the ground state of a quantum mechanical system. More generally, artificial neural networks are being used as a wave function ansatz (known as neural network quantum states) in VMC frameworks for finding ground states of quantum mechanical systems. The use of neural network ansatzes for VMC has been extended to fermions, enabling electronic structure calculations that are significantly more accurate than VMC calculations which do not use neural networks.
See also
Metropolis–Hastings algorithm
Rayleigh–Ritz method
Time-dependent variational Monte Carlo
Further reading
General
Wave-function optimization in VMC
References
Quantum chemistry
Quantum Monte Carlo
Mathematical optimization | Variational Monte Carlo | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,224 | [
"Mathematical optimization",
"Mathematical analysis",
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Quantum Monte Carlo",
" and optical physics"
] |
8,987,495 | https://en.wikipedia.org/wiki/Diffusion%20Monte%20Carlo | Diffusion Monte Carlo (DMC) or diffusion quantum Monte Carlo is a quantum Monte Carlo method that uses a Green's function to calculate low-lying energies of a quantum many-body Hamiltonian.
Introduction and motivation of the algorithm
Diffusion Monte Carlo has the potential to be numerically exact, meaning that it can find the exact ground state energy for any quantum system within a given error, but approximations must often be made and their impact must be assessed in particular cases. When actually attempting the calculation, one finds that for bosons, the algorithm scales as a polynomial with the system size, but for fermions, DMC scales exponentially with the system size. This makes exact large-scale DMC simulations for fermions impossible; however, DMC employing a clever approximation known as the fixed-node approximation can still yield very accurate results.
To motivate the algorithm, let's look at the Schrödinger equation for a particle in some potential in one dimension:

iℏ ∂Φ(x,t)/∂t = −(ℏ²/2m) ∂²Φ(x,t)/∂x² + V(x) Φ(x,t).
We can condense the notation a bit by writing it in terms of an operator equation, with

H = −(ℏ²/2m) ∂²/∂x² + V(x),
where H is the Hamiltonian operator. So then we have

iℏ ∂Φ(x,t)/∂t = H Φ(x,t),
where we have to keep in mind that H is an operator, not a simple number or function. There are special functions, called eigenfunctions, for which H Φ = E Φ, where E is a number. These functions are special because no matter where we evaluate the action of the H operator on the wave function, we always get the same number E. These functions are called stationary states, because the time derivative at any point is always the same, so the amplitude of the wave function never changes in time. Since the overall phase of a wave function is not measurable, the system does not change in time.
We are usually interested in the wave function with the lowest energy eigenvalue, the ground state. We're going to write a slightly different version of the Schrödinger equation that will have the same energy eigenvalue, but, instead of being oscillatory, it will be convergent. Here it is:

ℏ ∂Φ(x,t)/∂t = −(H − E₀) Φ(x,t).
We've removed the imaginary number from the time derivative and added in a constant offset of E₀, which is the ground state energy. We don't actually know the ground state energy, but there will be a way to determine it self-consistently which we'll introduce later. Our modified equation (some people call it the imaginary-time Schrödinger equation) has some nice properties. The first thing to notice is that if we happen to guess the ground state wave function, then H Φ₀ = E₀ Φ₀ and the time derivative is zero. Now suppose that we start with another wave function Φ, which is not the ground state but is not orthogonal to it. Then we can write it as a linear sum of eigenfunctions:

Φ = Σ_n c_n Φ_n.
Since this is a linear differential equation, we can look at the action of each part separately. We already determined that Φ₀ is stationary. Suppose we take one of the excited states Φ_n with n ≥ 1. Since Φ₀ is the lowest-energy eigenfunction, the associated eigenvalue E_n of Φ_n satisfies the property E_n > E₀. Thus the time derivative of c_n Φ_n is negative, and c_n Φ_n will eventually go to zero, leaving us with only the ground state. This observation also gives us a way to determine E₀. We watch the amplitude of the wave function as we propagate through time. If it increases, then decrease the estimation of the offset energy. If the amplitude decreases, then increase the estimate of the offset energy.
Stochastic implementation and the Green's function
Now we have an equation that, as we propagate it forward in time and adjust E₀ appropriately, allows us to find the ground state of any given Hamiltonian. This is still a harder problem than classical mechanics, though, because instead of propagating single positions of particles, we must propagate entire functions. In classical mechanics, we could simulate the motion of the particles by setting x(t + τ) = x(t) + τ v(t) + τ² F(x(t))/(2m), if we assume that the force is constant over the time span of τ. For the imaginary-time Schrödinger equation, instead, we propagate forward in time using a convolution integral with a special function called a Green's function. So we get

Φ(x, t + τ) = ∫ G(x, x′, τ) Φ(x′, t) dx′.

Similarly to classical mechanics, we can only propagate for small slices of time; otherwise the Green's function is inaccurate. As the number of particles increases, the dimensionality of the integral increases as well, since we have to integrate over all coordinates of all particles. We can do these integrals by Monte Carlo integration.
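A minimal illustration of these ideas is sketched below (illustrative only; it assumes a one-dimensional harmonic oscillator with ħ = m = ω = 1, no importance sampling and no fixed-node constraint, so it is far simpler than production DMC codes). An ensemble of walkers is propagated with a Gaussian diffusion step of width √τ, branched with weight exp(−τ(V − E₀)), and the offset E₀ is adjusted to keep the population stable.

```python
import numpy as np

rng = np.random.default_rng(1)

def potential(x):
    return 0.5 * x**2                         # harmonic oscillator, hbar = m = omega = 1

def dmc(n_target=2000, n_steps=4000, tau=0.01):
    walkers = rng.normal(size=n_target)       # initial ensemble of walkers
    e_ref = np.mean(potential(walkers))       # running estimate of the offset E_0
    estimates = []
    for step in range(n_steps):
        # diffusion: the kinetic part of the Green's function is a Gaussian of width sqrt(tau)
        walkers = walkers + np.sqrt(tau) * rng.normal(size=walkers.size)
        # branching: replicate or kill each walker with weight exp(-tau (V - E_ref))
        weights = np.exp(-tau * (potential(walkers) - e_ref))
        copies = (weights + rng.random(walkers.size)).astype(int)
        walkers = np.repeat(walkers, copies)
        # population control: lower E_ref if there are too many walkers, raise it if too few
        e_ref = np.mean(potential(walkers)) - np.log(walkers.size / n_target) / tau
        if step > n_steps // 2:               # accumulate the mixed estimator <V> after equilibration
            estimates.append(np.mean(potential(walkers)))
    return np.mean(estimates)

print("DMC ground-state energy estimate:", dmc())   # exact ground-state energy is 0.5
```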
References
Quantum chemistry
Computational chemistry
Quantum Monte Carlo | Diffusion Monte Carlo | [
"Physics",
"Chemistry"
] | 911 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"Atomic",
"Quantum Monte Carlo",
" and optical physics"
] |
8,988,217 | https://en.wikipedia.org/wiki/Path%20integral%20Monte%20Carlo | Path integral Monte Carlo (PIMC) is a quantum Monte Carlo method used to solve quantum statistical mechanics problems numerically within the path integral formulation. The application of Monte Carlo methods to path integral simulations of condensed matter systems was first pursued in a key paper by John A. Barker.
The method is typically (but not necessarily) applied under the assumption that symmetry or antisymmetry under exchange can be neglected, i.e., identical particles are assumed to be quantum Boltzmann particles, as opposed to fermion and boson particles. The method is often applied to calculate thermodynamic properties such as the internal energy, heat capacity, or free energy. As with all Monte Carlo method based approaches, a large number of points must be calculated.
In principle, the more path descriptors that are used (these can be "replicas", "beads", or "Fourier coefficients", depending on what strategy is used to represent the paths), the more quantum (and the less classical) the result is. However, for some properties the corrections may initially make model predictions less accurate than neglecting them altogether if only a small number of path descriptors is included. At some point the number of descriptors is sufficiently large and the corrected model begins to converge smoothly to the correct quantum answer. Because it is a statistical sampling method, PIMC can take anharmonicity fully into account, and because it is quantum, it takes into account important quantum effects such as tunneling and zero-point energy (while neglecting the exchange interaction in some cases).
The basic framework was originally formulated within the canonical ensemble, but has since been extended to include the grand canonical ensemble and the microcanonical ensemble. Its use has been extended to fermion systems as well as systems of bosons.
An early application was to the study of liquid helium. Numerous applications have been made to other systems, including liquid water and the hydrated electron. The algorithms and formalism have also been mapped onto non-quantum mechanical problems in the field of financial modeling, including option pricing.
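The basic structure of such a calculation can be illustrated with a short sketch (illustrative only; it assumes a single distinguishable particle in a one-dimensional harmonic oscillator with ħ = m = ω = 1, the primitive discretization of the action, and the thermodynamic energy estimator). A closed path of beads is updated bead-by-bead with the Metropolis algorithm, and thermal averages are accumulated over the sampled paths.

```python
import numpy as np

rng = np.random.default_rng(2)

def potential(x):
    return 0.5 * x**2                      # harmonic oscillator, hbar = m = omega = 1

def pimc(beta=4.0, n_beads=64, n_sweeps=20_000, step=0.5):
    tau = beta / n_beads                   # imaginary-time slice
    path = np.zeros(n_beads)               # one closed "ring polymer" of beads
    energies = []
    for sweep in range(n_sweeps):
        for k in range(n_beads):
            prev, nxt = path[k - 1], path[(k + 1) % n_beads]
            old, new = path[k], path[k] + step * rng.uniform(-1.0, 1.0)
            # primitive-approximation action associated with a single bead
            def action(x):
                return ((x - prev)**2 + (nxt - x)**2) / (2 * tau) + tau * potential(x)
            if rng.random() < np.exp(action(old) - action(new)):
                path[k] = new
        if sweep > n_sweeps // 10:
            # thermodynamic energy estimator
            links = np.roll(path, -1) - path
            kinetic = n_beads / (2 * beta) - np.mean(links**2) / (2 * tau**2)
            energies.append(kinetic + np.mean(potential(path)))
    return np.mean(energies)

print("PIMC energy estimate:", pimc())
# Exact result for this model: E = 0.5 / tanh(beta / 2), i.e. about 0.52 at beta = 4;
# the estimate approaches it as the number of beads is increased.
```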
See also
Path integral molecular dynamics
Quantum algorithm
References
External links
Path Integral Monte Carlo Simulation
Quantum chemistry
Quantum Monte Carlo
Quantum information theory
Quantum algorithms | Path integral Monte Carlo | [
"Physics",
"Chemistry"
] | 452 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Quantum Monte Carlo",
"Physical chemistry stubs",
" and optical physics"
] |
8,988,283 | https://en.wikipedia.org/wiki/Reptation%20Monte%20Carlo | Reptation Monte Carlo is a quantum Monte Carlo method.
It is similar to Diffusion Monte Carlo, except that it works with paths rather than points. This has some advantages relating to calculating certain properties of the system under study that diffusion Monte Carlo has difficulty with.
In both diffusion Monte Carlo and reptation Monte Carlo, the method first aims to solve the time-dependent Schrödinger equation in the imaginary time direction. When you propagate the Schrödinger equation in time, you get the dynamics of the system under study. When you propagate it in imaginary time, you get a system that tends towards the ground state of the system.
When imaginary time is substituted in place of real time, the Schrödinger equation becomes identical to a diffusion equation. Diffusion equations can be solved by imagining a huge population of particles (sometimes called "walkers"), each diffusing in a way that solves the original equation. This is how diffusion Monte Carlo works.
Reptation Monte Carlo works in a very similar way, but is focused on the paths that the walkers take, rather than the density of walkers.
In particular, a path may be mutated using a Metropolis algorithm which tries a change (normally at one end of the path) and then accepts or rejects the change based on a probability calculation.
The update step in diffusion Monte Carlo would be moving the walkers slightly, and then duplicating and removing some of them. By contrast, the update step in reptation Monte Carlo mutates a path, and then accepts or rejects the mutation.
References
Quantum chemistry
Quantum Monte Carlo | Reptation Monte Carlo | [
"Physics",
"Chemistry"
] | 320 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
"Quantum Monte Carlo",
"Physical chemistry stubs",
" and optical physics"
] |
8,994,925 | https://en.wikipedia.org/wiki/USC%20Wrigley%20Institute%20for%20Environmental%20Studies | The USC Wrigley Institute for Environmental Studies is an environmental research and education facility run by the University of Southern California. It is an organized research unit that encompasses a wide range of faculty and topics across the university as well as operating a marine laboratory at the edge of Two Harbors, California on Catalina Island approximately 22 miles (35 km) south-southwest of Los Angeles.
The USC Wrigley Institute has specialized programs in environmental microbiology, geobiology, ocean biogeochemistry, living marine resources (including fisheries and aquaculture), climate change, coastal environmental quality and the urban ocean. The Institute is also home to the USC Sea Grant Program, part of the National Sea Grant Program through the National Oceanic and Atmospheric Administration.
History of the Wrigley Institute
USC established the Philip K. Wrigley Marine Science Center on the island at Big Fisherman's Cove following a grant of more than 14 acres of land from the families of Philip Wrigley and Paxson Offield. In 1995, William and Julie Wrigley continued their family legacy by providing USC with the capital to initiate the Wrigley Institute for Environmental Studies. Their gift provided for an endowed directorship, an endowed chair and the renovation of the Wrigley Marine Science Center. Today, the USC research complex on Catalina Island is the centerpiece of the Wrigley Institute, with additional staff and offices on the University of Southern California's University Park Campus in downtown Los Angeles.
In addition, USC administers the Tyler Prize for Environmental Achievement. USC also manages the USC Sea Grant Program, a federally funded program of research, education and outreach. The Sea Grant program at USC places special emphasis on the "urban ocean."
Current leadership and initiatives
The Wrigley Institute of Environmental Studies is currently led by Interim Director Dr. John Heidelberg. Early in his career, Dr. Heidelberg led the collaboration to sequence the genome of Vibrio cholerae, the bacterium that causes cholera, one of humanity's most ancient and deadly scourges. He was later a fundamental team member in the development of shotgun metagenomics sequencing technologies used throughout the world’s oceans. He continues to develop and employ novel sequencing methodologies, contributing to fundamental discoveries about the nature and properties of microbial life in the sea.
Currently, Heidelberg and his staff are focusing research on healthy oceans, coastal megacities, and sustainable solutions. A primary goal is to develop the Wrigley Marine Science Center on Catalina Island into a testbed for sustainable solutions. Signature programs include the San Pedro Ocean Time-Series monitoring program in the waters off the coast of Los Angeles, kelp biodiesel research, sustainable aquaculture, graduate fellowships and a premier scientific diving program.
Past leadership
Since the founding of the Wrigley Institute in 1995, past leadership has included:
Dr. Anthony "Tony" Michaels
Dr. Donal Monahan
Dr. Roberta Marinelli
Dr. Ken Nealson
USC Wrigley Marine Science Center
The Wrigley Institute manages the USC Wrigley Marine Science Center, located on the West End of Catalina Island and bordering the Blue Cavern State Marine Conservation Area.
USC provides daily weekday boat transportation for the USC community to the Catalina facility from the Southern California Marine Institute on Terminal Island.
USC Wrigley Sustainability Prize
The institute launched a pitch competition in 2017 for sustainable businesses called the USC Wrigley Sustainability Prize. The event highlights innovative start-up ideas from all disciplines and rewards concepts that could result in meaningful environmental change. Winning teams receive prize money to help translate their ideas into action. Past student businesses have included:
Catapower - creates renewable biofuel and plastic from vegetable oil; won the 2018 prize
Apeiron - creates graphene by retrofitting power plants
Interphase - increases energy efficiency for power plants
Closed Composites - recycles the carbon fiber from airplanes; won the 2019 prize
Catalina Hyperbaric Chamber
Based at the USC Wrigley Marine Science Center, the USC Catalina Hyperbaric Chamber is an emergency medical facility on Catalina Island for the treatment of scuba diving accidents. The chamber facilities are on the waterfront of the Wrigley Marine Science Center and adjacent to a helipad that is licensed for day or night helicopter landings. The chamber itself is large enough to treat several patients at once and provides enough room for staff and volunteers to perform cardiopulmonary resuscitation (CPR) and advanced life support for patients who arrive in cardiac arrest.
The Catalina Chamber Crew works closely with the Los Angeles County Medical Alert Center (MAC) and operates as an extension of the Los Angeles County-USC Medical Center Department of Emergency Medicine. The chamber is managed by a full-time member of the USC Wrigley Marine Science Center staff, and it is staffed all day, every day, by a rotating team of trained volunteers. Financial support comes from Los Angeles County, from donations by individual contributors, dive clubs and dive boat operators, and from special fundraising events.
Wrigley Advisory Board members
The advisory board has 19 members, including Wrigley family members.
Philip Hagenah - (Advisory Board Co-chair), Founder, Film House, Inc.
Todd Bauer - (Advisory Board Co-chair), Founder and President, Guardian Group, Inc.
Alison Wrigley Rusack - Executive Chairman, Santa Catalina Island Company; Co-owner, Rusack Vineyards
Terry Adams - Director, SA Recycling
Sevag Ajemian - President and CEO, Kinetix Air AHU Software
Anjini Desai - Founder and CEO, DeChai Tea; Admissions Director, Roots and Wings Center TK
Brock Dewey - Executive Vice President, Dewey Pest Control
Rod Diefendorf - President and Chief Operating Officer, PitchBook
Alexandra Jameson - Managing Partner, Jameson GDP
Sam King - Chairman of the Board and CEO, King’s Seafood Company
Calen Offield - President, CBO Investment LLC; Co-director, Offield Center for Billfish Studies/Catalina Sea Bass Fund
Chase Offield - CEO, Offield Center for Billfish Studies; Director, Catalina Sea Bass Fund
Maria Pellegrini - Executive Director for Programs, W.M. Keck Foundation
John Rego - Senior Vice President for Sustainability, Sony Pictures, and Environmental Officer, Sony Group
Diane Sonosky Montgomery - Vice President of Administration, Innovative Solutions Insurance Services (ret.)
David Thomas - Owner/Principal, FoodSci Advisory LLC; Chief Research & Development Officer, Keurig Dr Pepper (ret.)
Denise Verret - CEO and Zoo Director, Los Angeles Zoo and Botanical Gardens
Julie Wrigley - President and CEO, Wrigley Investments LLC; President, Julie A. Wrigley Foundation; Manager, GlenNeva Landholdings
Daniel Zinsmeyer - Co-creator, Zinsmeyer Family Endowed Undergraduate Research Fund
References
External links
USC Wrigley Institute website
Institutes of the University of Southern California
Environmental studies institutions in the United States
Environmental science
Science and technology in Greater Los Angeles | USC Wrigley Institute for Environmental Studies | [
"Environmental_science"
] | 1,406 | [
"nan"
] |
8,994,982 | https://en.wikipedia.org/wiki/Relational%20space | The relational theory of space is a metaphysical theory according to which space is composed of relations between objects, with the implication that it cannot exist in the absence of matter. Its opposite is the container theory. A relativistic physical theory implies a relational metaphysics, but not the other way round: even if space is composed of nothing but relations between observers and events, it would be conceptually possible for all observers to agree on their measurements, whereas relativity implies they will disagree. Newtonian physics can be cast in relational terms, but Newton insisted, for philosophical reasons, on absolute (container) space. The subject was famously debated by Gottfried Wilhelm Leibniz and a supporter of Newton's in the Leibniz–Clarke correspondence.
An absolute approach can also be applied to time, with, for instance, the implication that there might have been vast epochs of time before the first event.
See also
René Descartes
Philosophy of space and time
Spacetime
References
Metaphysical theories
Philosophy of physics
Space | Relational space | [
"Physics",
"Mathematics"
] | 202 | [
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Space",
"Geometry",
"Spacetime"
] |
8,995,919 | https://en.wikipedia.org/wiki/Hypsometric%20equation | The hypsometric equation, also known as the thickness equation, relates an atmospheric pressure ratio to the equivalent thickness of an atmospheric layer considering the layer mean of virtual temperature, gravity, and occasionally wind. It is derived from the hydrostatic equation and the ideal gas law.
Formulation
The hypsometric equation is expressed as:

h = z₂ − z₁ = (R_d · T̄_v / g) · ln(p₁ / p₂),
where:
h = thickness of the layer [m],
z = geometric height [m],
R_d = specific gas constant for dry air,
T̄_v = mean virtual temperature in kelvins [K],
g = gravitational acceleration [m/s²],
p = pressure [Pa].
In meteorology, p₁ and p₂ are isobaric surfaces. In radiosonde observation, the hypsometric equation can be used to compute the height of a pressure level given the height of a reference pressure level and the mean virtual temperature in between. Then, the newly computed height can be used as a new reference level to compute the height of the next level given the mean virtual temperature in between, and so on.
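A direct numerical illustration of this use of the equation is sketched below; the constants and example values are typical choices made here for illustration and are not taken from the text above.

```python
import numpy as np

Rd = 287.05    # specific gas constant for dry air [J/(kg K)]
g0 = 9.80665   # standard gravitational acceleration [m/s^2]

def thickness(p_lower, p_upper, mean_virtual_temp):
    """Thickness [m] of the layer between two pressure levels (any consistent pressure units)."""
    return (Rd * mean_virtual_temp / g0) * np.log(p_lower / p_upper)

# e.g., the 1000-500 hPa thickness for a layer-mean virtual temperature of 273.15 K
print(thickness(1000.0, 500.0, 273.15), "m")   # roughly 5.54 km
```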
Derivation
The hydrostatic equation:

dp/dz = −ρ · g,

where ρ is the density [kg/m³], is used to generate the equation for hydrostatic equilibrium, written in differential form:

dp = −ρ · g · dz.

This is combined with the ideal gas law:

p = ρ · R_d · T_v

to eliminate ρ:

dp/p = −(g / (R_d · T_v)) · dz.

This is integrated from z₁ to z₂:

∫ from p(z₁) to p(z₂) of dp/p = −∫ from z₁ to z₂ of (g / (R_d · T_v)) dz.

R_d and g are constant with z, so they can be brought outside the integral.

If temperature varies linearly with z (e.g., given a small change in z),

the virtual temperature can also be brought outside the integral when replaced with T̄_v, the average virtual temperature between z₁ and z₂.

Integration gives

ln(p(z₂)/p(z₁)) = −(g / (R_d · T̄_v)) · (z₂ − z₁),

simplifying to

ln(p₁/p₂) = (g / (R_d · T̄_v)) · (z₂ − z₁).

Rearranging:

z₂ − z₁ = (R_d · T̄_v / g) · ln(p₁/p₂),

or, eliminating the natural log:

p₁/p₂ = exp[(g / (R_d · T̄_v)) · (z₂ − z₁)].
Correction
The Eötvös effect can be taken into account as a correction to the hypsometric equation. Physically, using a frame of reference that rotates with Earth, an air mass moving eastward effectively weighs less, which corresponds to an increase in thickness between pressure levels, and vice versa. The corrected hypsometric equation follows:
where the correction due to the Eötvös effect, A, can be expressed as follows:
where
Ω = Earth rotation rate,
φ = latitude,
r = distance from Earth center to the air mass,
u = mean velocity in longitudinal direction (east-west), and
v = mean velocity in latitudinal direction (north-south).
This correction is considerable in tropical large-scale atmospheric motion.
See also
Barometric formula
Vertical pressure variation
References
Equations
Vertical position
Atmospheric pressure | Hypsometric equation | [
"Physics",
"Mathematics"
] | 489 | [
"Vertical position",
"Physical quantities",
"Distance",
"Mathematical objects",
"Meteorological quantities",
"Atmospheric pressure",
"Equations"
] |
14,813,581 | https://en.wikipedia.org/wiki/Thermoelastic%20damping | Thermoelastic damping is a source of intrinsic material damping due to thermoelasticity present in almost all materials. As the name thermoelastic suggests, it describes the coupling between the elastic field in the structure caused by deformation and the temperature field.
Definition
In any vibrating structure, the strain field causes a change in the internal energy such that the compressed region becomes hotter (assuming a positive coefficient of thermal expansion) and the extended region becomes cooler. The mechanism responsible for thermoelastic damping is the resulting lack of thermal equilibrium between various parts of the vibrating structure. Energy is dissipated when irreversible heat flow driven by the temperature gradient occurs.
The earliest study of thermoelastic damping can be found in Clarence Zener’s classical work, in 1937, in which he studied thermoelastic damping in beams undergoing flexural vibrations. Flexural vibrations cause alternating tensile and compressive strains to build up on opposite sides of the neutral axis leading to a thermal imbalance. Irreversible heat flow which is driven by the temperature gradient causes vibrational energy to be dissipated.
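Zener's result is often quoted as a single-relaxation expression, $Q^{-1} = \Delta_E\,\omega\tau/(1+(\omega\tau)^2)$ with relaxation strength $\Delta_E = E\alpha^2 T_0/(\rho c_p)$ and thermal relaxation time $\tau = h^2/(\pi^2\chi)$ for a beam of thickness $h$ and thermal diffusivity $\chi$. The text above does not state the formula explicitly, so the sketch below should be read as a hedged illustration of that commonly cited form; the roughly silicon-like material values are assumptions chosen purely for the example.

```python
import numpy as np

def zener_thermoelastic_Q_inverse(omega, E, alpha, T0, rho, c_p, k, h):
    """Approximate thermoelastic damping of a thin beam in flexure using
    Zener's single-relaxation model (an assumption here, not quoted from the text).

    omega : angular vibration frequency [rad/s]
    E     : Young's modulus [Pa]
    alpha : coefficient of thermal expansion [1/K]
    T0    : mean temperature [K]
    rho   : density [kg/m^3]
    c_p   : specific heat [J/(kg K)]
    k     : thermal conductivity [W/(m K)]
    h     : beam thickness [m]
    """
    delta_E = E * alpha**2 * T0 / (rho * c_p)   # relaxation strength
    chi = k / (rho * c_p)                       # thermal diffusivity [m^2/s]
    tau = h**2 / (np.pi**2 * chi)               # thermal relaxation time across the beam
    return delta_E * (omega * tau) / (1.0 + (omega * tau)**2)

# Illustrative, roughly silicon-like values (assumptions, not from the article)
Q_inv = zener_thermoelastic_Q_inverse(
    omega=2 * np.pi * 1e5, E=170e9, alpha=2.6e-6, T0=300.0,
    rho=2330.0, c_p=700.0, k=150.0, h=10e-6)
print(f"Q^-1 is approximately {Q_inv:.2e}")
```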
References
Elasticity (physics)
Mechanical vibrations | Thermoelastic damping | [
"Physics",
"Materials_science",
"Engineering"
] | 235 | [
"Structural engineering",
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Mechanics",
"Mechanical vibrations",
"Physical properties"
] |
14,813,583 | https://en.wikipedia.org/wiki/BSCL2 | Seipin is a protein that in humans is encoded by the BSCL2 gene.
Clinical significance
Mutations in BSCL2 are known to cause the following conditions:
Congenital generalized lipodystrophy type 2;
Spastic paraplegia 17, autosomal dominant (SPG17);
Neuronopathy, distal hereditary motor, 5C (HMN5C);
Encephalopathy, progressive, with or without lipodystrophy (PELD).
References
External links
GeneReviews/NCBI/NIH/UW entry on BSCL2-Related Neurologic Disorders/Seipinopathy
Further reading | BSCL2 | [
"Chemistry"
] | 135 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,813,600 | https://en.wikipedia.org/wiki/DNA%20polymerase%20mu | DNA polymerase mu is a polymerase enzyme found in eukaryotes. In humans, this protein is encoded by the POLM gene.
Function
Pol μ is a member of the X family of DNA polymerases. It participates in resynthesis of damaged or missing nucleotides during the non-homologous end joining (NHEJ) pathway of DNA repair. Pol μ interacts with Ku and DNA ligase IV, which also participate in NHEJ. It is structurally and functionally related to pol λ, and, like pol λ, pol μ has a BRCT domain that is thought to mediate interactions with other DNA repair proteins. Unlike pol λ, however, pol μ has the unique ability to add a base to a blunt end that is templated by the overhang on the opposite end of the double-strand break. Pol μ is also closely related to terminal deoxynucleotidyl transferase (TdT), a specialized DNA polymerase that adds random nucleotides to DNA ends during V(D)J recombination, the process by which B-cell and T-cell receptor diversity is generated in the vertebrate immune system. Like TdT, pol μ participates in V(D)J recombination, but only during light chain rearrangements. This is distinct from pol λ, which is involved in heavy chain rearrangements.
POLM mutant mice
In polymerase mu mutant mice, hematopoietic cell development is defective in several peripheral and bone marrow cell populations with about a 40% decrease in bone marrow cell number that includes several hematopoietic lineages. Expansion potential of hematopoietic progenitor cells is also reduced. These characteristics correlate with reduced ability to repair double-strand breaks in hematopoietic tissue. Whole body gamma irradiation of polymerase mu mutant mice indicates that polymerase mu also has a role in double-strand break repair in other tissues unrelated to hematopoietic tissue. Thus polymerase mu has a significant role in maintaining genetic stability in hematopoietic and non-hematopoietic tissue.
References
External links
DNA repair
DNA-binding proteins | DNA polymerase mu | [
"Biology"
] | 456 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
14,813,741 | https://en.wikipedia.org/wiki/GTF3C1 | General transcription factor 3C polypeptide 1 is a protein that in humans is encoded by the GTF3C1 gene.
Interactions
GTF3C1 has been shown to interact with GTF3C4.
References
Further reading
External links
Transcription factors | GTF3C1 | [
"Chemistry",
"Biology"
] | 54 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,813,801 | https://en.wikipedia.org/wiki/HOXC6 | Homeobox protein Hox-C6 is a protein that in humans is encoded by the HOXC6 gene. Hox-C6 expression is highest in the fallopian tube and ovary. HoxC6 has been highly expressed in many types of cancers including prostate, breast, and esophageal squamous cell cancer.
Function
This gene belongs to the homeobox family, members of which encode a highly conserved family of transcription factors that play an important role in morphogenesis in all multicellular organisms. Mammals possess four similar homeobox gene clusters, HOXA, HOXB, HOXC and HOXD, which are located on different chromosomes and consist of 9 to 11 genes arranged in tandem. This gene, HOXC6, is one of several HOXC genes located in a cluster on chromosome 12. Three genes, HOXC5, HOXC4 and HOXC6, share a 5' non-coding exon. Transcripts may include the shared exon spliced to the gene-specific exons, or they may include only the gene-specific exons. Alternatively spliced transcript variants encoding different isoforms have been identified for HOXC6. Transcript variant two includes the shared exon, and transcript variant one includes only gene-specific exons. HOXC6 plays a role in lymphoma. The HOXC6 isoform HOXC6-2 acts as an oncogene in gastric cancer, stimulating the proliferation of gastric cancer cells. Downregulation of this gene's isoforms could potentially reduce the proliferation of certain cancerous cells. With the HOXC6-1 isoform, there were no statistically significant effects on migration, invasion, apoptosis, or proliferation when it was downregulated. According to a study in Cancer Cell International, suppression of the HOXC6 gene plays a role in blocking the TGF-β/SMAD cascade. This then leads to weakening of the epithelial-to-mesenchymal transition in cervical carcinoma.
Knock-out model
A knockout model using small interfering RNA showed that knockout of HOXC6 was associated with apoptosis. Additionally, the presence of HOXC6 was associated with inhibition of paclitaxel-induced apoptosis. Thus, HOXC6 was demonstrated to induce proliferative activity.
See also
Homeobox
References
Further reading
External links
Transcription factors | HOXC6 | [
"Chemistry",
"Biology"
] | 526 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,813,842 | https://en.wikipedia.org/wiki/ID4 | ID4 is a protein coding gene. In humans, it encodes the protein known as DNA-binding protein inhibitor ID-4.
This protein is known to be involved in the regulation of many cellular processes during both prenatal development and tumorigenesis. These include embryonic cell growth, senescence, cellular differentiation and apoptosis, as well as a role as an oncogene in angiogenesis.
Structure
The gene spans 3.3 kb on the plus strand. It is composed of 3 exons, and its mRNA transcript is 2343 bp. The encoded protein consists of 161 amino acids, is 16.6 kDa, and contains a poly-Ala segment from amino acid 39 to 48, a helix-loop-helix motif from amino acid 65 to 105 and a poly-Pro region from amino acid 118 to 124. This protein is expressed in various tissues.
Function
The ID4 gene is part of the ID gene family. This family is also known as inhibitors of DNA binding protein family and are composed of transcription inhibitory proteins which modulate a number of processes. They are transcriptional regulators that work by negatively regulating their basic helix-loop-helix (bHLH) transcription factors by forming heterodimers. The heterodimer is what inhibits their DNA binding and transcriptional activity.
Transcription factors containing a basic helix-loop-helix (bHLH) motif regulate expression of tissue-specific genes in a number of mammalian and insect systems. DNA-binding activity of the bHLH proteins is dependent on formation of homo- and/or heterodimers. Dominant-negative (antimorph) HLH proteins encoded by Id-related genes, such as ID4, also contain the HLH-dimerization domain but lack the DNA-binding basic domain. Consequently, ID proteins inhibit binding to DNA and transcriptional transactivation by heterodimerization with bHLH proteins.
Regulation during embryogenesis
The ID4 gene plays a pivotal role in development and is a key player in many pathways of embryogenesis and foetal development. ID4 expression is upregulated in embryogenesis during days 9.5 and 13.5 of gestation and restricted to specific cells of the central and peripheral nervous system. ID4 transcription control has both negative and positive regulatory elements inclusive of novel inhibitory functions.
ID4 expression has been shown to be discrete in the early stages, with transcription transiently expressed in subsets of migrating neural crest cells, the dorsal myocardium, the segmental plate mesoderm, and the tail bud. Later stages show ID4 expression in the telencephalic vesicles and corneal epithelium. ID4 expression is only detected in neuronal tissues and the ventral portion of the epithelium in the developing stomach during embryogenesis.
ID4 is expressed in the central nervous system and is required for G1-S transition and to enhance proliferation in early cortical progenitors. It is complexly involved in regulating neural stem cell proliferation and differentiation by inhibiting proliferation of differentiating neurons through enhancement of RB1-mediated pathways. This is either by direct interaction or through interaction with other molecules of the cell cycle machinery.
ID4 also regulates the lateral expansion of the proliferative zone in the developing cortex and hippocampus. This is integral to normal brain size formation. ID4 regulates neural progenitor proliferation and differentiation. Its expression is seen in the neural tube much later than other ID genes.
ID4 was also shown to be involved in the regulation of cardiac mesoderm function in frog embryos and human embryonic stem cells. Ablation of the ID gene family in mouse embryos showed failure of anterior cardiac progenitor specification and the development of heartless embryos. This study also demonstrated that ID4 protein is involved in regulating cardiac cell fate by a pathway which represses two inhibitors of cardiogenic mesoderm formation (TCF3 and FOXA2) whilst activating inducers (EVX1, GRRP1, and MESP1).
Clinical significance
Role in endometriosis
ID4 has been linked to the molecular pathogenicity of endometriosis. These pathways are still poorly understood. It is thought that ID4 plays a role in the transcription of HOXA9 and CDKN1A which are known to be associated with endometriosis.
A genome wide association study revealed over 100 candidate genes associated with endometriosis. Of these, six showed a highly reliable association, among them the ID4 gene. This is thought to be due to an independent single nucleotide polymorphism at locus rs7739264 near ID4 on chromosome 6p22.3. ID4 is implicated in the molecular pathogenicity of endometriosis as being differentially expressed between the proliferative, early and mid-secretory phases.
Tumorigenesis association
ID4 is not expressed in normal ovary and fallopian tubes. It has been shown to be overexpressed in most primary ovarian cancers. The ID4 gene is also seen to be overexpressed in most ovarian, endometrial and breast cancer cell lines. The mechanism behind this is believed to be that ID4 regulates HOXA9 and CDKN1A genes, which are mediators of cell proliferation and differentiation. HOXA genes are known to play a role in the differentiation of fallopian tubes, uterus, cervix and vagina.
In B-Cell (B lymphocyte) acute lymphoblastic leukaemia (B-ALL), ID4 is overexpressed due to being located in close proximity to the IgH enhancer region.
In Non Hodgkin lymphoma, the ID4 promoter region is implicated in follicular lymphomas, diffuse B Cell lymphomas and lymphoid cell lines due to hypermethylation.
In brain tumours, more specifically oligodendroglial tumours and glioblastomas, the ID4 gene has been shown to be expressed in the neoplastic astrocytes but not expressed in the neoplastic oligodendrocytes.
The ID4 promoter region has been found to be hypermethylated and its mRNA suppressed in breast cancer cell lines, including those of primary breast cancers. Patients with invasive carcinomas have shown ID4 expression in their breast cancer specimens. This has been identified as a significant risk factor in nodal metastasis. ID4 is constitutively expressed in normal human mammary epithelium but found to be suppressed in ER positive breast carcinomas and preneoplastic lesions. ER negative carcinomas also show ID4 expression. There is a hypothesis that ID4 acts as a tumour suppressor factor in human breast tissue where oestrogen is responsible for regulation of ID4 expression in the mammary ductal epithelium.
It is unclear whether the ID4 gene plays a role in bladder cancer. ID4 is found on the 6p22.3 amplicon, which is frequently associated with advanced-stage bladder cancer. ID4 has also been shown to be overexpressed in bladder cancer cell lines. This overexpression is seen both in normal urothelium, which lines the urinary tract (including the renal pelvis, ureters, bladder and parts of the urethra), and in fresh cancer tissues.
ID4 is closely associated with gastric cancer. The ID4 promoter region is hypermethylated and infrequently expressed in gastric adenocarcinomas and frequently expressed in gastric cancer cell lines. In contrast, ID4 is highly expressed in normal gastric mucosa. There is an undefined but significant association seen in ID4 promoter hypermethylation (which results in its down regulation) and microsatellite instability.
ID4 is not found in normal epitheliums nor adenomas of colorectal cancer. Hypermethylation of ID4 causes silencing of the gene. This has been identified as a significant independent risk factor for poor prognosis of colorectal cancer. It is also frequently observed in liver metastases of colorectal cancer specimens.
Developmental disorders
Rett syndrome is an X linked neurodevelopment disorder. It is often identified after six to eight months of age in females. In human brain tissue specimens of Rett syndrome patients, the family of ID genes are seen to be significantly increased in expression.
Society and culture
Commonly used names
The ID4 gene is also known as DNA-binding protein inhibitor ID-4, Id-4, IDb4, IDB4, Inhibitor of DNA binding 4, Inhibitor of differentiation 4, helix protein 271, Inhibitor of DNA binding 4 HLH Protein, Inhibitor of Differentiation 4, Inhibitor of DNA Binding 4 Dominant Negative Helix-Loop-Helix Protein, Class B Basic Helix-Loop-Helix Protein 27, and BHLHb272.
See also
Inhibitor of DNA-binding protein
References
Further reading
External links
Transcription factors | ID4 | [
"Chemistry",
"Biology"
] | 1,884 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,813,851 | https://en.wikipedia.org/wiki/IGHE | Ig epsilon chain C region is a protein that in humans is encoded by the IGHE gene.
Function
IGHE (immunoglobulin heavy constant epsilon), located on chromosome 14 in humans, is predicted to enable antigen binding activity and immunoglobulin receptor binding activity. It is predicted to be involved in several processes, including activation of the immune response, defense response to other organisms, and phagocytosis. IGHE has also been predicted to be located in the extracellular region, to be part of circulating immunoglobulin complexes, and to be active on the external side of the plasma membrane.
Structure
IGHE (immunoglobulin heavy constant epsilon): The gene that encodes the ε heavy chain constant region for the IgE antibody. This gene is critical for the production and function of IgE in the body. The IGHE gene provides instructions for making a part of an antibody (immunoglobulin) called Immunoglobulin E, or IgE.
IGHE is a functional gene with four Ig domains and a member of the IGH constant gene cluster. The encoded constant region forms a homodimer of two epsilon heavy chains bound by two disulfide bonds; each heavy chain is bound to a light chain (kappa or lambda), and the N-terminus of each heavy chain is joined to a V segment.
Allergies
Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by plasma cells (white blood cells). They act as a critical part of the immune response by specifically recognizing and binding to particular antigens, such as bacteria or viruses, and aiding in their destruction. Immunoglobulin E (IgE) is a class of antibody produced by the immune system.
Each type of IgE has specific "radar" for each type of allergen. That's why some people are only allergic to cat dander (they only have the IgE antibodies specific to cat dander); while others have allergic reactions to multiple allergens because they have many more types of IgE antibodies.
IgE-mediated food allergy occurs when the immune system reacts abnormally when exposed to one or more specific foods such as milk, egg, wheat or nuts. All of these foods can trigger anaphylaxis (a severe, whole-body allergic reaction) in patients who are allergic. Individuals with this type of food allergy react quickly, within a few minutes to a few hours. Immediate reactions are caused by an allergen-specific immunoglobulin E (IgE) antibody that circulates in the blood stream. Another useful tool in diagnosing and managing food allergies is blood testing, called allergen-specific IgE testing. This test measures the level of antibody produced in the blood in response to a food allergen.
References
Further reading
Proteins | IGHE | [
"Chemistry"
] | 604 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
14,813,856 | https://en.wikipedia.org/wiki/IGHG2 | Ig gamma-2 chain C region is a protein that in humans is encoded by the IGHG2 gene.
References
Further reading | IGHG2 | [
"Chemistry"
] | 29 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,813,883 | https://en.wikipedia.org/wiki/ING2 | Inhibitor of growth protein 2 is a protein that in humans is encoded by the ING2 gene.
Function
This gene is a member of the inhibitor of growth (ING) family. Members of the ING family associate with and modulate the activity of histone acetyltransferase (HAT) and histone deacetylase (HDAC) complexes and function in DNA repair and apoptosis.
References
Further reading
External links
Transcription factors | ING2 | [
"Chemistry",
"Biology"
] | 91 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,814,110 | https://en.wikipedia.org/wiki/NFIC%20%28gene%29 | Nuclear factor 1 C-type is a protein that in humans is encoded by the NFIC gene.
References
Further reading
External links
Transcription factors | NFIC (gene) | [
"Chemistry",
"Biology"
] | 30 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,814,267 | https://en.wikipedia.org/wiki/ING4 | Inhibitor of growth protein 4 is a protein that in humans is encoded by the ING4 gene.
Function
The protein encoded by this gene is similar to ING1, a tumor suppressor protein that can interact with TP53, inhibit cell growth, and induce apoptosis. This protein contains a PHD-finger, which is a common motif in proteins involved in chromatin remodeling. This protein can bind TP53 and EP300/p300, a component of the histone acetyl transferase complex, suggesting its involvement in the TP53-dependent regulatory pathway. Alternatively spliced transcript variants have been observed, but the biological validity of them has not been determined.
Interactions
ING4 has been shown to interact with EP300, RELA and P53.
References
Further reading
External links
Transcription factors | ING4 | [
"Chemistry",
"Biology"
] | 171 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,814,646 | https://en.wikipedia.org/wiki/SOX6 | Transcription factor SOX-6 is a protein that in humans is encoded by the SOX6 gene.
Function
The SOX gene family encodes a group of transcription factors defined by the conserved high mobility group (HMG) DNA-binding domain. Unlike most transcription factors, SOX transcription factors bind to the minor groove of DNA, causing a 70- to 85-degree bend and introducing local conformational changes.[supplied by OMIM]
Interactions
SOX6 has been shown to interact with CTBP2 and CENPK.
It has also been demonstrated that SOX6 protein accumulates in the differentiating human erythrocytes, and then is able to downregulate its own transcription, by directly binding to an evolutionarily conserved consensus sequences located near SOX6 transcriptional start site.
Sox6 appears to have a crucial role in the transcriptional regulation of globin genes, and in directing the terminal differentiation of red blood cells. In addition, SOX6 may have a role in tumor growth of Ewing sarcoma. A new role of Sox6 in renin and prorenin regulation was studied using a conditional knockout mouse model in which Sox6 is deleted only in renin-expressing cells. This study showed that the renin promoter possesses a binding site for Sox6, and that Sox6 is a key regulator of renin and prorenin expression and juxtaglomerular (JG) cell expansion during low-salt diet and dehydration in mice (PMID 31760770; DOI: 10.1152/ajprenal.00095.2019).
See also
SOX genes
References
Further reading
External links
Transcription factors | SOX6 | [
"Chemistry",
"Biology"
] | 330 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,815,002 | https://en.wikipedia.org/wiki/40S%20ribosomal%20protein%20S4%2C%20Y%20isoform%201 | 40S ribosomal protein S4, Y isoform 1 is a protein that in humans is encoded by the RPS4Y1 gene.
Cytoplasmic ribosomes, organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes ribosomal protein S4, a component of the 40S subunit. Ribosomal protein S4 is the only ribosomal protein known to be encoded by more than one gene, namely this gene, RPS4Y2 and the ribosomal protein S4, X-linked (RPS4X). The 3 isoforms encoded by these genes are not identical, but appear to be functionally equivalent. Ribosomal protein S4 belongs to the S4E family of ribosomal proteins. It has been suggested that haploinsufficiency of the ribosomal protein S4 genes plays a role in Turner syndrome; however, this hypothesis is controversial.
See also
S4 protein domain
References
Further reading
Ribosomal proteins | 40S ribosomal protein S4, Y isoform 1 | [
"Chemistry"
] | 226 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,815,510 | https://en.wikipedia.org/wiki/TBX2 | T-box transcription factor 2 Tbx2 is a transcription factor that is encoded by the Tbx2 gene on chromosome 17q21-22 in humans. This gene is a member of a phylogenetically conserved family of genes that share a common DNA-binding domain, the T-box. Tbx2 and Tbx3 are the only T-box transcription factors that act as transcriptional repressors rather than transcriptional activators, and are closely related in terms of development and tumorigenesis. This gene plays a significant role in embryonic and fetal development through control of gene expression, and also has implications in various cancers. Tbx2 is associated with numerous signaling pathways, BMP, TGFβ, Wnt, and FGF, which allow for patterning and proliferation during organogenesis in fetal development.
Role in development
Tbx2 is a transcription factor in the T-box transcription factor family. Tbx2 helps form the outflow tract and atrioventricular canal. Tbx2 can repress genes directly and can also compete with other factors for binding sites. It also plays a role in cancer, where it has been linked both to suppression of cell growth and to support of invasiveness. In human melanoma, the expression of endogenous Tbx2 has been shown to help reduce the growth of melanomas. It has also been shown that overexpression of Tbx2 can lead to breast cancer. Loss of Tbx2 has been associated with septal defects of the outflow tract, as shown using a knockout mouse, a mouse in which the gene is inactivated in order to study its role. Tbx2 also helps in regulating the cell cycle. This was first shown when Tbx2 was found in a chromosomal region that is often mutated in ovarian cancer and pancreatic cancer cells.
During fetal development, the relationship of Tbx2 to FGF, BMP, and Wnt signaling pathways indicates its extensive control in development of various organ systems. It functions predominantly in the patterning of organ development rather than tissue proliferation. Tbx2 has implications in limb development, atrioventricular development of the heart, and development of the anterior brain tissues.
During limb bud development, Shh and FGF signaling stimulate the outgrowth of the limb. At a certain point, Tbx2 concentrations are such that the signaling of Shh and FGF are terminated, halting further progression and outgrowth of the limb development. This occurs directly through Tbx2 repressing the expression of Grem1, creating a negative Grem1 zone, thereby disrupting the outgrowth signaling by Shh and FGF.
Cardiac development is heavily regulated and requires the development of the four cardiac chambers, septum, and various valve components for outflow and inflow. In heart development, Tbx2 is up-regulated by BMP2 to stimulate atrioventricular development. The development of a Tbx2 knockout mouse model allowed for the determination of specific roles of Tbx2 in cardiac development, and scientists determined Tbx2 and Tbx3 to be redundant in much of heart development. Further, the use of these knockout models determined the significance of Tbx2 in the BMP signaling pathway for development of the atrioventricular canal, atrioventricular nodal phenotype, and atrioventricular cushion.
The atrioventricular canal signaling cascade involves the atrial natriuretic factor gene (ANF). This gene is one of the first hallmarks of chamber formation in the developing myocardium. A small fragment within this gene can repress the promoter of cardiac troponin I (cTnI) selectively in the atrioventricular canal. T-box factor and NK2-homeobox factor binding element are involved in the repression of the atrioventricular canal without affecting its chamber activity. Tbx2 forms a complex with Nkx2.5 on the ANF gene to repress its promoter activity, so that the gene's expression is inhibited in the atrioventricular canal during chamber differentiation. The atrioventricular canal is also the origin of the atrioventricular nodal axis and helps eventually coordinate the beating heart. The role of Tbx2 in cushion formation in the developing heart is by working with Tbx3 to trigger a feed-forward loop with BMP2 for the coordinated development of these cushions. Tbx2 has also been found to temporally suppress the proliferation and differentiation a subset of the primary myocardial cells.
Finally, during anterior brain development, BMP stimulates the expression of Tbx2, which suppresses FGF signaling. This suppression of FGF signaling further represses the expression of Flrt3, which is necessary for anterior brain development.
Tbx2 has been shown to be a master regulator in the differentiation of inner and outer hair cells.
Associated congenital defects
It is known that Tbx2 functions in a dose-dependent manner; therefore, duplication or deletion of the region encompassing Tbx2 can cause various congenital defects, including: microcephaly, various ventricular-septal defects, and skeletal abnormalities. Some specific abnormalities are discussed further below. Mutations in TBX2 cause predisposition to hernias.
Abnormalities of the digits
During limb bud development, down-regulation of Tbx2 fails to inhibit Shh/FGF4 signaling; therefore, resulting in increased limb bud size and duplication of the 4th digit, polydactyly. Opposite this, when Tbx2 is over expressed or duplicated, limb buds are smaller and can have reduced digit number because of the early termination of Shh and FGF4 signaling.
Ventricular septal defects
This is a broad category encompassing many more specific congenital heart defects. Of those related to Tbx2, some are caused by duplication, or over-expression, of Tbx2, and others are caused by deletion of the Tbx2 gene region. For example, patients with a duplication of the Tbx2 gene region have presented with atrioventricular abnormalities including interventricular septal defect, patent foramen ovale, aortic coarctation, tricuspid valve insufficiency, and mitral valve stenosis. In contrast, those with Tbx2 gene deletion have presented with pulmonary hypertension and other heart defects, although this is less frequently reported.
Role in tumorigenesis
Tbx2 has been implicated in cancers associated with the lung, breast, bone, pancreas, and melanoma. It is known to be over-expressed in this group of cancers, altering cell-signaling pathways leading to tumorigenesis. Several pathways have been suggested and studied using mouse knockout models of genes within the signaling pathways. Currently, research using the knockout model of Tbx2 for study of tumorigenesis is limited.
p14ARF/MDM2/p35/p21CIP1 Pathway. When up-regulated, Tbx2 inhibits p21CIP1. p21CIP1 is necessary for tissue senescence, and when compromised, leaves the tissue vulnerable to tumor-promoting signals.
Wnt/beta-catenin Pathway. The role of Tbx2 in Wnt signaling has yet to be confirmed; however, up-regulation of Tbx2 in the beta-catenin signaling pathway leads to loss of the adhesion molecule E-cadherin. This returns cells to a mesenchymal state, and facilitates invasion of tumor cells.
EGR1 Signaling Pathway. Finally, Tbx2 up-regulation increases its interaction with EGR1. EGR1 represses NDGR1 to increase cell proliferation, resulting in metastasis or tumor development.
Together, the up-regulation of Tbx2 on these signaling pathways can lead to development of malignant tumors.
Cancer treatment target
Understanding the signaling pathways, and the role of Tbx2 in tumorigenesis, can aid in developing gene-targeted cancer treatments. Because Tbx2 is up-regulated in various types of cancer cells in multiple organ systems, the potential for gene therapy is promising. Scientists are interested in targeting a small domain of Tbx2 and Tbx3 to reduce their expression, and in utilizing small peptides known to suppress tumor genes to inhibit proliferation. An in vitro study using a human prostate cancer cell line, in which endogenous Tbx2 was blocked using Tbx2 dominant-negative retroviral vectors, found reduced tumor cell proliferation. Further, the same study suggests targeting WNT3A because of its role in cell signaling with Tbx2, by utilizing a WNT antagonist such as SFRP-2. Because somatic cells have low expression of Tbx2, a targeted Tbx2 gene treatment would leave healthy somatic cells unharmed, thereby providing a treatment with low toxicity and few negative side effects. Much research is still required to determine the efficacy of these specific gene targets for anti-cancer treatments.
References
Further reading
External links
Transcription factors | TBX2 | [
"Chemistry",
"Biology"
] | 1,862 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,815,568 | https://en.wikipedia.org/wiki/KLF10 | Krueppel-like factor 10 is a protein that in humans is encoded by the KLF10 gene.
See also
Kruppel-like factors
References
Further reading
External links
Transcription factors | KLF10 | [
"Chemistry",
"Biology"
] | 40 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,816,039 | https://en.wikipedia.org/wiki/TSC22D1 | TSC22 domain family protein 1 is a protein that in humans is encoded by the TSC22D1 gene.
TSC22 encodes a transcription factor and belongs to the large family of early response genes.
TSC22D1 forms homodimers via its conserved leucine zipper domain and heterodimerizes with TSC22D4. TSC22D1 has transcriptional repressor activity.
References
Further reading
External links
Transcription factors | TSC22D1 | [
"Chemistry",
"Biology"
] | 95 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,816,883 | https://en.wikipedia.org/wiki/DIDO1 | Death-inducer obliterator 1 is a protein that in humans is encoded by the DIDO1 gene.
Function
Apoptosis, a major form of cell death, is an efficient mechanism for eliminating unwanted cells and is of central importance for development and homeostasis in metazoan animals. In mice, the death inducer-obliterator-1 gene is upregulated by apoptotic signals and encodes a cytoplasmic protein that translocates to the nucleus upon apoptotic signal activation. When overexpressed, the mouse protein induced apoptosis in cell lines growing in vitro. This gene is similar to the mouse gene and therefore is thought to be involved in apoptosis. Alternatively spliced transcripts have been found for this gene, encoding multiple isoforms.
References
Further reading
External links
Transcription factors | DIDO1 | [
"Chemistry",
"Biology"
] | 176 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,817,183 | https://en.wikipedia.org/wiki/ERF%20%28gene%29 | ETS domain-containing transcription factor ERF is a protein that in humans is encoded by the ERF gene.
References
Further reading
External links
Transcription factors | ERF (gene) | [
"Chemistry",
"Biology"
] | 32 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,818,071 | https://en.wikipedia.org/wiki/MEF2B | Myocyte enhancer binding factor 2B (MEF2B) is a transcription factor part of the MEF2 gene family including MEF2A, MEF2C, and MEF2D. However, MEF2B is distant from the other three branches of MEF2 genes as it lacks the protein-coding Holliday junction recognition protein C-terminal (HJURP_C) region in vertebrates.
Functions
The MEF2 gene family is expressed in muscle-specific gene activation and maintenance during development. MEF2B mRNA is present in skeletal, smooth, brain and heart muscles. MEF2B is directly involved in smooth muscle myosin heavy chain (SMHC) gene regulation. Overexpression of MEF2B will activate the SMHC promoter in smooth muscle when it is bound to the A/T-rich element of the promoter.
Interactions
MEF2B has been shown to interact with CABIN1.
Clinical relevance
Recurrent mutations in this gene have been associated with cases of diffuse large B-cell lymphoma. In its mutated form, MEF2B can lead to deregulation of the proto-oncogene BCL6 expression in diffuse large B-cell lymphomas (DLBCL). Mutations of MEF2B enhance its transcriptional activity due to either a disruption with its corepressor CABIN1 or causing the gene to become insensitive to inhibitory signaling events.
See also
Mef2
References
Further reading
External links
Transcription factors | MEF2B | [
"Chemistry",
"Biology"
] | 319 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
14,818,515 | https://en.wikipedia.org/wiki/PRRX1 | Paired related homeobox 1 is a protein that in humans is encoded by the PRRX1 gene.
Function
The DNA-associated protein encoded by this gene is a member of the paired family of homeobox proteins localized to the nucleus. The protein functions as a transcription coactivator, enhancing the DNA-binding activity of serum response factor, a protein required for the induction of genes by growth and differentiation factors. The protein regulates muscle creatine kinase, indicating a role in the establishment of diverse mesodermal muscle types. Alternative splicing yields two isoforms that differ in abundance and expression patterns.
Role in mesenchymal stem cell differentiation
Prrx1 expression is restricted to the mesoderm during embryonic development, and both Prrx1 and Prrx2 are expressed in mesenchymal tissues in adult mice. Mice that lack both Prrx1 and Prrx2 have profound defects in mesenchymal cell differentiation in the craniofacial region. Several recent studies demonstrate that PRRX1 can regulate differentiation of mesenchymal precursors. For example, PRRX1 inhibits adipogenesis by activating transforming growth factor-beta (TGF-beta) signaling, and also acts downstream of tumor necrosis factor-alpha to inhibit osteoblast differentiation.
References
Further reading
External links
Transcription factors | PRRX1 | [
"Chemistry",
"Biology"
] | 284 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,818,641 | https://en.wikipedia.org/wiki/Cyclic%20nucleotide%20gated%20channel%20beta%203 | Cyclic nucleotide gated channel beta 3, also known as CNGB3, is a human gene encoding an ion channel protein.
See also
Cyclic nucleotide-gated ion channel
Stargardt disease
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on Achromatopsia
OMIM entries on Achromatopsia
Ion channels | Cyclic nucleotide gated channel beta 3 | [
"Chemistry"
] | 80 | [
"Neurochemistry",
"Ion channels"
] |
14,818,749 | https://en.wikipedia.org/wiki/BCS1L | Mitochondrial chaperone BCS1 (BCS1L), also known as BCS1 homolog, ubiquinol-cytochrome c reductase complex chaperone (h-BCS1), is a protein that in humans is encoded by the BCS1L gene. BCS1L is a chaperone protein involved in the assembly of Ubiquinol Cytochrome c Reductase (complex III), which is located in the inner mitochondrial membrane and is part of the electron transport chain. Mutations in this gene are associated with mitochondrial complex III deficiency (nuclear, 1), GRACILE syndrome, and Bjoernstad syndrome.
Structure
BCS1L is located on the q arm of chromosome 2 in position 35 and has 10 exons. The BCS1L gene produces a 47.5 kDa protein composed of 419 amino acids. The protein encoded by BCS1L belongs to the AAA ATPase family, BCS1 subfamily. BCS1L is a phosphoprotein and chaperone for Ubiquinol Cytochrome c Reductase assembly. It contains a nucleotide binding site for ATP-binding. BCS1L does not contain a mitochondrial targeting sequence but experimental studies confirm that it is imported into mitochondria. A conserved domain at the N-terminus of BCS1L is responsible for the import and intramitochondrial sorting. Associating to the inner mitochondrial membrane, BCS1L has a transmembrane domain in between two topological domains, passing through the inner mitochondrial membrane once. The majority of the protein is in the mitochondrial matrix. Several alternatively spliced transcripts encoding two different isoforms have been described.
Function
BCS1L encodes a protein that is located in the inner mitochondrial membrane and involved in the assembly of Ubiquinol Cytochrome c Reductase (complex III). Complex III plays an important role in the mitochondrial respiratory chain by transferring electrons from the Rieske iron-sulfur protein to cytochrome c. BCS1L is essential for this process through its role in the maintenance of mitochondrial tubular networks, respiratory chain assembly, and formation of the LETM1 complex.
Clinical Significance
Variants of BCS1L have been associated with mitochondrial complex III deficiency, nuclear 1, GRACILE syndrome, and Bjoernstad syndrome. Mitochondrial complex III deficiency, nuclear 1 is a disorder of the mitochondrial respiratory chain resulting in reduced complex III activity and highly variable clinical features usually resulting in multi-system organ failure. Clinical features may include mitochondrial encephalopathy, psychomotor retardation, ataxia, severe failure to thrive, liver dysfunction, renal tubulopathy, muscle weakness, exercise intolerance, lactic acidosis, hypotonia, seizures, and optic atrophy. Pathogenic mutations have included R45C, R56X, T50A, R73C, P99L, R155P, V353M, G129R, R183C, F368I, and S277N. These mutations tend to affect the ATP-binding residues of BCS1L.
Growth retardation, aminoaciduria, cholestasis, iron overload, lactic acidosis, and early death (GRACILE) is a recessively inherited lethal disease that results in multi-system organ failure. GRACILE is characterized by fetal growth retardation, lactic acidosis, aminoaciduria, cholestasis, and abnormalities in iron metabolism. Pathogenic mutations have included S78G, R144Q, and V327A.
Bjoernstad syndrome is an autosomal recessive disease primarily affecting hearing. This disease is characterized by congenital hearing loss and twisted hairs, a condition known as pili torti, in which hair shafts are flattened at irregular intervals and twisted 180 degrees from the normal axis, making the hair extremely brittle. Pathogenic mutations have included Y301N, R184C, G35R, R114W, R183H, Q302E, and R306H. These mutations tend to affect the protein-protein interactions of BCS1L.
Interactions
BCS1L has 11 protein-protein interactions with 8 of them being co-complex interactions. BCS1L has been found to interact with LETM1, DNAJA1, and DDX24.
See also
Björnstad syndrome
GRACILE syndrome
References
External links
Further reading
Protein domains | BCS1L | [
"Biology"
] | 944 | [
"Protein domains",
"Protein classification"
] |
14,819,769 | https://en.wikipedia.org/wiki/HAND1 | Heart- and neural crest derivatives-expressed protein 1 is a protein that in humans is encoded by the HAND1 gene.
A member of the HAND subclass of basic Helix-loop-helix (bHLH) transcription factors, the Heart and neural crest-derived transcript-1 (HAND1) gene is vital for the development and differentiation of three distinct embryological lineages including the cardiac muscle cells of the heart, trophoblast of the placenta, and yolk sac vasculogenesis. Most highly related to twist-like bHLH genes in amino acid identity and embryonic expression, HAND1 can form homo- and heterodimer combinations with multiple bHLH partners, mediating transcriptional activity in the nucleus.
Function
The protein encoded by this gene belongs to the basic helix-loop-helix family of transcription factors. This gene product is one of two closely related family members, the HAND proteins are expressed within the developing ventricular chambers, cardiac neural crest, endocardium (HAND2 only) and epicardium (HAND2 only). HAND1 is expressed with myocardium of the primary heart field and plays an essential but poorly understood role in cardiac morphogenesis.
HAND1 works jointly with HAND2 in cardiac development of embryos based on a crucial HAND gene dosage system, so that over- or under-expression of HAND1 produces morphological abnormalities (discussed in the cardiac morphogenesis section below). Knock-out experimentation on mice caused death and severe cardiac malformations such as failed cardiac looping, impaired ventricular development and defective chamber septation, implicating altered HAND1 expression as a factor in congenital heart disease.
HAND factors function in the formation of the right ventricle, left ventricle, aortic arch arteries, epicardium, and endocardium implicating them as mediators of congenital heart disease. In addition, HAND1 is uniquely expressed in trophoblasts and is essential for early trophoblast differentiation.
Cardiac morphogenesis
In the third week of fetal development, the rudimentary heart (bilaterally symmetrical cardiac tube) undergoes a characteristic dextral looping, forming an asymmetrical structure with bulges that represent the incipient ventricular and atrial chambers of the heart. Arising from cells derived from the primary heart field in the cardiac crescent, HAND1 goes from being expressed on both sides of the heart tube to the ventral surface of the caudal heart segment and the aortic sac, then being restricted to the outer curvature of the left ventricle in the looped heart. In conjunction with HAND2 (a fellow bHLH transcription factor), complementary and overlapping expression patterns are thought to play a role in interpreting asymmetrical signals in the developing heart which leads to the characteristic looping. The two are implicated in cardiac development of embryos based on a crucial HAND gene dosage system. If HAND1 is over or under expressed then morphological abnormalities can form; most notable are cleft lips and palates. Expression was modeled with a knock-in of phosphorylation to turn on and off gene expression which induced the craniofacial abnormalities.
HAND1 mutants also appear to develop a spectrum of cardiac abnormalities, as demonstrated in knock-out experimentation in the mouse model, where HAND1-null mice displayed defects in the ventral septum, malformation of the AV valve, hypoplastic ventricles, and outflow tract abnormalities. In humans, evidence of a frameshift mutation in the bHLH domain of HAND1 has been correlated with hypoplastic left heart syndrome (a serious form of congenital heart disease where the left side of the heart is severely underdeveloped), supporting the implication that altered HAND1 expression contributes to the disease.
However, a lack of HAND1 in the distal regions of the neural crest has no effect on cranial feature formation. Mutation of HAND1 has been shown to hinder the effect of GATA4, another vital cardiac transcription factor, and is associated with congenital heart disease. The lack of HAND1 in the developing embryo leads to many of the structural defects that cause heart disease and facial deformities, while the dosage of HAND1 relates to the severity of these maladies.
Trophoblast differentiation
In addition, HAND1 is uniquely expressed in trophoblasts and is essential for early trophoblast giant cell differentiation. Trophoblast giant cells are necessary in order for placental development to proceed, participating in vital processes such as blastocyst implantation, remodeling of the maternal decidua, and secretion of hormones. The importance of this relationship is demonstrated in HAND1-null mutant mice, which display significant abnormalities in trophoblast development, such as a reduced ectoplacental cone, thin parietal yolk sac, and reduced density of trophoblast giant cells. These homozygous HAND1-null mutant embryos were arrested by E7.5 of gestation, though could be saved by contribution of wild-type cells to the trophoblast.
Yolk sac vasculogenesis
Expressed in high levels in the extraembryonic membranes throughout development, HAND1 also plays a functional role in vascular development of the yolk sac. Though not strictly required for vasculogenesis, data has shown that HAND1 contributes to the fine-tuning of the vasculogenic response in the yolk sac, recruiting smooth muscle cells to the endothelial network in order to refine the primitive endothelial plexus to a functional vascular system. This relationship has been demonstrated in the HAND1-null mouse model, where embryos lacking the HAND1 gene had a yolk sac vasculature defect caused by lack of vasculature refinement leading to the accumulation of hematopoietic cells between the yolk sac and the amnion.
References
Further reading
External links
Transcription factors | HAND1 | [
"Chemistry",
"Biology"
] | 1,378 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
14,819,897 | https://en.wikipedia.org/wiki/CEBPG | CCAAT/enhancer-binding protein gamma (C/EBPγ) is a protein that in humans is encoded by the CEBPG gene. This gene has no introns.
The C/EBP family of transcription factors regulates viral and cellular CCAAT/enhancer element-mediated transcription. C/EBP proteins contain the bZIP region, which is characterized by two motifs in the C-terminal half of the protein: a basic region involved in DNA binding and a leucine zipper motif involved in dimerization. The C/EBP family consists of several related proteins, C/EBPα, C/EBPβ, C/EBPγ, C/EBPδ, C/EBPζ, and C/EBPε, that form homodimers and heterodimers with each other. CCAAT/enhancer binding protein gamma may cooperate with Fos to bind the positive regulatory element-I (PRE-I) enhancer elements.
C/EBPγ forms heterodimer with ATF4 for transcriptional activation of target genes in autophagy specifically to amino acid starvation.
C/EBPγ along with its regulator, the trauma-induced transcription factor EGR1, plays an important role in the development of chronic pain and mechanical hypersensitivity after some types of injury or surgery.
References
Further reading
External links
Transcription factors | CEBPG | [
"Chemistry",
"Biology"
] | 302 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
17,660,040 | https://en.wikipedia.org/wiki/CDX%20Format | CDX (ChemDraw Exchange) is a binary file type created by CambridgeSoft Corporation's ChemDraw chemical structure application. CDXML is the XML-based and preferred version of this format.
CDX is the native file format used by ChemDraw to accurately store molecular data, such as atoms, bonds, fragments, arrows and text, in a tagged binary format. The CDX file format is used across Windows, Mac and Linux distributions.
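Because CDXML is plain XML, a document in that sibling format can be inspected with any XML parser. The sketch below uses Python's standard library; the file name is hypothetical, and the element and attribute names ("fragment", "n", "b", "Element", "B", "E") are assumptions based on the published CDXML specification rather than something stated in this article.

```python
import xml.etree.ElementTree as ET

# Parse a hypothetical CDXML file and list its atoms (nodes) and bonds.
# Element/attribute names are assumed from the CDXML schema; verify against the spec.
tree = ET.parse("molecule.cdxml")          # hypothetical file name
root = tree.getroot()

for fragment in root.iter("fragment"):
    # Nodes ("n") default to carbon when no Element attribute is present (assumption)
    atoms = {n.get("id"): n.get("Element", "6") for n in fragment.findall("n")}
    bonds = [(b.get("B"), b.get("E")) for b in fragment.findall("b")]
    print(f"fragment: {len(atoms)} atoms, {len(bonds)} bonds")
    for begin, end in bonds:
        print(f"  bond {begin} (Z={atoms.get(begin)}) - {end} (Z={atoms.get(end)})")
```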
References
External links
CDX File Format , Documentation of File Specification
Chemical file formats | CDX Format | [
"Chemistry",
"Technology"
] | 112 | [
"Computing stubs",
"Chemistry software",
"Chemical file formats"
] |
17,663,305 | https://en.wikipedia.org/wiki/LigandScout | LigandScout is computer software that allows creating three-dimensional (3D) pharmacophore models from structural data of macromolecule–ligand complexes, or from training and test sets of organic molecules. It incorporates a complete definition of 3D chemical features (such as hydrogen bond donors, acceptors, lipophilic areas, positively and negatively ionizable chemical groups) that describe the interaction of a bound small organic molecule (ligand) and the surrounding binding site of the macromolecule. These pharmacophores can be overlaid and superimposed using a pattern-matching based alignment algorithm that is solely based on pharmacophoric feature points instead of chemical structure. From such an overlay, shared features can be interpolated to create a so-called shared-feature pharmacophore that shares all common interactions of several binding sites/ligands or extended to create a so-called merged-feature pharmacophore. The software has been successfully used to predict new lead structures in drug design, e.g., predicting biological activity of novel human immunodeficiency virus (HIV) reverse transcriptase inhibitors.
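LigandScout itself is a commercial tool and its API is not described here. As a loose illustration of the kind of 3D chemical features the paragraph above mentions (hydrogen bond donors and acceptors, lipophilic areas, charged groups), the open-source RDKit toolkit can enumerate pharmacophoric features for a ligand. This is a sketch of the general concept, not of LigandScout's algorithm; the example molecule (aspirin) and the default feature definitions are arbitrary choices.

```python
import os
from rdkit import Chem, RDConfig
from rdkit.Chem import AllChem, ChemicalFeatures

# Build RDKit's default pharmacophore feature factory (donor, acceptor, hydrophobe, ...)
factory = ChemicalFeatures.BuildFeatureFactory(
    os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef"))

# Example ligand, embedded in 3D so the features get coordinates
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
AllChem.EmbedMolecule(mol, randomSeed=42)

for feat in factory.GetFeaturesForMol(mol):
    pos = feat.GetPos()
    print(f"{feat.GetFamily():12s} atoms {tuple(feat.GetAtomIds())} "
          f"at ({pos.x:.2f}, {pos.y:.2f}, {pos.z:.2f})")
```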
Similar tools
Other software tools which help to model pharmacophores include:
Molecular Operating Environment (MOE) – by the Chemical Computing Group
Phase – by Schrödinger
Discovery Studio – by Accelrys
SYBYL-X – by Tripos
Pharao by Silicos-It
See also
Comparison of software for molecular mechanics modeling
References
Further reading
Medicinal chemistry
Molecular modelling software | LigandScout | [
"Chemistry",
"Biology"
] | 317 | [
"Molecular modelling software",
"Computational chemistry software",
"Molecular modelling",
"Medicinal chemistry",
"nan",
"Biochemistry"
] |
17,666,127 | https://en.wikipedia.org/wiki/Transient%20equilibrium | In nuclear physics, transient equilibrium is a situation in which equilibrium is reached by a parent-daughter radioactive isotope pair where the half-life of the daughter is shorter than the half-life of the parent. Contrary to secular equilibrium, the half-life of the daughter is not negligible compared to the parent's half-life. An example of this is a molybdenum-99 generator producing technetium-99m for nuclear medicine diagnostic procedures. Such a generator is sometimes called a cow because the daughter product, in this case technetium-99m, is milked at regular intervals. Transient equilibrium occurs after four half-lives, on average.
Activity in transient equilibrium
The activity of the daughter is given by the Bateman equation:

$A_d(t) = BR \cdot \frac{\lambda_d}{\lambda_d - \lambda_P}\,A_P(0)\left(e^{-\lambda_P t} - e^{-\lambda_d t}\right),$

where $A_P$ and $A_d$ are the activity of the parent and daughter, respectively. $T_P = \ln(2)/\lambda_P$ and $T_d = \ln(2)/\lambda_d$ are the half-lives (inverses of reaction rates in the above equation modulo ln(2)) of the parent and daughter, respectively, and BR is the branching ratio.
In transient equilibrium, the Bateman equation cannot be simplified by assuming the daughter's half-life is negligible compared to the parent's half-life. The ratio of daughter-to-parent activity is given by:

$\frac{A_d}{A_P} = \frac{BR \cdot T_P}{T_P - T_d}.$
Time of maximum daughter activity
In transient equilibrium, the daughter activity increases and eventually reaches a maximum value that can exceed the parent activity. The time of maximum activity is given by:

$t_{\max} = \frac{1.44\,T_P T_d}{T_P - T_d}\,\ln\!\left(\frac{T_P}{T_d}\right),$

where $T_P$ and $T_d$ are the half-lives of the parent and daughter, respectively. In the case of the $^{99m}$Tc–$^{99}$Mo generator, the time of maximum activity ($t_{\max}$) is approximately 24 hours, which makes it convenient for medical use.
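A small numerical check of these expressions, assuming the commonly quoted half-lives of 65.94 h for molybdenum-99 and 6.01 h for technetium-99m and a branching ratio of about 0.875 (all values are illustrative assumptions, not data from this article):

```python
import numpy as np

LN2 = np.log(2.0)

def daughter_activity(t, A_parent_0, T_parent, T_daughter, BR=1.0):
    """Daughter activity A_d(t) from the Bateman equation for a parent-daughter pair
    (activities in arbitrary units, times in hours)."""
    lam_p = LN2 / T_parent
    lam_d = LN2 / T_daughter
    return BR * lam_d / (lam_d - lam_p) * A_parent_0 * (np.exp(-lam_p * t) - np.exp(-lam_d * t))

def time_of_max_activity(T_parent, T_daughter):
    """Time at which the daughter activity peaks."""
    return (1.0 / LN2) * T_parent * T_daughter / (T_parent - T_daughter) * np.log(T_parent / T_daughter)

# Assumed Mo-99 / Tc-99m generator parameters (half-lives in hours)
T_Mo, T_Tc, BR = 65.94, 6.01, 0.875
t_max = time_of_max_activity(T_Mo, T_Tc)
print(f"t_max is about {t_max:.1f} h (roughly one day)")
print(f"A_Tc(t_max) = {daughter_activity(t_max, 1.0, T_Mo, T_Tc, BR):.3f} per unit initial Mo-99 activity")
```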
See also
Bateman equation
Secular equilibrium
References
Radioactivity | Transient equilibrium | [
"Physics",
"Chemistry"
] | 353 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Radioactivity",
"Nuclear physics"
] |
17,667,375 | https://en.wikipedia.org/wiki/Critical%20line%20%28thermodynamics%29 | In thermodynamics, a critical line is the higher-dimensional equivalent of a critical point. It is the
locus of contiguous critical points in a phase diagram. These lines cannot occur for
a single substance due to the phase rule, but they can be observed in systems with more variables, such as mixtures. Two critical lines may meet and terminate in a tricritical point.
References
Thermodynamics
Critical phenomena | Critical line (thermodynamics) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 88 | [
"Thermodynamics stubs",
"Physical phenomena",
"Critical phenomena",
"Thermodynamics",
"Condensed matter physics",
"Statistical mechanics",
"Physical chemistry stubs",
"Dynamical systems"
] |
17,670,411 | https://en.wikipedia.org/wiki/Photon%20antibunching | Photon antibunching generally refers to a light field with photons more equally spaced than a coherent laser field, a signature being a measured two-time correlation suppressed below that of a coherent laser field. More specifically, it can refer to sub-Poissonian photon statistics, that is a photon number distribution for which the variance is less than the mean. A coherent state, as output by a laser far above threshold, has Poissonian statistics yielding random photon spacing; while a thermal light field has super-Poissonian statistics and yields bunched photon spacing. In the thermal (bunched) case, the number of fluctuations is larger than a coherent state; for an antibunched source they are smaller.
Explanation
The variance of the photon number distribution is

$(\Delta n)^2 = \langle (a^{\dagger}a)^2 \rangle - \langle a^{\dagger}a \rangle^2 .$

Using commutation relations, this can be written as

$(\Delta n)^2 = \langle a^{\dagger}a^{\dagger}aa \rangle + \langle a^{\dagger}a \rangle - \langle a^{\dagger}a \rangle^2 .$

This can be written as

$(\Delta n)^2 = \langle a^{\dagger}a \rangle^2 \left( \frac{\langle a^{\dagger}a^{\dagger}aa \rangle}{\langle a^{\dagger}a \rangle^2} - 1 \right) + \langle a^{\dagger}a \rangle .$

The second-order intensity correlation function (for zero delay time) is defined as

$g^{(2)}(0) = \frac{\langle a^{\dagger}a^{\dagger}aa \rangle}{\langle a^{\dagger}a \rangle^2} .$

This quantity is basically the probability of detecting two simultaneous photons, normalized by the probability of detecting two photons at once for a random photon source. Here and after we assume stationary counting statistics.
Then we have

$(\Delta n)^2 = \langle a^{\dagger}a \rangle^2 \left( g^{(2)}(0) - 1 \right) + \langle a^{\dagger}a \rangle .$

Then we see that sub-Poisson photon statistics, one definition of photon antibunching, is given by $g^{(2)}(0) < 1$. We can equivalently express antibunching by $Q < 0$, where the Mandel Q parameter is defined as

$Q = \frac{(\Delta n)^2 - \langle a^{\dagger}a \rangle}{\langle a^{\dagger}a \rangle} = \langle a^{\dagger}a \rangle \left( g^{(2)}(0) - 1 \right) .$
If the field had a classical stochastic process underlying it, say a positive definite probability distribution for photon number, the variance would have to be greater than or equal to the mean. This can be shown by an application of the Cauchy–Schwarz inequality to the definition of . Sub-Poissonian fields violate this, and hence are nonclassical in the sense that there can be no underlying positive definite probability distribution for photon number (or intensity).
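A quick numerical illustration of these statistics: the sketch below draws photon-number samples from a Poissonian (coherent-like), a super-Poissonian (thermal-like) and an at-most-one-photon distribution, then estimates $g^{(2)}(0) = \langle n(n-1)\rangle/\langle n\rangle^2$ and the Mandel Q parameter from the samples. The distributions are idealized stand-ins chosen for illustration, not models of any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def g2_zero(n):
    """Zero-delay second-order correlation from photon-number samples:
    g2(0) = <n(n-1)> / <n>^2."""
    n = np.asarray(n, dtype=float)
    return np.mean(n * (n - 1)) / np.mean(n) ** 2

def mandel_Q(n):
    """Mandel Q parameter: Q = (Var(n) - <n>) / <n>."""
    n = np.asarray(n, dtype=float)
    return (np.var(n) - np.mean(n)) / np.mean(n)

N = 200_000
coherent = rng.poisson(2.0, N)                    # coherent (laser-like) field: Poissonian
thermal = rng.poisson(rng.exponential(2.0, N))    # thermal field: super-Poissonian (Bose-Einstein)
single_photon = rng.binomial(1, 0.5, N)           # idealized antibunched source: at most one photon

for name, n in [("coherent", coherent), ("thermal", thermal), ("single photon", single_photon)]:
    print(f"{name:14s} g2(0) = {g2_zero(n):5.2f}   Q = {mandel_Q(n):+5.2f}")
```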
Photon antibunching by this definition was first proposed by Carmichael and Walls and first observed by Kimble, Mandel, and Dagenais in resonance fluorescence. A driven atom cannot emit two photons at once, and so in this case $g^{(2)}(0) = 0$. An experiment with more precision that did not require subtraction of a background count rate was done for a single atom in an ion trap by Walther et al.
A more general definition for photon antibunching concerns the slope of the correlation function away from zero time delay. It can also be shown by an application of the Cauchy–Schwarz inequality to the time dependent intensity correlation function
g^{(2)}(\tau) = \frac{\langle a^\dagger(t)\, a^\dagger(t+\tau)\, a(t+\tau)\, a(t) \rangle}{\langle a^\dagger(t) a(t) \rangle\, \langle a^\dagger(t+\tau) a(t+\tau) \rangle}
It can be shown that for a classical positive definite probability distribution to exist (i.e. for the field to be classical) g^{(2)}(\tau) \le g^{(2)}(0). Hence a rise in the second order intensity correlation function at early times is also nonclassical. This initial rise is photon antibunching.
Another way of looking at this time dependent correlation function, inspired by quantum trajectory theory, is
g^{(2)}(\tau) = \frac{\langle n(t+\tau) \rangle_c}{\langle n(t) \rangle}
where
\langle n(t+\tau) \rangle_c = \mathrm{Tr}\left[ \rho_c(t+\tau)\, a^\dagger a \right]
with \rho_c(t+\tau) the state conditioned on previous detection of a photon at time t.
Experiments
Spatial antibunching has been observed in photon pairs produced by spontaneous parametric down-conversion.
See also
Correlation does not imply causation
Degree of coherence
Fock state
Hong–Ou–Mandel effect
Hanbury Brown and Twiss effect
Squeezed coherent state
Sources
Article based on text from Qwiki, reproduced under the GNU Free Documentation License: see Photon Antibunching
References
External links
Photon antibunching (Becker & Hickl GmbH, web page)
Quantum optics | Photon antibunching | [
"Physics"
] | 688 | [
"Quantum optics",
"Quantum mechanics"
] |
17,670,991 | https://en.wikipedia.org/wiki/Time%20displacement | Time displacement in sociology refers to the idea that new forms of activities may replace older ones. New activities that cause time displacement are usually technology-based, most commonly information and communication technologies such as the Internet and television. Those technologies are seen as responsible for declines in previously more common activities such as in- and out-of-home socializing, work, and even personal care and sleep.
For example, Internet users may spend time online as a substitute for other activities that served similar functions (watching television, reading printed media, face-to-face interaction, etc.). The Internet is not the first technology to result in time displacement. Earlier, television had a similar impact, as it shifted people's time away from activities such as listening to the radio, going to movie theaters, talking at home, or spending time outside it.
See also
Parkinson's law
References
Paul DiMaggio, Eszter Hargittai, W. Russell Neuman, and John P. Robinson, Social Implications of the Internet, Annual Review of Sociology, Vol. 27: 307-336 (Volume publication date August 2001).
Waipeng Lee and Eddie C. Y. Kuo, Internet and Displacement Effect: Children's Media Use and Activities in Singapore, JCMC 7 (2) January 2002
Time
Sociological terminology | Time displacement | [
"Physics",
"Mathematics"
] | 271 | [
"Physical quantities",
"Time",
"Time stubs",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
17,671,709 | https://en.wikipedia.org/wiki/Chromatography%20software | Chromatography software is also called a Chromatography Data System. It runs on the data station of modern liquid, gas, or supercritical fluid chromatographic systems. It is dedicated software connected to a hardware interface within the chromatographic system, and it serves as a central hub for collecting, analyzing, and managing the data generated during chromatographic analysis.
In modern systems the data station is connected to the entire instrument, especially the detectors, allowing real-time monitoring of the runs and displaying them as chromatograms. A chromatogram is a graphical representation of the results obtained from the chromatographic system. In a chromatogram, each component of the mixture appears as a peak or band at a specific retention time, which is related to its characteristics, such as molecular weight, polarity, and affinity for the stationary phase. The height, width, and area of the peaks in a chromatogram provide information about the amount and purity of the components in the sample. Analyzing a chromatogram helps identify and quantify the substances present in the mixture being analyzed.
Integration & Processing
The major tool of chromatography software is peak "integration". A series of articles describes it: Peak Integration Part 1, Peak Integration Part 2, Peak Integration Part 3. The parameters inside the chromatography software that affect the integration are called integration events.
Peak integration in any chromatography software refers to the process of quantifying the areas under the peaks in the chromatogram. The area under a peak is proportional to the amount of that particular component in the sample.
Here are the basics of peak integration in a chromatographic system:
Peak Identification: Before integration, the peaks corresponding to different components in the sample need to be identified, based on their retention times. This is typically done by comparing the observed peaks with known standards or reference data.
Baseline Correction: Establish a baseline for the chromatogram, which represents the lowest signal level along the time axis next to the peak. The baseline represents the noise and background signal. Accounting for the baseline level allows accurate integration because it compensates for any drift or fluctuations in the signal.
Peak Integration parameters and settings: Use appropriate algorithms to integrate the peaks in the chromatogram. Adjust integration parameters and settings as needed, such as peak width, noise threshold, and baseline correction method, which determine where the peak starts and ends and where its maximum lies. Optimizing these parameters helps obtain accurate and precise integration results.
Quantification: Once the areas under the peaks are determined through integration, the quantification of each component is performed. The integrated areas are compared to a calibration curve, created using standards' concentrations to calculate the concentration of each component in the unknown sample.
Data Interpretation: The software analyzes the integrated data to draw conclusions about the composition, concentration, and purity of the sample. The integrated areas provide valuable information for various applications, including quality control, research, and analysis.
Validation and Quality Control: It is important to ensure the accuracy and reliability of the integration process by performing validation and quality control checks on the software itself. This may involve comparing integration results with known standards, replicating analyses, and assessing precision and accuracy. A minimal numerical sketch of the baseline correction, integration, and quantification steps is shown below.
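The sketch below is only an illustration of the baseline correction, integration, and quantification steps described above; the function name, the synthetic Gaussian peak, and the single-point response factor are assumptions for demonstration, and real chromatography data systems use considerably more elaborate integration events.

```python
import numpy as np

def integrate_peak(time, signal, t_start, t_end, response_factor):
    """Integrate one chromatographic peak between t_start and t_end.

    A straight baseline is drawn between the signal values at the peak start
    and end, subtracted, and the remaining area is summed with the trapezoidal
    rule.  The area is converted to an amount using a single-point
    calibration (response) factor.
    """
    mask = (time >= t_start) & (time <= t_end)
    t, y = time[mask], signal[mask]
    baseline = np.interp(t, [t[0], t[-1]], [y[0], y[-1]])   # linear baseline
    y_corr = y - baseline
    area = np.sum(0.5 * (y_corr[1:] + y_corr[:-1]) * np.diff(t))  # trapezoidal rule
    return area, area * response_factor

# Synthetic chromatogram: one Gaussian peak at 2.5 min on a slowly drifting baseline
t = np.linspace(0.0, 5.0, 2001)
y = 0.02 * t + 1.5 * np.exp(-((t - 2.5) ** 2) / (2 * 0.05 ** 2))
area, amount = integrate_peak(t, y, 2.2, 2.8, response_factor=0.8)
print(f"peak area = {area:.3f}, estimated amount = {amount:.3f}")
```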
Applications are also available for simulation of chromatography, for example for teaching, demonstration, or for method development &/or optimization.
Software Packages
Many chromatography software packages are provided by manufacturers, and many of them only provide a simple interface to acquire data. They also provide different tools to analyze this data.
The following is a list of software and the (unexplained) tools that each provides. Note that some of them have been discontinued over the years.
See also
Laboratory informatics
References | Chromatography software | [
"Chemistry"
] | 783 | [
"Chromatography",
"Chromatography software",
"Chemistry software"
] |
1,635,629 | https://en.wikipedia.org/wiki/Society%20of%20Petroleum%20Engineers | The Society of Petroleum Engineers (SPE) is a 501(c)(3) not-for-profit professional organization.
SPE provides a worldwide forum for oil and natural gas exploration and production (E&P) professionals to exchange technical knowledge and best practices. SPE manages OnePetro and PetroWiki, in addition to publishing magazines, peer-reviewed journals, and books. SPE also hosts more than 100 events each year across the globe as well as providing online tools and in-person training opportunities. SPE's technical library (OnePetro) contains more than 314,000 technical papers—products of SPE conferences and periodicals, made available to the entire industry.
SPE has offices in Dallas, Houston, Calgary, Dubai and Kuala Lumpur. SPE is a professional association for more than 127,000 engineers, scientists, managers, and educators. There are about 59,000 student members of SPE.
History
The history of the SPE began well before its actual establishment. During the decade after the 1901 discovery of the Spindletop field, the American Institute of Mining Engineers (AIME) saw a growing need for a forum in the booming new field of petroleum engineering. As a result, AIME formed a standing committee on oil and gas in 1913.
In 1922, the committee was expanded to become one of AIME's 10 professional divisions. The Petroleum Division of AIME continued to grow throughout the next three decades. By 1950, the Petroleum Division had become one of three separate branches of AIME, and in 1957 the Petroleum Branch of AIME was expanded once again to form a professional society.
SPE became tax-exempt in March 1985.
The first SPE Board of Directors meeting was held 6 October 1957. SPE continues to operate more than 100 events around the world.
Membership
SPE is a non-profit association for petroleum engineers. Petroleum engineers who become members of SPE gain access to several member benefits like a complimentary subscription to the Journal of Petroleum Technology, unlimited free webinars, and discounts on SPE events (conferences, workshops, training courses, etc.) and publications. SPE Connect is a site and app for SPE members to exchange technical knowledge, answer each other's practical application questions, and share best practices.
SPE is made up of about 127,000 members in 145 countries. SPE Sections are groups of SPE Professional Members, and SPE Student Chapters are groups of SPE Student Members typically named for the hosting university or a geographical region. 67,000+ professional members are affiliated with 192 SPE Sections, and about 59,000 student members are affiliated with the 392 SPE Student Chapters.
SPE annually grants scholarships to student members.
Awards
Annually, SPE recognizes individuals for their contribution to the oil and gas industry at the regional and international levels.
All individuals who receive SPE Awards are nominated by an industry colleague, mentor, or other peer, except for recipients of the Cedric K. Ferguson Young Technical Author Medal, which is awarded to SPE members who author a paper approved for publication in an SPE journal (peer-reviewed journals on oil and gas topics) before age 36. Eligibility for the awards is described online.
SPE International Awards are announced online, featured in the Journal of Petroleum Technology, and presented at the Annual Technical Conference and Exhibition.
Regional awards
SPE grants technical and professional awards at the regional level. To be considered for these awards, one must be nominated online. Regional technical award eligibility is described online. SPE regional award recipients are considered for the international level of the award they received in the following award season. Regional awards are presented at regional or section meetings.
Distinguished Lecturers
The SPE Distinguished Lecturer Committee (DL) each year selects a group of nominees to become SPE Distinguished Lecturers. SPE Distinguished Lecturers are nominated for the program and selected by the committee to share their industry expertise by lecturing at local SPE sections across the globe. Nominees are notified of their nomination and must submit a summary of their biography, a presentation that can be given in thirty minutes or less, and additional information for the DL committee. The schedule of DL talks is available online. Some DL talks are very popular and are made available online as webinars.
Publications
SPE publishes peer-reviewed journals, magazines, and books. Technical papers presented at SPE conferences or approved for publication in SPE peer-reviewed journals are also published to OnePetro.org.
Peer-reviewed Journal
SPE Journal, a leading publication in oil, petroleum, and natural gas, offers peer-reviewed papers showcasing methods and technology solutions by industry experts. Its first issue was published in 1996.
Magazines
SPE publishes five online magazines:
Journal of Petroleum Technology (JPT) is the SPE flagship magazine, providing articles on oil & gas technology advancements, issues, and other exploration and production industry news. The JPT newsletter is sent out weekly on Wednesdays. Anyone may sign up to receive the JPT newsletter, though some content is only accessible to members of SPE. Every SPE member receives a complimentary subscription to JPT.
Oil and Gas Facilities (OGF) is focused on delivering the latest news on project and technology shifts in the industry.
The Way Ahead (TWA) is by and for young professional members of SPE. It is the newest SPE magazine. It was first published in 2006 and moved from print to online in 2016.
HSE Now is aimed at covering the changes in health, safety, security, environmental, social responsibility, and regulations that impact the oil and gas industry.
Data Science and Digital Engineering presents the evolving landscape of digital technology and data management in the upstream oil and gas industry.
OnePetro
Launched in March 2007, OnePetro.org is a multi-society library that allows users to search for and access a broad range of technical literature related to the oil and gas exploration and production industry. OnePetro is a multi-association effort that reflects participation of many organizations. The Society of Petroleum Engineers (SPE) operates OnePetro on behalf of the participating organizations.
OnePetro currently contains more than 1.3 million searchable documents from 23 publishing partners. OnePetro users viewed 4.9 million items in 2023. OnePetro is the first online offering of documents from some organizations, making these materials widely available for the first time.
SPE Petroleum Engineering Certification
The SPE Petroleum Engineering Certification program was instituted as a way to certify petroleum engineers by examination and experience. This certification is similar to the Registration of Petroleum Engineers by state in the United States.
Certified professionals use "SPEC" after their name.
See also
Energy law
References
International professional associations
Engineering societies based in the United States
Petroleum engineering
International organizations based in the United States
Organizations based in Dallas | Society of Petroleum Engineers | [
"Engineering"
] | 1,381 | [
"Petroleum engineering",
"Energy engineering",
"Society of Petroleum Engineers"
] |
1,636,762 | https://en.wikipedia.org/wiki/Reductive%20elimination | Reductive elimination is an elementary step in organometallic chemistry in which the oxidation state of the metal center decreases while forming a new covalent bond between two ligands. It is the microscopic reverse of oxidative addition, and is often the product-forming step in many catalytic processes. Since oxidative addition and reductive elimination are reverse reactions, the same mechanisms apply for both processes, and the product equilibrium depends on the thermodynamics of both directions.
General information
Reductive elimination is often seen in higher oxidation states, and can involve a two-electron change at a single metal center (mononuclear) or a one-electron change at each of two metal centers (binuclear, dinuclear, or bimetallic).
For mononuclear reductive elimination, the oxidation state of the metal decreases by two, while the d-electron count of the metal increases by two. This pathway is common for d8 metals Ni(II), Pd(II), and Au(III) and d6 metals Pt(IV), Pd(IV), Ir(III), and Rh(III). Additionally, mononuclear reductive elimination requires that the groups being eliminated must be cis to one another on the metal center.
For binuclear reductive elimination, the oxidation state of each metal decreases by one, while the d-electron count of each metal increases by one. This type of reactivity is generally seen with first row metals, which prefer a one-unit change in oxidation state, but has been observed in both second and third row metals.
Mechanisms
As with oxidative addition, several mechanisms are possible with reductive elimination. The prominent mechanism is a concerted pathway, meaning that it is a nonpolar, three-centered transition state with retention of stereochemistry. In addition, an SN2 mechanism, which proceeds with inversion of stereochemistry, or a radical mechanism, which proceeds with obliteration of stereochemistry, are other possible pathways for reductive elimination.
Octahedral complexes
The rate of reductive elimination is greatly influenced by the geometry of the metal complex. In octahedral complexes, reductive elimination can be very slow from the coordinatively saturated center; and often, reductive elimination only proceeds via a dissociative mechanism, where a ligand must initially dissociate to make a five-coordinate complex. This complex adopts a Y-type distorted trigonal bipyramidal structure where a π-donor ligand is at the basal position and the two groups to be eliminated are brought very close together. After elimination, a T-shaped three-coordinate complex is formed, which will associate with a ligand to form the square planar four-coordinate complex.
Square planar complexes
Reductive elimination of square planar complexes can progress through a variety of mechanisms: dissociative, nondissociative, and associative. Similar to octahedral complexes, a dissociative mechanism for square planar complexes initiates with loss of a ligand, generating a three-coordinate intermediate that undergoes reductive elimination to produce a one-coordinate metal complex. For a nondissociative pathway, reductive elimination occurs from the four-coordinate system to afford a two-coordinate complex. If the eliminating ligands are trans to each other, the complex must first undergo a trans to cis isomerization before eliminating. In an associative mechanism, a ligand must initially associate with the four-coordinate metal complex to generate a five-coordinate complex that undergoes reductive elimination synonymous to the dissociation mechanism for octahedral complexes.
Factors that affect reductive elimination
Reductive elimination is sensitive to a variety of factors including: (1) metal identity and electron density, (2) sterics, (3) participating ligands, (4) coordination number, (5) geometry, and (6) photolysis/oxidation. Additionally, because reductive elimination and oxidative addition are reverse reactions, any sterics or electronics that enhance the rate of reductive elimination must thermodynamically hinder the rate of oxidative addition.
Metal identity and electron density
First-row metal complexes tend to undergo reductive elimination faster than second-row metal complexes, which tend to be faster than third-row metal complexes. This is due to bond strength, with metal-ligand bonds in first-row complexes being weaker than metal-ligand bonds in third-row complexes. Additionally, electron-poor metal centers undergo reductive elimination faster than electron-rich metal centers since the resulting metal would gain electron density upon reductive elimination.
Sterics
Reductive elimination generally occurs more rapidly from a more sterically hindered metal center because the steric encumbrance is alleviated upon reductive elimination. Additionally, wide ligand bite angles generally accelerate reductive elimination because the sterics force the eliminating groups closer together, which allows for more orbital overlap.
Participating ligands
Kinetics for reductive elimination are hard to predict, but reactions that involve hydrides are particularly fast due to effects of orbital overlap in the transition state.
Coordination number
Reductive elimination occurs more rapidly for complexes of three- or five-coordinate metal centers than for four- or six-coordinate metal centers. For even coordination number complexes, reductive elimination leads to an intermediate with a strongly metal-ligand antibonding orbital. When reductive elimination occurs from odd coordination number complexes, the resulting intermediate occupies a nonbonding molecular orbital.
Geometry
Reductive elimination generally occurs faster for complexes whose structures resemble the product.
Photolysis/oxidation
Reductive elimination can be induced by oxidizing the metal center to a higher oxidation state via light or an oxidant.
Applications
Reductive elimination has found widespread application in academia and industry, most notable being hydrogenation, the Monsanto acetic acid process, hydroformylation, and cross-coupling reactions. In many of these catalytic cycles, reductive elimination is the product forming step and regenerates the catalyst; however, in the Heck reaction and Wacker process, reductive elimination is involved only in catalyst regeneration, as the products in these reactions are formed via β–hydride elimination.
References
Chemical reactions
Coordination chemistry
Organometallic chemistry
Reaction mechanisms
Redox | Reductive elimination | [
"Chemistry"
] | 1,329 | [
"Reaction mechanisms",
"Redox",
"Coordination chemistry",
"Electrochemistry",
"nan",
"Physical organic chemistry",
"Chemical kinetics",
"Organometallic chemistry"
] |
1,636,763 | https://en.wikipedia.org/wiki/Oxidative%20addition | Oxidative addition and reductive elimination are two important and related classes of reactions in organometallic chemistry. Oxidative addition is a process that increases both the oxidation state and coordination number of a metal centre. Oxidative addition is often a step in catalytic cycles, in conjunction with its reverse reaction, reductive elimination.
Role in transition metal chemistry
For transition metals, oxidative addition results in a decrease of the d-electron count, from dn to a configuration with fewer d electrons, often 2e fewer. Oxidative addition is favored for metals that are (i) basic and/or (ii) easily oxidized. Metals with a relatively low oxidation state often satisfy one of these requirements, but even high oxidation state metals undergo oxidative addition, as illustrated by the oxidation of Pt(II) with chlorine:
[PtCl4]2− + Cl2 → [PtCl6]2−
In classical organometallic chemistry, the formal oxidation state of the metal and the electron count of the complex both increase by two. One-electron changes are also possible and in fact some oxidative addition reactions proceed via series of 1e changes. Although oxidative additions can occur with the insertion of a metal into many different substrates, oxidative additions are most commonly seen with H–H, H–X, and C–X bonds because these substrates are most relevant to commercial applications.
Oxidative addition requires that the metal complex have a vacant coordination site. For this reason, oxidative additions are common for four- and five-coordinate complexes.
Reductive elimination is the reverse of oxidative addition. Reductive elimination is favored when the newly formed X–Y bond is strong. For reductive elimination to occur the two groups (X and Y) should be mutually adjacent on the metal's coordination sphere. Reductive elimination is the key product-releasing step of several reactions that form C–H and C–C bonds.
Mechanisms
Oxidative additions proceed by diverse pathways that depend on the metal center and the substrates.
Concerted pathway
Oxidative additions of nonpolar substrates such as hydrogen and hydrocarbons appear to proceed via concerted pathways. Such substrates lack π-bonds; consequently a three-centered σ complex is invoked, followed by intramolecular cleavage of the ligand bond (probably by donation of an electron pair into the σ* orbital of the interligand bond) to form the oxidized complex. The resulting ligands will be mutually cis, although subsequent isomerization may occur.
This mechanism applies to the addition of homonuclear diatomic molecules such as H2. Many C–H activation reactions also follow a concerted mechanism through the formation of an M–(C–H) agostic complex.
A representative example is the reaction of hydrogen with Vaska's complex, trans-IrCl(CO)[P(C6H5)3]2. In this transformation, iridium changes its formal oxidation state from +1 to +3. The product is formally bound to three anions: one chloride and two hydride ligands. As shown below, the initial metal complex has 16 valence electrons and a coordination number of four whereas the product is a six-coordinate 18 electron complex.
Formation of a trigonal bipyramidal dihydrogen intermediate is followed by cleavage of the H–H bond, due to electron back donation into the H–H σ*-orbital, i.e. a sigma complex. This system is also in chemical equilibrium, with the reverse reaction proceeding by the elimination of hydrogen gas with simultaneous reduction of the metal center.
The electron back donation into the H–H σ*-orbital to cleave the H–H bond causes electron-rich metals to favor this reaction. The concerted mechanism produces a cis dihydride, while the stereochemistry of the other oxidative addition pathways do not usually produce cis adducts.
SN2-type
Some oxidative additions proceed analogously to the well known bimolecular nucleophilic substitution reactions in organic chemistry. Nucleophilic attack by the metal center at the less electronegative atom in the substrate leads to cleavage of the R–X bond, to form an [M–R]+ species. This step is followed by rapid coordination of the anion to the cationic metal center, for example in the reaction of a square planar complex with methyl iodide.
This mechanism is often assumed in the addition of polar and electrophilic substrates, such as alkyl halides and halogens.
Ionic
The ionic mechanism of oxidative addition is similar to the SN2 type in that it involves the stepwise addition of two distinct ligand fragments. The key difference being that ionic mechanisms involve substrates which are dissociated in solution prior to any interactions with the metal center. An example of ionic oxidative addition is the addition of hydrogen chloride.
Radical
In addition to undergoing SN2-type reactions, alkyl halides and similar substrates can add to a metal center via a radical mechanism, although some details remain controversial. Reactions which are generally accepted to proceed by a radical mechanism are known however. One example was proposed by Lednor and co-workers.
Initiation
[(CH3)2C(CN)N]2 → 2 (CH3)2(CN)C• + N2
(CH3)2(CN)C• + PhBr → (CH3)2(CN)CBr + Ph•
Propagation
Ph• + [Pt(PPh3)2] → [Pt(PPh3)2Ph]•
[Pt(PPh3)2Ph]• + PhBr → [Pt(PPh3)2PhBr] + Ph•
Applications
Oxidative addition and reductive elimination are invoked in many catalytic processes in homogeneous catalysis, e.g., hydrogenations, hydroformylations, hydrosilylations, etc. Cross-coupling reactions like the Suzuki coupling, Negishi coupling, and the Sonogashira coupling also proceed by oxidative addition.
References
Further reading
External links
Chemical reactions
Coordination chemistry
Organometallic chemistry
Reaction mechanisms
Redox | Oxidative addition | [
"Chemistry"
] | 1,285 | [
"Reaction mechanisms",
"Redox",
"Coordination chemistry",
"Electrochemistry",
"nan",
"Physical organic chemistry",
"Chemical kinetics",
"Organometallic chemistry"
] |
1,637,306 | https://en.wikipedia.org/wiki/Cellulose%20triacetate | Cellulose triacetate, triacetate, CTA or TAC is a chemical compound produced from cellulose and a source of acetate esters, typically acetic anhydride. Triacetate is commonly used for the creation of fibres and film base. It is chemically similar to cellulose acetate. Its distinguishing characteristic is that in triacetate, at least "92 percent of the hydroxyl groups are acetylated." During the manufacture of triacetate, the cellulose is completely acetylated; whereas in normal cellulose acetate or cellulose diacetate, it is only partially acetylated. Triacetate is significantly more heat resistant than cellulose acetate.
History
Triacetate, whose chemical formula is [C6H7O2(OOCCH3)3]n, was first produced commercially in the U.S. in 1954 by Celanese Corporation. Eastman Kodak was a manufacturer of CTA until March 15, 2007. For almost 3 years, Mitsubishi Rayon Co. Ltd. was the only manufacturer. In 2010, Eastman Chemical announced a 70% increase in cellulose triacetate output at its Kingsport, Tennessee manufacturing site to meet the increasing demand for the chemical's use as an intermediate in the production of polarized films for liquid crystal displays.
Production
Triacetate is derived from cellulose by acetylating cellulose with acetic acid and/or acetic anhydride. Acetylation converts hydroxyl groups in cellulose to acetyl groups, which renders the cellulose polymer much more soluble in organic solvents. The cellulose acetate is dissolved in a mixture of dichloromethane and methanol for spinning. As the filaments emerge from a spinneret, the solvent is evaporated in warm air, in a process known as dry spinning, leaving a fibre of almost pure triacetate.
A finishing process called S-Finishing or surface saponification is sometimes applied to acetate and triacetate fabrics using a sodium hydroxide solution. This removes part or all of the acetyl groups from the surface of the fibres leaving them with a cellulose coating. This reduces the tendency for the fibres to acquire a static charge.
Applications
As a fibre
Triacetate fibres have a crenate cross section.
Characteristics
Shrink resistant
Wrinkle resistant
Easily washable
Often washable at high temperatures
Maintains creases and pleats well
Usage scenarios
Triacetate is particularly effective in clothing where crease or pleat retention is important, such as skirts and dresses.
In the 1980s triacetate was also used with polyester to create shiny tracksuits. The fabric was smooth and shiny on the outside and soft and fleecy on the inside.
General care tips
Ironable up to 200 °C
Pleated garments are best hand laundered. Most other garments containing 100% triacetate can be machine washed and dried
Articles containing triacetate fibres require little special care due mainly to the fibre's stability at high temperatures
As a film
Characteristics
Resistant to grease, oil, aromatic hydrocarbons, and most common solvents
Films have hard glossy surfaces
Excellent optical clarity
High dielectric constant
Easily laminated, coated, folded, and die-cut
Cellulose acetate film prone to degradation known as vinegar syndrome
Usage scenarios
Polarizer films for LCD projectors
Specialized overhead projector transparencies
Specialized photographic film
Motion picture film
Production of animation cels
Packaging
Face screens
As a semipermeable membrane
Usage scenarios
Water purification through reverse osmosis. The membrane may consist of a blend of cellulose acetate, diacetate and triacetate.
See also
Cellulose acetate
Vinegar syndrome
Rayon
References
External links
Description of triacetate fibre
Description of triacetate film
Federal Trade Commission definition of triacetate
The long term archival of triacetate photographic films
Glossary of terms relation to the manufacture of cellulose / acetate fibres
Fundamentals of membranes for water treatment
Synthetic fibers
Cellulose
Acetate esters | Cellulose triacetate | [
"Chemistry"
] | 847 | [
"Synthetic materials",
"Synthetic fibers"
] |
1,637,397 | https://en.wikipedia.org/wiki/Biorefinery | A biorefinery is a refinery that converts biomass to energy and other beneficial byproducts (such as chemicals). The International Energy Agency Bioenergy Task 42 defined biorefining as "the sustainable processing of biomass into a spectrum of bio-based products (food, feed, chemicals, materials) and bioenergy (biofuels, power and/or heat)". As refineries, biorefineries can provide multiple chemicals by fractionating an initial raw material (biomass) into multiple intermediates (carbohydrates, proteins, triglycerides) that can be further converted into value-added products. Each refining phase is also referred to as a "cascading phase". The use of biomass as feedstock can provide a benefit by reducing impacts on the environment, such as lower pollutant emissions and reduced emissions of hazardous products. In addition, biorefineries are intended to achieve the following goals:
Supply the current fuels and chemical building blocks
Supply new building blocks for the production of novel materials with disruptive characteristics
Creation of new jobs, including rural areas
Valorization of waste (agricultural, urban, and industrial waste)
Achieve the ultimate goal of reducing GHG emissions
Classification of biorefinery systems
Biorefineries can be classified based in four main features:
Platforms: Refers to key intermediates between raw material and final products. The most important intermediates are:
Biogas from anaerobic digestion
Syngas from gasification
Hydrogen from water-gas shift reaction, steam reforming, water electrolysis and fermentation
C6 sugars from hydrolysis of sucrose, starch, cellulose and hemicellulose
C5 sugars (e.g., xylose, arabinose: C5H10O5), from hydrolysis of hemicellulose and food and feed side streams
Lignin from the processing of lignocellulosic biomass.
Liquid from pyrolysis (pyrolysis oil)
Products: Biorefineries can be grouped in two main categories according to the conversion of biomass in an energetic or non-energetic product. In this classification the main market must be identified:
Energy-driven biorefinery systems: The main product is a second energy carrier as biofuels, power and heat.
Material-driven biorefinery systems: The main product is a biobased product
Feedstock: Dedicated feedstocks (Sugar crops, starch crops, lignocellulosic crops, oil-based crops, grasses, marine biomass); and residues (oil-based residues, lignocellulosic residues, organic residues and others)
Processes: Conversion process to transform biomass into a final product:
Mechanical/physical: The chemical structure of the biomass components is preserved. This operation includes pressing, milling, separation, distillation, among others
Biochemical: Processes under low temperature and pressure, using microorganism or enzymes.
Chemical processes: The substrate suffer change by the action of an external chemical (e.g., hydrolysis, transesterification, hydrogenation, oxidation, pulping)
Thermochemical: Severe conditions are applied to the feedstock (high pressure and high temperature, with or without a catalyst).
The aforementioned features are used to classify biorefinery systems according to the following method:
Identify the feedstock, the main technologies included in the process, platform, and the final products
Draw the scheme of the refinery using the features identified in step 1.
Label the refinery system by citing the number of platforms, products, feedstocks, and processes involved
Compile a table with the features identified and the source of internal energy demand
Some examples of classifications are:
C6 sugar platform biorefinery for bioethanol and animal feed from starch crops.
Syngas platform biorefinery for FT-diesel and phenols from straw
C6 and C5 sugar and syngas platform biorefinery for bioethanol, FT-diesel and furfural from saw mill residues.
Economic viability of biorefinery systems
Techno-economic assessment (TEA) is a methodology to evaluate whether a technology or process is economically attractive. TEA research has been developed to provide information about the performance of the biorefinery concept in diverse production systems such as sugarcane mills, biodiesel production, pulp and paper mills, and the treatment of industrial and municipal solid waste.
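To make the economic criteria used in such studies concrete, the following sketch shows how a net present value and a simple payback period might be computed for a biorefinery cash-flow profile; the investment, yearly cash flows, and discount rate are invented illustrative numbers, not values from the studies discussed below.

```python
def npv(rate, cash_flows):
    """Net present value of a series of yearly cash flows.
    cash_flows[0] is the (negative) investment at year 0."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until the cumulative (undiscounted) cash flow turns positive."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None  # never pays back within the horizon

# Hypothetical project: 100 M$ investment, 18 M$/y net revenue for 15 years
flows = [-100.0] + [18.0] * 15
print(npv(0.10, flows))        # NPV at a 10% discount rate
print(payback_period(flows))   # simple payback in years
```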
Bioethanol plants and sugarcane mills are well-established processes where the biorefinery concept can be implemented since sugarcane bagasse is a feasible feedstock to produce fuels and chemicals; lignocellulosic bioethanol (2G) is produced in Brazil in two plants with capacities of 40 and 84 Ml/y (about 0.4% of the production capacity in Brazil). TEA of ethanol production using mild liquefaction of bagasse plus simultaneous saccharification and co-fermentation shows a minimum selling price between 50.38 and 62.72 US cents/L which is comparable with the market price. The production of xylitol, citric acid and glutamic acid from sugarcane lignocellulose (bagasse and harvesting residues), each in combination with electricity have been evaluated; the three biorefinery systems were simulated to be annexed to an existing sugar mill in South Africa. The production of xylitol and glutamic acid has shown economic feasibility with an Internal Rate of Return (IRR) of 12.3% and 31.5%, exceeding the IRR of the base case (10.3%). Likewise, the production of ethanol, lactic acid or methanol and ethanol-lactic acid from sugarcane bagasse have been studied; lactic acid demonstrated to be economically attractive by showing the greatest net present value (M$476–1278); in the same way; the production of ethanol and lactic acid as co-product was found to be a favorable scenario (net present value between M$165 and M$718) since this acid has applications in the pharmaceutical, cosmetic, chemical and food industry.
As for biodiesel production, this industry also has the potential to integrate biorefinery systems to convert residual biomasses and wastes into biofuel, heat, electricity and bio-based green products. Glycerol is the main co-product in biodiesel production and can be transformed into valuable products through chemocatalytic technologies; the valorization of glycerol for the production of lactic acid, acrylic acid, allyl alcohol, propanediols, and glycerol carbonate has been evaluated; all glycerol valorization routes were shown to be profitable, the most attractive being the manufacture of glycerol carbonate. Palm empty fruit bunches (EFB) are an abundant lignocellulosic residue from the palm oil/biodiesel industry; the conversion of this residue into ethanol, heat and power, and cattle feed was evaluated according to techno-economic principles, and the scenarios under study showed reduced economic benefits, although their implementation represented a reduction in the environmental impact (climate change and fossil fuel depletion) compared to the traditional biodiesel production. The economic feasibility of bio-oil production from EFB via fast pyrolysis using a fluidized bed was studied; crude bio-oil can potentially be produced from EFB at a product value of 0.47 $/kg with a payback period and return on investment of 3.2 years and 21.9%, respectively. The integration of microalgae and Jatropha as a viable route for the production of biofuels and biochemicals has been analyzed in the United Arab Emirates (UAE) context. Three scenarios were examined; in all of them, biodiesel and glycerol are produced; in the first scenario biogas and organic fertilizer are produced by anaerobic fermentation of Jatropha fruit cake and seedcake; the second scenario includes the production of lipids from Jatropha and microalgae to produce biodiesel and the production of animal feed, biogas and organic fertilizer; the third scenario involves the production of lipids from microalgae for the production of biodiesel as well as hydrogen and animal feed as final products; only the first scenario was profitable.
In regard to the pulp and paper industry, lignin is a natural polymer that is co-generated and is generally used as boiler fuel to generate heat or steam to cover the energy demand in the process. Since lignin accounts for 10–30 wt% of the available lignocellulosic biomass and is equivalent to ~40% of its energy content, the economics of biorefineries depend on cost-effective processes to transform lignin into value-added fuels and chemicals. The conversion of an existing Swedish kraft pulp mill to the production of dissolving pulp, electricity, lignin, and hemicellulose has been studied; self-sufficiency in terms of steam and the production of excess steam was a key factor for the integration of a lignin separation plant; in this case, the digester has to be upgraded to preserve the same production level, and this upgrade represents 70% of the total investment cost of conversion. The potential of using the kraft process for producing bioethanol from softwoods in a repurposed or co-located kraft mill has been studied; a sugar recovery higher than 60% enables the process to be competitive for the production of ethanol from softwood. The repurposing of a kraft pulp mill to produce both ethanol and dimethyl ether has been investigated; in the process, cellulose is separated by an alkaline pretreatment and then hydrolyzed and fermented to produce ethanol, while the resulting liquor containing dissolved lignin is gasified and refined to dimethyl ether; the process was shown to be self-sufficient in terms of hot utility (fresh steam) demand but with a deficit of electricity; the process can be feasible, economically speaking, but is highly dependent on the development of biofuel prices. The exergetic and economic evaluation for the production of catechol from lignin was performed to determine its feasibility; the results showed that the total capital investment was 4.9 M$ based on a plant capacity of 2,544 kg/d of feedstock; in addition, the catechol price was estimated to be 1,100 $/t and the valorization ratio was found to be 3.02.
The high generation of waste biomass is an attractive source for conversion to valuable products, and several biorefinery routes have been proposed to upgrade waste streams into valuable products. The production of biogas from banana peel (Musa x paradisiaca) under the biorefinery concept is a promising alternative since it is possible to obtain biogas and other co-products including ethanol, xylitol, syngas, and electricity; this process also provides high profitability for high production scales. The economic assessment of the integration of organic waste anaerobic digestion with other mixed culture anaerobic fermentation technologies was studied; the highest profit is obtained by dark fermentation of food waste with separation and purification of acetic and butyric acids (47 USD/t of food waste). The technical feasibility, profitability and extent of investment risk to produce sugar syrups from food and beverage waste was analyzed; the returns on investment were shown to be satisfactory for the production of fructose syrup (9.4%), HFS42 (22.8%) and glucose-rich syrup (58.9%); the sugar syrups also have high cost competitiveness with relatively low net production costs and minimum selling prices. The valorization of municipal solid waste through integrated mechanical biological chemical treatment (MBCT) systems for the production of levulinic acid has been studied; the revenue from resource recovery and product generation (without the inclusion of gate fees) is more than enough to outweigh the waste collection fees, annual capital and operating costs.
Environmental impact of biorefinery systems
One of the main goals of biorefineries is to contribute to a more sustainable industry by the conservation of resources and by reducing greenhouse gas emissions and other pollutants. Nevertheless, other environmental impacts may be associated with the production of biobased products, such as land use change, eutrophication of water, pollution of the environment with pesticides, or higher energy and material demands that lead to environmental burdens. Life cycle assessment (LCA) is a methodology to evaluate the environmental load of a process, from the extraction of raw materials to the end use. LCA can be used to investigate the potential benefits of biorefinery systems; multiple LCA studies have been carried out to analyse whether biorefineries are more environmentally friendly compared to conventional alternatives.
Feedstock is one of the main sources of environmental impacts in biofuel production; these impacts are related to the field operations needed to grow, handle and transport the biomass to the biorefinery gate. Agricultural residues are the feedstock with the lowest environmental impact, followed by lignocellulosic crops and finally by first-generation arable crops, although the environmental impacts are sensitive to factors such as crop management practices, harvesting systems, and crop yields. The production of chemicals from biomass feedstock has shown environmental benefits; bulk chemicals from biomass-derived feedstocks have been studied showing savings in non-renewable energy use and greenhouse gas emissions.
The environmental assessment for 1G and 2G ethanol shows that these two biorefinery systems are able to mitigate climate change impacts in comparison to gasoline, but higher climate change benefits are achieved with 2G ethanol production (up to 80% reduction). The conversion of palm empty fruit bunches into valuable products (ethanol, heat and power, and cattle feed) reduces the impacts on climate change and fossil fuel depletion compared to the traditional biodiesel production, but the benefits for toxicity and eutrophication are limited. Propionic acid produced by fermentation of glycerol leads to significant reduction of GHG emissions compared to fossil fuel alternatives; however the energy input is double and the contribution to eutrophication is significantly higher. The LCA for the integration of butanol from prehydrolysate in a Canadian Kraft dissolving pulp mill shows that the carbon footprint of this butanol may be 5% lower compared to gasoline, but is not as low as corn butanol (23% lower than that of gasoline).
The majority of the LCA studies for the valorization of food waste have been focused on the environmental impacts on biogas or energy production, with only few on the synthesis of high value-added chemicals; hydroxymethylfurfural (HMF) has been listed as one of the top 10 bio-based chemicals by the US Department of Energy; the LCA of eight food waste valorization routes for the production of HMF shows that the most environmentally favorable option uses less polluting catalyst (AlCl3) and co-solvent (acetone), and provides the highest yield of HMF (27.9 Cmol%), metal depletion and toxicity impacts (marine ecotoxicity, freshwater toxicity, and human toxicity) were the categories with the highest values.
Biorefinery in the pulp and paper industry
The pulp and paper industry is considered the first industrialized biorefinery system; in this industrial process other co-products are produced, including tall oil, rosin, vanillin, and lignosulfonates. Apart from these co-products, the system includes energy generation (in the form of steam and electricity) to cover its internal energy demand, and it has the potential to feed heat and electricity to the grid.
This industry has consolidated as the largest consumer of biomass; it uses not only wood as feedstock but is also capable of processing agricultural waste such as bagasse, rice straw, and corn stover. Other important features of this industry are well-established logistics for biomass production, avoidance of competition with food production for fertile land, and higher biomass yields.
Examples
The fully operational Blue Marble Energy company has multiple biorefineries located in Odessa, WA and Missoula, MT.
Canada's first Integrated Biorefinery, developed on anaerobic digestion technology by Himark BioGas is located in Alberta. The biorefinery utilizes Source Separated Organics from the metro Edmonton region, open pen feedlot manure, and food processing waste.
Chemrec's technology for black liquor gasification and production of second-generation biofuels such as biomethanol or BioDME is integrated with a host pulp mill and utilizes a major sulfate or sulfite process waste product as feedstock.
Novamont has converted old petrochemical factories into biorefineries, producing protein, plastics, animal feed, lubricants, herbicides and elastomers from cardoon.
C16 Biosciences produces synthetic palm oil from carbon-containing waste (i.e. food waste, glycerol) by means of yeast.
MacroCascade aims to refine seaweed into food and fodder, and then products for healthcare, cosmetics, and fine chemicals industries. The side streams will be used for the production of fertilizer and biogas. Other seaweed biorefinery projects include MacroAlgaeBiorefinery (MAB4), SeaRefinery and SEAFARM.
FUMI Ingredients produces foaming agents, heat-set gels and emulsifiers from micro-algae with the help of micro-organisms such as brewer's yeast and baker's yeast.
The BIOCON platform is researching the processing of wood into various products. More precisely, their researchers are looking at transforming lignin and cellulose into various products. Lignin for example can be transformed into phenolic components which can be used to make glue, plastics and agricultural products (e.g. crop protection). Cellulose can be transformed into clothes and packaging.
In South Africa, Numbitrax LLC bought a Blume Biorefinery system for producing bioethanol as well as additional high-return offtake products from local and readily available resources such as the prickly pear cactus.
Circular Organics (part of Kempen Insect Valley) grows black soldier fly larvae on waste from the agricultural and food industry (i.e. fruit and vegetable surplus, remaining waste from fruit juice and jam production). These larvae are used to produce protein, grease, and chitin. The grease is usable in the pharmaceutical industry (cosmetics, surfactants for shower gel), replacing other vegetable oils such as palm oil, or it can be used in fodder.
Biteback Insect makes insect cooking oil, insect butter, fatty alcohols, insect frass protein and chitin from superworm (Zophobas morio).
See also
Microalgae
Food waste: can be made into PHA (thus a 2nd generation feedstock bioplastic)
Tomato: can be made into tomato flesh (food), tomato seeds (containing fatty acids) and tomato peel (containing lycopene)
Biomaterials use in sustainable textile
Tobacco: GM tobacco could provide industrial enzymes for biofuel production. Tobacco can also supply nicotine (i.e. as used in e-liquids).
Citrus: can be made into juice (food) and citrus peel (containing succinic acid, pectin, essential oil, cellulose; also just usable as zest)
Biomass (can be used in CHP systems)
Gasification
Carbon neutrality
Renewable energy commercialization
Maggot farming
References
External links
Tactical Biorefinery
Saccharification
Biosynergy
Biorefinery from biomass
Aqueous-Phase Reforming.
Wisconsin Biorefining Development Initiative.
Biorefinery Film
Active Biorefinery Facilities
Top Value Added Chemicals from Biomass: list of chemicals that can be extracted from biomass
Biofuels technology
Oil refineries
Sustainable technologies
Bright green environmentalism | Biorefinery | [
"Chemistry",
"Biology"
] | 4,170 | [
"Petroleum",
"Biofuels technology",
"Oil refineries",
"Oil refining"
] |
1,638,328 | https://en.wikipedia.org/wiki/Faraday%20wave | Faraday waves, also known as Faraday ripples, named after Michael Faraday (1791–1867), are nonlinear standing waves that appear on liquids enclosed by a vibrating receptacle. When the vibration frequency exceeds a critical value, the flat hydrostatic surface becomes unstable. This is known as the Faraday instability. Faraday first described them in an appendix to an article in the Philosophical Transactions of the Royal Society of London in 1831.
If a layer of liquid is placed on top of a vertically oscillating piston, a pattern of standing waves appears which oscillates at half the driving frequency, given certain criteria of instability. This relates to the problem of parametric resonance. The waves can take the form of stripes, close-packed hexagons, or even squares or quasiperiodic patterns. Faraday waves are commonly observed as fine stripes on the surface of wine in a wine glass that is ringing like a bell. Faraday waves also explain the 'fountain' phenomenon on a singing bowl.
The Faraday wave and its wavelength are analogous to the de Broglie wave and the de Broglie wavelength in de Broglie–Bohm theory in the field of quantum mechanics.
Application
Faraday waves are used as a liquid-based template for directed assembly of microscale materials including soft matter, rigid bodies, biological entities (e.g., individual cells, cell spheroids and cell-seeded microcarrier beads). Unlike solid-based template, this liquid-based template can be dynamically changed by tuning vibrational frequency and acceleration and generate diverse sets of symmetrical and periodic patterns.
This phenomenon is also used by alligators to call mates. They vibrate their lungs at low frequencies slightly below the surface, causing their spikes to move and induce surface waves. These surface waves are basically Faraday waves and one can observe the splashing effect characteristic of certain resonances.
This effect can also be used for mixing two liquids acoustically. Faraday waves form on the interface between the two liquids, which increases the surface area between the two, rapidly and thoroughly mixing the liquids.
See also
Chladni patterns
Cymatics
Oscillation
Wave–particle duality
matter wave
References
External links
YouTube video of Faraday waves in corn starch.
YouTube video Yves Couder Explains Wave/Particle Duality via Silicon Droplets
YouTube video of Singing Bowl creating Fountain
Wave mechanics
Fluid dynamics
Michael Faraday | Faraday wave | [
"Physics",
"Chemistry",
"Engineering"
] | 492 | [
"Physical phenomena",
"Chemical engineering",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Piping",
"Fluid dynamics"
] |
1,638,614 | https://en.wikipedia.org/wiki/Velocimetry | Velocimetry is the measurement of the velocity of fluids. This is a task often taken for granted, and involves far more complex processes than one might expect. It is often used to solve fluid dynamics problems, study fluid networks, in industrial and process control applications, as well as in the creation of new kinds of fluid flow sensors. Methods of velocimetry include particle image velocimetry and particle tracking velocimetry, Molecular tagging velocimetry, laser-based interferometry, ultrasonic Doppler methods, Doppler sensors, and new signal processing methodologies.
In general, velocity measurements are made in the Lagrangian or Eulerian frames of reference (see Lagrangian and Eulerian coordinates). Lagrangian methods assign a velocity to a volume of fluid at a given time, whereas Eulerian methods assign a velocity to a volume of the measurement domain at a given time. A classic example of the distinction is particle tracking velocimetry, where the idea is to find the velocity of individual flow tracer particles (Lagrangian) and particle image velocimetry, where the objective is to find the average velocity within a sub-region of the field of view (Eulerian).
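As an illustration of the Eulerian viewpoint, particle image velocimetry typically estimates the displacement of an interrogation window between two frames from the peak of their cross-correlation. The sketch below is a minimal version of that idea using synthetic images and SciPy; the function name, window size, and the integer-pixel peak search without sub-pixel refinement are simplifying assumptions rather than a description of any particular PIV package.

```python
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Estimate the (dy, dx) displacement of win_b relative to win_a
    from the location of the cross-correlation peak (integer pixels)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")  # cross-correlation via convolution
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - (win_a.shape[0] - 1), peak[1] - (win_a.shape[1] - 1)

# Synthetic particle image and a copy shifted by (3, -2) pixels
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))  # expected (3, -2)
```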
History
Velocimetry can be traced back to the days of Leonardo da Vinci, who would float grass seeds on a flow and sketch the resulting trajectories of the seeds that he observed (a Lagrangian measurement). Eventually da Vinci's flow visualizations were used in his cardiovascular studies, in an attempt to learn more about blood flow throughout the human body.
Methods similar to da Vinci's were carried out for close to four hundred years due to technological limitations. One other notable study comes from Felix Savart in 1833. Using a stroboscopic instrument, he sketched water jet impacts.
In the late 19th century a huge breakthrough was made in these technologies when it became possible to take photographs of flow patterns. One notable instance of this is Ludwig Mach using particles unresolvable by the naked eye to visualize streamlines. Another notable contribution occurred in the 20th century by Étienne-Jules Marey who used photographic techniques to introduce the concept of the smoke box. This model allowed both for the directions of the flow to be tracked but also the speed, as streamlines closer together indicated faster flow.
More recently, high-speed cameras and digital technology have revolutionized the field, allowing for many more techniques and the rendering of flow fields in three dimensions.
Methods
Today the basic ideas established by Leonardo are the same; the flow must be seeded with particles that can be observed by the method of choice. The seeding particles depend on many factors including the fluid, the sensing method, the size of the measurement domain, and sometimes the expected accelerations in the flow. If the flow contains particles that can be measured naturally, seeding the flow is unnecessary.
Spatial reconstruction of fluid streamtubes using long-exposure imaging of tracer can be applied for streamline imaging velocimetry, a high-resolution, frame-rate-free velocimetry of stationary flows. Temporal integration of velocimetric information can be used to totalize fluid flow. For measuring velocity and length on moving surfaces, laser surface velocimeters are used.
The fluid generally limits the particle selection according to its specific gravity; the particles should ideally be of the same density as the fluid. This is especially important in flows with a high acceleration (for example, high-speed flow through a 90-degree pipe elbow). Denser fluids like water and oil are thus very attractive for velocimetry, whereas air adds a challenge in most techniques in that it is rarely possible to find particles of the same density as air.
Still, even large-field measurement techniques like PIV have been performed successfully in air. Particles used for seeding can be either liquid droplets or solid particles, the latter being preferred when high particle concentrations are necessary. For point measurements like laser Doppler velocimetry, particles in the nanometre diameter range, such as those in cigarette smoke, are sufficient to perform a measurement.
In water and oil there are a variety of inexpensive industrial beads that can be used, such as silver-coated hollow glass spheres manufactured to be conductive powders (tens of micrometres diameter range) or other beads used as reflectors and texturing agents in paints and coatings. The particles need not be spherical; in many cases titanium dioxide particles can be used.
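How faithfully a seeding particle follows the flow is commonly quantified by its Stokes-drag response time and the resulting Stokes number; this criterion is standard practice rather than something stated above, and the densities and diameters in the sketch are merely illustrative.

```python
def particle_response_time(rho_p, d_p, mu):
    """Stokes-drag response time of a small spherical tracer particle.

    rho_p : particle density (kg/m^3)
    d_p   : particle diameter (m)
    mu    : dynamic viscosity of the fluid (Pa*s)
    """
    return rho_p * d_p**2 / (18.0 * mu)

def stokes_number(tau_p, tau_flow):
    """Ratio of particle response time to a characteristic flow time scale.
    St << 1 means the particle follows the flow faithfully."""
    return tau_p / tau_flow

# Example: a 1 micron oil droplet in air versus a 10 micron glass sphere in water.
tau_oil_in_air = particle_response_time(rho_p=900.0, d_p=1e-6, mu=1.8e-5)
tau_glass_in_water = particle_response_time(rho_p=2500.0, d_p=10e-6, mu=1.0e-3)
print(tau_oil_in_air, tau_glass_in_water)  # roughly 3 and 14 microseconds
```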
Relevant Applications
PIV has been used in research on controlling aircraft noise. This noise is created by the high-speed mixing of hot jet exhaust with the ambient air, and PIV has been used to model this behavior.
Additionally, Doppler velocimetry enables noninvasive techniques of determining whether fetuses are the proper size at a given term of pregnancy.
Basis for Four-Dimensional Pulmonary Imaging
Velocimetry has also been applied to medical images in order to obtain regional measurements of blood flow and tissue motion. Initially, standard PIV (single plane illumination) was adapted to work with x-ray images (full volume illumination), enabling the measurement of opaque flows such as blood flow. This was then extended to investigate the regional 2D motion of lung tissue, and was found to be a sensitive indicator of regional lung disease.
Velocimetry was also expanded to 3D regional measurements of blood flow and tissue motion with a new technique – computed tomographic x-ray velocimetry – which uses information contained within the PIV cross-correlation to extract 3D measurements from 2D image sequences. Specifically, computed tomographic x-ray velocimetry generates a model solution, compares the cross-correlations of the model to the cross-correlation from the 2D image sequence, and iterates the model solution until the difference between the model cross-correlations and the image sequence cross-correlations is minimised. This technique is being used as a non-invasive method to quantify the functional performance of the lungs. It is being used in a clinical setting, and is being utilised in clinical trials conducted by institutions including Duke University, Vanderbilt University Medical Center and Oregon Health & Science University.
External links
Velocimetry portal, an online center for laser flow diagnostic techniques (PIV, StereoPIV, MicroPIV, NanoPIV, high-speed PIV, PTV, LDV, PDPA, PLIF, ILIDS, PSP, etc.), consolidating basic principles, applications, discussion forums and links for these techniques.
References
Measurement
Fluid dynamics | Velocimetry | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,443 | [
"Physical quantities",
"Chemical engineering",
"Quantity",
"Measurement",
"Size",
"Piping",
"Fluid dynamics"
] |
15,056 | https://en.wikipedia.org/wiki/Isoelectric%20point | The isoelectric point (pI, pH(I), IEP) is the pH at which a molecule carries no net electrical charge or is electrically neutral in the statistical mean. The standard nomenclature to represent the isoelectric point is pH(I). However, pI is also used. For brevity, this article uses pI. The net charge on the molecule is affected by the pH of its surrounding environment and can become more positively or negatively charged due to the gain or loss, respectively, of protons (H+).
Surfaces naturally charge to form a double layer. In the common case when the surface charge-determining ions are H+/HO−, the net surface charge is affected by the pH of the liquid in which the solid is submerged.
The pI value can affect the solubility of a molecule at a given pH. Such molecules have minimum solubility in water or salt solutions at the pH that corresponds to their pI and often precipitate out of solution. Biological amphoteric molecules such as proteins contain both acidic and basic functional groups. Amino acids that make up proteins may be positive, negative, neutral, or polar in nature, and together give a protein its overall charge. At a pH below their pI, proteins carry a net positive charge; above their pI they carry a net negative charge. Proteins can, thus, be separated by net charge in a polyacrylamide gel using either preparative native PAGE, which uses a constant pH to separate proteins, or isoelectric focusing, which uses a pH gradient to separate proteins. Isoelectric focusing is also the first step in 2-D gel polyacrylamide gel electrophoresis.
In biomolecules, proteins can be separated by ion exchange chromatography. Biological proteins are made up of zwitterionic amino acid compounds; the net charge of these proteins can be positive or negative depending on the pH of the environment. The specific pI of the target protein can be used to model the process around and the compound can then be purified from the rest of the mixture. Buffers of various pH can be used for this purification process to change the pH of the environment. When a mixture containing a target protein is loaded into an ion exchanger, the stationary matrix can be either positively-charged (for mobile anions) or negatively-charged (for mobile cations). At low pH values, the net charge of most proteins in the mixture is positive – in cation exchangers, these positively-charged proteins bind to the negatively-charged matrix. At high pH values, the net charge of most proteins is negative, where they bind to the positively-charged matrix in anion exchangers. When the environment is at a pH value equal to the protein's pI, the net charge is zero, and the protein is not bound to any exchanger, and therefore, can be eluted out.
Calculating pI values
For an amino acid with only one amine and one carboxyl group, the pI can be calculated from the mean of the pKas of this molecule.
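As a minimal illustration of this rule (using typical textbook pKa values for glycine, approximately 2.34 and 9.60):

```python
def amino_acid_pI(pKa_carboxyl, pKa_amine):
    """pI of an amino acid with one carboxyl and one amine group:
    the mean of the two pKa values."""
    return (pKa_carboxyl + pKa_amine) / 2.0

# Glycine: pKa(COOH) ~ 2.34, pKa(NH3+) ~ 9.60  ->  pI ~ 5.97
print(amino_acid_pI(2.34, 9.60))
```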
The pH of an electrophoretic gel is determined by the buffer used for that gel. If the pH of the buffer is above the pI of the protein being run, the protein will migrate to the positive pole (negative charge is attracted to a positive pole). If the pH of the buffer is below the pI of the protein being run, the protein will migrate to the negative pole of the gel (positive charge is attracted to the negative pole). If the protein is run with a buffer pH that is equal to the pI, it will not migrate at all. This is also true for individual amino acids.
Examples
In the two examples (on the right) the isoelectric point is shown by the green vertical line. In glycine the pK values are separated by nearly 7 units. Thus in the gas phase, the concentration of the neutral species, glycine (GlyH), is effectively 100% of the analytical glycine concentration. Glycine may exist as a zwitterion at the isoelectric point, but the equilibrium constant for the isomerization reaction in solution
H2NCH2CO2H <=> H3N+CH2CO2-
is not known.
The other example, adenosine monophosphate is shown to illustrate the fact that a third species may, in principle, be involved. In fact the concentration of is negligible at the isoelectric point in this case.
If the pI is greater than the pH, the molecule will have a positive charge.
Peptides and proteins
A number of algorithms for estimating isoelectric points of peptides and proteins have been developed. Most of them use the Henderson–Hasselbalch equation with different pK values. For instance, within the model proposed by Bjellqvist and co-workers, the pKs were determined between closely related immobilines by focusing the same sample in overlapping pH gradients. Some improvements in the methodology (especially in the determination of the pK values for modified amino acids) have also been proposed. More advanced methods take into account the effect of adjacent amino acids up to ±3 residues away from a charged aspartic or glutamic acid and the effects on the free C terminus, and apply a correction term to the corresponding pK values using a genetic algorithm. Other recent approaches are based on a support vector machine algorithm and on pKa optimization against experimentally known protein/peptide isoelectric points.
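A minimal sketch of such an algorithm is shown below: it sums Henderson–Hasselbalch charge contributions over the ionizable groups of a sequence and finds the zero-charge pH by bisection. The pK set used here is illustrative only; published sets (Bjellqvist, EMBOSS and others) differ and yield slightly different pI values, and the function names are invented for this example.

```python
from collections import Counter

# Illustrative pK values (assumed for this sketch; published pK sets differ).
PK_POSITIVE = {"K": 10.5, "R": 12.5, "H": 6.0, "Nterm": 9.0}
PK_NEGATIVE = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "Cterm": 3.1}

def net_charge(counts, pH):
    """Net charge of a peptide at a given pH (Henderson-Hasselbalch)."""
    pos = sum(n / (1.0 + 10 ** (pH - PK_POSITIVE[g]))
              for g, n in counts.items() if g in PK_POSITIVE)
    neg = sum(n / (1.0 + 10 ** (PK_NEGATIVE[g] - pH))
              for g, n in counts.items() if g in PK_NEGATIVE)
    return pos - neg

def isoelectric_point(sequence, tol=1e-4):
    """Find the pH at which the net charge is zero, by bisection."""
    counts = Counter(sequence)
    counts["Nterm"] = counts["Cterm"] = 1
    lo, hi = 0.0, 14.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_charge(counts, mid) > 0:   # still positively charged: pI is higher
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(isoelectric_point("ACDEFGHIKLMNPQRSTVWY"), 2))
```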
Moreover, experimentally measured isoelectric point of proteins were aggregated into the databases. Recently, a database of isoelectric points for all proteins predicted using most of the available methods had been also developed.
In practice, a protein with an excess of basic amino acids (arginine, lysine and/or histidine) will typically have an isoelectric point greater than 7 (basic), while a protein with an excess of acidic amino acids (aspartic acid and/or glutamic acid) will typically have an isoelectric point lower than 7 (acidic).
The electrophoretic linear (horizontal) separation of proteins by pI along a pH gradient in a polyacrylamide gel (also known as isoelectric focusing), followed by a standard molecular weight linear (vertical) separation in a second polyacrylamide gel (SDS-PAGE), constitutes the so-called two-dimensional gel electrophoresis or PAGE 2D. This technique allows a thorough separation of proteins as distinct "spots", with proteins of high molecular weight and low pI migrating to the upper-left part of the two-dimensional gel, while proteins with low molecular weight and high pI locate to the bottom-right region of the same gel.
Ceramic materials
The isoelectric points (IEP) of metal oxide ceramics are used extensively in material science in various aqueous processing steps (synthesis, modification, etc.). In the absence of chemisorbed or physisorbed species particle surfaces in aqueous suspension are generally assumed to be covered with surface hydroxyl species, M-OH (where M is a metal such as Al, Si, etc.). At pH values above the IEP, the predominant surface species is M-O−, while at pH values below the IEP, M-OH2+ species predominate. Some approximate values of common ceramics are listed below:
Note: The following list gives the isoelectric point at 25 °C for selected materials in water. The exact value can vary widely, depending on material factors such as purity and phase as well as physical parameters such as temperature. Moreover, the precise measurement of isoelectric points can be difficult, thus many sources often cite differing values for isoelectric points of these materials.
Mixed oxides may exhibit isoelectric point values that are intermediate to those of the corresponding pure oxides. For example, a synthetically prepared amorphous aluminosilicate (Al2O3-SiO2) was initially measured as having an IEP of 4.5 (the electrokinetic behavior of the surface was dominated by surface Si-OH species, thus explaining the relatively low IEP value). Significantly higher IEP values (pH 6 to 8) have been reported for 3Al2O3-2SiO2 by others. Similarly, the IEP of barium titanate (BaTiO3) was reported in the range 5–6, while others obtained a value of 3. Mixtures of titania (TiO2) and zirconia (ZrO2) were studied and found to have an isoelectric point between 5.3–6.9, varying non-linearly with %(ZrO2). The surface charge of the mixed oxides was correlated with acidity. Greater titania content led to increased Lewis acidity, whereas zirconia-rich oxides displayed Brønsted acidity. The different types of acidities produced differences in ion adsorption rates and capacities.
Versus point of zero charge
The terms isoelectric point (IEP) and point of zero charge (PZC) are often used interchangeably, although under certain circumstances, it may be productive to make the distinction.
In systems in which H+/OH− are the interface potential-determining ions, the point of zero charge is given in terms of pH. The pH at which the surface exhibits a neutral net electrical charge is the point of zero charge at the surface. Electrokinetic phenomena generally measure zeta potential, and a zero zeta potential is interpreted as the point of zero net charge at the shear plane. This is termed the isoelectric point. Thus, the isoelectric point is the value of pH at which the colloidal particle remains stationary in an electrical field. The isoelectric point is expected to be somewhat different from the point of zero charge at the particle surface, but this difference is often ignored in practice for so-called pristine surfaces, i.e., surfaces with no specifically adsorbed positive or negative charges. In this context, specific adsorption is understood as adsorption occurring in a Stern layer or chemisorption. Thus, point of zero charge at the surface is taken as equal to isoelectric point in the absence of specific adsorption on that surface.
According to Jolivet, in the absence of positive or negative charges, the surface is best described by the point of zero charge. If positive and negative charges are both present in equal amounts, then this is the isoelectric point. Thus, the PZC refers to the absence of any type of surface charge, while the IEP refers to a state of neutral net surface charge. The difference between the two, therefore, is the quantity of charged sites at the point of net zero charge. Jolivet uses the intrinsic surface equilibrium constants, pK− and pK+ to define the two conditions in terms of the relative number of charged sites:
For large ΔpK (>4 according to Jolivet), the predominant species is MOH while there are relatively few charged species – so the PZC is relevant. For small values of ΔpK, there are many charged species in approximately equal numbers, so one speaks of the IEP.
See also
Electrophoretic deposition
Henderson-Hasselbalch equation
Isoelectric focusing
Isoionic point
pK acid dissociation constant
Preparative native PAGE
Zeta potential
References
Further reading
Nelson DL, Cox MM (2004). Lehninger Principles of Biochemistry. W. H. Freeman; 4th edition (Hardcover).
Kosmulski M. (2009). Surface Charging and Points of Zero Charge. CRC Press; 1st edition (Hardcover).
External links
IPC – Isoelectric Point Calculator — calculate protein isoelectric point using over 15 methods
prot pi – protein isoelectric point — an online program for calculating pI of proteins (include multiple subunits and posttranslational modifications)
CurTiPot — a suite of spreadsheets for computing acid-base equilibria (charge versus pH plot of amphoteric molecules e.g., amino acids)
pICalculax — Isoelectric point (pI) predictor for chemically modified peptides and proteins
SWISS-2DPAGE — a database of isoelectric points coming from two-dimensional polyacrylamide gel electrophoresis (~ 2,000 proteins)
PIP-DB — a Protein Isoelectric Point database (~ 5,000 proteins)
Proteome-pI — a proteome isoelectric point database (predicted isoelectric point for all proteins)
Ions
Molecular biology | Isoelectric point | [
"Physics",
"Chemistry",
"Biology"
] | 2,619 | [
"Biochemistry",
"Ions",
"Matter",
"Molecular biology"
] |
15,097 | https://en.wikipedia.org/wiki/Ionosphere | The ionosphere is the ionized part of the upper atmosphere of Earth, from about to above sea level, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on Earth. Travel through this layer also impacts GPS signals, resulting in effects such as deflection in their path and delay in the arrival of the signal.
History of discovery
As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada) using a kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power roughly 100 times greater than any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. However, Marconi did achieve transatlantic wireless communications in Glace Bay, Nova Scotia, one year later.
In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties.
In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923.
In 1925, observations during a solar eclipse in New York by Dr. Alfred N. Goldsmith and his team demonstrated the influence of sunlight on radio wave propagation, revealing that short waves became weak or inaudible while long waves steadied during the eclipse, thus contributing to the understanding of the ionosphere's role in radio transmission.
In 1926, Scottish physicist Robert Watson-Watt introduced the term ionosphere in a letter published only in 1969 in Nature.
In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg Effect.
Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg has developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere.
In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, further AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere.
On July 26, 1963, the first operational geosynchronous satellite Syncom 2 was launched. On board radio beacons on this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. (The rotation of the plane of polarization directly measures TEC along the path.) Australian geophysicist Elizabeth Essex-Cohen from 1969 onwards was using this technique to monitor the atmosphere above Australia and Antarctica.
Geophysics
The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about to more than . It exists primarily due to ultraviolet radiation from the Sun.
The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about . Above that is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights of above , in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially ionized and contains a plasma which is referred to as the ionosphere.
Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron obtains a high velocity so that the temperature of the created electronic gas is much higher (of the order of thousand K) than the one of ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present.
Ionization depends primarily on the Sun and its Extreme Ultraviolet (EUV) and X-ray irradiance which varies strongly with solar activity. The more magnetically active the Sun is, the more sunspot active regions there are on the Sun at any one time. Sunspot active regions are the source of increased coronal heating and accompanying increases in EUV and X-ray irradiance, particularly during episodic magnetic eruptions that include solar flares that increase ionization on the sunlit side of the Earth and solar energetic particle events that can increase ionization in the polar regions. Thus the degree of ionization in the ionosphere follows both a diurnal (time of day) cycle and the 11-year solar cycle. There is also a seasonal dependence in ionization degree since the local winter hemisphere is tipped away from the Sun, thus there is less received solar radiation. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization.
Sydney Chapman proposed that the region below the ionosphere be called the neutrosphere (the neutral atmosphere).
Layers of ionization
At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionisation known as the F1 layer. The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves.
D layer
The D layer is the innermost layer, above the surface of the Earth. Ionization here is due to Lyman series-alpha hydrogen radiation at a wavelength of 121.6 nanometre (nm) ionizing nitric oxide (NO). In addition, solar flares can generate hard X-rays (wavelength ) that ionize N2 and O2. Recombination rates are high in the D layer, so there are many more neutral air molecules than ions.
Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime.
During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours.
E layer
The E layer is the middle layer, above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionization of molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz and may contribute a bit to absorption on frequencies above. However, during intense sporadic E events, the E layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer.
This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). In 1924 its existence was detected by Edward V. Appleton and Miles Barnett.
Es layer
The Es layer (sporadic E layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, frequently up to 50 MHz and rarely up to 450 MHz. Sporadic-E events may last for just a few minutes to many hours. Sporadic E propagation makes VHF-operating by radio amateurs very exciting when long-distance propagation paths that are generally unreachable "open up" to two-way communication. There are multiple causes of sporadic-E that are still being pursued by researchers. This propagation occurs every day during June and July in northern hemisphere mid-latitudes when high signal levels are often reached. The skip distances are generally around . Distances for one hop propagation can be anywhere from . Multi-hop propagation over is also common, sometimes to distances of or more.
F layer
The F layer or region, also known as the Appleton–Barnett layer, extends from about to more than above the surface of Earth. It is the layer with the highest electron density, which implies signals penetrating this layer will escape into space. Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day, a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long distance high frequency (HF, or shortwave) radio communications.
Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere.
From 1972 to 1975 NASA launched the AEROS and AEROS B satellites to study the F region.
Ionospheric model
An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: electron density, electron and ion temperature and, since several species of ions are present, ionic composition. Radio propagation depends uniquely on electron density.
Models are usually expressed as computer programs. The model may be based on basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model is "International Standard" for the terrestrial ionosphere (standard TS16457).
Persistent anomalies to the idealized model
Ionograms allow deducing, via computation, the true shape of the different layers. Nonhomogeneous structure of the electron/ion-plasma produces rough echo traces, seen predominantly at night and at higher latitudes, and during disturbed conditions.
Winter anomaly
At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity.
Equatorial anomaly
Within approximately ± 20 degrees of the magnetic equator, is the equatorial anomaly. It is the occurrence of a trough in the ionization in the F2 layer at the equator and crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ± 20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain.
Equatorial electrojet
The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) ( altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ± 3 degrees of the magnetic equator, known as the equatorial electrojet.
Ephemeral ionospheric perturbations
X-rays: sudden ionospheric disturbances (SID)
When the Sun is active, strong solar flares can occur that hit the sunlit side of Earth with hard X-rays. The X-rays penetrate to the D-region, releasing electrons that rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout that can persist for many hours after strong flares. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio black-out steadily declines as the electrons in the D-region recombine rapidly and propagation gradually returns to pre-flare conditions over minutes to hours depending on the solar flare strength and frequency.
Protons: polar cap absorption (PCA)
Associated with solar flares is a release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles increasing the ionization of the D and E layers. PCA's typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions.
Storms
Geomagnetic storms and ionospheric storms are temporary and intense disturbances of the Earth's magnetosphere and ionosphere.
During a geomagnetic storm the F₂ layer will become unstable, fragment, and may even disappear completely. In the Northern and Southern polar regions of the Earth aurorae will be observable in the night sky.
Lightning
Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events.
Additional ionization can also occur from direct heating/ionization as a result of huge motions of charge in lightning strikes. These events are called early/fast.
In 1925, C. T. R. Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focused on the mechanism by which this process can occur.
Applications
Radio communication
Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on time of day or night, the seasons, weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Shortwave broadcasting is useful in crossing international boundaries and covering large areas at low cost. Automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts and to assist with emergency communications during natural disasters. Armed forces use shortwave so as to be independent of vulnerable infrastructure, including satellites, and the low latency of shortwave communications makes it attractive to stock traders, where milliseconds count.
Mechanism of refraction
When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough.
A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that the refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal as would be indicated when the refractive index is greater than unity. It can also be shown that the refractive index of a plasma, and hence the ionosphere, is frequency-dependent, see Dispersion (optics).
The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as shown below:

f_critical = 9 × √N

where N = electron density per m3 and f_critical is in Hz.
The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time.

MUF = f_critical / sin(α)

where α = angle of arrival, the angle of the wave relative to the horizon, and sin is the sine function.
The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer.
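A small sketch of both relations above, taking the constant 9 and the simple secant law at face value (real MUF prediction also accounts for layer height and Earth curvature); the electron density and elevation angle are arbitrary example values.

```python
import math

def critical_frequency(n_electrons):
    """Critical frequency in Hz for a peak electron density in m^-3
    (f_critical ~ 9 * sqrt(N))."""
    return 9.0 * math.sqrt(n_electrons)

def maximum_usable_frequency(f_critical, elevation_deg):
    """MUF for a wave arriving at the given elevation angle above the horizon."""
    return f_critical / math.sin(math.radians(elevation_deg))

f_c = critical_frequency(1e12)           # 1e12 electrons/m^3 -> ~9 MHz
print(f_c / 1e6)                         # ~9 MHz
print(maximum_usable_frequency(f_c, 15.0) / 1e6)   # ~35 MHz at 15 degrees elevation
```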
GPS/GNSS ionospheric correction
There are a number of models used to understand the effects of the ionosphere on global navigation satellite systems. The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model. GALILEO broadcasts 3 coefficients to compute the effective ionization level, which is then used by the NeQuick model to compute a range delay along the line-of-sight.
Other applications
The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction.
Measurements
Overview
Scientists explore the structure of the ionosphere by a wide variety of methods. They include:
passive observations of optical and radio emissions generated in the ionosphere
bouncing radio waves of different frequencies from it
incoherent scatter radars such as the EISCAT, Sondre Stromfjord, Millstone Hill, Arecibo, Advanced Modular Incoherent Scatter Radar (AMISR) and Jicamarca radars
coherent scatter radars such as the Super Dual Auroral Radar Network (SuperDARN) radars
special receivers to detect how the reflected waves have changed from the transmitted waves.
A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska.
The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 countries and multiple radars in both hemispheres.
Scientists are also examining the ionosphere by the changes to radio waves, from satellites and stars, passing through it. The Arecibo Telescope located in Puerto Rico, was originally intended to study Earth's ionosphere.
Ionograms
Ionograms show the virtual heights and critical frequencies of the ionospheric layers and which are measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in: "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available).
Incoherent scatter radars
Incoherent scatter radars operate above the critical frequencies. Therefore, the technique allows probing the ionosphere, unlike ionosondes, also above the electron density peaks. The thermal fluctuations of the electron density scattering the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities. Incoherent scatter radars can also measure neutral atmosphere movements, such as atmospheric tides, after making assumptions about ion-neutral collision frequency across the ionospheric dynamo region.
GNSS radio occultation
Radio occultation is a remote sensing technique in which a GNSS signal grazes the Earth tangentially, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, curved and delayed. An LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an inverse Abel transform, a radial profile of refractivity at the tangent point can be reconstructed.
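A heavily simplified sketch of such a reconstruction is shown below. It assumes spherical symmetry, straight-line ray paths and homogeneous shells between consecutive tangent radii (an "onion-peeling" inversion); operational retrievals instead invert the measured bending angles with a full Abel transform, and all names and inputs here are illustrative.

```python
import numpy as np

def onion_peel(tangent_radii, integrated_values, top_radius):
    """Recover a radial profile from integrals along straight rays.

    tangent_radii     : tangent-point radii of the rays, ascending (m)
    integrated_values : integral of the quantity (e.g. refractivity or
                        electron density) along each straight ray
    top_radius        : radius above which the quantity is taken as zero
    """
    r = np.append(tangent_radii, top_radius)
    n_shell = np.zeros(len(tangent_radii))
    # Work from the outermost ray inwards; each ray only crosses the
    # shells at or above its own tangent radius.
    for i in reversed(range(len(tangent_radii))):
        outer = 0.0
        for j in range(i + 1, len(tangent_radii)):
            path = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                          - np.sqrt(r[j] ** 2 - r[i] ** 2))
            outer += path * n_shell[j]
        own_path = 2.0 * np.sqrt(r[i + 1] ** 2 - r[i] ** 2)
        n_shell[i] = (integrated_values[i] - outer) / own_path
    return n_shell
```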
Major GNSS radio occultation missions include the GRACE, CHAMP, and COSMIC.
Indices of the ionosphere
In empirical models of the ionosphere such as Nequick, the following indices are used as indirect indicators of the state of the ionosphere.
Solar intensity
F10.7 and R12 are two indices commonly used in ionospheric modelling. Both are valuable for their long historical records covering multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground radio telescope. R12 is a 12 months average of daily sunspot numbers. The two indices have been shown to be correlated with each other.
However, both indices are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. We now have data from the GOES spacecraft that measures the background X-ray flux from the Sun, a parameter more closely related to the ionization levels in the ionosphere.
Geomagnetic disturbances
The A- and K-indices are a measurement of the behavior of the horizontal component of the geomagnetic field. The K-index uses a semi-logarithmic scale from 0 to 9 to measure the strength of the horizontal component of the geomagnetic field. The Boulder K-index is measured at the Boulder Geomagnetic Observatory.
The geomagnetic activity levels of the Earth are measured by the fluctuation of the Earth's magnetic field in SI units called teslas (or in non-SI gauss, especially in older literature). The Earth's magnetic field is measured around the planet by many observatories. The data retrieved is processed and turned into measurement indices. Daily measurements for the entire planet are made available through an estimate of the Ap-index, called the planetary A-index (PAI).
Ionospheres of other planets and natural satellites
Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Mars, Jupiter, Saturn, Uranus, and Neptune.
The atmosphere of Titan includes an ionosphere that ranges from about in altitude and contains carbon compounds. Ionospheres have also been observed at Io, Europa, Ganymede, Triton, and Pluto.
See also
Aeronomy
Geospace
Space physics
Geophysics
International Reference Ionosphere
Ionospheric dynamo region
Magnetospheric electric convection field
Protonosphere
Schumann resonances
Van Allen radiation belt
Radio
Earth–ionosphere waveguide
Fading
Ionospheric absorption
Ionospheric scintillation
Line-of-sight propagation
Sferics
Related
Canadian Geospace Monitoring
High Frequency Active Auroral Research Program
Ionospheric heater
S4 Index
Soft gamma repeater
Upper-atmospheric lightning
Sura Ionospheric Heating Facility
TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics)
Notes
References
J. Lilensten, P.-L. Blelly: Du Soleil à la Terre, Aéronomie et météorologie de l'espace, Collection Grenoble Sciences, Université Joseph Fourier Grenoble I, 2000. .
P.-L. Blelly, D. Alcaydé: Ionosphere, in: Y. Kamide, A. Chian, Handbook of the Solar-Terrestrial Environment, Springer-Verlag Berlin Heidelberg, pp. 189–220, 2007.
External links
Gehred, Paul, and Norm Cohen, SWPC's Radio User's Page.
Amsat-Italia project on Ionospheric propagation (ESA SWENET website)
NZ4O Solar Space Weather & Geomagnetic Data Archive
NZ4O 160 Meter (Medium Frequency)Radio Propagation Theory Notes Layman Level Explanations Of "Seemingly" Mysterious 160 Meter (MF/HF) Propagation Occurrences
USGS Geomagnetism Program
Encyclopædia Britannica, Ionosphere and magnetosphere
Current Space Weather Conditions
Current Solar X-Ray Flux
Super Dual Auroral Radar Network
European Incoherent Scatter radar system
Terrestrial plasmas
Radio frequency propagation | Ionosphere | [
"Physics"
] | 6,494 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
15,112 | https://en.wikipedia.org/wiki/Wave%20interference | In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater intensity (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively.
Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves as well as in loudspeakers as electrical waves.
Etymology
The word interference is derived from the Latin words inter which means "between" and fere which means "hit or strike", and was used in the context of wave superposition by Thomas Young in 1801.
Mechanisms
The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes; this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes; this is known as destructive interference. In ideal media (water and air are almost ideal), energy is always conserved: at points of destructive interference the wave amplitudes cancel each other out, and the energy is redistributed to other areas. For example, when two pebbles are dropped in a pond, an interference pattern is observable; the waves nevertheless travel onwards, and only when they reach the shore is the energy removed from the medium.
Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values.
Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.
Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly as we can, for example, in water. Superposition in the EM field is an assumed phenomenon and necessary to explain how two light beams pass through each other and continue on their respective paths. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers.
In addition to classical wave model for understanding optical interference, quantum matter waves also demonstrate interference.
Real-valued wave functions
The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is

W1(x, t) = A cos(kx - ωt)

where A is the peak amplitude, k is the wavenumber and ω is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right

W2(x, t) = A cos(kx - ωt + φ)

where φ is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the two waves is

W1 + W2 = A [cos(kx - ωt) + cos(kx - ωt + φ)]

Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a - b)/2) cos((a + b)/2), this can be written

W1 + W2 = 2A cos(φ/2) cos(kx - ωt + φ/2)

This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ/2.
Constructive interference: If the phase difference is an even multiple of π (φ = ..., -4π, -2π, 0, 2π, 4π, ...), then |cos(φ/2)| = 1, so the sum of the two waves is a wave with twice the amplitude

W1 + W2 = 2A cos(kx - ωt + φ/2)

Destructive interference: If the phase difference is an odd multiple of π (φ = ..., -3π, -π, π, 3π, ...), then cos(φ/2) = 0, so the sum of the two waves is zero

W1 + W2 = 0
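A short numerical check of this result (the unit amplitude, wavelength and time below are arbitrary choices for the sketch):

```python
import numpy as np

A, k, omega, t = 1.0, 2 * np.pi, 2 * np.pi, 0.0   # unit amplitude, wavelength, period
x = np.linspace(0.0, 3.0, 2000)

for phi in (0.0, np.pi / 2, np.pi):                # in phase, intermediate, anti-phase
    w1 = A * np.cos(k * x - omega * t)
    w2 = A * np.cos(k * x - omega * t + phi)
    peak = np.max(np.abs(w1 + w2))
    # The measured peak agrees with the predicted amplitude 2A|cos(phi/2)|.
    print(phi, peak, 2 * A * abs(np.cos(phi / 2)))
```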
Between two plane waves
A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle.
One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at the point A, a distance d from B, is given by

Δφ = 2πd sin θ / λ

It can be seen that the two waves are in phase when

d sin θ / λ = 0, ±1, ±2, ...

and are half a cycle out of phase when

d sin θ / λ = ±1/2, ±3/2, ...

Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is

df = λ / sin θ

and df is known as the fringe spacing. The fringe spacing increases with increasing wavelength and with decreasing angle θ.
The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout.
Between two spherical waves
A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right.
When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar.
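The pattern from two point sources can also be reproduced numerically by summing two spherical-wave phasors on a grid; the wavelength, source separation and observation region below are arbitrary choices for illustration.

```python
import numpy as np

wavelength, d = 1.0, 4.0                 # source separation of four wavelengths
k = 2 * np.pi / wavelength

y, x = np.mgrid[-20:20:400j, 5:45:400j]  # observation region to the right of the sources
r1 = np.hypot(x, y - d / 2)              # distance to the upper source
r2 = np.hypot(x, y + d / 2)              # distance to the lower source

# Sum of two unit-strength spherical-wave phasors; the squared modulus
# (the intensity) maps out the phase difference k * (r1 - r2) as fringes.
field = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2
intensity = np.abs(field) ** 2
# import matplotlib.pyplot as plt; plt.imshow(intensity); plt.show()
```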
Multiple beams
Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time.
It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases.
It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as A e^(iφn) for N waves from n = 0 to N - 1, where

φn = φ0 + 2πn/N

To show that

A e^(iφ0) + A e^(iφ1) + ... + A e^(iφ(N-1)) = 0

one merely assumes the converse, then multiplies both sides by

e^(i 2π/N)
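This cancellation is easy to verify numerically for any number of equally spaced phasors, for example:

```python
import numpy as np

N, A = 5, 1.0
phases = 2 * np.pi * np.arange(N) / N   # N phases spaced equally in angle
total = np.sum(A * np.exp(1j * phases))
print(abs(total))                       # ~0: the equally spaced phasors cancel
```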
The Fabry–Pérot interferometer uses interference between multiple reflections.
A diffraction grating can be considered to be a multiple-beam interferometer; since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion.
Complex valued wave functions
Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex valued wave functions. Some of the differences between real valued and complex valued wave interference include:
The interference involves different types of mathematical functions: A classical wave is a real function representing the displacement from an equilibrium position; an optical or quantum wavefunction is a complex function. A classical wave at any point can be positive or negative; the quantum probability function is non-negative.
Any two different real waves in the same medium interfere; complex waves must be coherent to interfere. In practice this means the waves must come from the same source and have similar frequencies.
Real wave interference is obtained simply by adding the displacements from equilibrium (or amplitudes) of the two waves; In complex wave interference, we measure the modulus of the wavefunction squared.
Optical wave interference
Because the frequency of light waves (~10^14 Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point r is:

U1(r, t) = A1(r) cos(ωt + φ1(r))
U2(r, t) = A2(r) cos(ωt + φ2(r))

where A represents the magnitude of the displacement, φ represents the phase and ω represents the angular frequency.

The displacement of the summed waves is

U(r, t) = A1(r) cos(ωt + φ1(r)) + A2(r) cos(ωt + φ2(r))

The intensity of the light at r is given by

I(r) ∝ A1(r)² + A2(r)² + 2 A1(r) A2(r) cos(φ1(r) - φ2(r))

This can be expressed in terms of the intensities of the individual waves as

I(r) = I1(r) + I2(r) + 2 √(I1(r) I2(r)) cos(φ1(r) - φ2(r))

Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity.
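A quick numerical check of the two-beam intensity formula above for equal-intensity beams, confirming maxima of four times the single-beam intensity and minima of zero:

```python
import numpy as np

I1, I2 = 1.0, 1.0                                     # equal-intensity beams
dphi = np.linspace(0.0, 4 * np.pi, 9)                 # phase differences in steps of pi/2
I = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(dphi)
print(I)   # maxima of 4 at multiples of 2*pi, minima of 0 at odd multiples of pi
```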
Classically the two waves must have the same polarization to give rise to interference fringes since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state.
Quantum mechanically the theories of Paul Dirac and Richard Feynman offer a more modern approach. Dirac showed that every quantum or photon of light acts on its own, which he famously stated as "every photon interferes with itself". Richard Feynman showed that, by evaluating a path integral in which all possible paths are considered, a number of higher-probability paths emerge. In thin films, for example, a film thickness which is not a multiple of the light's wavelength will not allow the quanta to traverse; only reflection is possible.
Light source requirements
The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequency waves of finite duration (but shorter than their coherence time), will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap.
Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications.
A laser beam generally approximates much more closely to a monochromatic source, and thus it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.
Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements.
This has also been observed for widefield interference between two incoherent laser sources.
It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength decreases and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero path difference fringe to be identified.
Optical arrangements
To generate interference fringes, light from the source has to be divided into two waves which then have to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems.
In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems.
In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror.
Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively.
Quantum interference
Quantum interference – the observed wave-behavior of matter – resembles optical interference. Let $\Psi(x, t)$ be a wavefunction solution of the Schrödinger equation for a quantum mechanical object. Then the probability of observing the object at position $x$ is
$$P(x) = |\Psi(x, t)|^2 = \Psi^*(x, t)\,\Psi(x, t),$$
where * indicates complex conjugation. Quantum interference concerns the issue of this probability when the wavefunction is expressed as a sum or linear superposition of two terms $\Psi(x, t) = \Psi_A(x, t) + \Psi_B(x, t)$:
$$P(x) = |\Psi_A(x, t)|^2 + |\Psi_B(x, t)|^2 + \big(\Psi_A^*(x, t)\,\Psi_B(x, t) + \Psi_A(x, t)\,\Psi_B^*(x, t)\big)$$
Usually, $\Psi_A$ and $\Psi_B$ correspond to distinct situations A and B. When this is the case, the equation $\Psi = \Psi_A + \Psi_B$ indicates that the object can be in situation A or situation B. The above equation can then be interpreted as: The probability of finding the object at $x$ is the probability of finding the object at $x$ when it is in situation A plus the probability of finding the object at $x$ when it is in situation B plus an extra term. This extra term, which is called the quantum interference term, is $\Psi_A^*\,\Psi_B + \Psi_A\,\Psi_B^*$ in the above equation. As in the classical wave case above, the quantum interference term can add (constructive interference) or subtract (destructive interference) from $|\Psi_A|^2 + |\Psi_B|^2$ in the above equation depending on whether the quantum interference term is positive or negative. If this term is absent for all $x$, then there is no quantum mechanical interference associated with situations A and B.
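A minimal numeric check of this decomposition, using arbitrary complex amplitudes at a single point purely for illustration, is sketched below:

```python
import numpy as np

# Arbitrary complex amplitudes for situations A and B at one point x (illustrative only).
psi_A = 0.6 * np.exp(1j * 0.3)
psi_B = 0.5 * np.exp(1j * 2.1)

P_total = abs(psi_A + psi_B) ** 2                  # |psi_A + psi_B|^2
P_sum   = abs(psi_A) ** 2 + abs(psi_B) ** 2        # sum of the two individual probabilities
interf  = 2 * (psi_A.conjugate() * psi_B).real     # the quantum interference term

print(np.isclose(P_total, P_sum + interf))         # True: total = sum + interference term
```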
The best known example of quantum interference is the double-slit experiment. In this experiment, matter waves from electrons, atoms or molecules approach a barrier with two slits in it. One slit becomes $\Psi_A$ and the other becomes $\Psi_B$. The interference pattern occurs on the far side, observed by detectors suitable to the particles originating the matter wave. The pattern matches the optical double slit pattern.
Applications
Beat
In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.
With tuning instruments that can produce sustained tones, beats can be readily recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating. The volume varies like in a tremolo as the sounds alternately interfere constructively and destructively. As the two tones gradually approach unison, the beating slows down and may become so slow as to be imperceptible. As the two tones get further apart, their beat frequency starts to approach the range of human pitch perception, the beating starts to sound like a note, and a combination tone is produced. This combination tone can also be referred to as a missing fundamental, as the beat frequency of any two tones is equivalent to the frequency of their implied fundamental frequency.
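The following short sketch illustrates the relationship between the two tone frequencies and the beat frequency; the frequencies, duration, and sample rate are arbitrary example values, not taken from the text:

```python
import numpy as np

f1, f2 = 440.0, 443.0               # two tones a few hertz apart (arbitrary example values)
fs = 8000                           # sample rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)   # two seconds of signal

# Sum of the two tones: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2),
# i.e. a tone at the mean frequency modulated by a slowly varying envelope.
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The loudness envelope |2 cos(pi (f1 - f2) t)| repeats |f1 - f2| times per second.
print(abs(f1 - f2), "Hz perceived beat frequency")
print(np.max(np.abs(signal)))       # ~2.0: constructive interference doubles the amplitude
```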
Interferometry
Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement. The impact on physics and the applications span various types of waves.
Optical interferometry
Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light.
In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment.
The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity.
Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement.
Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components.
Radio interferometry
In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or other type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called Aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas farthest apart in the array.
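As a rough, hedged illustration of the resolution claim (the wavelength and baseline below are assumed example values), the diffraction-limited angular resolution of such an array scales approximately as the observing wavelength divided by the longest baseline:

```python
import math

wavelength_m = 0.21        # e.g. the 21 cm hydrogen line (illustrative choice)
baseline_m = 36_000.0      # assumed maximum antenna separation of 36 km

theta_rad = wavelength_m / baseline_m            # approximate angular resolution in radians
theta_arcsec = math.degrees(theta_rad) * 3600    # convert radians to arcseconds

print(f"~{theta_arcsec:.2f} arcseconds")         # ~1.2 arcseconds for these assumed values
```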
Acoustic interferometry
An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such as velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal, are reflected back to the source, and are measured.
See also
Active noise control
Beat (acoustics)
Coherence (physics)
Diffraction
Haidinger fringes
Interference lithography
Interference visibility
Interferometer
Lloyd's Mirror
Moiré pattern
Multipath interference
Newton's rings
Optical path length
Thin-film interference
Rayleigh roughness criterion
Upfade
References
External links
Easy JavaScript Simulation Model of One Dimensional Wave Interference
Expressions of position and fringe spacing
Java simulation of interference of water waves 1
Java simulation of interference of water waves 2
Flash animations demonstrating interference
Wave mechanics
ca:Interferència (propagació d'ones)#Interferència òptica | Wave interference | [
"Physics"
] | 4,055 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
15,290 | https://en.wikipedia.org/wiki/Intercalation%20%28timekeeping%29 | Intercalation or embolism in timekeeping is the insertion of a leap day, week, or month into some calendar years to make the calendar follow the seasons or moon phases. Lunisolar calendars may require intercalations of days or months.
Solar calendars
The solar or tropical year does not have a whole number of days (it is about 365.24 days), but a calendar year must have a whole number of days. The most common way to reconcile the two is to vary the number of days in the calendar year.
In solar calendars, this is done by adding an extra day ("leap day" or "intercalary day") to a common year of 365 days, about once every four years, creating a leap year that has 366 days (Julian, Gregorian and Indian national calendars).
The Decree of Canopus, issued by the pharaoh Ptolemy III Euergetes of Ancient Egypt in 239 BC, decreed a solar leap day system; an Egyptian leap year was not adopted until 25 BC, when the Roman Emperor Augustus instituted a reformed Alexandrian calendar.
In the Julian calendar, as well as in the Gregorian calendar, which improved upon it, intercalation is done by adding an extra day to February in each leap year. In the Julian calendar this was done every four years. In the Gregorian, years divisible by 100 but not 400 were exempted in order to improve accuracy. Thus, 2000 was a leap year; 1700, 1800, and 1900 were not.
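The Gregorian divisibility rule described above can be expressed directly as a short function; this is a sketch of the Gregorian rule only and does not model the Julian calendar:

```python
def is_gregorian_leap_year(year: int) -> bool:
    """A year is a leap year if divisible by 4, except centurial years
    not divisible by 400 (so 2000 was a leap year; 1700, 1800 and 1900 were not)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (1700, 1800, 1900, 2000, 2024):
    print(y, is_gregorian_leap_year(y))
```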
Epagomenal days are days within a solar calendar that are outside any regular month. Usually five epagomenal days are included within every year (Egyptian, Coptic, Ethiopian, Mayan Haab' and French Republican Calendars), but a sixth epagomenal day is intercalated every four years in some (Coptic, Ethiopian and French Republican calendars).
The Solar Hijri calendar, used in Iran, is based on solar calculations and is similar to the Gregorian calendar in its structure, and hence the intercalation, with the exception that its epoch is the Hijrah.
The Bahá'í calendar includes enough epagomenal days (usually 4 or 5) before the last month (ʿalāʾ) to ensure that the following year starts on the March equinox. These are known as the Ayyám-i-Há.
Lunisolar calendars
The solar year does not have a whole number of lunar months (it is about 365/29.5 = 12.37 lunations), so a lunisolar calendar must have a variable number of months per year. Regular years have 12 months, but embolismic years insert a 13th "intercalary" or "leap" month or "embolismic" month every second or third year. Whether to insert an intercalary month in a given year may be determined using regular cycles such as the 19-year Metonic cycle (Hebrew calendar and in the determination of Easter) or using calculations of lunar phases (Hindu lunisolar and Chinese calendars). The Buddhist calendar adds both an intercalary day and month on a usually regular cycle.
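A back-of-the-envelope check of why a 19-year cycle works is sketched below; the mean year and month lengths used are approximate values, not figures from this article:

```python
tropical_year_days = 365.2422     # approximate mean tropical year
synodic_month_days = 29.53059     # approximate mean lunation

months_per_year = tropical_year_days / synodic_month_days
print(round(months_per_year, 3))          # ~12.368 lunations per year

# Over a 19-year Metonic cycle this is very close to a whole number of months:
print(round(19 * months_per_year, 3))     # ~234.997, i.e. about 235 months
# 235 = 19 * 12 + 7, which is why 7 intercalary months are inserted every 19 years.
```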
Lunar calendars
In principle, lunar calendars do not employ intercalation because they do not seek to synchronise with the seasons, and the motion of the moon is astronomically predictable. But religious lunar calendars rely on actual observation.
The Lunar Hijri calendar, the purely lunar calendar observed by most of Islam, depends on actual observation of the first crescent of the moon and thus has no intercalation. Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of 29- or 30-day month lengths. Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset. If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th.
The tabular Islamic calendar, used in Iran, has 12 lunar months that usually alternate between 30 and 29 days every year, but an intercalary day is added to the last month of the year 12 times in a 33-year cycle. Some historians also linked the pre-Islamic practice of Nasi' to intercalation.
Leap seconds
The International Earth Rotation and Reference Systems Service can insert or remove leap seconds from the last day of any month (June and December are preferred). These are sometimes described as intercalary.
Other uses
ISO 8601 includes a specification for a 52/53-week year. Any year that has 53 Thursdays has 53 weeks; this extra week may be regarded as intercalary.
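This can be verified with Python's standard library; the sketch below relies on the fact that 28 December always falls in the last ISO week of its year (the chosen test years are arbitrary):

```python
import datetime

def iso_weeks_in_year(year: int) -> int:
    """Return 52 or 53, the number of ISO 8601 weeks in the given year."""
    # December 28 always falls in the last ISO week of its year.
    return datetime.date(year, 12, 28).isocalendar()[1]

for y in (2019, 2020, 2021, 2026):
    print(y, iso_weeks_in_year(y))   # 2020 and 2026 have 53 ISO weeks
```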
The xiuhpōhualli (year count) system of the Aztec calendar had five intercalary days, the nēmontēmi, after the eighteenth and final month; during these days the people fasted and reflected on the past year.
See also
Lunisolar calendar
Egyptian, Coptic, and Ethiopian calendars
Iranian calendar
Islamic calendar
Mandaean calendar
Celtic calendar
Thai lunar calendar
Bengali calendar
Igbo calendar
World Calendar
Intercalated Games
References
Calendars
Units of time | Intercalation (timekeeping) | [
"Physics",
"Mathematics"
] | 1,094 | [
"Calendars",
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
15,303 | https://en.wikipedia.org/wiki/Ion%20channel | Ion channels are pore-forming membrane proteins that allow ions to pass through the channel pore. Their functions include establishing a resting membrane potential, shaping action potentials and other electrical signals by gating the flow of ions across the cell membrane, controlling the flow of ions across secretory and epithelial cells, and regulating cell volume. Ion channels are present in the membranes of all cells. Ion channels are one of the two classes of ionophoric proteins, the other being ion transporters.
The study of ion channels often involves biophysics, electrophysiology, and pharmacology, while using techniques including voltage clamp, patch clamp, immunohistochemistry, X-ray crystallography, fluoroscopy, and RT-PCR. Their classification as molecules is referred to as channelomics.
Basic features
There are two distinctive features of ion channels that differentiate them from other types of ion transporter proteins:
The rate of ion transport through the channel is very high (often 10⁶ ions per second or greater).
Ions pass through channels down their electrochemical gradient, which is a function of ion concentration and membrane potential, "downhill", without the input (or help) of metabolic energy (e.g. ATP, co-transport mechanisms, or active transport mechanisms).
Ion channels are located within the membrane of all excitable cells, and of many intracellular organelles. They are often described as narrow, water-filled tunnels that allow only ions of a certain size and/or charge to pass through. This characteristic is called selective permeability. The archetypal channel pore is just one or two atoms wide at its narrowest point and is selective for specific species of ion, such as sodium or potassium. However, some channels may be permeable to the passage of more than one type of ion, typically sharing a common charge: positive (cations) or negative (anions). Ions often move through the segments of the channel pore in a single file nearly as quickly as the ions move through the free solution. In many ion channels, passage through the pore is governed by a "gate", which may be opened or closed in response to chemical or electrical signals, temperature, or mechanical force.
Ion channels are integral membrane proteins, typically formed as assemblies of several individual proteins. Such "multi-subunit" assemblies usually involve a circular arrangement of identical or homologous proteins closely packed around a water-filled pore through the plane of the membrane or lipid bilayer. For most voltage-gated ion channels, the pore-forming subunit(s) are called the α subunit, while the auxiliary subunits are denoted β, γ, and so on.
Biological role
Because channels underlie the nerve impulse and because "transmitter-activated" channels mediate conduction across the synapses, channels are especially prominent components of the nervous system. Indeed, numerous toxins that organisms have evolved for shutting down the nervous systems of predators and prey (e.g., the venoms produced by spiders, scorpions, snakes, fish, bees, sea snails, and others) work by modulating ion channel conductance and/or kinetics. In addition, ion channels are key components in a wide variety of biological processes that involve rapid changes in cells, such as cardiac, skeletal, and smooth muscle contraction, epithelial transport of nutrients and ions, T-cell activation, and pancreatic beta-cell insulin release. In the search for new drugs, ion channels are a frequent target.
Diversity
There are over 300 types of ion channels just in the cells of the inner ear. Ion channels may be classified by the nature of their gating, the species of ions passing through those gates, the number of gates (pores), and localization of proteins.
Further heterogeneity of ion channels arises when channels with different constitutive subunits give rise to a specific kind of current. Absence or mutation of one or more of the contributing types of channel subunits can result in loss of function and, potentially, underlie neurologic diseases.
Classification by gating
Ion channels may be classified by gating, i.e. what opens and closes the channels. For example, voltage-gated ion channels open or close depending on the voltage gradient across the plasma membrane, while ligand-gated ion channels open or close depending on binding of ligands to the channel.
Voltage-gated
Voltage-gated ion channels open and close in response to membrane potential.
Voltage-gated sodium channels: This family contains at least 9 members and is largely responsible for action potential creation and propagation. The pore-forming α subunits are very large (up to 4,000 amino acids) and consist of four homologous repeat domains (I-IV) each comprising six transmembrane segments (S1-S6) for a total of 24 transmembrane segments. The members of this family also coassemble with auxiliary β subunits, each spanning the membrane once. Both α and β subunits are extensively glycosylated.
Voltage-gated calcium channels: This family contains 10 members, though these are known to coassemble with α2δ, β, and γ subunits. These channels play an important role in both linking muscle excitation with contraction as well as neuronal excitation with transmitter release. The α subunits have an overall structural resemblance to those of the sodium channels and are equally large.
Cation channels of sperm: This small family of channels, normally referred to as Catsper channels, is related to the two-pore channels and distantly related to TRP channels.
Voltage-gated potassium channels (KV): This family contains almost 40 members, which are further divided into 12 subfamilies. These channels are known mainly for their role in repolarizing the cell membrane following action potentials. The α subunits have six transmembrane segments, homologous to a single domain of the sodium channels. Correspondingly, they assemble as tetramers to produce a functioning channel.
Some transient receptor potential channels: This group of channels, normally referred to simply as TRP channels, is named after their role in Drosophila phototransduction. This family, containing at least 28 members, is incredibly diverse in its method of activation. Some TRP channels seem to be constitutively open, while others are gated by voltage, intracellular Ca2+, pH, redox state, osmolarity, and mechanical stretch. These channels also vary according to the ion(s) they pass, some being selective for Ca2+ while others are less selective, acting as cation channels. This family is subdivided into 6 subfamilies based on homology: classical (TRPC), vanilloid receptors (TRPV), melastatin (TRPM), polycystins (TRPP), mucolipins (TRPML), and ankyrin transmembrane protein 1 (TRPA).
Hyperpolarization-activated cyclic nucleotide-gated channels: The opening of these channels is due to hyperpolarization rather than the depolarization required for other cyclic nucleotide-gated channels. These channels are also sensitive to the cyclic nucleotides cAMP and cGMP, which alter the voltage sensitivity of the channel's opening. These channels are permeable to the monovalent cations K+ and Na+. There are 4 members of this family, all of which form tetramers of six-transmembrane α subunits. As these channels open under hyperpolarizing conditions, they function as pacemaking channels in the heart, particularly the SA node.
Voltage-gated proton channels: Voltage-gated proton channels open with depolarization, but in a strongly pH-sensitive manner. The result is that these channels open only when the electrochemical gradient is outward, such that their opening will only allow protons to leave cells. Their function thus appears to be acid extrusion from cells. Another important function occurs in phagocytes (e.g. eosinophils, neutrophils, macrophages) during the "respiratory burst." When bacteria or other microbes are engulfed by phagocytes, the enzyme NADPH oxidase assembles in the membrane and begins to produce reactive oxygen species (ROS) that help kill bacteria. NADPH oxidase is electrogenic, moving electrons across the membrane, and proton channels open to allow proton flux to balance the electron movement electrically.
Ligand-gated (neurotransmitter)
Also known as ionotropic receptors, this group of channels open in response to specific ligand molecules binding to the extracellular domain of the receptor protein. Ligand binding causes a conformational change in the structure of the channel protein that ultimately leads to the opening of the channel gate and subsequent ion flux across the plasma membrane. Examples of such channels include the cation-permeable nicotinic acetylcholine receptors, ionotropic glutamate-gated receptors, acid-sensing ion channels (ASICs), ATP-gated P2X receptors, and the anion-permeable γ-aminobutyric acid-gated GABAA receptor.
Ion channels activated by second messengers may also be categorized in this group, although ligands and second messengers are otherwise distinguished from each other.
Lipid-gated
This group of channels opens in response to specific lipid molecules binding to the channel's transmembrane domain typically near the inner leaflet of the plasma membrane. Phosphatidylinositol 4,5-bisphosphate (PIP2) and phosphatidic acid (PA) are the best-characterized lipids to gate these channels. Many of the leak potassium channels are gated by lipids including the inward-rectifier potassium channels and two pore domain potassium channels TREK-1 and TRAAK. KCNQ potassium channel family are gated by PIP2. The voltage activated potassium channel (Kv) is regulated by PA. Its midpoint of activation shifts +50 mV upon PA hydrolysis, near resting membrane potentials. This suggests Kv could be opened by lipid hydrolysis independent of voltage and may qualify this channel as dual lipid and voltage gated channel.
Other gating
Gating also includes activation and inactivation by second messengers from the inside of the cell membrane – rather than from outside the cell, as in the case for ligands.
Some potassium channels:
Inward-rectifier potassium channels: These channels allow potassium ions to flow into the cell in an "inwardly rectifying" manner: potassium flows more efficiently into than out of the cell. This family is composed of 15 official and 1 unofficial member and is further subdivided into 7 subfamilies based on homology. These channels are affected by intracellular ATP, PIP2, and G-protein βγ subunits. They are involved in important physiological processes such as pacemaker activity in the heart, insulin release, and potassium uptake in glial cells. They contain only two transmembrane segments, corresponding to the core pore-forming segments of the KV and KCa channels. Their α subunits form tetramers.
Calcium-activated potassium channels: This family of channels is activated by intracellular Ca2+ and contains 8 members.
Tandem pore domain potassium channel: This family of 15 members form what are known as leak channels, and they display Goldman-Hodgkin-Katz (open) rectification. Contrary to their common name of 'Two-pore-domain potassium channels', these channels have only one pore but two pore domains per subunit.
Two-pore channels include ligand-gated and voltage-gated cation channels, so-named because they contain two pore-forming subunits. As their name suggests, they have two pores.
Light-gated channels like channelrhodopsin are directly opened by photons.
Mechanosensitive ion channels open under the influence of stretch, pressure, shear, and displacement.
Cyclic nucleotide-gated channels: This superfamily of channels contains two families: the cyclic nucleotide-gated (CNG) channels and the hyperpolarization-activated, cyclic nucleotide-gated (HCN) channels. This grouping is functional rather than evolutionary.
Cyclic nucleotide-gated channels: This family of channels is characterized by activation by either intracellular cAMP or cGMP. These channels are primarily permeable to monovalent cations such as K+ and Na+. They are also permeable to Ca2+, though it acts to close them. There are 6 members of this family, which is divided into 2 subfamilies.
Hyperpolarization-activated cyclic nucleotide-gated channels
Temperature-gated channels: Members of the transient receptor potential ion channel superfamily, such as TRPV1 or TRPM8, are opened either by hot or cold temperatures.
Classification by type of ions
Chloride channels: This superfamily of channels consists of approximately 13 members. They include ClCs, CLICs, Bestrophins and CFTRs. These channels are non-selective for small anions; however chloride is the most abundant anion, and hence they are known as chloride channels.
Potassium channels
Voltage-gated potassium channels e.g., Kvs, Kirs etc.
Calcium-activated potassium channels e.g., BKCa or MaxiK, SK, etc.
Inward-rectifier potassium channels
Two-pore-domain potassium channels: This family of 15 members form what is known as leak channels, and they display Goldman-Hodgkin-Katz (open) rectification.
Sodium channels
Voltage-gated sodium channels (NaVs)
Epithelial sodium channels (ENaCs)
Calcium channels (CaVs)
Phosphate channels: To date, only one phosphate channel, Xenotropic and polytropic retrovirus receptor 1 (XPR1), has been identified in animals. It is a pyrophosphate-gated channel.
Proton channels
Voltage-gated proton channels
Non-selective cation channels: These non-selectively allow many types of cations, mainly Na+, K+ and Ca2+, through the channel.
Most transient receptor potential channels
Classification by cellular localization
Ion channels are also classified according to their subcellular localization. The plasma membrane accounts for around 2% of the total membrane in the cell, whereas intracellular organelles contain 98% of the cell's membrane. The major intracellular compartments are endoplasmic reticulum, Golgi apparatus, and mitochondria. On the basis of localization, ion channels are classified as:
Plasma membrane channels
Examples: Voltage-gated potassium channels (Kv), Sodium channels (Nav), Calcium channels (Cav) and Chloride channels (ClC)
Intracellular channels, which are further classified into different organelles
Endoplasmic reticulum channels: RyR, SERCA, ORAi
Mitochondrial channels: mPTP, KATP, BK, IK, CLIC5, Kv7.4 at the inner membrane and VDAC and CLIC4 as outer membrane channels.
Other classifications
Some ion channels are classified by the duration of their response to stimuli:
Transient receptor potential channels: This group of channels, normally referred to simply as TRP channels, is named after their role in Drosophila visual phototransduction. This family, containing at least 28 members, is diverse in its mechanisms of activation. Some TRP channels remain constitutively open, while others are gated by voltage, intracellular Ca2+, pH, redox state, osmolarity, and mechanical stretch. These channels also vary according to the ion(s) they pass, some being selective for Ca2+ while others are less selective cation channels. This family is subdivided into 6 subfamilies based on homology: canonical TRP (TRPC), vanilloid receptors (TRPV), melastatin (TRPM), polycystins (TRPP), mucolipins (TRPML), and ankyrin transmembrane protein 1 (TRPA).
Detailed structure
Channels differ with respect to the ion they let pass (for example, Na+, K+, Cl−), the ways in which they may be regulated, the number of subunits of which they are composed and other aspects of structure. Channels belonging to the largest class, which includes the voltage-gated channels that underlie the nerve impulse, consist of four or sometimes five subunits with six transmembrane helices each. On activation, these helices move about and open the pore. Two of these six helices are separated by a loop that lines the pore and is the primary determinant of ion selectivity and conductance in this channel class and some others.
The existence and mechanism for ion selectivity was first postulated in the late 1960s by Bertil Hille and Clay Armstrong. The idea of the ionic selectivity for potassium channels was that the carbonyl oxygens of the protein backbones of the "selectivity filter" (named by Bertil Hille) could efficiently replace the water molecules that normally shield potassium ions, but that sodium ions were smaller and cannot be completely dehydrated to allow such shielding, and therefore could not pass through. This mechanism was finally confirmed when the first structure of an ion channel was elucidated. A bacterial potassium channel KcsA, consisting of just the selectivity filter, "P" loop, and two transmembrane helices was used as a model to study the permeability and the selectivity of ion channels in the Mackinnon lab. The determination of the molecular structure of KcsA by Roderick MacKinnon using X-ray crystallography won a share of the 2003 Nobel Prize in Chemistry.
Because of their small size and the difficulty of crystallizing integral membrane proteins for X-ray analysis, it is only very recently that scientists have been able to directly examine what channels "look like." Particularly in cases where the crystallography required removing channels from their membranes with detergent, many researchers regard images that have been obtained as tentative. An example is the long-awaited crystal structure of a voltage-gated potassium channel, which was reported in May 2003. One inevitable ambiguity about these structures relates to the strong evidence that channels change conformation as they operate (they open and close, for example), such that the structure in the crystal could represent any one of these operational states. Most of what researchers have deduced about channel operation so far they have established through electrophysiology, biochemistry, gene sequence comparison and mutagenesis.
Channels can have single (CLICs) to multiple transmembrane (K channels, P2X receptors, Na channels) domains which span plasma membrane to form pores. Pore can determine the selectivity of the channel. Gate can be formed either inside or outside the pore region.
Pharmacology
Chemical substances can modulate the activity of ion channels, for example by blocking or activating them.
Ion channel blockers
A variety of ion channel blockers (inorganic and organic molecules) can modulate ion channel activity and conductance.
Some commonly used blockers include:
Tetrodotoxin (TTX), used by puffer fish and some types of newts for defense. It blocks sodium channels.
Saxitoxin is produced by a dinoflagellate also known as "red tide". It blocks voltage-dependent sodium channels.
Conotoxin is used by cone snails to hunt prey.
Lidocaine and novocaine belong to a class of local anesthetics which block sodium ion channels.
Dendrotoxin is produced by mamba snakes, and blocks potassium channels.
Iberiotoxin is produced by the Hottentotta tamulus (Eastern Indian scorpion) and blocks potassium channels.
Heteropodatoxin is produced by Heteropoda venatoria (brown huntsman spider or laya) and blocks potassium channels.
Ion channel activators
Several compounds are known to promote the opening or activation of specific ion channels. These are classified by the channel on which they act:
Calcium channel openers, such as Bay K8644
Chloride channel openers, such as phenanthroline
Potassium channel openers, such as minoxidil
Sodium channel openers, such as DDT
Diseases
There are a number of disorders which disrupt normal functioning of ion channels and have disastrous consequences for the organism. Genetic and autoimmune disorders of ion channels and their modifiers are known as channelopathies. See :Category:Channelopathies for a full list.
Shaker gene mutations cause a defect in the voltage gated ion channels, slowing down the repolarization of the cell.
Equine hyperkalaemic periodic paralysis as well as human hyperkalaemic periodic paralysis (HyperPP) are caused by a defect in voltage-dependent sodium channels.
Paramyotonia congenita (PC) and potassium-aggravated myotonias (PAM)
Generalized epilepsy with febrile seizures plus (GEFS+)
Episodic ataxia (EA), characterized by sporadic bouts of severe discoordination with or without myokymia, and can be provoked by stress, startle, or heavy exertion such as exercise.
Familial hemiplegic migraine (FHM)
Spinocerebellar ataxia type 13
Long QT syndrome is a ventricular arrhythmia syndrome caused by mutations in one or more of presently ten different genes, most of which are potassium channels and all of which affect cardiac repolarization.
Brugada syndrome is another ventricular arrhythmia caused by voltage-gated sodium channel gene mutations.
Polymicrogyria is a developmental brain malformation caused by voltage-gated sodium channel and NMDA receptor gene mutations.
Cystic fibrosis is caused by mutations in the CFTR gene, which is a chloride channel.
Mucolipidosis type IV is caused by mutations in the gene encoding the TRPML1 channel
Mutations in and overexpression of ion channels are important events in cancer cells. In Glioblastoma multiforme, upregulation of gBK potassium channels and ClC-3 chloride channels enables glioblastoma cells to migrate within the brain, which may lead to the diffuse growth patterns of these tumors.
History
The fundamental properties of currents mediated by ion channels were analyzed by the British biophysicists Alan Hodgkin and Andrew Huxley as part of their Nobel Prize-winning research on the action potential, published in 1952. They built on the work of other physiologists, such as Cole and Baker's research into voltage-gated membrane pores from 1941. The existence of ion channels was confirmed in the 1970s by Bernard Katz and Ricardo Miledi using noise analysis. It was then shown more directly with an electrical recording technique known as the "patch clamp", which led to a Nobel Prize to Erwin Neher and Bert Sakmann, the technique's inventors. Hundreds if not thousands of researchers continue to pursue a more detailed understanding of how these proteins work. In recent years the development of automated patch clamp devices helped to increase significantly the throughput in ion channel screening.
The Nobel Prize in Chemistry for 2003 was awarded to Roderick MacKinnon for his studies on the physico-chemical properties of ion channel structure and function, including x-ray crystallographic structure studies.
Culture
Roderick MacKinnon commissioned Birth of an Idea, a tall sculpture based on the KcsA potassium channel. The artwork contains a wire object representing the channel's interior with a blown glass object representing the main cavity of the channel structure.
Ion Channels and Stochastic Processes
The behavior of ion channels can be usefully modeled using mathematics and probability. Stochastic processes are mathematical models of systems and phenomena that appear to vary in a random manner. A very simple example is flipping a coin; each flip has an equal chance to be heads or tails, the chances are not influenced by the outcome of past flips, and we can say that p(heads) = 0.5 and p(tails) = 0.5.
A particularly relevant form of stochastic processes in the study of ion channels is Markov chains. In a Markov chain, there are multiple states, each of which has given chances to transition to different states over a particular period of time. Ion channels undergo state transitions (e.g. open, closed, inactive) that behave like Markov chains. Markov chain analysis can be used to make conclusions regarding the nature of a given ion channel, including the likely number of open and closed states. We can also use Markov chain analysis to produce models that accurately simulate the insertion of ion channels into cell membranes.
Markov chains can also be used in combination with the stochastic matrix to determine the stable distribution matrix by solving the equation PX = X, where P is the stochastic matrix and X is the stable distribution matrix. This stable distribution matrix tells us the relative frequencies of each state after a long time, which in the context of ion channels can be the frequencies of the open, closed, and inactive states for an ion channel. Note that Markov chain assumptions apply, including that (1) all transition probabilities for each state sum to one, (2) the probability model applies to all possible states, and (3) the transition probabilities are constant over time. Therefore, Markov chains have limited applicability in some situations.
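A minimal sketch of this calculation is given below; the three-state transition probabilities are invented for illustration and are not measured values for any real channel:

```python
import numpy as np

# Column-stochastic transition matrix P: entry P[i, j] is the probability of
# moving to state i given the channel is currently in state j.
# States: 0 = open, 1 = closed, 2 = inactive (illustrative numbers only).
P = np.array([
    [0.80, 0.15, 0.05],
    [0.15, 0.80, 0.15],
    [0.05, 0.05, 0.80],
])

# The stable distribution X satisfies P X = X, i.e. it is the eigenvector of P
# with eigenvalue 1, normalised so that its entries sum to one.
eigenvalues, eigenvectors = np.linalg.eig(P)
i = np.argmin(np.abs(eigenvalues - 1.0))
X = np.real(eigenvectors[:, i])
X = X / X.sum()

print(dict(zip(["open", "closed", "inactive"], np.round(X, 3))))
```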
There are a variety of other stochastic processes that are utilized in the study of ion channels, but are too complex to relate here and can be examined more closely elsewhere.
See also
Alpha helix
Babycurus toxin 1
Ion channel family as defined in Pfam and InterPro
Ki Database
Lipid bilayer ion channels
Magnesium transport
Neurotoxin
Passive transport
Synthetic ion channels
Transmembrane receptor
References
External links
Cell communication
Electrophysiology
Integral membrane proteins
Neurochemistry
Protein families | Ion channel | [
"Chemistry",
"Biology"
] | 5,437 | [
"Cell communication",
"Protein classification",
"Cellular processes",
"Biochemistry",
"Protein families",
"Neurochemistry",
"Ion channels"
] |
15,343 | https://en.wikipedia.org/wiki/Intron | An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region, i.e., a region inside a gene. The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts. The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons.
Introns are found in the genes of most eukaryotes and many eukaryotic viruses and they can be located in both protein-coding genes and genes that function as RNA (noncoding genes). There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns (see below). Introns are rare in Bacteria and Archaea (prokaryotes).
Discovery and etymology
Introns were first discovered in protein-coding genes of adenovirus, and were subsequently identified in genes encoding transfer RNA and ribosomal RNA genes. Introns are now known to occur within a wide variety of genes throughout organisms, bacteria, and viruses within all of the biological kingdoms.
The fact that genes were split or interrupted by introns was discovered independently in 1977 by Phillip Allen Sharp and Richard J. Roberts, for which they shared the Nobel Prize in Physiology or Medicine in 1993, though credit was excluded for the researchers and collaborators in their labs that did the experiments resulting in the discovery, Susan Berget and Louise Chow. The term intron was introduced by American biochemist Walter Gilbert:
"The notion of the cistron [i.e., gene] ... must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons." (Gilbert 1978)
The term intron also refers to intracistron, i.e., an additional piece of DNA that arises within a cistron.
Although introns are sometimes called intervening sequences, the term "intervening sequence" can refer to any of several families of internal nucleic acid sequences that are not present in the final gene product, including inteins, untranslated regions (UTR), and nucleotides removed by RNA editing, in addition to introns.
Distribution
The frequency of introns within different genomes is observed to vary widely across the spectrum of biological organisms. For example, introns are extremely common within the nuclear genome of jawed vertebrates (e.g. humans, mice, and pufferfish (fugu)), where protein-coding genes almost always contain multiple introns, while introns are rare within the nuclear genes of some eukaryotic microorganisms, for example baker's/brewer's yeast (Saccharomyces cerevisiae). In contrast, the mitochondrial genomes of vertebrates are entirely devoid of introns, while those of eukaryotic microorganisms may contain many introns.
A particularly extreme case is the Drosophila dhc7 gene containing a ≥3.6 megabase (Mb) intron, which takes roughly three days to transcribe. On the other extreme, a 2015 study suggests that the shortest known metazoan intron length is 30 base pairs (bp) belonging to the human MST1L gene. The shortest known introns belong to the heterotrich ciliates, such as Stentor coeruleus, in which most (> 95%) introns are 15 or 16 bp long.
Classification
Splicing of all intron-containing RNA molecules is superficially similar, as described above. However, different types of introns were identified through the examination of intron structure by DNA sequence analysis, together with genetic and biochemical analysis of RNA splicing reactions. At least four distinct classes of introns have been identified:
Introns in nuclear protein-coding genes that are removed by spliceosomes (spliceosomal introns)
Introns in nuclear and archaeal transfer RNA genes that are removed by proteins (tRNA introns)
Self-splicing group I introns that are removed by RNA catalysis
Self-splicing group II introns that are removed by RNA catalysis
Group III introns are proposed to be a fifth family, but little is known about the biochemical apparatus that mediates their splicing. They appear to be related to group II introns, and possibly to spliceosomal introns.
Spliceosomal introns
Nuclear pre-mRNA introns (spliceosomal introns) are characterized by specific intron sequences located at the boundaries between introns and exons. These sequences are recognized by spliceosomal RNA molecules when the splicing reactions are initiated. In addition, they contain a branch point, a particular nucleotide sequence near the 3' end of the intron that becomes covalently linked to the 5' end of the intron during the splicing process, generating a branched intron. Apart from these three short conserved elements, nuclear pre-mRNA intron sequences are highly variable. Nuclear pre-mRNA introns are often much longer than their surrounding exons.
tRNA introns
Transfer RNA introns that depend upon proteins for removal occur at a specific location within the anticodon loop of unspliced tRNA precursors, and are removed by a tRNA splicing endonuclease. The exons are then linked together by a second protein, the tRNA splicing ligase. Note that self-splicing introns are also sometimes found within tRNA genes.
Group I and group II introns
Group I and group II introns are found in genes encoding proteins (messenger RNA), transfer RNA and ribosomal RNA in a very wide range of living organisms. Following transcription into RNA, group I and group II introns also make extensive internal interactions that allow them to fold into a specific, complex three-dimensional architecture. These complex architectures allow some group I and group II introns to be self-splicing, that is, the intron-containing RNA molecule can rearrange its own covalent structure so as to precisely remove the intron and link the exons together in the correct order. In some cases, particular intron-binding proteins are involved in splicing, acting in such a way that they assist the intron in folding into the three-dimensional structure that is necessary for self-splicing activity. Group I and group II introns are distinguished by different sets of internal conserved sequences and folded structures, and by the fact that splicing of RNA molecules containing group II introns generates branched introns (like those of spliceosomal RNAs), while group I introns use a non-encoded guanosine nucleotide (typically GTP) to initiate splicing, adding it on to the 5'-end of the excised intron.
On the accuracy of splicing
The spliceosome is a very complex structure containing up to one hundred proteins and five different RNAs. The substrate of the reaction is a long RNA molecule and the transesterification reactions catalyzed by the spliceosome require the bringing together of sites that may be thousands of nucleotides apart. All biochemical reactions are associated with known error rates and the more complicated the reaction the higher the error rate. Therefore, it is not surprising that the splicing reaction catalyzed by the spliceosome has a significant error rate even though there are spliceosome accessory factors that suppress the accidental cleavage of cryptic splice sites.
Under ideal circumstances, the splicing reaction is likely to be 99.999% accurate (error rate of 10⁻⁵) and the correct exons will be joined and the correct intron will be deleted. However, these ideal conditions require very close matches to the best splice site sequences and the absence of any competing cryptic splice site sequences within the introns and those conditions are rarely met in large eukaryotic genes that may cover more than 40 kilobase pairs. Recent studies have shown that the actual error rate can be considerably higher than 10⁻⁵ and may be as high as 2% or 3% errors (error rate of 2–3 × 10⁻²) per gene. Additional studies suggest that the error rate is no less than 0.1% per intron. This relatively high level of splicing errors explains why most splice variants are rapidly degraded by nonsense-mediated decay.
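To make the per-gene consequence of a per-intron error rate concrete, the following hedged calculation uses an assumed 0.1% error rate per intron and an assumed gene with 8 introns (both chosen only to match the figures quoted above):

```python
# Assume an error rate of 0.1% per intron, as suggested above, and a gene
# with 8 introns (roughly the human average); both figures are illustrative.
p_error_per_intron = 0.001
introns_per_gene = 8

# Probability that at least one intron in the gene is mis-spliced in a given transcript.
p_at_least_one_error = 1 - (1 - p_error_per_intron) ** introns_per_gene
print(f"{p_at_least_one_error:.4f}")   # ~0.008, i.e. roughly 0.8% of transcripts
```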
The presence of sloppy binding sites within genes causes splicing errors and it may seem strange that these sites haven't been eliminated by natural selection. The argument for their persistence is similar to the argument for junk DNA.
Although mutations which create or disrupt binding sites may be slightly deleterious, the large number of possible such mutations makes it inevitable that some will reach fixation in a population. This is particularly relevant in species, such as humans, with relatively small long-term effective population sizes. It is plausible, then, that the human genome carries a substantial load of suboptimal sequences which cause the generation of aberrant transcript isoforms. In this study, we present direct evidence that this is indeed the case.
While the catalytic reaction may be accurate enough for effective processing most of the time, the overall error rate may be partly limited by the fidelity of transcription because transcription errors will introduce mutations that create cryptic splice sites. In addition, the transcription error rate of 10⁻⁵–10⁻⁶ is high enough that one in every 25,000 transcribed exons will have an incorporation error in one of the splice sites leading to a skipped intron or a skipped exon. Almost all multi-exon genes will produce incorrectly spliced transcripts but the frequency of this background noise will depend on the size of the genes, the number of introns, and the quality of the splice site sequences.
In some cases, splice variants will be produced by mutations in the gene (DNA). These can be SNP polymorphisms that create a cryptic splice site or mutate a functional site. They can also be somatic cell mutations that affect splicing in a particular tissue or a cell line. When the mutant allele is in a heterozygous state this will result in production of two abundant splice variants; one functional and one non-functional. In the homozygous state the mutant alleles may cause a genetic disease such as the hemophilia found in descendants of Queen Victoria where a mutation in one of the introns in a blood clotting factor gene creates a cryptic 3' splice site resulting in aberrant splicing. A significant fraction of human deaths by disease may be caused by mutations that interfere with normal splicing; mostly by creating cryptic splice sites.
Incorrectly spliced transcripts can easily be detected and their sequences entered into the online databases. They are usually described as "alternatively spliced" transcripts, which can be confusing because the term does not distinguish between real, biologically relevant, alternative splicing and processing noise due to splicing errors. One of the central issues in the field of alternative splicing is working out the differences between these two possibilities. Many scientists have argued that the null hypothesis should be splicing noise, putting the burden of proof on those who claim biologically relevant alternative splicing. According to those scientists, the claim of function must be accompanied by convincing evidence that multiple functional products are produced from the same gene.
Biological functions and evolution
While introns do not encode protein products, they are integral to gene expression regulation. Some introns themselves encode functional RNAs through further processing after splicing to generate noncoding RNA molecules. Alternative splicing is widely used to generate multiple proteins from a single gene. Furthermore, some introns play essential roles in a wide range of gene expression regulatory functions such as nonsense-mediated decay and mRNA export.
After the initial discovery of introns in protein-coding genes of the eukaryotic nucleus, there was significant debate as to whether introns in modern-day organisms were inherited from a common ancient ancestor (termed the introns-early hypothesis), or whether they appeared in genes rather recently in the evolutionary process (termed the introns-late hypothesis). Another theory is that the spliceosome and the intron-exon structure of genes is a relic of the RNA world (the introns-first hypothesis). There is still considerable debate about the extent to which of these hypotheses is most correct but the popular consensus at the moment is that following the formation of the first eukaryotic cell, group II introns from the bacterial endosymbiont invaded the host genome. In the beginning these self-splicing introns excised themselves from the mRNA precursor but over time some of them lost that ability and their excision had to be aided in trans by other group II introns. Eventually a number of specific trans-acting introns evolved and these became the precursors to the snRNAs of the spliceosome. The efficiency of splicing was improved by association with stabilizing proteins to form the primitive spliceosome.
Early studies of genomic DNA sequences from a wide range of organisms show that the intron-exon structure of homologous genes in different organisms can vary widely. More recent studies of entire eukaryotic genomes have now shown that the lengths and density (introns/gene) of introns varies considerably between related species. For example, while the human genome contains an average of 8.4 introns/gene (139,418 in the genome), the unicellular fungus Encephalitozoon cuniculi contains only 0.0075 introns/gene (15 introns in the genome). Since eukaryotes arose from a common ancestor (common descent), there must have been extensive gain or loss of introns during evolutionary time. This process is thought to be subject to selection, with a tendency towards intron gain in larger species due to their smaller population sizes, and the converse in smaller (particularly unicellular) species. Biological factors also influence which genes in a genome lose or accumulate introns.
Alternative splicing of exons within a gene after intron excision acts to introduce greater variability of protein sequences translated from a single gene, allowing multiple related proteins to be generated from a single gene and a single precursor mRNA transcript. The control of alternative RNA splicing is performed by a complex network of signaling molecules that respond to a wide range of intracellular and extracellular signals.
Introns contain several short sequences that are important for efficient splicing, such as acceptor and donor sites at either end of the intron as well as a branch point site, which are required for proper splicing by the spliceosome. Some introns are known to enhance the expression of the gene that they are contained in by a process known as intron-mediated enhancement (IME).
Actively transcribed regions of DNA frequently form R-loops that are vulnerable to DNA damage. In highly expressed yeast genes, introns inhibit R-loop formation and the occurrence of DNA damage. Genome-wide analysis in both yeast and humans revealed that intron-containing genes have decreased R-loop levels and decreased DNA damage compared to intronless genes of similar expression. Insertion of an intron within an R-loop prone gene can also suppress R-loop formation and recombination. Bonnet et al. (2017) speculated that the function of introns in maintaining genetic stability may explain their evolutionary maintenance at certain locations, particularly in highly expressed genes.
Starvation adaptation
The physical presence of introns promotes cellular resistance to starvation via intron-enhanced repression of ribosomal protein genes of nutrient-sensing pathways.
As mobile genetic elements
Introns may be lost or gained over evolutionary time, as shown by many comparative studies of orthologous genes. Subsequent analyses have identified thousands of examples of intron loss and gain events, and it has been proposed that the emergence of eukaryotes, or the initial stages of eukaryotic evolution, involved an intron invasion. Two definitive mechanisms of intron loss, reverse transcriptase-mediated intron loss (RTMIL) and genomic deletions, have been identified, and are known to occur. The definitive mechanisms of intron gain, however, remain elusive and controversial. At least seven mechanisms of intron gain have been reported thus far: intron transposition, transposon insertion, tandem genomic duplication, intron transfer, intron gain during double-strand break repair (DSBR), insertion of a group II intron, and intronization. In theory it should be easiest to deduce the origin of recently gained introns due to the lack of host-induced mutations, yet even introns gained recently did not arise from any of the aforementioned mechanisms. These findings thus raise the question of whether or not the proposed mechanisms of intron gain fail to describe the mechanistic origin of many novel introns because they are not accurate mechanisms of intron gain, or if there are other, yet to be discovered, processes generating novel introns.
In intron transposition, the most commonly purported intron gain mechanism, a spliced intron is thought to reverse splice into either its own mRNA or another mRNA at a previously intron-less position. This intron-containing mRNA is then reverse transcribed and the resulting intron-containing cDNA may then cause intron gain via complete or partial recombination with its original genomic locus.
Transposon insertions have been shown to generate thousands of new introns across diverse eukaryotic species. Transposon insertions sometimes result in the duplication of the short sequence at the insertion site (a target site duplication) on each side of the transposon. Such an insertion could intronize the transposon without disrupting the coding sequence when a transposon inserts into the sequence AGGT or encodes the splice sites within the transposon sequence. Where intron-generating transposons do not create target site duplications, elements include both splice sites GT (5') and AG (3'), thereby splicing precisely without affecting the protein-coding sequence. It is not yet understood why these elements are spliced, whether by chance, or by some preferential action by the transposon.
In tandem genomic duplication, due to the similarity between consensus donor and acceptor splice sites, which both closely resemble AGGT, the tandem genomic duplication of an exonic segment harboring an AGGT sequence generates two potential splice sites. When recognized by the spliceosome, the sequence between the original and duplicated AGGT will be spliced, resulting in the creation of an intron without alteration of the coding sequence of the gene. Double-stranded break repair via non-homologous end joining was recently identified as a source of intron gain when researchers identified short direct repeats flanking 43% of gained introns in Daphnia. These numbers must be compared to the number of conserved introns flanked by repeats in other organisms, though, for statistical relevance. For group II intron insertion, the retrohoming of a group II intron into a nuclear gene was proposed to cause recent spliceosomal intron gain.
Intron transfer has been hypothesized to result in intron gain when a paralog or pseudogene gains an intron and then transfers this intron via recombination to an intron-absent location in its sister paralog. Intronization is the process by which mutations create novel introns from formerly exonic sequence. Thus, unlike other proposed mechanisms of intron gain, this mechanism does not require the insertion or generation of DNA to create a novel intron.
The only hypothesized mechanism of recent intron gain lacking any direct evidence is that of group II intron insertion, which, when demonstrated in vivo, abolishes gene expression. Group II introns are therefore likely the presumed ancestors of spliceosomal introns, acting as site-specific retroelements, and are no longer responsible for intron gain. Tandem genomic duplication is the only proposed mechanism with supporting in vivo experimental evidence: a short intragenic tandem duplication can insert a novel intron into a protein-coding gene, leaving the corresponding peptide sequence unchanged. This mechanism also has extensive indirect evidence lending support to the idea that tandem genomic duplication is a prevalent mechanism for intron gain. The testing of other proposed mechanisms in vivo, particularly intron gain during DSBR, intron transfer, and intronization, is possible, although these mechanisms must be demonstrated in vivo to solidify them as actual mechanisms of intron gain. Further genomic analyses, especially when executed at the population level, may then quantify the relative contribution of each mechanism, possibly identifying species-specific biases that may shed light on varied rates of intron gain amongst different species.
See also
Structure:
Exon
mRNA
Eukaryotic chromosome fine structure
Small t intron
Splicing:
Alternative splicing
Exitron
Minor spliceosome
Outron
Function
MicroRNA
Others:
Exon shuffling
Intein
Noncoding DNA
Noncoding RNA
Selfish DNA
Twintron
Exon-intron database
References
External links
A search engine for exon/intron sequences defined by NCBI
Bruce Alberts, Alexander Johnson, Julian Lewis, Martin Raff, Keith Roberts, and Peter Walter, Molecular Biology of the Cell, 2007. Fourth edition is available online through the NCBI Bookshelf: link
Jeremy M Berg, John L Tymoczko, and Lubert Stryer, Biochemistry 5th edition, 2002, W H Freeman. Available online through the NCBI Bookshelf: link
Intron finding tool for plant genomic sequences
Exon-intron graphic maker
Gene expression
DNA
Spliceosome
RNA splicing
Non-coding DNA | Intron | [
"Chemistry",
"Biology"
] | 4,566 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
15,412 | https://en.wikipedia.org/wiki/Infrared%20spectroscopy | Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm−1. Units of IR wavelength are commonly given in micrometers (formerly called "microns"), symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below.
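Because wavenumber and wavelength are reciprocally related, converting between the two units is a one-line calculation. A minimal Python sketch (the function names are illustrative, not part of any standard library):

```python
def wavenumber_to_wavelength_um(wavenumber_cm1):
    """Convert a wavenumber in cm^-1 to a wavelength in micrometres."""
    # 1 cm = 10,000 um, so lambda[um] = 10,000 / nu[cm^-1]
    return 10_000.0 / wavenumber_cm1

def wavelength_um_to_wavenumber(wavelength_um):
    """Convert a wavelength in micrometres to a wavenumber in cm^-1."""
    return 10_000.0 / wavelength_um

# Example: the mid-infrared range of roughly 4,000-400 cm^-1
print(wavenumber_to_wavelength_um(4000))  # 2.5 um
print(wavenumber_to_wavelength_um(400))   # 25.0 um
```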
The infrared portion of the electromagnetic spectrum is usually divided into three regions; the near-, mid- and far- infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14,000–4,000 cm−1 (0.7–2.5 μm wavelength) can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4,000–400 cm−1 (2.5–25 μm) is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1,000 μm) has low energy and may be used for rotational spectroscopy and low frequency vibrations. The region from 2–130 cm−1, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties.
Uses and applications
Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers.
It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be used in determining the blood alcohol content of a suspected drunk driver.
IR spectroscopy has been used in identification of pigments in paintings and other art objects such as illuminated manuscripts.
Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Instruments can routinely record many spectra per second in situ, providing insights into reaction mechanism (e.g., detection of intermediates) and reaction progress.
Infrared spectroscopy is utilized in the field of semiconductor microelectronics: for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc.
Another important application of infrared spectroscopy is in the food industry to measure the concentration of various compounds in different food products.
Infrared spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs. These devices detect hydrocarbon gas leaks in the transportation of natural gas and crude oil.
Infrared spectroscopy is an important analysis method in the recycling process of household waste plastics, and a convenient stand-off method to sort plastic of different polymers (PET, HDPE, ...).
Other developments include a miniature IR spectrometer that is linked to a cloud-based database and suitable for personal everyday use, and NIR-spectroscopic chips that can be embedded in smartphones and various gadgets.
In catalysis research it is a very useful tool for characterizing the catalyst, as well as for detecting intermediates.
Infrared spectroscopy coupled with machine learning and artificial intelligence also has potential for rapid, accurate and non-invasive sensing of bacteria. The complex chemical composition of bacteria, including nucleic acids, proteins, carbohydrates and fatty acids, results in high-dimensional datasets where the essential features are effectively hidden under the total spectrum. Extraction of the essential features therefore requires advanced statistical methods such as machine learning and deep-neural networks. The potential of this technique for bacteria classification has been demonstrated for differentiation at the genus, species and serotype taxonomic levels, and it has also shown promise for antimicrobial susceptibility testing, which is important for many clinical settings where faster susceptibility testing would decrease unnecessary blind-treatment with broad-spectrum antibiotics. The main limitation of this technique for clinical applications is the high sensitivity to technical equipment and sample preparation techniques, which makes it difficult to construct large-scale databases. Attempts in this direction have however been made by Bruker with the IR Biotyper for food microbiology.
Theory
Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling.
In particular, in the Born–Oppenheimer and harmonic approximations (i.e. when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighbourhood of the equilibrium molecular geometry), the resonant frequencies are associated with the normal modes of vibration corresponding to the molecular electronic ground state potential energy surface. Thus, it depends on both the nature of the bonds and the mass of the atoms that are involved. Using the Schrödinger equation leads to the selection rule for the vibrational quantum number v in a system undergoing vibrational changes: Δv = ±1.
The compression and extension of a bond may be likened to the behaviour of a spring, but real molecules are hardly perfectly elastic in nature. If a bond between atoms is stretched, for instance, there comes a point at which the bond breaks and the molecule dissociates into atoms. Thus real molecules deviate from perfect harmonic motion and their molecular vibrational motion is anharmonic. An empirical expression that fits the energy curve of a diatomic molecule undergoing anharmonic extension and compression to a good approximation was derived by P.M. Morse, and is called the Morse function. Using the Schrödinger equation leads to the selection rule for a system undergoing anharmonic vibrational changes: Δv = ±1, ±2, ±3, …
Number of vibrational modes
In order for a vibrational mode in a sample to be "IR active", it must be associated with changes in the molecular dipole moment. A permanent dipole is not necessary, as the rule requires only a change in dipole moment.
A molecule can vibrate in many ways, and each way is called a vibrational mode. For molecules with N number of atoms, geometrically linear molecules have 3N – 5 degrees of vibrational modes, whereas nonlinear molecules have 3N – 6 degrees of vibrational modes (also called vibrational degrees of freedom). As examples linear carbon dioxide (CO2) has 3 × 3 – 5 = 4, while non-linear water (H2O), has only 3 × 3 – 6 = 3.
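The 3N − 5 / 3N − 6 counting rule translates directly into code. A minimal Python sketch (the function name is an arbitrary choice):

```python
def vibrational_modes(n_atoms, linear):
    """Number of vibrational modes for a molecule with n_atoms atoms."""
    # Linear molecules: 3N - 5; nonlinear molecules: 3N - 6.
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=True))   # CO2 -> 4
print(vibrational_modes(3, linear=False))  # H2O -> 3
```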
Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. carbon monoxide (CO), absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra.
The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these vibrations involve only the CH2 portion: two stretching modes (ν): symmetric (νs) and antisymmetric (νas); and four bending modes: scissoring (δ), rocking (ρ), wagging (ω) and twisting (τ). Structures that do not have the two additional X groups attached have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H atoms represent simple rotation of the whole molecule rather than vibrations within it. In case of more complex molecules, out-of-plane (γ) vibrational modes can also be present.
Depictions of these modes do not usually represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, is much smaller than the movements of the lighter H atoms.
The simplest and most important or fundamental IR bands arise from the excitations of normal modes, the simplest distortions of the molecule, from the ground state with vibrational quantum number v = 0 to the first excited state with vibrational quantum number v = 1. In some cases, overtone bands are observed. An overtone band arises from the absorption of a photon leading to a direct transition from the ground state to the second excited vibrational state (v = 2). Such a band appears at approximately twice the energy of the fundamental band for the same normal mode. Some excitations, so-called combination modes, involve simultaneous excitation of more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in energy and intensity of the bands etc.
Practical IR spectroscopy
The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR matches the vibrational frequency of a bond or collection of bonds, absorption occurs. Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be achieved by scanning the wavelength range using a monochromator. Alternatively, the entire wavelength range is measured using a Fourier transform instrument and then a transmittance or absorbance spectrum is extracted.
This technique is commonly used for analyzing samples with covalent bonds. The number of bands roughly correlates with symmetry and molecular complexity.
A variety of devices are used to hold the sample in the path of the IR beam. These devices are selected on the basis of their transparency in the region of interest and their resilience toward the sample.
Sample preparation
Gas samples
Gaseous samples require a sample cell with a long pathlength to compensate for the diluteness. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube with a length of 5 to 10 cm equipped with infrared-transparent windows at both ends of the tube can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell in which the infrared light is guided with mirrors to travel through the gas. White's cells are available with optical pathlengths starting from 0.5 m up to a hundred meters.
Liquid samples
Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used).
The plates are transparent to the infrared light and do not introduce any lines onto the spectra. With increasing technology in computer filtering and manipulation of the results, samples in solution can now be measured accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment).
Solid samples
Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually mineral oil Nujol). A thin film of the mull is applied onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care is important to ensure that the film is not too thick otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 μm) film from a solid sample. This is one of the most important ways of analysing failed plastic products for example because the integrity of the solid is preserved.
In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample cup which is inserted into the photoacoustic cell which is then sealed for the measurement. The sample may be one solid piece, powder or basically in any form for the measurement. For example, a piece of rock can be inserted into the sample cup and the spectrum measured from it.
A useful way of analyzing solid samples without the need for cutting samples uses ATR or attenuated total reflectance spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials.
Comparing to a reference
It is typical to record spectrum of both the sample and a "reference". This step controls for a number of variables, e.g. infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the instrument influence.
The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately).
A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results. The Standard addition method can be used to statistically cancel these errors.
Nevertheless, among the different absorption-based techniques used for gaseous species detection, cavity ring-down spectroscopy (CRDS) can be used as a calibration-free method. Because CRDS is based on measurements of photon lifetimes (and not the laser intensity), it requires no calibration and no comparison with a reference.
Some instruments also automatically identify the substance being measured from a store of thousands of reference spectra held in storage.
FTIR
Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an "interferogram", represents light output as a function of mirror position. A data-processing technique called Fourier transform turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference.
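The interferogram-to-spectrum step is, at its core, a Fourier transform. The following toy NumPy sketch is not a model of a real instrument; the simulated two-component interferogram and all numbers are invented purely to show how the transform recovers the wavenumbers present in the signal:

```python
import numpy as np

# Simulated interferogram: two cosine components vs. optical path difference (cm)
n_points = 4096
d_opd = 1.0 / n_points                   # sampling step in cm
opd = np.arange(n_points) * d_opd        # optical path difference axis
interferogram = np.cos(2 * np.pi * 1000 * opd) + 0.5 * np.cos(2 * np.pi * 1600 * opd)

# Fourier transform; the frequency axis comes out directly in cm^-1
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n_points, d=d_opd)

# The two strongest peaks appear at 1000 and 1600 cm^-1
top_two = np.sort(wavenumbers[np.argsort(spectrum)[-2:]])
print(top_two)   # [1000. 1600.]
```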
An alternate method for acquiring spectra is the "dispersive" or "scanning monochromator" method. In this approach, the sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method. One reason that FTIR is favored is called "Fellgett's advantage" or the "multiplex advantage": The information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called "Jacquinot's Throughput Advantage": A dispersive measurement requires detecting much lower light levels than an FTIR measurement. There are other advantages, as well as some disadvantages, but virtually all modern infrared spectrometers are FTIR instruments.
Infrared microscopy
Various forms of infrared microscopy exist. These include IR versions of sub-diffraction microscopy such as IR NSOM, photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR).
Other methods in molecular vibrational spectroscopy
Infrared spectroscopy is not the only method of studying molecular vibrational spectra. Raman spectroscopy involves an inelastic scattering process in which only part of the energy of an incident photon is absorbed by the molecule, and the remaining part is scattered and detected. The energy difference corresponds to absorbed vibrational energy.
The selection rules for infrared and for Raman spectroscopy are different at least for some molecular symmetries, so that the two methods are complementary in that they observe vibrations of different symmetries.
Another method is electron energy loss spectroscopy (EELS), in which the energy absorbed is provided by an inelastically scattered electron rather than a photon. This method is useful for studying vibrations of molecules adsorbed on a solid surface.
Recently, high-resolution EELS (HREELS) has emerged as a technique for performing vibrational spectroscopy in a transmission electron microscope (TEM). In combination with the high spatial resolution of the TEM, unprecedented experiments have been performed, such as nano-scale temperature measurements, mapping of isotopically labeled molecules, mapping of phonon modes in position- and momentum-space, vibrational surface and bulk mode mapping on nanocubes, and investigations of polariton modes in van der Waals crystals.
Analysis of vibrational modes that are IR-inactive but appear in inelastic neutron scattering is also possible at high spatial resolution using EELS. Although the spatial resolution of HREELS is very high, the bands are extremely broad compared to other techniques.
Computational infrared microscopy
By using computer simulations and normal mode analysis it is possible to calculate theoretical frequencies of molecules.
Absorption bands
IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). The positions of these bands are summarized in correlation tables as shown below.
Regions
A spectrograph is often interpreted as having two regions.
functional group region
In the functional region there are one to a few troughs per functional group.
fingerprint region
In the fingerprint region there are many troughs which form an intricate pattern which can be used like a fingerprint to determine the compound.
Badger's rule
For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength of a bond, relying on the empirical guideline called Badger's rule. Originally published by Richard McLean Badger in 1934, this rule states that the strength of a bond (in terms of force constant) correlates with the bond length. That is, increase in bond strength leads to corresponding bond shortening and vice versa.
Isotope effects
The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for ν(16O–16O) and ν(18O–18O), respectively.
By considering the O–O bond as a spring, the frequency of absorbance can be calculated as a wavenumber [= frequency/(speed of light)]: ν̃ = (1/(2πc)) √(k/μ),
where k is the spring constant for the bond, c is the speed of light, and μ is the reduced mass of the A–B system:
μ = mA·mB/(mA + mB) (mA is the mass of atom A).
The reduced masses for 16O–16O and 18O–18O can be approximated as 8 and 9 respectively. Thus ν̃(16O–16O)/ν̃(18O–18O) = √(9/8) ≈ 1.06, in agreement with the observed ratio 832/788 ≈ 1.06.
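A minimal Python sketch of this isotope-shift estimate, using the same integer-rounded masses (the function name is an arbitrary choice):

```python
from math import sqrt

def reduced_mass(m_a, m_b):
    """Reduced mass of a diatomic A-B oscillator."""
    return m_a * m_b / (m_a + m_b)

mu_16 = reduced_mass(16, 16)   # 8.0
mu_18 = reduced_mass(18, 18)   # 9.0

# For the same force constant k, the wavenumber scales as 1/sqrt(mu),
# so the predicted 18O-18O stretch is the 16O-16O value scaled by sqrt(mu16/mu18).
nu_16 = 832.0                              # observed nu(16O-16O), cm^-1
nu_18_predicted = nu_16 * sqrt(mu_16 / mu_18)
print(round(nu_18_predicted))              # ~784 cm^-1, close to the observed 788 cm^-1
```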
The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to 29Si, the lifetime increases to 19 ps. In similar manner, when the silicon atom is changed to 30Si, the lifetime becomes 27 ps.
Two-dimensional IR
Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced. The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers.
Nonlinear two-dimensional infrared spectroscopy is the infrared version of correlation spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly popular for fundamental research.
As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. The cross peaks in the first are related to the scalar coupling, while in the latter they are related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for determination of the secondary structure content of proteins.
See also
Applied spectroscopy
Astrochemistry
Atomic and molecular astrophysics
Atomic force microscopy based infrared spectroscopy (AFM-IR)
Cosmochemistry
Far-infrared astronomy
Forensic chemistry
Forensic engineering
Forensic polymer engineering
Infrared astronomy
Infrared microscopy
Infrared multiphoton dissociation
Infrared photodissociation spectroscopy
Infrared spectroscopy correlation table
Infrared spectroscopy of metal carbonyls
Near-infrared spectroscopy
Nuclear resonance vibrational spectroscopy
Photothermal microspectroscopy
Raman spectroscopy
Rotational-vibrational spectroscopy
Time-resolved spectroscopy
Vibrational spectroscopy of linear molecules
References
External links
Infrared spectroscopy for organic chemists
Organic compounds spectrum database | Infrared spectroscopy | [
"Physics",
"Chemistry"
] | 5,159 | [
"Infrared spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
15,417 | https://en.wikipedia.org/wiki/Intermolecular%20force | An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics.
The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling.
Attractive intermolecular forces are categorized into the following types:
Hydrogen bonding
Ion–dipole forces and ion–induced dipole force
Cation–π, σ–π and π–π bonding
Van der Waals forces – Keesom force, Debye force, and London dispersion force
Cation–cation bonding
Salt bridge (protein and supramolecular)
Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity, pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential.
In the broadest sense, it can be understood as such interactions between any particles (molecules, atoms, ions and molecular ions) in which the formation of chemical (that is, ionic, covalent or metallic) bonds does not occur. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme or a molecule with a catalyst, but several such weak interactions with the required spatial configuration of the active center of the enzyme lead to significant restructuring that changes the energy state of the molecules or substrate, which ultimately leads to the breaking of some and the formation of other covalent chemical bonds. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme, therefore the importance of these interactions is especially great in biochemistry and molecular biology, and is the basis of enzymology.)
Hydrogen bonding
A hydrogen bond is an extreme form of dipole-dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine, and another nearby electronegative atom. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in H bonding is termed the acceptor molecule. The number of active pairs is equal to the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has.
Water molecules, for example, can engage in four such hydrogen bonds each: the oxygen atom's two lone pairs each interact with a hydrogen on a neighbouring molecule, and both of the molecule's own hydrogen atoms interact with neighbouring oxygens. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
Salt bridge
The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge.
It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional, and in the solid state they usually show contacts determined only by the van der Waals radii of the ions.
Inorganic as well as organic ions display, in water at moderate ionic strength I, similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, e.g., a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
Dipole–dipole and similar interactions
Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3).
Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole.
The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces".
Ion–dipole and ion–induced dipole forces
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding.
An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water, which gives rise to the hydration enthalpy. The polar water molecules surround the ions in solution, and the energy released during this process is known as the hydration enthalpy. This interaction is of great importance in explaining the stability of various ions (like Cu2+) in water.
An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule.
Van der Waals forces
The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies.
Keesom force (permanent dipole – permanent dipole)
The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent.
They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle-averaged interaction energy depends on the electric dipole moments d of the interacting molecules, the permittivity of free space ε0, the dielectric constant εr of the surrounding material, the temperature T, the Boltzmann constant kB, and the distance r between the molecules.
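For orientation, one standard textbook form of the angle-averaged Keesom energy between two dipoles d₁ and d₂ (the exact numerical prefactor depends on the convention used) is

$$\langle V \rangle = -\frac{d_1^{2}\, d_2^{2}}{3\,(4\pi\varepsilon_0 \varepsilon_r)^{2}\, k_\mathrm{B} T\, r^{6}},$$

which reproduces the inverse sixth power of distance and the inverse temperature dependence described above.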
Debye force (permanent dipoles–induced dipoles)
The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions.
The induced dipole forces arise from induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and a multipole induced (by the former di/multi-pole) on another. This interaction is called the Debye force, named after Peter J. W. Debye.
One example of an induction interaction between a permanent dipole and an induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle-averaged interaction energy depends on the dipole moment d of the polar molecule, the polarizability α of the non-polar one, and the distance r between them.
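Likewise, a commonly quoted textbook form of the angle-averaged Debye (induction) energy between a permanent dipole d and a molecule of polarizability α (again, prefactor conventions differ between sources) is

$$\langle V \rangle = -\frac{d^{2}\,\alpha}{(4\pi\varepsilon_0 \varepsilon_r)^{2}\, r^{6}}.$$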
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force.
London dispersion force (fluctuating dipole–induced dipole interaction)
The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom-atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range.
Relative strength of forces
Any such comparison of strengths is approximate: the actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But this is not so for big moving systems like enzyme molecules interacting with substrate molecules. Here the numerous intramolecular bonds (most often hydrogen bonds) form an active intermediate state in which the intermolecular bonds cause some of the covalent bonds to be broken while others are formed, in this way allowing the thousands of enzymatic reactions so important for living organisms to proceed.
Effect on the behavior of gases
Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor).
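A small numerical sketch of the Lennard-Jones potential mentioned above, showing the repulsive wall at short range and the shallow attractive well at longer range (the ε and σ values are arbitrary reduced units, not data for any particular substance):

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r = np.array([0.9, 1.0, 2 ** (1 / 6), 1.5, 2.5])
print(lennard_jones(r))
# Strongly positive (repulsive) below sigma, minimum of -epsilon at r = 2^(1/6)*sigma,
# and a weak attraction that decays towards zero at large r.
```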
In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature.
When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.
Quantum mechanical theories
Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful quantum-chemical methods for visualizing these kinds of intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system. London dispersion forces play a large role in such analyses.
Concerning electron density topology, methods based on electron density gradients have recently emerged, notably the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology.
See also
Ionic bonding
Salt bridges
Coomber's relationship
Force field (chemistry)
Hydrophobic effect
Intramolecular force
Molecular solid
Polymer
Quantum chemistry computer programs
van der Waals force
Comparison of software for molecular mechanics modeling
Non-covalent interactions
Solvation
References
Intermolecular forces
Chemical bonding
Johannes Diderik van der Waals | Intermolecular force | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,353 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Condensed matter physics",
"nan",
"Chemical bonding"
] |
15,445 | https://en.wikipedia.org/wiki/Entropy%20%28information%20theory%29 | In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable X, which takes values in the set 𝒳 and is distributed according to p : 𝒳 → [0, 1], the entropy is
H(X) := −Σ_{x∈𝒳} p(x) log p(x),
where Σ denotes the sum over the variable's possible values. The choice of base for log, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable.
The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.
Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition generalizes the above.
Introduction
The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the occurrence of a very low probability event.
The information content, also called the surprisal or self-information, of an event E is a function that increases as the probability p(E) of an event decreases. When p(E) is close to 1, the surprisal of the event is low, but if p(E) is close to 0, the surprisal of the event is high. This relationship is described by the function
log(1/p(E)),
where log is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies a specific set of conditions defined in the section Characterization below.
Hence, we can define the information, or surprisal, of an event E by
I(E) = −log₂(p(E)),
or equivalently,
I(E) = log₂(1/p(E)).
Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (p = 1/6) than each outcome of a coin toss (p = 1/2).
Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit. (Similarly, one trit with equiprobable values contains log₂ 3 (about 1.58496) bits of information because it can have one of three values.) The minimum surprise is when p = 0 or p = 1, when the event outcome is known ahead of time, and the entropy is zero bits. When the entropy is zero bits, this is sometimes referred to as unity, where there is no uncertainty at all – no freedom of choice – no information. Other values of p give entropies between zero and one bits.
Example
Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect.
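A minimal Python sketch of the calculation in this example, using the probabilities and code-word lengths given above:

```python
from math import log2

# Letter probabilities and code-word lengths from the example above
probs = {"A": 0.70, "B": 0.26, "C": 0.02, "D": 0.02}
code_lengths = {"A": 1, "B": 2, "C": 3, "D": 3}

entropy = -sum(p * log2(p) for p in probs.values())
avg_code_length = sum(probs[s] * code_lengths[s] for s in probs)

print(f"entropy = {entropy:.3f} bits/letter")                  # about 1.09 bits
print(f"average code length = {avg_code_length:.2f} bits/letter")  # 1.34 bits, below 2
```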
English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.
Definition
Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X, which takes values in the set 𝒳 and is distributed according to p : 𝒳 → [0, 1] such that p(x) := P[X = x]:
Η(X) = E[I(X)] = E[−log p(X)].
Here E is the expected value operator, and I(X) is the information content of X.
I(X) is itself a random variable.
The entropy can explicitly be written as:
Η(X) = −Σ_{x∈𝒳} p(x) log_b p(x),
where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.
In the case of p(x) = 0 for some x ∈ 𝒳, the value of the corresponding summand 0 log_b(0) is taken to be 0, which is consistent with the limit:
lim_{p→0⁺} p log(p) = 0.
One may also define the conditional entropy of two variables X and Y taking values from sets 𝒳 and 𝒴 respectively, as:
Η(X|Y) = −Σ_{x,y∈𝒳×𝒴} p_{X,Y}(x, y) log( p_{X,Y}(x, y) / p_Y(y) ),
where p_{X,Y}(x, y) := P[X = x, Y = y] and p_Y(y) = P[Y = y]. This quantity should be understood as the remaining randomness in the random variable X given the random variable Y.
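A small numerical sketch of these quantities for a made-up joint distribution of two binary variables (the probabilities are arbitrary illustrative numbers; the conditional entropy is obtained here via the chain rule H(X|Y) = H(X,Y) − H(Y), which is equivalent to the sum above):

```python
import numpy as np

# Hypothetical joint distribution p(x, y) for binary X (rows) and Y (columns)
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_y = p_xy.sum(axis=0)                 # marginal distribution of Y

def h(probabilities):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = probabilities[probabilities > 0]
    return -np.sum(p * np.log2(p))

h_joint = h(p_xy.ravel())              # H(X, Y)
h_y = h(p_y)                           # H(Y)
h_x_given_y = h_joint - h_y            # H(X|Y) = H(X,Y) - H(Y)

print(round(h_x_given_y, 3))           # 0.722 bits, less than H(X) = 1 bit
```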
Measure theory
Entropy can be formally defined in the language of measure theory as follows: Let $(X, \Sigma, \mu)$ be a probability space. Let $A \in \Sigma$ be an event. The surprisal of $A$ is
$$\sigma_\mu(A) = -\ln \mu(A).$$
The expected surprisal of $A$ is
$$h_\mu(A) = \mu(A)\, \sigma_\mu(A).$$
A $\mu$-almost partition is a set family $P \subseteq \mathcal{P}(X)$ such that $\mu(\cup P) = 1$ and $\mu(A \cap B) = 0$ for all distinct $A, B \in P$. (This is a relaxation of the usual conditions for a partition.) The entropy of $P$ is
$$\mathrm{H}_\mu(P) = \sum_{A \in P} h_\mu(A).$$
Let $M$ be a sigma-algebra on $X$. The entropy of $M$ is
$$\mathrm{H}_\mu(M) = \sup_{P \subseteq M} \mathrm{H}_\mu(P).$$
Finally, the entropy of the probability space is $\mathrm{H}_\mu(\Sigma)$, that is, the entropy with respect to $\mu$ of the sigma-algebra $\Sigma$ of all measurable subsets of $X$.
Example
Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because
$$\mathrm{H}(X) = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 \text{ bit}.$$
However, if we know the coin is not fair, but comes up heads or tails with probabilities $p$ and $q$, where $p \neq q$, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if $p = 0.7$, then
$$\mathrm{H}(X) = -0.7\log_2(0.7) - 0.3\log_2(0.3) \approx 0.88 \text{ bits}.$$
Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.
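The dependence of the coin's entropy on its bias can be traced with a short Python sketch (the function name and the sampled probabilities are illustrative assumptions):

```python
import math

def binary_entropy(p):
    """Entropy in bits of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):          # outcome certain: no uncertainty, zero entropy
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.3, 0.5, 0.7, 1.0):
    print(f"p = {p:.1f}   H = {binary_entropy(p):.4f} bits")
# Maximum (1 bit) at p = 0.5; zero at p = 0 or p = 1.
```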
Characterization
To understand the meaning of $-\sum_i p_i \log(p_i)$, first define an information function $\operatorname{I}$ in terms of an event $i$ with probability $p_i$. The amount of information acquired due to the observation of event $i$ follows from Shannon's solution of the fundamental properties of information:
$\operatorname{I}(p)$ is monotonically decreasing in $p$: an increase in the probability of an event decreases the information from an observed event, and vice versa.
$\operatorname{I}(1) = 0$: events that always occur do not communicate information.
$\operatorname{I}(p_1 \cdot p_2) = \operatorname{I}(p_1) + \operatorname{I}(p_2)$: the information learned from independent events is the sum of the information learned from each event.
Given two independent events, if the first event can yield one of $n$ equiprobable outcomes and another has one of $m$ equiprobable outcomes then there are $mn$ equiprobable outcomes of the joint event. This means that if $\log_2(n)$ bits are needed to encode the first value and $\log_2(m)$ to encode the second, one needs $\log_2(mn) = \log_2(m) + \log_2(n)$ to encode both.
Shannon discovered that a suitable choice of $\operatorname{I}$ is given by:
$$\operatorname{I}(p) = \log\!\left(\tfrac{1}{p}\right) = -\log(p).$$
In fact, the only possible values of $\operatorname{I}$ are $\operatorname{I}(u) = k \log u$ for $k < 0$. Additionally, choosing a value for $k$ is equivalent to choosing a value $x > 1$ for $k = -1/\log x$, so that $x$ corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties.
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof
|-
|Let be the information function which one assumes to be twice continuously differentiable, one has:
This differential equation leads to the solution for some . Property 2 gives . Property 1 and 2 give that for all , so that .
|}
The different units of information (bits for the binary logarithm $\log_2$, nats for the natural logarithm $\ln$, bans for the decimal logarithm $\log_{10}$ and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides $\log_2(2) = 1$ bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, $n$ tosses provide $n$ bits of information, which is approximately $0.693\,n$ nats or $0.301\,n$ decimal digits.
The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
Alternative characterization
Another characterization of entropy uses the following properties. We denote $p_i = \Pr(X = x_i)$ and $\mathrm{H}_n(p_1, \ldots, p_n) = \mathrm{H}(X)$.
Continuity: $\mathrm{H}$ should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
Symmetry: $\mathrm{H}$ should be unchanged if the outcomes $x_i$ are re-ordered. That is, $\mathrm{H}_n(p_1, p_2, \ldots, p_n) = \mathrm{H}_n(p_{\sigma(1)}, p_{\sigma(2)}, \ldots, p_{\sigma(n)})$ for any permutation $\sigma$ of $\{1, \ldots, n\}$.
Maximum: $\mathrm{H}_n$ should be maximal if all the outcomes are equally likely i.e. $\mathrm{H}_n(p_1, \ldots, p_n) \leq \mathrm{H}_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)$.
Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes i.e.
$$\mathrm{H}_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) < \mathrm{H}_{n+1}\!\left(\tfrac{1}{n+1}, \ldots, \tfrac{1}{n+1}\right).$$
Additivity: given an ensemble of uniformly distributed elements that are partitioned into boxes (sub-systems) with elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.
Discussion
The rule of additivity has the following consequences: for positive integers $b_i$ where $b_1 + \cdots + b_k = n$,
$$\mathrm{H}_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) = \mathrm{H}_k\!\left(\tfrac{b_1}{n}, \ldots, \tfrac{b_k}{n}\right) + \sum_{i=1}^{k} \tfrac{b_i}{n}\, \mathrm{H}_{b_i}\!\left(\tfrac{1}{b_i}, \ldots, \tfrac{1}{b_i}\right).$$
Choosing $k = n$, $b_1 = \cdots = b_n = 1$, this implies that the entropy of a certain outcome is zero: $\mathrm{H}_1(1) = 0$. This implies that the efficiency of a source set with $n$ symbols can be defined simply as being equal to its $n$-ary entropy. See also Redundancy (information theory).
The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property, $P(A \mid B) \cdot P(B) = P(A \cap B)$. Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit this simple relation in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, $\log_2$ lends itself to practical interpretations.
Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on.
Alternative characterization via additivity and subadditivity
Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties:
Subadditivity: $\mathrm{H}(X, Y) \leq \mathrm{H}(X) + \mathrm{H}(Y)$ for jointly distributed random variables $X, Y$.
Additivity: $\mathrm{H}(X, Y) = \mathrm{H}(X) + \mathrm{H}(Y)$ when the random variables $X, Y$ are independent.
Expansibility: $\mathrm{H}_{n+1}(p_1, \ldots, p_n, 0) = \mathrm{H}_n(p_1, \ldots, p_n)$, i.e., adding an outcome with probability zero does not change the entropy.
Symmetry: $\mathrm{H}_n(p_1, \ldots, p_n)$ is invariant under permutation of $p_1, \ldots, p_n$.
Small for small probabilities: $\lim_{q \to 0^+} \mathrm{H}_2(1 - q, q) = 0$.
Discussion
It was shown that any function satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector .
It is worth noting that if we drop the "small for small probabilities" property, then must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.
Further properties
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable $X$:
Adding or removing an event with probability zero does not contribute to the entropy:
$\mathrm{H}_{n+1}(p_1, \ldots, p_n, 0) = \mathrm{H}_n(p_1, \ldots, p_n)$.
The maximal entropy of an event with n different outcomes is $\log_b(n)$: it is attained by the uniform probability distribution. That is, uncertainty is maximal when all possible events are equiprobable:
$\mathrm{H}(X) = \mathrm{H}_n(p_1, \ldots, p_n) \leq \log_b(n)$.
The entropy or the amount of information revealed by evaluating $(X, Y)$ (that is, evaluating $X$ and $Y$ simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of $Y$, then revealing the value of $X$ given that you know the value of $Y$. This may be written as:
$$\mathrm{H}(X, Y) = \mathrm{H}(X \mid Y) + \mathrm{H}(Y) = \mathrm{H}(Y \mid X) + \mathrm{H}(X).$$
If $Y = f(X)$ where $f$ is a function, then $\mathrm{H}(f(X) \mid X) = 0$. Applying the previous formula to $\mathrm{H}(X, f(X))$ yields
$$\mathrm{H}(X) + \mathrm{H}(f(X) \mid X) = \mathrm{H}(f(X)) + \mathrm{H}(X \mid f(X)),$$
so $\mathrm{H}(f(X)) \leq \mathrm{H}(X)$: the entropy of a variable can only decrease when the latter is passed through a function.
If $X$ and $Y$ are two independent random variables, then knowing the value of $Y$ doesn't influence our knowledge of the value of $X$ (since the two don't influence each other by independence):
$$\mathrm{H}(X \mid Y) = \mathrm{H}(X).$$
More generally, for any random variables $X$ and $Y$, we have
$\mathrm{H}(X \mid Y) \leq \mathrm{H}(X)$.
The entropy of two simultaneous events is no more than the sum of the entropies of each individual event i.e., $\mathrm{H}(X, Y) \leq \mathrm{H}(X) + \mathrm{H}(Y)$, with equality if and only if the two events are independent.
The entropy $\mathrm{H}(p)$ is concave in the probability mass function $p$, i.e.
$$\mathrm{H}(\lambda p_1 + (1 - \lambda) p_2) \geq \lambda\, \mathrm{H}(p_1) + (1 - \lambda)\, \mathrm{H}(p_2)$$
for all probability mass functions $p_1, p_2$ and $0 \leq \lambda \leq 1$.
Accordingly, the negative entropy (negentropy) function is convex, and its convex conjugate is LogSumExp.
Aspects
Relationship to thermodynamic entropy
The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.
In statistical thermodynamics the most general formula for the thermodynamic entropy $S$ of a thermodynamic system is the Gibbs entropy
$$S = -k_\text{B} \sum_i p_i \ln p_i,$$
where $k_\text{B}$ is the Boltzmann constant, and $p_i$ is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Ludwig Boltzmann (1872).
The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927:
$$S = -k_\text{B}\, \mathrm{Tr}(\rho \ln \rho),$$
where ρ is the density matrix of the quantum mechanical system and Tr is the trace.
At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant indicates, the changes in for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.
The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by his equation:
$$S = k_\text{B} \ln W,$$
where $S$ is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), $W$ is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and $k_\text{B}$ is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is $p_i = 1/W$. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently $k_\text{B}$ times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.
In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.
Data compression
Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.
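As a rough illustration of entropy rate estimation, the following Python sketch computes an order-0 (independent characters) estimate of bits per character from observed frequencies; the sample string is an arbitrary stand-in for English text, and because single-character frequencies ignore dependencies between characters, this estimate is an upper bound on the true entropy rate:

```python
from collections import Counter
from math import log2

def char_entropy(text):
    """Order-0 estimate of bits per character from observed character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 20
print(f"{char_entropy(sample):.2f} bits/char")   # roughly 4 bits/char for this toy sample
```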
If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.
A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.
The authors estimate humankind technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunications networks.
Entropy as a measure of diversity
Entropy is one of several ways to measure biodiversity and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of ${}^{1}\!D$, the true diversity index with parameter $q$ equal to 1. The Shannon index is related to the proportional abundances of types.
Entropy of a sequence
There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:
the self-information of an individual message or symbol taken from a given probability distribution (message or sequence seen as an individual event),
the joint entropy of the symbols forming the message or sequence (seen as a set of events),
the entropy rate of a stochastic process (message or sequence is seen as a succession of events).
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.
It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way.
If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are $N$ published books, and each book is only published once, the estimate of the probability of each book is $1/N$, and the entropy (in bits) is $-\log_2(1/N) = \log_2(N)$. As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.
The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately $\log_2(n)$. The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [$F(n) = F(n-1) + F(n-2)$ for $n = 3, 4, 5, \ldots$, with $F(1) = 1$, $F(2) = 1$] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
Limitations of entropy in cryptography
In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) $2^{127}$ guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.
Other problems may arise from non-uniform distributions used in cryptography. For example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
Data as a Markov process
A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is:
$$\mathrm{H}(\mathcal{S}) = -\sum_i p_i \log_2 p_i,$$
where $p_i$ is the probability of $i$. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:
$$\mathrm{H}(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \log_2 p_i(j),$$
where $i$ is a state (certain preceding characters) and $p_i(j)$ is the probability of $j$ given $i$ as the previous character.
For a second order Markov source, the entropy rate is
$$\mathrm{H}(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \sum_k p_{i,j}(k) \log_2 p_{i,j}(k).$$
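The first-order formula can be estimated empirically from a sample of text; the following Python sketch is a minimal illustration (the estimator and the sample strings are illustrative choices, and short samples give noisy estimates):

```python
from collections import Counter, defaultdict
from math import log2

def markov_entropy_rate(text):
    """First-order estimate: sum over states i of p_i * ( -sum_j p_i(j) log2 p_i(j) )."""
    pair_counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        pair_counts[a][b] += 1          # count transitions a -> b
    state_counts = Counter(text[:-1])
    total = len(text) - 1
    rate = 0.0
    for state, followers in pair_counts.items():
        p_state = state_counts[state] / total
        n = sum(followers.values())
        h_state = -sum((c / n) * log2(c / n) for c in followers.values())
        rate += p_state * h_state
    return rate

print(markov_entropy_rate("abababababababab"))   # ~0: the next character is determined
print(markov_entropy_rate("the quick brown fox jumps over the lazy dog " * 5))
```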
Efficiency (normalized entropy)
A source set with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:
$$\eta(X) = \frac{\mathrm{H}(X)}{\mathrm{H}_{\max}} = -\sum_{i=1}^{n} \frac{p(x_i) \log_b(p(x_i))}{\log_b(n)}.$$
Applying the basic properties of the logarithm, this quantity can also be expressed as:
$$\eta(X) = \sum_{i=1}^{n} \log_n\!\left(p(x_i)^{-p(x_i)}\right).$$
Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy $\log_b(n)$. Furthermore, the efficiency is indifferent to the choice of (positive) base $b$, as indicated by the insensitivity within the final logarithm above thereto.
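A minimal Python sketch of the efficiency computation (the function name and example distributions are illustrative; the second distribution is the one used in the data compression example above):

```python
from math import log2

def efficiency(probs):
    """Normalized entropy: H(X) divided by its maximum value log2(n) for n outcomes."""
    n = len(probs)
    h = -sum(p * log2(p) for p in probs if p > 0)
    return h / log2(n)

print(efficiency([0.25, 0.25, 0.25, 0.25]))   # 1.0 for the uniform distribution
print(efficiency([0.70, 0.26, 0.02, 0.02]))   # ~0.55 for the skewed distribution
```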
Entropy for continuous random variables
Differential entropy
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function $f(x)$ with finite or infinite support $\mathbb{X}$ on the real line is defined by analogy, using the above form of the entropy as an expectation:
$$\mathrm{h}[f] = \mathbb{E}[-\ln f(X)] = -\int_{\mathbb{X}} f(x) \ln f(x)\, dx.$$
This is the differential entropy (or continuous entropy). A precursor of the continuous entropy is the expression for the functional in the H-theorem of Boltzmann.
Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points.
To answer this question, a connection must be established between the two functions, in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the $n$ (finite or infinite) bins whose probabilities are denoted by $p_n$. As the continuous domain is generalized, the width must be made explicit.
To do this, start with a continuous function $f$ discretized into bins of size $\Delta$.
By the mean-value theorem there exists a value $x_i$ in each bin such that
$$f(x_i)\,\Delta = \int_{i\Delta}^{(i+1)\Delta} f(x)\, dx,$$
and thus the integral of the function $f$ can be approximated (in the Riemannian sense) by
$$\int_{-\infty}^{\infty} f(x)\, dx = \lim_{\Delta \to 0} \sum_{i=-\infty}^{\infty} f(x_i)\,\Delta,$$
where this limit and "bin size goes to zero" are equivalent.
We will denote
$$\mathrm{H}^{\Delta} := -\sum_{i=-\infty}^{\infty} f(x_i)\,\Delta \log\!\left(f(x_i)\,\Delta\right)$$
and expanding the logarithm, we have
$$\mathrm{H}^{\Delta} = -\sum_{i=-\infty}^{\infty} f(x_i)\,\Delta \log f(x_i) - \sum_{i=-\infty}^{\infty} f(x_i)\,\Delta \log \Delta.$$
As $\Delta \to 0$, we have
$$\sum_{i=-\infty}^{\infty} f(x_i)\,\Delta \to \int_{-\infty}^{\infty} f(x)\, dx = 1 \quad\text{and}\quad \sum_{i=-\infty}^{\infty} f(x_i)\,\Delta \log f(x_i) \to \int_{-\infty}^{\infty} f(x) \log f(x)\, dx.$$
Note that, as $\log \Delta \to -\infty$ for $\Delta \to 0$, this requires a special definition of the differential or continuous entropy:
$$h[f] = \lim_{\Delta \to 0}\left(\mathrm{H}^{\Delta} + \log \Delta\right) = -\int_{-\infty}^{\infty} f(x) \log f(x)\, dx,$$
which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for $\Delta \to 0$. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
Limiting density of discrete points
It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when is a dimensioned variable. will then have the units of . The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If is some "standard" value of (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:
and the result will be the same for any choice of units for . In fact, the limit of discrete entropy as would also include a term of , which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.
Relative entropy
Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure $m$ as follows. Assume that a probability distribution $p$ is absolutely continuous with respect to a measure $m$, i.e. is of the form $p(dx) = f(x)\, m(dx)$ for some non-negative $m$-integrable function $f$ with $m$-integral 1, then the relative entropy can be defined as
$$D_{\mathrm{KL}}(p \,\|\, m) = \int \log(f(x))\, p(dx) = \int f(x) \log(f(x))\, m(dx).$$
In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure is the counting measure, and the differential entropy, where the measure is the Lebesgue measure. If the measure is itself a probability distribution, the relative entropy is non-negative, and zero if as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure . The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure .
Use in number theory
Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem.
Intuitively, the idea behind the proof was that if there is low information, in terms of Shannon entropy, between consecutive random variables (here the random variable is defined using the Liouville function, a useful mathematical function for studying the distribution of primes), then the sum over an interval [n, n+H] could become arbitrarily large. For example, a sequence consisting only of +1's (one of the values the Liouville function can take) has trivially low entropy and its sum grows without bound. The key insight was showing that a reduction in entropy by non-negligible amounts as H expands, leading in turn to unbounded growth of a mathematical object over this random variable, is equivalent to the unbounded growth required by the Erdős discrepancy problem.
The proof is quite involved and brought together several breakthroughs: not only a novel use of Shannon entropy, but also the Liouville function together with averages of modulated multiplicative functions in short intervals. Proving it also broke the "parity barrier" for this specific problem.
While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.
Use in combinatorics
Entropy has become a useful quantity in combinatorics.
Loomis–Whitney inequality
A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset $A \subseteq \mathbb{Z}^d$, we have
$$|A|^{d-1} \leq \prod_{i=1}^{d} |P_i(A)|,$$
where $P_i$ is the orthogonal projection in the $i$th coordinate:
$$P_i(x_1, \ldots, x_d) = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d).$$
The proof follows as a simple corollary of Shearer's inequality: if $X_1, \ldots, X_d$ are random variables and $S_1, \ldots, S_n$ are subsets of $\{1, \ldots, d\}$ such that every integer between 1 and $d$ lies in exactly $r$ of these subsets, then
$$\mathrm{H}\big[(X_1, \ldots, X_d)\big] \leq \frac{1}{r} \sum_{i=1}^{n} \mathrm{H}\big[(X_j)_{j \in S_i}\big],$$
where $(X_j)_{j \in S_i}$ is the Cartesian product of random variables with indexes $j$ in $S_i$ (so the dimension of this vector is equal to the size of $S_i$).
We sketch how Loomis–Whitney follows from this: Indeed, let $X = (X_1, \ldots, X_d)$ be a uniformly distributed random variable with values in $A$ and so that each point in $A$ occurs with equal probability. Then (by the further properties of entropy mentioned above) $\mathrm{H}(X) = \log|A|$, where $|A|$ denotes the cardinality of $A$. Let $S_i = \{1, 2, \ldots, i-1, i+1, \ldots, d\}$. The range of $(X_j)_{j \in S_i}$ is contained in $P_i(A)$ and hence $\mathrm{H}\big[(X_j)_{j \in S_i}\big] \leq \log |P_i(A)|$. Now use this to bound the right side of Shearer's inequality and exponentiate the opposite sides of the resulting inequality you obtain.
Approximation to binomial coefficient
For integers $0 < k < n$ let $q = k/n$. Then
$$\frac{2^{n\mathrm{H}(q)}}{n+1} \leq \binom{n}{k} \leq 2^{n\mathrm{H}(q)},$$
where
$$\mathrm{H}(q) = -q \log_2(q) - (1-q) \log_2(1-q).$$
{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof (sketch)
|-
|Note that is one term of the expression
Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then,
since there are terms in the summation. Rearranging gives the lower bound.
|}
A nice interpretation of this is that the number of binary strings of length $n$ with exactly $k$ many 1's is approximately $2^{n\mathrm{H}(k/n)}$.
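The quality of this approximation can be checked numerically; the following Python sketch (with arbitrarily chosen $n$ and $k$) compares the exact value of $\log_2\binom{n}{k}$ with $n\,\mathrm{H}(k/n)$:

```python
from math import comb, log2

def binary_entropy(q):
    return 0.0 if q in (0, 1) else -q * log2(q) - (1 - q) * log2(1 - q)

n, k = 1000, 300
exact = log2(comb(n, k))             # log2 of the binomial coefficient
approx = n * binary_entropy(k / n)   # n * H(k/n)
print(exact, approx)   # the two agree to within log2(n + 1), about 10 bits here
```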
Use in machine learning
Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty.
Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees $IG(Y, X)$, which is equal to the difference between the entropy of $Y$ and the conditional entropy of $Y$ given $X$, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute $X$. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally.
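As a small illustration, the following Python sketch estimates the information gain of a candidate split from paired samples; the toy labels and attribute values are hypothetical, chosen so the attribute perfectly predicts the label:

```python
from collections import Counter
from math import log2

def entropy_of_labels(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """IG(Y, X) = H(Y) - H(Y | X), estimated from paired samples."""
    n = len(labels)
    groups = {}
    for label, value in zip(labels, attribute_values):
        groups.setdefault(value, []).append(label)
    h_conditional = sum(len(g) / n * entropy_of_labels(g) for g in groups.values())
    return entropy_of_labels(labels) - h_conditional

labels = ["yes", "yes", "no", "no"]
attr   = ["sunny", "sunny", "rainy", "rainy"]
print(information_gain(labels, attr))   # 1.0 bit: the attribute removes all uncertainty
```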
Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior.
Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy).
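A minimal sketch of cross entropy between a ground-truth distribution and a predicted one (the distributions are illustrative; real loss functions average this quantity over a dataset and usually use natural logarithms):

```python
from math import log2

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i log2 q_i; equals H(p) plus the KL divergence D(p || q)."""
    return -sum(pi * log2(qi) for pi, qi in zip(p, q) if pi > 0)

truth     = [1.0, 0.0, 0.0]     # one-hot ground truth
predicted = [0.7, 0.2, 0.1]     # model's predicted distribution
print(cross_entropy(truth, predicted))   # ~0.515 bits; zero only for a perfect prediction
```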
See also
Approximate entropy (ApEn)
Entropy (thermodynamics)
Cross entropy – is a measure of the average number of bits needed to identify an event from a set of possibilities between two probability distributions
Entropy (arrow of time)
Entropy encoding – a coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols.
Entropy estimation
Entropy power inequality
Fisher information
Graph entropy
Hamming distance
History of entropy
History of information theory
Information fluctuation complexity
Information geometry
Kolmogorov–Sinai entropy in dynamical systems
Levenshtein distance
Mutual information
Perplexity
Qualitative variation – other measures of statistical dispersion for nominal distributions
Quantum relative entropy – a measure of distinguishability between two quantum states.
Rényi entropy – a generalization of Shannon entropy; it is one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system.
Randomness
Sample entropy (SampEn)
Shannon index
Theil index
Typoglycemia
Notes
References
Further reading
Textbooks on information theory
Cover, T.M., Thomas, J.A. (2006), Elements of Information Theory – 2nd Ed., Wiley-Interscience,
MacKay, D.J.C. (2003), Information Theory, Inference and Learning Algorithms, Cambridge University Press,
Arndt, C. (2004), Information Measures: Information and its Description in Science and Engineering, Springer,
Gray, R. M. (2011), Entropy and Information Theory, Springer.
Shannon, C.E., Weaver, W. (1949) The Mathematical Theory of Communication, Univ of Illinois Press.
Stone, J. V. (2014), Chapter 1 of Information Theory: A Tutorial Introduction , University of Sheffield, England. .
External links
"Entropy" at Rosetta Code—repository of implementations of Shannon entropy in different programming languages.
Entropy an interdisciplinary journal on all aspects of the entropy concept. Open access.
Information theory
Statistical randomness
Complex systems theory
Data compression | Entropy (information theory) | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 8,192 | [
"Telecommunications engineering",
"Physical quantities",
"Applied mathematics",
"Entropy and information",
"Computer science",
"Entropy",
"Information theory",
"Dynamical systems"
] |
15,476 | https://en.wikipedia.org/wiki/Internet%20protocol%20suite | The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.
The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
History
Early research
Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a "Networking Working Group" which developed a host-host protocol, the Network Control Program (NCP). In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community, the International Network Working Group, which Cerf chaired, and researchers at Xerox PARC.
By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974 by Cerf, Yogen Dalal and Carl Sunshine.
Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not datagrams. Several versions were developed through the Internet Experiment Note series. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service. Advocates included Bob Metcalfe and Yogen Dalal at Xerox PARC; Danny Cohen, who needed it for his packet voice work; and Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 4, written in 1978, Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This end-to-end principle was pioneered by Louis Pouzin in the CYCLADES network, based on the ideas of Donald Davies. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke in 1999, the IP over Avian Carriers formal protocol specification was created and successfully tested two years later. 10 years later still, it was adapted for IPv6.
DARPA contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4) as the protocol that is still in use in the Internet, alongside its current successor, Internet Protocol version 6 (IPv6).
Early implementation
In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983.
A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called gateway, but the term was changed to avoid confusion with other types of gateways.
Adoption
In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR/NDRE and Peter Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET from NCP to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting.
IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, this despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.
Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote a ntcp multi-connection TCP which runs atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).
The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. For Windows 3.1, the dominant PC operating system among consumers in the first half of the 1990s, Peter Tattam's release of the Trumpet Winsock TCP/IP stack was key to bringing the Internet to home users. Trumpet Winsock allowed TCP/IP operations over a serial connection (SLIP or PPP). The typical home PC of the time had an external Hayes-compatible modem connected via an RS-232 port with an 8250 or 16550 UART which required this type of stack. Later, Microsoft would release their own TCP/IP add-on stack for Windows for Workgroups 3.11 and a native stack in Windows 95. These events helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.
Formal specification and standards
The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF).
The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specifications of the suite are RFC 1122 and 1123, which broadly outline four abstraction layers (as well as related protocols): the link layer, IP layer, transport layer, and application layer, along with support protocols. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
Key architectural principles
The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.
The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."
Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level.
An early pair of architectural documents, RFC 1122 and RFC 1123, titled Requirements for Internet Hosts, emphasizes architectural principles over layering. RFC 1122/23 are structured in sections referring to layers, but the documents refer to many other architectural principles, and do not emphasize layering. They loosely define a four-layer model, with the layers having names, not numbers, as follows:
The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client–server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination.
The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of internet layer datagrams to next-neighbor hosts.
Link layer
The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels.
The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model.
The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model.
Internet layer
Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet.
The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.
The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
Transport layer
The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers).
For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services.
Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability.
TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream:
data arrives in-order
data has minimal error (i.e., correctness)
duplicate data is discarded
lost or discarded packets are resent
includes traffic congestion control
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP).
Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC).
The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Reliability is addressed through error detection using a checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media.
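To make the contrast with TCP concrete, the following Python sketch exchanges a single UDP datagram over the loopback interface; the address and port number (127.0.0.1:50007) are arbitrary assumptions for illustration, and there is no handshake, ordering, or retransmission:

```python
import socket

# Bind a receiving UDP socket first so the datagram has somewhere to land.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))

# Send one self-contained datagram: fire-and-forget, best-effort delivery.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", ("127.0.0.1", 50007))

data, addr = receiver.recvfrom(4096)   # read one datagram (blocks until it arrives)
print(data, addr)

sender.close()
receiver.close()
```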
The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications.
The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer.
QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets, it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC.
Application layer
The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer.
The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model.
Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.
Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic, rather they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload.
Layering evolution and representations in the literature
The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar, but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools.
The following table shows various such networking models. The number of layers varies between three and seven.
Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.
Comparison of TCP/IP and OSI layering
The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.
Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.
The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful".
For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.
Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, routing protocols are included in the application layer. Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers.
IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer.
Implementations
The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exists that is capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD) and is often accompanied by an integrated IPSec security layer.
See also
BBN Report 1822, an early layered network model
Fast Local Internet Protocol
List of automation protocols
List of information technology initialisms
List of IP protocol numbers
Lists of network protocols
List of TCP and UDP port numbers
Notes
References
Bibliography
External links
Internet History – Pages on Robert Kahn, Vinton Cerf, and TCP/IP (reviewed by Cerf and Kahn).
The Ultimate Guide to TCP/IP
The TCP/IP Guide – A comprehensive look at the protocols and the procedure and processes involved
History of the Internet
Network architecture
Reference models | Internet protocol suite | [
"Engineering"
] | 5,809 | [
"Network architecture",
"Computer networks engineering"
] |
15,942 | https://en.wikipedia.org/wiki/John%20von%20Neumann | John von Neumann ( ; ; December 28, 1903 – February 8, 1957) was a Hungarian and American mathematician, physicist, computer scientist and engineer. Von Neumann had perhaps the widest coverage of any mathematician of his time, integrating pure and applied sciences and making major contributions to many fields, including mathematics, physics, economics, computing, and statistics. He was a pioneer in building the mathematical framework of quantum physics, in the development of functional analysis, and in game theory, introducing or codifying concepts including cellular automata, the universal constructor and the digital computer. His analysis of the structure of self-replication preceded the discovery of the structure of DNA.
During World War II, von Neumann worked on the Manhattan Project. He developed the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. Before and after the war, he consulted for many organizations including the Office of Scientific Research and Development, the Army's Ballistic Research Laboratory, the Armed Forces Special Weapons Project and the Oak Ridge National Laboratory. At the peak of his influence in the 1950s, he chaired a number of Defense Department committees including the Strategic Missile Evaluation Committee and the ICBM Scientific Advisory Committee. He was also a member of the influential Atomic Energy Commission in charge of all atomic energy development in the country. He played a key role alongside Bernard Schriever and Trevor Gardner in the design and development of the United States' first ICBM programs. At that time he was considered the nation's foremost expert on nuclear weaponry and the leading defense scientist at the U.S. Department of Defense.
Von Neumann's contributions and intellectual ability drew praise from colleagues in physics, mathematics, and beyond. Accolades he received range from the Medal of Freedom to a crater on the Moon named in his honor.
Life and education
Family background
Von Neumann was born in Budapest, Kingdom of Hungary (then part of the Austro-Hungarian Empire), on December 28, 1903, to a wealthy, non-observant Jewish family. His birth name was Neumann János Lajos. In Hungarian, the family name comes first, and his given names are equivalent to John Louis in English.
He was the eldest of three brothers; his two younger siblings were Mihály (Michael) and Miklós (Nicholas). His father Neumann Miksa (Max von Neumann) was a banker and held a doctorate in law. He had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were born in Ond (now part of Szerencs), Zemplén County, northern Hungary. John's mother was Kann Margit (Margaret Kann); her parents were Kann Jákab and Meisels Katalin of the Meisels family. Three generations of the Kann family lived in spacious apartments above the Kann-Heller offices in Budapest; von Neumann's family occupied an 18-room apartment on the top floor.
On February 20, 1913, Emperor Franz Joseph elevated John's father to the Hungarian nobility for his service to the Austro-Hungarian Empire. The Neumann family thus acquired the hereditary appellation Margittai, meaning "of Margitta" (today Marghita, Romania). The family had no connection with the town; the appellation was chosen in reference to Margaret, as was their chosen coat of arms depicting three marguerites. Neumann János became margittai Neumann János (John Neumann de Margitta), which he later changed to the German Johann von Neumann.
Child prodigy
Von Neumann was a child prodigy who at six years old could divide two eight-digit numbers in his head and converse in Ancient Greek. He, his brothers and his cousins were instructed by governesses. Von Neumann's father believed that knowledge of languages other than their native Hungarian was essential, so the children were tutored in English, French, German and Italian. By age eight, von Neumann was familiar with differential and integral calculus, and by twelve he had read Borel's La Théorie des Fonctions. He was also interested in history, reading Wilhelm Oncken's 46-volume world history series (General History in Monographs). One of the rooms in the apartment was converted into a library and reading room.
Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1914. Eugene Wigner was a year ahead of von Neumann at the school and soon became his friend.
Although von Neumann's father insisted that he attend school at the grade level appropriate to his age, he agreed to hire private tutors to give von Neumann advanced instruction. At 15, he began to study advanced calculus under the analyst Gábor Szegő. On their first meeting, Szegő was so astounded by von Neumann's mathematical talent and speed that, as recalled by his wife, he came back home with tears in his eyes. By 19, von Neumann had published two major mathematical papers, the second of which gave the modern definition of ordinal numbers, which superseded Georg Cantor's definition. At the conclusion of his education at the gymnasium, he applied for and won the Eötvös Prize, a national award for mathematics.
University studies
According to his friend Theodore von Kármán, von Neumann's father wanted John to follow him into industry, and asked von Kármán to persuade his son not to take mathematics. Von Neumann and his father decided that the best career path was chemical engineering. This was not something that von Neumann had much knowledge of, so it was arranged for him to take a two-year, non-degree course in chemistry at the University of Berlin, after which he sat for the entrance exam to ETH Zurich, which he passed in September 1923. Simultaneously von Neumann entered Pázmány Péter University, then known as the University of Budapest, as a Ph.D. candidate in mathematics. For his thesis, he produced an axiomatization of Cantor's set theory. In 1926, he graduated as a chemical engineer from ETH Zurich and simultaneously passed his final examinations summa cum laude for his Ph.D. in mathematics (with minors in experimental physics and chemistry) at the University of Budapest.
He then went to the University of Göttingen on a grant from the Rockefeller Foundation to study mathematics under David Hilbert. Hermann Weyl remembers how in the winter of 1926–1927 von Neumann, Emmy Noether, and he would walk through "the cold, wet, rain-wet streets of Göttingen" after class discussing hypercomplex number systems and their representations.
Career and private life
Von Neumann's habilitation was completed on December 13, 1927, and he began to give lectures as a Privatdozent at the University of Berlin in 1928. He was the youngest person elected Privatdozent in the university's history. He began writing nearly one major mathematics paper per month. In 1929, he briefly became a Privatdozent at the University of Hamburg, where the prospects of becoming a tenured professor were better, then in October of that year moved to Princeton University as a visiting lecturer in mathematical physics.
Von Neumann was baptized a Catholic in 1930. Shortly afterward, he married Marietta Kövesi, who had studied economics at Budapest University. Von Neumann and Marietta had a daughter, Marina, born in 1935; she would become a professor. The couple divorced on November 2, 1937. On November 17, 1938, von Neumann married Klára Dán.
In 1933, von Neumann accepted a tenured professorship at the Institute for Advanced Study in New Jersey, when that institution's plan to appoint Hermann Weyl appeared to have failed. His mother, brothers and in-laws followed von Neumann to the United States in 1939. Von Neumann anglicized his name to John, keeping the German-aristocratic surname von Neumann. Von Neumann became a naturalized U.S. citizen in 1937, and immediately tried to become a lieutenant in the U.S. Army's Officers Reserve Corps. He passed the exams but was rejected because of his age.
Klára and John von Neumann were socially active within the local academic community. His white clapboard house on Westcott Road was one of Princeton's largest private residences. He always wore formal suits. He enjoyed Yiddish and "off-color" humor. In Princeton, he received complaints for playing extremely loud German march music; von Neumann did some of his best work in noisy, chaotic environments. According to Churchill Eisenhart, von Neumann could attend parties until the early hours of the morning and then deliver a lecture at 8:30.
He was known for always being happy to provide others of all ability levels with scientific and mathematical advice. Wigner wrote that he perhaps supervised more work (in a casual sense) than any other modern mathematician. His daughter wrote that he was very concerned with his legacy in two aspects: his life and the durability of his intellectual contributions to the world.
Many considered him an excellent chairman of committees, deferring rather easily on personal or organizational matters but pressing on technical ones. Herbert York described the many "Von Neumann Committees" that he participated in as "remarkable in style as well as output". The way the committees von Neumann chaired worked directly and intimately with the necessary military or corporate entities became a blueprint for all Air Force long-range missile programs. Many people who had known von Neumann were puzzled by his relationship to the military and to power structures in general. Stanisław Ulam suspected that he had a hidden admiration for people or organizations that could influence the thoughts and decision making of others.
He also maintained his knowledge of languages learnt in his youth. He knew Hungarian, French, German and English fluently, and maintained a conversational level of Italian, Yiddish, Latin and Ancient Greek. His Spanish was less perfect. He had a passion for and encyclopedic knowledge of ancient history, and he enjoyed reading Ancient Greek historians in the original Greek. Ulam suspected they may have shaped his views on how future events could play out and how human nature and society worked in general.
Von Neumann's closest friend in the United States was the mathematician Stanisław Ulam. Von Neumann believed that much of his mathematical thought occurred intuitively; he would often go to sleep with a problem unsolved and know the answer upon waking up. Ulam noted that von Neumann's way of thinking might not be visual, but more aural. Ulam recalled, "Quite independently of his liking for abstract wit, he had a strong appreciation (one might say almost a hunger) for the more earthy type of comedy and humor".
Illness and death
In 1955, a mass was found near von Neumann's collarbone, which turned out to be cancer originating in the skeleton, pancreas or prostate. (While there is general agreement that the tumor had metastasised, sources differ on the location of the primary cancer.) The malignancy may have been caused by exposure to radiation at Los Alamos National Laboratory. As death neared, he asked for a priest, though the priest later recalled that von Neumann found little comfort in receiving the last rites; he remained terrified of death and unable to accept it. Of his religious views, von Neumann reportedly said, "So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end," referring to Pascal's wager. He confided to his mother, "There probably has to be a God. Many things are easier to explain if there is than if there isn't."
He died Roman Catholic on February 8, 1957, at Walter Reed Army Medical Hospital and was buried at Princeton Cemetery.
Mathematics
Set theory
At the beginning of the 20th century, efforts to base mathematics on naive set theory suffered a setback due to Russell's paradox (on the set of all sets that do not belong to themselves). The problem of an adequate axiomatization of set theory was resolved implicitly about twenty years later by Ernst Zermelo and Abraham Fraenkel. Zermelo–Fraenkel set theory provided a series of principles that allowed for the construction of the sets used in the everyday practice of mathematics, but did not explicitly exclude the possibility of the existence of a set that belongs to itself. In his 1925 doctoral thesis, von Neumann demonstrated two techniques to exclude such sets—the axiom of foundation and the notion of class.
The axiom of foundation proposed that every set can be constructed from the bottom up in an ordered succession of steps by way of the Zermelo–Fraenkel principles. If one set belongs to another, then the first must necessarily come before the second in the succession. This excludes the possibility of a set belonging to itself. To demonstrate that the addition of this new axiom to the others did not produce contradictions, von Neumann introduced the method of inner models, which became an essential demonstration instrument in set theory.
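In modern notation, the axiom of foundation (also called the axiom of regularity) is usually stated as follows; this is the standard textbook formulation rather than a quotation from von Neumann's thesis:

$$\forall x \, \bigl( x \neq \varnothing \rightarrow \exists y \in x \; ( y \cap x = \varnothing ) \bigr)$$

Every non-empty set $x$ has a member $y$ that shares no element with $x$, which in particular rules out membership chains such as $x \in x$.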
The second approach to the problem of sets belonging to themselves took as its base the notion of class, and defines a set as a class that belongs to other classes, while a proper class is defined as a class that does not belong to other classes. On the Zermelo–Fraenkel approach, the axioms impede the construction of a set of all sets that do not belong to themselves. In contrast, on von Neumann's approach, the class of all sets that do not belong to themselves can be constructed, but it is a proper class, not a set.
Overall, von Neumann's major achievement in set theory was an "axiomatization of set theory and (connected with that) elegant theory of the ordinal and cardinal numbers as well as the first strict formulation of principles of definitions by the transfinite induction".
Von Neumann paradox
Building on the Hausdorff paradox of Felix Hausdorff (1914), Stefan Banach and Alfred Tarski in 1924 showed how to subdivide a three-dimensional ball into disjoint sets, then translate and rotate these sets to form two identical copies of the same ball; this is the Banach–Tarski paradox. They also proved that a two-dimensional disk has no such paradoxical decomposition. But in 1929, von Neumann subdivided the disk into finitely many pieces and rearranged them into two disks, using area-preserving affine transformations instead of translations and rotations. The result depended on finding free groups of affine transformations, an important technique extended later by von Neumann in his work on measure theory.
Proof theory
With the contributions of von Neumann to sets, the axiomatic system of the theory of sets avoided the contradictions of earlier systems and became usable as a foundation for mathematics, despite the lack of a proof of its consistency. The next question was whether it provided definitive answers to all mathematical questions that could be posed in it, or whether it might be improved by adding stronger axioms that could be used to prove a broader class of theorems.
By 1927, von Neumann was involving himself in discussions in Göttingen on whether elementary arithmetic followed from the Peano axioms. Building on the work of Ackermann, he began attempting to prove (using the finitistic methods of Hilbert's school) the consistency of first-order arithmetic. He succeeded in proving the consistency of a fragment of arithmetic of natural numbers (through the use of restrictions on induction). He continued looking for a more general proof of the consistency of classical mathematics using methods from proof theory.
A strongly negative answer to whether it was definitive arrived in September 1930 at the Second Conference on the Epistemology of the Exact Sciences, in which Kurt Gödel announced his first theorem of incompleteness: the usual axiomatic systems are incomplete, in the sense that they cannot prove every truth expressible in their language. Moreover, every consistent extension of these systems necessarily remains incomplete. At the conference, von Neumann suggested to Gödel that he should try to transform his results for undecidable propositions about integers.
Less than a month later, von Neumann communicated to Gödel an interesting consequence of his theorem: the usual axiomatic systems are unable to demonstrate their own consistency. Gödel replied that he had already discovered this consequence, now known as his second incompleteness theorem, and that he would send a preprint of his article containing both results, which never appeared. Von Neumann acknowledged Gödel's priority in his next letter. However, von Neumann's method of proof differed from Gödel's, and he was also of the opinion that the second incompleteness theorem had dealt a much stronger blow to Hilbert's program than Gödel thought it did. With this discovery, which drastically changed his views on mathematical rigor, von Neumann ceased research in the foundations of mathematics and metamathematics and instead spent time on problems connected with applications.
Ergodic theory
In a series of papers published in 1932, von Neumann made foundational contributions to ergodic theory, a branch of mathematics that involves the states of dynamical systems with an invariant measure. Of the 1932 papers on ergodic theory, Paul Halmos wrote that even "if von Neumann had never done anything else, they would have been sufficient to guarantee him mathematical immortality". By then von Neumann had already written his articles on operator theory, and the application of this work was instrumental in his mean ergodic theorem.
The theorem is about arbitrary one-parameter unitary groups $\{U_t\}_{t\in\mathbb{R}}$ and states that for every vector $\phi$ in the Hilbert space, the time average $\lim_{T\to\infty}\frac{1}{T}\int_0^T U_t\,\phi\,dt$ exists in the sense of the metric defined by the Hilbert norm and is a vector $\psi$ such that $U_t\,\psi = \psi$ for all $t$. This was proven in the first paper. In the second paper, von Neumann argued that his results here were sufficient for physical applications relating to Boltzmann's ergodic hypothesis. He also pointed out that ergodicity had not yet been achieved and isolated this for future work.
Later in the year he published another influential paper that began the systematic study of ergodicity. He gave and proved a decomposition theorem showing that the ergodic measure preserving actions of the real line are the fundamental building blocks from which all measure preserving actions can be built. Several other key theorems are given and proven. The results in this paper and another in conjunction with Paul Halmos have significant applications in other areas of mathematics.
Measure theory
In measure theory, the "problem of measure" for an $n$-dimensional Euclidean space $\mathbb{R}^n$ may be stated as: "does there exist a positive, normalized, invariant, and additive set function on the class of all subsets of $\mathbb{R}^n$?" The work of Felix Hausdorff and Stefan Banach had implied that the problem of measure has a positive solution if $n = 1$ or $n = 2$ and a negative solution (because of the Banach–Tarski paradox) in all other cases. Von Neumann's work argued that the "problem is essentially group-theoretic in character": the existence of a measure could be determined by looking at the properties of the transformation group of the given space. The positive solution for spaces of dimension at most two, and the negative solution for higher dimensions, comes from the fact that the Euclidean group is a solvable group for dimension at most two, and is not solvable for higher dimensions. "Thus, according to von Neumann, it is the change of group that makes a difference, not the change of space." Around 1942 he told Dorothy Maharam how to prove that every complete σ-finite measure space has a multiplicative lifting; he did not publish this proof and she later came up with a new one.
In a number of von Neumann's papers, the methods of argument he employed are considered even more significant than the results. In anticipation of his later study of dimension theory in algebras of operators, von Neumann used results on equivalence by finite decomposition, and reformulated the problem of measure in terms of functions. A major contribution von Neumann made to measure theory was the result of a paper written to answer a question of Haar regarding whether there existed an algebra of all bounded functions on the real number line such that they form "a complete system of representatives of the classes of almost everywhere-equal measurable bounded functions". He proved this in the positive, and in later papers with Stone discussed various generalizations and algebraic aspects of this problem. He also proved by new methods the existence of disintegrations for various general types of measures. Von Neumann also gave a new proof on the uniqueness of Haar measures by using the mean values of functions, although this method only worked for compact groups. He had to create entirely new techniques to apply this to locally compact groups. He also gave a new, ingenious proof for the Radon–Nikodym theorem. His lecture notes on measure theory at the Institute for Advanced Study were an important source for knowledge on the topic in America at the time, and were later published.
Topological groups
Using his previous work on measure theory, von Neumann made several contributions to the theory of topological groups, beginning with a paper on almost periodic functions on groups, where von Neumann extended Bohr's theory of almost periodic functions to arbitrary groups. He continued this work with another paper in conjunction with Bochner that improved the theory of almost periodicity to include functions that took on elements of linear spaces as values rather than numbers. In 1938, he was awarded the Bôcher Memorial Prize for his work in analysis in relation to these papers.
In a 1933 paper, he used the newly discovered Haar measure in the solution of Hilbert's fifth problem for the case of compact groups. The basic idea behind this was discovered several years earlier when von Neumann published a paper on the analytic properties of groups of linear transformations and found that closed subgroups of a general linear group are Lie groups. This was later extended by Cartan to arbitrary Lie groups in the form of the closed-subgroup theorem.
Functional analysis
Von Neumann was the first to axiomatically define an abstract Hilbert space. He defined it as a complex vector space with a Hermitian scalar product, with the space being both separable and complete with respect to the corresponding norm. In the same papers he also proved the general form of the Cauchy–Schwarz inequality that had previously been known only in specific examples. He continued with the development of the spectral theory of operators in Hilbert space in three seminal papers between 1929 and 1932. This work culminated in his Mathematical Foundations of Quantum Mechanics, which, alongside two other books by Stone and Banach in the same year, was among the first monographs on Hilbert space theory. Previous work by others showed that a theory of weak topologies could not be obtained by using sequences. Von Neumann was the first to outline a program of how to overcome the difficulties, which resulted in him defining locally convex spaces and topological vector spaces for the first time. In addition, several other topological properties he defined at the time (he was among the first mathematicians to extend Hausdorff's new topological ideas from Euclidean to Hilbert spaces) such as boundedness and total boundedness are still used today. For twenty years von Neumann was considered the 'undisputed master' of this area. These developments were primarily prompted by needs in quantum mechanics where von Neumann realized the need to extend the spectral theory of Hermitian operators from the bounded to the unbounded case. Other major achievements in these papers include a complete elucidation of spectral theory for normal operators, the first abstract presentation of the trace of a positive operator, a generalisation of Riesz's presentation of Hilbert's spectral theorems at the time, and the discovery of Hermitian operators in a Hilbert space, as distinct from self-adjoint operators, which enabled him to give a description of all Hermitian operators which extend a given Hermitian operator. He wrote a paper detailing how the usage of infinite matrices, common at the time in spectral theory, was inadequate as a representation for Hermitian operators. His work on operator theory led to his most profound invention in pure mathematics, the study of von Neumann algebras and in general of operator algebras.
His later work on rings of operators led him to revisit his work on spectral theory and provide a new way of working through the geometric content by the use of direct integrals of Hilbert spaces. As in his work on measure theory, he proved several theorems that he did not find time to publish. He told Nachman Aronszajn and K. T. Smith that in the early 1930s he proved the existence of proper invariant subspaces for completely continuous operators in a Hilbert space while working on the invariant subspace problem.
With I. J. Schoenberg he wrote several items investigating translation invariant Hilbertian metrics on the real number line which resulted in their complete classification. Their motivation lay in various questions related to embedding metric spaces into Hilbert spaces.
With Pascual Jordan he wrote a short paper giving the first derivation of a given norm from an inner product by means of the parallelogram identity. His trace inequality is a key result of matrix theory used in matrix approximation problems. He also first presented the idea that the dual of a pre-norm is a norm in the first major paper discussing the theory of unitarily invariant norms and symmetric gauge functions (now known as symmetric absolute norms). This paper leads naturally to the study of symmetric operator ideals and is the beginning point for modern studies of symmetric operator spaces.
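The parallelogram identity in question is the familiar one from inner product spaces,

$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2,$$

and the Jordan–von Neumann result is the converse: a norm satisfying this identity necessarily comes from an inner product, which can be recovered from the norm via the polarization identity.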
Later with Robert Schatten he initiated the study of nuclear operators on Hilbert spaces, tensor products of Banach spaces, introduced and studied trace class operators, their ideals, and their duality with compact operators, and preduality with bounded operators. The generalization of this topic to the study of nuclear operators on Banach spaces was among the first achievements of Alexander Grothendieck. Previously, in 1937, von Neumann published several results in this area, for example giving a one-parameter scale of different cross norms and proving several other results on what are now known as Schatten–von Neumann ideals.
Operator algebras
Von Neumann founded the study of rings of operators, through the von Neumann algebras (originally called W*-algebras). While his original ideas for rings of operators existed already in 1930, he did not begin studying them in depth until he met F. J. Murray several years later. A von Neumann algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. The von Neumann bicommutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as being equal to the bicommutant. After elucidating the study of the commutative algebra case, von Neumann embarked in 1936, with the partial collaboration of Murray, on the noncommutative case, the general study of factors and the classification of von Neumann algebras. The six major papers in which he developed that theory between 1936 and 1940 "rank among the masterpieces of analysis in the twentieth century"; they collect many foundational results and started several programs in operator algebra theory that mathematicians worked on for decades afterwards. An example is the classification of factors. In addition, in 1938 he proved that every von Neumann algebra on a separable Hilbert space is a direct integral of factors; he did not find time to publish this result until 1949. Von Neumann algebras relate closely to a theory of noncommutative integration, something that von Neumann hinted at in his work but did not explicitly write out. Another important result on polar decomposition was published in 1932.
Lattice theory
Between 1935 and 1937, von Neumann worked on lattice theory, the theory of partially ordered sets in which every two elements have a greatest lower bound and a least upper bound. As Garrett Birkhoff wrote, "John von Neumann's brilliant mind blazed over lattice theory like a meteor". Von Neumann combined traditional projective geometry with modern algebra (linear algebra, ring theory, lattice theory). Many previously geometric results could then be interpreted in the case of general modules over rings. His work laid the foundations for some of the modern work in projective geometry.
His biggest contribution was founding the field of continuous geometry. It followed his path-breaking work on rings of operators. In mathematics, continuous geometry is a substitute for complex projective geometry, where instead of the dimension of a subspace being in a discrete set it can be an element of the unit interval $[0,1]$. Earlier, Menger and Birkhoff had axiomatized complex projective geometry in terms of the properties of its lattice of linear subspaces. Von Neumann, following his work on rings of operators, weakened those axioms to describe a broader class of lattices, the continuous geometries.
While the dimensions of the subspaces of projective geometries are a discrete set (the non-negative integers), the dimensions of the elements of a continuous geometry can range continuously across the unit interval $[0,1]$. Von Neumann was motivated by his discovery of von Neumann algebras with a dimension function taking a continuous range of dimensions, and the first example of a continuous geometry other than projective space was the projections of the hyperfinite type II₁ factor.
In more pure lattice theoretical work, he solved the difficult problem of characterizing the class of continuous-dimensional projective geometries over an arbitrary division ring in the abstract language of lattice theory. Von Neumann provided an abstract exploration of dimension in completed complemented modular topological lattices (properties that arise in the lattices of subspaces of inner product spaces): Dimension is determined, up to a positive linear transformation, by the following two properties. It is conserved by perspective mappings ("perspectivities") and ordered by inclusion. The deepest part of the proof concerns the equivalence of perspectivity with "projectivity by decomposition"—of which a corollary is the transitivity of perspectivity.
For any integer $n \geq 3$, every $n$-dimensional abstract projective geometry is isomorphic to the subspace-lattice of an $(n+1)$-dimensional vector space over a (unique) corresponding division ring. This is known as the Veblen–Young theorem. Von Neumann extended this fundamental result in projective geometry to the continuous dimensional case. This coordinatization theorem stimulated considerable work in abstract projective geometry and lattice theory, much of which continued using von Neumann's techniques. Birkhoff described this theorem as follows: Any complemented modular lattice having a "basis" of pairwise perspective elements, is isomorphic with the lattice of all principal right-ideals of a suitable regular ring. This conclusion is the culmination of 140 pages of brilliant and incisive algebra involving entirely novel axioms. Anyone wishing to get an unforgettable impression of the razor edge of von Neumann's mind, need merely try to pursue this chain of exact reasoning for himself—realizing that often five pages of it were written down before breakfast, seated at a living room writing-table in a bathrobe.
This work required the creation of regular rings. A von Neumann regular ring is a ring in which for every element $a$ there exists an element $x$ such that $axa = a$. These rings came from and have connections to his work on von Neumann algebras, as well as AW*-algebras and various kinds of C*-algebras.
Many smaller technical results were proven during the creation and proof of the above theorems, particularly regarding distributivity (such as infinite distributivity), von Neumann developing them as needed. He also developed a theory of valuations in lattices, and shared in developing the general theory of metric lattices.
Birkhoff noted in his posthumous article on von Neumann that most of these results were developed in an intense two-year period of work, and that while his interests continued in lattice theory after 1937, they became peripheral and mainly occurred in letters to other mathematicians. A final contribution in 1940 was for a joint seminar he conducted with Birkhoff at the Institute for Advanced Study on the subject where he developed a theory of σ-complete lattice ordered rings. He never wrote up the work for publication.
Mathematical statistics
Von Neumann made fundamental contributions to mathematical statistics. In 1941, he derived the exact distribution of the ratio of the mean square of successive differences to the sample variance for independent and identically normally distributed variables. This ratio was applied to the residuals from regression models and is commonly known as the Durbin–Watson statistic for testing the null hypothesis that the errors are serially independent against the alternative that they follow a stationary first order autoregression.
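In the usual modern notation (a standard presentation, not a quotation from the 1941 paper), the ratio in question, often called the von Neumann ratio, is

$$\eta = \frac{\tfrac{1}{n-1}\sum_{i=1}^{n-1}\left(x_{i+1}-x_i\right)^2}{\tfrac{1}{n}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2},$$

the mean square successive difference divided by the sample variance; under serial independence its expected value is close to 2, and marked departures from 2 signal serial correlation.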
Subsequently, Denis Sargan and Alok Bhargava extended the results for testing whether the errors on a regression model follow a Gaussian random walk (i.e., possess a unit root) against the alternative that they are a stationary first order autoregression.
Other work
In his early years, von Neumann published several papers related to set-theoretical real analysis and number theory. In a paper from 1925, he proved that for any dense sequence of points in the unit interval $[0,1]$, there exists a rearrangement of those points that is uniformly distributed. In 1926 his sole publication was on Prüfer's theory of ideal algebraic numbers where he found a new way of constructing them, thus extending Prüfer's theory to the field of all algebraic numbers, and clarified their relation to p-adic numbers.
In 1928 he published two additional papers continuing with these themes. The first dealt with partitioning an interval into countably many congruent subsets. It solved a problem of Hugo Steinhaus asking whether an interval is divisible in this way. Von Neumann proved that indeed all intervals, whether half-open, open, or closed, can be partitioned into countably many subsets that are congruent by translation. His next paper gave a constructive proof, without the axiom of choice, that algebraically independent reals exist: he exhibited specific real numbers and proved them to be algebraically independent. Consequently, there exists a perfect algebraically independent set of reals the size of the continuum. Other minor results from his early career include a proof of a maximum principle for the gradient of a minimizing function in the field of calculus of variations, and a small simplification of Hermann Minkowski's theorem for linear forms in geometric number theory.
Later in his career together with Pascual Jordan and Eugene Wigner he wrote a foundational paper classifying all finite-dimensional formally real Jordan algebras and discovering the Albert algebras while attempting to look for a better mathematical formalism for quantum theory. In 1936 he attempted to further the program of replacing the axioms of his previous Hilbert space program with those of Jordan algebras in a paper investigating the infinite-dimensional case; he planned to write at least one further paper on the topic but never did. Nevertheless, these axioms formed the basis for further investigations of algebraic quantum mechanics started by Irving Segal.
Physics
Quantum mechanics
Von Neumann was the first to establish a rigorous mathematical framework for quantum mechanics, known as the Dirac–von Neumann axioms, in his influential 1932 work Mathematical Foundations of Quantum Mechanics. After having completed the axiomatization of set theory, he began to confront the axiomatization of quantum mechanics. He realized in 1926 that a state of a quantum system could be represented by a point in a (complex) Hilbert space that, in general, could be infinite-dimensional even for a single particle. In this formalism of quantum mechanics, observable quantities such as position or momentum are represented as linear operators acting on the Hilbert space associated with the quantum system.
The physics of quantum mechanics was thereby reduced to the mathematics of Hilbert spaces and linear operators acting on them. For example, the uncertainty principle, according to which the determination of the position of a particle prevents the determination of its momentum and vice versa, is translated into the non-commutativity of the two corresponding operators. This new mathematical formulation included as special cases the formulations of both Heisenberg and Schrödinger.
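In the standard formalism (a textbook illustration, not specific to von Neumann's book), the position and momentum operators satisfy the canonical commutation relation

$$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar,$$

and this non-commutativity is the operator-theoretic expression of the uncertainty principle described above.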
Von Neumann's abstract treatment permitted him to confront the foundational issue of determinism versus non-determinism, and in the book he presented a proof that the statistical results of quantum mechanics could not possibly be averages of an underlying set of determined "hidden variables", as in classical statistical mechanics. In 1935, Grete Hermann published a paper arguing that the proof contained a conceptual error and was therefore invalid. Hermann's work was largely ignored until after John S. Bell made essentially the same argument in 1966. In 2010, Jeffrey Bub argued that Bell had misconstrued von Neumann's proof, and pointed out that the proof, though not valid for all hidden variable theories, does rule out a well-defined and important subset. Bub also suggests that von Neumann was aware of this limitation and did not claim that his proof completely ruled out hidden variable theories. The validity of Bub's argument is, in turn, disputed. Gleason's theorem of 1957 provided an argument against hidden variables along the lines of von Neumann's, but founded on assumptions seen as better motivated and more physically meaningful.
Von Neumann's proof inaugurated a line of research that ultimately led, through Bell's theorem and the experiments of Alain Aspect in 1982, to the demonstration that quantum physics either requires a notion of reality substantially different from that of classical physics, or must include nonlocality in apparent violation of special relativity.
In a chapter of The Mathematical Foundations of Quantum Mechanics, von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the universal wave function. Since something "outside the calculation" was needed to collapse the wave function, von Neumann concluded that the collapse was caused by the consciousness of the experimenter. He argued that the mathematics of quantum mechanics allows the collapse of the wave function to be placed at any position in the causal chain from the measurement device to the "subjective consciousness" of the human observer. In other words, while the line between observer and observed could be drawn in different places, the theory only makes sense if an observer exists somewhere. Although the idea of consciousness causing collapse was accepted by Eugene Wigner, the Von Neumann–Wigner interpretation never gained acceptance among the majority of physicists.
Though theories of quantum mechanics continue to evolve, a basic framework for the mathematical formalism of problems in quantum mechanics underlying most approaches can be traced back to the mathematical formalisms and techniques first used by von Neumann. Discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations.
Viewing von Neumann's work on quantum mechanics as a part of the fulfilment of Hilbert's sixth problem, mathematical physicist Arthur Wightman said in 1974 that his axiomatization of quantum theory was perhaps the most important axiomatization of a physical theory to date. With his 1932 book, quantum mechanics became a mature theory in the sense it had a precise mathematical form, which allowed for clear answers to conceptual problems. Nevertheless, von Neumann in his later years felt he had failed in this aspect of his scientific work as, despite all the mathematics he developed, he did not find a satisfactory mathematical framework for quantum theory as a whole.
Von Neumann entropy
Von Neumann entropy is extensively used in different forms (conditional entropy, relative entropy, etc.) in the framework of quantum information theory. Entanglement measures are based upon some quantity directly related to the von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix $\rho$, it is given by $S(\rho) = -\operatorname{Tr}(\rho \ln \rho)$. Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and conditional quantum entropy. Quantum information theory is largely concerned with the interpretation and uses of von Neumann entropy, a cornerstone in the former's development; the Shannon entropy applies to classical information theory.
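As a minimal numerical sketch (assuming only NumPy; the example state is arbitrary), the entropy of a mixed qubit state can be computed from the eigenvalues of its density matrix:

```python
import numpy as np

# Density matrix of an arbitrary mixed qubit state: 75% |0>, 25% |1>.
rho = np.array([[0.75, 0.0],
                [0.0, 0.25]])

# Von Neumann entropy S(rho) = -Tr(rho ln rho), computed via the eigenvalues.
eigenvalues = np.linalg.eigvalsh(rho)
entropy = -sum(p * np.log(p) for p in eigenvalues if p > 0)
print(entropy)  # about 0.562 nats; a pure state would give exactly 0
```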
Density matrix
The formalism of density operators and matrices was introduced by von Neumann in 1927 and independently, but less systematically, by Lev Landau and Felix Bloch in 1927 and 1946 respectively. The density matrix allows the representation of probabilistic mixtures of quantum states (mixed states) in contrast to wavefunctions, which can only represent pure states.
Von Neumann measurement scheme
The von Neumann measurement scheme, the ancestor of quantum decoherence theory, represents measurements projectively by taking into account the measuring apparatus which is also treated as a quantum object. The 'projective measurement' scheme introduced by von Neumann led to the development of quantum decoherence theories.
Quantum logic
Von Neumann first proposed a quantum logic in his 1932 treatise Mathematical Foundations of Quantum Mechanics, where he noted that projections on a Hilbert space can be viewed as propositions about physical observables. The field of quantum logic was subsequently inaugurated in a 1936 paper by von Neumann and Garrett Birkhoff, the first to introduce quantum logics, wherein von Neumann and Birkhoff first proved that quantum mechanics requires a propositional calculus substantially different from all classical logics and rigorously isolated a new algebraic structure for quantum logics. The concept of creating a propositional calculus for quantum logic was first outlined in a short section in von Neumann's 1932 work, but in 1936, the need for the new propositional calculus was demonstrated through several proofs. For example, photons cannot pass through two successive filters that are polarized perpendicularly (e.g., horizontally and vertically), and therefore, a fortiori, they cannot pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession, but if the third filter is added between the other two, the photons will indeed pass through. This experimental fact is translatable into logic as the non-commutativity of conjunction, $(A \land B) \neq (B \land A)$. It was also demonstrated that the laws of distribution of classical logic, $P \lor (Q \land R) = (P \lor Q) \land (P \lor R)$ and $P \land (Q \lor R) = (P \land Q) \lor (P \land R)$, are not valid for quantum theory.
The reason for this is that a quantum disjunction, unlike the case for classical disjunction, can be true even when both of the disjuncts are false and this is in turn attributable to the fact that it is frequently the case in quantum mechanics that a pair of alternatives are semantically determinate, while each of its members is necessarily indeterminate. Consequently, the distributive law of classical logic must be replaced with a weaker condition. Instead of a distributive lattice, propositions about a quantum system form an orthomodular lattice isomorphic to the lattice of subspaces of the Hilbert space associated with that system.
Nevertheless, he was never satisfied with his work on quantum logic. He intended it to be a joint synthesis of formal logic and probability theory and when he attempted to write up a paper for the Henry Joseph Lecture he gave at the Washington Philosophical Society in 1945 he found that he could not, especially given that he was busy with war work at the time. During his address at the 1954 International Congress of Mathematicians he gave this issue as one of the unsolved problems that future mathematicians could work on.
Fluid dynamics
Von Neumann made fundamental contributions in the field of fluid dynamics, including the classic flow solution to blast waves, and the co-discovery (independently by Yakov Borisovich Zel'dovich and Werner Döring) of the ZND detonation model of explosives. During the 1930s, von Neumann became an authority on the mathematics of shaped charges.
Later with Robert D. Richtmyer, von Neumann developed an algorithm defining artificial viscosity that improved the understanding of shock waves. When computers solved hydrodynamic or aerodynamic problems, they put too many computational grid points at regions of sharp discontinuity (shock waves). The mathematics of artificial viscosity smoothed the shock transition without sacrificing basic physics.
Von Neumann soon applied computer modelling to the field, developing software for his ballistics research. During World War II, he approached R. H. Kent, the director of the US Army's Ballistic Research Laboratory, with a computer program for calculating a one-dimensional model of 100 molecules to simulate a shock wave. Von Neumann gave a seminar on his program to an audience which included his friend Theodore von Kármán. After von Neumann had finished, von Kármán said "Of course you realize Lagrange also used digital models to simulate continuum mechanics." Von Neumann had been unaware of Lagrange's earlier work.
Other work
While not as prolific in physics as he was in mathematics, he nevertheless made several other notable contributions. His pioneering papers with Subrahmanyan Chandrasekhar on the statistics of a fluctuating gravitational field generated by randomly distributed stars were considered a tour de force. In this paper they developed a theory of two-body relaxation and used the Holtsmark distribution to model the dynamics of stellar systems. He wrote several other unpublished manuscripts on topics in stellar structure, some of which were included in Chandrasekhar's other works. In earlier work led by Oswald Veblen, von Neumann helped develop basic ideas involving spinors that would lead to Roger Penrose's twistor theory. Much of this was done in seminars conducted at the IAS during the 1930s. From this work he wrote a paper with A. H. Taub and Veblen extending the Dirac equation to projective relativity, with a key focus on maintaining invariance with regards to coordinate, spin, and gauge transformations, as a part of early research into potential theories of quantum gravity in the 1930s. In the same time period he made several proposals to colleagues for dealing with the problems in the newly created quantum field theory and for quantizing spacetime; however, neither he nor his colleagues considered the ideas fruitful, and he did not pursue them. Nevertheless, he maintained at least some interest, in 1940 writing a manuscript on the Dirac equation in de Sitter space.
Economics
Game theory
Von Neumann founded the field of game theory as a mathematical discipline. He proved his minimax theorem in 1928. It establishes that in zero-sum games with perfect information (i.e., in which players know at each time all moves that have taken place so far), there exists a pair of strategies for both players that allows each to minimize their maximum losses. Such strategies are called optimal. Von Neumann showed that their minimaxes are equal (in absolute value) and contrary (in sign). He improved and extended the minimax theorem to include games involving imperfect information and games with more than two players, publishing this result in his 1944 Theory of Games and Economic Behavior, written with Oskar Morgenstern. The public interest in this work was such that The New York Times ran a front-page story. In this book, von Neumann declared that economic theory needed to use functional analysis, especially convex sets and the topological fixed-point theorem, rather than the traditional differential calculus, because the maximum-operator did not preserve differentiable functions.
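In modern notation (a standard statement rather than von Neumann's original formulation), the minimax theorem for a finite two-player zero-sum game with payoff matrix $A$ asserts that

$$\max_{x \in \Delta_m} \; \min_{y \in \Delta_n} \; x^{\mathsf T} A y \;=\; \min_{y \in \Delta_n} \; \max_{x \in \Delta_m} \; x^{\mathsf T} A y,$$

where $\Delta_m$ and $\Delta_n$ are the sets of mixed strategies (probability vectors) of the two players; the common value is the value of the game, and strategies attaining it are the optimal strategies described above.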
Von Neumann's functional-analytic techniques—the use of duality pairings of real vector spaces to represent prices and quantities, the use of supporting and separating hyperplanes and convex sets, and fixed-point theory—have been primary tools of mathematical economics ever since.
Mathematical economics
Von Neumann raised the mathematical level of economics in several influential publications. For his model of an expanding economy, he proved the existence and uniqueness of an equilibrium using his generalization of the Brouwer fixed-point theorem. Von Neumann's model of an expanding economy considered the matrix pencil A − λB with nonnegative matrices A and B; von Neumann sought probability vectors p and q and a positive number λ that would solve the complementarity equation $p^{\mathsf T}(A - \lambda B)q = 0$ along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run. The unique solution λ represents the growth factor which is 1 plus the rate of growth of the economy; the rate of growth equals the interest rate.
Von Neumann's results have been viewed as a special case of linear programming, where his model uses only nonnegative matrices. The study of his model of an expanding economy continues to interest mathematical economists. This paper has been called the greatest paper in mathematical economics by several authors, who recognized its introduction of fixed-point theorems, linear inequalities, complementary slackness, and saddlepoint duality. In the proceedings of a conference on von Neumann's growth model, Paul Samuelson said that many mathematicians had developed methods useful to economists, but that von Neumann was unique in having made significant contributions to economic theory itself. The lasting importance of the work on general equilibria and the methodology of fixed point theorems is underscored by the awarding of Nobel prizes in 1972 to Kenneth Arrow, in 1983 to Gérard Debreu, and in 1994 to John Nash who used fixed point theorems to establish equilibria for non-cooperative games and for bargaining problems in his Ph.D. thesis. Arrow and Debreu also used linear programming, as did Nobel laureates Tjalling Koopmans, Leonid Kantorovich, Wassily Leontief, Paul Samuelson, Robert Dorfman, Robert Solow, and Leonid Hurwicz.
Von Neumann's interest in the topic began while he was lecturing at Berlin in 1928 and 1929. He spent his summers in Budapest, as did the economist Nicholas Kaldor; Kaldor recommended that von Neumann read a book by the mathematical economist Léon Walras. Von Neumann noticed that Walras's General Equilibrium Theory and Walras's law, which led to systems of simultaneous linear equations, could produce the absurd result that profit could be maximized by producing and selling a negative quantity of a product. He replaced the equations by inequalities, introduced dynamic equilibria, among other things, and eventually produced his paper.
Linear programming
Building on his results on matrix games and on his model of an expanding economy, von Neumann invented the theory of duality in linear programming when George Dantzig described his work in a few minutes, and an impatient von Neumann asked him to get to the point. Dantzig then listened dumbfounded while von Neumann provided an hourlong lecture on convex sets, fixed-point theory, and duality, conjecturing the equivalence between matrix games and linear programming.
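A minimal sketch of the duality relationship von Neumann conjectured (the particular linear program is an arbitrary example, chosen only so that both problems can be checked by hand):

```python
# Sketch of linear programming duality: the optimal values of a primal LP
# and its dual coincide (strong duality). The numbers are arbitrary.
from scipy.optimize import linprog

# Primal: maximize 3x1 + 5x2 subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0
c = [-3, -5]                           # linprog minimizes, so negate the objective
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
primal = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

# Dual: minimize 4y1 + 12y2 + 18y3 subject to y1 + 3y3 >= 3, 2y2 + 2y3 >= 5, y >= 0
c_d = [4, 12, 18]
A_ub_d = [[-1, 0, -3], [0, -2, -2]]    # flip the >= constraints into <= form
b_ub_d = [-3, -5]
dual = linprog(c_d, A_ub=A_ub_d, b_ub=b_ub_d, bounds=[(0, None)] * 3, method="highs")

print(-primal.fun, dual.fun)           # both print 36.0
```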
Later, von Neumann suggested a new method of linear programming, using the homogeneous linear system of Paul Gordan (1873), which was later popularized by Karmarkar's algorithm. Von Neumann's method used a pivoting algorithm between simplices, with the pivoting decision determined by a nonnegative least squares subproblem with a convexity constraint (projecting the zero-vector onto the convex hull of the active simplex). Von Neumann's algorithm was the first interior point method of linear programming.
Computer science
Von Neumann was a founding figure in computing, with significant contributions to computing hardware design, to theoretical computer science, to scientific computing, and to the philosophy of computer science.
Hardware
Von Neumann consulted for the Army's Ballistic Research Laboratory, most notably on the ENIAC project, as a member of its Scientific Advisory Committee. Although the single-memory, stored-program architecture is commonly called von Neumann architecture, the architecture was based on the work of J. Presper Eckert and John Mauchly, inventors of ENIAC and its successor, EDVAC.
While consulting for the EDVAC project at the University of Pennsylvania, von Neumann wrote an incomplete First Draft of a Report on the EDVAC. The paper, whose premature distribution nullified the patent claims of Eckert and Mauchly, described a computer that stored both its data and its program in the same address space, unlike the earliest computers which stored their programs separately on paper tape or plugboards. This architecture became the basis of most modern computer designs.
Next, von Neumann designed the IAS machine at the Institute for Advanced Study in Princeton, New Jersey. He arranged its financing, and the components were designed and built at the RCA Research Laboratory nearby. Von Neumann recommended that the IBM 701, nicknamed the defense computer, include a magnetic drum. It was a faster version of the IAS machine and formed the basis for the commercially successful IBM 704.
Algorithms
Von Neumann was the inventor, in 1945, of the merge sort algorithm, in which the first and second halves of an array are each sorted recursively and then merged.
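A minimal modern sketch of the algorithm (von Neumann's original was written for the EDVAC, not in a high-level language):

```python
# Merge sort in the form described above: sort each half of the array
# recursively, then merge the two sorted halves.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])     # append whatever remains of either half
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]
```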
As part of Von Neumann's hydrogen bomb work, he and Stanisław Ulam developed simulations for hydrodynamic computations. He also contributed to the development of the Monte Carlo method, which used random numbers to approximate the solutions to complicated problems.
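The flavour of the method can be shown with a standard toy example, estimating π from random samples (illustrative only, and unrelated to the classified hydrodynamic calculations):

```python
# Sketch of the Monte Carlo idea: approximate pi by sampling random points
# in the unit square and counting how many fall inside the quarter circle.
import random

def estimate_pi(n_samples=1_000_000, seed=0):
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / n_samples      # area ratio times 4 approximates pi

print(estimate_pi())                   # roughly 3.14
```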
Von Neumann's algorithm for simulating a fair coin with a biased coin is used in the "software whitening" stage of some hardware random number generators. Because obtaining "truly" random numbers was impractical, von Neumann developed a form of pseudorandomness, using the middle-square method. He justified this crude method as faster than any other method at his disposal, writing that "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." He also noted that when this method went awry it did so obviously, unlike other methods which could be subtly incorrect.
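Both techniques are simple enough to sketch; the code below is an illustration of the ideas, not a reconstruction of von Neumann's own routines:

```python
# Sketches of the "fair coin from a biased coin" trick and the middle-square
# pseudorandom generator mentioned above.
import random

def von_neumann_extractor(biased_bits):
    """Read the biased stream in pairs; emit the first bit of each unequal
    pair (0,1) -> 0, (1,0) -> 1, and discard equal pairs."""
    it = iter(biased_bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

def middle_square(seed=675248, digits=6):
    """Middle-square generator: square the state and keep the middle digits."""
    state = seed
    while True:
        sq = str(state * state).zfill(2 * digits)
        start = (len(sq) - digits) // 2
        state = int(sq[start:start + digits])
        yield state

biased = (1 if random.random() < 0.7 else 0 for _ in range(10_000))
fair = list(von_neumann_extractor(biased))
print(sum(fair) / len(fair))           # close to 0.5 despite the 70% bias

gen = middle_square()
print([next(gen) for _ in range(5)])   # successive middle-square states
```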
Stochastic computing was introduced by von Neumann in 1953, but could not be implemented until advances in computing of the 1960s. Around 1950 he was also among the first to talk about the time complexity of computations, which eventually evolved into the field of computational complexity theory.
Cellular automata, DNA and the universal constructor
Von Neumann's mathematical analysis of the structure of self-replication preceded the discovery of the structure of DNA. Ulam and von Neumann are also generally credited with creating the field of cellular automata, beginning in the 1940s, as a simplified mathematical model of biological systems.
In lectures in 1948 and 1949, von Neumann proposed a kinematic self-reproducing automaton. By 1952, he was treating the problem more abstractly. He designed an elaborate 2D cellular automaton that would automatically make a copy of its initial configuration of cells. The Von Neumann universal constructor based on the von Neumann cellular automaton was fleshed out in his posthumous Theory of Self Reproducing Automata.
The von Neumann neighborhood, in which each cell in a two-dimensional grid has the four orthogonally adjacent grid cells as neighbors, continues to be used for other cellular automata.
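A sketch of the neighborhood in use (the growth rule is arbitrary, chosen only to show the four-cell neighborhood; it is not one of von Neumann's automata):

```python
# One step of a simple cellular automaton on a 2D grid using the
# von Neumann neighborhood (the four orthogonally adjacent cells).
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # von Neumann neighborhood
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                yield grid[rr][cc]
    # Growth rule (illustrative): a cell becomes live if it or any neighbor is live.
    return [[1 if grid[r][c] or any(neighbors(r, c)) else 0
             for c in range(cols)] for r in range(rows)]

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1                         # single live cell in the centre
print(step(grid))                      # the live region grows into a diamond shape
```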
Scientific computing and numerical analysis
Considered to be possibly "the most influential researcher in scientific computing of all time", von Neumann made several contributions to the field, both technically and administratively. He developed the Von Neumann stability analysis procedure, still commonly used to avoid errors from building up in numerical methods for linear partial differential equations. His paper with Herman Goldstine in 1947 was the first to describe backward error analysis, although implicitly. He was also one of the first to write about the Jacobi method. At Los Alamos, he wrote several classified reports on solving problems of gas dynamics numerically. However, he was frustrated by the lack of progress with analytic methods for these nonlinear problems. As a result, he turned towards computational methods. Under his influence Los Alamos became the leader in computational science during the 1950s and early 1960s.
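A standard textbook instance of the procedure, applied to the explicit finite-difference scheme for the one-dimensional heat equation (the example is generic and not taken from von Neumann's reports):

```latex
% Von Neumann stability analysis of the explicit (FTCS) scheme for the
% heat equation u_t = alpha * u_xx.  Substituting a Fourier mode
% u_j^n = G^n e^{i k j \Delta x} into
%   u_j^{n+1} = u_j^n + r (u_{j+1}^n - 2 u_j^n + u_{j-1}^n),
%   r = alpha \Delta t / \Delta x^2,
% gives the amplification factor
\[
  G = 1 + r\left(e^{i k \Delta x} - 2 + e^{-i k \Delta x}\right)
    = 1 - 4 r \sin^2\!\left(\frac{k \Delta x}{2}\right),
\]
% and errors remain bounded (|G| <= 1 for every wavenumber k) precisely when
\[
  r = \frac{\alpha \, \Delta t}{\Delta x^{2}} \le \frac{1}{2}.
\]
```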
From this work von Neumann realized that computation was not just a tool to brute force the solution to a problem numerically, but could also provide insight for solving problems analytically, and that there was an enormous variety of scientific and engineering problems towards which computers would be useful, most significant of which were nonlinear problems. In June 1945 at the First Canadian Mathematical Congress he gave his first talk on general ideas of how to solve problems, particularly of fluid dynamics numerically. He also described how wind tunnels were actually analog computers, and how digital computers would replace them and bring a new era of fluid dynamics. Garrett Birkhoff described it as "an unforgettable sales pitch". He expanded this talk with Goldstine into the manuscript "On the Principles of Large Scale Computing Machines" and used it to promote the support of scientific computing. His papers also developed the concepts of inverting matrices, random matrices and automated relaxation methods for solving elliptic boundary value problems.
Weather systems and global warming
As part of his research into possible applications of computers, von Neumann became interested in weather prediction, noting similarities between the problems in the field and those he had worked on during the Manhattan Project. In 1946 von Neumann founded the "Meteorological Project" at the Institute for Advanced Study, securing funding for his project from the Weather Bureau, the US Air Force and US Navy weather services. With Carl-Gustaf Rossby, considered the leading theoretical meteorologist at the time, he gathered a group of twenty meteorologists to work on various problems in the field. However, given his other postwar work he was not able to devote enough time to proper leadership of the project and little was accomplished.
This changed when a young Jule Gregory Charney took up co-leadership of the project from Rossby. By 1950 von Neumann and Charney wrote the world's first climate modelling software, and used it to perform the world's first numerical weather forecasts on the ENIAC computer that von Neumann had arranged to be used; von Neumann and his team published the results as Numerical Integration of the Barotropic Vorticity Equation. Together they played a leading role in efforts to integrate sea-air exchanges of energy and moisture into the study of climate. Though primitive, news of the ENIAC forecasts quickly spread around the world and a number of parallel projects in other locations were initiated.
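The equation integrated in those forecasts is, in its standard modern form (notation as commonly written today, not necessarily that of the 1950 paper):

```latex
% Barotropic vorticity equation: absolute vorticity (relative vorticity zeta
% plus the Coriolis parameter f) is conserved following the two-dimensional,
% non-divergent flow described by the streamfunction psi.
\[
  \frac{\partial \zeta}{\partial t} + \mathbf{v}\cdot\nabla\left(\zeta + f\right) = 0,
  \qquad
  \zeta = \nabla^{2}\psi, \quad \mathbf{v} = \mathbf{k}\times\nabla\psi .
\]
```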
In 1955 von Neumann, Charney and their collaborators convinced their funders to open the Joint Numerical Weather Prediction Unit (JNWPU) in Suitland, Maryland, which began routine real-time weather forecasting. Von Neumann then proposed a research program for climate modeling: "The approach is to first try short-range forecasts, then long-range forecasts of those properties of the circulation that can perpetuate themselves over arbitrarily long periods of time, and only finally to attempt forecast for medium-long time periods which are too long to treat by simple hydrodynamic theory and too short to treat by the general principle of equilibrium theory." Positive results of Norman A. Phillips in 1955 prompted an immediate reaction, and von Neumann organized a conference at Princeton on "Application of Numerical Integration Techniques to the Problem of the General Circulation". Once again he strategically organized the program as a predictive one to ensure continued support from the Weather Bureau and the military, leading to the creation of the General Circulation Research Section (now the Geophysical Fluid Dynamics Laboratory) next to the JNWPU. He continued work both on technical issues of modelling and in ensuring continuing funding for these projects.
During the late 19th century, Svante Arrhenius suggested that human activity could cause global warming by adding carbon dioxide to the atmosphere. In 1955, von Neumann observed that this may already have begun: "Carbon dioxide released into the atmosphere by industry's burning of coal and oil – more than half of it during the last generation – may have changed the atmosphere's composition sufficiently to account for a general warming of the world by about one degree Fahrenheit." His research into weather systems and meteorological prediction led him to propose manipulating the environment by spreading colorants on the polar ice caps to enhance absorption of solar radiation (by reducing the albedo). However, he urged caution in any program of atmosphere modification: "What could be done, of course, is no index to what should be done... In fact, to evaluate the ultimate consequences of either a general cooling or a general heating would be a complex matter. Changes would affect the level of the seas, and hence the habitability of the continental coastal shelves; the evaporation of the seas, and hence general precipitation and glaciation levels; and so on... But there is little doubt that one could carry out the necessary analyses needed to predict the results, intervene on any desired scale, and ultimately achieve rather fantastic results." He also warned that weather and climate control could have military uses, telling Congress in 1956 that they could pose an even bigger risk than ICBMs.
Technological singularity hypothesis
The first use of the concept of a singularity in the technological context is attributed to von Neumann, who according to Ulam discussed the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This concept was later fleshed out in the 1970 book Future Shock by Alvin Toffler.
Defense work
Manhattan Project
Beginning in the late 1930s, von Neumann developed an expertise in explosions—phenomena that are difficult to model mathematically. During this period, he was the leading authority of the mathematics of shaped charges, leading him to a large number of military consultancies and consequently his involvement in the Manhattan Project. The involvement included frequent trips to the project's secret research facilities at the Los Alamos Laboratory in New Mexico.
Von Neumann made his principal contribution to the atomic bomb in the concept and design of the explosive lenses that were needed to compress the plutonium core of the Fat Man weapon that was later dropped on Nagasaki. While von Neumann did not originate the "implosion" concept, he was one of its most persistent proponents, encouraging its continued development against the instincts of many of his colleagues, who felt such a design to be unworkable. He also eventually came up with the idea of using more powerful shaped charges and less fissionable material to greatly increase the speed of "assembly".
When it turned out that there would not be enough uranium-235 to make more than one bomb, the implosive lens project was greatly expanded and von Neumann's idea was implemented. Implosion was the only method that could be used with the plutonium-239 that was available from the Hanford Site. He established the design of the explosive lenses required, but there remained concerns about "edge effects" and imperfections in the explosives. His calculations showed that implosion would work if it did not depart by more than 5% from spherical symmetry. After a series of failed attempts with models, this was achieved by George Kistiakowsky, and the construction of the Trinity bomb was completed in July 1945.
In a visit to Los Alamos in September 1944, von Neumann showed that the pressure increase from explosion shock wave reflection from solid objects was greater than previously believed if the angle of incidence of the shock wave was between 90° and some limiting angle. As a result, it was determined that the effectiveness of an atomic bomb would be enhanced with detonation some kilometers above the target, rather than at ground level.
Von Neumann was included in the target selection committee that was responsible for choosing the Japanese cities of Hiroshima and Nagasaki as the first targets of the atomic bomb. Von Neumann oversaw computations related to the expected size of the bomb blasts, estimated death tolls, and the distance above the ground at which the bombs should be detonated for optimum shock wave propagation. The cultural capital Kyoto was von Neumann's first choice, a selection seconded by Manhattan Project leader General Leslie Groves. However, this target was dismissed by Secretary of War Henry L. Stimson.
On July 16, 1945, von Neumann and numerous other Manhattan Project personnel were eyewitnesses to the first test of an atomic bomb detonation, which was code-named Trinity. The event was conducted as a test of the implosion method device, at the Alamogordo Bombing Range in New Mexico. Based on his observation alone, von Neumann estimated the test had resulted in a blast equivalent to about 5 kilotons of TNT, but Enrico Fermi produced a more accurate estimate of 10 kilotons by dropping scraps of torn-up paper as the shock wave passed his location and watching how far they scattered. The actual power of the explosion had been between 20 and 22 kilotons. It was in von Neumann's 1944 papers that the expression "kilotons" appeared for the first time.
Von Neumann continued unperturbed in his work and became, along with Edward Teller, one of those who sustained the hydrogen bomb project. He collaborated with Klaus Fuchs on further development of the bomb, and in 1946 the two filed a secret patent outlining a scheme for using a fission bomb to compress fusion fuel to initiate nuclear fusion. The Fuchs–von Neumann patent used radiation implosion, but not in the same way as is used in what became the final hydrogen bomb design, the Teller–Ulam design. Their work was, however, incorporated into the "George" shot of Operation Greenhouse, which was instructive in testing out concepts that went into the final design. The Fuchs–von Neumann work was passed on to the Soviet Union by Fuchs as part of his nuclear espionage, but it was not used in the Soviets' own, independent development of the Teller–Ulam design. The historian Jeremy Bernstein has pointed out that ironically, "John von Neumann and Klaus Fuchs, produced a brilliant invention in 1946 that could have changed the whole course of the development of the hydrogen bomb, but was not fully understood until after the bomb had been successfully made."
For his wartime services, von Neumann was awarded the Navy Distinguished Civilian Service Award in July 1946, and the Medal for Merit in October 1946.
Post-war work
In 1950, von Neumann became a consultant to the Weapons Systems Evaluation Group, whose function was to advise the Joint Chiefs of Staff and the United States Secretary of Defense on the development and use of new technologies. He also became an adviser to the Armed Forces Special Weapons Project, which was responsible for the military aspects of nuclear weapons. Over the following two years, he became a consultant across the US government: to the Central Intelligence Agency (CIA), a member of the influential General Advisory Committee of the Atomic Energy Commission, a consultant to the newly established Lawrence Livermore National Laboratory, and a member of the Scientific Advisory Group of the United States Air Force. During this time he became a "superstar" defense scientist at the Pentagon. His authority was considered infallible at the highest levels of the US government and military.
During several meetings of the advisory board of the US Air Force, von Neumann and Edward Teller predicted that by 1960 the US would be able to build a hydrogen bomb light enough to fit on top of a rocket. In 1953 Bernard Schriever, who was present at the meeting, paid a personal visit to von Neumann at Princeton to confirm this possibility. Schriever enlisted Trevor Gardner, who in turn visited von Neumann several weeks later to fully understand the future possibilities before beginning his campaign for such a weapon in Washington. Now either chairing or serving on several boards dealing with strategic missiles and nuclear weaponry, von Neumann was able to inject several crucial arguments regarding potential Soviet advancements in both these areas and in strategic defenses against American bombers into government reports to argue for the creation of ICBMs. Gardner on several occasions brought von Neumann to meetings with the US Department of Defense to discuss with various senior officials his reports. Several design decisions in these reports such as inertial guidance mechanisms would form the basis for all ICBMs thereafter. By 1954, von Neumann was also regularly testifying to various Congressional military subcommittees to ensure continued support for the ICBM program.
However, this was not enough. To have the ICBM program run at full throttle they needed direct action by the President of the United States. They convinced President Eisenhower in a direct meeting in July 1955, which resulted in a presidential directive on September 13, 1955. It stated that "there would be the gravest repercussions on the national security and on the cohesion of the free world" if the Soviet Union developed the ICBM before the US and therefore designated the ICBM project "a research and development program of the highest priority above all others." The Secretary of Defense was ordered to commence the project with "maximum urgency". Evidence would later show that the Soviets indeed were already testing their own intermediate-range ballistic missiles at the time. Von Neumann would continue to meet the President, including at his home in Gettysburg, Pennsylvania, and other high-level government officials as a key advisor on ICBMs until his death.
Atomic Energy Commission
In 1955, von Neumann became a commissioner of the Atomic Energy Commission (AEC), which at the time was the highest official position available to scientists in the government. (While his appointment formally required that he sever all his other consulting contracts, an exemption was made for von Neumann to continue working with several critical military committees after the Air Force and several key senators raised concerns.) He used this position to further the production of compact hydrogen bombs suitable for intercontinental ballistic missile (ICBM) delivery. He involved himself in correcting the severe shortage of tritium and lithium 6 needed for these weapons, and he argued against settling for the intermediate-range missiles that the Army wanted. He was adamant that H-bombs delivered deep into enemy territory by an ICBM would be the most effective weapon possible, and that the relative inaccuracy of the missile would not be a problem with an H-bomb. He said the Russians would probably be building a similar weapon system, which turned out to be the case. While Lewis Strauss was away in the second half of 1955 von Neumann took over as acting chairman of the commission.
In his final years before his death from cancer, von Neumann headed the United States government's top-secret ICBM committee, which would sometimes meet in his home. Its purpose was to decide on the feasibility of building an ICBM large enough to carry a thermonuclear weapon. Von Neumann had long argued that while the technical obstacles were sizable, they could be overcome. The SM-65 Atlas passed its first fully functional test in 1959, two years after his death. The more advanced Titan rockets were deployed in 1962. Both had been proposed in the ICBM committees von Neumann chaired. The feasibility of the ICBMs owed as much to improved, smaller warheads that did not have guidance or heat resistance issues as it did to developments in rocketry, and his understanding of the former made his advice invaluable.
Von Neumann entered government service primarily because he felt that, if freedom and civilization were to survive, it would have to be because the United States would triumph over totalitarianism from Nazism, Fascism and Soviet Communism. During a Senate committee hearing he described his political ideology as "violently anti-communist, and much more militaristic than the norm".
Personality
Work habits
Herman Goldstine commented on von Neumann's ability to intuit hidden errors and remember old material perfectly. When he had difficulties he would not labor on; instead, he would go home and sleep on it and come back later with a solution. This style, 'taking the path of least resistance', sometimes meant that he could go off on tangents. It also meant that if the difficulty was great from the very beginning, he would simply switch to another problem, not trying to find weak spots from which he could break through. At times he could be ignorant of the standard mathematical literature, finding it easier to rederive basic information he needed rather than chase references.
After World War II began, he became extremely busy with both academic and military commitments. His habit of not writing up talks or publishing results worsened. He did not find it easy to discuss a topic formally in writing unless it was already mature in his mind; if it was not, he would, in his own words, "develop the worst traits of pedantism and inefficiency".
Mathematical range
The mathematician Jean Dieudonné said that von Neumann "may have been the last representative of a once-flourishing and numerous group, the great mathematicians who were equally at home in pure and applied mathematics and who throughout their careers maintained a steady production in both directions". According to Dieudonné, his specific genius was in analysis and "combinatorics", with combinatorics being understood in a very wide sense that described his ability to organize and axiomatize complex works that previously seemed to have little connection with mathematics. His style in analysis followed the German school, based on foundations in linear algebra and general topology. While von Neumann had an encyclopedic background, his range in pure mathematics was not as wide as that of Poincaré, Hilbert or even Weyl: von Neumann never did significant work in number theory, algebraic topology, algebraic geometry or differential geometry. However, in applied mathematics his work equalled that of Gauss, Cauchy or Poincaré.
According to Wigner, "Nobody knows all science, not even von Neumann did. But as for mathematics, he contributed to every part of it except number theory and topology. That is, I think, something unique." Halmos noted that while von Neumann knew lots of mathematics, the most notable gaps were in algebraic topology and number theory; he recalled an incident where von Neumann failed to recognize the topological definition of a torus. Von Neumann admitted to Herman Goldstine that he had no facility at all in topology and he was never comfortable with it, with Goldstine later bringing this up when comparing him to Hermann Weyl, who he thought was deeper and broader.
In his biography of von Neumann, Salomon Bochner wrote that much of von Neumann's works in pure mathematics involved finite and infinite dimensional vector spaces, which at the time, covered much of the total area of mathematics. However he pointed out this still did not cover an important part of the mathematical landscape, in particular, anything that involved geometry "in the global sense", topics such as topology, differential geometry and harmonic integrals, algebraic geometry and other such fields. Von Neumann rarely worked in these fields and, as Bochner saw it, had little affinity for them.
In one of von Neumann's last articles, he lamented that pure mathematicians could no longer attain deep knowledge of even a fraction of the field. In the early 1940s, Ulam had concocted for him a doctoral-style examination to find weaknesses in his knowledge; von Neumann was unable to answer satisfactorily a question each in differential geometry, number theory, and algebra. They concluded that doctoral exams might have "little permanent meaning". However, when Weyl turned down an offer to write a history of mathematics of the 20th century, arguing that no one person could do it, Ulam thought von Neumann could have aspired to do so.
Preferred problem-solving techniques
Ulam remarked that most mathematicians could master one technique that they then used repeatedly, whereas von Neumann had mastered three:
A facility with the symbolic manipulation of linear operators;
An intuitive feeling for the logical structure of any new mathematical theory;
An intuitive feeling for the combinatorial superstructure of new theories.
Although he was commonly described as an analyst, he once classified himself as an algebraist, and his style often displayed a mix of algebraic technique and set-theoretical intuition. He loved obsessive detail and had no issues with excess repetition or overly explicit notation. An example of this was a paper of his on rings of operators, in which he extended the normal functional notation by wrapping arguments in additional layers of parentheses; the process was repeated several times, so that the final results were expressions nested many parentheses deep. The 1936 paper became known to students as "von Neumann's onion" because the equations "needed to be peeled before they could be digested". Overall, although his writings were clear and powerful, they were not clean or elegant. Although powerful technically, his primary concern was more with the clear and viable formation of fundamental issues and questions of science rather than just the solution of mathematical puzzles.
According to Ulam, von Neumann surprised physicists by doing dimensional estimates and algebraic computations in his head with a fluency Ulam likened to blindfold chess. His impression was that von Neumann analyzed physical situations by abstract logical deduction rather than concrete visualization.
Lecture style
Goldstine compared his lectures to being on glass, smooth and lucid. By comparison, Goldstine thought his scientific articles were written in a much harsher manner, and with much less insight. Halmos described his lectures as "dazzling", with his speech clear, rapid, precise and all encompassing. Like Goldstine, he also described how everything seemed "so easy and natural" in lectures but puzzling on later reflection. He was a quick speaker: Banesh Hoffmann found it very difficult to take notes, even in shorthand, and Albert Tucker said that people often had to ask von Neumann questions to slow him down so they could think through the ideas he was presenting. Von Neumann knew about this and was grateful for his audience telling him when he was going too quickly. Although he did spend time preparing for lectures, he rarely used notes, instead jotting down points of what he would discuss and for how long.
Eidetic memory
Von Neumann was also noted for his eidetic memory, particularly of the symbolic kind. Herman Goldstine recalled that von Neumann could, after reading a book or article once, quote it back verbatim, even years later.
Von Neumann was reportedly able to memorize the pages of telephone directories. He entertained friends by asking them to randomly call out page numbers; he then recited the names, addresses and numbers therein. Stanisław Ulam believed that von Neumann's memory was auditory rather than visual.
Mathematical quickness
Von Neumann's mathematical fluency, calculation speed, and general problem-solving ability were widely noted by his peers. Paul Halmos called his speed "awe-inspiring." Lothar Wolfgang Nordheim described him as the "fastest mind I ever met". Enrico Fermi told physicist Herbert L. Anderson: "You know, Herb, Johnny can do calculations in his head ten times as fast as I can! And I can do them ten times as fast as you can, Herb, so you can see how impressive Johnny is!" Edward Teller admitted that he "never could keep up with him", and Israel Halperin described trying to keep up as like riding a "tricycle chasing a racing car."
He had an unusual ability to solve novel problems quickly. George Pólya, whose lectures at ETH Zürich von Neumann attended as a student, said, "Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem, the chances were he'd come to me at the end of the lecture with the complete solution scribbled on a slip of paper." When George Dantzig brought von Neumann an unsolved problem in linear programming "as I would to an ordinary mortal", on which there had been no published literature, he was astonished when von Neumann said "Oh, that!", before offhandedly giving a lecture of over an hour, explaining how to solve the problem using the hitherto unconceived theory of duality.
A story about von Neumann's encounter with the famous fly puzzle has entered mathematical folklore. In this puzzle, two bicycles begin 20 miles apart, and each travels toward the other at 10 miles per hour until they collide; meanwhile, a fly travels continuously back and forth between the bicycles at 15 miles per hour until it is squashed in the collision. The questioner asks how far the fly traveled in total; the "trick" for a quick answer is to realize that the fly's individual transits do not matter, only that it has been traveling at 15 miles per hour for one hour. As Eugene Wigner tells it, Max Born posed the riddle to von Neumann. The other scientists to whom he had posed it had laboriously computed the distance, so when von Neumann was immediately ready with the correct answer of 15 miles, Born observed that he must have guessed the trick. "What trick?" von Neumann replied. "All I did was sum the geometric series."
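Both routes to the answer can be written out in a few lines (a sketch using the numbers from the puzzle above):

```latex
% Shortcut: the bicycles close the 20-mile gap at a combined 20 mph, so they
% collide after exactly 1 hour, during which the fly covers 15 mph * 1 h = 15 miles.
%
% Series: on each leg the fly and the oncoming bicycle close the current gap d
% at 15 + 10 = 25 mph, so the fly covers (15/25) d = (3/5) d, after which the
% gap has shrunk by the factor (15 - 10)/(15 + 10) = 1/5.  Summing the legs:
\[
  \text{distance}
  = \frac{3}{5}\cdot 20 \sum_{n=0}^{\infty}\left(\frac{1}{5}\right)^{\!n}
  = 12 \cdot \frac{1}{1 - \tfrac{1}{5}}
  = 15 \text{ miles}.
\]
```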
Self-doubts
Rota wrote that von Neumann had "deep-seated and recurring self-doubts". John L. Kelley reminisced in 1989 that "Johnny von Neumann has said that he will be forgotten while Kurt Gödel is remembered with Pythagoras, but the rest of us viewed Johnny with awe." Ulam suggests that some of his self-doubts with regard for his own creativity may have come from the fact he had not discovered several important ideas that others had, even though he was more than capable of doing so, giving the incompleteness theorems and Birkhoff's pointwise ergodic theorem as examples. Von Neumann had a virtuosity in following complicated reasoning and had supreme insights, yet he perhaps felt he did not have the gift for seemingly irrational proofs and theorems or intuitive insights. Ulam describes how during one of his stays at Princeton while von Neumann was working on rings of operators, continuous geometries and quantum logic he felt that von Neumann was not convinced of the importance of his work, and only when finding some ingenious technical trick or new approach did he take some pleasure in it. However, according to Rota, von Neumann still had an "incomparably stronger technique" compared to his friend, despite describing Ulam as the more creative mathematician.
Legacy
Accolades
Nobel Laureate Hans Bethe said "I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man". Edward Teller observed "von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us." Peter Lax wrote "Von Neumann was addicted to thinking, and in particular to thinking about mathematics". Eugene Wigner said, "He understood mathematical problems not only in their initial aspect, but in their full complexity." Claude Shannon called him "the smartest person I've ever met", a common opinion. Jacob Bronowski wrote "He was the cleverest man I ever knew, without exception. He was a genius." Due to his wide reaching influence and contributions to many fields, von Neumann is widely considered a polymath.
Wigner noted von Neumann's extraordinary mind, describing it as faster than that of anyone he knew.
"It seems fair to say that if the influence of a scientist is interpreted broadly enough to include impact on fields beyond science proper, then John von Neumann was probably the most influential mathematician who ever lived," wrote Miklós Rédei. Peter Lax commented that von Neumann would have won a Nobel Prize in Economics had he lived longer, and that "if there were Nobel Prizes in computer science and mathematics, he would have been honored by these, too." Rota writes that "he was the first to have a vision of the boundless possibilities of computing, and he had the resolve to gather the considerable intellectual and engineering resources that led to the construction of the first large computer" and consequently that "No other mathematician in this century has had as deep and lasting an influence on the course of civilization." He is widely regarded as one of the greatest and most influential mathematicians and scientists of the 20th century.
Neurophysiologist Leon Harmon described him in a similar manner, calling him the only "true genius" he had ever met: "von Neumann's mind was all-encompassing. He could solve problems in any domain. ... And his mind was always working, always restless." While consulting for non-academic projects von Neumann's combination of outstanding scientific ability and practicality gave him a high credibility with military officers, engineers, and industrialists that no other scientist could match. In nuclear missilery he was considered "the clearly dominant advisory figure" according to Herbert York. Economist Nicholas Kaldor said he was "unquestionably the nearest thing to a genius I have ever encountered." Likewise, Paul Samuelson wrote, "We economists are grateful for von Neumann's genius. It is not for us to calculate whether he was a Gauss, or a Poincaré, or a Hilbert. He was the incomparable Johnny von Neumann. He darted briefly into our domain and it has never been the same since."
Honors and awards
Events and awards named in recognition of von Neumann include the annual John von Neumann Theory Prize of the Institute for Operations Research and the Management Sciences, IEEE John von Neumann Medal, and the John von Neumann Prize of the Society for Industrial and Applied Mathematics. Both the crater von Neumann on the Moon and the asteroid 22824 von Neumann are named in his honor.
Von Neumann received awards including the Medal for Merit in 1947, the Medal of Freedom in 1956, and the Enrico Fermi Award also in 1956. He was elected a member of multiple honorary societies, including the American Academy of Arts and Sciences and the National Academy of Sciences, and he held eight honorary doctorates. On May 4, 2005, the United States Postal Service issued the American Scientists commemorative postage stamp series, designed by artist Victor Stabin. The scientists depicted were von Neumann, Barbara McClintock, Josiah Willard Gibbs, and Richard Feynman.
John von Neumann University was established in Kecskemét, Hungary, in 2016 as a successor to Kecskemét College.
Selected works
Von Neumann's first published paper was On the position of zeroes of certain minimum polynomials, co-authored with Michael Fekete and published when von Neumann was 18. At 19, his solo paper On the introduction of transfinite numbers was published. He expanded his second solo paper, An axiomatization of set theory, to create his PhD thesis. His first book, Mathematical Foundations of Quantum Mechanics, was published in 1932. Following this, von Neumann switched from publishing in German to publishing in English, and his publications became more selective and expanded beyond pure mathematics. His 1942 Theory of Detonation Waves contributed to military research, his work on computing began with the unpublished 1946 On the principles of large scale computing machines, and his publications on weather prediction began with the 1950 Numerical integration of the barotropic vorticity equation. Alongside his later papers were informal essays targeted at colleagues and the general public, such as his 1947 The Mathematician, described as a "farewell to pure mathematics", and his 1955 Can we survive technology?, which considered a bleak future including nuclear warfare and deliberate climate change. His complete works have been compiled into a six-volume set.
See also
List of pioneers in computer science
Teapot Committee
The MANIAC, 2023 book about von Neumann
Adventures of a Mathematician, a biopic about Stanislaw Ulam that also features John von Neumann.
Notes
References
Further reading
Books
Popular periodicals
Journals
External links
A more or less complete bibliography of publications of John von Neumann by Nelson H. F. Beebe
von Neumann's profile at Google Scholar
Oral History Project - The Princeton Mathematics Community in the 1930s, contains many interviews that describe contact and anecdotes of von Neumann and others at the Princeton University and Institute for Advanced Study community.
Oral history interviews (from the Charles Babbage Institute, University of Minnesota) with: Alice R. Burks and Arthur W. Burks; Eugene P. Wigner; and Nicholas C. Metropolis.
zbMATH profile
Query for "von neumann" on the digital repository of the Institute for Advanced Study.
Von Neumann vs. Dirac on Quantum Theory and Mathematical Rigor – from Stanford Encyclopedia of Philosophy
Quantum Logic and Probability Theory - from Stanford Encyclopedia of Philosophy
FBI files on John von Neumann released via FOI
Biographical video by David Brailsford (John Dunford Professor Emeritus of computer science at the University of Nottingham)
John von Neumann: Prophet of the 21st Century 2013 Arte documentary on John von Neumann and his influence in the modern world (in German and French with English subtitles).
John von Neumann - A Documentary 1966 detailed documentary by the Mathematical Association of America containing remarks by several of his colleagues including Ulam, Wigner, Halmos, Morgenstern, Bethe, Goldstine, Strauss and Teller.
1903 births
1957 deaths
20th-century American mathematicians
20th-century American physicists
Algebraists
American anti-communists
American computer scientists
American nuclear physicists
American operations researchers
American people of Hungarian-Jewish descent
American Roman Catholics
Hungarian physicists
American systems scientists
Mathematicians from Austria-Hungary
Ballistics experts
Burials at Princeton Cemetery
Deaths from cancer in Washington, D.C.
Carl-Gustaf Rossby Research Medal recipients
Cellular automatists
Computer designers
Converts to Roman Catholicism from Judaism
Cyberneticists
Elected Members of the International Statistical Institute
Enrico Fermi Award recipients
ETH Zurich alumni
Fasori Gimnázium alumni
Fellows of the American Physical Society
Fellows of the Econometric Society
Fluid dynamicists
Functional analysts
Game theorists
Hungarian anti-communists
Hungarian computer scientists
Hungarian emigrants to the United States
20th-century Hungarian inventors
20th-century Hungarian Jews
20th-century Hungarian mathematicians
20th-century Hungarian physicists
Hungarian nobility
Hungarian nuclear physicists
Hungarian Roman Catholics
Institute for Advanced Study faculty
Jewish anti-communists
Jewish American physicists
Lattice theorists
Manhattan Project people
Mathematical economists
Mathematical physicists
Mathematicians from Budapest
Measure theorists
Medal for Merit recipients
Members of the American Philosophical Society
Members of the Lincean Academy
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Mental calculators
Monte Carlo methodologists
Naturalized citizens of the United States
Numerical analysts
Oak Ridge National Laboratory people
Operations researchers
Operator theorists
People from Pest, Hungary
Presidents of the American Mathematical Society
Princeton University faculty
Probability theorists
Quantum physicists
RAND Corporation people
Recipients of the Medal of Freedom
Researchers of artificial life
American set theorists
Theoretical physicists
Academic staff of the University of Göttingen
John
Yiddish-speaking people
Academic staff of the University of Hamburg
Recipients of the Navy Distinguished Civilian Service Award | John von Neumann | [
"Physics",
"Chemistry",
"Mathematics"
] | 18,263 | [
"Algebraists",
"Theoretical physics",
"Quantum physicists",
"Quantum mechanics",
"Game theory",
"Fluid dynamicists",
"Game theorists",
"Theoretical physicists",
"Algebra",
"Fluid dynamics"
] |
15,944 | https://en.wikipedia.org/wiki/Jet%20engine | A jet engine is a type of reaction engine, discharging a fast-moving jet of heated gas (usually air) that generates thrust by jet propulsion. While this broad definition may include rocket, water jet, and hybrid propulsion, the term typically refers to an internal combustion air-breathing jet engine such as a turbojet, turbofan, ramjet, pulse jet, or scramjet. In general, jet engines are internal combustion engines.
Air-breathing jet engines typically feature a rotating air compressor powered by a turbine, with the leftover power providing thrust through the propelling nozzle—this process is known as the Brayton thermodynamic cycle. Jet aircraft use such engines for long-distance travel. Early jet aircraft used turbojet engines that were relatively inefficient for subsonic flight. Most modern subsonic jet aircraft use more complex high-bypass turbofan engines. They give higher speed and greater fuel efficiency than piston and propeller aeroengines over long distances. A few air-breathing engines made for high-speed applications (ramjets and scramjets) use the ram effect of the vehicle's speed instead of a mechanical compressor.
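For orientation, the thermal efficiency of the ideal (air-standard) Brayton cycle depends only on the overall pressure ratio; the relation below is a textbook idealization, not a model of any particular engine:

```latex
% Thermal efficiency of the ideal (air-standard) Brayton cycle as a function
% of the overall pressure ratio r_p and the ratio of specific heats gamma:
\[
  \eta_{\text{th}} = 1 - \left(\frac{1}{r_p}\right)^{(\gamma - 1)/\gamma}.
\]
% For example, with r_p = 30 and gamma = 1.4 (air), eta_th is roughly 0.62,
% which is one reason high overall pressure ratios are sought in modern engines.
```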
The thrust of a typical jetliner engine went from about 5,000 lbf (22 kN) (de Havilland Ghost turbojet) in the 1950s to about 115,000 lbf (510 kN) (General Electric GE90 turbofan) in the 1990s, and their reliability went from 40 in-flight shutdowns per 100,000 engine flight hours to less than 1 per 100,000 in the late 1990s. This, combined with greatly decreased fuel consumption, permitted routine transatlantic flight by twin-engined airliners by the turn of the century, where previously a similar journey would have required multiple fuel stops.
History
The principle of the jet engine is not new; however, the technical advances necessary to make the idea work did not come to fruition until the 20th century.
A rudimentary demonstration of jet power dates back to the aeolipile, a device described by Hero of Alexandria in 1st-century Egypt. This device directed steam power through two nozzles to cause a sphere to spin rapidly on its axis. It was seen as a curiosity. Meanwhile, practical applications of the turbine can be seen in the water wheel and the windmill.
Historians have further traced the theoretical origin of the principles of jet engines to traditional Chinese firework and rocket propulsion systems. Such devices' use for flight is documented in the story of Ottoman soldier Lagâri Hasan Çelebi, who reportedly achieved flight using a cone-shaped rocket in 1633.
The earliest attempts at airbreathing jet engines were hybrid designs in which an external power source first compressed air, which was then mixed with fuel and burned for jet thrust. The Italian Caproni Campini N.1, and the Japanese Tsu-11 engine intended to power Ohka kamikaze planes towards the end of World War II were unsuccessful.
Even before the start of World War II, engineers were beginning to realize that engines driving propellers were approaching limits due to issues related to propeller efficiency, which declined as blade tips approached the speed of sound. If aircraft performance were to increase beyond such a barrier, a different propulsion mechanism was necessary. This was the motivation behind the development of the gas turbine engine, the most common form of jet engine.
The key to a practical jet engine was the gas turbine, extracting power from the engine itself to drive the compressor. The gas turbine was not a new idea: the patent for a stationary turbine was granted to John Barber in England in 1791. The first gas turbine to successfully run self-sustaining was built in 1903 by Norwegian engineer Ægidius Elling. Such engines did not reach manufacture due to issues of safety, reliability, weight and, especially, sustained operation.
The first patent for using a gas turbine to power an aircraft was filed in 1921 by Maxime Guillaume. His engine was an axial-flow turbojet, but was never constructed, as it would have required considerable advances over the state of the art in compressors. Alan Arnold Griffith published An Aerodynamic Theory of Turbine Design in 1926 leading to experimental work at the RAE.
In 1928, RAF College Cranwell cadet Frank Whittle formally submitted his ideas for a turbojet to his superiors. In October 1929, he developed his ideas further. On 16 January 1930, in England, Whittle submitted his first patent (granted in 1932). The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors were made possible by ideas from A.A.Griffith in a seminal paper in 1926 ("An Aerodynamic Theory of Turbine Design"). Whittle would later concentrate on the simpler centrifugal compressor only. Whittle was unable to interest the government in his invention, and development continued at a slow pace.
In Spain, pilot and engineer Virgilio Leret Ruiz was granted a patent for a jet engine design in March 1935. Republican president Manuel Azaña arranged for initial construction at the Hispano-Suiza aircraft factory in Madrid in 1936, but Leret was executed months later by Francoist Moroccan troops after unsuccessfully defending his seaplane base on the first days of the Spanish Civil War. His plans, hidden from Francoists, were secretly given to the British embassy in Madrid a few years later by his wife, Carlota O'Neill, upon her release from prison.
In 1935, Hans von Ohain started work on a similar design to Whittle's in Germany, both compressor and turbine being radial, on opposite sides of the same disc, initially unaware of Whittle's work. Von Ohain's first device was strictly experimental and could run only under external power, but he was able to demonstrate the basic concept. Ohain was then introduced to Ernst Heinkel, one of the larger aircraft industrialists of the day, who immediately saw the promise of the design. Heinkel had recently purchased the Hirth engine company, and Ohain and his master machinist Max Hahn were set up there as a new division of the Hirth company. They had their first HeS 1 centrifugal engine running by September 1937. Unlike Whittle's design, Ohain used hydrogen as fuel, supplied under external pressure. Their subsequent designs culminated in the gasoline-fuelled HeS 3, which was fitted to Heinkel's simple and compact He 178 airframe and flown by Erich Warsitz in the early morning of August 27, 1939, from Rostock-Marienehe aerodrome, an impressively short time for development. The He 178 was the world's first jet plane. Heinkel applied for a US patent covering the Aircraft Power Plant by Hans Joachim Pabst von Ohain on May 31, 1939; patent number US2256198, with M Hahn referenced as inventor. Von Ohain's design, an axial-flow engine, as opposed to Whittle's centrifugal flow engine, was eventually adopted by most manufacturers by the 1950s.
Austrian Anselm Franz of Junkers' engine division (Junkers Motoren or "Jumo") introduced the axial-flow compressor in their jet engine. Jumo was assigned the next engine number in the RLM 109-0xx numbering sequence for gas turbine aircraft powerplants, "004", and the result was the Jumo 004 engine. After many lesser technical difficulties were solved, mass production of this engine started in 1944 as a powerplant for the world's first jet-fighter aircraft, the Messerschmitt Me 262 (and later the world's first jet-bomber aircraft, the Arado Ar 234). A variety of reasons conspired to delay the engine's availability, causing the fighter to arrive too late to improve Germany's position in World War II, however this was the first jet engine to be used in service.
Meanwhile, in Britain the Gloster E28/39 had its maiden flight on 15 May 1941 and the Gloster Meteor finally entered service with the RAF in July 1944. These were powered by turbojet engines from Power Jets Ltd., set up by Frank Whittle. The first two operational turbojet aircraft, the Messerschmitt Me 262 and then the Gloster Meteor entered service within three months of each other in 1944; the Me 262 in April and the Gloster Meteor in July. The Meteor only saw around 15 aircraft enter World War II action, while up to 1400 Me 262 were produced, with 300 entering combat, delivering the first ground attacks and air combat victories of jet planes.
Following the end of the war the German jet aircraft and jet engines were extensively studied by the victorious allies and contributed to work on early Soviet and US jet fighters. The legacy of the axial-flow engine is seen in the fact that practically all jet engines on fixed-wing aircraft have had some inspiration from this design.
By the 1950s, the jet engine was almost universal in combat aircraft, with the exception of cargo, liaison and other specialty types. By this point, some of the British designs were already cleared for civilian use, and had appeared on early models like the de Havilland Comet and Avro Canada Jetliner. By the 1960s, all large civilian aircraft were also jet powered, leaving the piston engine in low-cost niche roles such as cargo flights.
The efficiency of turbojet engines was still rather worse than that of piston engines, but by the 1970s, with the advent of high-bypass turbofan jet engines (an innovation not foreseen by early commentators such as Edgar Buckingham, who had regarded flight at such high speeds and altitudes as absurd), fuel efficiency was about the same as the best piston and propeller engines.
Uses
Jet engines power jet aircraft, cruise missiles and unmanned aerial vehicles. In the form of rocket engines they power model rocketry, spaceflight, and military missiles.
Jet engines have propelled high speed cars, particularly drag racers, with the all-time record held by a rocket car. A turbofan powered car, ThrustSSC, currently holds the land speed record.
Jet engine designs are frequently modified for non-aircraft applications, as industrial gas turbines or marine powerplants. These are used in electrical power generation, for powering water, natural gas, or oil pumps, and providing propulsion for ships and locomotives. Industrial gas turbines can create up to 50,000 shaft horsepower. Many of these engines are derived from older military turbojets such as the Pratt & Whitney J57 and J75 models. There is also a derivative of the P&W JT8D low-bypass turbofan that creates up to 35,000 horsepower.
Jet engines are also sometimes developed into, or share certain components such as engine cores, with turboshaft and turboprop engines, which are forms of gas turbine engines that are typically used to power helicopters and some propeller-driven aircraft.
Types of jet engine
There are a large number of different types of jet engines, all of which achieve forward thrust from the principle of jet propulsion.
Airbreathing
Commonly aircraft are propelled by airbreathing jet engines. Most airbreathing jet engines that are in use are turbofan jet engines, which give good efficiency at speeds just below the speed of sound.
Turbojet
A turbojet engine is a gas turbine engine that works by compressing air with an inlet and a compressor (axial, centrifugal, or both), mixing fuel with the compressed air, burning the mixture in the combustor, and then passing the hot, high pressure air through a turbine and a nozzle. The compressor is powered by the turbine, which extracts energy from the expanding gas passing through it. The engine converts internal energy in the fuel to increased momentum of the gas flowing through the engine, producing thrust. All the air entering the compressor is passed through the combustor, and turbine, unlike the turbofan engine described below.
Turbofan
Turbofans differ from turbojets in that they have an additional fan at the front of the engine, which accelerates air in a duct bypassing the core gas turbine engine. Turbofans are the dominant engine type for medium and long-range airliners.
Turbofans are usually more efficient than turbojets at subsonic speeds, but at high speeds their large frontal area generates more drag. Therefore, in supersonic flight, and in military and other aircraft where other considerations have a higher priority than fuel efficiency, fans tend to be smaller or absent.
Because of these distinctions, turbofan engine designs are often categorized as low-bypass or high-bypass, depending upon the amount of air which bypasses the core of the engine. Low-bypass turbofans have a bypass ratio of around 2:1 or less.
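The advantage of accelerating a larger mass of air more gently can be sketched with the simple momentum-theory relations below (a generic illustration; the numbers are invented and do not describe any specific engine):

```python
# Sketch: simple momentum-theory thrust and Froude propulsive efficiency.
# eta_p = 2 / (1 + v_jet / v_flight): a slower, larger jet (turbofan-like)
# wastes less kinetic energy in the exhaust than a fast, narrow jet.
def thrust(mass_flow, v_jet, v_flight):
    return mass_flow * (v_jet - v_flight)      # N, ignoring fuel mass and pressure terms

def propulsive_efficiency(v_jet, v_flight):
    return 2.0 / (1.0 + v_jet / v_flight)

v0 = 250.0                                      # m/s, roughly airliner cruise speed
# The same thrust from a "turbojet-like" and a "turbofan-like" exhaust:
print(thrust(100.0, 900.0, v0), propulsive_efficiency(900.0, v0))   # 65000 N, ~0.43
print(thrust(650.0, 350.0, v0), propulsive_efficiency(350.0, v0))   # 65000 N, ~0.83
```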
Propfan
A propfan engine is a type of airbreathing jet engine that combines aspects of the turboprop and the turbofan. Its design consists of a central gas turbine which drives open-air contra-rotating propellers. Unlike turboprop engines, in which the propeller and the engine are considered two separate products, the propfan's gas generator and its unshrouded propeller module are heavily integrated and are considered to be a single product. Additionally, the propfan's short, heavily twisted variable-pitch blades closely resemble the ducted fan blades of turbofan engines.
Propfans are designed to offer the speed and performance of turbofan engines with the fuel efficiency of turboprops. However, owing to low fuel costs and concerns about cabin noise, early propfan projects were abandoned. Very few aircraft have flown with propfans, the Antonov An-70 being the only aircraft to have flown powered solely by propfan engines.
Advanced technology engine
The term Advanced technology engine refers to the modern generation of jet engines. The principle is that a turbine engine will function more efficiently if the various sets of turbines can revolve at their individual optimum speeds, instead of at the same speed. The true advanced technology engine has a triple spool, meaning that instead of having a single drive shaft, there are three, in order that the three sets of blades may revolve at different speeds. An interim state is a twin-spool engine, allowing only two different speeds for the turbines.
Ram compression
Ram compression jet engines are airbreathing engines similar to gas turbine engines in so far as they both use the Brayton cycle. Gas turbine and ram compression engines differ, however, in how they compress the incoming airflow. Whereas gas turbine engines use axial or centrifugal compressors to compress incoming air, ram engines rely only on air compressed in the inlet or diffuser. A ram engine thus requires a substantial initial forward airspeed before it can function. Ramjets are considered the simplest type of air breathing jet engine because they have no moving parts in the engine proper, only in the accessories.
Scramjets differ mainly in the fact that the air does not slow to subsonic speeds. Rather, they use supersonic combustion. They are efficient at even higher speed. Very few have been built or flown.
Non-continuous combustion
Other types of jet propulsion
Rocket
The rocket engine uses the same basic physical principles of thrust as a form of reaction engine, but is distinct from the jet engine in that it does not require atmospheric air to provide oxygen; the rocket carries all components of the reaction mass. However some definitions treat it as a form of jet propulsion.
Because rockets do not breathe air, this allows them to operate at arbitrary altitudes and in space.
This type of engine is used for launching satellites, space exploration and crewed access, and permitted landing on the Moon in 1969.
Rocket engines are used for high altitude flights, or anywhere where very high accelerations are needed since rocket engines themselves have a very high thrust-to-weight ratio.
However, the high exhaust speed and the heavier, oxidizer-rich propellant results in far more propellant use than turbofans. Even so, at extremely high speeds they become energy-efficient.
An approximate equation for the net thrust of a rocket engine is:

F_N = ṁ · g0 · Isp,vac - A_e · p

where F_N is the net thrust, Isp,vac is the (vacuum) specific impulse, g0 is standard gravity, ṁ is the propellant flow in kg/s, A_e is the cross-sectional area at the exit of the exhaust nozzle, and p is the atmospheric pressure.
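As a rough numerical illustration of this thrust equation, the short sketch below evaluates it at sea level and in vacuum; the propellant flow, specific impulse and exit area are made-up placeholder values, not data for any particular engine.

```python
# Approximate net thrust of a rocket engine:
#   F_N = mdot * g0 * Isp_vac - A_e * p_ambient
# All numbers below are illustrative placeholders, not data for a real engine.

G0 = 9.80665  # standard gravity, m/s^2

def rocket_net_thrust(mdot_kg_s, isp_vac_s, exit_area_m2, p_ambient_pa):
    """Net thrust in newtons from vacuum specific impulse and ambient back-pressure."""
    return mdot_kg_s * G0 * isp_vac_s - exit_area_m2 * p_ambient_pa

mdot = 250.0      # propellant flow, kg/s (assumed)
isp_vac = 340.0   # vacuum specific impulse, s (assumed)
a_exit = 1.0      # nozzle exit area, m^2 (assumed)

print(rocket_net_thrust(mdot, isp_vac, a_exit, 101_325.0))  # at sea level
print(rocket_net_thrust(mdot, isp_vac, a_exit, 0.0))        # in vacuum: thrust is higher
```

The same expression also shows why rocket thrust improves slightly with altitude: the back-pressure term A_e · p shrinks as the atmospheric pressure falls.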
Hybrid
Combined-cycle engines simultaneously use two or more different principles of jet propulsion.
Water jet
A water jet, or pump-jet, is a marine propulsion system that uses a jet of water. The mechanical arrangement may be a ducted propeller with nozzle, or a centrifugal compressor and nozzle. The pump-jet must be driven by a separate engine such as a Diesel or gas turbine.
General physical principles
All jet engines are reaction engines that generate thrust by emitting a jet of fluid rearwards at relatively high speed. The forces on the inside of the engine needed to create this jet give a strong thrust on the engine which pushes the craft forwards.
Jet engines make their jet from propellant stored in tanks that are attached to the engine (as in a 'rocket') as well as in duct engines (those commonly used on aircraft) by ingesting an external fluid (very typically air) and expelling it at higher speed.
Propelling nozzle
A propelling nozzle produces a high velocity exhaust jet. Propelling nozzles turn internal and pressure energy into high velocity kinetic energy. The total pressure and temperature don't change through the nozzle but their static values drop as the gas speeds up.
The velocity of the air entering the nozzle is low, about Mach 0.4, a prerequisite for minimizing pressure losses in the duct leading to the nozzle. The temperature entering the nozzle may be as low as sea level ambient for a fan nozzle in the cold air at cruise altitudes. It may be as high as the 1000 Kelvin exhaust gas temperature for a supersonic afterburning engine or 2200 K with afterburner lit. The pressure entering the nozzle may vary from 1.5 times the pressure outside the nozzle, for a single stage fan, to 30 times for the fastest manned aircraft at Mach 3+.
Convergent nozzles are only able to accelerate the gas up to local sonic (Mach 1) conditions. To reach high flight speeds, even greater exhaust velocities are required, and so a convergent-divergent nozzle is needed on high-speed aircraft.
The engine thrust is highest if the static pressure of the gas reaches the ambient value as it leaves the nozzle. This only happens if the nozzle exit area is the correct value for the nozzle pressure ratio (npr). Since the npr changes with engine thrust setting and flight speed this is seldom the case. Also at supersonic speeds the divergent area is less than required to give complete internal expansion to ambient pressure as a trade-off with external body drag. Whitford gives the F-16 as an example. Other underexpanded examples were the XB-70 and SR-71.
The nozzle size, together with the area of the turbine nozzles, determines the operating pressure of the compressor.
Thrust
Energy efficiency relating to aircraft jet engines
This overview highlights where energy losses occur in complete jet aircraft powerplants or engine installations.
A jet engine at rest, as on a test stand, sucks in fuel and generates thrust. How well it does this is judged by how much fuel it uses and what force is required to restrain it. This is a measure of its efficiency. If something deteriorates inside the engine (known as performance deterioration) it will be less efficient and this will show when the fuel produces less thrust. If a change is made to an internal part which allows the air/combustion gases to flow more smoothly the engine will be more efficient and use less fuel. A standard definition is used to assess how different things change engine efficiency and also to allow comparisons to be made between different engines. This definition is called specific fuel consumption, or how much fuel is needed to produce one unit of thrust. For example, it will be known for a particular engine design that if some bumps in a bypass duct are smoothed out the air will flow more smoothly giving a pressure loss reduction of x% and y% less fuel will be needed to get the take-off thrust, for example. This understanding comes under the engineering discipline Jet engine performance. How efficiency is affected by forward speed and by supplying energy to aircraft systems is mentioned later.
The efficiency of the engine is controlled primarily by the operating conditions inside the engine which are the pressure produced by the compressor and the temperature of the combustion gases at the first set of rotating turbine blades. The pressure is the highest air pressure in the engine. The turbine rotor temperature is not the highest in the engine but is the highest at which energy transfer takes place (higher temperatures occur in the combustor). The above pressure and temperature are shown on a Thermodynamic cycle diagram.
The efficiency is further modified by how smoothly the air and the combustion gases flow through the engine, how well the flow is aligned (known as incidence angle) with the moving and stationary passages in the compressors and turbines. Non-optimum angles, as well as non-optimum passage and blade shapes can cause thickening and separation of Boundary layers and formation of Shock waves. It is important to slow the flow (lower speed means less pressure losses or Pressure drop) when it travels through ducts connecting the different parts. How well the individual components contribute to turning fuel into thrust is quantified by measures like efficiencies for the compressors, turbines and combustor and pressure losses for the ducts. These are shown as lines on a Thermodynamic cycle diagram.
The engine efficiency, or thermal efficiency, known as η_th, is dependent on the Thermodynamic cycle parameters, maximum pressure and temperature, on the component efficiencies of the compressor, combustor and turbine, and on duct pressure losses.
The engine needs compressed air for itself just to run successfully. This air comes from its own compressor and is called secondary air. It does not contribute to making thrust so makes the engine less efficient. It is used to preserve the mechanical integrity of the engine, to stop parts overheating and to prevent oil escaping from bearings for example. Only some of this air taken from the compressors returns to the turbine flow to contribute to thrust production. Any reduction in the amount needed improves the engine efficiency. Again, it will be known for a particular engine design that a reduced requirement for cooling flow of x% will reduce the specific fuel consumption by y%. In other words, less fuel will be required to give take-off thrust, for example. The engine is more efficient.
All of the above considerations are basic to the engine running on its own and, at the same time, doing nothing useful, i.e. it is not moving an aircraft or supplying energy for the aircraft's electrical, hydraulic and air systems. In the aircraft the engine gives away some of its thrust-producing potential, or fuel, to power these systems. These requirements, which cause installation losses, reduce its efficiency. It is using some fuel that does not contribute to the engine's thrust.
Finally, when the aircraft is flying the propelling jet itself contains wasted kinetic energy after it has left the engine. This is quantified by the term propulsive, or Froude, efficiency and may be reduced by redesigning the engine to give it bypass flow and a lower speed for the propelling jet, for example as a turboprop or turbofan engine. At the same time forward speed increases the η_th by increasing the Overall pressure ratio.
The overall efficiency of the engine at flight speed is defined as η_o = η_p · η_th.
The η_th at flight speed depends on how well the intake compresses the air before it is handed over to the engine compressors. The intake compression ratio, which can be as high as 32:1 at Mach 3, adds to that of the engine compressor to give the Overall pressure ratio and η_th for the Thermodynamic cycle. How well it does this is defined by its pressure recovery or measure of the losses in the intake. Mach 3 manned flight has provided an interesting illustration of how these losses can increase dramatically in an instant. The North American XB-70 Valkyrie and Lockheed SR-71 Blackbird at Mach 3 each had pressure recoveries of about 0.8, due to relatively low losses during the compression process, i.e. through systems of multiple shocks. During an 'unstart' the efficient shock system would be replaced by a very inefficient single shock beyond the inlet and an intake pressure recovery of about 0.3 and a correspondingly low pressure ratio.
The propelling nozzle at speeds above about Mach 2 usually has extra internal thrust losses because the exit area is not big enough as a trade-off with external afterbody drag.
Although a bypass engine improves propulsive efficiency it incurs losses of its own inside the engine itself. Machinery has to be added to transfer energy from the gas generator to a bypass airflow. The low loss from the propelling nozzle of a turbojet is added to with extra losses due to inefficiencies in the added turbine and fan. These may be included in a transmission, or transfer, efficiency . However, these losses are more than made up by the improvement in propulsive efficiency. There are also extra pressure losses in the bypass duct and an extra propelling nozzle.
With the advent of turbofans with their loss-making machinery, what goes on inside the engine has been separated by Bennett, for example, between gas generator and transfer machinery, giving an overall engine efficiency that is the product of a gas-generator efficiency and a transfer efficiency.
The energy efficiency (η_o) of jet engines installed in vehicles has two main components:
propulsive efficiency (η_p): how much of the energy of the jet ends up in the vehicle body rather than being carried away as kinetic energy of the jet.
cycle efficiency (η_th): how efficiently the engine can accelerate the jet
Even though overall energy efficiency is η_o = η_p · η_th,
for all jet engines the propulsive efficiency is highest as the exhaust jet velocity gets closer to the vehicle speed as this gives the smallest residual kinetic energy. For an airbreathing engine an exhaust velocity equal to the vehicle velocity, or a v_e/v equal to one, gives zero thrust with no net momentum change. The formula for air-breathing engines moving at speed v with an exhaust velocity v_e, and neglecting fuel flow, is:
η_p = 2 / (1 + v_e/v)
And for a rocket:
η_p = 2 (v/v_e) / (1 + (v/v_e)²)
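The two propulsive-efficiency formulas above can be sketched directly in code; the velocities used are arbitrary illustrative values.

```python
# Propulsive (Froude) efficiency, neglecting fuel flow:
#   air-breathing engine: eta_p = 2 / (1 + v_e / v)
#   rocket:               eta_p = 2 * (v / v_e) / (1 + (v / v_e) ** 2)
# v = vehicle speed, v_e = exhaust velocity (assumed values below).

def eta_p_airbreathing(v, v_e):
    return 2.0 / (1.0 + v_e / v)

def eta_p_rocket(v, v_e):
    r = v / v_e
    return 2.0 * r / (1.0 + r * r)

# Efficiency is highest when the exhaust velocity is close to the vehicle speed.
for v_e in (300.0, 600.0, 900.0):               # exhaust velocities, m/s (assumed)
    print(v_e, round(eta_p_airbreathing(250.0, v_e), 3))

print(round(eta_p_rocket(3000.0, 4500.0), 3))   # rocket example, m/s (assumed)
```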
In addition to propulsive efficiency, another factor is cycle efficiency; a jet engine is a form of heat engine. Heat engine efficiency is determined by the ratio of temperatures reached in the engine to that exhausted at the nozzle. This has improved constantly over time as new materials have been introduced to allow higher maximum cycle temperatures. For example, composite materials, combining metals with ceramics, have been developed for HP turbine blades, which run at the maximum cycle temperature. The efficiency is also limited by the overall pressure ratio that can be achieved. Cycle efficiency is highest in rocket engines (~60+%), as they can achieve extremely high combustion temperatures. Cycle efficiency in turbojets and similar engines is nearer to 30%, due to much lower peak cycle temperatures.
The combustion efficiency of most aircraft gas turbine engines at sea level takeoff conditions is almost 100%. It decreases nonlinearly to 98% at altitude cruise conditions. The air-fuel ratio ranges from 50:1 to 130:1. For any type of combustion chamber there is a rich and a weak limit to the air-fuel ratio, beyond which the flame is extinguished. The range of air-fuel ratio between the rich and weak limits is reduced with an increase of air velocity. If the increasing air mass flow reduces the fuel ratio below a certain value, flame extinction occurs.
Consumption of fuel or propellant
A closely related (but different) concept to energy efficiency is the rate of consumption of propellant mass. Propellant consumption in jet engines is measured by specific fuel consumption, specific impulse, or effective exhaust velocity. They all measure the same thing. Specific impulse and effective exhaust velocity are strictly proportional, whereas specific fuel consumption is inversely proportional to the others.
For air-breathing engines such as turbojets, energy efficiency and propellant (fuel) efficiency are much the same thing, since the propellant is a fuel and the source of energy. In rocketry, the propellant is also the exhaust, and this means that a high energy propellant gives better propellant efficiency but can in some cases actually give lower energy efficiency.
It can be seen in the table (just below) that the subsonic turbofans such as General Electric's CF6 turbofan use a lot less fuel to generate thrust for a second than did the Concorde's Rolls-Royce/Snecma Olympus 593 turbojet. However, since energy is force times distance and the distance per second was greater for the Concorde, the actual power generated by the engine for the same amount of fuel was higher for the Concorde at Mach 2 than the CF6. Thus, the Concorde's engines were more efficient in terms of energy per distance traveled.
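The "energy is force times distance" argument can be made concrete with a toy calculation; all figures below are invented round numbers chosen only to illustrate the comparison, not measured data for the CF6 or the Olympus 593.

```python
# Toy comparison of "fuel used per newton-second of thrust" versus
# "fuel used per joule of thrust-work (thrust x distance)".
# All numbers are invented for illustration only.

def fuel_per_thrust_second(fuel_kg_s, thrust_n):
    return fuel_kg_s / thrust_n                   # kg of fuel per newton-second

def fuel_per_thrust_work(fuel_kg_s, thrust_n, speed_m_s):
    return fuel_kg_s / (thrust_n * speed_m_s)     # kg of fuel per joule of thrust-work

subsonic = dict(fuel_kg_s=1.0, thrust_n=250_000.0, speed_m_s=250.0)    # assumed
supersonic = dict(fuel_kg_s=1.5, thrust_n=170_000.0, speed_m_s=590.0)  # assumed

for name, e in (("subsonic turbofan", subsonic), ("supersonic turbojet", supersonic)):
    print(name,
          fuel_per_thrust_second(e["fuel_kg_s"], e["thrust_n"]),
          fuel_per_thrust_work(e["fuel_kg_s"], e["thrust_n"], e["speed_m_s"]))
```

With these illustrative numbers the subsonic engine uses less fuel per unit of thrust per second, while the faster aircraft covers more distance per second, so its fuel use per unit of thrust-work is comparable or lower.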
Thrust-to-weight ratio
The thrust-to-weight ratio of jet engines with similar configurations varies with scale, but is mostly a function of engine construction technology. For a given engine, the lighter the engine, the better the thrust-to-weight is, the less fuel is used to compensate for drag due to the lift needed to carry the engine weight, or to accelerate the mass of the engine.
As can be seen in the following table, rocket engines generally achieve much higher thrust-to-weight ratios than duct engines such as turbojet and turbofan engines. This is primarily because rockets almost universally use dense liquid or solid reaction mass which gives a much smaller volume and hence the pressurization system that supplies the nozzle is much smaller and lighter for the same performance. Duct engines have to deal with air which is two to three orders of magnitude less dense and this gives pressures over much larger areas, which in turn results in more engineering materials being needed to hold the engine together and for the air compressor.
Comparison of types
Propeller engines handle larger air mass flows, and give them smaller acceleration, than jet engines. Since the increase in air speed is small, at high flight speeds the thrust available to propeller-driven aeroplanes is small. However, at low speeds, these engines benefit from relatively high propulsive efficiency.
On the other hand, turbojets accelerate a much smaller mass flow of intake air and burned fuel, but they then reject it at very high speed. When a de Laval nozzle is used to accelerate a hot engine exhaust, the outlet velocity may be locally supersonic. Turbojets are particularly suitable for aircraft travelling at very high speeds.
Turbofans have a mixed exhaust consisting of the bypass air and the hot combustion product gas from the core engine. The amount of air that bypasses the core engine compared to the amount flowing into the engine determines what is called a turbofan's bypass ratio (BPR).
While a turbojet engine uses all of the engine's output to produce thrust in the form of a hot high-velocity exhaust gas jet, a turbofan's cool low-velocity bypass air yields between 30% and 70% of the total thrust produced by a turbofan system.
The net thrust (F_N) generated by a turbofan can also be expanded as the sum of the momentum changes of the separate core and bypass streams.
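As an illustration of this decomposition, the following sketch shows a generic momentum balance for a separate-flow turbofan; the symbol names and test values are assumptions chosen for illustration rather than the article's own expansion.

```python
# Generic momentum balance for a separate-flow, fully expanded turbofan:
#   F_N = mdot_core * (v_core - v0) + mdot_bypass * (v_bypass - v0)
# where v0 is the flight speed. Values below are placeholders.

def turbofan_net_thrust(mdot_core, v_core, mdot_bypass, v_bypass, v0):
    return mdot_core * (v_core - v0) + mdot_bypass * (v_bypass - v0)

mdot_core = 100.0     # core mass flow, kg/s (assumed)
mdot_bypass = 500.0   # bypass mass flow, kg/s (assumed, i.e. bypass ratio 5:1)
print(turbofan_net_thrust(mdot_core, 480.0, mdot_bypass, 290.0, 250.0))  # newtons
```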
Rocket engines have extremely high exhaust velocity and thus are best suited for high speeds (hypersonic) and great altitudes. At any given throttle, the thrust and efficiency of a rocket motor improves slightly with increasing altitude (because the back-pressure falls thus increasing net thrust at the nozzle exit plane), whereas with a turbojet (or turbofan) the falling density of the air entering the intake (and the hot gases leaving the nozzle) causes the net thrust to decrease with increasing altitude. Rocket engines are more efficient than even scramjets above roughly Mach 15.
Altitude and speed
With the exception of scramjets, jet engines, deprived of their inlet systems, can only accept air at around half the speed of sound. The inlet system's job for transonic and supersonic aircraft is to slow the air and perform some of the compression.
The limit on maximum altitude for engines is set by flammability – at very high altitudes the air becomes too thin to burn, or after compression, too hot. For turbojet engines altitudes of about 40 km appear to be possible, whereas for ramjet engines 55 km may be achievable. Scramjets may theoretically manage 75 km. Rocket engines of course have no upper limit.
At more modest altitudes, flying faster compresses the air at the front of the engine, and this greatly heats the air. The upper limit is usually thought to be about Mach 5–8, as above about Mach 5.5, the atmospheric nitrogen tends to react due to the high temperatures at the inlet and this consumes significant energy. The exception to this is scramjets which may be able to achieve about Mach 15 or more, as they avoid slowing the air, and rockets again have no particular speed limit.
Noise
The noise emitted by a jet engine has many sources. These include, in the case of gas turbine engines, the fan, compressor, combustor, turbine and propelling jet/s.
The propelling jet produces jet noise which is caused by the violent mixing action of the high speed jet with the surrounding air. In the subsonic case the noise is produced by eddies and in the supersonic case by Mach waves. The sound power radiated from a jet varies with the jet velocity raised to the eighth power for velocities up to and varies with the velocity cubed above . Thus, the lower speed exhaust jets emitted from engines such as high bypass turbofans are the quietest, whereas the fastest jets, such as rockets, turbojets, and ramjets, are the loudest. For commercial jet aircraft the jet noise has reduced from the turbojet through bypass engines to turbofans as a result of a progressive reduction in propelling jet velocities. For example, the JT8D, a bypass engine, has a jet velocity of whereas the JT9D, a turbofan, has jet velocities of (cold) and (hot).
The advent of the turbofan replaced the very distinctive jet noise with another sound known as "buzz saw" noise. The origin is the shockwaves originating at the supersonic fan blade tip at takeoff thrust.
Cooling
Adequate heat transfer away from the working parts of the jet engine is critical to maintaining strength of engine materials and ensuring long life for the engine.
After 2016, research is ongoing in the development of transpiration cooling techniques to jet engine components.
Operation
In a jet engine, each major rotating section usually has a separate gauge devoted to monitoring its speed of rotation.
Depending on the make and model, a jet engine may have an N1 gauge that monitors the low-pressure compressor section and/or fan speed in turbofan engines. The gas generator section may be monitored by an N2 gauge, while triple-spool engines may have an N3 gauge as well. Each engine section rotates at many thousands of RPM. Their gauges therefore are calibrated in percent of a nominal speed rather than actual RPM, for ease of display and interpretation.
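A minimal sketch of this percent-of-nominal calibration; the nominal speeds used are invented placeholder numbers.

```python
# Spool speed gauges are calibrated in percent of a nominal speed rather than raw RPM.
NOMINAL_RPM = {"N1": 3_800.0, "N2": 10_200.0}   # assumed nominal speeds, RPM

def percent_of_nominal(spool, actual_rpm):
    """Gauge reading in percent for a given spool and measured shaft speed."""
    return 100.0 * actual_rpm / NOMINAL_RPM[spool]

print(round(percent_of_nominal("N1", 3_610.0), 1))    # e.g. 95.0 (%)
print(round(percent_of_nominal("N2", 10_450.0), 1))   # gauges can read above 100%
```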
See also
Air turboramjet
Balancing machine
Components of jet engines
Intake momentum drag
Rocket engine nozzle
Rocket turbine engine
Spacecraft propulsion
Thrust reversal
Turbojet development at the RAE
Variable cycle engine
Water injection (engine)
Notes
References
Bibliography
External links
Media about jet engines from Rolls-Royce
How Stuff Works article on how a Gas Turbine Engine works
Influence of the Jet Engine on the Aerospace Industry
An Overview of Military Jet Engine History, Appendix B, pp. 97–120, in Military Jet Engine Acquisition (Rand Corp., 24 pp, PDF)
Basic jet engine tutorial (QuickTime Video)
An article on how reaction engine works
Energy conversion
Gas turbines
Gas compressors
Turbomachinery
Engineering thermodynamics
Fluid dynamics
Aerodynamics
Discovery and invention controversies
20th-century inventions | Jet engine | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 7,348 | [
"Turbomachinery",
"Engines",
"Gas compressors",
"Gas turbines",
"Engineering thermodynamics",
"Chemical equipment",
"Chemical engineering",
"Aerodynamics",
"Jet engines",
"Thermodynamics",
"Mechanical engineering",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
16,500 | https://en.wikipedia.org/wiki/John%20Pople | Sir John Anthony Pople (31 October 1925 – 15 March 2004) was a British theoretical chemist who was awarded the Nobel Prize in Chemistry with Walter Kohn in 1998 for his development of computational methods in quantum chemistry.
Early life and education
Pople was born in Burnham-on-Sea, Somerset, and attended the Bristol Grammar School. He won a scholarship to Trinity College, Cambridge, in 1943. He received his Bachelor of Arts degree in 1946. Between 1945 and 1947 he worked at the Bristol Aeroplane Company. He then returned to the University of Cambridge and was awarded his PhD in mathematics in 1951 on lone pair electrons.
Career
After obtaining his PhD, he was a research fellow at Trinity College, Cambridge and then from 1954 a lecturer in the mathematics faculty at Cambridge. In 1958, he moved to the National Physical Laboratory, near London, as head of the new basic physics division. He moved to the United States of America in 1964, where he lived the rest of his life, though he retained British citizenship. Pople considered himself more of a mathematician than a chemist, but theoretical chemists consider him one of the most important of their number. In 1964 he moved to Carnegie Mellon University in Pittsburgh, Pennsylvania, where he had spent a sabbatical in 1961 to 1962. In 1993 he moved to Northwestern University in Evanston, Illinois, where he was Trustees Professor of Chemistry until his death.
Research
Pople's major scientific contributions were in four different areas:
Statistical mechanics of water
Pople's early paper on the statistical mechanics of water, according to Michael J. Frisch, "remained the standard for many years". This was his thesis topic for his PhD at Cambridge supervised by John Lennard-Jones.
Nuclear magnetic resonance
In the early days of nuclear magnetic resonance he studied the underlying theory, and in 1959 he co-authored the textbook High Resolution Nuclear Magnetic Resonance with W.G. Schneider and H.J. Bernstein.
Semi-empirical theory
He made major contributions to the theory of approximate molecular orbital (MO) calculations, starting with one identical to the one developed by Rudolph Pariser and Robert G. Parr on pi electron systems, and now called the Pariser–Parr–Pople method. Subsequently, he developed the methods of Complete Neglect of Differential Overlap (CNDO) (in 1965) and Intermediate Neglect of Differential Overlap (INDO) for approximate MO calculations on three-dimensional molecules, and other developments in computational chemistry. In 1970 he and David Beveridge coauthored the book Approximate Molecular Orbital Theory describing these methods.
Ab initio electronic structure theory
Pople pioneered the development of more sophisticated computational methods, called ab initio quantum chemistry methods, that use basis sets of either Slater type orbitals or Gaussian orbitals to model the wave function. While in the early days these calculations were extremely expensive to perform, the advent of high speed microprocessors has made them much more feasible today. He was instrumental in the development of one of the most widely used computational chemistry packages, the Gaussian suite of programs, including coauthorship of the first version, Gaussian 70. One of his most important original contributions is the concept of a model chemistry whereby a method is rigorously evaluated across a range of molecules. His research group developed the quantum chemistry composite methods such as Gaussian-1 (G1) and Gaussian-2 (G2). In 1991, Pople stopped working on Gaussian and several years later he developed (with others) the Q-Chem computational chemistry program. Prof. Pople's departure from Gaussian, along with the subsequent banning of many prominent scientists, including himself, from using the software gave rise to considerable controversy among the quantum chemistry community.
The Gaussian molecular orbital methods were described in the 1986 book Ab initio molecular orbital theory by Warren Hehre, Leo Radom, Paul v.R. Schleyer and Pople.
Awards and honours
Pople received the Wolf Prize in Chemistry in 1992, and the Nobel Prize in Chemistry in 1998. He was elected a Fellow of the Royal Society (FRS) in 1961. He was made a Knight Commander (KBE) of the Order of the British Empire in 2003. He was a founding member of the International Academy of Quantum Molecular Science.
An IT room and a scholarship are named after him at Bristol Grammar School, as is a supercomputer at the Pittsburgh Supercomputing Center.
Personal life
Pople married Joy Bowers in 1952 and was married until her death from cancer in 2002. Pople died of liver cancer in Chicago in 2004. He was survived by his daughter Hilary, and sons Adrian, Mark and Andrew. In accordance with his wishes, Pople's Nobel Medal was given to Carnegie Mellon University by his family on 5 October 2009.
See also
Pople diagram
Pople notation
STO-nG basis sets
Unrestricted Hartree–Fock
NDDO
References
External links
Sir John Pople, Gaussian Code, and Complex Chemical Reactions, from the Office of Scientific and Technical Information, United States Department of Energy
including the Nobel Lecture, 8 December 1998 Quantum Chemical Models
1925 births
2004 deaths
Alumni of Trinity College, Cambridge
Theoretical chemists
British expatriate academics in the United States
Carnegie Mellon University faculty
Deaths from liver cancer in the United States
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Members of the International Academy of Quantum Molecular Science
Knights Commander of the Order of the British Empire
Nobel laureates in Chemistry
British Nobel laureates
Northwestern University faculty
People educated at Bristol Grammar School
People from Burnham-on-Sea
British physical chemists
Wolf Prize in Chemistry laureates
Recipients of the Copley Medal
English Nobel laureates
Computational chemists
Deaths from cancer in Illinois
Scientists of the National Physical Laboratory (United Kingdom) | John Pople | [
"Chemistry"
] | 1,177 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
16,511 | https://en.wikipedia.org/wiki/Janus%20kinase | Janus kinase (JAK) is a family of intracellular, non-receptor tyrosine kinases that transduce cytokine-mediated signals via the JAK-STAT pathway. They were initially named "just another kinase" 1 and 2 (since they were just two of many discoveries in a PCR-based screen of kinases), but were ultimately published as "Janus kinase". The name is taken from the two-faced Roman god of beginnings, endings and duality, Janus, because the JAKs possess two near-identical phosphate-transferring domains. One domain exhibits the kinase activity, while the other negatively regulates the kinase activity of the first.
Family
The four JAK family members are:
Janus kinase 1 (JAK1)
Janus kinase 2 (JAK2)
Janus kinase 3 (JAK3)
Tyrosine kinase 2 (TYK2)
Transgenic mice that do not express JAK1 have defective responses to some cytokines, such as interferon-gamma. JAK1 and JAK2 are involved in type II interferon (interferon-gamma) signalling, whereas JAK1 and TYK2 are involved in type I interferon signalling. Mice that do not express TYK2 have defective natural killer cell function.
Functions
Since members of the type I and type II cytokine receptor families possess no catalytic kinase activity, they rely on the JAK family of tyrosine kinases to phosphorylate and activate downstream proteins involved in their signal transduction pathways. The receptors exist as paired polypeptides, thus exhibiting two intracellular signal-transducing domains.
JAKs associate with a proline-rich region in each intracellular domain that is adjacent to the cell membrane and called a box1/box2 region. After the receptor associates with its respective cytokine/ligand, it goes through a conformational change, bringing the two JAKs close enough to phosphorylate each other. The JAK autophosphorylation induces a conformational change within itself, enabling it to transduce the intracellular signal by further phosphorylating and activating transcription factors called STATs (Signal Transducer and Activator of Transcription, or Signal Transduction And Transcription). The activated STATs dissociate from the receptor and form dimers before translocating to the cell nucleus, where they regulate transcription of selected genes.
Some examples of the molecules that use the JAK/STAT signaling pathway are colony-stimulating factor, prolactin, growth hormone, and many cytokines. Janus Kinases have also been reported to have a role in the maintenance of X chromosome inactivation.
Clinical significance
JAK inhibitors are used for the treatment of atopic dermatitis and rheumatoid arthritis. They are also being studied in psoriasis, polycythemia vera, alopecia, essential thrombocythemia, ulcerative colitis, myeloid metaplasia with myelofibrosis and vitiligo. Examples are tofacitinib, baricitinib, upadacitinib and filgotinib.
In 2014 researchers discovered that JAK inhibitors, when administered orally, could restore hair growth in some subjects, and that when applied to the skin they effectively promoted hair growth.
Structure
JAKs range from 120-140 kDa in size and have seven defined regions of homology called Janus homology domains 1 to 7 (JH1-7). JH1 is the kinase domain important for the enzymatic activity of the JAK and contains typical features of a tyrosine kinase such as conserved tyrosines necessary for JAK activation (e.g., Y1038/Y1039 in JAK1, Y1007/Y1008 in JAK2, Y980/Y981 in JAK3, and Y1054/Y1055 in Tyk2). Phosphorylation of these dual tyrosines leads to the conformational changes in the JAK protein to facilitate binding of substrate. JH2 is a pseudokinase domain, a domain structurally similar to a tyrosine kinase and essential for a normal kinase activity, yet lacks enzymatic activity. This domain may be involved in regulating the activity of JH1, and was likely a duplication of the JH1 domain which has undergone mutation post-duplication. The JH3-JH4 domains of JAKs share homology with Src-homology-2 (SH2) domains. The amino terminal (NH2) end (JH4-JH7) of Jaks is called a FERM domain (short for band 4.1, ezrin, radixin and moesin); this domain is also found in the focal adhesion kinase (FAK) family and is involved in association of JAKs with cytokine receptors and/or other kinases.
References
Signal transduction
Tyrosine kinases | Janus kinase | [
"Chemistry",
"Biology"
] | 1,052 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
16,565 | https://en.wikipedia.org/wiki/Jones%20calculus | In optics, polarized light can be described using the Jones calculus, invented by R. C. Jones in 1941. Polarized light is represented by a Jones vector, and linear optical elements are represented by Jones matrices. When light crosses an optical element the resulting polarization of the emerging light is found by taking the product of the Jones matrix of the optical element and the Jones vector of the incident light.
Note that Jones calculus is only applicable to light that is already fully polarized. Light which is randomly polarized, partially polarized, or incoherent must be treated using Mueller calculus.
Jones vector
The Jones vector describes the polarization of light in free space or another homogeneous isotropic non-attenuating medium, where the light can be properly described as transverse waves. Suppose that a monochromatic plane wave of light is travelling in the positive z-direction, with angular frequency ω and wave vector k = (0,0,k), where the wavenumber k = ω/c. Then the electric and magnetic fields E and H are orthogonal to k at each point; they both lie in the plane "transverse" to the direction of motion. Furthermore, H is determined from E by 90-degree rotation and a fixed multiplier depending on the wave impedance of the medium. So the polarization of the light can be determined by studying E. The complex amplitude of E is written:
Note that the physical E field is the real part of this vector; the complex multiplier carries the phase information. Here i is the imaginary unit, with i² = -1.
The Jones vector is
Thus, the Jones vector represents the amplitude and phase of the electric field in the x and y directions.
The sum of the squares of the absolute values of the two components of Jones vectors is proportional to the intensity of light. It is common to normalize it to 1 at the starting point of calculation for simplification. It is also common to constrain the first component of the Jones vectors to be a real number. This discards the overall phase information that would be needed for calculation of interference with other beams.
Note that all Jones vectors and matrices in this article employ the convention that the phase of the light wave is given by φ = kz - ωt, a convention used by Hecht. Under this convention, an increase in φ_x (or φ_y) indicates retardation (delay) in phase, while a decrease indicates advance in phase. For example, a Jones vector component of i (= e^{iπ/2}) indicates retardation by π/2 (or 90 degrees) compared to 1 (= e^{i0}). Collett uses the opposite definition for the phase (φ = ωt - kz). Also, Collett and Jones follow different conventions for the definitions of handedness of circular polarization. Jones' convention is called: "From the point of view of the receiver", while Collett's convention is called: "From the point of view of the source." The reader should be wary of the choice of convention when consulting references on the Jones calculus.
The following table gives the 6 common examples of normalized Jones vectors.
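As an illustration, the sketch below constructs the usual six normalized Jones vectors (horizontal, vertical, ±45°, right- and left-circular) in code; the signs of the circular states depend on the handedness convention discussed above, so treat them as one possible choice rather than a definitive statement.

```python
import numpy as np

# Common normalized Jones vectors (one conventional choice of signs).
H = np.array([1, 0], dtype=complex)                  # linear horizontal
V = np.array([0, 1], dtype=complex)                  # linear vertical
D = np.array([1, 1], dtype=complex) / np.sqrt(2)     # linear +45 degrees
A = np.array([1, -1], dtype=complex) / np.sqrt(2)    # linear -45 degrees
R = np.array([1, -1j], dtype=complex) / np.sqrt(2)   # right circular (convention-dependent)
L = np.array([1, 1j], dtype=complex) / np.sqrt(2)    # left circular (convention-dependent)

for name, v in [("H", H), ("V", V), ("D", D), ("A", A), ("R", R), ("L", L)]:
    intensity = np.sum(np.abs(v) ** 2)   # proportional to intensity, normalized to 1
    print(name, v, intensity)
```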
A general vector that points to any place on the surface is written as a ket |ψ⟩. When employing the Poincaré sphere (also known as the Bloch sphere), the basis kets (|0⟩ and |1⟩) must be assigned to opposing (antipodal) pairs of the kets listed above. For example, one might assign |0⟩ = |H⟩ and |1⟩ = |V⟩. These assignments are arbitrary. Opposing pairs are
|H⟩ and |V⟩,
|D⟩ and |A⟩ (linear polarizations at ±45°),
|R⟩ and |L⟩ (right- and left-hand circular).
The polarization of any point not equal to |R⟩ or |L⟩ and not on the circle that passes through |H⟩, |D⟩, |V⟩ and |A⟩ is known as elliptical polarization.
Jones matrices
The Jones matrices are operators that act on the Jones vectors defined above. These matrices are implemented by various optical elements such as lenses, beam splitters, mirrors, etc. Each matrix represents projection onto a one-dimensional complex subspace of the Jones vectors. The following table gives examples of Jones matrices for polarizers:
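A minimal sketch of a Jones matrix acting on a Jones vector, using an ideal linear polarizer; the matrix form used here is the standard textbook one and is offered as an illustration under that assumption.

```python
import numpy as np

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer with transmission axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]], dtype=complex)

ket_45 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # light polarized at +45 degrees

out = linear_polarizer(0.0) @ ket_45                     # horizontal polarizer
print(out, np.sum(np.abs(out) ** 2))                     # transmitted intensity 0.5 (Malus' law)
```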
Phase retarders
A phase retarder is an optical element that produces a phase difference between two orthogonal polarization components of a monochromatic polarized beam of light. Mathematically, using kets to represent Jones vectors, this means that the action of a phase retarder is to transform light with polarization
to
where are orthogonal polarization components (i.e. ) that are determined by the physical nature of the phase retarder. In general, the orthogonal components could be any two basis vectors. For example, the action of the circular phase retarder is such that
However, linear phase retarders, for which are linear polarizations, are more commonly encountered in discussion and in practice. In fact, sometimes the term "phase retarder" is used to refer specifically to linear phase retarders.
Linear phase retarders are usually made out of birefringent uniaxial crystals such as calcite, MgF2 or quartz. Plates made of these materials for this purpose are referred to as waveplates. Uniaxial crystals have one crystal axis that is different from the other two crystal axes (i.e., ni ≠ nj = nk). This unique axis is called the extraordinary axis and is also referred to as the optic axis. An optic axis can be the fast or the slow axis for the crystal depending on the crystal at hand. Light travels with a higher phase velocity along an axis that has the smallest refractive index and this axis is called the fast axis. Similarly, an axis which has the largest refractive index is called a slow axis since the phase velocity of light is the lowest along this axis. "Negative" uniaxial crystals (e.g., calcite CaCO3, sapphire Al2O3) have ne < no so for these crystals, the extraordinary axis (optic axis) is the fast axis, whereas for "positive" uniaxial crystals (e.g., quartz SiO2, magnesium fluoride MgF2, rutile TiO2), ne > no and thus the extraordinary axis (optic axis) is the slow axis. Other commercially available linear phase retarders exist and are used in more specialized applications. The Fresnel rhomb is one such alternative.
Any linear phase retarder with its fast axis defined as the x- or y-axis has zero off-diagonal terms and thus can be conveniently expressed as
where and are the phase offsets of the electric fields in and directions respectively. In the phase convention , define the relative phase between the two waves as . Then a positive (i.e. > ) means that doesn't attain the same value as until a later time, i.e. leads . Similarly, if , then leads .
For example, if the fast axis of a quarter waveplate is horizontal, then the phase velocity along the horizontal direction is ahead of the vertical direction i.e., leads . Thus, which for a quarter waveplate yields .
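The following sketch illustrates a linear retarder in the diagonal form just described, with a quarter-wave plate whose fast axis is horizontal acting on light polarized at 45°; the sign conventions are one consistent choice and may differ from other references.

```python
import numpy as np

def linear_retarder(phi_x, phi_y):
    """Diagonal Jones matrix of a linear retarder with its axes along x and y."""
    return np.array([[np.exp(1j * phi_x), 0],
                     [0, np.exp(1j * phi_y)]], dtype=complex)

# Quarter-wave plate, fast axis horizontal: relative retardation of pi/2 between components.
qwp = linear_retarder(0.0, np.pi / 2)

ket_45 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # linear polarization at 45 degrees
out = qwp @ ket_45
print(out)   # proportional to (1, i)/sqrt(2): circularly polarized light
```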
In the opposite convention , define the relative phase as . Then means that doesn't attain the same value as until a later time, i.e. leads .
The Jones matrix for an arbitrary birefringent material is the most general form of a polarization transformation in the Jones calculus; it can represent any polarization transformation. To see this, one can show
The above matrix is a general parametrization for the elements of SU(2), using the convention
where the overline denotes complex conjugation.
Finally, recognizing that the set of unitary transformations on can be expressed as
it becomes clear that the Jones matrix for an arbitrary birefringent material represents any unitary transformation, up to a phase factor . Therefore, for appropriate choice of , , and , a transformation between any two Jones vectors can be found, up to a phase factor . However, in the Jones calculus, such phase factors do not change the represented polarization of a Jones vector, so are either considered arbitrary or imposed ad hoc to conform to a set convention.
The special expressions for the phase retarders can be obtained by taking suitable parameter values in the general expression for a birefringent material. In the general expression:
The relative phase retardation induced between the fast axis and the slow axis is given by
is the orientation of the fast axis with respect to the x-axis.
is the circularity.
Note that for linear retarders the circularity is 0, and for circular retarders the circularity is ±π/2 with the fast-axis orientation at π/4. In general, for elliptical retarders the circularity takes on values between -π/2 and π/2.
Axially rotated elements
Assume an optical element has its optic axis perpendicular to the surface vector for the plane of incidence and is rotated about this surface vector by angle θ/2 (i.e., the principal plane through which the optic axis passes, makes angle θ/2 with respect to the plane of polarization of the electric field of the incident TE wave). Recall that a half-wave plate rotates polarization as twice the angle between incident polarization and optic axis (principal plane). Therefore, the Jones matrix for the rotated polarization state, M(θ), is
where
This agrees with the expression for a half-wave plate in the table above. These rotations are identical to beam unitary splitter transformation in optical physics given by
where the primed and unprimed coefficients represent beams incident from opposite sides of the beam splitter. The reflected and transmitted components acquire a phase θr and θt, respectively. The requirements for a valid representation of the element are
and
Both of these representations are unitary matrices fitting these requirements; and as such, are both valid.
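A small sketch of the rotation rule for axially rotated elements, written as conjugation by a 2x2 rotation matrix; this is the standard construction, offered here as an illustration rather than a reproduction of the expression above, and it also checks that a half-wave plate rotates linear polarization by twice the rotation angle.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rotated_element(jones_matrix, theta):
    """Jones matrix of an optical element whose axes are rotated by theta."""
    return rot(theta) @ jones_matrix @ rot(-theta)

# Half-wave plate with fast axis along x (up to an overall phase factor).
hwp = np.array([[1, 0], [0, -1]], dtype=complex)

theta = np.deg2rad(22.5)
ket_H = np.array([1, 0], dtype=complex)
out = rotated_element(hwp, theta) @ ket_H
print(out)   # linear polarization rotated by 2*theta = 45 degrees: ~(0.707, 0.707)
```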
Arbitrarily rotated elements
Finding the Jones matrix, J(α, β, γ), for an arbitrary rotation involves a three-dimensional rotation matrix. In the following notation α, β and γ are the yaw, pitch, and roll angles (rotation about the z-, y-, and x-axes, with x being the direction of propagation), respectively. The full combination of the 3-dimensional rotation matrices is the following:
Using the above, for any base Jones matrix J, you can find the rotated state J(α, β, γ) using:
The simplest case, where the Jones matrix is for an ideal linear horizontal polarizer, reduces then to:
where ci and si represent the cosine or sine of a given angle "i", respectively.
See Russell A. Chipman and Garam Yun for further work done based on this.
See also
Polarization
Scattering parameters
Stokes parameters
Mueller calculus
Photon polarization
Notes
References
Further reading
External links
Jones Calculus written by E. Collett on Optipedia
Optics
Polarization (waves)
Matrices | Jones calculus | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,170 | [
"Applied and interdisciplinary physics",
"Optics",
"Mathematical objects",
"Astrophysics",
"Matrices (mathematics)",
" molecular",
"Atomic",
"Polarization (waves)",
" and optical physics"
] |
16,972 | https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%E2%80%93Moser%20theorem | The Kolmogorov–Arnold–Moser (KAM) theorem is a result in dynamical systems about the persistence of quasiperiodic motions under small perturbations. The theorem partly resolves the small-divisor problem that arises in the perturbation theory of classical mechanics.
The problem is whether or not a small perturbation of a conservative dynamical system results in a lasting quasiperiodic orbit. The original breakthrough to this problem was given by Andrey Kolmogorov in 1954. This was rigorously proved and extended by Jürgen Moser in 1962 (for smooth twist maps) and Vladimir Arnold in 1963 (for analytic Hamiltonian systems), and the general result is known as the KAM theorem.
Arnold originally thought that this theorem could apply to the motions of the Solar System or other instances of the -body problem, but it turned out to work only for the three-body problem because of a degeneracy in his formulation of the problem for larger numbers of bodies. Later, Gabriella Pinzari showed how to eliminate this degeneracy by developing a rotation-invariant version of the theorem.
Statement
Integrable Hamiltonian systems
The KAM theorem is usually stated in terms of trajectories in phase space of an integrable Hamiltonian system. The motion of an integrable system is confined to an invariant torus (a doughnut-shaped surface). Different initial conditions of the integrable Hamiltonian system will trace different invariant tori in phase space. Plotting the coordinates of an integrable system would show that they are quasiperiodic.
Perturbations
The KAM theorem states that if the system is subjected to a weak nonlinear perturbation, some of the invariant tori are deformed and survive, i.e. there is a map from the original manifold to the deformed one that is continuous in the perturbation. Conversely, other invariant tori are destroyed: even arbitrarily small perturbations cause the manifold to no longer be invariant and there exists no such map to nearby manifolds. Surviving tori meet the non-resonance condition, i.e., they have “sufficiently irrational” frequencies. This implies that the motion on the deformed torus continues to be quasiperiodic, with the independent periods changed (as a consequence of the non-degeneracy condition). The KAM theorem quantifies the level of perturbation that can be applied for this to be true.
Those KAM tori that are destroyed by perturbation become invariant Cantor sets, named Cantori by Ian C. Percival in 1979.
The non-resonance and non-degeneracy conditions of the KAM theorem become increasingly difficult to satisfy for systems with more degrees of freedom. As the number of dimensions of the system increases, the volume occupied by the tori decreases.
As the perturbation increases and the smooth curves disintegrate we move from KAM theory to Aubry–Mather theory which requires less stringent hypotheses and works with the Cantor-like sets.
The existence of a KAM theorem for perturbations of quantum many-body integrable systems is still an open question, although it is believed that arbitrarily small perturbations will destroy integrability in the infinite size limit.
Consequences
An important consequence of the KAM theorem is that for a large set of initial conditions the motion remains perpetually quasiperiodic.
KAM theory
The methods introduced by Kolmogorov, Arnold, and Moser have developed into a large body of results related to quasiperiodic motions, now known as KAM theory. Notably, it has been extended to non-Hamiltonian systems (starting with Moser), to non-perturbative situations (as in the work of Michael Herman) and to systems with fast and slow frequencies (as in the work of Mikhail B. Sevryuk).
KAM torus
A manifold invariant under the action of a flow is called an invariant n-torus if there exists a diffeomorphism onto the standard n-torus such that the resulting motion on it is uniform linear but not static, i.e. the angle variables advance at a constant rate ω, where ω is a non-zero constant vector, called the frequency vector.
If the frequency vector ω is:
rationally independent (a.k.a. incommensurable, that is k · ω ≠ 0 for all non-zero integer vectors k)
and "badly" approximated by rationals, typically in a Diophantine sense: there exist constants γ > 0 and τ > 0 such that |k · ω| ≥ γ |k|^(-τ) for all non-zero integer vectors k,
then the invariant n-torus (n ≥ 2) is called a KAM torus. The case n = 1 is normally excluded in classical KAM theory because it does not involve small divisors.
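As a numerical illustration of these non-resonance conditions, the sketch below measures how small the "small divisors" k · ω can get for a frequency vector built from the golden ratio, a classic badly-approximable example; the search bounds are arbitrary choices made for illustration.

```python
import itertools
import math

# Frequency vector involving the golden ratio: famously hard to approximate by rationals.
omega = (1.0, (1.0 + math.sqrt(5.0)) / 2.0)

def min_small_divisor(omega, k_max):
    """Smallest |k . omega| over non-zero integer vectors with |k_i| <= k_max."""
    best = math.inf
    for k in itertools.product(range(-k_max, k_max + 1), repeat=len(omega)):
        if any(k):  # skip k = 0
            best = min(best, abs(sum(ki * wi for ki, wi in zip(k, omega))))
    return best

for k_max in (5, 20, 80):
    print(k_max, min_small_divisor(omega, k_max))
# The minimum shrinks only slowly as k_max grows, consistent with a Diophantine
# lower bound of the form |k . omega| >= gamma / |k|**tau.
```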
See also
Stability of the Solar System
Arnold diffusion
Ergodic theory
Hofstadter's butterfly
Nekhoroshev estimates
Notes
References
Arnold, Weinstein, Vogtmann. Mathematical Methods of Classical Mechanics, 2nd ed., Appendix 8: Theory of perturbations of conditionally periodic motion, and Kolmogorov's theorem. Springer 1997.
Sevryuk, M.B. Translation of the V. I. Arnold paper “From Superpositions to KAM Theory” (Vladimir Igorevich Arnold. Selected — 60, Moscow: PHASIS, 1997, pp. 727–740). Regul. Chaot. Dyn. 19, 734–744 (2014). https://doi.org/10.1134/S1560354714060100
Rafael de la Llave (2001) A tutorial on KAM theory.
KAM theory: the legacy of Kolmogorov’s 1954 paper
Kolmogorov-Arnold-Moser theory from Scholarpedia
H Scott Dumas. The KAM Story – A Friendly Introduction to the Content, History, and Significance of Classical Kolmogorov–Arnold–Moser Theory, 2014, World Scientific Publishing, . Chapter 1: Introduction
Hamiltonian mechanics
Theorems in dynamical systems
Computer-assisted proofs | Kolmogorov–Arnold–Moser theorem | [
"Physics",
"Mathematics"
] | 1,221 | [
"Theorems in dynamical systems",
"Mathematical theorems",
"Computer-assisted proofs",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mathematical problems",
"Dynamical systems"
] |
9,693,784 | https://en.wikipedia.org/wiki/Couch%20surfing | Couch surfing is a term that generally indicates the practice of moving from house to house, sleeping in whatever spare space is available (often a couch or floor), generally staying a few days before moving on to another house. People sometimes couch surf when they are travelling or because they are homeless.
Couch surfing in travel
Couch surfing's cultural significance grew when the website CouchSurfing was launched in 2004. Upon its release, what had previously been a cheap alternative for budget travelers became recognized as a hip, new way to travel. Couch surfing became not only a way to save money, but a way to meet new people and have new experiences. Its attraction was in the way it allowed people to have a more immersive and authentic travel experience. Besides CouchSurfing, many other platforms were created and groups were formed in order to help people who are looking to couch surf connect with potential hosts and other travelers. While couch surfing may not be considered the most popular or mainstream way to travel, in 2018 around 15 million people had identified using couch surfing accommodations to travel. However, couch surfing comes with the issue of safety. It can be less regulated than traditional forms of travel accommodations, making it a more risky choice for vulnerable travelers.
Couch surfing as homelessness
Couch surfing is also considered a form of homelessness. It is the most common type of homelessness amongst youth. It can be a result of substance abuse, conflict in home relationships, or aftermath of leaving abusive situations. The individual may turn to couch surfing as a temporary solution, staying with friends or family members while they search for permanent housing or a way to get back on their feet. It is different from sleeping on the streets or in a shelter, but it still has significant challenges, including the lack of stability and the strain on an individual. Couch surfing homelessness can be a short-term solution to homelessness, but it is not a sustainable solution in the long term. Individuals experiencing couch surfing homelessness often face uncertainty and instability, which can lead to negative consequences such as difficulty in finding employment, social isolation, and mental health issues.
Couch surfing is usually missed by homeless counts and is therefore a type of hidden homelessness. For example, in 2017, HUD counted 114 thousand children as homeless in the United States in their homeless count, while surveys conducted by the Department of Education concluded there were 1.3 million. Couch surfing is especially common among those under the age of 25, including children. In Britain, 1 in 5 young people have couch surfed at least once each year, and almost half of those have done so for more than a month.
While safer than sleeping in the rough, couch surfing is not an adequate long term housing solution. Most couch surfers only stay in a single home for a short period of time. This may be because their host limits their stay, they voluntarily leave to preserve friendships, or they are forced to leave the home of a person who is abusive or has a drug problem. Some couch surfers have received housing in exchange for services such as cooking and cleaning. In other cases, people will have otherwise unwanted sexual encounters to be able to couch surf at a person's home for the night. Those who couch surf often sleep in the rough after leaving their accommodations.
See also
Housing First
Internally displaced person
Right to housing
References
Cultural exchange
Homelessness
Sharing economy
Sleep | Couch surfing | [
"Biology"
] | 683 | [
"Behavior",
"Sleep"
] |