| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
61,010,503 | https://en.wikipedia.org/wiki/LCP%20family | The LCP family or TagU family of proteins is a conserved family of phosphotransferases involved in the attachment of teichoic acid (TA) molecules to the gram-positive cell wall or cell membrane. It was initially thought to be the LytR (lytic repressor) component of a LytABC operon encoding autolysins, but the mechanism of the apparent regulation was later realized to be the production of TA molecules. It was accordingly renamed TagU.
The "LCP" acronym derives from three proteins initially identified to contain this domain, LytR (now TagU, ), cpsA ("Capsular polysaccharide expression regulator"), and psr ("PBP 5 synthesis repressor"). These proteins were mistaken as transcriptional regulators via different reasons, but all three of them are now known to be TagU-like enzymes. While TagU itself only attaches TA molecules to the peptidoglycan cell wall (forming WTA), other LCP proteins may glycosylate cell wall proteins (A. oris LcpA, ) or attach TA molecules to a cell membrane anchor (forming LTA). Most, if not all, LCP proteins also have a secondary pyrophosphatase activity.
Typical TagU proteins are made up of an N-terminal transmembrane domain (for anchoring), an optional, non-conserved accessory domain (CATH 3tflA01), a core catalytic domain, and sometimes a C-terminal domain of unknown structure. The core LCP domain functions as a magnesium-dependent enzyme.
References
External links
MetaCyc RXN-18030: Polyisoprenyl-teichoic acid—peptidoglycan teichoic acid transferase
Acids
Cells
Enzymes
Proteins | LCP family | [
"Chemistry"
] | 382 | [
"Biomolecules by chemical classification",
"Proteins",
"Acids",
"Molecular biology"
] |
61,012,947 | https://en.wikipedia.org/wiki/Tropospheric%20Emissions%3A%20Monitoring%20of%20Pollution | Tropospheric Emissions: Monitoring of Pollution (TEMPO) is a space-based spectrometer designed to measure air pollution across greater North America at a high resolution and on an hourly basis. The ultraviolet–visible spectrometer will provide hourly data on ozone, nitrogen dioxide, and formaldehyde in the atmosphere.
TEMPO is a hosted payload on a commercial geostationary communication satellite with a constant view of North America. TEMPO's spectrometer measures reflected sunlight from the Earth's atmosphere and separates it into 2,000 component wavelengths. It will scan North America from the Pacific Ocean to the Atlantic Ocean and from the Alberta oil sands to Mexico City. TEMPO will form part of a geostationary constellation of pollution-monitoring assets, along with the planned Sentinel-4 from ESA and Geostationary Environment Monitoring Spectrometer (GEMS) from South Korea's KARI.
On 3 February 2020, Intelsat announced that the Intelsat 40e satellite would host TEMPO. Maxar Technologies, the builder of the satellite, was responsible for payload integration. The launch occurred on 7 April 2023.
Earth Venture-Instrument program
TEMPO, which is a collaboration between NASA and the Smithsonian Astrophysical Observatory, is NASA's first Earth Venture-Instrument (EVI) mission. The EVI program is an element within the Earth System Science Pathfinder (ESSP) program office, which is under NASA's Science Mission Directorate Earth Science Division (SMD/ESD). EVIs are a series of innovative "science-driven, competitively selected, low cost missions". The series of "Venture Class" missions were recommended in the 2007 publication Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond. "Innovative research and application missions that might address any area of Earth science" are selected through frequent "openly-competed solicitations".
Earth Venture missions are "small-sized competitively selected orbital missions and instrument missions of opportunity" and include NASA-ISRO Synthetic Aperture Radar (NISAR), Surface Water and Ocean Topography (SWOT), ICESat-2, SAGE III on ISS, Gravity Recovery and Climate Experiment Follow On (GRACE-FO), Cyclone Global Navigation Satellite System (CYGNSS), Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS), and the Global Ecosystem Dynamics Investigation lidar (GEDI).
References
External links
TEMPO website by the Smithsonian Astrophysical Observatory
Satellite meteorology
Spacecraft instruments
Spectrometers
Piggyback mission
2023 in spaceflight | Tropospheric Emissions: Monitoring of Pollution | [
"Physics",
"Chemistry"
] | 521 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
64,505,156 | https://en.wikipedia.org/wiki/Potassium%20fluorosilicate | Potassium fluorosilicate is a chemical compound with the chemical formula K₂SiF₆.
When doped with potassium hexafluoromanganate(IV) (K₂MnF₆, with Mn⁴⁺ substituting for part of the Si⁴⁺), it forms a narrow-band red-emitting phosphor, K₂SiF₆:Mn⁴⁺ (often abbreviated KSF or PSF), of economic interest due to its applicability in LED lighting and displays.
Natural occurrence
Potassium fluorosilicate occurs naturally as the mineral hieratite, found in the Aeolian Islands (Sicily, Italy). A hexagonal polymorph, demartinite, has also been found at the rim of volcanic fumaroles on the same islands.
The sea sponge Halichondria moorei builds a skeleton of potassium fluorosilicate.
Structure and properties
According to analysis by Loehlin (1984), it has space group Fm3m, with a0 = 0.8134 nm and V = 0.5382 nm³ at 295 K. The Si-F bond length is 0.1683 nm. At high temperatures and pressures, β and γ phases exist.
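As a quick arithmetic check of these figures (a minimal sketch; for a cubic cell the volume is just a0 cubed):

```python
# Check that the reported cell volume matches a0^3 for the cubic Fm3m cell.
a0_nm = 0.8134            # lattice constant at 295 K (Loehlin 1984)

volume_nm3 = a0_nm ** 3   # cubic cell: V = a0^3
print(f"V = {volume_nm3:.4f} nm^3")   # -> V = 0.5382 nm^3, matching the reported value
```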
Applications
Potassium fluorosilicate has applications in porcelain manufacture, the preservation of timber, aluminium and magnesium smelting, and the manufacture of optical glass.
Red phosphor
When doped with potassium hexafluoromanganate(IV) (K₂MnF₆), a narrow-band red phosphor is produced, emitting at around 630 nm. This substance has application in improving the white-light quality of white LEDs that use a blue-emitting LED in combination with the yellow cerium-doped yttrium aluminium garnet phosphor (YAG), Y₃Al₅O₁₂:Ce³⁺.
Synthesis routes to the phosphor include co-crystallisation and co-precipitation. For example, a silicon precursor in 40% hydrofluoric acid with potassium fluoride can be mixed with potassium hexafluoromanganate(IV) dissolved in 40% hydrofluoric acid to co-precipitate the phosphor.
The acronyms KSF or PSF are used for potassium fluorosilicate phosphors.
See also
Fluorosilicic acid
Ammonium fluorosilicate
Sodium fluorosilicate
References
Potassium compounds
Hexafluorosilicates
Phosphors and scintillators | Potassium fluorosilicate | [
"Chemistry"
] | 451 | [
"Luminescence",
"Phosphors and scintillators"
] |
64,506,449 | https://en.wikipedia.org/wiki/Mitochondrial%20outer%20membrane%20permeabilization | Mitochondrial outer membrane permeabilization (MOMP), also known as the mitochondrial outer membrane permeability, is one of two ways apoptosis (a type of programmed cell death) can be activated. It is part of the intrinsic pathway of apoptosis, also known as the mitochondrial pathway. MOMP is known as the point of no return in apoptosis. Once triggered, it results in the diffusion of proteins from the space between the inner and outer mitochondrial membranes into the cytosol.
Mechanism
Initiation of MOMP involves Bcl-2 family proteins, including BAX and BAK. The outer mitochondrial membrane, typically permeable only to molecules smaller than 5 kDa, forms pores during MOMP that can accommodate proteins larger than 100 kDa. During MOMP, it takes about five minutes for all mitochondrial membranes within a cell to permeabilize.
Outcome
MOMP has been referred to as the point of no return for apoptosis, almost always resulting in the completion of the process, and thus cell death. However, in limited circumstances, apoptosis does not complete. Sometimes MOMP itself does not go to completion, a phenomenon known as incomplete MOMP (iMOMP) or minority MOMP (miniMOMP). In incomplete MOMP, mitochondrial membranes become permeable in most, but not all, of the cell's mitochondria. In minority MOMP, only a few of the cell's mitochondria undergo MOMP, typically as the result of sublethal stress.
References
Apoptosis
Mitochondria | Mitochondrial outer membrane permeabilization | [
"Chemistry"
] | 319 | [
"Mitochondria",
"Metabolism",
"Apoptosis",
"Signal transduction"
] |
64,507,510 | https://en.wikipedia.org/wiki/Benzoate%20degradation%20via%20hydroxylation | Benzoate degradation via hydroxylation is an enzyme-catalyzed, bacterial chemical reaction. Benzoate is degraded both aerobically and anaerobically. Aerobic degradation forms catechol. Anaerobic degradation forms cyclohex-1,5-diene-1-carbonyl-CoA. A hybrid degradation forms acetyl-CoA and succinyl-CoA.
Potential microbes
References
Chemical reactions | Benzoate degradation via hydroxylation | [
"Chemistry"
] | 91 | [
"Chemical reaction stubs",
"nan"
] |
64,508,219 | https://en.wikipedia.org/wiki/Homotopy%20associative%20algebra | In mathematics, an algebra such as $(\mathbb{R}, \cdot)$ has a multiplication whose associativity is well-defined on the nose. This means for any real numbers $a, b, c \in \mathbb{R}$ we have

$$a \cdot (b \cdot c) - (a \cdot b) \cdot c = 0.$$

But there are algebras $A$ which are not necessarily associative, meaning if $a, b, c \in A$ then

$$a \cdot (b \cdot c) - (a \cdot b) \cdot c \neq 0$$

in general. There is a notion of algebras, called $A_\infty$-algebras, whose multiplication still behaves like the first relation, meaning associativity holds, but only up to a homotopy, which is a way of saying that after an operation "compressing" the information in the algebra (passing to cohomology), the multiplication becomes associative. So although we get something which looks like the second relation, we actually get equality after "compressing" the information in the algebra.
The study of $A_\infty$-algebras is a subset of homotopical algebra, where there is a homotopical notion of associative algebras through a differential graded algebra with a multiplication operation and a series of higher homotopies giving the failure for the multiplication to be associative. Loosely, an $A_\infty$-algebra is a $\mathbb{Z}$-graded vector space over a field with a series of operations $m_n$ on the $n$-th tensor powers of the vector space. The $m_1$ corresponds to a chain complex differential, $m_2$ is the multiplication map, and the higher $m_n$ are a measure of the failure of associativity of $m_2$. When looking at the underlying cohomology algebra $H^\bullet(A, m_1)$, the map $m_2$ should be an associative map. Then these higher maps should be interpreted as higher homotopies, where $m_3$ is the failure of $m_2$ to be associative, $m_4$ is the failure for $m_3$ to be a higher associativity homotopy, and so forth. Their structure was originally discovered by Jim Stasheff while studying A∞-spaces; these are spaces equipped with maps that are associative only up to homotopy, and the A∞-structure keeps track of these homotopies, homotopies of homotopies, and so forth. The structure was later interpreted as a purely algebraic one.
They are ubiquitous in homological mirror symmetry because of their necessity in defining the structure of the Fukaya category of D-branes on a Calabi–Yau manifold, which has only a homotopy-associative structure.
Definition
For a fixed field $k$, an $A_\infty$-algebra is a $\mathbb{Z}$-graded vector space

$$A = \bigoplus_{p \in \mathbb{Z}} A^p$$

such that for $n \geq 1$ there exist degree $2 - n$, $k$-linear maps

$$m_n\colon A^{\otimes n} \to A$$

which satisfy a coherence condition:

$$\sum_{\substack{r+s+t=n \\ r,t \geq 0,\ s \geq 1}} (-1)^{r+st}\, m_{r+1+t}\bigl(\mathrm{id}^{\otimes r} \otimes m_s \otimes \mathrm{id}^{\otimes t}\bigr) = 0,$$

where the sum runs over all decompositions $n = r + s + t$ with $r, t \geq 0$ and $s \geq 1$.
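Since the sign $(-1)^{r+st}$ and the range of the sum are easy to get wrong, the following short script enumerates the terms of the relation for small $n$ (a sketch following the $(r, s, t)$ indexing above; note that sign conventions vary across the literature):

```python
# Enumerate the terms of the A-infinity coherence relation
#   sum_{r+s+t=n} (-1)^(r+st) m_{r+1+t}(id^r ⊗ m_s ⊗ id^t) = 0
# for a given n, following the indexing and sign convention above.

def stasheff_terms(n):
    terms = []
    for r in range(n):                  # r >= 0
        for s in range(1, n - r + 1):   # s >= 1
            t = n - r - s               # t >= 0 is then forced
            sign = "+" if (r + s * t) % 2 == 0 else "-"
            terms.append(f"{sign} m_{r + 1 + t}(id^{r} ⊗ m_{s} ⊗ id^{t})")
    return terms

for n in range(1, 4):
    print(f"n = {n}:", " ".join(stasheff_terms(n)), "= 0")
# n = 1 gives m_1 m_1 = 0, n = 2 gives the Leibniz rule, and
# n = 3 gives the associativity-up-to-homotopy relation unpacked below.
```

Running it reproduces exactly the decompositions used case by case in the subsections that follow.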
Understanding the coherence conditions
The coherence conditions are easy to write down for low degrees $d$ (pp. 583–584).
d=1
For $d = 1$ this is the condition that

$$m_1(m_1(a)) = 0,$$

since $r + s + t = 1$ with $s \geq 1$ gives $s = 1$ and $r = t = 0$. These constraints force the coherence condition to contain the single term $m_1 \circ m_1$, hence its only input is from $m_1$. Therefore $m_1$ represents a differential.
d=2
Unpacking the coherence condition for $d = 2$ gives the degree $0$ map $m_2$. In the sum there are the decompositions

$$(r, s, t) = (0, 2, 0),\ (1, 1, 0),\ (0, 1, 1)$$

of indices giving $r + s + t$ equal to $2$. Unpacking the coherence sum gives the relation

$$m_1(m_2(a \otimes b)) = m_2(m_1(a) \otimes b) + (-1)^{\deg a}\, m_2(a \otimes m_1(b)),$$

which when rewritten with

$$d = m_1 \quad\text{and}\quad a \cdot b = m_2(a \otimes b)$$

as the differential and multiplication, is

$$d(a \cdot b) = d(a) \cdot b + (-1)^{\deg a}\, a \cdot d(b),$$

which is the Leibniz rule for differential graded algebras.
d=3
In this degree the associativity structure comes to light. Note that if $m_3 = 0$ then there is a differential graded algebra structure, which becomes transparent after expanding out the coherence condition and multiplying by an appropriate factor of $(-1)$: the coherence condition then reads something like

$$m_2(m_2(a \otimes b) \otimes c) - m_2(a \otimes m_2(b \otimes c)) = -\Bigl( m_3(m_1(a) \otimes b \otimes c) + (-1)^{\deg a}\, m_3(a \otimes m_1(b) \otimes c) + (-1)^{\deg a + \deg b}\, m_3(a \otimes b \otimes m_1(c)) + m_1(m_3(a \otimes b \otimes c)) \Bigr).$$

Notice that the left-hand side of the equation is the failure of $m_2$ to be an associative multiplication on the nose. One of the inputs of each of the first three maps on the right involves the differential $m_1$, so on the cohomology algebra these elements all vanish, since $m_1(a) = 0$ on representatives. This includes the final term $m_1(m_3(a \otimes b \otimes c))$, since it is a coboundary, giving a zero element in the cohomology algebra. From these relations we can interpret the map $m_3$ as a failure of the associativity of $m_2$, meaning it is associative only up to homotopy.
d=4 and higher order terms
Moreover, for the higher order terms, $d \geq 4$, the coherence conditions give many different terms, each combining a string of consecutive inputs $a_{p+1} \otimes \cdots \otimes a_{p+q}$ into some $m_q$ and inserting that term into an $m_{n-q+1}$ along with the rest of the inputs $a_i$. When combining the terms, there is a part of the coherence condition which reads similarly to the right-hand side of the $d = 3$ relation, namely, there are terms

$$m_1(m_n(a_1 \otimes \cdots \otimes a_n)) \quad\text{and}\quad \pm\, m_n(a_1 \otimes \cdots \otimes m_1(a_i) \otimes \cdots \otimes a_n).$$

In degree $d = 4$ the other terms can be written out as

$$-m_2(m_3 \otimes \mathrm{id}) - m_2(\mathrm{id} \otimes m_3) + m_3(m_2 \otimes \mathrm{id} \otimes \mathrm{id}) - m_3(\mathrm{id} \otimes m_2 \otimes \mathrm{id}) + m_3(\mathrm{id} \otimes \mathrm{id} \otimes m_2),$$

showing how elements in the image of $m_2$ and $m_3$ interact. That is, the homotopy of a product (one input lying in the image of $m_2$), minus the product of elements in which one input is a homotopy, differ by a boundary. For higher order $d$, these middle terms show how the middle maps behave with respect to terms coming from the image of another higher homotopy map.
Diagrammatic interpretation of axioms
There is a nice diagrammatic formalism of $A_\infty$-algebras, described in the article Algebra + Homotopy = Operad, which explains how to think about these higher homotopies visually. This intuition is encapsulated algebraically in the discussion above, but it is useful to visualize it as well.
Examples
Associative algebras
Every associative algebra $(A, \cdot)$ has an $A_\infty$-structure by defining $m_2(a \otimes b) = a \cdot b$ and $m_i = 0$ for $i \neq 2$. Hence $A_\infty$-algebras generalize associative algebras.
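As a quick check (a sketch, using the sign convention from the definition above): with $m_i = 0$ for $i \neq 2$, every term of the $d = 3$ coherence relation containing $m_1$ or $m_3$ vanishes, leaving only the decompositions $(r, s, t) = (0, 2, 1)$ and $(1, 2, 0)$:

$$m_2(m_2 \otimes \mathrm{id}) - m_2(\mathrm{id} \otimes m_2) = 0 \quad\Longleftrightarrow\quad (a \cdot b) \cdot c = a \cdot (b \cdot c),$$

which is exactly ordinary associativity, and all higher relations then hold trivially.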
Differential graded algebras
Every differential graded algebra $(A^\bullet, d)$ has a canonical structure as an $A_\infty$-algebra where $m_1 = d$ and $m_2$ is the multiplication map. All other higher maps $m_i$, $i \geq 3$, are equal to $0$. Using the structure theorem for minimal models, there is a canonical $A_\infty$-structure on the graded cohomology algebra $H^\bullet(A, d)$ which preserves the quasi-isomorphism class of the original differential graded algebra. One common example of such dga's comes from the Koszul algebra arising from a regular sequence. This is an important result because it helps pave the way for the equivalence of the homotopy categories of differential graded algebras and $A_\infty$-algebras.
Cochain algebras of H-spaces
One of the motivating examples of $A_\infty$-algebras comes from the study of H-spaces. Whenever a topological space $X$ is an H-space, its associated singular chain complex $C_\bullet(X)$ has a canonical $A_\infty$-algebra structure from its structure as an H-space.
Example with infinitely many non-trivial $m_i$
Consider the graded algebra $V^\bullet = V_0 \oplus V_1$ over a field $k$ of characteristic $0$, where $V_0$ is spanned by the degree-$0$ vectors $v_1, v_2$ and $V_1$ is spanned by the degree-$1$ vector $w$. Even in this simple example there is a non-trivial $A_\infty$-structure which gives differentials in all possible degrees. This is partially due to the fact that there is a degree-$1$ vector, giving a rank-$1$ vector space of degree $n$ in $V^{\otimes n}$. Define the differential $m_1$ by

$$m_1(v_1) = w, \qquad m_1(v_2) = w,$$

and for $n \geq 2$ define the higher products $m_n$ to be non-zero on the rank-one subspaces of tensors of the basis vectors with powers of $w$, with $m_n$ equal to $0$ on any tensor not of this form. In degree $2$, for the multiplication map $m_2$, the coherence condition gives the Leibniz rule relating $m_1$ and $m_2$, and the relations in degree $3$ and above give non-zero correction terms. When relating these equations to the failure of associativity, there exist non-zero terms: the coherence conditions for the higher $m_n$ give a non-trivial example where associativity does not hold on the nose. Note that in the cohomology algebra we have only the degree-$0$ terms, since $w$ is killed by the differential ($w = m_1(v_1)$).
Properties
Transfer of A∞ structure
One of the key properties of $A_\infty$-algebras is that their structure can be transferred to other algebraic objects given the correct hypotheses. An early rendition of this property was the following: given an $A_\infty$-algebra $(A, m_\bullet)$ and a homotopy equivalence of complexes

$$f\colon (B, d_B) \to (A, m_1),$$

there is an $A_\infty$-algebra structure on $B$ inherited from $A$, and $f$ can be extended to a morphism of $A_\infty$-algebras. There are multiple theorems of this flavor with different hypotheses on $B$ and $f$, some of which have stronger results, such as uniqueness up to homotopy for the structure on $B$ and strictness on the map $f$.
Structure
Minimal models and Kadeishvili's theorem
One of the important structure theorems for $A_\infty$-algebras is the existence and uniqueness of minimal models, which are defined as $A_\infty$-algebras whose differential $m_1$ is zero. Taking the cohomology algebra of an $A_\infty$-algebra $(A, m_\bullet)$ with respect to the differential $m_1$, so as a graded algebra

$$H^\bullet(A) = \frac{\ker m_1}{\operatorname{im} m_1},$$

with multiplication map $[a] \cdot [b] = [m_2(a \otimes b)]$, it turns out this graded algebra can canonically be equipped with an $A_\infty$-structure

$$(H^\bullet(A), 0, [m_2], m_3, m_4, \ldots),$$

which is unique up to quasi-isomorphism of $A_\infty$-algebras. In fact, the statement is even stronger: there is a canonical $A_\infty$-morphism

$$(H^\bullet(A), 0, [m_2], m_3, \ldots) \to (A, m_\bullet)$$

which lifts the identity of $H^\bullet(A)$. Note these higher products are given by the Massey products.
Motivation
This theorem is very important for the study of differential graded algebras because they were originally introduced to study the homotopy theory of rings. Since taking cohomology kills the homotopy information, and not every differential graded algebra is quasi-isomorphic to its cohomology algebra, information is lost by taking this operation. But the minimal models let you recover the quasi-isomorphism class while still forgetting the differential. There is an analogous result for A∞-categories by Maxim Kontsevich and Yan Soibelman, giving an A∞-category structure on the cohomology category of the dg-category consisting of cochain complexes of coherent sheaves on a non-singular variety over a field of characteristic $0$, with morphisms given by the total complex of the Čech bi-complex of the differential graded sheaf of homomorphisms (pp. 586–593). In this way, the degree-$k$ morphisms in the cohomology category are given by the corresponding $\operatorname{Ext}^k$-groups.
Applications
There are several applications of this theorem. In particular, given a dg-algebra, such as the de Rham algebra $\Omega^\bullet(X)$ of a smooth manifold $X$, or the Hochschild cohomology algebra, it can be equipped with an $A_\infty$-structure.
Massey structure from DGA's
Given a differential graded algebra $(A^\bullet, d)$, its minimal model as an $A_\infty$-algebra $(H^\bullet(A), 0, [m_2], m_3, m_4, \ldots)$ is constructed using the Massey products. That is,

$$m_n(x_1, \ldots, x_n) \in \langle x_1, \ldots, x_n \rangle,$$

the $n$-fold Massey product, whenever the latter is defined. It turns out that any $A_\infty$-algebra structure on $H^\bullet(A)$ is closely related to this construction. Given another $A_\infty$-structure on $H^\bullet(A)$ with maps $m_i'$, there is the relation

$$m_n(x_1, \ldots, x_n) - m_n'(x_1, \ldots, x_n) \in \Gamma_n,$$

where $\Gamma_n$ denotes the indeterminacy of the Massey product $\langle x_1, \ldots, x_n \rangle$. Hence all such $A_\infty$-enrichments on the cohomology algebra are related to one another.
Graded algebras from its ext algebra
Another structure theorem is the reconstruction of an algebra from its Ext algebra. Given a connected graded algebra

$$A = k \oplus A_1 \oplus A_2 \oplus \cdots,$$

it is canonically an associative algebra. There is an associated algebra, called its Ext algebra, defined as

$$\operatorname{Ext}^\bullet_A(k, k),$$

where multiplication is given by the Yoneda product. Then there is an $A_\infty$-quasi-isomorphism between $A$ and the Ext algebra of its Ext algebra, $\operatorname{Ext}^\bullet_{\operatorname{Ext}^\bullet_A(k,k)}(k, k)$. This identification is important because it gives a way to show that all derived categories are derived affine, meaning they are isomorphic to the derived category of some algebra.
See also
A∞-category
Associahedron
Mirror symmetry conjecture
Homological mirror symmetry
Homotopy Lie algebra
Derived algebraic geometry
References
Kontsevich, Maxim (1995). "Homological Algebra of Mirror Symmetry". Proceedings of the International Congress of Mathematicians (Zürich, 1994). The original paper linking $A_\infty$-structures to mirror symmetry.
Homotopical algebra
Homological algebra
Algebraic geometry
Homotopy theory | Homotopy associative algebra | [
"Mathematics"
] | 2,184 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Algebraic geometry",
"Homological algebra"
] |
76,164,070 | https://en.wikipedia.org/wiki/Cellular%20anastasis | Anastasis is a cellular phenomenon characterized by the recovery of cells threatened by cell death; it essentially reverses the process of programmed cell death, or apoptosis. Contrary to the prior assumption that apoptosis is irreversible, some cells have been discovered to resist the stimuli that trigger apoptosis. Some of these cells can survive even following the activation of executioner caspases, forming the basis of anastasis. The initial phase of recovery begins when transcription is initiated and the cell recovers from previous stressors. Finally, the cell's cytoskeleton undergoes reorganization, reinforcing its structure and encouraging migration. Anastasis is a key factor in the survival of cancer cells exposed to chemotherapy, and its relatively recent discovery changed the way scientists approach the topic of cell death. Anastasis is possible even during advanced stages of cell death, leading researchers to believe that further research on the topic can have therapeutic and pathological implications. Further exploration of the phenomenon could potentially bring forth information including treatment for neurodegenerative diseases and anti-aging therapy.
History
Apoptosis
Apoptosis, or programmed cell death, was first described in 1842 by Carl Vogt and was initially believed to be irreversible: once a cell exhibited signs of apoptosis, the cell was doomed. Apoptosis is triggered by external or internal signals, such as developmental cues or cellular damage, which activate cellular pathways leading to apoptosis. Cells at risk of cell death display shrinkage and membrane blebbing. Once the pathway to cellular death is initiated, enzymes known as caspases, a family of cysteine proteases, are activated. Caspases break down proteins and DNA in the cell, preparing it for removal by phagocytes. Should apoptosis be restricted or prevented, uncontrolled cell division and tumor growth may occur. Apoptosis has a significant role in maintaining tissue homeostasis by eliminating damaged cells and preventing tumor formation. Cells that are no longer needed or damaged are targeted for apoptosis, aiding in the regulation of normal conditions and body functioning.
Anastasis
Apoptosis was once considered irreversible and unavoidable before the recent discovery of the process of anastasis; it is a rapid process with many initiating factors, and its effects were once believed to be permanent. Anastasis, meaning "rising to life", is a term coined by siblings Ho Man Tang and Ho Lam Tang following their discovery at the University of Hong Kong in 2007. The Tang siblings exposed breast cancer cells to a toxic chemical, ethanol, to induce apoptosis and waited for signs of cell death. After the cells displayed these characteristics, they washed the cells with fresh medium and allowed them to incubate. Many cells in the original study survived and appeared normal once again following the washing with fresh medium, showing that survival after the onset of apoptosis was possible. The Tangs' results were not initially well received because of the then-prevailing opinion that apoptosis was irreversible. However, their research eventually became more accepted and challenged the traditional understanding of apoptosis as an irreversible process. The discovery of anastasis suggested that cells have the potential to reverse the process of cell death under certain circumstances.
Etymology
The word Anastasis comes from the Greek word for resurrection, ανάσταση. The prefix ana- means "upward" or "again", and the root sta- means "to stand", forming a combined meaning of "standing again" or "resurrection". In Christianity, the term anastasis refers to the resurrection of Jesus Christ. The term is used to describe the notion of rising or standing again after a period of death or dormancy. The use of the word anastasis began increasing steadily following the Tang siblings' discovery in 2007.
Process
Anastasis begins in response to stimuli such as DNA damage, chemical stress, and other indicators of approaching cellular death. During early stages of apoptosis, mechanisms allow the cell to evade destruction and halt the process of cell death. Once the apoptotic process is halted, the cell undergoes recovery and repair. The process of transcription resumes, allowing the cell to synthesize vital proteins. Normal cellular morphology and function are restored, and the cell is no longer in danger of cell death. The recovery of cells can limit the damage done to tissue by injury or infection.
Clinical applications
Cancer treatment
Some cancer cells can undergo the process of anastasis after they are exposed to chemotherapy. Anastasis can help cancer cells by enhancing their migration, metastasis, and resistance to chemotherapy. The process of anastasis is one explanation for the survival of cancer cells after they are treated with cytotoxic drugs; apoptotic cells are able to recover via anastasis following the elimination of such compounds. Similarly to the University of Hong Kong study on breast cancer cells, a study of HeLa cancer cells showed that the cells were able to recover from the presence of caspase activity and ethanol after being washed with fresh medium. Another study suggested that anastasis in normal cells can even induce carcinomatous transformation. Because of these effects, anastasis can allow tumors to progress and grow in size. By understanding this phenomenon, cancer treatments may be improved.
References
Cell signaling
Immunology
Programmed cell death | Cellular anastasis | [
"Chemistry",
"Biology"
] | 1,121 | [
"Senescence",
"Immunology",
"Programmed cell death",
"Signal transduction"
] |
76,164,607 | https://en.wikipedia.org/wiki/Gap%20junction%20modulator | A gap junction modulator is a compound or agent that either facilitates or inhibits the transfer of small molecules between biological cells by regulating gap junctions. Various physiological processes, including cardiac, neural and auditory ones, depend on gap junctions to perform crucial regulatory roles, and the modulators themselves are the key players in this procedure. Gap junctions are necessary for the diffusion of small molecules from cell to cell, keeping the cells interlinked and connecting their cytoplasm, allowing the transfer of signals or resources throughout the body.
Many different molecules act as modulators in gap junctions, from simple ions to complex proteins. Protein kinases modulate the opening and closing of connexin pores by moderating phosphorylation. Chemical gating modulators such as calmodulin, calcium, and pH values are key in regulating the gap junction proteins. The functions of different modulators can be categorized into five aspects: enhancement of gap junction activity; inhibition of gap junction activity; connexin-specific modulation; voltage-dependent modulation; and natural compounds.
These modulators can be potential therapeutic targets for a number of disorders and are essential in the regulation of several physiological processes, potentially providing solutions to some diseases caused by issues in the gap junctions. A variety of gap junction modulators are being investigated as pharmaceutical agents to treat and regulate these diseases, such as amiodarone to treat heart problems such as ventricular arrhythmia, tonabersat to treat cortical spreading depression, and rotigaptide and danegaptide to combat bupropion overdose.
Gap Junctions
Gap junctions are collections of intercellular channels that allow ions and other small molecules to move directly between cells. These junctions are made up of a number of gap junction channels, each consisting of two connexons, which in turn are each built from six protein subunits called connexins; a gene family of nearly 20 members encodes the connexins found in mammals.
Through gap junctions, the majority of cells in tissues communicate with one another, with the exception of a small number of terminally differentiated cells like blood and skeletal muscle cells. The gap junction channels bridge the cytoplasm of the two cells, allowing ions and small molecules to pass in both directions through these channels.
Categories of Gap Junction Modulators
Protein kinases
Protein kinase enzymes mediate phosphorylation, the addition of phosphate groups to the junction proteins, and play important roles in controlling the junction proteins and their subunits.
These kinases, such as PKA and PKC, phosphorylate the connexin gap junctions in the heart. Phosphate group addition changes the charge and configuration of the connexin protein, opening (for PKA) and closing (for PKC) the transmembrane channel pores. These same proteins can also undergo dephosphorylation by phosphatase enzymes, which reverses phosphorylation, reopening or reclosing the connexin pores.
Chemical gating
Calmodulin
Calmodulin is a model Ca2+ sensor that is very adaptable; its high-affinity Ca2+-binding domains are EF-hands, a structural motif optimized for binding calcium ions. Calmodulin is present in all eukaryotic cells, where it mediates calcium-dependent signalling. Calmodulin changes conformation upon binding Ca2+ to form complexes with a wide range of target proteins.
Intracellular Ca2+-activated calmodulin (CaM) inhibits gap junction channels, which is critical for several cellular functions, such as lens transparency, synchronization of heart contraction, and hearing.
Calcium
Calcium ions are a major modulator that can completely close the gap junction proteins. Ca2+ ions binding to amino acid side chains change the structure of the protein, decreasing connectivity to other molecules. Calcium affects not only connexin but also calmodulin, a calcium-binding protein present in all eukaryotic cells. Calcium ions are fairly abundant in cells, as they are a main signalling ion of the nervous system, and the calcium released by nervous signals can inform the junction proteins of changes needed to adapt to their surroundings. Ca2+ itself is associated with cell-to-cell uncoupling, which breaks apart the pathway between cells when the pathway itself becomes harmful, such as pathways to injured cells, as abnormal pathways and communication between injured cells can cause various disorders.
pH changes
Changes in the pH of the environment to a more acidic or alkaline one can also affect the gap junction protein structure, changing the proteins' shape and closing the diffusion pathways. These pH changes are caused by chemical reactions from other metabolic processes or even inflammation signals from disease or the body's immune system. Significant pH changes are acidification, an increase in hydrogen ion concentration, and alkalinization, a decrease in hydrogen ion concentration. Lower pH is associated with the closing of the gap junction channels, either by protonation of amino acids, which can change the entire protein structure, or in some cases by denaturing the whole protein if the pH drops too far, completely preventing passage of molecules through the gap junctions.
Functionality
Gap junction enhancers (GJEs)
Gap junction enhancers (GJEs) facilitate cell-to-cell communication by increasing gap junction coupling or inducing depolarization across cell membranes. Examples include growth factors like TGF-beta and EGF, which are important in wound healing and tissue repair; retinoic acid, which plays a part in cellular differentiation; and acetylcholine (ACh), which contributes to learning, motivation, alertness, focus, and the stimulation of rapid eye movement (REM) sleep in the brain. ACh depolarizes the membrane potential closer to the threshold, thereby increasing the chance of neuron firing (the release of neurotransmitters).
Gap junction inhibitors (GJIs)
Gap junction inhibitors (GJIs) lessen communication between cells by lowering gap junction coupling or inducing hyperpolarization across cell membranes. Among them are 18-alpha-glycyrrhetinic acid (18-AGA), which has anti-inflammatory and anti-tumor effects; carbenoxolone, which is used to treat inflammatory diseases; and gamma-aminobutyric acid (GABA), which blocks signals and lessens the likelihood of generating an action potential through the hyperpolarization of neurons. These inhibitors play a significant part in regulating the hyperactivity of nerve cells linked to stress, anxiety, and terror.
Connexin (Cx) specific modulators
Connexin (Cx) specific modulators target the building blocks of gap junction channels - connexin proteins. For example, the Cx43 mimetic peptides Gap26 and Gap27 bind to extracellular loop regions one and two of CxHc (connexin hemichannels), respectively, to selectively block Cx43-based gap junctions, resulting in the rapid closure of these channels.
Voltage-dependent modulators
Voltage-dependent modulators modify the cell membrane potential, which has an impact on gap junctions. For instance, substances like heptanol and quinine act as modulators and can interfere with gap junctions' ability to sense voltage, which inhibits the junctions.
Natural compounds
A variety of natural compounds such as flavonoids have been reported to modulate gap junction activity. For example, it has been shown that the dietary flavonoid quercetin inhibits gap junction communication in specific cell types such as cardiovascular cells or cancer cells.
Pharmaceutical agents
It has been found that many drugs either work primarily by modulating gap junction function or modulate it as an unintended side effect. Drugs that inhibit gap junction communication include the antiarrhythmic drug amiodarone and the anti-migraine drug tonabersat; drugs that promote gap junction conduction include rotigaptide and danegaptide.
Amiodarone
Amiodarone treats ventricular arrhythmia, a potentially fatal form of arrhythmia, a disease of the heart in which the heartbeat is irregular, either too fast or too slow. It is especially recommended for patients who do not respond well to other typical therapies. Amiodarone belongs to class III of the antiarrhythmic drugs. It helps to maintain a regular heart rhythm by acting directly on heart tissue and effectively slowing nerve impulses to the heart. In addition, amiodarone inhibits the potassium current that repolarizes the myocardium during the third phase of the cardiac action potential. As a result, the effective refractory period of the heart cells and the length of the action potential are prolonged, which in turn reduces the incidence of arrhythmia.
Tonabersat
Tonabersat is a novel benzopyran compound that selectively binds to a unique brain site, the α2δ-1 subunit of voltage-gated calcium channels, reducing calcium entry through this channel. Reduced calcium entry is associated with reduced cortical spreading depression (CSD), as it inhibits gap-junction communication. This is important because CSD relies on neuronal-glial cell communication through connexin-containing gap junctions and hemichannels, and the resulting abnormal sensory processing due to peripheral and/or central sensitization is thought to cause migraines.
Rotigaptide and danegaptide
Recently, rotigaptide and danegaptide were found to be effective as an antidote to toxicity caused by overdose of certain drugs, such as bupropion. Bupropion is an antidepressant, but it is also a cardiotoxin if ingested in large doses. Rotigaptide and danegaptide, two small-molecule medications that increase gap junction conductance by facilitating gap junction activity, can counteract the effect of bupropion on the cardiac gap junctions. Thus these modulators can be essential in treating bupropion overdose.
See also
Gap junction modulation
Gap junction protein
References
Cell communication
Cell signaling | Gap junction modulator | [
"Biology"
] | 2,094 | [
"Cell communication",
"Cellular processes"
] |
76,172,564 | https://en.wikipedia.org/wiki/MicroRNA%20biosensors | MicroRNA (miRNA) biosensors are analytical devices that involve interactions between the target miRNA strands and recognition element on a detection platform to produce signals that can be measured to indicate levels or the presence of the target miRNA. Research into miRNA biosensors shows shorter readout times, increased sensitivity and specificity of miRNA detection and lower fabrication costs than conventional miRNA detection methods.
miRNAs are a category of small, non-coding RNAs 18–25 nucleotides in length. miRNAs regulate gene expression post-transcriptionally and are abundant in body fluids such as saliva, urine and circulatory fluids such as blood. miRNAs are found in animals and plants and have regulatory functions that affect cellular mechanisms. miRNAs are highly associated with diseases such as cancers and cardiovascular diseases. In cancer, miRNAs have oncogenic or tumor-suppressor roles and are promising biomarkers for disease diagnosis and prognosis. Many techniques exist in clinical and research settings for analyzing miRNA biomarkers. However, inherent limitations of current methods, such as high cost, time and personnel-training requirements, and low detection sensitivity and specificity, create the need for improved miRNA detection methods.
Background
miRNAs are associated with physiological and pathological processes; hence, measuring them is in demand in fields like human health, agriculture, and environmental testing. Key reasons for detecting miRNAs include:
Potential biomarkers: miRNAs have specific expression in diseases such as cancer, cardiovascular diseases, and autoimmune diseases, which can be beneficial for early detection, prognosis and monitoring for response to treatments. Furthermore, because miRNAs are in body fluids like urine, saliva, and blood, detecting miRNAs is less invasive than methods such as biopsies. This is more comfortable for patients and can facilitate more frequent monitoring of their disease.
Molecular mechanisms: As miRNAs have regulatory roles in gene expression and signaling pathways, studying them can give the etiology of diseases and targeting them can provide therapeutic options.
Personalized medicine: Because the specific expression of miRNAs offers a promising avenue for enhancing personalized medicine, they provide a deeper understanding of individual disease risk, treatment response, and prognosis, which help clinicians make better informed clinical decisions.
History of miRNA detection technology
Early and current detection methods
The first miRNA (lin-4) was detected by Victor Ambros in Caenorhabditis elegans in 1993. The first detection method was Northern blotting (1977), which had low sensitivity. Following that was Reverse Transcription Polymerase Chain Reaction (RT-PCR) (1990), which had high detection sensitivity.
Northern Blotting: Northern blotting involves hybridizing miRNA probes (short nucleic acid sequences) with miRNAs, followed by their separation on a gel and transfer to a membrane. The probes are labeled with radioactive isotopes (raising safety and environmental concerns), enzymes, or fluorescent markers. The quantity of RNA present is inferred from the probe signal’s intensity. While Northern Blotting is highly specific and helpful for validating high-throughput methods like RNA-seq, it requires a large sample volume, is time-consuming, and lacks precision in quantification analysis.
Real-Time Reverse Transcription–Polymerase Chain Reaction (Real-time RT-PCR): This method starts with converting miRNA into cDNA using reverse transcriptase enzymes. The cDNA is then amplified using sequence-specific primers, a process monitored by fluorescent dyes or probes. Real-time RT-PCR is noted for its sensitivity and specificity. However, it faces challenges such as the need for standardization, technical complexities (e.g., primer design, sample preparation), time-intensive processes, and high costs.
High-throughput Methods:
Microarrays (1990): Microarrays enable the detection of thousands of miRNAs in a single experiment. They consist of a solid surface to which complementary miRNA sequences are attached. Introducing miRNAs allows them to bind to these probes, with the amount of miRNA measured by the fluorescence intensity. Microarrays are cost-effective compared to real-time RT-PCR and NGS but have limitations in detecting low quantities of miRNAs and distinguishing between miRNAs with similar sequences.
Next Generation Sequencing (NGS) (2005): NGS begins with RNA extraction and reverse transcription into cDNA, followed by adaptor ligation and amplification. The cDNA is then sequenced on an NGS platform, producing millions of short reads. Expert bioinformaticians and sophisticated tools must align and analyze the data and map reads to reference miRNA sequences for miRNA discovery and identification. NGS offers high sensitivity and specificity for detecting low-quantity miRNAs and identifying miRNAs differing by a single nucleotide.
Principles of microRNA biosensors
Three essential elements make up miRNA biosensors:
Biological recognition element: they can detect specific target molecules and have different types, including antibodies, antigens, DNA/RNA, aptamers, enzymes, and MIPs (molecularly imprinted polymers).
Transducer: following recognition, the transducer is an element required to convert changes in the recognition element to a measurable signal. Based on the type of signal they produce, they are categorized into electrochemical, optical, and mechanical transducers.
Signal processor: computational elements that amplify and process the signals produced from transducers and can be demonstrated by numerical values and digital readouts.
Specificity in miRNA detection
The term “specificity” in the context of miRNA biosensors refers to the ability of the biosensor to identify a particular miRNA within a sample that contains various components and miRNAs with similar sequences. The challenge in achieving this specificity derives from the small size of miRNAs, which may differ from each other by only one nucleotide. Consequently, designing biosensors capable of precisely recognizing the target miRNA is essential.
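To make the single-nucleotide problem concrete, the toy comparison below counts base differences between a target and a near-identical family member (both 22-nt sequences are illustrative stand-ins, not curated miRNA annotations):

```python
# Toy illustration of why miRNA specificity is hard: two family members
# can differ at a single position out of ~22 nucleotides.
def mismatches(seq_a: str, seq_b: str) -> int:
    assert len(seq_a) == len(seq_b), "compare equal-length sequences"
    return sum(a != b for a, b in zip(seq_a, seq_b))

target  = "UGAGGUAGUAGGUUGUAUAGUU"  # hypothetical target miRNA (22 nt)
sibling = "UGAGGUAGUAGGUUGUGUAGUU"  # hypothetical family member, one base changed

n = mismatches(target, sibling)
identity = 100 * (1 - n / len(target))
print(f"{n} mismatch out of {len(target)} nt ({identity:.0f}% identical)")
```

A recognition element therefore has to translate a roughly 95%-identical off-target into a clearly distinguishable signal.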
Sensitivity in miRNA detection
Sensitivity in miRNA biosensors refers to their ability to detect target miRNAs at low concentrations within samples. Since miRNAs are typically found in small amounts, biosensors are engineered to identify concentrations as low as femtomolar (10^-15 M) or attomolar (10^-18 M) levels. Achieving such high sensitivity involves enhancements to recognition elements, amplification, and signal processing techniques. The LoD (limit of detection) gives the concrete value of sensitivity in biosensors: it is the lowest concentration of miRNA that can be distinguished from the background (zero) signal with a specified level of confidence.
The dynamic range of miRNA biosensors refers to the range of concentrations over which the biosensor can accurately detect the target miRNAs, extending from the lowest detectable concentration (the LoD) to the maximum concentration that can be measured without necessitating sample dilution.
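A common way to put a number on the LoD is the 3σ criterion, LoD ≈ 3σ_blank / S, where σ_blank is the standard deviation of the blank (zero-miRNA) signal and S is the calibration slope. The sketch below applies this to made-up numbers; the signal units, slope, and blank readings are all hypothetical:

```python
import statistics

# Hypothetical blank (zero-miRNA) signal readings, arbitrary units
blank_signals = [0.101, 0.098, 0.103, 0.099, 0.102]

# Hypothetical calibration slope: signal units per fM of target miRNA
slope = 0.045  # a.u. / fM

sigma_blank = statistics.stdev(blank_signals)
lod_fM = 3 * sigma_blank / slope   # 3-sigma limit of detection
print(f"LoD ≈ {lod_fM:.2f} fM")    # -> LoD ≈ 0.14 fM for these numbers
```

The dynamic range is then reported from this LoD up to the highest concentration for which the calibration remains valid.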
Types of microRNA biosensors
Electrochemical biosensors
Electrochemical biosensors present significant advantages to miRNA detection over conventional miRNA analysis methods. Using simple electronics reduces production costs and increases ease of use in portable system configurations. This allows for a broader scope of use, including environmental, clinical and food analysis applications.
Electrochemical detection of miRNA relies on measuring changes in an electrode property or in the redox signal of an electroactive compound upon hybridization between the target miRNA and a complementary probe, transduced through electrochemically active reporter species. Various materials can be made into the transduction element, including silver, gold, graphite or nanoparticle variations of such materials. Detection of electrochemical property changes allows for real-time analysis and kinetics data, an advantage that methods such as optical biosensors lack. Light pollution is not a limitation of electrochemical miRNA biosensors. However, amplification techniques such as rolling circle amplification (RCA) may be required when miRNA concentrations are insufficient to produce an electrical signal.
1. Voltammetric and amperometric electrochemical biosensors
Electrochemical miRNA biosensors can be designed to infer voltammetric or amperometric measurements. Upon hybridization of the miRNA target with its complementary probe sequence, voltammetric miRNA biosensors detect the change in current based on a controlled increase or decrease in electric potential on the detection platform. Amperometric-based biosensors detect the change in electric current at a fixed positive electric potential. Recent developments in voltammetric and amperometric miRNA biosensors can be classified as label-based or label-free biosensors, indicating whether or not electroactive labels on the miRNA target are used as the naming suggests.
Voltammetric and amperometric label-free (direct detection) miRNA biosensors
First published in 2009, label-free (direct detection) electrochemical miRNA biosensors function without labelling the target miRNA with electrocatalytic nanoparticle tags or hybridization indicators. Label-free miRNA biosensors were initially based on DNA detection through guanine electrooxidation measurements, with the lower detection limit being 5 nM of miRNA. Since then, electrode materials have been developed to increase the sensitivity of detection down to less than 1 pM, such as with graphene and ionic-liquid modified electrodes. For example, Wu et al. (2013) increased the conductivity of the electrode surface of an amperometric biosensor with a multilayer consisting of Nafion, thionine and palladium nanoparticles, which immobilized the target miRNA on the electrode surface for a lower limit of detection of 1.87 pM. Label-free miRNA biosensors detect signals before and after the hybridization of electroactive nucleic acid bases. For instance, doxorubicin-loaded gold nanoparticles (AuNps) have been integrated with a double-loop hairpin probe that hybridizes with the target miRNA to form heteroduplexes, in which duplex-specific nucleases hydrolyze DNA in the heteroduplex structures to release target miRNA strands for amplification in a signal amplification system. The limit of detection in such a system is 0.17 pM.
Voltammetric and amperometric label-based (indirect detection) miRNA biosensors
Label-based (indirect detection) electrochemical miRNA biosensors require electrocatalytic or redox active molecule or nanoparticle labelling of the miRNA target or complementary capture probes for detection. Generally, label-based approaches offer significantly greater sensitivity of miRNA detection than label-free methods, with sensitivity reaching the fM-aM range.
An example is AuNp-superlattice-based miRNA biosensors utilizing the small molecule cationic dye toluidine blue to detect miRNA-21. Toluidine blue acts as a miRNA intercalative label through electrostatic interaction with the negatively charged backbone phosphate groups. On the biosensor, toluidine blue is a redox indicator to measure the oxidation peak current of toluidine blue and indicated hybridization of miRNA. The LoD levels reached 78 aM.
2. Amplification (enzyme)-based electrochemical miRNA biosensors
Electrochemical detection or amplification strategies for miRNA biosensors have been developed using enzyme-based methods. Amplification of miRNA is often a necessary component of biosensor detection as miRNA concentrations are found in low abundance, and amplification of target miRNA strands will increase the sensitivity of detection. Additionally, inherent properties of miRNA include short strand length and high sequence homology, which present a challenge with detection sensitivity and specificity.
Various methods, such as duplex-specific nuclease enzymes and polymerase extension, can amplify miRNA targets to reach LoD in the fM range. Isothermal amplification techniques are widely used enzyme-based miRNA amplification techniques, given the advantages of cost and time-reduction associated with ease of use compared to polymerase chain reaction (PCR) methods. Isothermal methods amplify nucleic acids at a constant temperature, which removes the thermal cycling requirement as used in PCR and does not require specific enzymes for spatial recognition sites in the target miRNA. A commonly used isothermal technique for miRNA detection is rolling circle amplification (RCA). In the RCA of miRNA targets, the miRNA binds to a complementary circular DNA template, which is continuously and exponentially amplified through the synthesis of long single-stranded DNA. Research with gold electrode electrochemical biosensors has shown that RCA initiated on the electrode has provided LoD levels of 50 aM. RCA's isothermal nature and ease of use allow it to be used in clinical diagnostic and resource-lacking laboratory settings and in point-of-care biosensor devices.
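As a rough illustration of why on-electrode amplification extends detection toward the aM range, the toy model below contrasts linear product growth (a single polymerase copying around one circle) with the exponential growth of branched schemes; every count, rate, and time here is made up rather than measured RCA kinetics:

```python
# Toy comparison of linear vs. exponential isothermal amplification.
templates = 1_000        # hypothetical circularized templates on the electrode
rate = 50                # hypothetical repeats copied per template per minute
doubling_min = 2.0       # hypothetical doubling time of a branched scheme

for t_min in (10, 30, 60):
    linear = templates * rate * t_min                      # product ~ k * t
    exponential = templates * 2 ** (t_min / doubling_min)  # product ~ 2^(t/td)
    print(f"t = {t_min:3d} min: linear ≈ {linear:.1e}, branched ≈ {exponential:.1e}")
```

Either way, the amplified product, not the scarce miRNA itself, is what the electrode ultimately reads out.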
Optical miRNA biosensors
Upon hybridization of the target miRNA tagged with a nucleic acid probe and an optically active reporter, label-based optical biosensors transduce the absorbance or fluorescence optical signal into quantifiable data. The reporters can be either quantum dots or dye labels. On the other hand, label-free optical miRNA biosensors detect changes in the refractive index (RI) at the recognition element, which are caused by the binding of the target miRNA to its bioreceptor. The electromagnetic field probes the RI changes, characterized as an evanescent wave. The electromagnetic fields are generated by guided or resonant optical modes that travel in the transducer element. Additionally, label-free optical miRNA biosensors are insensitive to unbound or background RNA or DNA molecules, as optical detection is confined to the sensing recognition surface. This is beneficial for miRNA detection in small volumes and is an advantage over other label-based miRNA biosensors, as signal detection is based on measuring the total number of miRNA in the sample.
Surface Plasmon resonance-based optical miRNA biosensors
Surface plasmon resonance (SPR) based miRNA biosensors are a label-free method that detects RI changes after target miRNA binds to its probes and forms a complex. Detection involves propagating a surface plasmon wave (SPW) across the metal-dielectric interface surface layer of the biosensor in a Kretschmann configuration. The SPW decays exponentially, where the changes in the SPW propagation constant are measured as the constant is sensitive to change in the RI. A practical example of a label-based SPR-based miRNA biosensor is miR-21 detection with a LoD of 1 fM. The biosensor utilized graphene oxide–gold nanoparticles integrated with the sandwiching of the target miRNA between two DNA probes to amplify the SPR signal and have secondary hybridization through miR-21 report probes.
Electromechanical biosensors
Electromechanical biosensors represent an integration of electrical and mechanical engineering disciplines, employing a detection strategy that hinges on the hybridization of miRNAs to specific probes anchored on the sensor's surface. Subsequent alterations in parameters such as stress or mass are then transduced into electrical signals. A notable implementation involves atomic force microscopy (AFM), which has successfully identified hsa-miR-194 and hsa-miR-205 in samples related to colon and bladder cancer. The underlying mechanism of this approach is AFM's ability to delineate the variations in stiffness across the gold surface of the biosensor, facilitating the detection of miRNA hybridization events. Another pivotal component in electromechanical biosensors is the gold-coated piezoelectric cantilever sensor, which is adept at recognizing hybridized miRNA. Although electromechanical biosensors are highly sensitive to miRNAs, it is difficult to measure miRNAs in samples containing high amounts of other molecules.
Nanomaterials used in miRNA biosensors
Nanomaterials are used for their unique characteristics to facilitate the detection of miRNAs. Here, we discuss some features of nanomaterials used in miRNA biosensors.
Gold nanoparticles (AuNps): AuNps enhance miRNA detection signals and facilitate the stable conjugation of recognition elements into miRNAs. AuNps have excellent catalytic properties, conductivity, high surface area and interface energy and can be modified with molecules such as oligonucleotide aggregates for high affinity binding with specific substrates.
In electrochemical miRNA biosensors, AuNps allow for ease of functionalization for electrochemical reactions that involve changes in potential, current, conductivity, or impedance in detecting target miRNA binding on the detection surface. In optical biosensors, AuNps exhibit unique and tunable optical properties beneficial for SPR miRNA biosensors. When AuNps are exposed to light, propagating surface plasmons needed for detecting receptor-bonded miRNAs are created from a resonant interaction between the electromagnetic field of light and the electron-charged oscillations on the metal surface. This is due to AuNps exhibiting a high density of conduction band electrons and its nanoparticle size allowing multiple angular shifts for more reflectance angles.
Graphene: Graphene is a member of the carbon nanomaterials family and stands out for its biocompatibility, electrical conductivity, light molecular weight, stability, and affordability, making it an exceptional choice for miRNA biosensor applications. It demonstrates excellent responsiveness to chemical, optical, and mechanical stimuli. Graphene is predominantly utilized in electrical and optical miRNA biosensors. A notable recent application involves using laser-induced self-N-doped porous graphene in miRNA biosensors, capable of detecting miRNA hsa-miR-486-5p at concentrations as low as 10 fM. This approach combines cost-effectiveness with high reproducibility, offering significant advantages for conditions like preeclampsia.
Terahertz (THz) Metamaterial with Gold Nanoparticles: THz metamaterial is artificially synthesized and designed to interact with THz frequency waves. When combined with AuNps and after binding with target miRNA, they produce higher changes in THz spectral regions. For instance, a miRNA biosensor based on these materials could detect the miRNA-21 from clinical samples with a LoD of 14.54 aM.
Technologies and principles of multiplex miRNA biosensors
Multiplex miRNA biosensors are designed to detect multiple types of miRNAs simultaneously with high specificity and sensitivity. This capability is essential for several reasons: First, it allows for detecting various miRNAs within a single sample that may contribute to disease, enabling comprehensive monitoring during treatment while facilitating high-throughput screening. Second, it can significantly reduce cost and time by allowing the simultaneous analysis of data from multiple miRNAs. Here are some recent technologies in multiplex miRNA biosensors:
DNA-PAINT on a DNA origami-based sensor platform: this miRNA biosensor has a unique geometric barcoding system and can detect up to 4 miRNAs at the same time. The 52 nm distance intervals between strands enable the platform to distinguish single mismatches, with limits of detection from 11 fM to 388 fM.
CRISPR multiplex biosensor: this platform utilizes various technologies, including electrochemical microfluidics and Cas13a, to enable the amplification-free detection of eight miRNAs. It features a design with four divided channels for electrochemical analysis.
Applications
Diagnostic and prognostic applications
Since the initial discovery of miRNAs, large databases of miRNAs have been identified in humans, plants and animals. As many miRNAs are associated with disease onset and development, miRNAs are a suitable biomarker for biosensor detection in clinical settings. Considerations must be taken into account of the biological sample source for miRNA targets. Clinical miRNA sample analysis commonly comes in blood, plasma, serum, seminal fluid, saliva, urine, and tissue-derived miRNAs. In the context of cancer, biosensor detection of miRNAs is most conveniently performed in the form of liquid biopsies, as circulatory miRNAs are found in the highest abundance in liquid samples.
Point-of-Care (POC) testing
Research into POC diagnostic tests has resulted in the development of microfluidic biosensors capable of early diagnostic clinical analysis of cancer-associated miRNAs, which produce cost- and time-efficient results with increased sensitivity and specificity over traditional methods. Liquid biopsy droplet-based microfluidic biosensors can be fabricated into POC devices for ease of use by integrating with pre-existing devices and interfaces and can extend utilization beyond traditional laboratory settings and those without sophisticated instruments. An example of developments in POC testing for prostate cancer is where miR-21 in low concentrations of urine samples was detected with a limit of detection of 2 nM on screen-printed, label-based electrochemical biosensor chips. Detection was rapid, with results produced in less than two hours.
Agriculture management
Besides clinical usage, miRNA biosensors have been adapted in agriculture for monitoring plant stress, growth and disease, as plant miRNAs are associated with growth-regulatory mechanisms. One example is an electrochemical biosensor fabricated to detect miR-319a, a miRNA associated with the phytohormone response that regulates rice seedling growth. Isothermal alkaline phosphatase catalytic signal amplification of the target miRNA strands was integrated with a three-electrode system to detect miR-319a down to an LoD of 1.7 fM. AuNP label-based optical biosensors were tested for detecting miRNA-1886, an indicator of drought stress in tomato plants; decreasing irrigation levels increased the concentration of miRNA-1886 over a range of 100 to 6,800 fM.
Research applications
1. Molecular and cellular biology
As miRNAs are among the main regulators of genes, detecting and measuring them at the cellular and molecular level helps decipher miRNA interactions with other molecules. For instance, a study by Bandi et al. found that miR-15a and miR-16 function in tumorigenesis of non-small cell lung cancer (NSCLC) cell lines. miRNA biosensors also play a significant role in elucidating disease mechanisms. For example, a study on cardiovascular diseases found that miRNA biosensors based on a DNA tetrahedron nanostructure can recognize miR-133a at attomolar levels, which is helpful for further studies on myocardial infarction.
2. Drug discovery and development
Because of their high-throughput potential, miRNA biosensors can significantly accelerate drug discovery by evaluating the effect of candidate drugs on miRNA expression levels to determine which drugs can target dysregulated miRNAs in disease. Furthermore, miRNA biosensors can monitor miRNA expression in real time to observe the changes that occur at different drug concentrations, which is especially crucial in early-phase clinical trials for drug dosage optimization. In addition, by testing various miRNA expression profiles, researchers can discover relationships between diseases and miRNA expression.
Limitations to miRNA biosensors
While miRNA biosensors hold considerable promise for miRNA detection, several critical challenges must be addressed:
Sensitivity and Specificity: The low abundance of miRNAs in complex biological samples, such as blood, necessitates enhancing biosensor sensitivity to detect miRNAs at levels beyond femtomolar concentrations. Additionally, due to the high sequence similarity among miRNAs, improving the specificity of these biosensors is essential to differentiate between miRNAs based on single nucleotide differences.
Sample Preparation: Extracting miRNAs from samples presents significant difficulties. The process is complex and requires optimization to ensure the purity and integrity of the miRNAs for accurate detection.
Stability of miRNA Biosensor: The stability of miRNA biosensors is compromised by environmental conditions, particularly for components like aptamers and antibodies. This issue is especially pertinent for point-of-care (POC) devices, which require robustness and longevity to be effectively used in various settings.
Standardization: A significant limitation in the field is the absence of standardized guidelines and universal reference miRNAs for comparing results across blood and plasma samples. Establishing reliable normalizers, characterized by consistent expression and stability across all samples, is crucial for accurately interpreting miRNA levels.
Addressing these challenges is essential for advancing and adopting miRNA biosensor technologies.
Future directions
The significance of miRNA in diagnostics and the recent advancements in miRNA detection from various sample sources, particularly in clinical settings, underscore the need for enhancing miRNA biosensor technologies. The future of miRNA biosensor optimization encompasses several key areas:
Furthering nanomaterial integration research: Nanomaterials, including graphene, gold nanoparticles, and quantum dots, can significantly improve the biosensors’ specificity and sensitivity, making them more effective in detecting miRNAs.
Multiplex detection: Efforts are underway to refine miRNA biosensors for the simultaneous detection of multiple miRNA types, especially those within the same family, from small-volume samples; artificial intelligence can aid in distinguishing between miRNA types and correlating them with clinical outcomes. Such advancements would be particularly beneficial for point-of-care (POC) devices, simplifying sample preparation, enhancing user-friendliness, and enabling physicians to remotely monitor miRNA levels in real time.
Encapsulation technologies: Encapsulation technologies aim to safeguard the biosensors’ sensitive components from environmental threats, ensuring their durability and reliability.
Standardization of miRNA research and development: The development of standardized guidelines and the identification of universal genes for miRNA expression comparison will facilitate the accurate evaluation of miRNA biosensors across different clinical scenarios.
Clinical Sample Analysis: The study of prospective and retrospective analyses of clinical samples and comparing miRNA biosensor results with those obtained via real-time qPCR and sequencing technologies can assess biosensor performance under varied clinical conditions.
These advancements suggest a focused trajectory for miRNA biosensor development, aiming at technological enhancements that promise improved diagnostic capabilities and clinical applications.
References
MicroRNA
Clinical medicine
Genomics techniques
Biomedical engineering
Medical research | MicroRNA biosensors | [
"Chemistry",
"Engineering",
"Biology"
] | 5,555 | [
"Genetics techniques",
"Genomics techniques",
"Biological engineering",
"Biomedical engineering",
"Molecular biology techniques",
"Medical technology"
] |
71,831,510 | https://en.wikipedia.org/wiki/Nebivolol/valsartan | Nebivolol/valsartan, sold under the brand name Byvalson among others, is a medication used to treat hypertension.
It is available as a generic medication.
References
Further reading
External links
Combination antihypertensive drugs
Angiotensin II receptor antagonists | Nebivolol/valsartan | [
"Chemistry"
] | 60 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
71,840,293 | https://en.wikipedia.org/wiki/Height%20above%20mean%20sea%20level | Height above mean sea level is a measure of a location's vertical distance (height, elevation or altitude) in reference to a vertical datum based on a historic mean sea level. In geodesy, it is formalized as orthometric height. The zero level varies in different countries due to different reference points and historic measurement periods. Climate change and other forces can cause sea levels and elevations to vary over time.
Uses
Elevation or altitude above sea level is a standard measurement for:
Geographic locations such as towns, mountains and other landmarks.
The top of buildings and other structures.
Mining infrastructure, particularly underground.
Flying objects such as airplanes or helicopters below a Transition Altitude defined by local regulations.
Units and abbreviations
Elevation or altitude is generally expressed as "metres above mean sea level" in the metric system, or "feet above mean sea level" in United States customary and imperial units. Common abbreviations in English are:
AMSL – above mean sea level
ASL – above sea level
FAMSL – feet above mean sea level
FASL – feet above sea level
MAMSL – metres above mean sea level
MASL – metres above sea level
MSL – mean sea level
For elevations or altitudes, often just the abbreviation MSL is used, e.g., Mount Everest (8849 m MSL), or the reference to sea level is omitted completely, e.g., Mount Everest (8849 m).
Methods of measurement
Altimetry is the measurement of altitude or elevation above sea level. Common techniques are:
Surveying, especially levelling.
Global Navigation Satellite Systems (such as GPS), where a receiver determines a location from pseudoranges to multiple satellites. A geoid model is needed to convert the resulting 3D position to an elevation above sea level (see the sketch after this list).
Pressure altimeter measuring atmospheric pressure, which decreases as altitude increases. Since atmospheric pressure also varies with the weather, a recent local measurement of the pressure at a known altitude is needed to calibrate the altimeter.
Stereoscopy in aerial photography.
Aerial lidar and satellite laser altimetry.
Aerial or satellite radar altimetry.
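The two conversions mentioned in the list above can be sketched in a few lines. This is an illustrative sketch, not surveying software: the geoid undulation and the pressure reading in the example are assumed numbers, and the barometric inversion uses International Standard Atmosphere (ISA) constants.

```python
# Sketch: two common ways to estimate height above mean sea level.

def orthometric_height(h_ellipsoidal_m: float, geoid_undulation_m: float) -> float:
    """GNSS yields height above the reference ellipsoid; subtracting the
    geoid undulation N (from a geoid model such as EGM2008) gives the
    orthometric height H ~ h - N above mean sea level."""
    return h_ellipsoidal_m - geoid_undulation_m

def barometric_altitude(pressure_pa: float,
                        p0_pa: float = 101_325.0,   # calibrated sea-level pressure
                        t0_k: float = 288.15,       # ISA sea-level temperature
                        lapse_k_per_m: float = 0.0065) -> float:
    """Invert the ISA barometric formula; p0 must come from a recent
    local measurement because pressure varies with the weather."""
    exponent = 0.190263  # R*L / (g*M) for dry air in the ISA model
    return (t0_k / lapse_k_per_m) * (1.0 - (pressure_pa / p0_pa) ** exponent)

# Example: a GNSS fix of 75.3 m ellipsoidal height where the geoid model
# gives N = 46.1 m implies about 29.2 m above mean sea level.
print(orthometric_height(75.3, 46.1))
print(barometric_altitude(89_875.0))  # ~1,000 m in a standard atmosphere
```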
Accurate measurement of historical mean sea levels is complex. Land mass subsidence (as occurs naturally in some regions) can give the appearance of rising sea levels. Conversely, markings on land masses that are uplifted (due to geological processes) can suggest a relative lowering of mean sea level.
See also
Depth below seafloor
Height above average terrain
Height above ground level
List of places on land with elevations below sea level
References
Geography terminology
Geodesy
Topography
Altitudes in aviation
Vertical position
Vertical datums | Height above mean sea level | [
"Physics",
"Mathematics"
] | 525 | [
"Vertical position",
"Physical quantities",
"Distance",
"Applied mathematics",
"Geodesy"
] |
59,572,951 | https://en.wikipedia.org/wiki/Lioz | Lioz (), also known as Royal Stone (pedra real), is a type of limestone, originating in Portugal, from the Lisbon region. It is famed for its use as an ornamental stone, resulting in its proliferation in palaces, cathedrals, and important civic buildings throughout Portugal and the former Portuguese Empire. Owing to its historical relevance, lioz was designated a Global Heritage Stone Resource.
Characteristics
Lioz stone contains rudist fossils dating back 120 million years. Its color is generally ivory but varies from light grey to whitish and rosy. This type of limestone is used as a decorative construction material because of its fossiliferous composition.
During the 17th–18th centuries lioz was widely used in churches, monuments and official buildings in Portugal, as well as in some Portuguese colonies (e.g. Salvador, Bahia, Brazil); it was therefore also called the “royal stone”. Lioz stone has been designated by the International Union of Geological Sciences as a Global Heritage Stone Resource.
Notable buildings
Monuments made of lioz include:
Portugal:
Jeronimos Monastery
Belém Tower
Belém Cultural Centre
Rossio Station
Mafra Palace
Brazil:
Cathedral of Salvador
Basilica of the Immaculate Conception
See also
Limestone
References
Limestone
Architecture in Portugal
Geology of Portugal
Building materials
"Physics",
"Engineering"
] | 250 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
59,575,257 | https://en.wikipedia.org/wiki/Split%20gene%20theory | The split gene theory is a theory of the origin of introns, long non-coding sequences in eukaryotic genes between the exons. The theory holds that the randomness of primordial DNA sequences would only permit small (< 600bp) open reading frames (ORFs), and that important intron structures and regulatory sequences are derived from stop codons. In this introns-first framework, the spliceosomal machinery and the nucleus evolved due to the necessity to join these ORFs (now "exons") into larger proteins, and that intronless bacterial genes are less ancestral than the split eukaryotic genes. The theory originated with Periannan Senapathy.
The theory provides solutions to key questions concerning the split gene architecture, including split eukaryotic genes, exons, introns, splice junctions, and branch points, based on the origin of split genes from random genetic sequences. It also provides possible solutions to the origin of the spliceosomal machinery, the nuclear boundary and the eukaryotic cell.
This theory led to the Shapiro–Senapathy algorithm, which provides the methodology for detecting the splice sites, exons and split genes in eukaryotic DNA, and which is the main method for detecting splice site mutations in genes that cause hundreds of diseases.
Split gene theory requires a separate origin of all eukaryotic species. It also requires that the simpler prokaryotes evolved from eukaryotes. This completely contradicts the scientific consensus about the formation of eukaryotic cells by endosymbiosis of bacteria. In 1994, Senapathy wrote a book about this aspect of his theory, The Independent Birth of Organisms, which proposed that all eukaryotic genomes were formed separately in a primordial pool. Dutch biologist Gert Korthof criticized the theory by posing various problems that cannot be explained by a theory of independent origins. He pointed out that various eukaryotes need nurturing, calling this the 'boot problem': even the initial eukaryote would have needed parental care. Korthof also notes that a large fraction of eukaryotes are parasites; Senapathy's theory would require a coincidence to explain their existence. Finally, Senapathy's theory cannot explain the strong evidence for common descent (homology, the universal genetic code, embryology, the fossil record).
Background
Genes of all organisms, except bacteria, consist of short protein-coding regions (exons) interrupted by long sequences (introns). When a gene is expressed, its DNA sequence is copied into a “primary RNA” sequence by the enzyme RNA polymerase. Then the “spliceosome” machinery physically removes the introns from the RNA copy of the gene by the process of splicing, leaving only a contiguously connected series of exons, which becomes messenger RNA (mRNA). This mRNA is now read by the ribosome, which produces the encoded protein. Thus, although introns are not physically removed from a gene, a gene's sequence is read as if introns were not present.
Exons are usually short, with an average length of about 120 bases (e.g. in human genes). Intron lengths vary widely, from 10 to 500,000 bases, but exon lengths have an upper bound of about 600 bases in most eukaryotes. Because exons code for protein sequences, they are important for the cell, yet constitute only ~2% of genomic sequence. Introns, in contrast, constitute 98% of the sequence but seem to have few crucial functions, except for enhancer sequences and developmental regulators in rare instances.
Until Philip Sharp and Richard Roberts discovered introns within eukaryotic genes in 1977, it was believed that the coding sequence of all genes was always in one single stretch, bounded by a single long ORF. The discovery of introns was a profound surprise, which instantly brought up the questions of how, why and when the introns came into the eukaryotic genes.
It soon became apparent that a typical eukaryotic gene was interrupted at many locations by introns, dividing the coding sequence into many short exons. Also surprising was that the introns were long, as long as hundreds of thousands of bases. These findings prompted the questions of why many introns occur within a gene (for example, ~312 introns occur in the human gene TTN), why they are long, and why exons are short.
It was also discovered that the spliceosome machinery was large and complex with ~300 proteins and several SnRNA molecules. The questions extended to the origin of the spliceosome. Soon after the discovery of introns, it became apparent that the junctions between exons and introns on either side exhibited specific sequences that directed the spliceosome machinery to the exact base position for splicing. How and why these splice junction signals came into being was another important question.
History
The discovery of introns and the split gene architecture of the eukaryotic genes started a new era of eukaryotic biology. The question of why eukaryotic genes had fragmented genes prompted speculation and discussion almost immediately.
Ford Doolittle published a paper in 1978 in which he stated that most molecular biologists assumed that the eukaryotic genome arose from a ‘simpler’ and more ‘primitive’ prokaryotic genome rather like that of Escherichia coli. However, this type of evolution would require that introns be introduced into the coding sequences of bacterial genes. Regarding this requirement, Doolittle said, “It is extraordinarily difficult to imagine how informationally irrelevant sequences could be introduced into pre-existing structural genes without deleterious effects.” He stated “I would like to argue that the eukaryotic genome, at least in that aspect of its structure manifested as ‘genes in pieces’ is in fact the primitive original form.”
James Darnell expressed similar views in 1978. He stated, “The differences in the biochemistry of messenger RNA formation in eukaryotes compared to prokaryotes are so profound as to suggest that sequential prokaryotic to eukaryotic cell evolution seems unlikely. The recently discovered non-contiguous sequences in eukaryotic DNA that encode messenger RNA may reflect an ancient, rather than a new, distribution of information in DNA and that eukaryotes evolved independently of prokaryotes.”
However, in an apparent attempt to reconcile with the idea that RNA preceded DNA in evolution, and with the concept of the three evolutionary lineages of archaea, bacteria and eukarya, both Doolittle and Darnell deviated from their original speculation in a joint paper in 1985. They suggested that the ancestor of all three groups of organisms, the ‘progenote’, had a genes-in-pieces structure, from which all three lineages evolved. They speculated that the precellular stage had primitive RNA genes which contained introns and were reverse transcribed into DNA, forming the progenote. Bacteria and archaea evolved from the progenote by losing introns, and the ‘urkaryote’ evolved from it by retaining introns. Later, the eukaryote evolved from the urkaryote by developing a nucleus and absorbing mitochondria from bacteria. Multicellular organisms then evolved from the eukaryote.
These authors predicted that the distinctions between the prokaryote and the eukaryote were so profound that the prokaryote to eukaryote evolution was not tenable, and had different origins. However, other than the speculations that the precellular RNA genes must have had introns, they did not address the key questions of intron origin. No explanations described why exons were short and introns were long, how the splice junctions originated, what the structure and sequence of the splice junctions meant, and why eukaryote genomes were large.
Around the same time that Doolittle and Darnell suggested that introns in eukaryotic genes could be ancient, Colin Blake and Walter Gilbert published their views on intron origins independently. In their view, introns originated as spacer sequences that enabled convenient recombination and shuffling of exons that encoded distinct functional domains in order to evolve new genes. Thus, new genes were assembled from exon modules that coded for functional domains, folding regions, or structural elements from preexisting genes in the genome of an ancestral organism, thereby evolving genes with new functions. They did not specify how exons or introns originated. In addition, even after many years, extensive analysis of thousands of proteins and genes showed that only extremely rarely do genes exhibit the supposed exon shuffling phenomenon. Furthermore, molecular biologists questioned the exon shuffling proposal, from a purely evolutionary view for both methodological and conceptual reasons, and, in the long run, this theory did not survive.
Hypothesis
Around the time introns were discovered, Senapathy was asking how genes themselves could have originated. He surmised that for any gene to come into being, genetic sequences (RNA or DNA) must have been present in the prebiotic environment. A basic question he asked was how protein-coding sequences could have originated from primordial DNA sequences at the origin of the first cells.
To answer this, he made two basic assumptions:
before a self-replicating cell could come into existence, DNA molecules were synthesized in the primordial soup by random addition of the 4 nucleotides without the help of templates and
the nucleotide sequences that code for proteins were selected from these preexisting random DNA sequences in the primordial soup, and not by construction from shorter coding sequences.
He also surmised that codons must have been established prior to the origin of the first genes. If primordial DNA did contain random nucleotide sequences, he asked: Was there an upper limit in coding-sequence lengths, and, if so, did this limit play a crucial role in the formation of the structural features of genes at the origin of genes?
His logic was the following. The average length of proteins in living organisms, including the eukaryotic and bacterial organisms, was ~400 amino acids. However, much longer proteins existed, even longer than 10,000-30,000 amino acids in both eukaryotes and bacteria. Thus, the coding sequence of thousands of bases existed in a single stretch in bacterial genes. In contrast, the coding sequence of eukaryotes existed only in short segments of exons of ~120 bases regardless of the length of the protein. If the coding sequence ORF lengths in random DNA sequences were as long as those in bacterial organisms, then long, contiguous coding genes were possible in random DNA. This was not known, as the distribution of ORF lengths in a random DNA sequence had never been studied.
As random DNA sequences could be generated in the computer, Senapathy thought that he could ask these questions and conduct his experiments in silico. Furthermore, when he began studying this question, sufficient DNA and protein sequence information existed in the National Biomedical Research Foundation (NBRF) database in the early 1980s.
Testing the hypothesis
Origin of introns/split genes
Senapathy first analyzed the distribution of ORF lengths in computer-generated random DNA sequences. Surprisingly, this study revealed that about 200 codons (600 bases) was the upper limit of ORF lengths. The shortest ORF (zero bases in length) was the most frequent; with increasing ORF length, frequency decreased logarithmically, approaching zero at about 600 bases. When the probability of ORF lengths in a random sequence was plotted, it likewise showed an exponential decrease tailing off at a maximum of about 600 bases. From this “negative exponential” distribution, most ORFs were found to be far shorter than the maximum. This finding was surprising because the coding sequence for a protein of average length (400 amino acids, requiring ~1,200 bases) and for longer proteins of thousands of amino acids (requiring >10,000 bases) could not occur in a single stretch of a random sequence. If so, a typical gene with a contiguous coding sequence could not originate in a random sequence. Thus, the only possible way for a gene to originate from a random sequence was to split the coding sequence into shorter segments selected from the short ORFs available in the random sequence, rather than to lengthen an ORF by eliminating consecutive stop codons. This process of choosing short coding segments from the available ORFs to build a long coding sequence would lead to a split structure.
If this hypothesis was true, eukaryotic DNA sequences should reflect it. When Senapathy plotted the distribution of ORF lengths in actual eukaryotic DNA sequences, the plot was remarkably similar to that from random DNA sequences: it was also a negative exponential distribution tailing off at a maximum of about 600 bases, coinciding with the maximum ORF length observed in random DNA and with the upper bound of exon lengths in eukaryotic genes.
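As a rough illustration of the in-silico experiment described above (not Senapathy's original code), the following sketch generates uniformly random DNA and measures stop-to-stop ORF lengths. With 3 stop codons out of 64, stop-free codon runs follow a geometric ("negative exponential") distribution, P(length >= n codons) = (61/64)^n, predicting a median near 43 bases and a longest ORF on the order of 600 bases in a megabase of random sequence.

```python
import random
from statistics import median

random.seed(1)
STOPS = {"TAA", "TAG", "TGA"}

def orf_lengths(seq: str, frame: int = 0):
    """Lengths (in bases) of stop-to-stop stretches in one reading frame."""
    length, out = 0, []
    for i in range(frame, len(seq) - 2, 3):
        if seq[i:i + 3] in STOPS:
            out.append(length)
            length = 0
        else:
            length += 3
    out.append(length)  # count the trailing stop-free run as well
    return out

dna = "".join(random.choices("ACGT", k=1_000_000))
lengths = orf_lengths(dna)
print("median ORF length:", median(lengths), "bases")  # typically ~40-50 bases
print("longest ORF:", max(lengths), "bases")           # on the order of 600 bases
```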
The split genes thus originated from random DNA sequences by choosing the best of the short coding segments (exons) and splicing them. The intervening intron sequences were left-over vestiges of the random sequences, and thus were earmarked to be removed by the spliceosome. These findings indicated that split genes could have originated from random DNA sequences with exons and introns as they appear in today's eukaryotic organisms. Nobel Laureate Marshall Nirenberg, who deciphered the codons, stated that these findings strongly showed that the split gene theory for the origin of introns and the split structure of genes must be valid.
Blake, who had proposed the Gilbert–Blake hypothesis for the origin of introns in 1979, stated that Senapathy's split gene theory comprehensively explained the origin of the split gene structure, as well as several other key questions, including the origin of the splicing mechanism.
Origin of splice junctions
Under the split gene theory, an exon is defined by an ORF. It requires a mechanism to recognize an ORF to have originated. As an ORF is defined by a contiguous coding sequence bounded by stop codons, these stop codon ends had to be recognized by the exon-intron gene recognition system. This system could have defined the exons by the presence of a stop codon at the ends of ORFs, which should be included within the ends of the introns and eliminated by the splicing process. Thus, the introns should contain a stop codon at their ends, which would be part of the splice junction sequences.
If this hypothesis was true, the split genes of today's living organisms should contain stop codons exactly at the ends of introns. When Senapathy tested this hypothesis in the splice junctions of eukaryotic genes, he found that the vast majority of splice junctions did contain a stop codon at the end of each intron, outside of the exons. In fact, these stop codons were found to form the “canonical” GT:AG splicing sequence, with the three stop codons occurring as part of the strong consensus signals. Thus, the basic split gene theory for the origin of introns and the split gene structure led to the understanding that the splice junctions originated from the stop codons.
Sequence data for only about 1,000 exon–intron junctions were available when Senapathy examined this question. He took the data for 1,030 splice junction sequences (donors and acceptors) from the GenBank database and counted the codons occurring at each of the seven possible codon positions within the donor signal sequence [CAG:GTGAGT] and at each of the two possible codon positions within the acceptor signal [CAG:G]. He found that stop codons occurred at high frequency only at the 5th base position of the donor signal and the first base position of the acceptor signal. These positions are at the start of the intron (in fact, one base after the start) and at the end of the intron, exactly as Senapathy had predicted. Even when the codons at these positions were not stop codons, 70% of them began with the first two bases of the stop codons, TA and TG [TAT = 75; TAC = 59; TGT = 70].
All three stop codons (TGA, TAA and TAG) were found after one base (G) at the start of introns. These stop codons are shown in the consensus canonical donor splice junction as AG:GT(A/G)GGT, wherein the TAA and TGA are the stop codons, and the additional TAG is also present at this position. Besides the codon CAG, only TAG, which is a stop codon, was found at the ends of introns. The canonical acceptor splice junction is shown as (C/T)AG:GT, in which TAG is the stop codon. These consensus sequences clearly show the presence of the stop codons at the ends of introns bordering the exons in all eukaryotic genes, thus providing a strong corroboration for the split gene theory. Nirenberg again stated that these observations fully supported the split gene theory for the origin of splice junction sequences from stop codons.
Soon after the discovery of introns by Philip Sharp and Richard Roberts, it became known that mutations within splice junctions could lead to diseases. Senapathy showed that mutations in the stop codon bases (canonical bases) caused more diseases than the mutations in non-canonical bases.
Branch point (lariat) sequence
An intermediate stage in the process of eukaryotic RNA splicing is the formation of a lariat structure. It is anchored at an adenosine residue in intron between 10 and 50 nucleotides upstream of the 3' splice site. A short conserved sequence (the branch point sequence) functions as the recognition signal for the site of lariat formation. During the splicing process, this conserved sequence towards the end of the intron forms a lariat structure with the beginning of the intron. The final step of the splicing process occurs when the two exons are joined and the intron is released as a lariat RNA.
Several investigators found branch point sequences in different organisms including yeast, human, fruit fly, rat, and plants. Senapathy found that, in all of these sequences, the codon ending at the branch point adenosine is consistently a stop codon; notably, two of the three stop codons (TAA and TGA) occur at this position almost all of the time.
These findings led Senapathy to propose that the branch point signal originated from stop codons. The finding that two different stop codons (TAA and TGA) occur within the lariat signal with the branching point as the third base of the stop codons corroborates this proposal. As the branching point of the lariat occurs at the last adenine of the stop codon, it is possible that the spliceosome machinery that originated for the elimination of the stop codons from the primary RNA sequence created an auxiliary stop-codon sequence signal as the lariat sequence to aid its splicing function.
The small nuclear U2 RNA found in splicing complexes is thought to aid splicing by interacting with the lariat sequence. Complementary sequences for both the lariat sequence and the acceptor signal are present in a segment of only 15 nucleotides in U2 RNA. Further, the U1 RNA has been proposed to function as a guide in splicing to identify the precise donor splice junction by complementary base-pairing. The conserved regions of the U1 RNA thus include sequences complementary to the stop codons. These observations enabled Senapathy to predict that stop codons had operated in the origin of not only the splice-junction signals and the lariat signal, but also some small nuclear RNAs.
Gene regulatory sequences
Senapathy proposed that the gene-expression regulatory sequences (promoter and poly-A addition site sequences) also could have originated from stop codons. A conserved sequence, AATAAA, exists in almost every gene a short distance downstream from the end of the protein-coding message and serves as a signal for the addition of poly(A) in the mRNA copy of the gene. This poly(A) sequence signal contains a stop codon, TAA. A sequence shortly downstream from this signal, thought to be part of the complete poly(A) signal, also contains the TAG and TGA stop codons.
Eukaryotic RNA-polymerase-II-dependent promoters can contain a TATA box (consensus sequence TATAAA), which contains the stop codon TAA. Bacterial promoter elements at the −10 position exhibit a TATA box with a consensus of TATAAT (containing the stop codon TAA), and at the −35 position a consensus of TTGACA (containing the stop codon TGA). Thus, the evolution of the whole RNA-processing mechanism seems to have been influenced by the overly frequent occurrence of stop codons, making stop codons the focal points for RNA processing.
Stop codons are key parts of every genetic element in the eukaryotic gene
Senapathy discovered that stop codons occur as key parts in every genetic element in eukaryotic genes. The table and figure show that the key parts of the core promoter elements, the lariat signal, the donor and acceptor splice signals, and the poly-A addition signal consist of one or more stop codons. This finding corroborates the split gene theory's claim that the underlying reason for the complete split gene paradigm is the origin of split genes from random DNA sequences, wherein random distribution of an extremely high frequency of stop codons were used by nature to define these genetic elements.
Short exons/long introns
Research based on the split gene theory sheds light on other basic questions of exons and introns. The exons of eukaryotes are generally short (human exons average ~120 bases, and can be as short as 10 bases), while introns are usually long (averaging ~3,000 bases, and sometimes several hundred thousand bases long), for example in the genes RBFOX1, CNTNAP2, PTPRD and DLG2. Senapathy provided a plausible answer to these questions, the only explanation to date. If eukaryotic genes originated from random DNA sequences, their exons should match the lengths of ORFs from random sequences, possibly around 100 bases (close to the median ORF length in a random sequence). The genome sequences of living organisms exhibit exactly this: average exon lengths of 120 bases, and longest exons of about 600 bases (with few exceptions), the same length as the longest random ORFs.
If split genes originated in random DNA sequences, then introns would be long for several reasons. The stop codons occur in clusters leading to numerous consecutive short ORFs: longer ORFs that could be defined as exons would be rarer. Furthermore, the best of the coding sequence parameters for functional proteins would be chosen from the long ORFs in random sequence, which may occur rarely. In addition, the combination of donor and acceptor splice junction sequences within short lengths of coding sequence segments that would define exon boundaries would occur rarely in a random sequence. These combined reasons would make introns long compared to exons.
Eukaryotic genomes
This work also explains why genomes such as the human genome have billions of bases, and why only a small fraction (~2%) codes for proteins and other regulatory elements. If split genes originated from random primordial DNA sequences, they would contain a significant amount of DNA that represented by introns. Furthermore, a genome assembled from random DNA containing split genes would also include intergenic random DNA. Thus, genomes that originated from random DNA sequences had to be large, regardless of the complexity of the organism.
The observation that several organisms such as the onion (~16 billion bases) and salamander (~32 billion bases) have much larger genomes than humans (~3 billion bases) while the organisms are no more complex than humans comports with the theory. Furthermore, the fact that several organisms with smaller genomes have a similar number of genes as human, such as C. elegans (genome size ~100 million bases, ~19,000 genes) and Arabidopsis thaliana (genome size ~125 million bases, ~25,000 genes), supports the theory. The theory predicts that the introns in the split genes in these genomes could be the “reduced” (or deleted) form compared to larger genes with long introns, thus leading to reduced genomes. In fact, researchers have recently proposed that these smaller genomes are actually reduced genomes.
Spliceosomal machinery and eukaryotic nucleus
Senapathy addressed the origin of the spliceosomal machinery that edits out the introns from RNA transcripts. If the split genes had originated from random DNA, then the introns would have become an unnecessary but integral part of eukaryotic genes along with the splice junctions. The spliceosomal machinery would be required to remove them and to enable the short exons to be linearly spliced together as a contiguously coding mRNA that can be translated into a complete protein. Thus, the split gene theory argues that spliceosomal machinery exists to remove the unnecessary introns.
Blake states, “Work by Senapathy, when applied to RNA, comprehensively explains the origin of the segregated form of RNA into coding and noncoding regions. It also suggests why a splicing mechanism was developed at the start of primordial evolution.”
Eukaryotes
Senapathy proposed a plausible mechanistic and functional rationale why the eukaryotic nucleus originated, a major question in biology. If the transcripts of the split genes and the spliced mRNAs were present in a cell without a nucleus, the ribosomes would try to bind to both the un-spliced primary RNA transcript and the spliced mRNA, which would result in chaos. A boundary that separates the RNA splicing process from the mRNA translation avoids this problem. The nuclear boundary provides a clear separation of the primary RNA splicing and the mRNA translation.
These investigations thus led to the possibility that primordial DNA with essentially random sequence gave rise to the complex structure of the split genes with exons, introns and splice junctions. Cells that harbored split genes had to be complex with a nuclear cytoplasmic boundary, and must have a spliceosomal machinery. Thus, it was possible that the earliest cell was complex and eukaryotic. Surprisingly, findings from extensive comparative genomics research from several organisms since 2007 overwhelmingly show that the earliest organisms could have been highly complex and eukaryotic, and could have contained complex proteins, as predicted by Senapathy's theory.
The spliceosome is a highly complex mechanism, containing ~200 proteins and several SnRNPs. Collins and Penny stated, “We begin with the hypothesis that ... the spliceosome has increased in complexity throughout eukaryotic evolution. However, examination of the distribution of spliceosomal components indicates that not only was a spliceosome present in the eukaryotic ancestor but it also contained most of the key components found in today's eukaryotes. ... the last common ancestor of extant eukaryotes appears to show much of the molecular complexity seen today.” This suggests that the earliest eukaryotic organisms were complex and contained sophisticated genes and proteins.
Bacterial genes
Under the split gene theory, genes with uninterrupted coding sequences thousands of bases long - up to 90,000 bases - as occur in many bacterial organisms, were practically impossible to arise directly from random sequence. Rather, bacterial genes could have originated from split genes by losing introns, the only proposed route to long contiguous coding sequences. This route is also more plausible than lengthening short random ORFs into long ORFs by specifically removing successive stop codons through mutation.
According to the split gene theory, this process of intron loss could have happened from prebiotic random DNA. These contiguously coding genes could be tightly organized in the bacterial genomes without any introns and be more streamlined. According to Senapathy, the nuclear boundary that was required for a cell containing split genes would not be required for a cell containing only uninterrupted genes. Thus, the bacterial cells did not develop a nucleus. Based on split gene theory, the eukaryotic genomes and bacterial genomes could have independently originated from the split genes in primordial random DNA sequences.
Shapiro-Senapathy algorithm
Senapathy developed algorithms to detect donor and acceptor splice sites, exons and complete split genes in a genomic sequence. He developed a position weight matrix (PWM) method, based on the frequency of the four bases at each position of the donor and acceptor consensus sequences in different organisms, to identify splice sites in a given sequence (a toy scoring sketch appears below). Furthermore, he formulated the first algorithm to find exons, based on the requirements that an exon contain a donor sequence (at its 5’ end) and an acceptor sequence (at its 3’ end) and lie within an ORF, and another algorithm to find a complete split gene. These algorithms are collectively known as the Shapiro-Senapathy algorithm (S&S).
This algorithm aids in the identification of splicing mutations that cause disease and adverse drug reactions. Scientists used the algorithm to identify mutations and genes that cause cancers, inherited disorders, immune deficiency diseases and neurological disorders. It is increasingly used in clinical practice and research to find mutations in known disease-causing genes in patients and to discover novel genes that are causal of different diseases. Furthermore, it is used in defining the cryptic splice sites and deducing the mechanisms by which mutations can affect normal splicing and lead to different diseases. It is also employed in basic research.
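The following toy sketch illustrates PWM scoring of candidate donor sites in the spirit of, but not identical to, the published S&S method: the base frequencies, the log-odds scoring and the threshold below are invented for illustration, and the published Shapiro-Senapathy matrices and score normalization differ.

```python
import math

# Toy position weight matrix for a 9-base donor-site window
# (3 exon bases + 6 intron bases; the article's consensus is CAG:GTGAGT).
# Positions 4-5 are the near-invariant GT at the intron start.
DONOR_PWM = [
    {"A": .33, "C": .37, "G": .18, "T": .12},
    {"A": .60, "C": .13, "G": .12, "T": .15},
    {"A": .08, "C": .04, "G": .81, "T": .07},
    {"A": .00, "C": .00, "G": 1.0, "T": .00},
    {"A": .00, "C": .00, "G": .00, "T": 1.0},
    {"A": .42, "C": .03, "G": .52, "T": .03},
    {"A": .71, "C": .08, "G": .12, "T": .09},
    {"A": .07, "C": .06, "G": .81, "T": .06},
    {"A": .16, "C": .15, "G": .22, "T": .47},
]

def pwm_score(window: str) -> float:
    """Log-odds score of a 9-base window against a 0.25 background;
    the small pseudocount avoids log(0) at invariant positions."""
    return sum(math.log2((col.get(b, 0.0) + 1e-6) / 0.25)
               for b, col in zip(window, DONOR_PWM))

def scan(seq: str, threshold: float = 8.0):
    """Report candidate donor sites above the (arbitrary) threshold."""
    return [(i, seq[i:i + 9]) for i in range(len(seq) - 8)
            if pwm_score(seq[i:i + 9]) >= threshold]

print(scan("TTTCAGGTGAGTATTT"))  # the embedded CAG:GTGAGT scores high (~12.7)
```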
Findings based on S&S have impacted major questions in eukaryotic biology and in human medicine.
Corroborating evidence
The split gene theory implies that structural features of split genes predicted from computer-simulated random sequences occur in eukaryotic split genes. This is borne out in most known split genes. The sequences exhibit a nearly perfect negative exponential distribution of ORF lengths. With rare exceptions, eukaryotic gene exons fall within the predicted 600 base maximum.
The theory correctly predicts that exons are delimited by stop codons, especially at the 3’ ends of exons. In fact, in most known genes they are delimited more strongly at the 3’ ends of exons and less strongly at the 5’ ends, as predicted. These stop codons are the most important functional parts of both splice junctions. The theory thus explains the “conserved” splice junctions at the ends of exons and the loss of these stop codons along with introns when they are spliced out. The theory correctly predicts that splice junctions are randomly distributed in eukaryotic DNA sequences. It also correctly predicts that the splice junctions present in transfer RNA genes and ribosomal RNA genes do not contain stop codons. The lariat signal, another sequence involved in the splicing process, also contains stop codons.
The theory correctly predicts that introns are non-coding and that they are mostly non-functional. Except for some intron sequences including the donor and acceptor splice signal sequences and branch point sequences, and possibly the intron splice enhancers that occur at the ends of introns, which aid in the removal of introns, the vast majority of introns are devoid of any functions. The theory does not exclude rare sequences within introns that could be used by the genome and the cell, especially because introns are so long.
Thus, the theory's predictions are precisely corroborated by the major elements in modern eukaryotic genomes.
Comparative analysis of the modern genome data from several living organisms found that the characteristics of split genes trace back to the earliest organisms. These organisms could have contained the split genes and complex proteins that occur in today's living organisms.
Studies employing maximum likelihood analysis found that the earliest eukaryotic organisms contained the same genes as modern organisms, with an even higher intron density. Comparative genomics of many organisms, including basal eukaryotes (considered primitive eukaryotic organisms, such as Amoeboflagellata, Diplomonadida, and Parabasalia), showed that intron-rich split genes and a spliceosome like that of modern organisms were present in their earliest forebears, and that the earliest organisms possessed all the eukaryotic cellular components.
Selected publications
References
Gene expression
Genetics experiments
Genomics | Split gene theory | [
"Chemistry",
"Biology"
] | 6,878 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
59,580,008 | https://en.wikipedia.org/wiki/%C3%97%20Pachyveria%20%27Powder%20Puff%27 | 'Powder Puff' is a hybrid succulent plant from the Pachyphytum cross Echeveria genus, × Pachyveria. 'Powder Puff' is derived from Echeveria cante and Pachyphytum oviferum. It was created in the 1970s.
References
Hybrid plants
Succulent plants
Crassulaceae
Intergeneric hybrids
Ornamental plant cultivars | × Pachyveria 'Powder Puff' | [
"Biology"
] | 82 | [
"Intergeneric hybrids",
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
62,227,380 | https://en.wikipedia.org/wiki/Nanoconcrete | Nanoconcrete (also spelled nano concrete or nano-concrete) is a form of concrete that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is also a product of high-energy mixing (HEM) of conventional cement, sand and water which is a bottom-up approach of nano technology.
Role of nanoparticles
The incorporation of ultra-fine particles into a Portland-cement paste within a concrete mixture, in accordance with the top-down approach of nanotechnology, alters the concrete's material properties and performance by reducing the void space between the cement and aggregate in the cured concrete. This improves strength, durability, shrinkage behaviour and bonding to steel reinforcing bars.
Manufacture
To ensure that mixing is thorough enough to create nanoconcrete, the mixer must apply a total mixing power of 30–600 watts per kilogram of the mix. Mixing must continue long enough to yield a net specific energy of at least 5,000 joules per kilogram of the mix, and this may be increased to 30–80 kJ per kilogram (see the sketch below). A superplasticizer is then added to the activated mixture, which can later be mixed with aggregates in a conventional concrete mixer. In the HEM process, the intense mixing of cement and water, with or without sand, under conditions of quasi-laminar flow (Reynolds number 20–800) provides dissipation and absorption of energy by the mixture and increases shear stresses on the surface of cement particles. As a result, the temperature of the mixture rises by 20–25 °C or more. This intense mixing deepens the hydration process inside the cement particles: formation of nano-sized colloidal calcium silicate hydrate (C-S-H) increases several-fold compared with conventional mixing. Thus, ordinary concrete transforms into nanoconcrete.
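As an illustration of the energy bookkeeping above, mixing time follows directly from the specific power applied and the specific-energy target (t = E / P). The 200 W/kg operating point below is an assumed value inside the quoted 30–600 W/kg range, not a prescribed setting.

```python
# Sketch: how long to mix to reach a given net specific energy,
# given a specific mixing power. t = E / P.

def mixing_time_s(target_energy_j_per_kg: float, power_w_per_kg: float) -> float:
    """Seconds of mixing needed to deposit the target specific energy."""
    return target_energy_j_per_kg / power_w_per_kg

for energy in (5_000.0, 30_000.0, 80_000.0):  # J/kg: minimum, and the upper range
    t = mixing_time_s(energy, power_w_per_kg=200.0)
    print(f"{energy / 1000:>4.0f} kJ/kg at 200 W/kg -> {t / 60:.1f} min")
# ->  5 kJ/kg: 0.4 min, 30 kJ/kg: 2.5 min, 80 kJ/kg: 6.7 min
```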
The initial natural process of cement hydration, with formation of colloidal globules about 5 nm in diameter, spreads through the entire volume of the cement–water matrix as energy is expended upon the mix.
The liquid activated mixture can be used by itself for casting small architectural details and decorative items, or expanded with a gas-forming admixture to make aerated HEM nanoconcrete as a lightweight concrete. HEM nanoconcrete hardens in low and subzero temperature conditions because the liquid phase inside the nano-pores of the C-S-H gel does not freeze at temperatures from −8 to −42 °C. The increased volume of gel reduces capillarity in solid and porous materials.
References
Concrete | Nanoconcrete | [
"Engineering"
] | 565 | [
"Structural engineering",
"Concrete"
] |
62,240,093 | https://en.wikipedia.org/wiki/%28Pentamethylcyclopentadienyl%29titanium%20trichloride | (Pentamethylcyclopentadienyl)titanium trichloride is an organotitanium compound with the formula Cp*TiCl3 (Cp* = C5(CH3)5). It is an orange solid. The compound adopts a piano stool geometry. An early synthesis involve the combination of lithium pentamethylcyclopentadienide and titanium tetrachloride.
The compound is an intermediate in the synthesis of decamethyltitanocene dichloride. In the presence of organoaluminium compounds and other additives, it catalyzes the polymerization of alkenes.
See also
(Cyclopentadienyl)titanium trichloride
References
Chloro complexes
Titanium compounds
Half sandwich compounds | (Pentamethylcyclopentadienyl)titanium trichloride | [
"Chemistry"
] | 159 | [
"Organometallic chemistry",
"Half sandwich compounds"
] |
53,358,766 | https://en.wikipedia.org/wiki/Intake%20tower | An intake tower or outlet tower is a vertical tubular structure with one or more openings used for capturing water from reservoirs and conveying it further to a hydroelectric or water-treatment plant.
Unlike spillways, intake towers are intended for the reservoir's regular operation, conveying clean, debris-free water for further use.
Construction
An intake tower is typically made from reinforced concrete, with foundations laid in the river or lake bed. It has at least one water-collecting opening at the top, and may have additional openings along its height, depending on the purpose: towers for hydroelectric plants typically have only one inlet, while those in water-processing plants have multiple draw-off inlets. Near the bottom of the tower, depending on the dam construction and plant location, a horizontal or slanted outlet conduit takes the water from the tower into the plant.
The most convenient location for an intake tower is in the proximity of the processing plant. In artificial lakes, those are typically placed near the dam. Lake bed near the dam also provides sufficient water depth to ensure substantial supply to the towers throughout the year, thus the exposed towers can be regularly seen along the dams.
When built near the shore, an intake tower is equipped with a service bridge, used to gain access for maintenance.
Draw-off tower
Draw-off towers are intake towers specialized for drinking water reservoirs. They have multiple openings at various depths, typically equipped with valves, allowing drawing water only from the level where it is of highest quality.
See also
Culvert
Fish screen
Gatehouse (waterworks)
References
Hydraulic engineering
Hydraulic structures
Dams | Intake tower | [
"Physics",
"Engineering",
"Environmental_science"
] | 322 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Civil engineering stubs",
"Hydraulic engineering"
] |
53,359,527 | https://en.wikipedia.org/wiki/Ceronapril | Ceronapril (INN, proposed trade names Ceranapril, Novopril) is a phosphonate ACE inhibitor that was never marketed.
References
ACE inhibitors
Carboxamides
Enantiopure drugs
Phosphonates
Prodrugs
Pyrrolidines | Ceronapril | [
"Chemistry"
] | 60 | [
"Stereochemistry",
"Enantiopure drugs",
"Prodrugs",
"Stereochemistry stubs",
"Chemicals in medicine"
] |
53,363,521 | https://en.wikipedia.org/wiki/Third-generation%20sequencing | Third-generation sequencing (also known as long-read sequencing) is a class of DNA sequencing methods which produce longer sequence reads, under active development since 2008.
Third generation sequencing technologies have the capability to produce substantially longer reads than second generation sequencing, also known as next-generation sequencing. Such an advantage has critical implications for both genome science and the study of biology in general. However, third generation sequencing data have much higher error rates than previous technologies, which can complicate downstream genome assembly and analysis of the resulting data. These technologies are undergoing active development and it is expected that there will be improvements to the high error rates. For applications that are more tolerant to error rates, such as structural variant calling, third generation sequencing has been found to outperform existing methods, even at a low depth of sequencing coverage.
Current technologies
Sequencing technologies with a different approach than second-generation platforms were first described as "third-generation" in 2008–2009.
There are several companies currently at the heart of third generation sequencing technology development, namely, Pacific Biosciences, Oxford Nanopore Technology, Quantapore (CA-USA), and Stratos (WA-USA). These companies are taking fundamentally different approaches to sequencing single DNA molecules.
PacBio developed the single molecule real time (SMRT) sequencing platform, based on the properties of zero-mode waveguides. Signals take the form of fluorescent light emitted from each nucleotide incorporated by a DNA polymerase bound to the bottom of a zeptoliter-scale (zL) well.
Oxford Nanopore’s technology involves passing a DNA molecule through a nanoscale pore and measuring changes in the electrical field surrounding the pore, while Quantapore has a different, proprietary nanopore approach. Stratos Genomics spaces out the DNA bases with polymeric inserts, "Xpandomers", to circumvent the signal-to-noise challenge of nanopore ssDNA reading.
Also notable is Helicos's single molecule fluorescence approach, but the company entered bankruptcy in the fall of 2015.
Advantages
Longer reads
In comparison to the current generation of sequencing technologies, third generation sequencing has the obvious advantage of producing much longer reads. It is expected that these longer read lengths will alleviate numerous computational challenges surrounding genome assembly, transcript reconstruction, and metagenomics among other important areas of modern biology and medicine.
It is well known that eukaryotic genomes, including those of primates and humans, are complex and contain large numbers of long repeated regions. Short reads from second generation sequencing must resort to approximative strategies to infer sequences over long ranges for assembly and genetic variant calling. Paired-end reads have been leveraged by second generation sequencing to combat these limitations; however, the exact fragment lengths of paired ends are often unknown and must also be approximated. By making long read lengths possible, third generation sequencing technologies have clear advantages.
Epigenetics
Epigenetic markers are stable and potentially heritable modifications to the DNA molecule that are not in its sequence. An example is DNA methylation at CpG sites, which has been found to influence gene expression. Histone modifications are another example. The current generation of sequencing technologies rely on laboratory techniques such as ChIP-sequencing for the detection of epigenetic markers. These techniques involve tagging the DNA strand, breaking and filtering fragments that contain markers, followed by sequencing. Third generation sequencing may enable direct detection of these markers due to their distinctive signal from the other four nucleotide bases.
Portability and speed
Other important advantages of third generation sequencing technologies include portability and sequencing speed. Since minimal sample preprocessing is required in comparison to second generation sequencing, smaller equipments could be designed. Oxford Nanopore Technology has recently commercialized the MinION sequencer. This sequencing machine is roughly the size of a regular USB flash drive and can be used readily by connecting to a laptop. In addition, since the sequencing process is not parallelized across regions of the genome, data could be collected and analyzed in real time. These advantages of third generation sequencing may be well-suited in hospital settings where quick and on-site data collection and analysis is demanded.
Challenges
Third generation sequencing, as of 2008, faced important challenges mainly surrounding accurate identification of nucleotide bases; error rates were still much higher compared to second generation sequencing. This is generally due to instability of the molecular machinery involved. For example, in PacBio’s single molecular and real time sequencing technology, the DNA polymerase molecule becomes increasingly damaged as the sequencing process occurs. Additionally, since the process happens quickly, the signals given off by individual bases may be blurred by signals from neighbouring bases. This poses a new computational challenge for deciphering the signals and consequently inferring the sequence. Methods such as Hidden Markov Models, for example, have been leveraged for this purpose with some success.
On average, different individuals of the human population share about 99.9% of their DNA sequence. In other words, approximately only one out of every thousand bases differs between any two people. The high error rates involved with third generation sequencing are inevitably problematic for the purpose of characterizing the individual differences that exist between members of the same species.
Genome assembly
Genome assembly is the reconstruction of whole genome DNA sequences. This is generally done with two fundamentally different approaches.
Reference alignment
When a reference genome is available, as in the case of humans, newly sequenced reads can simply be aligned to the reference genome in order to characterize their properties. Such reference-based assembly is quick and easy but has the disadvantage of "hiding" novel sequences and large copy-number variants. In addition, reference genomes do not yet exist for most organisms.
De novo assembly
De novo assembly is the alternative genome assembly approach to reference alignment. It refers to the reconstruction of whole genome sequences entirely from raw sequence reads. This method would be chosen when there is no reference genome, when the species of the given organism is unknown as in metagenomics, or when there exist genetic variants of interest that may not be detected by reference genome alignment.
Given the short reads produced by the current generation of sequencing technologies, de novo assembly is a major computational problem. It is normally approached by an iterative process of finding and connecting sequence reads with sensible overlaps. Various computational and statistical techniques, such as de Bruijn graphs and overlap-layout-consensus graphs, have been leveraged to solve this problem. Nonetheless, due to the highly repetitive nature of eukaryotic genomes, accurate and complete reconstruction of genome sequences in de novo assembly remains challenging. Paired-end reads have been posed as a possible solution, though exact fragment lengths are often unknown and must be approximated.
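As a concrete illustration of the de Bruijn graph approach, the sketch below builds a k-mer graph from a few invented reads and walks it greedily to rebuild a contig; real assemblers additionally handle sequencing errors, coverage weighting, and branching paths.

```python
from collections import defaultdict

def debruijn(reads, k):
    """Map each (k-1)-mer prefix to the (k-1)-mer suffixes that follow it."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def walk(graph, start):
    """Greedily follow edges to rebuild a contig (toy: assumes no branching)."""
    contig, node = start, start
    while graph[node]:
        node = graph[node].pop()
        contig += node[-1]
    return contig

# Invented reads tiling the sequence ATGGCGTGCA.
reads = ["ATGGCG", "GGCGTG", "CGTGCA"]
print(walk(debruijn(reads, k=4), "ATG"))  # prints ATGGCGTGCA
```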
Hybrid assembly
Long read lengths offered by third generation sequencing may alleviate many of the challenges currently faced by de novo genome assemblies. For example, if an entire repetitive region can be sequenced unambiguously in a single read, no computational inference is required. Computational methods have been proposed to alleviate the issue of high error rates. For example, one study demonstrated that de novo assembly of a microbial genome using PacBio sequencing alone outperformed assembly from second generation sequencing.
Third generation sequencing may also be used in conjunction with second generation sequencing. This approach is often referred to as hybrid sequencing. For example, long reads from third generation sequencing may be used to resolve ambiguities that exist in genomes previously assembled using second generation sequencing. On the other hand, short second generation reads have been used to correct errors that exist in the long third generation reads. In general, this hybrid approach has been shown to improve de novo genome assemblies significantly.
Epigenetic markers
DNA methylation (DNAm) – the covalent modification of DNA at CpG sites resulting in attached methyl groups – is the best understood component of epigenetic machinery. DNA modifications and the resulting gene expression can vary across cell types and developmental stages, differ with genetic ancestry, change in response to environmental stimuli, and are heritable. Since the discovery of DNAm, researchers have also found it to correlate with diseases like cancer and autism, making DNAm an important avenue of further research into disease etiology.
Advantages
The most common current methods for examining methylation state require an assay that fragments DNA before standard second generation sequencing on the Illumina platform. As a result of the short read length, information regarding longer patterns of methylation is lost. Third generation sequencing technologies offer the capability for single molecule real-time sequencing of longer reads, and detection of DNA modification without the aforementioned assay.
Oxford Nanopore Technologies’ MinION has been used to detect DNAm. As each DNA strand passes through a pore, it produces electrical signals which have been found to be sensitive to epigenetic changes in the nucleotides, and a hidden Markov model (HMM) was used to analyze MinION data to detect 5-methylcytosine (5mC) DNA modification. The model was trained using synthetically methylated E. coli DNA and the resulting signals measured by the nanopore technology. Then the trained model was used to detect 5mC in MinION genomic reads from a human cell line which already had a reference methylome. The classifier has 82% accuracy in randomly sampled singleton sites, which increases to 95% when more stringent thresholds are applied.
Other methods address different types of DNA modifications using the MinION platform. Stoiber et al. examined 4-methylcytosine (4mC) and 6-methyladenine (6mA), along with 5mC, and also created software to directly visualize the raw MinION data in a human-friendly way. Here they found that in E. coli, which has a known methylome, event windows 5 base pairs long can be used to divide and statistically analyze the raw MinION electrical signals. A straightforward Mann-Whitney U test can detect modified portions of the E. coli sequence, as well as further split the modifications into 4mC, 6mA, or 5mC regions.
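The statistical idea translates directly into code: compare the raw signal values in a candidate 5-base-pair event window against the values observed at the same positions in unmodified control DNA. The sketch below applies SciPy's Mann-Whitney U test to invented signal values; the published pipeline's window handling and thresholds are more involved.

```python
from scipy.stats import mannwhitneyu

# Invented raw current samples for one 5-bp event window.
control_window = [88.1, 87.5, 89.0, 88.3, 87.9, 88.6, 88.0, 87.7]  # unmodified DNA
sample_window = [91.4, 92.0, 90.8, 91.9, 92.3, 91.1, 91.7, 92.5]   # candidate modified site

# Two-sided test: is the signal distribution shifted at this window?
stat, p_value = mannwhitneyu(control_window, sample_window, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4g}")
if p_value < 0.01:  # illustrative significance threshold
    print("window flagged as containing a modified base")
```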
It seems likely that in the future, MinION raw data will be used to detect many different epigenetic marks in DNA.
PacBio sequencing has also been used to detect DNA methylation. In this platform, the pulse width – the width of a fluorescent light pulse – corresponds to a specific base. In 2010 it was shown that the interpulse distance in control and methylated samples differs, and that there is a "signature" pulse width for each methylation type. In 2012, the binding sites of DNA methyltransferases were characterized using the PacBio platform. The detection of N6-methylation in C. elegans was shown in 2015. DNA methylation on N6-adenine in mouse embryonic stem cells was shown using the PacBio platform in 2016.
Other forms of DNA modifications – from heavy metals, oxidation, or UV damage – are also possible avenues of research using Oxford Nanopore and PacBio third generation sequencing.
Drawbacks
Processing of the raw data – such as normalization to the median signal – was needed on MinION raw data, reducing the real-time capability of the technology. Consistency of the electrical signals is still an issue, making it difficult to call a nucleotide accurately. MinION has low throughput; since multiple overlapping reads are hard to obtain, this further reduces the accuracy of downstream DNA modification detection. Both the hidden Markov model and the statistical methods used with MinION raw data require repeated observations of DNA modifications for detection, meaning that individual modified nucleotides need to be consistently present in multiple copies of the genome, e.g. in multiple cells or plasmids in the sample.
For the PacBio platform, too, coverage requirements vary with the type of methylation to be detected. As of March 2017, other epigenetic factors like histone modifications could not be detected using third-generation technologies. Longer patterns of methylation are often lost because smaller contigs still need to be assembled.
Transcriptomics
Transcriptomics is the study of the transcriptome, usually by characterizing the relative abundances of messenger RNA molecules in the tissue under study. According to the central dogma of molecular biology, genetic information flows from double stranded DNA molecules to single stranded mRNA molecules where they can be readily translated into functional protein molecules. By studying the transcriptome, one can gain valuable insight into the regulation of gene expression.
While expression levels can be depicted more or less accurately by second generation sequencing (assuming that the actual abundances of the population of transcripts are randomly sampled), transcript-level information still remains an important challenge. As a consequence, the role of alternative splicing in molecular biology remains largely elusive. Third generation sequencing technologies hold promising prospects for resolving this issue by enabling sequencing of mRNA molecules at their full lengths.
Alternative splicing
Alternative splicing (AS) is the process by which a single gene may give rise to multiple distinct mRNA transcripts and consequently different protein translations. Some evidence suggests that AS is a ubiquitous phenomenon and may play a key role in determining the phenotypes of organisms, especially in complex eukaryotes; all eukaryotes contain genes consisting of introns that may undergo AS. In particular, it has been estimated that AS occurs in 95% of all human multi-exon genes. AS has undeniable potential to influence myriad biological processes. Advancing knowledge in this area has critical implications for the study of biology in general.
Transcript reconstruction
The current generation of sequencing technologies produces only short reads, placing tremendous limitations on the ability to detect distinct transcripts; short reads must be reverse engineered into the original transcripts that could have given rise to the observed reads. This task is further complicated by the highly variable expression levels across transcripts, and consequently variable read coverage across the sequence of the gene. In addition, exons may be shared among individual transcripts, rendering unambiguous inferences essentially impossible. Existing computational methods make inferences based on the accumulation of short reads at various sequence locations, often by making simplifying assumptions. Cufflinks takes a parsimonious approach, seeking to explain all the reads with the fewest possible number of transcripts. On the other hand, StringTie attempts to estimate transcript abundances while simultaneously assembling the reads. These methods, while reasonable, may not always identify real transcripts.
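The abundance-estimation step is commonly formulated as an expectation-maximization (EM) problem. Below is a toy sketch of the core EM update, with an invented compatibility matrix and read counts; it illustrates the principle only and is not the actual Cufflinks or StringTie algorithm.

```python
# Three hypothetical transcripts and four read classes; compat[i][j] = 1 when
# read class i is consistent with transcript j (e.g. it maps to a shared exon).
compat = [
    [1, 1, 0],  # read class 0: compatible with transcripts 0 and 1
    [1, 0, 0],  # read class 1: unique to transcript 0
    [0, 1, 1],  # read class 2: compatible with transcripts 1 and 2
    [0, 0, 1],  # read class 3: unique to transcript 2
]
counts = [100, 50, 80, 20]  # invented read counts per class

theta = [1 / 3] * 3  # start from uniform transcript abundances
for _ in range(200):
    # E-step: split each ambiguous read class among compatible transcripts
    # in proportion to the current abundance estimates.
    expected = [0.0] * 3
    for row, n in zip(compat, counts):
        z = sum(c * t for c, t in zip(row, theta))
        for j in range(3):
            if row[j]:
                expected[j] += n * theta[j] / z
    # M-step: re-estimate abundances from the expected read assignments.
    total = sum(expected)
    theta = [e / total for e in expected]

print([round(t, 3) for t in theta])  # estimated relative abundances
```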
A study published in 2008 surveyed 25 different existing transcript reconstruction protocols. Its evidence suggested that existing methods are generally weak in assembling transcripts, though the ability to detect individual exons is relatively intact. According to the estimates, average sensitivity to detect exons across the 25 protocols is 80% for Caenorhabditis elegans genes. In comparison, transcript identification sensitivity decreases to 65%. For human, the study reported an exon detection sensitivity averaging 69%, while transcript detection sensitivity averaged a mere 33%. In other words, for human, existing methods are able to identify less than half of all existing transcripts.
Third generation sequencing technologies have demonstrated promising prospects in solving the problem of transcript detection as well as mRNA abundance estimation at the level of transcripts. While error rates remain high, third generation sequencing technologies have the capability to produce much longer read lengths. Pacific Biosciences has introduced the Iso-Seq platform, proposing to sequence mRNA molecules at their full lengths. It is anticipated that Oxford Nanopore will put forth similar technologies. The trouble with higher error rates may be alleviated by supplementary high-quality short reads. This approach has been previously tested and reported to reduce the error rate by more than threefold.
Metagenomics
Metagenomics is the analysis of genetic material recovered directly from environmental samples.
Advantages
The main advantage of third-generation sequencing technologies in metagenomics is their speed of sequencing in comparison to second generation techniques. Speed of sequencing is important, for example, in the clinical setting (e.g., pathogen identification), to allow for efficient diagnosis and timely clinical action.
Oxford Nanopore's MinION was used in 2015 for real-time metagenomic detection of pathogens in complex, high-background clinical samples. The first Ebola virus (EBOV) read was sequenced 44 seconds after data acquisition. There was uniform mapping of reads to genome; at least one read mapped to >88% of the genome. The relatively long reads allowed for sequencing of a near-complete viral genome to high accuracy (97–99% identity) directly from a primary clinical sample.
A common phylogenetic marker for microbial community diversity studies is the 16S ribosomal RNA gene. Both MinION and PacBio's SMRT platform have been used to sequence this gene. In this context the PacBio error rate was comparable to that of shorter reads from 454 and Illumina's MiSeq sequencing platforms.
Drawbacks
MinION's high error rate (~10–40%) prevented identification of antimicrobial resistance markers, for which single nucleotide resolution is necessary. For the same reason, eukaryotic pathogens were not identified. Ease of carryover contamination when re-using the same flow cell (standard wash protocols do not work) is also a concern. Unique barcodes may allow for more multiplexing. Furthermore, performing accurate species identification for bacteria, fungi, and parasites is very difficult, as related organisms share a large portion of their genomes, and some differ by less than 5%.
The per-base sequencing cost is still significantly higher than that of MiSeq. However, supplementing reference databases with full-length sequences from organisms below the limit of detection of the Sanger approach could greatly help the identification of organisms in metagenomics.
See also
First-generation sequencing
Second-generation sequencing
References
External links
Molecular biology
Molecular biology techniques
Biotechnology
DNA sequencing methods | Third-generation sequencing | [
"Chemistry",
"Biology"
] | 3,615 | [
"Genetics techniques",
"Biotechnology",
"DNA sequencing methods",
"Molecular biology techniques",
"DNA sequencing",
"nan",
"Molecular biology",
"Biochemistry"
] |
53,367,602 | https://en.wikipedia.org/wiki/SMiLE-Seq | Selective microfluidics-based ligand enrichment followed by sequencing (SMiLE-seq) is a technique developed for the rapid identification of DNA binding specificities and affinities of full length monomeric and dimeric transcription factors in a fast and semi-high-throughput fashion.
SMiLE-seq works by loading in vitro transcribed and translated “bait” transcription factors into a microfluidic device in combination with DNA molecules. Bound transcription factor-DNA complexes are then isolated from the device, which is followed by sequencing and then sequence data analysis to characterize binding motifs. Specialized software is used to determine the DNA binding properties of monomeric or dimeric transcription factors to help predict their in vivo DNA binding activity.
SMiLE-seq combines three important functions differing from existing techniques: (1) The use of capillary pumps to optimize the loading of samples, (2) Trapping molecular interactions on the surface of the microfluidic device through immunocapture of target transcription factors, (3) Enabling the selection of DNA that is specifically bound to transcription factors from a pool of random DNA sequences.
Background
Elucidating the regulatory mechanisms used to govern essential cellular processes is an important branch of research. Cellular regulatory networks can be very complex and often involve the coordination of multiple processes that begin with the modulation of gene expression. The binding of transcription factor molecules to DNA, either alone or in combination with other transcription factors, is used to control gene expression in response to both intra- and extracellular stimuli.
Characterizing the binding mechanisms and specificities of transcription factors to specific regions of DNA – and identifying these transcription factors – is a fundamental component of the process of resolving cellular regulatory dynamics. Before the introduction of SMiLE-seq technology, ChIP-seq (chromatin immunoprecipitation sequencing) and HT-SELEX (high throughput systematic evolution of ligands by exponential enrichment) technologies were used to successfully characterize nearly 500 transcription factor-DNA binding interactions.
ChIP-seq uses immunoprecipitation to isolate specific transcription factors bound to DNA fragments. Immunoprecipitation is followed by DNA sequencing, which identifies the genomic regions to which transcription factors bind.
HT-SELEX, a similar method, uses random, synthetically generated DNA molecules as bait for transcription factors in vitro. Sequence preferences and binding affinities are characterized based on successful binding interactions between bait molecules and transcription factors.
It is estimated that fewer than 50% of the transcription factors present in humans have been characterized using previous techniques. The development of SMiLE-seq technology has provided an additional method with the potential to facilitate identification and characterization of previously undescribed transcription factor-DNA binding interactions.
Workflow of SMiLE-seq
SMiLE-seq uses a microfluidic device into which transcription factors, which have been transcribed and translated in vitro, are loaded. Transcription factor samples (~0.3 ng) are modified by the addition of an enhanced green fluorescent protein (eGFP) tag and combined with both target double-stranded DNA molecules (~8 pmol) tagged with Cyanine Dye5 (Cy5) and a double-stranded competitive DNA model, poly-dIdC, which operates as a negative control to limit spurious binding interactions.
When multiple transcription factors are simultaneously analyzed (e.g., when characterization of potential heterodimeric binding interactions is performed), each transcription factor is tagged with a correspondingly unique fluorescent tag. Samples are pumped through the microfluidic device in a passive, twenty-minute process that utilizes capillary action in a series of parallel channels. eGFP-tagged transcription factors are immunocaptured using anchored biotinylated anti-eGFP antibodies.
Mechanical depression of a button traps bound transcription factor-DNA complexes, and fluorescent analysis is performed. Fluorescent readouts that identify the presence of multiple fluorescent tags associated with a single antibody indicate heterodimeric binding interactions. The presence of DNA is confirmed by Cy5 signal detection. A polydimethylsiloxane membrane on the button surface captures successfully bound transcription factor-DNA complexes, while unbound transcription factors and targets are washed away.
Following the removal of unbound components, bound DNA molecules are collected, pooled, and amplified. Sequencing is subsequently performed using NextSeq 500 or HiSeq2000 sequencing lanes. Sequence data is used to develop a seed sequence, which is then probed for functional motifs using a uniquely developed hidden Markov model-based software pipeline.
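Although the published pipeline is hidden Markov model-based, the core step of deriving a binding motif from enriched bound sequences can be sketched with a simple position weight matrix; the aligned sequences below are invented for illustration.

```python
from collections import Counter

# Invented aligned sequences recovered from bound transcription factor-DNA
# complexes; a real pipeline first aligns reads around a seed sequence.
bound = ["TGACTCA", "TGACTCA", "TGAGTCA", "TTACTCA", "TGACTAA"]

pwm = []  # position weight matrix: per-position base frequencies
for pos in range(len(bound[0])):
    freq = Counter(seq[pos] for seq in bound)
    pwm.append({base: freq.get(base, 0) / len(bound) for base in "ACGT"})

# Consensus motif: the most frequent base at each position.
print("".join(max(col, key=col.get) for col in pwm))  # prints TGACTCA
```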
Advantages
The use of microfluidics in SMiLE-seq offers three main advantages when compared to current techniques used to measure protein-DNA interactions (e.g., ChIP-seq, HT-SELEX, and protein binding microarrays).
SMiLE-seq requires less transcription factor material than other similar techniques (only picograms are required).
The process is faster than other techniques (it requires less than an hour, as compared to days).
SMiLE-seq is not limited by the length of target DNA (a limitation of protein binding microarrays), and is not biased towards stronger affinity protein-DNA interactions (a major limitation of HT-SELEX).
The ability of many transcription factors to bind DNA is dependent on heterodimer formation, and therefore requires the presence of a specific dimer partner for binding; testing transcription factors individually has been shown to yield incomplete results. The number of possible heterodimer combinations has been estimated to range from 3,000 to 25,000, and many remain uncharacterized.
A technology like SMiLE-seq, which is able to detect these dimeric interactions, may help broaden current knowledge and characterization of transcription factor-DNA binding profiles. Additionally, previous technologies have used transcription factor probes in their truncated form, which may reduce their ability to bind and dimerize. SMiLE-seq enables robust identification of DNA binding specificities of full length, previously uncharacterized transcription factors. Furthermore, SMiLE-seq is able to identify transcription factor binding sites over a wide range of binding affinities, which represents a significant limitation of other technologies.
Limitations
The primary limitation of SMiLE-seq is that the technique can only be used to characterize the binding interactions of previously identified transcription factors, as the method requires in vitro transcription and translation of the transcription factors prior to their combination with DNA molecules. Additionally, previous studies have shown that fluorescent protein tags can affect the binding affinity of proteins to their targets.
The effect of the specific fluorescent protein tags on binding affinity would have to be investigated to determine whether this would impact specific protein-DNA interactions found using this technology. Further development of SMiLE-seq may involve modifying transcription factor expression conditions to increase the success of analysis.
See also
SELEX
ChIP-seq
Protein binding microarrays
Competition-ChIP
References
Protein methods
Molecular biology techniques
Biotechnology
DNA | SMiLE-Seq | [
"Chemistry",
"Biology"
] | 1,418 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology"
] |
54,643,138 | https://en.wikipedia.org/wiki/Zoothamnium%20niveum | Zoothamnium niveum is a species of ciliate protozoan which forms feather-shaped colonies in marine coastal environments. The ciliates form a symbiosis with sulfur-oxidizing chemosynthetic bacteria of the species "Candidatus Thiobios zoothamnicoli", which live on the surface of the colonies and give them their unusual white color.
Characteristics
The conspicuously white and feather-shaped colonies are composed of individual bell-shaped cells known as zooids. The stalks of individual cells grow from a single central stalk. Colonies can reach a length of up to 15 mm, formed from hundreds of single zooids, each with a length of only 120 μm. An entire colony can contract into a ball-shaped bunch through the contraction of myonemes in their stalks.
The white color is produced by chemolithoautotrophic sulfur-oxidizing bacteria, which cover the entire surface of the Z. niveum colony. In most other species of Zoothamnium, bacteria are only known to cover the stalks. The bacteria contain elemental sulfur, which appears white. Z. niveum appears colorless when the bacteria are absent.
Like in other ciliates, a contractile vacuole maintains osmotic balance for the cell, and allows it to survive the salt concentrations in both marine and brackish water. The vacuole is located in Z. niveum directly below the lip of the peristome.
Polymorphism
Most ciliates live as single-celled organisms in aquatic environments, and the single cell carries out all functions of life, such as nutrition, metabolism, and reproduction. Colonies of Z. niveum are composed of numerous individual cells that form a feather-like colonial unit, with several different cell types. Old branches of the colony illustrate the polymorphism of the zooids when viewed under the microscope. Three different forms of the individual ciliate cells are present, which are distinct in both form and function. The large macrozooids can transform into swarmers and leave the colony. They settle on suitable surfaces and develop into new colonies. The microzooids are small cells specialized for feeding, which the colony accomplishes by consuming its symbiotic bacteria and other organic particles. At the terminal ends of the colony are specialized zooids that can elongate and facilitate the asexual reproduction of the colony.
The bacteria on different parts of a host have different shapes despite belonging to the same species (polymorphism). Those on the stalks are shaped like rods, but those in the region of the ciliated oral apparatus of the microzooids are shaped like small spheres (coccoid). Intermediate forms are also found in between.
Distribution and habitat
The sessile colonies of Z. niveum were first described from the shallow waters of the Red Sea. They were later also found in the Florida Keys in the Gulf of Mexico, and at the Belize Barrier Reef in the Caribbean Sea.
The colonies settle in environments that contain sulfide. Hydrogen sulfide, sulfide, and related sulfur-containing compounds like thiosulfate are produced during the decomposition and remineralization of organic material. For example, plant material like the torn-off leaves of Posidonia oceanica in seagrass meadows of the Mediterranean accumulate in depressions of rocky ledges and decompose. In mangrove forests of the Caribbean, organic material can form peat and release sulfide. Hydrogen sulfide can also originate from geological phenomena such as at underwater hydrothermal vents, e.g. off the Canary Islands.
Ecological conditions
Extreme ecological conditions prevail at these sources of sulfide close to which colonies of Z. niveum settle. Because there is little water current under mangrove roots and at seagrass deposits under rock ledges, these decomposition hot-spots are extremely poor in oxygen and rich in sulfide. In mangrove forests off the coast of Belize, they have been found around small holes in the mangrove peat which form when the mangrove rootlets decompose. These openings have been called sulfide "microvent[s]", because they resemble in miniature the hydrothermal vents of the deep sea, the so-called black smokers, although the temperatures in shallow waters are much lower (28 °C in the Caribbean, 21 °C-25 °C in the Mediterranean (summer)), compared to the gradient between >300 °C and 2 °C in the deep sea because of volcanic activity. The Zoothamnium colonies do not settle directly over the decomposing material, but nearby e.g. on overhanging rocks, leaves of seagrass or seaweed, or mangrove roots.
Symbiosis
The symbiotic benefit provided by the colonies of Z. niveum to their attached ectosymbiotic bacteria, Candidatus Thiobios zoothamnicoli (a member of the Gammaproteobacteria), which are vertically transmitted, is the active alternation between oxygen-rich and sulfide-rich conditions. This alternation can occur through the regular contraction and extension of the colonies and through the water currents set up by the beating of the cilia in the region of the oral opening of the ciliates.
The rapid contraction and slow re-extension of the colonies causes a flow of both sulfide-rich water for the feeding of the bacteria and normal oxygenated seawater for the respiration of Z. niveum. The mixing is regulated by the beating of the cilia at the oral apparatus of Zoothamnium. When there is a low supply of sulfur compounds, the bacteria use the sulfur that is stored inside their cells. They eventually appear pale and transparent after four hours because the stored sulfur has been consumed. However, if the sulfide concentration is too high, it can be toxic to the Zoothamnium colonies and kill the ciliates despite the bacteria.
Bacteria close to the oral end of the microzooids have a coccoid form, a larger volume, and a higher division rate than the rod-shaped bacteria on the stalks, despite both belonging to the same species. This is because the mixing of water by the beating of the oral cilia results in a more optimal concentration of both oxygen and sulfide in the water there. The bacteria at the oral region can thus be used as a food source; they are swirled into the mouth (cytostome) of the ciliate and digested.
References
Literature
Christian Rinke, Jörg A. Ott and Monika Bright: "Nutritional processes in the chemoautotrophic Zoothamnium niveum symbioses", Symposium of the Biology of Tropical Shallow Water Habitats, Lunz, Austria, October 2001, pp. 19–21
External links
Smithsonian Marine Station at Fort Pierce - Zoothamnium niveum
Chemosynthetic symbiosis
Oligohymenophorea
Taxa named by Christian Gottfried Ehrenberg
Ciliate species | Zoothamnium niveum | [
"Biology"
] | 1,430 | [
"Biological interactions",
"Chemosynthetic symbiosis",
"Behavior",
"Symbiosis"
] |
74,639,708 | https://en.wikipedia.org/wiki/Drug%20permeability | In medicinal chemistry, Drug Permeability is an empirical parameter that indicates how quickly a chemical entity or an active pharmaceutical ingredient crosses a biological membrane or another biological barrier to become bioavailable in the body. Drug permeability, together with drug aqueous solubility are the two parameters that define the fate of the active ingredient after oral administration and ultimately define its bioavailability. When drug permeability is empirically measured in vitro, it is generally called apparent permeability (Papp) as its absolute value varies according to the method selected for its measurement. Papp is measured in vitro utilizing cellular based barriers such as the Caco-2 model or utilizing artificial biomimetic barriers, such as the Parallel Artificial Membrane Permeation Assay (PAMPA) or the PermeaPad. All these methods are built on an acceptor compartment (from 0.2 up to several mL according to the method uses) where the drug solution is placed, a biomimetic barrier and an acceptor compartment, where the drug concentration is quantified over time. By maintaining sink condition, a steady state is reached after a lag time (τ, Fig. 1) .
Data Analysis
The drug flux represents the slope of the linear regression of the accumulated mass (Q) over time (t) normalized over the permeation area (A), i.e., the surface area of the barrier available for permeation.
Equation 1: $j = \frac{1}{A}\frac{dQ}{dt}$
The drug apparent permeability (Papp) is calculated by normalizing the drug flux (j) over the initial concentration of the API in the donor compartment (c0) as:
Equation 2: $P_{app} = \frac{j}{c_0}$
Dimensionally, Papp represents a velocity, and it is normally expressed in cm/s. The higher the permeability, the higher the expected bioavailability of the drug after oral administration.
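Putting the two equations together, Papp can be computed from the cumulative permeated amount by fitting the steady-state portion of Q(t). A minimal sketch follows; all numbers (times, amounts, area, and donor concentration) are invented for illustration.

```python
import numpy as np

# Invented cumulative permeated amounts Q (mg) at times t (s),
# sampled after the lag time so the data lie in the steady state.
t = np.array([600.0, 1200.0, 1800.0, 2400.0, 3000.0])  # s
Q = np.array([0.012, 0.025, 0.037, 0.050, 0.062])      # mg
A = 1.77   # permeation area of the barrier, cm^2 (assumed)
c0 = 0.5   # initial donor concentration, mg/mL = mg/cm^3 (assumed)

slope, _ = np.polyfit(t, Q, 1)  # dQ/dt by linear regression, mg/s
flux = slope / A                # Equation 1: j = (dQ/dt) / A, in mg/(cm^2 s)
papp = flux / c0                # Equation 2: Papp = j / c0, in cm/s
print(f"Papp = {papp:.2e} cm/s")
```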
See also
Lipinski's rule of five
Pharmacodynamics
Pharmacokinetics
References
External links
Permm server and database, a computational tool for theoretical assessment of passive permeability of molecules across the lipid bilayer
Medicinal chemistry
Diffusion
Membrane biology | Drug permeability | [
"Physics",
"Chemistry",
"Biology"
] | 444 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Biochemistry",
"Membrane biology",
"nan",
"Molecular biology",
"Medicinal chemistry"
] |
74,643,075 | https://en.wikipedia.org/wiki/Mitochondrial%20pyruvate%20carrier%201 | Mitochondrial pyruvate carrier 1 (MPC1), also known as brain protein 44-like (BRP44L) and SLC54A1, is a protein that in humans is encoded by the MPC1 gene. It is part of the Mitochondrial Pyruvate Carrier (MPC) protein family. This protein is involved in transport of pyruvate across the inner membrane of mitochondria in preparation for the pyruvate dehydrogenase reaction.
Clinical significance
Mitochondrial pyruvate carrier deficiency (MPYCD) is an autosomal recessive disease due to mutations in the MPC1 gene on chromosome 6q27. It is an inborn error of carbohydrate metabolism that blocks aerobic glycolysis by preventing the transport of pyruvate from the cytosol into the mitochondrion for oxidative phosphorylation; anaerobic glycolysis, however, is preserved. Common signs and symptoms include poor growth, a normal lactate/pyruvate ratio (although both lactate and pyruvate are present at higher than normal concentrations), hepatomegaly, lactic acidosis, hypoglycemia, neurological problems, and hypotonia. A disease with comparable symptoms is also seen with autosomal recessive mutations of the MPC2 gene.
See also
Mitochondrial pyruvate carrier 2
Inborn errors of carbohydrate metabolism
References
Genes on human chromosome 6
Inborn errors of carbohydrate metabolism
Autosomal recessive disorders
Transport proteins
Solute carrier family | Mitochondrial pyruvate carrier 1 | [
"Chemistry"
] | 336 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
73,245,839 | https://en.wikipedia.org/wiki/Bogomolov%E2%80%93Sommese%20vanishing%20theorem | In algebraic geometry, the Bogomolov–Sommese vanishing theorem is a result related to the Kodaira–Itaka dimension. It is named after Fedor Bogomolov and Andrew Sommese. Its statement has differing versions:
This result is equivalent to the statement that
$$H^0\left(X, \Omega_X^p(\log D) \otimes \mathcal{A}^{-1}\right) = 0$$
for every complex projective snc pair $(X, D)$ and every invertible sheaf $\mathcal{A}$ with $\kappa(\mathcal{A}) > p$.
Because it asserts the vanishing of a space of global sections, this theorem is called a vanishing theorem.
See also
Bogomolov–Miyaoka–Yau inequality
Vanishing theorem (disambiguation)
Notes
References
Further reading
Theorems in algebraic geometry
Theorems in complex geometry | Bogomolov–Sommese vanishing theorem | [
"Mathematics"
] | 120 | [
"Theorems in algebraic geometry",
"Theorems in complex geometry",
"Theorems in geometry"
] |
73,246,459 | https://en.wikipedia.org/wiki/Dividing%20a%20square%20into%20similar%20rectangles | Dividing a square into similar rectangles (or, equivalently, tiling a square with similar rectangles) is a problem in mathematics.
Three rectangles
There is only one way (up to rotation and reflection) to divide a square into two similar rectangles.
However, there are three distinct ways of partitioning a square into three similar rectangles:
The trivial solution given by three congruent rectangles with aspect ratio 3:1.
The solution in which two of the three rectangles are congruent and the third one has twice the side length of the other two, where the rectangles have aspect ratio 3:2.
The solution in which the three rectangles are all of different sizes and where they have aspect ratio ρ², where ρ is the plastic ratio.
The fact that a rectangle of aspect ratio ρ² can be used for dissections of a square into similar rectangles is equivalent to an algebraic property of the number ρ² related to the Routh–Hurwitz theorem: all of its conjugates have positive real part.
Generalization to n rectangles
In 2022, the mathematician John Baez brought the generalization of this problem to n rectangles to the attention of the Mathstodon online mathematics community.
The problem has two parts: what aspect ratios are possible, and how many different solutions there are for a given n. Freiling and Rinne had previously published a result in 1994 stating that the aspect ratio of rectangles in these dissections must be an algebraic number each of whose conjugates has a positive real part. However, their proof was not constructive.
Numerous participants have attacked the problem of finding individual dissections using exhaustive computer search of possible solutions. One approach is to exhaustively enumerate possible coarse-grained placements of rectangles, then convert these to candidate topologies of connected rectangles. Given the topology of a potential solution, the determination of the rectangles' aspect ratio can then trivially be expressed as a set of simultaneous equations, thus either determining the solution exactly or eliminating it from possibility.
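As an example of such simultaneous equations, consider the three-rectangle dissection with all rectangles of different sizes: with a unit square, a full-height rectangle of width 1/r on the left, and the remaining column of width 1 − 1/r split into one landscape and one portrait rectangle, requiring the two heights to sum to 1 yields the cubic r³ − 2r² + r − 1 = 0, whose real root is ρ², the square of the plastic ratio. The sketch below verifies this numerically.

```python
import numpy as np

# Real root of r^3 - 2r^2 + r - 1 = 0, the aspect ratio of the
# three-rectangle dissection with three different sizes.
roots = np.roots([1, -2, 1, -1])
r = next(z.real for z in roots if abs(z.imag) < 1e-9)
print(round(r, 5))  # 1.75488

# Cross-check: r equals rho**2, where rho is the plastic ratio,
# the real root of x^3 = x + 1.
rho = next(z.real for z in np.roots([1, 0, -1, -1]) if abs(z.imag) < 1e-9)
print(round(rho ** 2, 5))  # 1.75488
```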
The numbers of distinct valid dissections for different values of n, for n = 1, 2, 3, ..., are:
See also
Squaring the square
References
External links
Python code for dissection of a square into n similar rectangles via "guillotine cuts" by Rahul Narain
Rectangular subdivisions
Mathematical problems
Recreational mathematics | Dividing a square into similar rectangles | [
"Physics",
"Mathematics"
] | 523 | [
"Tessellation",
"Recreational mathematics",
"Rectangular subdivisions",
"Mathematical problems",
"Symmetry"
] |
73,247,463 | https://en.wikipedia.org/wiki/Intake%20momentum%20drag | Intake momentum drag is an aerodynamic phenomenon which affects turboprop and jet-powered aircraft.
Causes
Intake momentum drag arises because the speed of the air entering the engine increases with flight speed while the exit speed of the air from the engine remains essentially constant. The outcome is that the amount by which the engine increases air velocity is reduced, which in turn causes a slight reduction in the net thrust of a jet engine.
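In the standard one-dimensional analysis of jet propulsion, with mass flow rate $\dot{m}$, flight speed $V_0$, and jet exit speed $V_j$, the net thrust is

$$F_{\text{net}} = \dot{m}(V_j - V_0) = \dot{m}V_j - \dot{m}V_0,$$

where the $\dot{m}V_0$ term, which grows with flight speed, is the intake momentum drag subtracted from the gross thrust $\dot{m}V_j$.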
Intake momentum drag yaw
Intake momentum drag yaw is a further consequence of intake momentum drag which affects V/STOL (vertical and/or short take-off and landing) aircraft such as the Hawker Siddeley Harrier.
Intake momentum drag yaw describes how the mass of air ingested by the engine intake, while the aircraft hovers in a crosswind, can result in a state of uncontrolled roll (roll being a secondary aerodynamic effect of yaw).
The phenomenon was identified during the test flying programme for the Harrier and required precise investigation. As a result, test pilot John Farley deliberately flew right to the edge of this condition repeatedly, so that a system to counteract the effect could be developed.
References
Aerospace engineering
Aerodynamics
Classical mechanics
Force | Intake momentum drag | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 262 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Aerodynamics",
"Mechanics",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
73,249,745 | https://en.wikipedia.org/wiki/Trimethylenemethane%20complexes | Trimethylenemethane complexes are metal complexes of the organic compound trimethylenemethane. Several examples are known, and some have been employed in organic synthesis.
History
The synthesis of cyclobutadieneiron tricarbonyl pointed to the possible existence of related complexes of elusive organic compounds. Trimethylenemethane (TMM) has a natural connection to cyclobutadiene, and, in 1966, Emerson and co-workers reported the first trimethylenemethane transition metal complex, (η4-C4H6)Fe(CO)3. This compound became the starting point for extensive studies.
Synthesis
Generally speaking, trimethylenemethane complexes are synthesized in the following four ways: (A) the dehalogenation of α,α'-dihalosubstituted precursors, (B) the thermal extrusion of XY (XY = HCl, Br2, or CH4) from η3-methylallyl complexes, (C) the ring opening of alkylidenecyclopropanes, and (D) the elimination of Me3SiX [X = OAc, Cl, OS(O)2Me] from functionalized allylsilanes (Figure 1).
Dehalogenation of α, α'-dihalosubstituted precursors
(η4-C4H6)Fe(CO)3, the first trimethylenemethane metal complex to be reported, was obtained from the reaction of 3-chloro-2-chloromethylprop-1-ene with Fe2(CO)9 or Na2[Fe(CO)4]. Following this result, a number of substituted trimethylenemethane iron complexes have been prepared.
Thermal extrusion from η3-methylallyl complexes
The thermal extrusion from η3-methylallyl complexes was reported by Emerson. The iron allyl complex, obtained from the reaction of 3-chloro-2-methylprop-1-ene with [Fe2(CO)9], decomposed on heating to afford the iron trimethylenemethane complex.
Ring opening of alkylidenecyclopropanes
In the presence of [Fe2(CO)9], the ring opening of 2-substituted methylenecyclopropanes leads to the formation of various η4-trimethylenemethane complexes containing different functional groups, such as (R1 = H, R2 = Ph), (R1 = Me, R2 = Ph), (R1 = R2 = Ph), and (R1 = H, R2 = CH=CH2). The stereochemistry has been elucidated by deuterium-labeling experiments.
Elimination of Me3SiX [X = OAc, Cl, OS(O)2Me] from functionalized allylsilanes
Tetrakis(triphenylphosphine)palladium(0) is a precursor to highly reactive η3-trimethylenemethane complexes. Allylsilanes oxidatively add to some low-valent d8 complexes, resulting in the formation of an η1-allyl complex, followed by the formation of an η3-allyl complex, and finally elimination of Me3SiX to yield the η4-trimethylenemethane complex. The isolation of the proposed intermediate further confirmed the mechanism.
η4- (Ph = C6H5)
Structure
According to gas phase electron diffraction, the complex adopts a staggered conformation about the iron center. The ligands, which include carbonyl and a trigonal-pyramidal trimethylenemethane, are arranged in the usual umbrella-type configuration. The central carbon of the trimethylenemethane ligand is closer to the iron center than the outer methylene carbons: the Fe-C(central) distance measures 1.94(1) Å, while the Fe-CH distances were measured at 2.12 Å. This result has also been confirmed by X-ray diffraction and vibrational spectroscopy.
The primary bonding interaction occurs between the 2e set of the Fe(CO)3 fragment and e" on the trimethylenemethane ligand. However, if the metal-trimethylenemethane axis is rotated by 60° into an eclipsed geometry, the interaction between 2e and e" is minimized, which results in an increase in the energy of the HOMO in the complex, which is a significant factor that provides a barrier to rotation, as shown in Figure 6b.
Extended Hückel calculations give a barrier of 87 kJ mol−1 using a planar trimethylenemethane ligand. Introducing a puckered conformation to the trimethylenemethane ligand, which resembles the experimental geometry, increases the calculated barrier to 98.6 kJ mol−1. This puckering induces mixing of s character into the e" orbitals, causing a more pronounced orientation toward the metal center. Consequently, the overlap between the e" and 2e orbitals is enhanced. The degree of puckering, characterized by θ, is around 12°. The mixing of s character into e" also results in the H-C-H plane being tipped away from the metal. The angle β, between C-1 and C-2 and the plane H-C-H, is typically about 15°.
Reactions
Trimethylenemethane complexes undergo a wide variety of reactions including those with electrophiles, nucleophiles as well as redox reactions.
The iron trimethylenemethane complex adds hydrogen chloride to yield an η3-allyl complex. Substituted trimethylenemethane iron complexes, on the other hand, react with strong acids to produce cross-conjugated dienyl iron cations and η4-diene complexes. η4-Trimethylenemethane complexes add nucleophiles to give charge-neutral η3-allyl complexes.
The phosphine-substituted complexes (PR3 = PMe3 or PMe2Ph) are oxidized by silver trifluoromethanesulfonate to give the 17-electron cation.
References
Coordination complexes | Trimethylenemethane complexes | [
"Chemistry"
] | 1,283 | [
"Coordination chemistry",
"Coordination complexes"
] |
73,250,426 | https://en.wikipedia.org/wiki/Transmission%20Kikuchi%20diffraction | Transmission Kikuchi Diffraction (TKD), also sometimes called transmission electron backscatter diffraction (t-EBSD), is a method for orientation mapping at the nanoscale. It’s used for analysing the microstructures of thin transmission electron microscopy (TEM) specimens in the scanning electron microscope (SEM). This technique has been widely utilised in the characterization of nano-crystalline materials, including oxides, superconductors, and metallic alloys.
TKD offers improved spatial resolution, enabling effective characterization of nanocrystalline materials and heavily deformed samples where high dislocation densities can prevent successful characterization using conventional Electron backscatter diffraction. Many studies have reported sub-10 nm resolution using TKD.
The main difference between diffraction spots and Kikuchi bands is that in TEM, discrete diffraction spots arise from coherent scattering of the incident beam, while the formation of Kikuchi bands is described as a two-step process consisting of incoherent scattering of the primary beam followed by coherent scattering of these forward-scattered electrons. TKD has also been applied to analyse fine-grained ultramylonite peridotite samples in a scanning electron microscope. TKD samples can be prepared with the standard methods used for transmission electron microscopy (TEM).
Description
Transmission Kikuchi diffraction (TKD or t-EBSD) is an electron backscatter diffraction (EBSD) technique that is used to analyse the crystallographic orientation and microstructure of materials at high spatial resolution. It is a variation of convergent-beam electron diffraction, which was introduced around the 1970s, and has since become increasingly popular in materials science research, especially for studying materials at the nanoscale.
In TKD, a thin foil sample is prepared and placed perpendicular to the electron beam of a scanning electron microscope. The electron beam is then focused on a small spot on the sample, and the crystal lattice of the sample diffracts the transmitted electrons. The diffraction pattern is then collected by a detector and analysed to determine the crystallographic orientation and microstructure of the sample.
One of the key advantages of TKD is its high spatial resolution that can reach a few nanometres. This is achieved by using a small electron beam spot size, typically less than 10 nanometres in diameter, and by collecting the transmitted electrons with a small-angle annular dark-field detector (STEM-ADF) in a scanning transmission electron microscope (STEM). Another advantage of TKD is its high sensitivity to local variations in crystallographic orientation. This is because the transmitted electrons in TKD are diffracted at very small angles, which makes the diffraction pattern highly sensitive to local variations in the crystal lattice.
TKD can also be used to study nano-sized materials, such as nanoparticles and thin films. Thin foil samples can be prepared for TKD using a focused ion beam (FIB) or an ion milling machine. However, such machines are expensive and their operation requires particular skills and training. Additionally, the diffraction patterns obtained from TKD can be more complex to interpret than those obtained from conventional EBSD techniques due to the complex geometry of the diffracted electrons.
On-axis and off-axis TKD methods differ in the sample's orientation with respect to the electron beam. In on-axis TKD, the sample is oriented so that the incident electron beam is nearly perpendicular to the sample surface. This results in a diffraction pattern that is nearly centred around the transmitted beam direction. On-axis TKD is typically used for analysing samples with low lattice strain and high crystallographic symmetry, such as single crystals or large grains.
In off-axis TKD, the sample is tilted with respect to the incident electron beam, typically at an angle of several degrees. This results in a diffraction pattern that is shifted away from the transmitted beam direction. Off-axis TKD is typically used for analysing samples with high lattice strain and/or low crystallographic symmetry, such as nano-crystalline materials or materials with defects. Off-axis TKD is often preferred for materials science research because it provides more information about the crystallographic orientation and microstructure of the sample, especially in samples with a high density of defects or a high degree of lattice strain. However, on-axis TKD can still be useful for studying samples with high crystallographic symmetry or for verifying the crystallographic orientation of a sample before performing off-axis TKD. The on-axis technique can speed up acquisition by more than 20 times, and a low scattering angle setup also gives rise to higher quality patterns.
EBSD resolution is influenced by multiple factors including the beam size, electron accelerating voltage, the material's atomic mass and the specimen's thickness. Out of these variables, sample thickness has the greatest effect on the pattern quality and resolution of the image. An increase in the sample thickness broadens the beam, thus reducing the lateral spatial resolution.
Further reading
References
Diffraction
Scientific techniques
Spectroscopy | Transmission Kikuchi diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,097 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Crystallography",
"Diffraction",
"Spectroscopy"
] |
73,252,403 | https://en.wikipedia.org/wiki/Carbon-carbon%20bond%20activation | Carbon-carbon bond activation refers to the breaking of carbon-carbon bonds in organic molecules. This process is an important tool in organic synthesis, as it allows for the formation of new carbon-carbon bonds and the construction of complex organic molecules. However, C–C bond activation is challenging mainly for the following reasons: (1) C-H bond activation is a competitive process of C-C activation, which is both energetically and kinetically more favorable; (2) the accessibility of the transition metal center to C–C bonds is generally difficult due to its 'hidden' nature; (3) relatively high stability of the C–C bond (90 kcal/mol−1). As a result, in the early stage, most examples of C-C activation are of stringed ring systems, which makes C-C activation more favorable by increasing the energy of the starting material. However, C-C activation of unstrained C-C bonds has remained challenging until the recent two decades.
Examples of C-C bond activation
Due to the difficulty of C–C activation, a driving force is required to facilitate the reaction. One common strategy is to form stable metal complexes. One example was reported by Milstein and coworkers, in which the C(sp2)–C(sp3) bond of bisphosphine ligands was selectively cleaved by a number of metals to afford stable pincer complexes under mild conditions.
Aromatization is another driving force that is utilized for C–C bond activation. For example, the Chaudret group reported that the C–C bond of steroid compounds can be cleaved through Ru-promoted aromatization of the B ring. At the same time, a methane molecule is released, which is possibly another driving force for this reaction.
In addition, metalloradicals have also been proven to have the ability to cleave C–C single bonds. The Chan group reported the C–C bond scission of cyclooctane via 1,2-addition with an Rh(III) porphyrin hydride, which involved the [RhII(ttp)]• radical as the key intermediate.
Mechanism of C-C bond activation
Generally speaking, there are two distinct mechanistic pathways that lead to C-C bond activation: (a) the β-carbon elimination of metal complexes. In this mechanism, a M–C intermediate and a double bond are formed at the same time; and (b) the direct oxidative addition of C–C bonds into low-valent metal adducts to form a bis(organyl)metal complex.
β-carbon elimination
In 1997, the Tamaru group reported the first metal-catalyzed β-carbon elimination of an unstrained compound. Their work revealed a novel Pd(0)-catalyzed ring opening of 4-vinyl cyclic carbonates. They proposed that the reaction is initiated by the elimination of carbon dioxide to form a π-allylpalladium intermediate, followed by β-decarbopalladation to form dienals and dienones. Since then, this field has blossomed; many similar reactions have been developed and have shown great potential in organic synthesis. Early research in this field focused on reactions of M–O–C–C species, and β-carbon elimination of M–N–C–C intermediates was not discovered until the last ten years. In 2010, Nakamura reported a Cu-catalyzed substitution reaction of propargylic amines with alkynes or other amines as the first example of transition-metal-catalyzed β-carbon elimination of amines.
Oxidative addition
Compared with β-carbon elimination, oxidative addition of the C–C bond is a more direct way of achieving C–C bond activation. However, it is more challenging for the following reasons: 1) it forms two weak M–C bonds at the expense of breaking a stable C–C bond, so it is energetically unfavorable; 2) the C–C bond is usually sterically hindered, which makes the metal center hard to approach. As a result, the cleavage of unstrained compounds achieved so far has mainly focused on ketone substrates. This is because the C–C bond adjacent to the carbonyl of ketones is weaker and can be much more easily cleaved. It also benefits from reduced steric hindrance owing to the planar structure of the carbonyl motif. Suggs and Jun are pioneers in this field. They found that an Rh(I) complex, [RhCl(C2H4)2]2, can insert oxidatively into the C–C bond of 8-acylquinolines at the 8-position to form relatively stable 5-membered rhodacycles. Subsequently, 8-acylquinoline can be coupled with ethylene to afford 8-quinolinyl ethyl ketone, which represented the first transition-metal-catalyzed scission of C–C bonds via oxidative addition.
Applications of C-C bond activation
Carbon-carbon bond activation reactions have numerous applications in organic synthesis, materials science, and pharmaceuticals. In organic synthesis, these reactions are used to construct complex molecules in a highly efficient and selective manner. For example, in 2021 the Dong group described the first enantioselective total synthesis of the natural product penicibilaenes using a late-stage carbon-carbon bond activation strategy. Many other examples highlight the potential of carbon-carbon bond activation strategies in the total synthesis of complex natural products with high stereocontrol.
References
Organic chemistry
Chemical bonding | Carbon-carbon bond activation | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,181 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
73,255,051 | https://en.wikipedia.org/wiki/Comparison%20of%20data%20structures | This is a comparison of the performance of notable data structures, as measured by the complexity of their logical operations. For a more comprehensive listing of data structures, see List of data structures.
The comparisons in this article are organized by abstract data type. As a single concrete data structure may be used to implement many abstract data types, some data structures may appear in multiple comparisons (for example, a hash map can be used to implement an associative array or a set).
Lists
A list or sequence is an abstract data type that represents a finite number of ordered values, where the same value may occur more than once. Lists generally support the following operations:
peek: access the element at a given index.
insert: insert a new element at a given index. When the index is zero, this is called prepending; when the index is the last index in the list it is called appending.
delete: remove the element at a given index.
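For instance, Python's built-in array-backed list supports these operations directly; a dynamic array gives O(1) peek but O(n) insert and delete at arbitrary indices.

```python
seq = [10, 20, 30]

value = seq[1]    # peek at index 1 -> 20 (O(1) for a dynamic array)
seq.insert(0, 5)  # prepend -> [5, 10, 20, 30] (O(n): every element shifts)
seq.append(40)    # append -> [5, 10, 20, 30, 40] (amortized O(1))
del seq[2]        # delete index 2 -> [5, 10, 30, 40] (O(n))
print(value, seq)
```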
Maps
Maps store a collection of (key, value) pairs, such that each possible key appears at most once in the collection. They generally support three operations:
Insert: add a new (key, value) pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value.
Remove: remove a (key, value) pair from the collection, unmapping a given key from its value. The argument to this operation is the key.
Lookup: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation.
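As a concrete example, a hash-based map such as Python's dict provides all three operations in expected O(1) time.

```python
ages = {}                 # hash map: expected O(1) per operation

ages["alice"] = 30        # insert: bind key to value
ages["alice"] = 31        # insert again: the existing mapping is overwritten
print(ages.get("alice"))  # lookup -> 31
print(ages.get("bob"))    # lookup of an absent key -> None
del ages["alice"]         # remove: unmap the key
```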
Unless otherwise noted, all data structures in this table require O(n) space.
Integer keys
Some map data structures offer superior performance in the case of integer keys. In the following table, let w be the number of bits in the keys.
Priority queues
A priority queue is an abstract data type similar to a regular queue or stack. Each element in a priority queue has an associated priority. In a priority queue, elements with high priority are served before elements with low priority. Priority queues support the following operations:
insert: add an element to the queue with an associated priority.
find-max: return the element from the queue that has the highest priority.
delete-max: remove the element from the queue that has the highest priority.
Priority queues are frequently implemented using heaps.
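Python's heapq module, which maintains a binary min-heap inside a plain list, illustrates the heap-based implementation; negating priorities turns it into the max-priority queue described above.

```python
import heapq

pq = []                                 # binary min-heap stored in a list
heapq.heappush(pq, (-5, "refuel"))      # insert with priority 5 (negated,
heapq.heappush(pq, (-9, "fire alarm"))  # since heapq is a min-heap and we
heapq.heappush(pq, (-1, "tidy desk"))   # want the max served first)

print(pq[0][1])                         # find-max -> "fire alarm" (O(1))
prio, task = heapq.heappop(pq)          # delete-max (O(log n))
print(task, -prio)                      # fire alarm 9
```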
Heaps
A (max) heap is a tree-based data structure which satisfies the heap property: for any given node C, if P is a parent node of C, then the key (the value) of P is greater than or equal to the key of C.
In addition to the operations of an abstract priority queue, the following table lists the complexity of two additional logical operations:
increase-key: updating a key.
meld: joining two heaps to form a valid new heap containing all the elements of both, destroying the original heaps.
Notes
References
Data structures | Comparison of data structures | [
"Technology"
] | 616 | [
"Computing comparisons"
] |
66,009,512 | https://en.wikipedia.org/wiki/Introduction%20to%20Quantum%20Mechanics%20%28book%29 | Introduction to Quantum Mechanics, often called Griffiths, is an introductory textbook on quantum mechanics by David J. Griffiths. The book is considered a standard undergraduate textbook in the subject. Originally published by Pearson Education in 1995 with a second edition in 2005, Cambridge University Press (CUP) reprinted the second edition in 2017. In 2018, CUP released a third edition of the book with Darrell F. Schroeter as co-author; this edition is known as Griffiths and Schroeter.
Content (3rd edition)
Part I: Theory
Chapter 1: The Wave Function
Chapter 2: Time-independent Schrödinger Equation
Chapter 3: Formalism
Chapter 4: Quantum Mechanics in Three Dimensions
Chapter 5: Identical Particles
Chapter 6: Symmetries and Conservation Laws
Part II: Applications
Chapter 7: Time-independent Perturbation Theory
Chapter 8: The Variational Principle
Chapter 9: The WKB Approximation
Chapter 10: Scattering
Chapter 11: Quantum Dynamics
Chapter 12: Afterword
Appendix: Linear Algebra
Index
Reception
The book was reviewed by John R. Taylor, among others. It has also been recommended in other, more advanced, textbooks on the subject.
According to physicists Yoni Kahn of Princeton University and Adam Anderson of the Fermi National Accelerator Laboratory, Griffiths' Introduction to Quantum Mechanics covers all materials needed for questions on quantum mechanics and atomic physics in the Physics Graduate Record Examinations (Physics GRE).
Publication history
See also
Introduction to Electrodynamics by the same author
List of textbooks in electromagnetism
List of textbooks on classical mechanics and quantum mechanics
References
Physics textbooks
Quantum mechanics
1995 non-fiction books
2005 non-fiction books
2018 non-fiction books
Prentice Hall books
Cambridge University Press books
Undergraduate education | Introduction to Quantum Mechanics (book) | [
"Physics"
] | 341 | [
"Quantum mechanics",
"Works about quantum mechanics"
] |
66,016,004 | https://en.wikipedia.org/wiki/HL-2M | HL-2M is a research tokamak at the Southwestern Institute of Physics in Chengdu, China. It was completed on November 26, 2019 and commissioned on December 4, 2020. HL-2M is now used for nuclear fusion research, in particular to study heat extraction from the plasma. With a major radius of , the tokamak is a medium-scale device. The magnetic field of up to is created by non-superconducting copper coils.
References
Tokamaks
Fusion reactors | HL-2M | [
"Physics",
"Chemistry"
] | 106 | [
"Nuclear fusion",
"Fusion reactors",
"Plasma physics stubs",
"Plasma physics"
] |
66,016,836 | https://en.wikipedia.org/wiki/Borophosphate | The borophosphates are mixed anion compounds containing borate and phosphate anions, which may be joined together by a common oxygen atom. Compounds that contain water or hydroxy groups can also be included in the class of compounds.
Borophosphates can be classified by whether or not they are hydrated, and the anion structure, which can be single, double, triple, isolated ring, isolated branched ring, simple chain, branched chain, loop chain, layers, or three-dimensional network. The isolated anion compounds are the borate phosphates, which contain separate borate and phosphate groups. Some of the borophosphate structures resemble silicates.
Related compounds include aluminophosphates, which have aluminium instead of boron, gallophosphates, with gallium in place of boron, and by substituting the phosphate: boroarsenates, boroantimonates, and vanadoborates.
Formation
Borophosphates can be formed by heating compounds together at up to 900 °C. The products are dense, anhydrous, and do not contain organic substances.
Solvothermal synthesis uses a non-aqueous solvent such as ethylene glycol to dissolve the product.
The flux method crystallises the product from a molten flux of boric acid and sodium dihydrogen phosphate at around 171 °C.
The hydrothermal method heats the ingredients with water under pressure up to 200 °C. The ingredients are boric acid, phosphoric acid, metal salts, or organic bases. Products often contain hydrogen.
The ionothermal synthesis method uses an ionic liquid such as 1-alkyl-3-methylimidazolium bromide as a solvent. This can be done at atmospheric pressure and temperatures under 100 °C.
Characteristics
Borophosphate compounds have been investigated for magnetic, electrical, optical and catalytic properties. Some borophosphates are porous and so have surface area for interaction in their interiors, not just on their surfaces. They can reversibly absorb water, or have channels that can allow ions to conduct. The mirror image of a labelled tetrahedron cannot be superimposed on the original (even with rotations or movements), so compounds containing phosphate and borate tetrahedra can be non-centrosymmetric, or chiral.
List
References
Borates
Phosphates
Mixed anion compounds | Borophosphate | [
"Physics",
"Chemistry"
] | 487 | [
"Matter",
"Mixed anion compounds",
"Salts",
"Phosphates",
"Ions"
] |
47,684,331 | https://en.wikipedia.org/wiki/Imidazolate | Imidazolate (C3H3N2−) is the conjugate base of imidazole. It is a nucleophile and a strong base. The free anion has C2v symmetry. Imidazole has a pKa of 14.05, so the deprotonation of imidazole (C3H3N2H) requires a strong base.
Occurrence
Imidazolate is a common bridging ligand in coordination chemistry. In the zeolitic imidazolate frameworks, the metals are interconnected via imidazolates. In the enzyme superoxide dismutase, imidazolate links copper and zinc centers.
References
Imidazoles
Anions | Imidazolate | [
"Physics",
"Chemistry"
] | 151 | [
"Ions",
"Matter",
"Anions"
] |
47,689,010 | https://en.wikipedia.org/wiki/Judith%20Q.%20Longyear | Judith Querida Longyear (20 September 1938–13 December 1995) was an American mathematician and professor whose research interests included graph theory and combinatorics. Longyear was the second woman to ever earn a mathematics Ph.D. from Pennsylvania State University, where she studied under the supervision of Sarvadaman Chowla and wrote a thesis entitled Tactical Configurations. Longyear taught mathematics at several universities including California Institute of Technology, Dartmouth College and Wayne State University. She worked on nested block designs and Hadamard matrices.
References
Graph theorists
20th-century American mathematicians
1938 births
1995 deaths
20th-century American women mathematicians | Judith Q. Longyear | [
"Mathematics"
] | 126 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
47,690,260 | https://en.wikipedia.org/wiki/Giovanni%20Vignale | Giovanni Vignale is an Italian American physicist and Professor of Physics at the University of Missouri. Vignale is known for his work on density functional theory - a theoretical approach to the quantum many-body problem - and for several contributions to many-particle physics and spintronics. He is also the author of a monograph on the "Quantum Theory of the Electron Liquid" (with Gabriele F. Giuliani) and a book entitled "The Beautiful Invisible - Creativity, imagination, and theoretical physics".
Life
Vignale was born in Naples, Italy, in 1957 and studied physics at the Scuola Normale Superiore in Pisa, where he graduated in 1979. He completed his Ph.D. at Northwestern University in 1984, with a thesis on "Collective modes, effective interactions and superconductivity in the electron-hole liquid". He was a postdoctoral researcher at the Max-Planck-Institute for Solid State Research in Stuttgart, Germany and at Oak Ridge National Laboratory in Oak Ridge, Tennessee, before joining the Department of Physics and Astronomy at the University of Missouri in 1988. He is Curators' Professor of Physics at the University of Missouri since 2006 and Fellow of the American Physical Society since 1997.
Research contributions
Vignale is known for his contributions to density functional theory. In 1987 he formulated, in collaboration with Mark Rasolt, the current density functional theory for electronic systems in the presence of a static magnetic field. In 1996 he developed, with Walter Kohn (Nobel Laureate in Chemistry, 1998), the time-dependent current density functional theory for electronic systems subjected to time-dependent electromagnetic fields. He is also known for his contributions to spintronics: in 2000, with Irene D'Amico, he introduced the concept of spin Coulomb drag (experimentally observed in 2005). In 2003 he proposed, with Michael E. Flatté of the University of Iowa, the theoretical concept for a unipolar spin diode and a unipolar spin transistor.
Vignale is co-author (with Gabriele F. Giuliani) of a monograph on the quantum electron liquid, which is used by students and researchers for reference and self-study. In 2011 he published a non-technical book "The Beautiful Invisible - Creativity, imagination, and theoretical physics", which presents theoretical physics as a form of art. In the introduction to this book he writes “A good scientific theory is like a symbolic tale, an allegory of reality. Its characters are abstractions that may not exist in reality; yet they give us a way of thinking more deeply about reality. Like a fine work of art, the theory creates its own world: it transforms reality into something else – an illusion perhaps, but an illusion that has more value than the literal fact.”
Literary work
Giovanni Vignale is the author of several works of fiction and poetry. Some of his poems have been translated from English to Spanish by the renowned Cuban poet Juana Rosa Pita and are published in both languages in Time is Alive/El Tiempo Está Vivo. The dramatic quartet Odradek and Billy Bass Drink to the End of the World features four short plays patterned after the classic Japanese Noh drama form. About this book Juana Rosa Pita writes: "Suspended in space and time, his characters are pure abstractions, speaking devices, neither alive nor dead, in fact, not even human... Poetry, prose and drama conspire to make these plays an unforgettable reading experience".
Prose
Odradek and Billy Bass Drink to the End of the World. El Zunzun Viajero, 2018
Vite Scambiate. Cultura Duemila Editrice, 1993
Poetry
Time is Alive/El tiempo está vivo. El Zunzun Viajero, 2019
References
Northwestern University alumni
University of Missouri physicists
21st-century American physicists
Computational chemists
1957 births
Living people
Fellows of the American Physical Society | Giovanni Vignale | [
"Chemistry"
] | 812 | [
"Computational chemistry",
"Theoretical chemists",
"Computational chemists"
] |
47,692,268 | https://en.wikipedia.org/wiki/Marine%20mercury%20pollution | Mercury is a heavy metal that cycles through the atmosphere, water, and soil in various forms to different parts
of the world. Due to this natural mercury cycle, irrespective of which part of the world releases mercury, it could affect an entirely different part of the world, making mercury pollution a global concern. Mercury pollution is now identified as a global problem and awareness has been raised on an international action plan to minimize anthropogenic mercury emissions and clean up mercury pollution. The 2002 Global Mercury Assessment concluded that "International actions to address the global mercury problem should not be delayed". Among the many environments that are under the impact of mercury pollution, the ocean is one which cannot be neglected, as it has the ability to act as a "storage closet" for mercury. According to a recent model study, the total anthropogenic mercury released into the ocean is estimated to be around 45,000 to 80,000 metric tons, and two-thirds of this amount is estimated to be found in waters shallower than the 1000 m level, where much of the consumable fish live. Mercury can bioaccumulate in marine food chains in the form of highly toxic methylmercury, which can cause health risks to human seafood consumers. According to statistics, about 66% of global fish consumption comes from the ocean. Therefore, it is important to monitor and regulate oceanic mercury levels to prevent more and more mercury from reaching the human population through seafood consumption.
Sources
Mercury release occurs through both natural and anthropogenic processes. Natural processes are mainly geogenic such as volcanic activities and land emissions through the soil. Volcanoes release mercury from the underground reservoirs upon eruption. Land emissions are usually observed in regions closer to plate-tectonic boundaries where soils are enriched with minerals such as cinnabar (insoluble mercury sulfide, HgS). This mercury is released, usually as a salt, either by natural weathering of the rocks or by geothermal reactions. While natural phenomena account for a certain percentage of present-day emissions, anthropogenic emissions alone have increased mercury concentration in the environment by threefold. Global Mercury Assessment 2013 states main anthropogenic sources of mercury emission are artisanal and small-scale gold mining, fossil fuel burning, and primary production of non-ferrous metals. Other sources such as cement production, consumer product waste, crematoria, contaminated sites, and the chloralkali industry also contribute in relatively small percentages.
Mercury enters the ocean in different ways. Atmospheric deposition is the largest source of mercury in the oceans. Atmospheric deposition introduces three types of mercury to the ocean. Gaseous elemental mercury (Hg0) enters the ocean through air-water exchange. Inorganic mercury (Hg2+/HgII) and particle-bound mercury (Hg(P)) enter through wet and dry deposition. In addition, mercury enters the ocean via rivers, estuaries, sediments, hydrothermal vents, etc. These sources also release organic mercury compounds such as methylmercury. Once they are in the ocean they can undergo many reactions primarily grouped as; redox reactions (gain or loss of electrons), adsorption processes (binding to solid particles), methylation, and demethylation (addition or removal of a methyl group).
Sedimentary mercury
Mercury can enter seas and the open ocean as a result of the downstream movement and re-deposition of contaminated sediments from urban estuaries. For example, high total Hg content up to 5 mg/kg and averaging about 2 mg/kg occurs in the surface sediments and sediment cores of the tidal River Mersey, UK, due to discharge from historical industries located along the banks of the tidal river, including the historical chlor-alkali industry. Sediments along a 100 km stretch of the Thames Estuary have also been shown to have total Hg contents of up to 12 mg/kg and a mean of 2 mg/kg, with the highest concentrations found at depth in and around London. A gradual and statistically significant decrease in sedimentary Hg content occurs in the Thames as a result of greater distance from the historical and current point-sources, sorption and in-river deposition in the mud reaches, as well as dilution by marine sands from the Southern North Sea. In contrast, sediments entering the ocean from the marsh creeks of the east coast US and mangroves fringing the South China Sea generally have moderate sedimentary Hg (<0.5 mg/kg).
Submarines
Many tonnes of liquid mercury reside in steel cylinders in the keels of sunken submarines around the world. Some have begun to leak and create environmental problems, for example German submarine U-864, sunk in 1945 near the coast of Norway, containing 67 tonnes of mercury.
Chemistry
Reduction and oxidation of mercury mostly occur closer to the ocean water surface. These are either driven by sunlight or by microbial activity. Under UV radiation, elemental mercury oxidizes and dissolves directly in ocean water or binds to other particles. The reverse reaction reduces some mercury Hg2+ to elemental mercury Hg(0), which returns to the atmosphere. Fine aerosols in the atmosphere, such as ocean water droplets, can act as small reaction chambers in this process, providing the special reaction conditions required. Oxidation and reduction of mercury in the ocean are not simple reversible reactions; the proposed pathway of ocean aerosol mercuric photochemistry suggests that the oxidation occurs through a reactive intermediate.
Photo oxidation is suspected to be driven by OH· radicals and reduction is driven by wind and surface layer disturbances. In the dark, mercury redox reactions continue due to microbial activity. The biological transformations are different and have a smaller rate compared to the sunlight-driven processes above. Inorganic mercury Hg2+ and methylmercury have the ability to get adsorbed onto particles. A positive correlation of binding is observed for the amount of organic matter vs. the concentration of these mercury species, showing that most of them bind to organic matter. This phenomenon can determine the bioavailability and toxicity of mercury in the ocean. Some methylmercury is released into the ocean through river run-off. However, most of the methylmercury found in the ocean is produced in-situ (inside the ocean itself).
Methylation of inorganic mercury can occur via biotic and abiotic pathways. However, biotic pathways are more predominant. The reactions illustrated in a simplified scheme below are actually parts of complex enzyme-driven metabolic pathways taking place inside microbial cells.
In abiotic reactions, humic substances act as methylating agents and therefore this process occurs at shallow sea levels where decomposing organic matter is available to combine with inorganic mercury Hg2+. Mercury methylation studies in polar regions have also shown a positive correlation between methylation and chlorophyll content in water, showing there could also be biogenic pathways for methylmercury production. Produced methylmercury accumulates in microbes. Because methylmercury permeates membranes readily and is not degraded in the species that depend on those microbes, this very toxic compound gets biomagnified through marine food chains up to the top predators. Many humans consume many types of marine fish that are top predators in the food chains, putting their health in great danger. Therefore, finding possible solutions to minimize further mercury emissions and clean up the already existing mercury pollution is extremely important.
Health risks
Oceanic mercury pollution presents a serious threat to human health. The United States Environmental Protection Agency (EPA) states that mercury consumption by people of all ages can result in loss of peripheral vision, weakened muscles, impairment of hearing and speech, and deteriorated movement coordination. Infants and developing children face even more serious health risks because mercury exposure inhibits proper brain and nervous system development, damaging memory, cognitive thinking, language abilities, attention, and fine motor skills. The case of Minamata disease that occurred in Minamata Bay, Japan in the 1950s demonstrated the frightening effects of exposure to extremely high concentrations of mercury. Adult patients experienced extreme salivation, limb deformity, and irreversible dysarthria and intelligence loss. In children and fetuses (exposed to mercury through the mother's consumption of contaminated seafood), extensive brain lesions were observed and the patients experienced more serious effects like cerebral palsy, mental retardation, and primitive reflexes. In order to avoid the toxic effects of mercury exposure, the United States EPA advises a mercury dose limit of 0.1 μg/kg/day.
In addition to human health, animal health is also seriously threatened by mercury pollution in the ocean. The effects of high mercury levels on animal health were revealed by the severe mercury poisoning in Minamata Bay in which many animals exhibited extremely strange behaviors and high mortality rates after consuming contaminated seafood or absorbing mercury from the seawater. The cat population essentially disappeared due to cats drowning in the ocean and simply collapsing dead and it became commonplace to witness birds falling out of the sky and fish swimming in circles.
Prevention and remedy
Cleaning up the existing mercury pollution could be a tedious process. Nevertheless, there is some promising ongoing research bringing hope to the challenging task. One such research is based on nanotechnology. It uses synthesized aluminum oxide nanoparticles (Al2O3) mimicking the coral structures. These structures absorb heavy metal toxins effectively due to the high surface/volume ratio and the quality of the surface. In nature, it has been long observed corals can absorb heavy metal ions due to their surface structure and this new technique has been used in nanotechnology to create "synthetic corals" which may help clean mercury in the ocean.
Another novel material (Patent application: PCT/US15/55205) is still under investigation which looks at the possibility of cleaning mercury pollution using orange peels as raw material. This technology produces sulfur limonene polysulfide (proposed material) using sulfur and limonene. Using industrial byproducts to manufacture this polymer makes it a highly sustainable approach. The scientists say 50% of the mercury content could be reduced with a single treatment using this polymer.
In addition to the cleaning processes, minimizing the usage of coal power and shifting to cleaner energy sources, reducing small-scale artisanal gold mining, proper treatment of industrial mercury waste, and implementation policies are sound approaches to reduce mercury emissions in the long term-large scale plan. Public awareness is critical in achieving this goal. Proper disposal of mercury-containing items such as medicinal packaging and thermometers, using mercury-free bulbs and batteries, and buying consumer products with zero or minimum mercury emission to the environment can make a significant difference in recovering the world's ecosystems from mercury pollution leaving a minimum legacy of mercury pollution in the ocean for our future generations.
See also
Dimethylmercury
Ethylmercury
Methylmercury
Mercury (element)
Mercury cycle
Mercury in fish
Mercury poisoning
References
Mercury pollution
Ocean pollution | Marine mercury pollution | [
"Chemistry",
"Environmental_science"
] | 2,218 | [
"Ocean pollution",
"Water pollution"
] |
70,394,222 | https://en.wikipedia.org/wiki/Pr%3AYLF%20laser | A Pr:YLF laser (or Pr3+:LiYF4 laser) is a solid state laser that uses a praseodymium doped yttrium-lithium-fluoride crystal as its gain medium. The first Pr:YLF laser was built in 1977 and emitted pulses at 479 nm. Pr:YLF lasers can emit in many different wavelengths in the visible spectrum of light, making them potentially interesting for RGB applications and materials processing. Notable emission wavelengths are 479 nm, 523 nm, 607 nm and 640 nm.
Technology
Pr:YLF lasers are optically pumped using flashlamps, pulsed dye lasers or diode lasers. The strongest emission line of Pr:YLF is 640 nm, which stems from the 3P0 → 3F2 transition of the Pr3+ ion. However, by suppressing this line (and other lines stronger than the desired one), other transitions can be used for obtaining different wavelengths. This can be done using dichroic mirrors. Pr:YLF lasers are pumped by using the transitions from 3H4 to 3P2, 3P1 or 3P0 (corresponding wavelengths: 444 nm, 469 nm, 479 nm). The Pr3+ ion then undergoes a quick, radiationless transition to the 3P0 level (fast relaxation), followed by the light-emitting transition. Finally, the ground level (3H4) is reached via another radiationless transfer, making the Pr:YLF laser a 4-level system. Pr:YLF supports lasing at the following wavelengths: 479 nm, 523 nm, 546 nm, 607 nm, 640 nm, 698 nm, 721 nm, 907 nm and 915 nm.
The 3H4 → 3P2 transition is of special interest, since its wavelength (444 nm) can be covered by InGaN laser diodes, which are commercially available at high output powers. Because the absorption peak at 444 nm only has a bandwidth of a few nanometers, pump diodes have to be selected and stabilized for efficient laser action. Diode pumped solid state (DPSS) lasers using these diodes have reached multiple watts of output powers in continuous wave operation. Typical DPSS setups using Pr:YLF crystals consist of a hemispheric resonator in which the crystal is pumped longitudinally by the pump diode. Depending on the resonator length, this resonator type can tolerate slight misalignments of the mirrors and retains stability even if the crystal shows thermal lensing effects. The plane mirror of the resonator can be replaced by coating one face of the crystal, making the setup very compact.
Although several other rare-earth dopants such as Sm3+, Tb3+, Dy3+, Ho3+ and Er3+ offer transitions in the visible spectrum, the most efficient emission in this region is achieved by Pr:YLF lasers.
Pr:YLF lasers can be operated in continuous wave (cw) or pulsed mode. Q-Switched and frequency-doubled Pr:YLF lasers have also been reported.
Applications
Pr:YLF lasers, especially in combination with high power InGaN laser diodes, are of high scientific interest because of their emission lines in the visible spectrum of light and potentially very compact laser setups. Besides biomedical applications such as fluorescence microscopy or cytometry, Pr:YLF lasers also are very attractive for the use in powerful RGB light sources.
Furthermore, compact and efficient continuous wave (deep) UV lasers can be made by frequency doubling the output of Pr:YLF lasers. Nanosecond UV pulses can be obtained by Q-switching frequency doubled Pr:YLF lasers. Pulsed and/or continuous wave UV lasers can be used for very precise materials processing, photoluminescence analysis, lithography for semiconductor manufacturing and inspection, UV Raman spectroscopy, eye surgery, etc.
Applications also include precise and efficient materials processing of some non-ferrous metals like copper or gold.
References
Solid-state lasers
Yttrium compounds
Lithium compounds
Praseodymium compounds | Pr:YLF laser | [
"Chemistry"
] | 825 | [
"Solid state engineering",
"Solid-state lasers"
] |
70,396,474 | https://en.wikipedia.org/wiki/Square%20root%20of%207 | The square root of 7 is the positive real number that, when multiplied by itself, gives the prime number 7. It is more precisely called the principal square root of 7, to distinguish it from the negative number with the same property. This number appears in various geometric and number-theoretic contexts. It can be denoted in surd form as √7, and in exponent form as 7^(1/2).
It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are:
2.64575131106459059050161575363926042571025918308245018036833…
which can be rounded up to 2.646 to within about 99.99% accuracy (about 1 part in 10000); that is, it differs from the correct value by about 1/4,000. The approximation 127/48 (≈ 2.645833...) is better: despite having a denominator of only 48, it differs from the correct value by less than 1/12,000, or less than one part in 33,000.
More than a million decimal digits of the square root of seven have been published.
Rational approximations
The extraction of decimal-fraction approximations to square roots by various methods has used the square root of 7 as an example or exercise in textbooks, for hundreds of years. Different numbers of digits after the decimal point are shown: 5 in 1773 and 1852, 3 in 1835, 6 in 1808, and 7 in 1797.
An extraction by Newton's method (approximately) was illustrated in 1922, concluding that it is 2.646 "to the nearest thousandth".
For a family of good rational approximations, the square root of 7 can be expressed as the continued fraction
[2; 1, 1, 1, 4, 1, 1, 1, 4, …]
The successive partial evaluations of the continued fraction, which are called its convergents, approach √7:
2/1, 3/1, 5/2, 8/3, 37/14, 45/17, 82/31, 127/48, …
Their numerators are 2, 3, 5, 8, 37, 45, 82, 127, 590, 717, 1307, 2024, 9403, 11427, 20830, 32257… , and their denominators are 1, 1, 2, 3, 14, 17, 31, 48, 223, 271, 494, 765, 3554, 4319, 7873, 12192,….
Each convergent is a best rational approximation of √7; in other words, it is closer to √7 than any rational with a smaller denominator. Approximate decimal equivalents improve linearly (number of digits proportional to convergent number) at a rate of less than one digit per step.
Every fourth convergent, starting with 8/3, expressed as x/y, satisfies the Pell's equation
x^2 − 7y^2 = 1
When √7 is approximated with the Babylonian method, starting with x_0 = 3 and using x_{n+1} = (x_n + 7/x_n)/2, the nth approximant x_n is equal to the 2^(n+1)th convergent of the continued fraction:
3, 8/3, 127/48, 32257/12192, …
All but the first of these satisfy the Pell's equation above.
The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial f(x) = x^2 − 7. The Newton's method update, x_{n+1} = x_n − f(x_n)/f′(x_n), is equal to (x_n + 7/x_n)/2 when f(x) = x^2 − 7. The method therefore converges quadratically: the number of accurate decimal digits roughly doubles with each Newton or Babylonian step.
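A short sketch verifying this correspondence with exact rational arithmetic; starting from x_0 = 3, each Newton/Babylonian step reproduces the convergents 8/3, 127/48 and 32257/12192 listed above, and each satisfies the Pell's equation:

```python
from fractions import Fraction

x = Fraction(3)                      # x_0 = 3, itself a convergent of sqrt(7)
for _ in range(3):
    x = (x + 7 / x) / 2              # Newton step for f(x) = x**2 - 7
    # Every iterate from here on satisfies x^2 - 7*y^2 = 1 (Pell's equation).
    assert x.numerator**2 - 7 * x.denominator**2 == 1
    print(x, float(x))
# Prints 8/3, 127/48, 32257/12192 with their decimal values.
```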
Geometry
In plane geometry, the square root of 7 can be constructed via a sequence of dynamic rectangles, that is, as the largest diagonal of those rectangles illustrated here.
The minimal enclosing rectangle of an equilateral triangle of edge length 2 has a diagonal of the square root of 7.
Due to the Pythagorean theorem and Legendre's three-square theorem, √7 is the smallest square root of a natural number that cannot be the distance between any two points of a cubic integer lattice (or equivalently, the length of the space diagonal of a rectangular cuboid with integer side lengths). √15 is the next smallest such number.
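For the enclosing-rectangle claim above, the arithmetic is immediate: an equilateral triangle of edge length 2 has height √3, so its minimal enclosing rectangle measures 2 by √3, and by the Pythagorean theorem its diagonal is
√(2^2 + (√3)^2) = √(4 + 3) = √7.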
Outside of mathematics
On the reverse of the current US one-dollar bill, the "large inner box" has a length-to-width ratio of the square root of 7, and a diagonal of 6.0 inches, to within measurement accuracy.
See also
Square root
Square root of 2
Square root of 3
Square root of 5
Square root of 6
References
Mathematical constants
Quadratic irrational numbers | Square root of 7 | [
"Mathematics"
] | 827 | [
"nan",
"Mathematical objects",
"Numbers",
"Mathematical constants"
] |
70,396,976 | https://en.wikipedia.org/wiki/Fusarubin | Fusarubin is a naphthoquinone derived mycotoxin which is produced by the fungi Fusarium solani. Fusarubin has the molecular formula C15H14O7.
References
Further reading
Mycotoxins
1,4-Naphthoquinones
Methoxy compounds
Triols | Fusarubin | [
"Chemistry"
] | 67 | [] |
70,397,447 | https://en.wikipedia.org/wiki/U%20band | The U band is a range of frequencies contained in the microwave region of the electromagnetic spectrum. Common usage places this range between 40 and 60 GHz, but may vary depending on the source using the term.
References
Microwave bands
Satellite broadcasting | U band | [
"Engineering"
] | 47 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
70,397,574 | https://en.wikipedia.org/wiki/Medipines | Medipines is an American medical equipment maker which is based in Orange County, California, United States. It is known for its device, the AGM100, which is approved for respiratory diagnosis and identifying the symptoms of COVID-19 using parameters such as oxygen saturation levels. The device is being used in Canada and the U.S.
History
MediPines was incorporated in 2013. It is known for its device, the AGM100, which provides non-invasive pulmonary gas exchange measurements in a short period of time. Approved by the FDA, the device was developed in California and has been tested at the University of British Columbia. It is in use in Canadian hospitals.
In July 2020, it received the National Consortium for Pediatric Device award for developing a monitor device that displays a critical analysis of patients' breathing samples.
In August 2021, the AGM100 was included in the WHO Compendium.
References
External links
Medical devices
American companies established in 2013
Companies based in Orange County, California | Medipines | [
"Biology"
] | 200 | [
"Medical devices",
"Medical technology"
] |
61,019,171 | https://en.wikipedia.org/wiki/Hybrid%20pixel%20detector | Hybrid pixel detectors are a type of ionizing radiation detector consisting of an array of diodes based on semiconductor technology and their associated electronics. The term “hybrid” stems from the fact that the two main elements from which these devices are built, the semiconductor sensor and the readout chip (also known as application-specific integrated circuit or ASIC), are manufactured independently and later electrically coupled by means of a bump-bonding process. Ionizing particles are detected as they produce electron-hole pairs through their interaction with the sensor element, usually made of doped silicon or cadmium telluride. The readout ASIC is segmented into pixels containing the necessary electronics to amplify and measure the electrical signals induced by the incoming particles in the sensor layer.
Hybrid pixel detectors made to operate in single-photon mode are known as Hybrid Photon Counting Detectors (HPCDs). These detectors are designed to count the number of hits within a certain time interval. They have become a standard in most synchrotron light sources and X-ray detection applications.
History
The first hybrid pixel detectors were developed in the 1980s and ‘90s for high energy particle physics experiments at CERN. Since then, many large collaborations have continued to develop and implement these detectors into their systems, such as the ATLAS, CMS and ALICE experiments at the Large Hadron Collider. Using silicon pixel detectors as part of their inner tracking systems, these experiments are able to determine the trajectory of particles produced during the high-energy collisions that they study.
The key innovation for the construction of such large area pixel detectors was the separation of the sensor and the electronics into independent layers. Given that particle sensors require high resistivity silicon, while the readout electronics requires low resistivity, the introduction of the hybrid design allowed to optimize each element individually and later couple them together through a bump-bonding process involving microscopic spot soldering.
It was soon realized that the same hybrid technology could be used for the detection of X-ray photons. By the end of the 1990s the first hybrid photon counting (HPC) detectors developed by CERN and PSI were tested with synchrotron radiation. Further developments at CERN resulted in the creation of the Medipix chip and its variations.
The first large-area HPC detector was built in 2003 at PSI based on the PILATUS readout chip. The second generation of this detector, with improved readout electronics and smaller pixels, became the first HPC detector to operate routinely at a synchrotron.
In 2006, the company DECTRIS was founded as a spin-off from PSI and successfully commercialized the PILATUS technology. Since then, detectors based on the PILATUS and EIGER systems have been widely used for small-angle scattering, coherent scattering, X-ray powder diffraction and spectroscopy applications. The main reasons for the success of HPC detectors are the direct detection of individual photons and the accurate determination of scattering and diffraction intensities over a wide dynamic range.
See also
Semiconductor detector
Microstrip detector
Medipix
PILATUS (detector)
References
Particle detectors
Ionising radiation detectors
CERN | Hybrid pixel detector | [
"Technology",
"Engineering"
] | 640 | [
"Ionising radiation detectors",
"Radioactive contamination",
"Particle detectors",
"Measuring instruments"
] |
61,020,200 | https://en.wikipedia.org/wiki/Canite | Canite, also known as caneboard, pinboard or softboard, is a low-density fibreboard panel made from sugar cane fibres. It is easy to handle, lightweight and relatively durable. Because of its low environmental footprint it is considered a sustainable building product. It can be used without finish, painted, or rendered with natural lime-based products. It is commonly used for
Interior wall and ceiling lining
Pin boards and bulletin boards
Office partitions
Protective covering boards
Sound insulation and reflected sound reduction
Door fillings
Stucco base
Soundproofing under floorboards
Fire lighter (when saturated with kerosene)
In Australia, canite is commonly sold in 2400 x 1200 mm panels. They are typically 10–13 mm thick, with a density of 350 kg/m3.
References
Building materials | Canite | [
"Physics",
"Engineering"
] | 160 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
61,023,018 | https://en.wikipedia.org/wiki/Crack%20growth%20equation | A crack growth equation is used for calculating the size of a fatigue crack growing from cyclic loads. The growth of a fatigue crack can result in catastrophic failure, particularly in the case of aircraft. When many growing fatigue cracks interact with one another it is known as widespread fatigue damage. A crack growth equation can be used to ensure safety, both in the design phase and during operation, by predicting the size of cracks. In critical structure, loads can be recorded and used to predict the size of cracks to ensure maintenance or retirement occurs prior to any of the cracks failing. Safety factors are used to reduce the predicted fatigue life to a service fatigue life because of the sensitivity of the fatigue life to the size and shape of crack initiating defects and the variability between assumed loading and actual loading experienced by a component.
Fatigue life can be divided into an initiation period and a crack growth period. Crack growth equations are used to predict the crack size starting from a given initial flaw and are typically based on experimental data obtained from constant amplitude fatigue tests.
One of the earliest crack growth equations based on the stress intensity factor range of a load cycle (ΔK) is the Paris–Erdogan equation

da/dN = C (ΔK)^m

where a is the crack length, da/dN is the fatigue crack growth for a single load cycle N, and C and m are material parameters. A variety of crack growth equations similar to the Paris–Erdogan equation have been developed to include factors that affect the crack growth rate such as stress ratio, overloads and load history effects.
The stress intensity range can be calculated from the maximum and minimum stress intensity for a cycle

ΔK = K_max − K_min

A geometry factor β is used to relate the far field stress σ to the crack tip stress intensity using

K = β σ √(πa).
There are standard references containing the geometry factors for many different configurations.
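A minimal sketch of these two relations (β = 1 is an assumed illustrative geometry factor; with stresses in MPa and lengths in metres, the results carry units of MPa·√m):

```python
import math

def stress_intensity(sigma, a, beta=1.0):
    # K = beta * sigma * sqrt(pi * a)
    return beta * sigma * math.sqrt(math.pi * a)

def stress_intensity_range(sigma_max, sigma_min, a, beta=1.0):
    # Delta K = K_max - K_min for one load cycle.
    return stress_intensity(sigma_max, a, beta) - stress_intensity(sigma_min, a, beta)

# Example: a 1 mm crack cycled between 20 and 200 MPa.
print(stress_intensity_range(200.0, 20.0, 1e-3))   # ~10.1 MPa*sqrt(m)
```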
History of crack propagation equations
Many crack propagation equations have been proposed over the years to improve prediction accuracy and incorporate a variety of effects. The works of Head, Frost and Dugdale, McEvily and Illg, and Liu on fatigue crack-growth behaviour laid the foundation in this topic. The general form of these crack propagation equations may be expressed as

da/dN = f(Δσ, a, C_i)

where the crack length is denoted by a, the number of cycles of load applied is given by N, the stress range by Δσ, and the material parameters by C_i. For symmetrical configurations, the length of the crack from the line of symmetry is defined as a and is half of the total crack length 2a.
Crack growth equations of the form da/dN = f(Δσ, a, C_i) are not true differential equations as they do not model the process of crack growth in a continuous manner throughout the loading cycle. As such, separate cycle counting or identification algorithms such as the commonly used rainflow-counting algorithm, are required to identify the maximum and minimum values in a cycle. Although developed for the stress/strain-life methods rainflow counting has also been shown to work for crack growth. There have been a small number of true derivative fatigue crack growth equations that have also been developed.
Factors affecting crack growth rate
Regimes
Figure 1 shows a typical plot of the rate of crack growth as a function of the alternating stress intensity or crack tip driving force plotted on log scales. The crack growth rate behaviour with respect to the alternating stress intensity can be explained in different regimes (see, figure 1) as follows
Regime A: At low growth rates, variations in microstructure, mean stress (or load ratio), and environment have significant effects on the crack propagation rates. It is observed at low load ratios that the growth rate is most sensitive to microstructure and in low strength materials it is most sensitive to load ratio.
Regime B: At mid-range of growth rates, variations in microstructure, mean stress (or load ratio), thickness, and environment have no significant effects on the crack propagation rates.
Regime C: At high growth rates, crack propagation is highly sensitive to the variations in microstructure, mean stress (or load ratio), and thickness. Environmental effects have relatively very less influence.
Stress ratio effect
Cycles with higher stress ratio have an increased rate of crack growth. This effect is often explained using the crack closure concept which describes the observation that the crack faces can remain in contact with each other at loads above zero. This reduces the effective stress intensity factor range and the fatigue crack growth rate.
Sequence effects
A crack growth equation gives the rate of growth for a single cycle, but when the loading is not constant amplitude, changes in the loading can lead to temporary increases or decreases in the rate of growth. Additional equations have been developed to deal with some of these cases. The rate of growth is retarded when an overload occurs in a loading sequence. These loads generate a plastic zone that may delay the rate of growth. Two notable equations for modelling the delays occurring while the crack grows through the overload region are:
The Wheeler model (1972)

(da/dN)_retarded = Φ_i (da/dN), with Φ_i = (r_{p,i}/λ)^γ

where r_{p,i} is the plastic zone corresponding to the ith cycle that occurs after the overload and λ is the distance between the crack and the extent of the plastic zone at the overload.
The Willenborg model
Crack growth equations
Threshold equation
To predict the crack growth rate at the near threshold region, the following relation has been used
Paris–Erdoğan equation
To predict the crack growth rate in the intermediate regime, the Paris–Erdoğan equation is used

da/dN = C (ΔK)^m
Forman equation
In 1967, Forman proposed the following relation to account for the increased growth rates due to stress ratio and when approaching the fracture toughness K_c

da/dN = C (ΔK)^n / [(1 − R) K_c − ΔK]
McEvily–Groeger equation
McEvily and Groeger proposed the following power-law relationship which considers the effects of both high and low values of ΔK

da/dN = A (ΔK − ΔK_th)^2 [1 + ΔK/(K_Ic − K_max)].
NASGRO equation
The NASGRO equation is used in the crack growth programs AFGROW, FASTRAN and NASGRO software. It is a general equation that covers the lower growth rate near the threshold ΔK_th and the increased growth rate approaching the fracture toughness K_crit, as well as allowing for the mean stress effect by including the stress ratio R. The NASGRO equation is

da/dN = C [(1 − f)/(1 − R) ΔK]^n (1 − ΔK_th/ΔK)^p / (1 − K_max/K_crit)^q

where C, f, n, p, q, ΔK_th and K_crit are the equation coefficients.
McClintock equation
In 1967, McClintock developed an equation for the upper limit of crack growth based on the cyclic crack tip opening displacement

da/dN = β (ΔK)^2 / (2 σ_0 E)

where σ_0 is the flow stress, E is the Young's modulus and β is a constant typically in the range 0.1–0.5.
Walker equation
To account for the stress ratio effect, Walker suggested a modified form of the Paris–Erdogan equation

da/dN = C (ΔK̄)^m, with ΔK̄ = K_max (1 − R)^γ = ΔK (1 − R)^(γ−1)

where γ is a material parameter which represents the influence of stress ratio on the fatigue crack growth rate. Typically, γ takes a value around 0.5, but can vary between 0.3 and 1. In general, it is assumed that the compressive portion of the loading cycle has no effect on the crack growth by considering γ = 0 for R < 0, which gives ΔK̄ = K_max. This can be physically explained by considering that the crack closes at zero load and does not behave like a crack under compressive loads. In very ductile materials like Man-Ten steel, compressive loading does contribute to the crack growth.
Elber equation
Elber modified the Paris–Erdogan equation to allow for crack closure with the introduction of the opening stress intensity level K_op at which contact occurs. Below this level there is no movement at the crack tip and hence no growth. This effect has been used to explain the stress ratio effect and the increased rate of growth observed with short cracks. Elber's equation is

da/dN = C (ΔK_eff)^m, with ΔK_eff = K_max − K_op
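A small sketch combining the two stress-ratio corrections above (symbols follow the text; the numerical values in the demo are placeholders):

```python
def walker_delta_K(delta_K, R, gamma=0.5):
    # Walker effective stress intensity range:
    # dK_bar = dK * (1 - R)**(gamma - 1); gamma = 1 recovers plain dK.
    return delta_K * (1.0 - R) ** (gamma - 1.0)

def elber_delta_K(K_max, K_op):
    # Elber effective range: only the part of the cycle above the
    # crack-opening level K_op drives growth.
    return max(K_max - K_op, 0.0)

# Demo with placeholder values (MPa*sqrt(m)):
print(walker_delta_K(10.0, R=0.5))    # > 10, reflecting faster growth at high R
print(elber_delta_K(12.0, K_op=4.0))  # 8.0
```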
Ductile and brittle materials equation
The general form of the fatigue-crack growth rate in ductile and brittle materials is given by

da/dN = C (K_max)^n (ΔK)^p

where C, n and p are material parameters. Based on different crack-advance and crack-tip shielding mechanisms in metals, ceramics, and intermetallics, it is observed that the fatigue crack growth rate in metals is significantly dependent on the ΔK term, in ceramics on the K_max term, and intermetallics have almost similar dependence on the K_max and ΔK terms.
Prediction of fatigue life
Computer programs
There are many computer programs that implement crack growth equations such as Nasgro, AFGROW and Fastran. In addition, there are also programs that implement a probabilistic approach to crack growth that calculate the probability of failure throughout the life of a component.
Crack growth programs grow a crack from an initial flaw size until it exceeds the fracture toughness of a material and fails. Because the fracture toughness depends on the boundary conditions, the fracture toughness may change from plane strain conditions for a semi-circular surface crack to plane stress conditions for a through crack. The fracture toughness for plane stress conditions is typically twice as large as that for plane strain. However, because of the rapid rate of growth of a crack near the end of its life, variations in fracture toughness do not significantly alter the life of a component.
Crack growth programs typically provide a choice of:
cycle counting methods to extract cycle extremes
geometry factors that select for the shape of the crack and the applied loading
crack growth equation
acceleration/retardation models
material properties such as yield strength and fracture toughness
Analytical solution
The stress intensity factor is given by

K = β σ √(πa)

where σ is the applied uniform tensile stress acting on the specimen in the direction perpendicular to the crack plane, a is the crack length and β is a dimensionless parameter that depends on the geometry of the specimen. The alternating stress intensity becomes

ΔK = β Δσ √(πa)

where Δσ is the range of the cyclic stress amplitude.
By assuming the initial crack size to be a_0, the critical crack size a_c before the specimen fails can be computed using K_max = K_Ic as

K_Ic = β σ_max √(π a_c)

The above equation in a_c is implicit in nature (β may itself depend on a_c) and can be solved numerically if necessary.
Case I
Where crack closure has a negligible effect on the crack growth rate, the Paris–Erdogan equation can be used to compute the fatigue life of a specimen before it reaches the critical crack size a_c as

N_f = ∫_{a_0}^{a_c} da / [C (β Δσ √(πa))^m]
Crack growth model with constant value of 𝛽 and R = 0
For the Griffith-Irwin crack growth model or center crack of length 2a in an infinite sheet as shown in the figure 2, we have β = 1, independent of the crack length; Δσ can also be considered to be independent of the crack length. By assuming β = constant, the above integral simplifies to

N_f = 1/[C β^m (Δσ)^m π^(m/2)] ∫_{a_0}^{a_c} a^(−m/2) da

by integrating the above expression for the m ≠ 2 and m = 2 cases, the total number of load cycles are given by

N_f = 2 (a_0^(1−m/2) − a_c^(1−m/2)) / [(m − 2) C β^m (Δσ)^m π^(m/2)]   for m ≠ 2
N_f = ln(a_c/a_0) / [C β^2 (Δσ)^2 π]   for m = 2

Now, for m > 2 and a critical crack size very large in comparison to the initial crack size, this reduces to

N_f = 2 a_0^(1−m/2) / [(m − 2) C β^m (Δσ)^m π^(m/2)]
The above analytical expressions for the total number of load cycles to fracture are obtained by assuming β = constant. For the cases where β is dependent on the crack size, such as the Single Edge Notch Tension (SENT) and Center Cracked Tension (CCT) geometries, numerical integration can be used to compute N_f.
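A sketch of the constant-β closed-form result above (the Paris constants and stresses in the demo are placeholders, not values from any reference):

```python
import math

def paris_life(a0, ac, C, m, dsigma, beta=1.0):
    """Cycles to grow a crack from a0 to ac under the Paris-Erdogan law
    with a crack-length-independent geometry factor beta."""
    k = C * (beta * dsigma) ** m * math.pi ** (m / 2)
    if m == 2:
        return math.log(ac / a0) / k
    return 2.0 * (a0 ** (1 - m / 2) - ac ** (1 - m / 2)) / ((m - 2) * k)

# Placeholder demo: a 1 mm crack grown to 28.6 mm with C = 1e-12, m = 3 and
# a 180 MPa stress range -> on the order of 1.6 million cycles.
print(paris_life(1e-3, 2.86e-2, 1e-12, 3.0, 180.0))
```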
Case II
Where the crack closure phenomenon has an effect on the crack growth rate, we can invoke the Walker equation to compute the fatigue life of a specimen before it reaches the critical crack size a_c as

N_f = ∫_{a_0}^{a_c} da / [C (ΔK (1 − R)^(γ−1))^m]
Numerical calculation
This scheme is useful when β is dependent on the crack size a. The initial crack size is considered to be a_0. The stress intensity factor at the current crack size a_i is computed using the maximum applied stress as

K_max = β(a_i) σ_max √(π a_i)

If K_max is less than the fracture toughness K_Ic, the crack has not reached its critical size and the simulation is continued with the current crack size to calculate the alternating stress intensity as

ΔK = β(a_i) Δσ √(π a_i)

Now, by substituting the stress intensity factor in the Paris–Erdogan equation, the increment in the crack size is computed as

Δa_i = C (ΔK)^m ΔN

where ΔN is the cycle step size. The new crack size becomes

a_{i+1} = a_i + Δa_i

where index i refers to the current iteration step. The new crack size is used to calculate the stress intensity at maximum applied stress for the next iteration. This iterative process is continued until

K_max ≥ K_Ic
Once this failure criterion is met, the simulation is stopped.
The schematic representation of the fatigue life prediction process is shown in figure 3.
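A compact sketch of this cycle-by-cycle scheme with a constant geometry factor (all material constants below are placeholders chosen only so the example runs, not data from the article):

```python
import math

def simulate_crack_growth(a0, sigma_max, sigma_min, C, m, K_Ic,
                          beta=1.0, dN=1000, max_cycles=10**9):
    """Grow a crack by the Paris-Erdogan law until K_max >= K_Ic.
    Returns (critical crack size, cycles to failure)."""
    a, N = a0, 0
    while N < max_cycles:
        K_max = beta * sigma_max * math.sqrt(math.pi * a)
        if K_max >= K_Ic:                 # failure criterion
            return a, N
        dK = beta * (sigma_max - sigma_min) * math.sqrt(math.pi * a)
        a += C * dK ** m * dN             # crack increment over dN cycles
        N += dN
    raise RuntimeError("no failure within max_cycles")

# Placeholder values: stresses in MPa, lengths in m, K_Ic in MPa*sqrt(m).
print(simulate_crack_growth(a0=1e-3, sigma_max=200.0, sigma_min=20.0,
                            C=1e-12, m=3.0, K_Ic=60.0))
```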
Example
The stress intensity factor in a SENT specimen (see, figure 4) under fatigue crack growth is given by
The following parameters are considered for the calculation
mm, mm, mm, , ,
MPa,, .
The critical crack length, a_c, can be computed when K_max = K_Ic as
By solving the above equation, the critical crack length is obtained as .
Now, invoking the Paris–Erdogan equation gives
By numerical integration of the above expression, the total number of load cycles to failure is obtained as .
References
External links
Materials science
Fracture mechanics
Mechanical failure
Mechanical failure modes
Solid mechanics
Structural analysis | Crack growth equation | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 2,409 | [
"Structural engineering",
"Solid mechanics",
"Mechanical failure modes",
"Applied and interdisciplinary physics",
"Fracture mechanics",
"Aerospace engineering",
"Structural analysis",
"Technological failures",
"Materials science",
"Mechanics",
"nan",
"Mechanical engineering",
"Materials degr... |
61,024,303 | https://en.wikipedia.org/wiki/C25H25NO2 | {{DISPLAYTITLE:C25H25NO2}}
The molecular formula C25H25NO2 (molar mass: 371.47 g/mol, exact mass: 371.1885 u) may refer to:
JWH-081
JWH-164
Molecular formulas | C25H25NO2 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,024,315 | https://en.wikipedia.org/wiki/C21H23NO | {{DISPLAYTITLE:C21H23NO}}
The molecular formula C21H23NO (molar mass: 305.41 g/mol, exact mass: 305.1780 u) may refer to:
Dapoxetine
Indapyrophenidone
JWH-167 (1-pentyl-3-(phenylacetyl)indole)
Molecular formulas | C21H23NO | [
"Physics",
"Chemistry"
] | 87 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,025,200 | https://en.wikipedia.org/wiki/Critical%20green%20inclusion | Critical green inclusions, also known as green neutrophilic inclusions and informally, death crystals or crystals of death, are amorphous blue-green cytoplasmic inclusions found in neutrophils and occasionally in monocytes. They appear brightly coloured and refractile when stained with Wright-Giemsa stain. These inclusions are most commonly found in critically ill patients, particularly those with liver disease, and their presence on the peripheral blood smear is associated with a high short-term mortality rate.
Clinical significance
Critical green inclusions are a rare finding, and when found they are suggestive of a poor prognosis, hence the colloquial term death crystals. A 2018 review found that 56% of patients died shortly after the inclusions were first identified (usually within two weeks). However, critical green inclusions are of limited utility for predicting mortality because they are usually found in severely ill patients whose poor prognosis is already evident for other reasons by the time the crystals are detected.
The inclusions were once hypothesized to be bile products phagocytized during fulminant hepatic injury, due to the high incidence of critical green inclusions observed in cases of acute hypoxic and ischaemic hepatitis. However, recent studies have highlighted that the inclusions stain positive for Oil Red O as opposed to bile stains, suggesting high lipid content. Additionally, some cases with critical green inclusions were not associated with notable hepatic injury. Currently, it is suggested that critical green inclusions are more likely to be phagocytized products of lysosomal degradation related to tissue injury.
Composition
The composition of the inclusions is not well understood, but transmission electron microscopy has shown that they are rich in lipids and possibly related to lipofuscin. Microscopic examination of liver tissue in patients with critical green inclusions has demonstrated prominent deposition of lipofuscin, suggesting that the white blood cell inclusions represent phagocytosis of this substance following severe injury to the liver.
References
External links
Images of critical green inclusions at the American Society of Hematology image bank
Histopathology
Hematology
Hematopathology
Abnormal clinical and laboratory findings for blood | Critical green inclusion | [
"Chemistry"
] | 458 | [
"Histopathology",
"Microscopy"
] |
57,936,744 | https://en.wikipedia.org/wiki/Butroxydim | Butroxydim is a chemical used as a herbicide. It is a group A herbicide used to kill grass weeds in a range of broadacre crops. Structurally related herbicides against grasses are alloxydim, sethoxydim, clethodim, and cycloxydim.
References
Ketones
Ketoxime ethers
Herbicides
Ethoxy compounds | Butroxydim | [
"Chemistry",
"Biology"
] | 80 | [
"Ketones",
"Herbicides",
"Biocides",
"Functional groups"
] |
57,939,963 | https://en.wikipedia.org/wiki/Random%20column%20packing | Random column packing is the practice of packing a distillation column with randomly fitting filtration material in order to optimize surface area over which reactants can interact while minimizing the complexity of construction of such columns. Random column packing is an alternative to structured column packing.
Packed columns
Packed columns utilizing filter media for chemical exchange are the most common devices used in the chemical industry for reactant contact optimization. Packed columns are used in a range of industries to allow intimate contact between two immiscible/partly immiscible fluids, which can be liquid/gas or liquid/liquid. The fluids are passed through a column in a countercurrent flow.
In the column it is important to maintain an effective mass transfer, so it is essential that a packing is selected which will support a large surface area for mass transfer.
History
Random packing was used as early as 1820. Originally the packing material consisted of glass spheres, however in 1850 they were replaced by a more porous pumice stone and pieces of coke.
Applications
Random packed columns are used in a variety of applications, including:
Distillation
Stripping
Carbon dioxide scrubbing
Liquid–liquid extraction
Types
Raschig ring
The Raschig ring is a piece of tube, invented circa 1914, that is used in large numbers in a packing column. Raschig rings are usually made of ceramic or metals, and they provide a large surface area within the column, allowing for interaction between liquid and gas vapors.
Lessing ring
Lessing rings are a type of random packing similar to the Raschig ring invented in the early 20th century by German-born British chemist Rudolf Lessing (1878-1964) of Mond Nickel Company. Originally wrapped from steel strips according to his 1919 patent, they are now made of ceramic. Lessing rings have partitions inside which increase the surface area and enhance mass transfer efficiency. Lessing rings have a high density and an excellent heat and acid resistance. Lessing rings withstand corrosion and are used in regenerative oxide systems and transfer systems.
Pall ring
Pall rings are the most common form of random packing. They are similar to Lessing rings and were developed from the Raschig ring. Pall rings have similar cylindrical dimensions but has rows of windows which increase performance by increasing the surface area. They are suited for low pressure drop and high capacity applications. They have a degree of randomness and a relatively high liquid hold up, promoting a high absorption, especially when the rate of reaction is slow. The cross structure of the Pall ring makes it mechanically robust and suitable for use in deep packed beds.
Białecki ring
The Białecki ring was patented in 1974 by Zbigniew Białecki, a Polish chemical engineer from Kraków. Białecki rings are an improved version of Raschig rings. The rings may be injection moulded of plastics or press-formed from metal sheet without welding. The specific surface area of the filling ranges between 60 and 440 m2/m3.
Dixon ring
Dixon rings have a similar design to Lessing rings. They are made of stainless steel mesh, giving Dixon rings a low pressure drop after pre-wetting. Dixon rings have a very large surface area and a large liquid hold-up, which give a high rate of mass transfer. Dixon rings are used for laboratory distillation and scrubbing applications.
References
Chemical process engineering | Random column packing | [
"Chemistry",
"Engineering"
] | 709 | [
"Chemical process engineering",
"Chemical engineering"
] |
64,514,125 | https://en.wikipedia.org/wiki/Mixture%20fraction | Mixture fraction (Z) is a quantity used in combustion studies that measures the mass fraction of one stream of a mixture formed by two feed streams, one the fuel stream and the other the oxidizer stream. Both the feed streams are allowed to have inert gases. The mixture fraction definition is usually normalized such that it approaches unity in the fuel stream and zero in the oxidizer stream. The mixture-fraction variable is commonly used as a replacement for the physical coordinate normal to the flame surface, in nonpremixed combustion.
Definition
Assume a two-stream problem having one portion of the boundary as the fuel stream with fuel mass fraction Y_{F,1} and another portion of the boundary as the oxidizer stream with oxidizer mass fraction Y_{O2,2}. For example, if the oxidizer stream is air and the fuel stream contains only the fuel, then Y_{F,1} = 1 and Y_{O2,2} = 0.232. In addition, assume there is no oxygen in the fuel stream and there is no fuel in the oxidizer stream. Let s be the mass of oxygen required to burn a unit mass of fuel (for hydrogen gas, s = 8; for alkanes, s ranges from 4 for methane down to about 3.4 for long chains). Introduce the scaled mass fractions as y_F = Y_F/Y_{F,1} and y_O = Y_{O2}/Y_{O2,2}. Then the mixture fraction is defined as

Z = (S y_F − y_O + 1) / (S + 1)
where

S = s Y_{F,1} / Y_{O2,2}

is the stoichiometry parameter, also known as the overall equivalence ratio. On the fuel-stream boundary, y_F = 1 and, since there is no oxygen in the fuel stream, y_O = 0, hence Z = 1. Similarly, on the oxidizer-stream boundary, y_F = 0 and y_O = 1, so that Z = 0. Anywhere else in the mixing domain, 0 < Z < 1. The mixture fraction is a function of both the spatial coordinates x and the time t, i.e., Z = Z(x, t).
Within the mixing domain, there are level surfaces where fuel and oxygen are found mixed in stoichiometric proportion. This surface is special in combustion because it is where a diffusion flame resides. The constant level of this surface is identified from the equation $Z(\mathbf{x},t)=Z_{st}$, where $Z_{st}$ is called the stoichiometric mixture fraction. It is obtained by setting $\tilde Y_F=\tilde Y_{O_2}=0$ in the definition of $Z$ (if fuel and oxygen were to react, both would be consumed completely only at the stoichiometric locations) to obtain

$$Z_{st} = \frac{1}{S+1}.$$
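For example, a short sketch (not from the source; the methane value $s=4$ follows from CH4 + 2 O2 → CO2 + 2 H2O, i.e. 64 g of O2 per 16 g of CH4) evaluates $Z$ and $Z_{st}$ for a methane–air system:

```python
# Minimal sketch (assumed example, not from the source): mixture fraction and
# its stoichiometric value for methane-air, with Y_F1 = 1 (pure fuel stream)
# and Y_O2_2 = 0.232 (air), as in the example above.

def mixture_fraction(Y_F, Y_O2, Y_F1=1.0, Y_O2_2=0.232, s=4.0):
    """Mixture fraction Z = (S*yF - yO2 + 1) / (S + 1)."""
    S = s * Y_F1 / Y_O2_2                 # stoichiometry parameter
    yF, yO2 = Y_F / Y_F1, Y_O2 / Y_O2_2   # scaled mass fractions
    return (S * yF - yO2 + 1.0) / (S + 1.0)

S = 4.0 * 1.0 / 0.232                 # ~17.2 for methane-air
Z_st = 1.0 / (S + 1.0)                # stoichiometric mixture fraction, ~0.055
print(mixture_fraction(1.0, 0.0))     # fuel stream     -> 1.0
print(mixture_fraction(0.0, 0.232))   # oxidizer stream -> 0.0
print(Z_st)
```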
Relation between local equivalence ratio and mixture fraction
When there is no chemical reaction, or when considering the unburnt side of the flame, the mass fractions of fuel and oxidizer are $Y_{F,u}$ and $Y_{O_2,u}$ (the subscript $u$ denotes the unburnt mixture). This allows one to define a local fuel–air equivalence ratio

$$\phi = \frac{s\,Y_{F,u}}{Y_{O_2,u}}.$$

The local equivalence ratio is an important quantity for partially premixed combustion. The relation between the local equivalence ratio and the mixture fraction is given by

$$Z = \frac{\phi}{\phi+S}, \qquad \phi = \frac{S\,Z}{1-Z}.$$

The stoichiometric mixture fraction $Z_{st}$ defined earlier is the location where the local equivalence ratio $\phi=1$.
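A hypothetical helper pair (consistent with the relation above, but not from the source) converts between the two variables:

```python
# Sketch (assumed helpers): converting between the local equivalence ratio phi
# and the mixture fraction Z for a given stoichiometry parameter S.

def Z_from_phi(phi, S):
    return phi / (phi + S)

def phi_from_Z(Z, S):
    return S * Z / (1.0 - Z)

S = 4.0 / 0.232                 # methane-air, as in the sketch above
print(Z_from_phi(1.0, S))       # phi = 1 recovers Z_st = 1/(S+1)
print(phi_from_Z(0.2, S))       # a fuel-rich sample: phi ~ 4.3
```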
Scalar dissipation rate
In turbulent combustion, a quantity called the scalar dissipation rate, $\chi$, with dimensional units of an inverse time, is used to define a characteristic diffusion time. Its definition is given by

$$\chi = 2D\,|\nabla Z|^2,$$

where $D$ is the diffusion coefficient of the scalar. Its stoichiometric value $\chi_{st}$ is the value of $\chi$ evaluated on the surface $Z=Z_{st}$.
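A numerical sketch (not from the source; the diffusivity, layer thickness, and error-function profile are assumptions) evaluates $\chi$ on a one-dimensional mixing layer:

```python
# Sketch (assumed values): scalar dissipation rate chi = 2*D*|grad Z|^2 for a
# one-dimensional mixing-layer profile Z(x) = 0.5*erfc(x/delta).
import numpy as np
from scipy.special import erfc

D = 1.5e-5          # assumed diffusivity, m^2/s
delta = 1e-3        # assumed mixing-layer thickness, m
x = np.linspace(-5e-3, 5e-3, 1001)
Z = 0.5 * erfc(x / delta)

dZdx = np.gradient(Z, x)
chi = 2.0 * D * dZdx**2      # units: 1/s
print(chi.max())             # peaks at the center of the layer
```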
Liñán's mixture fraction
Amable Liñán introduced a modified mixture fraction in 1991 that is appropriate for systems in which the fuel and oxidizer have different Lewis numbers. If $L_F$ and $L_{O_2}$ are the Lewis numbers of the fuel and oxidizer, respectively, then Liñán's mixture fraction is defined as

$$\tilde Z = \frac{\tilde S\tilde Y_F - \tilde Y_{O_2} + 1}{\tilde S + 1},$$

where

$$\tilde S = \frac{S\,L_{O_2}}{L_F}.$$

The stoichiometric mixture fraction is given by

$$\tilde Z_{st} = \frac{1}{\tilde S+1}.$$
References
Fluid dynamics
Combustion | Mixture fraction | [
"Chemistry",
"Engineering"
] | 664 | [
"Piping",
"Chemical engineering",
"Combustion",
"Fluid dynamics"
] |
63,160,697 | https://en.wikipedia.org/wiki/Uprifosbuvir | Uprifosbuvir (MK-3682) is an antiviral drug developed for the treatment of hepatitis C. It is a nucleotide analogue which acts as an NS5B RNA polymerase inhibitor, and it reached Phase III human clinical trials.
In 2017 owner Merck wrote down the value of uprifosbuvir to US$240 million, for a write-down of $2.9 billion, reducing its earnings per share from 42¢ to a loss of 22¢ for the fourth quarter of 2016. This was attributed to the hepatitis C drug market rather than uprifosbuvir itself; the population of treatable patients diminished rapidly after the introduction in 2014 of sofosbuvir and the combination ledipasvir/sofosbuvir, drugs that cured hepatitis C, and whose market was also diminishing following their success in curing patients. Clinical testing of uprifosbuvir continued.
References
Anti–RNA virus drugs
Antiviral drugs | Uprifosbuvir | [
"Biology"
] | 207 | [
"Antiviral drugs",
"Biocides"
] |
63,164,704 | https://en.wikipedia.org/wiki/Nathaniel%20J.%20Fisch | Nathaniel Joseph Fisch is an American plasma physicist known for pioneering the excitation of electric currents in plasmas using electromagnetic waves, which was then used in tokamak experiments. This contributed to an increased understanding of plasma wave–particle interactions in the field for which he was awarded the James Clerk Maxwell Prize for Plasma Physics in 2005 and the Hannes Alfvén Prize in 2015.
Fisch's research also involves inertial fusion, as well as methods to generate intense laser fields to accelerate particles, such as the ones used in plasma thrusters. He is also known to have worked on the hydrodynamics of charged liquids, petroleum refinement, and pattern recognition.
Early life and career
Fisch studied at the Massachusetts Institute of Technology (as MIT National Scholar 1968 to 1972), where he received his bachelor's and master's degree in 1972 and 1975 respectively, and received his doctorate in computer science and electrical engineering in 1978. From 1978, he was a scientist in the plasma physics laboratory at Princeton University, where he has been a professor in the Faculty of Astrophysics since 1991 (also associated with the Faculty of Mechanics and Flight Engineering since 2000) and heads the University's Plasma Physics Program. In 1986, he was a visiting scientist at IBM's Thomas J. Watson Research Center. From 1981 to 1986, he was a consultant at Exxon Research.
Honors and awards
Fisch was awarded the Guggenheim Fellowship in 1985. He was then elected a fellow of the American Physical Society in 1987, and was subsequently awarded the John Dawson Award for Excellence in Plasma Physics Research in 1992 for fundamental theoretical work on non-inductive power generation in toroidally enclosed plasmas.
In 2004, he received the Ernest Orlando Lawrence Award.
In 2005, he received the James Clerk Maxwell Prize for Plasma Physics for "theoretical development of efficient radio frequency (RF)-driven current in plasmas and for greatly expanding our ability to understand, to analyze, and to utilize wave–plasma interactions."
In 2015, he was awarded the Hannes Alfvén Prize from the European Physical Society for "his contributions to the understanding of plasma wave–particle interactions and their applications to efficiently driving currents with radio-frequency waves."
References
American plasma physicists
Fellows of the American Physical Society
1950 births
Living people
MIT School of Engineering alumni
Plasma physicists | Nathaniel J. Fisch | [
"Physics"
] | 472 | [
"Plasma physicists",
"Plasma physics"
] |
53,374,506 | https://en.wikipedia.org/wiki/Sigma-bond%20metathesis | In organometallic chemistry, sigma-bond metathesis is a chemical reaction wherein a metal-ligand sigma bond undergoes metathesis (exchange of parts) with the sigma bond in some reagent. The reaction is illustrated by the exchange of lutetium(III) methyl complex with a hydrocarbon (R-H):
(C5Me5)2Lu-CH3 + R-H → (C5Me5)2Lu-R + CH4
This reactivity was first observed by Patricia Watson, a researcher at DuPont.
The reaction is mainly observed for complexes of metals with d0 configuration, e.g. complexes of Sc(III), Zr(IV), Nb(V), Ta(V), etc. f-Element complexes also participate, regardless of the number of f-electrons. The reaction is thought to proceed via cycloaddition. Indeed, the rate of the reaction is characterized by a highly negative entropy of activation, indicating an ordered transition state. For metals unsuited for redox, sigma bond metathesis provides a pathway for introducing substituents.
The reaction attracted much attention because hydrocarbons are normally unreactive substrates, whereas some sigma-bond metatheses are facile. Unfortunately the reaction does not readily allow the introduction of functional groups. It has been suggested that dehydrocoupling reactions proceed via sigma-bond metathesis.
See also
Carbon–hydrogen bond activation
Metal-catalyzed σ-bond rearrangement
References
Organometallic chemistry | Sigma-bond metathesis | [
"Chemistry"
] | 318 | [
"Organometallic chemistry"
] |
53,378,620 | https://en.wikipedia.org/wiki/Erycina%20pusilla | Erycina pusilla is a species of flowering plant in the orchid family, Orchidaceae: a tiny orchid with an overall size of 2.5 to 3.5 cm. It is native to Mexico, Belize, Central America, South America and Trinidad.
The leaves are shaped like a lance head (lanceolate) and arranged in a fan. Unlike other similar orchids, E. pusilla never develops lengthwise folded leaves (conduplicate leaves) or extra storage organs (pseudobulbs).
The blooming season is from fall to spring. It produces solitary light-yellow orchid-shaped flowers. In comparison to the overall plant size, these flowers can reach a relatively large size (1 to 2.5 cm). The lateral sepals are united near the flower base.
Compared to other orchids, E. pusilla has a short life cycle (about 17 months). It can reach adulthood in just one season, while the majority of the orchids reach maturity in up to 5 years.
Name
It is commonly known as the tiny psygmorchis, due to its miniature size.
The current scientific name is Erycina pusilla. The etymology of its scientific name refers to its beauty and tiny size: “Erycina” is a byname of the Roman goddess for beauty, Venus (Venus of Eryx), and “pusilla” is Latin meaning “very little”. It was formerly classified in the genus Psygmorchis, due to its fan-shaped leaves (“psygmos” Greek for fan).
Synonyms
Homotypic synonyms:
Epidendrum pusillum L., Sp. Pl. ed. 2: 1352 (1763)
Cymbidium pusillum (L.) Sw., Nova Acta Regiae Soc. Sci. Upsal. 6: 74 (1799).
Oncidium pusillum (L.) Mutel, Mém. Soc. Roy. Centr. Agric. Dépt. N. 1835-1836: 84 (1837).
Tolumnia pusilla (L.) Hoehne, Ic. Orch. Bras.: 231 (1949).
Psygmorchis pusilla (L.) Dodson & Dressler, Phytologia 24: 288 (1972).
Heterotypic synonyms:
Oncidium iridifolium Kunth in F.W.H.von Humboldt, A.J.A.Bonpland & C.S.Kunth, Nov. Gen. Sp. 1: 344 (1816).
Epidendrum ventilabrum Vell., Fl. Flumin. 9: t. 32 (1831).
Oncidium allemanii Barb.Rodr., Gen. Spec. Orchid. 2: 185 (1882).
Oncidium pusillum var. megalanthum Schltr., Repert. Spec. Nov. Regni Veg. Beih. 27: 115 (1924).
Psygmorchis allemanii (Barb.Rodr.) Garay & Stacy, Bradea 1: 408 (1974).
Erycina allemanii (Barb.Rodr.) N.H.Williams & M.W.Chase, Lindleyana 16: 136 (2001).
Distribution and habitat
Erycina pusilla can be found in the neotropical region, including South and Central America, the southern Mexican lowlands, the Caribbean islands and southern Florida.
Its habitat consists of humid forests at elevations of with temperatures varying from warm to hot. Like many orchids, E. pusilla grows harmlessly upon other plants. It gets moisture and nutrients from the surroundings without affecting the host plant (commensalism).
Its quick development permits this orchid to grow on relatively short-lasting sites such as twigs or even leaves of bushes and trees, such as coffee plant or hibiscus. For this reason, it is usually classified as a twig epiphyte.
Use in science
Erycina pusilla is a promising model candidate for Oncidium research. Its relatively tiny size and short life cycle facilitate its cultivation, and it is able to complete its life cycle in vitro. Functional genomic research is easier because E. pusilla has only 6 chromosomes and a small genome (1.5 pg per 1C nucleus). Another aspect that favors the use of this orchid in research is the rarity of pollination and seed production in nature, which reduces the risk of undesired propagation of transgenic lines. Its rapid growth and low chromosome number also make E. pusilla an excellent parent for traditional hybridization methods. All these characteristics make E. pusilla a promising model not only for research but also for commercial breeding.
Beyond its use for research and commercial purposes, E. pusilla also has medicinal applications. The cooked whole plant is ingested to treat colic and stomachache, and the boiled whole plant is used as a wash to treat lacerations, cuts and wounds.
In vitro cultivation
Sporadic flowering in flasks was first reported by Livingston (1962), although in vitro cultivation was only established in 2007. The primary culture of E. pusilla becomes a callus after about one month of cultivation. Three months later it reaches the leaf stage, and after eight months the flowering stage begins. Two and a half months later, E. pusilla produces fruits. A new cycle can start from a new primary culture: a protocorm-like body (PLB) in vitro.
Genome
The transcriptome sequence of E. pusilla is available (Orchidstra Database). Some basic molecular resources were also established, including the sequencing of the chloroplast genome, the transcriptome and the BAC library. The miRNA database of E. pusilla, including the identification of miRNA biosynthesis-related genes and miRNA families, was established in 2013.
The chloroplast genome has been sequenced efficiently and economically by using the BAC library and next-generation sequencing. The chloroplast genome of E. pusilla is 143,164 bp in size and contains a pair of inverted repeats (IRa and IRb) of 23,439 bp separated by large and small single-copy regions of 84,189 and 12,097 bp, respectively. These results show that the gene order of the chloroplast genome is similar between E. pusilla and Oncidium. In Taiwan, different hybridization compatibilities of E. pusilla with Oncidium, Rodriguezia and Tolumnia were found by crossing it with several important Oncidiinae orchids.
MADS-box genes
Due to their role in plant growth, the characterization of MADS-box genes in E. pusilla has turned into a hot topic for both researchers and commercial orchid breeders. MADS-box genes encode for MADS-domain proteins, which are generally transcription factors. In plants, these proteins control key developmental processes throughout almost all life stages.
To date, 28 MADS-box genes were isolated in E. pusilla, namely EpMADS1 to 28. Nearly all of them contain introns greater than 10 kb, which reflects the complexity of the E. pusilla genome. Many EpMADS genes have expression patterns similar to those MADS-box genes in Arabidopsis. The 28 proteins, encoded by the E. pusilla MADS-box genes, are classified as type I or type II based on BLASTP analyses.
References
Other sources
Pridgeon, A.M., Cribb, P.J., Chase, M.A. & Rasmussen, F. eds. (1999). Genera Orchidacearum Vols 1–3. Oxford Univ. Press.
Berg Pana, H. 2005. Handbuch der Orchideen-Namen. Dictionary of Orchid Names. Dizionario dei nomi delle orchidee. Ulmer, Stuttgart.
Establishment of an Agrobacterium-mediated genetic transformation procedure for the experimental model orchid Erycina pusilla. Shu-Hong Lee, Chia-Wen Li, Chia-Hui, Liau Pao-Yi, Chang Li-Jen Liao, Choun-Sea Lin Ming-Tsair Chan, Plant Cell, Tissue and Organ Culture, January 2015, Volume 120, Issue 1, pp 211–220
External links
Oncidiinae
Plant models
Orchids of Central America
Orchids of Belize | Erycina pusilla | [
"Biology"
] | 1,800 | [
"Model organisms",
"Plant models"
] |
53,382,074 | https://en.wikipedia.org/wiki/George%20Cresswell%20Furnace | George Cresswell Furnace, also known as the George Cresswell Furnace Stack is a historic lead furnace located near Potosi, Washington County, Missouri. It was built about 1840, and is an open hearth furnace measuring about 100 feet square at its base and constructed of massive limestone blocks interlaced with mortar. The stack rises to a height of approximately 25 feet.
It was listed on the National Register of Historic Places in 1988.
References
Industrial buildings and structures on the National Register of Historic Places in Missouri
Industrial buildings completed in 1840
Buildings and structures in Washington County, Missouri
National Register of Historic Places in Washington County, Missouri
Lead
Smelting | George Cresswell Furnace | [
"Chemistry"
] | 130 | [
"Metallurgical processes",
"Smelting"
] |
62,252,965 | https://en.wikipedia.org/wiki/Phosphinous%20acids | Phosphinous acids are usually organophosphorus compounds with the formula R2POH. They are pyramidal in structure, with phosphorus in the oxidation state III. Most phosphinous acids rapidly convert to the corresponding phosphine oxides, which are tetrahedral and are assigned oxidation state V.
Synthesis
Only one example is known, bis(trifluoromethyl)phosphinous acid, (CF3)2POH. It is prepared in several steps from phosphorus trichloride (Et = ethyl):
PCl3 + 2 Et2NH → PCl2NEt2 + Et2NH2Cl
2 P(NEt2)3 + PCl2NEt2 + 2 CF3Br → P(CF3)2NEt2 + 2 BrClP(NEt2)3
P(CF3)2NEt2 + H2O → P(CF3)2OH + HNEt2
Reactions
With the lone exception of the bis(trifluoromethyl) derivative, the dominant reaction of phosphinous acids is tautomerization:
PR2OH → OPR2H
Even the pentafluorophenyl compound P(C6F5)2OH is unstable with respect to the phosphine oxide.
Although phosphinous acids are rare, their P-bonded coordination complexes are well established, e.g. Mo(CO)5P(OH)3.
Secondary and primary phosphine oxides
Tertiary phosphine oxides, compounds with the formula R3PO, cannot tautomerize. The situation is different for the secondary and primary phosphine oxides, with the respective formulas R2(H)PO and R(H)2PO.
References
Functional groups
Organophosphorus compounds | Phosphinous acids | [
"Chemistry"
] | 369 | [
"Organophosphorus compounds",
"Organic compounds",
"Functional groups"
] |
62,256,831 | https://en.wikipedia.org/wiki/Hughes%E2%80%93Ingold%20symbol | A Hughes–Ingold symbol describes various details of the reaction mechanism and overall result of a chemical reaction. For example, an SN2 reaction is a substitution reaction ("S") by a nucleophilic process ("N") that is bimolecular ("2" molecular entities involved) in its rate-determining step. By contrast, an E2 reaction is an elimination reaction, an SE2 reaction involves electrophilic substitution, and an SN1 reaction is unimolecular. The system is named for British chemists Edward D. Hughes and Christopher Kelk Ingold.
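Because the symbols compose mechanically, a toy decoder illustrates the scheme (an illustrative sketch, not from the source; it covers only the substitution and elimination symbols mentioned above):

```python
# Toy illustration (not from the source): decoding Hughes-Ingold symbols into
# their component meanings with a simple lookup.
import re

REACTION = {"S": "substitution", "E": "elimination"}
PROCESS = {"N": "nucleophilic", "E": "electrophilic"}

def decode(symbol):
    m = re.fullmatch(r"([SE])([NE])?([12])", symbol)
    if not m:
        raise ValueError(f"not a recognized Hughes-Ingold symbol: {symbol}")
    rxn, proc, mol = m.groups()
    parts = [REACTION[rxn]]
    if proc:
        parts.append(PROCESS[proc])
    parts.append("unimolecular" if mol == "1" else "bimolecular")
    return ", ".join(parts)

print(decode("SN2"))  # substitution, nucleophilic, bimolecular
print(decode("E2"))   # elimination, bimolecular
print(decode("SN1"))  # substitution, nucleophilic, unimolecular
```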
References
Chemical reactions
Reaction mechanisms
Chemical nomenclature | Hughes–Ingold symbol | [
"Chemistry"
] | 134 | [
"Reaction mechanisms",
"nan",
"Physical organic chemistry",
"Chemical reaction stubs",
"Chemical kinetics"
] |
62,258,052 | https://en.wikipedia.org/wiki/Itamar%20Medical | Itamar Medical is a multinational company focused on the development, manufacturing and sales of medical devices related to respiratory sleep disorders. The company is headquartered in Caesarea, Israel and is owned by ZOLL Medical Corporation. It provides a continuum of care in the area of sleep disorders, based on its WatchPAT diagnostic devices and on the early diagnosis of atherosclerosis.
Company overview
Itamar Medical was founded in 1995 as a developer of devices for assessing vascular defects. Its early products included technology for early detection of heart disease (EndoPAT) and detection of sleep disorders (WatchPAT).
The company is named after Itamar Yaron (the brother of one of the founders), who was killed in the Yom Kippur War while trying to rescue an injured soldier and was awarded the Medal of Courage. Itamar's headquarters are located in Caesarea, Israel, and the company has offices in the US, Japan, and the Netherlands.
In 2007, the company went public on the Tel Aviv Stock Exchange and on the Nasdaq. In 2011, the company made its first step on the Indian market.
In 2012, the company's WatchPAT started being distributed in Russia by Medical Diagnostic Methods.
In 2020, the company raised $40 million on the NASDAQ and won the SleepTech award for 2020.
In 2021, the company was acquired by ZOLL Medical Corporation and its stock delisted.
Company's devices
WatchPAT (FDA approval)
EndoPAT (FDA approval)
Sleep Apnea
References
External links
Companies based in Caesarea
Health care companies of Israel
Companies formerly listed on the Nasdaq
Companies listed on the Tel Aviv Stock Exchange
Israeli companies established in 1997
Sleep disorders
2021 mergers and acquisitions | Itamar Medical | [
"Biology"
] | 360 | [
"Behavior",
"Sleep",
"Sleep disorders"
] |
62,261,660 | https://en.wikipedia.org/wiki/World%20Flora%20Online | World Flora Online is an Internet-based compendium of the world's plant species.
Description
The World Flora Online (WFO) is an open-access database, launched in October 2012 as a follow-up project to The Plant List, with the aim of publishing an online flora of all known plants by 2020. It is a project of the United Nations Convention on Biological Diversity, with the goal of halting the loss of plant species worldwide by 2020. It is developed by a collaborative group of institutions around the world in response to Target 1 of the updated 2011–2020 Global Strategy for Plant Conservation (GSPC): to produce "an online flora of all known plants".
An accessible flora of all known plant species was considered a fundamental requirement for plant conservation. It provides a baseline for the achievement and monitoring of other targets of the strategy. The previous target of GSPC was achieved in 2010 with The Plant List. WFO was conceived in 2012 by an initial group of four institutions; the Missouri Botanical Garden, the New York Botanical Garden, the Royal Botanic Garden Edinburgh and the Royal Botanic Gardens, Kew. In all, 36 institutions are involved in the production.
See also
International Plant Names Index
Plants of the World Online
References
Bibliography
, see also The Plant List
.
Databases in the United Kingdom
Databases in the United States
Missouri Botanical Garden
Online botany databases
Online taxonomy databases
Plant taxonomy
Royal Botanic Gardens, Kew | World Flora Online | [
"Biology"
] | 285 | [
"Botanical nomenclature",
"Plants",
"Botanical terminology",
"Biological nomenclature",
"Plant taxonomy"
] |
62,262,830 | https://en.wikipedia.org/wiki/Undercut%20%28turning%29 | In turning, an undercut is a recess in a diameter generally on the inside diameter of the part.
On turned parts an undercut is also known as a neck or "relief groove". Undercuts are often used at the end of the threaded portion of a shaft or screw to provide clearance for the cutting tool, and are referred to as thread relief in this context. A rule of thumb is that the undercut should be at least 1.5 threads long and its diameter should be smaller than the minor diameter of the thread; strictly speaking, the relief simply needs to be equal to or slightly smaller than the minor diameter of the thread. Thread relief can also be internal, on a bore, in which case the relief needs to be larger than the major thread diameter. Undercuts are also often used on shafts that have diameter changes so that a mating part can seat against the shoulder; if an undercut is not provided, a small radius is always left behind even if a sharp corner is intended. These types of undercuts are called out on technical drawings by giving the width and either the depth or the diameter of the bottom of the neck.
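A small sketch of how the rule of thumb above might be applied when dimensioning a relief groove (the helper name, the 0.1 mm clearance, and the M10 example are illustrative assumptions, not from the source):

```python
# Hypothetical helper (illustrative only): sizing a thread-relief undercut
# from the thread pitch and minor diameter, per the rule of thumb quoted above
# (width >= 1.5 thread pitches, relief diameter below the minor diameter).

def thread_relief(pitch_mm, minor_dia_mm, clearance_mm=0.1):
    width = 1.5 * pitch_mm                    # at least 1.5 threads long
    relief_dia = minor_dia_mm - clearance_mm  # slightly below minor diameter
    return width, relief_dia

# M10 x 1.5 external thread: basic minor diameter ~8.16 mm
w, d = thread_relief(1.5, 8.16)
print(f"relief groove: {w:.2f} mm wide, {d:.2f} mm diameter")
```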
References
Bibliography
.
Mechanical engineering | Undercut (turning) | [
"Physics",
"Engineering"
] | 234 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
76,174,498 | https://en.wikipedia.org/wiki/Bosch-Meiser%20process | The Bosch–Meiser process is an industrial process for the large-scale manufacture of urea, a valuable nitrogenous chemical. It was patented in 1922 and is named after its discoverers, the German chemists Carl Bosch and Wilhelm Meiser.
The whole process consists of two main equilibrium reactions, with incomplete conversion of the reactants.
The first, called carbamate formation, is the fast exothermic reaction of liquid ammonia with gaseous carbon dioxide (CO2) at high temperature and pressure to form ammonium carbamate (H2N-COONH4):

2 NH3 + CO2 ⇌ H2N-COONH4 (ΔH = −117 kJ/mol at 110 atm and 160 °C)
The second, called urea conversion, is the slower endothermic decomposition of ammonium carbamate into urea and water:

H2N-COONH4 ⇌ (NH2)2CO + H2O (ΔH = +15.5 kJ/mol at 160–180 °C)
The overall conversion of and to urea is exothermic, with the reaction heat from the first reaction driving the second. The conditions that favor urea formation (high temperature) have an unfavorable effect on the carbamate formation equilibrium. The process conditions are a compromise: the ill-effect on the first reaction of the high temperature (around 190 °C) needed for the second is compensated for by conducting the process under high pressure (140–175 bar), which favors the first reaction. Although it is necessary to compress gaseous carbon dioxide to this pressure, the ammonia is available from the ammonia production plant in liquid form, which can be pumped into the system much more economically. To allow the slow urea formation reaction time to reach equilibrium, a large reaction space is needed, so the synthesis reactor in a large urea plant tends to be a massive pressure vessel.
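Since the reaction heats add by Hess's law, a one-line check (a sketch, not from the source) recovers the overall exothermicity quoted above:

```python
# Minimal sketch: combining the two reaction enthalpies quoted above by
# Hess's law to obtain the overall heat of the urea synthesis.

dH_carbamate = -117.0   # kJ/mol, 2 NH3 + CO2 -> H2N-COONH4 (exothermic)
dH_urea      = +15.5    # kJ/mol, H2N-COONH4 -> (NH2)2CO + H2O (endothermic)

dH_overall = dH_carbamate + dH_urea
print(f"overall: {dH_overall:+.1f} kJ/mol")   # -101.5 kJ/mol, net exothermic
```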
Reactant recycling
Because the urea conversion is incomplete, the urea must be separated from the unconverted reactants, including the ammonium carbamate. Various commercial urea processes are characterized by the conditions under which urea forms and the way that unconverted reactants are further processed.
Conventional recycle processes
In early "straight-through" urea plants, reactant recovery (the first step in "recycling") was done by letting down the system pressure to atmospheric to let the carbamate decompose back to ammonia and carbon dioxide. Originally, because it was not economic to recompress the ammonia and carbon dioxide for recycle, the ammonia at least would be used for the manufacture of other products such as ammonium nitrate or ammonium sulfate, and the carbon dioxide was usually wasted. Later process schemes made recycling unused ammonia and carbon dioxide practical. This was accomplished by the "total recycle process", developed in the 1940s to 1960s and now called the "conventional recycle process". It proceeds by depressurizing the reaction solution in stages (first to 18–25 bar and then to 2–5 bar) and passing it at each stage through a steam-heated carbamate decomposer, then recombining the resulting carbon dioxide and ammonia in a falling-film carbamate condenser and pumping the carbamate solution back into the urea reaction vessel.
Stripping recycle process
The "conventional recycle process" for recovering and reusing the reactants has largely been supplanted by a stripping process, developed in the early 1960s by Stamicarbon in The Netherlands, that operates at or near the full pressure of the reaction vessel. It reduces the complexity of the multi-stage recycle scheme, and it reduces the amount of water recycled in the carbamate solution, which has an adverse effect on the equilibrium in the urea conversion reaction and thus on overall plant efficiency. Effectively all new urea plants use the stripper, and many total recycle urea plants have converted to a stripping process.
In the conventional recycle processes, carbamate decomposition is promoted by reducing the overall pressure, which reduces the partial pressure of both ammonia and carbon dioxide, allowing these gasses to be separated from the urea product solution. The stripping process achieves a similar effect without lowering the overall pressure, by suppressing the partial pressure of just one of the reactants in order to promote carbamate decomposition. Instead of feeding carbon dioxide gas directly to the urea synthesis reactor with the ammonia, as in the conventional process, the stripping process first routes the carbon dioxide through the stripper. The stripper is a carbamate decomposer that provides a large amount of gas-liquid contact. This flushes out free ammonia, reducing its partial pressure over the liquid surface and carrying it directly to a carbamate condenser (also under full system pressure). From there, reconstituted ammonium carbamate liquor is passed to the urea production reactor. That eliminates the medium-pressure stage of the conventional recycle process.
Side reactions
The three main side reactions that produce impurities have in common that they decompose urea.
Urea hydrolyzes back to ammonium carbamate in the hottest stages of the synthesis plant, especially in the stripper, so residence times in these stages are designed to be short.
Biuret is formed when two molecules of urea combine with the loss of a molecule of ammonia.
Normally this reaction is suppressed in the synthesis reactor by maintaining an excess of ammonia, but after the stripper, it occurs until the temperature is reduced. Biuret is undesirable in urea fertilizer because it is toxic to crop plants to varying degrees, but it is sometimes desirable as a nitrogen source when used in animal feed.
Isocyanic acid (HNCO) and ammonia result from the thermal decomposition of ammonium cyanate (NH4NCO), which is in chemical equilibrium with urea:

CO(NH2)2 ⇌ NH4NCO ⇌ HNCO + NH3
This decomposition is at its worst when the urea solution is heated at low pressure, which happens when the solution is concentrated for prilling or granulation (see below). The reaction products mostly volatilize into the overhead vapours, and recombine when these condense to form urea again, which contaminates the process condensate.
Corrosion
Ammonium carbamate solutions are highly corrosive to metallic construction materials – even to resistant forms of stainless steel – especially in the hottest parts of the synthesis plant such as the stripper. Historically corrosion has been minimized (although not eliminated) by continuous injection of a small amount of oxygen (as air) into the plant to establish and maintain a passive oxide layer on exposed stainless steel surfaces. Highly corrosion resistant materials have been introduced to reduce the need for passivation oxygen, such as specialized duplex stainless steels in the 1990s, and zirconium or zirconium-clad titanium tubing in the 2000s.
Global production
In 2022, world production of urea was estimated at approximately 210 million tons.
References
Fertilizers
Chemical processes
Industrial processes
Equilibrium chemistry
Catalysis
German inventions | Bosch-Meiser process | [
"Chemistry"
] | 1,417 | [
"Catalysis",
"Fertilizers",
"Equilibrium chemistry",
"Organic compounds",
"Chemical processes",
"Soil chemistry",
"nan",
"Chemical process engineering",
"Chemical kinetics",
"Ureas"
] |
76,179,196 | https://en.wikipedia.org/wiki/Neil%20Hindman | Neil Hindman (born April 14, 1943) is an American mathematician and Professor Emeritus at Howard University. His research focuses on various areas within mathematics, including topology, Stone-Čech compactification, discrete systems, and Ramsey theory.
Life and education
Neil Hindman actively participated in civil rights work during his college years. In the summer of 1964, he served as a freedom school coordinator in Mississippi.
Hindman completed his Bachelor of Arts degree in mathematics and physics in 1965 at Westmar College. He then pursued a graduate degree, earning a Master of Arts in mathematics from the University of Massachusetts in 1967. Subsequently, Hindman continued his academic journey at Wesleyan University, where he received his Ph.D. in 1969. Under the supervision of W. W. Comfort, Hindman wrote his doctoral thesis on "P-like spaces and their product with P-spaces."
Academic career
Neil Hindman began his academic career as a visiting assistant professor at Wesleyan University, serving from September 1969 to June 1970. Following this, he joined California State University, Los Angeles, as an assistant professor in September 1970. From September 1975 to August 1976, Hindman held a visiting associate professorship at SUNY (The State University of New York) at Binghamton. By December 1979, he had risen to the rank of Professor at California State University, Los Angeles.
In January 1980, Hindman transitioned to Howard University, where he assumed the role of associate professor, continuing to impart knowledge in mathematics. He dedicated several decades to teaching and research at Howard University, ultimately retiring as a Professor of Mathematics in June 2017.
Mathematical work
One of Hindman's early contributions was his dissertation for his Ph.D. thesis, conducted in collaboration with W. W. Comfort and S. Negrepontis. Their research explored conditions for defining F'-spaces and investigated concepts such as weakly Lindelöf spaces and P-spaces, shedding light on the structure of F-spaces in topology. This pioneering work significantly advanced theoretical models and analytical techniques within the field.
Hindman's theorem, formulated and proven by Neil Hindman, resolved a conjecture originally proposed by Graham and Rothschild. The theorem asserts that for any partition of the natural numbers into a finite number of classes, at least one class contains an infinite sequence such that all finite sums of distinct elements from this sequence also belong to that class. Hindman's theorem thereby confirms the conjecture of Graham and Rothschild, and its modern proofs are closely tied to the existence of idempotent ultrafilters on the natural numbers. This theorem highlights the relationship between the partition regularity of the natural numbers and ultrafilters, offering a fundamental result with broad implications across various mathematical domains.
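As an illustration (not from the source), a brute-force search over a finite range can exhibit such a monochromatic finite-sums set for a concrete coloring; Hindman's theorem is the much stronger statement that an infinite sequence exists for any finite coloring of all the natural numbers:

```python
# Illustrative brute force: for a finite coloring of {1..N}, search for a
# short sequence whose finite sums of distinct elements all share one color.
from itertools import combinations

def finite_sums(seq):
    return {sum(c) for r in range(1, len(seq) + 1)
                   for c in combinations(seq, r)}

def find_ip_set(color, N, length=3):
    for seq in combinations(range(1, N + 1), length):
        sums = finite_sums(seq)
        if max(sums) <= N and len({color(s) for s in sums}) == 1:
            return seq
    return None

color = lambda n: n % 2         # a 2-coloring of the integers: parity
print(find_ip_set(color, 50))   # e.g. (2, 4, 6): all finite sums are even
```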
Hindman remains active in the fields of Ramsey Theory and Topology, with a particular focus on the Stone–Čech compactification.
Awards and honors
International Prize from the Japanese Association of Mathematical Science (2003)
Selected publications
Hindman, Neil. "Finite Sums from Sequences Within Cells of a Partition of N".
Gordon, C.; Hindman Neil. "Elementary Set Theory – Proof Techniques". Hafner Press, New York, 1975.
Hindman, Neil. "The Product of F-Space and P-Space."
Comfort, W.W.; Hindman, Neil; Negrepontis, S. "F'-Spaces and their product with P-spaces."
References
Living people
20th-century American mathematicians
21st-century American mathematicians
Topologists
American civil rights activists
Wesleyan University faculty
California State University, Los Angeles faculty
Binghamton University faculty
Howard University faculty
Westmar University alumni
University of Massachusetts alumni
Wesleyan University alumni
1943 births | Neil Hindman | [
"Mathematics"
] | 727 | [
"Topologists",
"Topology"
] |
76,180,661 | https://en.wikipedia.org/wiki/Effects%20of%20climate%20change%20on%20the%20tropics | The effects of climate change on tropical regions include changes in marine ecosystems, human livelihoods and biodiversity, the degradation of tropical rainforests, and reduced environmental stability in these areas. Climate change is characterized by alterations in temperature, precipitation patterns, and extreme weather events. Tropical areas, located between the Tropic of Cancer and the Tropic of Capricorn, are known for their warm temperatures, high biodiversity, and distinct ecosystems, including rainforests, coral reefs, and mangroves.
Tropical forests
Carbon cycle
Tropical forests are crucial in the global carbon cycle, acting as significant carbon sinks by absorbing CO2 through photosynthesis. However, climate change is altering this balance. Increased temperatures and changes in precipitation patterns can reduce forest growth rates and change species composition, potentially diminishing the forests' capacity to sequester carbon. Extreme weather events, such as droughts and storms, can lead to increased tree mortality, further reducing the carbon storage capacity of these forests and threatening their biodiversity and ecological services.
Degradation
Tropical rainforests are experiencing significant threats from climate change. Changes in rainfall patterns and increased temperatures can lead to droughts, affecting the health and distribution of rainforest species. These changes exacerbate the effects of deforestation and land-use change, leading to biodiversity loss and affecting the livelihoods of indigenous communities and local populations dependent on these forests. Moreover, the degradation of rainforests contributes to climate change by releasing stored carbon into the atmosphere, creating a feedback loop that further accelerates global warming.
A study highlighted in a 2022 Nature article underscores the broader climate benefits of tropical forests beyond carbon storage. Tropical forests cool the planet by one-third of a degree through biophysical mechanisms such as humidifying the air and releasing cooling chemicals, in addition to their role in extracting carbon dioxide from the air. This underscores the critical importance of preserving tropical forests not only for their carbon storage capacity but also for their broader role in regulating the Earth's climate.
Marine ecosystems
The warming of ocean waters has caused coral bleaching and the degradation of coral reefs, which are vital to marine biodiversity and fisheries. Coral reefs support a large proportion of the world's fish species, providing food and livelihoods for millions of people. As ocean temperatures rise, the symbiotic relationship between corals and their algae is disrupted, leading to bleaching and, in severe cases, the death of coral colonies. This not only affects the species that directly depend on coral reefs but also impacts the larger marine food web and fisheries productivity. In addition, climate change impacts oceanic currents and sea levels, further altering fish distributions and habitats. Furthermore, ocean acidification, resulting from increased CO2 levels, compromises the ability of shellfish and corals to form shells and skeletons, further endangering marine ecosystems and the communities that depend on them.
Adaptation and mitigation
Addressing the impacts of climate change on tropical regions requires global cooperation and local action. Strategies include protecting and restoring ecosystems, implementing sustainable land use and fisheries management practices, and reducing greenhouse gas emissions. Technological innovations, such as satellite monitoring of deforestation and forest fires, along with community-based conservation efforts, can play a crucial role in these strategies. Additionally, promoting sustainable agricultural practices near tropical forests can help preserve these ecosystems while supporting local economies.
The World Resources Institute highlights solutions that serve both adaptation and mitigation purposes, including protecting coastal wetlands, promoting sustainable agroforestry, decentralizing energy distribution, and securing indigenous peoples' land rights. These strategies not only help reduce carbon emissions but also improve resilience to climate impacts. For example, coastal wetlands buffer storm surges and floods while storing significant amounts of carbon. Agroforestry practices enhance land productivity and carbon sequestration, and decentralized energy systems improve resilience to climate variability. Recognizing and securing the land rights of indigenous peoples, who manage a substantial portion of the world's land, can lead to better forest conservation outcomes and lower deforestation rates.
In Zimbabwe, for example, a case study of smallholder farmers in the Nyanga District showcased the integration of traditional grains, drought-resistant crops, and early planting among other adaptation strategies. The involvement of community leaders, professionals, and local residents provided a rich source of knowledge on effective practices to combat the impacts of climate change on food security and livelihoods. This approach emphasizes the importance of local knowledge and community-based strategies in developing resilience to climate change.
NASA plays a critical role in providing the scientific data necessary for understanding and addressing climate change globally. Through missions like GRACE, ICESat, and Sentinel-6, NASA documents crucial changes in the Earth's ice sheets and sea levels, offering invaluable insights for both mitigation and adaptation efforts. Although not directly involved in policy-making, NASA's data supports global climate action by informing decision-makers, scientific communities, and the public.
See also
Climate change
Climate change in the United States
Climate change in Australia
Climate change in the Arctic
Climate change in Antarctica
References
Climate
Tropics
Climate change and the environment
Climate change adaptation
Climate change mitigation
Biodiversity
Tropical rainforests
Climate change by country and region | Effects of climate change on the tropics | [
"Biology"
] | 1,046 | [
"Tropical rainforests",
"Ecosystems",
"Biodiversity"
] |
76,185,356 | https://en.wikipedia.org/wiki/Kermesic%20acid | Kermesic acid is an anthraquinone derivative and the main component of the red dye kermes (false carmine). The compound is the aglycone of carminic acid, the main component of true carmine. As a dye, it is known as Natural Red 3.
Kermesic acid, like carminic acid and the laccaic acids, is an insect dye obtained from scale insects. Kermesic acid is found in insects of the genus Kermes. It is the only colored component of the dye kermes.
The chemical structure of kermesic acid was elucidated by Otto Dimroth in 1916.
References
Anthraquinones
Carboxylic acids | Kermesic acid | [
"Chemistry"
] | 151 | [
"Carboxylic acids",
"Functional groups"
] |
59,586,120 | https://en.wikipedia.org/wiki/Mantle%20oxidation%20state | Mantle oxidation state (redox state) applies the concept of oxidation state in chemistry to the study of the Earth's mantle. The chemical concept of oxidation state mainly refers to the valence state of one element, while mantle oxidation state provides the degree of decreasing or increasing valence states of all polyvalent elements in mantle materials confined in a closed system. The mantle oxidation state is controlled by oxygen fugacity and can be benchmarked by specific groups of redox buffers.
Mantle oxidation state changes because of the existence of polyvalent elements (elements with more than one valence state, e.g. Fe, Cr, V, Ti, Ce, Eu, C and others). Among them, Fe is the most abundant (≈8 wt% of the mantle) and its oxidation state largely reflects that of the mantle. Examining the valence states of other polyvalent elements could also provide information on the mantle oxidation state.
It is well known that the oxidation state can influence the partitioning behavior of elements and liquid water between melts and minerals, the speciation of C-O-H-bearing fluids and melts, as well as transport properties like electrical conductivity and creep.
The formation of diamond requires both high pressures and high temperatures and a carbon source. The most common carbon source in the Earth's lower mantle is not elemental carbon, so redox reactions must be involved in diamond formation. Examining the oxidation state aids in predicting the P-T conditions of diamond formation and can elucidate the origin of deep diamonds.
Thermodynamic description of oxidation state
Mantle oxidation state can be quantified as the oxygen fugacity ($f_{O_2}$) of the system within the framework of thermodynamics. A higher oxygen fugacity implies a more oxygen-rich and more oxidized environment. At any given pressure-temperature condition, for any compound or element M that bears the potential to be oxidized by oxygen, an equilibrium of the form M + O2 ⇌ MO2 can be written.
For example, if M is Fe, the redox equilibrium reaction can be Fe+1/2O2=FeO; if M is FeO, the redox equilibrium reaction can be 2FeO+1/2O2=Fe2O3.
With the condensed phases at unit activity, the Gibbs energy change associated with this reaction is

$$\Delta G = \Delta G^\circ - RT\ln f_{O_2},$$

which vanishes at equilibrium, so that $RT\ln f_{O_2} = \Delta G^\circ$. Along each isotherm, the partial derivative of $\Delta G^\circ$ with respect to $P$ is $\Delta V$:

$$\left(\frac{\partial \Delta G^\circ}{\partial P}\right)_T = \Delta V.$$

Combining the two equations above,

$$RT\ln f_{O_2} = \Delta G^\circ(P_0,T) + \int_{P_0}^{P} \Delta V\, dP.$$

Therefore,

$$\log f_{O_2} = \frac{\Delta G^\circ(P_0,T) + \int_{P_0}^{P} \Delta V\, dP}{2.303\,RT}$$

(note that the natural logarithm, with base $e$, has been converted to the common logarithm, with base 10, in this formula).
For a closed system, more than one of these equilibrium oxidation reactions may exist, but since all these reactions share the same $f_{O_2}$, examining any one of them allows extraction of the oxidation state of the system.
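As a minimal numerical sketch (not from the source), the relation above can be evaluated directly; the thermodynamic inputs below are placeholders, and a pressure-independent $\Delta V$ is assumed so the integral reduces to $\Delta V\,(P-P_0)$:

```python
# Sketch (placeholder values): log10(fO2) from the equilibrium relation above,
# assuming solid phases at unit activity, one mole of O2 in the reaction, and
# a constant volume change dV of the solid phases.

R = 8.314  # J/(mol K)

def log_fO2(dG0, dV, P, T, P0=1e5):
    """dG0: standard Gibbs energy of the redox reaction at (P0, T), J/mol.
    dV: volume change of the solid phases, m^3/mol. P, P0 in Pa, T in K."""
    dG = dG0 + dV * (P - P0)
    return dG / (2.303 * R * T)

# placeholder inputs at 5 GPa and 1600 K:
print(log_fO2(dG0=-300e3, dV=-1.0e-6, P=5e9, T=1600))  # ~ -10
```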
Pressure effect on oxygen fugacity
The physics and chemistry of the mantle largely depend on pressure. As mantle minerals are compressed, they transform into other minerals at certain depths. Seismic observations of velocity discontinuities and experimental simulations of phase boundaries have both verified these structural transformations within the mantle. As such, the mantle can be further divided into three layers with distinct mineral compositions.
Since mantle mineral composition changes, the mineral hosting environment for polyvalent elements also alters. For each layer, the mineral combination governing the redox reactions is unique and will be discussed in detailed below.
Upper mantle
Between depths of 30 and 60 km, oxygen fugacity is mainly controlled by the olivine-orthopyroxene-spinel oxidation reaction.
Under deeper upper-mantle conditions, the olivine-orthopyroxene-garnet oxygen barometer is the redox reaction used to calibrate oxygen fugacity.
In this reaction, 4 moles of ferrous ions are oxidized to ferric ions while the other 2 moles of ferrous ions remain unchanged.
Transition zone
The garnet-garnet reaction can be used to estimate the redox state of the transition zone.
A recent study showed that the oxygen fugacity of the transition zone inferred from the garnet-garnet reaction is −0.26 to +3 log units relative to the Fe-FeO (IW, iron-wüstite) oxygen buffer.
Lower mantle
Disproportionation of ferrous iron at lower-mantle conditions also affects the mantle oxidation state. This reaction differs from the reactions mentioned above in that it does not involve free oxygen:

3 FeO → Fe + Fe2O3,

where the FeO resides in ferropericlase (Fp) and the Fe2O3 resides in bridgmanite (Bdg). There is no oxygen fugacity change associated with the reaction. However, as the reaction products differ significantly in density, the metallic iron phase can descend towards the Earth's core and become separated from the mantle. In this case, the mantle loses metallic iron and becomes more oxidized.
Implications for diamond formation
The equilibrium reaction involving diamond is the reduction of carbonate to elemental carbon, a carbonate-carbon equilibrium such as

MgCO3 ⇌ MgO + C + O2.
Examining the oxygen fugacity of the upper mantle and transition zone enables us to compare it with the conditions (the equilibrium reaction shown above) required for diamond formation. The results show that the mantle $f_{O_2}$ is usually about 2 log units lower than that of the carbonate-carbon reaction, which means the formation of diamond is favored at transition-zone conditions.
It has also been reported that a decrease in pH would facilitate the formation of diamond under mantle conditions, through reactions such as

HCO3−(aq) + H+(aq) + 2 H2(aq) → C(diamond) + 3 H2O,
where the subscript aq means 'aqueous', implying H2 is dissolved in the solution.
Deep diamonds have become important windows to look into the mineralogy of the Earth's interior. Minerals not stable at the surface can be found within inclusions of superdeep diamonds, implying they were stable where these diamonds crystallized. Because of the hardness of diamond, the high-pressure environment is retained even after transport to the surface. So far, the superdeep minerals brought up by diamonds include ringwoodite, ice-VII, cubic δ-N2 and Ca-perovskite.
See also
Ultra-high-pressure metamorphism
Polymorphism (materials science)
Table of thermodynamic equations
List of oxidation states of the elements
References
Earth
Geochemistry
Geophysics
High pressure science | Mantle oxidation state | [
"Physics",
"Chemistry"
] | 1,240 | [
"High pressure science",
"Applied and interdisciplinary physics",
"nan",
"Geophysics"
] |
59,587,692 | https://en.wikipedia.org/wiki/ThinkPad%20T61 | The ThinkPad T61 is a premium, business-class laptop computer in the ThinkPad T series, a line originally manufactured by IBM, which sold the rights to Lenovo. It was first manufactured in 2006 and was offered as a modular platform, allowing buyers to customize almost all of its major features, including processor speed, amount of RAM and hard disk storage, screen size and resolution, quality and speed of the video card, and additional capabilities such as a fingerprint reader, smart card reader, and Zip drive. The T61 came with the Windows Vista operating system.
References
External links
Thinkwiki.de - T61 (in German)
Thinkpad T61 wiki
IBM laptops
Lenovo laptops
T61
Computer-related introductions in 2006 | ThinkPad T61 | [
"Technology"
] | 162 | [
"Computing stubs",
"Computer hardware stubs"
] |
51,846,334 | https://en.wikipedia.org/wiki/Distal%20promoter | Distal promoter elements are regulatory DNA sequences that can be many kilobases distant from the gene that they regulate.
They can either be enhancers (increasing expression) or silencers (decreasing expression). They act by binding activator or repressor proteins (transcription factors); the intervening DNA then bends so that the bound proteins contact the core promoter and RNA polymerase.
In T-cell development
T-cell development and activation are controlled by the complementary placement of the proximal and distal lck promoters. In Lck-PROX mice, the proximal promoter produces maximal Lck protein levels and normal thymic development, while the distal promoter alone leads to deficient Lck protein and abnormal thymic levels.
Further research on the late stage of thymocyte development reveals that when Cre is driven by the distal lck gene promoter, Cre expression is limited to innate-like T cells. The distal-promoter-driven Cre thus provides a cell-type-specific tool for innate-like T cells.
In cancer
Multiple studies have discovered abnormalities in distal promoters within cancer cells. For example, an overactive distal promoter located about 1 kilobase away from the MUC5B gene contributes to atypical expression of this gene in gastric cancer cells. Similarly, a few polymorphisms in the RUNX3 distal promoter alter the promoter's function, increasing the activity of the NF-κB transcription factor and the expression of the IL1B gene. These polymorphisms have been correlated with increased vulnerability to intestinal gastric cancer.
Another cancer-related gene is EGLN2, which is located in chromosomal region 19q13.2. This gene encodes an enzyme that recognizes conserved prolyl residues and hydroxylates them in the α-subunit of hypoxia-inducible factor (HIF). A functional polymorphism, a 4-bp insertion/deletion within the distal promoter, can affect the expression of EGLN2.
In RNA Polymerase II (RNAP2)
RNA polymerase II binding at distal promoter (enhancer) elements may act as a marker for active regulatory sequences.
References
Genetics | Distal promoter | [
"Biology"
] | 461 | [
"Genetics"
] |
51,847,293 | https://en.wikipedia.org/wiki/CO2%20fertilization%20effect |
The CO2 fertilization effect or carbon fertilization effect is the increased rate of photosynthesis and reduced leaf transpiration seen in plants as levels of atmospheric carbon dioxide (CO2) rise. The carbon fertilization effect varies depending on plant species, air and soil temperature, and availability of water and nutrients. Net primary productivity (NPP) might respond positively to the carbon fertilization effect, although evidence shows that enhanced rates of photosynthesis in plants due to CO2 fertilization do not directly enhance all plant growth, and thus carbon storage. The carbon fertilization effect has been reported to be the cause of 44% of the gross primary productivity (GPP) increase since the 2000s. Earth System Models, Land System Models and Dynamic Global Vegetation Models are used to investigate and interpret vegetation trends related to increasing levels of atmospheric CO2. However, the ecosystem processes associated with the CO2 fertilization effect remain uncertain and therefore are challenging to model.
Terrestrial ecosystems have reduced atmospheric CO2 concentrations and have partially mitigated climate change effects. The response by plants to the carbon fertilization effect is unlikely to significantly reduce atmospheric CO2 concentration over the next century due to the increasing anthropogenic influences on atmospheric CO2. Earth's vegetated lands have shown significant greening since the early 1980s largely due to rising levels of atmospheric CO2.
Theory predicts the tropics to have the largest uptake due to the carbon fertilization effect, but this has not been observed. The amount of uptake from fertilization also depends on how forests respond to climate change, and if they are protected from deforestation.
Changes in atmospheric carbon dioxide may reduce the nutritional quality of some crops, with for instance wheat having less protein and less of some minerals. The protein, iron and zinc content of common food crops could fall by 3 to 17%.
Mechanism
Through photosynthesis, plants use CO2 from the atmosphere, water from the ground, and energy from the sun to create sugars used for growth and fuel. While using these sugars as fuel releases carbon back into the atmosphere (photorespiration), growth stores carbon in the physical structures of the plant (i.e. leaves, wood, or non-woody stems). With about 19 percent of Earth's carbon stored in plants, plant growth plays an important role in storing carbon on the ground rather than in the atmosphere. In the context of carbon storage, growth of plants is often referred to as biomass productivity. This term is used because researchers compare the growth of different plant communities by their biomass, amount of carbon they contain.
Increased biomass productivity directly increases the amount of carbon stored in plants. And because researchers are interested in carbon storage, they are interested in where most of the biomass is found in individual plants or in an ecosystem. Plants will first use their available resources for survival and support the growth and maintenance of the most important tissues like leaves and fine roots which have short lives. With more resources available plants can grow more permanent, but less necessary tissues like wood.
If the air surrounding plants has a higher concentration of carbon dioxide, they may be able to grow better and store more carbon and also store carbon in more permanent structures like wood. Evidence has shown this occurring for a few different reasons. First, plants that were otherwise limited by carbon or light availability benefit from a higher concentration of carbon. Another reason is that plants are able use water more efficiently because of reduced stomatal conductance. Plants experiencing higher CO2 concentrations may benefit from a greater ability to gain nutrients from mycorrhizal fungi in the sugar-for-nutrients transaction. The same interaction may also increase the amount of carbon stored in the soil by mycorrhizal fungi.
Observations and trends
From 2002 to 2014, plants appear to have gone into overdrive, starting to pull more CO2 out of the air than they have done before. The result was that the rate at which CO2 accumulates in the atmosphere did not increase during this time period, although previously, it had grown considerably in concert with growing greenhouse gas emissions.
A 1993 review of scientific greenhouse studies found that a doubling of CO2 concentration would stimulate the growth of 156 different plant species by an average of 37%. The response varied significantly by species, with some showing much greater gains and a few showing a loss. For example, a 1979 greenhouse study found that with doubled CO2 concentration the dry weight of 40-day-old cotton plants doubled, but the dry weight of 30-day-old maize plants increased by only 20%.
In addition to greenhouse studies, field and satellite measurements attempt to understand the effect of increased CO2 in more natural environments. In free-air carbon dioxide enrichment (FACE) experiments, plants are grown in field plots and the CO2 concentration of the surrounding air is artificially elevated. These experiments generally use lower CO2 levels than the greenhouse studies. They show lower gains in growth than greenhouse studies, with the gains depending heavily on the species under study. A 2005 review of 12 experiments at 475–600 ppm showed an average gain of 17% in crop yield, with legumes typically showing a greater response than other species and C4 plants generally showing less. The review also stated that the experiments have their own limitations: the studied CO2 levels were lower, and most of the experiments were carried out in temperate regions. Satellite measurements found increasing leaf area index for 25% to 50% of Earth's vegetated area over the past 35 years (i.e., a greening of the planet), providing evidence for a positive CO2 fertilization effect.
Depending on the environment, there are differential responses to elevated atmospheric CO2 between major 'functional types' of plant, such as C3 and C4 plants, or more and less woody species, which has the potential among other things to alter competition between these groups. Increased CO2 can also lead to increased carbon : nitrogen ratios in the leaves of plants, or to other changes in leaf chemistry, possibly changing herbivore nutrition. Studies show that doubled concentrations of CO2 will produce an increase in photosynthesis in C3 plants but not in C4 plants. However, it is also shown that C4 plants are able to persist in drought better than C3 plants.
Experimentation by enrichment
The effects of CO2 enrichment can be most simply attained in a greenhouse (see greenhouse carbon dioxide enrichment for its agricultural use). However, for experimentation, the results obtained in a greenhouse would be doubted because it introduces too many confounding variables. Open-air chambers have been similarly doubted, with some critiques attributing, e.g., a decline in mineral concentrations found in these CO2-enrichment experiments to constraints put on the root system. The current state of the art is the FACE methodology, in which CO2 is put out directly in the open field. Even then, there are doubts over whether the results of FACE in one part of the world apply to another.
Free-Air CO2 Enrichment (FACE) experiments
The Oak Ridge National Laboratory (ORNL) conducted FACE experiments in which CO2 levels were increased above ambient levels in forest stands. These experiments showed:
Increased root production stimulated by increased CO2, resulting in more soil carbon.
An initial increase of net primary productivity, which was not sustained.
Faster decline in nitrogen availability in CO2-enriched forest plots.
Change in plant community structure, with minimal change in microbial community structure.
Enhanced CO2 cannot significantly increase the leaf carrying capacity or leaf area index of an area.
FACE experiments have been criticized as not being representative of the entire globe. These experiments were not meant to be extrapolated globally. Similar experiments are being conducted in other regions such as in the Amazon rainforest in Brazil.
Pine trees
Duke University conducted a study in which a loblolly pine plantation was dosed with elevated levels of CO2. The study showed that the pines did indeed grow faster and stronger. They were also less prone to damage during ice storms, which is a factor that limits loblolly growth farther north. The forest did relatively better during dry years. The hypothesis is that the limiting factors in the growth of the pines are nutrients such as nitrogen, which is in deficit on much of the pine land in the Southeast. In dry years, however, the trees do not bump up against those factors, since they are growing more slowly because water is the limiting factor. When rain is plentiful, trees reach the limits of the site's nutrients and the extra CO2 is not beneficial. Most forest soils in the Southeastern region are deficient in nitrogen and phosphorus as well as trace minerals. Pine forests often sit on land that was used for cotton, corn or tobacco. Since these crops depleted originally shallow and infertile soils, tree farmers must work to improve soils.
Impacts on human nutrition
See also
Effects of climate change on agriculture
References
External links
4. The CO2 fertilization effect: higher carbohydrate production and retention as biomass and seed yield
CO2 fertilization
Atmosphere of Earth
Carbon dioxide
Greenhouse gases
Mineral deficiencies | CO2 fertilization effect | [
"Chemistry",
"Environmental_science"
] | 1,816 | [
"Greenhouse gases",
"Carbon dioxide",
"Environmental chemistry"
] |
51,849,458 | https://en.wikipedia.org/wiki/Zcash | Zcash is a privacy-focused cryptocurrency which is based on Bitcoin's codebase. It shares many similarities, such as a fixed total supply of 21 million units.
Transactions can be transparent, similar to bitcoin transactions, or they can be shielded transactions which use a type of zero-knowledge proof to provide anonymity in transactions. Zcash coins are either in a transparent pool or a shielded pool.
Zcash offers private transactors the option of "selective disclosure", allowing a user to prove payment for auditing purposes. One such reason is to make it easier for private transactors to comply with anti-money laundering laws and tax regulations.
Use
Zcash transactions can be transparent, similar to bitcoin transactions, in which case they are controlled by a "t-addr", or they can be shielded and are controlled by a "z-addr". A shielded transaction uses a type of zero-knowledge proof, specifically a non-interactive zero-knowledge proof called "zk-SNARK", which provides anonymity to the coin holders in the transaction. Zcash coins are either in a transparent pool or a shielded pool. As of December 2017, only around 4% of Zcash coins were in the shielded pool; at that time most cryptocurrency wallet programs did not support z-addrs and no web-based wallets supported them. The shielded pool of Zcash coins was further analyzed for security, and it was found that the anonymity set can be shrunk considerably by heuristics-based identifiable patterns of usage.
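To make the shielded-pool mechanics concrete, the following toy Python sketch illustrates the commitment/nullifier idea that underlies shielded notes. It is only an illustration under simplifying assumptions: real Zcash uses a specific note-commitment scheme and proves spend validity with zk-SNARKs rather than revealing anything, and the hash constructions and field layout here are invented for clarity.

```python
# Toy illustration of the commitment/nullifier idea behind shielded pools.
# This is NOT Zcash's actual protocol: real shielded transactions prove
# statements about such commitments with zk-SNARKs instead of revealing them.
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# A "note" commits to a value with a random trapdoor, hiding the amount.
value = (42).to_bytes(8, "big")          # amount (units are illustrative)
trapdoor = secrets.token_bytes(32)       # randomness known only to the owner
commitment = h(value, trapdoor)          # published on-chain; reveals nothing

# Spending later reveals a nullifier derived from a secret spending key,
# so the same note cannot be spent twice without being detected publicly.
spending_key = secrets.token_bytes(32)
nullifier = h(spending_key, commitment)

print(commitment.hex())   # added to the shielded pool's commitment tree
print(nullifier.hex())    # recorded on spend to prevent double-spending
```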
While miners receive 80% of a block reward, 20% is given to the "Zcash development fund": 8% to Zcash Open Major Grants, 7% to Electric Coin Co., and 5% to The Zcash Foundation.
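A minimal sketch of the block-reward split described above; the 3.125 ZEC subsidy used here is illustrative, since Zcash's block subsidy halves over time.

```python
# Split a block reward according to the percentages described above.
def split_reward(reward_zec: float) -> dict:
    shares = {
        "miners": 0.80,
        "Zcash Open Major Grants": 0.08,
        "Electric Coin Co.": 0.07,
        "Zcash Foundation": 0.05,
    }
    return {name: reward_zec * frac for name, frac in shares.items()}

for recipient, amount in split_reward(3.125).items():
    print(f"{recipient}: {amount:.5f} ZEC")
```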
History
Development work on Zcash began in 2013 by Johns Hopkins University professor Matthew Green and some of his graduate students. The development was completed by the for-profit Zcash Company, led by Zooko Wilcox, a Colorado based computer security specialist and cypherpunk. In October 2016, The Zcash Company raised over $3 million from Silicon Valley venture capitalists to complete the development of Zcash.
Zcash was first mined in late October 2016. The initial demand was high, and within a week Zcash coins were trading for five thousand dollars apiece. Ten percent of all coins mined for the first four years were to be allotted to the Zcash Company, its employees, the investors, and the non-profit Zcash Foundation.
The setup of Zcash required the careful execution of a trusted setup procedure, subsequently known as "The Ceremony", to create the Zcash private key. In order to ensure privacy, a truly random enormous number needed to be generated to be used as the private key, while also ensuring that no person or computer retained a copy of the key, or could subsequently regenerate the key. If the private key were available, counterfeit Zcash coins could be generated. The Ceremony was a two-day process, executed simultaneously during a short window of time in six different locations globally, by persons who did not know in advance who else was going to be participating in the event. The private key was generated and used to instantiate Zcash, and the computers used in the process were reportedly destroyed. In 2022, Edward Snowden claimed to have participated in The Ceremony under a pseudonym.
On February 21, 2019, the "Zcash Company" announced a re-branding as the Electric Coin Company (ECC).
On May 19, 2020, a paper titled "Alt-Coin Traceability" investigated the privacy of Zcash and another cryptocurrency, Monero. This paper concluded that "more academic research is needed in Zcash overall" and that the privacy guarantees of Zcash are "questionable". The paper claimed that, since the heuristics from a 2018 Usenix Security Symposium paper entitled "An Empirical Analysis of Anonymity in Zcash" still apply today, Zcash is less anonymous and more traceable as a result.
On June 8, 2020, Chainalysis added support for Zcash to their Chainalysis Reactor and "Know Your Transaction" (KYT) products. They noted that less than 1% of ZEC transactions were completely shielded, with the sender, receiver and amount all hidden, enabling Chainalysis to provide partial information for over 99% of ZEC activity. Chainalysis also cites a research report by the RAND corporation which revealed that less than 0.2% of the cryptocurrency addresses mentioned on the dark web were Zcash or Dash addresses.
On October 12, 2020, the Electric Coin Company announced a new non-profit 501(c)3 organization called the Bootstrap Project (Bootstrap) in a company blog post titled "ECC’s owners to donate ECC". A majority of the investors and owners of Zerocoin Electric Coin Company LLC (ECC) have agreed to donate the ECC company as the wholly owned property of Bootstrap. ECC's blog post claims that nothing will change within the company other than the ownership, including the Board of Directors. On October 27, 2020, ECC announced that its shareholders had officially voted in favor of donating 100 percent of the company's shares to Bootstrap. On March 30, 2021, the company's transparency report said that it is "now a wholly owned entity of the 501(c)3 Bootstrap".
In September 2023, a mining pool named ViaBTC seized control of over half the hashing power on Zcash. This dominance raised worries about a 51% attack, in which the pool could potentially manipulate transactions and harm the network. To shield users from the potential fallout, Coinbase swiftly enacted a series of defensive measures, including placing Zcash markets into "limit-only" mode, effectively quelling significant price swings while the situation unfolded.
See also
Legality of bitcoin by country
Zerocoin protocol
SNARK
References
External links
Cryptocurrency projects
Application layer protocols
Software using the MIT license
Cryptography
Private currencies
Internet properties established in 2016 | Zcash | [
"Mathematics",
"Engineering"
] | 1,316 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
51,850,608 | https://en.wikipedia.org/wiki/Digital%20therapeutics | Digital therapeutics, a subset of digital health, are evidence-based therapeutic interventions driven by high quality software programs to prevent, manage, or treat a medical disorder or disease. Digital therapeutic companies should publish trial results inclusive of clinically meaningful outcomes in peer-reviewed journals. The treatment relies on behavioral and lifestyle changes usually spurred by a collection of digital impetuses. Because of the digital nature of the methodology, data can be collected and analyzed as both a progress report and a preventative measure. Treatments are being developed for the prevention and management of a wide variety of diseases and conditions, including type 1 & type II diabetes, congestive heart failure, obesity, Alzheimer's disease, dementia, asthma, substance abuse, ADHD, hypertension, anxiety, depression, and several others. Digital therapeutics often employ strategies rooted in cognitive behavioral therapy.
Definitions
Although digital therapeutics can be employed in numerous ways, the term can broadly be defined as a treatment or therapy that utilizes digital and often Internet-based health technologies to spur changes in patient behavior. The use of digital products to improve health outcomes dates as far back as 2000. The term itself has been in use since around 2012. The first mention of the term in a peer-reviewed research publication was in 2015, in which Dr. Cameron Sepah formally defined the field as: "Digital therapeutics are evidence-based behavioral treatments delivered online that can increase accessibility and effectiveness of health care." Digital therapeutics can be used as a standalone therapy or in conjunction with more conventional treatments like pharmacological or in-person therapy. As of 2018, digital therapeutics continues to be an evolving field that medical professionals, students, and patients are beginning to utilize.
The Digital Therapeutics Alliance states: "Digital therapeutics (DTx) deliver evidence-based therapeutic interventions to patients that are driven by high quality software programs to prevent, manage, or treat a broad spectrum of physical, mental, and behavioral conditions." Digital therapeutics are different from wellness apps or medication reminders in that they require rigorous clinical evidence to substantiate intended use and impact on disease state.
It is often used as a preventive measure for patients who are at risk of developing more serious conditions. For instance, a patient with prediabetes may be prescribed digital therapeutics as a method to change their diet and behavior that could otherwise lead to a diabetes diagnosis. Digital therapeutics can also be used as a treatment option for existing conditions. For instance, a patient with type II diabetes can use digital therapeutics to manage the disease more effectively.
The methodology uses a variety of digital implements to help manage, monitor, and prevent illnesses in at-risk patients. These include mobile devices and technologies, apps, sensors, desktop computers, and various Internet of Things devices. These implements can collect a wide variety of data, ranging from big to small. Digital therapeutics can theoretically collect a high volume of data from a variety of sources. It also collects "smaller" data, "capturing personalized physiological parameters, behavior patterns and social and geographical patterns that can be recorded from multiple digital sources."
Methodologies
Digital therapeutics can be used for a variety of conditions. There is no single methodology used in the practice of digital therapeutics. Many approaches use methods based upon cognitive behavioral therapy to spur patients to make lifestyle changes, reinforced with gamification, peer support, and in some cases telehealth such as coaching or psychotherapy. The method can be used to manage and improve outcomes in numerous conditions, including type II diabetes, Alzheimer's disease, dementia, congestive heart failure, chronic obstructive pulmonary disease, asthma, lung disease, obesity, substance abuse, ADHD, insomnia, hypertension, anxiety, depression, and others.
Methodologies can be as simple as psychoeducation or sending notifications designed to alter behavior to patients who are at risk of obesity or diabetes and as complex as administering an ingestible radio tag that communicates with an external sensor to monitor the efficacy of a given medication. Diabetes and obesity prevention and management is a major focus in the field of digital therapeutics. Connected devices like insulin pumps, blood glucose meters, and wearable devices can all send data to a unified system. The therapy also uses self-reported data like diet or other lifestyle factors. It is also often used to monitor the potential for heart and lung conditions and change behaviors like smoking, poor diet, or a lack of exercise.
Digital therapeutics can also be used to treat patients with psychological and neurological symptoms. For example, patients with disorders like ADHD, depression, and anxiety can receive cognitive behavioral therapy via their mobile devices. One study looked at the efficacy of avatar-based therapeutic interventions to reduce depressive symptoms. There is also evidence of how Avatar therapy can help in reducing the distress caused by voices in schizophrenia. Another study analyzed seven clinical trials to demonstrate the efficacy of a digital therapeutic in significantly reducing blood pressure. A preliminary study suggested that a mobile mindfulness app may be able to decrease acute stress while improving mood.
Outcomes
The general consensus among researchers in the field of digital therapeutics is that the discipline requires more clinical data and investigation to be fully evaluated. A variety of studies have been conducted to evaluate the efficacy and impact of behavior change techniques that utilize a digital platform, however. In a meta-analysis of 85 such studies comprising a total sample size of over 43,000 participants, researchers discovered that digital therapeutics have a "statistically small but significant effect on health-related behavior." The study also showed that a broader use of theory, behavior change techniques, and modes of delivery (especially regular notifications) improved the efficacy of a given program.
Individual studies have also showed some benefits for patients. For instance, a diabetes prevention program using digital therapeutics saw participants lose an average of 4.7% of baseline body weight after 1 year (4.2% after 2 years) and undergo a 0.38% reduction in A1c levels after 1 year (0.43% after 2 years). Another weight loss pilot program using digital therapeutics reported a mean weight loss of 13.5 pounds (or 7.3% of baseline) with a significant average drop in both systolic and diastolic blood pressure (18.6 mmHg and 6.4 mmHg respectively). The study also saw a slight but statistically insignificant drop in total cholesterol, LDL, triglycerides, and A1c.
That said, the concept of outcomes and impact in the context of digital therapeutics is generally defined more broadly than strictly health outcomes. A recent review of 244 studies highlighted that, on top of improving health outcomes in certain circumstances as outlined above, the value of digital therapeutics can also arise from improving healthcare access for underserved populations, addressing health inequalities, or reducing healthcare expenditure. This broader interpretation has also called into question the fitness of traditional health technology assessment frameworks and given rise to calls to develop novel frameworks that embrace the variety of impacts that digital therapeutics can have.
Regulation of digital therapeutics
While a broad range of unregulated health apps have historically been available in the App Store (iOS/iPadOS) or Google Play since their launch, many of these have been found to produce inconsistent, misleading, or dangerous results. In response, regulators in the United States such as the Food and Drug Administration have developed regulatory frameworks such as software as a medical device (SaMD) which require manufacturers to prove that their apps are safe, effective, and that rigorous quality management system processes are in place to ensure that remains the case as software updates occur. However, in an assessment of 4,936 apps that fall under the SaMD umbrella in the United States published in 2019, only 105 (2.13%) included a specific summary of their cybersecurity content. While the content of cybersecurity in SaMD devices was observed to be growing in newer SaMD applications, the (perceived) absence of clear cybersecurity measures in the vast majority of SaMD devices remains a substantial barrier for patients and health professionals to start using digital therapeutics in practice.
In the European Union, regulation (EU) 2017/745, commonly known as "EU MDR" classifies potential digital therapeutics in terms of their intended use, application, and potential to cause harm. Because digital therapeutics are increasingly operating within a regulated environment, the degree of documentation and regulatory compliance (such as ISO 13485 or the CE mark) has increased too.
Reimbursement and commercialization
Unlike medication or the billable hours of healthcare professionals, there are not currently clear pathways to reimbursement in most health systems. While many manufacturers seek FDA clearance or approval, this is only the first step on the path to being reimbursed for the use of a DTX. Accordingly, digital therapeutics companies pursue a range of business models:
Direct-to-consumer (DTC) - A digital therapeutic available directly through a smartphone's app store. Business models available include Freemium, subscriptions, advertising, and selling the anonymized data to third parties
Business-to-business (B2B) - Employers may wish to make digital therapeutics available for their workforce, either directly as a benefit or via a benefits administration company
Pharmaceutical industry partnership - Jointly developing solutions either to directly treat patients or to be used adjunctively i.e. in tandem with a drug, where the DTx may serve to enhance the effectiveness of the drug by promoting medication adherence or having its own effect. While such partnerships can bring in multi-million dollar revenues, ultimately the pharmaceutical company itself will seek reimbursement and is potentially not the ultimate customer.
Payer reimbursement - Being paid directly by a health system or health insurance company. While still in a nascent state in most countries, lobbying efforts are underway in the US for there to be specific reimbursement codes for DTx. Germany is the first country with specific processes for reimbursement (known as DiGa), with France, Italy, and the UK currently considering new pathways to reimbursement. Although German physicians and insurers can prescribe digital therapeutics and be reimbursed at a national level for around 300 Euro per course of treatment, adoption to date has been slow.
To help build the business case for their usage, DTx companies often commission health economics evaluations of their interventions to show that their use ultimately lowers healthcare costs in the medium-long term, such as by reducing the need for hospital admissions or expensive surgeries.
See also
List of digital therapeutics companies
References
Health informatics
Behavior modification | Digital therapeutics | [
"Biology"
] | 2,170 | [
"Behavior",
"Health informatics",
"Behavior modification",
"Behaviorism",
"Human behavior",
"Medical technology"
] |
51,856,036 | https://en.wikipedia.org/wiki/Khinchin%27s%20theorem%20on%20the%20factorization%20of%20distributions | Khinchin's theorem on the factorization of distributions says that every probability distribution P admits (in the convolution semi-group of probability distributions) a factorization
where P1 is a probability distribution without any indecomposable factor and P2 is a distribution that is either degenerate or is representable as the convolution of a finite or countable set of indecomposable distributions. The factorization is not unique, in general.
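A concrete finite instance of factorization in the convolution semigroup: a binomial distribution factors as the n-fold convolution of Bernoulli distributions, which are indecomposable. The short Python sketch below verifies this numerically.

```python
# Binomial(n, p) as the n-fold convolution of the (indecomposable)
# Bernoulli(p) distribution, illustrating factorization in the
# convolution semigroup of probability distributions.
import numpy as np
from math import comb

p, n = 0.3, 3
bernoulli = np.array([1 - p, p])          # pmf on {0, 1}

pmf = np.array([1.0])                     # point mass at 0 (identity element)
for _ in range(n):
    pmf = np.convolve(pmf, bernoulli)     # convolution = adding an independent copy

binomial = np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])
assert np.allclose(pmf, binomial)
print(pmf)    # [0.343 0.441 0.189 0.027]
```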
The theorem was proved by A. Ya. Khinchin for distributions on the line, and later it became clear that it is valid for distributions on considerably more general groups. A broad class of topological semi-groups is known, including the convolution semi-group of distributions on the line, in which factorization theorems analogous to Khinchin's theorem are valid.
References
Theory of probability distributions | Khinchin's theorem on the factorization of distributions | [
"Mathematics"
] | 186 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
51,859,663 | https://en.wikipedia.org/wiki/Starobinsky%20inflation | Starobinsky inflation is a modification of general relativity used to explain cosmological inflation. It was the first model to describe how the universe could have gone through an extremely rapid period of exponential expansion.
History
In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky originally used the semi-classical Einstein equations with free quantum matter fields. However, it was soon realized that the late time inflation which is relevant for observable universe was essentially controlled by the contribution from a squared Ricci scalar in the effective action
$$S = \frac{1}{2\kappa} \int d^4x \, \sqrt{-g} \left( R + \frac{R^2}{6M^2} \right),$$

where $\kappa = 8\pi G$ and $R$ is the Ricci scalar. This action corresponds to the potential

$$V(\phi) = \frac{3}{4} M^2 M_P^2 \left( 1 - e^{-\sqrt{2/3}\,\phi/M_P} \right)^2$$

in the Einstein frame. As a result, the inflationary scenario associated to this potential or to an action including an $R^2$ term is referred to as Starobinsky inflation. To distinguish, models using the original, more complete, quantum effective action are then called (trace-)anomaly induced inflation.
Observables
Starobinsky inflation gives predictions for primordial observables such as the spectral tilt $n_s$ and the tensor-to-scalar ratio $r$:

$$n_s \approx 1 - \frac{2}{N}, \qquad r \approx \frac{12}{N^2},$$

where $N$ is the number of e-foldings since horizon crossing. For $N \approx 50$–$60$, these are compatible with experimental data, with 2018 CMB data from the Planck satellite giving constraints of $r < 0.056$ (95% confidence) and $n_s = 0.9649 \pm 0.0042$ (68% confidence). The model also gives precise predictions for higher-order observables, such as a negative running of the scalar spectral tilt, $\alpha_s \approx -2/N^2$, and a negative tensor tilt, $n_t \approx -3/(2N^2)$.
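A short Python sketch evaluating these leading-order predictions for typical e-folding numbers; the Planck figures in the comment are the constraints quoted above.

```python
# Leading-order Starobinsky predictions for typical e-folding numbers,
# against the Planck 2018 bounds quoted above:
#   n_s = 0.9649 +/- 0.0042 (68% CL),  r < 0.056 (95% CL)
for N in (50, 55, 60):
    n_s = 1 - 2 / N
    r = 12 / N**2
    running = -2 / N**2          # alpha_s, running of the spectral tilt
    print(f"N = {N}: n_s = {n_s:.4f}, r = {r:.4f}, alpha_s = {running:.1e}")
```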
See also
Alternatives to general relativity
Inflation (cosmology)
References
Inflation (cosmology)
General relativity | Starobinsky inflation | [
"Physics"
] | 436 | [
"General relativity",
"Theory of relativity"
] |
51,860,534 | https://en.wikipedia.org/wiki/Infinite%20derivative%20gravity | Infinite derivative gravity is a theory of gravity which attempts to remove cosmological and black hole singularities by adding extra terms to the Einstein–Hilbert action, which weaken gravity at short distances.
History
In 1987, Krasnikov considered an infinite set of higher derivative terms acting on the curvature terms and showed that by choosing the coefficients wisely, the propagator would be ghost-free and exponentially suppressed in the ultraviolet regime. Tomboulis (1997) later extended this work. By looking at an equivalent scalar-tensor theory, Biswas, Mazumdar and Siegel (2005) looked at bouncing FRW solutions. In 2011, Biswas, Gerwick, Koivisto and Mazumdar demonstrated that the most general infinite derivative action in 4 dimensions, around constant curvature backgrounds, parity invariant and torsion free, can be expressed by:
$$S = \int d^4x\, \sqrt{-g} \left[ \frac{M_P^2}{2} R + R\, F_1\!\left(\frac{\Box}{M^2}\right) R + R_{\mu\nu}\, F_2\!\left(\frac{\Box}{M^2}\right) R^{\mu\nu} + C_{\mu\nu\lambda\sigma}\, F_3\!\left(\frac{\Box}{M^2}\right) C^{\mu\nu\lambda\sigma} \right],$$

where the $F_i$ are functions of the d'Alembertian operator $\Box$ and a mass scale $M$, $R$ is the Ricci scalar, $R_{\mu\nu}$ is the Ricci tensor and $C_{\mu\nu\lambda\sigma}$ is the Weyl tensor. In order to avoid ghosts, the propagator (which is a combination of the $F_i$) must be the exponential of an entire function. A lower bound was obtained on the mass scale of IDG using experimental data on the strength of gravity at short distances, as well as by using data on inflation and on the bending of light around the Sun. The GHY boundary terms were found using the ADM 3+1 spacetime decomposition. One can show that the entropy for this theory is finite in various contexts.
The effect of IDG on black holes and the propagator was examined by Modesto. Modesto further looked at the renormalisability of the theory, as well as showing that it could generate "super-accelerated" bouncing solutions instead of a big bang singularity. Calcagni and Nardelli investigated the effect of IDG on the diffusion equation. IDG modifies the way gravitational waves are produced and how they propagate through space. The amount of power radiated away through gravitational waves by binary systems is reduced, although this effect is far smaller than the current observational precision. The theory is shown to be stable and to propagate a finite number of degrees of freedom.
Avoidance of singularities
This action can produce a bouncing cosmology, by taking a flat FRW metric with a suitable non-singular (bouncing) scale factor, thus avoiding the cosmological singularity problem. The propagator around a flat space background was obtained in 2013.
This action avoids a curvature singularity for a small perturbation to a flat background near the origin, while recovering the $1/r$ fall-off of the GR potential at large distances. This is done using the linearised equations of motion, which are a valid approximation because if the perturbation is small enough and the mass scale is large enough, then the perturbation will always be small enough that quadratic terms can be neglected. It also avoids the Hawking–Penrose singularity in this context.
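A minimal numerical sketch of this behaviour, assuming the frequently studied ghost-free form factor $a(\Box) = e^{-\Box/M^2}$, for which the linearized potential of a point mass is $\Phi(r) = -(Gm/r)\,\mathrm{erf}(Mr/2)$: finite at the origin while recovering the Newtonian $-Gm/r$ fall-off at large distances. The unit values below are illustrative.

```python
# Linearized IDG potential for the form factor a(box) = exp(-box/M^2):
#   Phi(r) = -(G m / r) * erf(M r / 2)
# Finite as r -> 0 (Phi -> -G m M / sqrt(pi)) and Newtonian at large r.
from math import erf, sqrt, pi

G, m, M = 1.0, 1.0, 1.0   # geometrized units, illustrative values

def phi_idg(r: float) -> float:
    if r == 0.0:
        return -G * m * M / sqrt(pi)   # limit of erf(M*r/2)/r as r -> 0
    return -G * m * erf(M * r / 2) / r

def phi_newton(r: float) -> float:
    return -G * m / r

for r in (0.0, 0.1, 1.0, 5.0, 20.0):
    newton = "-inf" if r == 0 else f"{phi_newton(r):+.4f}"
    print(f"r = {r:5.1f}: IDG {phi_idg(r):+.4f}   Newton {newton}")
```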
Stability of black hole singularities
It was shown that in non-local gravity, Schwarzschild singularities are stable to small perturbations. Further stability analysis of black holes was carried out by Myung and Park.
Equations of motion
The equations of motion for this action are
where
References
Theories of gravity
General relativity
Albert Einstein | Infinite derivative gravity | [
"Physics"
] | 670 | [
"Theories of gravity",
"General relativity",
"Theoretical physics",
"Theory of relativity"
] |
71,846,667 | https://en.wikipedia.org/wiki/Avraam%20I.%20Isayev | Avraam I. Isayev (born 1942) is a University of Akron professor of polymer engineering known for widely used texts on rheology and polymer molding technology, as well as for development of technology for ultrasonic devulcanization of tire rubber.
Education
Isayev was born in Azerbaijan and is a US citizen. He earned two master's degrees, the first in Chemical Engineering from Azerbaijan Institute of Oil and Chemistry in Baku (USSR) in 1964, and a second in Applied Mathematics from the Institute of Electronic Machine Building in Moscow (USSR) in 1975. He completed a doctorate in Polymer Engineering and Science at the Institute of Petrochemical Synthesis of the Academy of Sciences of the USSR in Moscow in 1970.
Career
Isayev began his career in 1970 at the Institute of Petrochemical Synthesis of the Academy of Sciences in Moscow. In 1977, he joined the Israel Institute of Technology in Israel. He joined Cornell University in 1979. He joined the Polymer Engineering department at the University of Akron in 1983. He has been a visiting professor at Kyoto University, University of Aachen, and the University of Linz. During the period from 1990 to 2009 he was the director of Molding Technology Research and Development Center (MOLDTECH). He is the Editor-in-Chief of the journal Advances in Polymer Technology.
Awards
Society of Plastics Engineers (SPE) Fellow
OMNOVA Solutions Signature University Award from the OMNOVA Solutions Foundation
1996 Outstanding Researcher Award from the University of Akron
1999 Melvin Mooney Distinguished Technology Award from Rubber Division of the ACS
George Stafford Whitby Award from Rubber Division of the ACS
Silver Medal from the Institute of Materials (London)
1999 Vinogradov Prize from the G. V. Vinogradov Society of Rheology (Moscow)
NorTech Award given by Crain Publishers
James L. White Award of Polymer Processing Society
SPE International Award
References
1942 births
Polymer scientists and engineers
20th-century American engineers
Azerbaijan State Oil and Industry University alumni
Living people
University of Akron faculty | Avraam I. Isayev | [
"Chemistry",
"Materials_science"
] | 403 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
71,850,060 | https://en.wikipedia.org/wiki/HD%20104555 | HD 104555, also known as HR 4595, is a star located in the southern circumpolar constellation Octans. It has an apparent magnitude of 6.02, allowing it to be faintly visible to the naked eye. Based on parallax measurements from Gaia Data Release 3, it is estimated to be 336 light years distant. It appears to be receding from the Solar System, having a heliocentric radial velocity of .
This is an evolved, orange hued giant star with a stellar classification of K3 III. It is currently on the horizontal branch, generating energy via helium fusion at its core. It has twice the mass of the Sun but at 955 million years old, it has expanded to 9.82 times the Sun's girth. It radiates 60 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 104555 has an iron abundance 12% below solar levels, making it slightly metal deficient. Like most giants, it spins slowly, having a projected rotational velocity lower than .
HIP 58713 is an 8th magnitude co-moving star located away along a position angle of . It is a main sequence star with a spectral class of F8, and is estimated to be around the same distance as HD 104555.
References
K-type giants
Horizontal-branch stars
Octans
Octantis, 12
PD-84 00371
104555
058697
4595
Double stars
F-type main-sequence stars | HD 104555 | [
"Astronomy"
] | 308 | [
"Octans",
"Constellations"
] |
71,855,084 | https://en.wikipedia.org/wiki/Passive%20daytime%20radiative%20cooling | Passive daytime radiative cooling (PDRC) (also passive radiative cooling, daytime passive radiative cooling, radiative sky cooling, photonic radiative cooling, and terrestrial radiative cooling) is the use of unpowered, reflective/thermally-emissive surfaces to lower the temperature of a building or other object.
It has been proposed as a method of reducing temperature increases caused by greenhouse gases by reducing the energy needed for air conditioning, lowering the urban heat island effect, and lowering human body temperatures.
PDRCs can aid systems that are more efficient at lower temperatures, such as photovoltaic systems, dew collection devices, and thermoelectric generators.
Some estimates propose that dedicating 1–2% of the Earth's surface area to PDRC would stabilize surface temperatures. Regional variations provide different cooling potentials, with desert and temperate climates benefiting more than tropical climates, attributed to the effects of humidity and cloud cover. PDRCs can be included in adaptive systems, switching from cooling to heating to mitigate any potential "overcooling" effects. PDRC application for indoor space cooling is growing, with an estimated "market size of ~$27 billion in 2025."
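A back-of-envelope check of such estimates, assuming the ~100 W/m2 average cooling power cited in this article and Earth's total surface area of about 510 million km2:

```python
# Back-of-envelope estimate of the global cooling flux from dedicating
# 1-2% of Earth's surface to PDRC, assuming the ~100 W/m^2 average
# cooling power quoted in this article.
EARTH_SURFACE_M2 = 5.1e14    # ~510 million km^2
COOLING_W_PER_M2 = 100.0     # assumed average PDRC cooling power

for fraction in (0.01, 0.02):
    area_m2 = fraction * EARTH_SURFACE_M2
    total_tw = area_m2 * COOLING_W_PER_M2 / 1e12
    print(f"{fraction:.0%} of surface: {area_m2 / 1e12:.1f} million km^2, "
          f"~{total_tw:.0f} TW of radiative cooling")
```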
PDRC surfaces are designed to be high in solar reflectance to minimize heat gain and strong in longwave infrared (LWIR) thermal radiation heat transfer matching the atmosphere's infrared window (8–13 μm). This allows the heat to pass through the atmosphere into space.
PDRCs leverage the natural process of radiative cooling, in which the Earth cools by releasing heat to space. PDRC operates during daytime. On a clear day, solar irradiance can reach 1000 W/m2 with a diffuse component between 50-100 W/m2. The average PDRC has an estimated cooling power of ~100-150 W/m2, proportional to the exposed surface area.
PDRC applications are deployed as sky-facing surfaces. Low-cost scalable PDRC materials with potential for mass production include coatings, thin films, metafabrics, aerogels, and biodegradable surfaces.
While typically white, other colors can also work, although generally offering less cooling potential.
Research, development, and interest in PDRCs has grown rapidly since the 2010s, attributable to a breakthrough in the use of photonic metamaterials to increase daytime cooling in 2014, along with growing concerns over energy use and global warming. PDRC can be contrasted with traditional compression-based cooling systems (e.g., air conditioners), which consume substantial amounts of energy, have a net heating effect (heating the outdoors more than cooling the indoors), require ready access to electric power, and often employ coolants that deplete the ozone layer or have a strong greenhouse effect.
Unlike solar radiation management, PDRC increases heat emission beyond simple reflection.
Implementation
A 2019 study reported that "widescale adoption of radiative cooling could reduce air temperature near the surface, if not the whole atmosphere." To address global warming, PDRCs must be designed "to ensure that the emission is through the atmospheric transparency window and out to space, rather than just to the atmosphere, which would allow for local but not global cooling."
Desert climates have the highest radiative cooling potential due to low year-round humidity and cloud cover, while tropical climates have less potential due to higher humidity and cloud cover. Costs for global implementation have been estimated at $1.25 to $2.5 trillion or about 3% of global GDP, with expected economies of scale. Low-cost scalable materials have been developed for widescale implementation, although some challenges toward commercialization remain.
Some studies recommended efforts to maximize solar reflectance or albedo of surfaces, with a goal of thermal emittance of 90%. For example, increasing reflectivity from 0.2 (typical rooftop) to 0.9 is far more impactful than improving an already reflective surface, such as from 0.9 to 0.97.
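The arithmetic behind this claim is simple: absorbed solar power scales with (1 − reflectivity). A quick sketch, assuming the ~1000 W/m2 clear-sky irradiance figure used in this article:

```python
# Absorbed solar power per unit area for different reflectivities,
# under an assumed ~1000 W/m^2 clear-sky irradiance.
IRRADIANCE = 1000.0   # W/m^2

def absorbed(reflectivity: float) -> float:
    return (1.0 - reflectivity) * IRRADIANCE

for rho in (0.2, 0.9, 0.97):
    print(f"reflectivity {rho:.2f}: {absorbed(rho):6.0f} W/m^2 absorbed")

# Going from 0.2 to 0.9 removes 700 W/m^2 of heat gain; going from
# 0.9 to 0.97 removes only a further 70 W/m^2.
print(round(absorbed(0.2) - absorbed(0.9)), round(absorbed(0.9) - absorbed(0.97)))
```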
Benefits
Studies have reported many PDRC benefits:
Advancing toward a carbon neutral future and achieving net-zero emissions.
Alleviating electrical grids and renewable energy sources from devoting electric energy to cooling.
Balancing the Earth's energy budget.
Cooling human body temperatures during extreme heat.
Improving atmospheric water collection systems and dew harvesting techniques.
Improving performance of solar energy systems.
Mitigating energy crises.
Mitigating urban heat island effect.
Reducing greenhouse gas emissions by replacing fossil fuel energy use devoted to cooling.
Reducing local and global temperature increases associated with global warming.
Reducing thermal pollution of water resources.
Reducing water consumption for wet cooling processing.
Other geoengineering approaches
PDRC has been claimed to be more stable, adaptable, and reversible than stratospheric aerosol injection (SAI).
Wang et al. claimed that SAI "might cause potentially dangerous threats to the Earth’s basic climate operations" that may not be reversible, and thus preferred PDRC. Munday noted that although "unexpected effects will likely occur" with the global implementation of PDRC, that "these structures can be removed immediately if needed, unlike methods that involve dispersing particulate matter into the atmosphere, which can last for decades."
When compared to the reflective surfaces approach of increasing surface albedo, such as through painting roofs white, or the space mirror proposals of "deploying giant reflective surfaces in space", Munday claimed that "the increased reflectivity likely falls short of what is needed and comes at a high financial cost." PDRC differs from the reflective surfaces approach by "increasing the radiative heat emission from the Earth rather than merely decreasing its solar absorption".
Function
The basic measure of PDRCs is their solar reflectivity (in 0.4–2.5 μm) and heat emissivity (in 8–13 μm), to maximize "net emission of longwave thermal radiation" and minimize "absorption of downward shortwave radiation". PDRCs use the infrared window (8–13 μm) for heat transfer with the coldness of outer space (~2.7 K) to radiate heat and subsequently lower ambient temperatures with zero energy input.
PDRCs mimic the natural process of radiative cooling, in which the Earth cools itself by releasing heat to outer space (Earth's energy budget), although during the daytime, lowering ambient temperatures under direct solar intensity. On a clear day, solar irradiance can reach 1000 W/m2 with a diffuse component between 50 and 100 W/m2. As of 2022 the average PDRC had a cooling power of ~100–150 W/m2. Cooling power is proportional to the installation's surface area.
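The energy balance behind these cooling-power figures can be sketched with a simplified gray-body model that neglects conduction and convection; all parameter values below are illustrative assumptions, not measurements.

```python
# Simplified steady-state energy balance for a PDRC surface (gray-body
# approximation, conduction/convection neglected):
#   P_net = eps*sigma*T_s^4 - eps*eps_sky*sigma*T_amb^4 - (1 - R_solar)*I_solar
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling(T_s, T_amb=300.0, eps=0.95, eps_sky=0.8,
                R_solar=0.97, I_solar=1000.0):
    emitted = eps * SIGMA * T_s**4                    # outgoing thermal radiation
    absorbed_sky = eps * eps_sky * SIGMA * T_amb**4   # downwelling atmospheric
    absorbed_sun = (1 - R_solar) * I_solar            # unreflected sunlight
    return emitted - absorbed_sky - absorbed_sun

for T_s in (290.0, 295.0, 300.0):
    print(f"T_s = {T_s:.0f} K: net cooling ~ {net_cooling(T_s):6.1f} W/m^2")
```

With these assumed values the surface still cools slightly a few kelvin below ambient, consistent with the sub-ambient temperatures reported for real PDRC surfaces.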
Measuring effectiveness
The most useful measurements come in a real-world setting. Standardized devices have been proposed.
Evaluating atmospheric downward longwave radiation based on "the use of ambient weather conditions such as the surface air temperature and humidity instead of the altitude-dependent atmospheric profiles," may be problematic since "downward longwave radiation comes from various altitudes of the atmosphere with different temperatures, pressures, and water vapor contents" and "does not have uniform density, composition, and temperature across its thickness."
Broadband emitters (BE) vs. selective emitters (SE)
Broadband emitters possess high emittance in both the solar spectrum and atmospheric LWIR window (8 to 14 μm), whereas selective emitters only emit longwave infrared radiation.
In theory, selective thermal emitters can achieve higher cooling power. However, selective emitters face challenges in real-world applications that can weaken their performance, such as from dropwise condensation (common even in semi-arid climates) that can accumulate on even hydrophobic surfaces and reduce emission. Broadband emitters outperform selective materials when "the material is warmer than the ambient air, or when its sub-ambient surface temperature is within the range of several degrees".
Each type can be advantageous for certain applications. Broadband emitters may be better for horizontal applications, such as roofs, whereas selective emitters may be more useful on vertical surfaces such as building facades, where dropwise condensation is inconsequential and their stronger cooling power can be achieved.
Broadband emitters can be made angle-dependent to potentially enhance performance. Polydimethylsiloxane (PDMS) is a common broadband emitter. Most PDRC materials are broadband, primarily due to their lower cost and higher performance at above-ambient temperatures.
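The broadband-versus-selective distinction can be quantified by integrating Planck's law over the relevant bands. The sketch below compares an ideal broadband emitter with an ideal 8–13 μm selective emitter at 300 K, assuming idealized unit emissivities that real materials only approximate.

```python
# Compare hemispherical thermal emission at T = 300 K for an ideal
# broadband emitter (emissivity 1 everywhere) with an ideal selective
# emitter (emissivity 1 only inside the 8-13 um atmospheric window).
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(lam, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

T = 300.0
lam = np.linspace(0.3e-6, 100e-6, 200_000)   # wavelength grid, 0.3-100 um
spectral = np.pi * planck(lam, T)            # Lambertian hemispherical power

broadband = trapezoid(spectral, lam)
in_window = (lam >= 8e-6) & (lam <= 13e-6)
selective = trapezoid(spectral[in_window], lam[in_window])

print(f"broadband emitter: {broadband:6.1f} W/m^2")   # close to sigma*T^4 ~ 459
print(f"selective emitter: {selective:6.1f} W/m^2")   # roughly a third of that
```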
Hybrid systems
Combining PDRCs with other systems may increase their cooling power. When included in a combined thermal insulation, evaporative cooling, and radiative cooling system consisting of "a solar reflector, a water-rich and IR-emitting evaporative layer, and a vapor-permeable, IR-transparent, and solar-reflecting insulation layer," 300% higher ambient cooling power was demonstrated. This could extend the shelf life of food by 40% in humid climates and 200% in dry climates without refrigeration. The system however requires water "re-charges" to maintain cooling power.
A dual-mode asymmetric photonic mirror (APM) consisting of silicon-based diffractive gratings could achieve all-season cooling, even under cloudy and humid conditions, as well as heating. The cooling power of APM could perform 80% more when compared to standalone radiative coolers. Under cloudy sky, it could achieve 8 °C more cooling and, for heating, 5.7 °C.
Climatic variations
The cooling potential of various areas varies primarily based on climate zones, weather patterns, and events. Dry and hot regions generally have higher radiative cooling power (up to 120 W m2), while colder regions or those with high humidity or cloud cover generally have less. Cooling potential changes seasonally due to shifts in humidity and cloud cover. Studies mapping daytime radiative cooling potential have been done for China, India, the United States, and across Europe.
Deserts
Dry regions such as western Asia, north Africa, Australia and the southwestern United States are ideal for PDRC due to the relative lack of humidity and cloud cover across the seasons. The cooling potential for desert regions has been estimated at "in the higher range of 80–110 W m2", and 120 W m2. The Sahara Desert and western Asia is the largest area on earth with such a high cooling potential.
The cooling potential of desert regions is likely to remain relatively unfulfilled due to low population densities, reducing demand for local cooling, despite tremendous cooling potential.
Temperate climates
Temperate climates have a high radiative cooling potential and greater population density, which may increase interest in PDRCs. These zones tend to be "transitional" zones between dry and humid climates. High population areas in temperate zones may be susceptible to an "overcooling" effect from PDRCs due to temperature shifts from summer to winter, which can be overcome with the modification of PDRCs to adjust for temperature shifts.
Tropics
While PDRCs have proven successful in temperate regions, reaching the same level of performance is more difficult in tropical climes. This has primarily been attributed to the higher solar irradiance and atmospheric radiation, particularly humidity and cloud cover. The average cooling potential of tropical climates varies between 10 and 40 W m2, significantly lower than hot and dry climates.
For example, the cooling potential of most of southeast Asia and the Indian subcontinent is significantly diminished in the summer due to a dramatic increase in humidity, dropping as low as 10–30 W/m2. Other similar zones, such as tropical savannah areas in Africa, see a more modest decline during summer, dropping to 20–40 W/m2. However, tropical regions generally have a higher albedo or radiative forcing due to sustained cloud cover and thus their land surface contributes less to planetary albedo.
A 2022 study reported that a PDRC surface in tropical climates should have a solar reflectance of at least 97% and an infrared emittance of at least 80% to reduce temperatures. The study applied a coating with a "solar reflectance and infrared emittance (8–13 μm) of 98.4% and 95% respectively" in the tropical climate of Singapore and achieved a "sustained daytime sub-ambient temperature of 2°C" under direct solar intensity of 1000 W m2.
Variables
Humidity and cloud coverage
Humidity and cloud coverage significantly weaken PDRC effectiveness. A 2022 study noted that "vertical variations of both vapor concentration and temperature in the atmosphere" can have a considerable impact on radiative coolers. The authors reported that aerosol and cloud coverage can weaken the effectiveness of radiators and thus concluded that adaptable "design strategies of radiative coolers" are needed to maximize effectiveness under these climatic conditions.
Dropwise condensation
The formation of dropwise condensation on PDRC surfaces can alter the infrared emittance of selective PDRC emitters, which can weaken their performance. Even in semi-arid environments, dew formation can occur. A 2022 study reported that condensation "may broaden the narrowband emittances of the selective emitter and reduce their sub-ambient cooling power and their supposed cooling benefits over broadband emitters" and that:

Our work shows that the assumed benefits of selective emitters are even smaller when it comes to the largest application of radiative cooling – cooling roofs of buildings. However, recently, it has been shown that for vertical building facades experiencing broadband summertime terrestrial heat gains and wintertime losses, selective emitters can achieve seasonal thermoregulation and energy savings. Since dew formation appears less likely on vertical surfaces even in exceptionally humid environments, the thermoregulatory benefits of selective emitters will likely persist in both humid and dry operating conditions.
Rain
Rain can generally help clean PDRC surfaces covered with dust, dirt, or other debris. However, in humid areas, consistent rain can result in water accumulation that can hinder performance. Porous PDRCs can mitigate these conditions. Another response is to make hydrophobic self-cleaning PDRCs. Scalable and sustainable hydrophobic PDRCs that avoid VOCs can repel rainwater and other liquids.
Wind
Wind may alter the efficiency of passive radiative cooling surfaces and technologies. A 2020 study proposed using a "tilt strategy and wind cover strategy" to mitigate wind effects. The researchers reported regional differences in China, noting that "85% of China's areas can achieve radiative cooling performance with wind cover" whereas in northwestern China wind cover effects would be more substantial. Bijarniya et al. similarly proposes the use of a wind shield in areas susceptible to high winds.
Materials and production
PDRC surfaces can be made of various materials. However, for widespread application, PDRC materials must be low cost, available for mass production, and applicable in many contexts. Most research has focused on coatings and thin films, which tend to be more available for mass production, lower cost, and more applicable in a wider range of contexts, although other materials may provide potential for specific applications.
PDRC research has identified more sustainable material alternatives, even if not fully biodegradable. A 2023 study reported that "most PDRC materials now are non-renewable polymers, artificial photonic or synthetic chemicals, which will cause excessive emissions by consuming fossil fuels and go against the global carbon neutrality goal. Environmentally friendly bio-based renewable materials should be an ideal material to devise PDRC systems."
Multilayer and complex structures
Advanced photonic materials and structures, such as multilayer thin films, micro/nanoparticles, photonic crystals, metamaterials, and metasurfaces, have been reported as potential approaches. However, while multilayer and complex nano-photonic structures have proven successful in experimental scenarios and simulations, a 2022 study reported that widespread application "is severely restricted because of the complex and expensive processes of preparation". Similarly, a 2020 study reported that "scalable production of artificial photonic radiators with complex structures, outstanding properties, high throughput, and low cost is still challenging". This has advanced research of simpler structures for PDRC materials possibly better suited for mass production.
Coatings
PDRC coatings such as paints may be advantageous given their direct application to surfaces, simplifying preparation and reducing costs, although not all coatings are inexpensive. A 2022 study stated that coatings generally offer "strong operability, convenient processing, and low cost, which have the prospect of large-scale utilization". PDRC coatings have been developed in colors other than white while still demonstrating high solar reflectance and heat emissivity.
Coatings must be durable and resistant to soiling, which can be achieved with porous PDRCs or hydrophobic topcoats that can withstand cleaning, although hydrophobic coatings use polytetrafluoroethylene or similar compounds to be water-resistant. Negative environmental impacts can be mitigated by limiting use of other toxic solvents common in paints, such as acetone. Non-toxic or water-based paints have been developed.
Porous Polymers Coating (PPC) exhibit excellent PDRC performance. These polymers have a high concentration of tiny pores, which scatter light effectively at the boundary between the polymer and the air. This scattering enhances both solar reflectance (more than 96%) and thermal emittance (97% of heat), lowering surface temperatures six degrees below the surroundings at noon in Phoenix. This process is solution-based, aiding scalability. Dye of the desired color is coated on the polymer. Compared to traditional dye in porous polymer, in which the dye is mixed in the polymer, the new design can cool more effectively.
A 2018 study reported significantly lowered coating costs, stating that "photonic media, when properly randomized to minimize the photon transport mean free path, can be used to coat a black substrate and reduce its temperature by radiative cooling." This coating could "outperform commercially available solar-reflective white paint for daytime cooling" without expensive manufacturing steps or materials.
Films
Many thin films offer high solar reflectance and heat emittance. However, films with precise patterns or structures are not scalable "due to the cost and technical difficulties inherent in large-scale precise lithography" (2022), or "due to complex nanoscale lithography/synthesis and rigidity" (2021).
The polyacrylate hydrogel film from the 2022 study has broader applications, including potential uses in building construction and large-scale thermal management systems. This research focused on a film developed for hybrid passive cooling. The film uses sodium polyacrylate, a low-cost industrial material, to achieve high solar reflectance and high mid-infrared emittance. A significant feature of this material is its ability to absorb atmospheric moisture, aiding evaporative cooling. This tripartite mechanism allows for efficient cooling under varying atmospheric conditions, including high humidity or given limited access to clear skies.
Metafabrics
PDRCs can be made of metafabrics, which can be used in clothing to shield or regulate body temperatures. Most metafabrics are made of petroleum-based fibers. For instance, a 2023 study reported that "new flexible cellulose fibrous films with wood-like hierarchical microstructures need to be developed for wearable PDRC applications."
A 2021 study chose a composite of titanium oxide and polylactic acid (TiO2-PLA) with a polytetrafluoroethylene (PTFE) lamination. The fabric underwent optical and thermal characterization, measuring like reflectivity and emissivity. Numerical simulations, including Lorenz-Mie theory and Monte Carlo simulations, were crucial in predicting the fabric's performance and guiding optimization. Mechanical testing was conducted to assess the fabric's durability, strength, and practicality.
The study reported exceptional ability to facilitate radiative cooling. The fabric achieved 94.5% emissivity and 92.4% reflectivity. This combination of high emissivity and reflectivity is central to its cooling capabilities, significantly outperforming traditional fabrics. Additionally, the fabric's mechanical properties, including strength, durability, waterproofness, and breathability, confirmed its suitability for clothing.
Aerogels
Aerogels offer a potential low-cost material scalable for mass production. Some aerogels can be considered a more environmentally friendly alternative to other materials, with degradable potential and the absence of toxic chemicals. Aerogels can be useful as thermal insulation to reduce solar absorption and parasitic heat gain to improve the cooling performance of PDRCs.
Nano bubbles
Pigments absorb light. Soap bubbles show a prism of different colors on their surfaces. These colors result from the way light interacts with differing thicknesses of the bubble's surface, termed structural color. One study reported that cellulose nanocrystals (CNCs), which are derived from the cellulose found in plants, could be made into iridescent, colorful films without added pigment. They made films with blue, green and red colors that, when placed under sunlight, were an average of nearly 7 °F cooler than the surrounding air. The film generated over 120 W m−2 of cooling power.
Biodegradable surfaces
Many proposed radiative cooling materials are not biodegradable. A 2022 study reported that "sustainable materials for radiative cooling have not been sufficiently investigated."
Micro-grating
A silica micro-grating photonic device cooled commercial silicon cells by 3.6 °C under solar intensity of 830 W m−2 to 990 W m−2.
Applications
Passive daytime radiative cooling has "the potential to simultaneously alleviate the two major problems of energy crisis and global warming" and has been described as an "environmental protection refrigeration technology." PDRCs have an array of potential applications, but are now most often applied to various aspects of the built environment, such as building envelopes, cool pavements, and other surfaces to decrease energy demand, costs, and emissions. PDRC has been applied for indoor space cooling, outdoor urban cooling, solar cell efficiency, and power plant condenser cooling, among other applications. For outdoor applications, PDRC durability is an important requirement.
Indoor space cooling
The most common application is on building envelopes, including cool roofs. A PDRC can double the energy savings of a white roof. This makes PDRCs an alternative or supplement to air conditioning that lowers energy demand and reduces air conditioning's release of hydrofluorocarbons (HFCs) into the atmosphere. HFCs can be thousands of times more potent than CO2.
Air conditioning accounts for 12%–15% of global energy usage, while emissions from air conditioning account for "13.7% of energy-related emissions, approximately 52.3 EJ yearly" or 10% of total CO2 emissions. Air conditioning use is expected to rise. However, this can be significantly reduced with the mass production of low-cost PDRCs for indoor space cooling. A multilayer PDRC surface covering 10% of a building's roof can replace 35% of air conditioning used during the hottest hours of daytime.
In suburban single-family residential areas, PDRCs can lower energy costs by 26% to 46% in the United States and lower temperatures on average by 5.1 °C. With the addition of "cold storage to utilize the excess cooling energy of water generated during off-peak hours, the cooling effects for indoor air during the peak-cooling-load times can be significantly enhanced" and air temperatures may be reduced by 6.6–12.7 °C.
In cities, PDRCs can produce significant energy and cost savings. In a study on US cities, Zhou et al. found that "cities in hot and arid regions can achieve high annual electricity consumption savings of >2200 kWh, while <400 kWh is attainable in colder and more humid cities," ranking from highest to lowest by electricity consumption savings as follows: Phoenix (~2500 kWh), Las Vegas (~2250 kWh), Austin (~2100 kWh), Honolulu (~2050 kWh), Atlanta (~1500 kWh), Indianapolis (~1200 kWh), Chicago (~1150 kWh), New York City (~900 kWh), Minneapolis (~850 kWh), Boston (~750 kWh), Seattle (~350 kWh). In a study projecting energy savings for Indian cities in 2030, Mumbai and Kolkata had a lower energy savings potential, Jaisalmer, Varansai, and Delhi had a higher potential, although with significant variations from April to August dependent on humidity and wind cover.
The growing interest and rise in PDRC application to buildings has been attributed to cost savings related to "the sheer magnitude of the global building surface area, with a market size of ~$27 billion in 2025," as estimated in a 2020 study.
Outdoor urban space cooling
PDRC surfaces can mitigate extreme heat from the urban heat island effect that occurs in over 450 cities worldwide. It can be as much as hotter in urban areas than nearby rural areas. On an average hot summer day, the roofs of buildings can be hotter than the surrounding air, warming air temperatures further through convection. Well-insulated dark rooftops are significantly hotter than all other urban surfaces, including asphalt pavements, further expanding air conditioning demand (which further accelerates global warming and urban heat island through the release of waste heat into the ambient air) and increasing risks of heat-related disease and fatal health effects.
PDRCs can be applied to building roofs and urban shelters to significantly lower surface temperatures with zero energy consumption by reflecting heat out of the urban environment and into outer space. The primary obstacle to implementation is the glare that may be caused by the reflection of visible light onto surrounding buildings. Glare can be mitigated by colored PDRC surfaces, by super-white paints with commercial high-index (n ~ 1.9) retroreflective spheres such as those of Zhai et al., or by retroreflective materials (RRM). Surrounding buildings without PDRC may weaken the cooling power of PDRCs.
Even when installed on roofs in highly dense urban areas, broadband radiative cooling panels lower surface temperatures at the sidewalk level. A 2022 study assessed the effects of PDRC surfaces, both non-modulated and modulated, in winter in the Kolkata metropolitan area; a non-modulated PDRC with a reflectance of 0.95 and an emissivity of 0.93 measurably decreased ground surface temperatures, including a substantial average daytime reduction.
While the cooling effects of broadband non-modulated PDRCs may be desirable in summer, in winter they could present an uncomfortable "overcooling" effect for city populations and thus increase energy use for heating. This can be mitigated by broadband modulated PDRCs, which the study found could raise daily ambient urban temperatures in winter. While "overcooling" is unlikely in Kolkata, elsewhere it could have unwanted impacts. Modulated PDRCs may therefore be preferred for controlled cooling in cities with warm summers and cold winters, while non-modulated PDRCs may be more beneficial for cities with hot summers and moderate winters.
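The reflectance and emissivity figures quoted above can be sanity-checked with a simple energy balance. The sketch below is a minimal gray-body model, not the model used in the study; the sky-temperature offset, solar intensity, and convection coefficient are illustrative assumptions.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_cooling_power(t_surf, t_amb, emissivity=0.93, reflectance=0.95,
                      solar=1000.0, sky_offset=10.0, h_conv=6.0):
    """Net cooling power (W/m^2) of a PDRC surface in a gray-body model.

    The sky is treated as a blackbody a few degrees below ambient and
    convection as linear in the temperature difference; all parameter
    values here are illustrative assumptions, not measured data.
    """
    t_sky = t_amb - sky_offset
    p_emitted = emissivity * SIGMA * t_surf**4  # thermal emission
    p_sky = emissivity * SIGMA * t_sky**4       # absorbed downwelling radiation
    p_sun = (1.0 - reflectance) * solar         # absorbed sunlight
    p_conv = h_conv * (t_amb - t_surf)          # convective heat gain
    return p_emitted - p_sky - p_sun - p_conv

# A surface with the quoted reflectance (0.95) and emissivity (0.93),
# sitting at ambient temperature (25 C) under full sun:
print(net_cooling_power(t_surf=298.15, t_amb=298.15))  # a few W/m^2 net cooling
```

A positive result at ambient temperature means the surface rejects heat even in full sunlight, which is the defining property of a PDRC.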
In a study on urban bus shelters, it was found that most shelters fail at providing thermal comfort for commuters, while a tree could provide more cooling. Other methods to cool shelters often involve air conditioning or other energy intensive measures. Urban shelters with PDRC roofing can significantly reduce temperatures with zero energy input, while adding "a non-reciprocal mid-infrared cover" can increase benefits by reducing incoming atmospheric radiation as well as reflecting radiation from surrounding buildings.
For outdoor urban space cooling, a 2021 study recommended that PDRC in urban areas primarily focus on increasing albedo so long as emissivity can be maintained above 90%.
Solar energy efficiency
PDRC surfaces can be integrated with solar energy plants, referred to as solar energy–radiative cooling (SE–RC), to improve functionality and performance by preventing solar cells from overheating and thus degrading. Since silicon solar cells have a theoretical maximum efficiency of 33.7% (with the average commercial panel reaching around 20%), the majority of the absorbed power produces excess heat and raises the operating temperature. Solar cell efficiency declines by 0.4–0.5% for every 1 °C increase in temperature.
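As a worked example of that temperature coefficient (the reference efficiency, the temperatures, and the exact coefficient below are illustrative assumptions):

```python
def cell_efficiency(eta_ref=0.20, t_cell=65.0, t_ref=25.0, coeff=0.0045):
    """Linear temperature model for silicon photovoltaics: efficiency
    falls by roughly 0.4-0.5% (relative) per degree C above the
    reference temperature. All numbers are illustrative."""
    return eta_ref * (1.0 - coeff * (t_cell - t_ref))

# A nominally 20%-efficient panel running 40 C above its 25 C rating
# drops to about 16.4% - an 18% relative loss that radiative cooling
# can reduce by lowering the operating temperature:
print(cell_efficiency())
```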
PDRC can extend the life of solar cells by lowering the operating temperature of the system. Integrating PDRCs into solar energy systems is also relatively simple, given that "most solar energy harvesting systems have a sky-facing flat plate structural design, which is similar to radiative cooling systems." Integration has been reported to increase energy gain per unit area while increasing the fraction of the day the cell operates.
Methods have been proposed to potentially enhance cooling performance. One 2022 study proposed using a "full-spectrum synergetic management (FSSM) strategy to cool solar cells, which combines radiative cooling and spectral splitting to enhance radiative heat dissipation and reduce the waste heat generated by the absorption of sub-BG photons."
Personal thermal management
Personal thermal management (PTM) employs PDRC in fabrics to regulate body temperatures during extreme heat. While other fabrics are useful for heat accumulation, they "may lead to heat stroke in hot weather." A 2021 study claimed that "incorporating passive radiative cooling structures into personal thermal management technologies could effectively defend humans against intensifying global climate change."
Wearable PDRCs can come in different forms and target outdoor workers. Products are at the prototype stage. Although most textiles are white, colored wearable materials in select colors may be appropriate in some contexts.
Power plant condenser cooling
PDRC can cool the condenser water used in thermoelectric power plants and concentrated solar power (CSP) plants for more effective use within the heat exchanger. A study of a pond covered with a radiative cooler reported that a cooling flux of 150 W/m² could be achieved without loss of water. PDRC can reduce the water use and thermal pollution caused by water cooling.
A review reported that supplementing the air-cooled condenser of a thermoelectric power plant with radiative cooling panels achieved a 4,096 kWhth/day cooling effect with a pump energy consumption of 11 kWh/day. A concentrated solar power plant on the supercritical CO2 cycle at 550 °C was reported to produce a 5% net output gain over an air-cooled system through integration with a radiative cooler of 14 m²/kWe capacity.
Thermal regulation of buildings
In addition to cooling, PDRC surfaces can be modified for bi-directional thermal regulation (cooling and heating). This can be achieved through switching thermal emittance between high and low values.
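The leverage of emittance switching follows from the Stefan–Boltzmann law; a short sketch (the two emissivity values and the temperature are assumptions):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def emitted_flux(emissivity, t_kelvin):
    """Gray-body thermal emission in W/m^2."""
    return emissivity * SIGMA * t_kelvin**4

# The same 300 K surface in a high-emittance (cooling) state versus a
# low-emittance (heat-retaining) state:
print(emitted_flux(0.95, 300.0))  # ~436 W/m^2 radiated away
print(emitted_flux(0.10, 300.0))  # ~46 W/m^2 radiated away
```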
Thermoelectric generation
When combined with a thermoelectric generator, a PDRC surface can generate small amounts of electricity.
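The scale of that electricity can be estimated with the standard matched-load formula for a thermoelectric module; the Seebeck coefficient, temperature difference, and internal resistance below are illustrative assumptions.

```python
def teg_max_power(seebeck=0.05, delta_t=5.0, r_internal=2.0):
    """Maximum electrical power of a thermoelectric generator into a
    matched load: P = (S * dT)^2 / (4 * R), with the module-level
    Seebeck coefficient S in V/K and internal resistance R in ohms."""
    v_open = seebeck * delta_t  # open-circuit voltage
    return v_open**2 / (4.0 * r_internal)

# A radiatively sustained ~5 K temperature difference yields only
# milliwatts per module, consistent with "small amounts of electricity":
print(teg_max_power())  # ~0.008 W
```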
Automobile and greenhouse cooling
Thermally enclosed spaces, including automobiles and greenhouses, are particularly susceptible to harmful temperature increases. This is because of the heavy presence of windows, which are transparent to incoming solar radiation yet opaque to outgoing long-wave thermal radiation, which causes them to heat rapidly in the sun. Automobile temperatures in direct sunlight can rise to 60–82 ᵒC when ambient temperatures is only 21 ᵒC.
Water harvesting
Dew harvesting yields may be improved with PDRC. Selective PDRC emitters, which have high emissivity only within the infrared window, and broadband emitters may produce varying results. In one study using a broadband PDRC, the device condensed ~8.5 mL of water per day at a peak solar intensity of 800 W/m². Whereas selective emitters may be less advantageous in other contexts, they may be superior for dew harvesting applications. PDRCs could also improve atmospheric water harvesting when combined with solar vapor generation systems to increase water collection rates.
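A latent-heat bound makes the order of magnitude of such yields easy to check; the cooling power and condensation window below are assumptions, not values from the cited study.

```python
def dew_yield_upper_bound(p_cool=60.0, hours=10.0, latent=2.45e6):
    """Upper-bound condensate yield if all radiative cooling power went
    into condensation: p_cool in W/m^2, latent heat of vaporization in
    J/kg; returns kg of water per m^2 of collector."""
    return p_cool * hours * 3600.0 / latent

# ~60 W/m^2 of cooling over a 10-hour night bounds the yield at
# roughly 0.9 kg (about 0.9 L) per square meter:
print(dew_yield_upper_bound())
```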
Water and ice cooling
PDRC surfaces can be installed over the surface of a body of water for cooling. In a controlled study, a body of water was cooled 10.6 °C below the ambient temperature with the use of a photonic radiator.
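Sub-ambient depressions like this follow from setting the net heat balance to zero and solving for the surface temperature. A bisection sketch using gray-body assumptions (the absorbed solar load, sky offset, and convection coefficient are illustrative, and the bracket assumes the equilibrium lies between sky and ambient temperature):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def stagnation_depression(t_amb=298.15, emissivity=0.95, absorbed=30.0,
                          h_conv=3.0, sky_offset=15.0):
    """Solve e*sigma*(T^4 - T_sky^4) - absorbed - h*(T_amb - T) = 0 by
    bisection and return the depression T_amb - T in kelvin. All
    parameter values are illustrative assumptions."""
    t_sky = t_amb - sky_offset

    def balance(t):  # net heat loss; increases monotonically with t
        return (emissivity * SIGMA * (t**4 - t_sky**4)
                - absorbed - h_conv * (t_amb - t))

    lo, hi = t_sky, t_amb  # balance(lo) < 0 < balance(hi) for these inputs
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if balance(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return t_amb - 0.5 * (lo + hi)

print(stagnation_depression())  # ~6 K below ambient with these inputs
```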
PDRC surfaces have also been developed to cool ice and prevent it from melting under sunlight, which has been proposed as a sustainable method of ice protection. The same approach can be applied to protect refrigerated food from spoiling.
Side effects
Jeremy Munday writes that although "unexpected effects will likely occur", PDRC structures "can be removed immediately if needed, unlike methods that involve dispersing particulate matter into the atmosphere, which can last for decades." He prefers PDRC because stratospheric aerosol injection "might cause potentially dangerous threats to the Earth's basic climate operations" that may not be reversible. Zevenhoven et al. state that "instead of stratospheric aerosol injection (SAI), cloud brightening or a large number of mirrors in the sky ("sunshade geoengineering") to block out or reflect incoming (short-wave, SW) solar irradiation, long-wavelength (LW) thermal radiation can be selectively emitted and transferred through the atmosphere into space".
"Overcooling" and PDRC modulation
"Overcooling" is cited as a side effect of PDRCs that may be problematic, especially when PDRCs are applied in high-population areas with hot summers and cool winters, characteristic of temperate zones. While PDRC application in these areas can be useful in summer, in winter it can result in an increase in energy consumption for heating and thus may reduce the benefits of PDRCs on energy savings and emissions. As per Chen et al., "to overcome this issue, dynamically switchable coatings have been developed to prevent overcooling in winter or cold environments."
The detriments of overcooling can be reduced by modulation of PDRCs, harnessing their passive cooling abilities during summer, while modifying them to passively heat during winter. Modulation can involve "switching the emissivity or reflectance to low values during the winter and high values during the warm period." In 2022, Khan et al. concluded that "low-cost optically modulated" PDRCs are "under development" and "are expected to be commercially available on the market soon with high future potential to reduce urban heat in cities without leading to an overcooling penalty during cold periods."
There are various methods of making PDRCs 'switchable' to mitigate overcooling. Most research has used vanadium dioxide (VO2), an inorganic compound, to achieve temperature-based 'switchable' cooling and heating effects. While, as per Khan et al., VO2 coatings are difficult to develop, their review found that "recent research has focused on simplifying and improving the expansion of techniques for different types of applications." Chen et al. found that "much effort has been devoted to VO2 coatings in the switching of the mid-infrared spectrum, and only a few studies have reported the switchable ability of temperature-dependent coatings in the solar spectrum." Temperature-dependent switching requires no extra energy input to achieve both cooling and heating.
Other methods of PDRC 'switching' require extra energy input to achieve desired effects. One such method involves changing the dielectric environment. This can be done through "reversible wetting" and drying of the PDRC surface with common liquids such as water and alcohol. However, for this to be implemented on a mass scale, "the recycling, and utilization of working liquids and the tightness of the circulation loop should be considered in realistic applications."
Another method involves 'switching' through mechanical force, which may be useful and has been "widely investigated in [PDRC] polymer coatings owing to their stretchability." For this method, "to achieve a switchable coating in εLWIR, mechanical stress/strain can be applied in a thin PDMS film, consisting of a PDMS grating and embedded nanoparticles." One study estimated, with the use of this method, that "19.2% of the energy used for heating and cooling can be saved in the US, which is 1.7 times higher than the only cooling mode and 2.2 times higher than the only heating mode," which may inspire additional research and development.
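The trade-off that modulation addresses can be seen with back-of-envelope seasonal arithmetic; the fluxes, hours, and switching effectiveness below are illustrative assumptions, unrelated to the specific studies cited.

```python
def seasonal_net_benefit(p_cool=80.0, summer_hours=1500.0,
                         winter_hours=1500.0, switch_off_fraction=0.9):
    """Net useful cooling energy (kWh/m^2/yr), counting winter cooling
    as a heating penalty. A static PDRC cools year-round; a modulated
    one suppresses the given fraction of its winter cooling."""
    static = p_cool * (summer_hours - winter_hours) / 1000.0
    modulated = p_cool * (summer_hours
                          - (1.0 - switch_off_fraction) * winter_hours) / 1000.0
    return static, modulated

# With equal summer benefit and winter penalty, the static surface
# nets ~0 kWh/m^2 while the modulated one keeps most of the benefit:
print(seasonal_net_benefit())  # (0.0, 108.0)
```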
Glare and visual appearance
Glare caused from surfaces with high solar reflectance may present visibility concerns that can limit PDRC application, particularly within urban environments at the ground level. PDRCs that use a "scattering system" to generate reflection in a more diffused manner have been developed and are "more favorable in real applications," as per Lin et al.
Low-cost colored PDRC paint coatings, which reduce glare and increase the color diversity of PDRC surfaces, have also been developed. While some of the surface's solar reflectance is lost in the visible spectrum, colored PDRCs can still exhibit significant cooling power; for example, Zhai et al. used an α-phase compound coating (the paint resembling the color of the compound) to develop a non-toxic paint that demonstrated a solar reflectance of 99% and a heat emissivity of 97%.
A tradeoff is generally noted between cooling potential and darker-colored surfaces. Less reflective colored PDRCs can be applied to walls while more reflective white PDRCs are applied to roofs, increasing the visual diversity of vertical surfaces while still contributing to cooling.
History
Nocturnal passive radiative cooling has been recognized for thousands of years; records show awareness by the ancient Iranians since 400 B.C.E., demonstrated through the construction of Yakhchāls.
PDRC was hypothesized by Félix Trombe in 1967. The first experimental setup was created in 1975 but was only successful for nighttime cooling. Subsequent attempts to achieve daytime cooling using different material compositions were unsuccessful.
In the 1980s, Lushiku and Granqvist identified the infrared window as a potential means of accessing the ultracold of outer space to achieve passive daytime cooling.
Early attempts at developing passive daytime radiative cooling materials took inspiration from nature, particularly the Saharan silver ant and white beetles, noting how they cool themselves in extreme heat.
Research and development in PDRC evolved rapidly in the 2010s with the discovery of the ability to suppress solar heating using photonic metamaterials, which greatly expanded the field.
In 2024, Nissan introduced a paint that lowers car interior temperatures by up to 21 °F in direct sunlight. It involves two types of particles, each operating at a different frequency. One reflects near-infrared light. The second converts other frequencies to match the infrared window, radiating the energy into space.
See also
Albedo
Emissivity
Energy conservation
Low-energy building
Passive cooling
Passive house
Passive solar building design
Radiative cooling
Sustainable city
Urban heat island
Zero-energy building
References
Atmospheric radiation
Climate change adaptation
Climate change mitigation
Climate engineering
Cooling technology
Energy conservation
Heat transfer
Heating, ventilation, and air conditioning
Passive cooling
Photonics
Renewable energy
Solar design
Sustainable architecture
Sustainable building
Thermodynamics
Renewable energy technology | Passive daytime radiative cooling | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 8,087 | [
"Transport phenomena",
"Sustainable building",
"Physical phenomena",
"Heat transfer",
"Sustainable architecture",
"Planetary engineering",
"Solar design",
"Geoengineering",
"Building engineering",
"Energy engineering",
"Architecture",
"Construction",
"Thermodynamics",
"Environmental social... |
56,200,257 | https://en.wikipedia.org/wiki/Michael%20C.%20Mitchell | Michael C. Mitchell (born January 4, 1946) is an American planner, designer, lecturer and environmentalist. He works on rural development.
Earlier career
At Portland State University, Mitchell became one of the organizers of the first Earth Day in 1970, coordinating universities throughout America's northwest states.[citation needed] After his work on the first Earth Day, he was one of ten university students selected from across the nation by President Richard Nixon's administration to form a national youth advisory board on environmental matters, S.C.O.P.E. (Student Council on Pollution and the Environment). The council was assigned to the U.S. Department of the Interior, where Mitchell was a reviewer on the creation of the first Environmental Impact Statement (EIS).
Mitchell continued his work with what became the United States Environmental Protection Agency (EPA), writing an environmental education program for students.[citation needed]
MCM Group International
MCM Group is an international planning and design firm headquartered in Los Angeles. It was founded in 1984 by Michael C. Mitchell after the close of the Los Angeles Olympic Games, where he served as the head of planning and operations, and has sought to extend those planning techniques as a model for addressing prominent social problems. MCM Group provides feasibility consulting, planning, architecture, landscape design, and sustainable engineering services. Mitchell has opened offices in Tokyo and Moscow, a Middle East office in Doha, Qatar, an African base in Nairobi, Kenya, and currently four offices in China, with the China headquarters in Beijing.
1984 Los Angeles Summer Olympic Games
In the early 1980s, Mitchell was recruited by the Los Angeles Olympic Organizing Committee, where he served as the group vice-president of Planning and Control (Finance). His duties included overseeing the planning of the Olympic venues and supervising the architectural department's venue planning. During the Olympics he was responsible for the Games Operations Center, and he oversaw the closeout of the Games after their completion.
He has since served as a senior planning consultant to six other Olympic Games and four World Fairs.
LA84 Foundation
As head of the close-out operations after the completion of the Los Angeles 1984 Summer Olympics, Mitchell oversaw the creation of the LA84 Foundation, which was formed out of the $225 million surplus from the operations of the Games. The Foundation is now a national leader in supporting youth programs, providing recreation and learning opportunities to disadvantaged youth, training youth coaches and convening national conferences on youth sports issues.
Live Aid
In the spring of 1985, Mitchell was contacted by Bob Geldof, an Irish rock musician who had been working on issues of drought and famine in Africa. Geldof asked Mitchell to produce a worldwide televised music show to raise funds to help alleviate the catastrophic consequences of the worst African famine in a century.
Mitchell became the Executive Producer of the worldwide Live Aid broadcast (under a newly formed venture Worldwide Sports and Entertainment) and President of the Live Aid Foundation in America.
The July 13, 1985 broadcast was the world's first large globally interactive show, seen by 1.5 billion viewers in 150 countries. Whereas the 1984 Olympics utilized three satellites to beam from one location to the rest of the world, Live Aid utilized thirteen satellites sending and receiving concerts from seven locations around the world and producing one international feed back to the 150 nations. Despite 1985 being the height of the Cold War, Mitchell established a global broadcast with a live concert from the Soviet Union featuring Autograph, and a delayed Live Aid showing in China.
President Ronald Reagan's Administration supported the Live Aid Foundation by providing wheat from America's reserves and awarded Mitchell a Presidential Citation for the Live Aid Foundation's contributions to humanity.
NFIE
Mitchell continued his contributions to social and education programs by accepting an appointment to the Board of the National Education Association's Foundation for the Improvement of Education (NFIE), serving on the board from 1987 to 1997. Since its beginning in 1969, the Foundation has served as a laboratory of learning, offering funding and other resources to public school educators, their schools, and districts to solve complex teaching and learning challenges.
Fund for Democracy and Development
During the dissolution of the Soviet Union starting in 1990, Russia and Ukraine experienced a severe shortage of medical and food supplies. Working throughout both countries and witnessing the growing crisis first-hand, Mitchell and his close friend Yankel Ginzburg, an American artist and humanitarian who had family in Tver, Russia, responded to requests for assistance by Russia's leadership, co-founding the "Fund for Democracy and Development" to provide aid to alleviate the crisis.
Mitchell served as the founding board chairman in 1991, and L. Ronald Scheman (co-founder of the Pan American Development Foundation, where his work included providing financial assistance to low-income rural communities) served as the first president. Former President Richard M. Nixon served as the honorary chairman of the Fund.
From 1991 to 1994 the Fund is credited with channeling $240 million worth of staples and food supplies to the former Soviet Union. In gratitude for the contributions of the Fund, the Russian government commissioned a monument park to reflect American goodwill.
Amur Tiger Sanctuary
With offices established in Moscow and St. Petersburg, Mitchell contributed to several rural development and environmental projects across the former Soviet Union. Mitchell's planning of development projects in rural Russia included work in Siberia on sustainable resource and forest management practices.
While undertaking those projects in conjunction with local wildlife scientists, Mitchell convinced the Prime Minister of Russia, Viktor Chernomyrdin, to establish the Amur Tiger Sanctuary in 1993, which was initially funded through the Global Survival Network (GSN), an environmental organization he co-founded with Steve Galster, now of the Freeland Foundation.
The Sanctuary included introducing armed ranger patrols to stop the threat that poachers posed in the region. The initial work that Mitchell and the executive director of GSN, Steve Galster, did to establish the sanctuary was soon funded by the World Wildlife Fund (WWF), now known as the World Wide Fund for Nature, and is currently carried out with the support of the Ministry of Natural Resources and Environment (Russia). As a result of this work, the wild Siberian tiger population has rebounded from its critically endangered level.
Exposing animal and human trafficking
In order to strengthen the Sanctuary's efforts to stop poaching, Mitchell worked with Steve Galster conducting undercover video interviews with the poachers. Through these undercover meetings, he and Galster discovered a link between animal poachers and human traffickers. What began as an effort to preserve habitat became an international exposé on trafficking. From 1995 to 1997 they undertook a two-year undercover investigation, personally holding meetings with traffickers and trafficked women to expose the international relationship between animal and human trafficking.
Information and undercover video derived from their investigation were used to create a GSN written report, "Crime & Servitude" and a video documentary, "Bought & Sold." The film was released in 1997 and received widespread media coverage in the US and abroad, including specials on ABC Primetime Live, CNN, and BBC.
The documentary also helped to catalyze legislative reform on trafficking as well as new financial resources to address the problem.
Galster took what was learned during that undercover period and continues this work, founding the Freeland Foundation, which is the lead implementing partner of Asia's Regional Response to Endangered Species Trafficking (ARREST), a program sponsored by the U.S. government in partnership with ASEAN and over fifty governmental and non-governmental organizations.
The material that was collected during those two years is housed at the Human Rights Documentation Initiative (HRDI) at the University of Texas at Austin.
United Nations Day of Tolerance
Beginning in 1985, Mitchell began an association with Irving Sarnoff, the executive director of Friends of the United Nations (FOTUN), and his co-founder, Dr. Noel Brown, Director of the United Nations Environmental Program (UNEP), North America. The Friends of the United Nations is an NGO dedicated to advocating support for programs of the United Nations.
As part of their work on international social issues, Mitchell was asked to create a celebration for the United Nations International Day for Tolerance in 1999. The International Day for Tolerance is an annual observance declared by UNESCO in 1995 to generate public awareness of the dangers of intolerance.
Mitchell organized the 1999 event honoring Mikhail Gorbachev, former leader of the Soviet Union, and Arnold Schwarzenegger, actor, politician and chairman of the USC Schwarzenegger Institute of State and Global Policy. Keynote speakers included John Kerry, then a U.S. Senator and later U.S. Secretary of State.
Rural development
One of the first projects integrating agricultural development, sustainability, community and social values, and economic growth was in a region of Qingdao, China, where his company, MCM Group, brought in international blueberry agriculture experts to develop what is now considered one of the world's largest blueberry farms (the Qingdao Cangma Mountain Development). The project combined high-technology organic agriculture, agritourism, educational programs, local culture and residential development to give the local rural community a successful economic transition.
Lectures and education
Invited by universities in the U.S., China, South Korea and Japan, he has given lectures and planning studios, sharing his professional experience with students and faculty members.
University of Michigan, ERB Institute for Global Sustainable Enterprise – "Sustainability, Design Thinking, and Business Strategies: Developing 'Optimal Environments' in China" - March, 2011
"Experiential Design," Lecture at Tianjin University of Technology, April 20, 2012
Featured speaker at China's first International Architectural Education Forum held at Tianjin University, along with Karl Otto Ellefsen, the Director of the Oslo School of Architecture and President of the European Association for Architectural Education, and Preston Scott Cohen, Professor of Architecture at Harvard Graduate School of Design, September, 2014.
Keynote Address at Sino-US International Design Exhibition – Los Angeles, September 3, 2016
Huaqiao University, College of Tourism, November 2016
Tianjin Association of City Planning – Master Lecture, May 11, 2017
Keynote Speaker – "Digital Brings Changes to Entertainment Experience," at Tianjin's Design Week themed – "The Future is Now," May 13, 2017
New Urbanism & Agritourism Research Program – July 22, 2017
Led a meeting of the New Urbanism and Agritourism Research Program in Beijing, 2017. The program won full support from enterprises, universities and social groups including China's Association of Mayors, Tsinghua University School of Social Sciences, Digital China, China State Farming Agriculture Group, Agricultural Valley Research Institute, Shandong University of Arts, Tourism College of Huaqiao University, CSA, Beijing Qunxue Urban and Rural Community Social Development Research Institute and the Taiwan Rural Tourism Association.
He also initiated internship programs providing Chinese and African students with opportunities to receive training in MCM offices.
Recognition
1984: Los Angeles Olympic Organizing Committee – Recognition and Appreciation for Contribution to the Success of the Los Angeles Olympic Games, 1984.
1984: Plaque, Michael C. Mitchell, Group Vice-President Planning and Control, Los Angeles Olympic Organizing Committee – Games of the XXIIIrd Olympiad, July 28 – August 12, 1984.
1985: Presidential Citation for the Live Aid Foundation's contributions to humanity, President Reagan.
1994: Honorary Member of the Russian Academy of Sciences
2013: World Hotel Association – Continental Diamond Award for Design
2013: Gold Award for Design, Society of American Registered Architects (SARA), Project – Youth Olympic Games 2014
2013: Bronze Award for Design, Society of American Registered Architects (SARA), Project – COFCO ECO Resort and Attraction, Agriculture Ecological Valley Development, Beijing
2014: Design Award of Merit, Society of American Registered Architects (SARA), Project – Tianjin Stadium Redevelopment
2016: Selected to exhibit Cangma Mountain Development, 2015, at Time Space Existence – 2016, Palazzo Mora, Venice Biennale of Architecture.
Memberships & Affiliations
Urban Land Institute
American Farmland Trust
Association of Children's Museums
International Association of Amusement Parks and Attractions
International Association for China Planning.
References
Recent Publications
Contributing writer on sustainable issues for "Green" a publication of Domus Magazine (2012-2013)
Collection of Creative & Design magazine, Tianjin University Press, about MCM's Qinghe Snoopy Theme Park, February 2015
China Real Estate Business, Architecture Section, "Always With You," about MCM's Qinghe Snoopy Theme Park, July 5, 2015
Originated Magazine, November 11, 2016
China People's Daily, Beijing, "MCM Designing Beautiful Country," National Launch of Luneng's "Beautiful Countryside", January 19, 2017
External links
International Rural Development Center
1946 births
University of Portland alumni
Urban planning
Rural development
Living people
Activists from Portland, Oregon
People from Los Angeles | Michael C. Mitchell | [
"Engineering"
] | 2,571 | [
"Urban planning",
"Architecture"
] |
54,661,292 | https://en.wikipedia.org/wiki/Chandrasekhar%27s%20X-%20and%20Y-function | In atmospheric radiation, Chandrasekhar's X- and Y-functions appear as the solutions of problems involving diffusive reflection and transmission, introduced by the Indian-American astrophysicist Subrahmanyan Chandrasekhar. The X- and Y-functions, defined on the interval $0 \leq \mu \leq 1$, satisfy the pair of nonlinear integral equations

$$X(\mu) = 1 + \mu \int_0^1 \frac{\Psi(\mu')}{\mu + \mu'}\left[X(\mu)X(\mu') - Y(\mu)Y(\mu')\right]d\mu',$$

$$Y(\mu) = e^{-\tau_1/\mu} + \mu \int_0^1 \frac{\Psi(\mu')}{\mu - \mu'}\left[Y(\mu)X(\mu') - X(\mu)Y(\mu')\right]d\mu',$$

where the characteristic function $\Psi(\mu)$ is an even polynomial in $\mu$ generally satisfying the condition

$$\int_0^1 \Psi(\mu)\,d\mu \leq \frac{1}{2},$$

and $\tau_1$ is the optical thickness of the atmosphere. If equality holds in the above condition, the case is called conservative; otherwise it is non-conservative. These functions are related to Chandrasekhar's H-function through the limits

$$\lim_{\tau_1 \to \infty} X(\mu) = H(\mu), \qquad \lim_{\tau_1 \to \infty} Y(\mu) = 0.$$
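The defining equations can be solved numerically by fixed-point iteration on a quadrature grid. The sketch below is an illustrative scheme, not Chandrasekhar's own method: it assumes isotropic scattering, Ψ(μ) = ϖ₀/2 with ϖ₀ < 1 (the non-conservative case, where the solution is unique), and treats the removable singularity at μ′ = μ in the Y-equation with a derivative estimate.

```python
import numpy as np

def solve_xy(psi, tau1, n=48, iters=500, tol=1e-10):
    """Fixed-point iteration for the X- and Y-functions on Gauss-Legendre
    nodes in (0, 1). `psi` maps an array of mu values to the characteristic
    function. Illustrative sketch; convergence slows near the conservative
    limit, where the solution is also no longer unique."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (nodes + 1.0)  # map nodes from (-1, 1) to (0, 1)
    w = 0.5 * weights
    P = psi(mu)
    X, Y = np.ones(n), np.exp(-tau1 / mu)
    for _ in range(iters):
        dX, dY = np.gradient(X, mu), np.gradient(Y, mu)
        Xn, Yn = np.empty(n), np.empty(n)
        for i in range(n):
            fx = P * (X[i] * X - Y[i] * Y) / (mu[i] + mu)
            den = mu[i] - mu
            den[i] = 1.0  # placeholder; this entry is fixed just below
            fy = P * (Y[i] * X - X[i] * Y) / den
            # mu' = mu_i is a removable singularity; use the limit
            # Psi(mu_i) * (X_i Y'_i - Y_i X'_i) from l'Hopital's rule.
            fy[i] = P[i] * (X[i] * dY[i] - Y[i] * dX[i])
            Xn[i] = 1.0 + mu[i] * np.dot(w, fx)
            Yn[i] = np.exp(-tau1 / mu[i]) + mu[i] * np.dot(w, fy)
        err = max(np.max(np.abs(Xn - X)), np.max(np.abs(Yn - Y)))
        X, Y = Xn, Yn
        if err < tol:
            break
    return mu, X, Y

# Isotropic scattering with single-scattering albedo 0.9 (Psi = 0.45)
# and optical thickness 1:
mu, X, Y = solve_xy(lambda m: 0.45 * np.ones_like(m), tau1=1.0)
```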
Approximation
The functions X(μ) and Y(μ) can be approximated up to nth order as
where and are two basic polynomials of order n (see Chandrasekhar, chapter VIII, equation (97)), where are the zeros of the Legendre polynomials and are the positive, non-vanishing roots of the associated characteristic equation
where are the quadrature weights given by
Properties
If are the solutions for a particular value of , then solutions for other values of are obtained from the following integro-differential equations
For the conservative case, this integral property reduces to
If the abbreviations are introduced for brevity, then we obtain a relation between them; in the conservative case, this reduces to
If the characteristic function is , where are two constants, then we have .
In the conservative case, the solutions are not unique: if are solutions of the original equations, then so are the two functions , where is an arbitrary constant.
See also
Chandrasekhar's H-function
References
Special functions
Integral equations
Scattering
Scattering, absorption and radiative transfer (optics) | Chandrasekhar's X- and Y-function | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 337 | [
" absorption and radiative transfer (optics)",
"Special functions",
"Integral equations",
"Mathematical objects",
"Equations",
"Combinatorics",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
74,661,284 | https://en.wikipedia.org/wiki/Clovibactin | Clovibactin (Novo29) is an experimental antibiotic isolated from an uncultured soil Gram-negative β-proteobacterium Eleftheria terrae ssp. carolina, which is one of many soil bacteria.
See also
Teixobactin
Zosurabalpin
References
Antibiotics
Peptides | Clovibactin | [
"Chemistry",
"Biology"
] | 69 | [
"Biomolecules by chemical classification",
"Biotechnology products",
"Antibiotics",
"Molecular biology",
"Biocides",
"Peptides"
] |
66,023,237 | https://en.wikipedia.org/wiki/Construction%20Corps%20%28Bulgaria%29 | The Construction Corps (Строителни войски) in Bulgaria was a military construction organisation subordinated to the Ministry of Defence or directly to the government, which existed from 1920 to 2000.
The organisation started as national compulsory labour service (trudova povinnost) in 1920 which drafted all able-bodied Bulgarians in place of national military service. It was militarised and incorporated into the armed forces as the Labour Corps (Trudovi Voiski) during the period 1935–1946. During the Communist era it was re-organised a number of times, taking its final form and name in 1969.
History
National compulsory labour service 1920–1935
In the last months of World War I, the Ministry of War announced the idea of a conscription-based national labour service. For this purpose a commission was appointed consisting of: Chairman Major General Konstantin Kirkov; members: Colonel Ivan Bozhkov, Lieutenant Colonel Kosta Nikolov, Lieutenant Colonel Dimitar Nachev, Lieutenant General Stilian Kovachev, Lieutenant Colonel Todor Georgiev, Hristo Chakalov – Manager of the BNB, two agronomists and a representative of the Bulgarian Agricultural Bank. The original law drafted by the commission was not approved by the Council of Ministers but the draft did become the basis for all subsequent legislation on the subject.
Defeat in World War I brought to power in October 1919 the radical anti-war Agrarian party leader Aleksandar Stamboliyski. Faced with the ruinous consequences of the war, Stamboliyski adopted compulsory labour service as one of two key reforms aimed at rebuilding the country (the other being land reform). The bill provoked vehement opposition on the grounds that it revived the Ottoman feudal labour obligation and exploited young people, but Stamboliyski's overwhelming election victory in 1920 meant it was voted into law on 23 May 1920.
Stamboliyski's official reasons were to enable post-war reconstruction at a time when the impoverished country was faced with enormous war reparations; and to provide modern vocational education for young men and women. However, an underlying reason was to circumvent the limitations of the Treaty of Neuilly-sur-Seine on the size of the Bulgarian Armed Forces, which limited the army to 20,000. The new labour service de facto maintained the organisational structure of the former national military service, prompting protests from the neighbouring Yugoslavia and Greece that all the Bulgarians had to do was replace the spades with rifles and they'd have a trained army. The Inter-Allied Commission required the bill to be suspended until changes were agreed.
Compulsory labour service came into force on 14 June 1920 with the establishment of the Main Directorate "Compulsory Labour Service" within the Ministry of Public Works. All able-bodied Bulgarians, except those exempted for legitimate reasons (for example, Muslim women were exempted) and those who had served the state for more than three consecutive months, were required to serve either in the Regular service (a maximum of eight months for men between 20 and 40 years, four months for women between 16 and 30 years) or in the Temporary service for up to 21 days a year. Exemptions could also be purchased at a set daily rate.
Labour service proved very effective in carrying out post-war reconstruction. The vast majority of the work was road and railway construction, although there were also manufacturing, agriculture and reforestation projects. An International Labour Report calculated that just in the Regular service from 1921 to 1936 a total of 313,669 "trudovaks" (labourers) were recorded as completing their compulsory service; that the work done for the State entailed 22,591,068 eight-hour days and reached a value of 1,680,088,675 leva; and that the annual balance-sheets showed aggregate receipts of 3,330,466,451 leva and expenditure of 2,449,101,898 leva, or a profit of 881,364,553 leva. The Bulgarian example was widely studied and copied abroad, for instance by Germany in the formation of the Reich Labour Service.
Labour corps 1935–1944
In the 1930s, as Bulgaria followed Germany in repudiating the military limitations imposed by the WW1 Paris peace treaties the labour service openly emerged as a military organisation. On 1 January 1935 jurisdiction was transferred to the Ministry of Defence, with the establishment of military ranks in 1936. Military age conscripts served in the regular armed forces or did labour service – one example being future Communist leader Todor Zhivkov who completed service in 1935, partially through work and partially through exemption purchase. In 1938 with the signing of the Salonika Agreement limits on the armed forces were officially removed and Bulgaria was able to fully reinstate compulsory military service. In 1940 the new Law of the Armed Forces officially incorporated "trudovaks" in the armed forces as the labour corps (trudovi voiski). By 1942 the fully mobilised wartime labour corps exceeded 80,000 men building roads and military installations, draining the Svishtov wetlands, increasing agricultural production and restoring communications in the newly recovered Southern Dobruja, Western Thrace and Vardar Macedonia.
During the war, as Bulgaria allied with Nazi Germany, Jewish men were drafted en masse into the labour corps. In January 1941, the anti-semitic Law for Protection of the Nation came into effect, one of whose stipulations was that Jews must fulfill their military service in labour battalions. By order of the Bulgarian chief of the general staff, effective 27 January 1941, Jews were removed from the regular armed forces and were drafted into the labour corps, while retaining their military rank and privileges. Jewish reservists were allocated as labour corps reservists. After Bulgaria joined the Tripartite Pact on 1 March 1941 and became a base for German military operations against Yugoslavia and Greece, repressive measures increased. From August 1941 Jewish men aged 20–44 were drafted (including all reservists), with the upper age rising to 50 in 1943. Following diplomatic protests from German ambassador Adolf-Heinz Beckerle about the German Labour Front working alongside Bulgarian Jews in a military capacity, from January 1942 Jews were transferred to labour units under the Ministry of Public Works, depriving them of their military ranks and privileges. Those units (usually 100–300 strong) were based in remote camps with poor conditions and typically did heavy labour completing specific stretches of roads. Approximately 12,000 Jews were mobilised in such units, in addition to 2,000 communists and left-wing agrarians. There were a number of reports of abusive behaviour by camp commandants, although it should be stressed that despite later Communist governments terming them "fascist concentration camps" these were in no way such – for instance, labourers still had family leave and correspondence, and heads of family were paid a wage.
Greeks from Bulgarian occupation zone in Macedonia and Thrace were also forcibly conscripted into Labour Battalions. The measure did not exclude Greek Muslims.
Post War
From 1946 given the need to downsize the armed forces the labour corps were again detached from the army and re-organised as national compulsory labour service. All Bulgarian citizens of conscription age not accepted in the regular armed forces were subject to 18 months labour service, but de facto it was done mostly by men from minorities and those deemed unreliable for service ("considered unfit") in the armed forces.
A high point in the history of the Construction Troops was the design and building of the Alfred Beit Road Bridge in 1994–95. The Construction Troops won a commercial tender in competition with international companies. The metal works of the bridge were manufactured in Bulgaria and transported via ship from Burgas to the South African port of Durban and then on a 1,000 km stretch over land. The bridge is the only road border crossing on the South Africa–Zimbabwe border. The commander of the Construction Troops, Major General Radoslav Peshleevski (:bg:Радослав Пешлеевски) attended the official opening ceremony (seen in uniform behind Nelson Mandela.)
Structure
They were organized in seven Construction Divisions: three based in Sofia and one each in Plovdiv, Stara Zagora, Varna and Pleven.
Main Directorate of the Construction Troops (Главно управление на Строителните Войски)
Command (Командване)
Chief of the Main Directorate of the Construction Troops (Началник на Главно управление на СВ)
First Deputy-Chief and Chief of the Political Department (Зам.-началник на СВ, той е и началник на Политическо управление на СВ)
Deputy-Chief of the Construction Troops in Charge of the Construction Troops (Зам.-началник на СВ по строителството)
Deputy-Chief of the Construction Troops in Charge of the Rear (logistics) (Зам.-началник на СВ, той е и началник тил на СВ)
Deputy-Chief of the Construction Troops in Charge of the Economical Matters (Зам.-началник на СВ по икономическите въпроси)
Staff (Щаб)
Independent Departments and Branches of the MDCT (Самостоятелни управления и отдели в ГУСВ)
Operational Formations:
1st Construction Mechanized Division (1ва Строителна Механизирана Дивизия (1. СМД)) (Sukhodol, Sofia)
Command; Staff; Supply Company (Sukhodol, Sofia)
Training Battalion (Учебен Батальон) (Golemo Buchino, Pernik Province)
Special Battalion (Специален Батальон, for pre-production of building elements) (Sukhodol, Sofia; Pernik and Stanke Dimitrov)
1st Construction Regiment (1. Строителен Полк) (Botevgrad) (battalion and platoon in Botevgrad; battalion in Pravets)
2nd Construction Regiment (2. Строителен Полк) (Kyustendil) (battalion in Kyustendil; cadred battalions in Bobov Dol and Stanke Dimitrov, cadred platoon in Tran)
3rd Construction Regiment (3. Строителен Полк) (Pernik) (companies and platoons in Pernik, Samokov and the villages around them; cadred battalion in Bornaevo)
4th Construction Regiment (4. Строителен Полк) (Blagoevgrad) (battalions in Blagoevgrad, Sukhodol, Sofia, Petrich, Ilindentsi, cadred companies in Gotse Delchev and at the "Belmeken-Sestrimo" water supply cascade and a platoon at the Rila Monastery)
Automobile Machinery Regiment - Sofia (Автомашинен Полк - София) (Sukhodol, Sofia; Blagoevgrad, Pernik, Kyustendil, Samokov and Botevgrad)
5th Construction Mechanized Division (5та Строителна Механизирана Дивизия (5. СМД)) (Pleven)
Command; Staff; Supply Company and Training Battalion (Pleven)
1st Construction Regiment (1. Строителен Полк) (Roman) (5 battalions and a company in Roman)
2nd Construction Regiment (2. Строителен Полк) (Yasen) (battalion in Yasen, companies in Pleven, Lovech, Yasen and Zlatna Panega)
3rd Construction Regiment (3. Строителен Полк) (Vratsa) (companies and Vratsa, Vidin, Kozloduy and Slatina, platoon in Boychinovtsi)
4th Construction Regiment (4. Строителен Полк) (Veliko Tarnovo) (two battalions in Veliko Tarnovo, platoon in Svishtov)
5th Construction Regiment (5. Строителен Полк) (Gabrovo) (two battalions and three companies in Gabrovo and the nearby villages)
Automobile Machinery Regiment - Pleven (Автомашинен Полк - Плевен) (Yasen) (cadred battalions in Yasen, Roman and Veliko Tarnovo, cadred companies in Yasen and Vratsa)
6th Construction Mechanized Division (6та Строителна Механизирана Дивизия (6. СМД)) (Plovdiv)
Command; Staff; Supply Company and Training Battalion in Plovdiv, a platoon in Koprivshtitsa
1st Construction Regiment (1. Строителен Полк) (Sopot) (battalions in Sopot, Kalofer and Karnare, platoon in Klisura)
2nd Construction Regiment (2. Строителен Полк) (Panagyurishte) (battalion and company in Panagyurishte, battalion in Elshitsa and a platoon at the Copper Refinery Complex "Medet")
3rd Construction Regiment (3. Строителен Полк) (Smolyan) (battalions in Smolyan and Kardzhali, companies in Pamporovo, Madan and Smilyan)
4th Construction Regiment (4. Строителен Полк) (Plovdiv) (battalion in Plovdiv, companies in Svilengrad, Peshtera and Hisar, platoons in Parvomai and Laki)
Independent Construction Battalion (Velingrad) (7 platoons in Velingrad, platoon in Tsvetino and platoon in Yadenitsa)
Automobile Machinery Regiment - Plovdiv (5. Автомашинен Полк - Пловдив) (Plovdiv) (companies in Plovdiv, Smolyan, Sopot and Panagyurishte, platoons in Plovdiv and Velingrad)
Divisionary Special Company (blacksmith workshop) (Plovdiv)
13th Construction Mechanized Division (13та Строителна Механизирана Дивизия (13. СМД)) (Varna)
Command; Staff; Supply Company and Training Battalion (Varna)
1st Construction Regiment (1. Строителен Полк) (Devnya) (two battalions in Devnya, battalion in Kipra)
2nd Construction Regiment (2. Строителен Полк) (Varna) (battalion and two companies in Varna, battalion in Novi Pazar)
3rd Construction Regiment (3. Строителен Полк) (Shumen) (battalion in Shumen, battalion and two companies in Matnitsa)
4th Construction Regiment (4. Строителен Полк) (Devnya)
5th Construction Regiment (5. Строителен Полк) (Smyadovo)
Independent Service Regiment - Varna (Отделен Полк – Услуга – Варна) (Varna)
Independent Service Battalion - Devnya (Батальон – Услуга – Девня) (Devnya)
Independent Service Battalion - Ruse (Батальон – Услуга – Русе) (Ruse)
Automobile Machinery Regiment - Varna (Автомашинен Полк - Варна) (Varna) (battalions in Varna, Shumen and Devnya, companies in Varna and Smyadovo)
Disciplinary Rehabilitation Battalion (Дисциплинарен изправителен батальон) (Chernevo)
18th Construction Mechanized Division (18та Строителна Механизирана Дивизия (18. СМД)) (Stara Zagora)
Command; Staff; Supply Company and Training Battalion (Stara Zagora)
1st Construction Regiment (1. Строителен Полк) (Sliven) (two battalions in Sliven, battalion in Bratya Kunchevi)
2nd Construction Regiment (2. Строителен Полк) (Burgas) (battalion in Burgas, companies in Primorsko and Malko Tarnovo, platoons in Sarafovo, Grudovo and Vlas)
3rd Construction Regiment (3. Строителен Полк) (Kazanlak) (battalion in Kazanlak, battalion in Sheynovo and a battalion at the Buzludzha)
4th Construction Regiment (4. Строителен Полк) (Yambol) (battalion and company in Yambol, battalion in Elhovo)
5th Construction Regiment (5. Строителен Полк) (Radnevo) (battalion in Mednikarevo, companies in Radnevo, Stara Zagora and Yabalkovo and a service company in Troyanovo)
Divisionary Service Company - Stara Zagora (Дивизионна Рота – Услуга – Стара Загора) (Stara Zagora)
Special Battalion - Stara Zagora (Специален батальон – Стара Загора) (Stara Zagora)
Automobile Machinery Regiment - Stara Zagora (Автомашинен Полк - Стара Загора) (Stara Zagora) (battalions in Sliven, Kazanlak and Radnevo, companies in Burgas and Yambol)
Disciplinary Rehabilitation Battalion (Дисциплинарен изправителен батальон) (Mednikarevo)
20th Construction Mechanized Division (20та Строителна Механизирана Дивизия (20. СМД)) (Gorublyane, Sofia)(see :bg:20-а общостроителна дивизия)
Command; Staff; Supply Company (Gorublyane, Sofia) and Training Battalion (Chelopechene)
1st Construction Regiment (1. Строителен Полк) (Busmantsi) (battalion and company in Busmantsi, battalion in Bukhovo, platoon in Zhivkovo)
2nd Construction Regiment (2. Строителен Полк) (Darvenitsa, Sofia) (three battalions and a company in Darvenitsa)
3rd Construction Regiment (3. Строителен Полк МОК "Елаците") (Ravna Reka) (3 battalions at the Mining Refining Complex "Elatsite")
4th Construction Regiment (4. Строителен Полк) (Chelopech) (two battalions in Chelopech, company in Mirkovo)
Special Regiment (Специален полк) (Busmantsi) (two battalions and a company in Busmantsi)
Special Regiment (Специален полк) (Chelopechene) (company and platoon in Chelopechene, company in Chelopech)
1st Service Regiment (1. Полк – Услуга) (Bukhovo)
2nd Service Regiment (2. Полк – Услуга) (Sofia)
Automobile Machinery Company (Автомашинна Рота) (Chelopechene)
25th Construction Mechanized Division (25. Строителна Механизирана Дивизия) (Sofia) (housing construction)
Command; Staff; Supply Company; Training Battalion (Sofia)
1st Construction Regiment (1. Строителен Полк) (Zemlyane, Sofia)
2nd Construction Regiment (2. Строителен Полк) (Obelya, Sofia)
3rd Construction Regiment (3. Строителен Полк) (Boyana - the National Cinema Center, Sofia)
4th Construction Regiment (4. Строителен Полк) (Obelya, Sofia)
Special High Construction Battalion (Специален Батальон Батальон за Работа по Високи Обекти) (Zemlyane, Sofia)
Automobile Machinery Regiment - Obelya (Автомашинен Полк - Обеля) (Obelya, Sofia)
Service Company (Осигурителна Рота) (Lagera, Sofia)
Electrical Machinery and Installation Brigade (Електромашинна и монтажна бригада) (Sofia)
Command; Staff; Supply Platoon; Heavy Transportation and Mechanization Company (Sofia)
1st Installation Regiment (1. Монтажен Полк) (Sofia)
Independent Installation Platoon (Самостоятелен Монтажен Взвод) (Chelopech)
1st Installation Battalion (1. Монтажен Батальон) (Sofia)
2nd Installation Battalion (2. Монтажен Батальон) (Blagoevgrad)
2nd Installation Regiment (2. Монтажен Полк) (Plovdiv)
1st Installation Battalion (1. Монтажен Батальон) (Smolyan)
2nd Installation Battalion (2. Монтажен Батальон) (Sopot)
3rd Installation Battalion (3. Монтажен Батальон) (Sliven)
3rd Installation Regiment (3. Монтажен Полк) (Varna)
1st Installation Battalion (1. Монтажен Батальон) (Devnya)
2nd Installation Battalion (2. Монтажен Батальон) (Shumen)
9th Construction Mechanization Brigade (9. Бригада за строителна механизация) (Chelopechene, Sofia)
Command; Staff; Supply Platoon; Construction Platoon (Chelopechene, Sofia)
Lift Transport Battalion (Самостоятелен Подемно-транспортен Батальон) (Chelopechene, Sofia)
Automobile Machinery Battalion (Самостоятелен Автомашинен Батальон) (Iskar Railway Station)
Automobile Machinery Battalion (Самостоятелен Автомашинен Батальон) (Chelopech)
Building Materials Mixtures Regiment (Полк за строителни разтвори) (Chelopechene) (concrete mixing trucks)
Combined Repair Workshop (Обединена ремонтна работилница) (Chelopechene)
Support Institutions:
Complex Institute for Scientific Research, Development, Project and Implementation Activities of the Construction Troops (Комплексен Институт за Научноизследователска, Развойна, Проектантска и Внедрителска Дейност на Строителни Войски (КИНИРПВД – СВ)) (Sofia)
Direction (Направление Научно-изследователска и Развойна Дейност)
Direction Laboratories, Experimentation and Implementation (Направление Лаборатории, Експериментиране и Внедряване)
Direction Projects (Направление Проектиране)
Higher People's Military School for Construction "General Blagoi Ivanov" (Висше Народно Военно Строително Училище (ВНВСУ) "Ген. Благой Иванов") (Sofia) – trained career Construction Troops officers
Intermediate Military Construction Sergeant School (Средно сержантско военно строително училище (ССВСУ))
School for Installation Cadres (Школа за монтажни кадри) (Burgas)
See also
:ru:Строительные войска – Soviet and Russian Construction Troops
References
(Library of Congress Country Studies) Construction Troops "[T]hese units were controlled by the Ministry of Construction, Architecture, and Public Services.."
Military of Bulgaria
Military logistics units and formations
Construction in Europe
Construction organizations
Organizations established in 1920
Organizations disestablished in 2000 | Construction Corps (Bulgaria) | [
"Engineering"
] | 6,066 | [
"Construction",
"Construction organizations"
] |
66,023,394 | https://en.wikipedia.org/wiki/Non-explosive%20reactive%20armor | Non-explosive reactive armour (NxRA), also known as non-energetic reactive armor (NERA), is a type of vehicle armor used by modern main battle tanks and heavy infantry fighting vehicles. NERA's advantages over explosive reactive armor (ERA) are its low cost, multi-hit capability, and ease of integration onto armored vehicles due to its non-explosive nature.
Operating mechanism
The operating principle of NERA relies on the speed deviation of a shock wave propagating in different materials.
When a projectile such as a shaped-charge jet hits the NERA's front metal plate, a high-speed shock wave is generated within it. The shock wave propagates through the metal plate until it encounters a confined non-metallic layer with elastic properties, such as rubber. Due to the lower propagation velocity of the non-metallic material, the shock wave refracts, in a manner similar to how light refracts in water. The shock wave then leaves the non-metallic layer and encounters the NERA's metallic back plate. Because of the prior refraction, the direction of propagation through the back plate is different from what it was through the first plate. This causes a rapid acceleration of the metallic back plate in that new direction. The resulting deformation, in conjunction with the first plate, is strong enough to shear the projectile or otherwise disrupt it.
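The velocity-ratio refraction described above can be illustrated with the acoustic analogue of Snell's law; the wave speeds below are rough handbook values for steel and rubber, not data for any particular armor array.

```python
import math

def refracted_angle(theta_in_deg, v_in, v_out):
    """Snell's-law analogue for waves: sin(t_out)/sin(t_in) = v_out/v_in.
    Returns the refracted angle in degrees, or None beyond the critical
    angle (no transmitted wave in this simple picture)."""
    s = math.sin(math.radians(theta_in_deg)) * v_out / v_in
    return math.degrees(math.asin(s)) if abs(s) <= 1.0 else None

# A wave passing from steel (~5900 m/s) into rubber (~1600 m/s) at 60
# degrees incidence bends sharply toward the normal:
print(refracted_angle(60.0, 5900.0, 1600.0))  # ~13.6 degrees
```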
Layout
NERA typically consists of a three-layer composite sandwich structure sloped between 50° and 60°. In order to guarantee an excellent multi-hit capability against threats, the sandwiches are overlapped in a spaced configuration forming an array.
Materials
The two metallic plates in the NERA sandwich are made of steels of varying hardness and thickness. Depleted uranium plates have also been tested.
Rubber and plastic were initially used as the inner non-metallic material, but modern materials now include foam, nylon, polycarbonate, glass, elastomer and more energetic materials such as glycidyl azide polymer (GAP).
History
British developments
The threat posed by antitank guided missiles was clearly recognized by the FVRDE, and as a result a research program was initiated in 1963. The program was largely empirical in nature and was directed by Dr G. N. Harvey, then assistant director of research at FVRDE (who has been generally credited with the invention of Chobham armour), in collaboration with J. P. Downey, who was responsible for its extensive series of firing trials.
The research program began to bear fruit in 1964, and by the following year had resulted in the creation of a new form of armor which was more than twice as effective against shaped charges as rolled homogeneous armor of the same weight, and at least as effective as the latter against kinetic energy armor-piercing projectiles. The new armor was then called Chobham armour, after the location of FVRDE.
In 1968 work began on applying it to tanks and a feasibility study (codenamed Almagest) on fitting Chobham armour (also called Burlington) to the Chieftain main battle tank was undertaken. Two different Chobham armour kits were used, the skirt armor consisted basically of steel boxes containing plastic/steel sandwiches arranged in the manner of venetian blinds assembly while the front hull armor consisted of a bar armor mounted over a steel burster plate with three sandwiches consisting each of three to five plastic and steel layers underneath.
By February 1970 a decision was taken to build an experimental tank based on the Chieftain Mk. 3 components, which would incorporate Chobham armour. The test vehicle was built at FVRDE in 13 months and was designated as FV4211. In addition to having Chobham armour, the FV4211 was also the first main battle tank to have a hull made of welded aluminium plates to keep down its weight.
Russian developments
During the summer of 1977, samples of Chobham armour were smuggled from West Germany into East Germany by Soviet agents. In the early 1980s, NII Stali, in conjunction with Uralvagonzavod, developed a new turret for the late-production T-72A with "отражающие листы" (Russian for "reflecting plates") armor inserts. By September 1982, the cast turret, codenamed 172.10.077SB, entered low-rate production; it was dubbed "Super Dolly Parton" by Western observers due to its prominent shape. Each reflecting-plate array consisted of an assembly of three layers: a heavy armor plate, a rubber interlayer and a thin metal plate, all glued together.
French developments
By the end of 1979, the AMX-APX began to extend its research on composite armor for the upcoming AMX-40 main battle tank. In order to remain competitive on the foreign market, the new armor was to represent a technological breakthrough compared to the spaced armor previously developed for the AMX-32.
Furthermore, the Staff of the French Army (EMAT) had high hopes for the EPC program, which was to lead to the creation of the Leclerc main battle tank, with protection against modern threats a keystone of the program. The armor research department of the AMX-APX was managed at the time by Maurice Bourgeat and his assistant Daniel Vallée; both were weapons scientists and worked closely with the French-German Research Institute of Saint-Louis (ISL) and the Central Technical Establishment of Weapons at Arcueil (ETCA). Under contract to the Technical Center of Land Weapons (CETAM) of Bourges, they developed the first configuration of what would later be named the PAC or Plaques Accélérées par Chocs (French for "Shock-Accelerated Plates"), whose working principle and layout can be compared to non-explosive reactive armor (NERA).
Bourgeat and Vallée later worked on its integration on the Leclerc tank in the form of removable composite modules. They were awarded the 1987 Engineer Chanson Prize for their work.
Iraqi developments
In 1989, the Iraqi Military Production Authority (MPA) unveiled a composite appliqué armor kit for the T-55 at the Baghdad International Exhibition of Military Technology, also known as the Baghdad Arms Fair. This appliqué armour was fitted to a small number of tanks prior to the Gulf War. Those tanks were known by the Iraqi Army as Al Faw, and as Enigma by NATO intelligence services because of the armour's unknown nature at the time.
During combat use in the battle of Khafji, the armour proved to be effective against MILAN anti-tank guided missiles. Post-war assessment of the appliqué armour of the captured Al Faw showed that each armour block contained a spaced array of several sandwiches made of aluminium and steel sheets with a rubber interlayer. This composite armour relied on the bulging-effect protection mechanism, and tests have shown the design to have twice the effectiveness of steel against shaped-charge weapons, though no more effectiveness than steel against armor-piercing rounds.
Notes
References
External links
Article on the NERA featured on the T-72B
Vehicle armour
Composite materials
British inventions
Science and technology in the United Kingdom
History of the tank | Non-explosive reactive armor | [
"Physics"
] | 1,442 | [
"Materials",
"Composite materials",
"Matter"
] |
66,026,415 | https://en.wikipedia.org/wiki/Modern%20Quantum%20Mechanics | Modern Quantum Mechanics, often called Sakurai or Sakurai and Napolitano, is a standard graduate-level quantum mechanics textbook, written originally by J. J. Sakurai and edited by San Fu Tuan in 1985, with later editions coauthored by Jim Napolitano. Sakurai died in 1982 before he could finish the textbook, and both the first edition of the book, published in 1985 by Benjamin Cummings, and the revised edition of 1994, published by Addison-Wesley, were edited and completed by Tuan posthumously. Napolitano later updated the book in two further editions. The second edition was initially published by Addison-Wesley in 2010 and rereleased as an eBook by Cambridge University Press, which released a third edition in 2020.
Table of contents (3rd edition)
Prefaces
Chapter 1: Fundamental Concepts
Chapter 2: Quantum Dynamics
Chapter 3: Theory of Angular Momentum
Chapter 4: Symmetry in Quantum Mechanics
Chapter 5: Approximation Methods
Chapter 6: Scattering Theory
Chapter 7: Identical Particles
Chapter 8: Relativistic Quantum Mechanics
Appendix A: Electromagnetic Units
Appendix B: Elementary Solutions to Schrödinger's Wave Equation
Appendix C: Hamiltonian for a Charge in an Electromagnetic Field
Appendix D: Proof of the Angular-Momentum Rule (3.358)
Appendix E: Finding Clebsch-Gordan Coefficients
Appendix F: Notes on Complex Variables
Bibliography
Index
Reception
Early editions of the book have received several reviews. It is a standard textbook on the subject, is recommended in other works, has inspired other textbooks, and is used as a point of comparison in book reviews. Along with Griffiths's Introduction to Quantum Mechanics, the book was also analyzed in a review of the "Philosophical Standpoints of Textbooks in Quantum Mechanics" in June 2020.
See also
Introduction to Quantum Mechanics, an undergraduate text by David J. Griffiths
List of textbooks on classical mechanics and quantum mechanics
References
External links
Publisher's website for the 2nd edition
Publisher's website for the 3rd edition
Book in the Internet Archive
Physics textbooks
1985 non-fiction books
1994 non-fiction books
2020 non-fiction books
Quantum mechanics | Modern Quantum Mechanics | [
"Physics"
] | 446 | [
"Quantum mechanics",
"Works about quantum mechanics"
] |
66,029,152 | https://en.wikipedia.org/wiki/List%20of%20k-uniform%20tilings | A k-uniform tiling is a tiling of the plane by convex regular polygons, connected edge-to-edge, with k types of vertices. The 1-uniform tilings include the 3 regular tilings and the 8 semiregular tilings. A 1-uniform tiling can be defined by its vertex configuration. Higher k-uniform tilings are listed by their vertex figures, but are not generally uniquely identified this way.
The complete lists of k-uniform tilings have been enumerated up to k=6. There are 20 2-uniform tilings, 61 3-uniform tilings, 151 4-uniform tilings, 332 5-uniform tilings, and 673 6-uniform tilings. This article lists all solutions up to k=5.
Other tilings of regular polygons that are not edge-to-edge allow different sized polygons, and continuous shifting positions of contact.
Classification
Such periodic tilings of convex polygons may be classified by the number of orbits of vertices, edges and tiles. If there are k orbits of vertices, a tiling is known as k-uniform or k-isogonal; if there are t orbits of tiles, as t-isohedral; if there are e orbits of edges, as e-isotoxal.
k-uniform tilings with the same vertex figures can be further identified by their wallpaper group symmetry.
Enumeration
1-uniform tilings include 3 regular tilings, and 8 semiregular ones, with 2 or more types of regular polygon faces. There are 20 2-uniform tilings, 61 3-uniform tilings, 151 4-uniform tilings, 332 5-uniform tilings and 673 6-uniform tilings. Each can be grouped by the number m of distinct vertex figures, which are also called m-Archimedean tilings.
Finally, if the number of types of vertices is the same as the uniformity (m = k below), then the tiling is said to be Krotenheerdt. In general, the uniformity is greater than or equal to the number of types of vertices (m ≥ k), as different types of vertices necessarily have different orbits, but not vice versa. Setting m = n = k, there are 11 such tilings for n = 1; 20 such tilings for n = 2; 39 such tilings for n = 3; 33 such tilings for n = 4; 15 such tilings for n = 5; 10 such tilings for n = 6; and 7 such tilings for n = 7.
1-uniform tilings (regular)
A tiling is said to be regular if the symmetry group of the tiling acts transitively on the flags of the tiling, where a flag is a triple consisting of a mutually incident vertex, edge and tile of the tiling. This means that, for every pair of flags, there is a symmetry operation mapping the first flag to the second. This is equivalent to the tiling being an edge-to-edge tiling by congruent regular polygons. There must be six equilateral triangles, four squares or three regular hexagons at a vertex, yielding the three regular tessellations.
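The vertex condition can be verified with a short brute-force check: the interior angle of a regular n-gon is (n − 2)·180°/n, and k ≥ 3 copies of a single polygon must fill exactly 360° around a vertex. A minimal sketch in Python (exact rational arithmetic avoids floating-point issues; the variable names are illustrative):

    from fractions import Fraction

    # Find all ways to surround a vertex with k >= 3 copies of one regular n-gon:
    # k * interior_angle(n) == 360, with interior_angle(n) = (n - 2) * 180 / n.
    solutions = []
    for n in range(3, 50):
        angle = Fraction((n - 2) * 180, n)
        k, remainder = divmod(Fraction(360), angle)
        if remainder == 0 and k >= 3:
            solutions.append((n, int(k)))

    print(solutions)  # [(3, 6), (4, 4), (6, 3)]

Only (n, k) = (3, 6), (4, 4) and (6, 3) survive, matching the three regular tessellations.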
m-Archimedean and k-uniform tilings
Vertex-transitivity means that for every pair of vertices there is a symmetry operation mapping the first vertex to the second.
If the requirement of flag-transitivity is relaxed to one of vertex-transitivity, while the condition that the tiling is edge-to-edge is kept, there are eight additional tilings possible, known as Archimedean, uniform or demiregular tilings. Note that there are two mirror image (enantiomorphic or chiral) forms of 34.6 (snub hexagonal) tiling, only one of which is shown in the following table. All other regular and semiregular tilings are achiral.
Grünbaum and Shephard distinguish the description of these tilings as Archimedean as referring only to the local property of the arrangement of tiles around each vertex being the same, and that as uniform as referring to the global property of vertex-transitivity. Though these yield the same set of tilings in the plane, in other spaces there are Archimedean tilings which are not uniform.
1-uniform tilings (semiregular)
2-uniform tilings
There are twenty (20) 2-uniform tilings of the Euclidean plane (also called 2-isogonal tilings or demiregular tilings). Vertex types are listed for each. If two tilings share the same two vertex types, they are given subscripts 1,2.
3-uniform tilings
There are 61 3-uniform tilings of the Euclidean plane. 39 are 3-Archimedean with 3 distinct vertex types, while 22 have 2 identical vertex types in different symmetry orbits.
3-uniform tilings, 3 vertex types
3-uniform tilings, 2 vertex types (2:1)
4-uniform tilings
There are 151 4-uniform tilings of the Euclidean plane. Brian Galebach's search reproduced Krotenheerdt's list of 33 4-uniform tilings with 4 distinct vertex types, as well as finding 85 of them with 3 vertex types, and 33 with 2 vertex types.
4-uniform tilings, 4 vertex types
There are 33 with 4 types of vertices.
4-uniform tilings, 3 vertex types (2:1:1)
There are 85 with 3 types of vertices.
4-uniform tilings, 2 vertex types (2:2) and (3:1)
There are 33 with 2 types of vertices, 12 with two pairs of types, and 21 with 3:1 ratio of types.
5-uniform tilings
There are 332 5-uniform tilings of the Euclidean plane. Brian Galebach's search identified 332 5-uniform tilings, with 2 to 5 types of vertices. There are 74 with 2 vertex types, 149 with 3 vertex types, 94 with 4 vertex types, and 15 with 5 vertex types.
5-uniform tilings, 5 vertex types
There are 15 5-uniform tilings with 5 unique vertex figure types.
5-uniform tilings, 4 vertex types (2:1:1:1)
There are 94 5-uniform tilings with 4 vertex types.
5-uniform tilings, 3 vertex types (3:1:1) and (2:2:1)
There are 149 5-uniform tilings, with 60 having 3:1:1 copies, and 89 having 2:2:1 copies.
5-uniform tilings, 2 vertex types (4:1) and (3:2)
There are 74 5-uniform tilings with 2 types of vertices, 27 with 4:1 and 47 with 3:2 copies of each.
There are 29 5-uniform tilings with 3 and 2 unique vertex figure types.
Higher k-uniform tilings
k-uniform tilings have been enumerated up to 6. There are 673 6-uniform tilings of the Euclidean plane. Brian Galebach's search reproduced Krotenheerdt's list of 10 6-uniform tilings with 6 distinct vertex types, as well as finding 92 of them with 5 vertex types, 187 of them with 4 vertex types, 284 of them with 3 vertex types, and 100 with 2 vertex types.
References
Order in Space: A design source book, Keith Critchlow, 1970
Chapter X: The Regular Polytopes
Dale Seymour and Jill Britton, Introduction to Tessellations, 1989, , pp. 50–57
External links
Euclidean and general tiling links:
n-uniform tilings, Brian Galebach
Euclidean plane geometry
Regular tilings
Tessellation | List of k-uniform tilings | [
"Physics",
"Mathematics"
] | 1,615 | [
"Tessellation",
"Planes (geometry)",
"Euclidean plane geometry",
"Symmetry"
] |
66,029,968 | https://en.wikipedia.org/wiki/Brewing%20equipment | Brewing equipment is the vessels and tools used to brew beer, which usually includes systems of saccharification, fermentation, refrigeration and clean-in-place.
Archaeologists uncovered ancient beer brewing equipment in an underground room built between 3400 and 2900 BC in China. A research report published in the Proceedings of the National Academy of Sciences of the United States of America said that the Mijiaya Site provided the earliest evidence of beer-making in China, indicating that people had mastered the beer brewing technology around 5,000 years ago.
In recent years, the concentration of the beer brewing equipment industry in the international market has been increasing, and global manufacturing capacity for beer brewing equipment is mainly concentrated in Europe and Asia. Examples of manufacturers of beer brewing equipment are BrewJacket, and American Beer Equipment.
References
Brewing
Equipment
Machines | Brewing equipment | [
"Physics",
"Technology",
"Engineering"
] | 173 | [
"Physical systems",
"Machines",
"Mechanical engineering"
] |
47,693,506 | https://en.wikipedia.org/wiki/List%20of%20conjugated%20polymers |
See also
Light emitting polymers in OLEDs
Conductive polymer
Electroluminescence
References
Organic polymers
Conductive polymers
Conjugated polymers | List of conjugated polymers | [
"Chemistry"
] | 28 | [
"Organic polymers",
"Molecular electronics",
"Organic compounds",
"nan",
"Conductive polymers"
] |
47,698,145 | https://en.wikipedia.org/wiki/DEG%20monobutyl%20ether | Diethylene glycol butyl ether (2-(2-butoxyethoxy)ethanol) is the organic compound with the formula C4H9OC2H4OC2H4OH. A colorless liquid, it is a common industrial solvent. It is one of several glycol ether solvents, with a low odour and a high boiling point. It is mainly used as a solvent for paints and varnishes in the chemical industry, household detergents, and textile processing.
Production and use
Diethylene glycol monobutyl ether (DEGBE) is produced from butanol by ethoxylation, i.e., the reaction of butanol with ethylene oxide in the presence of a basic catalyst.
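Assuming two equivalents of ethylene oxide add to the butanol (an idealized stoichiometry: industrial ethoxylation actually yields a distribution of chain lengths, of which the diethylene glycol ether is one cut), the overall reaction can be written as:

C4H9OH + 2 C2H4O → C4H9OC2H4OC2H4OH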
References
Glycol ethers
Commodity chemicals
Chemical synthesis
Butyl compounds | DEG monobutyl ether | [
"Chemistry"
] | 160 | [
"nan",
"Commodity chemicals",
"Chemical synthesis",
"Products of chemical industry"
] |
47,700,932 | https://en.wikipedia.org/wiki/Nabarro%E2%80%93Herring%20creep | In materials science, Nabarro–Herring creep (NH creep) is a mechanism of deformation of crystalline (and amorphous) materials that occurs at low stresses and elevated temperatures in fine-grained materials. In Nabarro–Herring creep, atoms diffuse through the crystals, and the rate of creep varies inversely with the square of the grain size, so fine-grained materials creep faster than coarser-grained ones. NH creep is solely controlled by diffusional mass transport.
This type of creep results from the diffusion of vacancies from regions of high chemical potential at grain boundaries subjected to normal tensile stresses to regions of lower chemical potential where the average tensile stresses across the grain boundaries are zero. Self-diffusion within the grains of a polycrystalline solid can cause the solid to yield to an applied shear stress, the yielding being caused by a diffusional flow of matter within each crystal grain away from boundaries where there is a normal pressure and toward those where there is a normal tension. Atoms migrating in the opposite direction account for the creep strain. The creep strain rate is derived in the next section. NH creep is more important in ceramics than metals as dislocation motion is more difficult to effect in ceramics.
Derivation of the creep rate
The Nabarro–Herring creep rate, \dot{\epsilon}_{NH}, can be derived by considering an individual rectangular grain (in a single crystal or polycrystal). Two opposing sides have a compressive stress applied and the other two have a tensile stress applied. The atomic volume, \Omega, is decreased by compression and increased by tension; under a stress \sigma, the activation energy to form a vacancy is altered by \pm\sigma\Omega, the plus and minus signs corresponding to the compressive and tensile regions, where the activation energy is respectively raised and lowered. The fractions of vacancies in the compressive (N_v^C) and tensile (N_v^T) regions are then given as:

N_v^C = \exp\left(-\frac{E_v + \sigma\Omega}{kT}\right), \qquad N_v^T = \exp\left(-\frac{E_v - \sigma\Omega}{kT}\right)
In these equations E_v is the vacancy formation energy, k is the Boltzmann constant, and T is the absolute temperature. These vacancy concentrations are maintained at the lateral and horizontal surfaces in the grain. These net concentrations drive vacancies to the compressive regions from the tensile ones, which causes grain elongation in one dimension and grain compression in the other. This is creep deformation caused by a flux of vacancy motion.
The vacancy flux, J_v, associated with this motion is given by:

J_v = \frac{D_v}{\Omega}\,\frac{N_v^T - N_v^C}{d}

where D_v is the vacancy diffusivity and d is the grain size. The vacancy diffusivity is given as:

D_v = D_v^0 \exp\left(-\frac{E_m}{kT}\right)
where D_v^0 is the diffusivity when no vacancies are present and E_m is the vacancy motion energy. The term (N_v^T - N_v^C)/d is the vacancy concentration gradient, and the cross-sectional area through which the vacancies flow is proportional to d^2. If we multiply J_v by this area and by the atomic volume \Omega we obtain:

\frac{dV}{dt} = J_v\, d^2\, \Omega = D_v\, d \left(N_v^T - N_v^C\right)
where dV/dt is the volume changed per unit time during creep deformation. The change in volume can be related to the change in length along the tensile axis as dV = d^2\,dL. Using the relationship between the strain rate, \dot{\epsilon} = (1/d)(dL/dt), and dV/dt, the NH creep rate is given by:

\dot{\epsilon}_{NH} = \frac{1}{d^3}\frac{dV}{dt} = \frac{D_v}{d^2}\left(N_v^T - N_v^C\right) = \frac{D_v}{d^2}\, e^{-E_v/kT}\left(e^{\sigma\Omega/kT} - e^{-\sigma\Omega/kT}\right)
This equation can be greatly simplified. The lattice self-diffusion coefficient is given by:

D_L = D_v^0 \exp\left(-\frac{E_v + E_m}{kT}\right) = D_v\, e^{-E_v/kT}
As previously stated, NH creep occurs at low stresses and high temperatures. In this range \sigma\Omega \ll kT. For small x, e^x - e^{-x} \approx 2x, so the stress-dependent factor reduces to 2\sigma\Omega/kT. Thus we can re-write \dot{\epsilon}_{NH} as:

\dot{\epsilon}_{NH} = A_{NH}\,\frac{D_L\,\sigma\,\Omega}{d^2 kT}
where A_{NH} is a dimensionless constant that absorbs the geometric approximations made in the derivation.
Alternatively, this can be derived in a different method where the constant has different dimensions. In this case, the NH creep rate is given by:

\dot{\epsilon}_{NH} = A_{NH}\,\frac{D_L\, G\, b}{kT}\left(\frac{b}{d}\right)^2\frac{\sigma}{G}

where G is the shear modulus and b is the magnitude of the Burgers vector.
Comparison to Coble creep
Coble creep is closely related to Nabarro–Herring creep and is controlled by diffusion as well. Unlike Nabarro–Herring creep, mass transport occurs by diffusion along the surface of single crystals or the grain boundaries in a polycrystal. For a general expression of creep rate, the comparison between Nabarro–Herring and Coble creep can be presented as follows:

\dot{\epsilon} = \frac{A\, D\, G\, b}{kT}\left(\frac{b}{d}\right)^p\left(\frac{\sigma}{G}\right)^n

Here G is the shear modulus, b the Burgers vector, and d the grain size. The diffusivity D is obtained from the tracer diffusivity. The dimensionless constant A depends intensively on the geometry of grains. The parameters A, p and n are dependent on the creep mechanism: for Nabarro–Herring creep, D is the lattice diffusivity with p = 2 and n = 1, whereas for Coble creep, D is the grain-boundary diffusivity with p = 3 and n = 1. Nabarro–Herring creep does not involve the motion of dislocations. It predominates over high-temperature dislocation-dependent mechanisms only at low stresses, and then only for fine-grained materials. Nabarro–Herring creep is characterized by creep rates that increase linearly with the stress and inversely with the square of grain diameter.
In contrast, in Coble creep atoms diffuse along grain boundaries and the creep rate varies inversely with the cube of the grain size. Lower temperatures favor Coble creep and higher temperatures favor Nabarro–Herring creep because the activation energy for vacancy diffusion within the lattice is typically larger than that along the grain boundaries; thus lattice diffusion slows down relative to grain boundary diffusion with decreasing temperature.
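The temperature crossover between the two mechanisms can be illustrated numerically with the general rate expression above. The sketch below uses hypothetical, illustrative parameter values (the prefactors, diffusivities and activation energies are placeholders, not data for any real material); the grain-boundary path is simply given the smaller activation energy:

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def diffusional_creep_rate(A, D0, Q, G, b, d, sigma, T, p, n=1):
        """Generic diffusional creep rate (A*D*G*b / k_B*T) * (b/d)**p * (sigma/G)**n."""
        D = D0 * math.exp(-Q / (k_B * T))  # Arrhenius diffusivity, m^2/s
        return (A * D * G * b / (k_B * T)) * (b / d) ** p * (sigma / G) ** n

    # Placeholder shear modulus, Burgers vector, stress and grain size.
    G, b, sigma, d = 50e9, 3e-10, 10e6, 1e-5
    for T in (800.0, 1400.0):
        nh = diffusional_creep_rate(10, 1e-4, 4.0e-19, G, b, d, sigma, T, p=2)     # lattice path
        coble = diffusional_creep_rate(30, 1e-5, 2.5e-19, G, b, d, sigma, T, p=3)  # boundary path
        print(f"T = {T:.0f} K: Nabarro-Herring {nh:.2e} /s, Coble {coble:.2e} /s")

With these placeholder numbers the Coble term dominates at 800 K and the Nabarro–Herring term at 1400 K, reproducing the qualitative trend described above.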
Experimental and theoretical examples
Creep in dense, polycrystalline magnesium oxide and iron-doped polycrystalline magnesia
Compressive creep in polycrystalline beryllium oxide
Creep in polycrystalline that has been doped with Cr, Fe, or Ti
Creep in dry synthetic dunite which results in trace melt and some grain growth
Reproduced for nanopolycrystalline systems in Phase Field Crystal simulations (theory matched in terms of creep stress and grain size exponents)
References
Materials degradation | Nabarro–Herring creep | [
"Materials_science",
"Engineering"
] | 1,072 | [
"Materials degradation",
"Materials science"
] |
47,704,844 | https://en.wikipedia.org/wiki/Monsanto%20Co.%20v.%20Rohm%20and%20Haas%20Co. | Monsanto Co. v. Rohm and Haas Co., 456 F.2d 592 (3d Cir. 1972), is a 1972 decision of the United States Court of Appeals for the Third Circuit interpreting what conduct amounts to fraudulent procurement of a patent.
This case is one of the early decisions following the US Supreme Court's 1964 decision in Walker Process v. Food Machinery holding fraud on the US Patent Office as potentially violating the Sherman Antitrust Act, and one of the first (if not the first) to hold that failure to disclose material information to the Patent Office was fraudulent.
Background
Monsanto procured U.S. Patent No. 3,382,280, issued May 7, 1968, covering 3',4'-dichloropropionanilide (known as 3,4-DCPA or propanil), a herbicide that selectively killed weeds without killing crop plants such as rice. In November 1969 Monsanto sued Rohm and Haas for patent infringement. The only substantial issue was the validity of the patent, and that ultimately turned on whether Monsanto had committed fraud on the Patent Office in procuring the patent.
The application that resulted in the '280 patent was the third of three successive applications, the first two of which were unsuccessful. In the first application, filed in 1957, Monsanto sought a patent on some 100 compounds, including 3,4-DCPA and 3,4-DCAA (3,4-dichloroacetanilide), a chemical with some similar properties and a similar physical structure. Monsanto claimed that all the members of the class possessed "unusual and valuable herbicidal activity," while related compounds possessed "little or no herbicidal efficiency."
Unpersuaded by Monsanto's arguments, the Patent Office rejected the application as unpatentable over the prior art. In 1961, Monsanto filed a new application claiming another large class of compounds, again including 3,4-DCPA and 3,4-DCAA and again asserting that the class possessed "unusual and valuable herbicidal activity." Again, the Patent Office rejected the application as unpatentable over the prior art. In 1967, Monsanto applied again, this time claiming only 3,4-DCPA and representing only that 3,4-DCPA had "unusual and valuable herbicidal activity" and that its activity was "surprising" because "related compounds possess little or no herbicidal efficiency." Again, the Patent Office (initially) rejected the application on the ground that the product was obvious over the prior art. But this time Monsanto overcame the rejection.
A major issue in the Patent Office was whether the patent application should be denied because 3,4-DCPA was obvious from previously known products, the most significant of which was 3,4-DCAA (3,4-dichloroacetanilide), a chemical with some similar properties and a similar physical structure. Both were useful in making pigments and both had herbicidal properties. The structural difference between the two "closely related" compounds was that 3,4-DCAA "differ[ed] in its structural formula solely by having one less CH2 group" than 3,4-DCPA. Because of the similarity in structure between 3,4-DCPA and other chemicals, including 3,4-DCAA, the Patent Office rejected the patent application on obviousness grounds. Monsanto then tried to persuade the Office to withdraw the rejection by submitting documents to show that DCPA was not obvious, because it had greatly superior and unexpected selective herbicidal activity.
Monsanto filed the Husted Affidavit. The trial court found that this document "contains no affirmative misrepresentation and is accurate so far as it goes" but "it is misleading, and was intended to be misleading, in that it fails to state facts known to the applicant which were inconsistent with its position that propanil is a superior herbicide." The court of appeals commented:
The patent was issued, however, . . . after Monsanto submitted an affidavit of Dr. Robert F. Husted, based on tests performed by him on twenty plant species at three different rates of application per acre. The report as presented to the Patent Office asserted that 3,4-DCPA completely killed or severely injured nine of the eleven species and failed to have any effect on only two. Eight other compounds were reported to have no effect on any of the eleven plants and two other compounds, one of them 3,4-DCAA, either had very slight or no effect. Significantly, although the Husted tests entailed tests on twenty species, at three separate rates of application per acre, the Patent Office was informed of tests on only eleven species and only at one rate of application, two pounds per acre. In all, the affidavit showed less than 25 per cent of Husted's results; of 899 tests, only 110 were submitted. The district court concluded that this close-cropping of Husted's findings amounted to misrepresentation.
Under applicable law, as understood by the court, if a compound on which a patent is sought is very similar in structure to a known compound, as 3,4-DCPA and 3,4-DCAA were, a rebuttable presumption arises that the later compound is obvious from the earlier one. "To rebut this presumption it must be shown 'that the claimed compound possesse[d] unobvious or unexpected beneficial properties not actually possessed by the prior art homologue.'"
The Husted Affidavit thus appeared to rebut the obviousness rejection by showing that 3,4-DCPA possessed an unexpected beneficial herbicidal property that 3,4-DCAA and other products lacked. The Husted Affidavit misled the Patent Office, the trial court said, because both 3,4-DCPA and 3,4-DCAA "in fact possess the newly discovered property of the claimed compound."
The trial court said that Monsanto submitted a fraudulent affidavit in that: "It is, in short, composed of half-truths." The court pointed to such omissions as these which made the affidavit one of half-truths:
For example, one omitted test result showed that 3,4-DCAA had a complete kill on pigweed at 2 lbs. per acre of application, just as 3,4-DCPA did. . . . Monsanto, by the Husted affidavit, attempted to show that closely related compounds do not possess unique herbicidal properties. The facts not stated were, therefore material."
The district court held the patent invalid and Monsanto appealed to the Third Circuit.
Decision of Third Circuit
The Third Circuit affirmed the district court's judgment of fraud (2-1) in an opinion by Circuit Judge Aldisert.
Majority opinion
The court looked at the Husted Affidavit in light of its place in the three successive Monsanto patent applications:
In view of Huffman's previously unsuccessful attempts to obtain a patent, it is reasonable to conclude that his success with his 1968 application, originally rejected by the examiner, and later accepted after presentation of the Husted Report, was attributed to his emphasis that the compound possessed properties of 'surprising . . . herbicidal efficiency' not possessed by related compounds.
Whether 3,4-DCPA really was surprisingly superior to related compounds as a herbicide was therefore critical to the patent prosecution and to whether Monsanto had committed fraud on the Patent Office. The court rejected Monsanto's argument that in the affidavit it was merely putting its best foot forward:
The Husted affidavit to the Patent Office did not nearly reflect the total Husted test as transmitted to Huffman. Indeed, an examination of the report permits, if not compels, the misleading inference that it constituted a complete and accurate analysis of all the testing instead of an edited version thereof. Concealment and nondisclosure may be evidence of and equivalent to a false representation, because the concealment or suppression is, in effect, a representation that what is disclosed is the whole truth.
Looking at this pattern of conduct, the court said, "[W]e cannot bring ourselves to say that the application for the 1968 patent displayed that standard of conduct demanded under the circumstances." Rather, "Monsanto was obliged to disclose more information as to the herbicidal properties of related compounds to the Patent Office than it did." The reason is that "what is at issue is not merely a contest between the parties but a public interest" that spurious patents should not issue. As a result of Monsanto's truncated disclosure of the facts concerning 3,4-DCPA's comparative efficacy as a herbicide, "it was impossible for the Patent Office fairly to assess Monsanto's application against the prevailing statutory criteria."
The court concluded:
Thus, Monsanto's failure to disclose amounted to misrepresentation transgressing equitable standards of conduct owed the public by the applicant in return for its monopoly. Accordingly, Monsanto was not entitled to the patent monopoly, and the district court did not err in [invalidating the patent].
Dissent
Judge Kalodner dissented. First, he said, the majority decided the case on the basis of the preponderance of evidence but it should have used the "clear and convincing evidence" standard.
Second, the district court erroneously said that it was not necessary to prove specific intent to deceive the Patent Office "when there is evidence of a deliberate withholding of material information," and that the patent must be rejected "even if the decision not to disclose was motivated by nothing more than bad judgment as to the materiality of the information."
Third, it was not shown that the Patent Office "would not have issued the patent but for the claimed fraudulent conduct." There was no proof that the Patent Office was misled.
Commentary
In an article in the George Washington Law Review, Irving Kayton criticized "the Third Circuit's ignorance of the accepted manner and practice, defined by statute and rule, in which patent prosecution takes place." Kayton said that it was the responsibility of the patent examiner to review the records of the earlier stages of the patent prosecution, and there is a legal presumption that the examiner did that; that examination would have disclosed the information that Monsanto withheld in the Husted Affidavit. Accordingly, Kayton argued, "Monsanto['s] patent solicitor was thus completely justified in attributing actual, but at the least, constructive knowledge to the examiner of the contents of the disclosures of both the 1961 and 1957 applications." Co-author Richard H. Stern disagreed with his colleague on this issue. Stern maintained, "The proposed reliance on constructive knowledge, and the indulgence in legal fictions and presumptions that follows, has far more flavor of a determination not to learn damaging facts than it has of the candor required in such cases."
Moreover, argued Kayton, the fact that the Third Circuit and the judge in the similar Texas case disagreed over whether Monsanto was just "putting its best foot forward" or instead was intentionally defrauding the Patent Office indicates that reasonable men can differ: "This being so, is it not an inescapable conclusion . . . that the Monsanto patent solicitor's [conduct] . . . was at worst a nonculpable mistake of judgment?" Again, co-author Stern disagreed: "No. The Third Circuit, on the contrary, appears to have thought that Judge Singleton's views were plainly erroneous, and that his acceptance of the patentee's theory that 'it is all right to put your best foot forward with the Patent Office' was contrary to the legal standard as to candor required in ex parte proceedings of this type. Given so thoroughly erroneous a concept of the standard of candor, it is neither surprising that in the circumstances of this case the judge using that legal standard found no fraud nor surprising that the judges rejecting it found fraud."
Another discussion of this case faults the district court's decision for purporting to invalidate the patent for fraud, and by implication the subsequent Third Circuit affirmance. The stated basis of the criticism is that fraudulent procurement does not invalidate a patent, but merely makes it permanently unenforceable.
References
External links
United States Court of Appeals for the Third Circuit cases
United States patent case law
Monsanto litigation
Dow Chemical Company
Herbicides
1972 in United States case law
Regulation of genetically modified organisms | Monsanto Co. v. Rohm and Haas Co. | [
"Engineering",
"Biology"
] | 2,621 | [
"Regulation of genetically modified organisms",
"Herbicides",
"Regulation of biotechnologies",
"Genetic engineering",
"Biocides"
] |
68,867,066 | https://en.wikipedia.org/wiki/Fibbinary%20number | In mathematics, the fibbinary numbers are the numbers whose binary representation does not contain two consecutive ones. That is, they are sums of distinct and non-consecutive powers of two.
Relation to binary and Fibonacci numbers
The fibbinary numbers were given their name by Marc LeBrun, because they combine certain properties of binary numbers and Fibonacci numbers:
The number of fibbinary numbers less than any given power of two is a Fibonacci number. For instance, there are 13 fibbinary numbers less than 32, the numbers 0, 1, 2, 4, 5, 8, 9, 10, 16, 17, 18, 20, and 21.
The condition of having no two consecutive ones, used in binary to define the fibbinary numbers, is the same condition used in the Zeckendorf representation of any number as a sum of non-consecutive Fibonacci numbers.
The nth fibbinary number (counting 0 as the 0th number) can be calculated by expressing n in its Zeckendorf representation, and re-interpreting the resulting binary sequence as the binary representation of a number. For instance, the Zeckendorf representation of 19 is 101001 (where the 1's mark the positions of the Fibonacci numbers used in the expansion 19 = 13 + 5 + 1); the binary sequence 101001, interpreted as a binary number, represents 32 + 8 + 1 = 41, and the 19th fibbinary number is 41.
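This calculation is straightforward to implement. A minimal Python sketch (the function names are illustrative) builds the Zeckendorf representation greedily and reads it back as a binary number:

    def nth_fibbinary(n: int) -> int:
        """Return the n-th fibbinary number (counting 0 as the 0th) by
        re-interpreting the Zeckendorf representation of n as binary."""
        if n == 0:
            return 0
        fibs = [1, 2]                             # Fibonacci numbers 1, 2, 3, 5, 8, ...
        while fibs[-1] < n:
            fibs.append(fibs[-1] + fibs[-2])
        result = 0
        for i in range(len(fibs) - 1, -1, -1):    # greedy: largest Fibonacci first
            if fibs[i] <= n:
                n -= fibs[i]
                result |= 1 << i                  # set the corresponding binary bit
        return result

    def is_fibbinary(x: int) -> bool:
        """A number is fibbinary iff its binary form has no two adjacent ones."""
        return x & (x >> 1) == 0

    assert nth_fibbinary(19) == 41                # example from the text
    assert all(is_fibbinary(nth_fibbinary(i)) for i in range(100))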
The nth fibbinary number (again, counting 0 as the 0th) is even or odd if and only if the nth value in the Fibonacci word is 0 or 1, respectively.
Properties
Because the property of having no two consecutive ones defines a regular language, the binary representations of fibbinary numbers can be recognized by a finite automaton, which means that the fibbinary numbers form a 2-automatic set.
The fibbinary numbers include the Moser–de Bruijn sequence, sums of distinct powers of four. Just as the fibbinary numbers can be formed by reinterpreting Zeckendorff representations as binary, the Moser–de Bruijn sequence can be formed by reinterpreting binary representations as quaternary.
A number n is a fibbinary number if and only if the binomial coefficient C(3n, n) is odd. Relatedly, n is fibbinary if and only if the central Stirling number of the second kind S(2n, n) is odd.
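The binomial parity criterion can be checked directly against the no-adjacent-ones definition using only the standard library:

    from math import comb

    def is_fibbinary(x: int) -> bool:
        return x & (x >> 1) == 0  # no two adjacent ones in binary

    # n is fibbinary exactly when the binomial coefficient C(3n, n) is odd
    assert all(is_fibbinary(n) == (comb(3 * n, n) % 2 == 1) for n in range(200))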
Every fibbinary number takes one of the two forms 2f or 4f + 1, where f is another fibbinary number.
Correspondingly, the power series whose exponents are fibbinary numbers,

B(x) = \sum_{f\ \mathrm{fibbinary}} x^f = 1 + x + x^2 + x^4 + x^5 + x^8 + \cdots,

obeys the functional equation

B(x) = x\,B(x^4) + B(x^2).

Asymptotic formulas are known for the number of integer partitions in which all parts are fibbinary.
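The functional equation can be verified on truncated power series; the sketch below represents B(x) by its coefficient list up to degree 63:

    # Verify B(x) = x*B(x^4) + B(x^2) on power series truncated below degree N.
    N = 64
    B = [1 if i & (i >> 1) == 0 else 0 for i in range(N)]  # coefficients of B(x)
    rhs = [0] * N
    for f, c in enumerate(B):
        if c:
            if 2 * f < N:
                rhs[2 * f] += 1      # term from B(x^2)
            if 4 * f + 1 < N:
                rhs[4 * f + 1] += 1  # term from x*B(x^4)
    assert B == rhs

The identity holds because every fibbinary number is uniquely either 2f (contributing to B(x^2)) or 4f + 1 (contributing to x·B(x^4)).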
If a hypercube graph of dimension d is indexed by integers from 0 to 2^d − 1, so that two vertices are adjacent when their indexes have binary representations with Hamming distance one, then the subset of vertices indexed by the fibbinary numbers forms a Fibonacci cube as its induced subgraph.
Every number has a fibbinary multiple. For instance, 15 is not fibbinary, but multiplying it by 11 produces 165 (10100101 in binary), which is.
References
Binary arithmetic
Base-dependent integer sequences
Fibonacci numbers | Fibbinary number | [
"Mathematics"
] | 689 | [
"Recurrence relations",
"Fibonacci numbers",
"Golden ratio",
"Arithmetic",
"Mathematical relations",
"Binary arithmetic"
] |
68,870,642 | https://en.wikipedia.org/wiki/Cross-polarization | Cross-polarization (CP), originally published as proton-enhanced nuclear induction spectroscopy (PENIS), is a solid-state nuclear magnetic resonance (ssNMR) technique used to transfer nuclear magnetization between different types of nuclei via heteronuclear dipolar interactions. 1H-X cross-polarization dramatically improves the sensitivity of most ssNMR experiments involving spin-1/2 nuclei, capitalizing on the higher 1H polarisation and the shorter T1(1H) relaxation times. It was developed by Michael Gibby, Alexander Pines and Professor John S. Waugh at the Massachusetts Institute of Technology.
In this technique the natural nuclear polarization of an abundant spin (typically 1H) is exploited to increase the polarization of a rare spin (such as 13C, 15N, 31P) by irradiating the sample with radio waves at the frequencies matching the Hartmann–Hahn condition:
\gamma_H B_1(^1\mathrm{H}) = \gamma_X B_1(X) \pm n\,\omega_r

where \gamma_H and \gamma_X are the gyromagnetic ratios, B_1 are the applied radio-frequency field amplitudes, \omega_r is the spinning rate, and n is an integer. This process is sometimes referred to as "spin-locking". The power of one contact pulse is typically ramped to achieve a more broadband and efficient magnetisation transfer.
The evolution of the X NMR signal intensity during the cross polarisation is a build-up and decay process whose time axis is usually referred to as the "contact time". At short CP contact times, a build-up of X magnetisation occurs, during which the transfer of 1H magnetisation from nearby spins (and remote spins through proton spin diffusion) to X occurs. For longer CP contact times, the X magnetisation decreases from T1ρ(X) relaxation, i.e. the decay of the magnetisation during a spin lock.
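A commonly used phenomenological model of this behaviour combines an exponential build-up (time constant T_CP) with an exponential T1ρ decay. The sketch below is illustrative only: the time constants are arbitrary placeholders, and the simple two-exponential form assumes fast 1H spin diffusion:

    import numpy as np

    def cp_signal(t, T_CP=1e-4, T1rho=5e-3):
        """Two-exponential CP kinetics: build-up with time constant T_CP and
        decay of the spin-locked magnetisation with time constant T1rho."""
        scale = 1.0 / (1.0 - T_CP / T1rho)
        return scale * (np.exp(-t / T1rho) - np.exp(-t / T_CP))

    contact_times = np.linspace(0.0, 20e-3, 9)  # contact times in seconds
    for t, s in zip(contact_times, cp_signal(contact_times)):
        print(f"{t * 1e3:5.1f} ms -> relative X signal {s:.3f}")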
References
Nuclear magnetic resonance
Spectroscopy | Cross-polarization | [
"Physics",
"Chemistry"
] | 363 | [
"Molecular physics",
"Nuclear magnetic resonance",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Nuclear physics",
"Spectroscopy"
] |
73,263,964 | https://en.wikipedia.org/wiki/Inverted%20ligand%20field%20theory | Inverted ligand field theory (ILFT) describes a phenomenon in the bonding of coordination complexes where the lowest unoccupied molecular orbital is primarily of ligand character. This is contrary to the traditional ligand field theory or crystal field theory picture and arises from the breakdown of the assumption that, in organometallic complexes, ligands are more electronegative and have frontier orbitals below those of the d orbitals of electropositive metals. Towards the right of the d-block, approaching the transition metal–main group boundary, the d orbitals become more core-like, making their cations more electronegative. This decreases their energies and eventually arrives at a point where they are lower in energy than the ligand frontier orbitals. Here the ligand field inverts, so that the bonding orbitals are more metal-based and the antibonding orbitals more ligand-based. The relative arrangement of the d orbitals is also inverted in complexes displaying this inverted ligand field.
History
The first example of an inverted ligand field was demonstrated in a 1995 theoretical paper by James Snyder. In this paper, Snyder proposed that the [Cu(CF3)4]− complexes reported by Naumann et al., assigned a formal oxidation state of 3+ at the copper, would be better thought of as Cu(I). By comparing the d-orbital occupations, calculated charges, and orbital populations of the formally Cu(III) [Cu(CF3)4]− complex and the formally Cu(I) [Cu(CH3)2]− complex, he illustrated how the former could be better described as a d10 copper complex experiencing two-electron donation from the CF3− ligands. The phenomenon, termed an inverted ligand field by Roald Hoffmann, began to be described by Aullón and Alvarez, who identified it as a result of relative electronegativities. Lancaster and co-workers later provided experimental evidence to support the assignment of this oxidation state. Using UV/visible/near-IR spectroscopy, Cu K-edge X-ray absorption spectroscopy, and 1s2p resonant inelastic X-ray scattering in concert with density functional theory, multiplet theory, and multireference calculations, they were able to map the ground-state electronic configuration. This showed that the lowest unoccupied orbital was of primarily trifluoromethyl character, confirming the presence of an inverted ligand field and establishing experimental tools to probe this phenomenon. Since the Snyder case, many other complexes of later transition metals have been shown to display inverted ligand fields through both theoretical and experimental methods.
Probing inverted ligand fields
Computational and experimental techniques have been imperative for the study of inverted ligand fields, especially when used cooperatively.
Computational
Computational methods have played a large role in understanding the nature of bonding in both molecular and solid-state systems displaying inverted ligand fields. The Hoffmann group has completed many calculations to probe the occurrence of inverted ligand fields in varying systems. In a study of the adsorption of CO on PtBi and PtBi2 surfaces, using an octahedral [Pt(BiH3)6]4+ model with Pt thought of as having a formal 4+ oxidation state, the team found that the t2g metal orbitals were higher in energy than the eg orbitals. This inversion of the d-orbital ordering was attributed to the bismuth-based ligand orbitals being higher in energy than the metal d orbitals. In another study involving calculations on the Ag(III) salt KAgF4 and other Ag(II) and Ag(III) compounds, the Ag d orbitals were found to lie below those of the fluoride ligand orbitals, which was confirmed by Grochala and coworkers by core and valence spectroscopies.
The Mealli group developed the program Computer Aided Composition of Atomic Orbitals (CACAO) to provide visualised molecular orbitals analyses based on perturbation theory principles. This program successfully displayed orbital energy inversion with organometallic complexes containing electronegative metals such as Ni or Cu bound to electropositive ligand atoms such as B, Si, or Sn. In these cases the bonding was described as a ligand to metal dative bond or sigma backdonation.
Alvarez and coworkers used computational methods to illustrate ligand field inversion in the band structures of solid-state materials. The group found that, contrary to the classical bonding scheme, in calculated MoNiP8 band structures the eg-type orbitals of the octahedral nickel atom were the major component of an occupied band below the t2g set. Additionally, the band around the Fermi level that included the Ni+ antibonding orbitals was found to be mostly of phosphorus character, a clear example of an inverted ligand field. Similar observations were made in other solid-state materials such as the skutterudite CoP3 structure. A consequence of the inverted ligand field in this case is that the conductivity in skutterudites is associated with the phosphorus rings rather than the metal atoms.
Experimental
X-ray absorption spectroscopy (XAS) has been a powerful tool for deducing the oxidation states of transition metals. Edge energies in XAS shift higher with the higher effective nuclear charge of atoms in higher oxidation states, owing to the higher binding energy of the deeper, more core-like electrons.
Despite this being a very powerful technique, competing effects on the rising-edge positions can make assignment difficult. It was initially thought that the weak, quadrupole-allowed pre-edge peak assigned as the Cu 1s to 3d transition could be used to distinguish between Cu(II) and Cu(III), with the features appearing at 8979 ± 0.3 eV and 8981 ± 0.5 eV, respectively. Ab initio calculations by Tomson, Wieghardt, and co-workers showed that pre-edge peaks previously assigned to Cu(III) could also be displayed by complexes bearing Cu(II). Many groups have shown that metal K-edge XAS transitions involving ligand-localized acceptor orbitals, as well as spectral shifts from changes in coordination environment, can make metal K-edge analysis less predictable.
The most successful uses of K- and L-edge XAS to provide valuable information on the composition of molecular orbitals and to display inverted ligand fields have come from studies that used computational techniques in concert with experimental ones. This was the case for the L2[Cu2(S2)n]2+ complexes of York, Brown, and Tolman, and for [Cu(CF3)4]− as studied by various groups including those of Hoffmann, Overgaard, and Lancaster.
Another experimental tool used to probe ligand field inversion includes Electron paramagnetic resonance (ESR/EPR), which can provide information regarding the metal electronic configuration, the nature of the SOMO, and high resolution information on the ligands.
Impact of charge and geometry
Changes in both charge and geometry of organometallic complexes can greatly vary the energies of molecular orbitals and can therefore dictate the likelihood of observing an inverted ligand field. Hoffmann and coworkers explored the impact of these variables by calculating the atomic composition of molecular orbitals for monoanionic, dianionic, and trianionic copper complexes. The square-planar monoanion displayed the reported ligand field inversion. The "Cu(II)" dianion, which has a geometry intermediate between square planar and tetrahedral, also displayed this feature, with the antibonding t2-derived orbital being mostly of ligand character and the x2−y2 orbital being the lowest molecular orbital of the d block. The tetrahedral trianion showed a return to the Werner-type ligand field. By modulating the geometry of the "Cu(II)" species and displaying the change in MO energies on Walsh diagrams, the group was able to show how the complex could display both a classical and an inverted ligand field, in Td and SP geometry respectively. Additional calculations on the Cu(I) complex with non-tetrahedral geometry also displayed an inverted ligand field. This indicates the importance of not just oxidation state but also geometry in determining the inversion of a ligand field.
Consequences on bonding
The inversion of ligand fields has interesting implications for the reactivity of organometallic complexes. The sigma non-innocence of ligands arising from inverted ligand fields could therefore be used to tune the reactivity of complexes and open space for understanding the mechanisms of existing reactions.
In an analysis of [ZnF4]2−, it was found that, due to the ligand field inversion displayed in this species, core ionization removes an electron from the metal-rich bonding t2 orbital, lengthening the Zn-F bonds. This is contrary to the classical ligand field, where ionization would remove an electron from the antibonding t2 orbital, shortening the Zn-F bonds.
The presence of electron-deficient ligands can also result in an inverted ligand field. Calculations have shown that the large O 2p contribution to the LUMO/LUMO+1 in [(LTEEDCu)2(O2)]2+ should make the complex highly oxidizing, as it contains electron-deficient O2- ligands. Studies have corroborated this property, as the complex has been shown to undergo C-H and C-F activation and aromatic hydroxylation.
There is evidence showing that reductive elimination in species displaying ligand field inversion does not involve a redox event at the metal center. C-CF3 bond formation by "Ni(IV)" complexes was accomplished without redox participation of the nickel: the metal appears to remain Ni(II) throughout the reaction. The mechanism is thought to proceed through attack by anionic CF3 on a masked electrophilic cation. The electron deficiency here is due to the inverted ligand field.
References
Chemical bonding
Inorganic chemistry | Inverted ligand field theory | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,019 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
73,272,155 | https://en.wikipedia.org/wiki/Principal%20interacting%20orbital | Principal interacting orbital (PIO) analysis, based on quantum chemical calculations, provides chemists with a visualization of a set of semi-localized dominant interacting orbitals. The method offers an additional perspective to molecular orbitals (MO) obtained from quantum chemical calculations (DFT for instance), which often provide extensively delocalized orbitals that are hard to interpret and relate to chemists' intuition about electronic structures and orbital interactions. Several other efforts have been made to help visualize semi-localized dominant interacting orbitals that represent chemists' intuition well while maintaining mathematical rigor. Notable examples include natural atomic orbitals (NAO), natural bond orbitals (NBO), charge decomposition analysis (CDA), and adaptive natural density partitioning (AdNDP). PIO analysis uniquely provides semi-localized MOs that are chemically accurate (for example, it does not always produce two-center two-electron localized orbitals, and the PIOs evolve continuously along a potential energy surface) and easy to interpret.
General workflow
A typical workflow is summarized here. For details, please refer to the reference or consult the website.
Optimize structure and calculate electronic structure.
Run NBO analysis to obtain the NAO basis and corresponding density matrix.
Run PIO analysis.
Mathematical details
The PIO analysis is based on the statistical method principal component analysis (PCA).
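As a rough numerical illustration of the idea (a minimal sketch, not the published PIO algorithm: it assumes an orthonormal orbital basis, a precomputed density matrix, and a clean partition of the orbitals into fragments A and B), the interfragment block of the density matrix can be decomposed by singular value decomposition, the matrix form of PCA. Each left/right singular-vector pair then defines one candidate pair of interacting orbitals, ranked by its singular value:

    import numpy as np

    def pio_pairs(density, n_A):
        """SVD of the A-B block of a density matrix in an orthonormal basis:
        rows = fragment A orbitals, columns = fragment B orbitals."""
        D_AB = density[:n_A, n_A:]
        U, s, Vt = np.linalg.svd(D_AB)
        fractions = s**2 / np.sum(s**2)  # fractional weight of each orbital pair
        return U, Vt.T, fractions        # paired A orbitals, B orbitals, weights

    rng = np.random.default_rng(0)
    M = rng.standard_normal((6, 6))
    density = M @ M.T                    # symmetric stand-in for a real density matrix
    pio_A, pio_B, weights = pio_pairs(density, n_A=3)
    print(np.round(weights, 3))          # dominant interaction pairs come first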
Chemical examples
Diels-Alder reaction
Ethylene and hexadeca-1,3,5,7,9,11,13,15-octaene
The Diels-Alder reaction of hexadeca-1,3,5,7,9,11,13,15-octaene and ethylene can be thought of as a [4+2] reaction between a substituted diene and a dienophile. The frontier molecular orbitals produced by a typical structural optimization are as follows: the HOMO and LUMO of the dienophile "ethylene" are two-centered, while the HOMO and the LUMO of the substituted diene "hexadeca-1,3,5,7,9,11,13,15-octaene" are delocalized over the entire molecule.
This is different from chemists' traditional depiction of the Diels-Alder reaction: the HOMO (two-centered) of the dienophile interacts with the LUMO of the diene (four-centered), and the LUMO (two-centered) of the dienophile interacts with the HOMO of the diene (four-centered).
The computed delocalized HOMO and LUMO of hexadeca-1,3,5,7,9,11,13,15-octaene make it hard for chemists to make useful interpretations.
On the other hand, the dominant PIOs from PIO analysis resemble the HOMO/LUMO (four-centered) of an unsubstituted butadiene. This highlights an advantage of PIO calculation—it localizes the orbitals to the reactive part and preserves the multi-centered feature. Another feature of PIO calculation that must be highlighted is that the first two principal orbital interactions—which resemble the interaction of the HOMO of the diene with the LUMO of the dienophile, and the interaction of the LUMO of the diene with the HOMO of the dienophile—sum to over 95% of the total orbital interaction between the two fragments.
Reaction coordinate tracing
PIO analysis with intrinsic reaction coordinate (IRC) calculation gives continuous results. The continuality extends to the evolution of the shape of the PIOs and their percentage of contribution to the overall orbital interaction. This is another advantage of PIO analysis over other methods to obtain localized electronic structures such as NBO and AdNDP. The other methods require predefined parameters and often lead to ambiguous chemical structures and unphysical discontinuity. For instance, when the Diels-Alder reaction is analyzed with IRC and NBO, (1) the orbitals on the diene are described as two-center-two-electron bonds, and (2) the result is not continuous—three pi bonds would suddenly switch to three newly formed bonds.
Further, PIO tracing of the reaction coordinate can reveal other properties such as the electronic demand of a Diels-Alder reaction. For a normal demand DA reaction (EDG on diene and EWG on dienophile), PIO analysis shows that the reaction is dominated by contribution from the HOMO of the diene and the LUMO of the dienophile. For a reverse demand DA (EWG on diene and EDG on dienophile), PIO analysis shows that the reaction is dominated by contribution from the LUMO of the diene and the HOMO of the dienophile. On the other hand, for a neutral demand DA, contributions from the diene-HOMO/dienophile-LUMO and diene-LUMO/dienophile-HOMO pairs are roughly equal.
Zeise's salt
PIO can also be used to describe transition metal compounds, which are often more complicated to analyze than main group compounds due to the greater number of possible bonding patterns. A classic example is Zeise's salt, which is usually described with the Dewar-Chatt-Duncanson (DCD) model: C2H4 donates its π electrons to an empty orbital of Pt, while its π* orbital accepts electrons from Pt. The semilocalized bonding cannot be adequately described with methods such as NBO (localized two-center-two-electron) or CMOs (delocalized over the entire molecule). On the other hand, PIO analysis produces a model that is in best agreement with chemical intuition. The top two PIOs sum to over 90% of the overall orbital contribution. The first PIO pair is between the dz2 orbital of the metal and the π orbital of ethylene. The second PIO pair is between the dxz orbital of the metal and the π* orbital of ethylene.
[Re2Cl8]2-
PIO analysis of [Re2Cl8]2- reveals four primary orbital interactions, corresponding to the quadruple bond (one σ, two π, and one δ).
References
Wikipedia Student Program
Quantum chemistry | Principal interacting orbital | [
"Physics",
"Chemistry"
] | 1,307 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"Atomic",
" and optical physics"
] |
73,273,828 | https://en.wikipedia.org/wiki/List%20of%20linear%20ordinary%20differential%20equations | This is a list of named linear ordinary differential equations.
A–Z
Name | Order | Equation | Applications
Airy | 2 | y'' - xy = 0 | Optics
Bessel | 2 | x^2 y'' + x y' + (x^2 - α^2) y = 0 | Wave propagation
Cauchy-Euler | n | x^n y^(n) + a_{n-1} x^(n-1) y^(n-1) + ... + a_1 x y' + a_0 y = 0 |
Chebyshev | 2 | (1 - x^2) y'' - x y' + n^2 y = 0 | Orthogonal polynomials
Damped harmonic oscillator | 2 | y'' + 2ζω_0 y' + ω_0^2 y = 0 | Damping
Frenet-Serret | 1 | T' = κN, N' = -κT + τB, B' = -τN | Differential geometry
General Laguerre | 2 | x y'' + (α + 1 - x) y' + n y = 0 | Hydrogen atom
General Legendre | 2 | (1 - x^2) y'' - 2x y' + [ℓ(ℓ + 1) - m^2/(1 - x^2)] y = 0 |
Harmonic oscillator | 2 | y'' + ω^2 y = 0 | Simple harmonic motion
Heun | 2 | y'' + (γ/x + δ/(x - 1) + ε/(x - a)) y' + (αβx - q) y / [x(x - 1)(x - a)] = 0 |
Hill | 2 | y'' + f(t) y = 0, with f periodic | Physics
Hypergeometric | 2 | x(1 - x) y'' + [c - (a + b + 1)x] y' - ab y = 0 |
Kummer | 2 | x y'' + (b - x) y' - a y = 0 |
Laguerre | 2 | x y'' + (1 - x) y' + n y = 0 |
Legendre | 2 | (1 - x^2) y'' - 2x y' + ℓ(ℓ + 1) y = 0 | Orthogonal polynomials
Matrix | 1 | y' = A(t) y |
Picard-Fuchs | 2 | λ(1 - λ) y'' + (1 - 2λ) y' - y/4 = 0 | Elliptic curves
Riemann | 2 | second-order equation with three regular singular points (Riemann-Papperitz form) |
Quantum harmonic oscillator | 2 | -(ħ^2/2m) ψ'' + (1/2) m ω^2 x^2 ψ = Eψ | Quantum mechanics
Sturm-Liouville | 2 | (p(x) y')' + q(x) y = -λ w(x) y | Applied mathematics
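Any entry in the list can be integrated numerically by rewriting it as a first-order system. A brief sketch for the Airy equation y'' = xy, using SciPy with arbitrarily chosen initial values for illustration:

    import numpy as np
    from scipy.integrate import solve_ivp

    def airy_system(x, y):
        # y[0] = y, y[1] = y'; the Airy equation y'' = x*y as a first-order system
        return [y[1], x * y[0]]

    sol = solve_ivp(airy_system, (0.0, 5.0), [1.0, 0.0], dense_output=True)
    xs = np.linspace(0.0, 5.0, 6)
    print(np.round(sol.sol(xs)[0], 3))  # y grows rapidly for x > 0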
See also
List of nonlinear ordinary differential equations
List of nonlinear partial differential equations
List of named differential equations
References
differential, ordinary, linear | List of linear ordinary differential equations | [
"Mathematics"
] | 330 | [
"Lists of equations",
"Mathematical objects",
"Mathematical tables",
"Equations"
] |
53,384,839 | https://en.wikipedia.org/wiki/International%20Union%20for%20Vacuum%20Science%2C%20Technique%20and%20Applications | The International Union for Vacuum Science, Technique, and Applications (IUVSTA) is a union of 35 science and technology national member societies that supports collaboration in vacuum science, technique and applications.
Founded in 1958, IUVSTA is an interdisciplinary union which represents several thousands of physicists, chemists, materials scientists, engineers and technologists who are active in basic and applied research, development, manufacturing, sales and education. IUVSTA finances advanced scientific workshops, international schools and technical courses, worldwide.
The main purposes of the IUVSTA are to organize and sponsor international conferences and educational activities, as well as to facilitate research and technological developments in the field of vacuum science and its applications.
History
The history and structure of the Union are described in two articles in scientific journals.
Membership
IUVSTA is a Union (or federation) of National Vacuum Societies. There can be only one member society (or National Committee) in any one nation. This Society must be representative of the scientific and technical fields encompassed by the Divisions of IUVSTA. Where appropriate a Society can represent more than one nation. IUVSTA can only recognise societies in geographical areas recognised by the United Nations as independent nations.
Member societies
Technical divisions
Applied surface science
Biointerfaces
Electronic materials & processing
Nanometer structures
Plasma science and technologies
Surface engineering
Surface science
Thin film
Vacuum science and technology
Activities
Conference organization
European Conference on Surface Science (ECOSS) annual series in collaboration with the European Physical Society.
European Vacuum Conference series. Biennial.
International Thin Film Conference.
International Vacuum Congress and Exhibition, for all areas of activity of the Union. Triennial.
Vacuum and Surface Sciences Conference of Asia and Australia (VASSCAA). Biennial.
Workshops and education
Workshops on front-line research.
An education program for both technically developed and developing countries in the form of schools, webinars and technical training courses.
Standards and prizes
Interaction with the International Organization for Standardization on the establishment of international vacuum standards.
The awarding of international prizes:
The IUVSTA Prize for Science
The IUVSTA Prize for Technology
The IUVSTA EBARA Award
The IUVSTA Medard W. Welch International Scholarship
The IUVSTA Elsevier Student Travel Awards
External affiliations
IUVSTA maintains formal links with other Non-Government Organizations involved in education, and the promotion and dissemination of science and associated techniques. With the support of IUVSTA divisions, fruitful cooperation with UNESCO, ISC, ICTP and TWAS have been initiated and developed. Such contacts facilitate the organization of specialized workshops and may offer financial support for students attending short courses, seminars and congresses. Links with other organizations such as ISO are the responsibility of the IUVSTA divisions.
United Nations Educational, Scientific and Cultural Organization (UNESCO)
IUVSTA has been admitted to UNESCO in the “Relations Operationnelles” category.
International Science Council (ISC)
IUVSTA is a Scientific Associate of the International Science Council (formerly International Council of Scientific Unions, ICSU)
International Centre for Theoretical Physics (ICTP)
IUVSTA cooperates financially and scientifically with International Centre for Theoretical Physics (ICTP) in the organization of workshops of high scientific level held in Trieste. These workshops address a post-graduate and post-doctoral audience from the lesser developed countries.
Third World Academy of Sciences (TWAS)
Preliminary contacts have been made with Third World Academy of Sciences (TWAS) which foresees the organization of short courses on rough vacuum techniques and applications dedicated to technicians.
International Standards Organization (ISO)
IUVSTA has a formal liaison with the International Standards Organization (ISO). IUVSTA sends a representative to the TC/201 Surface Chemical Analysis committee and to the ISO TC/112 Vacuum Technology committee. These links are maintained via the Applied Surface Science and Vacuum Science and Technology divisions, respectively.
Structure and organization
Current primary officers
President: François Reniers
President Elect: Jay Hendricks
Past President: Anouk Galtayries
Secretary General: Christoph Eisenmenger-Sittner
Scientific Director: Katsuyuki Fukutani
Scientific Secretary: Anton Stampfl
Treasurer: Arnaud Delcorte
Recording Secretary (non-voting officer): Ana Gomes Silva
Current national councillors
List of presidents
The president under the early federation was:
1958-1962 — Prof. Dr. Emil Thomas
Past and present presidents of IUVSTA:
2022-2025 — Prof. François Reniers
2019-2022 — Prof. Anouk Galtayries
2016-2019 — Prof. Lars Montelius
2013-2016 — Prof. Mariano Anderle
2010-2013 — Prof. Jean Jacques Pireaux
2007-2010 — Dr. J.W. "Bill" Rogers, Jr.
2004-2007 — Prof. Ugo Valbusa
2001-2004 — Dr. M.-G. Barthes-Labrousse
1998-2001 — Prof. D. Phillip Woodruff
1995-1998 — Prof. John L. Robins
1992-1995 — Prof. Theodore E. Madey
1989-1992 — Prof. Jose L. de Segovia
1986-1989 — Prof. Dr. Heribert Jahrreiss
1983-1986 — Prof. Dr. Janos Antal
1980-1983 — Dr. James M. Lafferty
1977-1980 — Prof. Dr. Leslie Holland
1974-1977 — Dr. Albertus Venema
1971-1974 — Dr. Luther E. Pruess
1968-1971 — Prof. Dr. Kurt Diels
1965-1968 — Prof. Dr. Jean Debiesse
1962-1965 — Mr. Medard W. Welch
Honorary presidents
2021 — Peter Barna
1989 — Prof. Dr. E. Thomas
1983 — Prof. Dr. H.C. M. Auwärter
1977 — Mr. M. W. Welch
1962 — Prof. Dr. L. Dunoyer
1962 — Prof. Dr. M. Pirani
Honorary and founding members of the Union
Mr. A.S.D. Barrett
Mlle. M. Berthaud
Prof. D. Degras
Prof. K. Diels
Prof E. Thomas
Dr. A. Venema
Mr. M.W. Welch
References
External links
IUVSTA web site, iuvsta.org
Physics World, August 2016, live.iop-pp01.agh.sleek.net. IUVSTA reflections: an interview with Mariano Anderle, IUVSTA President 2013–2016.
Scientific organizations established in 1958
International scientific organizations
Vacuum
Plasma technology and applications
Nanotechnology
International organisations based in Vienna
Non-profit organisations based in Austria
Members of the International Science Council | International Union for Vacuum Science, Technique and Applications | [
"Physics",
"Materials_science",
"Engineering"
] | 1,356 | [
"Plasma physics",
"Plasma technology and applications",
"Vacuum",
"Materials science",
"Nanotechnology",
"Matter"
] |
53,391,053 | https://en.wikipedia.org/wiki/Shu%20Jie%20Lam | Shu Jie Lam is a Malaysian-Chinese research chemist specialising in biomolecular engineering. She is researching star polymers designed to attack superbugs as antibiotics.
References
Malaysian chemists
Nanotechnology
People from Batu Pahat
University of Melbourne alumni
Malaysian women chemists | Shu Jie Lam | [
"Materials_science",
"Engineering"
] | 56 | [
"Nanotechnology",
"Materials science"
] |
53,394,201 | https://en.wikipedia.org/wiki/Socionature | Socionature is the idea that nature and humanity are one and the same and can be thought of or referenced as a single concept. An example of this perspective would be the difference in experience two cultures might have with a drought. One culture might view drought as a form of natural variability in the environment and store surplus food for these times. Another culture might be engaged in for profit farming and see the drought as a damaging natural crisis. The first culture would be an example of a socionature viewpoint.
Definition and link to Marxist critique
In the Encyclopedia of Geography, Christopher Bear explained: "Socionature is a concept that is used to argue that society and nature are inseparable and should not be analyzed in abstraction from each other. The concept is rooted in – but operates as a critique of – Marxist approaches such as historical materialism and post-structural approaches such as actor-network theory. Drawing on the former, it emphasizes temporality and processes of becoming, while its engagement with post-structural thought leads to a focus on ontological hybridity. At the heart of research on socionatures is an interest in processes of their production, and especially on the labour that is involved and the uneven power relationships that emerge."
References
Further reading
For a Sociology of ‘Socionature’: Ontology and the Commodity-Based Approach
https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118786352.wbieg0212
Ecology | Socionature | [
"Biology"
] | 306 | [
"Ecology"
] |
53,394,249 | https://en.wikipedia.org/wiki/The%20Compatibility%20Gene | The Compatibility Gene is a 2013 book about the discovery of the mechanism of compatibility in the human immune system by the English professor of immunology, Daniel M. Davis. It describes the history of immunology with the discovery of the principle of graft rejection by Peter Medawar in the 1950s, and the way the body distinguishes self from not-self via natural killer cells. The compatibility mechanism contributes also to the success of pregnancy by helping the placenta to form, and may play a role in mate selection.
Context
Author
Daniel M. Davis has a doctorate in physics from Strathclyde University. He was professor of molecular immunology at Imperial College London and director of research at the University of Manchester's collaborative centre for inflammation research. He is recognised as an expert in the field, notably by the journal Nature Immunology, for his research on the immune synapse, membrane nanotubes, and natural killer cells.
Subject
The book's context is the history of immunology, from the earliest questioning about why people become ill and why some may recover, to the 19th-century pioneers who demonstrated that bacteria caused many diseases, and on to the 20th century, when, slowly at first but at an accelerating pace, biologists built an understanding of the genetic basis of variation and natural selection and, alongside that, the foundations of scientific medicine, including immunology. As Steven Pinker observes, few stories of scientific endeavour have never been told: "This is one of them. Ostensibly about a set of genes that we all have and need, this book is really about the men and women who discovered them and worked out what they do. It’s about brilliant insights and lucky guesses; the glory of being proved right and the paralysing fear of getting it wrong; the passion for cures and the lust for Nobels. It’s a search for the essence of scientific greatness by a scientist who is headed that way himself."
Book
Contents
The book is in three parts. In part 1, Davis describes the history of research into biological compatibility, starting with the story of Peter Medawar's life and discoveries in graft rejection. He tours the history of medicine from Hippocrates to the 19th-century pioneers Louis Pasteur and Robert Koch, and on to Frank Macfarlane Burnet's concept of the immune system's ability to discriminate self from non-self. He explains how advances in the understanding of immunity, from Karl Landsteiner's discovery of the ABO blood group system onwards, permit organ transplants to take place. The compatibility genes are identified as three class I human leucocyte antigen (HLA) genes (A, B, and C) and three class II genes (DP, DQ, and DR), each with numerous versions (alleles).
Lastly, Davis tells the human side of the story of the discovery of killer T-cells. Alan Townsend found that killer T-cells destroyed cells that carried an HLA protein and small fragments of viral protein. Those small peptides were all the evidence the T-cells needed to decide that a cell was diseased.
In part 2, Davis describes how genetic differences between people, like having the allele for Huntington's disease, can be small but decisive. An HLA protein variant, B*27, is associated with a serious inherited disease, ankylosing spondylitis, but also protects against AIDS. Other variants protect against other diseases. Perhaps the polymorphisms in HLA, the many forms each HLA gene can take, are maintained by natural selection for competing factors. He explains that variations in HLA genes may predict which drugs will be beneficial for individuals, implying a new era of personalised medicine. He tells the story of how Klas Kärre came up with the concept of the missing self, a sign (by the absence of an HLA protein) that a cell was diseased and should be killed by a natural killer cell.
In part 3, Davis describes the famous experiment that called for female partners to sniff boxes containing their male partners' T-shirts, which they had worn for two days. There was a slight association between finding the smell sexy and the two partners having different compatibility genes. It could possibly indicate sexual selection for outbreeding, at least in the HLA system. He explains what is known of the role of compatibility genes in the brain. He tells the story of how the variable genes of the immune system affect the success of pregnancy. Far from the baby's HLA proteins somehow being tolerated by the mother (unlike anyone else's), the strong reaction against the baby's antigens helps to drive proper development of the placenta, in particular the growth of chorionic villi that ensure efficient transfer (for instance of oxygen) between mother and baby. Davis concludes the book by telling a story of genetic compatibility between his wife and himself. He finds himself wondering whether all women should have found him exceptionally attractive, at least when he was younger. He observes that on the contrary there is no hierarchy in HLA: some variants are good in one situation, and bad in another.
Publication
The book was first published in the UK by Allen Lane (hardback) in 2013. Paperback editions were brought out by Penguin Books in Britain, and by Oxford University Press in America, both in 2014. An Italian translation was published by Bollati Boringhieri in Turin in 2016.
Reception
The Compatibility Gene has been well received by critics and scientists.
Mark Viney, reviewing the book in the New Scientist, comments that Davis covers human compatibility genes well, but that he should have gone into more detail on the different systems in other organisms.
The science writer Peter Forbes, writing in The Guardian, notes that when Watson and Crick determined the structure of DNA in 1953, it seemed that medicine would instantly profit: but half a century went by before the genome was decoded, and 98% of it seemed at first glance to be junk DNA. Now its complexity is starting to be understood, one function at a time. One specialised area is the immune system, with its own ultra-variable set of proteins. They are not only complicated, but have many functions, in immunity, sexual attraction (perhaps), pregnancy, and brain function. Unsurprisingly, Forbes observes, this makes immunology, and its popularisation, "extremely difficult". Davis "sugars the pill" by choosing to go into the researchers' lives and struggles in great detail. Forbes notes that Davis does not mention that most of the genetic differences between humans and chimpanzees are to do with the immune system and brain development: perhaps (he suggests) these are connected.
Nicola Davis, reviewing the book in The Times, writes that Davis "weaves a warm biographical thread through his tale of scientific discovery, revealing the drive and passion of those in the vanguard of research." The tale of the pioneers such as Medawar is "fairly familiar but Davis's readable narrative allows them to be seen afresh". She finds the account more challenging as it approaches more recent discoveries, but with "plenty of rewarding moments".
Emily Banham, reviewing the book for Nature, notes that compatibility genes lie at the heart of our immune systems, playing a part in the success of skin grafts, pregnancy, and more.
The biologist Rebecca Nesbit, reviewing The Compatibility Gene for The Biologist, writes that Davis shares many stories of dedicated scientists, brought together by "a small cluster of 'compatibility genes' which play a large role in how we react to disease, and are central to how our immune systems work." She notes that the book is as much about the people as the discoveries, but these are made worthwhile by the medical advances they keep producing, for example with possibilities for personalised medicine, as when people with one particular compatibility gene react adversely to an AIDS drug. She observes that all the same, he ends with the scientist's favourite refrain: "more research needed".
References
External links
Website
2013 non-fiction books
Popular science books
Immunology
Allen Lane (imprint) books | The Compatibility Gene | [
"Biology"
] | 1,666 | [
"Immunology"
] |
53,394,437 | https://en.wikipedia.org/wiki/Bayesian%20program%20synthesis | In programming languages and machine learning, Bayesian program synthesis (BPS) is a program synthesis technique where Bayesian probabilistic programs automatically construct new Bayesian probabilistic programs. This approach stands in contrast to routine practice in probabilistic programming where human developers manually write new probabilistic programs.
The framework
Bayesian program synthesis (BPS) has been described as a framework related to, and making use of, probabilistic programming. In BPS, developers write probabilistic programs that are themselves priors over a space of probabilistic programs; new programs can then be synthesized automatically via probabilistic inference over this prior, achieved by the composition of modular component programs.
The modularity in BPS allows inference to work on and test smaller probabilistic programs before being integrated into a larger model.
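The following toy sketch in Python is a hypothetical illustration of this idea, not taken from any published BPS system: a prior over small probabilistic programs is defined over modular components, and a new program is "synthesized" by running inference (here, naive best-of-N sampling) against observed data. The component set, function names, and inference scheme are all illustrative assumptions.

```python
import math
import random

# Two modular component "programs": each models observations with a
# different noise family, parameterized by a location mu.
def gaussian_loglik(mu, data):
    # log N(x | mu, 1) summed over the data
    return sum(-0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi) for x in data)

def uniform_loglik(mu, data):
    # log Uniform(x | mu - 2, mu + 2) summed over the data
    if any(abs(x - mu) > 2.0 for x in data):
        return -math.inf
    return -len(data) * math.log(4.0)

COMPONENTS = {"gaussian": gaussian_loglik, "uniform": uniform_loglik}

def sample_program():
    """The prior over programs: choose a component and a parameter at random."""
    return random.choice(list(COMPONENTS)), random.uniform(-5.0, 5.0)

# Synthesis as inference: draw candidate programs from the prior and keep the
# one that best explains the observations. Real systems refine a full
# distribution over programs rather than returning a single best candidate.
data = [1.9, 2.1, 2.0, 1.8, 2.2]
name, mu = max((sample_program() for _ in range(5000)),
               key=lambda prog: COMPONENTS[prog[0]](prog[1], data))
print(f"synthesized program: {name}(mu={mu:.2f})")
```

Because each component can be scored against data on its own, inference can validate small programs before they are composed into a larger model, which is the role modularity plays in the framework.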
This framework can be contrasted with the family of automated program synthesis fields, which include programming by example and programming by demonstration. The goal in such fields is to find the best program that satisfies some constraint. In traditional program synthesis, for instance, verification of logical constraints reduces the state space of possible programs, allowing a more efficient search for an optimal program. Bayesian program synthesis differs both in that the constraints are probabilistic and in that the output is itself a distribution over programs that can be further refined.
Additionally, Bayesian program synthesis can be contrasted with the work on Bayesian program learning, where probabilistic program components are hand-written, pre-trained on data, and then hand-assembled in order to recognize handwritten characters.
See also
Probabilistic programming language
References
External links
Commentary on BPS by David Garrity: Artificial Intelligence to see Significant Progress in 2017
Probability
Computer programming
Probability interpretations
Philosophy of mathematics
Philosophy of science
User interfaces
Programming paradigms | Bayesian program synthesis | [
"Mathematics",
"Technology",
"Engineering"
] | 366 | [
"User interfaces",
"Computer programming",
"Probability interpretations",
"Software engineering",
"Interfaces",
"nan",
"Computers"
] |
57,941,653 | https://en.wikipedia.org/wiki/Manolis%20Kellis | Manolis Kellis (; born 1977) is a professor of Computer Science and Computational Biology at the Massachusetts Institute of Technology (MIT) and a member of the Broad Institute of MIT and Harvard. He is the head of the Computational Biology Group at MIT and is a Principal Investigator in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT.
Kellis is known for his contributions to genomics, human genetics, epigenomics, gene regulation, genome evolution, disease mechanism, and single-cell genomics. He co-led the NIH Roadmap Epigenomics Project effort to create a comprehensive map of the human epigenome, the comparative analysis of 29 mammals to create a comprehensive map of conserved elements in the human genome, the ENCODE, GENCODE, and modENCODE projects to characterize the genes, non-coding elements, and circuits of the human genome and model organisms. A major focus of his work is understanding the effects of genetic variations on human disease, with contributions to obesity, diabetes, Alzheimer's disease, schizophrenia, and cancer.
Education and early career
Kellis was born in Greece, moved with his family to France when he was 12, and came to the U.S. in 1993. He obtained his PhD from MIT, where he worked with Eric Lander, founding director of the Broad Institute, and Bonnie Berger, professor at MIT and received the Sprowls award for the best doctorate thesis in Computer Science, and the first Paris Kanellakis graduate fellowship. Prior to computational biology, he worked on artificial intelligence, sketch and image recognition, robotics, and computational geometry, at MIT and at the Xerox Palo Alto Research Center.
Research and career
As of July 2018, Manolis Kellis had authored 187 journal publications, cited 68,380 times. He has helped direct several large-scale genomics projects, including the Roadmap Epigenomics project, the Encyclopedia of DNA Elements (ENCODE) project, and the Genotype-Tissue Expression (GTEx) project.
Comparative genomics
Kellis started comparing the genomes of yeast species as an MIT graduate student. As part of this work, which was published in Nature in 2003, he developed computational methods to pinpoint patterns of similarity and difference between closely related genomes. The goal was to develop methods for understanding genomes, with a view to applying them to the human genome.
He turned from yeast to flies and ultimately to mammals, comparing multiple species to explore genes, their control elements, and their deregulation in human disease. Kellis led several comparative genomics projects in human, mammals, flies, and yeast.
Epigenomics
Kellis co-led the NIH-funded project to catalogue the human epigenome. During an interview with MIT Technology Review, he said, “If the genome is the book of life, the epigenome is the complete set of annotations and bookmarks.” His lab now uses this map to further the understanding of fundamental processes and disease in humans.
Obesity
Kellis and colleagues used epigenomic data to investigate the mechanistic basis of the strongest genetic association with obesity, published in the New England Journal of Medicine. They showed that this mechanism operates in the fat cells of both humans and mice and detailed how changes within the relevant genomic regions cause a shift from dissipating energy as heat (thermogenesis) to storing energy as fat. A full understanding of the phenomenon may lead to treatments for people whose 'slow metabolism' cause them to gain excessive weight.
Alzheimer's disease
Kellis, Li-Huei Tsai, and others at MIT used epigenomic markings in human and mouse brains to study the mechanisms leading to Alzheimer’s disease, published in Nature in 2015. They showed that immune cell activation and inflammation, which have long been associated with the condition, are not simply the result of neurodegeneration, as some researchers have argued. Rather, in mice engineered to develop Alzheimer’s-like symptoms, they found that immune cells start to change even before neural changes are observed.
Single-cell Genomics
The Kellis Lab has profiled a large number of human post-mortem brains at single-cell resolution, studying inter-individual variation associated with genetic differences and disease phenotypes, including the first single-cell transcriptomic analysis of Alzheimer's disease (Nature, 2019).
Genotype-Tissue Expression (GTEx)
Kellis is a member of the Genotype-Tissue Expression (GTEx) project that seeks to elucidate the basis of disease predisposition. It is an NIH-sponsored project that seeks to characterize genetic variation in human tissues with roles in diabetes, heart disease, and cancer.
Kellis is also a Principal Investigator of the enhancing GTEx (eGTEx) consortium, studying epigenomic changes of regulatory elements and epitranscriptomic changes of RNA transcripts across multiple human tissues.
Disease Mechanism
To date, his lab has developed specific domain expertise in obesity, diabetes, Alzheimer's disease, schizophrenia, heart disease, ALS and FTLD, and cancer.
Teaching
In addition to his research, Kellis for several years co-taught MIT's required undergraduate algorithms courses, 6.006 (Introduction to Algorithms) and 6.046 (Design and Analysis of Algorithms), with Profs. Ron Rivest, Erik Demaine, Piotr Indyk, Srinivas Devadas, and others.
He also teaches a computational biology course at MIT, titled "Computational Biology: Genomes, Networks, Evolution". The course (6.047/6.878) is geared towards advanced undergraduate and early graduate students seeking to learn the algorithmic and machine-learning foundations of computational biology while being exposed to current frontiers of research, so as to become active practitioners of the field. He started 6.881 (Computational Personal Genomics: Making Sense of Complete Genomes) and 6.883/9.S99 (Neurogenomics: Computational Molecular Neuroscience); the personal genomics course explores the computational challenges of interpreting how sequence differences between individuals lead to phenotypic differences such as gene expression, disease predisposition, or response to treatment.
Awards and honors
Kellis received the US Presidential Early Career Award for Scientists and Engineers (PECASE), the National Science Foundation CAREER award, a Sloan Research Fellowship, the Gregor Mendel Medal for Outstanding Achievements in Science from the Mendel Lectures committee, the Athens Information Technology (AIT) Niki Award for Science and Engineering, the Ruth and Joel Spira Teaching award, and the George M. Sprowls Award for the best Ph.D. thesis in Computer Science at MIT. He was named one of Technology Review's Top 35 Innovators Under 35 for his research in comparative genomics.
Media appearances
Decoding A Genomic Revolution, TEDx Cambridge, 2013. "MIT Computational Biologist Manolis Kellis gives us a glimpse of the doctor’s office visit of the future, and uses his own genetic mutations to show us how a revolution in genomics is unlocking treatments that could transform medicine as we know it."
Regulatory Genomics and Epigenomics of Complex Disease, Wellcome Trust, 2014. "Manolis Kellis, Massachusetts Institute of Technology, USA, gives one of the keynote lectures at Epigenomics of Common Diseases (28-31 October 2014), organised by the Wellcome Genome Campus Advanced Courses and Scientific Conferences team at Churchill College, Cambridge."
Manolis Kellis Reddit Ask Me Anything (AMA), Reddit Science AMA Series: "I’m Manolis Kellis, a professor of computer science at MIT studying the human genome to learn about what causes obesity, Alzheimer’s, cancer and other conditions. AMA about comp-bio and epigenomics, and how they impact human health".
References
Greek expatriates in France
Genetic epidemiologists
Living people
1977 births
Greek emigrants to the United States
Greek computer scientists
Human Genome Project scientists
Massachusetts Institute of Technology faculty
Massachusetts Institute of Technology alumni
Biotechnologists
21st-century American biologists
Scientists from Athens
Recipients of the Presidential Early Career Award for Scientists and Engineers | Manolis Kellis | [
"Engineering",
"Biology"
] | 1,662 | [
"Human Genome Project scientists",
"Biotechnologists"
] |