id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
594,336 | https://en.wikipedia.org/wiki/Cell%20proliferation | Cell proliferation is the process by which a cell grows and divides to produce two daughter cells. Cell proliferation leads to an exponential increase in cell number and is therefore a rapid mechanism of tissue growth. Cell proliferation requires both cell growth and cell division to occur at the same time, such that the average size of cells remains constant in the population. Cell division can occur without cell growth, producing many progressively smaller cells (as in cleavage of the zygote), while cell growth can occur without cell division to produce a single larger cell (as in growth of neurons). Thus, cell proliferation is not synonymous with either cell growth or cell division, despite these terms sometimes being used interchangeably.
Stem cells undergo cell proliferation to produce proliferating "transit amplifying" daughter cells that later differentiate to construct tissues during normal development and tissue growth, during tissue regeneration after damage, or in cancer.
The total number of cells in a population is determined by the rate of cell proliferation minus the rate of cell death.
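As a brief illustration of this balance (a textbook idealization, not a claim taken from the article), the net change in cell number can be written as a simple rate equation, where p and d are the per-cell proliferation and death rates:

```latex
% N(t): number of cells at time t; p, d: per-cell proliferation and death rates
\frac{dN}{dt} = (p - d)\,N
\qquad\Longrightarrow\qquad
N(t) = N_0\, e^{(p-d)t}
```

If p > d the population grows exponentially, consistent with the statement above; if p = d the total number of cells stays constant even though proliferation continues.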
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells. Cell proliferation occurs by combining cell growth with regular "G1-S-G2-M" cell cycles to produce many diploid cell progeny.
In single-celled organisms, cell proliferation is largely responsive to the availability of nutrients in the environment (or laboratory growth medium).
In multicellular organisms, the process of cell proliferation is tightly controlled by gene regulatory networks encoded in the genome and executed mainly by transcription factors, including those regulated by signal transduction pathways elicited by growth factors during cell-cell communication in development. It has also recently been demonstrated that cellular bicarbonate metabolism, which is responsible for cell proliferation, can be regulated by mTORC1 signaling. In addition, intake of nutrients in animals can induce circulating hormones of the Insulin/IGF-1 family, which are also considered growth factors and function to promote cell proliferation in cells throughout the body that are capable of doing so.
Uncontrolled cell proliferation, leading to an increased proliferation rate, or a failure of cells to arrest their proliferation at the normal time, is a cause of cancer.
References
Cellular processes | Cell proliferation | [
"Biology"
] | 510 | [
"Cellular processes"
] |
594,682 | https://en.wikipedia.org/wiki/Compactly%20generated%20group | In mathematics, a compactly generated (topological) group is a topological group G which is algebraically generated by one of its compact subsets. This should not be confused with the unrelated notion (widely used in algebraic topology) of a compactly generated space -- one whose topology is generated (in a suitable sense) by its compact subspaces.
Definition
A topological group G is said to be compactly generated if there exists a compact subset K of G that algebraically generates G, i.e. G = ⟨K⟩; equivalently, G is the union of the sets (K ∪ K^−1)^n over n = 1, 2, 3, ….
So if K is symmetric, i.e. K = K^−1, then G is the union of the sets K^n over n = 1, 2, 3, ….
Locally compact case
This property is interesting in the case of locally compact topological groups, since locally compact compactly generated topological groups can be approximated by locally compact, separable metric factor groups of G. More precisely, for a sequence (Un) of open identity neighborhoods, there exists a normal subgroup N contained in the intersection of that sequence, such that the factor group G/N is locally compact, metric and separable (the Kakutani-Kodaira-Montgomery-Zippin theorem).
References
Topological groups | Compactly generated group | [
"Physics",
"Mathematics"
] | 210 | [
"Space (mathematics)",
"Topological spaces",
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Topological groups",
"Spacetime"
] |
594,874 | https://en.wikipedia.org/wiki/Autoionization | Autoionization is a process by which an atom or a molecule in an excited state spontaneously emits one of the outer-shell electrons, thus going from a state with charge to a state with charge , for example from an electrically neutral state to a singly ionized state.
Autoionizing states are usually short-lived, and thus can be described as Fano resonances rather than normal bound states. They can be observed as variations in the ionization cross sections of atoms and molecules, by photoionization, electron ionization and other methods.
Examples
As examples, several Fano resonances in the extreme ultraviolet photoionization spectrum of neon are attributed to autoionizing states. Some are due to one-electron excitations, such as a series of three strong similarly shaped peaks at energies of 45.546, 47.121 and 47.692 eV which are interpreted as 1s2 2s1 2p6 np (1P) states for n = 3, 4 and 5. These states of neutral neon lie beyond the first ionization energy because it takes more energy to excite a 2s electron than to remove a 2p electron. When autoionization occurs, the np → 2s de-excitation provides the energy needed to remove one 2p electron and form the Ne+ ground state.
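A minimal energy-balance sketch of why these states can autoionize; the neon ionization energy used below is an assumed textbook value added for illustration, not a figure from the text:

```latex
% Energy balance for autoionization: the resonance energy is shared between
% ionizing the atom and the kinetic energy of the ejected electron.
E_{\text{resonance}} = \mathrm{IE} + E_{\text{kin}}(e^-)
% For the 45.546 eV neon resonance, assuming IE(Ne) \approx 21.56\ \mathrm{eV}:
E_{\text{kin}}(e^-) \approx 45.546\,\mathrm{eV} - 21.56\,\mathrm{eV} \approx 24.0\,\mathrm{eV}
```

Because the resonance lies above the ionization threshold, de-excitation can pay for the removal of a 2p electron, with the remainder carried off by the ejected electron.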
Other resonances are attributed to two-electron excitations. The same neon photoionization spectrum considered above contains a fourth strong resonance in the same region at 44.979 eV but with a very different shape, which is interpreted as the 1s2 2s2 2p4 3s 3p (1P) state. For autoionization, the 3s → 2p transition provides the energy to remove the 3p electron.
Electron ionization allows the observation of some states which cannot be excited by photons due to selection rules. Again in neon, for example, the excitation of triplet states is forbidden by the spin selection rule ΔS = 0, but the 1s2 2s2 2p4 3s 3p (3P) state has been observed by electron ionization at 42.04 eV. Ion impact by high-energy H+, He+ and Ne+ ions has also been used.
If a core electron is missing, a positive ion can autoionize further and lose a second electron in the Auger effect. In neon, X-ray excitation can remove a 1s electron, producing an excited Ne+ ion with configuration 1s1 2s2 2p6. In the subsequent Auger process a 2s → 1s transition and simultaneous emission of a second electron from 2p leads to the Ne2+ 1s2 2s1 2p5 ionic state.
Molecules, in addition, can have vibrationally autoionizing Rydberg states, in which the small amount of energy necessary to ionize a Rydberg state is provided by vibrational excitation.
Autodetachment
When the excited state of the atom or molecule consists of a compound state of a neutral particle and a resonantly attached electron, autoionization is referred to as autodetachment. In this case the compound state begins with a net negative charge before the autoionization process and ends with a neutral charge. The final state will often be vibrationally or rotationally excited as a result of excess energy from the resonant attachment process.
References
Atomic physics
Molecular physics
Quantum chemistry | Autoionization | [
"Physics",
"Chemistry"
] | 703 | [
"Atomic, molecular, and optical physics stubs",
"Quantum chemistry stubs",
"Quantum chemistry",
"Molecular physics",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Atomic physics",
"Molecular physics stubs",
"Physical chemistry stubs"
] |
595,145 | https://en.wikipedia.org/wiki/Dry%20distillation | Dry distillation is the heating of solid materials to produce gaseous products (which may condense into liquids or solids). The method may involve pyrolysis or thermolysis, or it may not (for instance, a simple mixture of ice and glass could be separated without breaking any chemical bonds, but organic matter contains a greater diversity of molecules, some of which are likely to break).
If there are no chemical changes, just phase changes, it resembles classical distillation, although it will generally need higher temperatures. Dry distillation in which chemical changes occur is a type of destructive distillation or cracking.
Uses
The method has been used to obtain liquid fuels from coal and wood. It can also be used to break down mineral salts such as sulfates (SO42−) through thermolysis, in this case producing sulfur dioxide (SO2) or sulfur trioxide (SO3) gas which can be dissolved in water to obtain sulfuric acid. By this method sulfuric acid was first identified and artificially produced. When substances of vegetable origin, e.g. coal, oil shale, peat or wood, are heated in the absence of air (dry distillation), they decompose into gas, liquid products and coke/charcoal. The yield and chemical nature of the decomposition products depend on the nature of the raw material and the conditions under which the dry distillation is done. Decomposition within a temperature range of 450 °C to about 600 °C is called carbonization or low-temperature degassing. At temperatures above 900 °C, the process is called coking or high-temperature degassing. If coal is gasified to make coal gas or carbonized to make coke, then coal tar is among the by-products.
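A hedged sketch of the sulfate chemistry described above, using iron(II) sulfate (green vitriol) as the example salt; the choice of salt is an assumption for illustration, not taken from the text:

```latex
% Thermolysis of iron(II) sulfate during dry distillation (assumed example salt)
2\,\mathrm{FeSO_4} \xrightarrow{\Delta} \mathrm{Fe_2O_3} + \mathrm{SO_2} + \mathrm{SO_3}
% Absorbing the evolved sulfur trioxide in water gives sulfuric acid
\mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}
```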
Wood
When wood is heated above 270 °C it begins to carbonize. If air is absent, the final product (since there is no oxygen present to react with the wood) is charcoal. If air (which contains oxygen) is present, the wood will catch fire and burn when it reaches a temperature of about 400–500 °C, and the final product is wood ash. If wood is heated away from air, first the moisture is driven off. Until this is complete, the wood temperature remains at about 100–110 °C. When the wood is dry its temperature rises, and at about 270 °C it begins to spontaneously decompose. This is the well-known exothermic reaction which takes place in charcoal burning. At this stage evolution of the by-products of wood carbonization starts. These substances are given off gradually as the temperature rises, and at about 450 °C the evolution is complete. The solid residue, charcoal, is mainly carbon (about 70%) and small amounts of tarry substances which can be driven off or decomposed completely only by raising the temperature to above about 600 °C.
In the common practice of charcoal burning using internal heating of the charged wood by burning a part of it, all the by-product vapors and gases escape into the atmosphere as smoke. The by-products can be recovered by passing the off-gases through a series of water-cooled condensers to yield so-called wood vinegar (pyroligneous acid); the non-condensible wood gas passes on through the condensers and may be burned to provide heat. The wood gas is only usable as fuel, and consists typically of 17% methane, 2% hydrogen, 23% carbon monoxide, 38% carbon dioxide, 2% oxygen and 18% nitrogen. It has a calorific value of about 10.8 MJ/m3 (290 BTU/cu.ft.), i.e. about one third the value of natural gas. When deciduous tree woods are subjected to distillation, the products are methanol (wood alcohol) and charcoal. The distillation of pine wood causes pine tar and pitch to drip away from the wood and leave behind charcoal. Birch tar from birch bark is a particularly fine tar, known as "Russian oil", suitable for leather protection. The by-products of wood tar are turpentine and charcoal.
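A rough cross-check of the quoted calorific value from the stated composition; the per-gas lower heating values below are approximate textbook figures assumed here, not values from the article:

```python
# Approximate lower heating values of the combustible components, MJ per m^3 (assumed values)
LHV = {"CH4": 35.8, "H2": 10.8, "CO": 12.6}

# Volume fractions of the wood gas as given in the text
composition = {"CH4": 0.17, "H2": 0.02, "CO": 0.23, "CO2": 0.38, "O2": 0.02, "N2": 0.18}

# Only the combustible fractions contribute; CO2, O2 and N2 are inert diluents here
calorific_value = sum(frac * LHV.get(gas, 0.0) for gas, frac in composition.items())
print(f"Estimated calorific value: {calorific_value:.1f} MJ/m^3")
# roughly 9 MJ/m^3, the same order of magnitude as the ~10.8 MJ/m^3 quoted above
```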
Tar kilns are dry distillation ovens, historically used in Scandinavia for producing tar from wood. They were built close to the forest, from limestone or from more primitive holes in the ground. The bottom is sloped into an outlet hole to allow the tar to pour out. The wood is split into finger-sized pieces, stacked densely, and finally covered tightly with dirt and moss. If oxygen can enter, the wood might catch fire, and the production would be ruined. On top of this, a fire is stacked and lit. After a few hours, the tar starts to pour out and continues to do so for a few days.
See also
Coal oil
Destructive distillation
Gasworks
Pitch (resin)
Syngas
Tar
References
Distillation
Pyrolysis | Dry distillation | [
"Chemistry"
] | 997 | [
"Separation processes",
"Pyrolysis",
"Oil shale technology",
"Organic reactions",
"Distillation",
"Synthetic fuel technologies"
] |
595,428 | https://en.wikipedia.org/wiki/File%20verification | File verification is the process of using an algorithm for verifying the integrity of a computer file, usually by checksum. This can be done by comparing two files bit-by-bit, but requires two copies of the same file, and may miss systematic corruptions which might occur to both files. A more popular approach is to generate a hash of the copied file and comparing that to the hash of the original file.
Integrity verification
File integrity can be compromised, usually referred to as the file becoming corrupted. A file can become corrupted in a variety of ways: faulty storage media, errors in transmission, write errors during copying or moving, software bugs, and so on.
Hash-based verification ensures that a file has not been corrupted by comparing the file's hash value to a previously calculated value. If these values match, the file is presumed to be unmodified. Due to the nature of hash functions, hash collisions may result in false positives, but the likelihood of collisions is often negligible with random corruption.
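A minimal sketch of hash-based integrity checking in Python, assuming SHA-256; the file name and expected digest are hypothetical:

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256", chunk_size: int = 65536) -> str:
    """Return the hex digest of a file, reading it in chunks to handle large files."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the copy against a previously recorded digest of the original (hypothetical values)
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
if file_digest("backup/data.bin") == expected:
    print("File is presumed intact")
else:
    print("File has been corrupted or modified")
```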
Authenticity verification
It is often desirable to verify that a file hasn't been modified in transmission or storage by untrusted parties, for example, to include malicious code such as viruses or backdoors. To verify authenticity, a classical hash function is not enough, as it is not designed to be collision resistant; it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this attack is called a second-preimage attack.
For this purpose, cryptographic hash functions are often employed. As long as the hash sums cannot be tampered with (for example, if they are communicated over a secure channel), the files can be presumed to be intact. Alternatively, digital signatures can be employed to assure tamper resistance.
File formats
A checksum file is a small file that contains the checksums of other files.
There are a few well-known checksum file formats.
Several utilities, such as md5deep, can use such checksum files to automatically verify an entire directory of files in one operation.
The particular hash algorithm used is often indicated by the file extension of the checksum file.
The ".sha1" file extension indicates a checksum file containing 160-bit SHA-1 hashes in sha1sum format.
The ".md5" file extension, or a file named "MD5SUMS", indicates a checksum file containing 128-bit MD5 hashes in md5sum format.
The ".sfv" file extension indicates a checksum file containing 32-bit CRC32 checksums in simple file verification format.
The "crc.list" file indicates a checksum file containing 32-bit CRC checksums in brik format.
As of 2012, best practice recommendations are to use SHA-2 or SHA-3 to generate new file integrity digests, and to accept MD5 and SHA-1 digests for backward compatibility if stronger digests are not available.
The theoretically weaker SHA-1, the weaker MD5, or much weaker CRC were previously commonly used for file integrity checks.
CRC checksums cannot be used to verify the authenticity of files, as CRC32 is not a collision-resistant hash function -- even if the hash sum file is not tampered with, it is computationally trivial for an attacker to produce a different file with the same CRC digest as the original file, meaning that a malicious change in the file is not detected by a CRC comparison.
See also
Checksum
Data deduplication
References
Computer files
Error detection and correction | File verification | [
"Engineering"
] | 741 | [
"Error detection and correction",
"Reliability engineering"
] |
1,556,672 | https://en.wikipedia.org/wiki/Phosphatidic%20acid | Phosphatidic acids are anionic phospholipids important to cell signaling and direct activation of lipid-gated ion channels. Hydrolysis of phosphatidic acid gives rise to one molecule each of glycerol and phosphoric acid and two molecules of fatty acids. They constitute about 0.25% of phospholipids in the bilayer.
Structure
Phosphatidic acid consists of a glycerol backbone, with, in general, a saturated fatty acid bonded to carbon-1, an unsaturated fatty acid bonded to carbon-2, and a phosphate group bonded to carbon-3.
Formation and degradation
Besides de novo synthesis, PA can be formed in three ways:
By phospholipase D (PLD), via the hydrolysis of the P-O bond of phosphatidylcholine (PC) to produce PA and choline.
By the phosphorylation of diacylglycerol (DAG) by DAG kinase (DAGK).
By the acylation of lysophosphatidic acid by lysoPA-acyltransferase (LPAAT); this is the most common pathway.
In de novo synthesis, PA is produced via the glycerol 3-phosphate pathway, in which glycerol 3-phosphate is acylated first to lysophosphatidic acid and then to PA.
In addition, PA can be converted into DAG by lipid phosphate phosphohydrolases (LPPs) or into lyso-PA by phospholipase A (PLA).
Roles in the cell
The role of PA in the cell can be divided into three categories:
PA is the precursor for the biosynthesis of many other lipids.
The physical properties of PA influence membrane curvature.
PA acts as a signaling lipid, recruiting cytosolic proteins to appropriate membranes (e.g., sphingosine kinase 1).
PA plays a very important role in phototransduction in Drosophila.
PA is a lipid ligand that gates ion channels. See also lipid-gated ion channels.
The first three roles are not mutually exclusive. For example, PA may be involved in vesicle formation by promoting membrane curvature and by recruiting the proteins to carry out the much more energetically unfavourable task of neck formation and pinching.
Roles in biosynthesis
PA is a vital cell lipid that acts as a biosynthetic precursor for the formation (directly or indirectly) of all acylglycerol lipids in the cell.
In mammalian and yeast cells, two different pathways are known for the de novo synthesis of PA, the glycerol 3-phosphate pathway or the dihydroxyacetone phosphate pathway. In bacteria, only the former pathway is present, and mutations that block this pathway are lethal, demonstrating the importance of PA. In mammalian and yeast cells, where the enzymes in these pathways are redundant, mutation of any one enzyme is not lethal. However, it is worth noting that in vitro, the various acyltransferases exhibit different substrate specificities with respect to the acyl-CoAs that are incorporated into PA. Different acyltransferases also have different intracellular distributions, such as the endoplasmic reticulum (ER), the mitochondria or peroxisomes, and local concentrations of activated fatty acids. This suggests that the various acyltransferases present in mammalian and yeast cells may be responsible for producing different pools of PA.
The conversion of PA into diacylglycerol (DAG) by LPPs is the commitment step for the production of phosphatidylcholine (PC), phosphatidylethanolamine (PE) and phosphatidylserine (PS). In addition, DAG is also converted into CDP-DAG, which is a precursor for phosphatidylglycerol (PG), phosphatidylinositol (PI) and phosphoinositides (PIP, PIP2, PIP3).
PA concentrations are maintained at extremely low levels in the cell by the activity of potent LPPs. These convert PA into DAG very rapidly and, because DAG is the precursor for so many other lipids, it too is soon metabolised into other membrane lipids. This means that any upregulation in PA production can be matched, over time, with a corresponding upregulation in LPPs and in DAG metabolising enzymes.
PA is, therefore, essential for lipid synthesis and cell survival, yet, under normal conditions, is maintained at very low levels in the cell.
Biophysical properties
PA is a unique phospholipid in that it has a small highly charged head group that is very close to the glycerol backbone. PA is known to play roles in both vesicle fission and fusion, and these roles may relate to the biophysical properties of PA.
At sites of membrane budding or fusion, the membrane becomes or is highly curved. A major event in the budding of vesicles, such as transport carriers from the Golgi, is the creation and subsequent narrowing of the membrane neck. Studies have suggested that this process may be lipid-driven, and have postulated a central role for DAG due to its, likewise, unique molecular shape. The presence of two acyl chains but no headgroup results in a large negative curvature in membranes.
The LPAAT BARS-50 has also been implicated in budding from the Golgi. This suggests that the conversion of lysoPA into PA might affect membrane curvature. LPAAT activity doubles the number of acyl chains, greatly increasing the cross-sectional area of the lipid that lies ‘within’ the membrane while the surface headgroup remains unchanged. This can result in a more negative membrane curvature. Researchers from Utrecht University have looked at the effect of lysoPA versus PA on membrane curvature by measuring the effect these have on the transition temperature of PE from lipid bilayers to nonlamellar phases using 31P-NMR. The curvature induced by these lipids was shown to be dependent not only on the structure of lysoPA versus PA but also on dynamic properties, such as the hydration of head groups and inter- and intramolecular interactions. For instance, Ca2+ may interact with two PAs to form a neutral but highly curved complex. The neutralisation of the otherwise repulsive charges of the headgroups and the absence of any steric hindrance enables strong intermolecular interactions between the acyl chains, resulting in PA-rich microdomains. Thus in vitro, physiological changes in pH, temperature, and cation concentrations have strong effects on the membrane curvature induced by PA and lysoPA.
The interconversion of lysoPA, PA, and DAG – and changes in pH and cation concentration – can cause membrane bending and destabilisation, playing a direct role in membrane fission simply by virtue of their biophysical properties. However, though PA and lysoPA have been shown to affect membrane curvature in vitro; their role in vivo is unclear.
The roles of lysoPA, PA, and DAG in promoting membrane curvature do not preclude a role in recruiting proteins to the membrane. For instance, the Ca2+ requirement for the fusion of complex liposomes is not greatly affected by the addition of annexin I, though it is reduced by PLD. However, with annexin I and PLD, the extent of fusion is greatly enhanced, and the Ca2+ requirement is reduced almost 1000-fold to near physiological levels.
Thus the metabolic, biophysical, recruitment, and signaling roles of PA may be interrelated.
Role in signaling
PA is kept at a low concentration in the bulk of the membrane and is produced in transient, localized bursts at high concentration in order to signal. For example, TREK-1 channels are activated by local association with PLD and production of PA. The dissociation constant of PA for TREK-1 is approximately 10 micromolar. The relatively weak binding combined with a low concentration of PA in the membrane allows the channel to turn off. The local high concentration required for activation suggests at least some restrictions in local lipid diffusion. The bulk low concentration of PA combined with high local bursts is the opposite of PIP2 signaling. PIP2 is kept relatively high in the membrane and then transiently hydrolyzed near a protein in order to transiently reduce PIP2 signaling. PA signaling mirrors PIP2 signaling in that the bulk concentration of the signaling lipid need not change to exert a potent local effect on a target protein.
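To illustrate the point about local bursts, a simple single-site binding estimate; only the ~10 micromolar dissociation constant comes from the text, while the bulk and burst PA concentrations below are assumed values for illustration:

```python
def fraction_bound(ligand_uM: float, kd_uM: float = 10.0) -> float:
    """Single-site binding isotherm: fraction of channels with PA bound."""
    return ligand_uM / (kd_uM + ligand_uM)

# Assumed bulk PA level (low) versus a transient local burst near PLD (high)
for label, conc in [("bulk membrane", 0.1), ("local burst", 100.0)]:
    print(f"{label}: {fraction_bound(conc):.0%} of TREK-1 sites occupied")
# bulk membrane: ~1% occupied  -> channel effectively off
# local burst:   ~91% occupied -> channel activated
```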
As described above, PLD hydrolyzes PC to form PA and choline. Because choline is very abundant in the cell, PLD activity does not significantly affect choline levels; and choline is unlikely to play any role in signaling.
The role of PLD activation in numerous signaling contexts, combined with the lack of a role for choline, suggests that PA is important in signaling. However, PA is rapidly converted to DAG, and DAG is also known to be a signaling molecule. This raises the question as to whether PA has any direct role in signaling or whether it simply acts as a precursor for DAG production. If it is found that PA acts only as a DAG precursor, then one can raise the question as to why cells should produce DAG using two enzymes when they contain the PLC that could produce DAG in a single step.
PA produced by PLD or by DAGK can be distinguished by the addition of [γ-32P]ATP. This will show whether the phosphate group is newly derived from the kinase activity or whether it originates from the PC.
Although PA and DAG are interconvertible, they do not act in the same pathways. Stimuli that activate PLD do not activate enzymes downstream of DAG, and vice versa. For example, it was shown that addition of PLD to membranes results in the production of [32P]-labeled PA and [32P]-labeled phosphoinositides. The addition of DAGK inhibitors eliminates the production of [32P]-labeled PA but not the PLD-stimulated production of phosphoinositides.
It is possible that, though PA and DAG are interconvertible, separate pools of signaling and non-signaling lipids may be maintained. Studies have suggested that DAG signaling is mediated by polyunsaturated DAG, whereas PLD-derived PA is monounsaturated or saturated. Thus functional saturated/monounsaturated PA can be degraded by hydrolysing it to form non-functional saturated/monounsaturated DAG, whereas functional polyunsaturated DAG can be degraded by converting it into non-functional polyunsaturated PA.
This model suggests that PA and DAG effectors should be able to distinguish lipids with the same headgroups but with differing acyl chains. Although some lipid-binding proteins are able to insert themselves into membranes and could hypothetically recognize the type of acyl chain or the resulting properties of the membrane, many lipid-binding proteins are cytosolic and localize to the membrane by binding only the headgroups of lipids. Perhaps the different acyl chains can affect the angle of the head-group in the membrane. If this is the case, it suggests that a PA-binding domain must not only be able to bind PA specifically but must also be able to identify those head-groups that are at the correct angle. Whatever the mechanism is, such specificity is possible. It is seen in the pig testes DAGK that is specific for polyunsaturated DAG and in two rat hepatocyte LPPs that dephosphorylate different PA species with different activities. Moreover, the stimulation of SK1 activity by PS in vitro was shown to vary greatly depending on whether dioleoyl (C18:1), distearoyl (C18:0), or 1-stearoyl, 2-oleoyl species of PS were used.
Thus it seems that, though PA and DAG are interconvertible, the different species of lipids can have different biological activities; and this may enable the two lipids to maintain separate signaling pathways.
Measurement of PA production
As PA is rapidly converted to DAG, it is very short-lived in the cell. This means that it is difficult to measure PA production and therefore to study the role of PA in the cell. However, PLD activity can be measured by the addition of primary alcohols to the cell. PLD then carries out a transphosphatidylation reaction, instead of hydrolysis, producing phosphatidyl alcohols in place of PA. The phosphatidyl alcohols are metabolic dead-ends, and can be readily extracted and measured. Thus PLD activity and PA production (if not PA itself) can be measured, and, by blocking the formation of PA, the involvement of PA in cellular processes can be inferred.
Protein interactors
SK1
PDE4A1
Raf1
mTOR
PP1
SHP1
Spo20p
p47phox
PKCε
PLCβ
PIP5K
Opi1
TREK-1
Kv
Kir2.2
References
External links
Biomolecules
Signal transduction
Organophosphates | Phosphatidic acid | [
"Chemistry",
"Biology"
] | 2,781 | [
"Natural products",
"Signal transduction",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Neurochemistry",
"Molecular biology"
] |
1,557,358 | https://en.wikipedia.org/wiki/Molecular%20knot | In chemistry, a molecular knot is a mechanically interlocked molecular architecture that is analogous to a macroscopic knot. Naturally-forming molecular knots are found in organic molecules like DNA, RNA, and proteins. It is not certain that naturally occurring knots are evolutionarily advantageous to nucleic acids or proteins, though knotting is thought to play a role in the structure, stability, and function of knotted biological molecules. The mechanism by which knots naturally form in molecules, and the mechanism by which a molecule is stabilized or improved by knotting, is ambiguous. The study of molecular knots involves the formation and applications of both naturally occurring and chemically synthesized molecular knots. Applying chemical topology and knot theory to molecular knots allows biologists to better understand the structures and synthesis of knotted organic molecules.
The term knotane was coined by Vögtle et al. in 2000 to describe molecular knots by analogy with rotaxanes and catenanes, which are other mechanically interlocked molecular architectures. The term has not been broadly adopted by chemists and has not been adopted by IUPAC.
Naturally occurring molecular knots
Organic molecules containing knots may fall into the categories of slipknots or pseudo-knots. They are not considered mathematical knots because they are not closed curves, but rather knots that exist within an otherwise linear chain, with termini at each end. Knotted proteins are thought to form molecular knots during their tertiary structure folding process, and knotted nucleic acids generally form molecular knots during genomic replication and transcription, though the details of the knotting mechanism remain disputed and ambiguous. Molecular simulations are fundamental to the research on molecular knotting mechanisms.
Knotted DNA was found first by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has been found to also form knots. Naturally knotted RNA has not yet been reported.
A number of proteins containing naturally occurring molecular knots have been identified. The knot types found to be naturally occurring in proteins are the 31, 41, 52 and 61 knots, as identified in the KnotProt database of known knotted proteins.
Chemically synthesized molecular knots
Several synthetic molecular knots have been reported. Knot types that have been successfully synthesized in small molecules include the 31 (trefoil), 41 (figure-eight), 51 (pentafoil) and 819 knots. Though the 52 and 61 knots have been found to naturally occur in knotted proteins, they have not been successfully synthesized. Small-molecule composite knots have also not yet been synthesized.
Artificial DNA, RNA, and protein knots have been successfully synthesized. DNA is a particularly useful model for synthetic knot synthesis, as it naturally forms interlocked structures and can be easily manipulated to control precisely the entanglement necessary to form knots. Molecular knots are often synthesized with the help of metal ions that act as templates.
History
The first researcher to suggest the existence of a molecular knot in a protein was Jane Richardson in 1977, who reported that carbonic anhydrase B (CAB) exhibited apparent knotting during her survey of various proteins' topological behavior. However, the researcher generally credited with the discovery of the first knotted protein is Marc L. Mansfield in 1994, as he was the first to specifically investigate the occurrence of knots in proteins and confirm the existence of the trefoil knot in CAB. Knotted DNA was found first by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has been found to also form knots.
In 1989, Sauvage and coworkers reported the first synthetic knotted molecule: a trefoil synthesized via a double-helix complex with the aid of Cu+ ions.
Vögtle et al. were the first to describe molecular knots as knotanes, in 2000. Also in 2000, William Taylor created an alternative computational method to analyze protein knotting that set the termini at fixed points far enough away from the knotted component of the molecule that the knot type could be well-defined. In this study, Taylor discovered a deep knot in a protein, confirming the existence of deeply knotted proteins.
In 2007, Eric Yeates reported the identification of a molecular slipknot, in which a molecule contains knotted subchains even though its backbone chain as a whole is unknotted and does not contain completely knotted structures that are easily detectable by computational models. Mathematically, slipknots are difficult to analyze because they are not recognized in the examination of the complete structure.
A pentafoil knot prepared using dynamic covalent chemistry was synthesized by Ayme et al. in 2012, which at the time was the most complex non-DNA molecular knot prepared to date. Later in 2016, a fully organic pentafoil knot was also reported, including the very first use of a molecular knot to allosterically regulate catalysis. In January 2017, an 819 knot was synthesized by David Leigh's group, making the 819 knot the most complex molecular knot synthesized.
An important development in knot theory is allowing for intra-chain contacts within an entangled molecular chain. Circuit topology has emerged as a topological framework that formalises the arrangement of contacts as well as chain crossings in a folded linear chain. As a complementary approach, Colin Adams et al. developed a singular knot theory that is applicable to folded linear chains with intramolecular interactions.
Applications
Many synthetic molecular knots have a distinct globular shape and dimensions that make them potential building blocks in nanotechnology.
See also
Circuit topology of folded linear molecules
Supramolecular chemistry
Knotted protein
Knotted polymers
Topology (chemistry)
Knot theory
Molecular Borromean rings
References
External links
Supramolecular chemistry
Macrocycles
Molecular topology | Molecular knot | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,139 | [
"Organic compounds",
"Molecular topology",
"Macrocycles",
"Topology",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
1,557,416 | https://en.wikipedia.org/wiki/Case-hardening | Case-hardening or carburization is the process of introducing carbon to the surface of a low-carbon iron, or more commonly a low-carbon steel object, in order to harden the surface.
Iron which has a carbon content greater than ~0.02% is known as steel. Steel which has a carbon content greater than ~0.25% can be direct-hardened by heating it above its critical temperature (typically to around 800–900 °C) and then quickly cooling it, often by immersing in water or oil, known as quenching. Hardening is desirable for metal components because it gives increased strength and wear resistance, the tradeoff being that hardened steel is generally more brittle and less malleable than when it is in a softer state.
In order to produce a hard skin on steels which have less than ~0.2% carbon, carbon can be introduced into the surface by heating steel in the presence of some carbon-rich substance such as powdered charcoal or hydrocarbon gas. This causes carbon to diffuse into the surface of the steel. The depth of this high carbon layer depends on the exposure time, but 0.5mm is a typical case depth. Once this has been done the steel must be heated and quenched to harden this higher carbon 'skin'. Below this skin, the steel core will remain soft due to its low carbon content.
History
Early iron smelting made use of bloomeries which converted iron ore into metallic iron by heating it in a furnace which burnt wood and charcoal. Because the temperatures that could be achieved by this method were generally below the melting point of iron, it was not truly smelted, but instead converted into a spongy metallic iron/slag matrix. This matrix then required re-heating and hammering to extract as much of the slag as possible, in order to produce a low-carbon malleable wrought iron which could then be forged into tools etc. Due to its low carbon content, wrought iron is quite soft, so something like a knife blade could not be kept very sharp; it would blunt quickly and bend easily.
As smelting techniques improved, higher furnace temperatures could be achieved which were sufficient to fully melt iron. However, in the process, the iron picked up carbon from the charcoal or coke used to heat it. This resulted in molten iron with a carbon content of around 3%, which was termed cast iron. This liquid iron could be cast into complex shapes, but due to its high carbon content, it was very brittle, not at all malleable, and totally unsuitable for something like a knife blade. Further processing was required to remove the excess carbon from cast iron and create malleable wrought iron (the ultimate developments of this being the Bessemer converter and the Siemens process).
After the removal of almost all carbon from cast iron, the result was a metal that was very malleable and ductile but not very hard, nor capable of being hardened by heating and quenching. This led to the introduction of case hardening. The resulting case-hardened product combines much of the malleability and toughness of a low-carbon steel core with the hardness and resilience of the outer high-carbon steel skin.
The traditional method of applying the carbon to the surface of the iron involved packing the iron in a mixture of carbon-rich material such as ground bone and charcoal or a combination of leather, hooves, salt and urine, all inside a well-sealed box (the "case"). This carburizing package is then heated to a high temperature—but still under the melting point of the iron—and left at that temperature for a length of time. The longer the package is held at the high temperature, the deeper the carbon will diffuse into the surface. Different depths of hardening are desirable for different purposes: sharp tools need deep hardening to allow grinding and resharpening without exposing the soft core, while machine parts like gears might need only shallow hardening for increased wear resistance.
The resulting case-hardened part may show distinct surface discoloration, if the carbon material is mixed organic matter as described above. The steel darkens significantly and shows a mottled pattern of black, blue, and purple caused by the various compounds formed from impurities in the bone and charcoal. This oxide surface works similarly to bluing, providing a degree of corrosion resistance, as well as an attractive finish. Case colouring refers to this pattern and is commonly encountered as a decorative finish on firearms.
Case-hardened steel combines extreme hardness and extreme toughness, which is not readily matched by homogeneous alloys since hard homogeneous steels tend to be brittle, especially those steels whose hardness relies on carbon content alone. Alloy steels containing nickel, chromium, or molybdenum can have very high hardness, strength, or elongation values, but at a greater cost than a case-hardened item with a low-carbon core.
Chemistry
Carbon itself is solid at case-hardening temperatures and so is immobile. Transport to the surface of the steel is as gaseous carbon monoxide, generated by the breakdown of the carburising compound and the oxygen packed into the sealed box. This takes place with pure carbon but too slowly to be workable. Although oxygen is required for this process, it is re-circulated through the CO cycle and so the process can be carried out inside a sealed box (the "case"). The sealing is necessary to stop the CO either leaking out or being oxidised to CO2 by excess outside air.
Adding an easily decomposed carbonate "energiser" such as barium carbonate, which breaks down to BaO + CO2, encourages the reaction
C (from the donor) + CO2 ⇌ 2 CO
thereby increasing the overall abundance of CO and the activity of the carburising compound.
It is commonly believed that case-hardening was done with bone, but this is misleading. Although bone was used, the main carbon donor was hoof and horn. Bone contains some carbonates but is mainly calcium phosphate (as hydroxylapatite). This does not have the beneficial effect of encouraging CO production, and it can also introduce phosphorus as an impurity into the steel alloy.
Modern use
Both carbon and alloy steels are suitable for case-hardening; typically mild steels are used, with low carbon content, usually less than 0.3% (see plain-carbon steel for more information). These mild steels are not normally hardenable due to the low quantity of carbon, so the surface of the steel is chemically altered to increase the hardenability. Case-hardened steel is formed by diffusing carbon (carburization), nitrogen (nitriding) or boron (boriding) into the outer layer of the steel at high temperature, and then heat treating the surface layer to the desired hardness.
The term case-hardening is derived from the practicalities of the carburization process itself, which is essentially the same as the ancient process. The steel workpiece is placed inside a case packed tight with a carbon-based case-hardening compound. This is collectively known as a carburizing pack. The pack is put inside a hot furnace for a variable length of time. Time and temperature determine how deep into the surface the hardening extends. However, the depth of hardening is ultimately limited by the inability of carbon to diffuse deeply into solid steel, and a typical depth of surface hardening with this method is up to 1.5 mm. Other techniques are also used in modern carburizing, such as heating in a carbon-rich atmosphere. Small items may be case-hardened by repeated heating with a torch and quenching in a carbon-rich medium, such as the commercial products Kasenit / Casenite or "Cherry Red". Older formulations of these compounds contain potentially toxic cyanide compounds, while the more recent types such as Cherry Red do not.
Processes
Flame or induction hardening
Flame or induction hardening are processes in which the surface of the steel is heated very rapidly to high temperatures (by direct application of an oxy-gas flame, or by induction heating) then cooled rapidly, generally using water; this creates a "case" of martensite on the surface. A carbon content of 0.3–0.6 wt% C is needed for this type of hardening. Unlike other methods, flame or induction hardening does not change the chemical composition of the material. Because these are merely localized heat-treatment processes, they are typically only useful on steels that already contain enough carbon to respond sufficiently to quench hardening.
Typical uses are for the shackle of a lock, where the outer layer is hardened to be file resistant, and mechanical gears, where hard gear mesh surfaces are needed to maintain a long service life while toughness is required to maintain durability and resistance to catastrophic failure.
Flame hardening uses direct impingement of an oxy-gas flame onto a defined surface area. The result of the hardening process is controlled by four factors:
Design of the flame head
Duration of heating
Target temperature to be reached
Composition of the metal being treated
Carburizing
Carburizing is a process used to case-harden steel with a carbon content between 0.1 and 0.3 wt% C. In this process iron is introduced to a carbon rich environment at elevated temperatures for a certain amount of time, and then quenched so that the carbon is locked in the structure; one of the simpler procedures is repeatedly to heat a part with an acetylene torch set with a fuel-rich flame and quench it in a carbon-rich fluid such as oil.
Carburization is a diffusion-controlled process, so the longer the steel is held in the carbon-rich environment the greater the carbon penetration will be and the higher the carbon content. The carburized section will have a carbon content high enough that it can be hardened again through flame or induction hardening.
It is possible to carburize only a portion of a part, either by protecting the rest by a process such as copper plating, or by applying a carburizing medium to only a section of the part.
The carbon can come from a solid, liquid or gaseous source; if it comes from a solid source the process is called pack carburizing. Packing low carbon steel parts with a carbonaceous material and heating for some time diffuses carbon into the outer layers. A heating period of a few hours might form a high-carbon layer about one millimeter thick.
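A rough order-of-magnitude sketch of why hours of pack carburizing give case depths around a millimetre; the diffusion coefficient below is an assumed typical value for carbon in austenite at carburizing temperature, not a figure from the text:

```python
import math

D = 1e-11          # assumed diffusivity of carbon in austenite at ~900-950 C, m^2/s
hours = 4.0
t = hours * 3600.0 # seconds

# For a diffusion-controlled process the penetration depth scales with sqrt(D*t);
# 2*sqrt(D*t) is a common rule-of-thumb measure of the diffusion distance.
depth_m = 2.0 * math.sqrt(D * t)
print(f"Approximate case depth after {hours:.0f} h: {depth_m * 1000:.2f} mm")
# ~0.76 mm, consistent with "about one millimeter" for a heating period of a few hours
```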
Liquid carburizing involves placing parts in a bath of a molten carbon-containing material, often a metal cyanide; gas carburizing involves placing the parts in a furnace maintained with a methane-rich interior.
Nitriding
Nitriding heats the steel part to around 500 °C in an atmosphere of ammonia gas and dissociated ammonia. The time the part spends in this environment dictates the depth of the case. The hardness is achieved by the formation of nitrides. Nitride-forming elements must be present for this method to work; these elements include chromium, molybdenum, and aluminum. The advantage of this process is that it causes little distortion, so the part can be case-hardened after being quenched, tempered and machined.
No quenching is done after nitriding.
Cyaniding
Cyaniding is a case-hardening process that is fast and efficient; it is mainly used on low-carbon steels. The part is heated in a bath of molten sodium cyanide and then quenched and rinsed, in water or oil, to remove any residual cyanide.
2NaCN + O2 → 2NaCNO
2NaCNO + O2 → Na2CO3 + CO + N2
2CO → CO2 + C
This process produces a thin, hard shell that is harder than the one produced by carburizing, and can be completed in 20 to 30 minutes, compared to several hours for carburizing, so the parts have less opportunity to become distorted. It is typically used on small parts such as bolts, nuts, screws and small gears. The major drawback of cyaniding is that cyanide salts are poisonous.
Carbonitriding
Carbonitriding is similar to cyaniding except that a gaseous atmosphere of ammonia and hydrocarbons is used instead of sodium cyanide. If the part is to be quenched, it is heated to a higher temperature than if it is not.
Ferritic nitrocarburizing
Ferritic nitrocarburizing diffuses mostly nitrogen and some carbon into the case of a workpiece below the critical temperature. Below the critical temperature the workpiece's microstructure does not convert to an austenitic phase, but stays in the ferritic phase, which is why it is called ferritic nitrocarburization.
Applications
Parts that are subject to high pressures and sharp impacts are still commonly case-hardened. Examples include firing pins and rifle bolt faces, or engine camshafts. In these cases, the surfaces requiring the hardness may be hardened selectively, leaving the bulk of the part in its original tough state.
Firearms were a common item case-hardened in the past, as they required precision machining best done on low carbon alloys, yet needed the hardness and wear resistance of a higher carbon alloy. Many modern replicas of older firearms, particularly single action revolvers, are still made with case-hardened frames, or with case coloring, which simulates the mottled pattern left by traditional charcoal and bone case-hardening.
Another common application of case-hardening is on screws, particularly self-drilling screws. In order for the screws to be able to drill, cut and tap into other materials like steel, the drill point and the forming threads must be harder than the material(s) they are drilling into. However, if the whole screw is uniformly hard, it will become very brittle and break easily. This is overcome by ensuring that only the surface is hardened and the core remains relatively softer and thus less brittle. For screws and fasteners, case-hardening is achieved by a simple heat treatment consisting of heating and then quenching.
For theft prevention, lock shackles and chains are often case-hardened to resist cutting, whilst remaining less brittle inside to resist impact. As case-hardened components are difficult to machine, they are generally shaped before hardening.
See also
Differential hardening
Diffusion hardening
Quench polish quench
Shot peening
Surface engineering
Von Stahel und Eysen
References
External links
Case Hardening
Surface Hardening of Steels
Case Hardening Steel and Metal
Metal heat treatments | Case-hardening | [
"Chemistry"
] | 3,025 | [
"Metallurgical processes",
"Metal heat treatments"
] |
1,557,503 | https://en.wikipedia.org/wiki/Wake-on-ring | Wake-on-Ring (WOR) or Wake-on-Modem (WOM) is a specification that allows supported computers and devices to "wake up" or turn on from a sleeping, hibernating or "soft off" state (e.g. ACPI state G1 or G2), and begin operation.
The basic premise is that a special signal is sent over phone lines to the computer through its dial-up modem, telling it to fully power-on and begin operation. Common uses were archive databases and BBSes, although hobbyist use was significant.
Fax machines use a similar system, in which they are mostly idle until receiving an incoming fax signal, which spurs operation.
This style of remote operation has mostly been supplanted by Wake-on-LAN, which is newer but works in much the same way.
See also
Additional resources
"Wake on Modem" entry from Smart Computing Encyclopedia
Networking standards
BIOS
Unified Extensible Firmware Interface
Remote control | Wake-on-ring | [
"Technology",
"Engineering"
] | 207 | [
"Networking standards",
"Computing stubs",
"Computer standards",
"Computer networks engineering"
] |
1,557,574 | https://en.wikipedia.org/wiki/Conservation%20genetics | Conservation genetics is an interdisciplinary subfield of population genetics that aims to understand the dynamics of genes in a population for the purpose of natural resource management, conservation of genetic diversity, and the prevention of species extinction. Scientists involved in conservation genetics come from a variety of fields including population genetics, research in natural resource management, molecular ecology, molecular biology, evolutionary biology, and systematics. The genetic diversity within species is one of the three fundamental components of biodiversity (along with species diversity and ecosystem diversity), so it is an important consideration in the wider field of conservation biology.
Genetic diversity
Genetic diversity is the total amount of genetic variability within a species. It can be measured in several ways, including: observed heterozygosity, expected heterozygosity, the mean number of alleles per locus, the percentage of loci that are polymorphic, and estimated effective population size. Genetic diversity on the population level is a crucial focus for conservation genetics as it influences both the health of individuals and the long-term survival of populations: decreased genetic diversity has been associated with reduced average fitness of individuals, such as high juvenile mortality, reduced immunity, diminished population growth, and ultimately, higher extinction risk.
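A small sketch of how two of these measures might be computed at a single locus from genotype counts; the counts below are hypothetical:

```python
from collections import Counter

# Hypothetical genotype counts at one locus (each genotype is a pair of alleles)
genotypes = [("A", "A")] * 40 + [("A", "a")] * 45 + [("a", "a")] * 15

n = len(genotypes)
observed_het = sum(1 for g in genotypes if g[0] != g[1]) / n   # observed heterozygosity

# Allele frequencies, then expected heterozygosity He = 1 - sum(p_i^2)
allele_counts = Counter(a for g in genotypes for a in g)
total_alleles = sum(allele_counts.values())
expected_het = 1.0 - sum((c / total_alleles) ** 2 for c in allele_counts.values())

print(f"Ho = {observed_het:.3f}, He = {expected_het:.3f}")  # Ho = 0.450, He = 0.469
```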
Heterozygosity, a fundamental measurement of genetic diversity in population genetics, plays an important role in determining the chance of a population surviving environmental change and novel pathogens not previously encountered, as well as the average fitness within a population over successive generations. Heterozygosity is also deeply connected, in population genetics theory, to population size (which itself clearly has a fundamental importance to conservation). All things being equal, small populations will be less heterozygous, across their whole genomes, than comparable but larger populations. This lower heterozygosity (i.e. low genetic diversity) renders small populations more susceptible to the challenges mentioned above.
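The standard population-genetics expression for this loss of heterozygosity in an idealized population (a textbook result given here for illustration; Ne and t are the effective population size and number of generations):

```latex
% Expected heterozygosity after t generations in an idealized population of
% effective size N_e, with no mutation and no gene flow
H_t = H_0 \left(1 - \frac{1}{2N_e}\right)^{t}
```

The smaller Ne is, the faster heterozygosity declines, which is the sense in which small populations end up less heterozygous than comparable larger ones.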
In a small population, over successive generations and without gene flow, the probability of mating with close relatives becomes very high, leading to inbreeding depression, a reduction in the average fitness of individuals within a population. The reduced fitness of the offspring of closely related individuals is fundamentally tied to the concept of heterozygosity, as the offspring of these kinds of pairings are, by necessity, less heterozygous (more homozygous) across their whole genomes than outbred individuals. A diploid individual with the same maternal and paternal grandfather, for example, will have a much higher chance of being homozygous at any loci inherited from the paternal copies of each of their parents' genomes than would an individual with unrelated maternal and paternal grandfathers (each diploid individual inherits one copy of their genome from their mother and one from their father).
High homozygosity (low heterozygosity) reduces fitness because it exposes the phenotypic effects of recessive alleles at homozygous sites. Selection can favour the maintenance of alleles which reduce the fitness of homozygotes, the textbook example being the sickle-cell beta-globin allele, which is maintained at high frequencies in populations where malaria is endemic due to the highly adaptive heterozygous phenotype (resistance to the malarial parasite Plasmodium falciparum).
Low genetic diversity also reduces the opportunities for chromosomal crossover during meiosis to create new combinations of alleles on chromosomes, effectively increasing the average length of unrecombined tracts of chromosomes inherited from parents. This in turn reduces the efficacy of selection, across successive generations, to remove fitness-reducing alleles and promote fitness-enhancing alleles from a population. A simple hypothetical example would be two adjacent genes, A and B, on the same chromosome in an individual. If the allele at A promotes fitness "one point", while the allele at B reduces fitness "one point", but the two genes are inherited together, then selection cannot favour the allele at A while penalising the allele at B; the fitness balance is "zero points". Recombination can swap out alternative alleles at A and B, allowing selection to promote the optimal alleles to the optimal frequencies in the population, but only if there are alternative alleles to choose between.
The fundamental connection between genetic diversity and population size in population genetics theory can be clearly seen in the classic population genetics measure of genetic diversity, the Watterson estimator, in which genetic diversity is measured as a function of effective population size and mutation rate. Given the relationship between population size, mutation rate, and genetic diversity, it is clearly important to recognise populations at risk of losing genetic diversity before problems arise as a result of the loss of that genetic diversity. Once lost, genetic diversity can only be restored by mutation and gene flow. If a species is already on the brink of extinction there will likely be no populations to use to restore diversity by gene flow, and any given population will be small and therefore diversity will accumulate in that population by mutation much more slowly than it would in a comparable, but bigger, population (since there are fewer individuals whose genomes are mutating in a smaller population than a bigger population).
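A sketch of the Watterson estimator mentioned above, computed from the number of segregating sites in a sample; the sample numbers are hypothetical:

```python
def watterson_theta(segregating_sites: int, sample_size: int) -> float:
    """Watterson's estimator: theta_W = S / a_n, where a_n = sum_{i=1}^{n-1} 1/i.

    Under the neutral model, theta estimates 4 * Ne * mu for diploids, so lower
    diversity implies a smaller effective population size for a given mutation rate.
    """
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return segregating_sites / a_n

# Hypothetical sample: 50 sequences with 120 segregating sites
theta = watterson_theta(120, 50)
print(f"theta_W = {theta:.1f}")
```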
Contributors to extinction
Inbreeding and inbreeding depression
The accumulation of deleterious mutations
A decrease in frequency of heterozygotes in a population, or heterozygosity, which decreases a species' ability to evolve to deal with change in the environment
Outbreeding depression
Fragmented populations
Taxonomic uncertainties, which can lead to a reprioritization of conservation efforts
Genetic drift as the main evolutionary process, instead of natural selection
Management units within species
Hybridization with allochthonous species, with the progressive substitution of the initial endemic species
Techniques
Specific genetic techniques are used to assess the genomes of a species regarding specific conservation issues as well as general population structure. This analysis can be done using either the current DNA of individuals or historic DNA.
Techniques for analysing the differences between individuals and populations include
Alloenzymes
Restriction fragment length polymorphisms
Amplified fragment length polymorphisms
Random amplification of polymorphic DNA
Single strand conformation polymorphism
Minisatellites
Microsatellites
Single-nucleotide polymorphisms
DNA sequencing
These different techniques focus on different variable areas of the genomes within animals and plants. The specific information that is required determines which techniques are used and which parts of the genome are analysed. For example, mitochondrial DNA in animals has a high substitution rate, which makes it useful for identifying differences between individuals. However, it is only inherited in the female line, and the mitochondrial genome is relatively small. In plants, the mitochondrial DNA has very high rates of structural mutations, so is rarely used for genetic markers, as the chloroplast genome can be used instead. Other sites in the genome that are subject to high mutation rates such as the major histocompatibility complex, and the microsatellites and minisatellites are also frequently used.
These techniques can provide information on long-term conservation of genetic diversity and expound demographic and ecological matters such as taxonomy.
Another technique is using historic DNA for genetic analysis. Historic DNA is important because it allows geneticists to understand how species reacted to changes to conditions in the past. This is a key to understanding the reactions of similar species in the future.
Techniques using historic DNA include looking at preserved remains found in museums and caves. Museums are used because there is a wide range of species that are available to scientists all over the world. Evidence found in caves provides a longer perspective and does not disturb the animals.
Another technique that relies on specific genetics of an individual is noninvasive monitoring, which uses extracted DNA from organic material that an individual leaves behind, such as a feather. Environmental DNA (eDNA) can be extracted from soil, water, and air. Organisms deposit tissue cells into the environment, and the degradation of these cells releases DNA into the environment. This too avoids disrupting the animals and can provide information about the sex, movement, kinship and diet of an individual.
Other more general techniques can be used to correct genetic factors that lead to extinction and risk of extinction. For example, multiple steps can be taken to minimize inbreeding and increase genetic variation. Increasing heterozygosity through immigration, increasing the generational interval through cryopreservation or breeding from older animals, and increasing the effective population size through equalization of family size all help minimize inbreeding and its effects. Deleterious alleles arise through mutation; however, certain recessive ones can become more prevalent due to inbreeding. Deleterious mutations that arise from inbreeding can be removed by purging, or natural selection. Populations raised in captivity with the intent of being reintroduced in the wild suffer from adaptations to captivity.
Inbreeding depression, loss of genetic diversity, and genetic adaptation to captivity are disadvantageous in the wild, and many of these issues can be dealt with through the aforementioned techniques aimed at increasing heterozygosity. In addition, creating a captive environment that closely resembles the wild and fragmenting the populations so there is less response to selection also help reduce adaptation to captivity.
Solutions to minimize the factors that lead to extinction and risk of extinction often overlap because the factors themselves overlap. For example, deleterious mutations are added to populations through mutation, however the deleterious mutations conservation biologists are concerned with are ones that are brought about by inbreeding, because those are the ones that can be taken care of by reducing inbreeding. Here the techniques to reduce inbreeding also help decrease the accumulation of deleterious mutations.
Applications
These techniques have wide-ranging applications. One example is in defining species and subspecies of salmonids. Hybridization is an especially important issue in salmonids and this has wide-ranging conservation, political, social and economic implications.
A more specific example is the cutthroat trout. In analysis of its mtDNA and alloenzymes, hybridization between native and non-native species has been shown to be one of the major factors contributing to the decline in its populations. This has led to efforts to remove some hybridized populations so native populations could breed more readily. Cases like these impact everything from the economy of local fishermen to larger companies, such as timber firms.
Defining species and subspecies has conservation implications in mammals, too. For example, the northern white rhino and southern white rhino were previously mistakenly identified as the same species given their morphological similarities, but recent mtDNA analyses showed that the two are genetically distinct. As a result, the northern white rhino population has dwindled to near-extinction due to the poaching crisis, and the prior assumption that it could freely breed with the southern population has been revealed to be a misguided approach in conservation efforts.
More recent applications include using forensic genetic identification to identify species in cases of poaching. Wildlife DNA registers are used to regulate trade of protected species, species laundering, and poaching. Conservation genetics techniques can be used alongside a variety of scientific disciplines. For example, landscape genetics has been used in conjunction with conservation genetics to identify corridors and population dispersal barriers to give insight into conservation management.
Implications
New technology in conservation genetics has many implications for the future of conservation biology. At the molecular level, new technologies are advancing. Some of these techniques include the analysis of minisatellites and MHC. These molecular techniques have wider effects, from clarifying taxonomic relationships, as in the previous example, to determining the best individuals to reintroduce to a population for recovery by determining kinship. These effects then have consequences that reach even further. Conservation of species has implications for humans in the economic, social, and political realms. In the biological realm, increased genotypic diversity has been shown to help ecosystem recovery, as seen in a community of grasses which was able to resist disturbance from grazing geese through greater genotypic diversity. Because species diversity increases ecosystem function, increasing biodiversity through new conservation genetic techniques has wider-reaching effects than before.
A short list of studies a conservation geneticist may research include:
Phylogenetic classification of species, subspecies, geographic races, and populations, and measures of phylogenetic diversity and uniqueness.
Identifying hybrid species, hybridization in natural populations, and assessing the history and extent of introgression between species.
Population genetic structure of natural and managed populations, including identification of Evolutionary Significant Units (ESUs) and management units for conservation.
Assessing genetic variation within a species or population, including small or endangered populations, and estimates such as effective population size (Ne).
Measuring the impact of inbreeding and outbreeding depression, and the relationship between heterozygosity and measures of fitness (see Fisher's fundamental theorem of natural selection).
Evidence of disrupted mate choice and reproductive strategy in disturbed populations.
Forensic applications, especially for the control of trade in endangered species.
Practical methods for monitoring and maximizing genetic diversity during captive breeding programs and re-introduction schemes, including mathematical models and case studies.
Conservation issues related to the introduction of genetically modified organisms.
The interaction between environmental contaminants and the biology and health of an organism, including changes in mutation rates and adaptation to local changes in the environment (e.g. industrial melanism).
New techniques for noninvasive genotyping, see noninvasive genotyping for conservation.
Monitoring genetic variability in populations and assessing fitness-related genes across populations.
See also
Animal genetic resources
Forest genetic resources
The State of the World's Animal Genetic Resources for Food and Agriculture
Notes
References
External links
What is Conservation Genetics?
Science
Genetics
Blackwell - synergy
UTM Departments
UWYO
PNAS
Science
ESF
Conservation biology
Applied genetics
Population genetics
Rare breed conservation | Conservation genetics | [
"Biology"
] | 2,832 | [
"Conservation biology"
] |
1,557,634 | https://en.wikipedia.org/wiki/Propositional%20formula | In propositional logic, a propositional formula is a type of syntactic formula which is well formed. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula.
A propositional formula is constructed from simple propositions, such as "five is greater than three" or propositional variables such as p and q, using connectives or logical operators such as NOT, AND, OR, or IMPLIES; for example:
(p AND NOT q) IMPLIES (p OR q).
In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under discussion, just like an expression such as "x + y" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance.
Propositions
For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ..." . The linking semicolon ";" and the connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences is considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas).
For example: The assertion: "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by "AND"s: ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ) .
Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a particular object of sensation e.g. "This cow is blue", "There's a coyote!" ("That coyote IS there, behind the rocks."). Thus the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous.
Example: "That purple dog is running", "This cow is blue", "Switch M31 is closed", "This cap is off", "Tomorrow is Friday".
For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple sentences, although the result will probably sound stilted.
Relationship between propositional and predicate formulas
The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of propositions". It breaks a simple sentence down into two parts: (i) its subject (the object (singular or plural) of discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure " ___|predicate", and the predicate in turn generalized to all things with that property.
Example: "This blue pig has wings" becomes two sentences in the propositional calculus: "This pig has wings" AND "This pig is blue", whose internal structure is not considered. In contrast, in the predicate calculus, the first sentence breaks into "this pig" as the subject, and "has wings" as the predicate. Thus it asserts that object "this pig" is a member of the class (set, collection) of "winged things". The second sentence asserts that object "this pig" has an attribute "blue" and thus is a member of the class of "blue things". One might choose to write the two sentences connected with AND as:
p|W AND p|B
The generalization of "this pig" to a (potential) member of two classes "winged things" and "blue things" means that it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things", p is either found to be a member of this domain or not. Thus there is a relationship W (wingedness) between p (pig) and { T, F }, W(p) evaluates to { T, F } where { T, F } is the set of the Boolean values "true" and "false". Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F }. So one now can analyze the connected assertions "B(p) AND W(p)" for its overall truth-value, i.e.:
( B(p) AND W(p) ) evaluates to { T, F }
In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. called logical quantifiers are treated by the predicate calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There exists ..., At least one of ... exists, etc.). The predicate calculus, but not the propositional calculus, can establish the formal validity of the following statement:
"All blue pigs have wings but some pigs have no wings, hence some pigs are not blue".
Identity
Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain a "theory" of IDENTITY. Some authors refer to "predicate logic with identity" to emphasize this extension. See more about this below.
An algebra of propositions, the propositional calculus
An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called variables together with some other symbols such as parentheses (, ) and some sub-set of symbols such as *, +, ~, &, ∨, =, ≡, ∧, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than in semantics (meaning) of the symbols. The meanings are to be found outside the algebra.
For a well-formed sequence of symbols in the algebra —a formula— to have some usefulness outside the algebra the symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula is evaluated.
When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances or written assertions) linked by propositional connectives this whole algebraic system of symbols and rules and evaluation-methods is usually called the propositional calculus or the sentential calculus.
While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and NOT).
Usefulness of propositional formulas
Analysis: In deductive reasoning, philosophers, rhetoricians and mathematicians reduce arguments to formulas and then study them (usually with truth tables) for correctness (soundness). For example: Is the following argument sound?
"Given that consciousness is sufficient for an artificial intelligence and only conscious entities can pass the Turing test, before we can conclude that a robot is an artificial intelligence the robot must pass the Turing test."
Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction and minimization techniques to simplify their designs.
Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols) from truth tables. For example, one might write down a truth table for how binary addition should behave given the addition of variables "b" and "a" and "carry_in" "ci", and the results "carry_out" "co" and "sum" Σ:
Example: in row 5, ( (b+a) + ci ) = ( (1+0) + 1 ) = the number "2". Written as a binary number this is 10₂, where "co"=1 and Σ=0 as shown in the right-most columns.
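As a minimal sketch of this synthesis step (the names b, a, ci, co follow the paragraph above; the Python code itself is only illustrative, not part of any particular design flow), the following enumerates the full-adder truth table and checks the synthesized logic against the arithmetic:

from itertools import product

for b, a, ci in product([0, 1], repeat=3):
    total = b + a + ci                       # arithmetic sum: 0, 1, 2 or 3
    co = (b & a) | (b & ci) | (a & ci)       # carry_out: the majority function
    s  = b ^ a ^ ci                          # Σ: odd parity (XOR)
    assert total == 2 * co + s               # the logic reproduces the arithmetic
    print(b, a, ci, "->", co, s)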
Propositional variables
The simplest type of propositional formula is a propositional variable. Propositions that are simple (atomic), symbolic expressions are often denoted by variables named p, q, or P, Q, etc. A propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = p (here the symbol = means " ... is assigned the variable named ...") or "I only go to the movies on Monday" = q.
Truth-value assignments, formula evaluations
Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple sentences.
Truth values in rhetoric, philosophy and mathematics
The truth values are only two: { TRUTH "T", FALSITY "F" }. An empiricist puts all propositions into two broad classes: analytic—true no matter what (e.g. tautology), and synthetic—derived from experience and thereby susceptible to confirmation by third parties (the verification theory of meaning). Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. For example, my utterance "That cow is blue!" Is this statement a TRUTH? Truly I said it. And maybe I am seeing a blue cow—unless I am lying my statement is a TRUTH relative to the object of my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and "blue", and an ability to match the templates against the object of sensation (if indeed there is one).
Truth values in engineering
Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the final analysis engineers must trust their measuring instruments. In their quest for robustness, engineers prefer to pull known objects from a small library—objects that have well-defined, predictable behaviors even in large combinations, (hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN } etc.), and these are put in correspondence with { 0, 1 }. Such elements are called digital; those with a continuous range of behaviors are called analog. Whenever decisions must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146% UP) to digital (e.g. DOWN=0 ) by use of a comparator.
Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from "outside" the formula that represents the behavior of the (usually) compound object. An example is a garage door with two "limit switches", one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection of the circuit (either the diagram or the actual objects themselves—door, switches, wires, circuit board, etc.) might reveal that, on the circuit board "node 22" goes to +0 volts when the contacts of switch "SW_D" are mechanically in contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed"). The engineer must define the meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes 22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE or DANGEROUS.
Propositional connectives
Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional connectives. Examples of connectives include:
The unary negation connective. If φ is a formula, then ¬φ is a formula.
The classical binary connectives ∧, ∨, →, and ↔. Thus, for example, if φ and ψ are formulas, so is (φ → ψ).
Other binary connectives, such as NAND, NOR, and XOR
The ternary connective IF ... THEN ... ELSE ...
Constant 0-ary connectives ⊤ and ⊥ (alternately, constants { T, F }, { 1, 0 } etc. )
The "theory-extension" connective EQUALS (alternately, IDENTITY, or the sign " = " as distinguished from the "logical connective" )
Connectives of rhetoric, philosophy and mathematics
The following are the connectives common to rhetoric, philosophy and mathematics together with their truth tables. The symbols used will vary from author to author and between fields of endeavor. In general the abbreviations "T" and "F" stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g. the assertion: "That cow is blue" will have the truth-value "T" for Truth or "F" for Falsity, as the case may be.).
The connectives go by a number of different word-usages, e.g. "a IMPLIES b" is also said "IF a THEN b". Some of these are shown in the table.
Engineering connectives
In general, the engineering connectives are just the same as the mathematics connectives excepting they tend to evaluate with "1" = "T" and "0" = "F". This is done for the purposes of analysis/minimization and synthesis of formulas by use of the notion of minterms and Karnaugh maps (see below). Engineers also use the words logical product from Boole's notion (a*a = a) and logical sum from Jevons' notion (a+a = a).
CASE connective: IF ... THEN ... ELSE ...
The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory and computation theory and is the connective responsible for conditional goto's (jumps, branches). From this one connective all other connectives can be constructed (see more below). Although " IF c THEN b ELSE a " sounds like an implication it is, in its most reduced form, a switch that makes a decision and offers as outcome only one of two alternatives "a" or "b" (hence the name switch statement in the C programming language).
The following three propositions are equivalent (as indicated by the logical equivalence sign ≡ ):
( IF 'counter is zero' THEN 'go to instruction b' ELSE 'go to instruction a' ) ≡
( (c → b) & (~c → a) ) ≡ " ( IF 'counter is zero' THEN 'go to instruction b' ) AND ( IF it is NOT the case that 'counter is zero' THEN 'go to instruction a' ) " ≡
( (c & b) ∨ (~c & a) ) ≡ " ( 'counter is zero' AND 'go to instruction b' ) OR ( it is NOT the case that 'counter is zero' AND 'go to instruction a' ) "
Thus IF ... THEN ... ELSE—unlike implication—does not evaluate to an ambiguous "TRUTH" when the first proposition is false i.e. c = F in (c → b). For example, most people would reject the following compound proposition as a nonsensical non sequitur because the second sentence is not connected in meaning to the first.
Example: The proposition " IF 'Winston Churchill was Chinese' THEN 'The sun rises in the east' " evaluates as a TRUTH given that 'Winston Churchill was Chinese' is a FALSEHOOD and 'The sun rises in the east' evaluates as a TRUTH.
In recognition of this problem, the sign → of formal implication in the propositional calculus is called material implication to distinguish it from the everyday, intuitive implication.
The use of the IF ... THEN ... ELSE construction avoids controversy because it offers a completely deterministic choice between two stated alternatives; it offers two "objects" (the two alternatives b and a), and it selects between them exhaustively and unambiguously. In the truth table below, d1 is the formula: ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula: ( (c AND b) OR (NOT-c AND a). The two formulas are equivalent as shown by the columns "=d1" and "=d2". Electrical engineers call the fully reduced formula the AND-OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to n possible, but mutually exclusive outcomes. Electrical engineers call the CASE operator a multiplexer.
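The claimed equivalence of the two forms d1 and d2, and their agreement with the intuitive "select b if c, else a" behaviour, can be checked mechanically. The following is a small illustrative Python sketch (an exhaustive truth-table check, not a proof in any particular axiom system):

from itertools import product

def implies(x, y):
    return (not x) or y

for c, b, a in product([False, True], repeat=3):
    d1 = implies(c, b) and implies(not c, a)   # ( (c → b) & (~c → a) )
    d2 = (c and b) or ((not c) and a)          # ( (c & b) ∨ (~c & a) )
    case = b if c else a                       # IF c THEN b ELSE a
    assert d1 == d2 == case
print("d1, d2 and IF ... THEN ... ELSE agree on all 8 assignments")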
IDENTITY and evaluation
The first table of this section stars *** the entry logical equivalence to note the fact that "Logical equivalence" is not the same thing as "identity". For example, most would agree that the assertion "That cow is blue" is identical to the assertion "That cow is blue". On the other hand, logical equivalence sometimes appears in speech as in this example: " 'The sun is shining' means 'I'm biking' " Translated into a propositional formula the words become: "IF 'the sun is shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining'":
"IF 's' THEN 'b' AND IF 'b' THEN 's' " is written as ((s → b) & (b → s)) or in an abbreviated form as (s ↔ b). As the rightmost symbol string is a definition for a new symbol in terms of the symbols on the left, the use of the IDENTITY sign = is appropriate:
((s → b) & (b → s)) = (s ↔ b)
Different authors use different signs for logical equivalence: ↔ (e.g. Suppes, Goodstein, Hamilton), ≡ (e.g. Robbin), ⇔ (e.g. Bender and Williamson). Typically identity is written as the equals sign =. One exception to this rule is found in Principia Mathematica. For more about the philosophy of the notion of IDENTITY see Leibniz's law.
As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, "logic" is insufficient for mathematics and the deductive sciences. In fact the sign comes into the propositional calculus when a formula is to be evaluated.
In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p1, p2, p3, ... } and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens). the result of such a calculus will be another formula (i.e. a well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and truth, one must add axioms that define the behavior of the symbols called "the truth values" {T, F} ( or {1, 0}, etc.) relative to the other symbols.
For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any well-formed formulas (wffs) A and B in his "formal statement calculus" L. A valuation v is a function from the wffs of his system L to the range (output) { T, F }, given that each variable p1, p2, p3 in a wff is assigned an arbitrary truth value { T, F }.
The two definitions (i) v(~A) ≠ v(A) and (ii) v(A → B) = F if and only if v(A) = T and v(B) = F define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words "v(A) does not mean v(~A)". Definition (ii) specifies the third row in the truth table, and the other three rows then come from an application of the definitions. In particular (ii) assigns the value F (or a meaning of "F") to the entire expression. The definitions also serve as formation rules that allow substitution of a value previously derived into a formula:
Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system).
More complex formulas
As shown above, the CASE (IF c THEN b ELSE a ) connective is constructed either from the 2-argument connectives IF ... THEN ... and AND or from OR and AND and the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n), OR (a ∨ b ∨ c ∨ ... ∨ n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various theorems to analyze and simplify their formulas.
Electrical engineering uses drawn symbols and connects them with lines that stand for the mathematical acts of substitution and replacement. Engineers then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of "combinatorial logic" (i.e. connectives without feedback) such as "decoders", "encoders", "multifunction gates", "majority logic", "binary adders", "arithmetic logic units", etc.
Definitions
A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The following symbolism =Df is following the convention of Reichenbach. Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables. Each definition is producing a logically equivalent formula that can be used for substitution or replacement.
definition of a new variable: (c & d) =Df s
OR: ~(~a & ~b) =Df (a ∨ b)
IMPLICATION: (~a ∨ b) =Df (a → b)
XOR: (~a & b) ∨ (a & ~b) =Df (a ⊕ b)
LOGICAL EQUIVALENCE: ( (a → b) & (b → a) ) =Df ( a ≡ b )
Axiom and definition schemas
The definitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or "schemata"), that is, they are models (demonstrations, examples) for a general formula format but shown (for illustrative purposes) with specific letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter substitutions follow the rule of substitution below.
Example: In the definition (~a ∨ b) =Df (a → b), other variable-symbols such as "SW2" and "CON1" might be used, i.e. formally:
a =Df SW2, b =Df CON1, so we would have as an instance of the definition schema (~SW2 ∨ CON1) =Df (SW2 → CON1)
Substitution versus replacement
Substitution: The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be replaced in all instances throughout the overall formula.
Example: (c & d) ∨ (p & ~(c & ~d)), but (q1 & ~q2) ≡ d. Now wherever variable "d" occurs, substitute (q1 & ~q2):
(c & (q1 & ~q2)) ∨ (p & ~(c & ~(q1 & ~q2)))
Replacement: (i) the formula to be replaced must be within a tautology, i.e. logically equivalent (connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution it is permissible for the replacement to occur only in one place (i.e. for one formula).
Example: Use this set of formula schemas/equivalences:
( (a ∨ 0) ≡ a ).
( (a & ~a) ≡ 0 ).
( (~a ∨ b) =Df (a → b) ).
( ~(~a) ≡ a )
Inductive definition
The classical presentation of propositional logic (see Enderton 2002) uses the connectives ¬, ∧, ∨, →, and ↔. The set of formulas over a given set of propositional variables is inductively defined to be the smallest set of expressions such that:
Each propositional variable in the set is a formula,
(¬φ) is a formula whenever φ is, and
(φ • ψ) is a formula whenever φ and ψ are formulas and • is one of the binary connectives ∧, ∨, →, ↔.
This inductive definition can be easily extended to cover additional connectives.
The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let V denote a set of propositional variables and let X_V denote the set of all strings over an alphabet including the symbols in V, left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula-building operation, a function that takes strings in X_V and returns a string in X_V:
Given a string z, the negation operation returns (¬z).
Given strings y and z, the conjunction operation returns (y ∧ z). There are similar operations for ∨, →, and ↔ corresponding to the other binary connectives.
The set of formulas over V is defined to be the smallest subset of X_V containing V and closed under all the formula-building operations.
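As an illustrative sketch of the inductive definition (the tuple encoding and names below are arbitrary choices made for this example, not part of the formal definition), formulas can be represented in Python as nested structures built by the formation operations and evaluated recursively:

def evaluate(formula, valuation):
    # A formula is a variable name (string), a pair ('not', f),
    # or a triple (op, f, g) with op in {'and', 'or', 'implies', 'iff'}.
    if isinstance(formula, str):
        return valuation[formula]
    if formula[0] == 'not':
        return not evaluate(formula[1], valuation)
    op, f, g = formula
    x, y = evaluate(f, valuation), evaluate(g, valuation)
    if op == 'and':
        return x and y
    if op == 'or':
        return x or y
    if op == 'implies':
        return (not x) or y
    if op == 'iff':
        return x == y
    raise ValueError("unknown connective: " + str(op))

# (p AND NOT q) IMPLIES (p OR q), evaluated under p = True, q = True
f = ('implies', ('and', 'p', ('not', 'q')), ('or', 'p', 'q'))
print(evaluate(f, {'p': True, 'q': True}))   # True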
Parsing formulas
The following "laws" of the propositional calculus are used to "reduce" complex formulas. The "laws" can be verified easily with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence ≡ or identity =. A complete analysis of all 2n combinations of truth-values for its n distinct variables will result in a column of 1's (T's) underneath this connective. This finding makes each law, by definition, a tautology. And, for a given law, because its formula on the left and right are equivalent (or identical) they can be substituted for one another.
Example: The following truth table is De Morgan's law for the behavior of NOT over OR: ~(a ∨ b) ≡ (~a & ~b). To the left of the principal connective ≡ (yellow column labelled "taut") the formula ~(b ∨ a) evaluates to (1, 0, 0, 0) under the label "P". On the right of "taut" the formula (~(b) ∨ ~(a)) also evaluates to (1, 0, 0, 0) under the label "Q". As the two columns have equivalent evaluations, the logical equivalence ≡ under "taut" evaluates to (1, 1, 1, 1), i.e. P ≡ Q. Thus either formula can be substituted for the other if it appears in a larger formula.
Enterprising readers might challenge themselves to invent an "axiomatic system" that uses the symbols { ∨, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for ∨, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses eight of the laws defined below.
If used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be well-formed formulas and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas, that is, they stand in place of an infinite number of instances. Thus ( x ∨ y ) ≡ ( y ∨ x ) might be used in one instance, ( p ∨ 0 ) ≡ ( 0 ∨ p ) and in another instance ( 1 ∨ q ) ≡ ( q ∨ 1 ), etc.
Connective seniority (symbol rank)
In general, to avoid confusion during analysis and evaluation of propositional formulas, one can make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To "well-form" a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the connective's scope over which it is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and arithmetic signs added for completeness:
≡ (LOGICAL EQUIVALENCE)
→ (IMPLICATION)
& (AND)
∨ (OR)
~ (NOT)
∀x (FOR ALL x)
∃x (THERE EXISTS AN x)
= (IDENTITY)
+ (arithmetic sum)
* (arithmetic multiply)
' (s, arithmetic successor).
Thus the formula can be parsed—but because NOT does not obey the distributive law, the parentheses around the inner formula (~c & ~d) are mandatory:
Example: " d & c ∨ w " rewritten is ( (d & c) ∨ w )
Example: " a & a → b ≡ a & ~a ∨ b " rewritten (rigorously) is
≡ has seniority: ( ( a & a → b ) ≡ ( a & ~a ∨ b ) )
→ has seniority: ( ( a & (a → b) ) ≡ ( a & ~a ∨ b ) )
& has seniority both sides: ( ( a & (a → b) ) ≡ ( a & (~a ∨ b) ) )
~ has seniority: ( ( a & (a → b) ) ≡ ( a & (~(a) ∨ b) ) )
check 6 ( -parentheses and 6 ) -parentheses: ( ( a & (a → b) ) ≡ ( a & (~(a) ∨ b) ) )
Example:
d & c ∨ p & ~(c & ~d) ≡ c & d ∨ p & c ∨ p & ~d rewritten is ( ( (d & c) ∨ ( p & ~( c & ~(d) ) ) ) ≡ ( (c & d) ∨ (p & c) ∨ (p & ~(d)) ) )
Commutative and associative laws
Both AND and OR obey the commutative law and associative law:
Commutative law for OR: ( a ∨ b ) ≡ ( b ∨ a )
Commutative law for AND: ( a & b ) ≡ ( b & a )
Associative law for OR: (( a ∨ b ) ∨ c ) ≡ ( a ∨ (b ∨ c) )
Associative law for AND: (( a & b ) & c ) ≡ ( a & (b & c) )
Omitting parentheses in strings of AND and OR: The connectives are considered to be unary (one-variable, e.g. NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example:
( (c & d) ∨ (p & c) ∨ (p & ~d) ) above should be written ( ((c & d) ∨ (p & c)) ∨ (p & ~(d) ) ) or possibly ( (c & d) ∨ ( (p & c) ∨ (p & ~(d)) ) )
However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate.
Omitting parentheses with regards to a single-variable NOT: While ~(a) where a is a single variable is perfectly clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than one symbol, then the parentheses are mandatory, e.g. ~(a ∨ b).
Distributive laws
OR distributes over AND and AND distributes over OR. NOT does not distribute over AND or OR. See below about De Morgan's law:
Distributive law for OR: ( c ∨ ( a & b) ) ≡ ( (c ∨ a) & (c ∨ b) )
Distributive law for AND: ( c & ( a ∨ b) ) ≡ ( (c & a) ∨ (c & b) )
De Morgan's laws
NOT, when distributed over OR or AND, does something peculiar (again, these can be verified with a truth-table):
De Morgan's law for OR: ~(a ∨ b) ≡ (~a & ~b)
De Morgan's law for AND: ~(a & b) ≡ (~a ∨ ~b)
Laws of absorption
Absorption, in particular the first one, causes the "laws" of logic to differ from the "laws" of arithmetic:
Absorption (idempotency) for OR: (a ∨ a) ≡ a
Absorption (idempotency) for AND: (a & a) ≡ a
Laws of evaluation: Identity, nullity, and complement
The sign " = " (as distinguished from logical equivalence ≡, alternately ↔ or ⇔) symbolizes the assignment of value or meaning. Thus the string (a & ~(a)) symbolizes "0", i.e. it means the same thing as symbol "0" ". In some "systems" this will be an axiom (definition) perhaps shown as ( (a & ~(a)) =Df 0 ); in other systems, it may be derived in the truth table below:
Commutation of equality: (a = b) ≡ (b = a)
Identity for OR: (a ∨ 0) = a or (a ∨ F) = a
Identity for AND: (a & 1) = a or (a & T) = a
Nullity for OR: (a ∨ 1) = 1 or (a ∨ T) = T
Nullity for AND: (a & 0) = 0 or (a & F) = F
Complement for OR: (a ∨ ~a) = 1 or (a ∨ ~a) = T, law of excluded middle
Complement for AND: (a & ~a) = 0 or (a & ~a) = F, law of contradiction
Double negative (involution)
¬(¬a) ≡ a
Well-formed formulas (wffs)
A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms of its propositional variables and logical connectives. When formulas are written in infix notation, as above, unique readability is ensured through an appropriate use of parentheses in the definition of formulas. Alternatively, formulas can be written in Polish notation or reverse Polish notation, eliminating the need for parentheses altogether.
The inductive definition of infix formulas in the previous section can be converted to a formal grammar in Backus-Naur form:
<formula> ::= <propositional variable>
| ( ¬ <formula> )
| ( <formula> ∧ <formula> )
| ( <formula> ∨ <formula> )
| ( <formula> → <formula> )
| ( <formula> ↔ <formula> )
It can be shown that any expression matched by the grammar has a balanced number of left and right parentheses, and any nonempty initial segment of a formula has more left than right parentheses. This fact can be used to give an algorithm for parsing formulas. For example, suppose that an expression x begins with ( ¬. Starting after the second symbol, match the shortest subexpression y of x that has balanced parentheses. If x is a formula, there is exactly one symbol left after this expression, this symbol is a closing parenthesis, and y itself is a formula. This idea can be used to generate a recursive descent parser for formulas.
Example of parenthesis counting:
This method locates as "1" the principal connective the connective under which the overall evaluation of the formula occurs for the outer-most parentheses (which are often omitted). It also locates the inner-most connective where one would begin evaluatation of the formula without the use of a truth table, e.g. at "level 6".
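A minimal Python sketch of the parenthesis-counting idea is given below; it tags each connective of a fully parenthesized formula (a hypothetical example string, using the symbols of this article) with its nesting level, so the connective at level 1 is the principal connective:

def connective_levels(wff):
    depth, levels = 0, []
    for ch in wff:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch in '~&∨→≡':
            levels.append((ch, depth))
    return levels

print(connective_levels("((a)&((b)→(~(a))))"))
# [('&', 1), ('→', 2), ('~', 3)]  -- '&' is the principal connective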
Well-formed formulas versus valid formulas in inferences
The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: "The formula that represents the inference evaluates to "truth" beneath its principal connective, no matter what truth-values are assigned to its variables", i.e. the formula is a tautology.
Quite possibly a formula will be well-formed but not valid. Another way of saying this is: "Being well-formed is necessary for a formula to be valid but it is not sufficient." The only way to find out if it is both well-formed and valid is to submit it to verification with a truth table or by use of the "laws":
Example 1: What does one make of the following difficult-to-follow assertion? Is it valid? "If it's sunny, but if the frog is croaking then it's not sunny, then it's the same as saying that the frog isn't croaking." Convert this to a propositional formula as follows:
" IF (a AND (IF b THEN NOT-a) THEN NOT-a" where " a " represents "its sunny" and " b " represents "the frog is croaking":
( ( (a) & ( (b) → ~(a) ) ≡ ~(b) )
This is well-formed, but is it valid? In other words, when evaluated will this yield a tautology (all T) beneath the logical-equivalence symbol ≡ ? The answer is NO, it is not valid. However, if reconstructed as an implication then the argument is valid.
"Saying it's sunny, but if the frog is croaking then it's not sunny, implies that the frog isn't croaking."
Other circumstances may be preventing the frog from croaking: perhaps a crane ate it.
Example 2 (from Reichenbach via Bertrand Russell):
"If pigs have wings, some winged animals are good to eat. Some winged animals are good to eat, so pigs have wings."
( ((a) → (b)) & (b) → (a) ) is well formed, but an invalid argument as shown by the red evaluation under the principal implication:
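Both examples can also be checked by exhaustive evaluation. The following illustrative Python sketch (standing in for the verification one would otherwise read off a truth table) confirms that Example 2's formula is not a tautology, while the repaired implication form of Example 1 is:

from itertools import product

def implies(x, y):
    return (not x) or y

def is_tautology(f):
    return all(f(a, b) for a, b in product([False, True], repeat=2))

example2 = lambda a, b: implies(implies(a, b) and b, a)          # ((a → b) & b) → a
example1 = lambda a, b: implies(a and implies(b, not a), not b)  # (a & (b → ~a)) → ~b
print(is_tautology(example2))   # False: not valid (affirming the consequent)
print(is_tautology(example1))   # True: valid as an implication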
Reduced sets of connectives
A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including {∧, ¬}, {∨, ¬}, and {→, ¬}. There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively. Some pairs are not complete, for example {∧, ∨}.
The stroke (NAND)
The binary connective corresponding to NAND is called the Sheffer stroke, and written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in Principia Mathematica (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents logical equivalence:
~p ≡ p|p
p → q ≡ p|~q
p ∨ q ≡ ~p|~q
p & q ≡ ~(p|q)
In particular, the zero-ary connectives ⊤ (representing truth) and ⊥ (representing falsity) can be expressed using the stroke; for example, ⊤ ≡ p|(p|p) and ⊥ ≡ ⊤|⊤.
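As a small illustrative check (plain Python, exhaustive over the two variables), the stroke identities above hold when NAND is treated as a function:

from itertools import product

def nand(x, y):
    return not (x and y)

for p, q in product([False, True], repeat=2):
    assert (not p) == nand(p, p)                          # ~p ≡ p|p
    assert ((not p) or q) == nand(p, nand(q, q))          # p → q ≡ p|~q
    assert (p or q) == nand(nand(p, p), nand(q, q))       # p ∨ q ≡ ~p|~q
    assert (p and q) == nand(nand(p, q), nand(p, q))      # p & q ≡ ~(p|q)
print("all four stroke identities hold")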
IF ... THEN ... ELSE
This connective together with { 0, 1 } ( or { F, T }, or { ⊥, ⊤ } ) forms a complete set. In the following the IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ) = d
(c, b, a):
(c, 0, 1) ≡ ~c
(c, b, 1) ≡ (c → b)
(c, c, a) ≡ (c ∨ a)
(c, b, c) ≡ (c & b)
Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed, below the proof is its truth-table verification. ( Note: (c → b) is defined to be (~c ∨ b) ):
Begin with the reduced form: ( (c & b) ∨ (~c & a) )
Substitute "1" for a: ( (c & b) ∨ (~c & 1) )
Identity (~c & 1) = ~c: ( (c & b) ∨ (~c) )
Law of commutation for ∨: ( (~c) ∨ (c & b) )
Distribute "~c ∨" over (c & b): ( ((~c) ∨ c) & ((~c) ∨ b) )
Law of excluded middle ( ((~c) ∨ c) = 1 ): ( (1) & ((~c) ∨ b) )
Distribute "(1) &" over ((~c) ∨ b): ( ((1) & (~c)) ∨ ((1) & b) )
Commutativity and Identity ( (1 & ~c) = (~c & 1) = ~c, and (1 & b) = (b & 1) = b ): ( ~c ∨ b )
( ~c ∨ b ) is defined as c → b. Q.E.D.
In the following truth table the column labelled "taut" for tautology evaluates logical equivalence (symbolized here by ≡) between the two columns labelled d. Because all four rows under "taut" are 1's, the equivalence indeed represents a tautology.
Normal forms
An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas that have simpler forms, known as normal forms. Some common normal forms include conjunctive normal form and disjunctive normal form. Any propositional formula can be reduced to its conjunctive or disjunctive normal form.
Reduction to normal form
Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are very suitable for a small number of variables (5 or fewer). Some sophisticated tabular methods exist for more complex circuits with multiple outputs but these are beyond the scope of this article; for more see Quine–McCluskey algorithm.
Literal, term and alterm
In electrical engineering, a variable x or its negation ~(x) can be referred to as a literal. A string of literals connected by ANDs is called a term. A string of literals connected by OR is called an alterm. Typically the literal ~(x) is abbreviated ~x. Sometimes the &-symbol is omitted altogether in the manner of algebraic multiplication.
Examples
a, b, c, d are variables. ((( a & ~(b) ) & ~(c)) & d) is a term. This can be abbreviated as (a & ~b & ~c & d), or a~b~cd.
p, q, r, s are variables. (((p ∨ ~(q) ) ∨ r) ∨ ~(s) ) is an alterm. This can be abbreviated as (p ∨ ~q ∨ r ∨ ~s).
Minterms
In the same way that a 2ⁿ-row truth table displays the evaluation of a propositional formula for all 2ⁿ possible values of its variables, n variables produces a 2ⁿ-square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produces 2³ = 8 rows and 8 Karnaugh squares; 4 variables produces 16 truth-table rows and 16 squares and therefore 16 minterms. Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm.
Any propositional formula can be reduced to the "logical sum" (OR) of the active (i.e. "1"- or "T"-valued) minterms. When in this form the formula is said to be in disjunctive normal form. But even though it is in this form, it is not necessarily minimized with respect to either the number of terms or the number of literals.
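The extraction of active minterms can be sketched in a few lines of Python; the formula below is this article's running example q = (c & d) ∨ (p & ~(c & ~d)), and the printed result is its (unminimized) disjunctive normal form:

from itertools import product

def q(p, c, d):
    return (c and d) or (p and not (c and not d))

minterms = []
for p, c, d in product([0, 1], repeat=3):
    if q(p, c, d):
        literals = [name if bit else "~" + name
                    for name, bit in zip("pcd", (p, c, d))]
        minterms.append("(" + " & ".join(literals) + ")")
print(" ∨ ".join(minterms))   # four active minterms, one per "1" row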
In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits "cba", in other words:
Example
cba₂ = c·2² + b·2¹ + a·2⁰:
cba = (c=1, b=0, a=1) = 101₂ = 1·2² + 0·2¹ + 1·2⁰ = 5₁₀
This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. This notion can be extended to three and four-dimensional hypercubes called Hasse diagrams where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing).
When working with Karnaugh maps one must always keep in mind that the top edge "wrap arounds" to the bottom edge, and the left edge wraps around to the right edge—the Karnaugh diagram is really a three- or four- or n-dimensional flattened object.
Reduction by use of the map method (Veitch, Karnaugh)
Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplified the Veitch diagram by converting the minterms, written in their literal-form (e.g. ~abc~d) into numbers. The method proceeds as follows:
Produce the formula's truth table
Produce the formula's truth table. Number its rows using the binary-equivalents of the variables (usually just sequentially 0 through n-1) for n variables.
Technically, the propositional function has been reduced to its (unminimized) disjunctive normal form: each row has its minterm expression and these can be OR'd to produce the formula in its (unminimized) disjunctive normal form.
Example: ((c & d) ∨ (p & ~(c & (~d)))) = q in disjunctive normal form is:
( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = q
However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6).
Create the formula's Karnaugh map
Use the values of the formula (e.g. "p") found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of "d" for "don't care" appear in the table, this adds flexibility during the reduction phase.
Reduce minterms
Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals, and the number terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical, even the edges represent abutting squares) lose one literal, four squares in a 4 x 1 rectangle (horizontal or vertical) or 2 x 2 square (even the four corners represent abutting squares) lose two literals, eight squares in a rectangle lose 3 literals, etc. (One seeks out the largest square or rectangles and ignores the smaller squares or rectangles contained totally within it. ) This process continues until all abutting squares are accounted for, at which point the propositional formula is minimized.
For example, squares #3 and #7 abut; these two abutting squares can lose one literal (e.g. "p" from squares #3 and #7).
Example: The map method usually is done by inspection. The following example expands the algebraic method to show the "trick" behind the combining of terms on a Karnaugh map:
Minterms #3 and #7 abut, #7 and #6 abut, and #4 and #6 abut (because the table's edges wrap around). So each of these pairs can be reduced.
Observe that by the Idempotency law (A ∨ A) = A, we can create more terms. Then by the association and distributive laws the variables that are to disappear can be paired, and then "disappeared" with the law of excluded middle (x ∨ ~x) = 1 and the law of identity (x & 1) = x. The following uses brackets [ and ] only to keep track of the terms; they have no special significance:
Put the formula to be reduced into disjunctive normal form:
q = ( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = ( #3 ∨ #7 ∨ #6 ∨ #4 )
Idempotency (absorption) ( A ∨ A ) = A:
( #3 ∨ [ #7 ∨ #7 ] ∨ [ #6 ∨ #6 ] ∨ #4 )
Associative law (x ∨ (y ∨ z)) = ( (x ∨ y) ∨ z )
( [ #3 ∨ #7 ] ∨ [ #7 ∨ #6 ] ∨ [ #6 ∨ #4] )
[ (~p & d & c ) ∨ (p & d & c) ] ∨ [ (p & d & c) ∨ (p & d & ~c) ] ∨ [ (p & d & ~c) ∨ (p & ~d & ~c) ].
Distributive law ( (x & y) ∨ (x & z) ) = ( x & (y ∨ z) ), applied to factor each pair:
( [ (d & c) & (~p ∨ p) ] ∨ [ (p & d) & (c ∨ ~c) ] ∨ [ (p & ~c) & (d ∨ ~d) ] )
Commutative law and law of excluded middle (x ∨ ~x) = (~x ∨ x) = 1:
( [ (d & c) & 1 ] ∨ [ (p & d) & 1 ] ∨ [ (p & ~c) & 1 ] )
Law of identity ( x & 1 ) = x leading to the reduced form of the formula:
q = ( (d & c) ∨ (p & d) ∨ (p & ~c) )
Verify reduction with a truth table
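In place of the full truth table, the check can be done exhaustively in a few lines of Python (an illustrative sketch): the reduced formula agrees with the original on all eight assignments.

from itertools import product

original = lambda p, c, d: (c and d) or (p and not (c and not d))
reduced  = lambda p, c, d: (d and c) or (p and d) or (p and not c)

assert all(original(p, c, d) == reduced(p, c, d)
           for p, c, d in product([False, True], repeat=3))
print("q = (d & c) ∨ (p & d) ∨ (p & ~c) matches the original on all 8 rows")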
Impredicative propositions
Given the following examples-as-definitions, what does one make of the subsequent reasoning:
(1) "This sentence is simple." (2) "This sentence is complex, and it is conjoined by AND."
Then assign the variable "s" to the left-most sentence "This sentence is simple". Define "compound" c = "not simple" ~s, and assign c = ~s to "This sentence is compound"; assign "j" to "It [this sentence] is conjoined by AND". The second sentence can be expressed as:
( NOT(s) AND j )
If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. "This sentence is complex" is a FALSEHOOD (it is simple, by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition—that is, when an object m has a property P, but the object m is defined in terms of property P. The best advice for a rhetorician or one involved in deductive analysis is to avoid impredicative definitions but at the same time be on the lookout for them because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback.
Propositional formula with "feedback"
The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (either axiomatic or truth-table systems of objects and relations) that forbids this from happening.
The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p ∨ s) = q, then let p = q. Observe that q's "definition" depends on itself "q" as well as on "s" and the OR connective; this definition of q is thus impredicative.
Either of two conditions can result: oscillation or memory.
It helps to think of the formula as a black box. Without knowledge of what is going on "inside" the formula-"box" from the outside it would appear that the output is no longer a function of the inputs alone. That is, sometimes one looks at q and sees 0 and other times 1. To avoid this problem one has to know the state (condition) of the "hidden" variable p inside the box (i.e. the value of q fed back and assigned to p). When this is known the apparent inconsistency goes away.
To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential circuits. Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one can build any sort of bounded computational model (e.g. Turing machines, counter machines, register machines, Macintosh computers, etc.).
Oscillation
In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: ~(~(p=q)) = q. Analysis of an abstract (ideal) propositional formula in a truth-table reveals an inconsistency for both p=1 and p=0 cases: When p=1, q=0, this cannot be because p=q; ditto for when p=0 and q=1.
Oscillation with delay: If a delay (ideal or non-ideal) is inserted in the abstract formula between p and q then p will oscillate between 1 and 0: 101010...101... ad infinitum. If either of the delay and NOT are not abstract (i.e. not ideal), the type of analysis to be used will be dependent upon the exact nature of the objects that make up the oscillator; such things fall outside mathematics and into engineering.
Analysis requires a delay to be inserted and then the loop cut between the delay and the input "p". The delay must be viewed as a kind of proposition that has "qd" (q-delayed) as output for "q" as input. This new proposition adds another column to the truth table. The inconsistency is now between "qd" and "p" as shown in red; two stable states resulting:
Memory
Without delay, inconsistencies must be eliminated from a truth table analysis. With the notion of "delay", this condition presents itself as a momentary inconsistency between the fed-back output variable q and p = qdelayed.
A truth table reveals the rows where inconsistencies occur between p = qdelayed at the input and q at the output. After "breaking" the feed-back, the truth table construction proceeds in the conventional manner. But afterwards, in every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted (i.e. p=0 together with q=1, or p=1 and q=0); when the "line" is "remade" both are rendered impossible by the Law of contradiction ~(p & ~p). Rows revealing inconsistencies are either considered transient states or just eliminated as inconsistent and hence "impossible".
Once-flip memory
Perhaps the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output "q" feeding back into "p". Given that the formula is first evaluated (initialized) with p=0 & q=0, it will "flip" once when "set" by s=1. Thereafter, output "q" will sustain "q" in the "flipped" condition (state q=1). This behavior, now time-dependent, is shown by the state diagram to the right of the once-flip.
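A minimal simulation of this behaviour (an illustrative Python sketch; the function name is hypothetical, but the formula q = s ∨ q with q fed back is the one described above) shows the single irreversible flip:

# Hypothetical sketch of the "once-flip": q = (s OR q), with the output fed back.
def once_flip(s_sequence, q=0):
    states = []
    for s in s_sequence:
        q = int(s or q)   # OR formula with its own output as one input
        states.append(q)
    return states

# q stays 0 until s=1 occurs once; thereafter q remains 1 even when s returns to 0.
print(once_flip([0, 0, 1, 0, 0]))  # [0, 0, 1, 1, 1]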
Flip-flop memory
The next simplest case is the "set-reset" flip-flop shown below the once-flip. Given that r=0 & s=0 and q=0 at the outset, it is "set" (s=1) in a manner similar to the once-flip. It however has a provision to "reset" q=0 when "r"=1. An additional complication occurs if both set=1 and reset=1. In this formula, set=1 forces the output q=1, so when and if (s=0 & r=1) the flip-flop will be reset. Or, if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal) instance in which s=1 ⇒ s=0 & r=1 ⇒ r=0 simultaneously, the formula q will be indeterminate (undecidable). Due to delays in "real" OR, AND and NOT the result will be unknown at the outset but thereafter predictable.
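The text above does not spell out the formula, but one common set-dominant formulation consistent with the behaviour described is q_next = s ∨ (q ∧ ~r). The sketch below (illustrative Python; the formula and names are assumptions, not taken from the original) steps through a set, a hold and a reset:

# Hypothetical set-dominant set-reset formulation: q_next = s OR (q AND NOT r).
def set_reset(sr_sequence, q=0):
    states = []
    for s, r in sr_sequence:
        q = int(s or (q and not r))
        states.append(q)
    return states

# Set with s=1, hold with s=r=0, then reset with r=1.
print(set_reset([(0, 0), (1, 0), (0, 0), (0, 1), (0, 0)]))  # [0, 1, 1, 0, 0]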
Clocked flip-flop memory
The formula known as "clocked flip-flop" memory ("c" is the "clock" and "d" is the "data") is given below. It works as follows: When c = 0 the data d (either 0 or 1) cannot "get through" to affect output q. When c = 1 the data d "gets through" and output q "follows" d's value. When c goes from 1 to 0 the last value of the data remains "trapped" at output "q". As long as c=0, d can change value without causing q to change.
Examples
( ( c & d ) ∨ ( p & ( ~( c & ~( d ) ) ) ) ) = q, but now let p = q:
( ( c & d ) ∨ ( q & ( ~( c & ~( d ) ) ) ) ) = q
The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions.
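The clocked formula can likewise be stepped through directly. In the sketch below (Python used only for illustration; the function name is hypothetical), the output follows d while c=1 and holds its last value while c=0, matching the description above:

# The clocked ("gated") formula with the output q fed back in place of p:
#   q_next = (c AND d) OR (q AND NOT(c AND NOT d))
def clocked_flip_flop(cd_sequence, q=0):
    states = []
    for c, d in cd_sequence:
        q = int((c and d) or (q and not (c and not d)))
        states.append(q)
    return states

# While c=1 the output follows d; when c drops to 0 the last value of d is held.
print(clocked_flip_flop([(1, 1), (1, 0), (1, 1), (0, 0), (0, 1)]))  # [1, 0, 1, 1, 1]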
Historical development
Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle: (1) The law of identity: "Whatever is, is.", (2) The law of noncontradiction: "Nothing can both be and not be", and (3) The law of excluded middle: "Everything must be or not be."
Example: Here O is an expression about an object's BEING or QUALITY:
Law of Identity: O = O
Law of contradiction: ~(O & ~(O))
Law of excluded middle: (O ∨ ~(O))
The use of the word "everything" in the law of excluded middle renders Russell's expression of this law open to debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a finite "universe of discourse") -- the members of which can be investigated one after another for the presence or absence of the assertion—then the law is considered intuitionistically appropriate. Thus an assertion such as: "This object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have this QUALITY (relative to the objects in the collection)" is acceptable. See more at Venn diagram.
Although a propositional calculus originated with Aristotle, the notion of an algebra applied to propositions had to wait until the early 19th century. In an (adverse) reaction to the 2000-year tradition of Aristotle's syllogisms, John Locke's Essay concerning human understanding (1690) used the word semiotics (theory of the use of symbols). By 1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George Bentham's work (1827) resulted in the notion of "quantification of the predicate" (nowadays symbolized as ∀ ≡ "for all"). A "row" instigated by William Hamilton over a priority dispute with Augustus De Morgan "inspired George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847" (Grattan-Guinness and Bornet 1997:xxviii).
About his contribution Grattan-Guinness and Bornet comment:
"Boole's principal single innovation was [the] law [ xn = x ] for logic: it stated that the mental acts of choosing the property x and choosing x again and again is the same as choosing x once... As consequence of it he formed the equations x•(1-x)=0 and x+(1-x)=1 which for him expressed respectively the law of contradiction and the law of excluded middle" (p. xxviiff). For Boole "1" was the universe of discourse and "0" was nothing.
Gottlob Frege's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism was so daunting that it had little influence except on one person: Bertrand Russell. First as the student of Alfred North Whitehead he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1902) around the problem of an antinomy that he discovered in Frege's treatment (cf Russell's paradox). Russell's work led to a collaboration with Whitehead that, in 1910, produced the first volume of Principia Mathematica (PM). It is here that what we consider "modern" propositional logic first appeared. In particular, PM introduces NOT and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION → (def. *1.01: ~p ∨ q), then AND (def. *3.01: ~(~p ∨ ~q)), then EQUIVALENCE p ←→ q (*4.01: (p → q) & (q → p)).
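These definitions can be checked mechanically against the familiar truth tables. The short sketch below (Python, purely illustrative; the function names are not PM's notation) builds →, & and ←→ from NOT and OR alone and verifies all four cases:

from itertools import product

# PM's definitions built only from NOT and OR (names are illustrative):
def IMPLIES(p, q):   # *1.01: ~p ∨ q
    return (not p) or q

def AND(p, q):       # *3.01: ~(~p ∨ ~q)
    return not ((not p) or (not q))

def EQUIV(p, q):     # *4.01: (p → q) & (q → p)
    return AND(IMPLIES(p, q), IMPLIES(q, p))

for p, q in product([False, True], repeat=2):
    assert AND(p, q) == (p and q)
    assert EQUIV(p, q) == (p == q)
    assert IMPLIES(p, q) == (not (p and not q))
print("*1.01, *3.01 and *4.01 reproduce the familiar truth tables.")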
Henry M. Sheffer (1921) and Jean Nicod demonstrate that only one connective, the "stroke" | is sufficient to express all propositional formulas.
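That sufficiency is easy to verify exhaustively. The sketch below (illustrative Python; the definitions of NOT, AND and OR in terms of the stroke are the standard ones, not quoted from Sheffer or Nicod) recovers the three usual connectives from the stroke alone:

from itertools import product

def stroke(p, q):          # p | q  ==  NOT(p AND q)
    return not (p and q)

def NOT(p):
    return stroke(p, p)                          # ~p     ==  p | p

def AND(p, q):
    return stroke(stroke(p, q), stroke(p, q))    # p & q  ==  (p|q)|(p|q)

def OR(p, q):
    return stroke(stroke(p, p), stroke(q, q))    # p ∨ q  ==  (p|p)|(q|q)

for p, q in product([False, True], repeat=2):
    assert NOT(p) == (not p)
    assert AND(p, q) == (p and q)
    assert OR(p, q) == (p or q)
print("NOT, AND and OR are all expressible with the stroke alone.")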
Emil Post (1921) develops the truth-table method of analysis in his "Introduction to a general theory of elementary propositions". He notes Nicod's stroke | .
Whitehead and Russell add an introduction to their 1927 re-publication of PM adding, in part, a favorable treatment of the "stroke".
Computation and switching logic:
William Eccles and F. W. Jordan (1919) describe a "trigger relay" made from a vacuum tube.
George Stibitz (1937) invents the binary adder using mechanical relays. He builds this on his kitchen table.
Example: Given binary bits ai and bi and carry-in (c_ini), their summation Σi and carry-out (c_outi) are:
( ( ai XOR bi ) XOR c_ini ) = Σi
( ( ai & bi ) ∨ ( c_ini & ( ai XOR bi ) ) ) = c_outi
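A quick exhaustive check (an illustrative Python sketch; the variable and function names are hypothetical) confirms that the pair (carry-out, sum) encodes the arithmetic total for all eight input combinations:

from itertools import product

def full_adder(a, b, c_in):
    s = (a ^ b) ^ c_in                  # Σ = (a XOR b) XOR c_in
    c_out = (a & b) | (c_in & (a ^ b))  # carry-out
    return s, c_out

for a, b, c_in in product([0, 1], repeat=3):
    s, c_out = full_adder(a, b, c_in)
    assert 2 * c_out + s == a + b + c_in
print("The full-adder formulas agree with binary arithmetic in all eight cases.")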
Alan Turing builds a multiplier using relays (1937–1938). He has to hand-wind his own relay coils to do this.
Textbooks about "switching circuits" appear in the early 1950s.
Willard Quine 1952 and 1955, E. W. Veitch 1952, and M. Karnaugh (1953) develop map-methods for simplifying propositional functions.
George H. Mealy (1955) and Edward F. Moore (1956) address the theory of sequential (i.e. switching-circuit) "machines".
E. J. McCluskey and H. Shorr develop a method for simplifying propositional (switching) circuits (1962).
Footnotes
Citations
References
and , 2005, A Short Course in Discrete Mathematics, Dover Publications, Mineola NY, . This text is used in a "lower division two-quarter [computer science] course" at UC San Diego.
, 2002, A Mathematical Introduction to Logic. Harcourt/Academic Press.
, (Pergamon Press 1963), 1966, (Dover edition 2007), Boolean Algebra, Dover Publications, Inc., Mineola, New York. Emphasis on the notion of "algebra of classes" with set-theoretic symbols such as ∩, ∪, ' (NOT), ⊂ (IMPLIES). Later Goldstein replaces these with &, ∨, ¬, → (respectively) in his treatment of "Sentence Logic" pp. 76–93.
Grattan-Guinness, Ivor and Gérard Bornet 1997, George Boole: Selected Manuscripts on Logic and its Philosophy, Birkhäuser Verlag, Basel (Boston).
1978, Logic for Mathematicians, Cambridge University Press, Cambridge UK, .
1965, Introduction to the Theory of Switching Circuits, McGraw-Hill Book Company, New York. No ISBN. Library of Congress Catalog Card Number 65-17394. McCluskey was a student of Willard Quine and developed some notable theorems with Quine and on his own. For those interested in the history, the book contains a wealth of references.
1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc, Englewood Cliffs, N.J.. No ISBN. Library of Congress Catalog Card Number 67-12342. Useful especially for computability, plus good sources.
1969, 1997, Mathematical Logic: A First Course, Dover Publications, Inc., Mineola, New York, (pbk.).
1957 (1999 Dover edition), Introduction to Logic, Dover Publications, Inc., Mineola, New York. (pbk.). This book is in print and readily available.
On his page 204 in a footnote he references his set of axioms to E. V. Huntington, "Sets of Independent Postulates for the Algebra of Logic", Transactions of the American Mathematical Society, Vol. 5 (1904), pp. 288–309.
1941 (1995 Dover edition), Introduction to Logic and to the Methodology of Deductive Sciences, Dover Publications, Inc., Mineola, New York. (pbk.). This book is in print and readily available.
1967, 3rd printing with emendations 1976, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge, Massachusetts. (pbk.) Translation/reprints of Frege (1879), Russell's letter to Frege (1902) and Frege's letter to Russell (1902), Richard's paradox (1905), Post (1921) can be found here.
and 1927 2nd edition, paperback edition to *53 1962, Principia Mathematica, Cambridge University Press, no ISBN. In the years between the first edition of 1910 and the 2nd edition of 1927, H. M. Sheffer 1921 and M. Jean Nicod (no year cited) brought to Russell's and Whitehead's attention that what they considered their primitive propositions (connectives) could be reduced to a single |, nowadays known as the "stroke" or NAND (NOT-AND, NEITHER ... NOR...). Russell and Whitehead discuss this in their "Introduction to the Second Edition" and make the definitions as discussed above.
1968, Logic Design with Integrated Circuits, John Wiley & Sons, Inc., New York. No ISBN. Library of Congress Catalog Card Number: 68-21185. Tight presentation of engineering's analysis and synthesis methods, references McCluskey 1965. Unlike Suppes, Wickes' presentation of "Boolean algebra" starts with a set of postulates of a truth-table nature and then derives the customary theorems of them (p. 18ff).
External links
Propositional calculus
Boolean algebra
Statements
Syntax (logic)
Propositions
Logical expressions | Propositional formula | [
"Mathematics"
] | 15,189 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic",
"Logical expressions"
] |
1,557,789 | https://en.wikipedia.org/wiki/Longeron | In engineering, a longeron or stringer is a load-bearing component of a framework.
The term is commonly used in connection with aircraft fuselages and automobile chassis. Longerons are used in conjunction with stringers to form structural frameworks.
Aircraft
In an aircraft fuselage, stringers are attached to formers (also called frames) and run in the longitudinal direction of the aircraft. They are primarily responsible for transferring the aerodynamic loads acting on the skin onto the frames and formers. In the wings or horizontal stabilizer, longerons run spanwise (from wing root to wing tip) and attach between the ribs. The primary function here also is to transfer the bending loads acting on the wings onto the ribs and spar.
The terms "longeron" and "stringer" are sometimes used interchangeably. Historically, though, there is a subtle difference between the two terms. If the longitudinal members in a fuselage are few in number (usually 4 to 8) and run all along the fuselage length, then they are called "longerons". The longeron system also requires that the fuselage frames be closely spaced (about every ). If the longitudinal members are numerous (usually 50 to 100) and are placed just between two formers/frames, then they are called "stringers". In the stringer system the longitudinal members are smaller and the frames are spaced further apart (about ). Generally, longerons are of larger cross-section when compared to stringers. On large modern aircraft the stringer system is more common because it is more weight-efficient, despite being more complex to construct and analyze. Some aircraft use a combination of both stringers and longerons.
Longerons often carry larger loads than stringers and also help to transfer skin loads to internal structure. Longerons nearly always attach to frames or ribs. Stringers are usually not attached to anything but the skin, where they carry a portion of the fuselage bending moment through axial loading. It is not uncommon to have a mixture of longerons and stringers in the same major structural component.
Space launch vehicles
Stringers are also used in the construction of some launch vehicle propellant tanks. For example, the Falcon 9 launch vehicle uses stringers in the kerosene (RP-1) tanks, but not in the liquid oxygen tanks, on both the first and second stages.
References
Aircraft components
Mechanical engineering | Longeron | [
"Physics",
"Engineering"
] | 482 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
1,558,208 | https://en.wikipedia.org/wiki/Agricultural%20wastewater%20treatment | Agricultural wastewater treatment is a farm management agenda for controlling pollution from confined animal operations and from surface runoff that may be contaminated by chemicals in fertilizer, pesticides, animal slurry, crop residues or irrigation water. Agricultural wastewater treatment is required for continuous confined animal operations like milk and egg production. It may be performed in plants using mechanized treatment units similar to those used for industrial wastewater. Where land is available for ponds, settling basins and facultative lagoons may have lower operational costs for seasonal use conditions from breeding or harvest cycles. Animal slurries are usually treated by containment in anaerobic lagoons before disposal by spray or trickle application to grassland. Constructed wetlands are sometimes used to facilitate treatment of animal wastes.
Nonpoint source pollution includes sediment runoff, nutrient runoff and pesticides. Point source pollution includes animal wastes, silage liquor, milking parlour (dairy farming) wastes, slaughtering waste, vegetable washing water and firewater. Many farms generate nonpoint source pollution from surface runoff which is not controlled through a treatment plant.
Farmers can install erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include contour plowing, crop mulching, crop rotation, planting perennial crops and installing riparian buffers. Farmers can also develop and implement nutrient management plans to reduce excess application of nutrients and reduce the potential for nutrient pollution. To minimize pesticide impacts, farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality.
Nonpoint source pollution
Nonpoint source pollution from farms is caused by surface runoff from fields during rain storms. Agricultural runoff is a major source of pollution, in some cases the only source, in many watersheds.
Sediment runoff
Soil washed off fields is the largest source of agricultural pollution in the United States. Excess sediment causes high levels of turbidity in water bodies, which can inhibit growth of aquatic plants, clog fish gills and smother animal larvae.
Farmers may utilize erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include:
contour ploughing
crop mulching
crop rotation
planting perennial crops
installing riparian buffers.
Nutrient runoff
Nitrogen and phosphorus are key pollutants found in runoff, and they are applied to farmland in several ways, such as in the form of commercial fertilizer, animal manure, or municipal or industrial wastewater (effluent) or sludge. These chemicals may also enter runoff from crop residues, irrigation water, wildlife, and atmospheric deposition.
Farmers can develop and implement nutrient management plans to mitigate impacts on water quality by:
mapping and documenting fields, crop types, soil types, water bodies
developing realistic crop yield projections
conducting soil tests and nutrient analyses of manures and/or sludges applied
identifying other significant nutrient sources (e.g., irrigation water)
evaluating significant field features such as highly erodible soils, subsurface drains, and shallow aquifers
applying fertilizers, manures, and/or sludges based on realistic yield goals and using precision agriculture techniques.
Pesticides
Pesticides are widely used by farmers to control plant pests and enhance production, but chemical pesticides can also cause water quality problems. Pesticides may appear in surface water due to:
direct application (e.g. aerial spraying or broadcasting over water bodies)
runoff during rain storms
aerial drift (from adjacent fields).
Some pesticides have also been detected in groundwater.
Farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality.
There are few safe ways of disposing of pesticide surpluses other than through containment in well managed landfills or by incineration. In some parts of the world, spraying on land is a permitted method of disposal.
Point source pollution and treatment steps
Farms with large livestock and poultry operations, such as factory farms, can be a major source of point source wastewater. In the United States, these facilities are called concentrated animal feeding operations or confined animal feeding operations and are being subject to increasing government regulation.
Antibiotic-resistant bacteria have been found to infiltrate the water cycle from farms. Raising animals accounts for 73% of antibiotics use globally, and wastewater treatment facilities can transfer antibiotic-resistant bacteria to humans.
Animal wastes
The constituents of animal wastewater typically include:
Strong organic content — much stronger than human sewage
High solids concentration
High nitrate and phosphorus content
Antibiotics
Synthetic hormones
Often high concentrations of parasites and their eggs
Spores of Cryptosporidium (a protozoan) resistant to drinking water treatment processes
Spores of Giardia
Human pathogenic bacteria such as Brucella and Salmonella
Animal wastes from cattle can be produced as solid or semisolid manure or as a liquid slurry. The production of slurry is especially common in housed dairy cattle.
Treatment
Whilst solid manure heaps outdoors can give rise to polluting wastewaters from runoff, this type of waste is usually relatively easy to treat by containment and/or covering of the heap.
Animal slurries require special handling and are usually treated by containment in lagoons before disposal by spray or trickle application to grassland. Constructed wetlands are sometimes used to facilitate treatment of animal wastes, as are anaerobic lagoons. Excessive application or application to sodden land or insufficient land area can result in direct runoff to watercourses, with the potential for causing severe pollution. Application of slurries to land overlying aquifers can result in direct contamination or, more commonly, elevation of nitrogen levels as nitrite or nitrate.
The disposal of any wastewater containing animal waste upstream of a drinking water intake can pose serious health problems to those drinking the water because of the highly resistant spores present in many animals that are capable of causing disabling disease in humans. This risk exists even for very low-level seepage via shallow surface drains or from rainfall run-off.
Some animal slurries are treated by mixing with straw and composting at high temperature to produce a bacteriologically sterile and friable manure for soil improvement.
Piggery waste
Piggery waste is comparable to other animal wastes and is processed as for general animal waste, except that many piggery wastes contain elevated levels of copper that can be toxic in the natural environment. The liquid fraction of the waste is frequently separated off and re-used in the piggery to avoid the prohibitively expensive costs of disposing of copper-rich liquid. Ascarid worms and their eggs are also common in piggery waste and can infect humans if wastewater treatment is ineffective.
Silage liquor
Fresh or wilted grass or other green crops can be made into a semi-fermented product called silage which can be stored and used as winter forage for cattle and sheep. The production of silage often involves the use of an acid conditioner such as sulfuric acid or formic acid. The process of silage making frequently produces a yellow-brown strongly smelling liquid which is very rich in simple sugars, alcohol, short-chain organic acids and silage conditioner. This liquor is one of the most polluting organic substances known. The volume of silage liquor produced is generally in proportion to the moisture content of the ensiled material.
Treatment
Silage liquor is best treated through prevention by wilting crops well before silage making. Any silage liquor that is produced can be used as part of the food for pigs. The most effective treatment is by containment in a slurry lagoon and by subsequent spreading on land following substantial dilution with slurry. Containment of silage liquor on its own can cause structural problems in concrete pits because of the acidic nature of silage liquor.
Milking parlour (dairy farming) wastes
Although milk is an important food product, its presence in wastewaters is highly polluting because of its organic strength, which can lead to very rapid de-oxygenation of receiving waters. Milking parlour wastes also contain large volumes of wash-down water, some animal waste together with cleaning and disinfection chemicals.
Treatment
Milking parlour wastes are often treated in admixture with human sewage in a local sewage treatment plant. This ensures that disinfectants and cleaning agents are sufficiently diluted and amenable to treatment. Running milking wastewaters into a farm slurry lagoon is a possible option although this tends to consume lagoon capacity very quickly. Land spreading is also a treatment option.
Slaughtering waste
Wastewater from slaughtering activities is similar to milking parlour waste (see above) although considerably stronger in its organic composition and therefore potentially much more polluting.
Treatment
As for milking parlour waste (see above).
Vegetable washing water
Washing of vegetables produces large volumes of water contaminated by soil and vegetable pieces. Low levels of pesticides used to treat the vegetables may also be present together with moderate levels of disinfectants such as chlorine.
Treatment
Most vegetable washing waters are extensively recycled with the solids removed by settlement and filtration. The recovered soil can be returned to the land.
Firewater
Although few farms plan for fires, fires are nevertheless more common on farms than on many other industrial premises. Stores of pesticides, herbicides, fuel oil for farm machinery and fertilizers can all help promote fire and can all be present in environmentally lethal quantities in firewater from fire fighting at farms.
Treatment
All farm environmental management plans should allow for containment of substantial quantities of firewater and for its subsequent recovery and disposal by specialist disposal companies. The concentration and mixture of contaminants in firewater make them unsuited to any treatment method available on the farm. Even land spreading has produced severe taste and odour problems for downstream water supply companies in the past.
See also
Agricultural waste
Agricultural surface runoff
Dark fermentation
Sustainable agriculture
References
External links
Electronic Field Office Technical Guide - U.S. NRCS - Detailed soil conservation guides tailored to individual states/counties.
Waste treatment technology
Water pollution
Agriculture and the environment | Agricultural wastewater treatment | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,075 | [
"Environmental engineering",
"Waste treatment technology",
"Water treatment",
"Water pollution"
] |
1,558,218 | https://en.wikipedia.org/wiki/Industrial%20wastewater%20treatment | Industrial wastewater treatment describes the processes used for treating wastewater that is produced by industries as an undesirable by-product. After treatment, the treated industrial wastewater (or effluent) may be reused or released to a sanitary sewer or to a surface water in the environment. Some industrial facilities generate wastewater that can be treated in sewage treatment plants. Most industrial processes, such as petroleum refineries, chemical and petrochemical plants have their own specialized facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. This applies to industries that generate wastewater with high concentrations of organic matter (e.g. oil and grease), toxic pollutants (e.g. heavy metals, volatile organic compounds) or nutrients such as ammonia. Some industries install a pre-treatment system to remove some pollutants (e.g., toxic compounds), and then discharge the partially treated wastewater to the municipal sewer system.
Most industries produce some wastewater. Recent trends have been to minimize such production or to recycle treated wastewater within the production process. Some industries have been successful at redesigning their manufacturing processes to reduce or eliminate pollutants. Sources of industrial wastewater include battery manufacturing, chemical manufacturing, electric power plants, food industry, iron and steel industry, metal working, mines and quarries, nuclear industry, oil and gas extraction, petroleum refining and petrochemicals, pharmaceutical manufacturing, pulp and paper industry, smelters, textile mills, industrial oil contamination, water treatment and wood preserving. Treatment processes include brine treatment, solids removal (e.g. chemical precipitation, filtration), oils and grease removal, removal of biodegradable organics, removal of other organics, removal of acids and alkalis, and removal of toxic materials.
Types
Industrial facilities may generate the following industrial wastewater flows:
Manufacturing process wastestreams, which can include conventional pollutants (i.e. controllable with secondary treatment systems), toxic pollutants (e.g. solvents, heavy metals), and other harmful compounds such as nutrients
Non-process wastestreams: boiler blowdown and cooling water, which produce thermal pollution and other pollutants
Industrial site drainage, generated both by manufacturing facilities, service industries and energy and mining sites
Wastestreams from the energy and mining sectors: acid mine drainage, produced water from oil and gas extraction, radionuclides
Wastestreams that are by-products of treatment or cooling processes: backwashing (water treatment), brine.
Contaminants
Industrial sectors
The specific pollutants generated and the resultant effluent concentrations can vary widely among the industrial sectors.
Battery manufacturing
Battery manufacturers specialize in fabricating small devices for electronics and portable equipment (e.g., power tools), or larger, high-powered units for cars, trucks and other motorized vehicles. Pollutants generated at manufacturing plants includes cadmium, chromium, cobalt, copper, cyanide, iron, lead, manganese, mercury, nickel, silver, zinc, oil and grease.
Centralized waste treatment
A centralized waste treatment (CWT) facility processes liquid or solid industrial wastes generated by off-site manufacturing facilities. A manufacturer may send its wastes to a CWT plant, rather than perform treatment on site, due to constraints such as limited land availability, difficulty in designing and operating an on-site system, or limitations imposed by environmental regulations and permits. A manufacturer may determine that using a CWT is more cost-effective than treating the waste itself; this is often the case where the manufacturer is a small business.
CWT plants often receive wastes from a wide variety of manufacturers, including chemical plants, metal fabrication and finishing; and used oil and petroleum products from various manufacturing sectors. The wastes may be classified as hazardous, have high pollutant concentrations or otherwise be difficult to treat. In 2000 the U.S. Environmental Protection Agency published wastewater regulations for CWT facilities in the US.
Chemical manufacturing
Organic chemicals manufacturing
The specific pollutants discharged by organic chemical manufacturers vary widely from plant to plant, depending on the types of products manufactured, such as bulk organic chemicals, resins, pesticides, plastics, or synthetic fibers. Some of the organic compounds that may be discharged are benzene, chloroform, naphthalene, phenols, toluene and vinyl chloride. Biochemical oxygen demand (BOD), which is a gross measurement of a range of organic pollutants, may be used to gauge the effectiveness of a biological wastewater treatment system, and is used as a regulatory parameter in some discharge permits. Metal pollutant discharges may include chromium, copper, lead, nickel and zinc.
Inorganic chemicals manufacturing
The inorganic chemicals sector covers a wide variety of products and processes, although an individual plant may produce a narrow range of products and pollutants. Products include aluminum compounds; calcium carbide and calcium chloride; hydrofluoric acid; potassium compounds; borax; chrome and fluorine-based compounds; cadmium and zinc-based compounds. The pollutants discharged vary by product sector and individual plant, and may include arsenic, chlorine, cyanide, fluoride; and heavy metals such as chromium, copper, iron, lead, mercury, nickel and zinc.
Electric power plants
Fossil-fuel power stations, particularly coal-fired plants, are a major source of industrial wastewater. Many of these plants discharge wastewater with significant levels of metals such as lead, mercury, cadmium and chromium, as well as arsenic, selenium and nitrogen compounds (nitrates and nitrites). Wastewater streams include flue-gas desulfurization, fly ash, bottom ash and flue gas mercury control. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream.
Ash ponds, a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation, biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. Technological advancements in ion-exchange membranes and electrodialysis systems have enabled high-efficiency treatment of flue-gas desulfurization wastewater to meet recent EPA discharge limits. The treatment approach is similar for other highly scaling industrial wastewaters.
Food industry
Wastewater generated from agricultural and food processing operations has distinctive characteristics that set it apart from common municipal wastewater managed by public or private sewage treatment plants throughout the world: it is biodegradable and non-toxic, but has high Biological Oxygen Demand (BOD) and suspended solids (SS). The constituents of food and agriculture wastewater are often complex to predict, due to the differences in BOD and pH in effluents from vegetable, fruit, and meat products and due to the seasonal nature of food processing and post-harvesting.
Processing of food from raw materials requires large volumes of high grade water. Vegetable washing generates water with high loads of particulate matter and some dissolved organic matter. It may also contain surfactants and pesticides.
Aquaculture facilities (fish farms) often discharge large amounts of nitrogen and phosphorus, as well as suspended solids. Some facilities use drugs and pesticides, which may be present in the wastewater.
Dairy processing plants generate conventional pollutants (BOD, SS).
Animal slaughter and processing produces organic waste from body fluids, such as blood, and gut contents. Pollutants generated include BOD, SS, coliform bacteria, oil and grease, organic nitrogen and ammonia.
Processing food for sale produces wastes generated from cooking which are often rich in plant organic material and may also contain salt, flavourings, colouring material and acids or alkali. Large quantities of fats, oil and grease ("FOG") may also be present, which in sufficient concentrations can clog sewer lines. Some municipalities require restaurants and food processing businesses to use grease interceptors and regulate the disposal of FOG in the sewer system.
Food processing activities such as plant cleaning, material conveying, bottling, and product washing create wastewater. Many food processing facilities require on-site treatment before operational wastewater can be land applied or discharged to a waterway or a sewer system. High suspended solids levels of organic particles increase BOD and can result in significant sewer surcharge fees. Sedimentation, wedge wire screening, or rotating belt filtration (microscreening) are commonly used methods to reduce suspended organic solids loading prior to discharge.
Glass manufacturing
Glass manufacturing wastes vary with the type of glass manufactured, which includes fiberglass, plate glass, rolled glass, and glass containers, among others. The wastewater discharged by glass plants may include ammonia, BOD, chemical oxygen demand (COD), fluoride, lead, oil, phenol, and/or phosphorus. The discharges may also be highly acidic (low pH) or alkaline (high pH).
Iron and steel industry
The production of iron from its ores involves powerful reduction reactions in blast furnaces. Cooling waters are inevitably contaminated with products especially ammonia and cyanide. Production of coke from coal in coking plants also requires water cooling and the use of water in by-products separation. Contamination of waste streams includes gasification products such as benzene, naphthalene, anthracene, cyanide, ammonia, phenols, cresols together with a range of more complex organic compounds known collectively as polycyclic aromatic hydrocarbons (PAH).
The conversion of iron or steel into sheet, wire or rods requires hot and cold mechanical transformation stages frequently employing water as a lubricant and coolant. Contaminants include hydraulic oils, tallow and particulate solids. Final treatment of iron and steel products before onward sale into manufacturing includes pickling in strong mineral acid to remove rust and prepare the surface for tin or chromium plating or for other surface treatments such as galvanisation or painting. The two acids commonly used are hydrochloric acid and sulfuric acid. Wastewaters include acidic rinse waters together with waste acid. Although many plants operate acid recovery plants (particularly those using hydrochloric acid), where the mineral acid is boiled away from the iron salts, there remains a large volume of highly acidic ferrous sulfate or ferrous chloride to be disposed of. Many steel industry wastewaters are contaminated by hydraulic oil, also known as soluble oil.
Metal working
Many industries perform work on metal feedstocks (e.g. sheet metal, ingots) as they fabricate their final products. The industries include automobile, truck and aircraft manufacturing; tools and hardware manufacturing; electronic equipment and office machines; ships and boats; appliances and other household products; and stationary industrial equipment (e.g. compressors, pumps, boilers). Typical processes conducted at these plants include grinding, machining, coating and painting, chemical etching and milling, solvent degreasing, electroplating and anodizing. Wastewater generated from these industries may contain heavy metals (common heavy metal pollutants from these industries include cadmium, chromium, copper, lead, nickel, silver and zinc), cyanide and various chemical solvents, oil, and grease.
Mines and quarries
The principal waste-waters associated with mines and quarries are slurries of rock particles in water. These arise from rainfall washing exposed surfaces and haul roads and also from rock washing and grading processes. Volumes of water can be very high, especially rainfall related arisings on large sites. Some specialized separation operations, such as coal washing to separate coal from native rock using density gradients, can produce wastewater contaminated by fine particulate haematite and surfactants. Oils and hydraulic oils are also common contaminants.
Wastewater from metal mines and ore recovery plants is inevitably contaminated by the minerals present in the native rock formations. Following crushing and extraction of the desirable materials, undesirable materials may enter the wastewater stream. For metal mines, this can include unwanted metals such as zinc and other materials such as arsenic. Extraction of high value metals such as gold and silver may generate slimes containing very fine particles, where physical removal of contaminants becomes particularly difficult.
Additionally, the geologic formations that harbour economically valuable metals such as copper and gold very often consist of sulphide-type ores. The processing entails grinding the rock into fine particles and then extracting the desired metal(s), with the leftover rock being known as tailings. These tailings contain a combination of not only undesirable leftover metals, but also sulphide components which eventually form sulphuric acid upon the exposure to air and water that inevitably occurs when the tailings are disposed of in large impoundments. The resulting acid mine drainage, which is often rich in heavy metals (because acids dissolve metals), is one of the many environmental impacts of mining.
Nuclear industry
The waste production from the nuclear and radio-chemicals industry is dealt with as Radioactive waste.
Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium of S. spinosus, suggesting that it may be appropriate for the treatment of nuclear wastewater.
Oil and gas extraction
Oil and gas well operations generate produced water, which may contain oils, toxic metals (e.g. arsenic, cadmium, chromium, mercury, lead), salts, organic chemicals and solids. Some produced water contains traces of naturally occurring radioactive material. Offshore oil and gas platforms also generate deck drainage, domestic waste and sanitary waste. During the drilling process, well sites typically discharge drill cuttings and drilling mud (drilling fluid).
Petroleum refining and petrochemicals
Pollutants discharged at petroleum refineries and petrochemical plants include conventional pollutants (BOD, oil and grease, suspended solids), ammonia, chromium, phenols and sulfides.
Pharmaceutical manufacturing
Pharmaceutical plants typically generate a variety of process wastewaters, including solvents, spent acid and caustic solutions, water from chemical reactions, product wash water, condensed steam, blowdown from air pollution control scrubbers, and equipment washwater. Non-process wastewaters typically include cooling water and site runoff. Pollutants generated by the industry include acetone, ammonia, benzene, BOD, chloroform, cyanide, ethanol, ethyl acetate, isopropanol, methylene chloride, methanol, phenol and toluene. Treatment technologies used include advanced biological treatment (e.g. activated sludge with nitrification), multimedia filtration, cyanide destruction (e.g. hydrolysis), steam stripping and wastewater recycling.
Pulp and paper industry
Effluent from the pulp and paper industry is generally high in suspended solids and BOD. Plants that bleach wood pulp for paper making may generate chloroform, dioxins (including 2,3,7,8-TCDD), furans, phenols and chemical oxygen demand (COD). Stand-alone paper mills using imported pulp may only require simple primary treatment, such as sedimentation or dissolved air flotation. Increased BOD or COD loadings, as well as organic pollutants, may require biological treatment such as activated sludge or upflow anaerobic sludge blanket reactors. For mills with high inorganic loadings like salt, tertiary treatments may be required, either general membrane treatments like ultrafiltration or reverse osmosis or treatments to remove specific contaminants, such as nutrients.
Smelters
The pollutants discharged by nonferrous smelters vary with the base metal ore. Bauxite smelters generate phenols but typically use settling basins and evaporation to manage these wastes, with no need to routinely discharge wastewater. Aluminum smelters typically discharge fluoride, benzo(a)pyrene, antimony and nickel, as well as aluminum. Copper smelters typically generate cadmium, lead, zinc, arsenic and nickel, in addition to copper, in their wastewater. Lead smelters discharge lead and zinc. Nickel and cobalt smelters discharge ammonia and copper in addition to the base metals. Zinc smelters discharge arsenic, cadmium, copper, lead, selenium and zinc.
Typical treatment processes used in the industry are chemical precipitation, sedimentation and filtration.
Textile mills
Textile mills, including carpet manufacturers, generate wastewater from a wide variety of processes, including cleaning and finishing, yarn manufacturing and fabric finishing (such as bleaching, dyeing, resin treatment, waterproofing and retardant flameproofing). Pollutants generated by textile mills include BOD, SS, oil and grease, sulfide, phenols and chromium. Insecticide residues in fleeces are a particular problem in treating waters generated in wool processing. Animal fats may be present in the wastewater, which if not contaminated, can be recovered for the production of tallow or further rendering.
Textile dyeing plants generate wastewater that contain synthetic (e.g., reactive dyes, acid dyes, basic dyes, disperse dyes, vat dyes, sulphur dyes, mordant dyes, direct dyes, ingrain dyes, solvent dyes, pigment dyes) and natural dyestuff, gum thickener (guar) and various wetting agents, pH buffers and dye retardants or accelerators. Following treatment with polymer-based flocculants and settling agents, typical monitoring parameters include BOD, COD, color (ADMI), sulfide, oil and grease, phenol, TSS and heavy metals (chromium, zinc, lead, copper).
Industrial oil contamination
Industrial applications where oil enters the wastewater stream may include vehicle wash bays, workshops, fuel storage depots, transport hubs and power generation. Often the wastewater is discharged into local sewer or trade waste systems and must meet local environmental specifications. Typical contaminants can include solvents, detergents, grit, lubricants and hydrocarbons.
Water treatment
Many industries have a need to treat water to obtain very high quality water for their processes. This might include pure chemical synthesis or boiler feed water. Also, some water treatment processes produce organic and mineral sludges from filtration and sedimentation which require treatment. Ion exchange using natural or synthetic resins removes calcium, magnesium and carbonate ions from water, typically replacing them with sodium, chloride, hydroxyl and/or other ions. Regeneration of ion-exchange columns with strong acids and alkalis produces a wastewater rich in hardness ions which are readily precipitated out, especially when in admixture with other wastewater constituents.
Wood preserving
Wood preserving plants generate conventional and toxic pollutants, including arsenic, COD, copper, chromium, abnormally high or low pH, phenols, suspended solids, oil and grease.
Treatment methods
The various types of contamination of wastewater require a variety of strategies to remove the contamination. Most industrial processes, such as petroleum refineries, chemical and petrochemical plants, have onsite facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. Constructed wetlands are being used in an increasing number of cases as they provide high-quality and productive on-site treatment. Other industrial processes that produce large volumes of wastewater, such as paper and pulp production, have created environmental concern, leading to the development of processes to recycle water within plants before it has to be cleaned and disposed of.
An industrial wastewater treatment plant may include one or more of the following rather than the conventional treatment sequence of sewage treatment plants:
An API oil-water separator, for removing separate phase oil from wastewater.
A clarifier, for removing solids from wastewater.
A roughing filter, to reduce the biochemical oxygen demand of wastewater.
A carbon filtration plant, to remove toxic dissolved organic compounds from wastewater.
An advanced electrodialysis reversal (EDR) system with ion-exchange membranes.
Brine treatment
Brine treatment involves removing dissolved salt ions from the waste stream. Although similarities to seawater or brackish water desalination exist, industrial brine treatment may contain unique combinations of dissolved ions, such as hardness ions or other metals, necessitating specific processes and equipment.
Brine treatment systems are typically optimized to either reduce the volume of the final discharge for more economic disposal (as disposal costs are often based on volume) or maximize the recovery of fresh water or salts. Brine treatment systems may also be optimized to reduce electricity consumption, chemical usage, or physical footprint.
Brine treatment is commonly encountered when treating cooling tower blowdown, produced water from steam-assisted gravity drainage (SAGD), produced water from natural gas extraction such as coal seam gas, frac flowback water, acid mine or acid rock drainage, reverse osmosis reject, chlor-alkali wastewater, pulp and paper mill effluent, and waste streams from food and beverage processing.
Brine treatment technologies may include: membrane filtration processes, such as reverse osmosis; ion-exchange processes such as electrodialysis or weak acid cation exchange; or evaporation processes, such as brine concentrators and crystallizers employing mechanical vapour recompression and steam. Owing to ever more stringent discharge standards, advanced oxidation processes have also emerged for the treatment of brine. Notable examples such as Fenton's oxidation and ozonation have been employed for the degradation of recalcitrant compounds in brine from industrial plants.
Reverse osmosis may not be viable for brine treatment, due to the potential for fouling caused by hardness salts or organic contaminants, or damage to the reverse osmosis membranes from hydrocarbons.
Evaporation processes are the most widespread for brine treatment as they enable the highest degree of concentration, as high as solid salt. They also produce the highest purity effluent, even distillate-quality. Evaporation processes are also more tolerant of organics, hydrocarbons, or hardness salts. However, energy consumption is high and corrosion may be an issue as the prime mover is concentrated salt water. As a result, evaporation systems typically employ titanium or duplex stainless steel materials.
Brine management
Brine management examines the broader context of brine treatment and may include consideration of government policy and regulations, corporate sustainability, environmental impact, recycling, handling and transport, containment, centralized compared to on-site treatment, avoidance and reduction, technologies, and economics. Brine management shares some issues with leachate management and more general waste management. In recent years, brine management has received greater attention because of the global push for zero liquid discharge (ZLD) and minimal liquid discharge (MLD). In ZLD/MLD techniques, a closed water cycle is used to minimize water discharges from a system and to enable water reuse. This concept has been gaining traction due to increasing water discharges and recent advances in membrane technology. There have also been greater efforts to recover materials from brines, especially from mining, geothermal wastewater or desalination brines. The literature demonstrates the viability of extracting valuable materials such as sodium bicarbonate, sodium chloride and valuable metals (such as rubidium, cesium and lithium). The concept of ZLD/MLD thus encompasses the downstream management of wastewater brines, both to reduce discharges and to derive valuable products from them.
Solids removal
Most solids can be removed using simple sedimentation techniques with the solids recovered as slurry or sludge. Very fine solids and solids with densities close to the density of water pose special problems. In such cases filtration or ultrafiltration may be required. Alternatively, flocculation may be used, employing alum salts or the addition of polyelectrolytes. Wastewater from industrial food processing often requires on-site treatment before it can be discharged to prevent or reduce sewer surcharge fees. The type of industry and specific operational practices determine what types of wastewater are generated and what type of treatment is required. Reducing solids such as waste product, organic materials, and sand is often a goal of industrial wastewater treatment. Some common ways to reduce solids include primary sedimentation (clarification), dissolved air flotation (DAF), belt filtration (microscreening), and drum screening.
Oils and grease removal
The effective removal of oils and grease is dependent on the characteristics of the oil in terms of its suspension state and droplet size, which will in turn affect the choice of separator technology. Oil in industrial wastewater may be free light oil; heavy oil, which tends to sink; or emulsified oil, often referred to as soluble oil. Emulsified or soluble oils will typically require "cracking" to free the oil from its emulsion. In most cases this is achieved by lowering the pH of the water matrix.
Most separator technologies will have an optimum range of oil droplet sizes that can be effectively treated. Each separator technology will have its own performance curve outlining optimum performance based on oil droplet size. The most common separators are gravity tanks or pits, API oil-water separators or plate packs, chemical treatment via dissolved air flotation, centrifuges, media filters and hydrocyclones.
Analyzing the oily water to determine droplet size can be performed with a video particle analyser.
API oil-water separators
Hydrocyclone
Hydrocyclone separators operate by spinning the wastewater that enters the cyclone chamber under extreme centrifugal forces, more than 1000 times the force of gravity. This force causes the water and the oil droplets (or solid particles) to separate. The separated material is discharged from one end of the cyclone, while treated water leaves through the opposite end for further treatment, filtration or discharge. Hydrocyclones can be utilised in a variety of contexts, from solid–liquid separation to oil–water separation.
Removal of biodegradable organics
Biodegradable organic material of plant or animal origin is usually possible to treat using extended conventional sewage treatment processes such as activated sludge or trickling filter. Problems can arise if the wastewater is excessively diluted with washing water or is highly concentrated such as undiluted blood or milk. The presence of cleaning agents, disinfectants, pesticides, or antibiotics can have detrimental impacts on treatment processes.
Activated sludge process
Trickling filter process
A trickling filter consists of a bed of rocks, gravel, slag, peat moss, or plastic media over which wastewater flows downward and contacts a layer (or film) of microbial slime covering the bed media. Aerobic conditions are maintained by forced air flowing through the bed or by natural convection of air. The process involves adsorption of organic compounds in the wastewater by the microbial slime layer, diffusion of air into the slime layer to provide the oxygen required for the biochemical oxidation of the organic compounds. The end products include carbon dioxide gas, water and other products of the oxidation. As the slime layer thickens, it becomes difficult for the air to penetrate the layer and an inner anaerobic layer is formed.
Removal of other organics
Synthetic organic materials including solvents, paints, pharmaceuticals, pesticides, products from coke production and so forth can be very difficult to treat. Treatment methods are often specific to the material being treated. Methods include advanced oxidation processing, distillation, adsorption, ozonation, vitrification, incineration, chemical immobilisation or landfill disposal. Some materials such as some detergents may be capable of biological degradation and in such cases, a modified form of wastewater treatment can be used.
Removal of acids and alkalis
Acids and alkalis can usually be neutralised under controlled conditions. Neutralisation frequently produces a precipitate that will require treatment as a solid residue that may also be toxic. In some cases, gases may be evolved requiring treatment for the gas stream. Some other forms of treatment are usually required following neutralisation.
Waste streams rich in hardness ions as from de-ionisation processes can readily lose the hardness ions in a buildup of precipitated calcium and magnesium salts. This precipitation process can cause severe furring of pipes and can, in extreme cases, cause the blockage of disposal pipes. A 1-metre diameter industrial marine discharge pipe serving a major chemicals complex was blocked by such salts in the 1970s. Treatment is by concentration of de-ionisation waste waters and disposal to landfill or by careful pH management of the released wastewater.
Removal of toxic materials
Toxic materials including many organic materials, metals (such as zinc, silver, cadmium, thallium, etc.) acids, alkalis, non-metallic elements (such as arsenic or selenium) are generally resistant to biological processes unless very dilute. Metals can often be precipitated out by changing the pH or by treatment with other chemicals. Many, however, are resistant to treatment or mitigation and may require concentration followed by landfilling or recycling. Dissolved organics can be incinerated within the wastewater by the advanced oxidation process.
Smart capsules
Molecular encapsulation is a technology that has the potential to provide a system for the recyclable removal of lead and other ions from polluted sources. Nano-, micro- and milli-capsules, with sizes in the ranges 10 nm–1 μm, 1 μm–1 mm and >1 mm, respectively, are particles that have an active reagent (core) surrounded by a carrier (shell). There are three types of capsule under investigation: alginate-based capsules, carbon nanotubes, and polymer swelling capsules. These capsules provide a possible means for the remediation of contaminated water.
Removal of thermal pollution
To remove heat from wastewater generated by power plants or manufacturing plants, and thus to reduce thermal pollution, the following technologies are used:
cooling ponds, engineered bodies of water designed for cooling by evaporation, convection, and radiation
cooling towers, which transfer waste heat to the atmosphere through evaporation or heat transfer
cogeneration, a process where waste heat is recycled for domestic or industrial heating purposes.
Other disposal methods
Some facilities such as oil and gas wells may be permitted to pump their wastewater underground through injection wells. However, wastewater injection has been linked to induced seismicity.
Costs and trade waste charges
Economies of scale may favor a situation where industrial wastewater (with or without pre-treatment) is discharged to the sewer and then treated at a large municipal sewage treatment plant; trade waste charges are typically applied in that case. Alternatively, it may be more economical to treat the industrial wastewater fully on the site where it is generated and then discharge the treated effluent to a suitable surface water body. Pre-treating wastewater to reduce the concentrations of the pollutants on which user fees are based also reduces the trade waste charges collected by municipal sewage treatment plants.
Industrial wastewater plants may also reduce raw water costs by converting selected wastewaters to reclaimed water used for different purposes.
Society and culture
Global goals
The international community has defined the treatment of industrial wastewater as an important part of sustainable development by including it in Sustainable Development Goal 6. Target 6.3 of this goal is to "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally". One of the indicators for this target is the "proportion of domestic and industrial wastewater flows safely treated".
See also
Best management practice for water pollution (BMP)
List of waste water treatment technologies
Purified water (for industrial use)
Water purification (for drinking water)
References
Further reading
External links
Water Environment Federation - Professional society
Industrial Wastewater Treatment Technology Database - EPA
Waste treatment technology
Sewerage
Industrial processes
Water pollution
Sanitation | Industrial wastewater treatment | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,740 | [
"Water treatment",
"Water pollution",
"Sewerage",
"Environmental engineering",
"Waste treatment technology"
] |
1,558,612 | https://en.wikipedia.org/wiki/Nickel%28II%29%20chloride | Nickel(II) chloride (or just nickel chloride) is the chemical compound NiCl2. The anhydrous salt is yellow, but the more familiar hydrate NiCl2·6H2O is green. Nickel(II) chloride, in various forms, is the most important source of nickel for chemical synthesis. The nickel chlorides are deliquescent, absorbing moisture from the air to form a solution. Nickel salts have been shown to be carcinogenic to the lungs and nasal passages in cases of long-term inhalation exposure.
Production and syntheses
Large scale production and uses of nickel chloride are associated with the purification of nickel from its ores. It is generated by extracting, with hydrochloric acid, nickel matte and the residues obtained from roasting and refining nickel-containing ores. Electrolysis of nickel chloride solutions is used in the production of nickel metal. Other significant routes to nickel chloride arise from the processing of ore concentrates, including various reactions involving copper chlorides.
Laboratory routes
Nickel chloride is not usually prepared in the laboratory because it is inexpensive and has a long shelf-life. The yellowish dihydrate, NiCl2·2H2O, is produced by heating the hexahydrate between 66 and 133 °C. The hydrates convert to the anhydrous form upon heating in thionyl chloride or by heating under a stream of HCl gas. Simply heating the hydrates does not afford the anhydrous dichloride.
The dehydration is accompanied by a color change from green to yellow.
If a pure compound free of cobalt is needed, nickel chloride can be obtained by cautiously heating hexaamminenickel chloride:
[Ni(NH3)6]Cl2 → NiCl2 + 6 NH3 (175–200 °C)
Structure of NiCl2 and its hydrates
NiCl2 adopts the CdCl2 structure. In this motif, each Ni2+ center is coordinated to six Cl− centers, and each chloride is bonded to three Ni(II) centers. In NiCl2 the Ni-Cl bonds have "ionic character". Yellow NiBr2 and black NiI2 adopt similar structures, but with a different packing of the halides, adopting the CdI2 motif.
In contrast, NiCl2·6H2O consists of separated trans-[NiCl2(H2O)4] molecules linked more weakly to adjacent water molecules. Only four of the six water molecules in the formula are bound to the nickel, and the remaining two are water of crystallization, so the formula of nickel(II) chloride hexahydrate is [NiCl2(H2O)4]·2H2O. Cobalt(II) chloride hexahydrate has a similar structure. The hexahydrate occurs in nature as the very rare mineral nickelbischofite.
The dihydrate NiCl2·2H2O adopts a structure intermediate between the hexahydrate and the anhydrous forms. It consists of infinite chains of NiCl2, wherein both chloride centers are bridging ligands. The trans sites on the octahedral centers are occupied by aquo ligands. A tetrahydrate NiCl2·4H2O is also known.
Reactions
Nickel(II) chloride solutions are acidic, with a pH of around 4 due to the hydrolysis of the Ni2+ ion.
Coordination complexes
Most of the reactions ascribed to "nickel chloride" involve the hexahydrate, although specialized reactions require the anhydrous form.
Reactions starting from NiCl2·6H2O can be used to form a variety of nickel coordination complexes because the H2O ligands are rapidly displaced by ammonia, amines, thioethers, thiolates, and organophosphines. In some derivatives, the chloride remains within the coordination sphere, whereas chloride is displaced with highly basic ligands. Illustrative complexes include:
NiCl2 is the precursor to acetylacetonate complexes Ni(acac)2(H2O)2 and the benzene-soluble (Ni(acac)2)3, which is a precursor to Ni(1,5-cyclooctadiene)2, an important reagent in organonickel chemistry.
In the presence of water scavengers, hydrated nickel(II) chloride reacts with dimethoxyethane (dme) to form the molecular complex NiCl2(dme)2. The dme ligands in this complex are labile.
Applications in organic synthesis
NiCl2 and its hydrate are occasionally useful in organic synthesis.
As a mild Lewis acid, e.g. for the regioselective isomerization of dienols:
In combination with CrCl2 for the coupling of an aldehyde and a vinylic iodide to give allylic alcohols.
For selective reductions in the presence of LiAlH4, e.g. for the conversion of alkenes to alkanes.
As a precursor to Brown's P-1 and P-2 nickel boride catalyst through reaction with NaBH4.
As a precursor to finely divided Ni by reduction with Zn, for the reduction of aldehydes, alkenes, and nitro aromatic compounds. This reagent also promotes homo-coupling reactions, that is 2RX → R-R where R = aryl, vinyl.
As a catalyst for making dialkyl arylphosphonates from phosphites and aryl iodide, ArI:
ArI + P(OEt)3 → ArP(O)(OEt)2 + EtI
NiCl2-dme (or NiCl2-glyme) is used due to its increased solubility in comparison to the hexahydrate.
Safety
Nickel(II) chloride is irritating upon ingestion, inhalation, skin contact, and eye contact. Prolonged inhalation exposure to nickel and its compounds has been linked to increased cancer risk to the lungs and nasal passages.
References
External links
NIOSH Pocket Guide to Chemical Hazards
Nickel compounds
Chlorides
Metal halides
IARC Group 1 carcinogens
Coordination complexes | Nickel(II) chloride | [
"Chemistry"
] | 1,323 | [
"Chlorides",
"Inorganic compounds",
"Coordination complexes",
"Coordination chemistry",
"Salts",
"Metal halides"
] |
1,559,901 | https://en.wikipedia.org/wiki/Mean%20motion | In orbital mechanics, mean motion (represented by n) is the angular speed required for a body to complete one orbit, assuming constant speed in a circular orbit which completes in the same time as the variable speed, elliptical orbit of the actual body. The concept applies equally well to a small body revolving about a large, massive primary body or to two relatively same-sized bodies revolving about a common center of mass. While nominally a mean, and theoretically so in the case of two-body motion, in practice the mean motion is not typically an average over time for the orbits of real bodies, which only approximate the two-body assumption. It is rather the instantaneous value which satisfies the above conditions as calculated from the current gravitational and geometric circumstances of the body's constantly-changing, perturbed orbit.
Mean motion is used as an approximation of the actual orbital speed in making an initial calculation of the body's position in its orbit, for instance, from a set of orbital elements. This mean position is refined by Kepler's equation to produce the true position.
Definition
Define the orbital period (the time period for the body to complete one orbit) as P, with dimension of time. The mean motion is simply one revolution divided by this time, or,
with dimensions of radians per unit time, degrees per unit time or revolutions per unit time.
The value of mean motion depends on the circumstances of the particular gravitating system. In systems with more mass, bodies will orbit faster, in accordance with Newton's law of universal gravitation. Likewise, bodies closer together will also orbit faster.
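As a rough numerical illustration of this definition (not from the original article), the following Python sketch evaluates n = 2π/P; the period used, one Julian year, is an assumed value chosen only for the example.

import math

P = 365.25 * 86400.0              # assumed orbital period (one Julian year), seconds
n = 2.0 * math.pi / P             # mean motion, radians per second
print(n)                          # about 1.99e-7 rad/s
print(math.degrees(n) * 86400.0)  # about 0.986 degrees per day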
Mean motion and Kepler's laws
Kepler's 3rd law of planetary motion states, the square of the periodic time is proportional to the cube of the mean distance, or P² ∝ a³,
where a is the semi-major axis or mean distance, and P is the orbital period as above. The constant of proportionality is given by
where μ is the standard gravitational parameter, a constant for any particular gravitational system.
If the mean motion is given in units of radians per unit of time, we can combine it into the above definition of the Kepler's 3rd law,
and reducing,
which is another definition of Kepler's 3rd law. μ, the constant of proportionality, is a gravitational parameter defined by the masses of the bodies in question and by the Newtonian constant of gravitation, G (see below). Therefore, n is also defined
Expanding mean motion by expanding μ,
where M is typically the mass of the primary body of the system and m is the mass of a smaller body.
This is the complete gravitational definition of mean motion in a two-body system. Often in celestial mechanics, the primary body is much larger than any of the secondary bodies of the system, that is, M ≫ m. It is under these circumstances that m becomes unimportant and the constant of proportionality in Kepler's 3rd law is approximately the same for all of the smaller bodies.
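A hedged numerical sketch of this relation, using a commonly quoted value for Earth's gravitational parameter and an assumed low-Earth-orbit semi-major axis (both figures are assumptions for the example, not taken from the article):

import math

mu = 3.986004418e14          # standard gravitational parameter of Earth, m^3/s^2
a  = 6_778_000.0             # assumed semi-major axis (~400 km altitude), m
n  = math.sqrt(mu / a**3)    # mean motion n = sqrt(mu / a^3), rad/s
P  = 2.0 * math.pi / n       # corresponding orbital period, s
print(n, P / 60.0)           # about 1.13e-3 rad/s and 92.6 minutes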
Kepler's 2nd law of planetary motion states, a line joining a planet and the Sun sweeps out equal areas in equal times, or
for a two-body orbit, where dA/dt is the time rate of change of the area swept.
Letting t = P, the orbital period, the area swept is the entire area of the ellipse, A = πab, where a is the semi-major axis and b is the semi-minor axis of the ellipse. Hence,
Multiplying this equation by 2,
From the above definition, mean motion n = 2π/P. Substituting,
and mean motion is also
which is itself constant, as a, b, and dA/dt are all constant in two-body motion.
Mean motion and the constants of the motion
Because of the nature of two-body motion in a conservative gravitational field, two aspects of the motion do not change: the angular momentum and the mechanical energy.
The first constant, called specific angular momentum, can be defined as
and substituting in the above equation, mean motion is also
The second constant, called specific mechanical energy, can be defined,
Rearranging and multiplying by ,
From above, the square of mean motion n² = μ/a³. Substituting and rearranging, mean motion can also be expressed,
where the −2 shows that ξ must be defined as a negative number, as is customary in celestial mechanics and astrodynamics.
Mean motion and the gravitational constants
Two gravitational constants are commonly used in Solar System celestial mechanics: G, the Newtonian constant of gravitation and k, the Gaussian gravitational constant. From the above definitions, mean motion is
By normalizing parts of this equation and making some assumptions, it can be simplified, revealing the relation between the mean motion and the constants.
Setting the mass of the Sun to unity, M = 1. The masses of the planets are all much smaller, m ≪ 1. Therefore, for any particular planet,
and also taking the semi-major axis as one astronomical unit,
The Gaussian gravitational constant k = √(GM), with M the mass of the Sun; therefore, under the same conditions as above, for any particular planet
and again taking the semi-major axis as one astronomical unit,
Mean motion and mean anomaly
Mean motion also represents the rate of change of mean anomaly, and hence can also be calculated,
where M1 and M0 are the mean anomalies at particular points in time, and Δt (≡ t1-t0) is the time elapsed between the two. M0 is referred to as the mean anomaly at epoch t0, and Δt is the time since epoch.
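A short Python sketch of this propagation; the mean motion, epoch anomaly, and elapsed time below are assumed values used only for illustration.

import math

n  = 1.13e-3                 # assumed mean motion, rad/s
M0 = math.radians(10.0)      # assumed mean anomaly at epoch t0
dt = 600.0                   # time since epoch, s
M1 = (M0 + n * dt) % (2.0 * math.pi)   # M1 = M0 + n*(t1 - t0), wrapped to one revolution
print(math.degrees(M1))      # about 48.8 degrees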
Formulae
For Earth satellite orbital parameters, the mean motion is typically measured in revolutions per day. In that case,
where
d is the quantity of time in a day,
G is the gravitational constant,
M and m are the masses of the orbiting bodies,
a is the length of the semi-major axis.
To convert from radians per unit time to revolutions per day, consider the following:
From above, mean motion in radians per unit time is:
therefore the mean motion in revolutions per day is
where P is the orbital period, as above.
See also
Gaussian gravitational constant
Kepler orbit
Mean anomaly
Mean longitude
Mean motion resonance
Orbital elements
Notes
References
External links
Glossary entry mean motion at the US Naval Observatory's Astronomical Almanac Online
Orbits
Equations of astronomy | Mean motion | [
"Physics",
"Astronomy"
] | 1,277 | [
"Concepts in astronomy",
"Equations of astronomy"
] |
1,559,922 | https://en.wikipedia.org/wiki/Spacecraft%20flight%20dynamics | Spacecraft flight dynamics is the application of mechanical dynamics to model how the external forces acting on a space vehicle or spacecraft determine its flight path. These forces are primarily of three types: propulsive force provided by the vehicle's engines; gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or other body, such as Mars or Venus).
The principles of flight dynamics are used to model a vehicle's powered flight during launch from the Earth; a spacecraft's orbital flight; maneuvers to change orbit; translunar and interplanetary flight; launch from and landing on a celestial body, with or without an atmosphere; entry through the atmosphere of the Earth or other celestial body; and attitude control. They are generally programmed into a vehicle's inertial navigation systems, and monitored on the ground by a member of the flight controller team known in NASA as the flight dynamics officer, or in the European Space Agency as the spacecraft navigator.
Flight dynamics depends on the disciplines of propulsion, aerodynamics, and astrodynamics (orbital mechanics and celestial mechanics). It cannot be reduced to simply attitude control; real spacecraft do not have steering wheels or tillers like airplanes or ships. Unlike the way fictional spaceships are portrayed, a spacecraft actually does not bank to turn in outer space, where its flight path depends strictly on the gravitational forces acting on it and the propulsive maneuvers applied.
Basic principles
A space vehicle's flight is determined by application of Newton's second law of motion:
where F is the vector sum of all forces exerted on the vehicle, m is its current mass, and a is the acceleration vector, the instantaneous rate of change of velocity (v), which in turn is the instantaneous rate of change of displacement. Solving for a, acceleration equals the force sum divided by mass. Acceleration is integrated over time to get velocity, and velocity is in turn integrated to get position.
Flight dynamics calculations are handled by computerized guidance systems aboard the vehicle; the status of the flight dynamics is monitored on the ground during powered maneuvers by a member of the flight controller team known in NASA's Human Spaceflight Center as the flight dynamics officer, or in the European Space Agency as the spacecraft navigator.
For powered atmospheric flight, the three main forces which act on a vehicle are propulsive force, aerodynamic force, and gravitation. Other external forces such as centrifugal force, Coriolis force, and solar radiation pressure are generally insignificant due to the relatively short time of powered flight and small size of spacecraft, and may generally be neglected in simplified performance calculations.
Propulsion
The thrust of a rocket engine, in the general case of operation in an atmosphere, is approximated by:
where,
is the exhaust gas mass flow
is the effective exhaust velocity (sometimes otherwise denoted as c in publications)
is the effective jet velocity when pamb = pe
is the flow area at nozzle exit plane (or the plane where the jet leaves the nozzle if separated flow)
is the static pressure at nozzle exit plane
is the ambient (or atmospheric) pressure
The effective exhaust velocity of the rocket propellant is proportional to the vacuum specific impulse and affected by the atmospheric pressure:
where:
has units of seconds
is the gravitational acceleration at the surface of the Earth
The specific impulse relates the delta-v capacity to the quantity of propellant consumed according to the Tsiolkovsky rocket equation:
where:
is the initial total mass, including propellant, in kg (or lb)
is the final total mass in kg (or lb)
is the effective exhaust velocity in m/s (or ft/s)
is the delta-v in m/s (or ft/s)
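To make the relation concrete, here is a small Python sketch with assumed (illustrative) engine and mass figures; none of the numbers come from the article.

import math

g0  = 9.80665          # standard gravity, m/s^2
isp = 450.0            # assumed vacuum specific impulse, s
m0  = 120_000.0        # assumed initial mass (propellant loaded), kg
m1  = 40_000.0         # assumed final mass after the burn, kg
ve  = isp * g0                   # effective exhaust velocity, m/s
dv  = ve * math.log(m0 / m1)     # Tsiolkovsky rocket equation: delta-v = ve * ln(m0/m1)
print(ve, dv)                    # about 4413 m/s and 4848 m/s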
Aerodynamic force
Aerodynamic forces, present near a body with a significant atmosphere such as Earth, Mars or Venus, are analyzed as: lift, defined as the force component perpendicular to the direction of flight (not necessarily upward to balance gravity, as for an airplane); and drag, the component parallel to, and in the opposite direction of flight. Lift and drag are modeled as the products of a coefficient times dynamic pressure acting on a reference area:
where:
CL is roughly linear with α, the angle of attack between the vehicle axis and the direction of flight (up to a limiting value), and is 0 at α = 0 for an axisymmetric body;
CD varies with α2;
CL and CD vary with Reynolds number and Mach number;
q, the dynamic pressure, is equal to 1/2 ρv2, where ρ is atmospheric density, modeled for Earth as a function of altitude in the International Standard Atmosphere (using an assumed temperature distribution, hydrostatic pressure variation, and the ideal gas law); and
Aref is a characteristic area of the vehicle, such as cross-sectional area at the maximum diameter.
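A minimal sketch of the lift and drag model with assumed coefficients and flight conditions (all values are illustrative, not from the article):

rho  = 1.225      # assumed air density at sea level, kg/m^3
v    = 250.0      # assumed flight speed, m/s
cl   = 0.1        # assumed lift coefficient at a small angle of attack
cd   = 0.3        # assumed drag coefficient
aref = 10.0       # assumed reference area, m^2

q    = 0.5 * rho * v**2    # dynamic pressure, Pa
lift = cl * q * aref       # lift, N
drag = cd * q * aref       # drag, N
print(q, lift, drag)       # about 38281 Pa, 38281 N, 114844 N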
Gravitation
The gravitational force that a celestial body exerts on a space vehicle is modeled with the body and vehicle taken as point masses; the bodies (Earth, Moon, etc.) are simplified as spheres; and the mass of the vehicle is much smaller than the mass of the body so that its effect on the gravitational acceleration can be neglected. Therefore the gravitational force is calculated by:
where:
is the gravitational force (weight);
is the space vehicle's mass; and
is the radial distance of the vehicle to the planet's center; and
is the radial distance from the planet's surface to its center; and
is the gravitational acceleration at the surface of the planet
g is the gravitational acceleration at altitude, which varies with the inverse square of the radial distance to the planet's center: g = g0 (r0/r)², where g0 is the surface value and r0 the planet's surface radius.
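For example (assumed figures only), the gravitational acceleration 400 km above Earth's surface:

r0 = 6_371_000.0      # assumed mean radius of Earth, m
g0 = 9.80665          # gravitational acceleration at the surface, m/s^2
r  = r0 + 400_000.0   # radial distance at 400 km altitude, m
g  = g0 * (r0 / r)**2 # inverse-square fall-off with radial distance
print(g)              # about 8.68 m/s^2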
Powered flight
The equations of motion used to describe powered flight of a vehicle during launch can be as complex as six degrees of freedom for in-flight calculations, or as simple as two degrees of freedom for preliminary performance estimates. In-flight calculations will take into account perturbation factors such as the Earth's oblateness and non-uniform mass distribution; and gravitational forces of all nearby bodies, including the Moon, Sun, and other planets. Preliminary estimates can make some simplifying assumptions: a spherical, uniform planet; the vehicle can be represented as a point mass; solution of the flight path presents a two-body problem; and the local flight path lies in a single plane, with reasonably small loss of accuracy.
The general case of a launch from Earth must take engine thrust, aerodynamic forces, and gravity into account. The acceleration equation can be reduced from vector to scalar form by resolving it into its tangential (speed v) and angular (flight path angle θ relative to local vertical) time rate-of-change components relative to the launch pad. The two equations thus become:
where:
F is the engine thrust;
α is the angle of attack;
m is the vehicle's mass;
D is the vehicle's aerodynamic drag;
L is its aerodynamic lift;
r is the radial distance to the planet's center; and
g is the gravitational acceleration at altitude.
Mass decreases as propellant is consumed and rocket stages, engines or tanks are shed (if applicable).
The planet-fixed values of v and θ at any time in the flight are then determined by numerical integration of the two rate equations from time zero (when both v and θ are 0):
Finite element analysis can be used to integrate the equations, by breaking the flight into small time increments.
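The two rate equations themselves do not survive in this copy of the text. Purely as a hedged sketch, the following Python snippet assumes the commonly used planar gravity-turn form (flight path angle θ measured from the local vertical, lift and drag neglected) and integrates it with a plain Euler step; thrust, mass flow, and the initial state (representing the moment just after an assumed vertical rise and kick-over, since the equations are singular at v = 0) are invented for the example, and the output is only indicative.

import math

mu, r0 = 3.986004418e14, 6_371_000.0   # Earth's gravitational parameter and radius (assumed)
F, mdot, m = 1.8e6, 400.0, 130_000.0   # assumed constant thrust (N), mass flow (kg/s), initial mass (kg)
alpha = 0.0                            # angle of attack held at zero after the initial kick-over

v, theta, h = 100.0, math.radians(2.0), 1_000.0   # assumed state after the vertical-rise phase
dt = 0.5
for _ in range(int(100.0 / dt)):       # integrate 100 s of ascent
    g = mu / (r0 + h)**2
    dv     = F * math.cos(alpha) / m - g * math.cos(theta)                                # assumed speed equation
    dtheta = F * math.sin(alpha) / (m * v) + (g / v - v / (r0 + h)) * math.sin(theta)     # assumed pitch equation
    v     += dv * dt
    theta += dtheta * dt
    h     += v * math.cos(theta) * dt
    m     -= mdot * dt
print(v, math.degrees(theta), h / 1000.0)   # speed (m/s), angle from vertical (deg), altitude (km)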
For most launch vehicles, relatively small levels of lift are generated, and a gravity turn is employed, depending mostly on the third term of the angle rate equation. At the moment of liftoff, when angle and velocity are both zero, the theta-dot equation is mathematically indeterminate and cannot be evaluated until velocity becomes non-zero shortly after liftoff. But notice at this condition, the only force which can cause the vehicle to pitch over is the engine thrust acting at a non-zero angle of attack (first term) and perhaps a slight amount of lift (second term), until a non-zero pitch angle is attained. In the gravity turn, pitch-over is initiated by applying an increasing angle of attack (by means of gimbaled engine thrust), followed by a gradual decrease in angle of attack through the remainder of the flight.
Once velocity and flight path angle are known, altitude and downrange distance are computed as:
The planet-fixed values of v and θ are converted to space-fixed (inertial) values with the following conversions:
where ω is the planet's rotational rate in radians per second, φ is the launch site latitude, and Az is the launch azimuth angle.
Final vs, θs and r must match the requirements of the target orbit as determined by orbital mechanics (see Orbital flight, above), where final vs is usually the required periapsis (or circular) velocity, and final θs is 90 degrees. A powered descent analysis would use the same procedure, with reverse boundary conditions.
Orbital flight
Orbital mechanics are used to calculate flight in orbit about a central body. For sufficiently high orbits (generally at least in the case of Earth), aerodynamic force may be assumed to be negligible for relatively short term missions (though a small amount of drag may be present which results in decay of orbital energy over longer periods of time.) When the central body's mass is much larger than the spacecraft, and other bodies are sufficiently far away, the solution of orbital trajectories can be treated as a two-body problem.
This can be shown to result in the trajectory being ideally a conic section (circle, ellipse, parabola or hyperbola) with the central body located at one focus. Orbital trajectories are either circles or ellipses; the parabolic trajectory represents first escape of the vehicle from the central body's gravitational field. Hyperbolic trajectories are escape trajectories with excess velocity, and will be covered under Interplanetary flight below.
Elliptical orbits are characterized by three elements. The semi-major axis a is the average of the radius at apoapsis and periapsis: a = (ra + rp)/2.
The eccentricity e can then be calculated for an ellipse, knowing the apses: e = (ra − rp)/(ra + rp).
The time period for a complete orbit is dependent only on the semi-major axis, and is independent of eccentricity: P = 2π√(a³/μ),
where μ is the standard gravitational parameter of the central body.
The orientation of the orbit in space is specified by three angles:
The inclination i, of the orbital plane with the fundamental plane (this is usually a planet or moon's equatorial plane, or in the case of a solar orbit, the Earth's orbital plane around the Sun, known as the ecliptic.) Positive inclination is northward, while negative inclination is southward.
The longitude of the ascending node Ω, measured in the fundamental plane counter-clockwise looking southward, from a reference direction (usually the vernal equinox) to the line where the spacecraft crosses this plane from south to north. (If inclination is zero, this angle is undefined and taken as 0.)
The argument of periapsis ω, measured in the orbital plane counter-clockwise looking southward, from the ascending node to the periapsis. If the inclination is 0, there is no ascending node, so ω is measured from the reference direction. For a circular orbit, there is no periapsis, so ω is taken as 0.
The orbital plane is ideally constant, but is usually subject to small perturbations caused by planetary oblateness and the presence of other bodies.
The spacecraft's position in orbit is specified by the true anomaly, ν, an angle measured from the periapsis, or for a circular orbit, from the ascending node or reference direction. The semi-latus rectum p, or radius at 90 degrees from periapsis, is: p = a(1 − e²).
The radius at any position in flight is: r = p/(1 + e cos ν),
and the velocity at that position is: v = √(μ(2/r − 1/a)).
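As a numerical illustration of these relations (the orbital values below are assumptions chosen for the example, roughly Molniya-like):

import math

mu = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
a, e = 26_600_000.0, 0.74  # assumed semi-major axis (m) and eccentricity
nu = math.radians(90.0)    # true anomaly at which to evaluate

p = a * (1.0 - e**2)                      # semi-latus rectum
r = p / (1.0 + e * math.cos(nu))          # radius at this true anomaly
v = math.sqrt(mu * (2.0 / r - 1.0 / a))   # speed at that radius
print(r / 1000.0, v)                      # about 12,034 km and 7,160 m/s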
Types of orbit
Circular
For a circular orbit, ra = rp = a, and eccentricity is 0. Circular velocity at a given radius is: v = √(μ/r).
Elliptical
For an elliptical orbit, e is greater than 0 but less than 1. The periapsis velocity is:
and the apoapsis velocity is:
The limiting condition is a parabolic escape orbit, when e = 1 and ra becomes infinite. Escape velocity at periapsis is then
Flight path angle
The specific angular momentum of any conic orbit, h, is constant, and is equal to the product of radius and velocity at periapsis. At any other point in the orbit, it is equal to: h = rv cos φ,
where φ is the flight path angle measured from the local horizontal (perpendicular to r.) This allows the calculation of φ at any point in the orbit, knowing radius and velocity:
Note that flight path angle is a constant 0 degrees (90 degrees from local vertical) for a circular orbit.
True anomaly as a function of time
It can be shown that the angular momentum equation given above also relates the rate of change in true anomaly to r, v, and φ, thus the true anomaly can be found as a function of time since periapsis passage by integration:
Conversely, the time required to reach a given anomaly is:
Orbital maneuvers
Once in orbit, a spacecraft may fire rocket engines to make in-plane changes to a different altitude or type of orbit, or to change its orbital plane. These maneuvers require changes in the craft's velocity, and the classical rocket equation is used to calculate the propellant requirements for a given delta-v. A delta-v budget will add up all the propellant requirements, or determine the total delta-v available from a given amount of propellant, for the mission. Most on-orbit maneuvers can be modeled as impulsive, that is as a near-instantaneous change in velocity, with minimal loss of accuracy.
In-plane changes
Orbit circularization
An elliptical orbit is most easily converted to a circular orbit at the periapsis or apoapsis by applying a single engine burn with a delta v equal to the difference between the desired orbit's circular velocity and the current orbit's periapsis or apoapsis velocity:
To circularize at periapsis, a retrograde burn is made:
To circularize at apoapsis, a posigrade burn is made:
Altitude change by Hohmann transfer
A Hohmann transfer orbit is the simplest maneuver which can be used to move a spacecraft from one altitude to another. Two burns are required: the first to send the craft into the elliptical transfer orbit, and a second to circularize the target orbit.
To raise a circular orbit at , the first posigrade burn raises velocity to the transfer orbit's periapsis velocity:
The second posigrade burn, made at apoapsis, raises velocity to the target orbit's velocity:
A maneuver to lower the orbit is the mirror image of the raise maneuver; both burns are made retrograde.
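A small Python sketch of the two Hohmann burns for an assumed transfer from a low circular orbit to geostationary radius (all values assumed; plane change, drag, and finite burn effects are ignored):

import math

mu = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
r1 = 6_778_000.0         # assumed initial circular orbit radius, m
r2 = 42_164_000.0        # assumed target circular orbit radius, m (geostationary)

a_t = (r1 + r2) / 2.0                                              # transfer ellipse semi-major axis
dv1 = math.sqrt(mu / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0) # burn 1: enter the transfer ellipse
dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2))) # burn 2: circularize at apoapsis
t_transfer = math.pi * math.sqrt(a_t**3 / mu)                      # half the transfer-ellipse period
print(dv1, dv2, t_transfer / 3600.0)   # about 2400 m/s, 1460 m/s, 5.3 hours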
Altitude change by bi-elliptic transfer
A slightly more complicated altitude change maneuver is the bi-elliptic transfer, which consists of two half-elliptic orbits; the first, posigrade burn sends the spacecraft into an arbitrarily high apoapsis chosen at some point away from the central body. At this point a second burn modifies the periapsis to match the radius of the final desired orbit, where a third, retrograde burn is performed to inject the spacecraft into the desired orbit. While this takes a longer transfer time, a bi-elliptic transfer can require less total propellant than the Hohmann transfer when the ratio of initial and target orbit radii is 12 or greater.
Burn 1 (posigrade):
Burn 2 (posigrade or retrograde), to match periapsis to the target orbit's altitude:
Burn 3 (retrograde):
Change of plane
Plane change maneuvers can be performed alone or in conjunction with other orbit adjustments. For a pure rotation plane change maneuver, consisting only of a change in the inclination of the orbit, the specific angular momentum, h, of the initial and final orbits are equal in magnitude but not in direction. Therefore, the change in specific angular momentum can be written as:
where h is the specific angular momentum before the plane change, and Δi is the desired change in the inclination angle. From this it can be shown that the required delta-v is:
From the definition of h, this can also be written as:
where v is the magnitude of velocity before plane change and φ is the flight path angle. Using the small-angle approximation, this becomes:
The total delta-v for a combined maneuver can be calculated by a vector addition of the pure rotation delta-v and the delta-v for the other planned orbital change.
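For the special case of a circular orbit (flight path angle zero), the pure-rotation delta-v reduces to 2·v·sin(Δi/2); a quick check with assumed values:

import math

v  = 7_670.0                # assumed circular orbital speed, m/s
di = math.radians(28.5)     # assumed inclination change
dv = 2.0 * v * math.sin(di / 2.0)   # pure rotation plane change for a circular orbit
print(dv)                   # about 3776 m/s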
Translunar flight
Vehicles sent on lunar or planetary missions are generally not launched by direct injection to departure trajectory, but first put into a low Earth parking orbit; this allows the flexibility of a bigger launch window and more time for checking that the vehicle is in proper condition for the flight.
Escape velocity is not required for flight to the Moon; rather the vehicle's apogee is raised high enough to take it through a point where it enters the Moon's gravitational sphere of influence (SOI). This is defined as the distance from a satellite at which its gravitational pull on a spacecraft equals that of its central body, which is
where D is the mean distance from the satellite to the central body, and mc and ms are the masses of the central body and satellite, respectively. This value is approximately from Earth's Moon.
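The sphere-of-influence radius is commonly written r_SOI ≈ D·(ms/mc)^(2/5); assuming that form and round mass figures reproduces the often-quoted value of roughly 66,000 km for the Moon:

D       = 384_400.0     # assumed mean Earth-Moon distance, km
m_moon  = 7.342e22      # Moon's mass, kg
m_earth = 5.972e24      # Earth's mass, kg
r_soi = D * (m_moon / m_earth) ** 0.4   # Laplace sphere-of-influence radius, km
print(r_soi)            # about 66,000 km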
An accurate solution of the trajectory requires treatment as a three-body problem, but a preliminary estimate may be made using a patched conic approximation of orbits around the Earth and Moon, patched at the SOI point and taking into account the fact that the Moon is a revolving frame of reference around the Earth.
Translunar injection
This must be timed so that the Moon will be in position to capture the vehicle, and might be modeled to a first approximation as a Hohmann transfer. However, the rocket burn duration is usually long enough, and occurs during a sufficient change in flight path angle, that this is not very accurate. It must be modeled as a non-impulsive maneuver, requiring integration by finite element analysis of the accelerations due to propulsive thrust and gravity to obtain velocity and flight path angle:
where:
F is the engine thrust;
α is the angle of attack;
m is the vehicle's mass;
r is the radial distance to the planet's center; and
g is the gravitational acceleration, which varies with the inverse square of the radial distance:
Altitude , downrange distance , and radial distance from the center of the Earth are then computed as:
Mid-course corrections
A simple lunar trajectory stays in one plane, resulting in lunar flyby or orbit within a small range of inclination to the Moon's equator. This also permits a "free return", in which the spacecraft would return to the appropriate position for reentry into the Earth's atmosphere if it were not injected into lunar orbit. Relatively small velocity changes are usually required to correct for trajectory errors. Such a trajectory was used for the Apollo 8, Apollo 10, Apollo 11, and Apollo 12 crewed lunar missions.
Greater flexibility in lunar orbital or landing site coverage (at greater angles of lunar inclination) can be obtained by performing a plane change maneuver mid-flight; however, this takes away the free-return option, as the new plane would take the spacecraft's emergency return trajectory away from the Earth's atmospheric re-entry point, and leave the spacecraft in a high Earth orbit. This type of trajectory was used for the last five Apollo missions (13 through 17).
Lunar orbit insertion
In the Apollo program, the retrograde lunar orbit insertion burn was performed at an altitude of approximately on the far side of the Moon. This became the pericynthion of the initial orbits, with an apocynthion on the order of . The delta v was approximately . Two orbits later, the orbit was circularized at . For each mission, the flight dynamics officer prepared 10 lunar orbit insertion solutions so the one could be chosen with the optimum (minimum) fuel burn and best met the mission requirements; this was uploaded to the spacecraft computer and had to be executed and monitored by the astronauts on the lunar far side, while they were out of radio contact with Earth.
Interplanetary flight
In order to completely leave one planet's gravitational field to reach another, a hyperbolic trajectory relative to the departure planet is necessary, with excess velocity added to (or subtracted from) the departure planet's orbital velocity around the Sun. The desired heliocentric transfer orbit to a superior planet will have its perihelion at the departure planet, requiring the hyperbolic excess velocity to be applied in the posigrade direction, when the spacecraft is away from the Sun. To an inferior planet destination, aphelion will be at the departure planet, and the excess velocity is applied in the retrograde direction when the spacecraft is toward the Sun. For accurate mission calculations, the orbital elements of the planets must be obtained from an ephemeris, such as that published by NASA's Jet Propulsion Laboratory.
Simplifying assumptions
For the purpose of preliminary mission analysis and feasibility studies, certain simplified assumptions may be made to enable delta-v calculation with very small error:
All the planets' orbits except Mercury have very small eccentricity, and therefore may be assumed to be circular at a constant orbital speed and mean distance from the Sun.
All the planets' orbits (except Mercury) are nearly coplanar, with very small inclination to the ecliptic (3.39 degrees or less; Mercury's inclination is 7.00 degrees).
The perturbating effects of the other planets' gravity are negligible.
The spacecraft will spend most of its flight time under only the gravitational influence of the Sun, except for brief periods when it is in the sphere of influence of the departure and destination planets.
Since interplanetary spacecraft spend a large period of time in heliocentric orbit between the planets, which are at relatively large distances away from each other, the patched-conic approximation is much more accurate for interplanetary trajectories than for translunar trajectories. The patch point between the hyperbolic trajectory relative to the departure planet and the heliocentric transfer orbit occurs at the planet's sphere of influence radius relative to the Sun, as defined above in Orbital flight. Given the Sun's mass ratio of 333,432 times that of Earth and distance of , the Earth's sphere of influence radius is (roughly 1,000,000 kilometers).
Heliocentric transfer orbit
The transfer orbit required to carry the spacecraft from the departure planet's orbit to the destination planet is chosen among several options:
A Hohmann transfer orbit requires the least possible propellant and delta-v; this is half of an elliptical orbit with aphelion and perihelion tangential to both planets' orbits, with the longest outbound flight time equal to half the period of the ellipse. This is known as a conjunction-class mission. There is no "free return" option, because if the spacecraft does not enter orbit around the destination planet and instead completes the transfer orbit, the departure planet will not be in its original position. Using another Hohmann transfer to return requires a significant loiter time at the destination planet, resulting in a very long total round-trip mission time. Science fiction writer Arthur C. Clarke wrote in his 1951 book The Exploration of Space that an Earth-to-Mars round trip would require 259 days outbound and another 259 days inbound, with a 425-day stay at Mars.
Increasing the departure apsis speed (and thus the semi-major axis) results in a trajectory which crosses the destination planet's orbit non-tangentially before reaching the opposite apsis, increasing delta-v but cutting the outbound transit time below the maximum.
A gravity assist maneuver, sometimes known as a "slingshot maneuver" or Crocco mission after its 1956 proposer Gaetano Crocco, results in an opposition-class mission with a much shorter dwell time at the destination. This is accomplished by swinging past another planet, using its gravity to alter the orbit. A round trip to Mars, for example, can be significantly shortened from the 943 days required for the conjunction mission, to under a year, by swinging past Venus on return to the Earth.
Hyperbolic departure
The required hyperbolic excess velocity v∞ (sometimes called characteristic velocity) is the difference between the transfer orbit's departure speed and the departure planet's heliocentric orbital speed. Once this is determined, the injection velocity relative to the departure planet at periapsis is:
The excess velocity vector for a hyperbola is displaced from the periapsis tangent by a characteristic angle, therefore the periapsis injection burn must lead the planetary departure point by the same angle:
The geometric equation for eccentricity of an ellipse cannot be used for a hyperbola. But the eccentricity can be calculated from dynamics formulations as:
where h is the specific angular momentum as given above in the Orbital flight section, calculated at the periapsis:
and ε is the specific energy:
Also, the equations for r and v given in Orbital flight depend on the semi-major axis, and thus are unusable for an escape trajectory. But setting the radius at periapsis equal to the r equation at zero anomaly gives an alternate expression for the semi-latus rectum:
which gives a more general equation for radius versus anomaly which is usable at any eccentricity:
Substituting the alternate expression for p also gives an alternate expression for a (which is defined for a hyperbola, but no longer represents the semi-major axis). This gives an equation for velocity versus radius which is likewise usable at any eccentricity:
The equations for flight path angle and anomaly versus time given in Orbital flight are also usable for hyperbolic trajectories.
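A short numeric sketch of the periapsis injection condition, assuming the standard energy relation v_p = √(v∞² + 2μ/r_p); the parking-orbit radius and excess velocity below are assumed values:

import math

mu    = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
r_p   = 6_678_000.0        # assumed periapsis radius of the departure hyperbola (~300 km altitude), m
v_inf = 3_000.0            # assumed hyperbolic excess velocity, m/s

v_p    = math.sqrt(v_inf**2 + 2.0 * mu / r_p)   # speed needed at periapsis
v_circ = math.sqrt(mu / r_p)                    # speed already held in a circular parking orbit
print(v_p, v_p - v_circ)                        # about 11,330 m/s and a ~3,600 m/s injection burn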
Launch windows
There is a great deal of variation with time of the velocity change required for a mission, because of the constantly varying relative positions of the planets. Therefore, optimum launch windows are often chosen from the results of porkchop plots that show contours of characteristic energy (v∞2) plotted versus departure and arrival time.
Atmospheric entry
Controlled entry, descent, and landing of a vehicle are achieved by shedding the excess kinetic energy through aerodynamic heating from drag, which requires some means of heat shielding, and/or retrograde thrust. Terminal descent is usually achieved by means of parachutes and/or air brakes.
Attitude control
Since spacecraft spend most of their flight time coasting unpowered through the vacuum of space, they are unlike aircraft in that their flight trajectory is not determined by their attitude (orientation), except during atmospheric flight to control the forces of lift and drag, and during powered flight to align the thrust vector. Nonetheless, attitude control is often maintained in unpowered flight to keep the spacecraft in a fixed orientation for purposes of astronomical observation, communications, or for solar power generation; or to place it into a controlled spin for passive thermal control, or to create artificial gravity inside the craft.
Attitude control is maintained with respect to an inertial frame of reference or another entity (the celestial sphere, certain fields, nearby objects, etc.). The attitude of a craft is described by angles relative to three mutually perpendicular axes of rotation, referred to as roll, pitch, and yaw. Orientation can be determined by calibration using an external guidance system, such as determining the angles to a reference star or the Sun, then internally monitored using an inertial system of mechanical or optical gyroscopes. Orientation is a vector quantity described by three angles for the instantaneous direction, and the instantaneous rates of roll in all three axes of rotation. The aspect of control implies both awareness of the instantaneous orientation and rates of roll and the ability to change the roll rates to assume a new orientation using either a reaction control system or other means.
Newton's second law, applied to rotational rather than linear motion, becomes:
where τx is the net torque about an axis of rotation exerted on the vehicle, Ix is its moment of inertia about that axis (a physical property that combines the mass and its distribution around the axis), and αx is the angular acceleration about that axis in radians per second per second. Therefore, the acceleration rate in degrees per second per second is
Analogous to linear motion, the angular rotation rate (degrees per second) is obtained by integrating α over time:
and the angular rotation is the time integral of the rate:
The three principal moments of inertia Ix, Iy, and Iz about the roll, pitch and yaw axes, are determined through the vehicle's center of mass.
The control torque for a launch vehicle is sometimes provided aerodynamically by movable fins, and usually by mounting the engines on gimbals to vector the thrust around the center of mass. Torque is frequently applied to spacecraft, operating absent aerodynamic forces, by a reaction control system, a set of thrusters located about the vehicle. The thrusters are fired, either manually or under automatic guidance control, in short bursts to achieve the desired rate of rotation, and then fired in the opposite direction to halt rotation at the desired position. The torque about a specific axis is:
where r is its distance from the center of mass, and F is the thrust of an individual thruster (only the component of F perpendicular to r is included.)
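A toy example of a thruster spin-up using these relations (all vehicle numbers are assumed for illustration):

import math

F = 10.0        # assumed thrust of one attitude thruster, N
r = 1.5         # assumed lever arm from the center of mass, m
I = 4_500.0     # assumed moment of inertia about the controlled axis, kg*m^2
t = 2.0         # assumed burn duration, s

torque = r * F              # torque, N*m (thrust taken perpendicular to the lever arm)
alpha  = torque / I         # angular acceleration, rad/s^2
omega  = alpha * t          # rotation rate reached after the burn, rad/s
print(math.degrees(omega))  # about 0.38 deg/s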
For situations where propellant consumption may be a problem (such as long-duration satellites or space stations), alternative means may be used to provide the control torque, such as reaction wheels or control moment gyroscopes.
Notes
References
Sidi, M.J. "Spacecraft Dynamics & Control." Cambridge, 1997.
Thomson, W.T. "Introduction to Space Dynamics." Dover, 1961.
Wertz, J.R. "Spacecraft Attitude Determination and Control." Kluwer, 1978.
Wiesel, W.E. "Spaceflight Dynamics." McGraw-Hill, 1997.
Astrodynamics
Spaceflight concepts | Spacecraft flight dynamics | [
"Engineering"
] | 6,169 | [
"Astrodynamics",
"Aerospace engineering"
] |
1,560,117 | https://en.wikipedia.org/wiki/Delta%20bond | In chemistry, a delta bond (δ bond) is a covalent chemical bond, in which four lobes of an atomic orbital on one atom overlap four lobes of an atomic orbital on another atom. This overlap leads to the formation of a bonding molecular orbital with two nodal planes which contain the internuclear axis and go through both atoms.
The Greek letter δ in their name refers to d orbitals, since the orbital symmetry of the δ bond is the same as that of the usual (4-lobed) type of d orbital when seen down the bond axis. This type of bonding is observed in atoms that have occupied d orbitals with low enough energy to participate in covalent bonding, for example, in organometallic species of transition metals. Some rhenium, molybdenum, technetium, and chromium compounds contain a quadruple bond, consisting of one σ bond, two π bonds and one δ bond.
The orbital symmetry of the δ bonding orbital is different from that of a π antibonding orbital, which has one nodal plane containing the internuclear axis and a second nodal plane perpendicular to this axis between the atoms.
The δ notation was introduced by Robert Mulliken in 1931. The first compound identified as having a δ bond was potassium octachlorodirhenate(III). In 1965, F. A. Cotton reported that there was δ-bonding as part of the rhenium–rhenium quadruple bond in the [Re2Cl8]2− ion. Another example of a δ bond is proposed in cyclobutadieneiron tricarbonyl between an iron d orbital and the four p orbitals of the attached cyclobutadiene molecule.
References
Chemical bonding | Delta bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 365 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
1,561,268 | https://en.wikipedia.org/wiki/Synthetic%20membrane | An artificial membrane, or synthetic membrane, is a synthetically created membrane which is usually intended for separation purposes in laboratory or in industry. Synthetic membranes have been successfully used for small and large-scale industrial processes since the middle of the twentieth century. A wide variety of synthetic membranes is known. They can be produced from organic materials such as polymers and liquids, as well as inorganic materials. Most commercially utilized synthetic membranes in industry are made of polymeric structures. They can be classified based on their surface chemistry, bulk structure, morphology, and production method. The chemical and physical properties of synthetic membranes and separated particles as well as separation driving force define a particular membrane separation process. The most commonly used driving forces of a membrane process in industry are pressure and concentration gradient. The respective membrane process is therefore known as filtration. Synthetic membranes utilized in a separation process can be of different geometry and flow configurations. They can also be categorized based on their application and separation regime. The best known synthetic membrane separation processes include water purification, reverse osmosis, dehydrogenation of natural gas, removal of cell particles by microfiltration and ultrafiltration, removal of microorganisms from dairy products, and dialysis.
Membrane types and structure
Synthetic membrane can be fabricated from a large number of different materials. It can be made from organic or inorganic materials including solids such as metals, ceramics, homogeneous films, polymers, heterogeneous solids (polymeric mixtures, mixed glasses), and liquids. Ceramic membranes are produced from inorganic materials such as aluminium oxides, silicon carbide, and zirconium oxide. Ceramic membranes are very resistant to the action of aggressive media (acids, strong solvents). They are very stable chemically, thermally, and mechanically, and biologically inert. Even though ceramic membranes have a high weight and substantial production costs, they are ecologically friendly and have long working life. Ceramic membranes are generally made as monolithic shapes of tubular capillaries.
Liquid membranes
Liquid membranes refer to synthetic membranes made of non-rigid materials. Several types of liquid membranes can be encountered in industry: emulsion liquid membranes, immobilized (supported) liquid membranes, supported molten-salt membranes, and hollow-fiber contained liquid membranes. Liquid membranes have been extensively studied but thus far have limited commercial applications. Maintaining adequate long-term stability is a key problem, due to the tendency of membrane liquids to evaporate, dissolve in the phases in contact with them, or creep out of the membrane support.
Polymeric membranes
Polymeric membranes lead the membrane separation industry market because they are very competitive in performance and economics. Many polymers are available, but the choice of membrane polymer is not a trivial task. A polymer has to have appropriate characteristics for the intended application. The polymer sometimes has to offer a low binding affinity for separated molecules (as in the case of biotechnology applications), and has to withstand the harsh cleaning conditions. It has to be compatible with the chosen membrane fabrication technology. The polymer has to be a suitable membrane former in terms of its chain rigidity, chain interactions, stereoregularity, and polarity of its functional groups. The polymers can range from amorphous to semicrystalline structures (and can also have different glass transition temperatures), affecting the membrane performance characteristics. The polymer has to be obtainable and reasonably priced to comply with the low cost criteria of the membrane separation process. Many membrane polymers are grafted, custom-modified, or produced as copolymers to improve their properties. The most common polymers in membrane synthesis are cellulose acetate, nitrocellulose, and cellulose esters (CA, CN, and CE), polysulfone (PS), polyether sulfone (PES), polyacrylonitrile (PAN), polyamide, polyimide, polyethylene and polypropylene (PE and PP), polytetrafluoroethylene (PTFE), polyvinylidene fluoride (PVDF), and polyvinyl chloride (PVC).
Polymer electrolyte membranes
Polymer membranes may be functionalized into ion-exchange membranes by the addition of highly acidic or basic functional groups, e.g. sulfonic acid and quaternary ammonium, enabling the membrane to form water channels and selectively transport cations or anions, respectively. The most important functional materials in this category include proton-exchange membranes and alkaline anion-exchange membranes, that are at the heart of many technologies in water treatment, energy storage, energy generation. Applications within water treatment include reverse osmosis, electrodialysis, and reversed electrodialysis. Applications within energy storage include rechargeable metal-air electrochemical cells and various types of flow battery. Applications within energy generation include proton-exchange membrane fuel cells (PEMFCs), alkaline anion-exchange membrane fuel cells (AEMFCs), and both the osmotic- and electrodialysis-based osmotic power or blue energy generation.
Ceramic membranes
Ceramic membranes are made from inorganic materials (such as alumina, titania, zirconia oxides, recrystallised silicon carbide or some glassy materials).
By contrast with polymeric membranes, they can be used in separations where aggressive media (acids, strong solvents) are present. They also have excellent thermal stability which make them usable in high temperature membrane operations.
Surface chemistry
One of the critical characteristics of a synthetic membrane is its chemistry. Synthetic membrane chemistry usually refers to the chemical nature and composition of the surface in contact with a separation process stream. The chemical nature of a membrane's surface can be quite different from its bulk composition. This difference can result from material partitioning at some stage of the membrane's fabrication, or from an intended surface postformation modification. Membrane surface chemistry creates very important properties such as hydrophilicity or hydrophobicity (related to surface free energy), presence of ionic charge, membrane chemical or thermal resistance, binding affinity for particles in a solution, and biocompatibility (in case of bioseparations). Hydrophilicity and hydrophobicity of membrane surfaces can be expressed in terms of water (liquid) contact angle θ. Hydrophilic membrane surfaces have a contact angle in the range of 0°<θ<90° (closer to 0°), where hydrophobic materials have θ in the range of 90°<θ<180°.
The contact angle is determined by solving the Young's equation for the interfacial force balance. At equilibrium three interfacial tensions corresponding to solid/gas (γSG), solid/liquid (γSL), and liquid/gas (γLG) interfaces are counterbalanced. The consequence of the contact angle's magnitudes is known as wetting phenomena, which is important to characterize the capillary (pore) intrusion behavior. Degree of membrane surface wetting is determined by the contact angle. The surface with smaller contact angle has better wetting properties (θ=0°-perfect wetting). In some cases low surface tension liquids such as alcohols or surfactant solutions are used to enhance wetting of non-wetting membrane surfaces. The membrane surface free energy (and related hydrophilicity/hydrophobicity) influences membrane particle adsorption or fouling phenomena. In most membrane separation processes (especially bioseparations), higher surface hydrophilicity corresponds to the lower fouling. Synthetic membrane fouling impairs membrane performance. As a consequence, a wide variety of membrane cleaning techniques have been developed. Sometimes fouling is irreversible, and the membrane needs to be replaced. Another feature of membrane surface chemistry is surface charge. The presence of the charge changes the properties of the membrane-liquid interface. The membrane surface may develop an electrokinetic potential and induce the formation of layers of solution particles which tend to neutralize the charge.
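As an illustration of the Young force balance (γSG = γSL + γLG·cos θ), here is a small Python sketch; the interfacial tensions are assumed, illustrative values rather than measured data.

import math

gamma_sg = 60.0    # assumed solid/gas interfacial tension, mN/m
gamma_sl = 25.0    # assumed solid/liquid interfacial tension, mN/m
gamma_lg = 72.8    # liquid/gas surface tension of water, mN/m

cos_theta = (gamma_sg - gamma_sl) / gamma_lg         # Young's equation solved for cos(theta)
theta = math.degrees(math.acos(cos_theta))
print(theta)       # about 61 degrees, i.e. a hydrophilic surface (theta < 90 degrees)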
Membrane morphology
Synthetic membranes can be also categorized based on their structure (morphology). Three such types of synthetic membranes are commonly used in separation industry: dense membranes, porous membranes, and asymmetric membranes. Dense and porous membranes are distinct from each other based on the size of separated molecules. Dense membrane is usually a thin layer of dense material utilized in the separation processes of small molecules (usually in gas or liquid phase). Dense membranes are widely used in industry for gas separations and reverse osmosis applications.
Dense membranes can be synthesized as amorphous or heterogeneous structures. Polymeric dense membranes such as polytetrafluoroethylene and cellulose esters are usually fabricated by compression molding, solvent casting, and spraying of a polymer solution. The membrane structure of a dense membrane can be in a rubbery or a glassy state at a given temperature depending on its glass transition temperature. Porous membranes are intended for the separation of larger molecules such as solid colloidal particles, large biomolecules (proteins, DNA, RNA) and cells from the filtering media. Porous membranes find use in microfiltration, ultrafiltration, and dialysis applications. There is some controversy in defining a "membrane pore". The most commonly used theory assumes a cylindrical pore for simplicity. This model assumes that pores have the shape of parallel, nonintersecting cylindrical capillaries. But in reality a typical pore is a random network of unevenly shaped structures of different sizes. The formation of a pore can be induced by the dissolution of a "better" solvent into a "poorer" solvent in a polymer solution. Other types of pore structure can be produced by stretching of crystalline structure polymers. The structure of a porous membrane is related to the characteristics of the interacting polymer and solvent, component concentrations, molecular weight, temperature, and storing time in solution. The thicker porous membranes sometimes provide support for the thin dense membrane layers, forming the asymmetric membrane structures. The latter are usually produced by a lamination of dense and porous membranes.
See also
Membrane technology
Notes
References
Filtration
Membrane technology | Synthetic membrane | [
"Chemistry",
"Engineering"
] | 2,684 | [
"Separation processes",
"Chemical equipment",
"Membrane technology",
"Filtration",
"nan"
] |
14,552,225 | https://en.wikipedia.org/wiki/Brookfield%20Engineering | Brookfield Engineering is an engineering and manufacturing company with headquarters in Middleboro, Massachusetts. It is a subsidiary of the conglomerate Ametek. Its product line includes laboratory viscometers, rheometers, texture analyzers, and powder flow testers as well as in-line process instrumentation. These instruments are used by research, design, and process control departments.
Company history
The company was established in 1934 by Don Brookfield Sr., who graduated from MIT with a degree in electrochemical engineering. Brookfield Engineering was a family-run business until 1986, when it became an ESOP company. It has been ISO certified since the 1990s.
Brookfield Engineering has dealers in 60 countries and regional offices in the US, UK, Germany, India and China. All manufacturing is located in the US at company headquarters.
Principle of operation
Classical Brookfield viscometers employ the principle of rotational viscometry—the torque required to turn an object, such as a spindle, in a fluid indicates the viscosity of the fluid. Torque is applied through a calibrated spring to a disk or bob spindle immersed in test fluid and the spring deflection measures the viscous drag of the fluid against the spindle. The amount of viscous drag is proportional to the amount of torque required to rotate the spindle, and thus to the viscosity of a Newtonian fluid. In the case of non-Newtonian fluids, Brookfield viscosities measured under the same conditions (model, spindle, speed, temperature, time of test, container, and any other sample preparation procedures that may affect the behavior of the fluid) can be compared. When developing a new test method, trial and error is often necessary in order to determine the proper spindle and speeds. Successful test methods will deliver a % torque reading between 10 and 100. The rheological behavior of the test fluid can be observed using the same spindle at different speeds, but because the geometry of the fluid around a rotating bob or disk spindle in a large container does not allow a single shear rate to be assigned, proper rheometry is not feasible using this setup.
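The torque-to-viscosity relationship described above can be sketched as follows; the spindle calibration constant here is hypothetical, standing in for the spindle- and spring-specific factors that a real instrument supplies:

```python
def apparent_viscosity_mPas(torque_percent, spindle_factor, speed_rpm):
    """Estimate apparent viscosity for a rotational viscometer reading.

    Assumes viscous drag (and hence spring torque) is proportional to
    viscosity for a Newtonian fluid; spindle_factor is a hypothetical
    lumped constant for spindle geometry and spring stiffness.
    """
    if not 10 <= torque_percent <= 100:
        raise ValueError("reading is outside the recommended 10-100% torque window")
    return spindle_factor * torque_percent / speed_rpm

# Hypothetical numbers for illustration only:
print(apparent_viscosity_mPas(torque_percent=45.0, spindle_factor=6000.0, speed_rpm=60))
```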
Apart from its rotating bob viscometers, Brookfield now also produces defined-geometry rheometers which allow complete rheological analysis of fluids.
See also
ASTM International
Bulk density
Deutsches Institut für Normung
Food Rheology
Mouthfeel
Rheology
References
Viscosity
Rheology
Companies based in Plymouth County, Massachusetts
Companies based in Massachusetts
1934 establishments in Massachusetts
Technology companies established in 1934
Manufacturing companies established in 1934 | Brookfield Engineering | [
"Physics",
"Chemistry"
] | 528 | [
"Physical phenomena",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Rheology",
"Fluid dynamics"
] |
14,552,970 | https://en.wikipedia.org/wiki/Stranski%E2%80%93Krastanov%20growth | Stranski–Krastanov growth (SK growth, also Stransky–Krastanov or 'Stranski–Krastanow') is one of the three primary modes by which thin films grow epitaxially at a crystal surface or interface. Also known as 'layer-plus-island growth', the SK mode follows a two step process: initially, complete films of adsorbates, up to several monolayers thick, grow in a layer-by-layer fashion on a crystal substrate. Beyond a critical layer thickness, which depends on strain and the chemical potential of the deposited film, growth continues through the nucleation and coalescence of adsorbate 'islands'. This growth mechanism was first noted by Ivan Stranski and Lyubomir Krastanov in 1938. It wasn't until 1958 however, in a seminal work by Ernst Bauer published in Zeitschrift für Kristallographie, that the SK, Volmer–Weber, and Frank–van der Merwe mechanisms were systematically classified as the primary thin-film growth processes. Since then, SK growth has been the subject of intense investigation, not only to better understand the complex thermodynamics and kinetics at the core of thin-film formation, but also as a route to fabricating novel nanostructures for application in the microelectronics industry.
Modes of thin-film growth
The growth of epitaxial (homogeneous or heterogeneous) thin films on a single crystal surface depends critically on the interaction strength between adatoms and the surface. While it is possible to grow epilayers from a liquid solution, most epitaxial growth occurs via a vapor phase technique such as molecular beam epitaxy (MBE). In Volmer–Weber (VW) growth, adatom–adatom interactions are stronger than those of the adatom with the surface, leading to the formation of three-dimensional adatom clusters or islands. Growth of these clusters, along with coarsening, will cause rough multi-layer films to grow on the substrate surface. Antithetically, during Frank–van der Merwe (FM) growth, adatoms attach preferentially to surface sites resulting in atomically smooth, fully formed layers. This layer-by-layer growth is two-dimensional, indicating that complete films form prior to growth of subsequent layers. Stranski–Krastanov growth is an intermediary process characterized by both 2D layer and 3D island growth. Transition from the layer-by-layer to island-based growth occurs at a critical layer thickness which is highly dependent on the chemical and physical properties, such as surface energies and lattice parameters, of the substrate and film. Figure 1 is a schematic representation of the three main growth modes for various surface coverages.
Determining the mechanism by which a thin film grows requires consideration of the chemical potentials of the first few deposited layers. A model for the layer chemical potential per atom has been proposed by Markov as:

μ(n) = μ∞ + [φa' − φa(n) + εd(n) + εe(n)]

where μ∞ is the bulk chemical potential of the adsorbate material, φa' is the desorption energy of an adsorbate atom from a wetting layer of the same material, φa(n) the desorption energy of an adsorbate atom from the substrate, εd(n) is the per-atom misfit dislocation energy, and εe(n) the per-atom homogeneous strain energy. In general, the values of φa', φa(n), εd(n), and εe(n) depend in a complex way on the thickness of the growing layers and the lattice misfit between the substrate and adsorbate film. In the limit of small strains, εe(n) ≪ μ∞, the criterion for a film growth mode is dependent on dμ(n)/dn.
VW growth: dμ(n)/dn < 0 (adatom cohesive force is stronger than surface adhesive force)
FM growth: dμ(n)/dn > 0 (surface adhesive force is stronger than adatom cohesive force)
SK growth can be described by both of these inequalities. While initial film growth follows an FM mechanism, i.e. positive differential μ, nontrivial amounts of strain energy accumulate in the deposited layers. At a critical thickness, this strain induces a sign reversal in the chemical potential, i.e. negative differential μ, leading to a switch in the growth mode. At this point it is energetically favorable to nucleate islands and further growth occurs by a VW type mechanism. A thermodynamic criterion for layer growth similar to the one presented above can be obtained using a force balance of surface tensions and contact angle.
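The sign criterion above can be illustrated with a toy classification sketch; the dμ/dn values below are invented solely to show the logic, not taken from any material system:

```python
def classify_growth(dmu_dn):
    """Classify thin-film growth mode from the sign of d(mu)/dn for successive layers.

    Consistently negative -> Volmer-Weber (island) growth;
    consistently positive -> Frank-van der Merwe (layer-by-layer) growth;
    positive then negative -> Stranski-Krastanov growth, with the sign change
    marking the critical thickness.
    """
    signs = [d > 0 for d in dmu_dn]
    if all(signs):
        return "Frank-van der Merwe (layer-by-layer)"
    if not any(signs):
        return "Volmer-Weber (island)"
    critical_layer = signs.index(False) + 1
    return f"Stranski-Krastanov, critical thickness near layer {critical_layer}"

# Invented d(mu)/dn values for the first six layers, for illustration only:
print(classify_growth([0.8, 0.5, 0.2, -0.1, -0.4, -0.6]))
```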
Since the formation of wetting layers occurs in a commensurate fashion at a crystal surface, there is often an associated misfit between the film and the substrate due to the different lattice parameters of each material. Attachment of the thinner film to the thicker substrate induces a misfit strain at the interface, given by ε = (af − as)/as, where af and as are the film and substrate lattice constants, respectively. As the wetting layer thickens, the associated strain energy increases rapidly. In order to relieve the strain, island formation can occur in either a dislocated or coherent fashion. In dislocated islands, strain relief arises by forming interfacial misfit dislocations. The reduction in strain energy accommodated by introducing a dislocation is generally greater than the concomitant cost of increased surface energy associated with creating the clusters. The thickness of the wetting layer at which island nucleation initiates, called the critical thickness, is strongly dependent on the lattice mismatch between the film and substrate, with a greater mismatch leading to a smaller critical thickness. Values of the critical thickness can range from submonolayer coverage up to several monolayers. Figure 2 illustrates a dislocated island during SK growth after reaching a critical layer height. A pure edge dislocation is shown at the island interface to illustrate the relieved structure of the cluster.
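As a small worked example of the misfit strain just defined, the sketch below uses nominal bulk lattice constants of germanium and silicon, a heteroepitaxial pair discussed later in this article:

```python
def misfit_strain(a_film, a_substrate):
    """Lattice misfit strain between an epitaxial film and its substrate."""
    return (a_film - a_substrate) / a_substrate

# Nominal bulk lattice constants in angstroms (Ge film on a Si substrate):
a_Ge, a_Si = 5.658, 5.431
eps = misfit_strain(a_Ge, a_Si)
print(f"Ge/Si misfit strain: {eps:.3%}")   # roughly 4% mismatch
```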
In some cases, most notably the Si/Ge system, nanoscale dislocation-free islands can be formed during SK growth by introducing undulations into the near-surface layers of the substrate. These regions of local curvature serve to elastically deform both the substrate and island, relieving accumulated strain and bringing the wetting layer and island lattice constants closer to their bulk values. This elastic instability is known as the Grinfeld instability (formerly Asaro–Tiller–Grinfeld; ATG). The resulting islands are coherent and defect-free, garnering them significant interest for use in nanoscale electronic and optoelectronic devices. Such applications are discussed briefly later. A schematic of the resulting epitaxial structure is shown in figure 3, which highlights the induced radius of curvature at the substrate surface and in the island. Finally, strain stabilization indicative of coherent SK growth decreases with decreasing inter-island separation. At large areal island densities (smaller spacing), curvature effects from neighboring clusters will cause dislocation loops to form, leading to the creation of defected islands.
Monitoring SK growth
Wide beam techniques
Analytical techniques such as Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), and reflection high-energy electron diffraction (RHEED) have been extensively used to monitor SK growth. AES data obtained in situ during film growth in a number of model systems, such as Pd/W(100), Pb/Cu(110), Ag/W(110), and Ag/Fe(110), show characteristic segmented curves like those presented in figure 4. The height of the film Auger peaks, plotted as a function of surface coverage Θ, initially exhibits a straight line, which is indicative of AES data for FM growth. There is a clear break point at a critical adsorbate surface coverage followed by another linear segment at a reduced slope. The paired break point and shallow line slope is characteristic of island nucleation; a similar plot for FM growth would exhibit many such line and break pairs while a plot of the VW mode would be a single line of low slope. In some systems, reorganization of the 2D wetting layer results in decreasing AES peaks with increasing adsorbate coverage. Such situations arise when many adatoms are required to reach a critical nucleus size on the surface and at nucleation the resulting adsorbed layer constitutes a significant fraction of a monolayer. After nucleation, metastable adatoms on the surface are incorporated into the nuclei, causing the Auger signal to fall. This phenomenon is particularly evident for deposits on a molybdenum substrate.
The evolution of island formation during the SK transition has also been successfully measured using LEED and RHEED techniques. Diffraction data obtained via various LEED experiments have been effectively used in conjunction with AES to measure the critical layer thickness at the onset of island formation. In addition, RHEED oscillations have proven very sensitive to the layer-to-island transition during SK growth, with the diffraction data providing detailed crystallographic information about the nucleated islands. By following the time dependence of LEED, RHEED, and AES signals, extensive information on surface kinetics and thermodynamics has been gathered for a number of technologically relevant systems.
Microscopies
Unlike the techniques presented in the last section, in which probe size can be relatively large compared to island size, surface microscopies such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), scanning tunneling microscopy (STM), and atomic force microscopy (AFM) offer the opportunity for direct viewing of deposit/substrate combination events. The extreme magnifications afforded by these techniques, often down to the nanometer length scale, make them particularly applicable for visualizing the strongly 3D islands. UHV-SEM and TEM are routinely used to image island formation during SK growth, enabling a wide range of information to be gathered, ranging from island densities to equilibrium shapes. AFM and STM have become increasingly utilized to correlate island geometry to the surface morphology of the surrounding substrate and wetting layer. These visualization tools are often used to complement quantitative information gathered during wide-beam analyses.
Application to nanotechnology
As mentioned previously, coherent island formation during SK growth has attracted increased interest as a means for fabricating epitaxial nanoscale structures, particularly quantum dots (QDs). Widely used quantum dots grown in the SK-growth-mode are based on the material combinations Si/Ge or InAs/GaAs. Significant effort has been spent developing methods to control island organization, density, and size on a substrate. Techniques such as surface dimpling with a pulsed laser and control over growth rate have been successfully applied to alter the onset of the SK transition or even suppress it altogether. The ability to control this transition either spatially or temporally enables manipulation of physical parameters of the nanostructures, like geometry and size, which, in turn, can alter their electronic or optoelectronic properties (i.e. band gap). For example, Schwarz–Selinger, et al. have used surface dimpling to create surface miscuts on Si that provide preferential Ge island nucleation sites surrounded by a denuded zone. In a similar fashion, lithographically patterned substrates have been used as nucleation templates for SiGe clusters. Several studies have also shown that island geometries can be altered during SK growth by controlling substrate relief and growth rate. Bimodal size distributions of Ge islands on Si are a striking example of this phenomenon in which pyramidal and dome-shaped islands coexist after Ge growth on a textured Si substrate. Such ability to control the size, location, and shape of these structures could provide invaluable techniques for 'bottom-up' fabrication schemes of next-generation devices in the microelectronics industry.
See also
Epitaxy
Thin films
Molecular-beam epitaxy
References
Thin films
Research in Bulgaria | Stranski–Krastanov growth | [
"Materials_science",
"Mathematics",
"Engineering"
] | 2,448 | [
"Nanotechnology",
"Planes (geometry)",
"Thin films",
"Materials science"
] |
14,557,176 | https://en.wikipedia.org/wiki/Preisach%20model%20of%20hysteresis | In electromagnetism, the Preisach model of hysteresis is a model of magnetic hysteresis. Originally, it generalized hysteresis as the relationship between the magnetic field and magnetization of a magnetic material as the parallel connection of independent relay hysterons. It was first suggested in 1935 by Ferenc (Franz) Preisach in the German academic journal . In the field of ferromagnetism, the Preisach model is sometimes thought to describe a ferromagnetic material as a network of small independently acting domains, each magnetized to a value of either or . A sample of iron, for example, may have evenly distributed magnetic domains, resulting in a net magnetic moment of zero.
Mathematically similar models seem to have been independently developed in other fields of science and engineering. One notable example is the model of capillary hysteresis in porous materials developed by Everett and co-workers. Since then, following the work of people like M. Krasnoselkii, A. Pokrovskii, A. Visintin, and I.D. Mayergoyz, the model has become widely accepted as a general mathematical tool for the description of hysteresis phenomena of different kinds.
Nonideal relay
The relay hysteron is the fundamental building block of the Preisach model. It is described as a two-valued operator denoted by R(α,β). Its I/O map takes the form of a loop.
For a relay of magnitude 1, α defines the "switch-off" threshold and β defines the "switch-on" threshold.
Graphically, if the input x is less than α, the output y is "low" or "off". As we increase x, the output remains low until x reaches β, at which point the output switches "on". Further increasing x produces no change. Decreasing x, y does not go low until x reaches α again. It is apparent that the relay operator R(α,β) traces the path of a loop, and its next state depends on its past state.
Mathematically, the output y(t) of R(α,β) is expressed as:
y(t) = +1 if, the last time x(t) was outside the boundaries α ≤ x ≤ β, it was in the region x ≥ β; and y(t) = −1 if, the last time x(t) was outside the boundaries α ≤ x ≤ β, it was in the region x ≤ α.
This definition of the hysteron shows that the current value of the complete hysteresis loop depends upon the history of the input variable x(t).
Discrete Preisach model
The Preisach model consists of many relay hysterons connected in parallel, given weights, and summed. This can be visualized by a block diagram:
Each of these relays has different α and β thresholds and is scaled by a weight μi. With increasing N (the number of relays), the true hysteresis curve is approximated better.
In the limit as N approaches infinity, we obtain the continuous Preisach model.
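A minimal sketch of the discrete model just described, assuming relay output values of ±1 and using arbitrary illustrative thresholds and weights (not fitted to any measured hysteresis loop):

```python
class Relay:
    """Non-ideal relay hysteron with switch-off threshold alpha and
    switch-on threshold beta (alpha < beta); output is -1 or +1."""
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, x):
        if x >= self.beta:
            self.state = +1
        elif x <= self.alpha:
            self.state = -1
        # otherwise the previous state is retained (hysteresis)
        return self.state

class DiscretePreisach:
    """Weighted parallel connection of N relay hysterons."""
    def __init__(self, relays, weights):
        self.relays, self.weights = relays, weights

    def output(self, x):
        return sum(w * r.update(x) for r, w in zip(self.relays, self.weights))

# Illustrative model only: thresholds and weights are arbitrary.
relays = [Relay(-0.6, 0.2), Relay(-0.2, 0.4), Relay(0.0, 0.8)]
model = DiscretePreisach(relays, weights=[0.5, 0.3, 0.2])
for x in [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]:
    print(x, model.output(x))   # the same x gives different outputs up and down
```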
Preisach plane
One of the easiest ways to look at the Preisach model is using a geometric interpretation.
Consider a plane of coordinates (α, β). On this plane, each point (αi, βi) is mapped to a specific relay hysteron R(αi,βi). Each relay can be plotted on this so-called Preisach plane with its (α, β) values. Depending on their distribution on the Preisach plane, the relay hysterons can represent hysteresis with good accuracy.
We consider only the half-plane α < β, as any other case does not have a physical equivalent in nature.
Next, we take a specific point on the half-plane and build a right triangle by drawing two lines parallel to the axes, both from the point to the line α = β.
We now present the Preisach density function, denoted μ(α, β). This function describes the amount of relay hysterons at each distinct pair of (α, β) values. By default, μ(α, β) = 0 outside the right triangle.
A modified formulation of the classical Preisach model has been presented, allowing an analytical expression of the Everett function. This makes the model considerably faster and especially adequate for inclusion in electromagnetic field computation or electric circuit analysis codes.
Vector Preisach model
The vector Preisach model is constructed as the linear superposition of scalar models. To account for the uniaxial anisotropy of the material, the Everett functions are expanded in Fourier coefficients. In this case, the measured and simulated curves are in very good agreement.
Another approach uses a different relay hysteron: closed surfaces defined on the 3D input space. In general, a spherical hysteron is used for vector hysteresis in 3D, and a circular hysteron is used for vector hysteresis in 2D.
Applications
The Preisach model has been applied to model hysteresis in a wide variety of fields, including to study irreversible changes in soil hydraulic conductivity as a result of saline and sodic conditions, the modeling of soil water retention and the effect of stress and strains on soil and rock structures.
See also
Jiles–Atherton model
Stoner–Wohlfarth model
References
External links
University College, Cork Hysteresis Tutorial
Budapest University of Technology and Economics, Hungary Matlab implementation of the Preisach model developed by Zs. Szabó.
Python implementation of Preisach Model.
Matlab implementation of Preisach Model.
Magnetic hysteresis
Hysteresis | Preisach model of hysteresis | [
"Physics",
"Materials_science",
"Engineering"
] | 1,063 | [
"Physical phenomena",
"Hysteresis",
"Magnetic hysteresis",
"Materials science"
] |
14,559,306 | https://en.wikipedia.org/wiki/InXitu | inXitu was a company based in Mountain View, California, which developed portable X-ray diffraction (XRD) and X-ray fluorescence (XRF) analysis instruments. The company name was a combination of the terms in situ and X-ray, portraying the company's dedication to developing X-ray instruments that could be easily transported to the original site of the material being analyzed.
Company history
The basis for inXitu began in 2003 when Philippe Sarrazin worked with NASA to file a patent on techniques used to develop the CheMin instrument for the Mars Curiosity rover. Sarrazin left NASA to form inXitu Research, which received two Small Business Innovation Research grants from Ames Research Center in 2004 to continue work on CheMin. inXitu Research merged with Microwave Power Technology (MPT) in 2007 and incorporated as inXitu, Inc. MPT's research and development in high vacuum systems was meshed with inXitu's experience with XRD equipment, and in early 2008 the company released Terra, a commercial field-portable XRD/XRF instrument. Bradley Boyer joined the company as President and Chief Executive Officer in September 2008. inXitu formed a partnership with Innov-X in December 2008, in which inXitu would manufacture XRD equipment for sale under the Innov-X brand name.
Also in 2008, inXitu worked with the Getty Conservation Institute to develop X-Duetto, a portable and non-destructive XRD/XRF device used for the analysis of works of art. It was commercially released as Duetto in mid 2009. The company released the BTX instrument in mid 2009, which is a desktop XRD/XRF device developed from Terra; the second generation BTX-II was released in early 2010.
inXitu was purchased by Olympus in November 2011.
References
Diffraction
Fluorescence
Defunct technology companies of the United States
X-ray equipment manufacturers | InXitu | [
"Physics",
"Chemistry",
"Materials_science"
] | 397 | [
"Luminescence",
"Fluorescence",
"Spectrum (physical sciences)",
"Diffraction",
"Crystallography",
"Spectroscopy"
] |
14,564,979 | https://en.wikipedia.org/wiki/Hypersonic%20flight | Hypersonic flight is flight through the atmosphere below altitudes of about at speeds greater than Mach 5, a speed where dissociation of air begins to become significant and high heat loads exist. Speeds over Mach 25 have been achieved below the thermosphere as of 2020.
Hypersonic vehicles are able to maneuver through the atmosphere in a non-parabolic trajectory, but their aerodynamic heat loads need to be managed.
History
The first manufactured object to achieve hypersonic flight was the two-stage Bumper rocket, consisting of a WAC Corporal second stage set on top of a V-2 first stage. In February 1949, at White Sands, the rocket reached a speed of about Mach 6.7. The vehicle, however, burned on atmospheric re-entry, and only charred remnants were found. In April 1961, Russian Major Yuri Gagarin became the first human to travel at hypersonic speed, during the world's first piloted orbital flight. Soon after, in May 1961, Alan Shepard became the first American and second person to fly hypersonic when his capsule reentered the atmosphere at a speed above Mach 5 at the end of his suborbital flight over the Atlantic Ocean.
In November 1961, Air Force Major Robert White flew the X-15 research aircraft at speeds over Mach 6.
On 3 October 1967, in California, an X-15 reached Mach 6.7.
The reentry problem of a space vehicle was extensively studied. The NASA X-43A flew on scramjet for 10 seconds, and then glided for 10 minutes on its last flight in 2004. The Boeing X-51 Waverider flew on scramjet for 210 seconds in 2013, finally reaching Mach 5.1 on its fourth flight test. The hypersonic regime has since become the subject for further study during the 21st century, and strategic competition between the United States, India, Russia, and China.
Physics
Stagnation point
The stagnation point of air flowing around a body is the point where its local velocity is zero; the airflow divides and passes around this location. A shock wave forms ahead of the body, which deflects the air away from the stagnation point and insulates the flight body from the atmosphere. This can affect the lifting ability of a flight surface to counteract its drag and subsequent free fall.
In order to maneuver in the atmosphere at faster speeds than supersonic, the forms of propulsion can still be airbreathing systems, but a ramjet does not suffice for a system to attain Mach 5, as a ramjet slows down the airflow to subsonic. Some systems (waveriders) use a first stage rocket to boost a body into the hypersonic regime. Other systems (boost-glide vehicles) use scramjets after their initial boost, in which the speed of the air passing through the scramjet remains supersonic. Other systems (munitions) use a cannon for their initial boost.
High temperature effect
Hypersonic flow is a high-energy flow. The ratio of kinetic energy to the internal energy of the gas increases as the square of the Mach number. When this flow enters a boundary layer, there are high viscous effects due to the friction between air and the high-speed object. In this case, the high kinetic energy is converted in part to internal energy, and the gas temperature rises in proportion to the internal energy. Therefore, hypersonic boundary layers are high-temperature regions due to the viscous dissipation of the flow's kinetic energy. Another region of high-temperature flow is the shock layer behind the strong bow shock wave. In the case of the shock layer, the flow's velocity decreases discontinuously as it passes through the shock wave. This results in a loss of kinetic energy and a gain of internal energy behind the shock wave. Due to the high temperatures behind the shock wave, dissociation of molecules in the air becomes thermally active. For example, for air above roughly 2,000 K, dissociation of diatomic oxygen into oxygen radicals is active (O2 → 2O); above roughly 4,000 K, dissociation of diatomic nitrogen into nitrogen radicals is active (N2 → 2N). Consequently, in this temperature range, molecular dissociation followed by recombination of oxygen and nitrogen radicals produces nitric oxide (N2 + O2 → 2NO), which then dissociates and recombines to form ions (N + O → NO+ + e−), so that a plasma forms.
Low density flow
At standard sea-level conditions, the mean free path of air molecules is a small fraction of a micrometre. At very high altitudes, where the air is far thinner, the mean free path becomes many orders of magnitude larger. Because of this large mean free path, aerodynamic concepts, equations, and results based on the assumption of a continuum begin to break down, and aerodynamics must instead be considered from kinetic theory. This regime of aerodynamics is called low-density flow.
For a given aerodynamic condition, low-density effects depend on the value of a nondimensional parameter called the Knudsen number, defined as Kn = λ/l, where λ is the mean free path and l is the typical length scale of the object considered. The value of the Knudsen number based on the nose radius, Kn = λ/Rn, can be near one.
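A quick numerical illustration of the Knudsen number just defined; the values are representative rather than taken from a specific vehicle:

```python
def knudsen_number(mean_free_path_m, length_scale_m):
    """Kn = lambda / l; Kn << 1 suggests continuum flow, Kn near 1 or larger
    suggests rarefied (low-density) flow."""
    return mean_free_path_m / length_scale_m

# Illustrative values only: a sea-level-like mean free path of ~68 nm against a
# 0.1 m nose radius, then a high-altitude mean free path of the same order as
# the nose radius.
print(knudsen_number(6.8e-8, 0.1))   # ~7e-7: continuum assumptions hold
print(knudsen_number(0.1, 0.1))      # ~1: low-density effects matter
```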
Hypersonic vehicles frequently fly at very high altitudes and therefore encounter low-density conditions. Hence, the design and analysis of hypersonic vehicles sometimes require consideration of low-density flow. New generations of hypersonic airplanes may spend a considerable portion of their mission at high altitudes, and for these vehicles, low-density effects will become more significant.
Thin shock layer
The flow field between the shock wave and the body surface is called the shock layer. As the Mach number M increases, the angle of the resulting shock wave decreases. This Mach angle is described by the equation sin(μ) = a/v, where a is the speed of the sound wave and v is the flow velocity. Since M = v/a, the equation becomes sin(μ) = 1/M, i.e. μ = arcsin(1/M). Higher Mach numbers position the shock wave closer to the body surface, thus at hypersonic speeds, the shock wave lies extremely close to the body surface, resulting in a thin shock layer. At low Reynolds number, the boundary layer grows quite thick and merges with the shock wave, leading to a fully viscous shock layer.
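A small worked example of the Mach-angle relation above, showing how the angle shrinks as the Mach number grows:

```python
import math

def mach_angle_deg(mach):
    """Mach angle mu = arcsin(1/M), defined only for supersonic flow (M >= 1)."""
    if mach < 1:
        raise ValueError("Mach angle is only defined for M >= 1")
    return math.degrees(math.asin(1.0 / mach))

for m in (1.2, 2, 5, 10, 25):
    print(f"M = {m:>4}: Mach angle = {mach_angle_deg(m):5.1f} deg")
```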
Viscous interaction
The thickness of the compressible-flow boundary layer increases in proportion to the square of the Mach number and inversely with the square root of the Reynolds number.
At hypersonic speeds, this effect becomes much more pronounced, due to the exponential reliance on the Mach number. Since the boundary layer becomes so large, it interacts more viscously with the surrounding flow. The overall effect of this interaction is to create a much higher skin friction than normal, causing greater surface heat flow. Additionally, the surface pressure spikes, which results in a much larger aerodynamic drag coefficient. This effect is extreme at the leading edge and decreases as a function of length along the surface.
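The scaling stated above can be sketched schematically; the proportionality constant is left unspecified, so only ratios between cases are meaningful:

```python
def boundary_layer_thickness_scaling(mach, reynolds, x=1.0, c=1.0):
    """Schematic scaling delta ~ c * x * M**2 / sqrt(Re).

    c and x are placeholders (constant and streamwise position), so the
    absolute value carries no meaning; compare ratios between cases instead.
    """
    return c * x * mach**2 / reynolds**0.5

base = boundary_layer_thickness_scaling(mach=2, reynolds=1e7)
hyper = boundary_layer_thickness_scaling(mach=8, reynolds=1e7)
print(hyper / base)   # 16x thicker at the same Reynolds number
```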
Entropy layer
The entropy layer is a region of large velocity gradients caused by the strong curvature of the shock wave. The entropy layer begins at the nose of the aircraft and extends downstream close to the body surface. Downstream of the nose, the entropy layer interacts with the boundary layer which causes an increase in aerodynamic heating at the body surface. Although the shock wave at the nose at supersonic speeds is also curved, the entropy layer is only observed at hypersonic speeds because the magnitude of the curve is far greater at hypersonic speeds.
Propulsion
Controlled detonation
Researchers in China have used shock waves in a detonation chamber to compress ionized argon plasma waves moving at Mach 14. The waves are directed into magnetohydrodynamic (MHD) generators to create a current pulse that could be scaled up to gigawatt scale, given enough argon gas to feed into the MHD generators.
Rotating detonation
A rotating detonation engine (RDE) might propel airframes in hypersonic flight; on 14 December 2023 engineers at GE Aerospace demonstrated their test rig, which is to combine an RDE with a ramjet/scramjet, in order to evaluate the regimes of rotating detonation combustion. The goal is to achieve sustainable turbine-based combined cycle (TBCC) propulsion systems, at speeds between Mach 1 and Mach 5.
Applications
Shipping
Transport consumes energy for three purposes: overcoming gravity, overcoming air/water friction, and achieving terminal velocity. The reduced trip times and higher flight altitudes reduce the first two, while increasing the third. Proponents claim that the net energy costs of hypersonic transport can be lower than those of conventional transport while slashing journey times.
Stratolaunch Roc can be used to launch hypersonic aircraft.
Hermeus demonstrated transition from turbojet aircraft engine operation to ramjet operation on 17 November 2022, thus avoiding the need to boost aircraft velocities by rocket or scramjet.
See: SR-72, § Mayhem
Weapons
Two main types of hypersonic weapons are hypersonic cruise missiles and hypersonic glide vehicles. Hypersonic weapons, by definition, travel five or more times the speed of sound. Hypersonic cruise missiles, which are powered by scramjets, are limited to lower altitudes; hypersonic glide vehicles can travel higher.
Hypersonic vehicles are much slower than ballistic (i.e. sub-orbital or fractional orbital) missiles, because they travel in the atmosphere, and ballistic missiles travel in the vacuum above the atmosphere. However, they can use the atmosphere to manoeuvre, making them capable of large-angle deviations from a ballistic trajectory. A hypersonic glide vehicle is usually launched with a ballistic first stage, then deploys wings and switches to hypersonic flight as it re-enters the atmosphere, allowing the final stage to evade existing missile defense systems which were designed for ballistic-only missiles.
According to a CNBC July 2019 report (and now in a CNN 2022 report), Russia and China lead in hypersonic weapon development, trailed by the United States, and in this case the problem is being addressed in a joint program of the entire Department of Defense. To meet this development need, the US Army is participating in a joint program with the US Navy and Air Force, to develop a hypersonic glide body. India is also developing such weapons. France and Australia may also be pursuing the technology. Japan is acquiring both scramjet (Hypersonic Cruise Missile), and boost-glide weapons (Hyper Velocity Gliding Projectile).
China
China's XingKong-2 (星空二号, Starry-sky-2), a waverider, had its first flight 3 August 2018.
In August 2021 China launched a boost-glide vehicle to low-earth orbit, circling Earth before maneuvering toward its target location, missing its target by two dozen miles. However China has responded that the vehicle was a spacecraft, and not a missile; there was a July 2021 test of a spaceplane, according to Chinese Foreign Ministry Spokesperson Zhao Lijian; Todd Harrison points out that an orbital trajectory would take 90 minutes for a spaceplane to circle Earth (which would defeat the mission of a weapon in hypersonic flight). The US DoD's headquarters (The Pentagon) reported in October 2021 that two such hypersonic launches have occurred; one launch did not demonstrate the accuracy needed for a precision weapon; the second launch by China demonstrated its ability to change trajectories, according to Pentagon reports on the 2021 competition in arms capabilities.
In 2022, China unveiled two more hypersonic models. An AI simulation has revealed that a Mach 11 aircraft can simply outrun a Mach 1.3 fighter attempting to engage it, while firing its missile at the "pursuing" fighter. This strategy entails a fire control system to accomplish an over-the-shoulder missile launch, which does not yet exist (2023).
In February 2023, the DF-27 covered in 12 minutes, according to leaked secret documents. The capability directly threatens Guam, and US Navy aircraft carriers.
Russia
In 2016, Russia is believed to have conducted two successful tests of Avangard, a hypersonic glide vehicle. The third known test, in 2017, failed. In 2018, an Avangard was launched from the Dombarovskiy missile base, reaching its target at the Kura shooting range. Avangard uses new composite materials intended to withstand the extreme temperatures its environment reaches at hypersonic speeds. Russia considered its carbon fiber solution to be unreliable, and replaced it with new composite materials. Two Avangard hypersonic glide vehicles (HGVs) will first be mounted on SS-19 ICBMs; on 27 December 2019 the weapon was first fielded to the Yasnensky Missile Division, a unit in the Orenburg Oblast. In an earlier report, Franz-Stefan Gady named the unit as the 13th Regiment/Dombarovskiy Division (Strategic Missile Force).
In 2021 Russia launched a 3M22 Zircon antiship missile over the White Sea, as part of a series of tests. "Kinzhal and Zircon (Tsirkon) are standoff strike weapons". A coordinated series of missile exercises, some of them hypersonic, was launched on 18 February 2022 in an apparent display of power projection. The launch platforms ranged from submarines in the Barents Sea in the Arctic to ships on the Black Sea to the south of Russia. The exercise included an RS-24 Yars ICBM, which was launched from the Plesetsk Cosmodrome in Northern Russia and reached its destination on the Kamchatka Peninsula in Eastern Russia. Ukraine estimated a 3M22 Zircon was used against it, but the missile apparently did not exceed Mach 3 and was shot down on 7 February 2024 over Kyiv.
United States
These tests have prompted US responses in weapons development. By 2018, the AGM-183 and Long-Range Hypersonic Weapon were in development per John Hyten's USSTRATCOM statement on 8 August 2018 (UTC). At least one vendor is developing ceramics to handle the temperatures of hypersonics systems. There are over a dozen US hypersonics projects as of 2018, notes the commander of USSTRATCOM, from which a future hypersonic cruise missile is sought, perhaps by Q4 FY2021. The Long Range Precision Fires (LRPF) CFT is supporting Space and Missile Defense Command's pursuit of hypersonics. Joint programs in hypersonics are informed by Army work; however, at the strategic level, the bulk of the hypersonics work remains at the Joint level. Long Range Precision Fires (LRPF) is an Army priority, and also a DoD joint effort. The Army and Navy's Common Hypersonic Glide Body (C-HGB) had a successful test of a prototype in March 2020. A wind tunnel for testing hypersonic vehicles was completed in Texas (2021). The Army's land-based hypersonic missile is intended to have a long range. By adding rocket propulsion to a shell or glide body, the joint effort shaved five years off the likely fielding time for hypersonic weapon systems. Countermeasures against hypersonics will require sensor data fusion: both radar and infrared sensor tracking data will be required to capture the signature of a hypersonic vehicle in the atmosphere. There are also privately developed hypersonic systems, as well as critics.
DoD tested a Common Hypersonic Glide Body (C-HGB) in 2020. The Air Force dropped out of the tri-service hypersonic project in 2020, leaving only the Army and Navy on the C-HGB.
According to Air Force chief scientist, Dr. Greg Zacharias, the US anticipates having hypersonic weapons by the 2020s, hypersonic drones by the 2030s, and recoverable hypersonic drone aircraft by the 2040s. The focus of DoD development will be on air-breathing boost-glide hypersonics systems. Countering hypersonic weapons during their cruise phase will require radar with longer range, as well as space-based sensors, and systems for tracking and fire control. A mid-2021 report from the Congressional Research Service states the United States is "unlikely" to field an operational hypersonic glide vehicle (HGV) until 2023.
On 21 October 2021, the Pentagon stated that a test of a hypersonic glide body failed to complete because its booster failed; according to Lt. Cmdr. Timothy Gorman the booster was not part of the equipment under test, but the booster's failure mode will be reviewed to improve the test setup. The test occurred at Pacific Spaceport Complex – Alaska, on Kodiak island. Three rocketsondes at Wallops Island completed successful tests earlier that week, for the hypersonics effort. On 29 October 2021 the booster rocket for the Long-Range Hypersonic Weapon was successfully tested in a static test; the first stage thrust vector control system control system was included. On 26 October 2022 Sandia National Laboratories conducted a successful test of hypersonic technologies at Wallops Island.
On 28 June 2024 DoD announced a successful recent end-to-end test of the US Army's Long-Range Hypersonic Weapon all-up round (AUR) and the US Navy's Conventional Prompt Strike. The missile was launched from the Pacific Missile Range Facility, Kauai, Hawaii.
In September 2021, and in March 2022, US vendors Raytheon/Northrop Grumman, and Lockheed respectively, first successfully tested their air-launched, scramjet-powered hypersonic cruise missiles, which were funded by DARPA. By September 2022 Raytheon was selected for fielding Hypersonic Attack Cruise Missile (HACM), a scramjet-powered hypersonic missile by FY2027.
In March 2024 Stratolaunch Roc launched TA-1, a vehicle which is nearing Mach 5 at in a powered flight, a risk-reduction exercise for TA-2. In a similar development Castelion launched its low-cost hypersonic platform in the Mojave desert, in March 2024.
Iran
In 2022, Iran was believed to have constructed its first hypersonic missile. Amir Ali Hajizadeh, the commander of the Air Force of the Islamic Republic of Iran's Revolutionary Guards Corps, announced the construction of the Islamic Republic's first hypersonic missile. He noted: "This new missile was produced to counter air defense shields and passes through all missile defense systems and which represents a big leap in the generation of missiles", and said it has a speed above Mach 13, but Col. Rob Lodwick, the spokesman for the Pentagon on Middle East affairs, said that there are doubts in this regard.
In 2021, DoD was codifying flight test guidelines, knowledge gained from Conventional Prompt Strike (CPS), and the other hypersonics programs, for some 70 hypersonics R&D programs alone, as of 2021. In 2021-2023, Heidi Shyu, the Under Secretary of Defense for Research and Engineering (USD(R&E)) is pursuing a program of annual rapid joint experiments, including hypersonics capabilities, to bring down their cost of development. A hypersonic test bed aims to bring the frequency of tests to one per week.
Other programs
France, Australia, India, Germany, Japan, South Korea, North Korea, and Iran also have ongoing hypersonic weapon projects or research programs.
Australia and the US have begun joint development of air-launched hypersonic missiles, as announced by a Pentagon statement on 30 November 2020. The development will build on the $54 million Hypersonic International Flight Research Experimentation (HIFiRE) under which both nations collaborated on over a 15-year period. Small and large companies will all contribute to the development of these hypersonic missiles, named SCIFIRE in 2022.
Defenses
In May 2023 Ukraine shot down a Kinzhal with a Patriot. IBCS, or the Integrated Air and Missile Defense Battle Command System is an Integrated Air and Missile Defense (IAMD) capability designed to work with Patriots and other missiles.
Rand 2017 assessment
Rand Corporation (28 September 2017) estimates there is less than a decade to prevent Hypersonic Missile proliferation.
In the same way that anti-ballistic missiles were developed as countermeasures to ballistic missiles, counter-countermeasures to hypersonics systems were not yet in development, as of 2019. See the National Defense Space Architecture (2021), above. But by 2019, $157.4 million was allocated in the FY2020 Pentagon budget for hypersonic defense, out of $2.6 billion for all hypersonic-related research. $207 million of the FY2021 budget was allocated to defensive hypersonics, up from the FY2020 budget allocation of $157 million. Both the US and Russia withdrew from the Intermediate-Range Nuclear Forces (INF) Treaty in February 2019. This will spur arms development, including hypersonic weapons, in FY2021 and forward. By 2021 the Missile Defense Agency was funding regional countermeasures against hypersonic weapons in their glide phase. James Acton characterized the proliferation of hypersonic vehicles as never-ending in October 2021; Jeffery Lewis views the proliferation as additional arguments for ending the arms race. Doug Loverro assesses that both missile defense and competition need rethinking. CSIS assesses that hypersonic defense should be the US' priority over hypersonic weapons.
NDSA / PWSA
As part of their Hypersonic vehicle tracking mission, the Space Development Agency (SDA) launched four satellites and the Missile Defense Agency (MDA) launched two satellites on 14 February 2024 (launch USSF-124). The satellites will share the same orbit, which allows the SDA's wide field of view (WFOV) satellites and the MDA's medium field of view (MFOV) downward-looking satellites to traverse the same terrain of Earth. The SDA's four satellites are part of its Tranche 0 tracking layer (T0TL). The MDA's two satellites are HBTSS or Hypersonic and ballistic tracking space sensors.
Additional capabilities of Tranche 0 of the National defense space architecture (NDSA), also known as the Proliferated warfighting space architecture (PWSA) will be tested over the next two years.
Proposed
Aircraft
I-Plane
14-X
Espadon hypersonic combat aircraft concept (program conducted by the ONERA)
Avatar (spacecraft)
Advanced Technology Vehicle
DARPA XS-1
Destinus hydrogen-powered hypersonic aircraft. A prototype was tested last year.
Dream Chaser
NASA X-43
HyperSoar
HyperStar hypersonic passenger airliner
Falcon HTV-2
Boeing Commercial Airplanes hypersonic airliner Concept
Lockheed Martin SR-72
Kholod
Ayaks waverider spaceplane
Programme for Reusable In-orbit Demonstrator in Europe (PRIDE)
Sänger II
HyShot
Hytex
Horus
SHEFEX
Skylon
Reaction Engines A2
Hypersonic Air Vehicle Experimental (HVX) with Concept V aircraft
Spartan
HEXAFLY
SpaceLiner
STRATOFLY
Zero Emission Hyper Sonic Transport
Hermeus Quarterhorse unmanned hypersonic demonstrator designed to land and take off on conventional runways.
Hermeus Halcyon hypersonic transport
Venus Aerospace Stargazer hypersonic airliner with rotating detonation rocket engine
POLARIS Raumflugzeuge GmbH is developing and testing a hypersonic spaceplane for the German Armed Forces in Peenemünde
Bombers
Expendable Hypersonic Air-Breathing Multi-Mission Demonstrator ("Mayhem") Based on § HAWC and HSSW: "solid rocket-boosted, air-breathing, hypersonic conventional cruise missile", a follow-on to AGM-183A. As of 2020 no design work had been done. By 2022 Mayhem was to be tasked with ISR and strike missions, as a possible bomber. Leidos is preparing a system requirements review, and a conceptual design for these missions. Draper Labs has begun a partnership with Leidos. Kratos is preparing a conceptual design for Mayhem, using Air Force Research Laboratory (AFRL) digital engineering techniques in a System design agent team, a collaboration with Leidos, Calspan, and Draper. DIU is soliciting additional Hypersonic and High-Cadence Airborne Testing Capabilities (HyCAT), for Mayhem.
Cruise missiles
Advanced Hypersonic Weapon (AHW)
Hypersonic Air-breathing Weapon Concept (HAWC, pronounced "hawk"). September 2021: HAWC is DARPA-funded. Built by Raytheon and Northrop Grumman, HAWC is the first US scramjet-powered hypersonic missile to successfully complete a free flight test in the 2020s. DARPA's goals for the test, which were successfully met, were: "vehicle integration and release sequence, safe separation from the launch aircraft, booster ignition and boost, booster separation and engine ignition, and cruise". HAWC is capable of sustained, powered maneuver in the atmosphere. HAWC appears to depend on a rocket booster to accelerate to scramjet velocities operating in an oxygen-rich environment. It is easier to put a seeker on a sub-sonic air-breathing vehicle. In mid-March 2022 a HAWC Scramjet was successfully tested in an air-launched flight by a second vendor. On 18 July 2022 Raytheon announced another successful test of its Hypersonic Air-breathing Weapon Concept (HAWC) scramjet, in free flight.
MoHAWC is a follow-on to DARPA's HAWC project. MoHAWC will seek "to further develop the vehicle’s scramjet propulsion system, upgrade integration algorithms, reduce the size of navigation components, and improve its manufacturing approach".
Hypersonic Conventional Strike Weapon (HCSW - pronounced "hacksaw") passed its critical design review (CDR) but this IDIQ (indefinite duration, indefinite quantity) contract was terminated in favor of ARRW because twice as many ARRWs will fit on a bomber.
ASN4G (air-launched, scramjet-powered, hypersonic cruise missile under development by MBDA France and the ONERA to succeed the ASMP)
Kh-45 (cancelled)
Zircon
Hypersonic Technology Demonstrator Vehicle
/ Brahmos-II
Hycore
Glide vehicles
AGM-183A air launched rapid response weapon (ARRW, pronounced "arrow") Telemetry data has been successfully transmitted from ARRW —AGM-183A IMV-2 (Instrumented Measurement Vehicle) to the Point Mugu ground stations, demonstrating the ability to accurately broadcast radio at hypersonic speeds; however, ARRW's launch sequence was not completed, as of 15 Dec 2021. Hundreds of ARRWs or other Hypersonic weapons are being sought by the Air Force. On 9 March 2022 Congress halved funding for ARRW and transferred the balance to ARRW's R&D account to allow for further testing, which puts the procurement contract at risk. A production decision on ARRW has been delayed for a year to complete flight testing. On 14 May 2022 an ARRW flight test was successfully completed, for the first time. There have been 3 successful tests of ARRW in 2022; however the Air Force is requiring 3 additional successful tests of an All-Up Round (AUR) before making a production decision. No production decision will be made in 2024. The USAF now intends to end the ARRW development program, as of 29 March 2023. A B-52 flying out of Anderson AFB in Guam fired an All-Up-Round AGM-183A Air-launched Rapid Response Weapon (ARRW); the AUR was tested at Reagan test site in the Pacific on 17 March 2024.
DARPA Tactical Boost Glide vehicle
VMaX-2 hypersonic glide vehicle (under development by ArianeGroup; first flight test scheduled for 2025)
HGV-202F
Flown
Aircraft
North American X-15 (crewed)
Lockheed X-17
NASA X-43
Boeing X-51
WZ-8
HSTDV
Glide vehicles
Avangard
DF-ZF
Hwasong-8
Unnamed
VMaX (developed by ArianeGroup; first flight test took place on 26 June 2023 and was a success)
Spaceplanes
Space Shuttle orbiter (crewed)
Buran (human-rated, only flew without crew)
RLV-TD
Boeing X-37
Shenlong
IXV
BOR-4
Martin X-23 PRIME
ASSET
HYFLEX
Reusable experimental spacecraft (disputed)
Jiageng-1
Cancelled
Aircraft
Silbervogel (Sänger bomber)
Keldysh bomber
Tupolev Tu-360, follow-on to Tu-160
Tupolev Tu-2000
Lockheed L-301
Glide vehicles
VERAS (hypersonic glide vehicle program launched in 1965 and cancelled in 1971)
Spaceplanes
Boeing X-20 Dyna-Soar
Rockwell X-30 (National Aerospace Plane)
Orbital Sciences X-34
Mikoyan-Gurevich MiG-105
Tsien Spaceplane 1949
HOPE-X
XCOR Lynx
Lockheed Martin X-33
Hermes
Prometheus
HL-20 Personnel Launch System
HL-42
BAC Mustard
Kliper
HOTOL
Valier Raketenschiff
Rockwell C-1057
See also
Hypersonic effect
Supersonic transport
Lifting body
List of X-planes
Thunderbird 1
Notes
References
Further reading
David Wright and Cameron Tracy, "Over-hyped: Physics dictates that hypersonic weapons cannot live up to the grand promises made on their behalf", Scientific American, vol. 325, no. 2 (August 2021), pp. 64–71. Quote from p. 71: "Failure to fully assess [the potential benefits and costs of hypersonic weapons] is a recipe for wasteful spending and increased global risk."
External links
A comparative analysis of the performance of long-range hypervelocity vehicles
(2022) Joint Air Power Competence Centre (JAPCC)
Aerodynamics
Aerospace engineering
Airspeed | Hypersonic flight | [
"Physics",
"Chemistry",
"Engineering"
] | 6,162 | [
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
17,351,973 | https://en.wikipedia.org/wiki/Radical%20clock | In chemistry, a radical clock is a chemical compound that assists in the indirect methodology to determine the kinetics of a free-radical reaction. The radical-clock compound itself reacts at a known rate, which provides a calibration for determining the rate of another reaction.
Many organic mechanisms involve intermediates that cannot be identified directly but which are inferred from trapping reactions. When such intermediates are radicals, their lifetimes can be deduced from radical clocks. An alternative, perhaps more direct approach involves generation and isolation of the intermediates by flash photolysis and pulse radiolysis, but such methods are time-consuming and require expensive equipment. With an indirect approach of radical clocks, one can still obtain relative or absolute rate constants without the need for instruments or equipment beyond those normally needed for the reaction being studied.
Theory and technique
Radical clock reactions involve a competition between a unimolecular radical reaction with a known rate constant and a bimolecular radical reaction with an unknown rate constant to produce unrearranged and rearranged products. The rearrangement of an unrearranged radical, U•, proceeds to form R• (the clock reaction) with a known rate constant (kr). These radicals react with a trapping agent, AB, to form the unrearranged and rearranged products UA and RA, respectively.
The yield of the two products can be determined by gas chromatography (GC) or nuclear magnetic resonance (NMR). From the concentration of the trapping agent, the known rate constant of the radical clock, and the ratio of the products, the unknown rate constant can be indirectly established.
If a chemical equilibrium exists between U• and R•, the rearranged products are dominant. Because the unimolecular rearrangement reaction is first order and the bimolecular trapping reaction is second order (both irreversible), the unknown rate constant (kR) can be determined by:

kR = kr · [UA] / ([RA] · [AB])

where [UA] and [RA] are the yields of the unrearranged and rearranged products and [AB] is the concentration of the trapping agent.
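A small worked example of this relationship; the product ratio and trap concentration below are made up purely to illustrate the arithmetic, while the clock rate constant is the 5-hexenyl value quoted later in this article:

```python
def unknown_rate_constant(k_clock, conc_UA, conc_RA, conc_AB):
    """k_R = k_r * [UA] / ([RA] * [AB]) for a calibrated radical clock."""
    return k_clock * conc_UA / (conc_RA * conc_AB)

# Hypothetical data: product ratio UA:RA = 3:1 with the trapping agent at 0.5 M,
# using the 5-hexenyl clock (k_r = 2.3e5 s^-1 at 298 K).
k_R = unknown_rate_constant(k_clock=2.3e5, conc_UA=3.0, conc_RA=1.0, conc_AB=0.5)
print(f"k_R = {k_R:.2e} M^-1 s^-1")   # about 1.4e6 M^-1 s^-1
```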
Clock rates
The driving force behind radical clock reactions is their ability to rearrange. Some common radical clocks are radical cyclizations, ring openings, and 1,2-migrations. Two popular rearrangements are the cyclization of 5-hexenyl and the ring-opening of cyclopropylmethyl:
5-hexenyl radical undergoes cyclization to produce a five-membered ring because this is entropically and enthalpically more favored than the six-membered ring possibility. The rate-constant for this reaction is 2.3×105 s−1 at 298 K.
Cyclopropylmethyl radical undergoes a very rapid ring opening rearrangement that relieves the ring strain and is enthalpically favorable. The rate-constant for this reaction is 8.6×107 s−1 at 298 K.
In order to determine absolute rate constants for radical reactions, unimolecular clock reactions need to be calibrated for each class of radicals, such as primary alkyls. Through the use of EPR spectroscopy, the absolute rate constants for unimolecular reactions can be measured at a variety of temperatures. The Arrhenius equation can then be applied to calculate the rate constant for the specific temperature at which the radical clock reactions are conducted.
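A sketch of that calibration step, using the Arrhenius form k = A·exp(−Ea/RT); the pre-exponential factor and activation energy below are hypothetical, not measured values for any particular clock:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_k(A, Ea_J_per_mol, T_kelvin):
    """Arrhenius equation: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea_J_per_mol / (R * T_kelvin))

# Hypothetical calibration parameters for an alkyl-radical clock:
A, Ea = 1.0e13, 3.5e4   # s^-1 and J/mol
for T in (273, 298, 323):
    print(T, f"{arrhenius_k(A, Ea, T):.2e} s^-1")
```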
When using a radical clock to study a reaction, there is an implicit assumption that the rearrangement rate of the radical clock is the same as it was under the conditions in which that rearrangement rate was originally determined. A theoretical study of the rearrangement reactions of cyclobutylmethyl and of 5-hexenyl in a variety of solvents found that their reaction rates were only very slightly affected by the nature of the solvent.
The rates of radical clocks can be adjusted to increase or decrease by what types of substituents are attached to the radical clock. In the figure below, the rates of the radical clocks are shown with a variety of substituents attached to the clock.
By selecting among the general classes of radical clocks and the specific substituents on them, one can be chosen with a rate-constant suitable for studying reactions having a wide range of rates. Reactions having rates ranging from 10−1 to 1012 M−1 s−1 have been studied using radical clocks.
Examples of use
Radical clocks are used in reduction of alkyl halides with sodium naphthalenide, reaction of enones, the Wittig rearrangement, reductive elimination reactions of dialkylmercury compounds, dioxirane dihydroxylations, and electrophilic fluorinations.
References
External links
RADICAL CLOCKS: MOLECULAR STOPWATCHES FOR TIMING RADICAL REACTIONS
Radical Clock Reactions
Free radicals
Chemical kinetics | Radical clock | [
"Chemistry",
"Biology"
] | 974 | [
"Chemical reaction engineering",
"Free radicals",
"Senescence",
"Biomolecules",
"Chemical kinetics"
] |
17,353,312 | https://en.wikipedia.org/wiki/Cement%20render | Cement render or cement plaster is the application of a mortar mix of sand and cement, (optionally lime) and water to brick, concrete, stone, or mud brick. It is often textured, colored, or painted after application. It is generally used on exterior walls but can be used to feature an interior wall. Depending on the 'look' required, rendering can be fine or coarse, textured or smooth, natural or colored, pigmented or painted.
The cement rendering of brick, concrete and mud houses has been used for centuries to improve the appearance (and sometimes weather resistance) of exterior walls. It can be seen in different forms all over southern Europe. Different countries have their own styles and traditional colors. In the United Kingdom, cement is optional. In other countries, lime is optional. The cement in render hydrates the same way it does in concrete.
Render finishes
Different finishes can be created by using different tools such as trowels, sponges, or brushes. The art in traditional rendering is (apart from getting the mix right) the appearance of the top coat. Different tradesmen have different finishing styles and are able to produce different textures and decorative effects. Some of these special finishing effects may need to be created with a thin finishing top coat or a finishing wash.
Traditional rendering
Cement render consists of 6 parts clean sharp fine sand, 1 part cement, and 1 part lime in some parts of the world. The lime makes the render more workable and reduces cracking when the render dries. Any general purpose cement can be used. Various additives can be added to the mix to increase adhesion. Coarser sand is used in the base layer and slightly finer sand in the top layer.
The application process resembles the process of applying paint. To ensure adhesion, the surface to be rendered is initially hosed off to ensure it is free of any dirt and loose particles. Old paint or old render is scraped away. The surface is roughened to improve adhesion. For large areas, vertical battens are fixed to the wall every 1 to 1.5 meters, to keep the render flat and even.
Acrylic rendering
There is also a wide variety of premixed renders commercially available for different situations. Some have a polymer additive added to the traditional cement, lime and sand mix for enhanced water resistance, flexibility and adhesion.
Acrylic premixed renders have superior water resistance and strength. They can be used on a wider variety of surfaces than cement render, including concrete, cement blocks, and AAC concrete paneling. These acrylic-modified renders may still be too brittle and cannot be applied over substrates like fiber cement sheeting, as they will crack on the joints and can allow water to enter the sheet and cause delamination of the coatings. Newer polymer exterior claddings such as expanded polystyrene (EPS) can have these acrylic-modified renders applied to them with the inclusion of an alkali-resistant mesh encapsulated between the render coats. Some premixed acrylic renders have a smoother complexion than traditional renders. There are also many acrylic-bound pigmented 'designer' finishing coats that can be applied over acrylic render. Various finishes, patterns and textures are possible, such as sand, sandstone, marble, stone, stone chip, lime wash or clay-like finishes, as well as stipple and glistening finishes and coats with enhanced water resistance and antifungal properties. Depending upon the product, they can be rolled, troweled or sponged on, and a limited number can also be sprayed on. Acrylic renders usually take only 2 days to dry, much faster than the usual 28 days for traditional render.
A disadvantage of acrylic render compared with traditional rendering is that acrylic render lacks the sustainability and environmental compatibility of traditional cement-and-mineral render. All buildings have a finite lifetime, and their materials will eventually be either recycled or absorbed into the environment. As acrylics are synthetic polymers, they do not break down by natural weathering the same way that a cement, sand, and lime mixture will, and so will persist in the natural environment for much longer as synthetic chemical compounds that have unknown long-term effects on ecosystems. Also, the application and drying process of solvent-based acrylic resin render involves the atmospheric evaporation of pollutant solvents (necessary for the application of the resin) which are hazardous to the health of humans and of many organisms on which humans depend. Synthetic polymers such as acrylic are manufactured from chemical feedstocks such as acetone, hydrogen cyanide, ethylene, isobutylene, and other petroleum derivatives. The polymer products cannot be fully recycled (using present technology or any that can be confidently expected to be developed), so new raw materials, taken from the finite and diminishing supply of raw natural resources, must always be put into their manufacture, making the process unsustainable. Traditional cement-based render does not have these problems, making it an arguably better choice in many cases, despite its working limitations. Using waterborne resins avoids these disadvantages.
See also
Exterior insulation finishing system
Harling (wall finish)
Lath and plaster
Pargeting
Plaster
Plasterwork
Polished plaster
Siding
Stucco
Tadelakt
References
Further reading
Construction
Wallcoverings
Building materials
Plastering
nl:Pleister (bouw) | Cement render | [
"Physics",
"Chemistry",
"Engineering"
] | 1,110 | [
"Building engineering",
"Coatings",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
17,354,429 | https://en.wikipedia.org/wiki/Caesium%20nitrate | Caesium nitrate or cesium nitrate is a salt with the chemical formula CsNO3. An alkali metal nitrate, it is used in pyrotechnic compositions, as a colorant and an oxidizer, e.g. in decoys and illumination flares. The caesium emissions are chiefly due to two powerful spectral lines at 852.113 nm and 894.347 nm.
Caesium nitrate prisms are used in infrared spectroscopy, in x-ray phosphors, and in scintillation counters. It is also used in making optical glasses and lenses.
As with other alkali metal nitrates, caesium nitrate decomposes on gentle heating to give caesium nitrite:
2 CsNO3 → 2 CsNO2 + O2
Caesium also forms two unusual acid nitrates, which can be described as CsNO3·HNO3 and CsNO3·2HNO3 (melting points 100 °C and 36–38 °C respectively).
References
Caesium compounds
Nitrates
Pyrotechnic oxidizers
Pyrotechnic colorants | Caesium nitrate | [
"Chemistry"
] | 222 | [
"Inorganic compounds",
"Oxidizing agents",
"Inorganic compound stubs",
"Nitrates",
"Salts"
] |
17,355,301 | https://en.wikipedia.org/wiki/Plasma%20confinement | In plasma physics, plasma confinement refers to the act of maintaining a plasma in a discrete volume. Confining plasma is required in order to achieve fusion power. There are two major approaches to confinement: magnetic confinement and inertial confinement.
References
Plasma technology and applications
Fusion power | Plasma confinement | [
"Physics",
"Chemistry"
] | 57 | [
"Plasma physics",
"Plasma technology and applications",
"Fusion power",
"Plasma physics stubs",
"Nuclear fusion"
] |
17,356,800 | https://en.wikipedia.org/wiki/Acid%E2%80%93base%20disorder | Acid–base imbalance is an abnormality of the human body's normal balance of acids and bases that causes the plasma pH to deviate out of the normal range (7.35 to 7.45). In the fetus, the normal range differs based on which umbilical vessel is sampled (umbilical vein pH is normally 7.25 to 7.45; umbilical artery pH is normally 7.18 to 7.38). It can exist in varying levels of severity, some life-threatening.
Classification
An excess of acid is called acidosis or acidemia, while an excess in bases is called alkalosis or alkalemia. The process that causes the imbalance is classified based on the cause of the disturbance (respiratory or metabolic) and the direction of change in pH (acidosis or alkalosis). This yields the following four basic processes: respiratory acidosis, respiratory alkalosis, metabolic acidosis, and metabolic alkalosis.
Mixed disorders
The presence of only one of the above derangements is called a simple acid–base disorder. In a mixed disorder, more than one is occurring at the same time. Mixed disorders may feature an acidosis and an alkalosis at the same time that partially counteract each other, or there can be two different conditions affecting the pH in the same direction. The phrase "mixed acidosis", for example, refers to metabolic acidosis in conjunction with respiratory acidosis. Any combination is possible, as metabolic acidosis and alkalosis can coexist.
Calculation of imbalance
The traditional approach to the study of acid–base physiology has been the empirical approach. The main variants are the base excess approach and the bicarbonate approach. The quantitative approach introduced by Peter A Stewart in 1978 is newer.
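As a minimal illustration of the bicarbonate approach, the following Python sketch evaluates the Henderson–Hasselbalch equation for the CO2/bicarbonate buffer (pKa ≈ 6.1, CO2 solubility ≈ 0.03 mmol/L per mmHg); the input values are illustrative, not patient data.

```python
import math

# Hedged sketch of the bicarbonate approach to assessing acid-base status.
# Henderson-Hasselbalch equation for the CO2/HCO3- buffer:
#     pH = 6.1 + log10([HCO3-] / (0.03 * pCO2))
# with [HCO3-] in mmol/L and pCO2 in mmHg. Input values are illustrative.

def blood_ph(hco3_mmol_l: float, pco2_mmhg: float) -> float:
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mmhg))

ph = blood_ph(hco3_mmol_l=24.0, pco2_mmhg=40.0)
print(f"pH = {ph:.2f}")  # about 7.40 for these normal values
if ph < 7.35:
    print("acidemia")
elif ph > 7.45:
    print("alkalemia")
else:
    print("within the normal range")
```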
Causes
There are numerous reasons that each of the four processes can occur (detailed in each article). Generally speaking, sources of acid gain include:
Retention of carbon dioxide
Production of nonvolatile acids from the metabolism of proteins and other organic molecules
Loss of bicarbonate in feces or urine
Intake of acids or acid precursors
Sources of acid loss include:
Use of hydrogen ions in the metabolism of various organic anions
Loss of acid in the vomitus or urine
Gastric aspiration in hospital
Severe diarrhea
Carbon dioxide loss through hyperventilation
Compensation
The body's acid–base balance is tightly regulated. Several buffering agents exist which reversibly bind hydrogen ions and impede any change in pH. Extracellular buffers include bicarbonate and ammonia, while proteins and phosphate act as intracellular buffers. The bicarbonate buffering system is especially key, as carbon dioxide (CO2) can be shifted through carbonic acid (H2CO3) to hydrogen ions and bicarbonate (HCO3−) as shown below.
HCO3− + H+ ⇌ H2CO3 ⇌ CO2 + H2O
Acid–base imbalances that overcome the buffer system can be compensated in the short term by changing the rate of ventilation. This alters the concentration of carbon dioxide in the blood, shifting the above reaction according to Le Chatelier's principle, which in turn alters the pH. For instance, if the blood pH drops too low (acidemia), the body will compensate by increasing breathing, expelling CO2, and shifting the reaction above to the right such that fewer hydrogen ions are free–thus the pH will rise back to normal. For alkalemia, the opposite occurs.
The kidneys are slower to compensate, but renal physiology has several powerful mechanisms to control pH by the excretion of excess acid or base. In responses to acidosis, tubular cells reabsorb more bicarbonate from the tubular fluid, collecting duct cells secrete more hydrogen and generate more bicarbonate, and ammoniagenesis leads to increased formation of the NH3 buffer. In responses to alkalosis, the kidney may excrete more bicarbonate by decreasing hydrogen ion secretion from the tubular epithelial cells, and lowering rates of glutamine metabolism and ammonia excretion.
References
External links
On-line text at AnaesthesiaMCQ.com
Overview at kumc.edu
Overview at mcgill.ca
Stewart's original text at acidbase.org
Overview at med.utah.edu
Overview at anaesthetist.com
Overview at anst.uu.se
Tutorial at acid-base.com
Online acid–base physiology text
Diagnoses at lakesidepress.com
Interpretation at nda.ox.ac.uk
Acid Base Tutorial
Human homeostasis
Acid–base physiology
Acid–base disturbances
Equilibrium chemistry
Respiratory therapy | Acid–base disorder | [
"Chemistry",
"Biology"
] | 943 | [
"Acid–base physiology",
"Human homeostasis",
"Equilibrium chemistry",
"Homeostasis",
"Acid–base disturbances"
] |
17,359,213 | https://en.wikipedia.org/wiki/Topological%20semigroup | In mathematics, a topological semigroup is a semigroup that is simultaneously a topological space, and whose semigroup operation is continuous.
Every topological group is a topological semigroup.
See also
References
Topological algebra
Topological groups | Topological semigroup | [
"Mathematics"
] | 44 | [
"Algebra stubs",
"Space (mathematics)",
"Topological spaces",
"Fields of abstract algebra",
"Topology",
"Topology stubs",
"Topological groups",
"Topological algebra",
"Algebra"
] |
17,359,307 | https://en.wikipedia.org/wiki/Paratopological%20group | In mathematics, a paratopological group is a topological semigroup that is algebraically a group. In other words, it is a group G with a topology such that the group's product operation is a continuous function from G × G to G. This differs from the definition of a topological group in that the group inverse is not required to be continuous.
As with topological groups, some authors require the topology to be Hausdorff.
Compact paratopological groups are automatically topological groups.
References
Topological groups | Paratopological group | [
"Mathematics"
] | 103 | [
"Space (mathematics)",
"Topological spaces",
"Topology stubs",
"Topology",
"Topological groups"
] |
10,916,602 | https://en.wikipedia.org/wiki/Christopher%20Llewellyn%20Smith | Sir Christopher Hubert Llewellyn Smith (born 19 November 1942) is an Emeritus Professor of Physics at the University of Oxford.
Education
Llewellyn Smith was educated at the University of Oxford (BA) and completed his Doctor of Philosophy degree in theoretical physics at New College, Oxford in 1967.
Career and research
After his DPhil he worked at the Lebedev Physical Institute in Moscow, CERN and then the SLAC National Accelerator Laboratory before returning to Oxford in 1974. Llewellyn Smith was elected a Fellow of the Royal Society in 1984.
While Chairman of Oxford Physics (1987–92), he led the merger of five different departments into a single Physics Department. Llewellyn Smith was Director General of CERN from 1994 to 1998. Thereafter he served as Provost and President of University College London (1999–2002).
Awards and honours
Llewellyn Smith received the James Clerk Maxwell Medal and Prize in 1979, and Glazebrook Medal and Prize of the Institute of Physics in 1999 and was knighted in 2001. In 2004, he became Chairman of the Consultative Committee for Euratom on Fusion (CCE-FU). Until 2009 he was Director of UKAEA Culham Division, which holds the responsibility for the United Kingdom's fusion programme and operation of the Joint European Torus (JET). He is a member of the Advisory Council for the Campaign for Science and Engineering.
In 2013, he joined the National Institute of Science Education and Research (NISER), Bhubaneswar, India as a Distinguished Professor.
In 2015, he was awarded the Royal Medal of the Royal Society.
Personal life
Llewellyn Smith married in 1966 and has one son and one daughter.
References
1942 births
Living people
Alumni of New College, Oxford
British nuclear physicists
English physicists
People associated with CERN
Experimental particle physics
Experimental physicists
Academics of University College London
Fellows of New College, Oxford
Fellows of St John's College, Oxford
Fellows of the Royal Society
Foreign fellows of the Indian National Science Academy
International Centre for Synchrotron-Light for Experimental Science Applications in the Middle East people
Knights Bachelor
Maxwell Medal and Prize recipients
Particle physicists
Provosts of University College London
Department of Physics, University of Oxford | Christopher Llewellyn Smith | [
"Physics"
] | 452 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
10,916,658 | https://en.wikipedia.org/wiki/Linear%20transformer%20driver | A linear transformer driver (LTD) within physics and energy, is an annular parallel connection of switches and capacitors. The driver is designed to deliver rapid high power pulses. The LTD was invented at the Institute of High Current Electronics (IHCE) in Tomsk, Russia. The LTD is capable of producing high current pulses, up to 1 mega amps (106 ampere), with a risetime of less than 100 ns. This is an improvement over Marx generator based pulsed power devices which require pulse compression to achieve such fast risetimes. It is being considered as a driver for z-pinch based inertial confinement fusion.
LTDs at Sandia National Laboratories
Sandia National Laboratory is currently investigating a z-pinch as a possible ignition source for inertial confinement fusion. On its "Z machine", Sandia can achieve dense, high temperature plasmas by firing fast, 100-nanosecond current pulses exceeding 20 million amps through hundreds of tungsten wires with diameters on the order of tens of micrometres. The LTD is currently being investigated as a driver for the next generation of high power accelerators.
Sandia's roadmap includes another future Z machine version called ZN (Z Neutron) to test higher yields in fusion power and automation systems. ZN is planned to give between 20 and 30 MJ of hydrogen fusion power with a shot per hour thanks to LTDs replacing the current Marx generators. After 8 to 10 years of operation, ZN would become a transmutation pilot plant capable of a fusion shot every 100 seconds.
The next step planned would be the Z-IFE (Z-inertial fusion energy) test facility, the first true z-pinch driven prototype fusion power plant. It is suggested it would integrate Sandia's latest designs using LTDs. Sandia labs recently proposed a conceptual 1 petawatt (10¹⁵ watts) LTD Z-pinch power plant, where the electric discharge would reach 70 million amperes.
See also
References
External links
Development and tests of fast 1-MA linear transformer driver stages
http://www.sandia.gov/news/resources/releases/2007/rapid-fire-pulse.html
http://www-ners.engin.umich.edu/labs/plasma/Research/ZPinch.html
Power (physics) | Linear transformer driver | [
"Physics",
"Mathematics"
] | 487 | [
"Force",
"Physical quantities",
"Quantity",
"Power (physics)",
"Energy (physics)",
"Wikipedia categories named after physical quantities"
] |
10,917,170 | https://en.wikipedia.org/wiki/Isotropic%20quadratic%20form | In mathematics, a quadratic form over a field F is said to be isotropic if there is a non-zero vector on which the form evaluates to zero. Otherwise it is a definite quadratic form. More explicitly, if q is a quadratic form on a vector space V over F, then a non-zero vector v in V is said to be isotropic if . A quadratic form is isotropic if and only if there exists a non-zero isotropic vector (or null vector) for that quadratic form.
Suppose that (V, q) is a quadratic space and W is a subspace of V. Then W is called an isotropic subspace of V if some vector in it is isotropic, a totally isotropic subspace if all vectors in it are isotropic, and a definite subspace if it does not contain any (non-zero) isotropic vectors. The isotropy index of a quadratic space is the maximum of the dimensions of the totally isotropic subspaces.
More generally, if the quadratic form is non-degenerate and has the signature (a, b), then its isotropy index is the minimum of a and b. An important example of an isotropic form over the reals occurs in pseudo-Euclidean space.
Hyperbolic plane
Let F be a field of characteristic not 2 and V = F². If we consider the general element (x, y) of V, then the quadratic forms q(x, y) = xy and r(x, y) = x² − y² are equivalent since there is a linear transformation on V that makes q look like r, and vice versa. Evidently, (V, q) and (V, r) are isotropic. This example is called the hyperbolic plane in the theory of quadratic forms. A common instance has F = real numbers, in which case the sets where q or r takes a fixed non-zero value are hyperbolas. In particular, the set where r = 1 is the unit hyperbola. The notation ⟨1⟩ ⊕ ⟨−1⟩ has been used by Milnor and Husemoller for the hyperbolic plane as the signs of the terms of the bivariate polynomial r are exhibited.
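The equivalence of the two forms can be made explicit by a change of variables; the short verification below (in LaTeX, with the notation introduced above) is a standard computation and uses the assumption that the characteristic is not 2 only to invert the substitution.

```latex
% Change of variables exhibiting the equivalence of q(x,y)=xy and r(u,v)=u^2-v^2.
% The linear map T(u,v) = (u+v,\; u-v) has determinant -2, hence is invertible
% over any field of characteristic not 2.
q\bigl(T(u,v)\bigr) \;=\; q(u+v,\; u-v) \;=\; (u+v)(u-v) \;=\; u^{2}-v^{2} \;=\; r(u,v).
```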
The affine hyperbolic plane was described by Emil Artin as a quadratic space with basis {u, v} satisfying u² = v² = 0 and uv = 1, where the products represent the quadratic form.
Through the polarization identity the quadratic form is related to a symmetric bilinear form B(x, y) = ½(q(x + y) − q(x) − q(y)).
Two vectors u and v are orthogonal when B(u, v) = 0. In the case of the hyperbolic plane, such u and v are hyperbolic-orthogonal.
Split quadratic space
A space with quadratic form is split (or metabolic) if there is a subspace which is equal to its own orthogonal complement; equivalently, the index of isotropy is equal to half the dimension. The hyperbolic plane is an example, and over a field of characteristic not equal to 2, every split space is a direct sum of hyperbolic planes.
Relation with classification of quadratic forms
From the point of view of classification of quadratic forms, spaces with definite quadratic forms are the basic building blocks for quadratic spaces of arbitrary dimensions. For a general field F, classification of definite quadratic forms is a nontrivial problem. By contrast, the isotropic forms are usually much easier to handle. By Witt's decomposition theorem, every inner product space over a field is an orthogonal direct sum of a split space and a space with definite quadratic form.
Field theory
If F is an algebraically closed field, for example, the field of complex numbers, and is a quadratic space of dimension at least two, then it is isotropic.
If F is a finite field and is a quadratic space of dimension at least three, then it is isotropic (this is a consequence of the Chevalley–Warning theorem).
If F is the field Qp of p-adic numbers and is a quadratic space of dimension at least five, then it is isotropic.
See also
Isotropic line
Polar space
Witt group
Witt ring (forms)
Universal quadratic form
References
Pete L. Clark, Quadratic forms chapter I: Witts theory from University of Miami in Coral Gables, Florida.
Tsit Yuen Lam (1973) Algebraic Theory of Quadratic Forms, §1.3 Hyperbolic plane and hyperbolic spaces, W. A. Benjamin.
Tsit Yuen Lam (2005) Introduction to Quadratic Forms over Fields, American Mathematical Society .
Quadratic forms
Bilinear forms | Isotropic quadratic form | [
"Mathematics"
] | 875 | [
"Quadratic forms",
"Number theory"
] |
10,921,962 | https://en.wikipedia.org/wiki/Cytoscape | Cytoscape is an open source bioinformatics software platform for visualizing molecular interaction networks and integrating with gene expression profiles and other state data. Additional features are available as plugins. Plugins are available for network and molecular profiling analyses, new layouts, additional file format support and connection with databases and searching in large networks. Plugins may be developed using the Cytoscape open Java software architecture by anyone and plugin community development is encouraged. Cytoscape also has a JavaScript-centric sister project named Cytoscape.js that can be used to analyse and visualise graphs in JavaScript environments, like a browser.
History
Cytoscape was originally created at the Institute of Systems Biology in Seattle in 2002. Now, it is developed by an international consortium of open source developers. Cytoscape was initially made public in July, 2002 (v0.8); the second release (v0.9) was in November, 2002, and v1.0 was released in March 2003. Version 1.1.1 is the last stable release for the 1.0 series. Version 2.0 was initially released in 2004; Cytoscape 2.83, the final 2.xx version, was released in May 2012. Version 3.0 was released Feb 1, 2013, and the latest version, 3.4.0, was released in May 2016.
Development
The Cytoscape core developer team continues to work on this project and released Cytoscape 3.0 in 2013. This represented a major change in the Cytoscape architecture; it is a more modularized, expandable and maintainable version of the software.
Usage
While Cytoscape is most commonly used for biological research applications, it is agnostic in terms of usage. Cytoscape can visualize and analyze network graphs of any kind involving nodes and edges (e.g., social networks). A vital aspect of the software architecture of Cytoscape is the use of plugins for specialized features. Plugins are developed by core developers and the greater user community.
See also
Computational genomics
Graph drawing
JavaScript framework
JavaScript library
Metabolic network modelling
Protein–protein interaction prediction
References
External links
https://cytoscape.org/screenshots.html
Cytoscape wiki
Cytoscape omictools webpage
Bioinformatics software
Systems biology
Mathematical and theoretical biology
Graph drawing software
Cross-platform software
Java platform software | Cytoscape | [
"Mathematics",
"Biology"
] | 498 | [
"Mathematical and theoretical biology",
"Bioinformatics software",
"Applied mathematics",
"Bioinformatics",
"Systems biology"
] |
5,556,198 | https://en.wikipedia.org/wiki/Trp%20operon | The trp operon''' is a group of genes that are transcribed together, encoding the enzymes that produce the amino acid tryptophan in bacteria. The trp operon was first characterized in Escherichia coli, and it has since been discovered in many other bacteria. The operon is regulated so that, when tryptophan is present in the environment, the genes for tryptophan synthesis are repressed.
The trp operon contains five structural genes: trpE, trpD, trpC, trpB, and trpA, which encode the enzymes needed to synthesize tryptophan. It also contains a repressive regulator gene called trpR. When tryptophan is present, the trpR protein binds to the operator, blocking transcription of the trp operon by RNA polymerase.
This operon is an example of repressible negative regulation of gene expression. The repressor protein binds to the operator in the presence of tryptophan (repressing transcription) and is released from the operon when tryptophan is absent (allowing transcription to proceed). The trp operon additionally uses attenuation to control expression of the operon, a second negative feedback control mechanism.
The trp operon is well-studied and is commonly used as an example of gene regulation in bacteria alongside the lac operon.
Genes
The trp operon contains five structural genes. The roles of their products are:
TrpE (): Anthranilate synthase produces anthranilate.
TrpD (): Cooperates with TrpE.
TrpC (): Phosphoribosylanthranilate isomerase domain first turns N-(5-phospho-β-D-ribosyl)anthranilate into 1-(2-carboxyphenylamino)-1-deoxy-D-ribulose 5-phosphate. The Indole-3-glycerol-phosphate synthase on the same protein then turns the product into (1S,2R)-1-C-(indol-3-yl)glycerol 3-phosphate.
TrpA (), TrpB (): two subunits of tryptophan synthetase. Combines TrpC's product with serine to produce tryptophan.
Repression
The operon operates by a negative repressible feedback mechanism. The repressor for the trp operon is produced upstream by the trpR gene, which is constitutively expressed at a low level. Synthesized trpR monomers associate into dimers. When tryptophan is present, these tryptophan repressor dimers bind to tryptophan, causing a change in the repressor conformation, allowing the repressor to bind to the operator. This prevents RNA polymerase from binding to and transcribing the operon, so tryptophan is not produced from its precursor. When tryptophan is not present, the repressor is in its inactive conformation and cannot bind the operator region, so transcription is not inhibited by the repressor.
Attenuation
Attenuation is a second mechanism of negative feedback in the trp operon. The repression system targets the intracellular trp concentration whereas the attenuation responds to the concentration of charged tRNAtrp. Thus, the trpR repressor decreases gene expression by altering the initiation of transcription, while attenuation does so by altering the process of transcription that's already in progress. While the TrpR repressor decreases transcription by a factor of 70, attenuation can further decrease it by a factor of 10, thus allowing accumulated repression of about 700-fold. Attenuation is made possible by the fact that in prokaryotes (which have no nucleus), the ribosomes begin translating the mRNA while RNA polymerase is still transcribing the DNA sequence. This allows the process of translation to affect transcription of the operon directly.
At the beginning of the transcribed genes of the trp operon is a sequence of at least 130 nucleotides termed the leader transcript (trpL; ). Lee and Yanofsky (1977) found that the attenuation efficiency is correlated with the stability of a secondary structure embedded in trpL, and the 2 constituent hairpins of the terminator structure were later elucidated by Oxender et al. (1979). This transcript includes four short sequences designated 1–4, each of which is partially complementary to the next one. Thus, three distinct secondary structures (hairpins) can form: 1–2, 2–3 or 3–4. The hybridization of sequences 1 and 2 to form the 1–2 structure is rare because the RNA polymerase waits for a ribosome to attach before continuing transcription past sequence 1, however if the 1–2 hairpin were to form it would prevent the formation of the 2–3 structure (but not 3–4). The formation of a hairpin loop between sequences 2–3 prevents the formation of hairpin loops between both 1–2 and 3–4. The 3–4 structure is a transcription termination sequence (abundant in G/C and immediately followed by several uracil residues), once it forms RNA polymerase will disassociate from the DNA and transcription of the structural genes of the operon can not occur (see below for a more detailed explanation). The functional importance of the 2nd hairpin for the transcriptional termination is illustrated by the reduced transcription termination frequency observed in experiments destabilizing the central G+C pairing of this hairpin.
Part of the leader transcript codes for a short polypeptide of 14 amino acids, termed the leader peptide. This peptide contains two adjacent tryptophan residues, which is unusual, since tryptophan is a fairly uncommon amino acid (about one in a hundred residues in a typical E. coli protein is tryptophan). The strand 1 in trpL encompasses the region encoding the trailing residues of the leader peptide: Trp, Trp, Arg, Thr, Ser; conservation is observed in these 5 codons whereas mutating the upstream codons do not alter the operon expression. If the ribosome attempts to translate this peptide while tryptophan levels in the cell are low, it will stall at either of the two trp codons. While it is stalled, the ribosome physically shields sequence 1 of the transcript, preventing the formation of the 1–2 secondary structure. Sequence 2 is then free to hybridize with sequence 3 to form the 2–3 structure, which then prevents the formation of the 3–4 termination hairpin, which is why the 2–3 structure is called an anti-termination hairpin. In the presence of the 2–3 structure, RNA polymerase is free to continue transcribing the operon. Mutational analysis and studies involving complementary oligonucleotides demonstrate that the stability of the 2–3 structure corresponds to the operon expression level. If tryptophan levels in the cell are high, the ribosome will translate the entire leader peptide without interruption and will only stall during translation termination at the stop codon. At this point the ribosome physically shields both sequences 1 and 2. Sequences 3 and 4 are thus free to form the 3–4 structure which terminates transcription. This terminator structure forms when no ribosome stalls in the vicinity of the Trp tandem (i.e. Trp or Arg codon): either the leader peptide is not translated or the translation proceeds smoothly along the strand 1 with abundant charged tRNAtrp. More over, the ribosome is proposed to only block about 10 nts downstream, thus ribosome stalling in either the upstream Gly or further downstream Thr do not seem to affect the formation of the termination hairpin. The end result is that the operon will be transcribed only when tryptophan is unavailable for the ribosome, while the trpL transcript is constitutively expressed.
This attenuation mechanism is experimentally supported. First, the translation of the leader peptide and ribosomal stalling are directly evidenced to be necessary for inhibiting the transcription termination. Moreover, mutational analysis destabilizing or disrupting the base-pairing of the antiterminator hairpin results in increased termination of several folds; consistent with the attenuation model, this mutation fails to relieve attenuation even with starved Trp. In contrast, complementary oligonucleotides targeting strand 1 increases the operon expression by promoting the antiterminator formation. Furthermore, in histidine operon, compensatory mutation shows that the pairing ability of strands 2–3 matters more than their primary sequence in inhibiting attenuation.
In attenuation, where the translating ribosome is stalled determines whether the termination hairpin will be formed. In order for the transcribing polymerase to concomitantly capture the alternative structure, the time scale of the structural modulation must be comparable to that of the transcription. To ensure that the ribosome binds and begins translation of the leader transcript immediately following its synthesis, a pause site exists in the trpL sequence. Upon reaching this site, RNA polymerase pauses transcription and apparently waits for translation to begin. This mechanism allows for synchronization of transcription and translation, a key element in attenuation.
A similar attenuation mechanism regulates the synthesis of histidine, phenylalanine and threonine.
Regulation of trp operon in Bacillus subtilis
The arrangement of the trp operon in E. coli and Bacillus subtilis differs. There are 5 structural genes in E. coli that are found under a single transcriptional unit. In Bacillus subtilis, there are 6 structural genes that are situated within a supraoperon. Three of these genes are found upstream while the other three genes are found downstream of the trp operon. There is a 7th gene in the Bacillus subtilis operon called trpG or pabA which is responsible for protein synthesis of tryptophan and folate. Regulation of trp operons in both organisms depends on the amount of trp present in the cell. However, the primary regulation of tryptophan biosynthesis in B. subtilis is via attenuation, rather than repression, of transcription. In B. subtilis, tryptophan binds to the eleven-subunit tryptophan-activated RNA-binding attenuation protein (TRAP), which activates TRAP's ability to bind to the trp leader RNA. Binding of trp-activated TRAP to leader RNA results in the formation of a terminator structure that causes transcription termination. In addition, the activated TRAP inhibits the initiation of translation of trpP, trpE, trpG and ycbK genes. The gene trpP plays a role in trp transportation, while the gene trpG is utilized in the folate operon, and the gene ycbK is involved in synthesis of an efflux protein. The activated TRAP protein is regulated by an anti-TRAP protein and AT synthesis. AT can inactivate TRAP to lower transcription of the trp operon.
References
Further reading
External links
Animation of the Trp operon's regulation
Gene expression
Operons | Trp operon | [
"Chemistry",
"Biology"
] | 2,383 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Operons"
] |
8,737,421 | https://en.wikipedia.org/wiki/Series%20acceleration | In mathematics, a series acceleration method is any one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.
Definition
Given an infinite series with a sequence of partial sums (s_n) having a limit s, an accelerated series is an infinite series with a second sequence of partial sums (s′_n) which asymptotically converges faster to s than the original sequence of partial sums would:
lim_{n→∞} (s′_n − s) / (s_n − s) = 0.
A series acceleration method is a sequence transformation that transforms the convergent sequences of partial sums of a series into more quickly convergent sequences of partial sums of an accelerated series with the same limit. If a series acceleration method is applied to a divergent series then the proper limit of the series is undefined, but the sequence transformation can still act usefully as an extrapolation method to an antilimit of the series.
The mappings from the original to the transformed series may be linear sequence transformations or non-linear sequence transformations. In general, the non-linear sequence transformations tend to be more powerful.
Overview
Two classical techniques for series acceleration are Euler's transformation of series and Kummer's transformation of series. A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method.
For alternating series, several powerful techniques, offering convergence rates from 5.828⁻ⁿ all the way to 17.93⁻ⁿ for a summation of n terms, are described by Cohen et al.
Euler's transform
A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by
∑_{n≥0} (−1)^n a_n = ∑_{n≥0} (−1)^n Δ^n a_0 / 2^(n+1),
where Δ is the forward difference operator, for which one has the formula
Δ^n a_0 = ∑_{k=0}^{n} (−1)^k (n choose k) a_{n−k}.
If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges.
A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation.
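A hedged Python sketch of the plain Euler transform (not the van Wijngaarden variant) is shown below, applied to the slowly converging alternating series for ln 2; the choice of series and the number of terms are only illustrative.

```python
from math import comb, log

# Hedged sketch: Euler's transform applied to ln 2 = sum_{n>=0} (-1)^n / (n+1).
# Uses sum (-1)^n a_n = sum (-1)^n Delta^n a_0 / 2^(n+1), with forward
# differences Delta^n a_0 = sum_k (-1)^k C(n, k) a_{n-k}.

def a(n: int) -> float:
    return 1.0 / (n + 1)

def euler_partial_sum(terms: int) -> float:
    total = 0.0
    for n in range(terms):
        delta_n = sum((-1) ** k * comb(n, k) * a(n - k) for k in range(n + 1))
        total += (-1) ** n * delta_n / 2 ** (n + 1)
    return total

naive = sum((-1) ** n * a(n) for n in range(10))
print(f"target ln 2    = {log(2):.10f}")
print(f"10 plain terms = {naive:.10f}")
print(f"10 Euler terms = {euler_partial_sum(10):.10f}")
```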
Conformal mappings
A series
S = ∑_{n=0}^{∞} a_n
can be written as f(1), where the function f is defined as
f(z) = ∑_{n=0}^{∞} a_n z^n.
The function f(z) can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point z = 1 is close to or on the boundary of the disk of convergence, the series for S will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping z = Φ(w) that moves the singularities such that the point that is mapped to z = 1 ends up deeper in the new disk of convergence.
The conformal transform z = Φ(w) needs to be chosen such that Φ(0) = 0, and one usually chooses a function that has a finite derivative at w = 0. One can assume that Φ(1) = 1 without loss of generality, as one can always rescale w to redefine Φ. We then consider the function
g(w) = f(Φ(w)).
Since Φ(1) = 1, we have f(1) = g(1). We can obtain the series expansion of g(w) by putting z = Φ(w) in the series expansion of f(z) because Φ(0) = 0; the first n terms of the series expansion for f(z) will yield the first n terms of the series expansion for g(w) if Φ′(0) ≠ 0. Putting w = 1 in that series expansion will thus yield a series such that if it converges, it will converge to the same value as the original series.
Non-linear sequence transformations
Examples of such nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations.
Especially nonlinear sequence transformations often provide powerful numerical methods for the summation of divergent series or asymptotic series that arise for instance in perturbation theory, and therefore may be used as effective extrapolation methods.
Aitken method
A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method, defined by
s′_n = s_{n+2} − (s_{n+2} − s_{n+1})² / (s_{n+2} − 2 s_{n+1} + s_n).
This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
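A minimal Python sketch of one pass of the Aitken Δ² process is given below, applied to the partial sums of the Leibniz series for π/4; the series choice and the number of terms are illustrative only.

```python
from math import pi

# Hedged sketch of Aitken's delta-squared process applied to partial sums s_n:
#     s'_n = s_{n+2} - (s_{n+2} - s_{n+1})**2 / (s_{n+2} - 2*s_{n+1} + s_n)
# Demonstrated on the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ... (illustrative).

def aitken(s):
    """One pass of Aitken extrapolation over a list of partial sums."""
    return [
        s[i + 2] - (s[i + 2] - s[i + 1]) ** 2 / (s[i + 2] - 2 * s[i + 1] + s[i])
        for i in range(len(s) - 2)
    ]

partial, total = [], 0.0
for n in range(12):
    total += (-1) ** n / (2 * n + 1)
    partial.append(total)

print(f"target pi/4        = {pi / 4:.10f}")
print(f"last plain partial = {partial[-1]:.10f}")
print(f"last Aitken value  = {aitken(partial)[-1]:.10f}")
```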
See also
Shanks transformation
Minimum polynomial extrapolation
Van Wijngaarden transformation
References
C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, 1991.
G. A. Baker Jr. and P. Graves-Morris, Padé Approximants, Cambridge U.P., 1996.
Herbert H. H. Homeier: Scalar Levin-Type Sequence Transformations, Journal of Computational and Applied Mathematics, vol. 122, no. 1–2, p 81 (2000). , .
Brezinski Claude and Redivo-Zaglia Michela : "The genesis and early developments of Aitken's process, Shanks transformation, the ε-algorithm, and related fixed point methods", Numerical Algorithms, Vol.80, No.1, (2019), pp.11-133.
Delahaye J. P. : "Sequence Transformations", Springer-Verlag, Berlin, ISBN 978-3540152835 (1988).
Sidi Avram : "Vector Extrapolation Methods with Applications", SIAM, ISBN 978-1-61197-495-9 (2017).
Brezinski Claude, Redivo-Zaglia Michela and Saad Yousef : "Shanks Sequence Transformations and Anderson Acceleration", SIAM Review, Vol.60, No.3 (2018), pp.646–669. doi:10.1137/17M1120725 .
Brezinski Claude : "Reminiscences of Peter Wynn", Numerical Algorithms, Vol.80(2019), pp.5-10.
Brezinski Claude and Redivo-Zaglia Michela : "Extrapolation and Rational Approximation", Springer, ISBN 978-3-030-58417-7 (2020).
External links
Convergence acceleration of series
GNU Scientific Library, Series Acceleration
Digital Library of Mathematical Functions
Numerical analysis
Asymptotic analysis
Summability methods
Perturbation theory | Series acceleration | [
"Physics",
"Mathematics"
] | 1,342 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Summability methods",
"Computational mathematics",
"Quantum mechanics",
"Mathematical relations",
"Asymptotic analysis",
"Numerical analysis",
"Approximations",
"Perturbation theory"
] |
8,740,164 | https://en.wikipedia.org/wiki/Bad%20Astronomy | Bad Astronomy: Misconceptions and Misuses Revealed, from Astrology to the Moon Landing "Hoax" is a non-fiction book by the American astronomer Phil Plait, who is also known as "the Bad Astronomer". The book was published in 2002 and deals with various misunderstandings about space and astronomy, such as sounds being audible in space (a misconception because in the vacuum of space, sound has no medium in which to propagate).
Plait's first book received generally favorable reviews within the academic and astronomy communities and was the first volume in the Bad Science series published by John Wiley & Sons.
Overview
Inspired by the author's web site, "Bad Astronomy", the book attempts to explore twenty-four common astronomical fallacies and explain the scientific consensus concerning these topics within the field of astronomy.
The book explains and corrects many ideas relating to space that, according to Plait, are mistaken but nevertheless often portrayed in popular movies. Plait also dedicates much of the book to debunking the idea of a Moon landing hoax and explains why astrology should not be taken seriously. A part of the book describes the Moon's tidal effects and explains the Coriolis effect, why the sky is blue, the Big Bang and other related topics.
Many of the book's topics and arguments also are found on Plait's page at the Slate magazine blog site, but Plait explores them in greater depth in the book. He states that the book is intended to debunk popular myths and also to describe science in an easily comprehensible way.
Reception
Tormod Guldvog writes in his review that "It is indeed a gem when it comes to teaching things about common astronomical phenomena. Plait discusses common ways bad astronomy is communicated, in the media, in the classroom, and perhaps, most of all, in our own minds."
Reviewing Bad Astronomy for the National Science Teachers Association, Deborah Teuscher, Director of Pike Planetarium, praised the work as "interesting, accurate, and fun to read," recommending the book as a resource for science teachers, scientifically interested lay persons, and high school and college students as a supplement to an astronomy unit.
Publishers Weekly gave a generally favorable review, stating of the planned John Wiley & Sons "Bad Science" series that "[i]f every entry in the series is as entertaining as Plait's, good science may have a fighting chance with the American public."
An April 2002 review for UniSci's "Daily University Science News" also praised Bad Astronomy as the "ideal accompaniment for International Astronomy Day (April 20)" and quoted the author, stating that it is "dangerous to be ignorant about science. Our lives and our livelihoods depend on it."
In an October 2002 review for Sky & Telescope, Bud Sadler praised Bad Astronomy for its humor, "easily understood explanations" and "simple demonstrations" to explain what he called "the most egregious examples of ill-informed astronomy."
Content
Bad Astronomy Begins at Home
Part I of Bad Astronomy, "Bad Astronomy Begins at Home", focuses on examples of astronomical misconceptions that are typically associated with the household or classroom, including the effect of the equinox on an egg's ability to balance upright without falling onto its side, the Coriolis effect's rumored effect on direction of whirlpools in household plumbing, and astronomical misunderstandings inherent in common English idioms, such as "meteoric rise" and "dark side of the Moon". The chapter "Idiom's Delight" deals with scientific inaccuracies that appear in everyday expressions, such as the phrase "light years ahead".
From the Earth to the Moon
Part II of the book, "From the Earth to the Moon", focuses on Earth's orbit and atmosphere and the Moon, with particular emphasis on how photon scattering results in the sky appearing blue, the impact of axial tilt on seasons, the impact of the Moon's presence, and misconceptions regarding the "Moon Size Illusion", explaining why and how the Moon appears larger when closer to the horizon.
Skies at Night are Big and Bright
Part III, "Skies at Night are Big and Bright", concentrates on the viewing of objects farther away than the radius of the Moon's orbit around Earth, including the optical "twinkle" effect when viewing some stars, the brightness and color of stars, observation of meteors and asteroids, and using astronomical observations to study the beginning of the universe. Plait's chapter on meteors and asteroids delves into terms and distinctions and explains, for example, "why small meteors are cold, not hot, when they hit the ground."
Artificial Intelligence
Part IV, "Artificial Intelligence", attempts to tackle various conspiracy theories and alternate worldviews, including the so-called Moon Landing Hoax, Young-Earth Creationism, Immanuel Velikovsky's book Worlds in Collision (which asserts that a relatively young Venus was once a part of Jupiter), extraterrestrial claims regarding unidentified flying objects (UFOs), and astrology. In "Appalled at Apollo", the section devoted to Moon landing hoax conspiracy theories, Plait examines aspects of the hoax theory and compares its claims against basic laws of physics. Astronomical Society of the Pacific listed Chapter 17, "Appalled at Apollo", on a list of resources stating it was "good ammunition for debunking the notion that NASA never went to the Moon point by point." In the chapter "Misidentified Flying Objects", Plait discusses various ways that cameras sometimes distort images, which Plait writes are often responsible for examples of evidence presented by extraterrestrial UFO proponents. A chapter devoted to astrology explores the topic, explaining "why astrology doesn't work".
Beam Me Up
Part V, "Beam Me Up", explores additional topics, such as common misconceptions regarding the Hubble Space Telescope and its funding, star-naming companies, and astronomy myths and inaccuracies perpetuated by Hollywood, providing "The Top-Ten Examples of Bad Astronomy in Major Motion Pictures".
Publications
Bad Astronomy was the first volume in the planned series Bad Science published by John Wiley & Sons. A second volume, Bad Medicine, by Christopher Wanjek, was published in 2003 and was the most recent in the series.
In 2008, Plait published a second book on astronomy, Death from the Skies, which explored the various ways in which the human race could be rendered extinct by astronomical phenomena.
See also
Death from the Skies
References
External links
Plait's Bad Astronomy blog at Slate.com
Sample chapter from publisher.
Astronomy books
American non-fiction books
2002 non-fiction books
Wiley (publisher) books
Scientific skepticism mass media | Bad Astronomy | [
"Astronomy"
] | 1,413 | [
"Astronomy books",
"Works about astronomy"
] |
8,743,406 | https://en.wikipedia.org/wiki/Wagner-Jauregg%20reaction | The Wagner-Jauregg reaction is a classic organic reaction in organic chemistry, named after (son of Julius Wagner-Jauregg), describing the double Diels–Alder reaction of 2 equivalents of maleic anhydride with a 1,1-diarylethylene. After aromatization of the bis-adduct, the ultimate reaction product is a naphthalene compound with one phenyl substituent.
The reaction is unusual in that the anhydride reacts with the aromatic ring. The presence of the additional alpha-phenyl group on the phenylethene (the styryl group) activates the styryl for a Diels–Alder reaction even at the expense of its aromaticity. In contrast, unactivated styrene reacts instead at the alkene alone via a linear polymerization reaction. Styrene maleic anhydride copolymer is formed, retaining the aromaticity of the styrene.
The Diels–Alder product can be re-aromatized using elemental sulfur at high temperature, followed by a second rearomatization by decarboxylation with barium hydroxide and copper:
References
Carbon-carbon bond forming reactions
Cycloadditions
Name reactions | Wagner-Jauregg reaction | [
"Chemistry"
] | 267 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
11,882,145 | https://en.wikipedia.org/wiki/Hfq%20protein | The Hfq protein (also known as HF-I protein) encoded by the hfq gene was discovered in 1968 as an Escherichia coli host factor that was essential for replication of the bacteriophage Qβ. It is now clear that Hfq is an abundant bacterial RNA binding protein which has many important physiological roles that are usually mediated by interacting with Hfq binding sRNA.
In E. coli, Hfq mutants show multiple stress response related phenotypes. The Hfq protein is now known to regulate the translation of two major stress transcription factors ( σS (RpoS) and σE (RpoE) ) in Enterobacteria. It also regulates sRNA in Vibrio cholerae, a specific example being MicX sRNA.
In Salmonella typhimurium, Hfq has been shown to be an essential virulence factor as its deletion attenuates the ability of S.typhimurium to invade epithelial cells, secrete virulence factors or survive in cultured macrophages. In Salmonella, Hfq deletion mutants are also non motile and exhibit chronic activation of the sigma mediated envelope stress response. A CLIP-Seq study of Hfq in Salmonella has revealed 640 binding sites across the Salmonella transcriptome. The majority of these binding sites was found in mRNAs and sRNAs.
In Photorhabdus luminescens, a deletion of the hfq gene causes loss of secondary metabolite production.
Hfq mediates its pleiotropic effects through several mechanisms. It interacts with regulatory sRNA and facilitates their antisense interaction with their targets. It also acts independently to modulate mRNA decay (directing mRNA transcripts for degradation) and also acts as a repressor of mRNA translation. Genomic SELEX has been used to show that Hfq binding RNAs are enriched in the sequence motif 5'-AAYAAYAA-3'. Hfq was also found to act on ribosome biogenesis in E. coli, specifically on the 30S subunit. Hfq mutants accumulate higher levels of immature small subunits and decreased translation accuracy. This function on the bacterial ribosome could also account for the pleiotropic effect typical of Hfq deletion strains.
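As a small, hedged illustration of how such a motif can be searched for computationally, the Python snippet below expands the IUPAC code Y (pyrimidine, i.e. C or T in DNA) into a regular expression and scans an example sequence; the sequence itself is invented for demonstration and is not a known Hfq target.

```python
import re

# Hedged sketch: scanning a sequence for the Hfq-enriched motif 5'-AAYAAYAA-3'.
# IUPAC code Y denotes a pyrimidine (C or T in DNA, C or U in RNA).
# The example sequence below is invented purely for demonstration.

motif = "AAYAAYAA"
iupac = {"A": "A", "C": "C", "G": "G", "T": "T", "Y": "[CT]"}
pattern = re.compile("".join(iupac[base] for base in motif))

sequence = "GGTAACAATAAGCCATAATAATAAGG"  # made-up DNA fragment
for match in pattern.finditer(sequence):
    print(f"motif {match.group()} found at position {match.start()}")
```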
Electron microscopy imaging reveals that, in addition to the expected localization of this protein in cytoplasmic regions and in the nucleoid, an important fraction of Hfq is located in close proximity to the membrane.
Crystallographic structures
Six crystallographic structures of 4 different Hfq proteins have been published so far; E. coli Hfq (), P. aeruginosa Hfq in a low salt condition () and a high salt condition (), Hfq from S. aureus with bound RNA () and without (), and the Hfq(-like) protein from M. jannaschii ().
All six structures confirm the hexameric ring-shape of a Hfq protein complex.
See also
RNA-OUT
References
Møller T, Franch T, Højrup P, Keene DR, Bächinger HP, Brennan RG, Valentin-Hansen P. Hfq: a bacterial Sm-like protein that mediates RNA-RNA interaction. Mol Cell. 2002 Jan;9(1):23-30.
Schumacher MA, Pearson RF, Møller T, Valentin-Hansen P, Brennan RG. Structures of the pleiotropic translational regulator Hfq and an Hfq-RNA complex: a bacterial Sm-like protein. EMBO J. 2002 Jul 1;21(13):3546-56.
External links
Proteins
Bacterial proteins | Hfq protein | [
"Chemistry"
] | 804 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
11,883,272 | https://en.wikipedia.org/wiki/Heparin-binding%20EGF-like%20growth%20factor | Heparin-binding EGF-like growth factor (HB-EGF) is a member of the EGF family of proteins that in humans is encoded by the HBEGF gene.
HB-EGF-like growth factor is synthesized as a membrane-anchored mitogenic and chemotactic glycoprotein. This epidermal growth factor, produced by monocytes and macrophages, is termed HB-EGF because of its affinity for heparin. It has been shown to play a role in wound healing, cardiac hypertrophy, and heart development and function. First identified in the conditioned media of human macrophage-like cells, HB-EGF is an 87-amino acid glycoprotein that displays highly regulated gene expression. Ectodomain shedding results in the soluble mature form of HB-EGF, which acts as a mitogenic and chemotactic factor for smooth muscle cells and fibroblasts. The transmembrane form of HB-EGF is the unique receptor for diphtheria toxin and functions in juxtacrine signaling in cells. Both forms of HB-EGF participate in normal physiological processes and in pathological processes including tumor progression and metastasis, organ hyperplasia, and atherosclerotic disease. HB-EGF can bind two locations on cell surfaces: heparan sulfate proteoglycans and EGF receptors, effecting cell-to-cell interactions.
Interactions
Heparin-binding EGF-like growth factor has been shown to interact with NRD1, Zinc finger and BTB domain-containing protein 16 and BAG1.
HB-EGF biological activities with these genes influence cell cycle progression, molecular chaperone regulation, cell survival, cellular functions, adhesion, and mediation of cell migration. The NRD1 gene codes for the protein nardilysin, an HB-EGF modulator. Zinc finger and BTB domain-containing protein 16 and BAG family molecular chaperone regulator function as co-chaperone proteins in processes involving HB-EGF.
Role in cancer
Recent studies indicate significant HB-EGF gene expression elevation in a number of human cancers as well as cancer-derived cell lines. Evidence indicates that HB-EGF plays a significant role in the development of malignant phenotypes contributing to the metastatic and invasive behaviors of tumors. The proliferative and chemotactic effects of HB-EGF results from the target influence on particular cells including fibroblasts, smooth muscles cells, and keratinocytes. For numerous cell types such as breast and ovarian tumor cells, human epithelial cells and keratinocytes HB-EGF is a potent mitogen resulting in evidenced upregulation of HB-EGF in such specimens. Both in vivo and in vitro studies of tumor formation in cancer derived cell lines indicate that expression of HB-EGF is essential for tumor development. As a result, studies implementing the use of specific HB-EGF inhibitors and monoclonal antibodies against HB-EGF show the potential for the development of novel therapies for treating cancers by targeting HB-EGF expression.
Role in cardiac development and vasculature
HB-EGF binding and activation of EGF receptors plays a critical role during cardiac valve tissue development and the maintenance of normal heart function in adults. During valve tissue development the interaction of HB-EGF with EGF receptors and heparan sulfate proteoglycans is essential for the prevention of malformation of valves due to enlargement. In the vascular system areas of disturbed flow show upregulation of HB-EGF with promotion of vascular lesions, atherogenesis, and hyperplasia of intimal tissue in vessels. The flow disturbance remodeling of the vascular tissues due to HB-EGF expression contributes to aortic valve disease, peripheral vascular disease, and conduit stenosis.
Role in wound healing
HB-EGF is the predominant growth factor in the epithelialization required for cutaneous wound healing, and it is a major component of wound fluids. Its mitogenic and migratory effects on keratinocytes and fibroblasts promote the dermal repair and angiogenesis necessary for wound healing. HB-EGF displays target cell specificity during the early stages of wound healing, being released by macrophages, monocytes, and keratinocytes. Binding of HB-EGF to cell-surface heparan sulfate proteoglycans enhances its mitogenic capability, increasing the rate of skin wound healing, decreasing human skin graft healing times, and promoting rapid healing of ulcers, burns, and epidermal split-thickness wounds.
Role in other physiological processes
HB-EGF is recognized as an important component for the modulation of cell activity in various biological interactions. Found widely distributed in cerebral neurons and neuroglia, HB-EGF induced by brain hypoxia and/or ischemia subsequently stimulates neurogenesis. Interactions between uterine HB-EGF and epidermal growth factor receptors of blastocysts influence embryo-uterine interactions and implantation. Studies show HB-EGF protects intestinal stem cells and intestinal epithelial cells in necrotizing enterocolitis, a disease affecting premature newborns. Associated with a breakdown in gut barrier function, necrotizing enterocolitis may be mediated by HB-EGF effects on intestinal mucosa. HB-EGF expressed during skeletal muscle contraction facilitates peripheral glucose removal, glucose tolerance and uptake. The upregulation of HB-EGF with exercise may explain the molecular basis for the decrease in metabolic disorders such as obesity and type 2 diabetes with regular exercise.
References
Further reading
External links
Growth factors
Morphogens | Heparin-binding EGF-like growth factor | [
"Chemistry",
"Biology"
] | 1,227 | [
"Growth factors",
"Morphogens",
"Induced stem cells",
"Signal transduction"
] |
11,884,960 | https://en.wikipedia.org/wiki/Extensin | Extensins are a family of flexuous, rodlike, hydroxyproline-rich glycoproteins (HRGPs) of the plant cell wall.
They are highly abundant proteins. There are around 20 extensins in Arabidopsis thaliana. They form crosslinked networks in the young cell wall. Typically they have two major diagnostic repetitive peptide motifs, one hydrophilic and the other hydrophobic, with potential for crosslinking. Extensins are thought to act as self-assembling amphiphiles essential for cell-wall assembly and growth by cell extension and expansion. The name "extensin" encapsulates the hypothesis that they are involved in cell extension.
Hydrophilic motif
This pentapeptide consists of serine (Ser) and four hydroxyprolines (Hyp): Ser-Hyp-Hyp-Hyp-Hyp. Hydroxyproline is unusual not only as a cyclic amino acid that restricts peptide flexibility but as an amino acid with no codon, being encoded as proline. Polypeptides targeted for secretion are subsequently hydroxylated by direct addition of molecular oxygen to proline at C-4. Extensin hydroxyproline is uniquely glycosylated with short chains of L-arabinose that further rigidify and increase hydrophilicity. Generally the serine has a single galactose attached.
Hydrophobic tyrosine crosslinking motif
Two tyrosines separated by a single amino acid, typically valine or another tyrosine, form a short intra-molecular diphenylether crosslink. This can be crosslinked further by the enzyme extensin peroxidase to form an inter-molecular bridge between extensin molecules and thus form networks and sheets.
References
Further reading
Kieliszewski M, Lamport DTA (1994) Extensin: repetitive motifs, functional sites, post-translational codes, and phylogeny Plant Journal 5: 157–172
Plant proteins
Structural proteins
Glycoproteins | Extensin | [
"Chemistry"
] | 431 | [
"Glycoproteins",
"Glycobiology"
] |
11,885,926 | https://en.wikipedia.org/wiki/Flow%20velocity | In continuum mechanics the flow velocity in fluid dynamics, also macroscopic velocity in statistical mechanics, or drift velocity in electromagnetism, is a vector field used to mathematically describe the motion of a continuum. The length of the flow velocity vector is scalar, the flow speed.
It is also called velocity field; when evaluated along a line, it is called a velocity profile (as in, e.g., law of the wall).
Definition
The flow velocity u of a fluid is a vector field $\mathbf{u} = \mathbf{u}(\mathbf{x}, t)$,
which gives the velocity of an element of fluid at a position $\mathbf{x}$ and time $t$.
The flow speed q is the length of the flow velocity vector, $q = \|\mathbf{u}\|$,
and is a scalar field.
Uses
The flow velocity of a fluid effectively describes everything about the motion of a fluid. Many physical properties of a fluid can be expressed mathematically in terms of the flow velocity. Some common examples follow:
Steady flow
The flow of a fluid is said to be steady if $\mathbf{u}$ does not vary with time; that is, if $\frac{\partial \mathbf{u}}{\partial t} = 0$.
Incompressible flow
If a fluid is incompressible the divergence of $\mathbf{u}$ is zero: $\nabla \cdot \mathbf{u} = 0$.
That is, $\mathbf{u}$ is a solenoidal vector field.
Irrotational flow
A flow is irrotational if the curl of $\mathbf{u}$ is zero: $\nabla \times \mathbf{u} = 0$.
That is, $\mathbf{u}$ is an irrotational vector field.
A flow in a simply-connected domain which is irrotational can be described as a potential flow, through the use of a velocity potential $\Phi$, with $\mathbf{u} = \nabla \Phi$. If the flow is both irrotational and incompressible, the Laplacian of the velocity potential must be zero: $\Delta \Phi = 0$.
Vorticity
The vorticity, $\boldsymbol{\omega}$, of a flow can be defined in terms of its flow velocity by $\boldsymbol{\omega} = \nabla \times \mathbf{u}$.
If the vorticity is zero, the flow is irrotational.
The velocity potential
If an irrotational flow occupies a simply-connected fluid region then there exists a scalar field $\Phi$ such that $\mathbf{u} = \nabla \Phi$.
The scalar field $\Phi$ is called the velocity potential for the flow. (See Irrotational vector field.)
Bulk velocity
In many engineering applications the local flow velocity vector field is not known in every point and the only accessible velocity is the bulk velocity or average flow velocity $\bar{u}$ (with the usual dimension of length per time), defined as the quotient between the volume flow rate $\dot{V}$ (with dimension of cubed length per time) and the cross-sectional area $A$ (with dimension of square length): $\bar{u} = \dot{V}/A$.
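As an illustration of these definitions, the following minimal sketch (Python with NumPy; not part of the article) samples a simple two-dimensional velocity field on a grid, estimates its divergence and vorticity with finite differences, and computes a bulk velocity from a volume flow rate and a cross-sectional area. The field, grid spacing, flow rate and area are arbitrary values chosen for illustration.

```python
import numpy as np

# Sample a 2-D velocity field u = (-y, x) (solid-body rotation) on a grid.
# This field is divergence-free (incompressible) and has constant vorticity 2.
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
ux, uy = -Y, X

dx = x[1] - x[0]
dy = y[1] - y[0]

# Finite-difference estimates of divergence and the z-component of vorticity.
div = np.gradient(ux, dx, axis=0) + np.gradient(uy, dy, axis=1)
vort = np.gradient(uy, dx, axis=0) - np.gradient(ux, dy, axis=1)

print("max |div u|   :", np.abs(div).max())   # ~0  -> incompressible
print("mean vorticity:", vort.mean())         # ~2  -> rotational flow

# Bulk (average) velocity: volume flow rate divided by cross-sectional area.
Q = 0.05   # volume flow rate, m^3/s (assumed value)
A = 0.01   # cross-sectional area, m^2 (assumed value)
print("bulk velocity :", Q / A, "m/s")
```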
See also
Displacement field (mechanics)
Drift velocity
Enstrophy
Group velocity
Particle velocity
Pressure gradient
Strain rate
Strain-rate tensor
Stream function
Velocity potential
Vorticity
Wind velocity
References
Fluid dynamics
Continuum mechanics
Vector calculus
Velocity
Spatial gradient
Vector physical quantities | Flow velocity | [
"Physics",
"Chemistry",
"Engineering"
] | 532 | [
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Motion (physics)",
"Vector physical quantities",
"Piping",
"Velocity",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
11,887,250 | https://en.wikipedia.org/wiki/Stellar%20magnetic%20field | A stellar magnetic field is a magnetic field generated by the motion of conductive plasma inside a star. This motion is created through convection, which is a form of energy transport involving the physical movement of material. A localized magnetic field exerts a force on the plasma, effectively increasing the pressure without a comparable gain in density. As a result, the magnetized region rises relative to the remainder of the plasma, until it reaches the star's photosphere. This creates starspots on the surface, and the related phenomenon of coronal loops.
Measurement
A star's magnetic field can be measured using the Zeeman effect. Normally the atoms in a star's atmosphere will absorb certain frequencies of energy in the electromagnetic spectrum, producing characteristic dark absorption lines in the spectrum. However, when the atoms are within a magnetic field, these lines become split into multiple, closely spaced lines. The energy also becomes polarized with an orientation that depends on the orientation of the magnetic field. Thus the strength and direction of the star's magnetic field can be determined by examination of the Zeeman effect lines.
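As a rough illustration of the measurement principle, the sketch below (Python; not part of the article) estimates the order of magnitude of the wavelength splitting of the normal Zeeman effect, using the standard estimate Δλ ≈ eBλ²/(4πmₑc); the field strength and spectral line chosen are illustrative assumptions only.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s

def zeeman_splitting(wavelength_m: float, B_tesla: float) -> float:
    """Wavelength separation of the normal Zeeman components,
    delta_lambda = e * B * lambda^2 / (4 * pi * m_e * c)."""
    return e * B_tesla * wavelength_m**2 / (4 * math.pi * m_e * c)

# Example: a 630 nm line in a 0.3 T field (roughly sunspot strength, assumed).
dl = zeeman_splitting(630e-9, 0.3)
print(f"Zeeman splitting: {dl * 1e12:.2f} pm")  # on the order of a few picometres
```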
A stellar spectropolarimeter is used to measure the magnetic field of a star. This instrument consists of a spectrograph combined with a polarimeter. The first instrument to be dedicated to the study of stellar magnetic fields was NARVAL, which was mounted on the Bernard Lyot Telescope at the Pic du Midi de Bigorre in the French Pyrenees mountains.
Various measurements—including magnetometer measurements over the last 150 years; ¹⁴C in tree rings; and ¹⁰Be in ice cores—have established substantial magnetic variability of the Sun on decadal, centennial and millennial time scales.
Field generation
Stellar magnetic fields, according to solar dynamo theory, are caused within the convective zone of the star. The convective circulation of the conducting plasma functions like a dynamo. This activity destroys the star's primordial magnetic field, then generates a dipolar magnetic field. As the star undergoes differential rotation—rotating at different rates for various latitudes—the magnetism is wound into a toroidal field of "flux ropes" that become wrapped around the star. The fields can become highly concentrated, producing activity when they emerge on the surface.
The magnetic field of a rotating body of conductive gas or liquid develops self-amplifying electric currents, and thus a self-generated magnetic field, due to a combination of differential rotation (different angular velocity of different parts of body), Coriolis forces and induction. The distribution of currents can be quite complicated, with numerous open and closed loops, and thus the magnetic field of these currents in their immediate vicinity is also quite twisted. At large distances, however, the magnetic fields of currents flowing in opposite directions cancel out and only a net dipole field survives, slowly diminishing with distance. Because the major currents flow in the direction of conductive mass motion (equatorial currents), the major component of the generated magnetic field is the dipole field of the equatorial current loop, thus producing magnetic poles near the geographic poles of a rotating body.
The magnetic fields of all celestial bodies are often aligned with the direction of rotation, with notable exceptions such as certain pulsars.
Periodic field reversal
Another feature of this dynamo model is that the currents are AC rather than DC. Their direction, and thus the direction of the magnetic field they generate, alternates more or less periodically, changing amplitude and reversing direction, although still more or less aligned with the axis of rotation.
The Sun's major component of magnetic field reverses direction every 11 years (so the period is about 22 years), resulting in a diminished magnitude of magnetic field near reversal time. During this dormancy, the sunspot activity is at maximum (because of the lack of magnetic braking on plasma) and, as a result, massive ejection of high energy plasma into the solar corona and interplanetary space takes place. Collisions of neighboring sunspots with oppositely directed magnetic fields result in the generation of strong electric fields near rapidly disappearing magnetic field regions. This electric field accelerates electrons and protons to high energies (kiloelectronvolts) which results in jets of extremely hot plasma leaving the Sun's surface and heating coronal plasma to high temperatures (millions of kelvin).
If the gas or liquid is very viscous (resulting in turbulent differential motion), the reversal of the magnetic field may not be very periodic. This is the case with the Earth's magnetic field, which is generated by turbulent currents in a viscous outer core.
Surface activity
Starspots are regions of intense magnetic activity on the surface of a star. (On the Sun they are termed sunspots.) These form a visible component of magnetic flux tubes that are formed within a star's convection zone. Due to the differential rotation of the star, the tube becomes curled up and stretched, inhibiting convection and producing zones of lower than normal temperature. Coronal loops often form above starspots, forming from magnetic field lines that stretch out into the stellar corona. These in turn serve to heat the corona to temperatures over a million kelvins.
The magnetic fields linked to starspots and coronal loops are linked to flare activity, and the associated coronal mass ejection. The plasma is heated to tens of millions of kelvins, and the particles are accelerated away from the star's surface at extreme velocities.
Surface activity appears to be related to the age and rotation rate of main-sequence stars. Young stars with a rapid rate of rotation exhibit strong activity. By contrast middle-aged, Sun-like stars with a slow rate of rotation show low levels of activity that varies in cycles. Some older stars display almost no activity, which may mean they have entered a lull that is comparable to the Sun's Maunder minimum. Measurements of the time variation in stellar activity can be useful for determining the differential rotation rates of a star.
Magnetosphere
A star with a magnetic field will generate a magnetosphere that extends outward into the surrounding space. Field lines from this field originate at one magnetic pole on the star then end at the other pole, forming a closed loop. The magnetosphere contains charged particles that are trapped from the stellar wind, which then move along these field lines. As the star rotates, the magnetosphere rotates with it, dragging along the charged particles.
As stars emit matter with a stellar wind from the photosphere, the magnetosphere creates a torque on the ejected matter. This results in a transfer of angular momentum from the star to the surrounding space, causing a slowing of the stellar rotation rate. Rapidly rotating stars have a higher mass loss rate, resulting in a faster loss of momentum. As the rotation rate slows, so too does the angular deceleration. By this means, a star will gradually approach, but never quite reach, the state of zero rotation.
Magnetic stars
A T Tauri star is a type of pre-main-sequence star that is being heated through gravitational contraction and has not yet begun to burn hydrogen at its core. They are variable stars that are magnetically active. The magnetic field of these stars is thought to interact with its strong stellar wind, transferring angular momentum to the surrounding protoplanetary disk. This allows the star to brake its rotation rate as it collapses.
Small, M-class stars (with 0.1–0.6 solar masses) that exhibit rapid, irregular variability are known as flare stars. These fluctuations are hypothesized to be caused by flares, although the activity is much stronger relative to the size of the star. The flares on this class of stars can extend up to 20% of the circumference, and radiate much of their energy in the blue and ultraviolet portion of the spectrum.
Straddling the boundary between stars that undergo nuclear fusion in their cores and non-hydrogen fusing brown dwarfs are the ultracool dwarfs. These objects can emit radio waves due to their strong magnetic fields. Approximately 5–10% of these objects have had their magnetic fields measured. The coolest of these, 2MASS J10475385+2124234 with a temperature of 800-900 K, retains a magnetic field stronger than 1.7 kG, making it some 3000 times stronger than the Earth's magnetic field. Radio observations also suggest that their magnetic fields periodically change their orientation, similar to the Sun during the solar cycle.
Planetary nebulae are created when a red giant star ejects its outer envelope, forming an expanding shell of gas. However it remains a mystery why these shells are not always spherically symmetrical. 80% of planetary nebulae do not have a spherical shape; instead forming bipolar or elliptical nebulae. One hypothesis for the formation of a non-spherical shape is the effect of the star's magnetic field. Instead of expanding evenly in all directions, the ejected plasma tends to leave by way of the magnetic poles. Observations of the central stars in at least four planetary nebulae have confirmed that they do indeed possess powerful magnetic fields.
After some massive stars have ceased thermonuclear fusion, a portion of their mass collapses into a compact body of neutrons called a neutron star. These bodies retain a significant magnetic field from the original star, but the collapse in size causes the strength of this field to increase dramatically. The rapid rotation of these collapsed neutron stars results in a pulsar, which emits a narrow beam of energy that can periodically point toward an observer.
Compact and fast-rotating astronomical objects (white dwarfs, neutron stars and black holes) have extremely strong magnetic fields. The magnetic field of a newly born fast-spinning neutron star is so strong (up to 10⁸ teslas) that it electromagnetically radiates enough energy to quickly (in a matter of a few million years) damp down the star rotation by 100 to 1000 times. Matter falling on a neutron star also has to follow the magnetic field lines, resulting in two hot spots on the surface where it can reach and collide with the star's surface. These spots are literally a few feet (about a metre) across but tremendously bright. Their periodic eclipsing during star rotation is hypothesized to be the source of pulsating radiation (see pulsars).
An extreme form of a magnetized neutron star is the magnetar. These are formed as the result of a core-collapse supernova. The existence of such stars was confirmed in 1998 with the measurement of the star SGR 1806-20. The magnetic field of this star has increased the surface temperature to 18 million K and it releases enormous amounts of energy in gamma ray bursts.
Jets of relativistic plasma are often observed along the direction of the magnetic poles of active black holes in the centers of very young galaxies.
Star-planet interaction controversy
In 2008, a team of astronomers first described how as the exoplanet orbiting HD 189733 A reaches a certain place in its orbit, it causes increased stellar flaring. In 2010, a different team found that every time they observe the exoplanet at a certain position in its orbit, they also detected X-ray flares. Theoretical research since 2000 suggested that an exoplanet very near to the star that it orbits may cause increased flaring due to the interaction of their magnetic fields, or because of tidal forces. In 2019, astronomers combined data from Arecibo Observatory, MOST, and the Automated Photoelectric Telescope, in addition to historical observations of the star at radio, optical, ultraviolet, and X-ray wavelengths to examine these claims. Their analysis found that the previous claims were exaggerated and the host star failed to display many of the brightness and spectral characteristics associated with stellar flaring and solar active regions, including sunspots. They also found that the claims did not stand up to statistical analysis, given that many stellar flares are seen regardless of the position of the exoplanet, therefore debunking the earlier claims. The magnetic fields of the host star and exoplanet do not interact, and this system is no longer believed to have a "star-planet interaction."
See also
References
External links
Magnetic field
Magnetism in astronomy
Concepts in stellar astronomy | Stellar magnetic field | [
"Physics",
"Astronomy"
] | 2,495 | [
"Concepts in astrophysics",
"Concepts in stellar astronomy",
"Magnetism in astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
11,887,815 | https://en.wikipedia.org/wiki/Postreplication%20repair | Postreplication repair is the repair of damage to the DNA that takes place after replication.
Some example genes in humans include:
BRCA2 and BRCA1
BLM
NBS1
Accurate and efficient DNA replication is crucial for the health and survival of all living organisms. Under optimal conditions, the replicative DNA polymerases ε, δ, and α can work in concert to ensure that the genome is replicated efficiently with high accuracy in every cell cycle. However, DNA is constantly challenged by exogenous and endogenous genotoxic threats, including solar ultraviolet (UV) radiation and reactive oxygen species (ROS) generated as a byproduct of cellular metabolism. Damaged DNA can act as a steric block to replicative polymerases, thereby leading to incomplete DNA replication or the formation of secondary DNA strand breaks at the sites of replication stalling. Incomplete DNA synthesis and DNA strand breaks are both potential sources of genomic instability. An arsenal of DNA repair mechanisms exists to repair various forms of damaged DNA and minimize genomic instability. Most DNA repair mechanisms require an intact DNA strand as template to fix the damaged strand.
DNA damage prevents the normal enzymatic synthesis of DNA by the replication fork. At damaged sites in the genome, both prokaryotic and eukaryotic cells utilize a number of postreplication repair (PRR) mechanisms to complete DNA replication. Chemically modified bases can be bypassed by either error-prone or error-free translesion polymerases, or through genetic exchange with the sister chromatid. The replication of DNA with a broken sugar-phosphate backbone is most likely facilitated by the homologous recombination proteins that confer resistance to ionizing radiation. The activity of PRR enzymes is regulated by the SOS response in bacteria and may be controlled by the postreplication checkpoint response in eukaryotes.
The elucidation of PRR mechanisms is an active area of molecular biology research, and the terminology is currently in flux. For instance, PRR has recently been referred to as "DNA damage tolerance" to emphasize the instances in which postreplication DNA damage is repaired without removing the original chemical modification to the DNA. While the term PRR has most frequently been used to describe the repair of single-stranded postreplication gaps opposite damaged bases, a broader usage has been suggested. In this case, the term PRR would encompass all processes that facilitate the replication of damaged DNA, including those that repair replication-induced double-strand breaks.
Melanoma cells are commonly defective in postreplication repair of DNA damages that are in the form of cyclobutane pyrimidine dimers, a type of damage caused by ultraviolet radiation. A particular repair process that appears to be defective in melanoma cells is homologous recombinational repair. Defective postreplication repair of cyclobutane pyrimidine dimers can lead to mutations that are the primary driver of melanoma.
References
DNA repair | Postreplication repair | [
"Biology"
] | 616 | [
"Molecular genetics",
"Cellular processes",
"DNA repair"
] |
11,890,087 | https://en.wikipedia.org/wiki/Pulsometer%20pump | The Pulsometer steam pump is a pistonless pump which was patented in 1872 by American Charles Henry Hall. In 1875 a British engineer bought the patent rights of the Pulsometer and it was introduced to the market soon thereafter. The invention was inspired by the Savery steam pump invented by Thomas Savery. Around the turn of the century, it was a popular and effective pump for quarry pumping.
Construction and operation
This extremely simple pump was made of cast iron, and had no pistons, rods, cylinders, cranks, or flywheels. It operated by the direct action of steam on water. The mechanism consisted of two chambers. As the steam condensed in one chamber, it acted as a suction pump, while in the other chamber, steam was introduced under pressure and so it acted as a force pump. At the end of every stroke, a ball valve consisting of a small brass ball moved slightly, causing the two chambers to swap functions from suction-pump to force-pump and vice versa. The result was that the water was first suction pumped and then force pumped.
A good explanation can be found in the 1901 article referenced below: The operation of the pulsometer is as follows: The ball being at the entrance of the left-hand chamber, and the right-hand being full of water, steam enters, pressing on the surface of the water, and forcing it out through the discharge passage. A rapid condensation of steam occurs from contact with the water and with the walls of the chamber, previously cooled by the water. When the water level has reached the horizontal edge of the discharge passage, a large volume of steam suddenly escapes and is at once condensed by the relatively cold water between the chamber and the discharge valve. The pressure in the chamber quickly decreases; it cannot be sustained by steam from the boiler, for, in accordance with the inventor's first specifications, the steam pipe is small. If now the pressure in the left chamber is equal, or nearly equal, to that in the right, friction caused by the rapid flow of steam past the ball will draw the ball over and close the right-hand chamber. Cut off from further supply, the steam, in contact with water, begins to condense; a jet of cold water from the discharge pipe spurts up through the injection tube, and by breaking into spray against the side of the steam space, completes the condensation. The partial vacuum produced brings water through the suction valve to fill the chamber; but at the same time the air valve admits a little air, which passes up ahead of the water and forms an elastic cushion to prevent the water from striking violently against the steam ball. The air chamber is for the purpose of preventing water-hammer in the suction pipe.
Advantages
The pump ran automatically without attendance. It was praised for its "extreme simplicity of construction, operation, compact form, high efficiency, economy, durability, and adaptability". Later designs were improved upon to enhance efficiency and to make the machine more accessible for inspection and repairs, thus reducing maintenance costs.
Detailed analysis
In the January 1901 issue of Technology Quarterly and Proceedings of the Society of Arts, an article appeared by Joseph C. Riley describing key operational details and technical evaluation of the pulsometer pump's performance. Riley noted that although somewhat inefficient, the pulsometer's simplicity and robust construction made it well suited to pumping "thick liquids or semi-fluids, such as heavy syrups, or even liquid mud".
Pulsometer Engineering Company Limited
Pulsometer Engineering Company Limited was founded in Britain in 1875 after a British engineer bought the patent rights of the pulsometer pump from Thomas Hall. In 1901 the company moved from London to Reading, Berkshire. In 1961 Pulsometer merged with Sigmund Pumps of Gateshead to form Sigmund Pulsometer Pumps. SPP Pumps Ltd became one of the largest pump companies in Europe. SPP Pumps Ltd is now part of Kirloskar Brothers Ltd.
References
Kirloskar Brothers Limited
Pumps
Steam power | Pulsometer pump | [
"Physics",
"Chemistry"
] | 823 | [
"Pumps",
"Physical quantities",
"Turbomachinery",
"Steam power",
"Physical systems",
"Power (physics)",
"Hydraulics"
] |
11,890,372 | https://en.wikipedia.org/wiki/SahysMod | SahysMod is a computer program for the prediction of the salinity of soil moisture, groundwater and drainage water, the depth of the watertable, and the drain discharge in irrigated agricultural lands, using different hydrogeologic and aquifer conditions, varying water management options, including the use of ground water for irrigation, and several crop rotation schedules, whereby the spatial variations are accounted for through a network of polygons.
Rationale
There is a need for a computer program that is easier to operate and that requires a simpler data structure than most currently available models. Therefore, the SahysMod program was designed keeping in mind a relative simplicity of operation, to facilitate its use by field technicians, engineers and project planners instead of specialized geo-hydrologists.
It aims at using input data that are generally available, or that can be estimated with reasonable accuracy, or that can be measured with relative ease. Although the calculations are done numerically and have to be repeated many times, the final results can be checked by hand using the formulas in this manual.
SahysMod's objective is to predict the long-term hydro-salinity in terms of general trends, not to arrive at exact predictions of how, for example, the situation would be on the first of April in ten years from now.
Further, SahysMod gives the option of the re-use of drainage and well water (e.g. for irrigation) and it can account for farmers' responses to waterlogging, soil salinity, water scarcity and over-pumping from the aquifer. Also it offers the possibility to introduce subsurface drainage systems at varying depths and with varying capacities so that they can be optimized.
Other features of SahysMod are found in the next section.
Methods
Calculation of aquifer conditions in polygons
The model calculates the ground water levels and the incoming and outgoing ground water flows between the polygons by a numerical solution of the well-known Boussinesq equation. The levels and flows influence each other mutually.
The ground water situation is further determined by the vertical groundwater recharge that is calculated from the agronomic water balance. These depend again on the levels of the ground water.
When semi-confined aquifers are present, the resistance to vertical flow in the slowly permeable top-layer and the overpressure in the aquifer, if any, are taken into account.
Hydraulic boundary conditions are given as hydraulic heads in the external nodes in combination with the hydraulic conductivity between internal and external nodes. If one wishes to impose a zero flow condition at the external nodes, the conductivity can be set at zero.
Further, aquifer flow conditions can be given for the internal nodes. These are required when a geological fault is present at the bottom of the aquifer or when flow occurs between the main aquifer and a deeper aquifer separated by a semi-confining layer.
The depth of the water table, the rainfall and salt concentrations of the deeper layers are assumed to be the same over the whole polygon. Other parameters can vary within the polygons according to type of crops and cropping rotation schedule.
Seasonal approach
The model is based on seasonal input data and returns seasonal outputs. The number of seasons per year can be chosen between a minimum of one and a maximum of four. One can distinguish for example dry, wet, cold, hot, irrigation or fallow seasons. Reasons of not using smaller input/output periods are:
short-term (e.g., daily) inputs would require much information, which, in large areas, may not be readily available;
short-term outputs would lead to immense output files, which would be difficult to manage and interpret;
this model is especially developed to predict long-term trends, and predictions for the future are more reliably made on a seasonal (long-term) than on a daily (short-term) basis, due to the high variability of short-term data;
though the precision of the predictions for the future may be limited, a lot is gained when the trend is sufficiently clear. For example, it need not be a major constraint to the design of appropriate soil salinity control measures when a certain salinity level, predicted by SahysMod to occur after 20 years, will in reality occur after 15 or 25 years.
Computational time steps
Many water balance factors depend on the level of the water table, which again depends on some of the water-balance factors. Due to these mutual influences there can be non-linear changes throughout the season. Therefore, the computer program performs daily calculations. For this purpose, the seasonal water-balance factors given with the input are reduced automatically to daily values. The calculated seasonal water-balance factors, as given in the output, are obtained by summations of the daily calculated values. Groundwater levels and soil salinity (the state variables) at the end of the season are found by accumulating the daily changes of water and salt storage.
In some cases the program may detect that the time step must be taken less than 1 day for better accuracy. The necessary adjustments are made automatically.
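The passage above describes how seasonal water-balance inputs are reduced to daily values and then re-accumulated into seasonal outputs. A minimal sketch of that bookkeeping (Python) is given below; the variable names, the single linear drainage term and all numbers are assumptions for illustration, not the actual SahysMod equations.

```python
def run_season(seasonal_inputs, days, storage0, drain_coeff):
    """Split seasonal totals into daily amounts, update storage day by day,
    and re-accumulate the daily fluxes into seasonal output totals."""
    # Seasonal totals (e.g. mm/season) divided by the number of days.
    daily = {name: total / days for name, total in seasonal_inputs.items()}
    storage = storage0
    totals = {"drainage": 0.0}
    for _ in range(days):
        recharge = daily["rain"] + daily["irrigation"] - daily["evaporation"]
        storage += recharge
        # Simple linear drainage response to the stored water (illustrative only).
        drainage = max(0.0, drain_coeff * storage)
        storage -= drainage
        totals["drainage"] += drainage
    totals["final_storage"] = storage
    return totals

season = {"rain": 120.0, "irrigation": 300.0, "evaporation": 360.0}  # mm/season
print(run_season(season, days=180, storage0=50.0, drain_coeff=0.02))
```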
Data requirements
Polygonal network
The model permits a maximum of 240 internal and 120 external polygons with a minimum of 3 and a maximum of 6 sides each. The subdivision of the area into polygons, based on nodal points with known coordinates, should be governed by the characteristics of the distribution of the cropping, irrigation, drainage and groundwater characteristics over the study area.
The nodes must be numbered, which can be done at will. With an index one indicates whether the node is internal or external. Nodes can be added and removed at will or changed from internal to external or vice versa. Through another index one indicates whether the internal nodes have an unconfined or semi-confined aquifer. This can also be changed at will.
Nodal network relations are to be given indicating the neighboring polygon numbers of each node. The program then calculates the surface area of each polygon, the distance between the nodes and the length of the sides between them using the Thiessen principle.
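For readers who want to see what such nodal-network geometry amounts to, the following sketch (Python; not taken from SahysMod) computes the area of a polygon from its vertex coordinates with the shoelace formula and the distance between two nodes; the coordinates are invented.

```python
from math import dist

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon given as (x, y) vertices."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Invented nodal coordinates (e.g. in metres).
polygon = [(0, 0), (400, 0), (500, 300), (200, 450), (-50, 250)]
node_a, node_b = (0, 0), (400, 0)

print("polygon area [m^2]:", polygon_area(polygon))
print("distance between nodes [m]:", dist(node_a, node_b))
```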
The hydraulic conductivity can vary for each side of the polygons.
The depth of the water table, the rainfall and salt concentrations of the deeper layers are assumed to be the same over the whole polygon. Other parameters can vary within the polygons according to type of crops and cropping rotation schedule.
Hydrological data
The method uses seasonal water balance components as input data. These are related to the surface hydrology (like rainfall, potential evaporation, irrigation, use of drain and well water for irrigation, runoff), and the aquifer hydrology (e.g., pumping from wells). The other water balance components (like actual evaporation, downward percolation, upward capillary rise, subsurface drainage, groundwater flow) are given as output.
The quantity of drainage water, as output, is determined by two drainage intensity factors for drainage above and below drain level respectively (to be given with the input data) and the height of the water table above the given drain level. This height results from the computed water balance. Further, a drainage reduction factor can be applied to simulate a limited operation of the drainage system. Variation of the drainage intensity factors and the drainage reduction factor gives the opportunity to simulate the effect of different drainage options.
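A schematic reading of that description can be written as a small sketch (Python) in which two intensity factors and a reduction factor act on the height of the water table above drain level; the quadratic-plus-linear form and all numbers below are assumptions for illustration, not the actual SahysMod drainage functions.

```python
def drain_discharge(head_above_drain, qh1, qh2, reduction=1.0):
    """Illustrative drain discharge (e.g. m/day) from the height (m) of the
    water table above drain level. qh1 and qh2 stand in for the two drainage
    intensity factors; the functional form is an assumption, not SahysMod's."""
    if head_above_drain <= 0.0:
        return 0.0
    return reduction * (qh1 * head_above_drain**2 + qh2 * head_above_drain)

# Assumed values: water table 0.6 m above drain level, system operated at 80%.
print(drain_discharge(head_above_drain=0.6, qh1=0.002, qh2=0.004, reduction=0.8))
```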
To obtain accuracy in the computations of the ground water flow (sect. 2.8), the actual evaporation and the capillary rise, the computer calculations are done on a daily basis. For this purpose, the seasonal hydrological data are divided by the number of days per season to obtain daily values. The daily values are added to yield seasonal values.
Cropping patterns/rotations
The input data on irrigation, evaporation, and surface runoff are to be specified per season for three kinds of agricultural practices, which can be chosen at the discretion of the user:
A: irrigated land with crops of group A
B: irrigated land with crops of group B
U: non-irrigated land with rain-fed crops or fallow land
The groups, expressed in fractions of the total area, may consist of combinations of crops or just of a single kind of crop. For example, as the A-type crops one may specify the lightly irrigated cultures, and as the B type the more heavily irrigated ones, such as sugarcane and rice. But one can also take A as rice and B as sugar cane, or perhaps trees and orchards. A, B and/or U crops can be taken differently in different seasons, e.g. A=wheat plus barley in winter and A=maize in summer while B=vegetables in winter and B=cotton in summer. Non-irrigated land can be specified in two ways: (1) as U = 1−A−B and (2) as A and/or B with zero irrigation. A combination can also be made.
Further, a specification must be given of the seasonal rotation of the different land uses over the total area, e.g. full rotation, no rotation at all, or incomplete rotation. This is specified with a rotation index. The rotations are taken over the seasons within the year. To obtain rotations over the years it is advisable to introduce annual input changes, as explained in the section on annual input changes below.
When a fraction A1, B1 and/or U1 differs from the fraction A2, B2 and/or U2 in another season, because the irrigation regime changes in the different seasons, the program will detect that a certain rotation occurs. If one wishes to avoid this, one may specify the same fractions in all seasons (A2=A1, B2=B1, U2=U1) but the crops and irrigation quantities may be different and may need to be proportionally adjusted. One may even specify irrigated land (A or B) with zero irrigation, which is the same as un-irrigated land (U).
Cropping rotation schedules vary widely in different parts of the world. Creative combinations of area fractions, rotation indexes, irrigation quantities and annual input changes can accommodate many types of agricultural practices.
Variation of the area fractions and/or the rotational schedule gives the opportunity to simulate the effect of different agricultural practices on the water and salt balance.
Soil strata, type of aquifer
SahysMod accepts four different reservoirs of which three are in the soil profile:
s: a surface reservoir,
r: an upper (shallow) soil reservoir or root zone,
x: an intermediate soil reservoir or transition zone,
q: a deep reservoir or main aquifer.
The upper soil reservoir is defined by the soil depth, from which water can evaporate or be taken up by plant roots. It can be taken equal to the root zone. It can be saturated, unsaturated, or partly saturated, depending on the water balance. All water movements in this zone are vertical, either upward or downward, depending on the water balance. (In a future version of Sahysmod, the upper soil reservoir may be divided into two equal parts to detect the trend in the vertical salinity distribution.)
The transition zone can also be saturated, unsaturated or partly saturated. All flows in this zone are horizontal, except the flow to subsurface drains, which is radial.
If a horizontal subsurface drainage system is present, this must be placed in the transition zone, which is then divided into two parts: an upper transition zone (above drain level) and a lower transition zone (below drain level).
If one wishes to distinguish an upper and lower part of the transition zone in the absence of a subsurface drainage system, one may specify in the input data a drainage system with zero intensity.
The aquifer has mainly horizontal flow. Pumped wells, if present, receive their water from the aquifer only. The flow in the aquifer is determined in dependence of spatially varying depths of the aquifer, levels of the water table, and hydraulic conductivity.
SahysMod permits the introduction of phreatic (unconfined) and semi-confined aquifers. The latter may develop a hydraulic over or under pressure below the slowly permeable top-layer (aquitard).
Agricultural water balances
The agricultural water balances are calculated for each soil reservoir separately as shown in the article Hydrology (agriculture). The excess water leaving one reservoir is converted into incoming water for the next reservoir. The three soil reservoirs can be assigned different thickness and storage coefficients, to be given as input data. When, in a particular situation the transition zone or the aquifer is not present, they must be given a minimum thickness of 0.1 m.
The depth of the water table at the end of the previous time step, calculated from the water balances, is assumed to be the same within each polygon. If this assumption is not acceptable, the area must be divided into a larger number of polygons.
Under certain conditions, the height of the water table influences the water-balance components. For example, a rise of the water table towards the soil surface may lead to an increase of capillary rise, actual evaporation, and subsurface drainage, or a decrease of percolation losses. This, in turn, leads to a change of the water-balance, which again influences the height of the water table, etc. This chain of reactions is one of the reasons why Sahysmod has been developed into a computer program, in which the computations are made day by day to account for the chain of reactions with a sufficient degree of accuracy.
Drains, wells, and re-use
The sub-surface drainage can be accomplished through drains or pumped wells.
The subsurface drains, if any, are characterized by drain depth and drainage capacity. The drains are located in the transition zone. The subsurface drainage facility can be applied to natural or artificial drainage systems. The functioning of an artificial drainage system can be regulated through a drainage control factor.
By installing a drainage system with zero capacity one obtains the opportunity to have separate water and salt balances in the transition above and below drain level.
The pumped wells, if any, are located in the aquifer. Their functioning is characterized by the well discharge.
The drain and well water can be used for irrigation through a (re)use factor. This may affect the water and salt balance and the irrigation efficiency or sufficiency.
Salt balances
The salt balances are calculated for each soil reservoir separately. They are based on their water balances, using the salt concentrations of the incoming and outgoing water. Some concentrations must be given as input data, like the initial salt concentrations of the water in the different soil reservoirs, of the irrigation water and of the incoming groundwater in the aquifer. The concentrations are expressed in terms of electric conductivity (EC in dS/m). When the concentrations are known in terms of g salt/L water, the rule of thumb: 1 g/L -> 1.7 dS/m can be used. Usually, salt concentrations of the soil are expressed in ECe, the electric conductivity of an extract of a saturated soil paste. In Sahysmod, the salt concentration is expressed as the EC of the soil moisture when saturated under field conditions. As a rule, one can use the conversion rate EC : ECe = 2 : 1. The principles used correspond to those described in the article soil salinity control.
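The unit conversions mentioned above translate directly into a pair of helper functions; a small sketch (Python) applying the stated rules of thumb (1 g/L ≈ 1.7 dS/m and EC : ECe ≈ 2 : 1), with an invented input value:

```python
def gram_per_litre_to_ec(g_per_l: float) -> float:
    """Rule of thumb from the text: 1 g salt/L water ~ 1.7 dS/m."""
    return 1.7 * g_per_l

def ec_fieldmoisture_to_ece(ec_field: float) -> float:
    """Rule of thumb from the text: EC of saturated soil moisture : ECe = 2 : 1."""
    return ec_field / 2.0

ec = gram_per_litre_to_ec(3.0)   # 3 g/L irrigation water -> ~5.1 dS/m
print(ec, ec_fieldmoisture_to_ece(ec))
```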
Salt concentrations of outgoing water (either from one reservoir into the other or by subsurface drainage) are computed on the basis of salt balances, using different leaching or salt mixing efficiencies to be given with the input data. The effects of different leaching efficiencies can be simulated varying their input value.
If drain or well water is used for irrigation, the method computes the salt concentration of the mixed irrigation water in the course of the time and the subsequent effect on the soil and ground water salinity, which again influences the salt concentration of the drain and well water. By varying the fraction of used drain or well water (through the input), the long-term effect of different fractions can be simulated.
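The mixing of irrigation sources described here is, at its core, a flow-weighted average of salt concentrations; a minimal sketch of that single step (Python, with invented flows and concentrations) follows.

```python
def mixed_irrigation_ec(sources):
    """Flow-weighted mean EC (dS/m) of irrigation water mixed from several
    sources, each given as (flow, ec). Illustrative only."""
    total_flow = sum(flow for flow, _ in sources)
    if total_flow == 0.0:
        return 0.0
    return sum(flow * ec for flow, ec in sources) / total_flow

# Canal water plus re-used drain and well water (invented flows, mm/season).
sources = [(300.0, 0.5),   # canal water, low salinity
           (60.0, 4.0),    # re-used drainage water
           (40.0, 2.5)]    # pumped well water
print(mixed_irrigation_ec(sources))
```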
The dissolution of solid soil minerals or the chemical precipitation of poorly soluble salts is not included in the computation method. However, to some extent it can be accounted for through the input data, e.g. by increasing or decreasing the salt concentration of the irrigation water or of the incoming water in the aquifer. In a future version, the precipitation of gypsum may be introduced.
Farmers' responses
If required, farmers' responses to waterlogging and soil salinity can be automatically accounted for. The method can gradually decrease:
The amount of irrigation water applied when the water table becomes shallower depending on the kind of crop (paddy rice and non-rice)
The fraction of irrigated land when the available irrigation water is scarce;
The fraction of irrigated land when the soil salinity increases; for this purpose, the salinity is given a stochastic interpretation;
The groundwater abstraction by pumping from wells when the water table drops.
The farmers' responses influence the water and salt balances, which, in turn, slows down the process of water logging and salinization. Ultimately a new equilibrium situation will arise.
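One way to picture such a response rule is as a simple feedback that scales back the irrigated fraction as soil salinity rises; the sketch below (Python) is a made-up illustration of that idea, with invented threshold and sensitivity values, not the response functions actually coded in SahysMod.

```python
def adjusted_irrigated_fraction(fraction, soil_ec, ec_threshold=6.0, sensitivity=0.08):
    """Reduce the irrigated area fraction once soil salinity (dS/m) exceeds a
    threshold; purely illustrative parameter values."""
    excess = max(0.0, soil_ec - ec_threshold)
    return max(0.0, fraction * (1.0 - sensitivity * excess))

for ec in (4.0, 7.0, 10.0):
    print(ec, adjusted_irrigated_fraction(0.6, ec))
```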
The user can also introduce farmers' responses by manually changing the relevant input data. Perhaps it will be useful to study the automatic farmers' responses and their effect first, and thereafter decide what the farmers' responses will be in the view of the user.
Annual input changes
The program can be run with fixed input data for the number of years determined by the user. This option can be used to predict future developments based on long-term average input values, e.g. rainfall, as it will be difficult to assess the future values of the input data year by year.
The program also offers the possibility to follow historic records with annually changing input values (e.g. rainfall, irrigation, cropping rotations), in which case the calculations are made year by year. If this possibility is chosen, the program creates a transfer file by which the final conditions of the previous year (e.g. water table and salinity) are automatically used as the initial conditions for the subsequent period. This facility also makes it possible to use various generated rainfall sequences drawn randomly from a known rainfall probability distribution and to obtain a stochastic prediction of the resulting output parameters.
Some input parameters should not be changed, like the nodal network relations, the system geometry, the thickness of the soil layers, and the total porosity, otherwise illogical jumps occur in the water and salt balances. These parameters are also stored in the transfer file, so that any impermissible change is overruled by the transfer data. In some cases of incorrect changes, the program will stop and request the user to adjust the input.
Output data
The output is given for each season of any year during any number of years, as specified with the input data. The output data comprise hydrological and salinity aspects.
As the soil salinity is very variable from place to place, SahysMod includes frequency distributions in the output.
The output data are filed in the form of tables that can be inspected directly, through the user menu, that calls selected groups of data either for a certain polygon over time, or for a certain season over the polygons.
The model includes mapping facilities of output data. Also, the program has the facility to store the selected data in a spreadsheet format for further analysis and for import into a GIS program.
Different users may wish to establish different cause-effect relationships. The program offers only a limited number of standard graphics, as it is not possible to foresee all different uses that may be made. This is the reason why the possibility for further analysis through spreadsheet programs was created.
Although the computations need many iterations, all the results can be checked by hand using the equations presented in the manual.
See also
DPHM-RS
References
External links and download location
Free download location of SahysMod software
Soil chemistry
Soil physics
Environmental soil science
Environmental chemistry
Agricultural soil science
Hydrogeology
Hydrology models
Irrigation
Drainage
Land management
Land reclamation
Scientific simulation software | SahysMod | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 4,208 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Biological models",
"Environmental chemistry",
"Soil physics",
"Soil chemistry",
"Hydrology models",
"nan",
"Environmental soil science",
"Environmental modelling",
"Hydrogeology"
] |
11,890,691 | https://en.wikipedia.org/wiki/Stewart%E2%80%93Tolman%20effect | The Stewart–Tolman effect is a phenomenon in electrodynamics caused by the finite mass of electrons in conducting metal, or, more generally, the finite mass of charge carriers in an electrical conductor.
It is named after T. Dale Stewart and Richard C. Tolman, two American physicists who carried out their experimental work in the 1910s. This eponym appears to be first used by Lev Landau.
In a conducting body undergoing accelerating motion, inertia causes the electrons in the body to "lag" behind the overall motion. In the case of linear acceleration, negative charge accumulates at the end of the body; while for rotation the negative charge accumulates at the outer rim. The accumulation of charges can be measured by a galvanometer.
This effect is proportional to the mass of the charge carriers. It is much more significant in electrolyte conductors than in metals, because ions in the former are 10³–10⁴ times more massive than electrons in the latter.
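A back-of-the-envelope estimate shows why the effect is tiny in metals: in steady state the inertial force on a carrier, ma, is balanced by an electric force eE, so a conductor of length L accelerating along its axis develops a potential difference of roughly V = maL/e. The sketch below (Python) evaluates this estimate for electrons and for a singly charged ion taken, for illustration, to be about 10⁴ electron masses; the acceleration and length are assumed values.

```python
e = 1.602176634e-19             # elementary charge, C
m_electron = 9.1093837015e-31   # electron mass, kg
m_ion = 1.0e4 * m_electron      # a light ion, ~10^4 electron masses (assumed)

def stewart_tolman_voltage(carrier_mass, acceleration, length):
    """Potential difference V = m * a * L / e across an accelerating conductor
    (simple force-balance estimate, not a full treatment)."""
    return carrier_mass * acceleration * length / e

a, L = 100.0, 1.0   # assumed: 100 m/s^2 acceleration, 1 m long conductor
print("metal (electrons)  :", stewart_tolman_voltage(m_electron, a, L), "V")
print("electrolyte (ions) :", stewart_tolman_voltage(m_ion, a, L), "V")
```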
Notes
External links
R.C. Tolman, T.D. Stewart: The electromotive force produced by the acceleration of metals. The original article of Physical Review from 1916.
Electrodynamics | Stewart–Tolman effect | [
"Materials_science",
"Mathematics"
] | 240 | [
"Electrodynamics",
"Materials science stubs",
"Electromagnetism stubs",
"Dynamical systems"
] |
2,201,259 | https://en.wikipedia.org/wiki/Ballistic%20coefficient | In ballistics, the ballistic coefficient (BC, C) of a body is a measure of its ability to overcome air resistance in flight. It is inversely proportional to the negative acceleration: a high number indicates a low negative acceleration—the drag on the body is small in proportion to its mass. BC can be expressed with the units kilogram-force per square meter (kgf/m2) or pounds per square inch (lb/in2) (where 1 lb/in2 corresponds to ).
Formulas
General
$C_{b,\mathrm{physics}} = \frac{m}{C_d\, A} = \frac{\rho\, \ell}{C_d}$
where:
Cb,physics, ballistic coefficient as used in physics and engineering
m, mass
A, cross-sectional area
Cd, drag coefficient
ρ, density of the body
ℓ, characteristic body length
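A quick numerical check of this definition follows (Python); the projectile mass, diameter and drag coefficient used are invented example values, not data from the article.

```python
import math

def ballistic_coefficient(mass_kg, drag_coeff, area_m2):
    """Cb = m / (Cd * A), returned in kg/m^2 (divide by g to express it in
    kgf-based units)."""
    return mass_kg / (drag_coeff * area_m2)

# Example: a 10 g projectile, 7.82 mm diameter, Cd = 0.30 (assumed values).
d = 7.82e-3
A = math.pi * d**2 / 4.0
print(ballistic_coefficient(0.010, 0.30, A), "kg/m^2")
```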
Ballistics
The formula for calculating the ballistic coefficient for small and large arms projectiles only is as follows:
$C_{b,\mathrm{projectile}} = \frac{m}{i\, d^2}$
where:
Cb,projectile, ballistic coefficient as used in point mass trajectory from the Siacci method (less than 20 degrees).
m, mass of bullet
d, measured cross-sectional diameter of projectile
i, coefficient of form
The coefficient of form, i, can be derived by 6 methods and applied differently depending on the trajectory models used: G model, Beugless/Coxe; 3 Sky Screen; 4 Sky Screen; target zeroing; Doppler radar.
Here are several methods to compute i or Cd:
where:
or
A drag coefficient can also be calculated mathematically:
where:
Cd, drag coefficient.
, density of the projectile.
v, projectile velocity at range.
π (pi) = 3.14159...
d, measured cross-sectional diameter of projectile
or
From standard physics as applied to "G" models:
$i = \frac{C_p}{C_G}$
where:
i, coefficient of form.
CG, drag coefficient of 1.00 from any "G" model, reference drawing, projectile.
Cp, drag coefficient of the actual test projectile at range.
Commercial use
This formula is for calculating the ballistic coefficient within the small arms shooting community, but is redundant with Cb,projectile:
$C_{b,\mathrm{small\text{-}arms}} = \frac{SD}{i}$
where:
Cb,small-arms, ballistic coefficient
SD, sectional density
i, coefficient of form (form factor)
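Since sectional density is SD = m/d², the two small-arms expressions above, Cb = m/(i·d²) and Cb = SD/i, give the same number. A short consistency check (Python) follows, using an invented bullet; it assumes the shooting-community convention of mass in pounds and diameter in inches.

```python
# Invented example bullet: 168 grain, 0.308 inch diameter, form factor i = 0.95.
GRAINS_PER_POUND = 7000.0
mass_lb = 168.0 / GRAINS_PER_POUND
diameter_in = 0.308
i = 0.95

sectional_density = mass_lb / diameter_in**2       # SD = m / d^2, lb/in^2
bc_from_mass = mass_lb / (i * diameter_in**2)      # Cb = m / (i * d^2)
bc_from_sd = sectional_density / i                 # Cb = SD / i

print(round(sectional_density, 3), round(bc_from_mass, 3), round(bc_from_sd, 3))
```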
History
Background
In 1537, Niccolò Tartaglia performed test firing to determine the maximum angle and range for a shot. His conclusion was near 45 degrees. He noted that the shot trajectory was continuously curved.
In 1636, Galileo Galilei published results in "Dialogues Concerning Two New Sciences". He found that a falling body had a constant acceleration. This allowed Galileo to show that a bullet's trajectory was a curve.
Circa 1665, Sir Isaac Newton derived the law of air resistance. Newton's experiments on drag were through air and fluids. He showed that drag on shot increases proportionately with the density of the air (or the fluid), cross sectional area, and the square of the speed. Newton's experiments were only at low velocities to about .
In 1718, John Keill challenged the Continental Mathematica, "To find the curve that a projectile may describe in the air, on behalf of the simplest assumption of gravity, and the density of the medium uniform, on the other hand, in the duplicate ratio of the velocity of the resistance". This challenge supposes that air resistance increases exponentially to the velocity of a projectile. Keill gave no solution for his challenge. Johann Bernoulli took up this challenge and soon thereafter solved the problem and air resistance varied as "any power" of velocity; known as the Bernoulli equation. This is the precursor to the concept of the "standard projectile".
In 1742, Benjamin Robins invented the ballistic pendulum. This was a simple mechanical device that could measure a projectile's velocity. Robins reported muzzle velocities ranging from to . In his book published that same year "New Principles of Gunnery", he uses numerical integration from Euler's method and found that air resistance varies as the square of the velocity, but insisted that it changes at the speed of sound.
In 1753, Leonhard Euler showed how theoretical trajectories might be calculated using his method as applied to the Bernoulli equation, but only for resistance varying as the square of the velocity.
In 1864, the Electro-ballistic chronograph was invented, and by 1867 one electro-ballistic chronograph was claimed by its inventor to be able to resolve one ten-millionth of a second, but the absolute accuracy is unknown.
Test firing
Many countries and their militaries carried out test firings from the mid eighteenth century on using large ordnance to determine the drag characteristics of each individual projectile. These individual test firings were logged and reported in extensive ballistics tables.
Of the test firing, most notably were: Francis Bashforth at Woolwich Marshes & Shoeburyness, England (1864-1889) with velocities to and M. Krupp (1865–1880) of Friedrich Krupp AG at Meppen, Germany. Friedrich Krupp AG continued these test firings to 1930; to a lesser extent General Nikolai V. Mayevski, then a Colonel (1868–1869) at St. Petersburg, Russia; the Commission d'Experience de Gâvre (1873 to 1889) at Le Gâvre, France with velocities to and The British Royal Artillery (1904–1906).
The test projectiles (shot) used, vary from spherical, spheroidal, ogival; being hollow, solid and cored in design with the elongated ogival-headed projectiles having 1, , 2 and 3 caliber radii. These projectiles varied in size from, at to at
Methods and the standard projectile
Many militaries up until the 1860s used calculus to compute the projectile trajectory. The numerical computations necessary to calculate just a single trajectory was lengthy, tedious and done by hand. So, investigations to develop a theoretical drag model began. The investigations led to a major simplification in the experimental treatment of drag. This was the concept of a "standard projectile". The ballistic tables are made up for a factitious projectile being defined as: "a factitious weight and with a specific shape and specific dimensions in a ratio of calibers". This simplifies calculation for the ballistic coefficient of a standard model projectile, which could mathematically move through the standard atmosphere with the same ability as any actual projectile could move through the actual atmosphere.
The Bashforth method
In 1870, Bashforth published a report containing his ballistic tables. Bashforth found that the drag of his test projectiles varied with the square of velocity (v²) from to and with the cube of velocity (v³) from to . As of his 1880 report, he found that drag varied by v⁶ from to . Bashforth used rifled guns of , , and ; smooth-bore guns of similar caliber for firing spherical shot and howitzers propelled elongated projectiles having an ogival-head of caliber radius.
Bashforth uses b as the variable for ballistic coefficient. When b is equal to or less than v², then b is equal to P for the drag of a projectile. It would be found that air does not deflect off the front of a projectile in the same direction when the projectiles are of differing shapes. This prompted the introduction of a second factor to b, the coefficient of form (i). This is particularly true at high velocities, greater than . Hence, Bashforth introduced the "undetermined multiplier" of any power called the k factor that compensates for these unknown effects of drag above ; k > i. Bashforth then integrated k and i as K.
Although Bashforth did not conceive the "restricted zone", he showed mathematically there were 5 restricted zones. Bashforth did not propose a standard projectile, but was well aware of the concept.
Mayevski–Siacci method
In 1872, Mayevski published his report Traité de Balistique Extérieure, which included the Mayevski model. Using his ballistic tables along with Bashforth's tables from the 1870 report, Mayevski created an analytical math formula that calculated the air resistances of a projectile in terms of log A and the value n. Although Mayevski's math used a differing approach than Bashforth, the resulting calculation of air resistance was the same. Mayevski proposed the restricted zone concept and found there to be six restricted zones for projectiles.
Circa 1886, Mayevski published the results from a discussion of experiments made by M. Krupp (1880). Though the ogival-headed projectiles used varied greatly in caliber, they had essentially the same proportions as the standard projectile, being mostly 3 calibers in length with an ogive of 2 calibers radius, which defined the standard projectile dimensionally.
In 1880, Colonel Francesco Siacci published his work "Balistica". Siacci found, as did those who came before him, that the resistance and density of the air become greater and greater as a projectile displaces the air at higher and higher velocities.
Siacci's method was for flat-fire trajectories with angles of departure of less than 20 degrees. He found that the angle of departure is sufficiently small to allow for air density to remain the same and was able to reduce the ballistics tables to easily tabulated quadrants giving distance, time, inclination and altitude of the projectile. Using Bashforth's k and Mayevski's tables, Siacci created a four-zone model. Siacci used Mayevski's standard projectile. From this method and standard projectile, Siacci formulated a shortcut.
Siacci found that within a low-velocity restricted zone, projectiles of similar shape and velocity in the same air density behave similarly. Siacci used the variable C for ballistic coefficient. Meaning, air density is generally the same for flat-fire trajectories, thus sectional density is equal to the ballistic coefficient and air density can be dropped. Then, as the velocity rises into Bashforth's high-velocity regime, the introduction of the coefficient of form is required. Following within today's currently used ballistic trajectory tables, for an average ballistic coefficient, C equals the sectional density divided by the coefficient of form i.
Siacci wrote that within any restricted zone, C being the same for two or more projectiles, the differences in their trajectories will be minor. Therefore, C agrees with an average curve, and this average curve applies for all projectiles. Therefore, a single trajectory can be computed for the standard projectile without having to resort to tedious calculus methods, and then a trajectory for any actual bullet with known C can be computed from the standard trajectory with just simple algebra.
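A rough Python sketch of this scaling idea follows: the standard trajectory is represented by a tabulated "space function" S(v), and the range of an actual bullet is obtained by multiplying by its C. The table values, function names and the exact form X = C·(S(v) − S(V)) are given here only as an illustrative approximation of the Siacci/Ingalls tabular approach, not as the historical tables themselves.

```python
# Minimal sketch of Siacci-style scaling: the distance travelled while slowing from
# V to v is taken as C * (S(v) - S(V)), where S is the standard projectile's tabulated
# "space function". The table below is a hypothetical placeholder, not real data.

STANDARD_SPACE_TABLE = [          # (velocity in ft/s, space function in ft), descending
    (2800, 0.0),
    (2600, 600.0),
    (2400, 1250.0),
    (2200, 1950.0),
    (2000, 2700.0),
]

def space_function(v):
    """Linearly interpolate the standard space function S(v) from the table."""
    for (v_hi, s_hi), (v_lo, s_lo) in zip(STANDARD_SPACE_TABLE, STANDARD_SPACE_TABLE[1:]):
        if v_lo <= v <= v_hi:
            frac = (v_hi - v) / (v_hi - v_lo)
            return s_hi + frac * (s_lo - s_hi)
    raise ValueError("velocity outside tabulated range")

def siacci_range(muzzle_velocity, remaining_velocity, c):
    """Range (ft) over which a bullet with ballistic coefficient c slows from
    muzzle_velocity to remaining_velocity, scaled from the standard trajectory."""
    return c * (space_function(remaining_velocity) - space_function(muzzle_velocity))

print(siacci_range(2800, 2200, 0.45))   # 877.5 ft with these placeholder values
```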
The ballistic tables
The aforementioned ballistics tables generally tabulate functions of air density, projectile time at range, range, degree of projectile departure, weight and diameter to facilitate the calculation of ballistic formulae. These formulae produce the projectile velocity at range, drag and trajectories. The modern-day commercially published ballistic tables, or software-computed ballistic tables for small-arms sporting ammunition, are exterior ballistic trajectory tables.
The 1870 Bashforth tables extended to . Mayevski used his own tables, supplemented by the Bashforth tables (to 6 restricted zones) and the Krupp tables. Mayevski conceived a 7th restricted zone and extended the Bashforth tables to . Mayevski converted Bashforth's data from Imperial units of measure to metric units of measure (now in SI units of measure). In 1884, James Ingalls published his tables in the U.S. Army Artillery Circular M using the Mayevski tables. Ingalls extended Mayevski's ballistics tables to within an 8th restricted zone, but still with the same n value (1.55) as Mayevski's 7th restricted zone. Ingalls converted Mayevski's results back to Imperial units. The British Royal Artillery results were very similar to those of Mayevski and extended their tables to within the 8th restricted zone, changing the n value from 1.55 to 1.67. These ballistic tables were published in 1909 and were almost identical to those of Ingalls. In 1971 the Sierra Bullet company calculated their ballistic tables to 9 restricted zones but only within .
The G model
In 1881, the Commission d'Experience de Gâvre did a comprehensive survey of data available from their tests as well as those of other countries. After adopting a standard atmospheric condition for the drag data, the Gâvre drag function was adopted. This drag function was known as the Gâvre function, and the standard projectile adopted was the Type 1 projectile. Thereafter, the Type 1 standard projectile was renamed by the Ballistics Section of Aberdeen Proving Grounds in Maryland, USA, as G1, after the Commission d'Experience de Gâvre. For practical purposes the subscript 1 in G1 is generally written in normal font size as G1.
The general form for the calculations of trajectory adopted for the G model is the Siacci method. The standard model projectile is a "fictitious projectile" used as the mathematical basis for the calculation of an actual projectile's trajectory when an initial velocity is known. The G1 model projectile adopted is, in dimensionless measures, of 2 caliber radius ogival head and 3.28 calibers in length. By calculation this leaves the body 1.96 calibers long and the head 1.32 calibers long.
Over the years there has been some confusion as to the adopted size, weight and ogival-head radius of the G1 standard projectile. This misconception may be explained by Colonel Ingalls in the 1886 publication, Exterior Ballistics in the Plane of Fire, page 15: "In the following tables the first and second columns give the velocities and corresponding resistance, in pounds, to an elongated projectile one inch in diameter and having an ogival head of one and a half calibers. They were deduced from Bashforth's experiments by Professor A. G. Greenhill, and are taken from his papers published in the Proceedings of the Royal Artillery Institution, Number 2, Volume XIII." Further it is discussed that said projectile's weight was one pound.
For the purposes of mathematical convenience, for any standard projectile (G) the C is 1.00. Whereas the projectile's sectional density (SD) is dimensionless, with a mass of 1 divided by the square of the diameter of 1 caliber equaling an SD of 1. The standard projectile is then assigned a coefficient of form of 1. Following that, C = SD / i = 1. C, as a general rule within flat-fire trajectory, is carried out to 2 decimal places. C is commonly found within commercial publications carried out to 3 decimal places, as few sporting small-arms projectiles rise to the level of 1.00 for a ballistic coefficient.
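As a numeric illustration of the relationship just described (C equal to sectional density divided by the coefficient of form), a small Python sketch; the bullet weight, diameter and form factor used in the example are invented:

```python
def sectional_density(weight_grains, diameter_inches):
    """Sectional density in lb/in^2: bullet weight in pounds over diameter squared."""
    weight_lb = weight_grains / 7000.0          # 7000 grains to the pound
    return weight_lb / diameter_inches ** 2

def ballistic_coefficient(weight_grains, diameter_inches, form_factor_i):
    """C = SD / i against the chosen reference projectile (G1, G7, ...)."""
    return sectional_density(weight_grains, diameter_inches) / form_factor_i

# Invented example: a .308" 168-grain bullet with an assumed G1 form factor of 0.47
print(round(ballistic_coefficient(168, 0.308, 0.47), 3))   # ~0.538
```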
When using the Siacci method for different G models, the formula used to compute the trajectories is the same. What differs is the retardation factors found through testing of actual projectiles that are similar in shape to the standard projectile reference. This creates a slightly different set of retardation factors between differing G models. When the correct G model retardation factors are applied within the Siacci mathematical formula for the same G model C, a corrected trajectory can be calculated for any G model.
Another method of determining trajectory and ballistic coefficient was developed and published by Wallace H. Coxe and Edgar Beugless of DuPont in 1936. This method works by shape comparison on a logarithmic scale as drawn on 10 charts. The method estimates the ballistic coefficient related to the drag model of the Ingalls tables. When matching an actual projectile against the drawn caliber radii of Chart No. 1, it provides i, and by using Chart No. 2, C can be quickly calculated. Coxe and Beugless used the variable C for ballistic coefficient.
The Siacci method was abandoned by the end of World War I for artillery fire. But the U.S. Army Ordnance Corps continued using the Siacci method into the middle of the 20th century for direct (flat-fire) tank gunnery. The development of the electromechanical analog computer contributed to the calculation of aerial bombing trajectories during World War II. After World War II the advent of the silicon semiconductor-based digital computer made it possible to create trajectories for guided missiles/bombs, intercontinental ballistic missiles and space vehicles.
Between World Wars I and II the U.S. Army ballistics research laboratories at Aberdeen Proving Grounds, Maryland, USA, developed the standard models for G2, G5 and G6. In 1965, Winchester Western published a set of ballistics tables for G1, G5, G6 and GL. In 1971 Sierra Bullet Company retested all their bullets and concluded that the G5 model was not the best model for their boat-tail bullets and started using the G1 model. This was fortunate, as the entire commercial sporting and firearms industries had based their calculations on the G1 model. The G1 model and Mayevski/Siacci method continue to be the industry standard today. This benefit allows for comparison of all ballistic tables for trajectory within the commercial sporting and firearms industry.
In recent years there have been vast advancements in the calculation of flat-fire trajectories with the advent of Doppler radar, the personal computer and handheld computing devices. Also, newer methodology proposed by Dr. Arthur Pejsa, the use of the G7 model by Bryan Litz, ballistic engineer for Berger Bullets, LLC, for calculating boat-tailed spitzer rifle bullet trajectories, and 6-DoF (six degrees of freedom) model-based software have improved the prediction of flat-fire trajectories.
Differing mathematical models and bullet ballistic coefficients
Most ballistic mathematical models and hence tables or software take for granted that one specific drag function correctly describes the drag and hence the flight characteristics of a bullet related to its ballistic coefficient. Those models do not differentiate between wadcutter, flat-based, spitzer, boat-tail, very-low-drag, etc. bullet types or shapes. They assume one invariable drag function as indicated by the published BC. Several different drag curve models optimized for several standard projectile shapes are available, however.
The resulting drag curve models for several standard projectile shapes or types are referred to as:
G1 or Ingalls (flatbase with 2 caliber (blunt) nose ogive - by far the most popular)
G2 (Aberdeen J projectile)
G5 (short 7.5° boat-tail, 6.19 calibers long tangent ogive)
G6 (flatbase, 6 calibers long secant ogive)
G7 (long 7.5° boat-tail, 10 calibers secant ogive, preferred by some manufacturers for very-low-drag bullets)
G8 (flatbase, 10 calibers long secant ogive)
GL (blunt lead nose)
Since these standard projectile shapes differ significantly, the Gx BC will also differ significantly from the Gy BC for an identical bullet. To illustrate this, the bullet manufacturer Berger has published the G1 and G7 BCs for most of their target, tactical, varmint and hunting bullets. Other bullet manufacturers like Lapua and Nosler have also published the G1 and G7 BCs for most of their target bullets. Many of these manufacturer-provided and independently verified G1 and G7 ballistic coefficients for modern bullets are published and updated regularly in freely available bullet databases. How much a projectile deviates from the applied reference projectile is mathematically expressed by the form factor (i). The applied reference projectile shape always has a form factor (i) of exactly 1. When a particular projectile has a sub-1 form factor (i), this indicates that the particular projectile exhibits lower drag than the applied reference projectile shape. A form factor (i) greater than 1 indicates the particular projectile exhibits more drag than the applied reference projectile shape. In general the G1 model yields comparatively high BC values and is often used by the sporting ammunition industry.
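Conversely, a published BC together with the bullet's weight and diameter yields the form factor i against whichever reference shape the BC was quoted for. A minimal sketch, with invented example numbers and assuming the usual lb/in² convention for C:

```python
def form_factor(published_bc, weight_grains, diameter_inches):
    """Form factor i relative to whichever G reference the published BC uses."""
    sectional_density = (weight_grains / 7000.0) / diameter_inches ** 2
    return sectional_density / published_bc

# Invented example: the same 180-grain .308" bullet quoted against two references
print(round(form_factor(0.500, 180, 0.308), 3))   # i vs. G1, ~0.542 (lower drag than the G1 shape)
print(round(form_factor(0.250, 180, 0.308), 3))   # i vs. G7, ~1.084 (more drag than the G7 shape)
```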
The transient nature of bullet ballistic coefficients
Variations in BC claims for exactly the same projectiles can be explained by differences in the ambient air density used to compute specific values or differing range-speed measurements on which the stated G1 BC averages are based. Also, the BC changes during a projectile's flight, and stated BCs are always averages for particular range-speed regimes. Further explanation about the variable nature of a projectile's G1 BC during flight can be found at the external ballistics article. The external ballistics article implies that knowing how a BC was determined is almost as important as knowing the stated BC value itself.
For the precise establishment of BCs (or perhaps the scientifically better expressed drag coefficients), Doppler radar-measurements are required. The normal shooting or aerodynamics enthusiast, however, has no access to such expensive professional measurement devices. Weibel 1000e or Infinition BR-1001 Doppler radars are used by governments, professional ballisticians, defense forces, and a few ammunition manufacturers to obtain exact real-world data on the flight behavior of projectiles of interest.
Doppler radar measurement results for a lathe-turned monolithic solid .50 BMG very-low-drag bullet (Lost River J40 , monolithic solid bullet / twist rate 1:) show an initial rise in BC followed by variation over the bullet's flight.
The initial rise in the BC value is attributed to a projectile's always present yaw and precession out of the bore. The test results were obtained from many shots, not just a single shot. The bullet was assigned 1.062 lb/in2 (746.7 kg/m2) for its BC number by the bullet's manufacturer, Lost River Ballistic Technologies, before it went out of business.
Measurements on other bullets can give totally different results. How different speed regimes affect several 8.6 mm (.338 in calibre) rifle bullets made by the Finnish ammunition manufacturer Lapua can be seen in the .338 Lapua Magnum product brochure which states Doppler radar established BC data.
General trends
Sporting bullets, with a calibre d ranging from , have C in the range 0.12 lb/in2 to slightly over 1.00 lb/in2 (84 kg/m2 to 703 kg/m2). Those bullets with the higher BCs are the most aerodynamic, and those with low BCs are the least. Very-low-drag bullets with C ≥ 1.10 lb/in2 (over 773 kg/m2) can be designed and produced on CNC precision lathes out of mono-metal rods, but they often have to be fired from custom made full bore rifles with special barrels.
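The lb/in² figures quoted above convert to their SI equivalents by a straightforward unit change, e.g.:

```python
LB_PER_IN2_TO_KG_PER_M2 = 0.45359237 / 0.0254 ** 2   # ~703.07

def to_si(bc_lb_per_in2):
    return bc_lb_per_in2 * LB_PER_IN2_TO_KG_PER_M2

print(round(to_si(0.12)))   # ~84 kg/m^2
print(round(to_si(1.00)))   # ~703 kg/m^2
print(round(to_si(1.10)))   # ~773 kg/m^2
```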
Ammunition makers often offer several bullet weights and types for a given cartridge. Heavy-for-caliber pointed (spitzer) bullets with a boattail design have BCs at the higher end of the normal range, whereas lighter bullets with square tails and blunt noses have lower BCs. The 6 mm and 6.5 mm cartridges are probably the most well known for having high BCs and are often used in long range target matches of – . The 6 and 6.5 have relatively light recoil compared to high BC bullets of greater caliber and tend to be shot by the winner in matches where accuracy is key. Examples include the 6mm PPC, 6mm Norma BR, 6×47mm SM, 6.5×55mm Swedish Mauser, 6.5×47mm Lapua, 6.5 Creedmoor, 6.5 Grendel, .260 Remington, and the 6.5-284.
In the United States, hunting cartridges such as the .25-06 Remington (a 6.35 mm caliber), the .270 Winchester (a 6.8 mm caliber), and the .284 Winchester (a 7 mm caliber) are used when high BCs and moderate recoil are desired. The .30-06 Springfield and .308 Winchester cartridges also offer several high-BC loads, although the bullet weights are on the heavy side for the available case capacity, and thus are velocity limited by the maximum allowable pressure.
In the larger caliber category, the .338 Lapua Magnum and the .50 BMG are popular with very high BC bullets for shooting beyond 1,000 meters. Newer chamberings in the larger caliber category are the .375 and .408 Cheyenne Tactical and the .416 Barrett.
Information sources
For many years, bullet manufacturers were the main source of ballistic coefficients for use in trajectory calculations. However, in the past decade or so, it has been shown that ballistic coefficient measurements by independent parties can often be more accurate than manufacturer specifications. Since ballistic coefficients depend on the specific firearm and other conditions that vary, it is notable that methods have been developed for individual users to measure their own ballistic coefficients.
Satellites and reentry vehicles
Satellites in low Earth orbit (LEO) with high ballistic coefficients experience smaller perturbations to their orbits due to atmospheric drag.
The ballistic coefficient of an atmospheric reentry vehicle has a significant effect on its behavior. A very high ballistic coefficient vehicle would lose velocity very slowly and would impact the Earth's surface at higher speeds. In contrast, a low ballistic coefficient vehicle would reach subsonic speeds before reaching the ground.
In general, reentry vehicles carrying human beings or other sensitive payloads back to Earth from space have high drag and a correspondingly low ballistic coefficient (less than approx. 100 lb/ft2). Vehicles that carry nuclear weapons launched by an intercontinental ballistic missile (ICBM), by contrast, have a high ballistic coefficient, ranging between 100 and 5000 lb/ft2, enabling a significantly faster descent from space to the surface. This in turn makes the weapon less affected by crosswinds or other weather phenomena, and harder to track, intercept, or otherwise defend against.
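For reentry and orbital-decay work the ballistic coefficient is commonly written as β = m / (C_d·A). The sketch below contrasts a hypothetical blunt capsule with a hypothetical slender reentry vehicle; all numbers are invented for illustration, and the deceleration formula assumes simple quadratic drag.

```python
def beta(mass_kg, drag_coefficient, reference_area_m2):
    """Ballistic coefficient beta = m / (Cd * A), in kg/m^2."""
    return mass_kg / (drag_coefficient * reference_area_m2)

def drag_deceleration(beta_value, air_density_kg_m3, speed_m_s):
    """Instantaneous drag deceleration a = rho * v^2 / (2 * beta), in m/s^2."""
    return air_density_kg_m3 * speed_m_s ** 2 / (2.0 * beta_value)

# Invented examples: a blunt capsule vs. a slender cone at the same flight condition
capsule = beta(mass_kg=8000.0, drag_coefficient=1.3, reference_area_m2=12.0)  # ~513 kg/m^2
cone = beta(mass_kg=300.0, drag_coefficient=0.1, reference_area_m2=0.2)       # 15000 kg/m^2
rho, v = 0.02, 6000.0   # thin upper atmosphere, 6 km/s
print(round(drag_deceleration(capsule, rho, v)))  # ~702 m/s^2: the low-beta capsule slows quickly
print(round(drag_deceleration(cone, rho, v)))     # ~24 m/s^2: the high-beta cone barely slows
```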
See also
External ballistics - The behavior of a projectile in flight.
Trajectory of a projectile
References
External links
Aerospace Corporation Definition
Chuck Hawks Article on Ballistic Coefficient
Ballistic Coefficient Tables
Exterior Ballistics.com
How do bullets fly? The ballistic coefficient (bc) by Ruprecht Nennstiel, Wiesbaden, Germany
Ballistic Coefficients - Explained
Ballistic calculators
Projectiles
Aerodynamics
Ballistics | Ballistic coefficient | [
"Physics",
"Chemistry",
"Engineering"
] | 5,336 | [
"Applied and interdisciplinary physics",
"Aerodynamics",
"Aerospace engineering",
"Ballistics",
"Fluid dynamics"
] |
2,201,417 | https://en.wikipedia.org/wiki/Bulk%20density | In materials science, bulk density, also called apparent density, is a material property defined as the mass of the many particles of the material divided by the bulk volume. Bulk volume is defined as the total volume the particles occupy, including particle's own volume, inter-particle void volume, and the particles' internal pore volume.
Bulk density is useful for materials such as powders, granules, and other "divided" solids, especially used in reference to mineral components (soil, gravel), chemical substances, pharmaceutical ingredients, foodstuff, or any other masses of corpuscular or particulate matter (particles).
Bulk density is not the same as the particle density, which is an intrinsic property of the solid and does not include the volume for voids between particles (see: density of non-compact materials).
Bulk density is an extrinsic property of a material; it can change depending on how the material is handled. For example, a powder poured into a cylinder will have a particular bulk density; if the cylinder is disturbed, the powder particles will move and usually settle closer together, resulting in a higher bulk density. For this reason, the bulk density of powders is usually reported both as "freely settled" (or "poured" density) and "tapped" density (where the tapped density refers to the bulk density of the powder after a specified compaction process, usually involving vibration of the container.)
Soil
The bulk density of soil depends greatly on the mineral make-up of the soil and the degree of compaction. The density of quartz is around 2.65 g/cm³, but the (dry) bulk density of a mineral soil is normally about half that density, between 1.0 and 1.6 g/cm³. In contrast, soils rich in soil organic carbon and some friable clays tend to have lower bulk densities () due to a combination of the low density of the organic materials themselves and increased porosity. For instance, peat soils have bulk densities from . In a detailed study using 6,000 analysed samples across the European Union, a high-resolution (100 m) map of soil bulk density for the 0–20 cm layer was produced using a regression model. Croplands have almost 1.5 times higher bulk density compared to woodlands.
Bulk density of soil is usually determined from a core sample which is taken by driving a metal corer into the soil at the desired depth and horizon. This gives a soil sample of known total volume, V. From this sample the wet bulk density and the dry bulk density can be determined.
For the wet bulk density (total bulk density) this sample is weighed, giving the mass M_wet. For the dry bulk density, the sample is oven dried and weighed, giving the mass of soil solids, M_dry. The relationship between these two masses is M_wet = M_dry + M_lost, where M_lost is the mass of substances lost on oven drying (often, mostly water). The dry and wet bulk densities are calculated as
Dry bulk density = mass of soil solids / volume as a whole = M_dry / V
Wet bulk density = mass of soil plus liquids / volume as a whole = M_wet / V
The dry bulk density of a soil is inversely related to the porosity of the same soil: the more pore space in a soil the lower the value for bulk density. Bulk density of a region in the interior of the Earth is also related to the seismic velocity of waves travelling through it: for P-waves, this has been quantified with Gardner's relation. The higher the density, the faster the velocity.
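A small Python sketch of the core-sample calculation described above; the sample numbers are invented, and the porosity step assumes a typical mineral particle density of 2.65 g/cm³.

```python
def bulk_densities(core_volume_cm3, wet_mass_g, dry_mass_g):
    """Return (wet, dry) bulk density in g/cm^3 for a soil core of known volume."""
    return wet_mass_g / core_volume_cm3, dry_mass_g / core_volume_cm3

def porosity(dry_bulk_density, particle_density=2.65):
    """Porosity = 1 - (dry bulk density / particle density)."""
    return 1.0 - dry_bulk_density / particle_density

# Invented core sample: 100 cm^3 corer, 165 g field-moist, 130 g oven-dry
wet, dry = bulk_densities(100.0, 165.0, 130.0)
print(wet, dry)                 # 1.65 and 1.3 g/cm^3
print(round(porosity(dry), 2))  # ~0.51, i.e. about half the core volume is pore space
```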
See also
Brazil nut effect
Characterisation of pore space in soil
Effective porosity
Density meter
Number density
Notes
External links
University of Leicester podcast 'How to measure dry bulk density'
Bulk density calculator
'Determination of bulk density'
Mass density
Particulates
Soil physics | Bulk density | [
"Physics",
"Chemistry"
] | 749 | [
"Mechanical quantities",
"Applied and interdisciplinary physics",
"Physical quantities",
"Mass",
"Soil physics",
"Intensive quantities",
"Volume-specific quantities",
"Particulates",
"Density",
"Particle technology",
"Mass density",
"Matter"
] |
2,201,758 | https://en.wikipedia.org/wiki/Tishchenko%20reaction | The Tishchenko reaction is an organic chemical reaction that involves disproportionation of an aldehyde in the presence of an alkoxide. The reaction is named after Russian organic chemist Vyacheslav Tishchenko, who discovered that aluminium alkoxides are effective catalysts for the reaction.
In the related Cannizzaro reaction, the base is sodium hydroxide and then the oxidation product is a carboxylic acid and the reduction product is an alcohol.
History
The reaction involving benzaldehyde was discovered by Claisen using sodium benzylate as base. The reaction produces benzyl benzoate.
Enolizable aldehydes are not amenable to Claisen's conditions. Vyacheslav Tishchenko discovered that aluminium alkoxides allowed the conversion of enolizable aldehydes to esters.
Examples
The Tishchenko reaction of acetaldehyde gives the commercially important solvent ethyl acetate. The reaction is catalyzed by aluminium alkoxides.
The Tishchenko reaction is used to obtain isobutyl isobutyrate, a specialty solvent.
Hydroxypivalic acid neopentyl glycol ester is produced by a Tishchenko reaction from hydroxypivaldehyde in the presence of a basic catalyst (e.g., aluminium oxide).
The Tishchenko reaction of paraformaldehyde in the presence of aluminum methylate or magnesium methylate forms methyl formate.
Paraformaldehyde reacts with boric acid to form methyl formate. The key step in the reaction mechanism for this reaction is a 1,3-hydride shift in the hemiacetal intermediate formed from two successive nucleophilic addition reactions, the first one from the catalyst. The hydride shift regenerates the alkoxide catalyst.
See also
Aldol–Tishchenko reaction
Baylis–Hillman reaction
Cannizzaro reaction
Meerwein–Ponndorf–Verley reduction
Oppenauer oxidation
References
Further reading
; 482–540. (in Russian)
В. Е. Тищенко and Г. Н. Григорьева (V. E. Tishchenko and G. N. Grigor'eva) (1906) "О действии амальгамы магния на изомасляного альдегида" (On the effect of magnesium amalgam on isobutyric aldehyde), Журнал Русского Физико-Химического Общества (Journal of the Russian Physico-Chemical Society), 38 : 540–547. (in Russian)
М. П. Воронҝова and В. Е. Тищенко (M. P. Voronkova and V. E. Tishchenko) (1906) "О действии амальгамы магния на уксусный альдегид" (On the effect of magnesium amalgam on acetic aldehyde), Журнал Русского Физико-Химического Общества (Journal of the Russian Physico-Chemical Society), 38 : 547–550. (in Russian)
В. Тищенко (V. Tishchenko) (1899) "Действие амальгамированного алюминия на алкоголь. Алкоголятов алюминия, их свойства и реакции." (Effect of amalgamated aluminium on alcohol. Aluminium alkoxides, their properties and reactions.), Журнал Русского Физико-Химического Общества (Journal of the Russian Physico-Chemical Society), 31 : 694–770. (in Russian)
Organic reactions
Name reactions | Tishchenko reaction | [
"Chemistry"
] | 961 | [
"Name reactions",
"Organic redox reactions",
"Organic reactions"
] |
2,203,789 | https://en.wikipedia.org/wiki/Amplified%20fragment%20length%20polymorphism | Amplified fragment length polymorphism (AFLP-PCR or AFLP) is a PCR-based tool used in genetics research, DNA fingerprinting, and in the practice of genetic engineering. Developed in the early 1990s by Pieter Vos, AFLP uses restriction enzymes to digest genomic DNA, followed by ligation of adaptors to the sticky ends of the restriction fragments. A subset of the restriction fragments is then selected to be amplified. This selection is achieved by using primers complementary to the adaptor sequence, the restriction site sequence and a few nucleotides inside the restriction site fragments (as described in detail below). The amplified fragments are separated and visualized on denaturing on agarose gel electrophoresis, either through autoradiography or fluorescence methodologies, or via automated capillary sequencing instruments.
Although AFLP should not be used as an acronym, it is commonly referred to as "Amplified fragment length polymorphism". However, the resulting data are not scored as length polymorphisms, but instead as presence-absence polymorphisms.
AFLP-PCR is a highly sensitive method for detecting polymorphisms in DNA. The technique was originally described by Vos and Zabeau in 1993. In detail, the procedure of this technique is divided into three steps:
Digestion of total cellular DNA with one or more restriction enzymes and ligation of restriction half-site specific adaptors to all restriction fragments.
Selective amplification of some of these fragments with two PCR primers that have corresponding adaptor and restriction site specific sequences.
Electrophoretic separation of amplicons on a gel matrix, followed by visualisation of the band pattern.
Applications
The AFLP technology has the capability to detect various polymorphisms in different genomic regions simultaneously. It is also highly sensitive and reproducible. As a result, AFLP has become widely used for the identification of genetic variation in strains or closely related species of plants, fungi, animals, and bacteria. The AFLP technology has been used in criminal and paternity tests, to determine slight differences within populations, and in linkage studies to generate maps for quantitative trait locus (QTL) analysis.
There are many advantages to AFLP when compared to other marker technologies including randomly amplified polymorphic DNA (RAPD), restriction fragment length polymorphism (RFLP), and microsatellites. AFLP not only has higher reproducibility, resolution, and sensitivity at the whole genome level compared to other techniques, but it also has the capability to amplify between 50 and 100 fragments at one time. In addition, no prior sequence information is needed for amplification (Meudt & Clarke 2007). As a result, AFLP has become extremely beneficial in the study of taxa including bacteria, fungi, and plants, where much is still unknown about the genomic makeup of various organisms.
The AFLP technology is covered by patents and patent applications of Keygene N.V. AFLP is a registered trademark of Keygene N.V.
References
External links
Software for analyzing AFLP data
CLIQS 1D Pro Automated electrophoresis (gel-based or capillary) band-matching and databasing of AFLP fragments
BioNumerics Gelcompar II (Discontinued) One universal platform to manage and analyze all your biological data including AFLP
KeyGene Quantar Suite Versatile marker scoring software
SoftGenetics GeneMarker fragment analysis software
Freeware for analyzing AFLP data
SourceForge Genographer Free software for manual scoring (Java application)
SourceForge RawGeno Free automated scoring (R CRAN environment, including a user-friendly GUI)
Online programs for simulation of AFLP-PCR
ALFIE - BProkaryotes or uploaded sequences
In silico AFLP-PCR for prokaryotes, some eukaryotes or uploaded sequences
Enzymes for AFLP New England Biolabs
AFLP Technology note at KeyGene
AFLP Applications
Molecular biology
DNA
DNA profiling techniques | Amplified fragment length polymorphism | [
"Chemistry",
"Biology"
] | 824 | [
"Biochemistry",
"Genetics techniques",
"DNA profiling techniques",
"Molecular biology"
] |
2,204,768 | https://en.wikipedia.org/wiki/Renninger%20negative-result%20experiment | In quantum mechanics, the Renninger negative-result experiment is a thought experiment that illustrates some of the difficulties of understanding the nature of wave function collapse and measurement in quantum mechanics. The statement is that a particle need not be detected in order for a quantum measurement to occur, and that the lack of a particle detection can also constitute a measurement. The thought experiment was first posed in 1953 by Mauritius Renninger. The non-detection of a particle in one arm of an interferometer implies that the particle must be in the other arm. It can be understood to be a refinement of the paradox presented in the Mott problem.
The Mott problem
The Mott problem concerns the paradox of reconciling the spherical wave function describing the emission of an alpha ray by a radioactive nucleus, with the linear tracks seen in a cloud chamber. Formulated in 1927 by Albert Einstein and Max Born, it was resolved by a calculation done by Sir Nevill Francis Mott that showed that the correct quantum mechanical system must include the wave functions for the atoms in the cloud chamber as well as that for the alpha ray. The calculation showed that the resulting probability is non-zero only on straight lines raying out from the decayed atom; that is, once the measurement is performed, the wave-function becomes non-vanishing only near the classical trajectory of a particle.
Renninger's negative-result experiment
In Renninger's 1960 formulation, the cloud chamber is replaced by a pair of hemispherical particle detectors, completely surrounding a radioactive atom at the center that is about to decay by emitting an alpha ray. For the purposes of the thought experiment, the detectors are assumed to be 100% efficient, so that the emitted alpha ray is always detected.
By consideration of the normal process of quantum measurement, it is clear that if one detector registers the decay, then the other will not: a single particle cannot be detected by both detectors. The core observation is that the non-observation of a particle on one of the shells is just as good a measurement as detecting it on the other.
The strength of the paradox can be heightened by considering the two hemispheres to be of different diameters; with the outer shell a good distance farther away. In this case, after the non-observation of the alpha ray on the inner shell, one is led to conclude that the (originally spherical) wave function has "collapsed" to a hemisphere shape, and (because the outer shell is distant) is still in the process of propagating to the outer shell, where it is guaranteed to eventually be detected.
In the standard quantum-mechanical formulation, the statement is that the wave-function has partially collapsed, and has taken on a hemispherical shape. The full collapse of the wave function, down to a single point, does not occur until it interacts with the outer hemisphere. The conundrum of this thought experiment lies in the idea that the wave function interacted with the inner shell, causing a partial collapse of the wave function, without actually triggering any of the detectors on the inner shell. This illustrates that wave function collapse can occur even in the absence of particle detection.
Common objections
There are a number of common objections to the standard interpretation of the experiment. Some of these objections, and standard rebuttals, are listed below.
Finite radioactive lifetime
It is sometimes noted that the time of the decay of the nucleus cannot be controlled, and that the finite half-life invalidates the result. This objection can be dispelled by sizing the hemispheres appropriately with regards to the half-life of the nucleus. The radii are chosen so that the more distant hemisphere is much farther away than the half-life of the decaying nucleus, times the flight-time of the alpha ray.
To lend concreteness to the example, assume that the half-life of the decaying nucleus is 0.01 microsecond (most elementary particle decay half-lives are much shorter; most nuclear decay half-lives are much longer; some atomic electromagnetic excitations have a half-life about this long). If one were to wait 0.4 microseconds, then the probability that the particle will have decayed will be 1 − 2⁻⁴⁰, since 0.4 microseconds is 40 half-lives; that is, the probability will be very, very close to one. The outer hemisphere is then placed at (speed of light) times (0.4 microseconds) away: that is, at about 120 meters away. The inner hemisphere is taken to be much closer, say at 1 meter.
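The numbers above follow from ordinary exponential decay over 40 half-lives; a quick check in Python:

```python
half_life_us = 0.01
wait_us = 0.4

p_decayed = 1 - 0.5 ** (wait_us / half_life_us)   # 1 - 2**-40
print(p_decayed)                                  # 0.9999999999990905...

c_m_per_us = 299.792458                           # speed of light in metres per microsecond
print(c_m_per_us * wait_us)                       # ~120 m: the outer shell radius
```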
If, after (for example) 0.3 microseconds, one has not seen the decay product on the inner, closer, hemisphere, one can conclude that the particle has decayed with almost absolute certainty, but is still in-flight to the outer hemisphere. The paradox then concerns the correct description of the wave function in such a scenario.
Classical trajectories
Another common objection states that the decay particle was always travelling in a straight line, and that only the probability of the distribution is spherical. This, however, is a mis-interpretation of the Mott problem, and is false. The wave function was truly spherical, and is not the incoherent superposition (mixed state) of a large number of plane waves. The distinction between mixed and pure states is illustrated more clearly in a different context, in the debate comparing the ideas behind local-hidden variables and their refutation by means of the Bell inequalities.
Diffraction
A true quantum-mechanical wave would diffract from the inner hemisphere, leaving a diffraction pattern to be observed on the outer hemisphere. This is not really an objection, but rather an affirmation that a partial collapse of the wave function has occurred. If a diffraction pattern were not observed, one would be forced to conclude that the particle had collapsed down to a ray, and stayed that way, as it passed the inner hemisphere; this is clearly at odds with standard quantum mechanics. Diffraction from the inner hemisphere is expected.
Complex decay products
In this objection, it is noted that in real life, a decay product is either spin-1/2 (a fermion) or a photon (spin-1). This is taken to mean that the decay is not truly sphere symmetric, but rather has some other distribution, such as a p-wave. However, on closer examination, one sees this has no bearing on the spherical symmetry of the wave-function. Even if the initial state could be polarized; for example, by placing it in a magnetic field, the non-spherical decay pattern is still properly described by quantum mechanics.
Non-relativistic language
The above formulation is inherently phrased in a non-relativistic language; and it is noted that elementary particles have relativistic decay products. This objection only serves to confuse the issue. The experiment can be reformulated so that the decay product is slow-moving. At any rate, special relativity is not in conflict with quantum mechanics.
Imperfect detectors
This objection states that in real life, particle detectors are imperfect, and sometimes neither the detectors on the one hemisphere, nor the other, will go off. This argument only serves to confuse the issue, and has no bearing on the fundamental nature of the wave-function.
See also
Interaction-free measurement
Elitzur–Vaidman bomb-tester
Counterfactual definiteness
References
English translation at https://arxiv.org/abs/physics/0504043v1
Louis de Broglie, The Current Interpretation of Wave Mechanics, (1964) Elsevier, Amsterdam. (Provides discussion of the Renninger experiment.)
(Section 4.1 reviews Renninger's experiment).
Quantum measurement
Thought experiments in quantum mechanics | Renninger negative-result experiment | [
"Physics"
] | 1,589 | [
"Quantum measurement",
"Quantum mechanics",
"Thought experiments in quantum mechanics"
] |
13,413,238 | https://en.wikipedia.org/wiki/Optical%20wireless | Optical wireless is the combined use of "optical" (optical fibre) and "wireless" (radio frequency) communication to provide telecommunication to clusters of end points which are geographically distant. The high capacity optical fibre is used to span the longest distances. A lower cost wireless link carries the signal for the last mile to nearby users.
See also
4.5G / 5G
References
Definition: Optical Wireless, SearchMobileComputing website.
Optical communications
Local loop
Wireless | Optical wireless | [
"Engineering"
] | 94 | [
"Optical communications",
"Wireless",
"Telecommunications engineering"
] |
13,413,355 | https://en.wikipedia.org/wiki/Neutral%20fat | Neutral fats, also known as true fats, are simple lipids that are produced by the dehydration synthesis of one or more fatty acids with an alcohol like glycerol. Neutral fats are also known as triacylglycerols, these lipids are dense as well as hydrophobic due to their long carbon chain and are there main function is to store energy. Neutral fats can be made from the compact packing of fatty acids. Triacylglycerols can also serve to part of lipid membranes, which serve to provide flexibility to the membranes, they can also serve as parts for signaling molecules. Many types of neutral fats are possible both because of the number and variety of fatty acids that could form part of it and because of the different bonding locations for the fatty acids. An example is a monoglyceride, which has one fatty acid combined with glycerol, a diglyceride, which has two fatty acids combined with glycerol, or a triglyceride, which has three fatty acids combined with glycerol.
Triglycerides
Triglycerides are formed from the esterification of 3 molecules of fatty acids with one molecule of trihydric alcohol, glycerol (glycerine or trihydroxy propane). In the process, 3 molecules of water are eliminated. The word "triglyceride" refers to the number of fatty acids esterified to one molecule of glycerol.
In triglycerides, the three fatty acids are rarely similar and are thus called pure fats. For example, tripalmitin, tristearin, etc.
References
Lipids | Neutral fat | [
"Chemistry"
] | 350 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
13,414,189 | https://en.wikipedia.org/wiki/Automated%20X-ray%20inspection | Automated inspection (AXI) is a technology based on the same principles as automated optical inspection (AOI). It uses as its source, instead of visible light, to automatically inspect features, which are typically hidden from view.
Automated X-ray inspection is used in a wide range of industries and applications, predominantly with two major goals:
Process optimization, i.e. the results of the inspection are used to optimize following processing steps,
Anomaly detection, i.e. the result of the inspection serve as a criterion to reject a part (for scrap or re-work).
While AOI is mainly associated with electronics manufacturing (due to widespread use in printed circuit board manufacturing), AXI has a much wider range of applications. It ranges from the quality check of alloy wheels to the detection of bone fragments in processed meat. Wherever large numbers of very similar items are produced according to a defined standard, automatic inspection using advanced image processing and pattern recognition software (Computer vision) has become a useful tool to ensure quality and improve yield in processing and manufacturing.
Principle of Operation
While optical inspection produces full color images of the surface of the object, x-ray inspection transmits x-rays through the object and records gray scale images of the shadows cast. The image is then processed by image processing software that detects the position and size/ shape of expected features (for process optimization) or presence/ absence of unexpected/ unintended objects or features (for anomaly detection).
X-rays are generated by an x-ray tube, usually located directly above or below the object under inspection. A detector located the opposite side of the object records an image of the x-rays transmitted through the object. The detector either converts the x-rays first into visible light which is imaged by an optical camera, or detects directly using an x-ray sensor array. The object under inspection may be imaged at higher magnification by moving the object closer to the x-ray tube, or at lower magnification closer to the detector.
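The magnification trade-off follows from projection geometry: with a (near-)point source, the shadow is enlarged by the ratio of the source-to-detector distance to the source-to-object distance. A minimal sketch with arbitrary example distances:

```python
def geometric_magnification(source_to_detector_mm, source_to_object_mm):
    """Shadow magnification for a (near-)point x-ray source."""
    return source_to_detector_mm / source_to_object_mm

# Object close to the tube gives high magnification; close to the detector gives low
print(geometric_magnification(400.0, 50.0))    # 8.0x
print(geometric_magnification(400.0, 350.0))   # ~1.14x
```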
Since the image is produced due to the different absorption of x-rays when passing through the object, it can reveal structures inside the object that are hidden from outside view.
Applications
With the advancement of image processing software the number of applications for automated X-ray inspection is huge and constantly growing. The first applications started off in industries where the safety aspects of components demanded a careful inspection of each part produced (e.g. welding seams for metal parts in nuclear power stations), because the technology was, as expected, very expensive in the beginning. But with wider adoption of the technology, prices came down significantly and opened automated X-ray inspection up to a much wider field, partially fueled again by safety aspects (e.g. detection of metal, glass or other materials in processed food) or to increase yield and optimize processing (e.g. detection of size and location of holes in cheese to optimize slicing patterns).
In mass production of complex items (e.g. in electronics manufacturing), an early detection of defects can drastically reduce overall cost, because it prevents defective parts from being used in subsequent manufacturing steps. This results in three major benefits: a) it provides feedback at the earliest possible stage that materials are defective or process parameters got out of control, b) it prevents adding value to components that are already defective and therefore reduces the overall cost of a defect, and c) it reduces the likelihood of field defects of the final product, because defects may otherwise go undetected at later stages in quality inspection or during functional testing due to the limited set of test patterns.
Use of AXI in the Food Industry
Foreign body detection, fill level control, and process control are the three main areas for the use of AXI in the food industry. Especially in packaged goods at the end of the filling and packaging line the use of X-ray scanners has become the norm, rather than the exception. It is often used in combination with other QA measures, especially inline check weighers.
Most of it is limited to a good/ bad check, i.e. it produces rejects after the AXI station, but in some applications it is directly used for process control where the data from the AXI are fed to the process and can control other variables. An often cited example is the control of the thickness of cheese slices after an AXI determined the distribution and position of 'holes' inside the cheese block. (to ensure consistent total package weight).
Recently, automated methods have been developed for X-ray inspection of food passing by on a conveyor belt.
Use of AXI in electronics manufacturing
The increasing usage of ICs (integrated circuits) with packages such as BGAs (ball grid array) where the connections are underneath the chip and not visible, means that ordinary optical inspection is impossible. Because the connections are underneath the chip package there is a greater need to ensure that the manufacturing process is able to accommodate these chips correctly. Additionally the chips that use BGA packages tend to be the larger ones with many connections. Therefore, it is essential that all the connections are made correctly.
The process of X-ray inspection is to obtain an image of the internal structure of the test object and then observe that internal information without destroying the test object.
AXI is often paired with the testing provided by boundary scan test, in-circuit test, and functional test.
Process
As BGA connections are not visible, the only alternative is to use a low-level inspection such as X-ray. AXI is able to find faults such as opens, shorts, insufficient solder, excessive solder, missing electrical parts, and mis-aligned components. Defects can be detected and repaired within a short debug time.
These inspection systems are more costly than ordinary optical systems, but they are able to check all the connections, even those underneath the chip package.
To achieve highest throughput, AXI machines use single 2D X-ray images where possible to make a decision. However, as the density of components on both sides of the PCB increases, it is harder to achieve a clear 2D image that is not obscured by other components. Techniques such as Tomosynthesis are often used to filter out background components by first creating a 3D model from multiple X-ray images taken from different angles.
Related technologies
The following are related technologies and are also used in electronic production to test for the correct operation of electronics printed circuit boards.
In-circuit test (ICT)
Joint Test Action Group (JTAG)
Automated optical inspection (AOI)
Functional testing (see acceptance testing)
External links
What is X-Ray Inspection
References
Hardware testing
X-rays
Printed circuit board manufacturing | Automated X-ray inspection | [
"Physics",
"Engineering"
] | 1,347 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electronic engineering",
"Electrical engineering",
"Printed circuit board manufacturing"
] |
13,415,486 | https://en.wikipedia.org/wiki/Bolaamphiphile | In chemistry, bolaamphiphiles (also known as bolaform surfactants, bolaphiles, or alpha-omega-type surfactants) are amphiphilic molecules that have hydrophilic groups at both ends of a sufficiently long hydrophobic hydrocarbon chain. Compared to single-headed amphiphiles, the introduction of a second head-group generally induces a higher solubility in water, an increase in the critical micelle concentration (CMC), and a decrease in aggregation number. The aggregate morphologies of bolaamphiphiles include spheres, cylinders, disks, and vesicles. Bolaamphiphiles are also known to form helical structures that can form monolayer microtubular self-assemblies.
References
Fuhrhop, J-H; Wang, T. Bolaamphiphile, Chem. Rev. (2004), 104(6), 2901-2937.
Chen, Yuxia; Liu, Yan; Guo, Rong. Aggregation behavior of an amino acid-derived bolaamphiphile and a conventional surfactant mixed system. Journal of Colloid and Interface Science (2009), 336(2), 766-772. CODEN: JCISA5 . AN 2009:776584
Yin, Shouchun; Wang, Chao; Song, Bo; Chen, Senlin; Wang, Zhiqiang. Self-Organization of a Polymerizable Bolaamphiphile Bearing a Diacetylene Group and L-Aspartic Acid Group. Langmuir (2009), 25(16), 8968-8973. CODEN: LANGD5 . CAN 151:173915 AN 2009:383258
Wang, H.; Li, M.; Xu, Z.; Qiao, W.; Li, Z. Interfacial tension of unsymmetrical bolaamphiphile surfactant in surfactant/alkali/crude oil systems. Energy Sources, Part A: Recovery, Utilization, and Environmental Effects (2008), 30(16), 1442-1450. CODEN: ESPACB . CAN 150:475745 AN 2008:763292
Chen, Senlin; Song, Bo; Wang, Zhiqiang; Zhang, Xi. Self-Organization of Bolaamphiphile Bearing Biphenyl Mesogen and Aspartic-Acid Headgroups. Journal of Physical Chemistry C (2008), 112(9), 3308-3313. CODEN: JPCCCK . CAN 148:372219 AN 2008:176360
Feng Qiu, Chengkang Tang, Yongzhu Chen Amyloid-like aggregation of designer bolaamphiphilic peptides: Effect of hydrophobic section and hydrophilic heads. Journal of peptide science. (2017) DOI: 10.1002/psc.3062
Organic chemistry
Physical chemistry
Surfactants | Bolaamphiphile | [
"Physics",
"Chemistry"
] | 626 | [
"Physical chemistry",
"nan",
"Applied and interdisciplinary physics",
"Organic chemistry stubs"
] |
13,418,907 | https://en.wikipedia.org/wiki/IEC%2062379 | IEC 62379 is a control engineering standard for the common control interface for networked digital audio and video products. IEC 62379 uses Simple Network Management Protocol to communicate control and monitoring information.
It is a family of standards that specifies a control framework for networked audio and video equipment and is published by the International Electrotechnical Commission. It has been designed to provide a means for entering a common set of management commands to control the transmission across the network as well as other functions within the interfaced equipment.
Organization
The parts within this standard include:
Part 1: General,
Part 2: Audio,
Part 3: Video,
Part 4: Data,
Part 5: Transmission over networks,
Part 6: Packet transfer service,
Part 7: Measurement (for EBU ECN-IPM Group)
Part one is common to all equipment that conforms to IEC 62379, and a preview of the published document can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site. More information is available at the project group web site.
History
2 October 2008
Part 2, Audio, has now been published and a preview can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site.
31 August 2011
A first edition of Part 3, Video has been submitted to the IEC International Electrotechnical Commission technical committee for the commencement of the standardization process for this part.
It contains the video MIB required by Part 7.
Part 7, Measurement, has been submitted to the IEC International Electrotechnical Commission technical committee for the commencement of the standardization process for this part.
This part specifies those aspects that are specific to the measurement requirements of the EBU ECN-IPM Group, a member of the Expert Communities Networks. An associated document EBU TECH 3345 has recently been published by the EBU European Broadcasting Union.
16 December 2011
Part 3 (Document 100/1896/NP) and Part 7 (Document 100/1897/NP) have been approved by IEC TC 100.
3 April 2014
Part 5.2, Transmission over Networks - Signalling, has now been published and can be downloaded from the IEC web store.
5 June 2015
IEC 62379-3:2015 Common control interface for networked digital audio and video products - Part 3: Video has now been published and can be downloaded from the IEC web store.
16 June 2015
IEC 62379-7:2015 Common control interface for networked digital audio and video products - Part 7: Measurements has now been published and can be downloaded from the IEC web store.
IEC 62379-7:2015 is the standardised (and extended) version of EBU TECH 3345 - End-to-End IP Network Measurement - MIB & Parameters, which can be obtained from here: published by the EBU European Broadcasting Union.
References
External links
Audio engineering
Networking standards
Broadcast engineering
62379
Control engineering
Systems engineering | IEC 62379 | [
"Technology",
"Engineering"
] | 589 | [
"Broadcast engineering",
"Systems engineering",
"Computer standards",
"Computer networks engineering",
"IEC standards",
"Electronic engineering",
"Control engineering",
"Networking standards",
"Electrical engineering",
"Audio engineering"
] |
13,419,758 | https://en.wikipedia.org/wiki/Ethylene%20vinyl%20alcohol | Ethylene vinyl alcohol (EVOH) is a formal copolymer of ethylene and vinyl alcohol. Because the latter monomer mainly exists as its tautomer acetaldehyde, the copolymer is prepared by polymerization of ethylene and vinyl acetate to give the ethylene vinyl acetate (EVA) copolymer followed by hydrolysis. EVOH copolymer is defined by the mole % ethylene content: lower ethylene content grades have higher barrier properties; higher ethylene content grades have lower temperatures for extrusion.
The plastic resin is commonly used as an oxygen barrier in food packaging. It is better than other plastics at keeping air out and flavors in, is highly transparent, weather resistant, oil and solvent resistant, flexible, moldable, recyclable, and printable. Its drawback is that it is difficult to make and therefore more expensive than other food packaging. Instead of making an entire package out of EVOH, manufacturers keep costs down by coextruding or laminating it as a thin layer between cardboard, foil, or other plastics.
It is also used as a hydrocarbon barrier in plastic fuel tanks and pipes.
Industrial production
Because of the high capital cost to build an EVOH plant, and the complexity of making a food grade product, only a few companies produce EVOH:
Kuraray produces EVOH resin under the name "EVAL," with a 10,000 ton plant in Okayama, Japan; a 58,000 ton plant in the U.S. (near Houston, TX) under its subsidiary Kuraray America; and a 35,000 ton plant in Belgium under its subsidiary EVAL Europe.
Nippon Gohsei produces EVOH under the trade name Soarnol. It has production sites in Mizushima, Japan; La Porte, Texas in the USA; and at Salt End, Hull, England.
Chang Chun Petrochemical produces EVOH under the trade name EVASIN. It has a single site in Taipei, Taiwan.
Food packaging
Due to its strong barrier against gasses (especially oxygen), odors and flavours, food packaging manufacturers use EVOH in their packaging structure to extend the shelf life of food products.
A downside of EVOH is its relatively high moisture sensitivity, meaning that the barrier capabilities of EVOH decrease in environments with high humidity. As such, EVOH is often applied within a multilayer film. Here, one or more inside layers contain EVOH, but the outside layers consist of a different plastic that is less sensitive to moisture, such as polyethylene.
Medical applications
EVOH is used in a liquid embolic system in interventional radiology, e.g. in Onyx. Dissolved in dimethyl sulfoxide (DMSO) and mixed with a radiopaque substance, ethylene vinyl alcohol copolymer is used to embolize blood vessels.
References
Interventional radiology
Plastics
Packaging materials
Copolymers | Ethylene vinyl alcohol | [
"Physics"
] | 616 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
3,037,867 | https://en.wikipedia.org/wiki/Spatial%20frequency | In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance.
The SI unit of spatial frequency is the reciprocal metre (m−1), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or also line pairs per millimeter (LP/mm).
In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of wavelength and is commonly denoted by ξ or sometimes ν: ξ = 1/λ.
Angular wavenumber k, expressed in radian per metre (rad/m), is related to ordinary wavenumber and wavelength by k = 2πξ = 2π/λ.
Visual perception
In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase.
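For a physical grating viewed at a known distance, cycles per degree follow from the visual angle subtended by one cycle of the grating; a small Python sketch with arbitrary example numbers:

```python
import math

def cycles_per_degree(cycle_width_cm, viewing_distance_cm):
    """Spatial frequency of a grating in cycles per degree of visual angle."""
    degrees_per_cycle = math.degrees(2 * math.atan(cycle_width_cm / (2 * viewing_distance_cm)))
    return 1.0 / degrees_per_cycle

# A grating with 0.5 cm per cycle viewed from 57 cm (where ~1 cm subtends ~1 degree)
print(round(cycles_per_degree(0.5, 57.0), 2))   # ~1.99, i.e. roughly 2 cycles per degree
```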
Spatial-frequency theory
The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, as noted by Teller (1984), it is probably not wise to treat the highest firing rate of a particular neuron as having a special significance with respect to its role in the perception of a particular stimulus, given that the neural code is known to be linked to relative firing rates. For example, in color coding by the three cones in the human retina, there is no special significance to the cone that is firing most strongly – what matters is the relative rate of firing of all three simultaneously. Teller (1984) similarly noted that a strong firing rate in response to a particular stimulus should not be interpreted as indicating that the neuron is somehow specialized for that stimulus, since there is an unlimited equivalence class of stimuli capable of producing similar firing rates.)
The spatial-frequency theory of vision is based on two physical principles:
Any visual stimulus can be represented by plotting the intensity of the light along lines running through it.
Any curve can be broken down into constituent sine waves by Fourier analysis.
The theory (for which empirical support has yet to be developed) states that in each functional module of the visual cortex, Fourier analysis (or its piecewise form ) is performed on the receptive field and the neurons in each module are thought to respond selectively to various orientations and frequencies of sine wave gratings. When all of the visual cortex neurons that are influenced by a specific scene respond together, the perception of the scene is created by the summation of the various sine-wave gratings. (This procedure, however, does not address the problem of the organization of the products of the summation into figures, grounds, and so on. It effectively recovers the original (pre-Fourier analysis) distribution of photon intensity and wavelengths across the retinal projection, but does not add information to this original distribution. So the functional value of such a hypothesized procedure is unclear. Some other objections to the "Fourier theory" are discussed by Westheimer (2001) ). One is generally not aware of the individual spatial frequency components since all of the elements are essentially blended together into one smooth representation. However, computer-based filtering procedures can be used to deconstruct an image into its individual spatial frequency components. Research on spatial frequency detection by visual neurons complements and extends previous research using straight edges rather than refuting it.
Further research shows that different spatial frequencies convey different information about the appearance of a stimulus. High spatial frequencies represent abrupt spatial changes in the image, such as edges, and generally correspond to featural information and fine detail. M. Bar (2004) has proposed that low spatial frequencies represent global information about the shape, such as general orientation and proportions. Rapid and specialised perception of faces is known to rely more on low spatial frequency information. In the general population of adults, the threshold for spatial frequency discrimination is about 7%. It is often poorer in dyslexic individuals.
Spatial frequency in MRI
When spatial frequency is used as a variable in a mathematical function, the function is said to be in k-space. Two dimensional k-space has been introduced into MRI as a raw data storage space. The value of each data point in k-space is measured in the unit of 1/meter, i.e. the unit of spatial frequency.
It is very common that the raw data in k-space shows features of periodic functions. The periodicity is not spatial frequency, but is temporal frequency. An MRI raw data matrix is composed of a series of phase-variable spin-echo signals. Each of the spin-echo signals is a sinc function of time, which can be described by
$$S(t) = \int \rho(\mathbf{r})\, e^{-i\omega t}\, d\mathbf{r}$$
Where
$$\omega = \gamma \left(B_0 + \mathbf{G}\cdot\mathbf{r}\right)$$
Here $\gamma$ is the gyromagnetic ratio constant, and $\omega_0 = \gamma B_0$ is the basic resonance frequency of the spin. Due to the presence of the gradient G, the spatial information r is encoded onto the frequency $\omega$.
In a rotating frame, $\omega_0 = 0$, and $\omega$ is simplified to $\omega = \gamma\,\mathbf{G}\cdot\mathbf{r}$. Just by letting $\mathbf{k} = \gamma\,\mathbf{G}\,t$, the spin-echo signal is expressed in an alternative form
$$S(\mathbf{k}) = \int \rho(\mathbf{r})\, e^{-i\,\mathbf{k}\cdot\mathbf{r}}\, d\mathbf{r}$$
Now, the spin-echo signal is in the k-space. It becomes a periodic function of k with r as the k-space frequency but not as the "spatial frequency", since "spatial frequency" is reserved for the name of the periodicity seen in the real space r.
The k-space domain and the space domain form a Fourier pair. Two pieces of information are found in each domain, the spatial information and the spatial frequency information. The spatial information, which is of great interest to all medical doctors, is seen as periodic functions in the k-space domain and is seen as the image in the space domain. The spatial frequency information, which might be of interest to some MRI engineers, is not easily seen in the space domain but is readily seen as the data points in the k-space domain.
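The Fourier-pair relationship between the k-space domain and the space domain can be illustrated numerically. The sketch below uses a synthetic "image" and NumPy's generic FFT routines, not any actual MRI reconstruction software; the array sizes and contents are arbitrary assumptions.

```python
import numpy as np

# A toy 64x64 "image": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Forward 2D FFT: image space -> k-space (spatial-frequency domain).
kspace = np.fft.fftshift(np.fft.fft2(image))

# Inverse 2D FFT: k-space -> image space.
recovered = np.fft.ifft2(np.fft.ifftshift(kspace)).real

# The round trip reproduces the original image to numerical precision.
assert np.allclose(image, recovered)
print("max reconstruction error:", np.abs(image - recovered).max())
```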
See also
Fourier analysis
Superlens
Visual perception
Fringe visibility
Reciprocal space
References
External links
Mathematical physics
Space | Spatial frequency | [
"Physics",
"Mathematics"
] | 1,420 | [
"Applied mathematics",
"Theoretical physics",
"Space",
"Geometry",
"Spacetime",
"Mathematical physics"
] |
3,037,964 | https://en.wikipedia.org/wiki/Cascading%20gauge%20theory | In theoretical physics, a cascading gauge theory is a gauge theory whose coupling rapidly changes with the scale in such a way that Seiberg duality must be applied many times.
Igor Klebanov and Matthew Strassler studied this kind of N=1 gauge theory in the context of the AdS-CFT correspondence, which is dual to the warped deformed conifold.
See also
Ultraviolet fixed point
References
Gauge theories | Cascading gauge theory | [
"Physics"
] | 90 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs",
"Theoretical physics stubs"
] |
3,038,004 | https://en.wikipedia.org/wiki/Berezinian | In mathematics and theoretical physics, the Berezinian or superdeterminant is a generalization of the determinant to the case of supermatrices. It is named after Felix Berezin. The Berezinian plays a role analogous to the determinant when considering coordinate changes for integration on a supermanifold.
Definition
The Berezinian is uniquely determined by two defining properties:
$$\operatorname{Ber}(XY) = \operatorname{Ber}(X)\operatorname{Ber}(Y)$$
$$\operatorname{Ber}(e^{X}) = e^{\operatorname{str}(X)}$$
where str(X) denotes the supertrace of X. Unlike the classical determinant, the Berezinian is defined only for invertible supermatrices.
The simplest case to consider is the Berezinian of a supermatrix with entries in a field K. Such supermatrices represent linear transformations of a super vector space over K. A particular even supermatrix is a block matrix of the form
$$X = \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix}$$
Such a matrix is invertible if and only if both A and D are invertible matrices over K. The Berezinian of X is given by
$$\operatorname{Ber}(X) = \det(A)\,\det(D)^{-1}$$
For a motivation of the negative exponent see the substitution formula in the odd case.
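A fully faithful numerical example would need Grassmann-valued (odd) entries, which ordinary arrays cannot represent. The sketch below instead checks the two defining properties in the purely even, block-diagonal case, where the Berezinian reduces to det(A)/det(D) and ordinary linear algebra applies; matrix sizes and random seeds are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def berezinian_diag(A, D):
    """Ber of the block-diagonal supermatrix diag(A, D): det(A) / det(D)."""
    return np.linalg.det(A) / np.linalg.det(D)

def supertrace_diag(A, D):
    """Supertrace of diag(A, D): tr(A) - tr(D)."""
    return np.trace(A) - np.trace(D)

rng = np.random.default_rng(1)
A, D = rng.normal(size=(2, 2)), rng.normal(size=(3, 3))
A2, D2 = rng.normal(size=(2, 2)), rng.normal(size=(3, 3))

# Multiplicativity: Ber(XY) = Ber(X) Ber(Y) in the block-diagonal case.
assert np.isclose(berezinian_diag(A @ A2, D @ D2),
                  berezinian_diag(A, D) * berezinian_diag(A2, D2))

# Exponential property: Ber(exp X) = exp(str X) in the block-diagonal case.
assert np.isclose(berezinian_diag(expm(A), expm(D)),
                  np.exp(supertrace_diag(A, D)))
print("block-diagonal checks passed")
```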
More generally, consider matrices with entries in a supercommutative algebra R. An even supermatrix is then of the form
$$X = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$$
where A and D have even entries and B and C have odd entries. Such a matrix is invertible if and only if both A and D are invertible in the commutative ring R0 (the even subalgebra of R). In this case the Berezinian is given by
$$\operatorname{Ber}(X) = \det\!\left(A - B D^{-1} C\right)\det(D)^{-1}$$
or, equivalently, by
$$\operatorname{Ber}(X) = \det(A)\,\det\!\left(D - C A^{-1} B\right)^{-1}$$
These formulas are well-defined since we are only taking determinants of matrices whose entries are in the commutative ring R0. The matrix
$$D - C A^{-1} B$$
is known as the Schur complement of A relative to $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$.
An odd matrix X can only be invertible if the number of even dimensions equals the number of odd dimensions. In this case, invertibility of X is equivalent to the invertibility of JX, where
Then the Berezinian of X is defined as
Properties
The Berezinian of $X$ is always a unit in the ring R0.
$\operatorname{Ber}(X^{st}) = \operatorname{Ber}(X)$, where $X^{st}$ denotes the supertranspose of $X$.
Berezinian module
The determinant of an endomorphism of a free module M can be defined as the induced action on the 1-dimensional highest exterior power of M. In the supersymmetric case there is no highest exterior power, but there is still a similar definition of the Berezinian as follows.
Suppose that M is a free module of dimension (p,q) over R. Let A be the (super)symmetric algebra S*(M*) of the dual M* of M. Then an automorphism of M acts on the ext module
(which has dimension (1,0) if q is even and dimension (0,1) if q is odd)
as multiplication by the Berezinian.
See also
Berezin integration
References
Super linear algebra
Determinants | Berezinian | [
"Physics"
] | 613 | [
"Supersymmetry",
"Symmetry",
"Super linear algebra"
] |
3,038,013 | https://en.wikipedia.org/wiki/Hamiltonian%20fluid%20mechanics | Hamiltonian fluid mechanics is the application of Hamiltonian methods to fluid mechanics. Note that this formalism only applies to nondissipative fluids.
Irrotational barotropic flow
Take the simple example of a barotropic, inviscid vorticity-free fluid.
Then, the conjugate fields are the mass density field ρ and the velocity potential φ. The Poisson bracket is given by
$$\{\rho(\mathbf{x}),\,\varphi(\mathbf{y})\} = \delta^{3}(\mathbf{x}-\mathbf{y})$$
and the Hamiltonian by:
$$H = \int \left[\,\tfrac{1}{2}\,\rho\,|\nabla\varphi|^{2} + e(\rho)\,\right] d^{3}x,$$
where e is the internal energy density, as a function of ρ.
For this barotropic flow, the internal energy is related to the pressure p by:
$$p = \rho\, e'(\rho) - e(\rho),$$
where an apostrophe (') denotes differentiation with respect to ρ.
This Hamiltonian structure gives rise to the following two equations of motion:
$$\frac{\partial \rho}{\partial t} = -\nabla\cdot(\rho\,\mathbf{u}), \qquad \frac{\partial \varphi}{\partial t} = -\tfrac{1}{2}\,|\mathbf{u}|^{2} - e'(\rho),$$
where $\mathbf{u} = \nabla\varphi$ is the velocity and is vorticity-free. The second equation leads to the Euler equations:
$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = -\frac{1}{\rho}\,\nabla p$$
after exploiting the fact that the vorticity is zero:
$$\nabla\times\mathbf{u} = \mathbf{0}, \qquad\text{so that}\qquad (\mathbf{u}\cdot\nabla)\,\mathbf{u} = \tfrac{1}{2}\,\nabla|\mathbf{u}|^{2}.$$
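A minimal numerical sketch of these equations of motion is given below, on a 1D periodic grid with an assumed quadratic internal-energy density e(ρ) = ½Kρ² and arbitrary parameters. It uses naive forward-Euler time stepping and is meant only to illustrate the structure of the two equations, not as a production solver.

```python
import numpy as np

# 1D sketch of the equations of motion above (hypothetical parameters):
#   d(rho)/dt = -d/dx(rho * dphi/dx)
#   d(phi)/dt = -(1/2)*(dphi/dx)**2 - e'(rho),  with e(rho) = 0.5*K*rho**2
N, L, K, dt = 256, 2.0 * np.pi, 1.0, 1.0e-3
dx = L / N
x = np.arange(N) * dx

def ddx(f):
    """Central difference on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

rho = 1.0 + 0.01 * np.cos(x)   # small density perturbation
phi = np.zeros(N)              # fluid initially at rest (u = dphi/dx = 0)
mass0 = rho.sum() * dx

for _ in range(1000):          # naive forward-Euler time stepping
    u = ddx(phi)
    rho, phi = rho - dt * ddx(rho * u), phi - dt * (0.5 * u**2 + K * rho)

# The flux form of the continuity equation conserves total mass on this grid.
print("relative mass change:", abs(rho.sum() * dx - mass0) / mass0)
```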
As fluid dynamics is described by non-canonical dynamics, which possess an infinite number of Casimir invariants, an alternative Hamiltonian formulation of fluid dynamics can be introduced through the use of Nambu mechanics.
See also
Luke's variational principle
Hamiltonian field theory
Notes
References
Fluid dynamics
Hamiltonian mechanics
Dynamical systems | Hamiltonian fluid mechanics | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 257 | [
"Chemical engineering",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"Piping",
"Fluid dynamics",
"Dynamical systems"
] |
3,043,551 | https://en.wikipedia.org/wiki/Modern%20valence%20bond%20theory | Modern valence bond theory is the application of valence bond theory (VBT) with computer programs that are competitive in accuracy and economy with programs for the Hartree–Fock or post-Hartree–Fock methods. The latter methods dominated quantum chemistry from the advent of digital computers because they were easier to program. The early popularity of valence bond methods thus declined. It is only recently that the programming of valence bond methods has improved. These developments are due to, and described by, Gerratt, Cooper, Karadakov and Raimondi (1997); Li and McWeeny (2002); Joop H. van Lenthe and co-workers (2002); Song, Mo, Zhang and Wu (2005); and Shaik and Hiberty (2004).
While molecular orbital theory (MOT) describes the electronic wavefunction as a linear combination of basis functions that are centered on the various atoms in a species (linear combination of atomic orbitals), VBT describes the electronic wavefunction as a linear combination of several valence bond structures. Each of these valence bond structures can be described using linear combinations of either atomic orbitals, delocalized atomic orbitals (Coulson-Fischer theory), or even molecular orbital fragments. Although this is often overlooked, MOT and VBT are equally valid ways of describing the electronic wavefunction, and are actually related by a unitary transformation. Assuming MOT and VBT are applied at the same level of theory, this relationship ensures that they will describe the same wavefunction, but will do so in different forms.
Theory
Bonding in H2
Heitler and London's original work on VBT attempts to approximate the electronic wavefunction as a covalent combination of localized basis functions on the bonding atoms. In VBT, wavefunctions are described as the sums and differences of VB determinants, which enforce the antisymmetric properties required by the Pauli exclusion principle. Taking H2 as an example, the VB determinant is
$$\left|a\bar{b}\right| = N\left[\,a(1)\alpha(1)\,b(2)\beta(2) \;-\; a(2)\alpha(2)\,b(1)\beta(1)\,\right]$$
In this expression, N is a normalization constant, and a and b are basis functions that are localized on the two hydrogen atoms, often considered simply to be 1s atomic orbitals. The numbers are an index to describe the electron (i.e. a(1) represents the concept of ‘electron 1’ residing in orbital a). α and β describe the spin of the electron. The bar over b in $\left|a\bar{b}\right|$ indicates that the electron associated with orbital b has β spin (in the first term, electron 2 is in orbital b, and thus electron 2 has β spin). By itself, a single VB determinant is not a proper spin-eigenfunction, and thus cannot describe the true wavefunction. However, by taking the sum and difference (linear combinations) of VB determinants, two approximate wavefunctions can be obtained:
$$\Phi_{HL} = \left|a\bar{b}\right| - \left|\bar{a}b\right|, \qquad \Phi_{T} = \left|a\bar{b}\right| + \left|\bar{a}b\right|$$
ΦHL is the wavefunction as described by Heitler and London originally, and describes the covalent bonding between orbitals a and b in which the spins are paired, as expected for a chemical bond. ΦT is a representation of the bond where the electron spins are parallel, resulting in a triplet state. This is a highly repulsive interaction, so this description of the bonding will not play a major role in determining the wave function.
Other ways of describing the wavefunction can also be constructed. Specifically, instead of considering a covalent interaction, the ionic interactions can be considered, resulting in the wavefunction
$$\Phi_{I} = \left|a\bar{a}\right| + \left|b\bar{b}\right|$$
This wavefunction describes the bonding in H2 as the ionic interaction between an H+ and an H−.
Since neither of these wavefunctions, ΦHL (covalent bonding) or ΦI (ionic bonding), perfectly approximates the wavefunction, a combination of the two can be used to describe the total wavefunction
$$\Phi_{VBT} = \lambda\,\Phi_{HL} + \mu\,\Phi_{I}$$
where λ and μ are coefficients that can vary from 0 to 1. In determining the lowest energy wavefunction, these coefficients can be varied until a minimum energy is reached. λ will be larger in bonds that have more covalency, while μ will be larger in bonds that are more ionic. In the specific case of H2, λ ≈ 0.75, and μ ≈ 0.25.
The orbitals that were used as the basis (a and b) do not necessarily have to be localized on the atoms involved in bonding. Orbitals that are partially delocalized onto the other atom involved in bonding can also be used, as in the Coulson-Fischer theory. Even the molecular orbitals associated with a portion of a molecule can be used as a basis set, a process referred to as using fragment orbitals.
For more complicated molecules, ΦVBT could consider several possible structures that all contribute in various degrees (there would be several coefficients, not just λ and μ). An example of this is the Kekule and Dewar structures used in describing benzene.
Note that all normalization constants were ignored in the discussion above for simplicity.
Relationship to molecular orbital theory
History
The application of VBT and MOT to computations that attempt to approximate the Schrödinger equation began near the middle of the 20th century, but MOT quickly became the preferred approach of the two. The relative computational ease of doing calculations with non-overlapping orbitals in MOT is said to have contributed to its popularity. In addition, the successful explanation of π-systems, pericyclic reactions, and extended solids further cemented MOT as the preeminent approach. Despite this, the two theories are just two different ways of representing the same wavefunction. As shown below, at the same level of theory, the two methods lead to the same results.
H2 - molecular orbital vs valence bond theory
The relationship between MOT and VBT can be made clearer by directly comparing the results of the two theories for the hydrogen molecule, H2. Using MOT, the same basis orbitals (a and b) can be used to describe the bonding. Combining them in a constructive and destructive manner gives two molecular orbitals (ignoring normalization)
$$\sigma = a + b, \qquad \sigma^{*} = a - b$$
The ground state wavefunction of H2 would be that where the σ orbital is doubly occupied, which is expressed as the following Slater determinant (as required by MOT)
$$\left|\sigma\bar{\sigma}\right|$$
This expression for the wavefunction can be shown to be equivalent to the following wavefunction
$$\left|\sigma\bar{\sigma}\right| = \left|a\bar{b}\right| - \left|\bar{a}b\right| + \left|a\bar{a}\right| + \left|b\bar{b}\right|$$
which is now expressed in terms of VB determinants. This transformation does not alter the wavefunction in any way, only the way that the wavefunction is represented. This process of going from an MO description to a VB description can be referred to as ‘mapping MO wavefunctions onto VB wavefunctions’, and is fundamentally the same process as that used to generate localized molecular orbitals.
Rewriting the VB wavefunction derived above, we can clearly see the relationship between MOT and VBT
$$\left|\sigma\bar{\sigma}\right| = \underbrace{\left(\left|a\bar{b}\right| - \left|\bar{a}b\right|\right)}_{\text{covalent}} + \underbrace{\left(\left|a\bar{a}\right| + \left|b\bar{b}\right|\right)}_{\text{ionic}} = \Phi_{HL} + \Phi_{I}$$
Thus, at its simplest level, MOT is just VBT, where the covalent and ionic contributions (the first and second terms, respectively) are equal. This is the basis of the claim that MOT does not correctly predict the dissociation of molecules. When MOT includes configuration interaction (MO-CI), this allows the relative contributions of the covalent and ionic contributions to be altered. This leads to the same description of bonding for both VBT and MO-CI. In conclusion, the two theories, when brought to a high enough level of theory, will converge. Their distinction is in the way they are built up to that description.
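The equivalence invoked above can be verified by expanding the doubly occupied σ orbital in the atomic basis functions. The sketch below ignores normalization constants, consistent with the rest of this section.

```latex
% Expansion of the doubly occupied sigma orbital (normalization ignored).
\begin{align*}
\left|\sigma\bar{\sigma}\right|
  &\propto \sigma(1)\,\sigma(2)\,\bigl[\alpha(1)\beta(2)-\beta(1)\alpha(2)\bigr] \\
  &= \bigl[a(1)+b(1)\bigr]\bigl[a(2)+b(2)\bigr]\bigl[\alpha(1)\beta(2)-\beta(1)\alpha(2)\bigr] \\
  &= \underbrace{\bigl[a(1)b(2)+b(1)a(2)\bigr]\bigl[\alpha(1)\beta(2)-\beta(1)\alpha(2)\bigr]}_{\text{covalent},\ \Phi_{HL}}
   + \underbrace{\bigl[a(1)a(2)+b(1)b(2)\bigr]\bigl[\alpha(1)\beta(2)-\beta(1)\alpha(2)\bigr]}_{\text{ionic},\ \Phi_{I}}
\end{align*}
```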
Note that in all of the aforementioned discussions, as with the derivation of H2 for VBT, normalization constants were ignored for simplicity.
'Failures' of valence bond theory
When describing the relationship between MOT and VBT, there are a few examples that are commonly cited as ‘failures’ of VBT. However, these often arise from an incomplete or inaccurate use of VBT.
Triplet ground state of oxygen
It is known that O2 has a triplet ground state, but a classic Lewis structure depiction of oxygen would not indicate that any unpaired electrons exist. Perhaps because Lewis structures and VBT often depict the same structure as the most stable state, this misinterpretation has persisted. However, as has been consistently demonstrated with VBT calculations, the lowest energy state is that with two three-electron π-bonds, which is the triplet state.
Ionization energy of methane
The photoelectron spectrum (PES) of methane is commonly used as an argument as to why MO theory is superior to VBT. From an MO calculation (or even just a qualitative MOT diagram), it can be seen that the HOMO is a triply degenerate state, while the HOMO-1 is a singly degenerate state. By invoking Koopmans' theorem, one can predict that there would be two distinct peaks in the ionization spectrum of methane. These correspond to removing an electron from the t2 orbitals or the a1 orbital, which would result in a 3:1 ratio in intensity. This is corroborated by experiment. However, when one examines the VB description of CH4, it is clear that there are 4 equivalent bonds between C and H. If one were to invoke Koopmans' theorem (which is implicitly done when claiming that VBT is inadequate to describe PES), a single ionization energy peak would be predicted. However, Koopmans' theorem cannot be applied to orbitals that are not the canonical molecular orbitals, and thus a different approach is required to understand the ionization potentials of methane from VBT. To do this, the ionized product, CH4+, must be analyzed. The VB wavefunction of CH4+ would be an equal combination of 4 structures, each having 3 two-electron bonds and 1 one-electron bond. Based on group theory arguments, these states must give rise to a triply degenerate T2 state and a singly degenerate A1 state. Comparing the relative energies of these states, it can be seen that there exist two distinct transitions from the CH4 state with 4 equivalent bonds to the two CH4+ states.
Valence bond theory methods
Listed below are a few notable VBT methods that are applied in modern computational software packages.
Generalized VBT (GVB)
This was one of the first ab initio computational methods developed that utilized VBT. Using Coulson-Fischer type basis orbitals, this method uses singly-occupied, instead of doubly-occupied, orbitals as the basis set. This allows the distance between paired electrons to increase during variational optimization, lowering the resultant energy. The total wavefunction is described by a single set of orbitals, rather than a linear combination of multiple VB structures. GVB is considered to be a user-friendly method for new practitioners.
Spin-coupled generalized valence bond theory (SCGVB, or sometimes SCVB/full GVB)
SCGVB is an extension of GVB that still uses delocalized orbitals, whose delocalization can adjust with molecular structure. In addition, the electronic wavefunction is still a single product of orbitals. The difference is that the spin functions are allowed to adjust simultaneously with the orbitals during energy minimization procedures. This is considered to be one of the best VB descriptions of the wavefunction that relies on only a single configuration.
Complete active space valence bond method (CASVB)
This is a method that often gets confused as a traditional VB method. Instead, this is a localization procedure that maps the full configuration interaction Hartree-Fock wavefunction (CASSCF) onto valence bond structures.
Spin-coupled theory
There are a large number of different valence bond methods. Most use n valence bond orbitals for n electrons. If a single set of these orbitals is combined with all linear independent combinations of the spin functions, we have spin-coupled valence bond theory. The total wave function is optimized using the variational method by varying the coefficients of the basis functions in the valence bond orbitals and the coefficients of the different spin functions. In other cases only a sub-set of all possible spin functions is used. Many valence bond methods use several sets of the valence bond orbitals. It is important to note here that different authors use different names for these different valence bond methods.
Valence bond programs
Several groups have produced computer programs for modern valence bond calculations that are freely available.
References
Further reading
J. Gerratt, D. L. Cooper, P. B. Karadakov and M. Raimondi, "Modern Valence Bond Theory", Chemical Society Reviews, 26, 87, 1997, and several others by the same authors.
J. H. van Lenthe, G. G. Balint-Kurti, "The Valence Bond Self-Consistent Field (VBSCF) method", Chemical Physics Letters 76, 138–142, 1980.
J. H. van Lenthe, G. G. Balint-Kurti, "The Valence Bond Self-Consistent Field (VBSCF) method", The Journal of Chemical Physics 78, 5699–5713, 1983.
J. Li and R. McWeeny, "VB2000: Pushing Valence Bond Theory to new limits", International Journal of Quantum Chemistry, 89, 208, 2002.
L. Song, Y. Mo, Q. Zhang and W. Wu, "XMVB: A program for ab initio nonorthogonal valence bond computations", Journal of Computational Chemistry, 26, 514, 2005.
S. Shaik and P. C. Hiberty, "Valence Bond theory, its History, Fundamentals and Applications. A Primer", Reviews of Computational Chemistry, 20, 1, 2004. A recent review that covers not only their own contributions but the whole of modern valence bond theory.
Computational chemistry
Electronic structure methods | Modern valence bond theory | [
"Physics",
"Chemistry"
] | 2,911 | [
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Theoretical chemistry",
"Electronic structure methods",
"Computational chemistry"
] |
3,043,836 | https://en.wikipedia.org/wiki/Nuclear%20binding%20energy | Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means.
The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc2, where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed.
The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products).
These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen.
Introduction
Nuclear energy
An absorption or release of nuclear energy occurs in nuclear reactions or radioactive decay; those that absorb energy are called endothermic reactions and those that release energy are exothermic reactions. Energy is consumed or released because of differences in the nuclear binding energy between the incoming and outgoing products of the nuclear transmutation.
The best-known classes of exothermic nuclear transmutations are nuclear fission and nuclear fusion. Nuclear energy may be released by fission, when heavy atomic nuclei (like uranium and plutonium) are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is later radiated from the surface, a type of stellar nucleosynthesis. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy, emitted as heat.
In order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation.
The nuclear force
Electrons and nuclei are kept together by electrostatic attraction (negative attracts positive). Furthermore, electrons are sometimes shared by neighboring atoms or transferred to them (by processes of quantum physics); this link between atoms is referred to as a chemical bond and is responsible for the formation of all chemical compounds.
The electric force does not hold nuclei together, because all protons carry a positive charge and repel each other. If two protons were touching, their repulsion force would be almost 40 newtons. Because each of the neutrons carries total charge zero, a proton could electrically attract a neutron if the proton could induce the neutron to become electrically polarized. However, having the neutron between two protons (so their mutual repulsion decreases to 10 N) would attract the neutron only for an electric quadrupole arrangement. Higher multipoles, needed to satisfy more protons, cause weaker attraction, and quickly become implausible.
After the proton and neutron magnetic moments were measured and verified, it was apparent that their magnetic forces might be 20 or 30 newtons, attractive if properly oriented. A pair of protons would do 10−13 joules of work to each other as they approach – that is, they would need to release energy of 0.5 MeV in order to stick together. On the other hand, once a pair of nucleons magnetically stick, their external fields are greatly reduced, so it is difficult for many nucleons to accumulate much magnetic energy.
Therefore, another force, called the nuclear force (or residual strong force) holds the nucleons of nuclei together. This force is a residuum of the strong interaction, which binds quarks into nucleons at an even smaller level of distance.
The fact that nuclei do not clump together (fuse) under normal conditions suggests that the nuclear force must be weaker than the electric repulsion at larger distances, but stronger at close range. Therefore, it has short-range characteristics. An analogy to the nuclear force is the force between two small magnets: magnets are very difficult to separate when stuck together, but once pulled a short distance apart, the force between them drops almost to zero.
Unlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen—for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star), can such a process take place.
Physics of nuclei
There are around 94 naturally occurring elements on Earth. The atoms of each element have a nucleus containing a specific number of protons (always the same number for a given element), and some number of neutrons, which is often roughly a similar number. Two atoms of the same element having different numbers of neutrons are known as isotopes of the element. Different isotopes may have different properties – for example one might be stable and another might be unstable, and gradually undergo radioactive decay to become another element.
The hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen, contains a proton and a neutron. The most common isotope of helium contains two protons and two neutrons, and those of carbon, nitrogen and oxygen – six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect.
Mass defect
Mass defect (also called "mass deficit") is the difference between the mass of an object and the sum of the masses of its constituent particles. Discovered by Albert Einstein in 1905, it can be explained using his formula E = mc2, which describes the equivalence of energy and mass. The decrease in mass is equal to the energy emitted in the reaction of an atom's creation divided by c2. By this formula, adding energy also increases mass (both weight and inertia), whereas removing energy decreases mass. For example, a helium atom containing four nucleons has a mass about 0.8% less than the total mass of four hydrogen atoms (each containing one nucleon). The helium nucleus has four nucleons bound together, and the binding energy which holds them together is, in effect, the missing 0.8% of mass.
For lighter elements, the energy that can be released by assembling them from lighter elements decreases, and energy can be released when they fuse. This is true for nuclei lighter than iron/nickel. For heavier nuclei, more energy is needed to bind them, and that energy may be released by breaking them up into fragments (known as nuclear fission). Nuclear power is generated at present by breaking up uranium nuclei in nuclear power reactors, and capturing the released energy as heat, which is converted to electricity.
As a rule, very light elements can fuse comparatively easily, and very heavy elements can break up via fission very easily; elements in the middle are more stable and it is difficult to make them undergo either fusion or fission in an environment such as a laboratory.
The reason the trend reverses after iron is the growing positive charge of the nuclei, which tends to force nuclei to break up. It is resisted by the strong nuclear interaction, which holds nucleons together. The electric force may be weaker than the strong nuclear force, but the strong force has a much more limited range: in an iron nucleus, each proton repels the other 25 protons, while the nuclear force only binds close neighbors. So for larger nuclei, the electrostatic forces tend to dominate and the nucleus will tend over time to break up.
As nuclei grow bigger still, this disruptive effect becomes steadily more significant. By the time polonium is reached (84 protons), nuclei can no longer accommodate their large positive charge, but emit their excess protons quite rapidly in the process of alpha radioactivity—the emission of helium nuclei, each containing two protons and two neutrons. (Helium nuclei are an especially stable combination.) Because of this process, nuclei with more than 94 protons are not found naturally on Earth (see periodic table). The isotopes beyond uranium (atomic number 92) with the longest half-lives are plutonium-244 (80 million years) and curium-247 (16 million years).
Nuclear reactions in the Sun
The nuclear fusion process works as follows: five billion years ago, the new Sun formed when gravity pulled together a vast cloud of hydrogen and dust, from which the Earth and other planets also arose. The gravitational pull released energy and heated the early Sun, much in the way Helmholtz proposed.
Thermal energy appears as the motion of atoms and molecules: the higher the temperature of a collection of particles, the greater is their velocity and the more violent are their collisions. When the temperature at the center of the newly formed Sun became great enough for collisions between hydrogen nuclei to overcome their electric repulsion, and bring them into the short range of the attractive nuclear force, nuclei began to stick together. When this began to happen, protons combined into deuterium and then helium, with some protons changing in the process to neutrons (plus positrons, positive electrons, which combine with electrons and annihilate into gamma-ray photons). This released nuclear energy now keeps up the high temperature of the Sun's core, and the heat also keeps the gas pressure high, keeping the Sun at its present size, and stopping gravity from compressing it any more. There is now a stable balance between gravity and pressure.
Different nuclear reactions may predominate at different stages of the Sun's existence, including the proton–proton reaction and the carbon–nitrogen cycle—which involves heavier nuclei, but whose final product is still the combination of protons to form helium.
A branch of physics, the study of controlled nuclear fusion, has tried since the 1950s to derive useful power from nuclear fusion reactions that combine small nuclei into bigger ones, typically to heat boilers, whose steam could turn turbines and produce electricity. No earthly laboratory can match one feature of the solar powerhouse: the great mass of the Sun, whose weight keeps the hot plasma compressed and confines the nuclear furnace to the Sun's core. Instead, physicists use strong magnetic fields to confine the plasma, and for fuel they use heavy forms of hydrogen, which burn more easily. Magnetic traps can be rather unstable, and any plasma hot enough and dense enough to undergo nuclear fusion tends to slip out of them after a short time. Even with ingenious tricks, the confinement in most cases lasts only a small fraction of a second.
Combining nuclei
Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge.
For elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similarly sized nuclei. With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than is released by the nuclear attraction (which is effective mainly between close neighbors). Conversely, energy could actually be released by breaking apart nuclei heavier than iron.
With the nuclei of elements heavier than lead, the electric repulsion is so strong that some of them spontaneously eject positive fragments, usually nuclei of helium that form stable alpha particles. This spontaneous break-up is one of the forms of radioactivity exhibited by some nuclei.
Nuclei heavier than lead (except for bismuth, thorium, and uranium) spontaneously break up too quickly to appear in nature as primordial elements, though they can be produced artificially or as intermediates in the decay chains of heavier elements. Generally, the heavier the nuclei are, the faster they spontaneously decay.
Iron nuclei are the most stable nuclei (in particular iron-56), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones—nuclei of hydrogen (protons)—to form nuclei of helium, and that is how the Sun generates its energy. Alternatively, one can break up the heaviest ones—nuclei of uranium or plutonium—into smaller fragments, and that is what nuclear reactors do.
Nuclear binding energy
An example that illustrates nuclear binding energy is the nucleus of 12C (carbon-12), which contains 6 protons and 6 neutrons. The protons are all positively charged and repel each other, but the nuclear force overcomes the repulsion and causes them to stick together. The nuclear force is a close-range force (it is strongly attractive at a distance of 1.0 fm and becomes extremely small beyond a distance of 2.5 fm), and virtually no effect of this force is observed outside the nucleus. The nuclear force also pulls neutrons together, or neutrons and protons.
The energy of the nucleus is negative with regard to the energy of the particles pulled apart to infinite distance (just like the gravitational energy of planets of the Solar System), because energy must be utilized to split a nucleus into its individual protons and neutrons. Mass spectrometers have measured the masses of nuclei, which are always less than the sum of the masses of protons and neutrons that form them, and the difference—by the formula —gives the binding energy of the nucleus.
Nuclear fusion
The binding energy of helium is the energy source of the Sun and of most stars. The sun is composed of 74 percent hydrogen (measured by mass), an element having a nucleus consisting of a single proton. Energy is released in the Sun when 4 protons combine into a helium nucleus, a process in which two of them are also converted to neutrons.
The conversion of protons to neutrons is the result of another nuclear force, known as the weak (nuclear) force. The weak force, like the strong force, has a short range, but is much weaker than the strong force. The weak force tries to make the number of neutrons and protons into the most energetically stable configuration. For nuclei containing less than 40 particles, these numbers are usually about equal. Protons and neutrons are closely related and are collectively known as nucleons. As the number of particles increases toward a maximum of about 209, the number of neutrons to maintain stability begins to outstrip the number of protons, until the ratio of neutrons to protons is about three to two.
The protons of hydrogen combine to helium only if they have enough velocity to overcome each other's mutual repulsion sufficiently to get within range of the strong nuclear attraction. This means that fusion only occurs within a very hot gas. Hydrogen hot enough for combining to helium requires an enormous pressure to keep it confined, but suitable conditions exist in the central regions of the Sun, where such pressure is provided by the enormous weight of the layers above the core, pressed inwards by the Sun's strong gravity. The process of combining protons to form helium is an example of nuclear fusion.
Producing helium from normal hydrogen would be practically impossible on earth because of the difficulty in creating deuterium. Research is being undertaken on developing a process using deuterium and tritium. The Earth's oceans contain a large amount of deuterium that could be used and tritium can be made in the reactor itself from lithium, and furthermore the helium product does not harm the environment, so some consider nuclear fusion a good alternative to supply our energy needs. Experiments to carry out this form of fusion have so far only partially succeeded. Sufficiently hot deuterium and tritium must be confined. One technique is to use very strong magnetic fields, because charged particles (like those trapped in the Earth's radiation belt) are guided by magnetic field lines.
The binding energy maximum and ways to approach it by decay
In the main isotopes of light elements, such as carbon, nitrogen and oxygen, the most stable combination of neutrons and of protons occurs when the numbers are equal (this continues to element 20, calcium). However, in heavier nuclei, the disruptive energy of protons increases, since they are confined to a tiny volume and repel each other. The energy of the strong force holding the nucleus together also increases, but at a slower rate, as if inside the nucleus, only nucleons close to each other are tightly bound, not ones more widely separated.
The net binding energy of a nucleus is that of the nuclear attraction, minus the disruptive energy of the electric force. As nuclei get heavier than helium, their net binding energy per nucleon (deduced from the difference in mass between the nucleus and the sum of masses of component nucleons) grows more and more slowly, reaching its peak at iron. As nucleons are added, the total nuclear binding energy always increases—but the total disruptive energy of electric forces (positive protons repelling other protons) also increases, and past iron, the second increase outweighs the first. Iron-56 (56Fe) is the most efficiently bound nucleus meaning that it has the least average mass per nucleon. However, nickel-62 is the most tightly bound nucleus in terms of binding energy per nucleon. (Nickel-62's higher binding energy does not translate to a larger mean mass loss than 56Fe, because 62Ni has a slightly higher ratio of neutrons/protons than does iron-56, and the presence of the heavier neutrons increases nickel-62's average mass per nucleon).
To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If changing one proton into a neutron or one neutron into a proton increases the stability (lowering the mass), then this will happen through beta decay, meaning the nuclide will be radioactive.
The two methods for this conversion are mediated by the weak force, and involve types of beta decay. In the simplest beta decay, neutrons are converted to protons by emitting a negative electron and an antineutrino. This is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electrons. In the opposite process, which only happens within a nucleus, and not to free particles, a proton may become a neutron by ejecting a positron and an electron neutrino. This is permitted if enough energy is available between parent and daughter nuclides to do this (the required energy difference is equal to 1.022 MeV, which is the mass of 2 electrons). If the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture, in which a proton simply electron captures one of the atom's K orbital electrons, emits a neutrino, and becomes a neutron.
Among the heaviest nuclei, starting with tellurium nuclei (element 52) containing 104 or more nucleons, electric forces may be so destabilizing that entire chunks of the nucleus may be ejected, usually as alpha particles, which consist of two protons and two neutrons (alpha particles are fast helium nuclei). (Beryllium-8 also decays, very quickly, into two alpha particles.) This type of decay becomes more and more probable as elements rise in atomic weight past 104.
The curve of binding energy is a graph that plots the binding energy per nucleon against atomic mass. This curve has its main peak at iron and nickel and then slowly decreases again, and also a narrow isolated peak at helium, which is more stable than other low-mass nuclides. The heaviest nuclei in more than trace quantities in nature, uranium 238U, are unstable, but having a half-life of 4.5 billion years, close to the age of the Earth, they are still relatively abundant; they (and other nuclei heavier than helium) have formed in stellar evolution events like supernova explosions preceding the formation of the Solar System. The most common isotope of thorium, 232Th, also undergoes alpha particle emission, and its half-life (time over which half a number of atoms decays) is even longer, by several times. In each of these, radioactive decay produces daughter isotopes that are also unstable, starting a chain of decays that ends in some stable isotope of lead.
Calculation of nuclear binding energy
Calculation can be employed to determine the nuclear binding energy of nuclei. The calculation involves determining the nuclear mass defect, converting it into energy, and expressing the result as energy per mole of atoms, or as energy per nucleon.
Conversion of nuclear mass defect into energy
Nuclear mass defect is defined as the difference between the nuclear mass and the sum of the masses of the constituent nucleons. It is given by
$$\Delta m = Z\,m_p + N\,m_n - M$$
where:
Z is the proton number (atomic number).
A is the nucleon number (mass number).
mp is the mass of proton.
mn is the mass of neutron.
M is the nuclear mass.
N is the neutron number.
The nuclear mass defect is usually converted into nuclear binding energy, which is the minimum energy required to disassemble the nucleus into its constituent nucleons. This conversion is done with the mass-energy equivalence: . However it must be expressed as energy per mole of atoms or as energy per nucleon.
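As a concrete illustration of this procedure, the short Python sketch below computes the binding energy of helium-4. The particle and nuclear masses used are approximate tabulated values assumed here for illustration.

```python
# Worked example of the procedure above, for helium-4 (Z = 2, N = 2).
m_p = 1.007276          # proton mass, Da (approximate)
m_n = 1.008665          # neutron mass, Da (approximate)
M_He4_nucleus = 4.001506  # helium-4 nuclear mass, Da (approximate)
Da_to_MeV = 931.494     # energy equivalent of 1 Da, MeV

Z, N = 2, 2
mass_defect = Z * m_p + N * m_n - M_He4_nucleus   # in Da
binding_energy = mass_defect * Da_to_MeV          # in MeV

print(f"mass defect       : {mass_defect:.6f} Da")
print(f"binding energy    : {binding_energy:.1f} MeV")            # ~28.3 MeV
print(f"per nucleon       : {binding_energy / (Z + N):.2f} MeV")  # ~7.07 MeV
```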
Fission and fusion
Nuclear energy is released by the splitting (fission) or merging (fusion) of the nuclei of atom(s). The conversion of nuclear mass–energy to a form of energy, which can remove some mass when the energy is removed, is consistent with the mass–energy equivalence formula:
ΔE = Δm c2,
where
ΔE = energy release,
Δm = mass defect,
and c = the speed of light in vacuum.
Nuclear energy was first discovered by French physicist Henri Becquerel in 1896, when he found that photographic plates stored in the dark near uranium were blackened like X-ray plates (X-rays had recently been discovered in 1895).
Nickel-62 has the highest binding energy per nucleon of any isotope. If an atom of lower average binding energy per nucleon is changed into two atoms of higher average binding energy per nucleon, energy is emitted. (The average here is the weighted average.) Also, if two atoms of lower average binding energy fuse into an atom of higher average binding energy, energy is emitted. The curve of binding energy per nucleon shows that fusion, or combining, of hydrogen nuclei to form heavier atoms releases energy, as does fission of uranium, the breaking up of a larger nucleus into smaller parts.
Nuclear energy is released by three exoenergetic (or exothermic) processes:
Radioactive decay, where a neutron or proton in the radioactive nucleus decays spontaneously by emitting either particles, electromagnetic radiation (gamma rays), or both. Note that for radioactive decay, it is not strictly necessary for the binding energy to increase. What is strictly necessary is that the mass decrease. If a neutron turns into a proton and the energy of the decay is less than 0.782343 MeV, the difference between the masses of the neutron and proton multiplied by the speed of light squared, (such as rubidium-87 decaying to strontium-87), the average binding energy per nucleon will actually decrease.
Fusion, two atomic nuclei fuse together to form a heavier nucleus
Fission, the breaking of a heavy nucleus into two (or more rarely three) lighter nuclei, and some neutrons
The energy-producing nuclear interaction of light elements requires some clarification. Frequently, all light element energy-producing nuclear interactions are classified as fusion, however by the given definition above fusion requires that the products include a nucleus that is heavier than the reactants. Light elements can undergo energy-producing nuclear interactions by fusion or fission. All energy-producing nuclear interactions between two hydrogen isotopes and between hydrogen and helium-3 are fusion, as the product of these interactions include a heavier nucleus. However, the energy-producing nuclear interaction of a neutron with lithium–6 produces Hydrogen-3 and Helium-4, each a lighter nucleus. By the definition above, this nuclear interaction is fission, not fusion. When fission is caused by a neutron, as in this case, it is called induced fission.
Binding energy for atoms
The binding energy of an atom (including its electrons) is not exactly the same as the binding energy of the atom's nucleus. The measured mass deficits of isotopes are always listed as mass deficits of the neutral atoms of that isotope, and mostly in MeV/c2. As a consequence, the listed mass deficits are not a measure of the stability or binding energy of isolated nuclei, but for the whole atoms. There is a very practical reason for this, namely that it is very hard to totally ionize heavy elements, i.e. strip them of all of their electrons.
This practice is useful for other reasons, too: stripping all the electrons from a heavy unstable nucleus (thus producing a bare nucleus) changes the lifetime of the nucleus, or the nucleus of a stable neutral atom can likewise become unstable after stripping, indicating that the nucleus cannot be treated independently. Examples of this have been shown in bound-state β decay experiments performed at the GSI heavy ion accelerator.
This is also evident from phenomena like electron capture. Theoretically, in orbital models of heavy atoms, the electron orbits partially inside the nucleus (it does not orbit in a strict sense, but has a non-vanishing probability of being located inside the nucleus).
A nuclear decay happens to the nucleus, meaning that properties ascribed to the nucleus change in the event. In the field of physics the concept of "mass deficit" as a measure for "binding energy" means "mass deficit of the neutral atom" (not just the nucleus) and is a measure for stability of the whole atom.
Nuclear binding energy curve
In the periodic table of elements, the series of light elements from hydrogen up to sodium is observed to exhibit generally increasing binding energy per nucleon as the atomic mass increases. This increase is generated by increasing forces per nucleon in the nucleus, as each additional nucleon is attracted by other nearby nucleons, and thus more tightly bound to the whole. Helium-4 and oxygen-16 are particularly stable exceptions to the trend. This is because they are doubly magic, meaning their protons and neutrons both fill their respective nuclear shells.
The region of increasing binding energy is followed by a region of relative stability (saturation) in the sequence from about mass 30 through about mass 90. In this region, the nucleus has become large enough that nuclear forces no longer completely extend efficiently across its width. Attractive nuclear forces in this region, as atomic mass increases, are nearly balanced by repellent electromagnetic forces between protons, as the atomic number increases.
Finally, in the heavier elements, there is a gradual decrease in binding energy per nucleon as atomic number increases. In this region of nuclear size, electromagnetic repulsive forces are beginning to overcome the strong nuclear force attraction.
At the peak of binding energy, nickel-62 is the most tightly bound nucleus (per nucleon), followed by iron-58 and iron-56. This is the approximate basic reason why iron and nickel are very common metals in planetary cores, since they are produced profusely as end products in supernovae and in the final stages of silicon burning in stars. However, it is not binding energy per defined nucleon (as defined above), which controls exactly which nuclei are made, because within stars, neutrons and protons can inter-convert to release even more energy per generic nucleon. In fact, it has been argued that photodisintegration of 62Ni to form 56Fe may be energetically possible in an extremely hot star core, due to this beta decay conversion of neutrons to protons. This favors the creation of 56Fe, the nuclide with the lowest mass per nucleon. However, at high temperatures not all matter will be in the lowest energy state. This energetic maximum should also hold for ambient conditions, say at room temperature and atmospheric pressure, for neutral condensed matter consisting of 56Fe atoms—however, in these conditions nuclei of atoms are inhibited from fusing into the most stable and low energy state of matter.
Elements with high binding energy per nucleon, like iron and nickel, cannot undergo fission, but they can theoretically undergo fusion with hydrogen, deuterium, helium, and carbon, for instance:
Ni + C → Se Q = 5.467 MeV
It is generally believed that iron-56 is more common than nickel isotopes in the universe for mechanistic reasons, because its unstable progenitor nickel-56 is copiously made by staged build-up of 14 helium nuclei inside supernovas, where it has no time to decay to iron before being released into the interstellar medium in a matter of a few minutes, as the supernova explodes. However, nickel-56 then decays to cobalt-56 within a few weeks, then this radioisotope finally decays to iron-56 with a half life of about 77.3 days. The radioactive decay-powered light curve of such a process has been observed to happen in type II supernovae, such as SN 1987A. In a star, there are no good ways to create nickel-62 by alpha-addition processes, or else there would presumably be more of this highly stable nuclide in the universe.
Binding energy and nuclide masses
The fact that the maximum binding energy is found in medium-sized nuclei is a consequence of the trade-off in the effects of two opposing forces that have different range characteristics. The attractive nuclear force (strong nuclear force), which binds protons and neutrons equally to each other, has a limited range due to a rapid exponential decrease in this force with distance. However, the repelling electromagnetic force, which acts between protons to force nuclei apart, falls off with distance much more slowly (as the inverse square of distance). For nuclei larger than about four nucleons in diameter, the additional repelling force of additional protons more than offsets any binding energy that results between further added nucleons as a result of additional strong force interactions. Such nuclei become increasingly less tightly bound as their size increases, though most of them are still stable. Finally, nuclei containing more than 209 nucleons (larger than about 6 nucleons in diameter) are all too large to be stable, and are subject to spontaneous decay to smaller nuclei.
Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). The nuclear fission of a few light elements (such as Lithium) occurs because Helium-4 is a product and a more tightly bound element than slightly heavier elements. Both processes produce energy as the sum of the masses of the products is less than the sum of the masses of the reacting nuclei.
As seen above in the example of deuterium, nuclear binding energies are large enough that they may be easily measured as fractional mass deficits, according to the equivalence of mass and energy. The atomic binding energy is simply the amount of energy (and mass) released, when a collection of free nucleons are joined to form a nucleus.
Nuclear binding energy can be computed from the difference in mass of a nucleus, and the sum of the masses of the number of free neutrons and protons that make up the nucleus. Once this mass difference, called the mass defect or mass deficiency, is known, Einstein's mass–energy equivalence formula can be used to compute the binding energy of any nucleus. Early nuclear physicists used to refer to computing this value as a "packing fraction" calculation.
For example, the dalton (1 Da) is defined as 1/12 of the mass of a 12C atom—but the atomic mass of a 1H atom (which is a proton plus electron) is 1.007825 Da, so each nucleon in 12C has lost, on average, about 0.8% of its mass in the form of binding energy.
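As a concrete illustration of the mass-defect bookkeeping described above, the short calculation below (not part of the original article) estimates the total and per-nucleon binding energy of 12C from commonly quoted proton, neutron and electron masses; the specific constants are standard values supplied here for the sketch.

```python
# Illustrative binding-energy calculation for 12C from its mass defect.
# Masses in daltons (Da); standard approximate values supplied for this sketch.
DA_TO_MEV = 931.494      # MeV per Da

m_proton   = 1.007276
m_neutron  = 1.008665
m_electron = 0.000549
m_c12_atom = 12.000000   # atomic mass of 12C (exact by definition)

Z, N = 6, 6              # protons and neutrons in 12C
# Sum of the masses of the free constituents of the neutral atom:
m_parts = Z * (m_proton + m_electron) + N * m_neutron
mass_defect = m_parts - m_c12_atom           # ~0.099 Da
binding_energy = mass_defect * DA_TO_MEV     # ~92 MeV total

print(f"mass defect      = {mass_defect:.4f} Da")
print(f"binding energy   = {binding_energy:.1f} MeV")
print(f"per nucleon      = {binding_energy / (Z + N):.2f} MeV")   # ~7.7 MeV
print(f"fraction of mass = {mass_defect / m_parts:.3%}")          # ~0.8 %
```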
Semiempirical formula for nuclear binding energy
For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula for the binding energy (EB) per nucleon is:

EB/A = a − b/A^(1/3) − c·Z²/A^(4/3) − d·(N − Z)²/A² ± e/A^(7/4)

where a, b, c, d and e are coefficients fitted to measured nuclide masses.
The first term, a, is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The b/A^(1/3) term is a surface tension effect and is proportional to the number of nucleons that are situated on the nuclear surface; it is largest for light nuclei. The c·Z²/A^(4/3) term is the Coulomb electrostatic repulsion; this becomes more important as Z increases. The symmetry correction term d·(N − Z)²/A² takes into account the fact that in the absence of other effects the most stable arrangement has equal numbers of protons and neutrons; this is because the n–p interaction in a nucleus is stronger than either the n−n or p−p interaction. The pairing term e/A^(7/4) is purely empirical; it is + for even–even nuclei and − for odd–odd nuclei. When A is odd, the pairing term is identically zero.
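For readers who want to experiment with the formula, a minimal implementation is sketched below. The coefficient values are an assumption of this sketch (one textbook-style parametrization of roughly this form), not necessarily the values intended in the article; published fits differ by several percent.

```python
# Minimal sketch of the semi-empirical binding energy per nucleon, in MeV.
# Coefficients below are one commonly quoted parametrization (assumed here);
# published fits vary, so treat the numbers as illustrative only.
A_COEFF = 14.0    # saturation (volume) term
B_COEFF = 13.0    # surface term
C_COEFF = 0.585   # Coulomb term
D_COEFF = 19.3    # symmetry term
E_COEFF = 33.0    # pairing term

def binding_energy_per_nucleon(Z: int, N: int) -> float:
    A = Z + N
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:      # even-even nuclei: extra binding
        pairing = +E_COEFF / A**1.75
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd nuclei: reduced binding
        pairing = -E_COEFF / A**1.75
    return (A_COEFF
            - B_COEFF / A**(1/3)
            - C_COEFF * Z**2 / A**(4/3)
            - D_COEFF * (N - Z)**2 / A**2
            + pairing)

print(binding_energy_per_nucleon(26, 30))  # 56Fe, roughly 8.7 MeV per nucleon
print(binding_energy_per_nucleon(28, 34))  # 62Ni, similar value
```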
Example values deduced from experimentally measured atom nuclide masses
The following table lists some binding energies and mass defect values. Notice also that we use 1 Da = 931.494 MeV/c². To calculate the binding energy we use the formula Z (mp + me) + N mn − mnuclide, where Z denotes the number of protons in the nuclide and N its number of neutrons. We take mp = 1.007276 Da, mn = 1.008665 Da, and me = 0.000549 Da. The letter A denotes the sum of Z and N (number of nucleons in the nuclide). If we assume the reference nucleon has the mass of a neutron (so that all "total" binding energies calculated are maximal) we could define the total binding energy as the difference between the mass of the nucleus and the mass of a collection of A free neutrons. In other words, it would be (Z + N) mn − mnuclide. The "total binding energy per nucleon" would be this value divided by A.
56Fe has the lowest nucleon-specific mass of the four nuclides listed in this table, but this does not imply it is the strongest bound atom per hadron, unless the choice of beginning hadrons is completely free. Iron releases the largest energy if any 56 nucleons are allowed to build a nuclide—changing one to another if necessary. The highest binding energy per hadron, with the hadrons starting as the same number of protons Z and total nucleons A as in the bound nucleus, is 62Ni. Thus, the true absolute value of the total binding energy of a nucleus depends on what we are allowed to construct the nucleus out of. If all nuclei of mass number A were to be allowed to be constructed of A neutrons, then 56Fe would release the most energy per nucleon, since it has a larger fraction of protons than 62Ni. However, if nuclei are required to be constructed of only the same number of protons and neutrons that they contain, then nickel-62 is the most tightly bound nucleus, per nucleon.
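The distinction drawn in this paragraph can be made concrete with a short calculation. The sketch below (not part of the original article) compares 56Fe and 62Ni under the two conventions described: binding relative to the actual proton/neutron content, and binding relative to a hypothetical collection of A free neutrons; the atomic masses are standard approximate values supplied for the sketch.

```python
# Illustrative comparison of 56Fe and 62Ni under two binding-energy conventions.
DA_TO_MEV = 931.494
m_p, m_n, m_e = 1.007276, 1.008665, 0.000549   # approximate masses in Da

nuclides = {                 # (Z, N, atomic mass in Da), approximate values
    "56Fe": (26, 30, 55.934936),
    "62Ni": (28, 34, 61.928345),
}

for name, (Z, N, m_atom) in nuclides.items():
    A = Z + N
    # Convention 1: bind the actual Z protons (+electrons) and N neutrons.
    be_conventional = (Z * (m_p + m_e) + N * m_n - m_atom) * DA_TO_MEV
    # Convention 2: reference is A free neutrons (maximal "total" binding).
    be_neutron_ref = (A * m_n - m_atom) * DA_TO_MEV
    print(f"{name}: {be_conventional / A:.3f} MeV/nucleon (conventional), "
          f"{be_neutron_ref / A:.3f} MeV/nucleon (neutron reference)")

# Output shows 62Ni ahead under the conventional definition, but 56Fe ahead
# when every nucleon is referenced to the neutron mass, as the text explains.
```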
In the table above it can be seen that the decay of a neutron, as well as the transformation of tritium into helium-3, releases energy; hence, the products are more strongly bound when measured against the mass of an equal number of neutrons (and are also a lighter state per number of total hadrons). Such reactions are not driven by changes in binding energies as calculated from previously fixed N and Z numbers of neutrons and protons, but rather by decreases in the total mass per nucleon of the nuclides involved in the reaction. (Note that the binding energy given above for hydrogen-1 is the atomic binding energy, not the nuclear binding energy, which would be zero.)
See also
Gravitational binding energy
Bond-dissociation energy (binding energy between the atoms in a chemical bond)
Electron binding energy (energy required to free an electron from its atomic orbital or from a solid)
Atomic binding energy (energy required to disassemble an atom into free electrons and a nucleus)
Quantum chromodynamics binding energy (addresses the mass and kinetic energy of the parts that bind the various quarks together inside a hadron)
References
External links
Nuclear physics
Nuclear chemistry
Nuclear fusion
Binding energy | Nuclear binding energy | [
"Physics",
"Chemistry"
] | 7,942 | [
"Nuclear fusion",
"Nuclear chemistry",
"nan",
"Nuclear physics"
] |
7,240,939 | https://en.wikipedia.org/wiki/Cement%20kiln | Cement kilns are used for the pyroprocessing stage of manufacture of portland and other types of hydraulic cement, in which calcium carbonate reacts with silica-bearing minerals to form a mixture of calcium silicates. Over a billion tonnes of cement are made per year, and cement kilns are the heart of this production process: their capacity usually defines the capacity of the cement plant. Because pyroprocessing is the main energy-consuming and greenhouse-gas–emitting stage of cement manufacture, improvement of kiln efficiency has been the central concern of cement manufacturing technology. Emissions from cement kilns are a major source of greenhouse gas emissions, accounting for around 2.5% of non-natural carbon emissions worldwide.
The manufacture of cement clinker
A typical process of manufacture consists of three stages:
grinding a mixture of limestone and clay or shale to make a fine "rawmix" (see Rawmill);
heating the rawmix to sintering temperature (up to 1450 °C) in a cement kiln;
grinding the resulting clinker to make cement (see Cement mill).
In the second stage, the rawmix is fed into the kiln and gradually heated by contact with the hot gases from combustion of the kiln fuel. Successive chemical reactions take place as the temperature of the rawmix rises:
70 to 110 °C – Free water is evaporated.
400 to 600 °C – clay-like minerals are decomposed into their constituent oxides, principally SiO2 and Al2O3. Dolomite (CaMg(CO3)2) decomposes to calcium carbonate (CaCO3), MgO and CO2.
650 to 900 °C – calcium carbonate reacts with SiO2 to form belite (Ca2SiO4) (also known as C2S in the Cement Industry).
900 to 1050 °C – the remaining calcium carbonate decomposes to calcium oxide (CaO) and CO2.
1300 to 1450 °C – partial (20–30%) melting takes place, and belite reacts with calcium oxide to form alite (Ca3O·SiO4) (also known as C3S in the Cement Industry).
Alite is the characteristic constituent of Portland cement. Typically, a peak temperature of 1400–1450 °C is required to complete the reaction. The partial melting causes the material to aggregate into lumps or nodules, typically of diameter 1–10 mm. This is called clinker.
The hot clinker next falls into a cooler which recovers most of its heat, and cools the clinker to around 100 °C, at which temperature it can be conveniently conveyed to storage.
The cement kiln system is designed to accomplish these processes.
Early history
Portland cement clinker was first made (in 1825) in a modified form of the traditional static lime kiln. The basic, egg-cup shaped lime kiln was provided with a conical or beehive shaped extension to increase draught and thus obtain the higher temperature needed to make cement clinker. For nearly half a century, this design, and minor modifications, remained the only method of manufacture. The kiln was restricted in size by the strength of the chunks of rawmix: if the charge in the kiln collapsed under its own weight, the kiln would be extinguished. For this reason, beehive kilns never made more than 30 tonnes of clinker per batch. A batch took one week to turn around: a day to fill the kiln, three days to burn off, two days to cool, and a day to unload. Thus, a kiln would produce about 1500 tonnes per year.
Around 1885, experiments began on design of continuous kilns. One design was the shaft kiln, similar in design to a blast furnace. Rawmix in the form of lumps and fuel were continuously added at the top, and clinker was continually withdrawn at the bottom. Air was blown through under pressure from the base to combust the fuel. The shaft kiln had a brief period of use before it was eclipsed by the rotary kiln, but it had a limited renaissance from 1970 onward in China and elsewhere, when it was used for small-scale, low-tech plants in rural areas away from transport routes. Several thousand such kilns were constructed in China. A typical shaft kiln produces 100-200 tonnes per day.
From 1885, trials began on the development of the rotary kiln, which today accounts for more than 95% of world production.
The rotary kiln
The rotary kiln consists of a tube made from steel plate, and lined with firebrick. The tube slopes slightly (1–4°) and slowly rotates on its axis at between 30 and 250 revolutions per hour. Rawmix is fed in at the upper end, and the rotation of the kiln causes it gradually to move downhill to the other end of the kiln. At the other end fuel, in the form of gas, oil, or pulverized solid fuel, is blown in through the "burner pipe", producing a large concentric flame in the lower part of the kiln tube. As material moves under the flame, it reaches its peak temperature, before dropping out of the kiln tube into the cooler. Air is drawn first through the cooler and then through the kiln for combustion of the fuel. In the cooler the air is heated by the cooling clinker, so that it may be 400 to 800 °C before it enters the kiln, thus causing intense and rapid combustion of the fuel.
The earliest successful rotary kilns were developed in Pennsylvania around 1890, based on a design by Frederick Ransome, and were about 1.5 m in diameter and 15 m in length. Such a kiln made about 20 tonnes of clinker per day. The fuel, initially, was oil, which was readily available in Pennsylvania at the time. It was particularly easy to get a good flame with this fuel. Within the next 10 years, the technique of firing by blowing in pulverized coal was developed, allowing the use of the cheapest available fuel. By 1905, the largest kilns were 2.7 x 60 m in size, and made 190 tonnes per day. At that date, after only 15 years of development, rotary kilns accounted for half of world production. Since then, the capacity of kilns has increased steadily, and the largest kilns today produce around 10,000 tonnes per day. In contrast to static kilns, the material passes through quickly: it takes from 3 hours (in some old wet process kilns) to as little as 10 minutes (in short precalciner kilns). Rotary kilns run 24 hours a day, and are typically stopped only for a few days once or twice a year for essential maintenance. One of the main maintenance tasks on rotary kilns is tyre and roller surface machining and grinding, which can be carried out while the kiln is in full operation at speeds up to 3.5 rpm. This is an important discipline, because heating up and cooling down are long, wasteful, and damaging processes. Uninterrupted runs as long as 18 months have been achieved.
The wet process and the dry process
From the earliest times, two different methods of rawmix preparation were used: the mineral components were either dry-ground to form a flour-like powder, or were wet-ground with added water to produce a fine slurry with the consistency of paint, and with a typical water content of 40–45%.
The wet process suffered the obvious disadvantage that, when the slurry was introduced into the kiln, a large amount of extra fuel was used in evaporating the water. Furthermore, a larger kiln was needed for a given clinker output, because much of the kiln's length was committed to the drying process. On the other hand, the wet process had a number of advantages. Wet grinding of hard minerals is usually much more efficient than dry grinding. When slurry is dried in the kiln, it forms a granular crumble that is ideal for subsequent heating in the kiln. In the dry process, it is very difficult to keep the fine powder rawmix in the kiln, because the fast-flowing combustion gases tend to blow it back out again. It became a practice to spray water into dry kilns in order to "damp down" the dry mix, and thus, for many years there was little difference in efficiency between the two processes, and the overwhelming majority of kilns used the wet process. By 1950, a typical large, wet process kiln, fitted with drying-zone heat exchangers, was 3.3 x 120 m in size, made 680 tonnes per day, and used about 0.25–0.30 tonnes of coal fuel for every tonne of clinker produced. Before the energy crisis of the 1970s put an end to new wet-process installations, kilns as large as 5.8 x 225 m in size were making 3000 tonnes per day.
An interesting footnote on the wet process history is that some manufacturers have in fact made very old wet process facilities profitable through the use of waste fuels. Plants that burn waste fuels enjoy a negative fuel cost (they are paid by industries needing to dispose of materials that have energy content and can be safely disposed of in the cement kiln thanks to its high temperatures and longer retention times). As a result, the inefficiency of the wet process is an advantage—to the manufacturer. By locating waste burning operations at older wet process locations, higher fuel consumption actually equates to higher profits for the manufacturer, although it produces correspondingly greater emission of CO2. Manufacturers who think such emissions should be reduced are abandoning the use of wet process.
Preheaters
In the 1930s, notably in Germany, the first attempts were made to redesign the kiln system to minimize waste of fuel. This led to two significant developments:
the grate preheater
the gas-suspension preheater.
Grate preheaters
The grate preheater consists of a chamber containing a chain-like high-temperature steel moving grate, attached to the cold end of the rotary kiln. A dry-powder rawmix is turned into hard pellets of 10–20 mm diameter in a nodulizing pan, with the addition of 10-15% water. The pellets are loaded onto the moving grate, and the hot combustion gases from the rear of the kiln are passed through the bed of pellets from beneath. This dries and partially calcines the rawmix very efficiently. The pellets then drop into the kiln. Very little powdery material is blown out of the kiln. Because the rawmix is damped in order to make pellets, this is referred to as a "semi-dry" process. The grate preheater is also applicable to the "semi-wet" process, in which the rawmix is made as a slurry, which is first de-watered with a high-pressure filter, and the resulting "filter-cake" is extruded into pellets, which are fed to the grate. In this case, the water content of the pellets is 17-20%. Grate preheaters were most popular in the 1950s and 60s, when a typical system would have a grate 28 m long and 4 m wide, and a rotary kiln of 3.9 x 60 m, making 1050 tonnes per day, using about 0.11-0.13 tonnes of coal fuel for every tonne of clinker produced. Systems up to 3000 tonnes per day were installed.
Gas-suspension preheaters
The key component of the gas-suspension preheater is the cyclone. A cyclone is a conical vessel into which a dust-bearing gas-stream is passed tangentially. This produces a vortex within the vessel. The gas leaves the vessel through a co-axial "vortex-finder". The solids are thrown to the outside edge of the vessel by centrifugal action, and leave through a valve in the vertex of the cone. Cyclones were originally used to clean up the dust-laden gases leaving simple dry process kilns. If, instead, the entire feed of rawmix is forced to pass through the cyclone, a very efficient heat exchange takes place: the gas is efficiently cooled, hence producing less waste of heat to the atmosphere, and the raw mix is efficiently heated. The heat transfer efficiency is further increased if a number of cyclones are connected in series.
The number of cyclone stages used in practice varies from 1 to 6. Energy, in the form of fan-power, is required to draw the gases through the string of cyclones, and at a string of 6 cyclones, the cost of the added fan-power needed for an extra cyclone exceeds the efficiency advantage gained. It is normal to use the warm exhaust gas to dry the raw materials in the rawmill, and if the raw materials are wet, hot gas from a less efficient preheater is desirable. For this reason, the most commonly encountered suspension preheaters have 4 cyclones. The hot feed that leaves the base of the preheater string is typically 20% calcined, so the kiln has less subsequent processing to do, and can therefore achieve a higher specific output. Typical large systems installed in the early 1970s had cyclones 6 m in diameter, a rotary kiln of 5 x 75 m, making 2500 tonnes per day, using about 0.11-0.12 tonnes of coal fuel for every tonne of clinker produced.
A penalty paid for the efficiency of suspension preheaters is their tendency to block up. Salts, such as the sulfate and chloride of sodium and potassium, tend to evaporate in the burning zone of the kiln. They are carried back in vapor form, and re-condense when a sufficiently low temperature is encountered. Because these salts re-circulate back into the rawmix and re-enter the burning zone, a recirculation cycle establishes itself. A kiln with 0.1% chloride in the rawmix and clinker may have 5% chloride in the mid-kiln material. Condensation usually occurs in the preheater, and a sticky deposit of liquid salts glues dusty rawmix into a hard deposit, typically on surfaces against which the gas-flow is impacting. This can choke the preheater to the point that air-flow can no longer be maintained in the kiln. It then becomes necessary to manually break the build-up away. Modern installations often have automatic devices installed at vulnerable points to knock out build-up regularly. An alternative approach is to "bleed off" some of the kiln exhaust at the kiln inlet where the salts are still in the vapor phase, and remove and discard the solids in this. This is usually termed an "alkali bleed" and it breaks the recirculation cycle. It can also be of advantage for cement quality reasons, since it reduces the alkali content of the clinker. The alkali content is a critical property of cement. Indeed, cement with too high an alkali content can cause a harmful alkali–silica reaction (ASR) in concrete made with aggregates containing reactive amorphous silica. Hygroscopic and swelling sodium silica gel is formed inside the reactive aggregates, which develop characteristic internal fissures. This expansive chemical reaction occurring in the concrete matrix generates high tensile stress in the concrete and creates cracks that can ruin a concrete structure. However, hot gas is run to waste so the process is inefficient and increases kiln fuel consumption.
Precalciners
In the 1970s the precalciner was pioneered in Japan, and has subsequently become the equipment of choice for new large installations worldwide. The precalciner is a development of the suspension preheater. The philosophy is this: the amount of fuel that can be burned in the kiln is directly related to the size of the kiln. If part of the fuel necessary to burn the rawmix is burned outside the kiln, the output of the system can be increased for a given kiln size. Users of suspension preheaters found that output could be increased by injecting extra fuel into the base of the preheater. The logical development was to install a specially designed combustion chamber at the base of the preheater, into which pulverized coal is injected. This is referred to as an "air-through" precalciner, because the combustion air for both the kiln fuel and the calciner fuel all passes through the kiln. This kind of precalciner can burn up to 30% (typically 20%) of its fuel in the calciner. If more fuel were injected in the calciner, the extra amount of air drawn through the kiln would cool the kiln flame excessively. The feed is 40-60% calcined before it enters the rotary kiln.
The ultimate development is the "air-separate" precalciner, in which the hot combustion air for the calciner arrives in a duct directly from the cooler, bypassing the kiln. Typically, 60-75% of the fuel is burned in the precalciner. In these systems, the feed entering the rotary kiln is 100% calcined. The kiln has only to raise the feed to sintering temperature. In theory the maximum efficiency would be achieved if all the fuel were burned in the preheater, but the sintering operation involves partial melting and nodulization to make clinker, and the rolling action of the rotary kiln remains the most efficient way of doing this. Large modern installations typically have two parallel strings of 4 or 5 cyclones, with one attached to the kiln and the other attached to the precalciner chamber. A rotary kiln of 6 x 100 m makes 8,000–10,000 tonnes per day, using about 0.10-0.11 tonnes of coal fuel for every tonne of clinker produced. The kiln is dwarfed by the massive preheater tower and cooler in these installations. Such a kiln produces 3 million tonnes of clinker per year, and consumes 300,000 tonnes of coal. A diameter of 6 m appears to be the limit of size of rotary kilns, because the flexibility of the steel shell becomes unmanageable at or above this size, and the firebrick lining tends to fail when the kiln flexes.
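The annual figures quoted for a large precalciner kiln follow from simple arithmetic on the daily output, allowing for some downtime. A quick consistency check (not part of the original article), assuming roughly 330 operating days per year:

```python
# Quick consistency check of the annual clinker and coal figures quoted above.
daily_clinker_t = 9000        # tonnes/day, mid-range of the 8,000-10,000 quoted
operating_days = 330          # assumed: a few days' stoppage once or twice a year
coal_per_clinker = 0.105      # t coal per t clinker, mid-range of 0.10-0.11

annual_clinker = daily_clinker_t * operating_days      # ~3.0 million tonnes
annual_coal = annual_clinker * coal_per_clinker        # ~0.3 million tonnes

print(f"clinker: {annual_clinker/1e6:.1f} Mt/year, coal: {annual_coal/1e3:.0f} kt/year")
```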
A particular advantage of the air-separate precalciner is that a large proportion, or even 100%, of the alkali-laden kiln exhaust gas can be taken off as alkali bleed (see above). Because this accounts for only 40% of the system heat input, it can be done with lower heat wastage than in a simple suspension preheater bleed. Because of this, air-separate precalciners are now always prescribed when only high-alkali raw materials are available at a cement plant.
The accompanying figures show the movement towards the use of the more efficient processes in North America (for which data is readily available). But the average output per kiln in, for example, Thailand is twice that in North America.
Ancillary equipment
Essential equipment in addition to the kiln tube and the preheater are:
Cooler
Fuel mills
Fans
Exhaust gas cleaning equipment.
Coolers
Early systems used rotary coolers, which were rotating cylinders similar to the kiln, into which the hot clinker dropped. The combustion air was drawn up through the cooler as the clinker moved down, cascading through the air stream. In the 1920s, satellite coolers became common and remained in use until recently. These consist of a set (typically 7–9) of tubes attached to the kiln tube. They have the advantage that they are sealed to the kiln, and require no separate drive. From about 1930, the grate cooler was developed. This consists of a perforated grate through which cold air is blown, enclosed in a rectangular chamber. A bed of clinker up to 0.5 m deep moves along the grate. These coolers have two main advantages: (1) they cool the clinker rapidly, which is desirable from a clinker quality point of view; it prevents alite (Ca3SiO5), which is thermodynamically unstable below 1250 °C, from reverting to belite (Ca2SiO4) and free CaO (C) on slow cooling:
Ca3SiO5 → Ca2SiO4 + CaO (an exothermic reaction favored by the heat release),
(as alite is responsible for the early strength development in cement setting and hardening, the highest possible alite content in the clinker is desirable)
and (2) because they do not rotate, hot air can be ducted out of them for use in fuel drying, or for use as precalciner combustion air. The latter advantage means that they have become the only type used in modern systems.
Fuel mills
Fuel systems are divided into two categories:
Direct firing
Indirect firing
In direct firing, the fuel is fed at a controlled rate to the fuel mill, and the fine product is immediately blown into the kiln. The advantage of this system is that it is not necessary to store the hazardous ground fuel: it is used as soon as it is made. For this reason it was the system of choice for older kilns. A disadvantage is that the fuel mill has to run all the time: if it breaks down, the kiln has to stop if no backup system is available.
In indirect firing, the fuel is ground by an intermittently run mill, and the fine product is stored in a silo of sufficient size to supply the kiln through fuel mill stoppage periods. The fine fuel is metered out of the silo at a controlled rate and blown into the kiln. This method is now favoured for precalciner systems, because both the kiln and the precalciner can be fed with fuel from the same system. Special techniques are required to store the fine fuel safely, and coals with high volatiles are normally milled in an inert atmosphere (e.g. CO2).
Fans
A large volume of gases has to be moved through the kiln system. Particularly in suspension preheater systems, a high degree of suction has to be developed at the exit of the system to drive this. Fans are also used to force air through the cooler bed, and to propel the fuel into the kiln. Fans account for most of the electric power consumed in the system, typically amounting to 10–15 kW·h per tonne of clinker.
Gas cleaning
The exhaust gases from a modern kiln typically amount to 2 tonnes (or 1500 cubic metres at STP) per tonne of clinker made. The gases carry a large amount of dust—typically 30 grams per cubic metre. Environmental regulations specific to different countries require that this be reduced to (typically) 0.1 gram per cubic metre, so dust capture needs to be at least 99.7% efficient. Methods of capture include electrostatic precipitators and bag-filters. See also cement kiln emissions.
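The quoted 99.7% figure follows directly from the inlet and outlet dust loadings. A one-line check (not part of the original article):

```python
# Required dust collection efficiency from inlet and permitted outlet loadings.
inlet_g_per_m3 = 30.0      # typical dust loading in kiln exhaust
outlet_g_per_m3 = 0.1      # typical regulatory limit

efficiency = 1 - outlet_g_per_m3 / inlet_g_per_m3
print(f"required collection efficiency: {efficiency:.2%}")   # 99.67%
```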
Kiln fuels
Fuels that have been used for primary firing include coal, petroleum coke, heavy fuel oil, natural gas, landfill off-gas and oil refinery flare gas. Because the clinker is brought to its peak temperature mainly by radiant heat transfer, and a bright (i.e. high emissivity) and hot flame is essential for this, high-carbon fuels such as coal, which produce a luminous flame, are often preferred for kiln firing. Where it is cheap and readily available, natural gas is also sometimes used. However, because it produces a much less luminous flame, it tends to result in lower kiln output.
Alternative fuels
In addition to these primary fuels, various combustible waste materials have been fed to kilns. These alternative fuels (AF) include:
Used motor-vehicle tires
Sewage sludge
Agricultural waste
Landfill gas
Refuse-derived fuel (RDF)
Chemical and other hazardous waste
Cement kilns are an attractive way of disposing of hazardous materials, because of:
the temperatures in the kiln, which are much higher than in other combustion systems (e.g. incinerators),
the alkaline conditions in the kiln, afforded by the high-calcium rawmix, which can absorb acidic combustion products,
the ability of the clinker to absorb heavy metals into its structure.
A notable example is the use of scrapped motor-vehicle tires, which are very difficult to dispose of by other means. Whole tires are commonly introduced in the kiln by rolling them into the upper end of a preheater kiln, or by dropping them through a slot midway along a long wet kiln. In either case, the high gas temperatures (1000–1200 °C) cause almost instantaneous, complete and smokeless combustion of the tire. Alternatively, tires are chopped into 5–10 mm chips, in which form they can be injected into a precalciner combustion chamber. The steel and zinc in the tires become chemically incorporated into the clinker, partially replacing iron that must otherwise be fed as raw material.
A high level of monitoring of both the fuel and its combustion products is necessary to maintain safe operation.
For maximum kiln efficiency, high quality conventional fuels are the best choice. However, burning any fuels, especially hazardous waste materials, can result in toxic emissions. Thus, it is necessary for operators of cement kilns to closely monitor many process variables to ensure emissions are continuously minimized. In the U.S., cement kilns are regulated as a major source of air pollution by the EPA and must meet stringent air pollution control requirements.
Kiln control
The objective of kiln operation is to make clinker with the required chemical and physical properties, at the maximum rate that the size of kiln will allow, while meeting environmental standards, at the lowest possible operating cost. The kiln is very sensitive to control strategies, and a poorly run kiln can easily double cement plant operating costs.
Formation of the desired clinker minerals involves heating the rawmix through the temperature stages mentioned above. The finishing transformation that takes place in the hottest part of the kiln, under the flame, is the reaction of belite (Ca2SiO4) with calcium oxide to form alite (Ca3SiO5):
Ca2SiO4 + CaO → Ca3SiO5
Also abbreviated in the cement chemist notation (CCN) as:
C2S + C → C3S
(an endothermic reaction favored by a higher temperature)
Tricalcium silicate (Ca3SiO5, alite, C3S) is thermodynamically unstable below 1250 °C, but can be preserved in a metastable state at room temperature by fast cooling (quenching): on slow cooling it tends to revert to belite (Ca2SiO4) and CaO.
If the reaction is incomplete, excessive amounts of free calcium oxide remain in the clinker. Regular measurement of the free CaO content is used as a means of tracking the clinker quality. As a parameter in kiln control, free CaO data is somewhat ineffective because, even with fast automated sampling and analysis, the data, when it arrives, may be 10 minutes "out of date", and more immediate data must be used for minute-to-minute control.
Conversion of belite to alite requires partial melting, the resulting liquid being the solvent in which the reaction takes place. The amount of liquid, and hence the speed of the finishing reaction, is related to temperature. To meet the clinker quality objective, the most obvious control is that the clinker should reach a peak temperature such that the finishing reaction takes place to the required degree. A further reason to maintain constant liquid formation in the hot end of the kiln is that the sintering material forms a dam that prevents the cooler upstream feed from flooding out of the kiln. The feed in the calcining zone, because it is a powder evolving carbon dioxide, is extremely fluid. Cooling of the burning zone, and loss of unburned material into the cooler, is called "flushing", and in addition to causing lost production can cause massive damage.
However, for efficient operation, steady conditions need to be maintained throughout the whole kiln system. The feed at each stage must be at a temperature such that it is "ready" for processing in the next stage. To ensure this, the temperature of both feed and gas must be optimized and maintained at every point. The external controls available to achieve this are few:
Feed rate: this defines the kiln output
Rotary kiln speed: this controls the rate at which the feed moves through the kiln tube
Fuel injection rate: this controls the rate at which the "hot end" of the system is heated
Exhaust fan speed or power: this controls gas flow, and the rate at which heat is drawn from the "hot end" of the system to the "cold end"
In the case of precalciner kilns, further controls are available:
Independent control of fuel to kiln and calciner
Independent fan controls where there are multiple preheater strings.
The independent use of fan speed and fuel rate is constrained by the fact that there must always be sufficient oxygen available to burn the fuel, and in particular, to burn carbon to carbon dioxide. If carbon monoxide is formed, this represents a waste of fuel, and also indicates reducing conditions within the kiln which must be avoided at all costs since it causes destruction of the clinker mineral structure. For this reason, the exhaust gas is continually analyzed for O2, CO, NO and SO2.
The assessment of the clinker peak temperature has always been problematic. Contact temperature measurement is impossible because of the chemically aggressive and abrasive nature of the hot clinker, and optical methods such as infrared pyrometry are difficult because of the dust and fume-laden atmosphere in the burning zone. The traditional method of assessment was to view the bed of clinker and deduce the amount of liquid formation by experience. As more liquid forms, the clinker becomes stickier, and the bed of material climbs higher up the rising side of the kiln. It is usually also possible to assess the length of the zone of liquid formation, beyond which powdery "fresh" feed can be seen. Cameras, with or without infrared measurement capability, are mounted on the kiln hood to facilitate this. On many kilns, the same information can be inferred from the kiln motor power drawn, since sticky feed riding high on the kiln wall increases the eccentric turning load of the kiln. Further information can be obtained from the exhaust gas analyzers. The formation of NO from nitrogen and oxygen takes place only at high temperatures, and so the NO level gives an indication of the combined feed and flame temperature. SO2 is formed by thermal decomposition of calcium sulfate in the clinker, and so also gives an indication of clinker temperature. Modern computer control systems usually make a "calculated" temperature, using contributions from all these information sources, and then set about controlling it.
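The "calculated" burning-zone temperature described above is, in essence, a soft sensor that fuses several indirect signals. The sketch below (not part of the original article) is a deliberately simplified illustration: the function name, weights and scalings are hypothetical, and a real kiln control system uses far more elaborate, plant-specific models.

```python
# Highly simplified sketch of a burning-zone "calculated temperature" soft sensor.
# All weights and scalings below are hypothetical placeholders for illustration;
# real systems calibrate such models against plant-specific data.
def calculated_bz_temperature(pyrometer_c: float,
                              nox_mg_per_m3: float,
                              so2_mg_per_m3: float,
                              kiln_motor_kw: float) -> float:
    # Each signal is converted to a rough temperature estimate...
    est_from_pyrometer = pyrometer_c
    est_from_nox = 1300 + 0.4 * nox_mg_per_m3     # NO rises with flame/feed temperature
    est_from_so2 = 1300 + 0.3 * so2_mg_per_m3     # SO2 from CaSO4 decomposition
    est_from_motor = 1250 + 0.5 * kiln_motor_kw   # sticky feed raises motor load
    # ...and the estimates are blended with fixed weights.
    weights = (0.4, 0.3, 0.15, 0.15)
    estimates = (est_from_pyrometer, est_from_nox, est_from_so2, est_from_motor)
    return sum(w * e for w, e in zip(weights, estimates))

print(calculated_bz_temperature(1420, 350, 200, 380))   # ~1420 (degrees C)
```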
As an exercise in process control, kiln control is extremely challenging, because of multiple inter-related variables, non-linear responses, and variable process lags. Computer control systems were first tried in the early 1960s, initially with poor results due mainly to poor process measurements. Since 1990, complex high-level supervisory control systems have been standard on new installations. These operate using expert system strategies that maintain a "just sufficient" burning zone temperature, below which the kiln's operating condition will deteriorate catastrophically, thus requiring rapid-response, "knife-edge" control.
Cement kiln emissions
Emissions from cement works are determined both by continuous and discontinuous measuring methods, which are described in corresponding national guidelines and standards. Continuous measurement is primarily used for dust (particulates), NOx (nitrogen oxides) and SO2 (sulfur dioxide), while the remaining parameters relevant pursuant to ambient pollution legislation are usually determined discontinuously by individual measurements.
The following descriptions of emissions refer to modern kiln plants based on dry process technology.
Carbon dioxide
During the clinker burning process CO2 is emitted. CO2 accounts for the main share of these gases. CO2 emissions are both raw material-related and energy-related. Raw material-related emissions are produced during limestone decarbonation (CaCO3 → CaO + CO2) and account for about half of total CO2 emissions. Use of fuels with higher hydrogen content than coal and use of alternative fuels can reduce net greenhouse gas emissions.
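The raw-material-related share can be estimated from simple stoichiometry. The sketch below (not part of the original article) assumes a typical clinker CaO content of about 65% by mass, a figure supplied here purely for illustration.

```python
# Rough stoichiometric estimate of raw-material CO2 per tonne of clinker.
# Assumption for this sketch: clinker contains about 65% CaO by mass,
# essentially all of it derived from CaCO3 (CaCO3 -> CaO + CO2).
M_CAO = 56.08   # g/mol
M_CO2 = 44.01   # g/mol

cao_fraction = 0.65                       # tonnes CaO per tonne clinker (assumed)
co2_per_tonne = cao_fraction * M_CO2 / M_CAO

print(f"~{co2_per_tonne:.2f} t CO2 per tonne of clinker from decarbonation")
# ~0.51 t, consistent with "about half a ton per ton of clinker" later in the text
```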
Dust
To manufacture 1 t of Portland cement, about 1.5 to 1.7 t raw materials, 0.1 t coal and 1 t clinker (besides other cement constituents and sulfate agents) must be ground to dust fineness during production. In this process, the steps of raw material processing, fuel preparation, clinker burning and cement grinding constitute major emission sources for particulate components. While particulate emissions of up to 3,000 mg/m3 were measured leaving the stack of cement rotary kiln plants as recently as in the 1960s, legal limits are typically 30 mg/m3 today, and much lower levels are achievable.
Nitrogen oxides (NOx)
The clinker burning process is a high-temperature process resulting in the formation of nitrogen oxides (NOx). The amount formed is directly related to the main flame temperature (typically 1850–2000 °C). Nitrogen monoxide (NO) accounts for about 95%, and nitrogen dioxide (NO2) for about 5% of this compound present in the exhaust gas of rotary kiln plants. As most of the NO is converted to NO2 in the atmosphere, emissions are given as NO2 per cubic metre exhaust gas.
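Because the measured NO is reported as NO2, a molar-mass conversion is involved. The snippet below (not part of the original article) shows the arithmetic for a hypothetical measured concentration.

```python
# Convert a measured NO mass concentration to its NO2-equivalent value,
# since emission figures for rotary kilns are reported as NO2.
M_NO, M_NO2 = 30.01, 46.01          # g/mol

no_measured_mg_per_m3 = 500.0       # hypothetical measured NO concentration
no2_equivalent = no_measured_mg_per_m3 * M_NO2 / M_NO

print(f"{no2_equivalent:.0f} mg/m3 expressed as NO2")   # ~767 mg/m3
```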
Without reduction measures, process-related NOx contents in the exhaust gas of rotary kiln plants would in most cases considerably exceed the specifications of e.g. European legislation for waste burning plants (0.50 g/m3 for new plants and 0.80 g/m3 for existing plants). Reduction measures are aimed at smoothing and optimising plant operation. Technically, staged combustion and Selective Non-Catalytic NO Reduction (SNCR) are applied to cope with the emission limit values.
High process temperatures are required to convert the raw material mix to Portland cement clinker. Kiln charge temperatures in the sintering zone of rotary kilns are around 1450 °C. To reach these, flame temperatures of about 2000 °C are necessary. For reasons of clinker quality the burning process takes place under oxidising conditions, under which the partial oxidation of the molecular nitrogen in the combustion air resulting in the formation of nitrogen monoxide (NO) dominates. This reaction is also called thermal NO formation. At the lower temperatures prevailing in a precalciner, however, thermal NO formation is negligible: here, the nitrogen bound in the fuel can result in the formation of what is known as fuel-related NO. Staged combustion is used to reduce NO: calciner fuel is added with insufficient combustion air. This causes CO to form.
The CO then reduces the NO into molecular nitrogen:
2 CO + 2 NO → 2 CO2 + N2.
Hot tertiary air is then added to oxidize the remaining CO.
Sulfur dioxide (SO2)
Sulfur is input into the clinker burning process via raw materials and fuels. Depending on their origin, the raw materials may contain sulfur bound as sulfide or sulfate. Higher SO2 emissions by rotary kiln systems in the cement industry are often attributable to the sulfides contained in the raw material, which become oxidised to form SO2 at the temperatures between 370 °C and 420 °C prevailing in the kiln preheater. Most of the sulfides are pyrite or marcasite contained in the raw materials. Given the sulfide concentrations found e.g. in German raw material deposits, SO2 emission concentrations can total up to 1.2 g/m3 depending on the site location. In some cases, injected calcium hydroxide is used to lower SO2 emissions.
The sulfur input with the fuels is completely converted to SO2 during combustion in the rotary kiln. In the preheater and the kiln, this SO2 reacts to form alkali sulfates, which are bound in the clinker, provided that oxidizing conditions are maintained in the kiln.
Carbon monoxide (CO) and total carbon
The exhaust gas concentrations of CO and organically bound carbon are a yardstick for the burn-out rate of the fuels utilised in energy conversion plants, such as power stations. By contrast, the clinker burning process is a material conversion process that must always be operated with excess air for reasons of clinker quality. In concert with long residence times in the high-temperature range, this leads to complete fuel burn-up.
The emissions of CO and organically bound carbon during the clinker burning process are caused by the small quantities of organic constituents input via the natural raw materials (remnants of organisms and plants incorporated in the rock in the course of geological history). These are converted during kiln feed preheating and become oxidized to form CO and CO2. In this process, small portions of organic trace gases (total organic carbon) are formed as well. In case of the clinker burning process, the content of CO and organic trace gases in the clean gas therefore may not be directly related to combustion conditions. The amount of released CO2 is about half a ton per ton of clinker.
Dioxins and furans (PCDD/F)
Rotary kilns of the cement industry and classic incineration plants mainly differ in terms of the combustion conditions prevailing during clinker burning. Kiln feed and rotary kiln exhaust gases are conveyed in counter-flow and mixed thoroughly. Thus, temperature distribution and residence time in rotary kilns afford particularly favourable conditions for organic compounds, introduced either via fuels or derived from them, to be completely destroyed. For that reason, only very low concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (colloquially "dioxins and furans") can be found in the exhaust gas from cement rotary kilns.
Polychlorinated biphenyls (PCB)
The emission behaviour of PCB is comparable to that of dioxins and furans. PCB may be introduced into the process via alternative raw materials and fuels. The rotary kiln systems of the cement industry destroy these trace components virtually completely.
Polycyclic aromatic hydrocarbons (PAH)
PAHs (according to EPA 610) in the exhaust gas of rotary kilns usually appear at a distribution dominated by naphthalene, which accounts for a share of more than 90% by mass. The rotary kiln systems of the cement industry destroy virtually completely the PAHs input via fuels. Emissions are generated from organic constituents in the raw material.
Benzene, toluene, ethylbenzene, xylene (BTEX)
As a rule benzene, toluene, ethylbenzene and xylene are present in the exhaust gas of rotary kilns in a characteristic ratio. BTEX is formed during the thermal decomposition of organic raw material constituents in the preheater.
Gaseous inorganic chlorine compounds (HCl)
Chlorides are minor additional constituents contained in the raw materials and fuels of the clinker burning process. They are released when the fuels are burnt or the kiln feed is heated, and primarily react with the alkalis from the kiln feed to form alkali chlorides. These compounds, which are initially vaporous, condense on the kiln feed or the kiln dust at temperatures between 700 °C and 900 °C, subsequently re-enter the rotary kiln system and evaporate again. This cycle in the area between the rotary kiln and the preheater can result in coating formation. A bypass at the kiln inlet allows effective reduction of alkali chloride cycles and diminishes coating build-up problems. During the clinker burning process, gaseous inorganic chlorine compounds are either not emitted at all, or in very small quantities only.
Gaseous inorganic fluorine compounds (HF)
Of the fluorine present in rotary kilns, 90 to 95% is bound in the clinker, and the remainder is bound with dust in the form of calcium fluoride stable under the conditions of the burning process. Ultra-fine dust fractions that pass through the measuring gas filter may give the impression of low contents of gaseous fluorine compounds in rotary kiln systems of the cement industry.
Trace elements and heavy metals
The emission behaviour of the individual elements in the clinker burning process is determined by the input scenario, the behaviour in the plant and the precipitation efficiency of the dust collection device. The trace elements (e.g., heavy metals) introduced into the burning process via the raw materials and fuels may evaporate completely or partially in the hot zones of the preheater and/or rotary kiln depending on their volatility, react with the constituents present in the gas phase, and condense on the kiln feed in the cooler sections of the kiln system. Depending on the volatility and the operating conditions, this may result in the formation of cycles that are either restricted to the kiln and the preheater or include the combined drying and grinding plant as well. Trace elements from the fuels initially enter the combustion gases, but are emitted to an extremely small extent only owing to the retention capacity of the kiln and the preheater.
Under the conditions prevailing in the clinker burning process, non-volatile elements (e.g. arsenic, vanadium, nickel) are completely bound in the clinker.
Elements such as lead and cadmium preferentially react with the excess chlorides and sulfates in the section between the rotary kiln and the preheater, forming volatile compounds. Owing to the large surface area available, these compounds condense on the kiln feed particles at temperatures between 700 °C and 900 °C. In this way, the volatile elements accumulated in the kiln-preheater system are precipitated again in the cyclone preheater, remaining almost completely in the clinker.
Thallium (as the chloride) condenses in the upper zone of the cyclone preheater at temperatures between 450 °C and 500 °C. As a consequence, a cycle can be formed between preheater, raw material drying and exhaust gas purification.
Mercury and its compounds are not precipitated in the kiln and the preheater. They condense on the exhaust gas route due to the cooling of the gas and are partially adsorbed by the raw material particles. This portion is precipitated in the kiln exhaust gas filter.
Owing to trace element behaviour during the clinker burning process and the high precipitation efficiency of the dust collection devices, trace element emission concentrations are on a low overall level.
References
Further reading
Cement
Concrete
Kilns
Industrial furnaces | Cement kiln | [
"Chemistry",
"Engineering"
] | 8,901 | [
"Structural engineering",
"Chemical equipment",
"Metallurgical processes",
"Kilns",
"Industrial furnaces",
"Concrete"
] |
7,244,053 | https://en.wikipedia.org/wiki/TU%20Delft%20Faculty%20of%20Aerospace%20Engineering | The Faculty of Aerospace Engineering at the Delft University of Technology in the Netherlands is the merger of two interrelated disciplines, aeronautical engineering and astronautical engineering. Aeronautical engineering works specifically with aircraft or aeronautics. Astronautical engineering works specifically with spacecraft or astronautics. At the Faculty of Aerospace Engineering, both of the fields are directly addressed along with expansion into fields such as wind energy.
Description
The Faculty is one of the largest of the eight faculties at TU Delft and one of the largest faculties devoted entirely to aerospace engineering in northern Europe. It is the only institute carrying out research and education directly related to aerospace engineering in the Netherlands.
Through the years, the Faculty has responded to the increasing demands of the aerospace industry by further expanding its facilities and laboratories. Today the Faculty has a student body of approximately 2300 undergraduates and graduates, 237 members of academic staff and 181 PhD students.
Around 34% of the student population is from outside the Netherlands.
The TU Delft scored 15th in the world in the 2013 "Engineering and Technology" QS World University Rankings. In 2023 the TU Delft reached 3rd place in the "Mechanical, aerospace and Manufacturing Engineering" category of the QS World University Rankings. In 2013 this category was extended to "Mechanical, Aeronautical & Manufacturing Engineering" and the TU Delft jumped to the 18th position worldwide (6th place in Europe). In 2017, TU Delft ranked 4th worldwide, and 1st within Europe, in the subject of Aerospace Engineering in the Shanghai Ranking's Global Ranking of Academic Subjects. As of 2022, TU Delft ranks 8th worldwide, and 1st within Europe, in the subject of Aerospace Engineering in Shanghai Ranking's Global Ranking of Academic Subjects.
Research
Current areas of research include novel aerospace materials, Particle Image Velocimetry, CubeSat, Airborne Wind Energy and several others. Currently ten research chairs are grouped under four major departments:
Flow Physics and Technology (FPT)
Control and Operations (C&O)
Aerospace Structures and Materials (ASM)
Space Engineering (SpE)
Facilities
Extensive laboratory and testing facilities are used in research and teaching. The facilities include supersonic, hypersonic and subsonic wind-tunnels, a high-sensitivity navigation simulator, a structures and materials testing laboratory, and an ISO 8, class 100,000 clean room for the development of micro satellites. These facilities make it possible to conduct experiments in man-machine factors, flight control, structures and materials, aerodynamics, simulation, motion, navigation and spaceflight. The faculty owns and makes use of a Cessna Citation jet aeroplane which is a unique flying laboratory. The Citation is used in research as well as in education. Its modular interior enables the possibility to change quickly between research missions and educational flights with students.
Delft Aerospace Structures and Materials laboratory
The Delft Aerospace Structures and Materials laboratory is one of the largest facilities of the faculty of aerospace engineering with a footprint of over 3600 square meters. The laboratory is split into multiple smaller laboratories which allow for a wide variety of research and educational activities. Amongst others the facility consists of labs for the production, handling and testing of composites, facilities suitable for performing mechanical tests, a chemical lab, a micro UAV testing and development facility and work spaces for students to manufacture and test parts that they designed during their studies. The Delft Aerospace Structures and Materials laboratory is also the home of a large collection of aircraft and spacecraft (parts), including a retired F16 of the Dutch air force, which are used for educational purposes. Furthermore the laboratory also houses the Aircraft Manufacturing Laboratory, a facility in which graduate students of the faculty are building a fully functional RV12 aircraft.
Simona
The flight simulator Simona can be programmed to simulate any known aircraft, but also to mimic characteristics of a new design. The unique lightweight design allows extremely realistic motion. The simulator is used for research, but is also the subject of some M.Sc. thesis projects.
Clean room
The eighth floor of the faculty houses an ISO 8, class 100,000 cleanroom for the development of micro satellites. The facility is used both by staff and by graduate students from the space department of the faculty. The cleanroom is used for space related research and for the production of TU Delft's micro satellites, of which three are currently in orbit around the Earth: Delfi-C3, Delfi-n3Xt and Delfi-PQ. Contact with these satellites is maintained through a ground station housed on campus at the faculty of electrical engineering, computer science and mathematics.
National and international cooperation
The Faculty plays a significant role in national organisations such as the National Aerospace Laboratory, the Netherlands Agency for Aerospace Programmes and the Netherlands Organisation for Applied Scientific Research. Collaborations with numerous international and multinational industries through research groups abroad as well as in the Netherlands ensure that the Faculty remains at the forefront of the latest developments in the aerospace industry. The Faculty is a member of PEGASUS, the European network of prestigious aerospace universities. It also participates in exchanges of students and lecturers through the SOCRATES/ERASMUS programmes and agreements between several other partner universities. The faculty plays a major role in the IDEA League (TU Delft, ETH Zurich, RWTH Aachen, Chalmers institutes and universities).
References
Aeronautical engineering schools
Delft University of Technology | TU Delft Faculty of Aerospace Engineering | [
"Engineering"
] | 1,064 | [
"Aeronautical engineering schools",
"Engineering universities and colleges",
"Aeronautics organizations"
] |
7,246,977 | https://en.wikipedia.org/wiki/Quantum%20dot%20solar%20cell | A quantum dot solar cell (QDSC) is a solar cell design that uses quantum dots as the absorbing photovoltaic material. It attempts to replace bulk materials such as silicon, copper indium gallium selenide (CIGS) or cadmium telluride (CdTe). Quantum dots have bandgaps that are adjustable across a wide range of energy levels by changing their size. In bulk materials, the bandgap is fixed by the choice of material(s). This property makes quantum dots attractive for multi-junction solar cells, where a variety of materials are used to improve efficiency by harvesting multiple portions of the solar spectrum.
As of 2022, efficiency exceeds 18.1%. Quantum dot solar cells have the potential to increase the maximum attainable thermodynamic conversion efficiency of solar photon conversion up to about 66% by utilizing hot photogenerated carriers to produce higher photovoltages or higher photocurrents.
Background
Solar cell concepts
In a conventional solar cell light is absorbed by a semiconductor, producing an electron-hole (e-h) pair; the pair may be bound and is referred to as an exciton. This pair is separated by an internal electrochemical potential (present in p-n junctions or Schottky diodes) and the resulting flow of electrons and holes creates an electric current. The internal electrochemical potential is created by doping one part of the semiconductor interface with atoms that act as electron donors (n-type doping) and another with electron acceptors (p-type doping) that results in a p-n junction. The generation of an e-h pair requires that the photons have energy exceeding the bandgap of the material. Effectively, photons with energies lower than the bandgap do not get absorbed, while those that are higher can quickly (within about 10^−13 s) thermalize to the band edges, reducing output. The former limitation reduces current, while the thermalization reduces the voltage. As a result, semiconductor cells suffer a trade-off between voltage and current (which can be in part alleviated by using multiple junction implementations). The detailed balance calculation shows that this efficiency cannot exceed 33% if one uses a single material with an ideal bandgap of 1.34 eV for a solar cell.
The band gap (1.34 eV) of an ideal single-junction cell is close to that of silicon (1.1 eV), one of the many reasons that silicon dominates the market. However, silicon's efficiency is limited to about 30% (Shockley–Queisser limit). It is possible to improve on a single-junction cell by vertically stacking cells with different bandgaps – termed a "tandem" or "multi-junction" approach. The same analysis shows that a two layer cell should have one layer tuned to 1.64 eV and the other to 0.94 eV, providing a theoretical performance of 44%. A three-layer cell should be tuned to 1.83, 1.16 and 0.71 eV, with an efficiency of 48%. An "infinity-layer" cell would have a theoretical efficiency of 86%, with other thermodynamic loss mechanisms accounting for the rest.
Traditional (crystalline) silicon preparation methods do not lend themselves to this approach due to lack of bandgap tunability. Thin-films of amorphous silicon, which due to a relaxed requirement in crystal momentum preservation can achieve direct bandgaps and intermixing of carbon, can tune the bandgap, but other issues have prevented these from matching the performance of traditional cells. Most tandem-cell structures are based on higher performance semiconductors, notably indium gallium arsenide (InGaAs). Three-layer InGaAs/GaAs/InGaP cells (bandgaps 0.94/1.42/1.89 eV) hold the efficiency record of 42.3% for experimental examples.
However, the QDSCs suffer from weak absorption and the contribution of the light absorption at room temperature is marginal. This can be addressed by utilizing multibranched Au nanostars.
Quantum dots
Quantum dots are semiconducting particles that have been reduced below the size of the exciton Bohr radius; due to quantum mechanical considerations, the electron energies that can exist within them become finite, much like the energies in an atom. Quantum dots have been referred to as "artificial atoms". These energy levels are tuneable by changing their size, which in turn defines the bandgap. The dots can be grown over a range of sizes, allowing them to express a variety of bandgaps without changing the underlying material or construction techniques. In typical wet chemistry preparations, the tuning is accomplished by varying the synthesis duration or temperature.
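The size dependence of the confined energy levels is often estimated with an effective-mass (Brus-type) approximation. The sketch below (not part of the original article) uses assumed, PbS-like example parameters, omits the Coulomb correction term of the full Brus equation, and is meant only to illustrate the 1/R² confinement trend, which it tends to overestimate for small dots.

```python
# Illustrative effective-mass (Brus-type) estimate of quantum-dot bandgap vs. radius.
# Material parameters below are assumed, PbS-like example values; the Coulomb
# attraction term of the full Brus equation is omitted for simplicity.
import math

HBAR = 1.054571817e-34      # J*s
M_E = 9.1093837015e-31      # electron rest mass, kg
EV = 1.602176634e-19        # J per eV

E_GAP_BULK = 0.41           # assumed bulk bandgap, eV (roughly PbS)
M_EFF_ELECTRON = 0.09 * M_E # assumed effective masses (illustrative only)
M_EFF_HOLE = 0.09 * M_E

def confined_gap_ev(radius_nm: float) -> float:
    """Bulk gap plus the particle-in-a-sphere confinement term, in eV."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1/M_EFF_ELECTRON + 1/M_EFF_HOLE)
    return E_GAP_BULK + confinement / EV

for radius in (2.0, 3.0, 5.0):
    print(f"R = {radius} nm -> Eg ~ {confined_gap_ev(radius):.2f} eV")
```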
The ability to tune the bandgap makes quantum dots desirable for solar cells. For the sun's photon distribution spectrum, the Shockley-Queisser limit indicates that the maximum solar conversion efficiency occurs in a material with a band gap of 1.34 eV. However, materials with lower band gaps will be better suited to generate electricity from lower-energy photons (and vice versa). Single junction implementations using lead sulfide (PbS) colloidal quantum dots (CQD) have bandgaps that can be tuned into the far infrared, frequencies that are typically difficult to achieve with traditional solar cells. Half of the solar energy reaching the Earth is in the infrared, most in the near infrared region. A quantum dot solar cell makes infrared energy as accessible as any other.
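The relation between a bandgap and the longest wavelength a cell can absorb is E (eV) ≈ 1239.84 / λ (nm). The snippet below (not part of the original article) applies it to a few example bandgaps to show why low-gap quantum dots reach into the infrared; the 0.7 eV figure is simply an example value for a small-gap PbS dot.

```python
# Bandgap to absorption-edge wavelength: lambda (nm) = 1239.84 / E (eV).
def cutoff_wavelength_nm(bandgap_ev: float) -> float:
    return 1239.84 / bandgap_ev

examples = (("silicon", 1.1), ("ideal single junction", 1.34), ("small-gap PbS QD", 0.7))
for label, gap in examples:
    print(f"{label}: Eg = {gap} eV -> absorbs up to ~{cutoff_wavelength_nm(gap):.0f} nm")
# Silicon: ~1127 nm; 1.34 eV: ~925 nm; 0.7 eV: ~1771 nm (short-wave infrared)
```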
Moreover, CQD offer easy synthesis and preparation. While suspended in a colloidal liquid form they can be easily handled throughout production, with a fumehood as the most complex equipment needed. CQD are typically synthesized in small batches, but can be mass-produced. The dots can be distributed on a substrate by spin coating, either by hand or in an automated process. Large-scale production could use spray-on or roll-printing systems, dramatically reducing module construction costs.
Production
Early examples used costly molecular beam epitaxy processes. However, the lattice mismatch results in accumulation of strain and thus generation of defects, restricting the number of stacked layers. Droplet epitaxy growth technique shows its advantages on the fabrication of strain-free QDs. Alternatively, less expensive fabrication methods were later developed. These use wet chemistry (for CQD) and subsequent solution processing. Concentrated nanoparticle solutions are stabilized by long hydrocarbon ligands that keep the nanocrystals suspended in solution.
To create a solid, these solutions are cast down and the long stabilizing ligands are replaced with short-chain crosslinkers. Chemically engineering the nanocrystal surface can better passivate the nanocrystals and reduce detrimental trap states that would curtail device performance by means of carrier recombination. This approach produces an efficiency of 7.0%.
A more recent study uses different ligands for different functions by tuning their relative band alignment to improve the performance to 8.6%. The cells were solution-processed in air at room-temperature and exhibited air-stability for more than 150 days without encapsulation.
In 2014 the use of iodide as a ligand that does not bond to oxygen was introduced. This maintains stable n- and p-type layers, boosting the absorption efficiency, which produced power conversion efficiency up to 8%.
History
The idea of using quantum dots as a path to high efficiency was first noted by Burnham and Duggan in 1989. At the time, the science of quantum dots, or "wells" as they were known, was in its infancy and early examples were just becoming available.
DSSC efforts
Another modern cell design is the dye-sensitized solar cell, or DSSC. DSSCs use a sponge-like layer of titanium dioxide (TiO2) as the semiconductor valve as well as a mechanical support structure. During construction, the sponge is filled with an organic dye, typically ruthenium-polypyridine, which injects electrons into the titanium dioxide upon photoexcitation. This dye is relatively expensive, and ruthenium is a rare metal.
Using quantum dots as an alternative to molecular dyes was considered from the earliest days of DSSC research. The ability to tune the bandgap allowed the designer to select a wider variety of materials for other portions of the cell. Collaborating groups from the University of Toronto and École Polytechnique Fédérale de Lausanne developed a design based on a rear electrode directly in contact with a film of quantum dots, eliminating the electrolyte and forming a depleted heterojunction. These cells reached 7.0% efficiency, better than the best solid-state DSSC devices, but below those based on liquid electrolytes.
Multi-junction
Traditionally, multi-junction solar cells are made with a collection of multiple semiconductor materials. Because each material has a different band gap, each material's p-n junction will be optimized for a different incoming wavelength of light. Using multiple materials enables the absorbance of a broader range of wavelengths, which increases the cell's electrical conversion efficiency.
However, the use of multiple materials makes multi-junction solar cells too expensive for many commercial uses. Because the band gap of quantum dots can be tuned by adjusting the particle radius, multi-junction cells can be manufactured by incorporating quantum dot semiconductors of different sizes (and therefore different band gaps). Using the same material lowers manufacturing costs, and the enhanced absorption spectrum of quantum dots can be used to increase the short-circuit current and overall cell efficiency.
Cadmium telluride (CdTe) is used for cells that absorb multiple frequencies. A colloidal suspension of these crystals is spin-cast onto a substrate such as a thin glass slide, potted in a conductive polymer. These cells did not use quantum dots, but shared features with them, such as spin-casting and the use of a thin film conductor. At low production scales quantum dots are more expensive than mass-produced nanocrystals, but cadmium and tellurium are rare and highly toxic metals subject to price swings.
The Sargent Group used lead sulfide as an infrared-sensitive electron donor to produce then record-efficiency IR solar cells. Spin-casting may allow the construction of "tandem" cells at greatly reduced cost. The original cells used a gold substrate as an electrode, although nickel works just as well.
Hot-carrier capture
Another way to improve efficiency is to capture the extra energy in the electron when emitted from a single-bandgap material. In traditional materials like silicon, the distance from the emission site to the electrode where they are harvested is too far to allow this to occur; the electron will undergo many interactions with the crystal materials and lattice, giving up this extra energy as heat. Amorphous thin-film silicon was tried as an alternative, but the defects inherent to these materials overwhelmed their potential advantage. Modern thin-film cells remain generally less efficient than traditional silicon.
Nanostructured donors can be cast as uniform films that avoid the problems with defects. These would be subject to other issues inherent to quantum dots, notably resistivity issues and heat retention.
Multiple excitons
The Shockley-Queisser limit, which sets the maximum efficiency of a single-layer photovoltaic cell to be 33.7%, assumes that only one electron-hole pair (exciton) can be generated per incoming photon. Multiple exciton generation (MEG) is an exciton relaxation pathway which allows two or more excitons to be generated per incoming high energy photon. In traditional photovoltaics, this excess energy is lost to the bulk material as lattice vibrations (electron-phonon coupling). MEG occurs when this excess energy is transferred to excite additional electrons across the band gap, where they can contribute to the short-circuit current density.
Within quantum dots, quantum confinement increases coulombic interactions which drives the MEG process. This phenomenon also decreases the rate of electron-phonon coupling, which is the dominant method of exciton relaxation in bulk semiconductors. The phonon bottleneck slows the rate of hot carrier cooling, which allows excitons to pursue other pathways of relaxation; this allows MEG to dominate in quantum dot solar cells. The rate of MEG can be optimized by tailoring quantum dot ligand chemistry, as well as by changing the quantum dot material and geometry.
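In the idealized limit, the number of excitons a single photon can generate is bounded by the ratio of the photon energy to the band gap. The short sketch below illustrates that staircase bound; the band gap and photon energies are illustrative values, and real quantum dots fall short of this limit.

```python
# Idealized illustration of multiple exciton generation (MEG): in the limiting
# "staircase" picture, a photon of energy E can create at most floor(E / Eg)
# excitons in a material of band gap Eg. Real quantum dots fall short of this
# limit; the threshold and slope depend on material, size and ligand chemistry.

def max_excitons(photon_ev: float, gap_ev: float) -> int:
    """Upper bound on excitons per absorbed photon in the idealized picture."""
    return int(photon_ev // gap_ev)

gap = 0.7  # eV, e.g. a dot tuned into the infrared (illustrative value)
for photon in (0.8, 1.5, 2.2, 3.0):
    print(f"{photon:.1f} eV photon, {gap:.1f} eV gap -> "
          f"at most {max_excitons(photon, gap)} exciton(s)")
```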
In 2004, Los Alamos National Laboratory reported spectroscopic evidence that several excitons could be efficiently generated upon absorption of a single, energetic photon in a quantum dot. Capturing them would catch more of the energy in sunlight. In this approach, known as "carrier multiplication" (CM) or "multiple exciton generation" (MEG), the quantum dot is tuned to release multiple electron-hole pairs at a lower energy instead of one pair at high energy. This increases efficiency through increased photocurrent. LANL's dots were made from lead selenide.
In 2010, the University of Wyoming demonstrated similar performance using DSSC cells. Lead sulfide (PbS) dots demonstrated two-electron ejection when the incoming photons had about three times the bandgap energy.
In 2005, NREL demonstrated MEG in quantum dots, producing three electrons per photon and a theoretical efficiency of 65%. In 2007, they achieved a similar result in silicon.
Non-oxidizing
In 2014 a University of Toronto group manufactured and demonstrated a type of CQD n-type cell using PbS with special treatment so that it doesn't bind with oxygen. The cell achieved 8% efficiency, just shy of the current QD efficiency record. Such cells create the possibility of uncoated "spray-on" cells. However, these air-stable n-type CQD were actually fabricated in an oxygen-free environment.
Also in 2014, another research group at MIT demonstrated air-stable ZnO/PbS solar cells that were fabricated in air and achieved a certified 8.55% record efficiency (9.2% in lab) because they absorbed light well, while also transporting charge to collectors at the cell's edge. These cells show unprecedented air-stability for quantum dot solar cells that the performance remained unchanged for more than 150 days of storage in air.
Market Introduction
Commercial Providers
Although quantum dot solar cells have yet to be commercially viable on the mass scale, several small commercial providers have begun marketing quantum dot photovoltaic products. Investors and financial analysts have identified quantum dot photovoltaics as a key future technology for the solar industry.
Quantum Materials Corp. (QMC) and subsidiary Solterra Renewable Technologies are developing and manufacturing quantum dots and nanomaterials for use in solar energy and lighting applications. With their patented continuous flow production process for perovskite quantum dots, QMC hopes to lower the cost of quantum dot solar cell production in addition to applying their nanomaterials to other emerging industries.
QD Solar takes advantage of the tunable band gap of quantum dots to create multi-junction solar cells. By combining efficient silicon solar cells with infrared solar cells made from quantum dots, QD Solar aims to harvest more of the solar spectrum. QD Solar's inorganic quantum dots are processed with high-throughput and cost-effective technologies and are more light- and air- stable than polymeric nanomaterials.
UbiQD is developing photovoltaic windows using quantum dots as fluorophores. They have designed a luminescent solar concentrator (LSC) using near-infrared quantum dots which are cheaper and less toxic than traditional alternatives. UbiQD hopes to provide semi-transparent windows that convert passive buildings into energy generation units, while simultaneously reducing the heat gain of the building.
ML System S.A., a BIPV producer listed on Warsaw Stock Exchange intends to start volume production of its QuantumGlass product between 2020 and 2021.
Safety Concerns
Many heavy-metal quantum dot (lead/cadmium chalcogenides such as PbSe, CdSe) semiconductors can be cytotoxic and must be encapsulated in a stable polymer shell to prevent exposure. Non-toxic quantum dot materials such as AgBiS2 nanocrystals have been explored due to their safety and abundance; exploration with solar cells based with these materials have demonstrated comparable conversion efficiencies (> 9%) and short-circuit current densities (> 27 mA/cm2). UbiQD's CuInSe2−X quantum dot material is another example of a non-toxic semiconductor compound.
See also
Third-generation photovoltaic cell
Nanocrystalline silicon
Nanoparticle
Photoelectrochemical cell
Organic solar cell
References
External links
Science News Online, Quantum-Dots Leap: Tapping tiny crystals' inexplicable light-harvesting talent, June 3, 2006.
InformationWeek, Nanocrystal Discovery Has Solar Cell Potential, January 6, 2006.
Berkeley Lab, Berkeley Lab Air-stable Inorganic Nanocrystal Solar Cells Processed from Solution, 2005.
ScienceDaily, Sunny Future For Nanocrystal Solar Cells, October 23, 2005.
Solar cells
Quantum dots
Quantum electronics | Quantum dot solar cell | [
"Physics",
"Materials_science"
] | 3,522 | [
"Condensed matter physics",
"Nanotechnology",
"Quantum mechanics",
"Quantum electronics"
] |
9,414,239 | https://en.wikipedia.org/wiki/Paley%E2%80%93Zygmund%20inequality | In mathematics, the Paley–Zygmund inequality bounds the
probability that a positive random variable is small, in terms of
its first two moments. The inequality was
proved by Raymond Paley and Antoni Zygmund.
Theorem: If Z ≥ 0 is a random variable with finite variance, and if 0 ≤ θ ≤ 1, then

P(Z > θ E[Z]) ≥ (1 − θ)² (E[Z])² / E[Z²]
Proof: First,

E[Z] = E[Z 1{Z ≤ θ E[Z]}] + E[Z 1{Z > θ E[Z]}]

The first addend is at most θ E[Z], while the second is at most (E[Z²])^(1/2) (P(Z > θ E[Z]))^(1/2) by the Cauchy–Schwarz inequality. The desired inequality then follows. ∎
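A quick Monte Carlo check, shown below in Python for an exponentially distributed Z (an illustrative choice of distribution), confirms numerically that the empirical tail probability dominates the Paley–Zygmund bound.

```python
# Quick Monte Carlo sanity check of the Paley-Zygmund bound
#   P(Z > theta*E[Z]) >= (1 - theta)^2 * E[Z]^2 / E[Z^2]
# for an exponential random variable (illustrative choice of distribution).

import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200_000)]

mean = sum(samples) / len(samples)
second_moment = sum(z * z for z in samples) / len(samples)

for theta in (0.1, 0.5, 0.9):
    empirical = sum(z > theta * mean for z in samples) / len(samples)
    bound = (1 - theta) ** 2 * mean ** 2 / second_moment
    print(f"theta={theta}: P(Z > theta E[Z]) ~ {empirical:.3f} >= bound {bound:.3f}")
```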
Related inequalities
The Paley–Zygmund inequality can be written as

P(Z > θ E[Z]) ≥ (1 − θ)² (E[Z])² / (Var Z + (E[Z])²)
This can be improved. By the Cauchy–Schwarz inequality,

(1 − θ) E[Z] = E[Z − θ E[Z]] ≤ E[(Z − θ E[Z]) 1{Z > θ E[Z]}] ≤ (E[(Z − θ E[Z])²])^(1/2) (P(Z > θ E[Z]))^(1/2)

which, after rearranging, implies that

P(Z > θ E[Z]) ≥ (1 − θ)² (E[Z])² / ((1 − θ)² (E[Z])² + Var Z)
This inequality is sharp; equality is achieved if Z almost surely equals a positive constant.
In turn, this implies another convenient form (known as Cantelli's inequality) which is

P(Z > μ − a) ≥ a² / (a² + σ²)

where μ = E[Z] and σ² = Var Z.

This follows from the substitution a = (1 − θ) μ, valid when 0 ≤ θ ≤ 1 (equivalently, 0 ≤ a ≤ μ).
A strengthened form of the Paley-Zygmund inequality states that if Z is a non-negative random variable then
for every 0 ≤ θ ≤ 1.
This inequality follows by applying the usual Paley-Zygmund inequality to the conditional distribution of Z given that it is positive and noting that the various factors of cancel.
Both this inequality and the usual Paley–Zygmund inequality also admit L^p versions: if Z is a non-negative random variable and p > 1, then

P(Z > θ E[Z]) ≥ (1 − θ)^(p/(p−1)) (E[Z])^(p/(p−1)) / (E[Z^p])^(1/(p−1))

for every 0 ≤ θ ≤ 1. This follows by the same proof as above but using Hölder's inequality in place of the Cauchy–Schwarz inequality.
See also
Cantelli's inequality
Second moment method
Concentration inequality – a summary of tail-bounds on random variables.
References
Further reading
Probabilistic inequalities | Paley–Zygmund inequality | [
"Mathematics"
] | 345 | [
"Theorems in probability theory",
"Probabilistic inequalities",
"Inequalities (mathematics)"
] |
9,414,430 | https://en.wikipedia.org/wiki/Molecular%20sensor | A molecular sensor or chemosensor is a molecular structure (an organic or inorganic complex) that is used for sensing an analyte to produce a detectable change or signal. The action of a chemosensor relies on an interaction occurring at the molecular level and usually involves the continuous monitoring of the activity of a chemical species in a given matrix such as solution, air, blood, tissue, waste effluents, drinking water, etc. The application of chemosensors is referred to as chemosensing, which is a form of molecular recognition. All chemosensors are designed to contain a signalling moiety and a recognition moiety that are connected either directly to each other or through some kind of connector or spacer. The signalling is often based on optical electromagnetic radiation, giving rise to changes in either (or both) the ultraviolet and visible absorption or the emission properties of the sensors. Chemosensors may also be electrochemically based. Small molecule sensors are related to chemosensors. These are traditionally, however, considered as being structurally simple molecules and reflect the need to form chelating molecules for complexing ions in analytical chemistry. Chemosensors are synthetic analogues of biosensors, the difference being that biosensors incorporate biological receptors such as antibodies, aptamers or large biopolymers.
The term chemosensor describes a molecule of synthetic origin that signals the presence of matter or energy. A chemosensor can be considered a type of analytical device. Chemosensors are used in everyday life and have been applied to various areas such as chemistry, biochemistry, immunology, physiology, etc., and within medicine in general, such as in critical care analysis of blood samples. Chemosensors can be designed to detect/signal a single analyte or a mixture of such species in solution. This can be achieved through either a single measurement or through the use of continuous monitoring. The signalling moiety acts as a signal transducer, converting the information (the recognition event between the chemosensor and the analyte) into an optical response in a clear and reproducible manner.
Most commonly, the change (the signal) is observed by measuring the various physical properties of the chemosensor, such as the photo-physical properties seen in the absorption or emission, where different wavelengths of the electromagnetic spectrum are used. Consequently, most chemosensors are described as being either colorimetric (ground state) or luminescent (excited state, fluorescent or phosphorescent). Colorimetric chemosensors give rise to changes in their absorption properties (recorded using ultraviolet–visible spectroscopy), such as in absorption intensity and wavelength or in chirality (using circularly polarized light, and CD spectroscopy).
In contrast, in the case of luminescent chemosensors, the detection of an analyte using fluorescence spectroscopy gives rise to spectral changes in the fluorescence excitation or emission spectra, which are recorded using a fluorimeter. Such changes can also occur in other excited-state properties of the chemosensor, such as the excited-state lifetime(s), fluorescence quantum yield and polarisation. Fluorescence detection can be achieved at low concentration (below ~10−6 M) with most fluorescence spectrometers. This offers the advantage of using the sensors directly within fibre optic systems. Examples of the use of chemosensors include monitoring blood content, drug concentrations, etc., as well as environmental samples. Ions and molecules occur in abundance in biological and environmental systems, where they are involved in, or affect, biological and chemical processes. The development of molecular chemosensors as probes for such analytes is an annual multibillion-dollar business involving both small SMEs and large pharmaceutical and chemical companies.
The term chemosensor was first used to describe the combination of a molecular recognition unit with some form of reporter, so that the presence of a guest (also referred to as the analyte, cf. above) can be observed. Chemosensors are designed to contain a signalling moiety and a molecular recognition moiety (also called the binding site or receptor). Combining both of these components can be achieved in a number of ways, such as integrated, twisted or spaced. Chemosensors are considered a major component of the area of molecular diagnostics, within the discipline of supramolecular chemistry, which relies on molecular recognition. In terms of supramolecular chemistry, chemosensing is an example of host–guest chemistry, where the presence of a guest (the analyte) at the host site (the sensor) gives rise to a recognition event (i.e. sensing) that can be monitored in real time. This requires the binding of the analyte to the receptor, using various binding interactions such as hydrogen bonding, dipole and electrostatic interactions, the solvophobic effect, metal chelation, etc. The recognition/binding moiety is responsible for selectivity and efficient binding of the guest/analyte, which depend on ligand topology, characteristics of the target (ionic radius, size of molecule, chirality, charge, coordination number and hardness, etc.) and the nature of the solvent (pH, ionic strength, polarity). Chemosensors are normally developed to interact with the target species in a reversible manner, which is a prerequisite for continuous monitoring.
Optical signalling methods (such as fluorescence) are sensitive and selective, and provide a platform for real-time response and local observation. As chemosensors are designed to be both targeted (i.e. able to recognize and bind a specific species) and sensitive over various concentration ranges, they can be used to observe events in real time at the cellular level. As each molecule can give rise to a signal/readout that can be selectively measured, chemosensors are often said to be non-invasive and consequently have attracted significant attention for applications within biological matter, such as living cells. Many examples of chemosensors have been developed for observing cellular function and properties, including monitoring ion fluxes and transport within cells of Ca(II), Zn(II), Cu(II) and other physiologically important cations and anions, as well as biomolecules.
The design of ligands for the selective recognition of suitable guests such as metal cations and anions has been an important goal of supramolecular chemistry. The term supramolecular analytical chemistry has recently been coined to describe the application of molecular sensors to analytical chemistry. Small molecule sensors are related to chemosensors. However, these are traditionally considered as being structurally simple molecules and reflect the need to form chelating molecules for complexing ions in analytical chemistry.
History
While chemosensors were first defined in the 1980s, the first example of such a fluorescent chemosensor can be documented to be that of Friedrich Goppelsroder, who in 1867, developed a method for the determination/sensing of aluminium ion, using fluorescent ligand/chelate. This and subsequent work by others, gave birth to what is considered as modern analytical chemistry.
In the 1980s the development of chemosensing was advanced by Anthony W. Czarnik, A. Prasanna de Silva and Roger Tsien, as documented in the book Fluorescent Chemosensors for Ion and Molecule Recognition. They focused on the analysis of various types of luminescent probes for ions and molecules in solution and within biological cells, for real-time applications. Czarnik introduced the term 'chemosensor' to describe synthetic compounds capable of binding to analytes and providing a reversible signalling response. Tsien went on to study and develop this area of research further by developing and studying fluorescent proteins for applications in biology, such as green fluorescent protein (GFP), for which he was awarded the Nobel Prize in Chemistry in 2008. The work of Lynn Sousa in the late 1970s on the detection of alkali metal ions resulted in one of the first examples of the use of supramolecular chemistry in fluorescent sensing design, as did that of J.-M. Lehn, H. Bouas-Laurent and co-workers at Université Bordeaux I, France. PET sensing of transition metal ions was developed by L. Fabbrizzi, among others.
In chemosensing, the use of fluorophore connected to the receptor via a covalent spacer is now commonly referred to as fluorophores-spacer-receptor principle. In such systems, the sensing event is normally described as being due to changes in the photophysical properties of the chemosensor systems due to chelation induced enhanced fluorescence (CHEF), and photoinduced electron transfer (PET), mechanisms. In principle the two mechanisms are based on the same idea; the communication pathway is in the form of a through-space electron transfer from the electron rich receptors to the electron deficient fluorophores (through space). This results in fluorescence quenching (active electron transfer), and the emission from the chemosensor is 'switched off,' for both mechanisms in the absence of the analytes. However, upon forming a host–guest complex between the analyte and receptor, the communication pathway is broken and the fluorescence emission from the fluorophores is enhanced, or 'switched on'. In other words, the fluorescence intensity and quantum yield are enhanced upon analyte recognition.
The fluorophores-receptor can also be integrated within the chemosensor. This leads to changes in the emission wavelength, which often results in change in colour. When the sensing event results in the formation of a signal that is visible to the naked eye, such sensors are normally referred to as colorimetric. Many examples of colorimetric chemosensors for ions such as fluoride have been developed. A pH indicator can be consider as a colorimetric chemosensors for protons. Such sensors have been developed for other cations, as well as anions and larger organic and biological molecules, such as proteins and carbohydrates.
Design principles
Chemosensors are nano-sized molecules and, for application in vivo, need to be non-toxic. A chemosensor must be able to give a measurable signal in direct response to the analyte recognition. Hence, the signal response is directly related to the magnitude of the sensing event (and, in turn, the concentration of the analyte). The signalling moiety acts as a signal transducer, converting the recognition event into an optical response, while the recognition moiety is responsible for binding to the analyte in a selective and reversible manner. If the binding relies on irreversible chemical reactions, the indicators are described as fluorescent chemodosimeters, or fluorescent probes.
An active communication pathway has to be open between the two moieties for the sensor to operate. In colorimetric chemosensors, this usually requires the receptor and transducer to be structurally integrated. In luminescent/fluorescent chemosensing these two parts can be 'spaced' out or connected with a covalent spacer. For such fluorescent chemosensors the communication pathway is through electron transfer or energy transfer. The effectiveness of the host–guest recognition between the receptor and the analyte depends on several factors, including the design of the receptor moiety, whose objective is to match the structural nature of the target analyte as closely as possible, as well as the nature of the environment in which the sensing event occurs (e.g. the type of medium, i.e. blood, saliva, urine, etc. in biological samples). An extension of this approach is the development of molecular beacons, which are oligonucleotide hybridization probes based on fluorescence signalling, where the recognition or sensing event is communicated through enhancement or reduction in luminescence via the Förster resonance energy transfer (FRET) mechanism.
Fluorescent chemosensing
All chemosensors are designed to contain a signalling moiety and a recognition moiety. These are integrated directly or connected with a short covalent spacer depending on the mechanism involved in the signalling event. The chemosensor can be based on self-assembly of the sensor and the analyte. An example of such a design are the (indicator) displacement assays IDA. IDA sensor for anions such as citrate or phosphate ions have been developed whereby these ions can displace a fluorescent indicator in an indicator-host complex. The so-called UT taste chip (University of Texas) is a prototype electronic tongue and combines supramolecular chemistry with charge-coupled devices based on silicon wafers and immobilized receptor molecules.
Most examples of chemosensors for ions, such as those of alkali metal ions (Li+, Na+, K+, etc.) and alkali earth metal ions (Mg2+, Ca2+, etc.) are designed so that the excited state of the fluorophore component of the chemosensor is quenched by an electron transfer when the sensor is not complexed to these ions. No emission is thus observed, and the sensor is sometimes referred to as being 'switched off'. By complexing the sensor with a cation, the conditions for electron transfer are altered so that the quenching process is blocked, and fluorescence emission is 'switched on'. The probability of PET is governed by the overall free energy of the system (the Gibbs free energy ΔG). The driving force for PET is represented by ΔGET, the overall changes in the free energy for the electron transfer can be estimated using the Rehm-Weller equation. Electron transfer is distance dependent and decreases with increasing spacer length. Quenching by electron transfer between uncharged species leads to the formation of a radical ion pair. This is sometimes referred to as being the primary electron transfer. The possible electron transfer, which takes place after the PET, is referred to as the 'secondary electron transfer'. Chelation Enhancement Quenching (CHEQ) is the opposite effect seen for CHEF. In CHEQ, a reduction is observed in fluorescent emission of the chemosensor in comparison to that seen the originally for the 'free' sensor upon host–guest formation. As electron transfer is directional, such systems have also been described by the PET principle, being described as an enhancement in PET from the receptor to the fluorophore with enhanced degree of quenching. Such an effect has been demonstrated for the sensing of anions such as carboxylates and fluorides.
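As a rough illustration of the Rehm–Weller estimate mentioned above, the sketch below evaluates ΔG_ET ≈ e[E_ox(D) − E_red(A)] − E00 − w, with the Coulombic ion-pair term w approximated by e²/(4πε0 εs d). The electrochemical potentials, excited-state energy, solvent dielectric constant and donor–acceptor distance used are placeholder values, not data from any specific sensor.

```python
# Hedged sketch of a Rehm-Weller estimate for the PET driving force:
#   dG_ET ~ e*(E_ox(donor) - E_red(acceptor)) - E_00 - w
# where E_00 is the excited-state (0-0) energy of the fluorophore and w is a
# Coulombic ion-pair stabilization term, approximated here by e^2/(4*pi*eps0*eps_s*d).
# All numerical inputs below are placeholder values for illustration only.

import math

E_CH, EPS0 = 1.602176634e-19, 8.8541878128e-12  # elementary charge (C), vacuum permittivity (F/m)

def rehm_weller_dg_ev(e_ox_v, e_red_v, e00_ev, eps_s=37.5, d_nm=0.7):
    """Estimated free energy of photoinduced electron transfer, in eV."""
    w_ev = E_CH / (4 * math.pi * EPS0 * eps_s * d_nm * 1e-9)  # e/(4*pi*eps0*eps_s*d), in volts
    return (e_ox_v - e_red_v) - e00_ev - w_ev

# Placeholder amine-donor / anthracene-like fluorophore numbers (not from the text):
dg = rehm_weller_dg_ev(e_ox_v=1.0, e_red_v=-1.9, e00_ev=3.2)
print(f"Estimated dG_ET ~ {dg:+.2f} eV  (negative means PET, i.e. quenching, is feasible)")
```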
A large number of examples of chemosensors have been developed by scientists in physical, life and environmental sciences. The advantages of fluorescence emission being 'switched on' from 'off' upon the recognition event enabling the chemosensors to be compared to 'beacons in the night'. As the process is reversible, the emission enhancement is concentration dependent, only becoming 'saturated' at high concentrations (fully bound receptor). Hence, a correlation can be made between luminescence (intensity, quantum yield and in some cases lifetime) and the analyte concentration. Through careful design, and evaluation of the nature of the communication pathway, similar sensors based on the use of 'on-off' switching, or 'on-off-on,' or 'off-on-off' switching have been designed. The incorporation of chemosensors onto surfaces, such as quantum dots, nanoparticles, or into polymers is also a fast-growing area of research. Fluorescence sensing has also been combined with electrochemical techniques, conferring the advantages of both methods. Other examples of chemosensors that work on the principle of switching fluorescent emission either on or off include, Förster resonance energy transfer (FRET), internal charge transfer (ICT), twisted internal charge transfer (TICT), metal-based emission (such as in lanthanide luminescence), and excimer and exciplex emission and aggregation-induced emission (AIE). Chemosensors were one of the first examples of molecules that could result in switching between 'on' or 'off' states through the use of external stimuli and as such can be classed as synthetic molecular machine, to which the Nobel Prize in Chemistry was awarded to in 2016 to Jean-Pierre Sauvage, Fraser Stoddart and Bernard L. Feringa.
The application of these same design principles used in chemosensing also paved the way for the development of molecular logic gates mimics (MLGMs), being first proposed using PET based fluorescent chemosensors by de Silva and co-workers in 1993. Molecules have been made to operate in accordance with Boolean algebra that performs a logical operation based on one or more physical or chemical inputs. The field has advanced from the development of simple logic systems based on a single chemical input to molecules capable of carrying out complex and sequential operations.
Applications of Chemosensors
Chemosensors have been incorporated through surface functionalization onto particles and beads such as metal based nanoparticles, quantum dots, carbon-based particles and into soft materials such as polymers to facilitate their various applications.
Other receptors are sensitive not to a specific molecule but to a molecular compound class, these chemosensors are used in array- (or microarray) based sensors. Array-based sensors utilize analyte binding by the differential receptors. One example is the grouped analysis of several tannic acids that accumulate in ageing Scotch whisky in oak barrels. The grouped results demonstrated a correlation with the age but the individual components did not. A similar receptor can be used to analyze tartrates in wine.
The application of chemosensors in cellular imaging is particularly promising as most biological process are now monitored by using imaging technologies such as confocal fluorescence and super resolution microscopy, among others.
The compound saxitoxin is a neurotoxin found in shellfish and a chemical weapon. An experimental sensor for this compound is again based on PET. Interaction of saxitoxin with the sensor's crown ether moiety kills its PET process towards the fluorophore and fluorescence is switched from off to on. The unusual boron moiety causes the fluorescence to occur in the visible light part of the electromagnetic spectrum.
Chemosensors also have applications in chemistry, biochemistry, immunology, physiology, medicine and landmine detection. In 2003, Czarnik outlined a way to use chemosensors to track glucose levels in diabetic patients which, along with contributions from others, created an FDA-approved implantable continuous glucose monitor.
See also
Boronic acids in supramolecular chemistry: Saccharide recognition
Host–guest chemistry
Molecular machine
Molecular recognition
Microwave chemistry sensor
References
Supramolecular chemistry
Molecular machines | Molecular sensor | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 3,935 | [
"Machines",
"Physical systems",
"Molecular machines",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
9,417,783 | https://en.wikipedia.org/wiki/Phragmosome | The phragmosome is a sheet of cytoplasm forming in highly vacuolated plant cells in preparation for mitosis. In contrast to animal cells, plant cells often contain large central vacuoles occupying up to 90% of the total cell volume and pushing the nucleus against the cell wall. In order for mitosis to occur, the nucleus has to move into the center of the cell. This happens during G2 phase of the cell cycle.
Initially, cytoplasmic strands form that penetrate the central vacuole and provide pathways for nuclear migration. Actin filaments along these cytoplasmic strands pull the nucleus into the center of the cell. These cytoplasmic strands fuse into a transverse sheet of cytoplasm along the plane of future cell division, forming the phragmosome. Phragmosome formation is only clearly visible in dividing plant cells that are highly vacuolated.
Just before mitosis, a dense band of microtubules appears around the phragmosome and the future division plane just below the plasma membrane. This preprophase band marks the equatorial plane of the future mitotic spindle as well as the future fusion sites for the new cell plate with the existing cell wall. It disappears as soon as the nuclear envelope breaks down and the mitotic spindle forms.
When mitosis is completed, the cell plate and new cell wall form starting from the center along the plane occupied by the phragmosome. The cell plate grows outwards until it fuses with the cell wall of the dividing cell at exactly the spots predicted by the preprophase band.
References
Further reading
Cell cycle
Mitosis
Plant cells
Cell anatomy | Phragmosome | [
"Biology"
] | 345 | [
"Cell cycle",
"Cellular processes",
"Mitosis"
] |
9,418,169 | https://en.wikipedia.org/wiki/Piwi | Piwi (or PIWI) genes were identified as regulatory proteins responsible for stem cell and germ cell differentiation. Piwi is an abbreviation of P-element Induced WImpy testis in Drosophila. Piwi proteins are highly conserved RNA-binding proteins and are present in both plants and animals. Piwi proteins belong to the Argonaute/Piwi family and have been classified as nuclear proteins. Studies on Drosophila have also indicated that Piwi proteins have no slicer activity conferred by the presence of the Piwi domain. In addition, Piwi associates with heterochromatin protein 1, an epigenetic modifier, and piRNA-complementary sequences. These are indications of the role Piwi plays in epigenetic regulation. Piwi proteins are also thought to control the biogenesis of piRNA as many Piwi-like proteins contain slicer activity which would allow Piwi proteins to process precursor piRNA into mature piRNA.
Protein structure and function
The structure of several Piwi and Argonaute proteins (Ago) have been solved. Piwi proteins are RNA-binding proteins with 2 or 3 domains: The N-terminal PAZ domain binds the 3'-end of the guide RNA; the middle MID domain binds the 5'-phosphate of RNA; and the C-terminal PIWI domain acts as an RNase H endonuclease that can cleave RNA. The small RNA partners of Ago proteins are microRNAs (miRNAs). Ago proteins utilize miRNAs to silence genes post-transcriptionally or use small-interfering RNAs (siRNAs) in both transcription and post-transcription silencing mechanisms. Piwi proteins interact with piRNAs (28–33 nucleotides) that are longer than miRNAs and siRNAs (~20 nucleotides), suggesting that their functions are distinct from those of Ago proteins.
Human Piwi proteins
Presently there are four known human Piwi proteins—PIWI-like protein 1, PIWI-like protein 2, PIWI-like protein 3 and PIWI-like protein 4. Human Piwi proteins all contain two RNA binding domains, PAZ and Piwi. The four PIWI-like proteins have a spacious binding site within the PAZ domain which allows them to bind the bulky 2’-OCH3 at the 3’ end of piwi-interacting RNA.
One of the major human homologues, whose upregulation is implicated in the formation of tumours such as seminomas, is called hiwi (for human piwi).
Homologous proteins in mice have been called miwi (for mouse piwi).
Role in germline cells
PIWI proteins play a crucial role in fertility and germline development across animals and ciliates. Recently identified as a polar granule component, PIWI proteins appear to control germ cell formation so much so that in the absence of PIWI proteins there is a significant decrease in germ cell formation. Similar observations were made with the mouse homologs of PIWI, MILI, MIWI and MIWI2. These homologs are known to be present in spermatogenesis. Miwi is expressed in various stages of spermatocyte formation and spermatid elongation where Miwi2 is expressed in Sertoli cells. Mice deficient in either Mili or Miwi-2 have experienced spermatogenic stem cell arrest and those lacking Miwi-2 underwent a degradation of spermatogonia.
The effects of piwi proteins in human and mouse germlines seems to stem from their involvement in translation control as Piwi and the small noncoding RNA, piwi-interacting RNA (piRNA), have been known to co-fractionate polysomes. The piwi-piRNA pathway also induces heterochromatin formation at centromeres, thus affecting transcription. The piwi-piRNA pathway also appears to protect the genome. First observed in Drosophila, mutant piwi-piRNA pathways led to a direct increase in dsDNA breaks in ovarian germ cells. The role of the piwi-piRNA pathway in transposon silencing may be responsible for the reduction in dsDNA breaks in germ cells.
Role in RNA interference
The piwi domain is a protein domain found in piwi proteins and a large number of related nucleic acid-binding proteins, especially those that bind and cleave RNA. The function of the domain is double stranded-RNA-guided hydrolysis of single stranded-RNA that has been determined in the argonaute family of related proteins. Argonautes, the most well-studied family of nucleic-acid binding proteins, are RNase H-like enzymes that carry out the catalytic functions of the RNA-induced silencing complex (RISC). In the well-known cellular process of RNA interference, the argonaute protein in the RISC complex can bind both small interfering RNA (siRNA) generated from exogenous double-stranded RNA and microRNA (miRNA) generated from endogenous non-coding RNA, both produced by the ribonuclease Dicer, to form an RNA-RISC complex. This complex binds and cleaves complementary base pairing messenger RNA, destroying it and preventing its translation into protein. Crystallised piwi domains have a conserved basic binding site for the 5' end of bound RNA; in the case of argonaute proteins binding siRNA strands, the last unpaired nucleotide base of the siRNA is also stabilised by base stacking-interactions between the base and neighbouring tyrosine residues.
Recent evidence suggests that the functional role of piwi proteins in germ-line determination is due to their capacity to interact with miRNAs. Components of the miRNA pathway appear to be present in pole plasm and to play a key role in early development and morphogenesis of Drosophila melanogaster embryos, in which germ-line maintenance has been extensively studied.
piRNAs and transposon silencing
A novel class of longer-than-average miRNAs known as Piwi-interacting RNAs (piRNAs) has been defined in mammalian cells, about 26-31 nucleotides long as compared to the more typical miRNA or siRNA of about 21 nucleotides. These piRNAs are expressed mainly in spermatogenic cells in the testes of mammals. But studies have reported that piRNA expression can be found in the ovarian somatic cells and neuron cells in invertebrates, as well as in many other mammalian somatic cells. piRNAs have been identified in the genomes of mice, rats, and humans, with an unusual "clustered" genomic organization that may originate from repetitive regions of the genome such as retrotransposons or regions normally organized into heterochromatin, and which are normally derived exclusively from the antisense strand of double-stranded RNA. piRNAs have thus been classified as repeat-associated small interfering RNAs (rasiRNAs).
Although their biogenesis is not yet well understood, piRNAs and Piwi proteins are thought to form an endogenous system for silencing the expression of selfish genetic elements such as retrotransposons and thus preventing the gene products of such sequences from interfering with germ cell formation.
Footnotes
References
External links
– Piwi domain in SCOP
– Piwi domain in PROSITE
UNIPROT Piwi - Piwi domains
Proteins
Protein domains | Piwi | [
"Chemistry",
"Biology"
] | 1,555 | [
"Biomolecules by chemical classification",
"Protein classification",
"Protein domains",
"Molecular biology",
"Proteins"
] |
9,418,352 | https://en.wikipedia.org/wiki/Spongin | Spongin, a modified type of collagen protein, forms the fibrous skeleton of most organisms among the phylum Porifera, the sponges. It is secreted by sponge cells known as spongocytes.
Spongin gives a sponge its flexibility. True spongin is found only in members of the class Demospongiae.
Research directions
Use in the removal of phenolic compounds from wastewater
Researchers have found spongin to be useful in the photocatalytic degradation and removal of bisphenols (such as BPA) in wastewater. A heterogeneous catalyst consisting of a spongin scaffold for iron phthalocyanine (SFe) in conjunction with peroxide and UV radiation has been shown to remove phenolic wastes more quickly and efficiently than conventional methods. Other research using spongin scaffolds for the immobilization of Trametes versicolor Laccase has shown similar results in phenol degradation.
References
Marine biology
Collagens | Spongin | [
"Chemistry",
"Biology"
] | 215 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry",
"Marine biology"
] |
9,421,020 | https://en.wikipedia.org/wiki/Copper%20silicide | Copper silicide can refer to either Cu3Si or pentacopper silicide, Cu5Si.
Pentacopper silicide is a binary compound of silicon with copper. It is an intermetallic compound, meaning that it has properties intermediate between an ionic compound and an alloy. This solid crystalline material is a silvery solid that is insoluble in water. It forms upon heating mixtures of copper and silicon.
Applications
Copper silicide thin film is used for passivation of copper interconnects, where it serves to suppress diffusion and electromigration and serves as a diffusion barrier.
Copper silicides are invoked in the Direct process, the industrial route to organosilicon compounds. In this process, copper, in the form of its silicide, catalyses the addition of methyl chloride to silicon. An illustrative reaction affords the industrially useful dimethyldichlorosilane:
2 CH3Cl + Si → (CH3)2SiCl2
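A simple stoichiometric calculation for the reaction above shows the idealized yield; the 100% conversion assumed here is illustrative, since real direct-process selectivity to dimethyldichlorosilane is lower.

```python
# Simple stoichiometry for the reaction shown above:
#   2 CH3Cl + Si -> (CH3)2SiCl2
# Computes how much dimethyldichlorosilane one kilogram of silicon can give at
# 100% conversion (an idealized figure; real direct-process selectivity is lower).

M_SI   = 28.085    # g/mol, silicon
M_MECL = 50.49     # g/mol, CH3Cl
M_DMDC = 129.06    # g/mol, (CH3)2SiCl2

si_mass = 1000.0                    # g of silicon
mol_si = si_mass / M_SI
print(f"CH3Cl required : {2 * mol_si * M_MECL / 1000:.2f} kg")
print(f"(CH3)2SiCl2 out: {mol_si * M_DMDC / 1000:.2f} kg")
```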
References
Copper compounds
Transition metal silicides | Copper silicide | [
"Chemistry"
] | 212 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
9,421,082 | https://en.wikipedia.org/wiki/Order-5%20square%20tiling | In geometry, the order-5 square tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {4,5}.
Related polyhedra and tiling
This tiling is topologically related as a part of a sequence of regular polyhedra and tilings with vertex figure (4n).
This hyperbolic tiling is related to a semiregular infinite skew polyhedron with the same vertex figure in Euclidean 3-space.
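Because the tiling lives in the hyperbolic plane, each square tile has a definite area fixed by its angle defect. Assuming curvature −1, the Gauss–Bonnet formula for a regular p-gon with interior angles 2π/q gives area (p − 2)π − 2πp/q, evaluated below for {4,5} and its dual.

```python
# Area of one tile of the regular hyperbolic tiling {p, q} (curvature -1),
# from the Gauss-Bonnet angle-defect formula for a regular p-gon whose
# interior angles are 2*pi/q:  area = (p - 2)*pi - p*(2*pi/q).

import math

def tile_area(p: int, q: int) -> float:
    return (p - 2) * math.pi - p * (2 * math.pi / q)

print(f"{{4,5}} square tile area: {tile_area(4, 5):.4f}  (= 2*pi/5)")
print(f"{{5,4}} dual tile area  : {tile_area(5, 4):.4f}  (= pi/2)")
```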
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
Medial rhombic triacontahedron
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-5 tilings
Regular tilings
Square tilings | Order-5 square tiling | [
"Physics"
] | 228 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
9,421,390 | https://en.wikipedia.org/wiki/Snub%20triheptagonal%20tiling | In geometry, the order-3 snub heptagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles and one heptagon on each vertex. It has Schläfli symbol of sr{7,3}. The snub tetraheptagonal tiling is another related hyperbolic tiling with Schläfli symbol sr{7,4}.
Images
Drawn in chiral pairs, with edges missing between black triangles:
Dual tiling
The dual tiling is called an order-7-3 floret pentagonal tiling, and is related to the floret pentagonal tiling.
Related polyhedra and tilings
This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n=6, and hyperbolic plane for any higher n. The series can be considered to begin with n=2, with one set of faces degenerated into digons.
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Snub hexagonal tiling
Floret pentagonal tiling
Order-3 heptagonal tiling
Tilings of regular polygons
List of uniform planar tilings
Kagome lattice
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Chiral figures
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
Snub tilings | Snub triheptagonal tiling | [
"Physics",
"Chemistry"
] | 449 | [
"Snub tilings",
"Semiregular tilings",
"Isogonal tilings",
"Tessellation",
"Stereochemistry",
"Chirality",
"Hyperbolic tilings",
"Stereochemistry stubs",
"Chiral figures",
"Symmetry"
] |
9,421,904 | https://en.wikipedia.org/wiki/Stream%20thrust%20averaging | In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It makes the assumptions that the flow is mixed adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average as a simple average would violate the second law of thermodynamics.
Equations for a perfect gas
Stream thrust: F = ∫ (ρ V · dA) V + ∫ p dA
Mass flow: ṁ = ∫ ρ V · dA
Stagnation enthalpy: H = (1/ṁ) ∫ (h + |V|²/2) ρ V · dA
Solutions
Solving these relations for the one-dimensional, stream-thrust-averaged velocity yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied.
Second law of thermodynamics: Δs = cp ln(T/T1) − R ln(p/p1)
The reference values in this relation are unknown and may be dropped from the formulation. The absolute value of the entropy is not necessary, only that the change in entropy is positive.
One possible, non-physical solution for the stream-thrust-averaged velocity yields a negative entropy change. Another method of determining the proper solution is to take a simple average of the velocity and determine which of the two roots is closer to it.
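A minimal numerical sketch of the solution step is given below. It assumes the common perfect-gas reduction in which F/ṁ = U + RT/U and H = cp T + U²/2 combine into a single quadratic in the averaged velocity U; the input values of stream thrust per unit mass flow and stagnation enthalpy are illustrative only.

```python
# Sketch: solve the perfect-gas stream-thrust-averaging quadratic
#   U^2*(1/2 - g/(g-1)) + U*(g/(g-1))*(F/mdot) - H = 0
# (obtained from F/mdot = U + R*T/U and H = cp*T + U^2/2) and inspect both roots.
# Input numbers are illustrative only; the physical root must still be chosen
# by the entropy (or simple-average) test described above.

import math

def stream_thrust_velocity(F, mdot, H, gamma=1.4):
    """Return both roots (m/s) of the averaging quadratic, smaller (typically subsonic) root first."""
    g = gamma / (gamma - 1.0)
    a, b, c = 0.5 - g, g * F / mdot, -H
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted(((-b + disc) / (2 * a), (-b - disc) / (2 * a)))

# Illustrative duct exit state: F/mdot = 1000 m/s equivalent, H = 1.0 MJ/kg
roots = stream_thrust_velocity(F=1000.0, mdot=1.0, H=1.0e6)
print("candidate averaged velocities [m/s]:", [round(r, 1) for r in roots])
```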
References
Equations of fluid dynamics
Fluid dynamics | Stream thrust averaging | [
"Physics",
"Chemistry",
"Engineering"
] | 272 | [
"Equations of fluid dynamics",
"Equations of physics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
16,090,662 | https://en.wikipedia.org/wiki/K-casein | Κ-casein, or kappa casein, is a mammalian milk protein involved in several important physiological processes. Chymosin (found in rennet) splits K-casein into an insoluble peptide (para kappa-casein) and water-soluble glycomacropeptide (GMP). GMP is responsible for an increased efficiency of digestion, prevention of neonate hypersensitivity to ingested proteins, and inhibition of gastric pathogens. The human gene for κ-casein is CSN3.
Structure
Caseins are a family of phosphoproteins (αS1, αS2, β, κ) that account for nearly 80% of bovine milk proteins and that form soluble aggregates known as "casein micelles", in which κ-casein molecules stabilize the structure. There are several models that account for the spatial conformation of casein in the micelles. One of them proposes that the micellar nucleus is formed by several submicelles, the periphery consisting of microvillosities of κ-casein. Another model suggests that the nucleus is formed by casein-interlinked fibrils. Finally, the most recent model proposes a double link among the caseins for gelling to take place. All 3 models consider micelles as colloidal particles formed by casein aggregates wrapped up in soluble κ-casein molecules.
Milk-clotting proteases act on the soluble portion, κ-casein, thus originating an unstable micellar state that results in clot formation.
Milk clotting
Chymosin (EC 3.4.23.4) is an aspartic protease that specifically hydrolyzes the peptide bond in Phe105-Met106 of κ-casein and is considered to be the most efficient protease for the cheesemaking industry. However, there are milk-clotting proteases able to cleave other peptide bonds in the κ-casein chain, such as the endothiapepsin produced by Endothia parasitica. There are also several milk-clotting proteases that, being able to cleave the Phe105-Met106 bond in the κ-casein molecule, also cleave other peptide bonds in other caseins, such as those produced by Cynara cardunculus or even bovine chymosin. This allows the manufacture of different cheeses with a variety of rheological and organoleptic properties.
The milk-clotting process consists of three main phases:
Enzymatic degradation of κ-casein.
Micellar flocculation.
Gel formation.
Each step follows a different kinetic pattern, the limiting step in milk-clotting being the degradation rate of κ-casein. The kinetic pattern of the second step of the milk-clotting process is influenced by the cooperative nature of micellar flocculation, whereas the rheological properties of the gel formed depend on the type of action of the proteases, the type of milk, and the patterns of casein proteolysis. The overall process is influenced by several different factors, such as pH or temperature.
The conventional way of quantifying a given milk-clotting enzyme employs milk as the substrate and determines the time elapsed before the appearance of milk clots. However, milk clotting may take place without the participation of enzymes because of variations in physicochemical factors, such as low pH or high temperature. Consequently, this may lead to confusing and irreproducible results, particularly when the enzymes have low activity. At the same time, the classical method is not specific enough, in terms of setting the precise onset of milk gelation, such that the determination of the enzymatic units involved becomes difficult and unclear. Furthermore, although it has been reported that κ-casein hydrolysis follows typical Michaelis–Menten kinetics, it is difficult to determine with the classic milk-clotting assay.
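For reference, the Michaelis–Menten rate law mentioned above has the form v = Vmax[S]/(Km + [S]); the short sketch below tabulates its saturating behaviour using placeholder values of Vmax and Km rather than measured constants for κ-casein hydrolysis.

```python
# Illustration of the Michaelis-Menten rate law mentioned above,
#   v = Vmax * [S] / (Km + [S]),
# for kappa-casein hydrolysis. Vmax and Km below are placeholder values chosen
# only to show the saturating behaviour, not measured constants.

def mm_rate(s, vmax=1.0, km=0.5):
    """Initial hydrolysis rate at substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

for s in (0.1, 0.5, 1.0, 5.0, 20.0):
    print(f"[S] = {s:5.1f}  ->  v/Vmax = {mm_rate(s):.2f}")
```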
To overcome this, several alternative methods have been proposed, such as the determination of halo diameter in agar-gelified milk, colorimetric measurement, or determination of the rate of degradation of casein previously labeled with either a radioactive tracer or a fluorochrome compound. All these methods use casein as the substrate to quantify proteolytic or milk-clotting activities.
FTC-Κ-casein assay
Κ-casein is labeled with the fluorochrome fluorescein isothiocyanate (FITC) to yield the fluorescein thiocarbamoyl (FTC) derivative. This substrate is used to determine the milk-clotting activity of proteases.
FTC-κ-casein method affords accurate and precise determinations of κ-caseinolytic degradation, the first step in the milk-clotting process. This method is the result of a modification to the one described by S.S. Twining (1984). The main modification was substituting the substrate previously used (casein) by κ-casein labeled with the fluorochrome fluorescein isothiocyanate (FITC) to yield the fluorescein thiocarbamoyl (FTC) derivative. This variation allows quantification of the κ-casein molecules degraded in a more precise and specific way, detecting only those enzymes able to degrade such molecules. The method described by Twining (1984), however, was designed to detect the proteolytic activity of a considerably larger variety of enzymes.
FTC-κ-casein allows the detection of different types of proteases at levels when no milk clotting is yet apparent, demonstrating its higher sensitivity over currently used assay procedures.
Therefore, the method may find application as an indicator during the purification or characterization of new milk-clotting enzymes.
Notes
References
External links
InterPro: IPR000117 Kappa casein
Fluorescein Thiocarbamoyl-Kappa-Casein Assay for the Specific Testing of Milk-Clotting Proteases
Biotechnology and Microbiology
Proteins
Laboratory techniques
Biochemistry | K-casein | [
"Chemistry",
"Biology"
] | 1,289 | [
"Biomolecules by chemical classification",
"nan",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
16,090,917 | https://en.wikipedia.org/wiki/European%20Technology%20Platform%20Nanomedicine | The European Technology Platform on Nanomedicine (ETP Nanomedicine) is a European Technology Platform initiative to improve the competitive situation of the European Union in the field of nanomedicine, the application of nanotechnology to medicine.
Overview
An important initiative, led by industry, has been set up together with the European Commission. A group of 53 European stakeholders, composed of industrial and academic experts, has established a European Technology Platform on nanomedicine. The first task of this high-level group was to write a vision document for this highly future-oriented area of nanotechnology-based healthcare, in which experts describe an extrapolation of needs and possibilities until 2020. At the beginning of 2006 this Platform was opened to wider participation (December 2006: 150 member organisations) and delivered a so-called Strategic Research Agenda, presenting a well-elaborated common European way of working together on the healthcare of the future and trying to match the high expectations that nanomedicine has raised so far.
Policy Objectives
Establish a clear strategic vision in the area resulting in a Strategic Research Agenda.
Decrease fragmentation in nano-medical research.
Mobilise additional public and private investment.
Identify priority areas.
Boost innovation in nanobiotechnologies for medical use.
Topics
Three key priorities have been confirmed by the stakeholders:
Nanotechnology-based diagnostics including imaging.
Targeted drug delivery and release.
Regenerative medicine.
Dissemination of knowledge, regulatory and IPR issues, standardisation, ethical, safety, environmental and toxicity concerns as well as public perception in general and the input from other stakeholders like insurance companies or patient organisations play an important role.
See also
European Technology Platform
Joint Technology Initiative
References
Vision document
Strategic Research Agenda
CERTH European Technology Platform Nanomedicine
Hyperion European Technology Platform Nanomedicine
External links
European Technology Platform on Nanomedicine
European Union and science and technology
Information technology organizations based in Europe
Science and technology in Europe | European Technology Platform Nanomedicine | [
"Materials_science"
] | 388 | [
"Nanomedicine",
"Nanotechnology"
] |
16,094,518 | https://en.wikipedia.org/wiki/Gauss%27s%20law%20for%20magnetism | In physics, Gauss's law for magnetism is one of the four Maxwell's equations that underlie classical electrodynamics. It states that the magnetic field has divergence equal to zero, in other words, that it is a solenoidal vector field. It is equivalent to the statement that magnetic monopoles do not exist. Rather than "magnetic charges", the basic entity for magnetism is the magnetic dipole. (If monopoles were ever found, the law would have to be modified, as elaborated below.)
Gauss's law for magnetism can be written in two forms, a differential form and an integral form. These forms are equivalent due to the divergence theorem.
The name "Gauss's law for magnetism" is not universally used. The law is also called "Absence of free magnetic poles". It is also referred to as the "transversality requirement" because for plane waves it requires that the polarization be transverse to the direction of propagation.
Differential form
The differential form for Gauss's law for magnetism is:

∇ · B = 0

where ∇ · denotes divergence, and B is the magnetic field.
Integral form
The integral form of Gauss's law for magnetism states:

Φ_B = ∮_S B · dA = 0

where S is any closed surface, Φ_B is the magnetic flux through S, and dA is a vector whose magnitude is the area of an infinitesimal piece of the surface S, and whose direction is the outward-pointing surface normal (see surface integral for more details).
Gauss's law for magnetism thus states that the net magnetic flux through a closed surface equals zero.
The integral and differential forms of Gauss's law for magnetism are mathematically equivalent, due to the divergence theorem. That said, one or the other might be more convenient to use in a particular computation.
The law in this form states that for each volume element in space, there are exactly the same number of "magnetic field lines" entering and exiting the volume. No total "magnetic charge" can build up in any point in space. For example, the south pole of the magnet is exactly as strong as the north pole, and free-floating south poles without accompanying north poles (magnetic monopoles) are not allowed. In contrast, this is not true for other fields such as electric fields or gravitational fields, where total electric charge or mass can build up in a volume of space.
Vector potential
Due to the Helmholtz decomposition theorem, Gauss's law for magnetism is equivalent to the following statement: there exists a vector field A such that

B = ∇ × A

The vector field A is called the magnetic vector potential.
Note that there is more than one possible A which satisfies this equation for a given B field. In fact, there are infinitely many: any field of the form ∇φ can be added onto A to get an alternative choice for A, by the identity (see Vector calculus identities):

∇ × (A + ∇φ) = ∇ × A + ∇ × (∇φ) = ∇ × A

since the curl of a gradient is the zero vector field:

∇ × (∇φ) = 0

This arbitrariness in A is called gauge freedom.
Field lines
The magnetic field B can be depicted via field lines (also called flux lines) – that is, a set of curves whose direction corresponds to the direction of B, and whose areal density is proportional to the magnitude of B. Gauss's law for magnetism is equivalent to the statement that the field lines have neither a beginning nor an end: Each one either forms a closed loop, winds around forever without ever quite joining back up to itself exactly, or extends to infinity.
Incorporating magnetic monopoles
If magnetic monopoles were to be discovered, then Gauss's law for magnetism would state that the divergence of B would be proportional to the magnetic charge density ρ_m, analogous to Gauss's law for the electric field. For zero net magnetic charge density (ρ_m = 0), the original form of Gauss's magnetism law is the result.
The modified formula for use with the SI is not standard and depends on the choice of defining equation for the magnetic charge and current; in one variation, magnetic charge has units of webers, in another it has units of ampere-meters. One form of the modified law is

∇ · B = μ₀ ρ_m

where μ₀ is the vacuum permeability.
So far, despite extensive searches, no magnetic monopole has been conclusively found, although certain papers report observations matching that behavior.
History
The idea of the nonexistence of magnetic monopoles originated in 1269 with Petrus Peregrinus de Maricourt. His work heavily influenced William Gilbert, whose 1600 work De Magnete spread the idea further. In the early 1800s Michael Faraday reintroduced this law, and it subsequently made its way into James Clerk Maxwell's electromagnetic field equations.
Numerical computation
In numerical computation, the numerical solution may not satisfy Gauss's law for magnetism due to the discretization errors of the numerical methods. However, in many cases, e.g., for magnetohydrodynamics, it is important to preserve Gauss's law for magnetism precisely (up to the machine precision). Violation of Gauss's law for magnetism on the discrete level will introduce a strong non-physical force. In view of energy conservation, violation of this condition leads to a non-conservative energy integral, and the error is proportional to the divergence of the magnetic field.
There are various ways to preserve Gauss's law for magnetism in numerical methods, including the divergence-cleaning techniques, the constrained transport method, potential-based formulations and de Rham complex based finite element methods where stable and structure-preserving algorithms are constructed on unstructured meshes with finite element differential forms.
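Whether a discretized field satisfies the law can be checked directly on gridded data. The following is a minimal sketch (assuming NumPy, a uniform Cartesian grid, and central differences; the example field and grid spacing are illustrative, not taken from any particular solver) that evaluates the discrete divergence of a sampled magnetic field so that violations at the discrete level can be quantified.

```python
import numpy as np

def discrete_divergence(Bx, By, Bz, dx, dy, dz):
    """Central-difference divergence of a field sampled on a uniform grid."""
    dBx_dx = np.gradient(Bx, dx, axis=0)
    dBy_dy = np.gradient(By, dy, axis=1)
    dBz_dz = np.gradient(Bz, dz, axis=2)
    return dBx_dx + dBy_dy + dBz_dz

# Illustrative solenoidal field B = (y, -x, 0), which has zero divergence analytically.
n = 32
h = 2.0 / (n - 1)
x, y, z = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                      np.linspace(-1, 1, n), indexing="ij")
Bx, By, Bz = y, -x, np.zeros_like(x)
div = discrete_divergence(Bx, By, Bz, h, h, h)
print(np.max(np.abs(div)))  # close to zero, up to discretization and rounding error
```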
See also
Magnetic moment
Vector calculus
Integral
Flux
Gaussian surface
Faraday's law of induction
Ampère's circuital law
Lorenz gauge condition
References
External links
Magnetism
Magnetic monopoles
Maxwell's equations
Magnetism | Gauss's law for magnetism | [
"Physics",
"Astronomy"
] | 1,169 | [
"Astronomical hypotheses",
"Equations of physics",
"Unsolved problems in physics",
"Magnetic monopoles",
"Maxwell's equations"
] |
16,094,972 | https://en.wikipedia.org/wiki/Annual%20Review%20of%20Materials%20Research | The Annual Review of Materials Research is a peer-reviewed journal that publishes review articles about materials science. It has been published by the nonprofit Annual Reviews since 1971, when it was first released under the title the Annual Review of Materials Science. Four people have served as editors, with the current editor Ram Seshadri stepping into the position in 2024. It has an impact factor of 10.6 as of 2024. As of 2023, it is being published as open access, under the Subscribe to Open model.
History
The Annual Review of Materials Science was first published in 1971 by the nonprofit publisher Annual Reviews, making it their sixteenth journal. Its first editor was Robert Huggins.
In 2001, its name was changed to the current form, the Annual Review of Materials Research. The name change was intended "to better reflect the broad appeal that materials research has for so many diverse groups of scientists and not simply those who identify themselves with the academic discipline of materials science." As of 2020, it was published both in print and electronically.
It defines its scope as covering significant developments in the field of materials science, including methodologies for studying materials and materials phenomena. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 10.6, ranking it forty-ninth of 438 titles in the category "Materials Science, Multidisciplinary". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, INSPEC, and Academic Search, among others.
Editorial processes
The Annual Review of Materials Research is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
Editors of volumes
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
Robert Huggins (1971–1993)
Elton N. Kaufmann (1994–2000)
David R. Clarke (2001–2024)
Ram Seshadri (2025–present)
Current editorial committee
As of 2024, the editorial committee consists of the editor and the following members:
Don Lipkin
Vikram Jayaram
Wayne D. Kaplan
Christine Luscombe
Yang Shen
See also
List of materials science journals
References
Materials Research
Academic journals established in 1971
Materials science journals
English-language journals
Annual journals | Annual Review of Materials Research | [
"Materials_science",
"Engineering"
] | 623 | [
"Materials science journals",
"Materials science"
] |
16,095,000 | https://en.wikipedia.org/wiki/Annual%20Review%20of%20Plant%20Biology | Annual Review of Plant Biology is a peer-reviewed scientific journal published by Annual Reviews. It was first published in 1950 as the Annual Review of Plant Physiology. Sabeeha Merchant has been the editor since 2005, making her the longest-serving editor in the journal's history after Winslow Briggs (1973–1993). Journal Citation Reports lists the journal's 2023 impact factor as 21.3, ranking it first of 265 journal titles in the category "Plant Sciences". As of 2023, it is being published as open access, under the Subscribe to Open model.
History
Beginning in 1947, the publishing nonprofit Annual Reviews began asking plant physiologists if it would be useful to have an annual journal that published review articles summarizing the recent literature in the field. Responses indicated that this would be very favorable, and the Annual Review of Plant Physiology published its first volume in 1950. Its founding editor was Daniel I. Arnon. It was thus the seventh journal title to be published by Annual Reviews. Its scope was somewhat reduced by the publication of the Annual Review of Phytopathology, first released in 1963. In 1988, its name changed to the Annual Review of Plant Physiology and Plant Molecular Biology. In the 1990s, it began having color illustrations and was published online for the first time. Its name was changed once again in 2002 to its current version, the Annual Review of Plant Biology. As of 2020, it was published both in print and electronically.
The journal covers developments in the field of plant biology, including cell biology, genetics, genomics, molecular biology, cell differentiation, tissue, acclimation (including adaptation), and methods. The journal is abstracted and indexed in the following databases.
Chemical Abstracts Service
MEDLINE/PubMed
Science Citation Index
BIOSIS Previews
Editorial processes
The Annual Review of Plant Biology is helmed by the editor. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
Editors of volumes
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
Daniel I. Arnon (1950–1955)
Lawrence Rogers Blinks (1956)
Alden Springer Crafts (1957–1959)
Leonard Machlis (1959–1972)
Winslow Briggs (1973–1993)
Russell L. Jones (1994–2001)
Deborah Delmer (2002–2004)
Sabeeha Merchant (2005–present)
Current editorial committee
As of 2022, the editorial committee consists of the editor and the following members:
Wilhelm Gruissem
Donald R. Ort
Ian T. Baldwin
Magdalena Bezanilla
Xiaofeng Cao
Mark Estelle
Patricia León
Keiko U. Torii
Cyril Zipfel
References
External links
Annual Review of Plant Biology at SCImago Journal Rank
Molecular and cellular biology journals
Academic journals established in 1950
Botany journals
English-language journals
Plant Biology
Annual journals
1950 establishments in California | Annual Review of Plant Biology | [
"Chemistry"
] | 737 | [
"Molecular and cellular biology journals",
"Molecular biology"
] |
16,102,721 | https://en.wikipedia.org/wiki/Electrochemical%20reaction%20mechanism | In electrochemistry, an electrochemical reaction mechanism is the step-by-step sequence of elementary steps, involving at least one outer-sphere electron transfer, by which an overall electrochemical reaction occurs.
Overview
Elementary steps like proton coupled electron transfer and the movement of electrons between an electrode and substrate are special to electrochemical processes. Electrochemical mechanisms are important to all redox chemistry including corrosion, redox active photochemistry including photosynthesis, other biological systems often involving electron transport chains and other forms of homogeneous and heterogeneous electron transfer. Such reactions are most often studied with standard three electrode techniques such as cyclic voltammetry (CV), chronoamperometry, and bulk electrolysis as well as more complex experiments involving rotating disk electrodes and rotating ring-disk electrodes. In the case of photoinduced electron transfer the use of time-resolved spectroscopy is common.
Formalism
When describing electrochemical reactions an "E" and "C" formalism is often employed. The E represents an electron transfer; sometimes EO and ER are used to represent oxidations and reductions respectively. The C represents a chemical reaction which can be any elementary reaction step and is often called a "following" reaction. In coordination chemistry common C steps which "follow" electron transfer are ligand loss and association. The ligand loss or gain is associated with a geometric change in the complex's coordination sphere.
The reaction above would be called an EC reaction.
Characterization
The production of in the reaction above by the "following" chemical reaction produces a species directly at the electrode that could display redox chemistry anywhere in a CV plot or none at all. The change in coordination from to often prevents the observation of "reversible" behavior during electrochemical experiments like cyclic voltammetry. On the forward scan the expected diffusion wave is observed, in the example above the reduction of to . However, on the return scan the corresponding wave is not observed, in the example above this would be the wave corresponding to the oxidation of to . In our example there is no to oxidize since it has been converted to through ligand loss. The return wave can sometimes be observed by increasing the scan rate so that the redox couple can be observed before the following chemical reaction takes place. This often requires the use of ultramicroelectrodes (UME) capable of very high scan rates of 0.5 to 5.0 V/s. Plots of forward and reverse peak ratios against modified forms of the scan rate often identify the rate of the chemical reaction. It has become a common practice to model such plots with electrochemical simulations. The results of such studies are of disputed practical relevance since simulation requires excellent experimental data, better than that routinely obtained and reported. Furthermore, the parameters of such studies are rarely reported and often include an unreasonably high variable to data ratio. A better practice is to look for a simple, well documented relationship between observed results and implied phenomena; or to investigate a specific physical phenomenon using an alternative technique such as chronoamperometry or those involving a rotating electrode.
Electrocatalysis
Electrocatalysis is a catalytic process involving oxidation or reduction through the direct transfer of electrons. The electrochemical mechanisms of electrocatalytic processes are a common research subject for various fields of chemistry and associated sciences. This is important to the development of water oxidation and fuel cell catalysts. For example, one half-reaction of overall water splitting is the reduction of protons to hydrogen, the subsequent half reaction: 2 H+ + 2 e− → H2.
This reaction requires some form of catalyst to avoid a large overpotential in the delivery of electrons. A catalyst can accomplish this reaction through different reaction pathways, two examples are listed below for the homogeneous catalysts .
Pathway 1
Pathway 2
Pathway 1 is described as an ECECC while pathway 2 would be described as an ECC. If the catalyst was being considered for solid support, pathway 1 which requires a single metal center to function would be a viable candidate. In contrast, a solid support system which separates the individual metal centers would render a catalysts that operates through pathway 2 useless, since it requires a step which is second order in metal center. Determining the reaction mechanism is much like other methods, with some techniques unique to electrochemistry. In most cases electron transfer can be assumed to be much faster than the chemical reactions. Unlike stoichiometric reactions where the steps between the starting materials and the rate limiting step dominate, in catalysis the observed reaction order is usually dominated by the steps between the catalytic resting state and the rate limiting step.
"Following" physical transformations
During potential variant experiments, it is common to go through a redox couple in which the major species is transformed from a species that is soluble in the solution to one that is insoluble. This results in a nucleation process in which a new species plates out on the working electrode. If a species has been deposited on the electrode during a potential sweep then on the return sweep a stripping wave is usually observed.
While the nucleation wave may be pronounced or difficult to detect, the stripping wave is usually very distinct. Often these phenomena can be avoided by reducing the concentration of the complex in solution. Neither of these physical state changes involve a chemical reaction mechanism but they are worth mentioning here since the resulting data is at times confused with some chemical reaction mechanisms.
References
Electrochemical concepts | Electrochemical reaction mechanism | [
"Chemistry"
] | 1,085 | [
"Electrochemistry",
"Electrochemical concepts"
] |
14,568,020 | https://en.wikipedia.org/wiki/Huntingtin-associated%20protein%201 | Huntingtin-associated protein 1 (HAP1) is a protein which in humans is encoded by the HAP1 gene. This protein was found to bind to the mutant huntingtin protein in proportion to the number of glutamines present in the glutamine repeat region.
Huntington's disease (HD), a neurodegenerative disorder characterized by loss of striatal neurons, is caused by an expansion of a polyglutamine tract in the HD protein huntingtin. This gene encodes a protein that interacts with huntingtin, with two cytoskeletal proteins (dynactin and pericentriolar autoantigen protein 1), and with a hepatocyte growth factor-regulated tyrosine kinase substrate (HGS). The interactions with cytoskeletal proteins and a kinase substrate suggest a role for this protein in vesicular trafficking or organelle transport.
Variants
Huntingtin-associated protein 1 has two subtypes; HAP1A and HAP1B.
Function
HAP1 preferentially interacts with in a polyQ dependent manner. Its localization and possible interacting partners (other than Htt) have since been characterised, thus elucidating a possible role for this protein in HD pathogenesis. Martin et al. showed that HAP1 is localized in the mitotic spindle of dividing striatal cells, and associated endosomes, microtubules and vesicles in the basal forebrain and striatal neurons – where HAP1B is preferentially expressed. Furthermore, Page and colleagues identified HAP1 mRNA in the following forebrain limbic nuclei: the amygdala, nucleus accumbens, dentate gyrus, septal nuclei, bed nucleus of the stria terminalis, and hypothalamus. They also identified HAP1 in numerous areas of the cortex, including the anterior cingulate cortex and the limbic cortex.
The subcellular location of HAP1 closely resembles that of Htt. Gutekunst and colleagues used immunogold labeling to identify subcellular localization of both HAP1 and , and identified a close similarity of the distribution of the two proteins. They did not find HAP1 labeling in protein aggregates in the cytoplasm and postulated that this indicated HAP1 in pre-aggregate related HD pathogenesis.
The role of HAP1 in HD pathogenesis may involve aberration of cell cycle processes, as high immunostaining of HAP1 during the cell cycle has been observed. It may have a part in spindle orientation, microtubule stabilization or chromosome movement. More importantly, HAP1 may also disrupt endocytosis, as it has been detected on vesicles involved in the early stages of this process. It is possible that the non-pathogenic activity of HAP1 is intracellular trafficking and that this is perturbed following its association with . HAP1 also interacts with proteins other than Htt and it is likely that their function is altered in HD pathogenesis. These include dynactin p150Glued, a cytoplasmic dynein accessory protein involved in retrograde transport of organelles, and kinesin-like protein which is another transport-mediation protein.
HAP1 also shows a similar CNS distribution pattern to that of neural nitric oxide synthase (nNos), especially in both of the pedunculopontine nuclei, the supraoptic nucleus, and the olfactory bulb. The possible significance of this interaction is that increased HAP1 interaction with muHtt may also increase nitric oxide (NO) thus facilitating neuronal damage.
HAP1 also interacts with other factors involved in vesicular trafficking including GABAA receptor,
Rho-GEF, and HGS.
References
Proteins | Huntingtin-associated protein 1 | [
"Chemistry"
] | 808 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,568,414 | https://en.wikipedia.org/wiki/Rule%20of%20Sarrus | In matrix theory, the rule of Sarrus is a mnemonic device for computing the determinant of a 3 × 3 matrix named after the French mathematician Pierre Frédéric Sarrus.
Consider a 3 × 3 matrix

M = | a11  a12  a13 |
    | a21  a22  a23 |
    | a31  a32  a33 |

then its determinant can be computed by the following scheme.
Write out the first two columns of the matrix to the right of the third column, giving five columns in a row. Then add the products of the diagonals going from top to bottom (solid) and subtract the products of the diagonals going from bottom to top (dashed). This yields

det(M) = a11a22a33 + a12a23a31 + a13a21a32 − a31a22a13 − a32a23a11 − a33a21a12
A similar scheme based on diagonals works for 2 × 2 matrices:

det = a11a22 − a12a21
Both are special cases of the Leibniz formula, which however does not yield similar memorization schemes for larger matrices. Sarrus' rule can also be derived using the Laplace expansion of a 3 × 3 matrix.
Another way of thinking of Sarrus' rule is to imagine that the matrix is wrapped around a cylinder, such that the right and left edges are joined.
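As an illustration of the scheme, the short sketch below (a hypothetical helper function, not part of any standard library) computes a 3 × 3 determinant by the rule of Sarrus; the example matrix is arbitrary.

```python
def det3_sarrus(m):
    """Determinant of a 3x3 matrix (given as a list of rows) via the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = m
    # Products of the three top-to-bottom diagonals minus the three bottom-to-top diagonals.
    return (a * e * i + b * f * g + c * d * h) - (g * e * c + h * f * a + i * d * b)

m = [[2, 0, 1],
     [3, 5, 4],
     [1, 2, 6]]
print(det3_sarrus(m))  # 45, matching a cofactor (Laplace) expansion of the same matrix
```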
References
External links
Sarrus' rule at Planetmath
Linear Algebra: Rule of Sarrus of Determinants at khanacademy.org
Linear algebra
Determinants
Mnemonics | Rule of Sarrus | [
"Mathematics"
] | 233 | [
"Linear algebra",
"Algebra"
] |
14,573,864 | https://en.wikipedia.org/wiki/Adparticle | An adparticle is an atom, molecule, or cluster of atoms or molecules that lies on a crystal surface. The term is used in surface chemistry. The word is a contraction of "adsorbed particle". An adparticle that is a single atom may be referred to as an "adatom".
References
Surface science | Adparticle | [
"Physics",
"Chemistry",
"Materials_science"
] | 71 | [
"Physical chemistry stubs",
"Condensed matter physics",
"Surface science"
] |
14,576,408 | https://en.wikipedia.org/wiki/Radiative%20transfer%20equation%20and%20diffusion%20theory%20for%20photon%20transport%20in%20biological%20tissue | Photon transport in biological tissue can be equivalently modeled numerically with Monte Carlo simulations or analytically by the radiative transfer equation (RTE). However, the RTE is difficult to solve without introducing approximations. A common approximation summarized here is the diffusion approximation. Overall, solutions to the diffusion equation for photon transport are more computationally efficient, but less accurate than Monte Carlo simulations.
Definitions
The RTE can mathematically model the transfer of energy as photons move inside a tissue. The flow of radiation energy through a small area element in the radiation field can be characterized by radiance L(r, ŝ, t) with units W·m−2·sr−1. Radiance is defined as energy flow per unit normal area per unit solid angle per unit time. Here, r denotes position, ŝ denotes unit direction vector and t denotes time (Figure 1).
Several other important physical quantities are based on the definition of radiance:
Fluence rate or intensity: Φ(r, t) = ∫4π L(r, ŝ, t) dΩ
Fluence: F(r) = ∫ Φ(r, t) dt, the fluence rate integrated over the exposure time
Current density (energy flux): J(r, t) = ∫4π ŝ L(r, ŝ, t) dΩ. This is the vector counterpart of fluence rate pointing in the prevalent direction of energy flow.
Radiative transfer equation
The RTE is a differential equation describing radiance . It can be derived via conservation of energy. Briefly, the RTE states that a beam of light loses energy through divergence and extinction (including both absorption and scattering away from the beam) and gains energy from light sources in the medium and scattering directed towards the beam. Coherence, polarization and non-linearity are neglected. Optical properties such as refractive index , absorption coefficient μa, scattering coefficient μs, and scattering anisotropy are taken as time-invariant but may vary spatially. Scattering is assumed to be elastic.
The RTE (Boltzmann equation) is thus written as:

(1/c) ∂L(r, ŝ, t)/∂t + ŝ · ∇L(r, ŝ, t) + μt L(r, ŝ, t) = μs ∫4π L(r, ŝ′, t) P(ŝ′ · ŝ) dΩ′ + S(r, ŝ, t)

where
c is the speed of light in the tissue, as determined by the relative refractive index
μt = μa + μs is the extinction coefficient
P(ŝ′ · ŝ) is the phase function, representing the probability of light with propagation direction ŝ′ being scattered into solid angle dΩ around ŝ. In most cases, the phase function depends only on the angle between the scattered and incident directions, i.e. P(ŝ′ · ŝ). The scattering anisotropy can be expressed as g = ∫4π (ŝ′ · ŝ) P(ŝ′ · ŝ) dΩ
S(r, ŝ, t) describes the light source.
Diffusion theory
Assumptions
In the RTE, six different independent variables define the radiance at any spatial and temporal point (x, y, and z from r, polar angle θ and azimuthal angle φ from ŝ, and t). By making appropriate assumptions about the behavior of photons in a scattering medium, the number of independent variables can be reduced. These assumptions lead to the diffusion theory (and diffusion equation) for photon transport.
Two assumptions permit the application of diffusion theory to the RTE:
Relative to scattering events, there are very few absorption events. Likewise, after numerous scattering events, few absorption events will occur, and the radiance will become nearly isotropic. This assumption is sometimes called directional broadening.
In a primarily scattering medium, the time for substantial current density change is much longer than the time to traverse one transport mean free path. Thus, over one transport mean free path, the fractional change in current density is much less than unity. This property is sometimes called temporal broadening.
Both of these assumptions require a high-albedo (predominantly scattering) medium.
The RTE in the diffusion approximation
Radiance can be expanded on a basis set of spherical harmonics n, m. In diffusion theory, radiance is taken to be largely isotropic, so only the isotropic and first-order anisotropic terms are used:
where n, m are the expansion coefficients. Radiance is expressed with 4 terms: one for n = 0 (the isotropic term) and 3 terms for n = 1 (the anisotropic terms). Using properties of spherical harmonics and the definitions of fluence rate and current density , the isotropic and anisotropic terms can respectively be expressed as follows:
Hence, we can approximate radiance as

L(r, ŝ, t) ≈ (1/4π) Φ(r, t) + (3/4π) J(r, t) · ŝ
Substituting the above expression for radiance, the RTE can be respectively rewritten in scalar and vector forms as follows (The scattering term of the RTE is integrated over the complete solid angle. For the vector form, the RTE is multiplied by direction before evaluation.):
The diffusion approximation is limited to systems where reduced scattering coefficients are much larger than their absorption coefficients and which have a minimum layer thickness of the order of a few transport mean free paths.
The diffusion equation
Using the second assumption of diffusion theory, we note that the fractional change in current density over one transport mean free path is negligible. The vector representation of the diffusion theory RTE reduces to Fick's law J(r, t) = −D ∇Φ(r, t), which defines current density in terms of the gradient of fluence rate. Substituting Fick's law into the scalar representation of the RTE gives the diffusion equation:

(1/c) ∂Φ(r, t)/∂t + μa Φ(r, t) − ∇ · [D ∇Φ(r, t)] = S(r, t)

where D = 1/[3(μa + μ's)] is the diffusion coefficient and μ's = (1 − g)μs is the reduced scattering coefficient.
Notably, there is no explicit dependence on the scattering coefficient in the diffusion equation. Instead, only the reduced scattering coefficient appears in the expression for . This leads to an important relationship; diffusion is unaffected if the anisotropy of the scattering medium is changed while the reduced scattering coefficient stays constant.
Solutions to the diffusion equation
For various configurations of boundaries (e.g. layers of tissue) and light sources, the diffusion equation may be solved by applying appropriate boundary conditions and defining the source term as the situation demands.
Point sources in infinite homogeneous media
A solution to the diffusion equation for the simple case of a short-pulsed point source in an infinite homogeneous medium is presented in this section. The source term in the diffusion equation becomes S(r, t; r′, t′) = δ(r − r′) δ(t − t′), where r is the position at which fluence rate is measured and r′ is the position of the source. The pulse peaks at time t′. The diffusion equation is solved for fluence rate to yield the Green function for the diffusion equation:

Φ(r, t; r′, t′) = c / [4πDc(t − t′)]^(3/2) · exp[−|r − r′|² / (4Dc(t − t′))] · exp[−μa c (t − t′)]

The term exp[−μa c (t − t′)] represents the exponential decay in fluence rate due to absorption in accordance with Beer's law. The other terms represent broadening due to scattering. Given the above solution, an arbitrary source can be characterized as a superposition of short-pulsed point sources.
Taking time variation out of the diffusion equation gives the following for a time-independent point source S(r) = δ(r):

Φ(r) = (1 / 4πDr) exp(−μeff r)

where μeff = √(μa / D) is the effective attenuation coefficient and indicates the rate of spatial decay in fluence.
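For concreteness, the steady-state point-source solution can be evaluated numerically. The sketch below is a minimal illustration (assuming NumPy; the optical properties are arbitrary tissue-like values and the unit source power is an assumption, not something specified above) that computes D, μeff, and the fluence rate as a function of distance from the source.

```python
import numpy as np

def fluence_point_source(r, mu_a, mu_s_prime):
    """Steady-state fluence rate at distance r (cm) from a unit-power isotropic point
    source in an infinite homogeneous medium, in the diffusion approximation."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient, cm
    mu_eff = np.sqrt(mu_a / D)               # effective attenuation coefficient, 1/cm
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

r = np.linspace(0.1, 2.0, 5)                 # observation distances, cm
print(fluence_point_source(r, mu_a=0.1, mu_s_prime=10.0))
```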
Boundary conditions
Fluence rate at a boundary
Consideration of boundary conditions permits use of the diffusion equation to characterize light propagation in media of limited size (where interfaces between the medium and the ambient environment must be considered). To begin to address a boundary, one can consider what happens when photons in the medium reach a boundary (i.e. a surface). The direction-integrated radiance at the boundary and directed into the medium is equal to the direction-integrated radiance at the boundary and directed out of the medium multiplied by reflectance :
where is normal to and pointing away from the boundary. The diffusion approximation gives an expression for radiance in terms of fluence rate and current density . Evaluating the above integrals after substitution gives:
Substituting Fick's law () gives, at a distance from the boundary z=0,
The extrapolated boundary
It is desirable to identify a zero-fluence boundary. However, the fluence rate at a physical boundary is, in general, not zero. An extrapolated boundary, at b for which fluence rate is zero, can be determined to establish image sources. Using a first order Taylor series approximation,
which evaluates to zero since . Thus, by definition, b must be z as defined above. Notably, when the index of refraction is the same on both sides of the boundary, F is zero and the extrapolated boundary is at b.
Pencil beam normally incident on a semi-infinite medium
Using boundary conditions, one may approximately characterize diffuse reflectance for a pencil beam normally incident on a semi-infinite medium. The beam will be represented as two point sources in an infinite medium as follows (Figure 2):
Set scattering anisotropy g2 = 0 for the scattering medium and set the new scattering coefficient μs2 to the original μs1 multiplied by (1 − g1), where g1 is the original scattering anisotropy.
Convert the pencil beam into an isotropic point source of power a′ at a depth of one transport mean free path l′ below the surface.
Implement the extrapolated boundary condition by adding an image source of opposite sign above the surface at l′ + 2zb.
The two point sources can be characterized as point sources in an infinite medium via
is the distance from observation point to source location in cylindrical coordinates. The linear combination of the fluence rate contributions from the two image sources is
This can be used to get diffuse reflectance d via Fick's law:
is the distance from the observation point to the source at and is the distance from the observation point to the image source at b.
Properties of diffusion equation
Scaling
Let be the Green function solution to the diffusion equation for a homogeneous medium of optical properties , , then the Green function solution for a homogeneous medium which differs from the former only by optical properties , , such that , can be obtained with the following rescaling:
where and .
Such property can also be extended to the radiance in the more general general framework of the RTE, by substituting the transport coefficients , with the extinction coefficients , .
The usefulness of the property resides in taking the results obtained for a given geometry and set of optical properties, typical of a lab scale setting, rescaling them and extending them to contexts in which it would be complicated to perform measurements due to the sheer extension or inaccessibility.
Dependence on absorption
Let Φ0(r, t) be the Green function solution to the diffusion equation for a non-absorbing homogeneous medium. Then, the Green function solution for the medium when its absorption coefficient is μa can be obtained as:

Φμa(r, t) = Φ0(r, t) exp(−μa c t)
Again, the same property also holds for radiance within the RTE.
Diffusion theory solutions vs. Monte Carlo simulations
Monte Carlo simulations of photon transport, though time consuming, will accurately predict photon behavior in a scattering medium. The assumptions involved in characterizing photon behavior with the diffusion equation generate inaccuracies. Generally, the diffusion approximation is less accurate as the absorption coefficient μa increases and the scattering coefficient μs decreases.
For a photon beam incident on a medium of limited depth, error due to the diffusion approximation is most prominent within one transport mean free path of the location of photon incidence (where radiance is not yet isotropic) (Figure 3).
Among the steps in describing a pencil beam incident on a semi-infinite medium with the diffusion equation, converting the medium from anisotropic to isotropic (step 1) (Figure 4) and converting the beam to a source (step 2) (Figure 5) generate more error than converting from a single source to a pair of image sources (step 3) (Figure 6). Step 2 generates the most significant error.
See also
Monte Carlo method for photon transport
Radiative transfer
References
Scattering, absorption and radiative transfer (optics) | Radiative transfer equation and diffusion theory for photon transport in biological tissue | [
"Chemistry"
] | 2,229 | [
"Scattering",
" absorption and radiative transfer (optics)"
] |
14,578,984 | https://en.wikipedia.org/wiki/Frigorific%20mixture | A frigorific mixture is a mixture of two or more phases in a chemical system that, so long as none of the phases are completely consumed during equilibration, reaches an equilibrium temperature that is independent of the starting temperature of the phases before they are mixed. The equilibrium temperature is also independent of the quantities of the phases used as long as sufficient amounts of each are present to reach equilibrium without consuming one or more.
Ice
Liquid water and ice, for example, form a frigorific mixture at 0 °C or 32 °F. This mixture was once used to define 0 °C. That temperature is now defined as the triple point of Water with well-defined isotope ratios. A mixture of ammonium chloride, water, and ice form a frigorific mixture at about −17.8 °C or 0 °F. This mixture was once used to define 0 °F.
Explanation
The existence of frigorific mixtures can be viewed as a consequence of the Gibbs phase rule, which describes the relationship at equilibrium between the number of components, the number of coexisting phases, and the number of degrees of freedom permitted by the conditions of heterogeneous equilibrium. Specifically, at constant atmospheric pressure, in a system containing C linearly independent chemical components, if C + 1 phases are specified to be present in equilibrium, then the system is fully determined (there are no degrees of freedom). That is, the temperature and the compositions of all phases are determined. Thus, in, for example, the chemical system H2O-NaCl, which has two components, the simultaneous presence of the three phases liquid, ice, and hydrohalite can exist only at atmospheric pressure at the unique temperature of –21.2 °C. The approach to equilibrium of a frigorific mixture involves spontaneous temperature change driven by the conversion of latent heat into sensible heat as the phase proportions adjust to accommodate the decrease in thermodynamic potential associated with the approach to equilibrium.
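As a brief worked check (using the constant-pressure form of the phase rule, a standard relation rather than one quoted above): at fixed pressure the number of degrees of freedom is F = C − P + 1, so for the H2O-NaCl system C = 2, the coexistence of liquid, ice, and hydrohalite gives P = 3, and hence F = 2 − 3 + 1 = 0; the temperature (–21.2 °C) and all phase compositions are therefore fixed.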
Other examples
Other examples of frigorific mixtures include:
Uses
A frigorific mixture may be used to obtain a liquid medium that has a reproducible temperature below ambient temperature. Such mixtures were used to calibrate thermometers. In chemistry a cooling bath may be used to control the temperature of a strongly exothermic reaction.
A frigorific mixture may be used as an alternative to mechanical refrigeration. For example, to fit two machined metal parts together, one part is placed in a frigorific mixture, causing it to contract so that it may be easily inserted into the uncooled second part; on warming, the two parts are held together tightly. Another example is the Piper process, used in the second half of the 19th century for freezing and cold storage of fish.
Limitations of acid base slushes
Mixtures relying on the use of acid base slushes are of limited practical value beyond producing melting point references as the enthalpy of dissolution for the melting point depressant is often significantly greater (e.g. ΔH = −57.61 kJ/mol for KOH) than the enthalpy of fusion for water itself (ΔH = 6.02 kJ/mol); for reference, ΔH for the dissolution of NaCl is 3.88 kJ/mol. This results in little to no net cooling capacity at the desired temperatures and an end mixture temperature that is higher than it was to begin with. The values claimed in the table are produced by first precooling and then combining each subsequent mixture with it surrounded by a mixture of the previous temperature increment; the mixtures must be 'stacked' within one another.
Such acid base slushes are corrosive and therefore present handling problems. Additionally, they can not be replenished easily, as the volume of the mixture increases with each addition of refrigerant; the container (be it a bath or cold finger) will eventually need emptying and refilling to prevent it from overflowing. This makes these mixtures largely unsuitable for use in synthetic applications, as there will be no cooling surface present during the emptying of the container.
See also
Cooling bath
References
Thermodynamics
Physical chemistry
Chemical thermodynamics | Frigorific mixture | [
"Physics",
"Chemistry",
"Mathematics"
] | 879 | [
"Applied and interdisciplinary physics",
"Thermodynamics",
"nan",
"Chemical thermodynamics",
"Physical chemistry",
"Dynamical systems"
] |
4,162,069 | https://en.wikipedia.org/wiki/Hybrid%20bond%20graph | A hybrid bond graph is a graphical description of a physical dynamic system with discontinuities (i.e., a hybrid dynamical system). Similar to
a regular bond graph, it is an energy-based technique. However, it allows instantaneous switching of the junction structure, which may violate the principle of continuity of power (Mosterman and Biswas, 1998).
References
Pieter Mosterman and Gautam Biswas, 1998: "A Theory of Discontinuities in Physical System Models" in Journal of the Franklin Institute, Volume 335B, Number 3, pp. 401-439, January, 1998.
Further reading
Pieter Mosterman, 2001: "HyBrSim - A Modeling and Simulation Environment for Hybrid Bond Graphs" in Journal of Systems and Control Engineering, vol. 216, Part I, pp. 35-46, 2002.
Cuijpers, P.J.L., Broenink, J.F., and Mosterman P.J., 2008: "Constitutive Hybrid Processes: a Process-Algebraic Semantics for Hybrid Bond Graphs" in SIMULATION, vol. 84, No. 7, pages 339-358, 2008.
Dynamical systems | Hybrid bond graph | [
"Physics",
"Mathematics"
] | 248 | [
"Mechanics",
"Dynamical systems"
] |
4,162,402 | https://en.wikipedia.org/wiki/Flame%20ionization%20detector | A flame ionization detector (FID) is a scientific instrument that measures analytes in a gas stream. It is frequently used as a detector in gas chromatography. The measurement of ions per unit time makes this a mass sensitive instrument. Standalone FIDs can also be used in applications such as landfill gas monitoring, fugitive emissions monitoring and internal combustion engine emissions measurement in stationary or portable instruments.
History
The first flame ionization detectors were developed simultaneously and independently in 1957 by McWilliam and Dewar at Imperial Chemical Industries of Australia and New Zealand (ICIANZ, see Orica history) Central Research Laboratory, Ascot Vale, Melbourne, Australia, and by Harley and Pretorius at the University of Pretoria in Pretoria, South Africa.
In 1959, Perkin Elmer Corp. included a flame ionization detector in its Vapor Fractometer.
Operating principle
The operation of the FID is based on the detection of ions formed during combustion of organic compounds in a hydrogen flame. The generation of these ions is proportional to the concentration of organic species in the sample gas stream.
To detect these ions, two electrodes are used to provide a potential difference. The positive electrode acts as the nozzle head where the flame is produced. The other, negative electrode is positioned above the flame. When first designed, the negative electrode was either a tear-drop-shaped or an angular piece of platinum. Today, the design has been modified into a tubular electrode, commonly referred to as a collector plate. The ions thus are attracted to the collector plate and upon hitting the plate, induce a current. This current is measured with a high-impedance picoammeter and fed into an integrator. The manner in which the final data is displayed is based on the computer and software. In general, a graph is displayed that has time on the x-axis and total ion on the y-axis.
The current measured corresponds roughly to the proportion of reduced carbon atoms in the flame. Specifically how the ions are produced is not necessarily understood, but the response of the detector is determined by the number of carbon atoms (ions) hitting the detector per unit time. This makes the detector sensitive to the mass rather than the concentration, which is useful because the response of the detector is not greatly affected by changes in the carrier gas flow rate.
Response factor
FID measurements are usually reported "as methane," meaning as the quantity of methane which would produce the same response. The same quantity of different chemicals produces different amounts of current, depending on the elemental composition of the chemicals. The response factor of the detector for different chemicals can be used to convert current measurements into actual amounts of each chemical.
Hydrocarbons generally have response factors that are equal to the number of carbon atoms in their molecule (more carbon atoms produce greater current), while oxygenates and other species that contain heteroatoms tend to have a lower response factor. Carbon monoxide and carbon dioxide are not detectable by FID.
FID measurements are often labelled "total hydrocarbons" or "total hydrocarbon content" (THC), although a more accurate name would be "total volatile hydrocarbon content" (TVHC), as hydrocarbons which have condensed out are not detected, even though they are important, for example safety when handling compressed oxygen.
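As a small illustration of how such a conversion works, the sketch below (plain Python; the compound names and factor values are illustrative placeholders, not measured constants) divides an "as methane" reading by a per-molecule response factor relative to methane to estimate the amount of a specific analyte.

```python
# Illustrative relative response factors (per molecule, relative to methane).
# For simple hydrocarbons the factor is roughly the number of carbon atoms;
# oxygenates tend to respond below their carbon number.
response_factors = {
    "methane": 1.0,   # reference compound
    "propane": 3.0,   # assumed: three carbon atoms
    "ethanol": 1.5,   # assumed: reduced response for an oxygenate
}

def amount_from_methane_equivalent(reading_as_methane, compound):
    """Estimate the amount of `compound` from a reading reported 'as methane'."""
    return reading_as_methane / response_factors[compound]

print(amount_from_methane_equivalent(6.0, "propane"))  # 2.0 units of propane
```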
Description
The design of the flame ionization detector varies from manufacturer to manufacturer, but the principles are the same. Most commonly, the FID is attached to a gas chromatography system.
The eluent exits the gas chromatography column (A) and enters the FID detector’s oven (B). The oven is needed to make sure that as soon as the eluent exits the column, it does not come out of the gaseous phase and deposit on the interface between the column and FID. This deposition would result in loss of eluent and errors in detection. As the eluent travels up the FID, it is first mixed with the hydrogen fuel (C) and then with the oxidant (D). The eluent/fuel/oxidant mixture continues to travel up to the nozzle head where a positive bias voltage exists. This positive bias helps to repel the oxidized carbon ions created by the flame (E) pyrolyzing the eluent. The ions (F) are repelled up toward the collector plates (G) which are connected to a very sensitive ammeter, which detects the ions hitting the plates, then feeds that signal to an amplifier, integrator, and display system(H). The products of the flame are finally vented out of the detector through the exhaust port (J).
Advantages and disadvantages
Advantages
Flame ionization detectors are used very widely in gas chromatography because of a number of advantages.
Cost: Flame ionization detectors are relatively inexpensive to acquire and operate.
Low maintenance requirements: Apart from cleaning or replacing the FID jet, these detectors require little maintenance.
Rugged construction: FIDs are relatively resistant to misuse.
Linearity and detection ranges: FIDs can measure organic substance concentration at very low (10⁻¹³ g/s) and very high levels, having a linear response range of 10⁷ g/s.
Disadvantages
Flame ionization detectors cannot detect inorganic substances and some highly oxygenated or functionalized species in the way that infrared and laser technologies can. In some systems, CO and CO2 can be detected in the FID using a methanizer, which is a bed of Ni catalyst that reduces CO and CO2 to methane, which can be in turn detected by the FID. The methanizer is limited by its inability to reduce compounds other than CO and CO2 and its tendency to be poisoned by a number of chemicals commonly found in gas chromatography effluents.
Another important disadvantage is that the FID flame oxidizes all oxidizable compounds that pass through it; all hydrocarbons and oxygenates are oxidized to carbon dioxide and water and other heteroatoms are oxidized according to thermodynamics. For this reason, FIDs tend to be the last in a detector train and also cannot be used for preparatory work.
Alternative solution
An improvement to the methanizer is the Polyarc reactor, which is a sequential reactor that oxidizes compounds before reducing them to methane. This method can be used to improve the response of the FID and allow for the detection of many more carbon-containing compounds. The complete conversion of compounds to methane and the now equivalent response in the detector also eliminates the need for calibrations and standards because response factors are all equivalent to those of methane. This allows for the rapid analysis of complex mixtures that contain molecules where standards are not available.
See also
Active fire protection
Flame detector
Gas chromatography
Photoelectric flame photometer
Photoionization detector
Thermal conductivity detector
References
Sources
Skoog, Douglas A., F. James Holler, & Stanley R. Crouch. Principles of Instrumental Analysis. 6th Edition. United States: Thomson Brooks/Cole, 2007.
G. H. Jeffery, J. Basset, J. Mendham, R. C. Denney, Vogel's Textbook of Quantitative Chemical Analysis.
Gas chromatography
Australian inventions
South African inventions | Flame ionization detector | [
"Chemistry"
] | 1,491 | [
"Chromatography",
"Gas chromatography"
] |
4,162,694 | https://en.wikipedia.org/wiki/Litmus | Litmus is a water-soluble mixture of different dyes extracted from lichens. It is often absorbed onto filter paper to produce one of the oldest forms of pH indicator, used to test materials for acidity. In an acidic medium, blue litmus paper turns red, while in a basic or alkaline medium, red litmus paper turns blue. In short, it is a dye and indicator which is used to place substances on a pH scale.
History
The word "litmus" comes from an Old Norse word for “moss used for dyeing”. About 1300, the Spanish physician Arnaldus de Villa Nova began using litmus to study acids and bases.
From the 16th century onwards, the blue dye was extracted from some lichens, especially in the Netherlands.
Natural sources
Litmus can be found in different species of lichens. The dyes are extracted from such species as Roccella tinctoria (South American), Roccella fuciformis (Angola and Madagascar), Roccella pygmaea (Algeria), Roccella phycopsis, Lecanora tartarea (Norway, Sweden), Variolaria dealbata, Ochrolechia parella, Parmotrema tinctorum, and Parmelia. Currently, the main sources are Roccella montagnei (Mozambique) and Dendrographa leucophoea (California).
Uses
The main use of litmus is to test whether a solution is acidic or basic, as blue litmus paper turns red under acidic conditions, and red litmus paper turns blue under basic or alkaline conditions, with the color change occurring over the pH range 4.5–8.3 at 25 °C (77 °F). Neutral litmus paper is purple. Wet litmus paper can also be used to test for water-soluble gases that affect acidity or basicity; the gas dissolves in the water and the resulting solution colors the litmus paper. For instance, ammonia gas, which is alkaline, turns red litmus paper blue. While all litmus paper acts as pH paper, the opposite is not true.
Litmus can also be prepared as an aqueous solution that functions similarly. Under acidic conditions, the solution is red, and under alkaline conditions, the solution is blue.
Chemical reactions other than acid–base can also cause a color change to litmus paper. For instance, chlorine gas turns blue litmus paper white; the litmus dye is bleached because hypochlorite ions are present. This reaction is irreversible, so the litmus is not acting as an indicator in this situation.
Chemistry
The litmus mixture has the CAS number 1393-92-6 and contains 10 to around 15 different dyes. All of the chemical components of litmus are likely to be the same as those of the related mixture known as orcein but in different proportions. In contrast with orcein, the principal constituent of litmus has an average molecular mass of 3300. Acid-base indicators on litmus owe their properties to a 7-hydroxyphenoxazone chromophore. Some fractions of litmus were given specific names including erythrolitmin (or erythrolein), azolitmin, spaniolitmin, leucoorcein, and leucazolitmin. Azolitmin shows nearly the same effect as litmus.
A recipe to make litmus out of the lichens, as outlined on a UC Santa Barbara website says:
Mechanism
Red litmus contains a weak diprotic acid. When it is exposed to a basic compound, the hydrogen ions react with the added base. The conjugate base formed from the litmus acid has a blue color, so the wet red litmus paper turns blue in an alkaline solution.
References
PH indicators
Paper products | Litmus | [
"Chemistry",
"Materials_science"
] | 803 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
4,163,498 | https://en.wikipedia.org/wiki/Minimum%20Data%20Set | The Minimum Data Set (MDS) is part of the U.S. federally mandated process for clinical assessment of all residents in Medicare or Medicaid certified nursing homes and non-critical access hospitals with Medicare swing bed agreements. (The term "swing bed" refers to the Social Security Act's authorizing small, rural hospitals to use their beds in both an acute care and Skilled Nursing Facility (SNF) capacity, as needed.)
Description
This process provides a comprehensive assessment of each resident's functional capabilities and helps nursing home and SNF staff identify health problems.
Resource Utilization Groups (RUG) are part of this process, and provide the foundation upon which a resident's individual care plan is formulated. MDS assessment forms are completed for all residents in certified nursing homes, including SNFs, regardless of source of payment for the individual resident. MDS assessments are required for residents on admission to the nursing facility and then periodically, within specific guidelines and time frames. Participants in the assessment process are health care professionals and direct care staff such as registered nurses, licensed practical or vocational nurses (LPN/LVN), Therapists, Social Services, Activities and Dietary staff employed by the nursing home. MDS information is transmitted electronically by nursing homes to the MDS database in their respective states. MDS information from the state databases is captured into the national MDS database at Centers for Medicare and Medicaid Services (CMS).
Sections of MDS (Minimum Data Set):
Identification Information
Hearing, Speech and Vision
Cognitive Patterns
Mood
Behavior
Preferences for Customary Routine and Activities
Functional Status
Functional Abilities and Goals
Bladder and Bowel
Active Diagnoses
Health Conditions
Swallowing/Nutritional Status
Oral/Dental Status
Skin Conditions
Medications
Special Treatments, Procedures and Programs
Restraints
Participation in Assessment and Goal Setting
Care Area Assessment (CAA) Summary
Correction Request
Assessment Administration
The MDS is updated by the Centers for Medicare and Medicaid Services. Specific coding regulations in completing the MDS can be found in the Resident Assessment Instrument User's Guide. Versions of the Minimum Data Set have been used or are being utilized in other countries.
See also
Nursing Minimum Data Set (NMDS), US
National minimum dataset, in health informatics
National Minimum Data Set for Social Care (NMDS-SC), England
References
General
CMS - MDS Quality Indicator and Resident Reports
Centers for Medicare & Medicaid Services Long Term Care Facility Resident Assessment Instrument 3.0 User's Manual Version 1.16 October 2018
Health informatics
Medicare and Medicaid (United States) | Minimum Data Set | [
"Biology"
] | 517 | [
"Health informatics",
"Medical technology"
] |
4,164,148 | https://en.wikipedia.org/wiki/Oliver%20E.%20Buckley%20Prize | The Oliver E. Buckley Condensed Matter Prize is an annual award given by the American Physical Society "to recognize and encourage outstanding theoretical or experimental contributions to condensed matter physics." It was endowed by AT&T Bell Laboratories as a means of recognizing outstanding scientific work. The prize is named in honor of Oliver Ellsworth Buckley, a former president of Bell Labs. Before 1982, it was known as the Oliver E. Buckley Solid State Prize. It is one of the most prestigious awards in the field of condensed matter physics.
The prize is normally awarded to one person but may be shared if multiple recipients contributed to the same accomplishments. Nominations are active for three years. The prize was endowed in 1952 and first awarded in 1953. Since 2012, the prize has been co-sponsored by HTC-VIA Group.
Recipients
See also
List of physics awards
References
External links
APS page on the Buckley Prize
Condensed matter physics awards
Awards of the American Physical Society
Awards established in 1953 | Oliver E. Buckley Prize | [
"Physics",
"Materials_science"
] | 194 | [
"Condensed matter physics awards",
"Condensed matter physics"
] |
4,165,915 | https://en.wikipedia.org/wiki/Spectronic%2020 | The Spectronic 20 is a brand of single-beam spectrophotometer, designed to operate in the visible spectrum across a wavelength range of 340 nm to 950 nm, with a spectral bandpass of 20 nm. It is designed for quantitative absorption measurement at single wavelengths. Because it measures the transmittance or absorption of visible light through a solution, it is sometimes referred to as a colorimeter. The name of the instrument is a trademark of the manufacturer.
Developed by Bausch & Lomb and launched in 1953, the Spectronic 20 was the first low-cost spectrophotometer. It rapidly became an industry standard due to its low cost, durability and ease of use, and has been referred to as an "iconic lab spectrophotometer". Approximately 600,000 units were sold over its nearly 60 year production run. It has been the most widely used spectrophotometer worldwide. Production was discontinued in 2011 when it was replaced by the Spectronic 200, but the Spectronic 20 is still in common use. It is sometimes referred to as the "Spec 20".
Design
The Bausch & Lomb Spectronic 20 colorimeter uses a diffraction grating monochromator combined with a system for the detection, amplification, and measurement of light wavelengths in the 340 nm to 950 nm range.
As shown in the schematic optical diagram (see left), polychromatic light from a source in the system passes through lenses and is then reflected and dispersed by the diffraction grating, which restricts the range of light wavelengths. This restricted range of wavelengths is then passed through the sample to be measured. The intensity of the transmitted light is determined by a phototube detector. Mechanical movement of the diffraction grating by means of the cam attached to the wavelength control enables the user to select various wavelengths. This is the "λ knob", wherein λ refers to the wavelength of light used for the measurement.
Quantitative measurements
Many substances absorb light in the ultraviolet–visible range. Absorption at any particular wavelength in the ultraviolet–visible range is proportional to the concentration of the substance in the solution or other medium, in accord with the Beer–Lambert relationship. In a practical sense, the Beer–Lambert relationship can be stated as:
A = ε × l × c
in which A is the absorbance measured by the instrument, ε is the molar absorption coefficient of the sample, l is the pathlength of the light beam through the sample, and c is the concentration of the substance in the solution or medium. The Spectronic 20 is thereby commonly used for quantitative determination of the concentration of a substance of interest. The Spectronic 20 measures the absorbance of light at a pre-determined wavelength, and the concentration is calculated from the Beer–Lambert relationship.
The absorbance of the light is the base 10 logarithm of the ratio of the transmittance of the pure solvent to the transmittance of the sample, so absorbance and transmittance can be interconverted. Either transmittance or absorbance can therefore be plotted versus concentration using measurements from the Spectronic 20. Plotting a curve using percent transmittance of light yields an exponential curve. However, absorbance is linearly related to concentration, and so absorbance is often preferred for plotting a standard curve. This type of standard curve relates the concentration of the solution (on the x-axis) to measures of its absorbance (y-axis).
To obtain such a curve, a series of dilutions of known concentration of a solution are prepared and readings are obtained for each of the dilutions (see plot at left). In this plot, the slope of the line is the product ε x l. By measuring a series of standards and creating the standard curve, it is possible to quantify the amount or concentration of a substance within a sample by determining the absorbance on the Spec 20 and finding the corresponding concentration on the calibration curve. Alternatively, the logarithm of percent transmittance can be plotted versus concentration to create a standard curve using the same procedure.
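The standard-curve procedure described above can be sketched in a few lines of code. The following Python snippet is a minimal illustration only: the standard concentrations, absorbance readings and the unknown reading are hypothetical numbers invented for the example, not data from any actual instrument.

```python
# Minimal Beer-Lambert standard-curve sketch (all numbers are hypothetical).
# A = epsilon * l * c, so a plot of absorbance A versus concentration c is a
# straight line through the origin whose slope equals epsilon * l.
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0]) * 1e-5       # standards, mol/L
absorbance = np.array([0.00, 0.11, 0.23, 0.33, 0.45])    # measured A values

# Least-squares slope of a line through the origin: sum(A*c) / sum(c*c).
slope = np.sum(absorbance * conc) / np.sum(conc * conc)  # equals epsilon * l

# Concentration of an unknown sample from its absorbance reading.
a_unknown = 0.28
c_unknown = a_unknown / slope
print(f"epsilon*l = {slope:.3e} L/mol, unknown c = {c_unknown:.2e} mol/L")
```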
The absorbance measured by the Spectronic 20 is the sum of the absorbance of each of the constituents of the solution. Therefore, the Spectronic 20 can be used to analyze more complex solutions. For example, if a sample solution has two light-absorbing compounds in it, then the user performs measurements at two different wavelengths and constructs standard curves for each compound. Then the concentration of each compound can be calculated algebraically.
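The two-compound analysis described above amounts to solving two linear equations in two unknowns, one per measurement wavelength. A minimal sketch follows; the molar absorption coefficients, path length and absorbance readings are hypothetical values chosen only to illustrate the algebra.

```python
# Two-component Beer-Lambert analysis as a 2x2 linear system (hypothetical data).
# A(lambda) = e1(lambda)*l*c1 + e2(lambda)*l*c2 at each of two wavelengths.
import numpy as np

l = 1.0  # path length in cm (assumed)

# Hypothetical molar absorption coefficients (L mol^-1 cm^-1):
# rows = the two wavelengths, columns = compounds 1 and 2.
E = np.array([[12000.0, 1500.0],
              [ 2000.0, 9000.0]])

A = np.array([0.65, 0.42])          # absorbance readings at the two wavelengths

c1, c2 = np.linalg.solve(E * l, A)  # concentrations of the two compounds, mol/L
print(f"c1 = {c1:.2e} mol/L, c2 = {c2:.2e} mol/L")
```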
The Spectronic 20 can be used for turbidimetric measurements. In microbiological work, the turbidity of a liquid culture of bacterial cells relates to the cell count, and OD600 measurements can be conducted for this purpose using the Spectronic 20. Likewise the turbidity of water suspensions of clays and other particles of size suitable for light scattering can be quantitatively determined by means of a Spectronic 20. In the past, the Spectronic 20 was used for clinical diagnostic purposes.
Use
Before testing a sample, the Spectronic 20 is calibrated using a blank solution, which is the pure solvent that is used in the experimental sample. It is typically water or an organic solvent. In this calibration, the transmittance is set at 100% using the calibration knob of the instrument (the amplifier control knob in the figure at right). The instrument can also optionally be calibrated with a stock solution of a sample at a concentration known to have an absorbance of 2 or else vendor supplied standards, using the light absorption knob in the diagram shown at right. After calibration, the user places a 1/2 inch test tube or cuvette containing the sample solution to be measured into the sample compartment. Calibration is repeated each time the wavelength is changed. The blank or a standard reference sample is generally used to check periodically for drift. To measure wavelengths above 650 nm, the bottom of the instrument is opened, and a red filter and a red-sensitive photocell are installed.
The original design of the Spectronic 20 utilized an analog dial for readout of transmission from 100%T to 1%T (top scale), 0A - 2A (lower scale). Using the original instrument requires manual setting of the wavelength and making readings from a moving-needle analog display.
Replacement
The Spectronic 20D (launched in 1985) and later the 20D+ replaced the analog dial with a red digital LED readout, offering greater precision in the readout, if not greater accuracy in the actual reading. A side-by-side comparison of the features of the 20+ and 20D+ is available in the 2001 operating manual.
The Spectronic 20 was replaced by the Spectronic 200 in the Thermo Scientific spectrophotometer product line in 2011. The Spectronic 200 utilizes an array detector and digital control of the measured wavelength, while retaining the characteristic λ knob of the Spec 20 for setting the wavelength. In addition to replicating the user modes of the Spec 20D+ (which it can emulate on a color LCD screen) the Spec 200 accommodates both test-tubes and square cuvettes without needing to install an adapter. Software modes described in the Spectronic 200's specifications include scanning, four wavelength simultaneous measurement, and quantitative analysis with up to four standards, in contrast to the SPEC 20D+ which offered only single point calibration.
Product line history
Originally introduced by Bausch & Lomb in 1953, the product line was sold to Milton Roy in 1985. Milton Roy sold its instrument group to Life Sciences International, renamed Spectronic Instruments, Inc. in 1995. Spectronics Instruments was purchased by Thermo Optek in 1997, renamed Spectronic-Unicam in 2001 and Thermo-Spectronic in 2002. In 2003 the product line was moved to Madison, WI and the brand renamed to Thermo Electron.
With the merger of Thermo Electron and Fisher Scientific in 2006 the brand changed to Thermo Scientific, and remained such until the end of the production run. Spectronic 20 instruments found in labs today may bear any of the Bausch and Lomb, Milton Roy, Spectronic, Thermo Electron or Thermo Scientific brand names.
Popular culture
The Spectronic 20 is apparently one of the few lab instruments to remain intact after the destruction of the laboratory in the movie Back to the Future.
References
External links
Spectronic 20, ChemLab Images and instructions (from Dartmouth College)
Manufacturer's SPEC 200 webpage (from current manufacturer)
Spectrometers
Scientific instruments | Spectronic 20 | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,791 | [
"Spectrum (physical sciences)",
"Scientific instruments",
"Measuring instruments",
"Spectrometers",
"Spectroscopy"
] |
4,166,537 | https://en.wikipedia.org/wiki/Thin-film%20bulk%20acoustic%20resonator | A thin-film bulk acoustic resonator (FBAR or TFBAR) is a device consisting of a piezoelectric material manufactured by thin film methods between two conductive – typically metallic – electrodes and acoustically isolated from the surrounding medium. The operation is based on the piezoelectricity of the piezolayer between the electrodes.
FBAR devices using piezoelectric films with thicknesses typically ranging from several micrometres down to tenths of micrometres resonate in the frequency range of 100 MHz to 20 GHz. FBAR or TFBAR resonators fall in the category of bulk acoustic resonators (BAW) and piezoelectric resonators and they are used in applications where high frequency, small size like thickness and/or weight is needed.
Industrial application areas of thin film bulk acoustic resonators include high-frequency signal filtering (e.g. for mobile telecommunication devices), crystal replacements, energy harvesting, sensing, sound emission (e.g. in hearing aids) and as part of mechanical qubits.
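As a rough illustration of why films in this thickness range resonate at such frequencies, the fundamental thickness-mode resonance of a free plate is approximately f ≈ v / (2t), where v is the acoustic velocity in the piezolayer and t is its thickness. The sketch below assumes an approximate longitudinal acoustic velocity for AlN and ignores electrode loading and other real-device effects; it is not a design formula.

```python
# Rough estimate of fundamental thickness-mode resonance: f ~ v / (2 * t).
# Ignores electrode mass loading and other real-device effects.
V_ALN = 11_000.0  # approximate longitudinal acoustic velocity in AlN, m/s

def fbar_resonance_hz(thickness_m: float, velocity_m_s: float = V_ALN) -> float:
    """Approximate fundamental resonance frequency of a free piezoelectric plate."""
    return velocity_m_s / (2.0 * thickness_m)

for t_um in (2.0, 1.0, 0.5, 0.25):
    f = fbar_resonance_hz(t_um * 1e-6)
    print(f"{t_um:4.2f} um film  ->  ~{f / 1e9:5.2f} GHz")
```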
Piezoelectricity in thin films
The crystallographic orientation of a thin film depends on the piezomaterial selected and on many other factors, such as the surface on which the film is grown and various manufacturing (thin film growth) conditions (temperature, pressure, gases used, vacuum conditions, etc.).
Any material like lead zirconate titanate (PZT) or barium strontium titanate (BST) from the list of piezoelectric materials could act as an active material in an FBAR. However, the binary compounds aluminium nitride (AlN) and zinc oxide (ZnO) are the two most studied piezoelectric materials manufactured for high frequency FBAR realisations. This is because properties such as stoichiometry are easier to control in binary compounds than in ternary compounds manufactured by thin film methods. For example, it is known that thin film ZnO with the C axis of the crystal structure (crystalline Z axis) normal to the substrate surface excites longitudinal (L) waves. Shear (transverse) (S) waves are excited if the C axis of the film crystal structure is tilted by 41º. It is also possible – depending on the crystal structure of the film – that both waves (L & S) are excited. Therefore, the understanding and control of the crystal structure of the manufactured piezoelectric film is crucial for the operation of the FBAR.
For high frequency purposes like filtering of signals the energy conversion efficiency is the most important item and therefore longitudinal (L) waves are favored and targeted to be used. For sensing and actuation purposes the structural deformation might be more important than energy conversion efficiency and shear-mode wave excitation will be the target of the manufacturing of the piezoelectric film. Tuneability of resonance frequency of the resonator depends on material choices and may extend application areas.
Despite its lower electromechanical coupling coefficient compared to zinc oxide, aluminum nitride, with its wider band gap, has become the most used material in industrial applications, which require a wide bandwidth in signal processing. Compatibility with silicon integrated circuit technology has supported AlN in FBAR resonator based products like radio frequency filters, duplexers, RF power amplifiers and RF receiver modules.
Thin film piezoelectric sensors may be based on various piezoelectric materials depending on the application, but two compound piezoelectric materials are favored due to simplicity of manufacturing.
Doping or adding new materials like scandium (Sc) is a new direction for improving the material properties of AlN for FBARs. Research on new electrode materials and alternatives to aluminium, for example replacing one of the metal electrodes with a very light material such as graphene to minimise loading of the resonator, has been demonstrated to lead to better control of the resonance frequency.
Substrates for FBAR resonators and their applications
FBAR resonators can be manufactured on ceramic (Al2O3 or alumina), sapphire, glass or silicon substrates. However silicon wafer is the most common substrate due to its scalability towards mass manufacturing and compatibility with various manufacturing steps, often typical to semiconductor manufacturing, needed.
During the early studies and experimentation phase of thin film resonators, in 1967, cadmium sulfide (CdS) was evaporated on a resonant piece of bulk quartz crystal which served as a transducer, providing a Q factor (quality factor) of 5000 at the resonance frequency (279 MHz). This was an enabler for tighter frequency control, for the need to use higher frequencies and for utilising FBAR resonators. With the development of thin film technologies it was possible to keep the Q factor high enough, leave out the crystal and increase the resonance frequency. Experimentation utilising silicon as a support material and thin film ZnO as an active piezolayer was published in 1981, which can be considered the first demonstration of a thin film acoustic resonator on silicon.
Application areas
FBAR devices can be used for radio frequency filtering. Most smartphones in 2020 include at least one FBAR-based duplexer or filter and some 4/5G products may even include 20–30 functionalities based on FBAR technology mainly due to the increased complexity of radio frequency front end (RFFE, RF front end) electronics – both receiver and transmitter paths – and the antenna/antenna system. Trends to utilize RF spectrum more efficiently with higher frequencies than roughly 1.5–2.5 GHz and in some cases also simultaneously with increasing RF output power have supported FBAR technology to become one of the key enabling technologies in telecommunication realisations. FBAR technology complements and in some cases competes with surface acoustic wave (SAW) technology and FBAR resonators can replace crystals in crystal oscillators and crystal filters at frequencies more than 100 MHz.
Sensing and actuation is a developing area for FBAR resonators and structures based on them like in micro-mirror displays (DMD)s, as well as energy harvesting by utilizing nanogenerators.
Basic structures
As of 2022 there are two known structures for thin-film bulk acoustic wave (BAW) resonators: free-standing and solidly mounted (SMR) resonators. In a free-standing resonator structure air is used to separate the resonator from the substrate/surrounding. The structure of a free-standing resonator is based on some typical manufacturing steps used in micro-electromechanical systems MEMS. In an SMR structure acoustic mirror(s) providing an acoustic isolation is constructed between the resonator and the surrounding like the substrate. The acoustic mirror (such as a Bragg reflector) typically consists of an odd total number of materials with alternating layers of high and low acoustic impedance materials. The thickness of the mirror materials must also be optimized to be the quarter wavelength for maximum acoustic reflectivity. The basic principle of the SMR structure was introduced in 1965.
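The quarter-wavelength rule for the mirror layers can be illustrated with a short calculation. In the sketch below, the choice of SiO2 and tungsten as the low- and high-impedance materials, their acoustic velocities, and the target frequency are assumptions made for the example; an actual SMR design would use measured film properties and also consider shear waves.

```python
# Quarter-wave acoustic mirror layer thickness: d = v / (4 * f).
# Material velocities are approximate longitudinal values; a real design
# would use measured film properties and also account for shear modes.
F_TARGET = 2.0e9  # Hz, hypothetical filter frequency

materials = {
    "SiO2 (low acoustic impedance)": 5_970.0,   # m/s, approximate
    "W (high acoustic impedance)":   5_200.0,   # m/s, approximate
}

for name, v in materials.items():
    d = v / (4.0 * F_TARGET)
    print(f"{name}: quarter-wave thickness ~ {d * 1e9:.0f} nm at {F_TARGET / 1e9:.1f} GHz")
```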
Schematic pictures of thin film resonators show only the basic principles of the potential structures. In reality some dielectric layers may be needed for other functions, such as for strengthening various parts of the structure. Additionally if needed – for simplifying the final filter layout in the application – resonator structures can be stacked e.g. built on top of each other, as in certain filter applications. However this approach increases the complexity of manufacturing.
Some performance requirements, such as tuning of the resonance frequency, may also require new materials or additional process steps, such as ion milling, which complicate the manufacturing process, and may affect system requirements, for example by adding new functionality to produce tuning voltages.
The newest approach for developing better performing FBARs is to utilize single crystal AlN instead of polycrystalline AlN, and to place electrodes on the same side of the piezolayer.
In order to realize FBAR structures, many precise simulation steps are required during the design phase in order to predict the purity of the resonance frequency and other performance characteristics. At an early phase of the development, basic finite element method (FEM) based modelling techniques that are used for crystals can also be applied and modified for FBARs. Several new methods, such as scanning laser interferometry, are needed to visualise the functionality of the resonators and for helping to improve the design (layout and cross-sectional structure of the resonator) so as to achieve purity of the resonance and the desired resonance modes.
Application drivers
In many applications temperature behavior, stability vs. time, strength and purity of the wanted resonance frequency are forming the base for the performance of the applications based on FBAR resonators. Material choices, layout and design of resonator structures are contributing to the resonator performance and the final performance of the application. Mechanical performance and reliability are determined by the packaging and structure of the resonators in the applications.
A common application of FBARs is radio frequency (RF) filters for use in cell phones and other wireless applications like positioning (GPS, Glonass, BeiDou, Galileo (satellite navigation) etc.), Wi-Fi systems, small telecommunication cells and modules for those. Such filters are made from a network of resonators (either in half-ladder, full-ladder, lattice, a combination of lattice and ladder or stacked topologies) and are designed to remove unwanted frequencies from being transmitted in such devices, while allowing other specific frequencies to be received and transmitted. FBAR filters can also be found in duplexers. FBAR filter technology is complementing surface acoustic wave (SAW) filter technology in areas where increased power handling capability, and electrostatic discharge (ESD) tolerance is needed. Frequencies more than 1.5–2.5 GHz are well-suited for FBAR devices. FBARs on a silicon substrate can be manufactured in high volumes and the manufacturing is supported by all development of semiconductor device fabrication methods. Future requirements of new applications like filtering bandwidth with steep stopband attenuation and lowest possible insertion loss have effects on resonator performance and show development steps needed.
FBARs can also be used in oscillators and synchronizers to replace a crystal/crystals in applications where frequencies more than 100 MHz and/or very low jitter is one of the performance targets.
FBARs can also be used as sensors - gas and liquid. For instance, when a FBAR device is put under mechanical pressure its resonance frequency will shift. Sensing of humidity and volatile organic compounds (VOCs) are demonstrated by using FBARs. A tactile sensor array may also consist of FBAR devices, and gravimetric or mass sensing can be based on FBAR resonators.
As discrete components, FBAR technology based parts like basic resonators and filters are packaged in miniaturised/small form factors like wafer level packages. FBARs can also be integrated with power amplifiers (PA) or low noise amplifiers (LNA) to form a module solution with the related electronic circuitry. Although monolithic integration of FBARs on the same substrate with the electronic circuitry like CMOS has been demonstrated, it requires several additional process steps and mask layers on top of the IC technology, increasing the cost of the solution. Therefore, monolithic solutions have not progressed as much as module solutions in commercial applications. Typical module solutions are a power amplifier-duplexer module (PAD) or a low-noise amplifier (LNA)-filter module, where the FBAR devices and the related circuitry are packaged in the same package, possibly on a separate module substrate.
FBARs can be integrated in complex communication like SimpleLink modules for avoiding area/space requirements of an external, packaged crystal. Therefore, FBAR technology has a key role in electronics miniaturisation specifically in applications where oscillators and precise high performance filters are needed.
Historical and industrial landscape
Resonators and high frequency filters/duplexers
The use of thin film piezoelectric materials in electronics began in the early 1960s at Bell Telephone Laboratories/Bell Labs. Earlier piezoelectric crystals were developed and used as resonators in applications like oscillators with frequencies up to 100 MHz. Thinning was applied for increasing the resonance frequency of the crystals. However, there were limitations of the thinning of crystals and new methods of thin film manufacturing were applied in the early 1970s for increasing accuracy of resonance frequency and targeting increasing manufacturing volumes.
TFR Technologies Inc., founded in 1989, was one of the pioneering companies in the field of FBAR resonators and filters, mostly for space and military applications. The first products were delivered to customers in 1997. TFR Technologies Inc. was acquired in 2005 by TriQuint Semiconductor Inc. In early 2015, RF Micro Devices (RFMD), Inc. and TriQuint Semiconductor, Inc. announced a merger to form Qorvo, which is active in providing FBAR-based products.
HP Laboratories started a project on FBARs in 1993 concentrating in free-standing resonators and filters. In 1999 FBAR activity became part of Agilent Technologies Inc., which in 2001 delivered 25,000 FBAR duplexers for N-CDMA phones. Later in 2005, FBAR activity at Agilent was one of the technologies of Avago Technologies Ltd., which acquired Broadcom Corporation in 2015. In 2016 Avago Technologies Ltd. changed its name to Broadcom Inc., currently active in providing FBAR-based products.
Infineon Technologies AG started to work with SMR-FBARs in 1999, concentrating in telecommunication filters for mobile applications. The first product was delivered to Nokia Mobile Phones Ltd, which launched the first SMR-FBAR-based GSM three-band mobile phone product in 2001. Infineon's FBAR (BAW) filter group was acquired by Avago Technologies Ltd 2008 which later became part of Broadcom as described before.
After acquiring Panasonic's filtering business in 2016, Skyworks Solutions became one of the major players in BAW/FBAR devices, in addition to Broadcom and Qorvo.
Additionally, after acquiring the rest of RF360 Holdings in 2019, Qualcomm and Kyocera are offering thin film resonator based products like RFFE modules and separate filters.
Many other companies, such as Akoustis Technologies, Inc. (founded in 2014), Newsonic, Saiwei Electronics and Texas Instruments (TI), as well as several universities and research institutes, are working to improve FBAR technology, its performance, manufacturing capacity and design capabilities, and are exploring new application areas jointly with system manufacturers and companies providing simulation tools (Ansys, Comsol Multiphysics, Resonant Inc., etc.).
Companies in acoustics have also adopted thin film piezoelectric resonators for miniaturising speakers. One of the pioneering companies utilizing thin film resonators in sensing is Sorex Sensors Ltd.
Thin film resonator based sensors
Because thin film resonators can replace crystals in sensing, the most promising sensor application area for FBAR resonators is similar to that of the quartz crystal microbalance (QCM). Sensing of gaseous and liquid contents can be done with FBAR resonators.
Thin film resonator based speakers and microphones
By adding several thin film resonators connected in parallel on a bulk micro-machined silicon structure, the structure can act as a speaker. The realisation of the FBAR based speaker can be very thin. Small, lightweight microphones can also be based on FBARs.
See also
Resonance
Acoustic resonance
Acoustic impedance
RF and microwave filter
RF front end
Duplexer
Piezoelectric sensor
References
External links
University of Southern California explanation on the operation of FBAR's
PhD thesis of J. V. Tirado, Bulk Acoustic Wave Resonators and their Application to Microwave Devices, 2010, Universitat Autonoma Barcelona, Spain, 201 pages.
PhD thesis of J. Liu, Application of Bragg Reflection for Suppression of Spurious Transverse Mode Resonances in RF BAW Resonators, 2014, Chiba University, Japan, 151 pages.
Broadcom's products based on FBAR technology
FBAR technology opportunity in 5G telecommunication
Products of Qorvo based on BAW (FBAR)
Description of Texas Instrument's SimpleLink module
Akoustis Technologies Inc.
Example of Ansys acoustic tools
Example of FBAR/BAW related simulation tools with Comsol Multiphysics
Research on adding scandium in AlN for improved performance
IPR (Intellectual Property Rights) landscape of acoustic wave filters by KnowMade, 2019
SAW and BAW RF acoustic filters: same challenges, opposite dynamics by KnowMade, 2023
Sound
Acoustics
Resonators | Thin-film bulk acoustic resonator | [
"Physics"
] | 3,442 | [
"Classical mechanics",
"Acoustics"
] |
17,366,789 | https://en.wikipedia.org/wiki/Jiggle%20syphon | A jiggle syphon (or siphon) is the combination of a syphon pipe and a simple priming pump that uses mechanical shaking action to pump enough liquid up the pipe to reach the highest point, and thus start the syphoning action.
Principle of operation
The jiggle pump consists of a chamber, in line with the end of the pipe that sits in the liquid to be moved. The chamber is somewhat wider than the pipe, and narrows to approximately the pipe diameter at both ends. One end attaches to the pipe, the other end is open to the liquid. Within the chamber is a sphere, denser than the liquid to be pumped, small enough to move freely within the chamber but large enough to not be able to leave the chamber.
To begin with, gravity holds the sphere at the bottom, open, end of the chamber, although hydrostatic pressure will force the liquid up and around the sphere upon immersion. When the pipe is vigorously shaken up and down, the sphere moves upwards, lifting some liquid in the pipe; then when it falls down again, the increased hydrostatic pressure within the pipe (which now has a higher head of fluid in it than the surrounding container) pushes the sphere down and prevents the liquid flowing back. Repeated "jigglings" lift the fluid up the pipe until it reaches the highest point in the pipe, whereupon gravity causes it to start to flow down the other side, and the syphon action will "suck" the liquid through the system. This causes the pressure in the pipe to drop below the hydrostatic pressure in the container, so the sphere is lifted upwards, allowing the liquid to flow.
History
See also
Syphon for the principles and practice of syphoning.
References
Fluid dynamics | Jiggle syphon | [
"Chemistry",
"Engineering"
] | 353 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
5,558,601 | https://en.wikipedia.org/wiki/Transcription%20coregulator | In molecular biology and genetics, transcription coregulators are proteins that interact with transcription factors to either activate or repress the transcription of specific genes. Transcription coregulators that activate gene transcription are referred to as coactivators while those that repress are known as corepressors. The mechanism of action of transcription coregulators is to modify chromatin structure and thereby make the associated DNA more or less accessible to transcription. In humans several dozen to several hundred coregulators are known, depending on the level of confidence with which the characterisation of a protein as a coregulator can be made. One class of transcription coregulators modifies chromatin structure through covalent modification of histones. A second ATP dependent class modifies the conformation of chromatin.
Histone acetyltransferases
Nuclear DNA is normally tightly wrapped around histones rendering the DNA inaccessible to the general transcription machinery and hence this tight association prevents transcription of DNA. At physiological pH, the phosphate component of the DNA backbone is deprotonated which gives DNA a net negative charge. Histones are rich in lysine residues which at physiological pH are protonated and therefore positively charged. The electrostatic attraction between these opposite charges is largely responsible for the tight binding of DNA to histones.
Many coactivator proteins have intrinsic histone acetyltransferase (HAT) catalytic activity or recruit other proteins with this activity to promoters. These HAT proteins are able to acetylate the amine group in the sidechain of histone lysine residues which makes lysine much less basic, not protonated at physiological pH, and therefore neutralizes the positive charges in the histone proteins. This charge neutralization weakens the binding of DNA to histones causing the DNA to unwind from the histone proteins and thereby significantly increases the rate of transcription of this DNA.
Many corepressors can recruit histone deacetylase (HDAC) enzymes to promoters. These enzymes catalyze the hydrolysis of acetylated lysine residues, restoring the positive charge to histone proteins and hence the tight association between histones and DNA. PELP-1 can act as a transcriptional corepressor for transcription factors in the nuclear receptor family such as glucocorticoid receptors.
Nuclear receptor coactivators
Nuclear receptors bind to coactivators in a ligand-dependent manner. A common feature of nuclear receptor coactivators is that they contain one or more LXXLL binding motifs (a contiguous sequence of 5 amino acids where L = leucine and X = any amino acid) referred to as NR (nuclear receptor) boxes. The LXXLL binding motifs have been shown by X-ray crystallography to bind to a groove on the surface of ligand binding domain of nuclear receptors. Examples include:
ARA (androgen receptor associated protein)
ARA54
ARA55
ARA70
AIRE
BCAS3 (breast carcinoma amplified sequence 3)
CREB-binding protein
CRTC (CREB regulated transcription coactivator)
CRTC1
CRTC2
CRTC3
CARM1 (coactivator-associated arginine methyltransferase 1)
Nuclear receptor coactivator (NCOA)
NCOA1/SRC-1 (steroid receptor coactivator-1)
NCOA2/GRIP1 (glucocorticoid receptor interacting protein 1)/TIF2 (transcriptional intermediary factor 2)
NCOA3/AIB1 (amplified in breast cancer 1)
NCOA4/ARA70 (androgen receptor associated protein 70)
NCOA5
NCOA6
NCOA7
p300
PCAF (p300/CBP associating factor)
PGC1 (proliferator activated receptor gamma coactivator 1)
PPARGC1A
PPARGC1B
PNRC (proline-rich nuclear receptor coactivator 1)
PNRC1
PNRC2
Nuclear receptor corepressors
Corepressor proteins also bind to the surface of the ligand binding domain of nuclear receptors, but through an LXXXIXXX(I/L) motif of amino acids (where L = leucine, I = isoleucine and X = any amino acid). In addition, corepressors bind preferentially to the apo (ligand free) form of the nuclear receptor (or possibly antagonist bound receptor).
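As a concrete illustration, the two binding motifs described above can be searched for in a protein sequence with simple regular expressions. The sketch below is illustrative only; the example peptide sequence is invented and does not correspond to any real coregulator.

```python
import re

# Coactivator NR box: LXXLL; corepressor motif: LXXXIXXX(I/L), as described above.
# Lookaheads are used so that overlapping motif occurrences are also reported.
MOTIFS = {
    "LXXLL (coactivator NR box)":      re.compile(r"(?=(L..LL))"),
    "LXXXIXXX(I/L) (corepressor box)": re.compile(r"(?=(L...I...[IL]))"),
}

def find_motifs(sequence: str):
    """Return {motif name: [(1-based start position, matched residues), ...]}."""
    return {name: [(m.start() + 1, m.group(1)) for m in pat.finditer(sequence)]
            for name, pat in MOTIFS.items()}

# Hypothetical peptide containing one example of each motif.
seq = "MKQLLRHLLSPDGLEDHIRKMLAGV"
print(find_motifs(seq))
```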
CtBP (associates with class II histone deacetylases)
LCoR (ligand-dependent corepressor)
Nuclear receptor CO-Repressor (NCOR)
NCOR1
NCOR2/SMRT (Silencing Mediator (co-repressor) for Retinoid and Thyroid-hormone receptors) (associates with histone deacetylase-3)
Rb (retinoblastoma protein) (associates with histone deacetylase-1 and -2)
RCOR (REST corepressor)
RCOR1
RCOR2
RCOR3
Sin3
SIN3A
SIN3B
TIF1 (transcriptional intermediary factor 1)
TRIM24 (tripartite motif-containing 24)
TRIM28 (tripartite motif-containing 28)
TRIM33 (tripartite motif-containing 33)
Dual function activator/repressors
NSD1
PELP-1 (proline, glutamic acid and leucine rich protein 1)
RIP140 (receptor-interacting protein 140)
YAP
WWTR1 (TAZ)
ATP-dependent remodeling factors
SWI/SNF family
chromatin structure remodeling complex
ISWI protein
See also
Coactivator (genetics)
Corepressor (genetics)
Nuclear receptor coregulators
RNA polymerase control by chromatin structure
Transcription
Transcription factor
TcoF-DB
References
External links
Gene expression
Transcription coregulators | Transcription coregulator | [
"Chemistry",
"Biology"
] | 1,254 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
5,558,617 | https://en.wikipedia.org/wiki/BLOSUM | In bioinformatics, the BLOSUM (BLOcks SUbstitution Matrix) matrix is a substitution matrix used for sequence alignment of proteins. BLOSUM matrices are used to score alignments between evolutionarily divergent protein sequences. They are based on local alignments. BLOSUM matrices were first introduced in a paper by Steven Henikoff and Jorja Henikoff. They scanned the BLOCKS database for very conserved regions of protein families (that do not have gaps in the sequence alignment) and then counted the relative frequencies of amino acids and their substitution probabilities. Then, they calculated a log-odds score for each of the 210 possible substitution pairs of the 20 standard amino acids. All BLOSUM matrices are based on observed alignments; they are not extrapolated from comparisons of closely related proteins like the PAM Matrices.
Biological background
The genetic instructions of every replicating cell in a living organism are contained within its DNA. Throughout the cell's lifetime, this information is transcribed and replicated by cellular mechanisms to produce proteins or to provide instructions for daughter cells during cell division, and the possibility exists that the DNA may be altered during these processes. This is known as a mutation. At the molecular level, there are regulatory systems that correct most — but not all — of these changes to the DNA before it is replicated.
The functionality of a protein is highly dependent on its structure. Changing a single amino acid in a protein may reduce its ability to carry out this function, or the mutation may even change the function that the protein carries out. Changes like these may severely impact a crucial function in a cell, potentially causing the cell — and in extreme cases, the organism — to die. Conversely, the change may allow the cell to continue functioning albeit differently, and the mutation can be passed on to the organism's offspring. If this change does not result in any significant physical disadvantage to the offspring, the possibility exists that this mutation will persist within the population. The possibility also exists that the change in function becomes advantageous.
The 20 amino acids translated by the genetic code vary greatly by the physical and chemical properties of their side chains. However, these amino acids can be categorised into groups with similar physicochemical properties. Substituting an amino acid with another from the same category is more likely to have a smaller impact on the structure and function of a protein than replacement with an amino acid from a different category.
Sequence alignment is a fundamental research method for modern biology. The most common sequence alignment for protein is to look for similarity between different sequences in order to infer function or establish evolutionary relationships. This helps researchers better understand the origin and function of genes through the nature of homology and conservation. Substitution matrices are utilized in algorithms to calculate the similarity of different sequences of proteins; however, the utility of Dayhoff PAM Matrix has decreased over time due to the requirement of sequences with a similarity more than 85%. In order to fill in this gap, Henikoff and Henikoff introduced BLOSUM (BLOcks SUbstitution Matrix) matrix which led to marked improvements in alignments and in searches using queries from each of the groups of related proteins.
Terminology
BLOSUM Blocks Substitution Matrix, a substitution matrix used for sequence alignment of proteins.
Scoring metrics (statistical versus biological) When evaluating a sequence alignment, one would like to know how meaningful it is. This requires a scoring matrix, or a table of values that describes the probability of a biologically meaningful amino-acid or nucleotide residue-pair occurring in an alignment. Scores for each position are obtained from the frequencies of substitutions in blocks of local alignments of protein sequences.
BLOSUM r
The matrix built from blocks with less than r% of similarity
E.g., BLOSUM62 is the matrix built using sequences with less than 62% similarity (sequences with ≥ 62% identity were clustered together).
Note: BLOSUM 62 is the default matrix for protein BLAST. Experimentation has shown that the BLOSUM-62 matrix is among the best for detecting most weak protein similarities.
Several sets of BLOSUM matrices exist using different alignment databases, named with numbers. BLOSUM matrices with high numbers are designed for comparing closely related sequences, while those with low numbers are designed for comparing distant related sequences. For example, BLOSUM80 is used for closely related alignments, and BLOSUM45 is used for more distantly related alignments. The matrices were created by merging (clustering) all sequences that were more similar than a given percentage into one single sequence and then comparing those sequences (that were all more divergent than the given percentage value) only; thus reducing the contribution of closely related sequences. The percentage used was appended to the name, giving BLOSUM80 for example where sequences that were more than 80% identical were clustered.
Construction of BLOSUM matrices
BLOSUM matrices are obtained by using blocks of similar amino acid sequences as data, then applying statistical methods to the data to obtain the similarity scores.
Statistical Methods Steps :
Eliminating Sequences
Eliminate the sequences that are more than r% identical. There are two ways to eliminate the sequences. It can be done either by removing sequences from the block or by finding similar sequences and replacing them with a new sequence that represents the cluster. Elimination is done to remove protein sequences that are more similar than the specified threshold.
Calculating Frequency & Probability
The BLOCKS database stores the sequence alignments of the most conserved regions of protein families. These alignments are used to derive the BLOSUM matrices. Only the sequences with a percentage of identity lower than the threshold are used.
For each block, the pairs of amino acids in each column of the multiple alignment are counted.
Log odds ratio
It gives the ratio of the occurrence of each amino acid combination in the observed data to the expected value of occurrence of the pair.
It is rounded off and used in the substitution matrix:
log odds ratio = log2 (p_ij / e_ij)
where p_ij is the probability of observing the pair of amino acids i and j in the blocks, and e_ij is the expected probability of such a pair occurring by chance, given the background probabilities of each amino acid.
BLOSUM Matrices
The odds for relatedness are calculated from the log odds ratios, which are then rounded off to give the BLOSUM substitution matrices.
Score of the BLOSUM matrices
A scoring matrix or a table of values is required for evaluating the significance of a sequence alignment, such as describing the probability of a biologically meaningful amino-acid or nucleotide residue-pair occurring in an alignment. Typically, when two nucleotide sequences are being compared, all that is being scored is whether or not two bases are the same at one position. All matches and mismatches are respectively given the same score (typically +1 or +5 for matches, and -1 or -4 for mismatches). But it is different for proteins. Substitution matrices for amino acids are more complicated and implicitly take into account everything that might affect the frequency with which any amino acid is substituted for another. The objective is to provide a relatively heavy penalty for aligning two residues together if they have a low probability of being homologous (correctly aligned by evolutionary descent). Two major forces drive the amino-acid substitution rates away from uniformity: different substitutions occur with different frequencies, and some substitutions are less functionally tolerated than others. Poorly tolerated substitutions are thus selected against.
Commonly used substitution matrices include the blocks substitution (BLOSUM) and point accepted mutation (PAM) matrices. Both are based on taking sets of high-confidence alignments of many homologous proteins and assessing the frequencies of all substitutions, but they are computed using different methods.
Scores within a BLOSUM are log-odds scores that measure, in an alignment, the logarithm of the ratio of the likelihood of two amino acids appearing with a biological sense to the likelihood of the same amino acids appearing by chance. The matrices are based on the minimum percentage identity of the aligned protein sequences used in calculating them. Every possible identity or substitution is assigned a score based on its observed frequencies in the alignment of related proteins. A positive score is given to the more likely substitutions while a negative score is given to the less likely substitutions.
To calculate a BLOSUM matrix, the following equation is used:
S_ij = (1/λ) × log (p_ij / (q_i × q_j))
Here, p_ij is the probability of two amino acids i and j replacing each other in a homologous sequence, and q_i and q_j are the background probabilities of finding the amino acids i and j in any protein sequence. The factor λ is a scaling factor, set such that the matrix contains easily computable integer values.
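The construction described above can be illustrated with a toy calculation over a reduced three-letter alphabet. The pair counts below are invented for demonstration (a real matrix is derived from pair counts over the clustered BLOCKS alignments for all 20 amino acids), and the scores are expressed in half-bit units, consistent with the usual BLOSUM scaling.

```python
# Toy BLOSUM-style score calculation over a reduced three-letter alphabet.
# The pair counts are invented; a real matrix uses pair counts taken from the
# clustered BLOCKS alignments over all 20 amino acids.
import math
from itertools import combinations_with_replacement

pair_counts = {                      # observed aligned (unordered) pairs
    ("A", "A"): 60, ("L", "L"): 40, ("S", "S"): 30,
    ("A", "L"): 10, ("A", "S"): 25, ("L", "S"): 15,
}
total = sum(pair_counts.values())
p = {pair: n / total for pair, n in pair_counts.items()}   # observed p_ij

# Background frequencies q_i: a pair (i, j) contributes one i and one j.
q = {aa: 0.0 for aa in "ALS"}
for (i, j), prob in p.items():
    if i == j:
        q[i] += prob
    else:
        q[i] += prob / 2
        q[j] += prob / 2

def expected(i, j):
    """Expected pair probability by chance: q_i^2 if i == j, else 2*q_i*q_j."""
    return q[i] * q[j] if i == j else 2 * q[i] * q[j]

# Log-odds scores in half-bit units, rounded to integers.
scores = {}
for i, j in combinations_with_replacement("ALS", 2):
    p_ij = p.get((i, j), p.get((j, i)))
    scores[(i, j)] = round(2 * math.log2(p_ij / expected(i, j)))

print(scores)
```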
An example - BLOSUM62
BLOSUM80: more related proteins
BLOSUM62: midrange
BLOSUM45: distantly related proteins
An article in Nature Biotechnology revealed that the BLOSUM62 used for so many years as a standard is not exactly accurate according to the algorithm described by Henikoff and Henikoff. Surprisingly, the miscalculated BLOSUM62 improves search performance.
The BLOSUM62 matrix has the amino acids in the table grouped according to the chemistry of the side chain. Each value in the matrix is calculated by dividing the frequency of occurrence of the amino acid pair in the BLOCKS database, clustered at the 62% level, by the probability that the same two amino acids might align by chance. The ratio is then converted to a logarithm and expressed as a log odds score, as for PAM. BLOSUM matrices are usually scaled in half-bit units. A score of zero indicates that the frequency with which a given two amino acids were found aligned in the database was as expected by chance, while a positive score indicates that the alignment was found more often than by chance, and a negative score indicates that the alignment was found less often than by chance.
Some uses in bioinformatics
Research applications
BLOSUM scores were used to predict and understand the surface gene variants among hepatitis B virus carriers and T-cell epitopes.
Surface gene variants among hepatitis B virus carriers
DNA sequences of HBsAg were obtained from 180 patients, of which 51 were chronic HBV carriers and 129 were newly diagnosed patients, and compared with consensus sequences built with 168 HBV sequences imported from GenBank. Literature review and BLOSUM scores were used to define potentially altered antigenicity.
Reliable prediction of T-cell epitopes
A novel input representation has been developed consisting of a combination of sparse encoding, BLOSUM encoding, and input derived from hidden Markov models. This method predicts T-cell epitopes for the genome of hepatitis C virus, and possible applications of the prediction method to guide the process of rational vaccine design have been discussed.
Use in BLAST
BLOSUM matrices are also used as a scoring matrix when comparing DNA sequences or protein sequences to judge the quality of the alignment. This form of scoring system is utilized by a wide range of alignment software including BLAST.
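As a sketch of how alignment software applies such a matrix, the function below scores an ungapped protein alignment by summing per-column substitution scores. Only a handful of BLOSUM62 entries are hard-coded for the example; a real application would load the complete published matrix and also apply gap penalties.

```python
# Scoring an ungapped protein alignment with a few BLOSUM62 values.
# Only a small subset of entries is included here; a complete, published
# BLOSUM62 table (plus gap penalties) is needed in practice.
BLOSUM62_SUBSET = {
    ("A", "A"): 4, ("L", "L"): 4, ("I", "I"): 4, ("L", "I"): 2,
    ("D", "D"): 6, ("E", "E"): 5, ("D", "E"): 2,
    ("K", "K"): 5, ("R", "R"): 5, ("K", "R"): 2,
    ("W", "W"): 11,
}

def pair_score(a: str, b: str) -> int:
    """Look up a symmetric substitution score (raises KeyError if absent)."""
    return BLOSUM62_SUBSET.get((a, b), BLOSUM62_SUBSET[(b, a)])

def alignment_score(seq1: str, seq2: str) -> int:
    """Sum of per-column scores for two already-aligned, gap-free sequences."""
    assert len(seq1) == len(seq2)
    return sum(pair_score(a, b) for a, b in zip(seq1, seq2))

# Conservative substitutions (D->E, L->I) still score positively.
print(alignment_score("ADKLW", "AEKIW"))   # 4 + 2 + 5 + 2 + 11 = 24
```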
Comparing PAM and BLOSUM
In addition to BLOSUM matrices, a previously developed scoring matrix can be used. This is known as a PAM matrix. The two result in the same scoring outcome, but use differing methodologies. BLOSUM looks directly at mutations in motifs of related sequences, while PAMs extrapolate evolutionary information based on closely related sequences.
Since both PAM and BLOSUM are different methods for showing the same scoring information, the two can be compared but due to the very different method of obtaining this score, a PAM100 does not equal a BLOSUM100.
The relationship between PAM and BLOSUM
The differences between PAM and BLOSUM
Software Packages
There are several software packages in different programming languages that allow easy use of Blosum matrices.
Examples are the blosum module for Python, or the BioJava library for Java.
See also
Sequence alignment
Point accepted mutation
References
External links
BLOCKS WWW server
Scoring systems for BLAST at NCBI
Data files of BLOSUM on the NCBI FTP server.
Interactive BLOSUM Network Visualization
Genetics
Biochemistry methods
Computational phylogenetics
Matrices | BLOSUM | [
"Chemistry",
"Mathematics",
"Biology"
] | 2,429 | [
"Biochemistry methods",
"Genetics techniques",
"Biological engineering",
"Computational phylogenetics",
"Mathematical objects",
"Matrices (mathematics)",
"Bioinformatics",
"Biochemistry",
"Phylogenetics"
] |
5,563,726 | https://en.wikipedia.org/wiki/Thin-film%20composite%20membrane | Thin-film composite membranes (TFC or TFM) are semipermeable membranes manufactured to provide selectivity with high permeability. Most TFC's are used in water purification or water desalination systems. They also have use in chemical applications such as gas separations, dehumidification, batteries and fuel cells. A TFC membrane can be considered a molecular sieve constructed in the form of a film from two or more layered materials. The additional layers provide structural strength and a low-defect surface to support a selective layer that is thin enough to be selective but not so thick that it causes low permeability.
TFC membranes for water treatment are commonly classified as nanofiltration (NF) and reverse osmosis (RO) membranes. Both types are typically made out of a thin polyamide layer (<200 nm) deposited on top of a polyethersulfone or polysulfone porous layer (about 50 microns) on top of a non-woven fabric support sheet. The three layer configuration gives the desired properties of high rejection of undesired materials (like salts), high filtration rate, and good mechanical strength. The polyamide top layer is responsible for the high rejection and is chosen primarily for its permeability to water and relative impermeability to various dissolved impurities including salt ions and other small, unfilterable molecules. Although not fully commercialized yet, TFC's are also used in other water treatment technologies, including Forward osmosis, membrane distillation, and electrodialysis.
History
The first viable reverse osmosis membrane was made from cellulose acetate as an integrally skinned asymmetric semi-permeable membrane. This membrane was made by Loeb and Sourirajan at UCLA in 1959 and patented in 1960. In 1972, John Cadotte of North Star Technologies (later FilmTec Corporation) developed the first interfacial polyamide (IP) thin-film-composite (TFC) membrane. The current generation of reverse osmosis (RO) membrane materials are based on a composite material patented by FilmTec Corporation in 1970 (now part of DuPont). Today, most such membranes for reverse osmosis and nanofiltration use a Polyamide active layer.
Structure and materials
As is suggested by the name, TFC membranes are composed of multiple layers. Membranes designed for desalination use an active thin-film layer of polyamide layered with polysulfone as a porous support layer. The active layers tend to be extremely thin and relatively nonporous. The chemistry of these layers often imparts selectivity. Meanwhile the support layers tend to need to be both extremely porous and robust to higher pressures.
Other materials, usually zeolites, are also used in the manufacture of TFC membranes.
Applications
Thin film composite membranes are used in
water purification, for example in RO plants;
as a chemical reaction buffer (batteries and fuel cells);
in industrial gas separations.
Limitations
Thin film composite membranes typically suffer from compaction effects under pressure. As the water pressure increases, the polymers are slightly reorganized into a tighter fitting structure that results in a lower porosity, ultimately limiting the efficiency of the system designed to use them. In general, the higher the pressure, the greater the compaction.
Surface fouling: Colloidal particulates, bacteria infestation (biofouling).
Chemical decomposition and oxidation.
Performance
A filtration membrane's performance is rated by selectivity, chemical resistance, operational pressure differential and the pure water flow rate per unit area.
Due to the importance of throughput, a membrane is manufactured as thinly as possible. These thin layers introduce defects that may affect selectivity, so system design usually trades off the desired throughput against both selectivity and operational pressure.
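The trade-off between throughput and operating pressure can be illustrated with the common solution-diffusion estimate of reverse-osmosis water flux, J_w = A × (ΔP − Δπ), where A is the membrane water permeability coefficient, ΔP the applied pressure difference and Δπ the osmotic pressure difference. The permeability and pressure values in the sketch below are hypothetical round numbers, not the specifications of any actual membrane.

```python
# Illustrative RO water-flux estimate using J_w = A * (dP - dPi).
# Permeability, osmotic pressure and applied pressures are hypothetical values.
A_PERM = 3.0      # L m^-2 h^-1 bar^-1, assumed water permeability coefficient
DELTA_PI = 28.0   # bar, approximate osmotic pressure of a seawater-like feed

def water_flux(applied_pressure_bar: float) -> float:
    """Water flux in L m^-2 h^-1; zero if pressure cannot overcome osmosis."""
    return max(0.0, A_PERM * (applied_pressure_bar - DELTA_PI))

for pressure in (30, 45, 60):
    print(f"{pressure} bar -> {water_flux(pressure):.0f} L m^-2 h^-1")
```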
In applications other than filtration, parameters such as mechanical strength, temperature stability, and electrical conductivity may dominate.
Active research areas
Nano-composite membranes (TFN). Key points: multiple layers, multiple materials.
Mitigation of membrane fouling
New materials, synthetic zeolites, etc. to obtain higher performance.
NanoH2O Inc. commercialized a membrane in which zeolite nanoparticles were synthesized and embedded within an RO membrane to form a thin-film nanocomposite, or TFN, which has proven to be 50–100% more permeable than conventional RO membranes while maintaining the same level of salt rejection.
Fuel-cells.
Batteries.
See also
Maxwell–Stefan diffusion
Reverse Osmosis
Nanofiltration
References
Filters
Membrane technology
Water technology | Thin-film composite membrane | [
"Chemistry",
"Engineering"
] | 950 | [
"Separation processes",
"Chemical equipment",
"Filters",
"Membrane technology",
"Filtration",
"Water technology"
] |
5,565,460 | https://en.wikipedia.org/wiki/Compact%20Reconnaissance%20Imaging%20Spectrometer%20for%20Mars | The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) was a visible-infrared spectrometer aboard the Mars Reconnaissance Orbiter searching for mineralogic indications of past and present water on Mars. The CRISM instrument team comprised scientists from over ten universities and was led by principal investigator Scott Murchie. CRISM was designed, built, and tested by the Johns Hopkins University Applied Physics Laboratory.
Objectives
CRISM was being used to identify locations on Mars that may have hosted water, a solvent considered important in the search for past or present life on Mars. In order to do this, CRISM was mapping the presence of minerals and chemicals that may indicate past interaction with water - low-temperature or hydrothermal. These materials include iron and oxides, which can be chemically altered by water, and phyllosilicates and carbonates, which form in the presence of water. All of these materials have characteristic patterns in their visible-infrared reflections and were readily seen by CRISM. In addition, CRISM was monitoring ice and dust particulates in the Martian atmosphere to learn more about its climate and seasons.
Instrument overview
CRISM measured visible and infrared electromagnetic radiation from 362 to 3920 nanometers in 6.55 nanometer increments. The instrument had two modes, a multispectral untargeted mode and a hyperspectral targeted mode. In the untargeted mode, CRISM reconnoiters Mars, recording approximately 50 of its 544 measurable wavelengths at a resolution of 100 to 200 meters per pixel. In this mode CRISM mapped half of Mars within a few months after aerobraking and most of the planet after one year. The objective of this mode is to identify new scientifically interesting locations that could be further investigated. In targeted mode, the spectrometer measured energy in all 544 wavelengths. When the MRO spacecraft is at an altitude of 300 km, CRISM detects a narrow but long strip on the Martian surface about 18 kilometers across and 10,800 kilometers long. The instrument swept this strip across the surface as MRO orbits Mars to image the surface.
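The figure of 544 measurable wavelengths follows directly from the stated spectral range and sampling interval, as the short check below shows; it simply reproduces numbers already given above.

```python
# Number of spectral channels implied by the stated range and sampling interval.
low_nm, high_nm, step_nm = 362.0, 3920.0, 6.55
channels = round((high_nm - low_nm) / step_nm) + 1
print(channels)   # 544
```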
Instrument design
The data collecting part of CRISM was called the Optical Sensor Unit (OSU) and consisted of two spectrographs, one that detected visible light from 400 to 830 nm and one that detected infrared light from 830 to 4050 nm. The infrared detector was cooled to –173° Celsius (–280° Fahrenheit) by a radiator plate and three cryogenic coolers. While in targeted mode, the instrument gimbals in order to continue pointing at one area even though the MRO spacecraft is moving. The extra time collecting data over a targeted area increases the signal-to-noise ratio as well as the spatial and spectral resolution of the image. This scanning ability also allowed the instrument to perform emission phase functions, viewing the same surface through variable amounts of atmosphere, which would be used to determine atmospheric properties. The Data Processing Unit (DPU) of CRISM performs in-flight data processing including compressing the data before transmission.
Investigations
CRISM began its exploration of Mars in late 2006. Results from the OMEGA visible/near-infrared spectrometer on Mars Express (2003–present), the Mars Exploration Rovers (MER; 2003–2019), the TES thermal emission spectrometer on Mars Global Surveyor (MGS; 1997-2006), and the THEMIS thermal imaging system on Mars Odyssey (2004–present) helped to frame the themes for CRISM's exploration:
Where and when did Mars have persistently wet environments?
What is the composition of Mars' crust?
What are the characteristics of Mars' modern climate?
In November 2018, it was announced that data artifacts had caused CRISM to report some additional pixels falsely indicating the minerals alunite, kieserite, serpentine and perchlorate. The instrument team found that some false positives were caused by a filtering step when the detector switches from a high luminosity area to shadows. Reportedly, 0.05% of the pixels were indicating perchlorate, now known to be a false high estimate by this instrument. However, both the Phoenix lander and the Curiosity rover measured 0.5% perchlorates in the soil, suggesting a global distribution of these salts. Perchlorate is of interest to astrobiologists, as it sequesters water molecules from the atmosphere and lowers the freezing point of water, potentially creating thin films of watery brine that, although toxic to most Earth life, could potentially offer habitats for native Martian microbes in the shallow subsurface. (See: Life on Mars#Perchlorates)
Persistently wet environments
Aqueous minerals are minerals that form in water, either by chemical alteration of pre-existing rock or by precipitation out of solution. The minerals indicate where liquid water existed long enough to react chemically with rock. Which minerals form depends on temperature, salinity, pH, and composition of the parent rock. Which aqueous minerals are present on Mars therefore provides important clues to understanding past environments. The OMEGA spectrometer on the Mars Express orbiter and the MER rovers both uncovered evidence for aqueous minerals. OMEGA revealed two distinct kinds of past aqueous deposits. The first, containing sulfates such as gypsum and kieserite, is found in layered deposits of Hesperian age (Martian middle age, roughly from 3.7 to 3 billion years ago). The second, rich in several different kinds of phyllosilicates, instead occurs in rocks of Noachian age (older than about 3.7 billion years). The different ages and mineral chemistries suggest an early water-rich environment in which phyllosilicates formed, followed by a drier, more saline and acidic environment in which sulfates formed. The MER Opportunity rover spent years exploring sedimentary rocks formed in the latter environment, full of sulfates, salts, and oxidized iron minerals.
Soil forms from parent rocks through physical disintegration of rocks and by chemical alteration of the rock fragments. The types of soil minerals can reveal if the environment was cool or warm, wet or dry, or whether the water was fresh or salty. Because CRISM is able to detect many minerals in the soil or regolith, the instrument is being used to help decipher ancient Martian environments. CRISM has found a characteristic layering pattern of aluminum-rich clays overlying iron- and magnesium-rich clays in many areas scattered through Mars' highlands. Surrounding Mawrth Vallis, these "layered clays" cover hundreds of thousands of square kilometers. Similar layering occurs near the Isidis basin, in the Noachian plains surrounding Valles Marineris, and in Noachian plains surrounding the Tharsis plateau. The global distribution of layered clays suggests a global process. Layered clays are late Noachian in age, dating from the same time as water-carved valley networks. The layered clay composition is similar to what is expected for soil formation on Earth - a weathered upper layer leached of soluble iron and magnesium, leaving an insoluble aluminum-rich residue, with a lower layer that still retains its iron and magnesium. Some researchers have suggested that the Martian clay "layer cake" was created by soil-forming processes, including rainfall, at the time that valley networks formed.
Lake and marine environments on Earth are favorable for fossil preservation, especially where the sediments they left behind are rich in carbonates or clays. Hundreds of highland craters on Mars have horizontally layered, sedimentary rocks that may have formed in lakes. CRISM has taken many targeted observations of these rocks to measure their mineralogy and how the minerals vary between layers. Variation between layers helps us to understand the sequence of events that formed the sedimentary rocks. The Mars Orbiter Camera found that where valley networks empty into craters, the craters commonly contain fan-shaped deposits. However, it was not completely clear whether the fans formed by sediment deposition on dry crater floors (alluvial fans) or in crater lakes (deltas). CRISM discovered that in the fans' lowermost layers, there are concentrated deposits of clay. More clay occurs beyond the end of the fans on the crater floors, and in some cases there is also opal. On Earth, the lowermost layers of deltas are called bottom set beds, and they are made of clays that settled out of inflowing river water in quiet, deep parts of the lakes. This discovery supports the idea that many fans formed in crater lakes where, potentially, evidence for habitable environments could be preserved.
Not all ancient Martian lakes were fed by inflowing valley networks. CRISM discovered several craters on the western slope of Tharsis that contain "bathtub rings" of sulfate minerals and a kind of phyllosilicate called kaolinite. Both minerals can form together by precipitating out of acidic, saline water. These craters lack inflowing valley networks, showing that they were not fed by rivers - instead, they must have been fed by inflowing groundwater.
The identification of hot spring deposits was a priority for CRISM, because hot springs would have had energy (geothermal heat) and water, two basic requirements for life. One of the signatures of hot springs on Earth is deposits of silica. The MER Spirit rover explored a silica-rich deposit called "Home Plate" that is thought to have formed in a hot spring. CRISM has discovered other silica-rich deposits in many locations. Some are associated with central peaks of impact craters, which are sites of heating driven by meteor impact. Silica has also been identified on the flanks of volcanic cones inside the caldera of the Syrtis Major shield volcano, forming light-colored mounds that look like scaled-up versions of Home Plate. Elsewhere, in the westernmost parts of Valles Marineris, near the core of the Tharsis volcanic province, there are sulfate and clay deposits suggestive of "warm" springs. Hot spring deposits are one of the most promising areas on Mars to search for evidence for past life.
One of the leading hypotheses for why ancient Mars was wetter than today is that a thick, carbon dioxide-rich atmosphere created a global greenhouse that warmed the surface enough for liquid water to occur in large amounts. Carbon dioxide ice in today's polar caps is too limited in volume to hold that ancient atmosphere. If a thick atmosphere ever existed, it was either stripped away to space by the solar wind or impacts, or it reacted with silicate rocks to become trapped as carbonates in Mars' crust. One of the goals that drove CRISM's design was to find carbonates, to try to resolve this question about what happened to Mars' atmosphere. One of CRISM's most important discoveries was the identification of carbonate bedrock in Nili Fossae in 2008. Soon thereafter, landed missions to Mars started identifying carbonates on the surface; the Phoenix Mars lander found between 3–5 wt% calcite (CaCO3) at its northern lowland landing site, while the MER Spirit rover identified outcrops rich in magnesium-iron carbonate (16–34 wt%) in the Columbia Hills of Gusev crater. Later CRISM analyses identified carbonates in the rim of Huygens crater, which suggested that there could be extensive deposits of buried carbonates on Mars. However, a study by CRISM scientists estimated that all of the carbonate rock on Mars holds less carbon dioxide than is in the present Martian atmosphere. They determined that if a dense ancient Martian atmosphere did exist, it is probably not trapped in the crust.
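As a rough back-of-the-envelope check (an illustrative estimate using assumed round numbers, not the CRISM team's calculation), the mass of carbon dioxide in the present Martian atmosphere can be estimated from the mean surface pressure; this makes clear how small a carbonate reservoir holding less than one present-day atmosphere of CO2 is compared with the much thicker atmosphere invoked by greenhouse models.

```python
import math

# Assumed round numbers (illustrative, not from the cited study):
P = 610.0          # mean surface pressure, Pa
g = 3.71           # Martian surface gravity, m/s^2
R = 3.3895e6       # Mars radius, m
co2_fraction = 0.95

surface_area = 4.0 * math.pi * R**2            # ~1.44e14 m^2
column_mass = P / g                            # atmospheric mass per m^2, kg
atmosphere_mass = column_mass * surface_area   # total atmospheric mass, kg
co2_mass = co2_fraction * atmosphere_mass      # ~2.3e16 kg of CO2

print(f"CO2 in today's atmosphere: ~{co2_mass:.2e} kg")
# A hypothesized early atmosphere of order 1 bar (~1e5 Pa) would imply
# roughly 160 times this column mass, so a crustal carbonate reservoir
# holding less CO2 than today's thin atmosphere falls far short of
# accounting for such an atmosphere.
```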
Crustal composition
Understanding the composition of Mars' crust and how it changed with time tells us about many aspects of Mars' evolution as a planet, and was a major goal of CRISM. Remote and landed measurements prior to CRISM, and analysis of Martian meteorites, all suggest that the Martian crust is made mostly of basaltic igneous rock dominated by feldspar and pyroxene. Images from the Mars Orbiter Camera on MGS showed that in some places the upper few kilometers of the crust is composed of hundreds of thin volcanic lava flows. TES and THEMIS both found mostly basaltic igneous rock, with scattered olivine-rich and even some quartz-rich rocks.
The first recognition of widespread sedimentary rock on Mars came from the Mars Orbiter Camera, which found that several areas of the planet - including Valles Marineris and Terra Arabia - have horizontally layered, light-toned rocks. Follow-up observations of those rocks' mineralogy by OMEGA found that some are rich in sulfate minerals, and that other layered rocks around Mawrth Vallis are rich in phyllosilicates. Both classes of minerals are signatures of sedimentary rocks. CRISM has used its improved spatial resolution to look for other deposits of sedimentary rock on Mars' surface, and for layers of sedimentary rock buried between layers of volcanic rock in Mars' crust.
Modern climates
To understand Mars' ancient climate, and whether it might have created environments habitable for life, first we need to understand Mars' climate today. Each mission to Mars has made new advances in understanding its climate. Mars has seasonal variations in the abundances of water vapor, water ice clouds and hazes, and atmospheric dust. During southern summer, when Mars is closest to the Sun (at perihelion), solar heating can raise massive dust storms. Regional dust storms - ones having a 1000-kilometer scale - show surprising repeatability from one Mars year to the next. Once every decade or so, they grow into global-scale events. In contrast, during northern summer when Mars is furthest from the Sun (at aphelion), there is an equatorial water-ice cloud belt and very little dust in the atmosphere. Atmospheric water vapor varies in abundance seasonally, with the greatest abundances in each hemisphere's summer after the seasonal polar caps have sublimated into the atmosphere. During winter, both water and carbon dioxide frosts and ices form on Mars' surface. These ices form the seasonal and residual polar caps. The seasonal caps - which form each autumn and sublimate each spring - are dominated by carbon dioxide ice. The residual caps - which persist year after year - consist mostly of water ice at the north pole and water ice with a thin veneer (a few tens of meters thick) of carbon dioxide ice at the south pole.
Mars' atmosphere is so thin that solar heating of dust and ice in the atmosphere - not heating of the atmospheric gases - plays the larger role in driving weather. Small, suspended particles of dust and water ice - aerosols - intercept 20–30% of incoming sunlight, even under relatively clear conditions. So variations in the amounts of these aerosols have a huge influence on climate. CRISM has taken three major kinds of measurements of dust and ice in the atmosphere: targeted observations whose repeated views of the surface provide a sensitive estimate of aerosol abundance; special global grids of targeted observations every couple of months, designed especially to track spatial and seasonal variations; and scans across the planet's limb to show how dust and ice vary with height above the surface.
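The quoted 20–30% interception can be related to a column optical depth using a simple Beer–Lambert attenuation model. The sketch below is only an illustrative direct-beam approximation under assumed relationships (it treats interception as pure extinction of the direct solar beam and ignores scattered light); it is not the actual CRISM aerosol retrieval.

```python
import math

def optical_depth(intercepted_fraction, solar_zenith_deg=0.0):
    """Column optical depth implied by a given intercepted fraction,
    assuming transmitted_fraction = exp(-tau / cos(zenith))."""
    mu = math.cos(math.radians(solar_zenith_deg))
    return -mu * math.log(1.0 - intercepted_fraction)

# For a 20-30% interception of the direct beam at zero zenith angle,
# the implied vertical optical depth is roughly 0.22-0.36.
for f in (0.20, 0.30):
    print(f"intercepted {f:.0%} -> tau ~ {optical_depth(f):.2f}")
```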
The south polar seasonal cap has a bizarre variety of bright and dark streaks and spots that appear during spring, as carbon dioxide ice sublimates. Prior to MRO there were various ideas for processes that could form these strange features, a leading model being carbon dioxide geysers. CRISM has watched the dark spots grow during southern spring, and found that bright streaks forming alongside the dark spots are made of fresh carbon dioxide frost, pointing like arrows back to their sources - the same sources as the dark spots. The bright streaks probably form by expansion, cooling, and freezing of the carbon dioxide gas, forming a "smoking gun" to support the geyser hypothesis.
See also
Nadir and Occultation for Mars Discovery (another Spectrometer in Mars orbit since 2016, on ExoMars)
Ralph (New Horizons) (imaging spectrometer on New Horizons)
References
External links
CRISM official website
Browse Map of Images from JHUAPL.
Mars Reconnaissance Orbiter
Missions to Mars
Spectrometers | Compact Reconnaissance Imaging Spectrometer for Mars | [
"Physics",
"Chemistry"
] | 3,315 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |