id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
607,686 | https://en.wikipedia.org/wiki/Palatini%20variation | In general relativity and gravitation the Palatini variation is nowadays thought of as a variation of a Lagrangian with respect to the connection.
In fact, as is well known, the Einstein–Hilbert action for general relativity was first formulated purely in terms of the spacetime metric $g_{\mu\nu}$. In the Palatini variational method one takes as independent field variables not only the ten components $g_{\mu\nu}$ but also the forty components of the affine connection $\Gamma^{\lambda}_{\mu\nu}$, assuming, a priori, no dependence of the $\Gamma^{\lambda}_{\mu\nu}$ on the $g_{\mu\nu}$ and their derivatives.
The reason the Palatini variation is considered important is that it means that the use of the Christoffel connection in general relativity does not have to be added as a separate assumption; the information is already in the Lagrangian. For theories of gravitation which have more complex Lagrangians than the Einstein–Hilbert Lagrangian of general relativity, the Palatini variation sometimes gives more complex connections and sometimes tensorial equations.
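To make this concrete, here is the standard sketch of the mechanism in conventional notation (the notation is illustrative, not taken from the article itself): varying the Einstein–Hilbert action independently with respect to the connection forces the connection to be the Christoffel connection of the metric.

```latex
S[g,\Gamma] = \int \sqrt{-g}\, g^{\mu\nu} R_{\mu\nu}(\Gamma)\, \mathrm{d}^4x
% Varying independently with respect to the connection, \delta S / \delta \Gamma = 0, yields
\nabla_\lambda \bigl( \sqrt{-g}\, g^{\mu\nu} \bigr) = 0,
% whose unique torsion-free solution is the Christoffel connection of the metric:
\Gamma^{\lambda}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\sigma}
  \bigl( \partial_\mu g_{\sigma\nu} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu} \bigr).
```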
Attilio Palatini (1889–1949) was an Italian mathematician who received his doctorate from the University of Padova, where he studied under Levi-Civita and Ricci-Curbastro.
The history of the subject, and Palatini's connection with it, are not straightforward (see references). In fact, it seems that what the textbooks now call "Palatini formalism" was actually invented in 1925 by Einstein, and as the years passed, people tended to mix up the Palatini identity and the Palatini formalism.
See also
Palatini identity
Self-dual Palatini action
Tetradic Palatini action
References
[English translation by R. Hojman and C. Mukku in P. G. Bergmann and V. De Sabbata (eds.) Cosmology and Gravitation, Plenum Press, New York (1980)]
Lagrangian mechanics
General relativity | Palatini variation | [
"Physics",
"Mathematics"
] | 394 | [
"Lagrangian mechanics",
"Classical mechanics",
"General relativity",
"Relativity stubs",
"Theory of relativity",
"Dynamical systems"
] |
608,002 | https://en.wikipedia.org/wiki/Tyramine | Tyramine ( ) (also spelled tyramin), also known under several other names, is a naturally occurring trace amine derived from the amino acid tyrosine. Tyramine acts as a catecholamine releasing agent. Notably, it is unable to cross the blood-brain barrier, resulting in only non-psychoactive peripheral sympathomimetic effects following ingestion. A hypertensive crisis can result, however, from ingestion of tyramine-rich foods in conjunction with the use of monoamine oxidase inhibitors (MAOIs).
Occurrence
Tyramine occurs widely in plants and animals, and is metabolized by various enzymes, including monoamine oxidases. In foods, it often is produced by the decarboxylation of tyrosine during fermentation or decay. Foods that are fermented, cured, pickled, aged, or spoiled have high amounts of tyramine. Tyramine levels go up when foods are at room temperature or go past their freshness date.
Specific foods containing considerable amounts of tyramine include:
Strong or aged cheeses: cheddar, Swiss, Parmesan, Stilton, Gorgonzola or blue cheeses, Camembert, feta, Muenster
Meats that are cured, smoked, or processed: salami, pepperoni, dry sausages, hot dogs, bologna, bacon, corned beef, pickled or smoked fish, caviar, aged chicken livers, soups or gravies made from meat extract
Pickled or fermented foods: sauerkraut, kimchi, tofu (especially stinky tofu), pickles, miso soup, bean curd, tempeh, sourdough breads
Condiments: soy, shrimp, fish, miso, teriyaki, and bouillon-based sauces
Drinks: beer (especially tap or home-brewed), vermouth, red wine, sherry, liqueurs
Beans, vegetables, and fruits: fermented or pickled vegetables, overripe fruits
Chocolate
Tyramine content in food is increasingly treated as a food-safety issue. Proposed regulations aim to control biogenic amines in food through various strategies, including the use of appropriate fermentation starter cultures or the suppression of their decarboxylase activity. Some authors report that these measures have already yielded positive results, and that tyramine content in food is now lower than it was in the past.
In plants
Mistletoe (toxic and not used by humans as a food, but historically used as a medicine).
In animals
Tyramine also plays a role in animals, including behavioral and motor functions in Caenorhabditis elegans, swarming behaviour in Locusta migratoria, and various nervous-system roles in Rhipicephalus, Apis, Locusta, Periplaneta, Drosophila, Phormia, Papilio, Bombyx, Chilo, Heliothis, Mamestra, Agrotis, and Anopheles.
Biological activity
Tyramine is a norepinephrine and dopamine releasing agent (NDRA) and indirectly acting sympathomimetic. Evidence for the presence of tyramine in the human brain has been confirmed by postmortem analysis. Additionally, the possibility that tyramine acts directly as a neuromodulator was revealed by the discovery of a G protein-coupled receptor with high affinity for tyramine, called the trace amine-associated receptor (TAAR1). The TAAR1 receptor is found in the brain, as well as peripheral tissues, including the kidneys. Tyramine is a full agonist of the TAAR1 in rodents and humans.
Tyramine is physiologically metabolized by monoamine oxidases (primarily MAO-A), FMO3, PNMT, DBH, and CYP2D6. Human monoamine oxidase enzymes metabolize tyramine into 4-hydroxyphenylacetaldehyde. If monoamine metabolism is compromised by the use of monoamine oxidase inhibitors (MAOIs) and foods high in tyramine are ingested, a hypertensive crisis can result, as tyramine also can displace stored monoamines, such as dopamine, norepinephrine, and epinephrine, from pre-synaptic vesicles. Tyramine is considered a "false neurotransmitter", as it enters noradrenergic nerve terminals and displaces large amounts of norepinephrine, which enters the blood stream and causes vasoconstriction.
Additionally, cocaine has been found to block the rise in blood pressure ordinarily attributed to tyramine; this is explained by cocaine blocking monoamine reuptake, which prevents tyramine from being taken up into sympathetic nerve terminals.
The first signs of this effect were discovered by a British pharmacist who noticed that his wife, who at the time was on MAOI medication, had severe headaches when eating cheese. For this reason, it is still called the "cheese reaction" or "cheese crisis", although other foods can cause the same problem.
Most processed cheeses do not contain enough tyramine to cause hypertensive effects, although some aged cheeses (such as Stilton) do.
A large dietary intake of tyramine (or a dietary intake of tyramine while taking MAO inhibitors) can cause the tyramine pressor response, which is defined as an increase in systolic blood pressure of 30 mmHg or more. The increased release of norepinephrine (noradrenaline) from neuronal cytosol or storage vesicles is thought to cause the vasoconstriction and increased heart rate and blood pressure of the pressor response. In severe cases, adrenergic crisis can occur. Although the mechanism is unclear, tyramine ingestion also triggers migraine attacks in sensitive individuals and can even lead to stroke. Vasodilation, dopamine, and circulatory factors are all implicated in the migraines. Double-blind trials suggest that the effects of tyramine on migraine may be adrenergic.
Research reveals a possible link between migraines and elevated levels of tyramine. A 2007 review published in Neurological Sciences presented data showing migraine and cluster diseases are characterized by an increase of circulating neurotransmitters and neuromodulators (including tyramine, octopamine, and synephrine) in the hypothalamus, amygdala, and dopaminergic system. People with migraine are over-represented among those with inadequate natural monoamine oxidase, resulting in similar problems to individuals taking MAO inhibitors. Many migraine attack triggers are high in tyramine.
If one has had repeated exposure to tyramine, however, there is a decreased pressor response; tyramine is degraded to octopamine, which is subsequently packaged in synaptic vesicles with norepinephrine (noradrenaline). Therefore, after repeated tyramine exposure, these vesicles contain an increased amount of octopamine and a relatively reduced amount of norepinephrine. When these vesicles are secreted upon tyramine ingestion, there is a decreased pressor response, as less norepinephrine is secreted into the synapse, and octopamine does not activate alpha or beta adrenergic receptors.
When using a MAO inhibitor (MAOI), an intake of approximately 10 to 25 mg of tyramine is required for a severe reaction, compared to 6 to 10 mg for a mild reaction.
Tyramine, like phenethylamine, is a monoaminergic activity enhancer (MAE) of serotonin, norepinephrine, and dopamine in addition to its catecholamine-releasing activity. That is, it enhances the action potential-mediated release of these monoamine neurotransmitters. The compound is active as a MAE at much lower concentrations than the concentrations at which it induces the release of catecholamines. The MAE actions of tyramine and other MAEs may be mediated by TAAR1 agonism.
Biosynthesis
Biochemically, tyramine is produced by the decarboxylation of tyrosine via the action of the enzyme tyrosine decarboxylase. Tyramine can, in turn, be converted to methylated alkaloid derivatives N-methyltyramine, N,N-dimethyltyramine (hordenine), and N,N,N-trimethyltyramine (candicine).
In humans, tyramine is produced from tyrosine by decarboxylation.
Chemistry
In the laboratory, tyramine can be synthesized in various ways, in particular by the decarboxylation of tyrosine.
Society and culture
Legal status
United States
Tyramine is a Schedule I controlled substance, categorized as a hallucinogen, making it illegal to buy, sell, or possess in the state of Florida without a license, at any purity level or in any form whatsoever. The language in the Florida statute says tyramine is illegal in "any material, compound, mixture, or preparation that contains any quantity of [tyramine] or that contains any of [its] salts, isomers, including optical, positional, or geometric isomers, and salts of isomers, if the existence of such salts, isomers, and salts of isomers is possible within the specific chemical designation."
This ban is likely the product of lawmakers overly eager to ban substituted phenethylamines, a class to which tyramine belongs, in the mistaken belief that ring-substituted phenethylamines are hallucinogenic drugs like the 2C series of psychedelic substituted phenethylamines. The further banning of tyramine's optical isomers, positional isomers, or geometric isomers, and salts of isomers where they exist, means that meta-tyramine and phenylethanolamine, a substance found in every living human body, and other common, non-hallucinogenic substances are also illegal to buy, sell, or possess in Florida. Given that tyramine occurs naturally in many foods and drinks (most commonly as a by-product of bacterial fermentation), e.g. wine, cheese, and chocolate, Florida's total ban on the substance may prove difficult to enforce.
Notes
References
Antihypotensive agents
Migraine
Monoamine oxidase inhibitors
Monoaminergic activity enhancers
Norepinephrine-dopamine releasing agents
Peripherally selective drugs
Phenethylamine alkaloids
Phenethylamines
TAAR1 agonists
Trace amines
4-Hydroxyphenyl compounds | Tyramine | [
"Chemistry"
] | 2,296 | [
"Alkaloids by chemical classification",
"Phenethylamine alkaloids"
] |
608,162 | https://en.wikipedia.org/wiki/Suprachiasmatic%20nucleus | The suprachiasmatic nucleus or nuclei (SCN) is a small region of the brain in the hypothalamus, situated directly above the optic chiasm. It is responsible for regulating sleep cycles in animals. Reception of light inputs from photosensitive retinal ganglion cells allow it to coordinate the subordinate cellular clocks of the body and entrain to the environment. The neuronal and hormonal activities it generates regulate many different body functions in an approximately 24-hour cycle.
The SCN also interacts with many other regions of the brain. It contains several cell types, neurotransmitters and peptides, including vasopressin and vasoactive intestinal peptide.
Disruptions or damage to the SCN have been associated with different mood disorders and sleep disorders, suggesting the significance of the SCN in regulating circadian timing.
Neuroanatomy
The SCN is situated in the anterior part of the hypothalamus, immediately dorsal, or superior (hence supra), to the optic chiasm, bilateral to (on either side of) the third ventricle. It consists of two nuclei composed of approximately 10,000 neurons.
The morphology of the SCN is species dependent. Distribution of different cell phenotypes across specific SCN regions, such as the concentration of VP-IR neurons, can cause the shape of the SCN to change.
The nucleus can be divided into ventrolateral and dorsolateral portions, also known as the core and shell, respectively. These regions differ in their expression of the clock genes: the core expresses them in response to stimuli, whereas the shell expresses them constitutively.
In terms of projections, the core receives innervation via three main pathways: the retinohypothalamic tract, the geniculohypothalamic tract, and projections from some of the raphe nuclei. The dorsomedial SCN is mainly innervated by the core, and also by other hypothalamic areas. Lastly, its output is mainly to the subparaventricular zone and the dorsomedial hypothalamic nucleus, which both mediate the influence the SCN exerts over circadian regulation of the body.
The most abundant peptides found within the SCN are arginine-vasopressin (AVP), vasoactive intestinal polypeptide (VIP), and peptide histidine-isoleucine (PHI). Each of these peptides are localized in different regions. Neurons with AVP are found dorsomedially, whereas VIP-containing and PHI-containing neurons are found ventrolaterally.
Circadian clock
Different organisms such as bacteria, plants, fungi, and animals, show genetically based near-24-hour rhythms. Although all of these clocks appear to be based on a similar type of genetic feedback loop, the specific genes involved are thought to have evolved independently in each kingdom. Many aspects of mammalian behavior and physiology show circadian rhythmicity, including sleep, physical activity, alertness, hormone levels, body temperature, immune function, and digestive activity. Early experiments on the function of the SCN involved lesioning the SCN in hamsters. SCN lesioned hamsters lost their daily activity rhythms. Further, when the SCN of a hamster was transplanted into an SCN lesioned hamster, the hamster adopted the rhythms of the hamster from which the SCN was transplanted. Together, these experiments suggest that the SCN is sufficient for generating circadian rhythms in hamsters.
Later studies have shown that skeletal muscle, liver, and lung tissues in rats generate 24-hour rhythms, which dampen over time when isolated in a dish, whereas the SCN maintains its rhythms. Together, these data suggest a model whereby the SCN maintains control across the body by synchronizing "slave oscillators," which exhibit their own near-24-hour rhythms and control circadian phenomena in local tissue.
The SCN receives input from specialized photosensitive ganglion cells in the retina via the retinohypothalamic tract. Neurons in the ventrolateral SCN (vlSCN) have the ability for light-induced gene expression. Melanopsin-containing ganglion cells in the retina have a direct connection to the ventrolateral SCN via the retinohypothalamic tract. When the retina receives light, the vlSCN relays this information throughout the SCN, allowing entrainment (synchronization) of the person's or animal's daily rhythms to the 24-hour cycle in nature. The importance of entraining organisms, including humans, to exogenous cues such as the light/dark cycle is reflected by several circadian rhythm sleep disorders, in which this process does not function normally.
Neurons in the dorsomedial SCN (dmSCN) are believed to have an endogenous 24-hour rhythm that can persist under constant darkness (in humans averaging about 24 hours 11 min). A GABAergic mechanism is involved in the coupling of the ventral and dorsal regions of the SCN.
Circadian rhythms of endothermic (warm-blooded) and ectothermic (cold-blooded) vertebrates
Information about the direct neuronal regulation of metabolic processes and circadian rhythm-controlled behaviors is not well known among either endothermic or ectothermic vertebrates, although extensive research has been done on the SCN in model animals such as the mammalian mouse and ectothermic reptiles, particularly lizards. The SCN is known to be involved not only in photoreception through innervation from the retinohypothalamic tract, but also in thermoregulation of vertebrates capable of homeothermy as well as regulating locomotion and other behavioral outputs of the circadian clock within ectothermic vertebrates. The behavioral differences between both classes of vertebrates when compared to the respective structures and properties of the SCN as well as various other nuclei proximate to the hypothalamus provide insight into how these behaviors are the consequence of differing circadian regulation. Ultimately, many neuroethological studies must be done to completely ascertain the direct and indirect roles of the SCN on circadian-regulated behaviors of vertebrates.
The SCN of endotherms and ectotherms
In general, external temperature does not influence endothermic animal circadian rhythm because of the ability of these animals to keep their internal body temperature constant through homeostatic thermoregulation; however, peripheral oscillators (see Circadian rhythm) in mammals are sensitive to temperature pulses and will experience resetting of the circadian clock phase and associated genetic expression, suggesting how peripheral circadian oscillators may be separate entities from one another despite having a master oscillator within the SCN. Furthermore, when individual neurons of the SCN from a mouse were treated with heat pulses, a similar resetting of oscillators was observed, but when an intact SCN was treated with the same heat pulse treatment the SCN was resistant to temperature change by exhibiting an unaltered circadian oscillating phase. In ectothermic animals, particularly the ruin lizard, Podarcis siculus, temperature has been shown to affect the circadian oscillators within the SCN. This reflects a potential evolutionary relationship among endothermic and ectothermic vertebrates as ectotherms rely on environmental temperature to affect their circadian rhythms and behavior while endotherms have an evolved SCN that is resistant to external temperature fluctuations and uses photoreception as a means for entraining the circadian oscillators within their SCN. In addition, the differences of the SCN between endothermic and ectothermic vertebrates suggest that the neuronal organization of the temperature-resistant SCN in endotherms is responsible for driving thermoregulatory behaviors in those animals differently from those of ectotherms, since they rely on external temperature for engaging in certain behaviors.
Behaviors controlled by the SCN of vertebrates
Significant research has been conducted on the genes responsible for controlling circadian rhythm, in particular within the SCN. Knowledge of the gene expression of Clock (Clk) and Period2 (Per2), two of the many genes responsible for regulating circadian rhythm within the individual cells of the SCN, has allowed for a greater understanding of how genetic expression influences the regulation of circadian rhythm-controlled behaviors. Studies on thermoregulation of ruin lizards and mice have informed some connections between the neural and genetic components of both vertebrates when experiencing induced hypothermic conditions. Certain findings have reflected how evolution of SCN both structurally and genetically has resulted in the engagement of characteristic and stereotyped thermoregulatory behavior in both classes of vertebrates.
Mice: Among vertebrates, it is known that mammals are endotherms that are capable of homeostatic thermoregulation. It has been shown that mice display thermosensitivity within the SCN. However, the regulation of body temperature in hypothermic mice is more sensitive to the amount of light in their environment. Even while fasted, mice in darkened conditions and experiencing hypothermia maintained a stable internal body temperature. In light conditions, mice showed a drop in body temperature under the same fasting and hypothermic conditions. Through analyzing genetic expression of Clock genes in wild-type and knockout strains, as well as analyzing the activity of neurons within the SCN and connections to proximate nuclei of the hypothalamus in the aforementioned conditions, it has been shown that the SCN is the center of control for circadian body temperature rhythm. This circadian control, thus, includes both direct and indirect influence of many of the thermoregulatory behaviors that mammals engage in to maintain homeostasis.
Ruin lizards: Several studies have been conducted on the genes expressed in circadian oscillating cells of the SCN during various light and dark conditions, as well as effects from inducing mild hypothermia in reptiles. In terms of structure, the SCNs of lizards have a closer resemblance to those of mice, possessing a dorsomedial portion and a ventrolateral core. However, genetic expression of the circadian-related Per2 gene in lizards is similar to that in reptiles and birds, despite the fact that birds have been known to have a distinct SCN structure consisting of a lateral and medial portion. Studying the lizard SCN because of the lizard's small body size and ectothermy is invaluable to understanding how this class of vertebrates modifies its behavior within the dynamics of circadian rhythm, but it has not yet been determined whether the systems of cold-blooded vertebrates were slowed as a result of decreased activity in the SCN or showed decreases in metabolic activity as a result of hypothermia.
Other signals from the retina
The SCN is one of many nuclei that receive nerve signals directly from the retina.
Some of the others are the lateral geniculate nucleus (LGN), the superior colliculus, the basal optic system, and the pretectum:
The LGN passes information about color, contrast, shape, and movement on to the visual cortex and itself signals to the SCN.
The superior colliculus controls the movement and orientation of the eye.
The basal optic system also controls eye movements.
The pretectum controls the size of the pupil.
Genetic basis of SCN function
The SCN is the central circadian pacemaker of mammals, serving as the coordinator of mammalian circadian rhythms. Neurons in an intact SCN show coordinated circadian rhythms in electrical activity. Neurons isolated from the SCN have been shown to produce and sustain circadian rhythms in vitro, suggesting that each individual neuron of the SCN can function as an independent circadian oscillator at the cellular level. Each cell of the SCN synchronizes its oscillations to the cells around it, resulting in a network of mutually reinforced and precise oscillations constituting the SCN master clock.
Mammals
The SCN functions as a circadian biological clock in vertebrates including teleosts, reptiles, birds, and mammals. In mammals, the rhythms produced by the SCN are driven by a transcription-translation negative feedback loop (TTFL) composed of interacting positive and negative transcriptional feedback loops. Within the nucleus of an SCN cell, the genes Clock and Bmal1 (mop3) encode the BHLH-PAS transcription factors CLOCK and BMAL1 (MOP3), respectively. CLOCK and BMAL1 are positive activators that form CLOCK-BMAL1 heterodimers. These heterodimers then bind to E-boxes upstream of multiple genes, including per and cry, to enhance and promote their transcription and eventual translation. In mammals, there are three known homologs for the period gene in Drosophila, namely per1, per2, and per3.
As per and cry are transcribed and translated into PER and CRY, the proteins accumulate and form heterodimers in the cytoplasm. The heterodimers are phosphorylated at a rate that determines the length of the transcription-translation feedback loop (TTFL) and then translocate back into the nucleus, where the phosphorylated PER-CRY heterodimers act on CLOCK and/or BMAL1 to inhibit their activity. Although the role of phosphorylation in the TTFL mechanism is known, the specific kinetics are yet to be elucidated. As a result, PER and CRY function as negative repressors and inhibit the transcription of per and cry. Over time, the PER-CRY heterodimers degrade and the cycle begins again with a period of about 24.5 hours. The integral genes involved, termed "clock genes", are highly conserved throughout both SCN-bearing vertebrates like mice, rats, and birds as well as in non-SCN-bearing animals such as Drosophila.
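The TTFL's core logic, delayed negative feedback producing sustained oscillations, can be illustrated with a minimal Goodwin-style model. This is an illustrative sketch only, not a parameterized model of the mammalian SCN: all rate constants, units, and variable names below are arbitrary choices.

```python
# A minimal Goodwin-style negative-feedback oscillator illustrating the TTFL:
# clock-gene mRNA (m) -> cytoplasmic protein (p) -> nuclear repressor (r),
# which inhibits transcription of m. All rates and units are arbitrary.

def ttfl_step(m, p, r, dt=0.01):
    """Advance the three-variable feedback loop by one forward-Euler step."""
    n = 9                                # Hill coefficient; n > 8 is needed for
                                         # sustained oscillations in this model
    dm = 1.0 / (1.0 + r**n) - 0.2 * m    # repressible transcription, mRNA decay
    dp = m - 0.2 * p                     # translation, protein decay
    dr = p - 0.2 * r                     # nuclear entry of repressor, decay
    return m + dm * dt, p + dp * dt, r + dr * dt

m, p, r = 0.1, 0.1, 0.1
mrna_trace = []
for step in range(400_000):              # 4,000 arbitrary time units
    m, p, r = ttfl_step(m, p, r)
    if step % 200 == 0:
        mrna_trace.append(m)

# mRNA rises and falls periodically: sustained oscillation arises from
# delayed negative feedback alone, which is the essence of the TTFL.
print(min(mrna_trace), max(mrna_trace))
```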
Electrophysiology
Neurons in the SCN fire action potentials in a 24-hour rhythm, even under constant conditions. At mid-day, the firing rate reaches a maximum, and, during the night, it falls again. Rhythmic expression of circadian regulatory genes in the SCN requires depolarization in the SCN neurons via calcium and cAMP. Thus, depolarization of SCN neurons via cAMP and calcium contributes to the magnitude of the rhythmic gene expression in the SCN.
Further, the SCN synchronizes nerve impulses which spread to various parasympathetic and sympathetic nuclei. The sympathetic nuclei drive glucocorticoid output from the adrenal gland which activates Per1 in the body cells, thus resetting the circadian cycle of cells in the body. Without the SCN, rhythms in body cells dampen over time, which may be due to lack of synchrony between cells.
Many SCN neurons are sensitive to light stimulation via the retina. The photic response is likely linked to effects of light on circadian rhythms. In addition, application of melatonin in live rats and isolated SCN cells can decrease the firing rate of these neurons. Variances in light input due to jet lag, seasonal changes, and constant light conditions all change the firing rhythm in SCN neurons demonstrating the relationship between light and SCN neuronal functioning.
Clinical significance
Irregular sleep-wake rhythm disorder
Irregular sleep-wake rhythm (ISWR) disorder is thought to be caused by structural damage to the SCN, decreased responsiveness of the circadian clock to light and other stimuli, and decreased exposure to light. People who tend to stay indoors and limit their exposure to light experience decreased nocturnal melatonin production. The decrease in melatonin production at night corresponds with greater expression of SCN-generated wakefulness during night, causing irregular sleep patterns.
Major depressive disorder
Major depressive disorder (MDD) has been associated with altered circadian rhythms. Patients with MDD have weaker rhythms that express clock genes in the brain. When SCN rhythms were disturbed, anxiety-like behavior, weight gain, helplessness, and despair were reported in a study conducted with mice. Abnormal glucocorticoid levels occurred in mice with no Bmal1 expression in the SCN.
Alzheimer's disease
The functional disruption of the SCN can be observed in early stages of Alzheimer's disease (AD). Changes in the SCN and melatonin secretion are major factors that cause circadian rhythm disturbances. These disturbances cause the normal physiology of sleep to change, such as the biological clock and body temperature during rest. Patients with AD experience insomnia, hypersomnia, and other sleep disorders as a result of the degeneration of the SCN and changes in critical neurotransmitter concentrations.
History
The idea that the SCN is the main sleep cycle regulator in mammals was proposed by Robert Moore, who conducted experiments using radioactive amino acids to find where the termination of the retinohypothalamic projection occurs in rodents. Early lesioning experiments in mouse, guinea pig, cat, and opossum established how removal of the SCN results in ablation of circadian rhythm in mammals.
See also
Chronobiology
Photosensitive ganglion cell
Sense of time
Retinohypothalamic tract
Shift work sleep disorder
Non-24-hour sleep–wake disorder
References
External links
Diagram at thebrain.mcgill.ca
Hypothalamus
Circadian rhythm
Sleep physiology | Suprachiasmatic nucleus | [
"Biology"
] | 3,670 | [
"Behavior",
"Sleep physiology",
"Sleep",
"Circadian rhythm"
] |
609,125 | https://en.wikipedia.org/wiki/Expression%20%28mathematics%29 | In mathematics, an expression is a written arrangement of symbols following the context-dependent, syntactic conventions of mathematical notation. Symbols can denote numbers, variables, operations, and functions. Other symbols include punctuation marks and brackets, used for grouping where there is not a well-defined order of operations.
Expressions are commonly distinguished from formulas: expressions are a kind of mathematical object, whereas formulas are statements about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8x − 5 is an expression, while the inequality 8x − 5 ≥ 3 is a formula.
To evaluate an expression means to find a numerical value equivalent to the expression. Expressions can be evaluated or simplified by replacing operations that appear in them with their result. For example, the expression 8 × 2 − 5 simplifies to 16 − 5, and evaluates to 11.
An expression is often used to define a function, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, $x \mapsto x^2 + 1$ and $f(x) = x^2 + 1$ define the function that associates to each number its square plus one. An expression with no variables would define a constant function. Usually, two expressions are considered equal or equivalent if they define the same function. Such an equality is called a "semantic equality", that is, both expressions "mean the same thing."
History
Early written mathematics
The earliest written mathematics likely began with tally marks, where each mark represented one unit, carved into wood or stone. An example of early counting is the Ishango bone, found near the Nile and dating to over 20,000 years ago, which is thought to show a six-month lunar calendar. Ancient Egypt developed a symbolic system using hieroglyphics, assigning symbols for powers of ten and using addition and subtraction symbols resembling legs in motion. This system, recorded in texts like the Rhind Mathematical Papyrus (c. 2000–1800 BC), influenced other Mediterranean cultures. In Mesopotamia, a similar system evolved, with numbers written in a base-60 (sexagesimal) format on clay tablets written in Cuneiform, a technique originating with the Sumerians around 3000 BC. This base-60 system persists today in measuring time and angles.
Syncopated stage
The "syncopated" stage of mathematics introduced symbolic abbreviations for commonly used operations and quantities, marking a shift from purely geometric reasoning. Ancient Greek mathematics, largely geometric in nature, drew on Egyptian numerical systems (especially Attic numerals), with little interest in algebraic symbols, until the arrival of Diophantus of Alexandria, who pioneered a form of syncopated algebra in his Arithmetica, which introduced symbolic manipulation of expressions. His notation represented unknowns and powers symbolically, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called . The square of was ; the cube was ; the fourth power was ; the fifth power was ; and meant to subtract everything on the right from the left. So for example, what would be written in modern notation as:
Would be written in Diophantus's syncopated notation as:
In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. Greek and other ancient mathematical advances were often trapped in cycles of bursts of creativity, followed by long periods of stagnation, but this began to change as knowledge spread in the early modern period.
Symbolic stage and early arithmetic
The transition to fully symbolic algebra began with Ibn al-Banna' al-Marrakushi (1256–1321) and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1482), who introduced symbols for operations using Arabic characters. The plus sign (+) appeared around 1351 with Nicole Oresme, likely derived from the Latin et (meaning "and"), while the minus sign (−) was first used in 1489 by Johannes Widmann. Luca Pacioli included these symbols in his works, though much was based on earlier contributions by Piero della Francesca. The radical symbol (√) for square root was introduced by Christoph Rudolff in the 1500s, and parentheses for precedence by Niccolò Tartaglia in 1556. François Viète's New Algebra (1591) formalized modern symbolic manipulation. The multiplication sign (×) was first used by William Oughtred and the division sign (÷) by Johann Rahn.
René Descartes further advanced algebraic symbolism in La Géométrie (1637), where he introduced the use of letters at the end of the alphabet (x, y, z) for variables, along with the Cartesian coordinate system, which bridged algebra and geometry. Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus in the late 17th century, with Leibniz's notation becoming the standard.
Variables and evaluation
In elementary algebra, a variable in an expression is a letter that represents a number whose value may change. To evaluate an expression with a variable means to find the value of the expression when the variable is assigned a given number. Expressions can be evaluated or simplified by replacing operations that appear in them with their result, or by combining like-terms.
For example, take the expression x² + 1; it can be evaluated at x = 3 in the following steps:
3² + 1 (replace x with 3)
9 + 1 (use definition of exponent)
10 (simplify)
A term is a constant or the product of a constant and one or more variables. Some examples include 7, 5x, and 3x²y. The constant of the product is called the coefficient. Terms that are either constants or have the same variables raised to the same powers are called like terms. If there are like terms in an expression, one can simplify the expression by combining the like terms. One adds the coefficients and keeps the same variable; for example, 2x² + 3x² = 5x².
Any variable can be classified as being either a free variable or a bound variable. For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined. Thus an expression represents an operation over constants and free variables and whose output is the resulting value of the expression.
For a non-formalized language, that is, in most mathematical texts outside of mathematical logic, it is not always possible for an individual expression to identify which variables are free and which are bound. For example, in $\sum_{x<y} f(x,y)$, depending on the context, the variable $x$ can be free and $y$ bound, or vice-versa, but they cannot both be free. Determining which variable is free depends on context and semantics.
Equivalence
An expression is often used to define a function, or denote compositions of functions, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, $x \mapsto x^2 + 1$ and $f(x) = x^2 + 1$ define the function that associates to each number its square plus one. An expression with no variables would define a constant function. In this way, two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. The equivalence between two expressions is called an identity and is sometimes denoted with $\equiv$.
For example, in the expression $\sum_{n=1}^{3} (2nx)$, the variable $n$ is bound, and the variable $x$ is free. This expression is equivalent to the simpler expression $12x$; that is $\sum_{n=1}^{3} (2nx) \equiv 12x$. The value for $x = 3$ is 36, which can be denoted $\sum_{n=1}^{3} (2nx) \big|_{x=3} = 36$.
Polynomial evaluation
A polynomial consists of variables and coefficients, that involve only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. The problem of polynomial evaluation arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute k-independent hashing.
In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact.
For evaluating the univariate polynomial $a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0$, the most naive method would use $n$ multiplications to compute $a_n x^n$, use $n-1$ multiplications to compute $a_{n-1} x^{n-1}$, and so on for a total of $\tfrac{n(n+1)}{2}$ multiplications and $n$ additions. Using better methods, such as Horner's rule, this can be reduced to $n$ multiplications and $n$ additions. If some preprocessing is allowed, even more savings are possible.
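As an illustration, Horner's rule rewrites the polynomial in nested form so that each coefficient costs one multiplication and one addition; a minimal sketch in Python:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x given coefficients [a_n, ..., a_1, a_0]
    (highest degree first), using n multiplications and n additions."""
    result = 0
    for a in coeffs:
        result = result * x + a  # nested form: (...(a_n*x + a_{n-1})*x + ...) + a_0
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3  ->  2*27 - 6*9 + 6 - 1 = 5
print(horner([2, -6, 2, -1], 3))  # 5
```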
Computation
A computation is any type of arithmetic or non-arithmetic calculation that is "well-defined". The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages.
Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. All statements expressible in modern programming languages, including C++, Python, and Java, are well-defined.
Common examples of computation are basic arithmetic and the execution of computer algorithms. A calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs or results. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation.
Rewriting
Expressions can be computed by means of an evaluation strategy. To illustrate, executing a function call f(a,b) may first evaluate the arguments a and b, store the results in references or memory locations ref_a and ref_b, then evaluate the function's body with those references passed in. This gives the function the ability to look up the original argument values passed in through dereferencing the parameters (some languages use specific operators to perform this), to modify them via assignment as if they were local variables, and to return values via the references. This is the call-by-reference evaluation strategy. Evaluation strategy is part of the semantics of the programming language definition. Some languages, such as PureScript, have variants with different evaluation strategies. Some declarative languages, such as Datalog, support multiple evaluation strategies. Some languages define a calling convention.
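A small Python illustration of how an evaluation strategy becomes observable to the programmer: Python evaluates arguments eagerly and passes object references, so mutation through a parameter is visible to the caller, while rebinding the parameter is not. The function names are illustrative.

```python
def append_item(items, value):
    items.append(value)      # mutates the caller's list through the shared reference
    return items

def rebind(items, value):
    items = items + [value]  # builds a new list; the caller's binding is untouched
    return items

xs = [1, 2]
append_item(xs, 3)
print(xs)  # [1, 2, 3] -- mutation visible through the shared reference
rebind(xs, 4)
print(xs)  # [1, 2, 3] -- rebinding the parameter did not affect the caller
```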
In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. A rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term. One of the most common systems involves lambda calculus.
Well-defined expressions
The language of mathematics exhibits a kind of grammar (called formal grammar) about how expressions may be written. There are two considerations for well-definedness of mathematical expressions, syntax and semantics. Syntax is concerned with the rules used for constructing, or transforming the symbols of an expression without regard to any interpretation or meaning given to them. Expressions that are syntactically correct are called well-formed. Semantics is concerned with the meaning of these well-formed expressions. Expressions that are semantically correct are called well-defined.
Well-formed
The syntax of mathematical expressions can be described somewhat informally as follows: the allowed operators must have the correct number of inputs in the correct places (usually written with infix notation), the sub-expressions that make up these inputs must be well-formed themselves, have a clear order of operations, etc. Strings of symbols that conform to the rules of syntax are called well-formed, and those that do not are called ill-formed and do not constitute mathematical expressions.
For example, in arithmetic, the expression 1 + 2 × 3 is well-formed, but
×4)x+,/y
is not.
However, being well-formed is not enough to be considered well-defined. For example, in arithmetic, the expression 1/0 is well-formed, but it is not well-defined (see Division by zero). Such expressions are called undefined.
Well-defined
Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions. An expression that defines a unique value or meaning is said to be well-defined. Otherwise, the expression is said to be ill defined or ambiguous. In general the meaning of expressions is not limited to designating values; for instance, an expression might designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator ⊕ to designate an internal direct sum.
In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression. The determination of this value depends on the semantics attached to the symbols of the expression. The choice of semantics depends on the context of the expression. The same syntactic expression 1 + 2 × 3 can have different values (mathematically 7, but also 9), depending on the order of operations implied by the context (See also Operations § Calculators).
For real numbers, the product a × b × c is unambiguous because (a × b) × c = a × (b × c); hence the notation is said to be well defined. This property, also known as associativity of multiplication, guarantees that the result does not depend on the sequence of multiplications; therefore, a specification of the sequence can be omitted. The subtraction operation is non-associative; despite that, there is a convention that a − b − c is shorthand for (a − b) − c, thus it is considered "well-defined". On the other hand, division is non-associative, and in the case of a/b/c, parenthesization conventions are not well established; therefore, this expression is often considered ill-defined.
Unlike with functions, notational ambiguities can be overcome by means of additional definitions (e.g., rules of precedence, associativity of the operator). For example, in the programming language C, the operator - for subtraction is left-to-right-associative, which means that a-b-c is defined as (a-b)-c, and the operator = for assignment is right-to-left-associative, which means that a=b=c is defined as a=(b=c). In the programming language APL there is only one rule: from right to left – but parentheses first.
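A quick way to see the subtraction convention in practice: Python, like C, treats - as left-associative, so the unparenthesized form agrees with the left-grouped one.

```python
a, b, c = 10, 4, 3
print(a - b - c)    # 3: parsed as (a - b) - c by the left-to-right convention
print((a - b) - c)  # 3: same value
print(a - (b - c))  # 9: a different value, which is why the convention matters
```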
Formal definition
The term 'expression' is part of the language of mathematics, that is to say, it is not defined within mathematics, but taken as a primitive part of the language. To attempt to define the term would not be doing mathematics, but rather, one would be engaging in a kind of metamathematics (the metalanguage of mathematics), usually mathematical logic. Within mathematical logic, mathematics is usually described as a kind of formal language, and a well-formed expression can be defined recursively as follows:
The alphabet consists of:
A set of individual constants: Symbols representing fixed objects in the domain of discourse, such as numerals (1, 2.5, 1/7, ...), sets (∅, ...), truth values (T or F), etc.
A set of individual variables: A countably infinite amount of symbols representing variables used for representing an unspecified object in the domain. (Usually letters like x, y, or z)
A set of operations: Function symbols representing operations that can be performed on elements over the domain, like addition (+), multiplication (×), or set operations like union (∪), or intersection (∩). (Functions can be understood as unary operations)
Brackets ( )
With this alphabet, the recursive rules for forming a well-formed expression (WFE) are as follows:
Any constant or variable as defined are the atomic expressions, the simplest well-formed expressions (WFE's). For instance, the constant 1 or the variable x are syntactically correct expressions.
Let $*$ be a metavariable for any $n$-ary operation over the domain, and let $\varphi_1, \ldots, \varphi_n$ be metavariables for any WFE's.
Then $*(\varphi_1, \ldots, \varphi_n)$ is also well-formed. For the most often used operations, more convenient notations (like infix notation, $\varphi_1 * \varphi_2$) have been developed over the centuries.
For instance, if the domain of discourse is the real numbers, $*$ can denote the binary operation +, and then $\varphi_1 + \varphi_2$ is well-formed. Or $*$ can be the unary operation $\sqrt{\;}$, so $\sqrt{\varphi_1}$ is well-formed.
Brackets are initially around each non-atomic expression, but they can be deleted in cases where there is a defined order of operations, or where order doesn't matter (i.e. where operations are associative).
A well-formed expression can be thought of as a syntax tree. The leaf nodes are always atomic expressions. Binary operations such as + and × have exactly two child nodes, while unary operations such as √ and ln have exactly one. There are countably infinitely many WFE's; however, each WFE has a finite number of nodes.
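A minimal sketch of this recursive structure in Python (the class names and the tiny operator set are illustrative choices, not from the original text):

```python
import math
from dataclasses import dataclass
from typing import Union

# Atomic expressions are constants or variables; compound ones apply an
# operation to child subexpressions -- exactly the recursive WFE definition.
@dataclass
class Const:
    value: float

@dataclass
class Var:
    name: str

@dataclass
class Apply:
    op: str          # "+", "*", or "sqrt"
    children: tuple  # n child WFE's for an n-ary operation

Expr = Union[Const, Var, Apply]

def evaluate(e: Expr, env: dict) -> float:
    """Evaluate a syntax tree given values for the free variables."""
    if isinstance(e, Const):
        return e.value
    if isinstance(e, Var):
        return env[e.name]
    args = [evaluate(c, env) for c in e.children]
    if e.op == "+":
        return args[0] + args[1]
    if e.op == "*":
        return args[0] * args[1]
    if e.op == "sqrt":
        return math.sqrt(args[0])
    raise ValueError(f"unknown operation {e.op}")

# The tree for x * x + 1, evaluated at x = 3  ->  10
tree = Apply("+", (Apply("*", (Var("x"), Var("x"))), Const(1)))
print(evaluate(tree, {"x": 3}))  # 10.0
```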
Lambda calculus
Formal languages allow formalizing the concept of well-formed expressions.
In the 1930s, a new type of expression, the lambda expression, was introduced by Alonzo Church and Stephen Kleene for formalizing functions and their evaluation. The lambda operators (lambda abstraction and function application) form the basis for lambda calculus, a formal system used in mathematical logic and programming language theory.
The equivalence of two lambda expressions is undecidable (but see unification (computer science)). This is also the case for the expressions representing real numbers, which are built from the integers by using the arithmetical operations, the logarithm and the exponential (Richardson's theorem).
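Lambda abstraction and application can be mirrored directly in Python's own lambda syntax; as a small sketch, here is the classic Church encoding of numerals (an illustrative exercise, not a claim about the articles above):

```python
# Church numerals: a number n is the function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(two))  # 2
```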
Types of expressions
Algebraic expression
An algebraic expression is an expression built up from algebraic constants, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by a rational number). For example, $3x^2 - 2xy + c$ is an algebraic expression. Since taking the square root is the same as raising to the power $\tfrac{1}{2}$, the following is also an algebraic expression: $\sqrt{\frac{1-x^2}{1+x^2}}$
See also: Algebraic equation and Algebraic closure
Polynomial expression
A polynomial expression is an expression built with scalars (numbers or elements of some field), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers; for example (x + 1)(x − 1).
Using associativity, commutativity and distributivity, every polynomial expression is equivalent to a polynomial, that is, an expression that is a linear combination of products of integer powers of the indeterminates. For example, the above polynomial expression is equivalent to (denotes the same polynomial as) x² − 1.
Many authors do not distinguish polynomials and polynomial expressions. In this case, the expression of a polynomial expression as a linear combination is called the canonical form, normal form, or expanded form of the polynomial.
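Computer algebra systems perform exactly this conversion; a small sketch using the sympy library (assuming it is installed), with the same example polynomial expression as above:

```python
import sympy

x = sympy.symbols("x")
poly_expr = (x + 1) * (x - 1)       # a polynomial expression
canonical = sympy.expand(poly_expr)  # its expanded (canonical) form
print(canonical)                     # x**2 - 1
print(poly_expr.equals(canonical))   # True: both denote the same polynomial
```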
Computational expression
In computer science, an expression is a syntactic entity in a programming language that may be evaluated to determine its value or fail to terminate, in which case the expression is undefined. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called evaluation.
In simple settings, the resulting value is usually one of various primitive types, such as string, Boolean, or numerical (such as integer, floating-point, or complex).
In computer algebra, formulas are viewed as expressions that can be evaluated as a Boolean, depending on the values that are given to the variables occurring in the expressions. For example, x ≥ 1 takes the value false if x is given a value less than 1, and the value true otherwise.
Expressions are often contrasted with statements—syntactic entities that have no value (an instruction).
Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance, may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, a matrix may be represented as an expression with "matrix" as an operator and its rows as operands.
See: Computer algebra expression
Logical expression
In mathematical logic, a "logical expression" can refer to either terms or formulas. A term denotes a mathematical object while a formula denotes a mathematical fact. In particular, terms appear as components of a formula.
A first-order term is recursively constructed from constant symbols, variables, and function symbols.
An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula, which evaluates to true or false in bivalent logics, given an interpretation.
For example, (x + 1) × (x + 1) is a term built from the constant 1, the variable x, and the binary function symbols + and ×; it is part of the atomic formula (x + 1) × (x + 1) ≥ 0 which evaluates to true for each real-numbered value of x.
Formal expression
A formal expression is a kind of string of symbols, created by the same production rules as standard expressions, however, they are used without regard to the meaning of the expression. In this way, two formal expressions are considered equal only if they are syntactically equal, that is, if they are the exact same expression. For instance, the formal expressions "2" and "1+1" are not equal.
See also
Analytic expression
Closed-form expression
Formal calculation
Functional programming
Infinite expression
Number sentence
Rewriting
Signature (logic)
Notes
References
Works Cited
Abstract algebra
Logical expressions
Elementary algebra | Expression (mathematics) | [
"Mathematics"
] | 4,613 | [
"Algebra",
"Mathematical logic",
"Elementary algebra",
"Elementary mathematics",
"Logical expressions",
"Abstract algebra"
] |
609,147 | https://en.wikipedia.org/wiki/Transmission%20%28mechanical%20device%29 | A transmission (also called a gearbox) is a mechanical device which uses a gear set—two or more gears working together—to change the speed, direction of rotation, or torque multiplication/reduction in a machine.
Transmissions can have a single fixed-gear ratio, multiple distinct gear ratios, or continuously variable ratios. Variable-ratio transmissions are used in all sorts of machinery, especially vehicles.
Applications
Early uses
Early transmissions included the right-angle drives and other gearing in windmills, horse-powered devices, and steam-powered devices. Applications of these devices included pumps, mills and hoists.
Bicycles
Bicycles traditionally have used hub gear or Derailleur gear transmissions, but there are other more recent design innovations.
Automobiles
Since the torque and power output of an internal combustion engine (ICE) vary with its rpm, automobiles powered by ICEs require multiple gear ratios to keep the engine within its power band to produce optimal power, fuel efficiency, and smooth operation. Multiple gear ratios are also needed to provide sufficient acceleration and velocity for safe and reliable operation at modern highway speeds. ICEs typically operate over a range of approximately 600–7000 rpm, while the vehicle's speed requires the wheels to rotate in the range of 0–1800 rpm; the sketch below illustrates the resulting ratio arithmetic.
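To make the arithmetic concrete, here is a small sketch computing the overall drive ratios that map an engine's usable rpm band onto road speeds. All numbers (wheel circumference, speeds, rpm) are illustrative examples, not figures from the original text.

```python
# Illustrative only: map engine rpm onto road speed for an assumed
# wheel circumference of 2.0 m. All numbers are examples.

WHEEL_CIRCUMFERENCE_M = 2.0

def wheel_rpm(speed_kmh):
    """Wheel rpm needed to sustain a given road speed."""
    return speed_kmh * 1000 / 60 / WHEEL_CIRCUMFERENCE_M

def overall_ratio(engine_rpm, speed_kmh):
    """Engine revolutions per wheel revolution at this operating point."""
    return engine_rpm / wheel_rpm(speed_kmh)

# A low gear for pulling away at 15 km/h and a high gear for cruising at
# 120 km/h can keep the same engine speed of 3000 rpm:
print(round(overall_ratio(3000, 15), 1))   # 24.0 -> low overall ratio (short gear)
print(round(overall_ratio(3000, 120), 1))  # 3.0  -> high overall ratio (tall gear)
```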
In the early mass-produced automobiles, the standard transmission design was manual: the combination of gears was selected by the driver through a lever (the gear stick) that displaced gears and gear groups along their axes. Starting in 1939, cars using various types of automatic transmission became available in the US market. These vehicles used the engine's own power to change the effective gear ratio depending on the load so as to keep the engine running close to its optimal rotation speed. Automatic transmissions now are used in more than two thirds of cars globally, and on almost all new cars in the US.
Most currently-produced passenger cars with gasoline or diesel engines use transmissions with 4–10 forward gear ratios (also called speeds) and one reverse gear ratio. Electric vehicles typically use a fixed-gear or two-speed transmission with no reverse gear ratio.
Motorcycles
Fixed-ratio
The simplest transmissions used a fixed ratio to provide either a gear reduction or increase in speed, sometimes in conjunction with a change in the orientation of the output shaft. Examples of such transmissions are used in helicopters and wind turbines. In the case of a wind turbine, the first stage of the gearbox is usually a planetary gear, to minimize the size while withstanding the high torque inputs from the turbine.
Multi-ratio
Many transmissions – especially for transportation applications – have multiple gears that are used to change the ratio of input speed (e.g. engine rpm) to the output speed (e.g. the speed of a car) as required for a given situation. Gear (ratio) selection can be manual, semi-automatic, or automatic.
Manual
A manual transmission requires the driver to manually select the gears by operating a gear stick and clutch (which is usually a foot pedal for cars or a hand lever for motorcycles).
Most transmissions in modern cars use synchromesh to synchronise the speeds of the input and output shafts. However, prior to the 1950s, most cars used non-synchronous transmissions.
Sequential manual
A sequential manual transmission is a type of non-synchronous transmission used mostly for motorcycles and racing cars. It produces faster shift times than synchronized manual transmissions, through the use of dog clutches rather than synchromesh. Sequential manual transmissions also restrict the driver to selecting either the next or previous gear, in a successive order.
Semi-automatic
A semi-automatic transmission is where some of the operation is automated (often the actuation of the clutch), but the driver's input is required to move off from a standstill or to change gears.
Automated manual / clutchless manual
An automated manual transmission (AMT) is essentially a conventional manual transmission that uses automatic actuation to operate the clutch and/or shift between gears.
Many early versions of these transmissions were semi-automatic in operation, such as Autostick, which automatically control only the clutch, but still require the driver's input to initiate gear changes. Some of these systems are also referred to as clutchless manual systems. Modern versions of these systems that are fully automatic in operation, such as Selespeed and Easytronic, can control both the clutch operation and the gear shifts automatically, without any input from the driver.
Automatic
An automatic transmission does not require any input from the driver to change forward gears under normal driving conditions.
Hydraulic automatic
The most common design of automatic transmissions is the hydraulic automatic, which typically uses planetary gearsets that are operated using hydraulics. The transmission is connected to the engine via a torque converter (or a fluid coupling prior to the 1960s), instead of the friction clutch used by most manual transmissions and dual-clutch transmissions.
Dual-clutch (DCT)
A dual-clutch transmission (DCT) uses two separate clutches for odd and even gear sets. The design is often similar to two separate manual transmissions with their respective clutches contained within one housing, and working as one unit. In car and truck applications, the DCT functions as an automatic transmission, requiring no driver input to change gears.
Continuously-variable ratio
A continuously variable transmission (CVT) can change seamlessly through a continuous range of gear ratios. This contrasts with other transmissions that provide a limited number of gear ratios in fixed steps. The flexibility of a CVT with suitable control may allow the engine to operate at a constant RPM while the vehicle moves at varying speeds.
CVTs are used in cars, tractors, side-by-sides, motor scooters, snowmobiles, bicycles, and earthmoving equipment.
The most common type of CVT uses two pulleys connected by a belt or chain; however, several other designs have also been used at times.
Noise and vibration
Gearboxes are often a major source of noise and vibration in vehicles and stationary machinery. Higher sound levels are generally emitted when the vehicle is engaged in lower gears. The design life of the lower-ratio gears is shorter, so cheaper gears may be used; these tend to generate more noise than the helical gears used for the higher ratios, owing to their smaller overlap ratio and lower mesh stiffness. This fact has been used to analyze vehicle-generated sound since the late 1960s, and has been incorporated into the simulation of urban roadway noise and the corresponding design of urban noise barriers along roadways.
See also
Bicycle gearing
Direct-drive mechanism
List of auto parts
Transfer case
References
Mechanisms (engineering) | Transmission (mechanical device) | [
"Physics",
"Engineering"
] | 1,326 | [
"Mechanical power transmission",
"Mechanics",
"Mechanical engineering",
"Mechanisms (engineering)"
] |
609,717 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93silicon%20alloys | Aluminium–silicon alloys, or Silumin, is a general name for a group of lightweight, high-strength aluminium alloys based on the aluminium–silicon (AlSi) system, consisting predominantly of aluminium with silicon as the quantitatively most important alloying element. Pure AlSi alloys cannot be hardened; the commonly used alloys AlSiCu (with copper) and AlSiMg (with magnesium) can be hardened. The hardening mechanism corresponds to that of AlCu and AlMgSi.
AlSi alloys are by far the most important of all aluminum cast materials. They are suitable for all casting processes and have excellent casting properties. Important areas of application are in car parts, including engine blocks and pistons. In addition, attention is currently focused on their use as a functional material for high-energy heat storage in electric vehicles.
Alloying elements
Aluminium–silicon alloys typically contain 3% to 25% silicon. Casting is the primary use of aluminium–silicon alloys, but they can also be utilized in rapid solidification processes and powder metallurgy. Alloys used in powder metallurgy, rather than casting, may contain even more silicon, up to 50%. Silumin has a high resistance to corrosion, making it useful in humid environments.
The addition of silicon to aluminum also makes it less viscous when in liquid form, which, together with its low cost (as both component elements are relatively cheap to extract), makes it a very good casting alloy. Silumin with good castability may give a stronger finished casting than a potentially stronger alloy that is more difficult to cast.
All aluminum alloys also contain iron as an admixture. It is generally undesirable because it lowers strength and elongation at break. Together with Al and Si it forms the β-phase AlFeSi, which is present in the structure in the form of small needles. However, iron also prevents the castings from sticking to the molds in die casting, so special die-casting alloys contain a small amount of iron, while iron is avoided as far as possible in other alloys.
Manganese also reduces the tendency to stick, but affects the mechanical properties less than iron. Manganese forms a phase with other elements that is in the form of globulitic (round) grains.
Copper occurs in almost all technical alloys, at least as an admixture. From a content of 0.05% Cu, the corrosion resistance is reduced. Additions of about 1% Cu increase strength through solid solution strengthening; this also improves machinability. In the case of the AlSiCu alloys, higher proportions of copper are added, which means that the materials can be hardened (see Aluminum–copper alloy).
Together with silicon, magnesium forms the Mg2Si (magnesium silicide) phase, which is the basis of hardenability, similar to aluminum-magnesium-silicon alloys (AlMgSi). In these there is an excess of Mg, so the structure consists of aluminum mixed crystal with magnesium and Mg2Si. In the AlSiMg alloys, on the other hand, there is an excess of silicon and the structure consists of aluminum mixed crystal, silicon and Mg2Si.
Silicon powders are used in aluminum–silicon alloys for enhancing strength and castability, providing better durability under high-stress conditions. Silicon also improves the fluidity of molten aluminum, which allows easier casting of complex shapes with fewer defects.
Small additions of titanium and boron serve to refine the grain.
Pure aluminium–silicon alloys
Aluminum forms a eutectic with silicon at 577 °C and a Si content of 12.5% or 12.6%. Up to 1.65% Si can be dissolved in aluminum at this temperature. However, the solubility decreases rapidly with temperature. At 500 °C it is still 0.8% Si, at 400 °C 0.3% Si and at 250 °C only 0.05% Si. At room temperature, silicon is practically insoluble. Aluminum cannot be dissolved in silicon at all, not even at high temperatures. Only in the molten state are both completely soluble. Increases in strength due to solid solution strengthening are negligible.
Pure AlSi alloys are smelted from primary aluminium, while AlSi alloys with other elements are usually smelted from secondary aluminium. The pure AlSi alloys are medium strength, non-hardenable, but corrosion resistant, even in salt water environments.
The exact properties depend on whether the composition of the alloy is above, near or below the eutectic point. Castability increases with increasing Si content and is best at about 17% Si; the mechanical properties are best at 6% to 12% Si.
The mold filling capacity reaches its maximum at 12% Si, but is also good with other contents.
The tendency to form cavities is lowest at 6% to 8% Si and considered low overall.
The tendency to hot cracking is low with less than 6% Si.
Otherwise, AlSi alloys generally have favorable casting properties: the shrinkage is only 1.25% and the influence of the wall thickness is small.
Hypereutectic alloys, with a silicon content of 16 to 19%, such as Alusil, can be used in high-wear applications such as pistons, cylinder liners and internal combustion engine blocks. The metal is etched after casting, exposing hard, wear-resistant silicon precipitates. The rest of the surface becomes slightly porous and retains oil. Overall this makes for an excellent bearing surface, and at lower cost than traditional bronze bearing bushes.
Hypoeutectic alloys
Hypoeutectic alloys have a silicon content of less than 12%. With them, the aluminum solidifies first. As the temperature falls and the proportion of solidified aluminum increases, the silicon content of the residual melt increases until the eutectic point is reached. Then the entire residual melt solidifies as a eutectic. The microstructure is consequently characterized by primary aluminium, which is often present in the form of dendrites, and the eutectic of the residual melt lying between them. The lower the silicon content, the larger the dendrites.
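The proportions of primary aluminium and eutectic follow from the lever rule applied to the phase-diagram values quoted above (about 1.65% maximum Si solubility in aluminium and a 12.6% Si eutectic). A minimal Python sketch; the 7% Si composition is chosen purely as an example:

```python
def hypoeutectic_fractions(si_pct: float,
                           si_max_in_al: float = 1.65,   # max Si solubility in Al at 577 C
                           si_eutectic: float = 12.6):   # eutectic composition
    """Lever-rule estimate of the primary-aluminium and eutectic mass fractions
    just below the eutectic temperature, assuming equilibrium solidification."""
    if not si_max_in_al < si_pct < si_eutectic:
        raise ValueError("composition must lie between solubility limit and eutectic")
    f_eutectic = (si_pct - si_max_in_al) / (si_eutectic - si_max_in_al)
    return 1.0 - f_eutectic, f_eutectic

primary, eutectic = hypoeutectic_fractions(7.0)   # an AlSi7-type alloy, for example
print(f"primary aluminium: {primary:.0%}, eutectic: {eutectic:.0%}")
```

For a 7% Si alloy this gives roughly half primary aluminium dendrites and half eutectic between them.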
In pure AlSi alloys, the eutectic is often in a degenerate form. Instead of the fine structure that is otherwise typical of eutectics with its good mechanical properties, AlSi takes the form of a coarse-grained structure on slow cooling, in which silicon forms large plates or needles. These can sometimes be seen with the naked eye and make the material brittle. This is not a problem in chill casting, since the cooling rates are high enough to avoid degeneration.
In sand casting in particular, with its slow cooling rates, additional elements are added to the melt to prevent degeneration. Sodium, strontium and antimony are suitable. These elements are added to the melt at around 720 °C to 780 °C, causing supercooling that reduces the diffusion of silicon; the result is the usual fine eutectic, with higher strength and elongation at break.
Eutectic and near-eutectic alloys
Alloys with 11% Si to 13% Si are counted among the eutectic alloys. Annealing improves elongation and fatigue strength. Solidification is shell-forming in untreated alloys and smooth-walled in refined alloys, resulting in very good castability. Above all, the flowability and mold-filling ability are very good, which is why eutectic alloys are suitable for thin-walled parts.
Hypereutectic alloys
Alloys with more than 13% Si are referred to as over- or hypereutectic. The Si content is usually up to 17%, with special piston alloys also over 20%. Hypereutectic alloys have very low thermal expansion and are very wear resistant. In contrast to many other alloys, AlSi alloys do not show their maximum fluidity near the eutectic, but at 14% to 16% Si, or in the case of overheating at 17% to 18% Si. The tendency to hot cracking is minimal in the range from 10% to 14%. In the case of hypereutectic alloys, the silicon crystals solidify first in the melt, until the remaining melt solidifies as a eutectic. For grain refinement, copper–phosphorus alloys are used. The hard and brittle silicon leads to increased tool wear during subsequent machining, which is why diamond tools are sometimes used (see also Machinability).
Aluminium–silicon–magnesium alloys
AlSiMg alloys with small additions of magnesium (below 0.3 to 0.6% Mg) can be hardened both cold and warm. The proportion of magnesium decreases with increasing silicon content, which is between 5% Si and 10% Si. They are related to the AlMgSi alloys: Both are based on the fact that magnesium silicide Mg2Si is precipitated, which is present in the material in the form of finely divided particles and thus increases the strength. In addition, magnesium increases the elongation at break. In contrast to AlSiCu, which can also be hardened, these alloys are corrosion-resistant and easy to cast. However, copper is present as an impurity in some AlSiMg alloys, which reduces corrosion resistance. This applies above all to materials that have been melted from secondary aluminium.
Aluminium–silicon–copper alloys
AlSiCu alloys are also heat-hardenable and additionally high-strength, but susceptible to corrosion and somewhat less castable, though still adequately so. They are often smelted from secondary aluminium. The hardening is based on the same mechanism as the AlCu alloys. The copper content is 1% to 4%, that of silicon 4% to 10%. Small additions of magnesium improve strength.
Compositions of standardized varieties
All data are in percent by mass. The rest is aluminum.
Wrought alloys
Cast alloys
Mechanical properties of standardized and non-standard grades
4000 series
4000 series are alloyed with silicon. Variations of aluminium–silicon alloys intended for casting (and therefore not included in 4000 series) are also known as silumin.
Applications
Within the Aluminum Association numeric designation system, Silumin corresponds to alloys of two systems: 3xx.x, aluminum–silicon alloys also containing magnesium and/or copper, and 4xx.x, binary aluminum–silicon alloys. Copper increases strength, but reduces corrosion resistance.
In general, AlSi alloys are mainly used in foundries, especially for vehicle construction. Wrought alloys are very rare. They are used as a filler metal (welding wire) or as a solder in brazing. In some cases, forged AlSi pistons are also built for aviation.
AlSi eutectic casting alloys are used for machine parts, cylinder heads, cylinder crankcases, impellers and ribbed bodies. Hypereutectic (high silicon) alloys are used for engine parts because of low thermal expansion and high strength and wear resistance. This also includes special piston alloys with around 25% Si.
Alloys with additions of magnesium (AlSiMg) can be hardened by heat treatment. An example use case is wheel rims produced by low-pressure casting, chosen for their good strength, corrosion resistance and elongation at break. Alloys with about 10% Si are used for cylinder heads, switch housings, intake manifolds, transformer tanks, wheel suspensions and oil pans. Alloys with 5% Si to 7% Si are used for chassis parts and wheels. At levels of 9% Si, they are suitable for structural components and body nodes.
The copper-containing AlSiCu alloys are used for gear housings, crankcases and cylinder heads because of their heat resistance and hardenability.
In addition to the use of AlSi alloys as a structural material, in which the mechanical properties are paramount, another area of application is latent heat storage. In the phase change of the alloy at 577 °C, thermal energy can be stored in the form of the enthalpy of fusion. AlSi can therefore also be used as a metallic phase change material (mPCM). Compared to other phase change materials, metals are characterized by a high specific energy density combined with high thermal conductivity. The latter is important for the rapid entry and exit of heat in the storage material and thus increases the performance of a heat storage system. These advantageous properties of mPCM such as AlSi are of particular importance for vehicle applications, since low masses and volumes as well as high thermal performance are the main goals here. By using storage systems based on mPCM, the range of electric cars can be increased by thermally storing the thermal energy needed for heating in the mPCM instead of taking it from the traction battery.
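As a rough illustration of the storage capacity, the Python sketch below computes the latent heat held by a storage module of given mass; the latent-heat figure and the module mass are assumptions of plausible magnitude for near-eutectic AlSi, not datasheet values.

```python
# Assumed, order-of-magnitude parameters for a near-eutectic AlSi mPCM module.
LATENT_HEAT_J_PER_KG = 5.0e5   # enthalpy of fusion at the 577 C phase change (assumed)
MODULE_MASS_KG = 10.0          # assumed storage-module mass

stored_kwh = LATENT_HEAT_J_PER_KG * MODULE_MASS_KG / 3.6e6   # J -> kWh
print(f"{stored_kwh:.2f} kWh of latent heat in {MODULE_MASS_KG:.0f} kg of mPCM")
# ~1.4 kWh of cabin heat that would otherwise be drawn from the traction battery
```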
Near-eutectic AlSi melts are also used for hot-dip aluminizing. In continuous strip coating, steel strips are finished with a heat-resistant metallic coating 10–25 μm thick. Hot-dip aluminized sheet steel is an inexpensive material for thermally stressed components. Unlike zinc coatings, the coating does not provide cathodic protection under atmospheric conditions.
Characteristics
High castability, fluidity, corrosion resistance, ductility, and low density.
Usable for large castings, which can operate under heavy load conditions.
Considered not to be a heat-treatable alloy, but the addition of Mg and Cu can allow it to be heat treated, e.g. АЛ4 alloys.
Strengthened by solution treatment, e.g. adding 0.01% sodium (in the form of sodium fluoride [NaF] and sodium chloride [NaCl]) to the melt just before casting.
A disadvantage is a tendency for porosity in the casting, i.e. the casting can become foam-like. This can be avoided by casting under pressure in autoclaves.
References
Further reading
Aluminium alloys
Aluminium–silicon alloys | Aluminium–silicon alloys | [
"Chemistry"
] | 2,848 | [
"Alloys",
"Aluminium alloys"
] |
610,000 | https://en.wikipedia.org/wiki/EUMETSAT | The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is an intergovernmental organisation created through an international convention agreed by a current total of 30 European Member States.
EUMETSAT's primary objective is to establish, maintain and exploit European systems of operational meteorological satellites. EUMETSAT is responsible for the launch and operation of the satellites and for delivering satellite data to end-users as well as contributing to the operational monitoring of climate and the detection of global climate changes.
The activities of EUMETSAT contribute to a global meteorological satellite observing system coordinated with other space-faring states.
Satellite observations are an essential input to numerical weather prediction systems and also assist the human forecaster in the diagnosis of potentially hazardous weather developments. Of growing importance is the capacity of weather satellites to gather long-term measurements from space in support of climate change studies.
EUMETSAT is not an institution or agency of the European Union, although the majority of its members are EU member states. The organisation became a signatory to the International Charter on Space and Major Disasters in 2012, thus providing for the global charitable use of its space assets.
Member and cooperating states
The national mandatory contributions of member states are proportional to their gross national income. However, the cooperating countries contribute only half of the fee they would pay for full membership. The convention establishing EUMETSAT was opened for signature in 1983 and entered into force on 19 June 1986.
Satellite programmes
There are two types of programmes:
Geostationary satellites, providing a continuous view of the Earth disc from a stationary position in space.
Polar-orbiting satellites, flying at a much lower altitude, sending back more precise details about atmospheric temperature and moisture profiles, although with less frequent global coverage.
High-level, stationary in space (Geostationary satellites)
The current provision of geostationary satellite surveillance is enabled by the Meteosat series of satellites operated by EUMETSAT, generating images of the full Earth disc and data for forecasting.
The first generation of Meteosat, launched in 1977, provided continuous, reliable observations to a large user group. In response to demand for more frequent and comprehensive data, Meteosat Second Generation (MSG) was developed, with key improvements in the swift recognition and prediction of thunderstorms, fog, and the small depressions which can lead to dangerous wind storms. MSG entered service in 2004. To capture foreseeable user needs up to 2025, Meteosat Third Generation (MTG) is in active preparation.
Low-level orbiting (Polar satellites)
EUMETSAT Polar System
The lack of observational coverage in certain parts of the globe, particularly the Pacific Ocean and continents of the southern hemisphere, has led to the increasingly important role for polar-orbiting satellite data in numerical weather prediction and climate monitoring.
The EUMETSAT Polar System (EPS) Metop mission consists of three polar-orbiting Metop satellites, flown successively for more than 14 years. The first, Metop-A, was launched by a Russian Soyuz-2.1a rocket from Baikonur on October 19, 2006, at 22:28 Baikonur time (16:28 UTC). Metop-A was initially controlled by ESOC for the LEOP phase immediately following launch, with control handed over to EUMETSAT 72 hours after lift-off. EUMETSAT's first commands to the satellite were sent at 14:04 UTC on October 22, 2006.
The second EPS satellite, Metop-B, was launched from Baikonur on 17 September 2012, and the third, Metop-C, was launched from Centre Spatial Guyanais in Kourou, French Guiana on 7 November 2018 by Arianespace using a Soyuz ST-B launch vehicle with a Fregat-M upper stage.
From its much lower, polar orbit, special instruments on board Metop-A can deliver far more precise details about atmospheric temperature and moisture profiles than a geostationary satellite.
The satellites also ensure that the more remote regions of the globe, particularly in Northern Europe as well as the oceans in the Southern hemisphere, are fully covered.
The EPS programme is also the European half of a joint programme with NOAA, called the International Joint Polar System (IJPS). NOAA has operated a continuous series of low Earth orbiting meteorological satellites since April 1960. Many of the instruments on Metop are also operated on NOAA/POES satellites, providing similar data types across the IJPS.
Instruments on Metop
A/DCS (Advanced Data Collection System)
AMSU-A1 and AMSU-A2
ASCAT Advanced Scatterometer
AVHRR (Advanced Very High Resolution Radiometer)
GOME-2 (Global Ozone Monitoring Experiment) — instrument to monitor ozone levels
GRAS (GNSS Receiver for Atmospheric Sounding: global navigation satellite systems radio occultation)
HIRS (High Resolution Infrared Sounder)
IASI (Infrared atmospheric sounding interferometer)
MHS (Microwave Humidity Sounder)
SARP-3 and SARR (Search And Rescue Processor and Search And Rescue Repeater)
SEM (Space Environment Monitor, to measure the intensity of the Earth's radiation belts and the proton/electron flux.)
Jason / Sentinel-6
Jason-2
The Jason-2 programme is an international partnership across multiple organisations, including EUMETSAT, CNES, and the US agencies NASA and NOAA.
Jason-2 was launched successfully from Vandenberg Air Force Base aboard a Delta-II rocket on 20 June 2008, at 07:46 UTC.
Jason-2 reliably delivers detailed oceanographic data vital to weather forecasting and climate change monitoring. It provides data on the decadal (10-yearly) oscillations in large ocean basins, such as the Atlantic Ocean; on mesoscale variability; and on surface wind and wave conditions. Jason-2 measurements contribute to satellite data assimilation at the European Centre for Medium-Range Weather Forecasts (ECMWF), helping to improve global atmosphere and ocean forecasting.
Altimetric data from Jason-2 have also helped create detailed decade-long global observations and analyses of the El Niño and La Niña phenomena, opening the way to new discoveries about ocean circulation and its effects on climate, and providing new insights into ocean tides, turbulent ocean eddies and marine gravity.
Jason-3
Jason-3 was launched on 17 January 2016 from Vandenberg Air Force Base in California, on a SpaceX Falcon 9 launcher. It has been operational since 14 October 2016.
Jason-3 is on a non-Sun-synchronous low Earth orbit at 66° inclination and 1336 km altitude, optimised to eliminate tidal aliasing from sea surface height and mean sea level measurements. Jason-2 flies in the same orbit, but 162° apart.
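For orientation, the orbital period implied by that altitude follows from Kepler's third law; a minimal Python sketch using standard Earth constants (the function name is illustrative):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

def circular_orbit_period_min(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law, T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1e3   # semi-major axis, m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

print(f"{circular_orbit_period_min(1336):.0f} minutes")   # ~112 min per revolution
```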
It is built on the same cooperation as Jason-2, involving EUMETSAT, NOAA, CNES and NASA, with Copernicus expected to support the European contribution to operations, as part of its HPOA activity, which also covers contributions to the Jason-CS programme.
Sentinel-6/Jason-CS
The Jason satellites were succeeded by the Sentinel-6 for the radar altimeter mission, part of the European Union's Copernicus Programme for Earth observation, with the objective of providing an operational service for high-precision measurements of global sea-level. This mission is implemented as a multi-partner cooperation between the European Commission and EUMETSAT, ESA, NOAA and NASA, with support from the French space agency, CNES.
The mission, implemented through the two Sentinel-6/Jason-CS satellites (Sentinel-6 Michael Freilich and Sentinel-6B), aims to continue high precision ocean altimetry measurements in the 2020–2030 time-frame. A secondary objective is to collect high resolution vertical profiles of temperature, using the GNSS Radio-Occultation sounding technique, to assess temperature changes in the troposphere and stratosphere and to support Numerical Weather Prediction.
The launch of the first satellite – Sentinel-6 Michael Freilich – occurred successfully on 21 November 2020 from Vandenberg AFB in California, USA on a SpaceX Falcon-9 launch vehicle. The satellite was named in honour of Michael Freilich, an oceanographer and former director of NASA's Earth Science Division. Sentinel-6 Michael Freilich succeeded Jason-3 as the reference mission for satellite ocean altimetry in April 2022.
The launch of Sentinel-6B is foreseen for late-2025, also on a SpaceX Falcon-9.
See also
EUMETNET
the European Centre for Medium-Range Weather Forecasts (ECMWF)
the French CNES (CNES)
the US National Oceanic and Atmospheric Administration (NOAA), the US equivalent of EUMETSAT
NASA, the US equivalent of ESA
References
External links
EUMETSAT weather satellite viewer Online EUMETSAT weather satellite viewer with 2 months of archived data.
European space programmes
Satellite meteorology
Meteorological organizations
Space organizations
Intergovernmental organizations established by treaty
International organisations based in Germany
1986 establishments in Europe
Scientific organizations established in 1986 | EUMETSAT | [
"Astronomy",
"Engineering"
] | 1,878 | [
"Space programs",
"European space programmes",
"Astronomy organizations",
"Space organizations"
] |
610,202 | https://en.wikipedia.org/wiki/Fine%20structure | In atomic physics, the fine structure describes the splitting of the spectral lines of atoms due to electron spin and relativistic corrections to the non-relativistic Schrödinger equation. It was first measured precisely for the hydrogen atom by Albert A. Michelson and Edward W. Morley in 1887, laying the basis for the theoretical treatment by Arnold Sommerfeld, who introduced the fine-structure constant.
Background
Gross structure
The gross structure of line spectra is the structure predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross structure energy levels only depend on the principal quantum number n. However, a more accurate model takes into account relativistic and spin effects, which break the degeneracy of the energy levels and split the spectral lines. The scale of the fine structure splitting relative to the gross structure energies is on the order of (Zα)2, where Z is the atomic number and α is the fine-structure constant, a dimensionless number equal to approximately 1/137.
Relativistic corrections
The fine structure energy corrections can be obtained by using perturbation theory. To perform this calculation one must add three corrective terms to the Hamiltonian: the leading order relativistic correction to the kinetic energy, the correction due to the spin–orbit coupling, and the Darwin term coming from the quantum fluctuating motion or zitterbewegung of the electron.
These corrections can also be obtained from the non-relativistic limit of the Dirac equation, since Dirac's theory naturally incorporates relativity and spin interactions.
Hydrogen atom
This section discusses the analytical solutions for the hydrogen atom as the problem is analytically solvable and is the base model for energy level calculations in more complex atoms.
Kinetic energy relativistic correction
The gross structure assumes the kinetic energy term of the Hamiltonian takes the same form as in classical mechanics, which for a single electron means

$\mathcal{H}^{0} = \frac{p^{2}}{2m_{e}} + V$

where $V$ is the potential energy, $p$ is the momentum, and $m_{e}$ is the electron rest mass.
However, when considering a more accurate theory of nature via special relativity, we must use a relativistic form of the kinetic energy,

$T = \sqrt{p^{2}c^{2} + m_{e}^{2}c^{4}} - m_{e}c^{2}$

where the first term is the total relativistic energy, and the second term is the rest energy of the electron ($c$ is the speed of light). Expanding the square root for large values of $c$, we find

$T = \frac{p^{2}}{2m_{e}} - \frac{p^{4}}{8m_{e}^{3}c^{2}} + \cdots$

Although there are an infinite number of terms in this series, the later terms are much smaller than earlier terms, and so we can ignore all but the first two. Since the first term above is already part of the classical Hamiltonian, the first order correction to the Hamiltonian is

$\mathcal{H}' = -\frac{p^{4}}{8m_{e}^{3}c^{2}}$
Using this as a perturbation, we can calculate the first order energy corrections due to relativistic effects:

$E_{n}^{(1)} = \left\langle \psi^{0} \right| \mathcal{H}' \left| \psi^{0} \right\rangle = -\frac{1}{8 m_{e}^{3} c^{2}} \left\langle \psi^{0} \right| p^{4} \left| \psi^{0} \right\rangle$

where $\psi^{0}$ is the unperturbed wave function. Recalling the unperturbed Hamiltonian, we see

$\mathcal{H}^{0} \left| \psi^{0} \right\rangle = E_{n} \left| \psi^{0} \right\rangle \;\Longrightarrow\; p^{2} \left| \psi^{0} \right\rangle = 2 m_{e} \left( E_{n} - V \right) \left| \psi^{0} \right\rangle$

We can use this result to further calculate the relativistic correction:

$E_{n}^{(1)} = -\frac{1}{2 m_{e} c^{2}} \left( E_{n}^{2} - 2 E_{n} \langle V \rangle + \langle V^{2} \rangle \right)$
For the hydrogen atom,

$V(r) = -\frac{e^{2}}{4\pi\varepsilon_{0}\, r}$

and

$\left\langle \frac{1}{r} \right\rangle = \frac{1}{a_{0}\, n^{2}}, \qquad \left\langle \frac{1}{r^{2}} \right\rangle = \frac{1}{(l + \frac{1}{2})\, n^{3} a_{0}^{2}}$

where $e$ is the elementary charge, $\varepsilon_{0}$ is the vacuum permittivity, $a_{0}$ is the Bohr radius, $n$ is the principal quantum number, $l$ is the azimuthal quantum number and $r$ is the distance of the electron from the nucleus. Therefore, the first order relativistic correction for the hydrogen atom is

$E_{n}^{(1)} = -\frac{E_{n}^{2}}{2 m_{e} c^{2}} \left( \frac{4n}{l + \frac{1}{2}} - 3 \right)$

where we have used:

$E_{n} = -\frac{e^{2}}{8 \pi \varepsilon_{0}\, a_{0}\, n^{2}}$
On final calculation, the order of magnitude for the relativistic correction to the ground state is $-9.056 \times 10^{-4}\ \text{eV}$.
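That figure can be checked numerically from the formula above; a minimal Python sketch (the constants are standard values and the function name is illustrative):

```python
M_E_C2_EV = 510_998.95   # electron rest energy m_e c^2, in eV
E_RYD_EV = 13.605693     # hydrogen ground-state binding energy, in eV

def relativistic_correction_ev(n: int, l: int) -> float:
    """First-order relativistic kinetic-energy correction for hydrogen, in eV."""
    e_n = -E_RYD_EV / n**2
    return -(e_n**2 / (2 * M_E_C2_EV)) * (4 * n / (l + 0.5) - 3)

print(f"{relativistic_correction_ev(1, 0):.3e} eV")   # ~ -9.06e-04 eV for the ground state
```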
Spin–orbit coupling
For a hydrogen-like atom with $Z$ protons ($Z = 1$ for hydrogen), orbital angular momentum $\mathbf{L}$ and electron spin $\mathbf{S}$, the spin–orbit term is given by:

$\mathcal{H}_{\mathrm{SO}} = \left( \frac{Z e^{2}}{4\pi \varepsilon_{0}} \right) \left( \frac{g_{s} - 1}{2 m_{e}^{2} c^{2}} \right) \frac{\mathbf{L} \cdot \mathbf{S}}{r^{3}}$

where $g_{s}$ is the spin g-factor.
The spin–orbit correction can be understood by shifting from the standard frame of reference (where the electron orbits the nucleus) into one where the electron is stationary and the nucleus instead orbits it. In this case the orbiting nucleus functions as an effective current loop, which in turn will generate a magnetic field. However, the electron itself has a magnetic moment due to its intrinsic angular momentum. The two magnetic vectors, $\mathbf{B}$ and $\boldsymbol{\mu}_{s}$, couple together so that there is a certain energy cost depending on their relative orientation. This gives rise to the energy correction of the form

$\Delta E_{\mathrm{SO}} = \xi(r)\, \mathbf{L} \cdot \mathbf{S}$
Notice that an important factor of 2 has to be added to the calculation, called the Thomas precession, which comes from the relativistic calculation that changes back to the electron's frame from the nucleus frame.
Since

$\left\langle \frac{1}{r^{3}} \right\rangle = \frac{Z^{3}}{a_{0}^{3}\, n^{3}\, l\,(l + \frac{1}{2})(l + 1)}$

by Kramers–Pasternack relations and

$\langle \mathbf{L} \cdot \mathbf{S} \rangle = \frac{\hbar^{2}}{2}\left[ j(j+1) - l(l+1) - s(s+1) \right]$

the expectation value for the Hamiltonian is:

$\left\langle \mathcal{H}_{\mathrm{SO}} \right\rangle = \frac{E_{n}^{2}}{m_{e}c^{2}}\; n\; \frac{j(j+1) - l(l+1) - \frac{3}{4}}{l\,(l + \frac{1}{2})(l + 1)}$

Thus the order of magnitude for the spin–orbital coupling is $\frac{Z^{4}}{n^{3}}\, 10^{-4}\ \text{eV}$.
When weak external magnetic fields are applied, the spin–orbit coupling contributes to the Zeeman effect.
Darwin term
There is one last term in the non-relativistic expansion of the Dirac equation. It is referred to as the Darwin term, as it was first derived by Charles Galton Darwin, and is given by:

$\mathcal{H}_{\mathrm{Darwin}} = \frac{\hbar^{2}}{8 m_{e}^{2} c^{2}}\, 4\pi \left( \frac{Z e^{2}}{4\pi \varepsilon_{0}} \right) \delta^{3}(\mathbf{r})$
The Darwin term affects only the s orbitals. This is because the wave function of an electron with $l > 0$ vanishes at the origin, hence the delta function has no effect. For example, it gives the 2s orbital the same energy as the 2p orbital by raising the 2s state by $9.057 \times 10^{-5}\ \text{eV}$.
The Darwin term changes potential energy of the electron. It can be interpreted as a smearing out of the electrostatic interaction between the electron and nucleus due to zitterbewegung, or rapid quantum oscillations, of the electron. This can be demonstrated by a short calculation.
Quantum fluctuations allow for the creation of virtual electron–positron pairs with a lifetime estimated by the uncertainty principle, $\Delta t \approx \hbar / (m_{e}c^{2})$. The distance the particles can move during this time is $\xi \approx c\,\Delta t = \hbar / (m_{e}c)$, the Compton wavelength. The electrons of the atom interact with those pairs. This yields a fluctuating electron position $\mathbf{r} + \boldsymbol{\xi}$. Using a Taylor expansion, the effect on the potential $U$ can be estimated:

$U(\mathbf{r} + \boldsymbol{\xi}) \approx U(\mathbf{r}) + \boldsymbol{\xi} \cdot \nabla U(\mathbf{r}) + \frac{1}{2} \sum_{i,j} \xi_{i}\, \xi_{j}\, \partial_{i}\, \partial_{j}\, U(\mathbf{r})$

Averaging over the fluctuations,

$\overline{\boldsymbol{\xi}} = 0, \qquad \overline{\xi_{i}\, \xi_{j}} = \tfrac{1}{3}\, \delta_{ij}\, \overline{\xi^{2}}$

gives the average potential

$\overline{U(\mathbf{r} + \boldsymbol{\xi})} = U(\mathbf{r}) + \tfrac{1}{6}\, \overline{\xi^{2}}\, \nabla^{2} U(\mathbf{r})$

Approximating $\overline{\xi^{2}} \approx \left(\hbar / m_{e}c\right)^{2}$, this yields the perturbation of the potential due to fluctuations:

$\delta U \approx \frac{\hbar^{2}}{6\, m_{e}^{2} c^{2}}\, \nabla^{2} U$

To compare with the expression above, plug in the Coulomb potential, using $\nabla^{2}(1/r) = -4\pi\, \delta^{3}(\mathbf{r})$:

$\delta U \approx \frac{\hbar^{2}}{6\, m_{e}^{2} c^{2}}\, 4\pi \left( \frac{Z e^{2}}{4\pi \varepsilon_{0}} \right) \delta^{3}(\mathbf{r})$

This differs from the Darwin term above only in the coefficient ($\tfrac{1}{6}$ instead of $\tfrac{1}{8}$), i.e. it is only slightly different.
Another mechanism that affects only the s-state is the Lamb shift, a further, smaller correction that arises in quantum electrodynamics that should not be confused with the Darwin term. The Darwin term gives the s-state and p-state the same energy, but the Lamb shift makes the s-state higher in energy than the p-state.
Total effect
The full Hamiltonian is given by

$\mathcal{H} = \mathcal{H}^{0} + \mathcal{H}'_{\mathrm{kinetic}} + \mathcal{H}_{\mathrm{SO}} + \mathcal{H}_{\mathrm{Darwin}}$

where $\mathcal{H}^{0}$ is the Hamiltonian from the Coulomb interaction.

The total effect, obtained by summing the three components up, is given by the following expression:

$\Delta E = \frac{E_{n}\,(Z\alpha)^{2}}{n^{2}} \left( \frac{n}{j + \frac{1}{2}} - \frac{3}{4} \right)$

where $j$ is the total angular momentum quantum number ($j = \frac{1}{2}$ if $l = 0$ and $j = l \pm \frac{1}{2}$ otherwise). It is worth noting that this expression was first obtained by Sommerfeld based on the old Bohr theory; i.e., before the modern quantum mechanics was formulated.
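As a worked example, this expression reproduces the familiar splitting between the hydrogen 2p1/2 and 2p3/2 levels; a short Python sketch (standard constants, illustrative function name):

```python
ALPHA = 1 / 137.035999   # fine-structure constant
E_RYD_EV = 13.605693     # hydrogen Rydberg energy, in eV

def fine_structure_shift_ev(n: int, j: float) -> float:
    """First-order fine-structure correction for hydrogen (Z = 1), in eV."""
    e_n = -E_RYD_EV / n**2
    return (e_n * ALPHA**2 / n**2) * (n / (j + 0.5) - 0.75)

splitting = fine_structure_shift_ev(2, 1.5) - fine_structure_shift_ev(2, 0.5)
print(f"2p3/2 - 2p1/2 splitting: {splitting:.2e} eV")   # ~4.5e-05 eV, about 10.9 GHz
```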
Exact relativistic energies
The total effect can also be obtained by using the Dirac equation. The exact energies are given by

$E_{n\,j} = m_{e}c^{2} \left[ 1 + \left( \frac{Z\alpha}{\,n - j - \frac{1}{2} + \sqrt{\left(j + \frac{1}{2}\right)^{2} - Z^{2}\alpha^{2}}\,} \right)^{2} \right]^{-1/2}$
This expression, which contains all higher order terms that were left out in the other calculations, expands to first order to give the energy corrections derived from perturbation theory. However, this equation does not contain the hyperfine structure corrections, which are due to interactions with the nuclear spin. Other corrections from quantum field theory such as the Lamb shift and the anomalous magnetic dipole moment of the electron are not included.
See also
Angular momentum coupling
Fine electronic structure
References
External links
Hyperphysics: Fine Structure
University of Texas: The fine structure of hydrogen
Atomic physics | Fine structure | [
"Physics",
"Chemistry"
] | 1,523 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
610,367 | https://en.wikipedia.org/wiki/Labrador%20Sea | The Labrador Sea (; ) is an arm of the North Atlantic Ocean between the Labrador Peninsula and Greenland. The sea is flanked by continental shelves to the southwest, northwest, and northeast. It connects to the north with Baffin Bay through the Davis Strait. It is a marginal sea of the Atlantic.
The sea formed upon separation of the North American Plate and Greenland Plate that started about 60 million years ago and stopped about 40 million years ago. It contains one of the world's largest turbidity current channel systems, the Northwest Atlantic Mid-Ocean Channel (NAMOC), that runs for thousands of kilometers along the sea bottom toward the Atlantic Ocean.
The Labrador Sea is a major source of the North Atlantic Deep Water, a cold water mass that flows at great depth along the western edge of the North Atlantic.
History
The Labrador Sea formed upon separation of the North American Plate and Greenland Plate that started about 60 million years ago (Paleocene) and stopped about 40 million years ago. A sedimentary basin, which is now buried under the continental shelves, formed during the Cretaceous. Onset of magmatic sea-floor spreading was accompanied by volcanic eruptions of picrites and basalts in the Paleocene at the Davis Strait and Baffin Bay.
Between about 500 BC and 1300 AD, the southern coast of the sea contained Dorset, Beothuk, and Inuit settlements; Dorset tribes were later replaced by Thule people.
Extent
The International Hydrographic Organization defines the limits of the Labrador Sea as follows:
On the North: the South limit of Davis Strait [The parallel of 60° North between Greenland and Labrador].
On the East: a line from Cape St. Francis (Newfoundland) to Cape Farewell (Greenland).
On the West: the East Coast of Labrador and Newfoundland and the Northeast limit of the Gulf of St. Lawrence – a line running from Cape Bauld (North point of Kirpon Island, ) to the East extreme of Belle Isle and on to the Northeast Ledge (). Thence a line joining this ledge with the East extreme of Cape St. Charles (52°13'N) in Labrador.
Natural Resources Canada uses a slightly different definition, putting the northern boundary of the Labrador Sea on a straight line from a headland on Killiniq Island abutting Lady Job Harbour to Cape Farewell.
Oceanography
The Labrador Sea is about deep and wide where it joins the Atlantic Ocean. It becomes shallower, to less than towards Baffin Bay (see depth map) and passes into the wide Davis Strait. A deep turbidity current channel system, which is about wide and long, runs on the bottom of the sea, near its center from the Hudson Strait into the Atlantic. It is called the Northwest Atlantic Mid-Ocean Channel (NAMOC) and is one of the world's longest drainage systems of Pleistocene age. It appears as a submarine river bed with numerous tributaries and is maintained by high-density turbidity currents flowing within the levees.
The water temperature varies between in winter and in summer. The salinity is relatively low, at 31–34.9 parts per thousand. Two-thirds of the sea is covered in ice in winter. Tides are semi-diurnal (i.e. occur twice a day), reaching .
There is an anticlockwise water circulation in the sea. It is initiated by the East Greenland Current and continued by the West Greenland Current, which brings warmer, more saline waters northwards, along the Greenland coasts up to the Baffin Bay. Then, the Baffin Island Current and Labrador Current transport cold and less saline water southward along the Canadian coast. These currents carry numerous icebergs and therefore hinder navigation and exploration of the gas fields beneath the sea bed. The speed of the Labrador current is typically , but can reach in some areas, whereas the Baffin Current is somewhat slower at about . The Labrador Current maintains the water temperature at and salinity between 30 and 34 parts per thousand.
The sea provides a significant part of the North Atlantic Deep Water (NADW) — a cold water mass that flows at great depth along the western edge of the North Atlantic, spreading out to form the largest identifiable water mass in the World Ocean. The NADW consists of three parts of different origin and salinity, and the top one, the Labrador Sea Water (LSW), is formed in the Labrador Sea. This part occurs at a medium depth and has a relatively low salinity (34.84–34.89 parts per thousand), low temperature () and high oxygen content compared to the layers above and below it. LSW also has a lower vorticity, i.e. a lower tendency to form vortices, than any other water in the North Atlantic, which reflects its high homogeneity. It has a potential density of 27.76–27.78 mg/cm3 relative to the surface layers, meaning it is denser, and thus sinks under the surface and remains homogeneous and unaffected by surface fluctuations.
Fauna
The northern and western parts of the Labrador Sea are covered in ice between December and June. This drift ice serves as a breeding ground for several types of pinnipeds (including Atlantic walrus and bearded, grey, harbor, harp, hooded and ringed seals). Several cetacean species feed in these abundant waters in early spring, including blue, fin, humpback, long-finned pilot, minke, North Atlantic right, sei and sperm whales. The sea contains one of the two primary populations of sei whales, the other being the Scotian Shelf. Pods of beluga (white) whales are more common further to the north, west and south (notably in Baffin Bay, where their population reaches around 20,000 animals), and further afield in Hudson Bay and the Gulf of Saint Lawrence. While somewhat rarer in the Labrador Sea—especially since the 1950s— some sightings still take place. Additionally, pods of orca are drawn to the sea by the large shoals of fish, as well as the many marine mammal species they may hunt (including other cetaceans and pinnipeds), such as harbour porpoise and Atlantic white-sided, common, striped and white-beaked dolphins.
The sea is also a feeding-ground for Atlantic salmon. Shrimp fisheries began in 1978, intensifying by 2000, in addition to cod fishing. However, by the 1990s, the cod fishing had already depleted the fishes' population near the Labrador and West Greenland banks, and was therefore halted in 1992. Other fishery targets include haddock, Atlantic herring, lobster, several species of flatfish, and pelagic fish, such as sand lance and capelin. They are most abundant in the southern parts of the sea.
The Labrador duck was a common bird on the Canadian coast until the 19th century, but is now extinct. Other coastal animals include the Labrador wolf (Canis lupus labradorius), woodland caribou (Rangifer tarandus caribou), moose (Alces alces), black bear (Ursus americanus), Canada lynx (Lynx canadensis), red fox (Vulpes vulpes), Arctic fox (Alopex lagopus), wolverine (G. gulo), American mink (Neogale vison), North American river otter (Lontra canadensis), snowshoe hare (Lepus americanus), grouse (Dendragapus spp.), osprey (Pandion haliaetus), raven (Corvus corax), ducks, geese, swans, partridge and pheasant. Occasionally, coastal polar bear (Ursus maritimus) sightings occur along the sea, mainly further north but sometimes as far south as Conception Bay and the mouth of the Gulf of Saint Lawrence.
Flora
Coastal vegetation includes black spruce (Picea mariana), tamarack, white spruce (P. glauca), dwarf birch (Betula spp.), aspen, willow (Salix spp.), ericaceous shrubs (Ericaceae), cottongrass (Eriophorum spp.), sedge (Carex spp.), lichens and moss. Evergreen bushes of Labrador tea, which is used to make herbal teas, are common in the area, both on the Greenland and Canadian coasts.
References
Oceanography
Seas of Greenland
Seas of the Atlantic Ocean
Bodies of water of Newfoundland and Labrador
Seas of Canada
Seas of North America
Geography of North America
Cenozoic rifts and grabens | Labrador Sea | [
"Physics",
"Environmental_science"
] | 1,753 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
610,582 | https://en.wikipedia.org/wiki/Escapement | An escapement is a mechanical linkage in mechanical watches and clocks that gives impulses to the timekeeping element and periodically releases the gear train to move forward, advancing the clock's hands. The impulse action transfers energy to the clock's timekeeping element (usually a pendulum or balance wheel) to replace the energy lost to friction during its cycle and keep the timekeeper oscillating. The escapement is driven by force from a coiled spring or a suspended weight, transmitted through the timepiece's gear train. Each swing of the pendulum or balance wheel releases a tooth of the escapement's escape wheel, allowing the clock's gear train to advance or "escape" by a fixed amount. This regular periodic advancement moves the clock's hands forward at a steady rate. At the same time, the tooth gives the timekeeping element a push, before another tooth catches on the escapement's pallet, returning the escapement to its "locked" state. The sudden stopping of the escapement's tooth is what generates the characteristic "ticking" sound heard in operating mechanical clocks and watches.
The first mechanical escapement, the verge escapement, was invented in medieval Europe during the 13th century and was the crucial innovation that led to the development of the mechanical clock. The design of the escapement has a large effect on a timepiece's accuracy, and improvements in escapement design drove improvements in time measurement during the era of mechanical timekeeping from the 13th through the 19th century.
Escapements are also used in other mechanisms besides timepieces. Manual typewriters used escapements to step the carriage as each letter (or space) was typed.
History
The invention of the escapement was an important step in the history of technology, as it made the all-mechanical clock possible. The first all-mechanical escapement, the verge escapement, was invented in 13th-century Europe. It allowed timekeeping methods to move from continuous processes such as the flow of water in water clocks, to repetitive oscillatory processes such as the swing of pendulums, enabling more accurate timekeeping. Oscillating timekeepers are the controlling devices in all modern clocks.
Liquid-driven escapements
The earliest liquid-driven escapement was described by the Greek engineer Philo of Byzantium in the 3rd century BC in chapter 31 of his technical treatise Pneumatics, as part of a washstand. A counterweighted spoon, supplied by a water tank, tips over in a basin when full, releasing a spherical piece of pumice in the process. Once the spoon has emptied, it is pulled up again by the counterweight, closing the door on the pumice as the string tightens. Remarkably, Philo's comment that "its construction is similar to that of clocks" indicates that such escapement mechanisms were already integrated into ancient water clocks.
In China, the Tang dynasty Buddhist monk Yi Xing, along with government official Liang Lingzan, made the escapement in 723 (or 725) AD for the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. Song dynasty horologists Zhang Sixun and Su Song duly applied escapement devices for their astronomical clock towers in the 10th century, where water flowed into a container on a pivot. However, the technology later stagnated and retrogressed. According to historian Derek J. de Solla Price, the Chinese escapement spread west and was the source of Western escapement technology.
According to Ahmad Y. Hassan, a mercury escapement in a Spanish work for Alfonso X in 1277 can be traced back to earlier Arabic sources. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts.
However, none of these were true mechanical escapements, since they still depended on the flow of liquid through a hole to measure time. In these designs, a container tipped over each time it filled up, thus advancing the clock's wheels each time an equal quantity of water was measured out. The time between releases depended on the rate of flow, as do all liquid clocks. The rate of flow of a liquid through a hole varies with temperature and viscosity changes and decreases with pressure as the level of liquid in the source container drops. The development of mechanical clocks depended on the invention of an escapement that would allow a clock's movement to be controlled by an oscillating weight whose rate of oscillation would stay constant.
Mechanical escapements
The first mechanical escapement, the verge escapement, was used in a bell-ringing apparatus called an alarum for several centuries before it was adapted to clocks. Some sources claim that French architect Villard de Honnecourt invented the first escapement in 1237, citing a drawing of a rope linkage to turn a statue of an angel to follow the sun, found in his notebooks; however, the consensus is that this was not an escapement.
Astronomer Robertus Anglicus wrote in 1271 that clockmakers were trying to invent an escapement, but had not yet been successful. Records in financial transactions for the construction of clocks point to the late 13th century as the most likely date for when tower clock mechanisms transitioned from water clocks to mechanical escapements. Most sources agree that mechanical escapement clocks existed by 1300.
However, the earliest available description of an escapement was not a verge escapement, but a variation called a strob escapement. Described in Richard of Wallingford's 1327 manuscript on the clock that he built at the Abbey of St. Albans, this consisted of a pair of escape wheels on the same axle, with alternating radial teeth. The verge rod was suspended between them, with a short crosspiece that rotated first in one direction and then the other as the staggered teeth pushed past. Although no other example is known, it is possible that this was the first clock escapement design.
The verge became the standard escapement used in all other early clocks and watches, and remained the only known escapement for 400 years. Its performance was limited by friction and recoil, but most importantly, the early balance wheels used in verge escapements, known as the foliot, lacked a balance spring and thus had no natural "beat", severely limiting their timekeeping accuracy.
A great leap in the accuracy of escapements happened after 1657, due to the invention of the pendulum and the addition of the balance spring to the balance wheel, which made the timekeepers in both clocks and watches harmonic oscillators. The resulting improvement in timekeeping accuracy enabled greater focus on the accuracy of the escapement. The next two centuries, the "golden age" of mechanical horology, saw the invention of over 300 escapement designs, although only about ten of these were ever widely used in clocks and watches.
The invention of the crystal oscillator and the quartz clock in the 1920s, which became the most accurate clock by the 1930s, shifted technological research in timekeeping to electronic methods, and escapement design ceased to play a role in advancing timekeeping precision.
Reliability
The reliability of an escapement depends on the quality of workmanship and the level of maintenance given. A poorly constructed or poorly maintained escapement will cause problems. The escapement must accurately convert the oscillations of the pendulum or balance wheel into rotation of the clock or watch gear train, and it must deliver enough energy to the pendulum or balance wheel to maintain its oscillation.
In many escapements, the unlocking of the escapement involves sliding motion; for example, in the animation shown above, the pallets of the anchor slide against the escapement wheel teeth as the pendulum swings. The pallets are often made of very hard materials such as polished stone (for example, artificial ruby), but even so, they normally require lubrication. Since lubricating oil degrades over time due to evaporation, dust, oxidation, etc., periodic re-lubrication is needed. If this is not done, the timepiece may work unreliably or stop altogether, and the escapement components may be subjected to rapid wear. The increased reliability of modern watches is due primarily to the higher-quality oils used for lubrication. Lubricant lifetimes can be greater than five years in a high-quality watch.
Some escapements avoid sliding friction; examples include the grasshopper escapement of John Harrison in the 18th century. This may avoid the need for lubrication in the escapement (though it does not obviate the requirement for lubrication of other parts of the gear train).
Accuracy
The accuracy of a mechanical clock is dependent on the accuracy of the timing device. If this is a pendulum, then the period of swing of the pendulum determines the accuracy. If the pendulum rod is made of metal it will expand and contract with heat, lengthening or shortening the pendulum; this changes the time taken for a swing. Special alloys are used in expensive pendulum-based clocks to minimize this distortion. The arc through which a pendulum swings varies; highly accurate pendulum-based clocks have very small arcs in order to minimize the circular error.
Pendulum-based clocks can achieve outstanding accuracy. Even into the 20th century, pendulum-based clocks were reference timepieces in laboratories.
Escapements play a big part in accuracy as well. The precise point in the pendulum's travel at which impulse is supplied will affect how closely to time the pendulum will swing. Ideally, the impulse should be evenly distributed on either side of the lowest point of the pendulum's swing. This is called "being in beat." This is because pushing a pendulum when it is moving towards mid-swing makes it gain, whereas pushing it while it is moving away from mid-swing makes it lose. If the impulse is evenly distributed then it gives energy to the pendulum without changing the time of its swing.
The pendulum's period depends slightly on the size of the swing. If the amplitude changes from 4° to 3°, the period of the pendulum will decrease by about 0.013 percent, which translates into a gain of about 12 seconds per day. This is caused by the restoring force on the pendulum being circular, not linear; thus, the period is only approximately independent of amplitude, within the regime of the small-angle approximation. To be isochronous, the pendulum's path would have to be cycloidal. To minimize the effect of amplitude, pendulum swings are kept as small as possible.
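These figures follow from the leading-order amplitude correction to the pendulum period, $T \approx T_{0}\,(1 + \theta^{2}/16)$ with the amplitude $\theta$ in radians; a short Python check:

```python
import math

def period_factor(amplitude_deg: float) -> float:
    """Leading-order circular-error correction: T/T0 = 1 + theta^2/16."""
    theta = math.radians(amplitude_deg)
    return 1 + theta**2 / 16

change = period_factor(4) - period_factor(3)   # fractional change, 4 deg -> 3 deg
print(f"period shortens by {change:.3%}")      # ~0.013 percent
print(f"gain of ~{change * 86_400:.0f} s/day") # ~12 seconds per day
```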
As a rule, whatever the method of impulse the action of the escapement should have the smallest effect on the oscillator which can be achieved, whether a pendulum or the balance in a watch. This effect, which all escapements have to a larger or smaller degree is known as the escapement error.
Any escapement with sliding friction will need lubrication, but as this deteriorates the friction will increase, and, perhaps, insufficient power will be transferred to the timing device. If the timing device is a pendulum, the increased frictional forces will decrease the Q factor, increasing the resonance band, and decreasing its precision. For spring-driven clocks, the impulse force applied by the spring changes as the spring is unwound, following Hooke's law. For gravity-driven clocks, the impulse force also increases as the driving weight falls and more chain suspends the weight from the gear train; in practice, however, this effect is only seen in large public clocks, and it can be avoided by a closed-loop chain.
Watches and smaller clocks do not use pendulums as the timing device. Instead, they use a balance spring: a fine spring connected to a metal balance wheel that oscillates (rotates back and forth). Most modern mechanical watches have a working frequency of 3–4 Hz (oscillations per second), or 6–8 beats per second (21,600–28,800 beats per hour; bph). Faster or slower rates are used in some watches (33,600 bph or 19,800 bph). The working frequency depends on the balance spring's stiffness (spring constant); to keep time, the stiffness should not vary with temperature. Consequently, balance springs use sophisticated alloys; in this area, watchmaking is still advancing. As with the pendulum, the escapement must provide a small kick each cycle to keep the balance wheel oscillating. Also, the same lubrication problem occurs over time; the watch will lose accuracy (typically it will speed up) when the escapement lubrication starts failing.
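The beat-rate figures quoted above convert directly, since each full oscillation of the balance produces two beats; a one-line check in Python:

```python
def beats_per_hour(freq_hz: float) -> float:
    """Each balance oscillation yields two beats (escape-wheel releases)."""
    return freq_hz * 2 * 3600

for f in (2.75, 3.0, 4.0):
    print(f"{f} Hz -> {beats_per_hour(f):,.0f} bph")   # 19,800 / 21,600 / 28,800 bph
```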
Pocket watches were the predecessor of modern wristwatches. Pocket watches, being in the pocket, were usually in a vertical orientation. Gravity causes some loss of accuracy as it magnifies over time any lack of symmetry in the weight of the balance. The tourbillon was invented to minimize this: the balance and spring are put in a cage that rotates (typically but not necessarily, once a minute), smoothing gravitational distortions. This very clever and sophisticated clockwork is a prized complication in wristwatches, even though the natural movement of the wearer tends to smooth gravitational influences anyway.
The most accurate commercially produced mechanical clock was the electromechanical Shortt-Synchronome free pendulum clock invented by W. H. Shortt in 1921, which had an uncertainty of about 1 second per year. The most accurate mechanical clock to date is probably the electromechanical Littlemore Clock, built by noted archaeologist E. T. Hall in the 1990s. In Hall's paper, he reports an uncertainty of 3 parts in 10⁹ measured over 100 days (an uncertainty of about 0.02 seconds over that period). Both of these clocks are electromechanical clocks: they use a pendulum as the timekeeping element, but electrical power rather than a mechanical gear train to supply energy to the pendulum.
Mechanical escapements
Since 1658 when the introduction of the pendulum and balance spring made accurate timepieces possible, it has been estimated that more than three hundred different mechanical escapements have been devised, but only about 10 have seen widespread use. These are described below. In the 20th century, electric timekeeping methods replaced mechanical clocks and watches, so escapement design became a little-known curiosity.
Verge escapement
The earliest mechanical escapement, from the late 1200s, was the verge escapement, also known as the crown-wheel escapement. It was used in the first mechanical clocks and was originally controlled by a foliot, a horizontal bar with weights at either end. The escapement consists of an escape wheel shaped somewhat like a crown, with pointed teeth sticking axially out of the side, oriented horizontally. In front of the crown wheel is a vertical shaft, attached to the foliot at the top, which carries two metal plates (pallets) sticking out like flags from a flag pole, oriented about ninety degrees apart, so only one engages the crown wheel teeth at a time. As the wheel turns, one tooth pushes against the upper pallet, rotating the shaft and the attached foliot. As the tooth pushes past the upper pallet, the lower pallet swings into the path of the teeth on the other side of the wheel. A tooth catches on the lower pallet, rotating the shaft back the other way, and the cycle repeats. A disadvantage of the escapement was that each time a tooth landed on a pallet, the momentum of the foliot pushed the crown wheel backward a short distance before the force of the wheel reversed the motion. This is called "recoil" and was a source of wear and inaccuracy.
The verge was the only escapement used in clocks and watches for 350 years. In spring-driven clocks and watches, it required a fusee to even out the force of the mainspring. It was used in the first pendulum clocks for about 50 years after the pendulum clock was invented in 1656. In a pendulum clock, the crown wheel and staff were oriented so they were horizontal, and the pendulum was hung from the staff. However, the verge is the most inaccurate of the common escapements, and after the pendulum was introduced in the 1650s, the verge began to be replaced by other escapements, being abandoned only by the late 1800s. By this time, the fashion for thin watches had required that the escape wheel be made very small, amplifying the effects of wear, and when a watch of this period is wound up today, it will often be found to run very fast, gaining many hours per day.
Cross-beat escapement
Jost Bürgi invented the cross-beat escapement in 1584, a variation of the verge escapement which had two foliots that rotated in opposite directions. According to contemporary accounts, his clocks achieved remarkable accuracy of within a minute per day, two orders of magnitude better than other clocks of the time. However, this improvement was probably not due to the escapement itself, but rather to better workmanship and his invention of the remontoire, a device that isolated the escapement from changes in drive force. Without a balance spring, the crossbeat would have been no more isochronous than the verge.
Galileo's escapement
Galileo's escapement is a design for a clock escapement, invented around 1637 by Italian scientist Galileo Galilei (1564–1642). It was the earliest design of a pendulum clock. Since he was by then blind, Galileo described the device to his son, who drew a sketch of it. The son began construction of a prototype, but both he and Galileo died before it was completed.
Anchor escapement
Invented around 1657 by Robert Hooke, the anchor (see animation to the right) quickly superseded the verge to become the standard escapement used in pendulum clocks through to the 19th century. Its advantage was that it reduced the wide pendulum swing angles of the verge to 3–6°, making the pendulum nearly isochronous, and allowing the use of longer, slower-moving pendulums, which used less energy. The anchor is responsible for the long narrow shape of most pendulum clocks, and for the development of the grandfather clock, the first anchor clock to be sold commercially, which was invented around 1680 by William Clement, who disputed credit for the escapement with Hooke.
The anchor consists of an escape wheel with pointed, backward slanted teeth, and an "anchor"-shaped piece pivoted above it which rocks from side to side, linked to the pendulum. The anchor has slanted pallets on the arms which alternately catch on the teeth of the escape wheel, receiving impulses. Operation is mechanically similar to the verge escapement, and it has two of the verge's disadvantages: (1) The pendulum is constantly being pushed by an escape wheel tooth throughout its cycle, and is never allowed to swing freely, which disturbs its isochronism, and (2) it is a recoil escapement; the anchor pushes the escape wheel backward during part of its cycle. This causes backlash, increased wear in the clock's gears, and inaccuracy. These problems were eliminated in the deadbeat escapement, which slowly replaced the anchor in precision clocks.
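The escape wheel's tooth count fixes the pace of the whole train. A quick illustration (the one-second beat and 30-tooth wheel below are typical assumed figures for a long-case clock, not taken from the source): each beat releases one pallet and advances the wheel half a tooth pitch, so a 30-tooth wheel turns once per minute, which is why the seconds hand of many traditional long-case clocks sits directly on the escape-wheel arbor.

```python
# Escape-wheel arithmetic for an anchor escapement (illustrative figures).
beat_period_s = 1.0   # a "seconds" pendulum beats once per second
teeth = 30            # assumed escape-wheel tooth count

# Each beat advances the wheel half a tooth pitch, so a full revolution
# takes 2 * teeth beats.
seconds_per_revolution = 2 * teeth * beat_period_s
print(seconds_per_revolution)   # 60.0 -> one revolution per minute
```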
Deadbeat escapement
The Graham or deadbeat escapement was an improvement of the anchor escapement first made by Thomas Tompion to a design by Richard Towneley in 1675 although it is often credited to Tompion's successor George Graham who popularized it in 1715. In the anchor escapement the swing of the pendulum pushes the escape wheel backward during part of its cycle. This 'recoil' disturbs the motion of the pendulum, causing inaccuracy, and reverses the direction of the gear train, causing backlash and introducing high loads into the system, leading to friction and wear. The main advantage of the deadbeat is that it eliminated recoil.
In the deadbeat, the pallets have a second curved "locking" face on them, concentric about the pivot on which the anchor turns. During the extremities of the pendulum's swing, the escape wheel tooth rests against this locking face, providing no impulse to the pendulum, which prevents recoil. Near the bottom of the pendulum's swing, the tooth slides off the locking face onto the angled "impulse" face, giving the pendulum a push, before the pallet releases the tooth. The deadbeat was first used in precision regulator clocks, but because of its greater accuracy superseded the anchor in the 19th century. It is used in almost all modern pendulum clocks except for tower clocks which often use gravity escapements.
Pin wheel escapement
Invented around 1741 by Louis Amant, this version of a deadbeat escapement can be made quite rugged. Instead of using teeth, the escape wheel has round pins that are stopped and released by a scissors-like anchor. This escapement, which is also called Amant escapement or (in Germany) Mannhardt escapement, is used quite often in tower clocks.
Detent escapement
The detent or chronometer escapement was used in marine chronometers, although some precision watches during the 18th and 19th centuries also used it. It was considered the most accurate of the balance wheel escapements before the beginning of the 20th century, when lever escapement chronometers began to outperform them in competition. The early form was invented by Pierre Le Roy in 1748, who created a pivoted detent type of escapement, though this was theoretically deficient. The first effective design of detent escapement was invented by John Arnold around 1775, but with the detent pivoted. This escapement was modified by Thomas Earnshaw in 1780 and patented by Wright (for whom he worked) in 1783; however, as depicted in the patent it was unworkable. Arnold also designed a spring detent escapement but, with improved design, Earnshaw's version eventually prevailed as the basic idea underwent several minor modifications during the last decade of the 18th century. The final form appeared around 1800, and this design was used until mechanical chronometers became obsolete in the 1970s.
The detent is a detached escapement; it allows the balance wheel to swing undisturbed during most of its cycle, except the brief impulse period, which is only given once per cycle (every other swing). Because the driving escape wheel tooth moves almost parallel to the pallet, the escapement has little friction and does not need oiling. For these reasons among others, the detent was considered the most accurate escapement for balance wheel timepieces. John Arnold was the first to use the detent escapement with an overcoil balance spring (patented 1782), and with this improvement his watches were the first truly accurate pocket timekeepers, keeping time to within 1 or 2 seconds per day. These were produced from 1783 onwards.
However, the escapement had disadvantages that limited its use in watches: it was fragile and required skilled maintenance; it was not self-starting, so if the watch was jarred in use so the balance wheel stopped, it would not start up again; and it was harder to manufacture in volume. Therefore, the self-starting lever escapement became dominant in watches.
Cylinder escapement
The horizontal or cylinder escapement, invented by Thomas Tompion in 1695 and perfected by George Graham in 1726, was one of the escapements which replaced the verge escapement in pocketwatches after 1700. A major attraction was that it was much thinner than the verge, allowing watches to be made fashionably slim. Clockmakers found it suffered from excessive wear, so it was not much used during the 18th century, except in a few high-end watches with cylinders made from ruby. The French solved this problem by making the cylinder and escape wheel of hardened steel, and the escapement was used in large numbers in inexpensive French and Swiss pocketwatches and small clocks from the mid-19th to the 20th century.
Rather than pallets, the escapement uses a cutaway cylinder on the balance wheel shaft, which the escape teeth enter one by one. Each wedge-shaped tooth impulses the balance wheel by pressure on the cylinder edge as it enters, is held inside the cylinder as it turns, and impulses the wheel again as it leaves out the other side. The wheel usually had 15 teeth and impulsed the balance over an angle of 20° to 40° in each direction. It is a frictional rest escapement, with the teeth in contact with the cylinder over the whole balance wheel cycle, and so was not as accurate as "detached" escapements like the lever, and the high friction forces caused excessive wear and necessitated more frequent cleaning.
Duplex escapement
The duplex watch escapement was invented by Robert Hooke around 1700, improved by Jean Baptiste Dutertre and Pierre Le Roy, and put in final form by Thomas Tyrer, who patented it in 1782.
The early forms had two escape wheels. The duplex escapement was difficult to make but achieved much higher accuracy than the cylinder escapement, and could equal that of the (early) lever escapement and when carefully made was almost as good as a detent escapement.
It was used in quality English pocketwatches from about 1790 to 1860,
and in the Waterbury, a cheap American 'everyman's' watch, during 1880–1898.
In the duplex, as in the chronometer escapement to which it has similarities, the balance wheel only receives an impulse during one of the two swings in its cycle.
The escape wheel has two sets of teeth (hence the name 'duplex'); long locking teeth project from the side of the wheel, and short impulse teeth stick up axially from the top. The cycle starts with a locking tooth resting against the ruby disk. As the balance wheel swings counterclockwise through its center position, the notch in the ruby disk releases the tooth. As the escape wheel turns, the pallet is in just the right position to receive a push from an impulse tooth. Then the next locking tooth drops onto the ruby roller and stays there while the balance wheel completes its cycle and swings back clockwise (CW), and the process repeats. During the CW swing, the impulse tooth falls momentarily into the ruby roller notch again but is not released.
The duplex is technically a frictional rest escapement; the tooth resting against the roller adds some friction to the balance wheel during its swing but this is very minimal. As in the chronometer, there is little sliding friction during impulse since pallet and impulse tooth are moving almost parallel, so little lubrication is needed.
However, it lost favor to the lever; its tight tolerances and sensitivity to shock made duplex watches unsuitable for active people. Like the chronometer, it is not self-starting and is vulnerable to "setting;" if a sudden jar stops the balance during its CW swing, it cannot get started again.
Lever escapement
The lever escapement, invented by Thomas Mudge in 1750, has been used in the vast majority of watches since the 19th century. Its advantages are (1) it is a "detached" escapement; unlike the cylinder or duplex escapements the balance wheel is only in contact with the lever during the short impulse period when it swings through its centre position and swings freely the rest of its cycle, increasing accuracy, and (2) it is a self-starting escapement, so if the watch is shaken so that the balance wheel stops, it will automatically start again. The original form was the rack lever escapement, in which the lever and the balance wheel were always in contact via a gear rack on the lever. Later, it was realized that all the teeth from the gears could be removed except one, and this created the detached lever escapement. British watchmakers used the English detached lever, in which the lever was at right angles to the balance wheel. Later Swiss and American manufacturers used the inline lever, in which the lever is inline between the balance wheel and the escape wheel; this is the form used in modern watches. In 1798, Louis Perron invented an inexpensive, less accurate form called the pin-pallet escapement, which was used in cheap "dollar watches" in the early 20th century and is still used in cheap alarm clocks and kitchen timers.
Grasshopper escapement
A rare but interesting mechanical escapement is John Harrison's grasshopper escapement invented in 1722. In this escapement, the pendulum is driven by two hinged arms (pallets). As the pendulum swings, the end of one arm catches on the escape wheel and drives it slightly backwards; this releases the other arm which moves out of the way to allow the escape wheel to pass. When the pendulum swings back again, the other arm catches the wheel, pushes it back and releases the first arm, and so on. The grasshopper escapement has been used in very few clocks since Harrison's time. Grasshopper escapements made by Harrison in the 18th century are still operating. Most escapements wear far more quickly, and waste far more energy. However, like other early escapements, the grasshopper impulses the pendulum throughout its cycle; it is never allowed to swing freely, causing error due to variations in drive force, and 19th-century clockmakers found it uncompetitive with more detached escapements like the deadbeat. Nevertheless, with enough care in construction it is capable of accuracy. A modern experimental grasshopper clock, the Burgess Clock B, had a measured error of only ⅝ of a second during 100 running days. After two years of operation, it had an error of only ±0.5 sec, after barometric correction.
Gravity escapement
A gravity escapement uses a small weight or a weak spring to give an impulse directly to the pendulum. The earliest form consisted of two arms which were pivoted very close to the suspension spring of the pendulum with one arm on each side of the pendulum. Each arm carried a small deadbeat pallet with an angled plane leading to it. When the pendulum lifted one arm far enough, its pallet would release the escape wheel. Almost immediately, another tooth on the escape wheel would start to slide up the angle face on the other arm thereby lifting the arm. It would reach the pallet and stop. The other arm meanwhile was still in contact with the pendulum and coming down again to a point lower than it had started from. This lowering of the arm provides the impulse to the pendulum. The design was developed steadily from the middle of the 18th century to the middle of the 19th century. It eventually became the escapement of choice for turret clocks, because their wheel trains are subjected to large variations in drive force caused by the large exterior hands, with their varying wind, snow, and ice loads. Since in a gravity escapement, the drive force from the wheel train does not itself impel the pendulum but merely resets the weights that provide the impulse, the escapement is not affected by variations in drive force.
The 'Double Three-legged Gravity Escapement' shown here is a form of escapement first devised by a barrister named Bloxam and later improved by Lord Grimthorpe. It is the standard for all accurate 'Tower' clocks.
In the animation shown here, the two "gravity arms" are coloured blue and red. The two three-legged escape wheels are also coloured blue and red. They work in two parallel planes so that the blue wheel only impacts the locking block on the blue arm and the red wheel only impacts the red arm. In a real escapement, these impacts give rise to loud audible "ticks" and these are indicated by the appearance of a * beside the locking blocks. The three black lifting pins are key to the operation of the escapement. They cause the weighted gravity arms to be raised by an amount indicated by the pair of parallel lines on each side of the escapement. This gain in potential energy is the energy given to the pendulum on each cycle. For the Trinity College Cambridge Clock, a mass of around 50 grams is lifted through 3 mm every 1.5 seconds, which works out to about 1 mW of power. The driving power from the falling weight is about 12 mW, so there is a substantial excess of power used to drive the escapement. Much of this energy is dissipated in the acceleration and deceleration of the frictional "fly" attached to the escape wheels.
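The quoted power figure can be verified directly (a sketch assuming g = 9.81 m/s², not from the source):

```python
# Impulse power of the gravity escapement: ~50 g lifted through 3 mm
# every 1.5 s.
m, g, h, period = 0.050, 9.81, 0.003, 1.5   # kg, m/s^2, m, s
power_w = m * g * h / period
print(f"{power_w * 1e3:.2f} mW")            # ~0.98 mW, i.e. about 1 mW
```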
The great clock in Elizabeth Tower at Westminster that rings London's Big Ben uses a double three-legged gravity escapement.
Coaxial escapement
Invented around 1974 and patented 1980 by British watchmaker George Daniels, the coaxial escapement is one of the few new watch escapements adopted commercially in modern times.
It could be regarded as having its distant origins in the escapement invented by Robert Robin, c. 1792, which gives a single impulse in one direction, with locking achieved by passive lever pallets; the design of the coaxial escapement is more akin to that of another Robin variant, the Fasoldt escapement, which was invented and patented by the American Charles Fasoldt in 1859.
Both Robin and Fasoldt escapements give impulse in one direction only.
The latter escapement has a lever with unequal drops; this engages with two escape wheels of differing diameters. The smaller impulse wheel acts on the single pallet at the end of the lever, whilst the pointed lever pallets lock on the larger wheel.
The balance engages with and is impelled by the lever through a roller pin and lever fork. The lever 'anchor' pallet locks the larger wheel and, on this being unlocked, a pallet on the end of the lever is given an impulse by the smaller wheel through the lever fork. The return stroke is 'dead', with the 'anchor' pallets serving only to lock and unlock, impulse being given in one direction through the single lever pallet.
As with the duplex, the locking wheel is larger in order to reduce pressure and thus friction.
The Daniels escapement, however, achieves a double impulse with passive lever pallets serving only to lock and unlock the larger wheel. On one side, impulse is given by means of the smaller wheel acting on the lever pallet through the roller and impulse pin. On the return, the lever again unlocks the larger wheel, which gives an impulse directly onto an impulse roller on the balance staff.
The main advantage is that this enables both impulses to occur on or around the centre line, with disengaging friction in both directions.
This mode of impulse is in theory superior to that of the lever escapement, which has engaging friction on the entry pallet; this friction has long been recognized as a disturbing influence on the isochronism of the balance.
Purchasers no longer buy mechanical watches primarily for their accuracy, so manufacturers had little interest in investing in the required tooling; nevertheless, Omega finally adopted the escapement in 1999.
Other modern watch escapements
Since accuracy far greater than any mechanical watch is achievable with low-cost quartz watches, improved escapement designs are no longer motivated by practical timekeeping needs but as novelties in the high-end watch market. In an effort to attract publicity, in recent decades some high-end mechanical watchmakers have introduced new escapements. None of these have been adopted by any watchmakers beyond their original creator.
Based on patents initially submitted by Rolex on behalf of inventor Nicolas Déhon, the constant escapement was developed by Girard-Perregaux as working prototypes in 2008 (Nicolas Déhon was then head of Girard-Perregaux R&D department) and in watches by 2013.
The key component of this escapement is a silicon buckled-blade which stores elastic energy. This blade is flexed to a point close to its unstable state and is released with a snap each swing of the balance wheel to give the wheel an impulse, after which it is cocked again by the wheel train. The advantage claimed is that since the blade imparts the same amount of energy to the wheel each release, the balance wheel is isolated from variations in impulse force due to the wheel train and mainspring which cause inaccuracies in conventional escapements.
Parmigiani Fleurier with its Genequand escapement and Ulysse Nardin with its Ulysse Anchor escapement have taken advantage of the properties of silicon flat springs. The independent watchmaker, De Bethune, has developed a concept where a magnet makes a resonator vibrate at high frequency, replacing the traditional balance spring.
Electromechanical escapements
In the late 19th century, electromechanical escapements were developed for pendulum clocks. In these, a switch or phototube energised an electromagnet for a brief section of the pendulum's swing. On some clocks, the pulse of electricity that drove the pendulum also drove a plunger to move the gear train.
Hipp clock
In 1843, Matthäus Hipp first mentioned a clock driven by a switch, called an "échappement à palette". A variant of that escapement has been used from the 1860s inside electrically driven pendulum clocks, the so-called "hipp-toggle". Since the 1870s, in an improved version, the pendulum drove a ratchet wheel via a pawl on the pendulum rod, and the ratchet wheel drove the rest of the clock train to indicate the time. The pendulum was not impelled on every swing or even at a set interval of time. It was only impelled when its arc of swing had decayed below a certain level. As well as the counting pawl, the pendulum carried a small vane, known as a Hipp's toggle, pivoted at the top, which was completely free to swing. It was placed so that it dragged across a triangular polished block with a vee-groove in the top of it. When the arc of swing of the pendulum was large enough, the vane crossed the groove and swung free on the other side. If the arc was too small, the vane never left the far side of the groove, and when the pendulum swung back it pushed the block strongly downwards. The block carried a contact which completed the circuit to the electromagnet which impelled the pendulum. The pendulum was only impelled as required.
This type of clock was widely used as a master clock in large buildings to control numerous slave clocks. Most telephone exchanges used such a clock to control timed events such as were needed to control the setup and charging of telephone calls by issuing pulses of varying durations such as every second, six seconds and so on.
Synchronome switch
Designed in 1895 by Frank Hope-Jones, the Synchronome switch and gravity escapement were the basis for the majority of the company's clocks in the 20th century, and also for the slave pendulum in the Shortt-Synchronome free pendulum clock. A gathering arm attached to the pendulum advances a 15-tooth count wheel one position at a time, with a pawl preventing movement in the reverse direction. The wheel has a vane attached which, once per 30-second turn, releases the gravity arm. When the gravity arm falls it pushes against a pallet attached directly to the pendulum, giving it a push. Once the arm has fallen, it makes an electrical contact that energises an electromagnet to reset the gravity arm and acts as the half-minute impulse for the slave clocks.
Free pendulum clock
In the 20th century, the English horologist William Hamilton Shortt invented a free pendulum clock, patented in September 1921 and manufactured by the Synchronome Company, with an accuracy of one hundredth of a second per day. In this system the timekeeping "master" pendulum, whose rod is made from Invar, a special steel alloy with 36% nickel whose length changes very little with temperature, swings as free of external influence as possible, sealed in a vacuum chamber, and does no work. It is in mechanical contact with its escapement for only a fraction of a second every 30 seconds. A secondary "slave" pendulum turns a ratchet, which triggers an electromagnet slightly less than every thirty seconds. This electromagnet releases a gravity lever onto the escapement above the master pendulum. A fraction of a second later (but exactly every 30 seconds), the motion of the master pendulum releases the gravity lever to fall farther. In the process, the gravity lever gives a tiny impulse to the master pendulum, which keeps that pendulum swinging. The gravity lever falls onto a pair of contacts, completing a circuit that does several things:
energizes a second electromagnet to raise the gravity lever above the master pendulum to its top position,
sends a pulse to activate one or more clock dials, and
sends a pulse to a synchronizing mechanism that keeps the slave pendulum in step with the master pendulum.
Since it is the slave pendulum that releases the gravity lever, this synchronization is vital to the functioning of the clock. The synchronizing mechanism used a small spring attached to the shaft of the slave pendulum and an electromagnetic armature that would catch the spring if the slave pendulum was running slightly late, thus shortening the period of the slave pendulum for one swing. The slave pendulum was adjusted to run slightly slow, such that on approximately every other synchronization pulse the spring would be caught by the armature.
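This "hit and miss" logic lends itself to a toy simulation. The sketch below (all numbers are illustrative placeholders, not taken from the source) shows how a deliberately slow slave pendulum is kept within a few milliseconds of the master by occasionally shortening one swing:

```python
# Toy model of the Shortt hit-and-miss synchronizer.
slave_cycle = 30.0 + 0.002   # slave's half-minute cycle: deliberately slow (s)
correction  = 0.005          # time removed when the armature catches the spring (s)

master_clock = slave_clock = 0.0
for cycle in range(8):
    master_clock += 30.0
    slave_clock  += slave_cycle
    if slave_clock > master_clock:   # slave lags the master: "hit", shorten one swing
        slave_clock -= correction
    print(f"cycle {cycle}: offset = {slave_clock - master_clock:+.3f} s")
# The offset oscillates within a few milliseconds instead of growing without bound.
```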
This form of clock became a standard for use in observatories (roughly 100 such clocks were manufactured), and was the first clock capable of detecting small variations in the speed of Earth's rotation.
See also
Escapement (radio control)
References
Notes
Further reading
External links
Mark Headrick's horology page, with animated pictures of many escapements
Performance Of The Daniels Coaxial Escapement, Horological Journal, August 2004
Watch and Clock Escapements, The Keystone (magazine), 1904, via Project Gutenberg: "A Complete Study in Theory and Practice of the Lever, Cylinder and Chronometer Escapements, Together with a Brief Account of the Origin and Evolution of the Escapement in Horology."
US Patent number 5140565, issued 23 March 1992, for a cycloidal pendulum similar to that of Huygens
findarticles.com: Obituary of Professor Edward Hall, The Independent (London), 16 August 2001
American Watchmakers-Clockmakers Institute, non-profit trade association
Federation of the Swiss Watch Industry FH, watch industry trade association
Method for transmitting bursts of mechanical energy from a power source to an oscillating
Alternative Escapements, Europa Star, September 2014
Evolution of the escapement, Monochrome-watches, Xavier Markl, February 2016
Ancient Greek technology
Ancient inventions
Chinese inventions
English inventions
Greek inventions
Hellenistic engineering
Mechanical power control
Timekeeping components | Escapement | [
"Physics",
"Technology"
] | 8,755 | [
"Timekeeping components",
"Mechanics",
"Mechanical power control",
"Components"
] |
4,210,737 | https://en.wikipedia.org/wiki/LIGA | LIGA is a fabrication technology used to create high-aspect-ratio microstructures. The term is a German acronym for – lithography, electroplating, and molding.
Overview
LIGA consists of three main processing steps: lithography, electroplating, and molding. There are two main LIGA-fabrication technologies: X-Ray LIGA, which uses X-rays produced by a synchrotron to create high-aspect-ratio structures, and UV LIGA, a more accessible method which uses ultraviolet light to create structures with relatively low aspect ratios.
Notable characteristics of X-ray LIGA-fabricated structures include:
high aspect ratios on the order of 100:1
parallel side walls with a flank angle on the order of 89.95°
smooth side walls, with surface roughness low enough for optical mirrors
structural heights from tens of micrometers to several millimeters
structural details on the order of micrometers over distances of centimeters
X-Ray LIGA
X-Ray LIGA is a fabrication process in microtechnology that was developed in the early 1980s by a team under the leadership of Erwin Willy Becker and Wolfgang Ehrfeld at the Institute for Nuclear Process Engineering (Institut für Kernverfahrenstechnik, IKVT) at the Karlsruhe Nuclear Research Center, since renamed to the Institute for Microstructure Technology (Institut für Mikrostrukturtechnik, IMT) at the Karlsruhe Institute of Technology (KIT). LIGA was one of the first major techniques to allow on-demand manufacturing of high-aspect-ratio structures (structures that are much taller than wide) with lateral precision below one micrometer.
In the process, an X-ray sensitive polymer photoresist, typically PMMA, bonded to an electrically conductive substrate, is exposed to parallel beams of high-energy X-rays from a synchrotron radiation source through a mask partly covered with a strong X-ray absorbing material. Chemical removal of exposed (or unexposed) photoresist results in a three-dimensional structure, which can be filled by the electrodeposition of metal. The resist is chemically stripped away to produce a metallic mold insert. The mold insert can be used to produce parts in polymers or ceramics through injection molding.
The LIGA technique's unique value is the precision obtained by the use of deep X-ray lithography (DXRL). The technique enables microstructures with high aspect ratios and high precision to be fabricated in a variety of materials (metals, plastics, and ceramics). Many of its practitioners and users are associated with, or are located close to, synchrotron facilities.
UV LIGA
UV LIGA utilizes an inexpensive ultraviolet light source, like a mercury lamp, to expose a polymer photoresist, typically SU-8. Because heating and transmittance are not an issue in optical masks, a simple chromium mask can be substituted for the technically sophisticated X-ray mask. These reductions in complexity make UV LIGA much cheaper and more accessible than its X-ray counterpart. However, UV LIGA is not as effective at producing precision molds and is thus used when cost must be kept low and very high aspect ratios are not required.
Process details
Mask
X-ray masks are composed of a transparent low-Z carrier, a patterned high-Z absorber, and a metallic ring for alignment and heat removal. Due to extreme temperature variations induced by the X-ray exposure, carriers are fabricated from materials with high thermal conductivity to reduce thermal gradients. Currently, vitreous carbon and graphite are considered the best materials, as their use significantly reduces side-wall roughness. Silicon, silicon nitride, titanium, and diamond are also used as carrier substrates but are not preferred, as the required thin membranes are comparatively fragile and titanium masks tend to round sharp features due to edge fluorescence. Absorbers are made of gold, nickel, copper, tin, lead, and other X-ray-absorbing metals.
Masks can be fabricated in several fashions. The most accurate and expensive masks are those created by electron-beam lithography, which provides the finest resolution even in thick resist. An intermediate method is the plated photomask, which can be outsourced at a cost on the order of $1000 per mask. The least expensive method is a direct photomask, which offers the coarsest resolution. In summary, masks can cost between $1000 and $20,000 and take between two weeks and three months for delivery. Due to the small size of the market, each LIGA group typically has its own mask-making capability. Future trends in mask creation include larger formats and smaller feature sizes.
Substrate
The starting material is a flat substrate, such as a silicon wafer or a polished disc of beryllium, copper, titanium, or other material. The substrate, if not already electrically conductive, is covered with a conductive plating base, typically through sputtering or evaporation.
The fabrication of high-aspect-ratio structures requires the use of a photoresist able to form a mold with vertical sidewalls; thus, the photoresist must have a high selectivity and be relatively free from stress when applied in thick layers. The typical choice, poly(methyl methacrylate) (PMMA), is applied to the substrate by a glue-down process in which a precast, high-molecular-weight sheet of PMMA is attached to the plating base on the substrate. The applied photoresist is then milled down to the precise height by a fly cutter prior to pattern transfer by X-ray exposure. Because the layer must be relatively free from stress, this glue-down process is preferred over alternative methods such as casting. Further, the cutting of the PMMA sheet by the fly cutter requires specific operating conditions and tools to avoid introducing any stress and crazing of the photoresist.
Exposure
A key enabling technology of LIGA is the synchrotron, capable of emitting high-power, highly collimated X-rays. This high collimation permits relatively large distances between the mask and the substrate without the penumbral blurring that occurs from other X-ray sources. In the electron storage ring or synchrotron, a magnetic field constrains electrons to follow a circular path, and the radial acceleration of the electrons causes electromagnetic radiation to be emitted forward. The radiation is thus strongly collimated in the forward direction and can be assumed to be parallel for lithographic purposes. Because of the much higher flux of usable collimated X-rays, shorter exposure times become possible. Photon energies for a LIGA exposure are distributed from approximately 2.5 keV upward.
Unlike optical lithography, there are multiple exposure limits, identified as the top dose, bottom dose, and critical dose, whose values must be determined experimentally for a proper exposure. The exposure must be sufficient to meet the requirements of the bottom dose, the exposure under which a photoresist residue will remain, and the top dose, the exposure over which the photoresist will foam. The critical dose is the exposure at which unexposed resist begins to be attacked. Due to the insensitivity of PMMA, a typical exposure time for thick PMMA is six hours. During exposure, secondary radiation effects such as Fresnel diffraction, mask and substrate fluorescence, and the generation of Auger electrons and photoelectrons can lead to overexposure.
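The interplay between these dose limits and resist thickness can be sketched with a simple Beer–Lambert attenuation model. All numbers below (attenuation length, dose limits, thickness) are illustrative placeholders, not measured values from the source:

```python
# Hedged sketch: choosing a surface dose so that the base of a thick PMMA
# resist reaches the "bottom dose" while the surface stays below the
# "top dose" at which the resist foams.
import math

attenuation_length_um = 500.0   # hypothetical 1/e depth of the X-ray spectrum in PMMA
resist_thickness_um   = 500.0
bottom_dose_kj_cm3    = 4.0     # hypothetical minimum dose for clean development
top_dose_kj_cm3       = 20.0    # hypothetical maximum dose before foaming

# Beer-Lambert: dose(z) = surface_dose * exp(-z / attenuation_length)
depth_factor = math.exp(-resist_thickness_um / attenuation_length_um)

# Smallest surface dose that still meets the bottom-dose requirement.
surface_dose = bottom_dose_kj_cm3 / depth_factor
print(f"required surface dose: {surface_dose:.1f} kJ/cm^3")
print("within top-dose limit:", surface_dose <= top_dose_kj_cm3)
```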
During exposure, the X-ray mask and the mask holder are heated directly by X-ray absorption and cooled by forced convection from nitrogen jets. Temperature rise in PMMA resist is mainly from heat conducted from the substrate backward into the resist and from the mask plate through the inner cavity air forward to the resist, with X-ray absorption being tertiary. Thermal effects include chemistry variations due to resist heating and geometry-dependent mask deformation.
Development
For high-aspect-ratio structures, the resist-developer system is required to have a ratio of dissolution rates in the exposed and unexposed areas of 1000:1. The standard, empirically optimized developer is a mixture of tetrahydro-1,4-oxazine (20%), 2-aminoethanol-1 (5%), 2-(2-butoxyethoxy)ethanol (60%), and water (15%). This developer provides the required ratio of dissolution rates and reduces stress-related cracking from swelling in comparison to conventional PMMA developers. After development, the substrate is rinsed with deionized water and dried either in a vacuum or by spinning. At this stage, the PMMA structures can be released as the final product (e.g., optical components) or can be used as molds for subsequent metal deposition.
Electroplating
In the electroplating step, nickel, copper, or gold is plated upward from the metalized substrate into the voids left by the removed photoresist. Plating takes place in an electrolytic cell, in which the current density, temperature, and solution composition are carefully controlled to ensure proper deposition. In the case of nickel deposition from NiCl2 in a KCl solution, Ni is deposited on the cathode (metalized substrate) and Cl2 evolves at the anode. Difficulties associated with plating into PMMA molds include voids, where hydrogen bubbles nucleate on contaminants; chemical incompatibility, where the plating solution attacks the photoresist; and mechanical incompatibility, where film stress causes the plated layer to lose adhesion. These difficulties can be overcome through the empirical optimization of the plating chemistry and environment for a given layout.
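As a worked illustration of how current density sets the deposition rate, Faraday's law gives a quick estimate for nickel (the current density and efficiency below are assumed textbook-style figures, not values from the source):

```python
# Estimate nickel plating rate from current density via Faraday's law.
F = 96485.0          # C/mol, Faraday constant
M_NI = 58.69         # g/mol, molar mass of nickel
Z = 2                # electrons per Ni(2+) ion reduced
RHO = 8.90           # g/cm^3, density of nickel

j = 0.010            # A/cm^2, assumed current density (1 A/dm^2)
efficiency = 0.95    # assumed cathodic current efficiency

# Deposition rate in cm/s, then converted to micrometres per hour.
rate_cm_s = efficiency * j * M_NI / (Z * F * RHO)
print(f"{rate_cm_s * 1e4 * 3600:.1f} um/hour")   # ~12 um/hour
```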
Stripping
After exposure, development, and electroplating, the resist is stripped. One method for removing the remaining PMMA is to flood-expose the substrate and use the developing solution to cleanly remove the resist. Alternatively, chemical solvents can be used. Stripping of a thick resist chemically is a lengthy process, taking two to three hours in acetone at room temperature. In multilayer structures, it is common practice to protect metal layers against corrosion by backfilling the structure with a polymer-based encapsulant. At this stage, metal structures can be left on the substrate (e.g., microwave circuitry) or released as the final product (e.g., gears).
Replication
After stripping, the released metallic components can be used for mass replication through standard means of replication such as stamping or injection molding.
Commercialization
In the 1990s, LIGA was a cutting-edge MEMS fabrication technology, resulting in the design of components showcasing the technique's unique versatility. Several companies that began using the LIGA process later changed their business model (e.g., Steag microParts becoming Boehringer Ingelheim microParts, Mezzo Technologies). Currently, only two companies, HTmicro and microworks, continue their work in LIGA, benefiting from limitations of other competing fabrication technologies. UV LIGA, due to its lower production cost, is employed more broadly by several companies, such as Veco, Tecan, Temicon, and Mimotec in Switzerland, who supply the Swiss watch market with metal parts made of nickel and nickel-phosphorus.
Gallery
Below is a gallery of LIGA-fabricated structures arranged by date.
Notes
See also
Photolithography
X-ray lithography
Electroplating
Molding
Synchrotron
PMMA
SU-8 photoresist
Enriched Uranium — Aerodynamic Processes
References
External links
LiMiNT - LIGA process from Singapore Synchrotron Light Source
LIGA process Karlsruhe Institute of Technology, Institute of Microstructure Technology
Illustrated LIGA-process by Arndt Last
Materials science
Microtechnology
Lithography (microfabrication) | LIGA | [
"Physics",
"Materials_science",
"Engineering"
] | 2,428 | [
"Applied and interdisciplinary physics",
"Microtechnology",
"Materials science",
"nan",
"Nanotechnology",
"Lithography (microfabrication)"
] |
4,211,219 | https://en.wikipedia.org/wiki/Kadomtsev%E2%80%93Petviashvili%20equation | In mathematics and physics, the Kadomtsev–Petviashvili equation (often abbreviated as KP equation) is a partial differential equation to describe nonlinear wave motion. Named after Boris Borisovich Kadomtsev and Vladimir Iosifovich Petviashvili, the KP equation is usually written as
$$\partial_x\left(\partial_t u + u\,\partial_x u + \epsilon^2\,\partial_{xxx} u\right) + \lambda\,\partial_{yy} u = 0,$$

where $\lambda = \pm 1$. The above form shows that the KP equation is a generalization to two spatial dimensions, x and y, of the one-dimensional Korteweg–de Vries (KdV) equation. To be physically meaningful, the wave-propagation direction has to be not too far from the x direction, i.e. with only slow variations of solutions in the y direction.
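As a quick sanity check, a y-independent KdV soliton should satisfy the KP equation above, since its $\partial_{yy}$ term vanishes. A minimal SymPy sketch (not from the source; the sech² profile is the standard one-soliton solution in this normalization):

```python
# Verify that u = 3c*sech^2(sqrt(c)/(2*eps)*(x - c*t)) satisfies
#   d/dx( u_t + u*u_x + eps^2*u_xxx ) + lambda*u_yy = 0.
# Since u is independent of y, u_yy = 0, so the KdV part must vanish.
import sympy as sp

x, y, t = sp.symbols('x y t')
c, eps = sp.symbols('c epsilon', positive=True)
lam = sp.Symbol('lambda')

u = 3*c*sp.sech(sp.sqrt(c)/(2*eps)*(x - c*t))**2

kdv_part = sp.diff(u, t) + u*sp.diff(u, x) + eps**2*sp.diff(u, x, 3)
kp = sp.diff(kdv_part, x) + lam*sp.diff(u, y, 2)

print(sp.simplify(kp.rewrite(sp.exp)))  # expected output: 0
```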
Like the KdV equation, the KP equation is completely integrable. It can also be solved using the inverse scattering transform much like the nonlinear Schrödinger equation.
In 2002, the regularized version of the KP equation, naturally referred to as the Benjamin–Bona–Mahony–Kadomtsev–Petviashvili equation (or simply the BBM-KP equation), was introduced as an alternative model for small amplitude long waves in shallow water moving mainly in the x direction in 2+1 space.
$$\partial_x\left(\partial_t u + \partial_x u + u\,\partial_x u - \partial_{xxt} u\right) + \lambda\,\partial_{yy} u = 0,$$

where $\lambda = \pm 1$. The BBM-KP equation provides an alternative to the usual KP equation, in a similar way that the Benjamin–Bona–Mahony equation is related to the classical Korteweg–de Vries equation, as the linearized dispersion relation of the BBM-KP is a good approximation to that of the KP but does not exhibit the unwanted limiting behavior as the Fourier variable dual to x approaches infinity. The BBM-KP equation can be viewed as a weak transverse perturbation of the Benjamin–Bona–Mahony equation. As a result, the solutions of their corresponding Cauchy problems share an intriguing and complex mathematical relationship. Aguilar et al. proved that the solution of the Cauchy problem for the BBM-KP model equation converges to the solution of the Cauchy problem associated with the Benjamin–Bona–Mahony equation in the $L^2$-based Sobolev spaces, provided their corresponding initial data are suitably close as the transverse variable tends to infinity.
History
The KP equation was first written in 1970 by Soviet physicists Boris B. Kadomtsev (1928–1998) and Vladimir I. Petviashvili (1936–1993); it came as a natural generalization of the KdV equation (derived by Korteweg and De Vries in 1895). Whereas in the KdV equation waves are strictly one-dimensional, in the KP equation this restriction is relaxed. Still, both in the KdV and the KP equation, waves have to travel in the positive x-direction.
Connections to physics
The KP equation can be used to model water waves of long wavelength with weakly non-linear restoring forces and frequency dispersion. If surface tension is weak compared to gravitational forces, $\lambda = +1$ is used; if surface tension is strong, then $\lambda = -1$. Because of the asymmetry in the way x- and y-terms enter the equation, the waves described by the KP equation behave differently in the direction of propagation (the x direction) and the transverse (y) direction; oscillations in the y direction tend to be smoother (of smaller deviation).
The KP equation can also be used to model waves in ferromagnetic media, as well as two-dimensional matter–wave pulses in Bose–Einstein condensates.
Limiting behavior
For $\epsilon \ll 1$, typical x-dependent oscillations have a wavelength of $O(1/\epsilon)$, giving a singular limiting regime as $\epsilon \rightarrow 0$. The limit $\epsilon \rightarrow 0$ is called the dispersionless limit.
If we also assume that the solutions are independent of y as $\epsilon \rightarrow 0$, then they also satisfy the inviscid Burgers' equation:

$$\partial_t u + u\,\partial_x u = 0.$$
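For reference, this equation can be solved implicitly by the method of characteristics (a standard result, not specific to this article): $u$ is constant along each characteristic, so

$$u(x,t) = u_0(\xi), \qquad x = \xi + u_0(\xi)\,t,$$

where $u_0$ is the initial profile. Characteristics cross in finite time wherever $u_0' < 0$, the gradient catastrophe that makes the dispersionless limit singular.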
Suppose the amplitude of oscillations of a solution is asymptotically small, $O(\epsilon)$, in the dispersionless limit. Then the amplitude satisfies a mean-field equation of Davey–Stewartson type.
See also
Novikov–Veselov equation
Schottky problem
Dispersionless KP equation
References
Further reading
External links
Partial differential equations
Exactly solvable models
Integrable systems
Solitons
Equations of fluid dynamics | Kadomtsev–Petviashvili equation | [
"Physics",
"Chemistry"
] | 893 | [
"Equations of fluid dynamics",
"Equations of physics",
"Integrable systems",
"Theoretical physics",
"Fluid dynamics"
] |
4,211,531 | https://en.wikipedia.org/wiki/Zero-energy%20building | A Zero-Energy Building (ZEB), also known as a Net Zero-Energy (NZE) building, is a building with net zero energy consumption, meaning the total amount of energy used by the building on an annual basis is equal to the amount of renewable energy created on the site or in other definitions by renewable energy sources offsite, using technology such as heat pumps, high efficiency windows and insulation, and solar panels.
The goal is that these buildings contribute less overall greenhouse gas to the atmosphere during operation than similar non-ZNE buildings. They do at times consume non-renewable energy and produce greenhouse gases, but at other times reduce energy consumption and greenhouse gas production elsewhere by the same amount. The development of zero-energy buildings is encouraged by the desire to have less of an impact on the environment, and their expansion is encouraged by tax breaks and savings on energy costs which make zero-energy buildings financially viable.
Terminology tends to vary between countries, agencies, cities, towns, and reports, so a general knowledge of this concept and its various uses is essential for a versatile understanding of clean energy and renewables. The International Energy Agency (IEA) and European Union (EU) most commonly use "Net Zero Energy", with the term "zero net" being mainly used in the US. A similar concept approved and implemented by the European Union and other agreeing countries is nearly Zero Energy Building (nZEB), with the goal of having all new buildings in the region under nZEB standards by 2020.
Overview
Typical code-compliant buildings consume 40% of the total fossil fuel energy in the US and European Union and are significant contributors of greenhouse gases. To combat such high energy usage, more and more buildings are starting to implement the carbon neutrality principle, which is viewed as a means to reduce carbon emissions and reduce dependence on fossil fuels. Although zero-energy buildings remain limited, even in developed countries, they are gaining importance and popularity.
Most zero-energy buildings use the electrical grid for energy storage, but some are independent of the grid and some include energy storage onsite. Buildings that produce a surplus of energy over the year are called "energy-plus buildings", while buildings with very low but non-zero net consumption are called "low energy houses". These buildings produce energy onsite using renewable technology like solar and wind, while reducing the overall use of energy with highly efficient lighting and heating, ventilation, and air conditioning (HVAC) technologies. The zero-energy goal is becoming more practical as the costs of alternative energy technologies decrease and the costs of traditional fossil fuels increase.
The development of modern zero-energy buildings became possible largely through the progress made in new energy and construction technologies and techniques. These include highly insulating spray-foam insulation, high-efficiency solar panels, high-efficiency heat pumps and highly insulating, low emissivity, triple and quadruple-glazed windows. These innovations have also been significantly improved by academic research, which collects precise energy performance data on traditional and experimental buildings and provides performance parameters for advanced computer models to predict the efficacy of engineering designs.
Zero-energy buildings can be part of a smart grid. Some advantages of these buildings are as follows:
Integration of renewable energy resources
Integration of plug-in electric vehicles – called vehicle-to-grid
Implementation of zero-energy concepts
Although the net zero concept is applicable to a wide range of resources, water and waste, energy is usually the first resource to be targeted because:
Energy, particularly electricity and heating fuel like natural gas or heating oil, is expensive. Hence reducing energy use can save the building owner money. In contrast, water and waste are inexpensive for the individual building owner.
Energy, particularly electricity and heating fuel, has a high carbon footprint. Hence reducing energy use is a major way to reduce the building's carbon footprint.
There are well-established means to significantly reduce the energy use and carbon footprint of buildings. These include: adding insulation, using heat pumps instead of furnaces, using low emissivity, triple or quadruple-glazed windows and adding solar panels to the roof.
In some countries, there are government-sponsored subsidies and tax breaks for installing heat pumps, solar panels, triple or quadruple-glazed windows and insulation that greatly reduce the cost of getting to a net-zero energy building for the building owner.
Optimizing zero-energy building for climate impact
The introduction of zero-energy buildings makes buildings more energy efficient and reduces the rate of carbon emissions once the building is in operation; however, there is still a lot of pollution associated with a building's embodied carbon. Embodied carbon is the carbon emitted in the making and transportation of a building's materials and construction of the structure itself; it is responsible for 11% of global GHG emissions and 28% of global building sector emissions. The importance of embodied carbon will grow as it will begin to account for the greater portion of a building's carbon emissions. In some newer, energy efficient buildings, embodied carbon has risen to 47% of the building's lifetime emissions. Focusing on embodied carbon is part of optimizing construction for climate impact and zero carbon emissions requires slightly different considerations from optimizing only for energy efficiency.
A 2019 study found that between 2020 and 2030, reducing upfront carbon emissions and switching to clean or renewable energy is more important than increasing building efficiency because "building a highly energy efficient structure can actually produce more greenhouse gas than a basic code compliant one if carbon-intensive materials are used." The study stated that because "Net-zero energy codes will not significantly reduce emissions in time, policy makers and regulators must aim for true net zero carbon buildings, not net zero energy buildings."
One way to reduce embodied carbon is to use low-carbon construction materials such as straw, wood, linoleum, or cedar. For materials like concrete and steel, options to reduce embodied emissions do exist; however, these are unlikely to be available at large scale in the short term. One analysis determined that the optimal design point for greenhouse-gas reduction appeared to be four-story multifamily buildings of low-carbon materials, such as those listed above, which could serve as a template for low-carbon structures.
Definitions
Despite sharing the name "zero net energy", there are several definitions of what the term means in practice, with a particular difference in usage between North America and Europe.
Zero net site energy use In this type of ZNE, the amount of energy provided by on-site renewable energy sources is equal to the amount of energy used by the building. In the United States, "zero net energy building" generally refers to this type of building.
Zero net source energy use This ZNE generates the same amount of energy as is used, including the energy used to transport the energy to the building. This type accounts for energy losses during electricity generation and transmission. These ZNEs must generate more electricity than zero net site energy buildings.
Net zero energy emissions Outside the United States and Canada, a ZEB is generally defined as one with zero net energy emissions, also known as a zero carbon building (ZCB) or zero emissions building (ZEB). Under this definition the carbon emissions generated from on-site or off-site fossil fuel use are balanced by the amount of on-site renewable energy production. Other definitions include not only the carbon emissions generated by the building in use, but also those generated in the construction of the building and the embodied energy of the structure. Others debate whether the carbon emissions of commuting to and from the building should also be included in the calculation. Recent work in New Zealand has initiated an approach to include building user transport energy within zero energy building frameworks.
Net zero cost In this type of building, the cost of purchasing energy is balanced by income from sales of electricity to the grid of electricity generated on-site. Such a status depends on how a utility credits net electricity generation and the utility rate structure the building uses.
Net off-site zero energy use A building may be considered a ZEB if 100% of the energy it purchases comes from renewable energy sources, even if the energy is generated off the site.
Off-the-grid Off-the-grid buildings are stand-alone ZEBs that are not connected to an off-site energy utility facility. They require distributed renewable energy generation and energy storage capability (for when the sun is not shining, wind is not blowing, etc.). An energy autarkic house is a building concept where the balance of its own energy consumption and production can be made on an hourly or even finer basis. Energy autarkic houses can be taken off-the-grid.
Net Zero Energy Building Based on scientific analysis within the joint research program "Towards Net Zero Energy Solar Buildings", a methodological framework was set up which allows different definitions, in accordance with each country's political targets, specific (climate) conditions and correspondingly formulated requirements for indoor conditions. The overall conceptual understanding of a Net ZEB is an energy-efficient, grid-connected building enabled to generate energy from renewable sources to compensate for its own energy demand (see figure 1). The wording "Net" emphasizes the energy exchange between the building and the energy infrastructure. Through the building-grid interaction, Net ZEBs become an active part of the renewable energy infrastructure. This connection to energy grids avoids the seasonal energy storage and oversized on-site generation systems needed in energy-autonomous buildings. The similarity of both concepts is a pathway of two actions: 1) reduce energy demand by means of energy-efficiency measures and passive energy use; 2) generate energy from renewable sources. However, the grid interaction of Net ZEBs, and plans to widely increase their numbers, evoke considerations of increased flexibility in the shifting of energy loads and reduced peak demands.
Positive Energy District Expanding some of the principles of zero-energy buildings to a city district level, Positive Energy Districts (PED) are districts or other urban areas that produce at least as much energy on an annual basis as they consume. The impetus to develop whole positive energy districts instead of single buildings is based on the possibility of sharing resources, managing energy systems efficiently across many buildings, and reaching economies of scale.
Within this balancing procedure several aspects and explicit choices have to be determined:
The building system boundary is split into a physical boundary which determines which renewable resources are considered (e.g. in buildings footprint, on-site or even off-site) respectively how many buildings are included in the balance (single building, cluster of buildings) and a balance boundary which determines the included energy uses (e.g. heating, cooling, ventilation, hot water, lighting, appliances, IT, central services, electric vehicles, and embodied energy, etc.). It should be noticed that renewable energy supply options can be prioritized (e.g. by transportation or conversion effort, availability over the lifetime of the building or replication potential for future, etc.) and therefore create a hierarchy. It may be argued that resources within the building footprint or on-site should be given priority over off-site supply options.
The weighting system converts the physical units of different energy carriers into a uniform metric (site/final energy, source/primary energy renewable parts included or not, energy cost, equivalent carbon emissions and even energy or environmental credits) and allows their comparison and compensation among each other in one single balance (e.g. exported PV electricity can compensate for imported biomass). Politically influenced and therefore possibly asymmetrically or time-dependent conversion/weighting factors can affect the relative value of energy carriers and can influence the required energy generation capacity.
The balancing period is often assumed to be one year (suitable to cover all operation energy uses). A shorter period (monthly or seasonal) could also be considered as well as a balance over the entire life cycle (including embodied energy, which could also be annualized and counted in addition to operational energy uses).
The energy balance can be done in two balance types: 1) Balance of delivered/imported and exported energy (monitoring phase as self-consumption of energy generated on-site can be included); 2) Balance between (weighted) energy demand and (weighted) energy generation (for design phase as normal end users temporal consumption patterns -e.g. for lighting, appliances, etc.- are lacking). Alternatively, a balance based on monthly net values in which only residuals per month are summed up to an annual balance is imaginable. This can be seen either as a load/generation balance or as a special case of import/export balance where a "virtual monthly self-consumption" is assumed (see figure 2).
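A small sketch makes the weighted import/export balance concrete. The primary-energy weighting factors and annual quantities below are hypothetical placeholders (actual factors are set politically and vary by country and energy carrier):

```python
# Illustrative weighted import/export balance for a candidate Net ZEB.
WEIGHTS = {                 # kWh primary energy per kWh delivered (assumed)
    "grid_electricity": 2.3,
    "natural_gas": 1.1,
    "pv_export": 2.3,       # exported PV credited at the grid-electricity factor
}

annual_import = {"grid_electricity": 3200.0, "natural_gas": 1500.0}  # kWh/a
annual_export = {"pv_export": 7800.0}                                # kWh/a

weighted_import = sum(WEIGHTS[k] * v for k, v in annual_import.items())
weighted_export = sum(WEIGHTS[k] * v for k, v in annual_export.items())

balance = weighted_export - weighted_import
print(f"weighted balance: {balance:+.0f} kWh/a primary energy")
print("Net ZEB achieved" if balance >= 0 else "Not yet net zero")
```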
Besides the energy balance, the Net ZEBs can be characterized by their ability to match the building's load by its energy generation (load matching) or to work beneficially with respect to the needs of the local grid infrastructure (grid interaction). Both can be expressed by suitable indicators which are intended as assessment tools only.
Design and construction
The most cost-effective steps toward a reduction in a building's energy consumption usually occur during the design process. To achieve efficient energy use, zero energy design departs significantly from conventional construction practice. Successful zero energy building designers typically combine time-tested passive solar, or artificial conditioning, principles that work with the on-site assets. Sunlight and solar heat, prevailing breezes, and the cool of the earth below a building can provide daylighting and stable indoor temperatures with minimum mechanical means. ZEBs are normally optimized to use passive solar heat gain and shading, combined with thermal mass to stabilize diurnal temperature variations throughout the day, and in most climates are superinsulated. All the technologies needed to create zero energy buildings are available off-the-shelf today.
Sophisticated 3-D building energy simulation tools are available to model how a building will perform with a range of design variables such as building orientation (relative to the daily and seasonal position of the sun), window and door type and placement, overhang depth, insulation type and values of the building elements, air tightness (weatherization), the efficiency of heating, cooling, lighting, and other equipment, as well as local climate. These simulations help the designers predict how the building will perform before it is built, and enable them to model the economic and financial implications on building cost benefit analysis, or even more appropriate – life-cycle assessment.
Zero-energy buildings are built with significant energy-saving features. Heating and cooling loads are lowered by using high-efficiency equipment (such as heat pumps, which are about four times as efficient as furnaces), added insulation (especially in the attic and basement of houses), high-efficiency windows (such as low-emissivity, triple-glazed windows), draft-proofing, high-efficiency appliances (particularly modern high-efficiency refrigerators), high-efficiency LED lighting, passive solar gain in winter and passive shading in summer, natural ventilation, and other techniques. These features vary depending on the climate zone in which the construction occurs. Water heating loads can be lowered by using water-conservation fixtures, heat recovery units on waste water, solar water heating, and high-efficiency water heating equipment. In addition, daylighting with skylights or solar tubes can provide 100% of daytime illumination within the home. Nighttime illumination is typically provided by fluorescent and LED lighting that uses one-third or less of the power of incandescent lights, without adding unwanted heat. Miscellaneous electric loads can be lessened by choosing efficient appliances and minimizing phantom loads, or standby power. Other techniques to reach net zero (depending on climate) are earth-sheltered building principles, superinsulated walls using straw-bale construction, prefabricated building panels and roof elements, plus exterior landscaping for seasonal shading.
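The heat-pump figure cited above translates into a simple energy comparison; the heat demand, furnace efficiency, and coefficient of performance below are assumed values for illustration.

    # Furnace vs heat pump input energy for the same delivered heat (assumed values).
    heat_demand_kwh = 10000.0     # annual space-heating demand, assumed
    furnace_eff = 0.95            # assumed condensing-furnace efficiency
    heat_pump_cop = 4.0           # "about four times as efficient", per the text

    furnace_input = heat_demand_kwh / furnace_eff      # fuel input, kWh
    heat_pump_input = heat_demand_kwh / heat_pump_cop  # electricity input, kWh
    print(f"furnace: {furnace_input:.0f} kWh, heat pump: {heat_pump_input:.0f} kWh")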
Once the energy use of the building has been minimized, it can be possible to generate all of that energy on site using roof-mounted solar panels.
Zero-energy buildings are often designed to make dual use of energy including that from white goods. For example, using refrigerator exhaust to heat domestic water, ventilation air and shower drain heat exchangers, office machines and computer servers, and body heat to heat the building. These buildings make use of heat energy that conventional buildings may exhaust outside. They may use heat recovery ventilation, hot water heat recycling, combined heat and power, and absorption chiller units.
Energy harvest
ZEBs harvest available energy to meet their electricity and heating or cooling needs. By far the most common way to harvest energy is to use roof-mounted solar photovoltaic panels that turn the sun's light into electricity. Energy can also be harvested with solar thermal collectors, which use the sun's heat to provide hot water for the building. Heat pumps can likewise draw heat and cold from the air (air-sourced) or the ground near the building (ground-sourced, also known as geothermal). Technically, heat pumps move heat rather than harvest it, but the overall effect in terms of reduced energy use and reduced carbon footprint is similar. In the case of individual houses, various microgeneration technologies may be used to provide heat and electricity to the building, using solar cells or wind turbines for electricity, and biofuels or solar thermal collectors linked to seasonal thermal energy storage (STES) for space heating. An STES can also be used for summer cooling by storing the cold of winter underground. To cope with fluctuations in demand, zero energy buildings are frequently connected to the electricity grid, exporting electricity when there is a surplus and drawing electricity when not enough is being produced. Other buildings may be fully autonomous.
Energy harvesting is most often more cost- and resource-effective when done on a local but combined scale, for example for a group of houses, a cohousing community, a local district or a village, rather than for individual houses. An energy benefit of such localized harvesting is the virtual elimination of electrical transmission and distribution losses; on-site harvesting, such as with rooftop solar panels, eliminates these losses entirely. Energy harvesting in commercial and industrial applications should take advantage of the topography of each location: a site that is free of shade can generate large amounts of solar electricity from the building's roof, and almost any site can use geothermal or air-sourced heat pumps. The production of goods under net zero fossil energy consumption requires locations with geothermal, microhydro, solar, and wind resources to sustain the concept.
Zero-energy neighborhoods, such as the BedZED development in the United Kingdom, and those that are spreading rapidly in California and China, may use distributed generation schemes. This may in some cases include district heating, community chilled water, shared wind turbines, etc. There are current plans to use ZEB technologies to build entire off-the-grid or net zero energy use cities.
The "energy harvest" versus "energy conservation" debate
One of the key areas of debate in zero energy building design is over the balance between energy conservation and the distributed point-of-use harvesting of renewable energy (solar energy, wind energy, and thermal energy). Most zero energy homes use a combination of these strategies.
As a result of significant government subsidies for photovoltaic solar electric systems, wind turbines and the like, there are those who suggest that a ZEB is simply a conventional house with distributed renewable energy harvesting technologies. Entire subdivisions of such homes have appeared in locations where photovoltaic (PV) subsidies are significant, but many so-called "zero energy homes" still have utility bills. This type of energy harvesting without added energy conservation may not be cost-effective given the current price of electricity generated with photovoltaic equipment, depending on the local price of utility electricity. The cost, energy and carbon-footprint savings from conservation (e.g., added insulation, triple-glazed windows and heat pumps) compared with those from on-site energy generation (e.g., solar panels) have been published for an upgrade to an existing house.
Since the 1980s, passive solar building design and passive house standards have demonstrated heating energy consumption reductions of 70% to 90% in many locations without active energy harvesting. For new builds, and with expert design, this can be accomplished with little additional construction cost over a conventional building. Very few industry experts have the skills or experience to fully capture the benefits of passive design. Such passive solar designs are much more cost-effective than adding expensive photovoltaic panels to the roof of a conventional, inefficient building. A few kilowatts of photovoltaic panels (costing the equivalent of about US$2–3 per annual kWh of production) may only reduce external energy requirements by 15% to 30%. A conventional air conditioner with a seasonal energy efficiency ratio (SEER) of 14 requires over 7 kW of photovoltaic electricity while it is operating, and that does not include enough for off-grid night-time operation. Passive cooling and superior system engineering techniques can reduce the air conditioning requirement by 70% to 90%. Photovoltaic-generated electricity becomes more cost-effective when the overall demand for electricity is lower.
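The cited cost figure implies a simple comparison between offsetting demand with photovoltaics alone and reducing demand first. In the sketch below, the baseline demand and the assumed 60% demand reduction are illustrative; only the US$2–3 per annual kWh figure comes from the text.

    # Conservation-first vs PV-only, using the cited ~US$2-3 per annual kWh of PV.
    baseline_kwh = 12000.0          # assumed annual demand of a conventional house
    pv_cost = 2.5                   # US$ per annual kWh of PV production (cited range)

    pv_only = baseline_kwh * pv_cost
    reduced_kwh = baseline_kwh * (1 - 0.6)   # assume passive design cuts demand 60%
    passive_first = reduced_kwh * pv_cost    # PV sized for the remainder only

    print(f"PV-only: ~${pv_only:,.0f}; passive design first: ~${passive_first:,.0f} of PV")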
Combined approach in rapid retrofits for existing buildings
Companies in Germany and the Netherlands offer rapid climate retrofit packages for existing buildings, which add a custom designed shell of insulation to the outside of a building, along with upgrades for more sustainable energy use, such as heat pumps. Similar pilot projects are underway in the US.
Occupant behavior
The energy used in a building can vary greatly depending on the behavior of its occupants, and what occupants accept as comfortable varies widely. Studies of identical homes have shown dramatic differences in energy use across a variety of climates: the ratio of the highest to the lowest energy consumer in identical homes averages about 3, with some identical homes using up to 20 times as much heating energy as others. Occupant behavior varies through differences in thermostat settings and programming, levels of illumination and hot water use, the operation of windows and shading systems, and the number of miscellaneous electric devices, or plug loads, in use.
Utility concerns
Utility companies are typically legally responsible for maintaining the electrical infrastructure that brings power to cities, neighborhoods, and individual buildings. Utility companies typically own this infrastructure up to the property line of an individual parcel, and in some cases own electrical infrastructure on private land as well.
In the US, utilities have expressed concern that the use of net metering for ZNE projects threatens their base revenue, which in turn affects their ability to maintain and service the portion of the electrical grid they are responsible for. They have also expressed concern that states maintaining net metering laws may saddle non-ZNE homes with higher utility costs, since those homeowners would pay for grid maintenance while ZNE homeowners would theoretically pay nothing despite remaining connected. This creates potential equity issues, as the burden would currently appear to fall on lower-income households. A possible solution is a minimum base charge for all homes connected to the utility grid, which would require ZNE homeowners to pay for grid services independently of their electrical use.
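A minimal Python sketch of such a tariff, with hypothetical charge and rate values:

    # Net-metering bill with a fixed minimum base charge (hypothetical tariff).
    BASE_CHARGE = 15.00   # US$/month, assumed fixed grid-service charge
    ENERGY_RATE = 0.15    # US$/kWh, assumed volumetric rate

    def monthly_bill(net_import_kwh):
        """Energy charge can fall to zero; the base charge cannot."""
        return BASE_CHARGE + max(net_import_kwh, 0.0) * ENERGY_RATE

    print(monthly_bill(300.0))  # conventional home: base charge plus energy
    print(monthly_bill(0.0))    # ZNE home: still pays for grid services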
Additional concerns are that local distribution as well as larger transmission grids have not been designed to convey electricity in two directions, which may be necessary as higher levels of distributed energy generation come on line. Overcoming this barrier could require extensive upgrades to the electrical grid, however, as of 2010, this is not believed to be a major problem until renewable generation reaches much higher levels of penetration.
Development efforts
Wide acceptance of zero-energy building technology may require more government incentives or building code regulations, the development of recognized standards, or significant increases in the cost of conventional energy.
The Google photovoltaic campus and the Microsoft 480-kilowatt photovoltaic campus relied on US Federal, and especially California, subsidies and financial incentives. California is now providing US$3.2 billion in subsidies for residential-and-commercial near-zero-energy buildings. The details of other American states' renewable energy subsidies (up to US$5.00 per watt) can be found in the Database of State Incentives for Renewables and Efficiency. The Florida Solar Energy Center has a slide presentation on recent progress in this area.
The World Business Council for Sustainable Development has launched a major initiative to support the development of ZEB. Led by the CEO of United Technologies and the Chairman of Lafarge, the organization has both the support of large global companies and the expertise to mobilize the corporate world and governmental support to make ZEB a reality. Their first report, a survey of key players in real estate and construction, indicates that the costs of building green are overestimated by 300 percent. Survey respondents estimated that greenhouse gas emissions by buildings are 19 percent of the worldwide total, in contrast to the actual value of roughly 40 percent.
Influential zero-energy and low-energy buildings
Those who commissioned construction of passive houses and zero-energy homes (over the last three decades) were essential to iterative, incremental, cutting-edge, technology innovations. Much has been learned from many significant successes, and a few expensive failures.
The zero-energy building concept has been a progressive evolution from other low-energy building designs. Among these, the Canadian R-2000 and the German passive house standards have been internationally influential. Collaborative government demonstration projects, such as the superinsulated Saskatchewan House, and the International Energy Agency's Task 13, have also played their part.
Net zero energy building definition
The US National Renewable Energy Laboratory (NREL) published a report called Net-Zero Energy Buildings: A Classification System Based on Renewable Energy Supply Options. This was the first report to lay out a classification system for net zero/renewable energy buildings covering the full spectrum of clean energy sources, both on site and off site. The classification system identifies the following four main categories of net zero energy buildings/sites/campuses:
NZEB:A — A footprint renewables Net Zero Energy Building
NZEB:B — A site renewables Net Zero Energy Building
NZEB:C — An imported renewables Net Zero Energy Building
NZEB:D — An off-site purchased renewables Net Zero Energy Building
Applying this US government net zero classification system means that every building can become net zero with the right combination of the key net zero technologies: PV (solar), GHP (geothermal heating and cooling, thermal batteries), EE (energy efficiency), sometimes wind, and electric batteries. A graphical exposé of the scale of impact of applying these NREL guidelines can be seen in the Net Zero Foundation graphic titled "Net Zero Effect on U.S. Total Energy Use", which shows a possible 39% reduction in total US fossil fuel use from changing US residential and commercial buildings to net zero (37% if natural gas is still used for cooking at the same level).
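A schematic mapping of supply options to these categories is sketched below; this is a simplification, as the full NREL system also ranks combinations of options hierarchically.

    # Simplified lookup of the NREL NZEB categories described above.
    def nzeb_category(supply_option: str) -> str:
        categories = {
            "footprint": "NZEB:A",  # renewables within the building footprint
            "site":      "NZEB:B",  # renewables elsewhere on the site
            "imported":  "NZEB:C",  # renewable fuels imported to the site
            "purchased": "NZEB:D",  # off-site purchased renewables
        }
        return categories[supply_option]

    print(nzeb_category("site"))  # -> NZEB:B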
Net zero carbon conversion example
Many well-known universities have professed a desire to convert their energy systems entirely away from fossil fuels. Capitalizing on continuing developments in photovoltaics, geothermal heat pump technologies, and the advancing electric battery field, complete conversion to a carbon-free energy solution is becoming easier. Large-scale hydroelectric power has been in use since before 1900. An example of such a project is the Net Zero Foundation's proposal at MIT to take that campus completely off fossil fuel use. This proposal shows the coming application of net zero energy building technologies at the district energy scale.
Advantages and disadvantages
Advantages
isolation for building owners from future energy price increases
increased comfort due to more-uniform interior temperatures (this can be demonstrated with comparative isotherm maps)
reduced total cost of ownership due to improved energy efficiency
reduced total net monthly cost of living
reduced risk of loss from grid blackouts
reduced requirement for energy austerity and carbon emission taxes
improved reliability – photovoltaic systems have 25-year warranties and seldom fail during weather problems – the 1982 photovoltaic systems on the Walt Disney World EPCOT (Experimental Prototype Community of Tomorrow) Energy Pavilion were still in use until 2018, even through three hurricanes. They were taken down in 2018 in preparation for a new ride.
higher resale value as potential owners demand more ZEBs than available supply
the value of a ZEB building relative to similar conventional building should increase every time energy costs increase
contribution to the greater benefit of society, e.g. by providing sustainable renewable energy to the grid and reducing the need for grid expansion
optimizing bottom-up urban building energy models (UBEM) can improve the accuracy of building energy simulation
Disadvantages
initial costs can be higher – effort required to understand, apply, and qualify for ZEB subsidies, if they exist.
very few designers or builders have the necessary skills or experience to build ZEBs
possible declines in future utility company renewable energy costs may lessen the value of capital invested in energy efficiency
the price of new photovoltaic solar cell technology has been falling at roughly 17% per year, which lessens the value of capital invested in a solar electric generating system; current subsidies may be phased out as photovoltaic mass production lowers future prices
challenge to recover higher initial costs on resale of the building, although new energy rating systems are gradually being introduced.
while an individual house may use an average of net zero energy over a year, it may demand energy at the time of peak grid demand. In such a case, the grid must still have the capacity to supply all loads, so a ZEB may not reduce the risk of loss from grid blackouts.
without an optimized thermal envelope, the embodied energy, heating and cooling energy, and resource usage are higher than needed. ZEBs by definition do not mandate a minimum heating and cooling performance level, thus allowing oversized renewable energy systems to fill the energy gap.
solar energy capture using the house envelope only works in locations unobstructed from the sun. Solar energy capture cannot be optimized in north-facing (for the northern hemisphere; south-facing for the southern hemisphere) shade or wooded surroundings.
a ZEB is not free of carbon emissions: glass, for example, has a high embodied energy, and its production emits considerable carbon.
Building regulations such as height restrictions or fire code may prevent implementation of wind or solar power or external additions to an existing thermal envelope.
Zero energy building versus green building
The goal of green building and sustainable architecture is to use resources more efficiently and reduce a building's negative impact on the environment. Zero energy buildings achieve one key green-building goal: exporting as much renewable energy as they use over the course of a year, thereby reducing greenhouse gas emissions. ZEB goals need to be defined and set, as they are critical to the design process. Zero energy buildings may or may not be considered "green" in all areas, such as reducing waste or using recycled building materials. However, zero energy, or net zero, buildings do tend to have a much lower ecological impact over the life of the building than other "green" buildings that require imported energy and/or fossil fuel to be habitable and to meet the needs of occupants.
Both terms, zero energy building and green building, have similarities and differences. "Green" buildings often focus on operational energy and disregard the embodied carbon footprint of construction; according to the IPCC, embodied carbon will make up half of total carbon emissions between 2020 and 2050. Zero energy buildings, on the other hand, are specifically designed to produce enough energy from renewable sources to meet their own consumption requirements, whereas green buildings can be generally defined as buildings that reduce negative impacts on, or positively impact, the natural environment. Several factors must be considered before a building is determined to be a green building: efficient use of utilities such as water and energy; use of renewable energy; recycling and reuse practices to reduce waste; proper indoor air quality; ethically sourced and non-toxic materials; a design that allows the building to adapt to changing environmental climates; and aspects of the design, construction, and operational process that address the environment and the quality of life of occupants. The term green building can also refer to the practice of green building, which includes being resource-efficient from design, through construction and operation, and ultimately to deconstruction. The practice of green building differs slightly from zero energy building because it considers all environmental impacts, such as material use and water pollution, whereas the scope of zero energy buildings includes only the building's energy consumption and its ability to produce an equal or greater amount of energy from renewable sources.
There are many unforeseen design challenges and site conditions required to efficiently meet the renewable energy needs of a building and its occupants, as much of this technology is new. Designers must apply holistic design principles, and take advantage of the free naturally occurring assets available, such as passive solar orientation, natural ventilation, daylighting, thermal mass, and night time cooling. Designers and engineers must also experiment with new materials and technological advances, striving for more affordable and efficient production.
Zero energy building versus zero heating building
With advances in ultra-low-U-value glazing, (nearly) zero-heating buildings have been proposed to supersede nearly zero-energy buildings in the EU. A zero-heating building reduces reliance on passive solar design, leaving the building more open to conventional architectural design, and removes the need for a seasonal/winter utility power reserve.
The annual specific heating demand of a zero-heating house should not exceed 3 kWh/m²a. A zero-heating building is simpler to design and operate; for example, there is no need for modulated sun shading.
Certification
The two most common certifications for green building are Passive House and LEED. The goal of Passive House is to be energy efficient and to reduce heating and cooling use below standard levels; LEED certification is more comprehensive with regard to energy use, awarding a building credits as it demonstrates sustainable practices across a range of categories. Another certification designating a building as a net zero energy building exists within the requirements of the Living Building Challenge (LBC): the Net Zero Energy Building (NZEB) certification provided by the International Living Future Institute (ILFI). The designation was developed in November 2011 as the NZEB certification but was simplified to the Zero Energy Building Certification in 2017. Also among green building certifications, the BCA Green Mark rating system allows buildings to be evaluated for their performance and impact on the environment.
Worldwide
International initiatives
As a response to global warming and increasing greenhouse gas emissions, countries around the world have gradually implemented policies to promote ZEBs. Between 2008 and 2013, researchers from Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, the Republic of Korea, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, the United Kingdom and the US worked together in the joint research program "Towards Net Zero Energy Solar Buildings". The program was created under the umbrella of the International Energy Agency (IEA) Solar Heating and Cooling Programme (SHC) Task 40 / Energy in Buildings and Communities (EBC, formerly ECBCS) Annex 52, with the intent of harmonizing international definition frameworks for net-zero and very low energy buildings by dividing the work into subtasks. In 2015, the Paris Agreement was created under the United Nations Framework Convention on Climate Change (UNFCCC) with the intent of keeping the global temperature rise this century below 2 degrees Celsius, and pursuing a limit of 1.5 degrees Celsius, by limiting greenhouse gas emissions. While compliance is not enforced, 197 countries signed the international treaty, under which each party updates its intended nationally determined contribution (INDC) every five years and reports annually to the COP. Given their advantages for energy efficiency and carbon emission reduction, ZEBs are being widely implemented in many countries as a solution to energy and environmental problems in the infrastructure sector.
Australia
National trajectory
In Australia, the Trajectory for Low Energy Buildings and its Addendum were agreed by all Commonwealth, state and territory energy ministers in 2019.
The Trajectory is a national plan that aims to achieve zero energy and carbon-ready commercial and residential buildings in Australia. It is a key initiative to address Australia’s 40% energy productivity improvement target by 2030 under the National Energy Productivity Plan.
On 7 July 2023, the Energy and Climate Change Ministerial Council agreed to update the Trajectory for Low Energy Buildings by the end of 2024.
The updates to the Trajectory will:
support the delivery of a low energy, net zero emissions residential and commercial building sector by 2050
consider the success of the existing program
help develop the policy pathway for the building sector to achieve net zero by 2050.
ZEB in Australia
Council House 2 (CH2) is an office building located at 240 Little Collins Street in the Melbourne central business district, Australia. It is used by the City of Melbourne council and, in April 2005, became the first purpose-built office building in Australia to achieve a maximum Six Green Star rating.
Belgium
In Belgium there is a project with the ambition to make the Belgian city Leuven climate-neutral in 2030.
Brazil
In Brazil, Ordinance No. 42 of February 24, 2021, approved the Inmetro Normative Instruction for the Classification of Energy Efficiency of Commercial, Service and Public Buildings (INI-C), which improves the Technical Quality Requirements for the Energy Efficiency Level of Commercial, Service and Public Buildings (RTQ-C) by specifying the criteria and methods for classifying such buildings according to their energy efficiency. Annex D presents the procedures for determining the potential for local renewable energy generation and the assessment conditions for near zero energy buildings (NZEBs) and positive energy buildings (PEBs).
Canada
The Canadian Home Builders Association - National oversees the Net Zero Homes certification label, a voluntary industry-led labeling initiative.
In December 2017, the BC Energy Step Code entered into legal force in British Columbia. Local British Columbia governments may use the standard to incentivize or require a level of energy efficiency in new construction that goes above and beyond the requirements of the base building code. The regulation is designed as a technical roadmap to help the province reach its target that all new buildings will attain a net zero energy ready level of performance by 2032.
In August 2017, the Government of Canada released Build Smart - Canada's Buildings Strategy, as a key driver of the Pan Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy. The Build Smart strategy seeks to dramatically increase the energy efficiency of Canadian buildings in pursuit of a net zero energy ready level of performance.
In Canada the Net-Zero Energy Home Coalition is an industry association promoting net-zero energy home construction and the adoption of a near net-zero energy home (nNZEH), NZEH Ready and NZEH standard.
The Canada Mortgage and Housing Corporation is sponsoring the EQuilibrium Sustainable Housing Competition that will see the completion of fifteen zero-energy and near-zero-energy demonstration projects across the country starting in 2008.
The EcoTerra House in Eastman, Quebec is Canada's first nearly net-zero energy housing built through the CMHC EQuilibrium Sustainable Housing Competition. The house was designed by Assoc. Prof. Dr. Masa Noguchi of the University of Melbourne for Alouette Homes and engineered by Prof. Dr. Andreas K. Athienitis of Concordia University.
In 2014, the public library building in Varennes, QC, became the first ZNE institutional building in Canada. The library is also LEED gold certified.
The EcoPlusHome in Bathurst, New Brunswick. The Eco Plus Home is a prefabricated test house built by Maple Leaf Homes and with technology from Bosch Thermotechnology.
Mohawk College will be building Hamilton's first net zero building.
China
With an estimated population of 1,439,323,776 people, China has become one of the world's leading contributors to greenhouse gas emissions due to its ongoing rapid urbanization. Even with the growing increase in building infrastructure, China's overall energy demand has long grown less rapidly than its gross domestic product (GDP), and its energy intensity has declined markedly since the late 1970s. Nevertheless, due to its dense population and rapid growth of infrastructure, China has become the world's second largest energy consumer and is in a position to become the leading contributor to greenhouse gas emissions in the next century.
Since 2010, the Chinese government has released new national policies to raise ZEB design standards and has laid out a series of incentives to increase ZEB projects in China. In November 2015, China's Ministry of Housing and Urban-Rural Development (MOHURD) released a technical guide for passive, low-energy green residential buildings. The guide was aimed at improving energy efficiency in China's infrastructure and was the first of its kind to be formally released. With the rapid growth in ZEBs over the preceding three years, a further influx of ZEBs was estimated to be built in China by 2020, in addition to existing ZEB projects.
As a response to the Paris Agreement in 2015, China set a target of peaking carbon emissions around 2030 while aiming to lower carbon dioxide emissions per unit of GDP by 60–65 percent from 2005 levels. In 2020, Chinese Communist Party leader Xi Jinping declared in his address to the UN General Assembly that China would be carbon neutral by 2060, pushing climate change reforms forward. With more than 95 percent of China's energy originating from fuel sources that emit carbon dioxide, carbon neutrality in China will require an almost complete transition to sources such as solar, wind, hydro, or nuclear power. To achieve carbon neutrality, China's proposed energy quota policy will have to incorporate new monitoring mechanisms that ensure accurate measurement of the energy performance of buildings. Future research should investigate the challenges that could arise from the implementation of ZEB policies in China.
Net-zero energy projects in China
One of the new generation of net-zero energy office buildings is the 71-story Pearl River Tower in Guangzhou, China. Designed by Skidmore, Owings & Merrill LLP, the tower was intended to generate as much energy as it uses on an annual basis, following four steps to net zero energy: reduction, absorption, reclamation, and generation. While initial plans included natural gas-fired microturbines for generating electricity, photovoltaic panels integrated into the glazed roof and shading louvers, together with tactical building design and electricity generation from vertical-axis wind turbines (VAWTs), were chosen instead due to local regulations.
Denmark
The Strategic Research Centre on Zero Energy Buildings was established in 2009 at Aalborg University by a grant from the Danish Council for Strategic Research (DSF), the Programme Commission for Sustainable Energy and Environment, in cooperation with the Technical University of Denmark, the Danish Technological Institute, the Danish Construction Association and several private companies. The purpose of the centre is to develop zero energy building concepts through integrated, intelligent building technologies that ensure considerable energy conservation and optimal application of renewable energy. In cooperation with industry, the centre will create the necessary basis for long-term sustainable development in the building sector.
Germany
Technische Universität Darmstadt won first place in the international 2007 Solar Decathlon zero energy design competition with a Passivhaus (passive house) design plus renewables, scoring highest in the architecture, lighting, and engineering contests
Fraunhofer Institute for Solar Energy Systems ISE, Freiburg im Breisgau
Net zero energy, energy-plus or climate-neutral buildings in the next generation of electricity grids
India
India's first net zero building is Indira Paryavaran Bhawan, located in New Delhi, inaugurated in 2014. Features include passive solar building design and other green technologies. High-efficiency solar panels are proposed. It cools air from toilet exhaust using a thermal wheel in order to reduce load on its chiller system. It has many water conservation features.
Iran
In 2011, Payesh Energy House (PEH), or Khaneh Payesh Niroo, a collaboration of the Fajr-e-Toseah Consultant Engineering Company and Vancouver Green Homes Ltd under the management of the Payesh Energy Group (EPG), launched the first net-zero passive house in Iran. This concept makes the design and construction of PEH a sample model and a standardized process for mass production by MAPSA.
Another example of the new generation of zero energy office buildings is the 24-story OIIC Office Tower, started in 2011 as the OIIC Company headquarters. It combines modest energy efficiency with large distributed renewable energy generation from both solar and wind, and is managed by Rahgostar Naft Company in Tehran, Iran. The tower receives economic support from government subsidies that now fund many significant fossil-fuel-free efforts.
Ireland
In 2005, a private company launched the world's first standardised passive house in Ireland, this concept makes the design and construction of passive house a standardised process.
Conventional low energy construction techniques have been refined and modelled on the PHPP (Passive House Design Package) to create the standardised passive house.
Building offsite allows high precision techniques to be utilised and reduces the possibility of errors in construction.
In 2009, the same company started a project to use 23,000 liters of water in a seasonal storage tank, heated by evacuated solar tubes throughout the year, with the aim of providing the house with enough heat throughout the winter months, thus eliminating the need for any electrical heating to keep the house comfortably warm. The system is monitored and documented by a research team from the University of Ulster, and the results will be included in a PhD thesis.
In 2012 Cork Institute of Technology started renovation work on its 1974 building stock to develop a net zero energy building retrofit. The exemplar project will become Ireland's first zero energy testbed offering a post-occupancy evaluation of actual building performance against design benchmarks.
Jamaica
The first zero energy building in Jamaica and the Caribbean opened at the Mona Campus of the University of the West Indies (UWI) in 2017. The 2300 square foot building was designed to inspire more sustainable and energy efficient buildings in the area.
Japan
After the April 2011 Fukushima earthquake and the Fukushima Daiichi nuclear disaster that followed, Japan experienced a severe power crisis that raised awareness of the importance of energy conservation.
In 2012, the Ministry of Economy, Trade and Industry; the Ministry of Land, Infrastructure, Transport and Tourism; and the Ministry of the Environment summarized the road map for a low-carbon society, which set the goal of making ZEH and ZEB the standard for new construction by 2020.
The Mitsubishi Electric Corporation is constructing Japan's first zero energy office building, set to be completed in October 2020 (as of September 2020). The SUSTIE ZEB test facility, located in Kamakura, Japan, was built to develop ZEB technology; with net zero certification, the facility is projected to reduce energy consumption by 103%.
Japan has set a goal that all new houses be net zero energy by 2030. The developer Sekisui House introduced its first net zero home in 2013 and is now planning Japan's first zero energy condominium in Nagoya City: a three-story building with 12 units, with solar panels on the roof and fuel cells for each unit to provide backup power.
Korea (Republic of)
South Korea's mandatory ZEB requirements, previously applied to buildings with a gross floor area (GFA) of 1,000 m² or more in 2021, expanded to buildings with a GFA of 500 m² or more in 2022 and will apply to all public buildings from 2024. For private buildings, ZEB certification will be mandated for building permits with a GFA over 100,000 m² from 2023; after 2025, zero-energy construction for private buildings will be expanded to GFAs over 1,000 m². The goal of the policy is to convert all public-sector buildings to ZEB grade 3 (an energy independence rate of 60%–80%) and all private buildings to ZEB grade 5 (an energy independence rate of 20%–40%) by 2030.
EnergyX DY-Building (에너지엑스 DY빌딩), the first commercial net-zero energy building (NZEB, or ZEB grade 1) and the first plus energy building (+ZEB, or ZEB grade plus) in Korea, opened in 2023. The energy technology and sustainable architecture platform company EnergyX developed, designed, and engineered the building with its proprietary technologies and services. EnergyX DY-Building received ZEB certification with an energy independence rate (or energy self-sufficiency rate) of 121.7%.
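The grade bands cited above can be expressed as a simple threshold lookup; only the grade 3, grade 5, and above-100% bands are given in the text, so the remaining boundaries in the sketch below are interpolated assumptions.

    # ZEB grade from energy independence rate (%); intermediate bands assumed.
    def zeb_grade(rate_pct: float) -> str:
        bands = [
            (100.0, "grade 1 (NZEB; 'grade plus' when above 100%)"),
            (80.0,  "grade 2 (assumed band)"),
            (60.0,  "grade 3"),           # 60-80% band cited in the text
            (40.0,  "grade 4 (assumed band)"),
            (20.0,  "grade 5"),           # 20-40% band cited in the text
        ]
        for threshold, label in bands:
            if rate_pct >= threshold:
                return label
        return "below certification threshold"

    print(zeb_grade(121.7))  # the EnergyX DY-Building's cited rate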
Malaysia
In October 2007, the Malaysia Energy Centre (PTM) successfully completed the development and construction of the PTM Zero Energy Office (ZEO) Building. The building has been designed to be a super-energy-efficient building using only 286 kWh/day. The renewable energy – photovoltaic combination is expected to result in a net zero energy requirement from the grid. The building is currently undergoing a fine tuning process by the local energy management team. Findings are expected to be published in a year.
In 2016, the Sustainable Energy Development Authority Malaysia (SEDA Malaysia) started a voluntary initiative called the Low Carbon Building Facilitation Program to support the low carbon cities programs in Malaysia. Under the program, several demonstration projects reduced energy and carbon by more than 50%, and some by more than 75%. Continuous improvement of super-energy-efficient buildings, with significant implementation of on-site renewable energy, has made a few of them nearly zero energy buildings (nZEB) or net-zero energy buildings (NZEB). In March 2018, SEDA Malaysia started the Zero Energy Building Facilitation Program.
Malaysia also has its own sustainable building tool specifically for low carbon and zero energy buildings, called GreenPASS, developed by the Construction Industry Development Board Malaysia (CIDB) in 2012 and currently administered and promoted by SEDA Malaysia. GreenPASS is officially called the Construction Industry Standard (CIS) 20:2012.
Netherlands
In September 2006, the Dutch headquarters of the World Wildlife Fund (WWF) in Zeist was opened. This earth-friendly building gives back more energy than it uses. All materials in the building were tested against strict requirements laid down by the WWF and the architect.
Norway
In February 2009, the Research Council of Norway assigned The Faculty of Architecture and Fine Art at the Norwegian University of Science and Technology to host the Research Centre on Zero Emission Buildings (ZEB), which is one of eight new national Centres for Environment-friendly Energy Research (FME). The main objective of the FME-centres is to contribute to the development of good technologies for environmentally friendly energy and to raise the level of Norwegian expertise in this area. In addition, they should help to generate new industrial activity and new jobs. Over the next eight years, the FME-Centre ZEB will develop competitive products and solutions for existing and new buildings that will lead to market penetration of zero emission buildings related to their production, operation and demolition.
Singapore
Singapore unveiled a prominent net-zero energy development at the National University of Singapore (NUS). The building, called SDE4, is located within a group of three buildings in its School of Design and Environment (SDE). The design achieved a Green Mark Platinum certification: it produces as much energy as it consumes, with a solar-panel-covered rooftop, a hybrid cooling system, and many integrated systems for optimum energy efficiency. This development was the first new-build zero-energy building to come to fruition in Singapore, and the first zero-energy building at the NUS. The first retrofitted zero-energy building in Singapore, at the Building and Construction Authority (BCA) academy, was launched by the Minister for National Development, Mah Bow Tan, at the inaugural Singapore Green Building Week on October 26, 2009. Singapore's Green Building Week (SGBW) promotes sustainable development and celebrates the achievements of successfully designed sustainable buildings.
A net-zero energy building unveiled more recently is the SMU Connexion (SMUC). It is the first net-zero energy building in the city that also utilizes mass engineered timber (MET). It is designed to meet the Building and Construction Authority (BCA) Green Mark Platinum certification and has been in operation since January 2020.
Switzerland
The Swiss MINERGIE-A-Eco label certifies zero energy buildings. The first building with this label, a single-family home, was completed in Mühleberg in 2011.
United Arab Emirates
Masdar City in Abu Dhabi
Dubai The Sustainable City in Dubai
United Kingdom
In December 2006, the government announced that by 2016 all new homes in England will be zero energy buildings. To encourage this, an exemption from Stamp Duty Land Tax is planned. In Wales the plan is for the standard to be met earlier in 2011, although it is looking more likely that the actual implementation date will be 2012. However, as a result of a unilateral change of policy published at the time of the March 2011 budget, a more limited policy is now planned which, it is estimated, will only mitigate two thirds of the emissions of a new home.
BedZED development
Hockerton Housing Project
In January 2019, the Ministry of Housing, Communities and Local Government defined 'zero energy' as simply meeting current building standards, effectively resolving the issue by definition rather than by raising performance.
United States
In the US, ZEB research is currently being supported by the US Department of Energy (DOE) Building America Program, including industry-based consortia and researcher organizations at the National Renewable Energy Laboratory (NREL), the Florida Solar Energy Center (FSEC), Lawrence Berkeley National Laboratory (LBNL), and Oak Ridge National Laboratory (ORNL). From fiscal year 2008 to 2012, DOE plans to award $40 million to four Building America teams, the Building Science Corporation; IBACOS; the Consortium of Advanced Residential Buildings; and the Building Industry Research Alliance, as well as a consortium of academic and building industry leaders. The funds will be used to develop net-zero-energy homes that consume 50% to 70% less energy than conventional homes.
DOE is also awarding $4.1 million to two regional building technology application centers that will accelerate the adoption of new and developing energy-efficient technologies. The two centers, located at the University of Central Florida and Washington State University, will serve 17 states, providing information and training on commercially available energy-efficient technologies.
The U.S. Energy Independence and Security Act of 2007 created 2008 through 2012 funding for a new solar air conditioning research and development program, which should soon demonstrate multiple new technology innovations and mass production economies of scale.
The 2008 Solar America Initiative funded research and development into future development of cost-effective Zero Energy Homes in the amount of $148 million in 2008.
The Solar Energy Tax Credits have been extended until the end of 2016.
By Executive Order 13514, U.S. President Barack Obama mandated that by 2015, 15% of existing Federal buildings conform to new energy efficiency standards and 100% of all new Federal buildings be Zero-Net-Energy by 2030.
Energy Free Home Challenge
In 2007, the philanthropic Siebel Foundation created the Energy Free Home Foundation. The goal was to offer $20 million in global incentive prizes to design and build a 2,000 square foot (186 square meter) three-bedroom, two bathroom home with (1) net-zero annual utility bills that also has (2) high market appeal, and (3) costs no more than a conventional home to construct.
The plan included funding to build the top ten entries at $250,000 each, a $10 million first prize, and then a total of 100 such homes to be built and sold to the public.
Beginning in 2009, Thomas Siebel made many presentations about his Energy Free Home Challenge. The Siebel Foundation Report stated that the Energy Free Home Challenge was "Launching in late 2009".
The Lawrence Berkeley National Laboratory at the University of California, Berkeley participated in writing the "Feasibility of Achieving Zero-Net-Energy, Zero-Net-Cost Homes" for the $20-million Energy Free Home Challenge.
If implemented, the Energy Free Home Challenge would have provided increased incentives for improved technology and consumer education about zero energy buildings coming in at the same cost as conventional housing.
US Department of Energy Solar Decathlon
The US Department of Energy Solar Decathlon is an international competition that challenges collegiate teams to design, build, and operate the most attractive, effective, and energy-efficient solar-powered house. Achieving zero net energy balance is a major focus of the competition.
States
Arizona
Zero Energy House developed by the NAHB Research Center and John Wesley Miller Companies, Tucson.
California
The State of California has proposed that all new low- and mid-rise residential buildings, and all new commercial buildings, be designed and constructed to ZNE standards beginning in 2020 and 2030, respectively. The requirements, if implemented, will be promulgated via the California Building Code, which is updated on a three-year cycle and which currently mandates some of the highest energy efficiency standards in the United States. California is anticipated to further increase efficiency requirements by 2020, thus avoiding the trends discussed above of building standard housing and achieving ZNE by adding large amounts of renewables. The California Energy Commission is required to perform a cost-benefit analysis to prove that new regulations create a net benefit for residents of the state.
West Village, located on the University of California campus in Davis, California, was the largest ZNE-planned community in North America at the time of its opening in 2014. The development contains student housing for approximately 1,980 UC Davis students as well as leasable office space and community amenities including a community center, pool, gym, restaurant and convenience store. Office spaces in the development are currently leased by energy and transportation-related University programs. The project was a public-private partnership between the university and West Village Community Partnership LLC, led by Carmel Partners of San Francisco, a private developer, who entered into a 60-year ground lease with the university and was responsible for the design, construction, and implementation of the $300 million project, which is intended to be market-rate housing for Davis. This is unique as the developer designed the project to achieve ZNE at no added cost to themselves or to the residents. Designed and modeled to achieve ZNE, the project uses a mixture of passive elements (roof overhangs, well-insulated walls, radiant heat barriers, ducts in insulated spaces, etc.) as well as active approaches (occupancy sensors on lights, high-efficiency appliances and lighting, etc.). Designed to out-perform California's 2008 Title 24 energy codes by 50%, the project produced 87% of the energy it consumed during its first year in operation. The shortcoming in ZNE status is attributed to several factors, including improperly functioning heat pump water heaters, which have since been fixed. Occupant behavior is significantly different from that anticipated, with the all-student population using more energy on a per-capita basis than typical inhabitants of single-family homes in the area. One of the primary factors driving increased energy use appears to be the increased miscellaneous electrical loads (MEL, or plug loads) in the form of mini-refrigerators, lights, computers, gaming consoles, televisions, and other electronic equipment. The university continues to work with the developer to identify strategies for achieving ZNE status. These approaches include incentivizing occupant behavior and increasing the site's renewable energy capacity, which is a 4 MW photovoltaic array per the original design. The West Village site is also home to the Honda Smart Home US, a beyond-ZNE single-family home that incorporates cutting-edge technologies in energy management, lighting, construction, and water efficiency.
The IDeAs Z2 Design Facility is a net zero energy, zero carbon retrofit project occupied since 2007. It uses less than one quarter of the energy of a typical US office by applying strategies such as daylighting, radiant heating/cooling with a ground-source heat pump, and high-energy-performance lighting and computing; the remaining energy demand is met with renewable energy from its building-integrated photovoltaic array. In 2009, building owner and occupant Integrated Design Associates (IDeAs) recorded the building's actual measured energy use intensity and on-site generation, confirming its near-net-zero performance for the year. The building is also carbon neutral, with no gas connection and with carbon offsets purchased to cover the embodied carbon of the building materials used in the renovation.
The Zero Net Energy Center, scheduled to open in 2013 in San Leandro, is to be a 46,000-square-foot electrician training facility created by the International Brotherhood of Electrical Workers Local 595 and the Northern California chapter of the National Electrical Contractors Association. Training will include energy-efficient construction methods.
The Green Idea House is a net zero energy, zero-carbon retrofit in Hermosa Beach.
George LeyVa Middle School Administrative Offices, occupied since fall 2011, is a net zero energy, net zero carbon emissions building of just over 9,000 square feet. With daylighting, variable refrigerant flow HVAC, and displacement ventilation, it is designed to use half of the energy of a conventional California school building, and, through a building-integrated solar array, provides 108% of the energy needed to offset its annual electricity use. The excess helps power the remainder of the middle school campus. It is the first publicly funded NZE K–12 building in California.
The Stevens Library at Sacred Heart Schools in California is the first net-zero library in the United States, receiving Net Zero Energy Building status from the International Living Future Institute, part of the PG&E Zero Net Energy Pilot Project.
The Santa Monica City Services Building is among the first net-zero energy, net-zero water public/municipal buildings in California. Completed in 2020, the 50,000-square-foot addition to the historic Santa Monica City Hall building was designed to provide its own energy and water, and to minimize energy use through efficient building systems.
At 402,000 square feet, the California Air Resources Board Southern California Headquarters – Mary D. Nichols Campus is the largest net-zero energy facility in the United States. A photovoltaic system covers 204,903 square feet across the facility rooftop and parking pavilions. The 3.5-megawatt system is anticipated to generate roughly 6,235,000 kWh of renewable energy per year. The facility was dedicated on November 18, 2021.
Colorado
The Moore House achieves net-zero energy usage with passive solar design, 'tuned' heat reflective windows, super-insulated and air-tight construction, natural daylighting, solar thermal panels for hot water and space heating, a photovoltaic (PV) system that generates more carbon-free electricity than the house requires, and an energy-recovery ventilator (ERV) for fresh air. The green building strategies used on the Moore House earned it a verified home energy rating system (HERS) score of −3.
The NREL Research Support Facility in Golden is a class A office building. Its energy efficiency features include: Thermal storage concrete structure, transpired solar collectors, 70 miles of radiant piping, high-efficiency office equipment, and an energy-efficient data center that reduces the data center's energy use by 50% over traditional approaches.
Wayne Aspinall Federal Building in Grand Junction, originally constructed in 1918, became the first Net Zero Energy building listed on the National Register of Historic Places. On-site renewable energy generation is intended to produce 100% of the building's energy throughout the year using the following energy efficiency features: Variable refrigerant flow for the HVAC, a geo-exchange system, advanced metering and building controls, high-efficient lighting systems, thermally enhanced building envelope, interior window system (to maintain historic windows), and advanced power strips (APS) with individual occupancy sensors.
Tutt Library at Colorado College was renovated to be a net-zero library in 2017, making it the largest ZNE academic library. It received an Innovation Award from the National Association of College and University Business Officers.
Florida
The 1999 side-by-side Florida Solar Energy Center Lakeland demonstration project was called the "Zero Energy Home". It was a first-generation university effort that significantly influenced the creation of the U.S. Department of Energy, Energy Efficiency and Renewable Energy, Zero Energy Home program.
Illinois
The Walgreens store at 741 Chicago Ave, Evanston, is the first of the company's stores to be built as, or converted to, a net zero energy building. It is the first net zero energy retail store to be built and paves the way for renovating and building net zero energy retail stores in the future. The store includes the following energy efficiency features: a geo-exchange system, energy-efficient building materials, LED lighting with daylight harvesting, and carbon dioxide refrigerant.
The Electrical and Computer Engineering building at the University of Illinois at Urbana-Champaign, which was built in 2014, is a net zero building.
Iowa
The MUM Sustainable Living Center was designed to surpass LEED Platinum qualification. The Maharishi University of Management (MUM) in Fairfield, Iowa, founded by Maharishi Mahesh Yogi (best known for having brought Transcendental Meditation to the West) incorporates principles of Bau Biology (a German system that focuses on creating a healthy indoor environment), as well as Maharishi Vedic Architecture (an Indian system of architecture focused on the precise orientation, proportions and placement of rooms). The building is one of the few in the country to qualify as net zero, and one of even fewer that can claim the banner of grid positive via its solar power system. A rainwater catchment system and on-site natural waste-water treatment likewise take the building off (sewer) grid with respect to water and waste treatment. Additional green features include natural daylighting in every room, natural and breathable earth block walls (made by the program's students), purified rainwater for both potable and non-potable functions; and an on-site water purification and recycling system consisting of plants, algae, and bacteria.
Kentucky
Richardsville Elementary School, part of the Warren County Public School District in south-central Kentucky, is the first Net Zero energy school in the United States. To reach Net Zero, innovative energy reduction strategies were used by CMTA Consulting Engineers and Sherman Carter Barnhart Architects including dedicated outdoor air systems (DOAS) with dynamic reset, new IT systems, alternative methods to prepare lunches, and the use of solar photovoltaics. The project has an efficient thermal envelope constructed with insulated concrete form (ICF) walls, geothermal water source heat pumps, low-flow fixtures, and features daylighting extensively throughout. It is also the first truly wireless school in Kentucky.
Locust Trace AgriScience Center, an agricultural-based vocational school serving Fayette County Public Schools and surrounding districts, features a Net Zero Academic Building engineered by CMTA Consulting Engineers and designed by Tate Hill Jacobs Architects. The facility, located in Lexington, Kentucky, also has a greenhouse, riding arena with stalls, and a barn. To reach Net Zero in the Academic Building the project utilizes an air-tight envelope, expanded indoor temperature setpoints in specified areas to more closely model real-world conditions, a solar thermal system, and geothermal water source heat pumps. The school has further reduced its site impact by minimizing municipal water use through the use of a dual system consisting of a standard leach field system and a constructed wetlands system and using pervious surfaces to collect, drain, and use rainwater for crop irrigation and animal watering.
Massachusetts
The government of Cambridge has enacted a plan for "net zero" carbon emissions from all buildings in the city by 2040.
John W. Olver Transit Center, designed by Charles Rose Architects Inc, is an intermodal transit hub in Greenfield, Massachusetts. Built with American Recovery and Reinvestment Act funds, the facility was constructed with solar panels, geothermal wells, copper heat screens and other energy efficient technologies.
Michigan
The Mission Zero House is the 110-year-old Ann Arbor home of Greenovation.TV host and Environment Report contributor Matthew Grocoff. As of 2011, the home is the oldest home in America to achieve net-zero energy. The owners are chronicling their project on Greenovation.TV and The Environment Report on public radio.
The Vineyard Project is a Zero Energy Home (ZEH) thanks to its passive solar design, 3.3 kW of photovoltaics, solar hot water, and geothermal heating and cooling. The home is pre-wired for a future wind turbine and uses only about 600 kWh of energy per month (roughly 20 kWh of electricity per day), with many days net-metering backwards. The project also used ICF insulation throughout the entire house and is certified as Platinum under the LEED for Homes certification. The project was awarded Green Builder Magazine Home of the Year 2009.
The Lenawee Center for a Sustainable Future, a new campus for Lenawee Intermediate School District, serves as a living laboratory for the future of agriculture. It is the first Net Zero education building in Michigan, engineered by CMTA Consulting Engineers and designed by The Collaborative, Inc. The project includes solar arrays on the ground as well as the roof, a geothermal heating and cooling system, solar tubes, permeable pavement and sidewalks, a sedum green roof, and an overhang design to regulate building temperature.
Missouri
In 2010, architectural firm HOK worked with energy and daylighting consultant The Weidt Group to design a net zero carbon emissions Class A office building prototype in St. Louis, Missouri. The team chronicled its process and results on Netzerocourt.com.
New Jersey
The 31 Tannery Project, located in Branchburg, New Jersey, serves as the corporate headquarters for Ferreira Construction, the Ferreira Group, and Noveda Technologies. The 42,000-square-foot (3,900 m2) office and shop building was constructed in 2006 and is the first building in the state of New Jersey to meet New Jersey's Executive Order 54. The building is also the first Net Zero Electric Commercial Building in the United States.
New York
Green Acres, the first true zero-net energy development in America, is located in New Paltz, north of New York City. Greenhill Contracting began construction on this development of 25 single-family homes in summer 2008, with designs by BOLDER Architecture. After a full year of occupancy, from March 2009 to March 2010, the solar panels of the first occupied home in Green Acres generated 1490 kWh more energy than the home consumed. The second occupied home has also achieved zero-net energy use. As of June 2011, five houses have been completed, purchased and occupied, two are under construction, and several more are being planned. The homes are built of insulated concrete forms with spray-foam-insulated rafters and triple-pane casement windows, heated and cooled by a geothermal system, to create extremely energy-efficient and long-lasting buildings. The heat recovery ventilator provides constant fresh air and, with low- or no-VOC (volatile organic compound) materials, these homes are very healthy to live in. Green Acres is reportedly the first development of multiple buildings, residential or commercial, to achieve true zero-net energy use in the United States, and the first zero-net energy development of single-family homes in the world.
Greenhill Contracting has built two luxury zero-net energy homes in Esopus, completed in 2008. One house was the first Energy Star rated zero-net energy home in the Northeast and the first registered zero-net energy home on the US Department of Energy's Builder's Challenge website. These homes were the template for Green Acres and the other zero-net energy homes that Greenhill Contracting has built, in terms of methods and materials.
The headquarters of Hudson Solar, a dba of Hudson Valley Clean Energy, Inc., located in Rhinebeck and completed in 2007, was determined by NESEA (the Northeast Sustainable Energy Association) to be the first proven zero-net energy commercial building in New York State and the ten-state northeast region (October 2008). The building consumes less energy than it generates, using a solar electric system to generate power from the sun, geothermal heating and cooling, and solar thermal collectors to heat all its hot water.
Oklahoma
The first zero-energy design home was built in 1979 with support from President Carter's new United States Department of Energy. It relied heavily on passive solar building design for space heat, water heat and space cooling, and it heated and cooled itself effectively in a climate where the summer peak temperature was 110 degrees Fahrenheit and the winter low was −10 °F. It did not use active solar systems. It is a double envelope house that uses a gravity-fed natural-convection air flow design to circulate passive solar heat from the south-facing glass of its greenhouse through a thermal buffer zone in the winter. A swimming pool in the greenhouse provided thermal mass for winter heat storage. In the summer, air from two underground earth tubes is used to cool the thermal buffer zone and exhaust heat through 7200 cfm of outer-envelope roof vents.
Oregon
Net Zero Energy Building Certification launched in 2011, with an international following. The first certified project, Painters Hall, is Pringle Creek's community center, café, office, art gallery, and event venue. Originally built in the 1930s, Painters Hall was renovated to LEED Platinum Net Zero energy building standards in 2010, demonstrating the potential of converting existing building stock into high-performance, sustainable building sites. Painters Hall features simple low-cost solutions for energy reduction, such as natural daylighting and passive cooling, that save money and increase comfort. A district ground-source geothermal loop serves the building's GSHP for highly efficient heating and air conditioning. Excess generation from the 20.2 kW rooftop solar array offsets pumping for the neighborhood's geothermal loop system. Open to the public, Painters Hall is a hub for gatherings of friends, neighbors, and visitors at the heart of a neighborhood designed around nature and community.
Pennsylvania
The Phipps Center for Sustainable Landscapes in Pittsburgh was designed to be one of the greenest buildings in the world. It achieved Net Zero Energy Building Certification from the Living Building Challenge in February 2014 and is pursuing full certification. The Phipps Center uses energy conservation technologies such as solar hot water collectors, carbon dioxide sensors, and daylighting, as well as renewable energy technologies to allow it to achieve Net Zero Energy status.
The Lombardo Welcome Center at Millersville University became the first building in the state to be zero-energy certified. This was the largest step toward Millersville University's goal of being carbon neutral by 2040. According to the International Living Future Institute, the Lombardo Welcome Center is one of the highest-performing buildings in the country, generating 75% more energy than it uses.
Rhode Island
In Newport, the Paul W. Crowley East Bay MET School is the first Net Zero project to be constructed in Rhode Island. It is a 17,000 sq ft building, housing eight large classrooms, seven bathrooms and a kitchen. It will have PV panels to supply all necessary electricity for the building and a geothermal well which will be the source of heat.
Tennessee
civitas, designed by archimania, is a case study home on the banks of the Mississippi River in Memphis, Tennessee, currently under construction. It aims to embrace cultural, climatic, and economic challenges, and will set a precedent for Southeastern high-performance design.
Texas
The University of North Texas (UNT) constructed a Zero Energy Research Laboratory on its 300-acre research campus, Discovery Park, in Denton, Texas. The project was funded at over $1,150,000 and will primarily benefit students in mechanical and energy engineering (UNT became the first university to offer degrees in mechanical and energy engineering in 2006). The 1,200-square-foot structure is now completed; a ribbon-cutting ceremony for the Zero Energy Laboratory was held on April 20, 2012.
The West Irving Library in Irving, Texas, became the first net zero library in Texas in 2011, running entirely off solar energy. Since then it has produced a surplus. It has LEED gold certification.
Vermont
The Putney School's net zero Field House was opened on October 10, 2009. In use for over a year as of December 2010, the Field House consumed 48,374 kWh and produced a total of 51,371 kWh during its first 12 months of operation, thus performing slightly better than net-zero. Also in December, the building won an AIA Vermont Honor Award.
The Charlotte Vermont House designed by Pill-Maharam Architects is a verified net zero energy house completed in 2007. The project won the Northeast Sustainable Energy Association's Net Zero Energy award in 2009.
See also
References
Further reading
Nisson, J. D. Ned; and Gautam Dutt, "The Superinsulated Home Book", John Wiley & Sons, 1985, , .
Markvart, Thomas; Editor, "Solar Electricity" John Wiley & Sons; 2nd edition, 2000, .
Clarke, Joseph; "Energy Simulation in Building Design", Second Edition Butterworth-Heinemann; 2nd edition, 2001, .
National Renewable Energy Laboratory, 2000 ZEB meeting report
Noguchi, Masa, ed., "The Quest for Zero Carbon Housing Solutions", Open House International, Vol.33, No.3, 2008, Open House International
Voss, Karsten; Musall, Eike: "Net zero energy buildings – International projects of carbon neutrality in buildings", Munich, 2011, .
Low-energy building
Sustainable building
Sustainable architecture
Building biology
Energy economics
Environmental design
ru:Активный дом | Zero-energy building | [
"Engineering",
"Environmental_science"
] | 16,110 | [
"Environmental design",
"Sustainable building",
"Sustainable architecture",
"Building engineering",
"Energy economics",
"Construction",
"Environmental social science",
"Design",
"Building biology",
"Architecture"
] |
3,077,731 | https://en.wikipedia.org/wiki/Interdimensional | Interdimensional may refer to:
Interdimensional hypothesis
Interdimensional doorway
Interdimensional travel
Dimension | Interdimensional | [
"Physics"
] | 26 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
3,077,732 | https://en.wikipedia.org/wiki/Fenchel%27s%20theorem | In differential geometry, Fenchel's theorem is an inequality on the total absolute curvature of a closed smooth space curve, stating that it is always at least . Equivalently, the average curvature is at least , where is the length of the curve. The only curves of this type whose total absolute curvature equals and whose average curvature equals are the plane convex curves. The theorem is named after Werner Fenchel, who published it in 1929.
The Fenchel theorem is enhanced by the Fáry–Milnor theorem, which says that if a closed smooth simple space curve is nontrivially knotted, then the total absolute curvature is greater than 4π.
Proof
Given a closed smooth curve γ: [0, L] → R³ with unit speed, the velocity T = γ′ is also a closed smooth curve on the unit sphere (called the tangent indicatrix). The total absolute curvature of γ is the length ℓ of T.
The curve T does not lie in an open hemisphere. If it did, there would be a unit vector v such that γ′(t) · v > 0 for all t, so (γ(L) − γ(0)) · v = ∫₀ᴸ γ′(t) · v dt > 0, a contradiction, since γ is closed. This also shows that if T lies in a closed hemisphere with pole v, then γ′ · v ≡ 0, so γ is a plane curve.
Consider a point T(s₀) such that the two arcs into which it and T(0) divide T have the same length. By rotating the sphere, we may assume T(0) and T(s₀) are symmetric about the axis through the poles. By the previous paragraph, at least one of the two arcs intersects the equator at some point P. We denote this arc by Γ. Then Γ has length ℓ/2.
We reflect Γ across the plane through T(0), T(s₀), and the north pole, forming a closed curve containing the antipodal points P and −P, with length 2 · Length(Γ). A curve connecting P and −P has length at least π, which is the length of the great semicircle between P and −P. So Length(Γ) ≥ π, and if equality holds then Γ does not cross the equator.
Therefore, ℓ = 2 · Length(Γ) ≥ 2π, and if equality holds then T lies in a closed hemisphere, and thus γ is a plane curve.
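As a numerical sanity check of these bounds, the following sketch (a hypothetical illustration assuming NumPy, not part of the original article) approximates the total absolute curvature of a sampled closed curve by the turning angles of an inscribed polygon.

```python
# Approximate total absolute curvature of a closed space curve by the
# sum of the turning angles of an inscribed closed polygon.
import numpy as np

def total_curvature(points):
    """Total turning angle of the closed polygon through the given (n, 3) points."""
    edges = np.roll(points, -1, axis=0) - points              # consecutive edge vectors
    tangents = edges / np.linalg.norm(edges, axis=1)[:, None]
    cos_angles = np.einsum("ij,ij->i", tangents, np.roll(tangents, -1, axis=0))
    return float(np.sum(np.arccos(np.clip(cos_angles, -1.0, 1.0))))

t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)

# A plane circle: total absolute curvature should be 2*pi, the Fenchel minimum.
circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])

# A trefoil knot: by the Fary-Milnor theorem its total curvature must exceed 4*pi.
trefoil = np.column_stack([
    np.sin(t) + 2.0 * np.sin(2.0 * t),
    np.cos(t) - 2.0 * np.cos(2.0 * t),
    -np.sin(3.0 * t),
])

print(total_curvature(circle) / np.pi)   # ~2.0
print(total_curvature(trefoil) / np.pi)  # > 4.0
```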
References
; see especially equation 13, page 49
Theorems in differential geometry
Theorems in plane geometry
Theorems about curves
Curvature (mathematics) | Fenchel's theorem | [
"Physics",
"Mathematics"
] | 373 | [
"Geometric measurement",
"Theorems in differential geometry",
"Physical quantities",
"Theorems in plane geometry",
"Theorems about curves",
"Theorems in geometry",
"Curvature (mathematics)"
] |
3,077,796 | https://en.wikipedia.org/wiki/Fluorodeoxyglucose%20%2818F%29 | {{DISPLAYTITLE:Fluorodeoxyglucose (18F)}}
[18F]Fluorodeoxyglucose (INN), or fluorodeoxyglucose F 18 (USAN and USP), also commonly called fluorodeoxyglucose and abbreviated [18F]FDG, 2-[18F]FDG or FDG, is a radiopharmaceutical, specifically a radiotracer, used in the medical imaging modality positron emission tomography (PET). Chemically, it is 2-deoxy-2-[18F]fluoro-D-glucose, a glucose analog, with the positron-emitting radionuclide fluorine-18 substituted for the normal hydroxyl group at the C-2 position in the glucose molecule.
The uptake of [18F]FDG by tissues is a marker for the tissue uptake of glucose, which in turn is closely correlated with certain types of tissue metabolism. After [18F]FDG is injected into a patient, a PET scanner can form two-dimensional or three-dimensional images of the distribution of [18F]FDG within the body.
Since its development in 1976, [18F]FDG had a profound influence on research in the neurosciences. The subsequent discovery in 1980 that [18F]FDG accumulates in tumors underpins the evolution of PET as a major clinical tool in cancer diagnosis. [18F]FDG is now the standard radiotracer used for PET neuroimaging and cancer patient management.
The images can be assessed by a nuclear medicine physician or radiologist to provide diagnoses of various medical conditions.
History
In 1968, Dr. Josef Pacák, Zdeněk Točík and Miloslav Černý at the Department of Organic Chemistry, Charles University, Czechoslovakia were the first to describe the synthesis of FDG. Later, in the 1970s, Tatsuo Ido and Al Wolf at the Brookhaven National Laboratory were the first to describe the synthesis of FDG labeled with fluorine-18. The compound was first administered to two normal human volunteers by Abass Alavi in August, 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of [18F]FDG in that organ (see history reference below).
Beginning in August 1990, and continuing throughout 1991, a shortage of oxygen-18, a raw material for FDG, made it necessary to ration isotope supplies. Israel's oxygen-18 facility had shut down due to the Gulf War, and the U.S. government had shut down its isotopes of carbon, oxygen and nitrogen facility at Los Alamos National Laboratory, leaving Isotec as the main supplier.
Synthesis
[18F]FDG was first synthesized via electrophilic fluorination with [18F]F2. Subsequently, a "nucleophilic synthesis" was devised with the same radioisotope.
As with all radioactive 18F-labeled radioligands, the fluorine-18 must be made initially as the fluoride anion in a cyclotron. Synthesis of complete [18F]FDG radioactive tracer begins with synthesis of the unattached fluoride radiotracer, since cyclotron bombardment destroys organic molecules of the type usually used for ligands, and in particular, would destroy glucose.
Cyclotron production of fluorine-18 may be accomplished by bombardment of neon-20 with deuterons, but usually is done by proton bombardment of 18O-enriched water, causing a (p,n) reaction (sometimes called a "knockout reaction", a common type of nuclear reaction with high probability, in which an incoming proton "knocks out" a neutron) in the 18O. This produces "carrier-free" dissolved [18F]fluoride ([18F]F−) ions in the water. The 109.8-minute half-life of fluorine-18 makes rapid and automated chemistry necessary after this point.
Anhydrous fluoride salts, which are easier to handle than fluorine gas, can be produced in a cyclotron. To achieve this chemistry, the [18F]F− is separated from the aqueous solvent by trapping it on an ion-exchange column, and eluted with an acetonitrile solution of 2,2,2-cryptand and potassium carbonate. Evaporation of the eluate gives [(crypt-222)K]+[18F]F− (2).
The fluoride anion is nucleophilic, so anhydrous conditions are required to avoid competing reactions involving hydroxide, which is also a good nucleophile. The use of the cryptand to sequester the potassium ions avoids ion-pairing between free potassium and fluoride ions, rendering the fluoride anion more reactive.
Intermediate 2 is treated with the protected mannose triflate (1); the fluoride anion displaces the triflate leaving group in an SN2 reaction, giving the protected fluorinated deoxyglucose (3). Base hydrolysis removes the acetyl protecting groups, giving the desired product (4) after removing the cryptand via ion-exchange:
Mechanism of action, metabolic end-products, and metabolic rate
[18F]FDG, as a glucose analog, is taken up by high-glucose-using cells such as brain, brown adipocytes, kidney, and cancer cells, where phosphorylation prevents the glucose from being released again from the cell, once it has been absorbed. The 2-hydroxyl group (–OH) in normal glucose is needed for further glycolysis (metabolism of glucose by splitting it), but [18F]FDG is missing this 2-hydroxyl. Thus, in common with its sister molecule 2-deoxy-D-glucose, FDG cannot be further metabolized in cells. The [18F]FDG-6-phosphate formed when [18F]FDG enters the cell cannot exit the cell before radioactive decay. As a result, the distribution of [18F]FDG is a good reflection of the distribution of glucose uptake and phosphorylation by cells in the body.
The fluorine in [18F]FDG decays radioactively via beta-decay to 18O−. After picking up a proton H+ from a hydronium ion in its aqueous environment, the molecule becomes glucose-6-phosphate labeled with harmless nonradioactive "heavy oxygen" in the hydroxyl at the C-2 position. The new presence of a 2-hydroxyl now allows it to be metabolized normally in the same way as ordinary glucose, producing non-radioactive end-products.
Although in theory all [18F]FDG is metabolized as above with a radioactivity elimination half-life of 110 minutes (the same as that of fluorine-18), clinical studies have shown that the radioactivity of [18F]FDG partitions into two major fractions. About 75% of the fluorine-18 activity remains in tissues and is eliminated with a half-life of 110 minutes, presumably by decaying in place to O-18 to form [18O]O-glucose-6-phosphate, which is non-radioactive (this molecule can soon be metabolized to carbon dioxide and water, after nuclear transmutation of the fluorine to oxygen ceases to prevent metabolism). Another fraction of [18F]FDG, representing about 20% of the total fluorine-18 activity of an injection, is excreted renally by two hours after a dose of [18F]FDG, with a rapid half-life of about 16 minutes (this portion makes the renal-collecting system and bladder prominent in a normal PET scan). This short biological half-life indicates that this 20% portion of the total fluorine-18 tracer activity is eliminated renally much more quickly than the isotope itself can decay. Unlike normal glucose, FDG is not fully reabsorbed by the kidney. Because of this rapidly excreted urine 18F, the urine of a patient undergoing a PET scan may therefore be especially radioactive for several hours after administration of the isotope.
All radioactivity of [18F]FDG, both the 20% which is rapidly excreted in the first several hours of urine which is made after the exam, and the 80% which remains in the patient, decays with a half-life of 110 minutes (just under two hours). Thus, within 24 hours (13 half-lives after the injection), the radioactivity in the patient and in any initially voided urine which may have contaminated bedding or objects after the PET exam will have decayed to 2−13 = 1/8192 of the initial radioactivity of the dose. In practice, patients who have been injected with [18F]FDG are told to avoid the close vicinity of especially radiation-sensitive persons, such as infants, children and pregnant women, for at least 12 hours (about 7 half-lives, corresponding to decay to roughly 1/128 of the initial radioactive dose).
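The decay arithmetic above can be checked with a short sketch; the function below assumes pure physical decay of fluorine-18 with no biological elimination and is illustrative only.

```python
# Fraction of injected F-18 activity remaining after a given time,
# assuming pure physical decay (half-life ~109.8 minutes).
HALF_LIFE_MIN = 109.8  # fluorine-18 physical half-life in minutes

def remaining_fraction(minutes_elapsed: float) -> float:
    """Fraction of the initial F-18 activity left after the given time."""
    return 0.5 ** (minutes_elapsed / HALF_LIFE_MIN)

for hours in (2, 12, 24):
    f = remaining_fraction(hours * 60)
    print(f"{hours:>2} h: {f:.6f} of initial activity (~1/{round(1 / f)})")
# 24 h is ~13 half-lives, i.e. roughly 1/8000 of the injected activity.
```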
Production
Alliance Medical and Siemens Healthcare are the only producers in the United Kingdom. A dose of FDG in England costs about £130. In Northern Ireland, where there is a single supplier, doses cost up to £450. IBA Molecular North America and Zevacor Molecular, both of which are owned by Illinois Health and Science (IBAM having been purchased as of 1 August 2015), Siemens' PETNET Solutions (a subsidiary of Siemens Healthcare), and Cardinal Health are producers in the U.S.
Distribution
The labeled [18F]FDG compound has a relatively short shelf life which is dominated by the physical decay of fluorine-18 with a half-life of 109.8 minutes, or slightly less than two hours. Still, this half life is sufficiently long to allow shipping the compound to remote PET scanning facilities, in contrast to other medical radioisotopes like carbon-11. Due to transport regulations for radioactive compounds, delivery is normally done by specially licensed road transport, but means of transport may also include dedicated small commercial jet services. Transport by air allows expanding the distribution area around a [18F]FDG production site to deliver the compound to PET scanning centres even hundreds of miles away.
Recently, on-site cyclotrons with integral shielding and portable chemistry stations for making [18F]FDG have accompanied PET scanners to remote hospitals. This technology holds some promise in the future, for replacing some of the scramble to transport [18F]FDG from site of manufacture to site of use.
Applications
In PET imaging, [18F]FDG is primarily used for imaging tumors in oncology, where a static [18F]FDG PET scan is performed and the tumor [18F]FDG uptake is analyzed in terms of Standardized Uptake Value (SUV). FDG PET/CT can be used for the assessment of glucose metabolism in the heart and the brain. [18F]FDG is taken up by cells, and subsequently phosphorylated by hexokinase (whose mitochondrial form is greatly elevated in rapidly growing malignant tumours). Phosphorylated [18F]FDG cannot be further metabolised and is thus retained by tissues with high metabolic activity, such as most types of malignant tumours. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin's disease, non-Hodgkin lymphoma, colorectal cancer, breast cancer, melanoma, and lung cancer. It has also been approved for use in diagnosing Alzheimer's disease.
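For readers unfamiliar with the SUV mentioned above, the following sketch shows the standard body-weight-normalized definition; all numerical values are invented for illustration and are not taken from the original text.

```python
# Standardized uptake value (SUV), body-weight normalization:
# SUV = tissue activity concentration / (injected activity / body mass),
# with the usual assumption that 1 mL of tissue weighs about 1 g.
def suv_bw(tissue_kbq_per_ml: float, injected_mbq: float, weight_kg: float) -> float:
    """Dimensionless SUV from tissue concentration, injected dose, and weight."""
    injected_kbq = injected_mbq * 1000.0
    weight_g = weight_kg * 1000.0
    return tissue_kbq_per_ml / (injected_kbq / weight_g)

# E.g., 5 kBq/mL in a region after a 370 MBq (10 mCi) dose in a 70 kg patient:
print(suv_bw(5.0, 370.0, 70.0))  # ~0.95; avid lesions typically show higher SUVs
```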
In body-scanning applications in searching for tumor or metastatic disease, a dose of [18F]-FDG in solution (typically 5 to 10 millicuries or 200 to 400 MBq) is typically injected rapidly into a saline drip running into a vein, in a patient who has been fasting for at least six hours, and who has a suitably low blood sugar. (This is a problem for some diabetics; usually PET scanning centers will not administer the isotope to patients with blood glucose levels over about 180 mg/dL = 10 mmol/L, and such patients must be rescheduled). The patient must then wait about an hour for the sugar to distribute and be taken up into organs which use glucose – a time during which physical activity must be kept to a minimum, in order to minimize uptake of the radioactive sugar into muscles (this causes unwanted artifacts in the scan, interfering with reading especially when the organs of interest are inside the body vs. inside the skull). Then, the patient is placed in the PET scanner for a series of one or more scans which may take from 20 minutes to as long as an hour (often, only about one-quarter of the body length may be imaged at a time).
References
Aldoses
Deoxy sugars
Medicinal radiochemistry
Neuroimaging
Organofluorides
PET radiotracers
Pyranoses
Radiopharmaceuticals
de:Fluordesoxyglucose | Fluorodeoxyglucose (18F) | [
"Chemistry"
] | 2,839 | [
"Carbohydrates",
"Medicinal radiochemistry",
"Deoxy sugars",
"PET radiotracers",
"Radiopharmaceuticals",
"Medicinal chemistry",
"Chemicals in medicine"
] |
3,079,231 | https://en.wikipedia.org/wiki/Carothers%20equation | In step-growth polymerization, the Carothers equation (or Carothers' equation) gives the degree of polymerization, , for a given fractional monomer conversion, .
There are several versions of this equation, proposed by Wallace Carothers, who invented nylon in 1935.
Linear polymers: two monomers in equimolar quantities
The simplest case refers to the formation of a strictly linear polymer by the reaction (usually by condensation) of two monomers in equimolar quantities. An example is the synthesis of nylon-6,6, whose repeat unit has the formula [−NH(CH2)6NH−CO(CH2)4CO−]n,
from one mole of hexamethylenediamine, H2N(CH2)6NH2, and one mole of adipic acid, HOOC(CH2)4COOH. For this case

Xn = 1 / (1 − p)

In this equation
Xn is the number-average value of the degree of polymerization, equal to the average number of monomer units in a polymer molecule. For the example of nylon-6,6, Xn = 2n (n diamine units and n diacid units).
p is the extent of reaction (or conversion to polymer), defined by p = (N0 − N)/N0, where
N0 is the number of molecules present initially as monomer
N is the number of molecules present after time t. The total includes all degrees of polymerization: monomers, oligomers and polymers.
This equation shows that a high monomer conversion is required to achieve a high degree of polymerization. For example, a monomer conversion p of 98% is required for Xn = 50, and p = 99% is required for Xn = 100.
Linear polymers: one monomer in excess
If one monomer is present in stoichiometric excess, then the equation becomes

Xn = (1 + r) / (1 + r − 2rp)

where r is the stoichiometric ratio of reactants; the excess reactant is conventionally the denominator, so that r < 1. If neither monomer is in excess, then r = 1 and the equation reduces to the equimolar case above.
The effect of the excess reactant is to reduce the degree of polymerization for a given value of p. In the limit of complete conversion of the limiting reagent monomer, p → 1 and

Xn → (1 + r) / (1 − r)

Thus for a 1% excess of one monomer, r = 0.99 and the limiting degree of polymerization is 199, compared to infinity for the equimolar case. An excess of one reactant can be used to control the degree of polymerization.
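A minimal sketch of the two linear-case formulas above, reproducing the quoted numbers (illustrative code, not from the original article):

```python
# Carothers relations for linear step-growth polymerization;
# p is the fractional conversion, r the stoichiometric ratio (r <= 1).
def dp_equimolar(p: float) -> float:
    """Number-average degree of polymerization for equimolar monomers (r = 1)."""
    return 1.0 / (1.0 - p)

def dp_with_excess(p: float, r: float) -> float:
    """Degree of polymerization when one monomer is in stoichiometric excess."""
    return (1.0 + r) / (1.0 + r - 2.0 * r * p)

print(dp_equimolar(0.98))          # 50 monomer units
print(dp_equimolar(0.99))          # 100
print(dp_with_excess(1.0, 0.99))   # 199, the limit for a 1% excess
```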
Branched polymers: multifunctional monomers
The functionality of a monomer molecule is the number of functional groups which participate in the polymerization. Monomers with functionality greater than two will introduce branching into a polymer, and the degree of polymerization will depend on the average functionality fav per monomer unit. For a system containing N0 molecules initially and equivalent numbers of two functional groups A and B, the total number of functional groups is N0fav.
The modified Carothers equation is

Xn = 2 / (2 − p·fav)

where p equals

p = 2(N0 − N) / (N0·fav)
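A corresponding sketch for the multifunctional case; note that Xn diverges as p approaches 2/fav, the Carothers estimate for the onset of gelation (the functionality value below is an invented example):

```python
# Modified Carothers equation for monomers of average functionality f_av.
def dp_branched(p: float, f_av: float) -> float:
    """Number-average degree of polymerization for average functionality f_av."""
    return 2.0 / (2.0 - p * f_av)

f_av = 2.1  # e.g., a mostly bifunctional mix containing some trifunctional monomer
p_critical = 2.0 / f_av
print(p_critical)               # ~0.952: conversion at which Xn diverges (gelation)
print(dp_branched(0.90, f_av))  # ~18: finite Xn below the critical conversion
```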
Related equations
Related to the Carothers equation are the following equations (for the simplest case of linear polymers formed from two monomers in equimolar quantities):

Xw = (1 + p) / (1 − p)

Mn = Mo / (1 − p)

Mw = Mo(1 + p) / (1 − p)

Đ = Mw / Mn = 1 + p

where:
Xw is the weight average degree of polymerization,
Mn is the number average molecular weight,
Mw is the weight average molecular weight,
Mo is the molecular weight of the repeating monomer unit,
Đ is the dispersity index (formerly known as polydispersity index, symbol PDI).
In practice the average length of the polymer chain is limited by such things as the purity of the reactants, the absence of any side reactions (i.e. high yield), and the viscosity of the medium.
References
Polymer chemistry
Equations | Carothers equation | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 783 | [
"Equations",
"Mathematical objects",
"Materials science",
"Polymer chemistry"
] |
3,080,510 | https://en.wikipedia.org/wiki/High-level%20design | High-level design (HLD) explains the architecture that would be used to develop a system. The architecture diagram provides an overview of an entire system, identifying the main components that would be developed for the product and their interfaces.
The HLD can use non-technical to mildly technical terms which should be understandable to the administrators of the system. In contrast, low-level design further exposes the logical detailed design of each of these elements for use by engineers and programmers. HLD documentation should cover the planned implementation of both software and hardware.
Purpose
Preliminary design: In the preliminary stages of system development, the need is to size the project and to identify those parts which might be risky or time-consuming.
Design overview: As the project proceeds, the need is to provide an overview of how the various sub-systems and components of the system fit together.
In both cases, the high-level design should be a complete view of the entire system, breaking it down into smaller parts that are more easily understood. To minimize the maintenance overhead as construction proceeds and the lower-level design is done, it is best that the high-level design is elaborated only to the degree needed to satisfy these needs.
High-level design document
A high-level design document or HLDD adds the necessary details to the current project description to represent a suitable model for building. This document includes a high-level architecture diagram depicting the structure of the system, such as the hardware, database architecture, application architecture (layers), application flow (navigation), security architecture, and technology architecture.
Design overview
A high-level design provides an overview of a system, product, service, or process.
Such an overview helps ensure that supporting components are compatible with one another.
The highest-level design should briefly describe all platforms, systems, products, services, and processes that it depends on, and include any important changes that need to be made to them.
In addition, there should be brief consideration of all significant commercial, legal, environmental, security, safety, and technical risks, along with any issues and assumptions.
The idea is to mention every work area briefly, clearly delegating the ownership of more detailed design activity whilst also encouraging effective collaboration between the various project teams.
Today, most high-level designs require contributions from a number of experts, representing many distinct professional disciplines.
Finally, every type of end-user should be identified in the high-level design and each contributing design should give due consideration to customer experience.
See also
Software development process
Systems development life cycle
References
External links
High Level Design Document sample format
Design
Software development
Software design | High-level design | [
"Technology",
"Engineering"
] | 524 | [
"Computer occupations",
"Software engineering",
"Design",
"Software design",
"Software development"
] |
3,081,780 | https://en.wikipedia.org/wiki/Peroxisomal%20targeting%20signal | In biochemical protein targeting, a peroxisomal targeting signal (PTS) is a region of the peroxisomal protein that receptors recognize and bind to. It is responsible for specifying that proteins containing this motif are localised to the peroxisome.
Overview
All peroxisomal proteins are synthesized in the cytoplasm and must be directed to the peroxisome. The first step in this process is the binding of the protein to a receptor. The receptor then directs the complex to the peroxisome. Receptors recognize and bind to a region of the peroxisomal protein called a peroxisomal targeting signal, or PTS.
Peroxisomes consist of a matrix surrounded by a specific membrane. Most peroxisomal matrix proteins contain a short sequence, usually three amino acids at the extreme carboxy tail of the protein, that serves as the PTS. The prototypic sequence (many variations exist) is serine-lysine-leucine (-SKL in the one-letter amino acid code). This motif, and its variations, is known as the PTS1, and the receptor is termed the PTS1 receptor.
It was found that the PTS1 receptor is encoded by the PEX5 gene. PEX5 imports folded proteins into the peroxisome, shuttling between the peroxisome and cytosol. PEX5 interacts with a large number of other proteins, including Pex8p, 10p, 12p, 13p, 14p.
A few peroxisomal matrix proteins have a different, and less conserved sequence, at their amino termini. This PTS2 signal is recognized by the PTS2 receptor, encoded by the PEX7 gene.
"PEX" refers to a group of genes that were identified as being important for peroxisomal synthesis. The numerical attributions, such as PEX5, generally refer to the order in which they were first discovered.
A distinct motif is used for proteins destined for the peroxisomal membrane called the "mPTS" motif, which is more poorly defined and may consist of discontinuous subdomains. One of these usually is a cluster of basic amino acids (arginines and lysines) within a loop of protein (i.e., between membrane spans) that will face the matrix. The mPTS receptor is the product of PEX19.
References
External links
Protein targeting
Signal transduction
Short linear motifs | Peroxisomal targeting signal | [
"Chemistry",
"Biology"
] | 511 | [
"Biotechnology stubs",
"Protein targeting",
"Signal transduction",
"Biochemistry stubs",
"Cellular processes",
"Biochemistry",
"Neurochemistry"
] |
3,082,158 | https://en.wikipedia.org/wiki/Multileaf%20collimator | A multileaf collimator (MLC) is a Collimator or beam-limiting device that is made of individual "leaves" of a high atomic numbered material, usually tungsten, that can move independently in and out of the path of a radiotherapy beam in order to shape it and vary its intensity.
MLCs are used in external beam radiotherapy to provide conformal shaping of beams. Specifically, conformal radiotherapy and Intensity Modulated Radiation Therapy (IMRT) can be delivered using MLCs.
The MLC has improved rapidly since its inception, from the first use of leaves to shape beams in 1965 to modern-day operation and use. MLCs are now widely used and have become an integral part of any radiotherapy department. MLCs were primarily used for conformal radiotherapy, where they allowed the cost-effective implementation of conformal treatment with significant time savings, and they have since been adapted for IMRT treatments. For conformal radiotherapy the MLC allows conformal shaping of the beam to match the borders of the target tumour. For intensity-modulated treatments the leaves of an MLC can be moved across the field to create IMRT distributions (strictly, MLCs provide fluence modulation rather than intensity modulation).
The MLC is an important tool for radiation therapy dose delivery. It was originally used as a surrogate for alloy-block field shaping and is now widely used for IMRT. As with any tool used in radiotherapy, the MLC must undergo commissioning and quality assurance, and additional commissioning measurements are completed to model an MLC for treatment planning. Various MLCs are provided by different vendors, each with unique design features determined by its design specifications, and these differences are quite significant.
References
Medical equipment
Medical physics
Particle accelerators
Radiation therapy | Multileaf collimator | [
"Physics",
"Biology"
] | 359 | [
"Medical equipment",
"Applied and interdisciplinary physics",
"Medical physics",
"Medical technology"
] |
3,082,309 | https://en.wikipedia.org/wiki/Earthquake%20Baroque | Earthquake Baroque, or Seismic Baroque, is a style of Baroque architecture found in the former Spanish East Indies (now the Philippines and some nearby Pacific islands) and in Guatemala, which were Spanish-ruled territories that suffered destructive earthquakes during the 17th and the 18th centuries. Large public buildings, such as churches, were then rebuilt in a Baroque style during the Spanish colonial periods in those countries.
Similar events led to the Pombaline architecture in Lisbon following the 1755 Lisbon earthquake and Sicilian Baroque in Sicily following the 1693 earthquake.
Characteristics
In the Spanish East Indies, the destruction of earlier churches by frequent earthquakes made church proportions lower and wider; side walls were made thicker and heavily buttressed for stability during shaking. The upper structures were made with lighter materials; where lighter materials were not used, progressively thinner walls were introduced instead, decreasing in thickness toward the topmost levels.
Bell towers are usually lower and stouter compared to towers in less seismically active regions of the world. Towers are thicker in the lower levels, progressively narrowing to the topmost level. In some churches of the Philippines, aside from functioning as watchtowers against pirates, some bell towers are detached from the main church building to avoid damage in case of a falling bell tower due to an earthquake.
See also
Church architecture
Spanish Colonial architecture
Churrigueresque
Plateresque
References
External links
"Earthquake Baroque: Paoay Church in the Ilocos" from the Heritage Conservation Society
San Pedro de las Huertas, an Earthquake Baroque church in Guatemala
Earthquake baroque churches of the Philippines
Baroque architectural styles
Baroque architecture in the Philippines
Earthquake engineering | Earthquake Baroque | [
"Engineering"
] | 316 | [
"Earthquake engineering",
"Civil engineering",
"Structural engineering"
] |
2,233,425 | https://en.wikipedia.org/wiki/Aldrin | Aldrin is an organochlorine insecticide that was widely used until the 1990s, when it was banned in most countries. Aldrin is a member of the so-called "classic organochlorines" (COC) group of pesticides. COCs enjoyed a very sharp rise in popularity during and after World War II. Other noteworthy examples of COCs include dieldrin and DDT. After research showed that organochlorines can be highly toxic to the ecosystem through bioaccumulation, most were banned from use. Before the ban, it was heavily used as a pesticide to treat seed and soil. Aldrin and related "cyclodiene" pesticides (a term for pesticides derived from Hexachlorocyclopentadiene) became notorious as persistent organic pollutants.
Structure and Reactivity
Pure aldrin takes the form of a white crystalline powder. Though it is barely soluble in water (0.003% solubility), aldrin dissolves very well in organic solvents, such as ketones and paraffins. Aldrin decays very slowly once released into the environment. Though it is rapidly converted to dieldrin by plants and bacteria, dieldrin shows the same toxic effects and slow decay as aldrin. Aldrin is easily transported through the air by dust particles. Aldrin does not react with mild acids or bases and is stable in an environment with a pH between 4 and 8. It is highly flammable when exposed to temperatures above 200 °C. In the presence of oxidizing agents, aldrin reacts with concentrated acids and phenols.
Synthesis
Aldrin is not formed in nature. It is synthesized by combining hexachlorocyclopentadiene with norbornadiene in a Diels-Alder reaction to give the adduct, and it is named after the German chemist Kurt Alder, one of the co-inventors of this kind of reaction. In 1967, the composition of technical-grade aldrin was reported to consist of 90.5% hexachlorohexahydrodimethanonaphthalene (HHDN).
Similarly, an isomer of aldrin, known as isodrin, is produced by reaction of hexachloronobornadiene with cyclopentadiene. Isodrin is also produced as a byproduct of aldrin synthesis, with technical-grade aldrin containing about 3.5% isodrin.
An estimated 270 million kilograms of aldrin and related cyclodiene pesticides were produced between 1946 and 1976. The estimated production volume of aldrin in the US peaked in the mid-1960s at about 18 million pounds a year and then declined.
Available forms
There are multiple available forms of aldrin. One of these is the isomer isodrin, which cannot be found in nature, but needs to be synthesized like aldrin. When aldrin enters the human body or the environment it is rapidly converted to dieldrin. Degradation by ultraviolet radiation or microbes can convert dieldrin to photodieldrin and aldrin to photoaldrin.
Mechanism of action
Even though many toxic effects of aldrin have been discovered, the exact mechanisms underlying the toxicity are yet to be determined. The only toxic aldrin induced process that is largely understood is that of neurotoxicity.
Neurotoxicity
One of the effects that intoxication with aldrin gives rise to is neurotoxicity. Studies have shown that aldrin stimulates the central nervous system (CNS), which may cause hyperexcitation and seizures. This phenomenon exerts its effect through two different mechanisms.
One of the mechanisms uses the ability of aldrin to inhibit brain calcium ATPases. These ion pumps clear calcium from the nerve terminal by actively pumping it out. When aldrin inhibits these pumps, intracellular calcium levels rise, which results in enhanced neurotransmitter release.
The second mechanism makes use of aldrin's ability to block gamma-aminobutyric acid (GABA) activity. GABA is a major inhibitory neurotransmitter in the central nervous system. Aldrin induces neurotoxic effects by blocking the GABAA receptor-chloride channel complex. By blocking this receptor, chloride is unable to move into the synapse, which prevents hyperpolarization of neuronal synapses. Therefore, the synapses are more likely to generate action potentials.
Metabolism
The metabolism of oral aldrin exposure has not been studied in humans. However, animal studies are able to provide an extensive overview of the metabolism of aldrin. This data can be related to humans.
Biotransformation of aldrin starts with epoxidation of aldrin by mixed-function oxidases (CYP-450), which forms dieldrin. This conversion happens mainly in the liver. Tissues with low CYP-450 expression use the prostaglandin endoperoxide synthase (PES) instead. This oxidative pathway bisdioxygenises the arachidonic acid to prostaglandin G2 (PGG2). Subsequently, PGG2 is reduced to prostaglandin H2 (PGH2) by hydroperoxidase.
Dieldrin can then be directly oxidized by cytochrome oxidases, which forms 9-hydroxydieldrin. An alternative for oxidation involves the opening of the epoxide ring by epoxide hydrases, which forms the product 6,7-trans-dihydroxydihydroaldrin. Both products can be conjugated to form 6,7-trans-dihydroxydihydroaldrin glucuronide and 9-hydroxydieldrin glucuronide, respectively. 6,7-trans-dihydroxydihydroaldrin can also be oxidized to form aldrin dicarboxylic acid.
Efficacy and side effects
Considering the toxicokinetics of aldrin in the environment, the efficacy of the compound has been determined. In addition, the adverse effects after exposure to aldrin are demonstrated, indicating the risk regarding the compound.
Efficacy
The ability of aldrin to control termites was examined in order to determine the maximum response when applied. In 1953 US researchers tested aldrin and dieldrin on terrains with rats known to carry chiggers. The dieldrin treatment resulted in 75 times fewer chiggers on rats in treated terrains, and the aldrin treatment in 25 times fewer. The aldrin treatment thus indicated high efficacy, especially in comparison to other insecticides that were used at the time, such as DDT, sulfur or lindane.
Adverse effects
Exposure of aldrin to the environment leads to the localization of the chemical compound in the air, soil, and water. Aldrin is changed quickly to dieldrin, and dieldrin degrades slowly, which accounts for persistent concentrations in the environment around the primary exposure site and in plants. These concentrations can also be found in animals which eat contaminated plants, or in animals that reside in the contaminated water. This biomagnification can lead to high concentrations in their fat.
There are some reported cases of workers who developed anemia after multiple dieldrin exposures. However the main adverse effect of aldrin and dieldrin is in relationship to the central nervous system. The accumulated levels of dieldrin in the body were believed to lead to convulsions. Besides that other symptoms were also reported like headaches, nausea and vomiting, anorexia, muscle twitching and myoclonic jerking and EEG distortions. In all these cases removal of the source of exposure to aldrin/dieldrin led to a rapid recovery.
Toxicity
The toxicity of aldrin and dieldrin is determined by the results of several animal studies. Reports of a significant increase in worker deaths in relation to aldrin have not been found, although death by anemia is reported in some cases after multiple exposures to aldrin. Immunological tests linked an antigenic response to erythrocytes coated with dieldrin in those cases. Direct dose-response relations as a cause of death are yet to be examined.
The following minimal risk levels were derived from the NOAELs found in rat studies:
The minimal risk level at acute oral exposure to aldrin is 0.002 mg/kg/day.
The minimal risk level at intermediate exposure to dieldrin is 0.0001 mg/kg/day.
The minimal risk level at chronic exposure to aldrin is 0.00003 mg/kg/day.
The minimal risk level at chronic exposure to dieldrin is 0.00005 mg/kg/day.
In addition to these studies, breast cancer risk studies were performed demonstrating a significant increased breast cancer risk. After comparing blood concentrations to number of lymph nodes and tumor size a 5-fold higher risk of death was determined, comparing the highest quartile range in the research to the lower quartile range.
Effects on animals
Most of the animal studies done with aldrin and dieldrin used rats. High doses of aldrin and dieldrin demonstrated neurotoxicity, but multiple studies also showed a unique sensitivity of the mouse liver to dieldrin-induced hepatocarcinogenicity. Furthermore, aldrin-treated rats demonstrated increased post-natal mortality, with adult rats showing an increased susceptibility to the compounds compared to juveniles.
Environmental impact and regulation
Like related polychlorinated pesticides, aldrin is highly lipophilic. Its solubility in water is only 0.027 mg/L, which exacerbates its persistence in the environment. It was banned by the Stockholm Convention on Persistent Organic Pollutants. In the U.S., aldrin was cancelled in 1974. The substance is banned from use for plant protection by the EU.
Safety and environmental aspects
Aldrin has an LD50 of 39 to 60 mg/kg (oral, in rats). For fish, however, it is extremely toxic, with an LC50 of 0.006–0.01 for trout and bluegill.
In the US, aldrin is considered a potential occupational carcinogen by the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health; these agencies have set an occupational exposure limit for dermal exposures at 0.25 mg/m3 over an eight-hour time-weighted average.
Further, an IDLH limit has been set at 25 mg/m3, based on acute toxicity data in humans to which subjects reacted with convulsions within 20 minutes of exposure.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
Obsolete pesticides
IARC Group 2A carcinogens
Organochloride insecticides
Endocrine disruptors
Neurotoxins
Persistent organic pollutants under the Stockholm Convention
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution | Aldrin | [
"Chemistry"
] | 2,407 | [
"Persistent organic pollutants under the Stockholm Convention",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Endocrine disruptors",
"Neurochemistry",
"Neurotoxins"
] |
2,233,706 | https://en.wikipedia.org/wiki/MOPAC | MOPAC is a computational chemistry software package that implements a variety of semi-empirical quantum chemistry methods based on the neglect of diatomic differential overlap (NDDO) approximation and fit primarily for gas-phase thermochemistry. Modern versions of MOPAC support 83 elements of the periodic table (H-La, Lu-Bi as atoms, Ce-Yb as ionic sparkles) and have expanded functionality for solvated molecules, crystalline solids, and proteins.
MOPAC was originally developed in Michael Dewar's research group in the early 1980s and released as public domain software on the Quantum Chemistry Program Exchange in 1983. It became commercial software in 1993, developed and distributed by Fujitsu, and Stewart Computational Chemistry took over commercial development and distribution in 2007. In 2022, it was released as open-source software on GitHub.
Functionality
MOPAC is primarily a serial command-line program. Its default behavior is to take a molecular geometry specified by an input file and perform a local optimization of the geometry to minimize the heat of formation of the molecule. The details of this process are then summarized by an output file. The behavior of MOPAC can be modified by specifying keywords on the first line of the input file, and translation vectors can be added to the geometry to specify a polymer, surface, or crystal.
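As a hedged illustration of this workflow, the sketch below writes a minimal input file and runs it through a local `mopac` executable; the keywords, geometry, and file names are illustrative assumptions rather than material from the original text.

```python
# Illustrative sketch of driving MOPAC from a script; assumes a `mopac`
# executable on PATH. Geometry, keywords, and file names are made up.
import subprocess

# First line: keywords; next two lines: title/comment lines; then the
# geometry, where each coordinate is followed by a flag (1 = optimize).
water_input = """\
PM7 PRECISE
Water, geometry optimization
(comment line)
O   0.000000 1   0.000000 1   0.000000 1
H   0.957000 1   0.000000 1   0.000000 1
H  -0.240000 1   0.927000 1   0.000000 1
"""

with open("water.mop", "w") as f:
    f.write(water_input)

# MOPAC writes its summary to water.out alongside the input file.
subprocess.run(["mopac", "water.mop"], check=True)
print(open("water.out").read()[:2000])  # peek at the start of the output
```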
MOPAC is compatible with other software to provide graphical user interfaces (GUIs), visualization of outputs, and processing of inputs. The most well-known GUIs that support MOPAC are Chem3D, WebMO, the Amsterdam Modeling Suite, and the Molecular Operating Environment. Jmol can visualize some MOPAC outputs such as molecular orbitals and partial charges. Open Babel supports conversion to and from MOPAC's input file format.
Major features
Semiempirical models: AM1, PM3, PM6, PM7
Geometry optimization
Transition-state optimization
Vibrational analysis
COSMO solvation model
Periodic boundary conditions (Gamma point only, no Brillouin zone sampling)
MOZYME for closed-shell systems (linear-scaling electronic structure algorithm)
Gas-phase thermodynamics
Molecular polarizability
Automatic hydrogenation for pre-processing of Protein Data Bank structures
INDO spectroscopy
Configuration interaction
PARAM, a companion program for parameter optimization
History
MOPAC was originally developed in Michael Dewar's research group at the University of Texas at Austin to consolidate their previous developments of MINDO/3 and MNDO models and software and to serve as the software implementation of the AM1 model. The name MOPAC was both an acronym for Molecular Orbital PACkage and a reference to the Mopac Expressway that runs alongside parts of the UT Austin campus. The first version of MOPAC was deposited in the Quantum Chemistry Program Exchange (QCPE) in 1983 as QCPE Program #455 with James Stewart as its primary author. James Stewart joined the Dewar group in 1980 as a visiting professor on leave from the University of Strathclyde, and he continued the development of MOPAC after moving to the United States Air Force Academy in 1984. In 1993, MOPAC was acquired by Fujitsu and sold as commercial software, while James Stewart continued its development as a consultant. After 2007, new versions of MOPAC were developed and sold by Stewart Computational Chemistry with support from the Small Business Innovation Research program. Concurrent with its commercial development, there was an effort to continue development of the last pre-commercial version of MOPAC as an open-source software project. In 2022, the commercial development and distribution of MOPAC ended, and it was officially re-released as an open-source software project on GitHub developed by the Molecular Sciences Software Institute.
Early versions of MOPAC distributed by the QCPE were considered to be in the public domain and were forked into several other notable software projects. After James Stewart left, other members of the Dewar group continued to develop a fork of MOPAC called AMPAC that was originally released on the QCPE before also becoming commercial software. VAMP (Vectorized AMPAC) was a parallel version of AMPAC developed by Timothy Clark's group at the University of Erlangen–Nuremberg. Donald Truhlar's group at the University of Minnesota developed both a fork of AMPAC with implicit solvent models, AMSOL, and a fork of MOPAC itself. Also, commercial versions of MOPAC distributed by Fujitsu have some proprietary features (e.g. PM5, Tomasi solvation) not available in other versions.
MOPAC used different versioning systems throughout its development, sometimes with a version number or year stylized into the name. These alternate names include MOPAC3, MOPAC4, MOPAC5, MOPAC6, MOPAC7, MOPAC93, MOPAC97, MOPAC 2000, MOPAC 2007, MOPAC 2009, MOPAC 2012, and MOPAC 2016. Open-source versions of MOPAC now use semantic versioning.
See also
List of quantum chemistry and solid-state physics software
Semi-empirical quantum chemistry methods
AMPAC
References
External links
MOPAC download page on openmopac.net
Historical archive of MOPAC source code and manuals
MOPAC 2002 Manual
MOPAC 2009 Manual
Source code and compiled binaries at the Computational Chemistry List repository:
Source code (in FORTRAN):
MOPAC 6
MOPAC 7
Compiled binaries:
MOPAC 6 for MS-DOS/Windows;
MOPAC 6 for Windows 95/NT;
MOPAC 6 with GUI (Winmostar)
MOPAC 7 for MS-DOS/Windows
MOPAC 7 for Linux
MOPAC-5.022mn (MOPAC at the University of Minnesota)
Computational chemistry software | MOPAC | [
"Chemistry"
] | 1,151 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
2,234,210 | https://en.wikipedia.org/wiki/Electrospray | The name electrospray is used for an apparatus that employs electricity to disperse a liquid or for the fine aerosol resulting from this process. High voltage is applied to a liquid supplied through an emitter (usually a glass or metallic capillary). Ideally the liquid reaching the emitter tip forms a Taylor cone, which emits a liquid jet through its apex. Varicose waves on the surface of the jet lead to the formation of small and highly charged liquid droplets, which are radially dispersed due to Coulomb repulsion.
History
In the late 16th century William Gilbert set out to describe the behaviour of magnetic and electrostatic phenomena. He observed that, in the presence of a charged piece of amber, a drop of water deformed into a cone. This effect is clearly related to electrosprays, even though Gilbert did not record any observation related to liquid dispersion under the effect of the electric field.
In 1750 the French clergyman and physicist Jean-Antoine (Abbé) Nollet noted water flowing from a vessel would aerosolize if the vessel was electrified and placed near electrical ground.
In 1882, Lord Rayleigh theoretically estimated the maximum amount of charge a liquid droplet could carry; this is now known as the "Rayleigh limit". His prediction that a droplet reaching this limit would throw out fine jets of liquid was confirmed experimentally more than 100 years later.
In 1914, John Zeleny published work on the behaviour of fluid droplets at the end of glass capillaries. This report presents experimental evidence for several electrospray operating regimes (dripping, burst, pulsating, and cone-jet). A few years later, Zeleny captured the first time-lapse images of the dynamic liquid meniscus.
Between 1964 and 1969 Sir Geoffrey Ingram Taylor produced the theoretical underpinning of electrospraying. Taylor modeled the shape of the cone formed by the fluid droplet under the effect of an electric field; this characteristic droplet shape is now known as the Taylor cone. He further worked with J. R. Melcher to develop the "leaky dielectric model" for conducting fluids.
The number of publications about electrospray started rising significantly around 1990 (as shown in the figure on the right) when John Fenn (2002 Nobel Prize in Chemistry) and others discovered electrospray ionization for mass spectrometry.
Mechanism
To simplify the discussion, the following paragraphs will address the case of a positive electrospray with the high voltage applied to a metallic emitter. A classical electrospray setup is considered, with the emitter situated at a distance d from a grounded counter-electrode. The liquid being sprayed is characterized by its viscosity μ, surface tension γ, conductivity K, and relative permittivity ε.
Effect of small electric fields on liquid menisci
Under the effect of surface tension, the liquid meniscus assumes a semi-spherical shape at the tip of the emitter. Application of the positive voltage V will induce the electric field:

E = \frac{2V}{r \ln(4d/r)}

where r is the liquid radius of curvature. This field leads to liquid polarization: the negative/positive charge carriers migrate toward/away from the electrode where the voltage is applied. At voltages below a certain threshold, the liquid quickly reaches a new equilibrium geometry with a smaller radius of curvature.
The Taylor cone
Voltages above the threshold draw the liquid into a cone. Sir Geoffrey Ingram Taylor described the theoretical shape of this cone based on the assumptions that (1) the surface of the cone is equipotential and (2) the cone exists in a steady state equilibrium. To meet both of these criteria the electric field must have azimuthal symmetry and have R^{1/2} dependence to balance the surface tension and produce the cone. The solution to this problem is:

V = V_0 + A R^{1/2} P_{1/2}(\cos\theta)

where V = V_0 (equipotential surface) exists at a value of θ_0 (regardless of R), producing an equipotential cone. The angle necessary for V = V_0 for all R values is a zero of the Legendre polynomial of order 1/2, P_{1/2}(\cos\theta). There is only one zero between 0 and π, at 130.7099°, which is the complement of Taylor's now-famous 49.3° angle.
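The position of this zero can be checked numerically. The following is a minimal sketch in Python using SciPy's lpmv routine, which evaluates associated Legendre functions of real degree; the bracketing interval is chosen only for illustration:

# Find the zero of the Legendre function P_{1/2}(cos(theta)) between
# 0 and pi; this gives the polar angle of the equipotential Taylor cone.
import numpy as np
from scipy.optimize import brentq
from scipy.special import lpmv

def p_half(theta):
    # lpmv(m=0, v=0.5, x) is the Legendre function of degree 1/2
    return lpmv(0, 0.5, np.cos(theta))

theta0 = brentq(p_half, np.radians(100.0), np.radians(160.0))
print(np.degrees(theta0))          # ~130.71 degrees
print(180.0 - np.degrees(theta0))  # complement: Taylor's ~49.3 degree angle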
Singularity development
The apex of the conical meniscus cannot become infinitely small. A singularity develops when the hydrodynamic relaxation time

t_h = \frac{\mu L}{\gamma}

becomes larger than the charge relaxation time

t_e = \frac{\varepsilon \varepsilon_0}{K}

where L is a characteristic length and ε_0 is the vacuum permittivity. Due to intrinsic varicose instability, the charged liquid jet ejected through the cone apex breaks into small charged droplets, which are radially dispersed by the space-charge.
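For a sense of scale, a rough numerical sketch follows; every liquid property value below is an assumed, water-like figure, not a value from the source:

# Compare hydrodynamic and charge relaxation times for a water-like
# liquid; all parameter values here are illustrative assumptions.
mu, gamma = 1.0e-3, 0.072     # viscosity (Pa*s), surface tension (N/m)
eps_r, K = 80.0, 1.0e-4       # relative permittivity, conductivity (S/m)
eps0 = 8.854e-12              # vacuum permittivity (F/m)
L = 1.0e-6                    # characteristic length (m)

t_h = mu * L / gamma          # hydrodynamic relaxation time: ~1.4e-8 s
t_e = eps_r * eps0 / K        # charge relaxation time: ~7.1e-6 s
print(t_h, t_e)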
Closing the electrical circuit
The charged liquid is ejected through the cone apex and captured on the counter electrode as charged droplets or positive ions. To balance the charge loss, the excess negative charge is neutralized electrochemically at the emitter. Imbalances between the amount of charge generated electrochemically and the amount of charge lost at the cone apex can lead to several electrospray operating regimes. For cone-jet electrosprays, the potential at the metal/liquid interface self-regulates to generate the same amount of charge as that lost through the cone apex.
Applications
Electrospray ionization
Electrospray became widely used as an ionization source for mass spectrometry after the Fenn group successfully demonstrated its use as an ion source for the analysis of large biomolecules.
Liquid metal ion source
A liquid metal ion source (LMIS) uses electrospray in conjunction with liquid metal to form ions. Ions are produced by field evaporation at the tip of the Taylor cone. Ions from an LMIS are used in ion implantation and in focused ion beam instruments.
Electrospinning
Similarly to the standard electrospray, the application of high voltage to a polymer solution can result in the formation of a cone-jet geometry. If the jet turns into very fine fibers instead of breaking into small droplets, the process is known as electrospinning.
Colloid thrusters
Electrospray techniques are used as low thrust electric propulsion rocket engines to control satellites, since the fine-controllable particle ejection allows precise and effective thrust.
Deposition of particles for nanostructures
Electrospray may be used in nanotechnology, for example to deposit single particles on surfaces. This is done by spraying colloids containing, on average, only one particle per droplet. The solvent evaporates, leaving an aerosol stream of single particles of the desired type. The ionizing property of the process is not crucial for the application but may be used in electrostatic precipitation of the particles.
Deposition of ions as precursors for nanoparticles and nanostructures
Instead of depositing pre-formed nanoparticles, nanoparticles and nanostructures can also be fabricated in situ by depositing metal ions at desired locations. Electrochemical reduction of the ions to atoms, followed by in situ assembly, is believed to be the mechanism of nanostructure formation.
Fabrication of drug carriers
Electrospray has garnered attention in the field of drug delivery, and it has been used to fabricate drug carriers including polymer microparticles used in immunotherapy as well as lipoplexes used for nucleic acid delivery. The sub-micrometer-sized drug particles created by electrospray possess increased dissolution rates, and thus increased bioavailability, due to the increased surface area. The side effects of drugs can thus be reduced, as a smaller dosage suffices for the same effect.
Air purifiers
Electrospray is used in some air purifiers. Particulate suspended in air can be charged by aerosol electrospray, manipulated by an electric field, and collected on a grounded electrode. This approach minimizes the production of ozone which is common to other types of air purifiers.
See also
Flow focusing
References
Electric and magnetic fields in matter
Industrial equipment
Aerosols | Electrospray | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,596 | [
"Electric and magnetic fields in matter",
"Colloids",
"Materials science",
"Aerosols",
"Condensed matter physics",
"nan"
] |
2,235,142 | https://en.wikipedia.org/wiki/Manning%20formula | The Manning formula or Manning's equation is an empirical formula estimating the average velocity of a liquid in an open channel flow (flowing in a conduit that does not completely enclose the liquid). However, this equation is also used for calculation of flow variables in case of flow in partially full conduits, as they also possess a free surface like that of open channel flow. All flow in so-called open channels is driven by gravity.
It was first presented by the French engineer Philippe Gauckler in 1867, and later re-developed by the Irish engineer Robert Manning in 1890.
Thus, the formula is also known in Europe as the Gauckler–Manning formula or Gauckler–Manning–Strickler formula (after Albert Strickler).
The Gauckler–Manning formula is used to estimate the average velocity of water flowing in an open channel in locations where it is not practical to construct a weir or flume to measure flow with greater accuracy. Manning's equation is also commonly used as part of a numerical step method, such as the standard step method, for delineating the free surface profile of water flowing in an open channel.
Formulation
The Gauckler–Manning formula states:

V = \frac{k}{n} R_h^{2/3} S^{1/2}

where:
V is the cross-sectional average velocity (dimension of L/T; units of ft/s or m/s);
n is the Gauckler–Manning coefficient. Units of n are often omitted; however, n is not dimensionless, having dimension of T/L^{1/3} and units of s/m^{1/3};
R_h is the hydraulic radius (L; ft, m);
S is the stream slope or hydraulic gradient, the linear hydraulic head loss (dimension of L/L, units of m/m or ft/ft); it is the same as the channel bed slope when the water depth is constant (S = h_f/L);
k is a conversion factor between SI and English units. It can be left out, as long as the units of the n term are noted and corrected accordingly. If n is left in the traditional SI units, k is just the dimensional factor needed to convert to English units: k = 1 for SI units, and k = 1.49 for English units. (Note: (1 m)^{1/3}/s = (3.2808399 ft)^{1/3}/s = 1.4859 ft^{1/3}/s.)
Note: the Strickler coefficient is the reciprocal of the Manning coefficient: k_{St} = 1/n, having dimension of L^{1/3}/T and units of m^{1/3}/s; it varies from 20 m^{1/3}/s (rough stone and rough surface) to 80 m^{1/3}/s (smooth concrete and cast iron).
The discharge formula, Q = A V, can be used to rewrite Gauckler–Manning's equation by substituting for V. Solving for Q then allows an estimate of the volumetric flow rate (discharge) without knowing the limiting or actual flow velocity:

Q = \frac{k}{n} A R_h^{2/3} S^{1/2}
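As a minimal sketch of applying the formula (in Python; the rectangular-channel dimensions, slope, and roughness value below are illustrative assumptions, not values from the source):

import math

def manning_discharge(b, y, S, n, k=1.0):
    """Discharge Q for a rectangular channel of width b and flow depth y,
    slope S (m/m), and Manning coefficient n (s/m^(1/3));
    k = 1.0 for SI units, 1.49 for English units."""
    A = b * y                 # cross-sectional flow area
    P = b + 2.0 * y           # wetted perimeter (bottom plus two sides)
    Rh = A / P                # hydraulic radius
    V = (k / n) * Rh ** (2.0 / 3.0) * math.sqrt(S)  # Gauckler-Manning velocity
    return V * A              # Q = V * A

# Illustrative values: 3 m wide channel, 1 m flow depth, 0.2% slope, n = 0.013
print(manning_discharge(b=3.0, y=1.0, S=0.002, n=0.013))  # ~7.3 m^3/s

With k = 1.49 and the geometry expressed in feet, the same function returns the discharge in ft^3/s.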
The formula can be obtained by use of dimensional analysis. In the 2000s this formula was derived theoretically using the phenomenological theory of turbulence.
Hydraulic radius
The hydraulic radius is one of the properties of a channel that controls water discharge. It also determines how much work the channel can do, for example, in moving sediment. All else equal, a river with a larger hydraulic radius will have a higher flow velocity, and also a larger cross sectional area through which that faster water can travel. This means the greater the hydraulic radius, the larger volume of water the channel can carry.
Based on the 'constant shear stress at the boundary' assumption, hydraulic radius is defined as the ratio of the channel's cross-sectional area of the flow to its wetted perimeter (the portion of the cross-section's perimeter that is "wet"):

R_h = \frac{A}{P}

where:
R_h is the hydraulic radius (L);
A is the cross-sectional area of flow (L^2);
P is the wetted perimeter (L).
For channels of a given width, the hydraulic radius is greater for deeper channels. In wide rectangular channels, the hydraulic radius is approximated by the flow depth.
The hydraulic radius is not half the hydraulic diameter as the name may suggest, but one quarter in the case of a full pipe. It is a function of the shape of the pipe, channel, or river in which the water is flowing.
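The full-pipe result follows directly from the definition: for a circular pipe of diameter D flowing full,

R_h = \frac{A}{P} = \frac{\pi D^2 / 4}{\pi D} = \frac{D}{4}.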
Hydraulic radius is also important in determining a channel's efficiency (its ability to move water and sediment), and is one of the properties used by water engineers to assess the channel's capacity.
Gauckler–Manning coefficient
The Gauckler–Manning coefficient, often denoted as n, is an empirically derived coefficient, which is dependent on many factors, including surface roughness and sinuosity. When field inspection is not possible, the best method to determine n is to use photographs of river channels where n has been determined using Gauckler–Manning's formula.
The friction coefficients across weirs and orifices are less subjective than along a natural (earthen, stone or vegetated) channel reach. Cross sectional area, as well as n, will likely vary along a natural channel. Accordingly, more error is expected in estimating the average velocity by assuming a Manning's n, than by direct sampling (i.e., with a current flowmeter), or measuring it across weirs, flumes or orifices.
In natural streams, n values vary greatly along the reach, and will even vary in a given reach of channel with different stages of flow. Most research shows that n will decrease with stage, at least up to bank-full. Overbank n values for a given reach will vary greatly depending on the time of year and the velocity of flow. Summer vegetation will typically have a significantly higher n value due to leaves and seasonal vegetation. Research has shown, however, that n values are lower for individual shrubs with leaves than for the shrubs without leaves.
This is due to the ability of the plant's leaves to streamline and flex as the flow passes them thus lowering the resistance to flow. High velocity flows will cause some vegetation (such as grasses and forbs) to lay flat, where a lower velocity of flow through the same vegetation will not.
In open channels, the Darcy–Weisbach equation is valid using the hydraulic diameter as equivalent pipe diameter.
It is considered the soundest method to estimate the energy loss in human-made open channels. For various reasons (mainly historical reasons), empirical resistance coefficients (e.g. Chézy, Gauckler–Manning–Strickler) were and are still used. The Chézy coefficient was introduced in 1768, while the Gauckler–Manning coefficient was first developed in 1865, well before the classical pipe flow resistance experiments of the 1920s–1930s. Historically both the Chézy and the Gauckler–Manning coefficients were expected to be constant and functions of the roughness only. But it is now well recognised that these coefficients are only constant for a range of flow rates. Most friction coefficients (except perhaps the Darcy–Weisbach friction factor) are estimated 100% empirically and they apply only to fully rough turbulent water flows under steady flow conditions.
One of the most important applications of the Manning equation is its use in sewer design. Sewers are often constructed as circular pipes. It has long been accepted that the value of n varies with the flow depth in partially filled circular pipes. A complete set of explicit equations that can be used to calculate the depth of flow and other unknown variables when applying the Manning equation to circular pipes is available. These equations account for the variation of n with the depth of flow in accordance with the curves presented by Camp.
Authors of flow formulas
Albert Brahms (1692–1758)
Antoine de Chézy (1718–1798)
Henry Darcy (1803–1858)
Julius Ludwig Weisbach (1806-1871)
(1826–1905)
Robert Manning (1816–1897)
Wilhelm Rudolf Kutter (1818–1888)
Henri Bazin (1843–1917)
Ludwig Prandtl (1875–1953)
Paul Richard Heinrich Blasius (1883–1970)
Albert Strickler (1887–1963)
Cyril Frank Colebrook (1910–1997)
See also
Chézy formula
Darcy–Weisbach equation
Hydraulics
Notes and references
Further reading
External links
Hydraulic Radius Design Equations Formulas Calculator
History of the Manning Formula
Manning formula calculator for several channel shapes
Manning values associated with photos
Table of values of Manning's n
Interactive demo of Manning's equation
Fluid dynamics
Hydrology
Piping
Hydraulic engineering
Sedimentology
Geomorphology | Manning formula | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,715 | [
"Hydrology",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Mechanical engineering",
"Environmental engineering",
"Piping",
"Hydraulic engineering",
"Fluid dynamics"
] |
2,237,293 | https://en.wikipedia.org/wiki/Staudinger%20reaction | The Staudinger reaction is a chemical reaction of an organic azide with a phosphine or phosphite produces an iminophosphorane. The reaction was discovered by and named after Hermann Staudinger. The reaction follows this stoichiometry:
R3P + R'N3 → R3P=NR' + N2
Staudinger reduction
The Staudinger reduction is conducted in two steps. First, a phosphine imine-forming reaction is conducted by treating the azide with the phosphine. The intermediate, e.g. triphenylphosphine phenylimide, is then subjected to hydrolysis to produce a phosphine oxide and an amine:
R3P=NR' + H2O → R3P=O + R'NH2
The overall conversion is a mild method of reducing an azide to an amine. Triphenylphosphine or tributylphosphine are most commonly used, yielding tributylphosphine oxide or triphenylphosphine oxide as a side product in addition to the desired amine. An example of a Staudinger reduction is the organic synthesis of the pinwheel compound 1,3,5-tris(aminomethyl)-2,4,6-triethylbenzene.
Reaction mechanism
The reaction mechanism centers around the formation of an iminophosphorane through nucleophilic addition of the aryl or alkyl phosphine at the terminal nitrogen atom of the organic azide and expulsion of diatomic nitrogen. The iminophosphorane is then hydrolyzed in the second step to the amine and a phosphine oxide byproduct.
Staudinger ligation
Of interest in chemical biology is the Staudinger ligation, which has been called one of the most important bioconjugation methods. Two versions of the Staudinger ligation have been developed. Both begin with the classic iminophosphorane reaction.
In classical Staudinger ligation, the organophosphorus compound becomes incorporated into the peptide. Typically, appended to the organophosphorus component are reporter groups such as fluorophores. In traceless Staudinger ligation, the organophosphorus group dissociates giving a phosphorus-free bioconjugate.
References
External links
Staudinger Reaction at organic-chemistry.org accessed 060906.
Julia-Staudinger Reaction
Organic redox reactions
Carbohydrate chemistry
Name reactions | Staudinger reaction | [
"Chemistry"
] | 536 | [
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
2,237,309 | https://en.wikipedia.org/wiki/Cohesion%20%28chemistry%29 | In chemistry and physics, cohesion (), also called cohesive attraction or cohesive force, is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of surrounding electrons irregular when molecules get close to one another, creating electrical attraction that can maintain a macroscopic structure such as a water drop. Cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Water, for example, is strongly cohesive as each molecule may make four hydrogen bonds to other water molecules in a tetrahedral configuration. This results in a relatively strong Coulomb force between molecules. In simple terms, the polarity (a state in which a molecule is oppositely charged on its poles) of water molecules allows them to be attracted to each other. The polarity is due to the electronegativity of the atom of oxygen: oxygen is more electronegative than the atoms of hydrogen, so the electrons they share through the covalent bonds are more often close to oxygen rather than hydrogen. These are called polar covalent bonds, covalent bonds between atoms that thus become oppositely charged. In the case of a water molecule, the hydrogen atoms carry positive charges while the oxygen atom has a negative charge. This charge polarization within the molecule allows it to align with adjacent molecules through strong intermolecular hydrogen bonding, rendering the bulk liquid cohesive. Van der Waals gases such as methane, however, have weak cohesion due only to van der Waals forces that operate by induced polarity in non-polar molecules.
Cohesion, along with adhesion (attraction between unlike molecules), helps explain phenomena such as meniscus, surface tension and capillary action.
Mercury in a glass flask is a good example of the effects of the ratio between cohesive and adhesive forces. Because of its high cohesion and low adhesion to the glass, mercury does not spread out to cover the bottom of the flask, and if enough is placed in the flask to cover the bottom, it exhibits a strongly convex meniscus, whereas the meniscus of water is concave. Mercury will not wet the glass, unlike water and many other liquids, and if the glass is tipped, it will 'roll' around inside.
See also
Adhesion – the attraction of molecules or compounds for other molecules of a different kind
Specific heat capacity – the amount of heat needed to raise the temperature of one gram of a substance by one degree Celsius
Heat of vaporization – the amount of energy needed to change one gram of a liquid substance to a gas at constant temperature
Zwitterion – a molecule composed of individual functional groups which are ions, of which the most prominent examples are the amino acids
Chemical polarity – a neutral, or uncharged molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end
References
External links
The Bubble Wall (audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds)
"Adhesion and Cohesion of Water" – US Geological Survey
Molecular physics
Intermolecular forces
Physical quantities | Cohesion (chemistry) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 670 | [
"Physical phenomena",
"Molecular physics",
"Physical quantities",
"Quantity",
"Materials science",
"Intermolecular forces",
" molecular",
"nan",
"Atomic",
"Physical properties",
" and optical physics"
] |
13,520,509 | https://en.wikipedia.org/wiki/Baire%20measure | In mathematics, a Baire measure is a measure on the σ-algebra of Baire sets of a topological space whose value on every compact Baire set is finite. In compact metric spaces the Borel sets and the Baire sets are the same, so Baire measures are the same as Borel measures that are finite on compact sets. In general Baire sets and Borel sets need not be the same. In spaces with non-Baire Borel sets, Baire measures are used because they connect to the properties of continuous functions more directly.
Variations
There are several inequivalent definitions of Baire sets, so correspondingly there are several inequivalent concepts of Baire measure on a topological space. These all coincide on spaces that are locally compact σ-compact Hausdorff spaces.
Relation to Borel measure
In practice Baire measures can be replaced by regular Borel measures. The relation between Baire measures and regular Borel measures is as follows:
The restriction of a finite Borel measure to the Baire sets is a Baire measure.
A finite Baire measure on a compact space is always regular.
A finite Baire measure on a compact space is the restriction of a unique regular Borel measure.
On compact (or σ-compact) metric spaces, Borel sets are the same as Baire sets and Borel measures are the same as Baire measures.
Examples
Counting measure on the unit interval is a measure on the Baire sets that is not regular (or σ-finite).
The (left or right) Haar measure on a locally compact group is a Baire measure invariant under the left (right) action of the group on itself. In particular, if the group is an abelian group, the left and right Haar measures coincide and we say the Haar measure is translation invariant. See also Pontryagin duality.
References
Leonard Gillman and Meyer Jerison, Rings of Continuous Functions, Springer Verlag #43, 1960
Measures (measure theory) | Baire measure | [
"Physics",
"Mathematics"
] | 408 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
13,522,147 | https://en.wikipedia.org/wiki/Clinical%20coder | A clinical coder—also known as clinical coding officer, diagnostic coder, medical coder, or nosologist—is a health information professional whose main duties are to analyse clinical statements and assign standardized codes using a classification system. The health data produced are an integral part of health information management, and are used by local and national governments, private healthcare organizations and international agencies for various purposes, including medical and health services research, epidemiological studies, health resource allocation, case mix management, public health programming, medical billing, and public education.
For example, a clinical coder may use a set of published codes on medical diagnoses and procedures, such as the International Classification of Diseases (ICD), the Healthcare Common Procedure Coding System (HCPCS), and Current Procedural Terminology (CPT) for reporting to the health insurance provider of the recipient of the care. The use of standard codes allows insurance providers to map equivalencies across different service providers who may use different terminologies or abbreviations in their written claims forms, and be used to justify reimbursement of fees and expenses. The codes may cover topics related to diagnoses, procedures, pharmaceuticals or topography. The medical notes may also be divided into specialities, for example cardiology, gastroenterology, nephrology, neurology, pulmonology or orthopedic care. There are also specialist manuals for oncology known as ICD-O (International Classification of Diseases for Oncology) or "O Codes", which are also used by tumor registrars (who work with cancer registries), as well as dental codes for dentistry procedures known as "D codes" for further specifications.
A clinical coder therefore requires a good knowledge of medical terminology, anatomy and physiology, a basic knowledge of clinical procedures and diseases and injuries and other conditions, medical illustrations, clinical documentation (such as medical or surgical reports and patient charts), legal and ethical aspects of health information, health data standards, classification conventions, and computer- or paper-based data management, usually as obtained through formal education and/or on-the-job training.
In practice
The basic task of a clinical coder is to classify medical and health care concepts using a standardised classification. Inpatient, mortality events, outpatient episodes, general practitioner visits and population health studies can all be coded.
Clinical coding has three key phases: a) abstraction; b) assignment; and c) review.
Abstraction
The abstraction phase involves reading the entire record of the health encounter and analysing the information to determine what condition(s) the patient had, what caused it and how it was treated. The information comes from a variety of sources within the medical record, such as clinical notes, laboratory and radiology results, and operation notes.
Assignment
The assignment phase has two parts: finding the appropriate code(s) from the classification for the abstraction; and entering the code into the system being used to collect the coded data.
Review
Reviewing the code set produced from the assignment phase is very important. Clinical coders must ask themselves, "does this code set fairly represent what happened to this patient in this health encounter at this facility?" By doing this, clinical coders are checking that they have covered everything that they must, but not used extraneous codes. For health encounters that are funded through a case mix mechanism, the clinical coder will also review the diagnosis-related group (DRG) to ensure that it does fairly represent the health encounter.
Competency levels
Clinical coders may have different competency levels depending on the specific tasks and employment setting.
Entry-level / trainee coder
An entry-level coder has completed (or nearly completed) an introductory training program in using clinical classifications. Depending on the country, this program may be in the form of a certificate, or even a degree, which has to be earned before the trainee is allowed to start coding. All trainee coders will have some form of continuous, on-the-job training, often being overseen by a more senior coder.
Intermediate-level coder
An intermediate-level coder has acquired the skills necessary to code many cases independently. Coders at this level are also able to code cases with incomplete information. They have a good understanding of anatomy and physiology along with disease processes. Intermediate-level coders have their work audited periodically by an advanced coder.
Advanced-level / senior coder
Advanced-level and senior coders are authorized to code all cases including the most complex. Advanced coders will usually be credentialed and will have several years of experience. An advanced coder is also able to train entry-level coders.
Nosologist
A nosologist understands how the classification is underpinned. Nosologists consult nationally and internationally to resolve issues in the classification and are viewed as experts who can not only code, but design and deliver education, assist in the development of the classification and the rules for using it.
Nosologists are usually expert in more than one classification, including morbidity, mortality and case mix. In some countries the term nosologist is used as a catch-all term for all levels.
Classification types
Clinical coders may use many different classifications, which fall into two main groupings: statistical classifications and nomenclatures.
Statistical classification
A statistical classification, such as ICD-10 or DSM-5, will bring together similar clinical concepts, and group them into one category. This allows the number of categories to be limited so that the classification does not become too big, but still allows statistical analysis. An example of this is in ICD-10 at code I47.1. The code title (or rubric) is Supraventricular tachycardia. However, there are several other clinical concepts that are also classified here. Amongst them are paroxysmal atrial tachycardia, paroxysmal junctional tachycardia, auricular tachycardia and nodal tachycardia.
Nomenclature
With a nomenclature, for example SNOMED CT, there is a separate listing and code for every clinical concept. So, in the tachycardia example above, each type and clinical term for tachycardia would have its own code listed. This makes nomenclatures unwieldy for compiling health statistics.
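As a rough illustration of the difference, consider the following hypothetical Python sketch (the I47.1 grouping is from the ICD-10 example above; the nomenclature identifiers are made up for illustration and are not real SNOMED CT codes):

# In a statistical classification (ICD-10 style), several clinical
# concepts collapse into one category code:
statistical = {
    "supraventricular tachycardia": "I47.1",
    "paroxysmal atrial tachycardia": "I47.1",
    "paroxysmal junctional tachycardia": "I47.1",
    "nodal tachycardia": "I47.1",
}

# In a nomenclature (SNOMED CT style), every concept gets its own
# identifier (these codes are invented for illustration):
nomenclature = {
    "supraventricular tachycardia": "C0001",
    "paroxysmal atrial tachycardia": "C0002",
    "paroxysmal junctional tachycardia": "C0003",
    "nodal tachycardia": "C0004",
}

# Statistics are simpler on the classification: one category to count.
print(len(set(statistical.values())), len(set(nomenclature.values())))  # 1 4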
Qualification and professional association
In some countries, clinical coders may seek voluntary certification or accreditation through assessments conducted by professional associations, health authorities or, in some instances, universities. The options available to the coder will depend on the country, and, occasionally, even between states within a country.
Professional bodies that provide certification for clinical coders may also represent other health information management professionals.
Australia
Clinical Coders' Society of Australia (CCSA)
Health Information Management Association of Australia (HIMAA)
Canada
Canadian Health Information Management Association (CHIMA)
Saudi Arabia
Saudi Health Information Management Association (SHIMA)
United Kingdom
Clinical coders start as trainees, and there are no conversion courses for coders immigrating to the United Kingdom.
The National Clinical Coding Qualification (NCCQ) is an exam for experienced coders, and is recognised by the four health agencies of the UK. Institute of Health Records and Information Management (IHRIM) are the awarding body.
England
In England, a novice coder will complete the national standards course written by NHS Digital within six months of being in post. They will then start working towards the NCCQ.
Three years after passing the NCCQ, two further professional qualifications are made available to the coder in the form of NHS Digital's clinical coding auditor and trainer programmes.
Scotland
In 2015, National Services Scotland, in collaboration with Health Boards, launched the Certificate of Technical Competence (CTC) in Clinical Coding (Scotland). Awarded by the Institute of Health Records & Information Management (IHRIM), the aims of the certificate include supporting staff new to clinical coding, and providing a standardised framework of clinical coding training across NHS Scotland.
The NCCQ is a recognized coding qualification in Scotland.
Wales
The NCCQ is a recognized coding qualification by NHS Wales.
Northern Ireland
Health and Social Care in Northern Ireland recognizes the NCCQ as a coding qualification.
United States
The typical qualification for an entry-level medical coder in the United States is completion of a diploma or certificate, or, where they are offered, an associate degree. The diploma, certificate, or degree will usually include an Internet-based and/or in-person internship at some form of a medical office or facility. Some form of on-the-job training is also usually provided in the first months on the job until the coder can earn an intermediate or advanced level of certification and accumulate time on the job. For further academic training, a baccalaureate or master's degree in medical information technology, or a related field, can be earned by those who wish to advance to a supervisory or academic role. A nosologist (medical coding expert) in the U.S. will usually be certified by either AHIMA or the AAPC (often both) at their highest level of certification and speciality inpatient and/or outpatient certification (pediatrics, obstetrics/gynecology, gerontology, and oncology are among those offered by AHIMA and/or the AAPC), have at least 3–5 years of intermediate experience beyond entry-level certification and employment, and will often hold an associate, bachelor's, or graduate degree.
There are several associations that medical coders in the United States may join, including:
AAPC (formerly American Academy of Professional Coders)
American Board of Health Care Professionals (ABHCP)
American Health Information Management Association (AHIMA)
Some medical coders elect to be certified by more than one society.
The AAPC offers the following entry-level certifications in the U.S.: Certified Professional Coder (CPC); which tests on most areas of medical coding, and also the Certified Inpatient Coder (CIC) and Certified Outpatient Coder (COC). Both the CPC and COC have apprentice designations (CPC-A and COC-A, respectively) for those who pass the certification exams but do not have two years of on the job experience. There is no apprentice designation available for the CIC. After completing two years of on the job experience the apprentice credential holder can request to have the apprentice designation removed from their credential. There are also further specialist coding certifications, for example, the CHONC credential for those who specialize in hematology and oncology coding and the CASCC credential for those who specialize in ambulatory surgery center coding.
The other main organization is American Health Information Management Association (AHIMA) which offers the Certified Coding Specialist (CCS), Certified Coding Specialist-Physician-based (CCS-P), and the entry-level Certified Coding Associate (CCA).
Some U.S. states now mandate or at least strongly encourage certification from either AAPC or AHIMA or a degree from a college to be employed. Some states have registries of medical coders, though these can be voluntary listings. This trend was accelerated in part by the passage of HIPAA and the Affordable Care Act and similar changes in other Western countries, many of which use the ICD-10 for diagnostic medical coding. The change to more regulation and training has also been driven by the need to create accurate, detailed, and secure medical records (especially patient charts, bills, and claim form submissions) that can be recorded efficiently in an electronic era of medical records where they need to be carefully shared between different providers or institutions of care. This was encouraged and later required by legislation and institutional policy.
See also
Clinical medicine
Current Procedural Terminology
Diagnosis-related group
Diagnostic and Statistical Manual of Mental Disorders (DSM)
Health informatics
International Classification of Diseases (ICD)
ICD-11
ICD-10
Medical diagnosis
Pathology Messaging Implementation Project
WHO Family of International Classifications
References
External links
WHO Family of International Classifications
Health informatics
Health care occupations
Medical classification | Clinical coder | [
"Biology"
] | 2,468 | [
"Health informatics",
"Medical technology"
] |
13,522,189 | https://en.wikipedia.org/wiki/%C5%8Cno%20Benkichi | was a Japanese photographer and inventor. He is known for making Karakuri puppets.
Life and career
Ōno Benkichi was born in Kyoto in 1801. His real name was . At the age of 20, he moved to Nagasaki to study Western medicine and science. After studying weaponry and mathematics on Tsushima Island, he returned to Kyoto and married. In 1831, he moved to Ohno (now Ishikawa Prefecture), where his wife was born, and lived there for the rest of his life. He died in 1870.
Ōno was one of the first Japanese to experiment with photography. His first photograph dates back to the 1850s. Ōno designed various devices, including cameras, lighters, clocks, and telescopes. His invention, , was designed to generate static electricity and was used in medicine. One of his famous mechanisms is called the "Tea-serving boy".
Legacy
Karakuri Memorial Museum is dedicated to Ōno's inventions and life, and features a display of the Karakuri puppets he made.
References
Japanese photographers
Japanese inventors
1801 births
1870 deaths
Date of death missing
Date of birth unknown
Karakuri | Ōno Benkichi | [
"Physics",
"Technology"
] | 227 | [
"Physical systems",
"Machines",
"Karakuri"
] |
13,522,371 | https://en.wikipedia.org/wiki/Iron%20oxide%20adsorption | Iron oxide adsorption is a water treatment process that is used to remove arsenic from drinking water. Arsenic is a common natural contaminant of well water and is highly carcinogenic. Iron oxide adsorption treatment for arsenic in groundwater is a commonly practiced removal process which involves the chemical treatment of arsenic species such that they adsorb onto iron oxides and create larger particles that may be filtered out of the water stream.
The addition of ferric chloride, FeCl3, to well water immediately after the well at the influent to the treatment plant creates ferric hydroxide, Fe(OH)3, and hydrochloric acid, HCl.
3H2O + FeCl3 → Fe(OH)3 + 3HCl
Fe(OH)3 in water is a strong adsorbent of arsenate, As(V), provided that the pH is low. HCl lowers the pH, assuring arsenic adsorption, and the dissociated chlorine oxidizes iron in solution from Fe+2 to Fe+3, which then may bond with hydroxide ions, OH−, thus creating more adsorbent.
This adjustment also lowers the pH of the well water, decreasing alkalinity and allowing more cationic species such as Fe(+) or As(+) to exist freely within the flow. Low pH also decreases the solubility of some iron and arsenic species, as well as increasing the adsorptive reactivity of arsenate, As(V).
Additional oxidation of Fe+2 to Fe+3, also referred to as iron(II) and iron(III), is induced by the addition of sodium hypochlorite, NaOCl, at the well head. NaOCl is usually added for disinfection, although in this case it also serves the objectives of maintaining a distribution-system free chlorine residual of 1 mg/L and of oxidizing aqueous As(III) to As(V) and aqueous iron Fe+2 to Fe+3, which will bond with hydroxide for further adsorption.
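For illustration, one balanced form of this hypochlorite oxidation step, consistent with the species named above (the exact speciation in treated water will vary), is:

2Fe+2 + NaOCl + 5H2O → 2Fe(OH)3 + NaCl + 4H+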
The filter media usually consists of anthracite, iron-manganese oxidizing sand, and garnet sand over support gravel.
References
Water treatment | Iron oxide adsorption | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 476 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
10,984,862 | https://en.wikipedia.org/wiki/Synchrotron%20Radiation%20Center | The Synchrotron Radiation Center (SRC), located in Stoughton, Wisconsin and operated by the University of Wisconsin–Madison, was a national synchrotron light source research facility, operating the Aladdin storage ring. From 1968 to 1987 SRC was the home of Tantalus, the first storage ring dedicated to the production of synchrotron radiation.
History
The Road to SRC: 1953–1968
15 universities formed the Midwest Universities Research Association (MURA) in 1953 to promote and design a high energy proton synchrotron, to be built in the Midwest. With the intent of constructing a large accelerator, MURA purchased a suitable area of land with an underlying flat limestone base near Stoughton, Wisconsin, a short distance from the Madison campus of the University of Wisconsin.
MURA's first accelerator was a 45 MeV synchrotron, built in a concrete underground "vault", mostly for radiation protection purposes. A small electron storage ring, operating at 240 MeV, was designed by Ed Rowe and collaborators as a test facility to study high currents, and construction of this ring started in 1965. However, in 1963 President Johnson had decided that the next large accelerator facility would not be built at the MURA site, but in Batavia, Illinois; this became Fermilab. In 1967 MURA dissolved with the storage ring incomplete and with no further funding. The researchers, feeling teased by fate (and the government backers) named the machine after the mythological figure Tantalus, famed for his eternal punishment to stand beneath a fruit tree with the fruit ever eluding his grasp.
In 1966 a subcommittee of the National Research Council, which had been investigating the properties of synchrotron radiation from the 240 MeV ring, recommended it be completed as a tool for spectroscopy. A successful proposal was made to the US Air Force Office of Scientific Research, and the ring was completed in 1968—the first storage ring dedicated to the production of synchrotron radiation.
With the demise of MURA, a new entity was created to run the facility: the Synchrotron Radiation Center (SRC), administered by the University of Wisconsin.
Tantalus: 1968–1987
Tantalus, operating at an energy of 240 MeV, had a critical energy of slightly under 50 eV. It achieved its first stored beam in March 1968. Initial operations were very difficult, with only about 5 hours per week of usable beam, and currents of less than 1 mA. Initial users came from three groups, who took turns using their commercial monochromators on the one available beamline. On August 7, 1968, this first dedicated storage ring based synchrotron radiation facility produced its first data when Ulrich Gerhardt of the University of Chicago carried out simultaneous reflection and absorption measurements on CdS over the wavelength range 1100-2700 Å.
In 1972 the building was enlarged to accommodate new beamlines, and by 1973 there were ten ports, and beam currents were up to about 50 mA. A new injector, a 40 MeV microtron, was installed as an injector in 1974, replacing the original MURA accelerator that had been used until that point, and within a year currents exceeded 150 mA, with typically over 30 hours of beam per week. A stored beam of 260 mA was achieved in 1977. In October 1974 the National Science Foundation took over funding from the Air Force.
Initial monochromators were commercial instruments with drawbacks for use at a synchrotron. SRC started a program of instrument development, both to take advantage of the unique properties of synchrotron radiation and to make beamlines available to users without their own instruments. Such users became known as "general users", while groups with their own beamlines became known as Participating Research Teams (PRTs). This model has become widely used at other facilities, where PRTs are also denoted Collaborating Access Teams (CATs) and Collaborating Research Groups (CRGs). PRTs have been used extensively by US scientists at US facilities but by 2010 were somewhat out of favor. The CRG in Europe, however, remains as an important and successful means of flexible access.
For two decades Tantalus produced hundreds of experiments and was a testing ground for many synchrotron techniques still in use. Current synchrotron facilities can be very large, while Tantalus was not, and its small building, even after the 1972 expansion, was crowded with equipment and researchers. Users worked in very close quarters, and the close proximity, combined with the relative isolation of the facility, made cross fertilization of ideas unavoidable. The atmosphere was open, friendly, and informal, although not particularly comfortable physically. The heating system in one washroom did not work, so, to avoid frozen pipes, users just left the door wide open. After someone posted a sign alerting users to the policy, an international contest began, with each person translating the message into their own language. A copy of this sign was included as part of an NSF funding request as evidence of Tantalus's growing international impact.
Research during those early years was dominated by optical spectroscopy. In 1971 an IBM research group produced the first photoelectron spectra using Tantalus, a milestone in the development of photoemission spectroscopy as a research tool. The tunability of the radiation allowed researchers to disentangle a material's ground-state electronic properties. In the mid-1970s the increasing beam current from the ring gave intensity levels sufficient for angle-resolved photoemission spectroscopy, with a joint Bell Labs–Montana State University group conducting the earliest experiments. As an experimental technique, angle-resolved photoemission developed rapidly and had an important conceptual impact on condensed-matter physics. Gas-phase spectroscopy was another successful field at SRC, starting from early absorption studies of noble gases.
With the new Aladdin storage ring operating, Tantalus was officially decommissioned in 1987, although it was run for six weeks in the summer of 1988 for experiments in atomic and molecular fluorescence. The storage ring was disassembled in 1995, and half the ring, the RF cavity and one of the original beamlines are now in storage at the Smithsonian Institution.
Aladdin, the early years: 1976–1986
In 1976 SRC submitted a proposal to the NSF for a 750 MeV storage ring as an intense source of VUV and soft x-ray radiation to an energy greater than 1 keV. This proposed ring was named Aladdin. Funding for the new ring was obtained from the NSF, the State of Wisconsin, and the Wisconsin Alumni Research Foundation (WARF). The final design was a 1 GeV ring with four straight sections, and construction of some components started in 1978. A new building to house the facility started construction in April 1979. The initial target date for first stored beam was October 1980.
The construction phase of Aladdin ended in 1981, but by late 1984 SRC had been unable to complete the commissioning of the facility, with a maximum stored current of 2.5 mA, too little to provide useful light intensities. Accelerator experts reviewing the project recommended the addition of a booster synchrotron. In May 1985, after a review by L. Edward Temple of the Department of Energy, which recommended still another study period while difficulties were ironed out, NSF director Eric Bloch decided not only against the upgrade, but also against continued funding for Aladdin operations. SRC was kept running with existing NSF funding for Tantalus and funds from WARF. The University of Wisconsin made it clear it would only continue funding Aladdin until June 1986, a situation characterized on campus as the Perils of Pauline. Concurrent with these events, the technical issue limiting the machine performance had been solved, and three months after the decision to withdraw NSF funding, currents of 40 mA had been achieved. By July 1986 this had risen to over 150 mA, and NSF funding was restored.
Closing
National Science Foundation funding stopped in 2011. The University of Wisconsin provided funding to keep the facility operating until June 2013, while new sources of support were sought. The biggest budget cutbacks were in education, outreach and support for outside users. By January 2012 the facility had lost about one-third of its staff to retirements and layoffs. In February 2014 the facility director announced that the center would be closing. The final beam run was completed March 7, 2014, after which the process of dismantling and disposing of the equipment began.
SRC history project
A project, completed in 2011, collected oral histories and historical documents related to SRC. These were deposited in the archives of the University of Wisconsin–Madison, and digitized copies of some of the material are available online.
G. J. Lapeyre award
In 1973 the vault that held Tantalus was being enlarged, and during a facility picnic a rainstorm hit and caused the vault to start to flood. Jerry Lapeyre of Montana State University used the lab's tractor to build earthworks to divert the water. His efforts led then-director Rowe to create the annual G. J. Lapeyre award to be awarded to "one who met and overcame the greatest obstacle in the pursuit of their research". The trophy had an octagonal base representing Tantalus, with a beer can from the lab picnic which preceded the flood, topped by a concrete "raindrop".
Technical description
Beamlines
References
External links
Official website
SRC history project digital archive
Synchrotron radiation facilities
Research institutes in Wisconsin
University of Wisconsin–Madison | Synchrotron Radiation Center | [
"Materials_science"
] | 1,979 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
10,985,744 | https://en.wikipedia.org/wiki/Rotational%20diffusion | Rotational diffusion is the rotational movement which acts upon any object such as particles, molecules, atoms when present in a fluid, by random changes in their orientations.
Although the directions and intensities of these changes are statistically random, they do not arise randomly and are instead the result of interactions between particles. One example occurs in colloids, where relatively large insoluble particles are suspended in a greater amount of fluid. The changes in orientation occur from collisions between the particle and the many molecules forming the fluid surrounding the particle, which each transfer kinetic energy to the particle, and as such can be considered random due to the varied speeds and amounts of fluid molecules incident on each individual particle at any given time.
As the analogue of translational diffusion, which determines a particle's position in space, rotational diffusion randomises the orientation of any particle it acts on.
Anything in a solution will experience rotational diffusion, from the microscopic scale where individual atoms may have an effect on each other, to the macroscopic scale.
Applications
Rotational diffusion has multiple applications in chemistry and physics, and is heavily involved in many biology-based fields. For example, protein-protein interaction is a vital step in the communication of biological signals. In order to communicate, the proteins must both come into contact with each other and be facing the appropriate way to interact with each other's binding site, which relies on the proteins' ability to rotate.
As an example concerning physics, rotational Brownian motion in astronomy can be used to explain the orientations of the orbital planes of binary stars, as well as the seemingly random spin axes of supermassive black holes.
The random re-orientation of molecules (or larger systems) is an important process for many biophysical probes. Due to the equipartition theorem, larger molecules re-orient more slowly than do smaller objects and, hence, measurements of the rotational diffusion constants can give insight into the overall mass and its distribution within an object. Quantitatively, the mean square of the angular velocity about each of an object's principal axes is inversely proportional to its moment of inertia about that axis. Therefore, there should be three rotational diffusion constants - the eigenvalues of the rotational diffusion tensor - resulting in five rotational time constants. If two eigenvalues of the diffusion tensor are equal, the particle diffuses as a spheroid with two unique diffusion rates and three time constants. And if all eigenvalues are the same, the particle diffuses as a sphere with one time constant. The diffusion tensor may be determined from the Perrin friction factors, in analogy with the Einstein relation of translational diffusion, but often is inaccurate and direct measurement is required.
The rotational diffusion tensor may be determined experimentally through fluorescence anisotropy, flow birefringence, dielectric spectroscopy, NMR relaxation and other biophysical methods sensitive to picosecond or slower rotational processes. In some techniques such as fluorescence it may be very difficult to characterize the full diffusion tensor, for example measuring two diffusion rates can sometimes be possible when there is a great difference between them, e.g., for very long, thin ellipsoids such as certain viruses. This is however not the case of the extremely sensitive, atomic resolution technique of NMR relaxation that can be used to fully determine the rotational diffusion tensor to very high precision. Rotational diffusion of macromolecules in complex biological fluids (i.e., cytoplasm) is slow enough to be measurable by techniques with microsecond time resolution, i.e. fluorescence correlation spectroscopy.
Relation to translational diffusion
[Figure: the standard translational model of Brownian motion]
Much like translational diffusion, in which particles in one area of high concentration slowly spread position through random walks until they are near-equally distributed over the entire space, in rotational diffusion, over long periods of time the directions which these particles face will spread until they follow a completely random distribution with a near-equal amount facing in all directions. As impacts from surrounding particles rarely, if ever, occur directly at the centre of mass of a 'target' particle, each impact will occur off-centre. It is therefore important to note that the same collisions that cause translational diffusion also cause rotational diffusion, as some of the impact energy is transferred into translational kinetic energy and some into rotational motion via torque.
Rotational version of Fick's law
A rotational version of Fick's law of diffusion can be defined. Let each rotating molecule be associated with a unit vector n̂; for example, n̂ might represent the orientation of an electric or magnetic dipole moment. Let f(θ, φ, t) represent the probability density distribution for the orientation of n̂ at time t. Here, θ and φ represent the spherical angles, with θ being the polar angle between n̂ and the z-axis and φ being the azimuthal angle of n̂ in the x-y plane.
The rotational version of Fick's law states

\frac{\partial f}{\partial t} = D_r \nabla^2 f = D_r \left[ \frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\left( \sin\theta \frac{\partial f}{\partial\theta} \right) + \frac{1}{\sin^2\theta} \frac{\partial^2 f}{\partial\phi^2} \right]

where D_r is the rotational diffusion coefficient and \nabla^2 is the angular part of the Laplacian.
This partial differential equation (PDE) may be solved by expanding f(θ, φ, t) in spherical harmonics Y_l^m, for which the mathematical identity holds

\nabla^2 Y_l^m(\theta, \phi) = -l(l+1)\, Y_l^m(\theta, \phi).
Thus, the solution of the PDE may be written

f(\theta, \phi, t) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} C_{lm}\, Y_l^m(\theta, \phi)\, e^{-t/\tau_l},

where C_{lm} are constants fitted to the initial distribution and the time constants equal

\tau_l = \frac{1}{D_r\, l(l+1)}.
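As a small numerical sketch of these time constants (the value of D_r below is an assumed illustrative figure, of the order obtained for the 100 nm sphere example later in the article):

# Relaxation times tau_l = 1 / (D_r * l * (l + 1)) of the first few
# spherical-harmonic modes; D_r is an assumed illustrative value.
D_r = 184.0  # rotational diffusion coefficient in rad^2/s (illustrative)

for l in (1, 2, 3):
    tau_l = 1.0 / (D_r * l * (l + 1))
    # l = 1 is probed by, e.g., dielectric relaxation;
    # l = 2 by fluorescence anisotropy and NMR relaxation
    print(l, tau_l)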
Two-dimensional rotational diffusion
A sphere rotating around a fixed axis will rotate in two dimensions only and can be viewed from above the fixed axis as a circle. In this example, a sphere which is fixed on the vertical axis rotates around that axis only, meaning that the particle can have a θ value of 0 through 360 degrees, or 2π radians, before having a net rotation of 0 again.
These directions can be placed onto a graph which covers the entirety of the possible positions for the face to be at relative to the starting point, through 2π radians, starting with -π radians through 0 to π radians. Assuming all particles begin with single orientation of 0, the first measurement of directions taken will resemble a delta function at 0 as all particles will be at their starting, or 0th, position and therefore create an infinitely steep single line. Over time, the increasing amount of measurements taken will cause a spread in results; the initial measurements will see a thin peak form on the graph as the particle can only move slightly in a short time. Then as more time passes, the chance for the molecule to rotate further from its starting point increases which widens the peak, until enough time has passed that the measurements will be evenly distributed across all possible directions.
The distribution of orientations will reach a point where they become uniform as they all randomly disperse to be nearly equal in all directions. This can be visualized in two ways.
For a single particle with multiple measurements taken over time. A particle which has an area designated as its face pointing in the starting orientation, starting at a time t0 will begin with an orientation distribution resembling a single line as it is the only measurement. Each successive measurement at time greater than t0 will widen the peak as the particle will have had more time to rotate away from the starting position.
For multiple particles measured once long after the first measurement. The same case can be made with a large number of molecules, all starting at their respective 0th orientation. Assuming enough time has passed to be much greater than t0, the molecules may have fully rotated if the forces acting on them require, and a single measurement shows they are near-to-evenly distributed.
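This flattening of the distribution can be reproduced with a minimal random-walk simulation (a sketch assuming Gaussian angular steps of variance 2·D_r·Δt per step; all parameter values are illustrative):

# Minimal 2D rotational diffusion: all particles start at orientation 0
# and take Gaussian angular steps; the wrapped angle distribution
# spreads out and eventually becomes nearly uniform on (-pi, pi].
import numpy as np

rng = np.random.default_rng(0)
D_r, dt = 1.0, 0.1                  # illustrative diffusion coefficient, step
n_particles, n_steps = 10_000, 1_000

theta = np.zeros(n_particles)
for _ in range(n_steps):
    theta += rng.normal(0.0, np.sqrt(2.0 * D_r * dt), n_particles)

wrapped = (theta + np.pi) % (2.0 * np.pi) - np.pi
hist, _ = np.histogram(wrapped, bins=8, range=(-np.pi, np.pi))
print(hist / n_particles)   # roughly 0.125 in every bin once D_r*t = 100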
Basic equations
For rotational diffusion about a single axis, the mean-square angular deviation in time $t$ is

$$\langle\theta^2\rangle = 2 D_r t,$$

where $D_r$ is the rotational diffusion coefficient (in units of radians2/s).
The angular drift velocity $\Omega_d = (d\theta/dt)_{\mathrm{drift}}$ in response to an external torque $\Gamma_\theta$ (assuming that the flow stays non-turbulent and that inertial effects can be neglected) is given by

$$\Omega_d = \frac{\Gamma_\theta}{f_r},$$

where $f_r$ is the frictional drag coefficient. The relationship between the rotational diffusion coefficient and the rotational frictional drag coefficient is given by the Einstein relation (or Einstein–Smoluchowski relation):

$$D_r = \frac{k_B T}{f_r},$$
where $k_B$ is the Boltzmann constant and $T$ is the absolute temperature. These relationships are in complete analogy to translational diffusion.
The rotational frictional drag coefficient for a sphere of radius $a$ is

$$f_r = 8 \pi \eta a^3,$$

where $\eta$ is the dynamic (or shear) viscosity.
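As a quick numerical illustration of the Einstein relation combined with the Stokes drag, the following Python sketch computes $f_r$ and $D_r$ for an assumed micron-sized sphere in water (the radius and temperature are illustrative choices, not values from this article):

```python
import math

# Rotational drag and diffusion coefficient for a sphere, via
# f_r = 8*pi*eta*a^3 and the Einstein relation D_r = kB*T / f_r.

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # temperature, K (assumed)
eta = 8.9e-4        # viscosity of water near 25 C, Pa*s
a = 1.0e-6          # sphere radius, m (assumed: 1 micron)

f_r = 8 * math.pi * eta * a**3   # rotational drag, N*m*s
D_r = kB * T / f_r               # rotational diffusion coefficient, rad^2/s

print(f"f_r = {f_r:.3e} N*m*s")
print(f"D_r = {D_r:.3e} rad^2/s")
```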
The rotational diffusion of spheres, such as nanoparticles, may deviate from what is expected when in complex environments, such as in polymer solutions or gels. This deviation can be explained by the formation of a depletion layer around the nanoparticle.
Langevin dynamics
Collisions with the surrounding fluid molecules will create a fluctuating torque on the sphere due to the varied speeds, numbers, and directions of impact. When trying to rotate a sphere via an externally applied torque, there will be a systematic drag resistance to rotation. With these two facts combined, it is possible to write the Langevin-like equation

$$\frac{dL(t)}{dt} = -\frac{\zeta_r}{I}\,L(t) + T_B(t),$$
where:
L is the angular momentum.
$\frac{dL}{dt}$ is the net torque acting on the particle.
I is the moment of inertia about the rotation axis.
t is the time.
t0 is the start time.
θ is the angle between the orientation at t0 and any time after, t.
ζr is the rotational friction coefficient.
TB(t) is the fluctuating Brownian torque at time t.
The overall torque on the particle is then the difference between the fluctuating Brownian torque $T_B(t)$ and the frictional drag torque $\zeta_r L(t)/I$.
This equation is the rotational version of Newton's second law of motion. For example, in standard translational terms, a rocket experiences a boosting force from its engine while simultaneously experiencing a resistive force from the air it travels through. The same can be said for an object which is rotating.
Due to the random nature of the collisions, the average Brownian torque is equal in both directions of rotation, symbolised as

$$\langle T_B(t) \rangle = 0.$$
This means the equation can be averaged to get

$$\frac{d\langle L \rangle}{dt} = -\frac{\zeta_r}{I}\,\langle L \rangle,$$
which is to say that the first derivative with respect to time of the average angular momentum is equal to the negative of the rotational friction coefficient divided by the moment of inertia, all multiplied by the average angular momentum.
As $\frac{d\langle L \rangle}{dt}$ is the rate of change of angular momentum over time, and it is equal to a negative coefficient multiplied by $\langle L \rangle$, the angular momentum decays over time, with a decay time of

$$\tau = \frac{I}{\zeta_r}.$$
For a sphere of mass m, uniform density ρ and radius a, the moment of inertia is

$$I = \frac{2}{5} m a^2 = \frac{8}{15} \pi \rho a^5.$$
As mentioned above, the rotational drag is given by the Stokes friction for rotation:

$$\zeta_r = 8 \pi \eta a^3.$$
Combining all of the equations and formulae from above, we get

$$\tau = \frac{I}{\zeta_r} = \frac{\rho a^2}{15 \eta},$$

where $\tau$ is the momentum relaxation time and η is the viscosity of the liquid the sphere is in.
Example: Spherical particle in water
Let's say there is a virus which can be modelled as a perfect sphere with the following conditions:
Radius (a) of 100 nanometres, a = 10−7m.
Density: ρ = 1500 kg m−3
Orientation originally facing in a direction denoted by π.
Suspended in water.
Water has a viscosity of η = 8.9 × 10−4 Pa·s at 25 °C
Assume uniform mass and density throughout the particle
First, the mass of the virus particle can be calculated:

$$m = \frac{4}{3}\pi\rho a^3 = \frac{4}{3}\pi\,(1500\ \mathrm{kg\,m^{-3}})\,(10^{-7}\ \mathrm{m})^3 \approx 6.3\times10^{-18}\ \mathrm{kg}.$$

From this, we now know all the variables to calculate the moment of inertia:

$$I = \frac{2}{5} m a^2 \approx 2.5\times10^{-32}\ \mathrm{kg\,m^2}.$$

Simultaneous to this, we can also calculate the rotational drag:

$$\zeta_r = 8\pi\eta a^3 = 8\pi\,(8.9\times10^{-4}\ \mathrm{Pa\,s})\,(10^{-7}\ \mathrm{m})^3 \approx 2.2\times10^{-23}\ \mathrm{N\,m\,s}.$$

Combining these equations we get

$$\tau = \frac{I}{\zeta_r} = \frac{\rho a^2}{15\eta}.$$

As the SI units of the pascal are kg⋅m−1⋅s−2, the units in the answer reduce to seconds:

$$\tau = \frac{(1500\ \mathrm{kg\,m^{-3}})\,(10^{-7}\ \mathrm{m})^2}{15\,(8.9\times10^{-4}\ \mathrm{Pa\,s})} \approx 1.1\times10^{-9}\ \mathrm{s}.$$

For this example, the decay time of the virus is of the order of nanoseconds.
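The arithmetic of this example is easy to reproduce; the following Python sketch recomputes each quantity and checks it against the closed form $\tau = \rho a^2 / (15\eta)$:

```python
import math

# Reproduce the order-of-magnitude estimate for the virus example:
# tau = I / zeta_r for a uniform sphere, which reduces to rho*a^2/(15*eta).

rho = 1500.0   # density, kg/m^3
a = 1.0e-7     # radius, m
eta = 8.9e-4   # viscosity of water at 25 C, Pa*s

m = (4.0 / 3.0) * math.pi * rho * a**3   # mass of the sphere
I = (2.0 / 5.0) * m * a**2               # moment of inertia
zeta_r = 8 * math.pi * eta * a**3        # Stokes rotational drag

tau = I / zeta_r
print(f"m      = {m:.3e} kg")
print(f"I      = {I:.3e} kg m^2")
print(f"zeta_r = {zeta_r:.3e} N m s")
print(f"tau    = {tau:.3e} s")            # about 1.1e-9 s, i.e. nanoseconds
print(f"check: rho*a^2/(15*eta) = {rho * a**2 / (15 * eta):.3e} s")
```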
Smoluchowski description of rotation
To write the Smoluchowski equation for a particle rotating in two dimensions, we introduce a probability density P(θ, t) to find the vector u at an angle θ and time t.

This can be done by writing a continuity equation

$$\frac{\partial P(\theta,t)}{\partial t} = -\frac{\partial j(\theta,t)}{\partial \theta},$$

where the current can be written as

$$j(\theta,t) = -D_r\,\frac{\partial P(\theta,t)}{\partial \theta},$$

which can be combined to give the rotational diffusion equation

$$\frac{\partial P(\theta,t)}{\partial t} = D_r\,\frac{\partial^2 P(\theta,t)}{\partial \theta^2}.$$
We can express the current in terms of an angular velocity ω, which is a result of the Brownian torque TB acting through a rotational mobility, with the equation

$$j = \omega P, \qquad \omega = \mu_r T_B,$$

where $\mu_r = 1/\zeta_r$ is the rotational mobility.
The only difference between rotational and translational diffusion in this case is that in rotational diffusion we have periodicity in the angle θ. As the particle is modelled as a sphere rotating in two dimensions, the space the particle can explore is compact and finite: the particle can rotate a distance of 2π before returning to its original position.
We can create a conditional probability density, which is the probability of finding the vector u at the angle θ and time t given that it was at angle θ0 at time t = 0. This is written as

$$P(\theta, t \mid \theta_0, 0).$$
The solution to this equation can be found through a Fourier series:

$$P(\theta,t \mid \theta_0,0) = \frac{1}{2\pi}\sum_{m=-\infty}^{\infty} e^{im(\theta-\theta_0)}\,e^{-D_r m^2 t} = \frac{1}{2\pi}\,\vartheta_3\!\left(\frac{\theta-\theta_0}{2},\, e^{-D_r t}\right),$$

where $\vartheta_3$ is the Jacobi theta function of the third kind.
By using the theta-function (Poisson summation) identity, the conditional probability density function can be written as a sum of Gaussians over periodic images:

$$P(\theta,t \mid \theta_0,0) = \frac{1}{\sqrt{4\pi D_r t}} \sum_{n=-\infty}^{\infty} \exp\!\left(-\frac{(\theta-\theta_0+2\pi n)^2}{4 D_r t}\right).$$
For short times after the starting point, where t ≈ t0 and θ ≈ θ0, only the n = 0 term contributes appreciably and the formula becomes

$$P(\theta,t \mid \theta_0,0) \approx \frac{1}{\sqrt{4\pi D_r t}}\,\exp\!\left(-\frac{(\theta-\theta_0)^2}{4 D_r t}\right).$$

The n ≠ 0 terms are exponentially small and make little enough difference to be included here. This means that at short times the conditional probability looks similar to translational diffusion, as both show extremely small perturbations near t0. However at long times, t ≫ t0, the behaviour of rotational diffusion is different from translational diffusion:
The main difference between rotational diffusion and translational diffusion is that rotational diffusion has a periodicity of $\theta = \theta + 2\pi$, meaning that these two angles are identical. This is because a circle can rotate entirely once before being at the same angle as it was in the beginning, meaning that all the possible orientations can be mapped within a range of $2\pi$. This is opposed to translational diffusion, which has no such periodicity.
The conditional probability of having the angle be θ is approximately $\frac{1}{2\pi}$.

This is because over long periods of time the particle has had time to rotate throughout the entire range of possible angles, and as such the angle θ could be any value between θ0 and θ0 + 2π. The probability density is then near-evenly distributed over all angles at large enough times.

This can be confirmed by summing the probability over all possible angles: since the angles span an interval of length 2π, each with probability density $\frac{1}{2\pi}$, the total probability integrates to 1, which means there is a certainty of finding the particle at some angle on the circle.
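A short Python sketch can make the short-time/long-time contrast concrete by evaluating the wrapped-Gaussian sum above (truncated to a finite number of image terms) and comparing it with the uniform value $1/2\pi$; the diffusion coefficient and times used are assumed values:

```python
import numpy as np

# Conditional probability density for 2D rotational diffusion as a wrapped
# Gaussian (truncated sum over periodic images), compared with the uniform
# long-time value 1/(2*pi). D_r and the times are assumed values.

def p_conditional(theta, theta0, t, D_r, n_images=50):
    n = np.arange(-n_images, n_images + 1)
    terms = np.exp(-(theta - theta0 + 2 * np.pi * n) ** 2 / (4 * D_r * t))
    return terms.sum() / np.sqrt(4 * np.pi * D_r * t)

D_r = 1.0
for t in (0.01, 0.1, 1.0, 10.0):
    p0 = p_conditional(0.0, 0.0, t, D_r)   # density at the starting angle
    print(f"t = {t:5.2f}:  p(theta0) = {p0:.4f}   (uniform = {1/(2*np.pi):.4f})")
```

At short times the density at the starting angle is sharply peaked, while at long times it converges to $1/2\pi \approx 0.159$, the uniform distribution described above.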
See also
Diffusion equation
Perrin friction factors
Rotational correlation time
False diffusion
References
Further reading
Diffusion
Rotation | Rotational diffusion | [
"Physics",
"Chemistry"
] | 2,924 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Classical mechanics",
"Rotation",
"Motion (physics)"
] |
10,986,798 | https://en.wikipedia.org/wiki/Molecular%20symmetry | In chemistry, molecular symmetry describes the symmetry present in molecules and the classification of these molecules according to their symmetry. Molecular symmetry is a fundamental concept in chemistry, as it can be used to predict or explain many of a molecule's chemical properties, such as whether or not it has a dipole moment, as well as its allowed spectroscopic transitions. To do this it is necessary to use group theory. This involves classifying the states of the molecule using the irreducible representations
from the character table of the symmetry group of the molecule. Symmetry is useful in the study of molecular orbitals, with applications to the Hückel method, to ligand field theory, and to the Woodward-Hoffmann rules. Many university level textbooks on physical chemistry, quantum chemistry, spectroscopy and inorganic chemistry discuss symmetry. Another framework on a larger scale is the use of crystal systems to describe crystallographic symmetry in bulk materials.
There are many techniques for determining the symmetry of a given molecule, including X-ray crystallography and various forms of spectroscopy. Spectroscopic notation is based on symmetry considerations.
Point group symmetry concepts
Elements
The point group symmetry of a molecule is defined by the presence or absence of 5 types of symmetry element.
Symmetry axis: an axis around which a rotation by $\tfrac{360°}{n}$ results in a molecule indistinguishable from the original. This is also called an n-fold rotational axis and abbreviated Cn. Examples are the C2 axis in water and the C3 axis in ammonia. A molecule can have more than one symmetry axis; the one with the highest n is called the principal axis, and by convention is aligned with the z-axis in a Cartesian coordinate system.
Plane of symmetry: a plane of reflection through which an identical copy of the original molecule is generated. This is also called a mirror plane and abbreviated σ (sigma = Greek "s", from the German 'Spiegel' meaning mirror). Water has two of them: one in the plane of the molecule itself and one perpendicular to it. A symmetry plane parallel with the principal axis is dubbed vertical (σv) and one perpendicular to it horizontal (σh). A third type of symmetry plane exists: If a vertical symmetry plane additionally bisects the angle between two 2-fold rotation axes perpendicular to the principal axis, the plane is dubbed dihedral (σd). A symmetry plane can also be identified by its Cartesian orientation, e.g., (xz) or (yz).
Center of symmetry or inversion center, abbreviated i. A molecule has a center of symmetry when, for any atom in the molecule, an identical atom exists diametrically opposite this center an equal distance from it. In other words, a molecule has a center of symmetry when the points (x,y,z) and (−x,−y,−z) of the molecule always look identical. For example, whenever there is an oxygen atom in some point (x,y,z), then there also has to be an oxygen atom in the point (−x,−y,−z). There may or may not be an atom at the inversion center itself. An inversion center is a special case of having a rotation-reflection axis about an angle of 180° through the center. Examples are xenon tetrafluoride (a square planar molecule), where the inversion center is at the Xe atom, and benzene () where the inversion center is at the center of the ring.
Rotation-reflection axis: an axis around which a rotation by $\tfrac{360°}{n}$, followed by a reflection in a plane perpendicular to it, leaves the molecule unchanged. Also called an n-fold improper rotation axis, it is abbreviated Sn. Examples are present in tetrahedral silicon tetrafluoride, with three S4 axes, and the staggered conformation of ethane with one S6 axis. An S1 axis corresponds to a mirror plane σ and an S2 axis is an inversion center i. A molecule which has no Sn axis for any value of n is a chiral molecule.
Identity, abbreviated to E, from the German 'Einheit' meaning unity. This symmetry element simply consists of no change: every molecule has this symmetry element, which is equivalent to a C1 proper rotation. It must be included in the list of symmetry elements so that they form a mathematical group, whose definition requires inclusion of the identity element. It is so called because it is analogous to multiplying by one (unity).
Operations
The five symmetry elements have associated with them five types of symmetry operation, which leave the geometry of the molecule indistinguishable from the starting geometry. They are sometimes distinguished from symmetry elements by a caret or circumflex. Thus, Ĉn is the rotation of a molecule around an axis and Ê is the identity operation. A symmetry element can have more than one symmetry operation associated with it. For example, the C4 axis of the square xenon tetrafluoride (XeF4) molecule is associated with two Ĉ4 rotations in opposite directions (90° and 270°), a Ĉ2 rotation (180°) and Ĉ1 (0° or 360°). Because Ĉ1 is equivalent to Ê, Ŝ1 to σ and Ŝ2 to î, all symmetry operations can be classified as either proper or improper rotations.
For linear molecules, either clockwise or counterclockwise rotation about the molecular axis by any angle Φ is a symmetry operation.
Symmetry groups
Groups
The symmetry operations of a molecule (or other object) form a group. In mathematics, a group is a set with a binary operation that satisfies the four properties listed below.
In a symmetry group, the group elements are the symmetry operations (not the symmetry elements), and the binary combination consists of applying first one symmetry operation and then the other. An example is the sequence of a C4 rotation about the z-axis and a reflection in the xy-plane, denoted σ(xy)C4. By convention the order of operations is from right to left.
A symmetry group obeys the defining properties of any group.
closure property: for every pair of elements A and B in the group, the product A*B is also an element of the group.
This means that the group is closed so that combining two elements produces no new elements. Symmetry operations have this property because a sequence of two operations will produce a third state indistinguishable from the second and therefore from the first, so that the net effect on the molecule is still a symmetry operation. This may be illustrated by means of a table. For example, with the point group C3, there are three symmetry operations: rotation by 120°, C3, rotation by 240°, C32 and rotation by 360°, which is equivalent to identity, E.
Point group C3 multiplication table:

           E      C3     C3²
    E      E      C3     C3²
    C3     C3     C3²    E
    C3²    C3²    E      C3
This table also illustrates the following properties
Associative property: for every A, B and C in the group, (A*B)*C = A*(B*C).
existence of identity property: there is an element E of the group such that A*E = E*A = A for every element A.
existence of inverse element: for every element A there is an inverse A⁻¹ with A*A⁻¹ = A⁻¹*A = E.
The order of a group is the number of elements in the group. For groups of small orders, the group properties can be easily verified by considering its composition table, a table whose rows and columns correspond to elements of the group and whose entries correspond to their products.
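These properties are easy to check mechanically. The following Python sketch models the C3 operations as rotations by 0°, 120° and 240°, composes them by adding angles modulo 360°, and verifies closure, identity and inverses (the element names are labels chosen for this sketch):

```python
from itertools import product

# Model the C3 point group as rotations by 0, 120 and 240 degrees; the
# composition of two rotations about the same axis is addition of their
# angles modulo 360.
elements = {"E": 0, "C3": 120, "C3^2": 240}
names = {v: k for k, v in elements.items()}

def compose(a, b):
    """Compose two rotations (order is immaterial for a single axis)."""
    return names[(elements[a] + elements[b]) % 360]

# print the composition table
print("      " + "  ".join(f"{b:>4}" for b in elements))
for a in elements:
    print(f"{a:>4}  " + "  ".join(f"{compose(a, b):>4}" for b in elements))

# closure: every product is again one of the three operations
assert all(compose(a, b) in elements for a, b in product(elements, repeat=2))
# identity: composing with E changes nothing
assert all(compose("E", a) == a == compose(a, "E") for a in elements)
# inverses: every operation has a partner that composes to E
assert all(any(compose(a, b) == "E" for b in elements) for a in elements)
print("closure, identity and inverse verified")
```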
Point groups
The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the molecule. For example, a C2 rotation followed by a σv reflection is seen to be a σv' symmetry operation: σv*C2 = σv'. ("Operation A followed by B to form C" is written BA = C). Moreover, the set of all symmetry operations (including this composition operation) obeys all the properties of a group, given above. So (S,*) is a group, where S is the set of all symmetry operations of some molecule, and * denotes the composition (repeated application) of symmetry operations.
This group is called the point group of that molecule, because the set of symmetry operations leave at least one point fixed (though for some symmetries an entire axis or an entire plane remains fixed). In other words, a point group is a group that summarises all symmetry operations that all molecules in that category have. The symmetry of a crystal, by contrast, is described by a space group of symmetry operations, which includes translations in space.
Examples of point groups
Assigning each molecule a point group classifies molecules into categories with similar symmetry properties. For example, PCl3, POF3, XeO3, and NH3 all share identical symmetry operations. They all can undergo the identity operation E, two different C3 rotation operations, and three different σv plane reflections without altering their identities, so they are placed in one point group, C3v, with order 6. Similarly, water (H2O) and hydrogen sulfide (H2S) also share identical symmetry operations. They both undergo the identity operation E, one C2 rotation, and two σv reflections without altering their identities, so they are both placed in one point group, C2v, with order 4. This classification system helps scientists to study molecules more efficiently, since chemically related molecules in the same point group tend to exhibit similar bonding schemes, molecular bonding diagrams, and spectroscopic properties.
Point group symmetry describes the symmetry of a molecule when fixed at its equilibrium configuration in a particular electronic state. It does not allow for tunneling between minima nor for the change in shape that can come about from the centrifugal distortion effects of molecular rotation.
Common point groups
The following table lists many of the point groups applicable to molecules, labelled using the Schoenflies notation, which is common in chemistry and molecular spectroscopy. The descriptions include common shapes of molecules, which can be explained by the VSEPR model. In each row, the descriptions and examples have no higher symmetries, meaning that the named point group captures all of the point symmetries.
Representations and their characters
A set of matrices that multiply together in a way that mimics the multiplication table of the elements of a group is called a representation of the group. For example, for the C2v point group, the following three matrices are part of a representation of the group:
This point group only contains four operations and the 3-dimensional representations above provide matrices for three of the four operations. Only the identity operation E remains, but this matrix just contains 1's on the leading diagonal (top left to bottom right) and 0's elsewhere. Although an infinite number of such representations exist, the irreducible representations (or "irreps") of the group are all that are needed, as all other representations of the group can be described as a direct sum of the irreducible representations. The first step in finding the irreps making up a given representation is to sum up the values of the leading diagonals for each matrix; taking the identity matrix first, then the matrices in the order above, one obtains (3, -1, 1, 1). These values are the traces or characters of the four matrices. Asymmetric point groups such as C2v only have 1-dimensional irreps, so the character of an irrep is exactly the same as the irrep itself, and the following table can be interpreted as irreps or characters.
Looking again at the characters obtained for the 3D representation above (3, -1, 1, 1), we only need simple arithmetic to break this down into irreps. Clearly, E = 3 means there are three irreps and a C2 representation sum of -1 means there must be one A and two B irreps so the only combination that adds up to the characters derived is
A1+ B1 + B2
In fact, this result could have been deduced by simply looking at the 3D representation itself: the three irreps are obvious in the three diagonal positions. Robert Mulliken was the first to publish character tables in English (1933), and E. Bright Wilson used them in 1934 to predict the symmetry of vibrational normal modes. For this reason, the notation used to label irreps in the above table is called Mulliken notation, and for asymmetric groups it consists of letters A and B with subscripts 1 and 2 as above and subscripts g and u as in the C2h example below. (Subscript 3 also appears in D2.) The irreducible representations are those matrix representations in which the matrices are in their most diagonal form possible, and for asymmetric groups this means totally diagonal. One further thing to note about the irrep/character table above is the appearance of polar and axial base vector symbols on the right-hand side. This tells us that, for example, Cartesian base vector x transforms as irrep B1 under the operations of this group. The same collection of product base vectors is used for all asymmetric groups, but symmetric and spherical groups use different sets of product base vectors.
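The decomposition can also be carried out mechanically with the standard reduction formula $n_i = \tfrac{1}{h}\sum_R \chi(R)\,\chi_i(R)$; the following Python sketch applies it to the characters (3, -1, 1, 1) using the C2v character table:

```python
# Reduce the 3-dimensional representation of C2v with characters
# (3, -1, 1, 1) into irreps using the reduction formula
#   n_i = (1/h) * sum over operations of chi(R) * chi_i(R),
# where h = 4 is the order of C2v (one operation per class).

c2v_irreps = {                 # characters under (E, C2, sigma_v, sigma_v')
    "A1": (1, 1, 1, 1),
    "A2": (1, 1, -1, -1),
    "B1": (1, -1, 1, -1),
    "B2": (1, -1, -1, 1),
}
chi = (3, -1, 1, 1)            # characters of the representation to reduce
h = 4

for name, chars in c2v_irreps.items():
    n = sum(c * ci for c, ci in zip(chi, chars)) // h
    if n:
        print(f"{n} x {name}")
# prints 1 x A1, 1 x B1, 1 x B2, matching A1 + B1 + B2
```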
Point group C2h has the operations and transformation matrices shown in the following diagram
In summary, for any group, its character table gives a tabulation (for the classes of the group) of the characters (the sum of the diagonal elements) of the matrices of all the irreducible representations of the group. As the number of irreducible representations equals the number of classes, the character table is square.
The representations are labeled according to a set of conventions:
A, when rotation around the principal axis is symmetrical
B, when rotation around the principal axis is asymmetrical
E and T are doubly and triply degenerate representations, respectively
when the point group has an inversion center, the subscript g (gerade or even) signals no change in sign, and the subscript u (ungerade or uneven) a change in sign, with respect to inversion.
with point groups C∞v and D∞h the symbols are borrowed from angular momentum description: Σ, Π, Δ.
The tables also capture information about how the Cartesian basis vectors, rotations about them, and quadratic functions of them transform by the symmetry operations of the group, by noting which irreducible representation transforms in the same way. These indications are conventionally on the righthand side of the tables. This information is useful because chemically important orbitals (in particular p and d orbitals) have the same symmetries as these entities.
Atomic orbital symmetry
Consider the example of water (H2O), which has the C2v symmetry described above. The 2px orbital of oxygen has B1 symmetry as in the fourth row of the character table above, with x in the sixth column). It is oriented perpendicular to the plane of the molecule and switches sign with a C2 and a σv'(yz) operation, but remains unchanged with the other two operations (obviously, the character for the identity operation is always +1). This orbital's character set is thus {1, −1, 1, −1}, corresponding to the B1 irreducible representation. Likewise, the 2pz orbital is seen to have the symmetry of the A1 irreducible representation (i.e.: none of the symmetry operations change it), 2py B2, and the 3dxy orbital A2. These assignments and others are noted in the rightmost two columns of the table.
Historical background
All of the group operations described above and the symbols for crystallographic point groups themselves were first published by Arthur Schoenflies in 1891 but the groups had been applied by other researchers to the external morphology of crystals much earlier in the 19th century.
In 1914 Max von Laue published the results of experiments using x-ray diffraction to elucidate the internal structures of crystals producing a limited version of a table of "Laue classes" shown to the right. When adapted for molecular work this table first divides point groups into three kinds: asymmetric, symmetric and spherical tops. These are categories related to the angular momentum of molecules, having respectively 3, 2 and 1 distinct values of angular momentum. A further sub-division into systems is defined by the rotational group G in the leftmost column then into rows of Laue classes. Every point group in a Laue class has exactly the same abstract group structure except the centred group in the rightmost column which is the direct product of the rotational group with inversion. It follows that all groups in a Laue class have the same order except the centred group which is twice that of the others. Laue found that x-ray diffraction was unable to distinguish between point groups of a Laue class.
Hans Bethe used characters of point group operations in his study of ligand field theory in 1929, and Eugene Wigner used group theory to explain the selection rules of atomic spectroscopy. The first character tables were compiled by László Tisza (1933), in connection to vibrational spectra. It is important to note that, since all the point groups of a Laue class have the same abstract structure, they also have exactly the same irreducible representations and character tables. As in x-ray crystallography many properties in molecular work are decided by the Laue class.
The complete set of 32 crystallographic point groups was published in 1936 by Rosenthal and Murphy.
The molecular symmetry group
One can determine the symmetry operations of the point group for a particular molecule by considering the geometrical symmetry of its molecular model. However, when one uses a point group to classify molecular states, the operations in it are not to be interpreted in the same way. Instead the operations are interpreted as rotating and/or reflecting the vibronic (vibration-electronic) coordinates and these operations commute with the vibronic Hamiltonian. They are "symmetry operations" for that vibronic Hamiltonian. The point group is used to classify by symmetry the vibronic eigenstates of a rigid molecule. The symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, can be achieved through the use of the appropriate permutation-inversion group (called the molecular symmetry group), as introduced by Longuet-Higgins.
Symmetry of vibrational modes
Each normal mode of molecular vibration has a symmetry which forms a basis for one irreducible representation of the molecular symmetry group. For example, the water molecule has three normal modes of vibration: symmetric stretch in which the two O-H bond lengths vary in phase with each other, asymmetric stretch in which they vary out of phase, and bending in which the bond angle varies. The molecular symmetry of water is C2v with four irreducible representations A1, A2, B1 and B2. The symmetric stretching and the bending modes have symmetry A1, while the asymmetric mode has symmetry B2. The overall symmetry of the three vibrational modes is therefore Γvib = 2A1 + B2.
Vibrational modes of ammonia
The molecular symmetry of ammonia (NH3) is C3v, with symmetry operations E, C3 and σv. For N = 4 atoms, the number of vibrational modes for a non-linear molecule is 3N − 6 = 6, due to the relative motion of the nitrogen atom and the three hydrogen atoms. In the first mode, all three hydrogen atoms travel symmetrically along the N-H bonds, either towards the nitrogen atom or away from it. This mode is known as the symmetric stretch (ν₁) and reflects the symmetry in the N-H bond stretching. Of the three vibrational modes, this one has the highest frequency.
In the Bending (ν₂) vibration, the nitrogen atom stays on the axis of symmetry, while the three hydrogen atoms move in different directions from one another, leading to changes in the bond angles. The hydrogen atoms move like an umbrella, so this mode is often referred to as the "umbrella mode".
There is also an Asymmetric Stretch mode (ν₃) in which one hydrogen atom approaches the nitrogen atom while the other two hydrogens move away.
The total number of degrees of freedom for each symmetry species (or irreducible representation) can be determined. Ammonia has four atoms, and each atom is associated with three vector components. The symmetry group C3v for NH3 has the three symmetry species A1, A2 and E. The modes of vibration include the vibrational, rotational and translational modes.
Total modes = 3A1 + A2 + 4E. This is a total of 12 modes because each E corresponds to 2 degenerate modes (at the same energy).
Rotational modes = A2 + E (3 modes)
Translational modes = A1 + E
Vibrational modes = Total modes - Rotational modes - Translational modes = 3A1 + A2 + 4E - A2 - E - A1 - E = 2A1 + 2E (6 modes).
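The bookkeeping in this subtraction can be sketched in a few lines of Python using a multiset of symmetry species (the dictionaries simply encode the counts quoted above):

```python
from collections import Counter

# Bookkeeping for the NH3 vibrational analysis: subtract the rotational and
# translational symmetry species from the total 3N = 12 modes.

total = Counter({"A1": 3, "A2": 1, "E": 4})      # 3A1 + A2 + 4E
rotations = Counter({"A2": 1, "E": 1})           # A2 + E
translations = Counter({"A1": 1, "E": 1})        # A1 + E

vibrations = total - rotations - translations
print(dict(vibrations))                          # {'A1': 2, 'E': 2} -> 2A1 + 2E

# sanity check on mode counts: E is doubly degenerate
dim = {"A1": 1, "A2": 1, "E": 2}
assert sum(n * dim[s] for s, n in total.items()) == 12       # 3N for N = 4
assert sum(n * dim[s] for s, n in vibrations.items()) == 6   # 3N - 6
```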
More examples of vibrational symmetry
W(CO)6 has octahedral geometry. The irreducible representation for the C-O stretching vibration is A1g + Eg + T1u. Of these, only T1u is IR active.
B2H6 (diborane) has D2h molecular symmetry. The terminal B-H stretching vibrations which are active in IR are B2u and B3u.
fac-Mo(CO)3(CH3CH2CN)3 has C3v geometry. The irreducible representation for the C-O stretching vibration is A1 + E, both of which are IR active.
Symmetry of molecular orbitals
Each molecular orbital also has the symmetry of one irreducible representation. For example, ethylene (C2H4) has symmetry group D2h, and its highest occupied molecular orbital (HOMO) is the bonding pi orbital which forms a basis for its irreducible representation B1u.
Molecular rotation and molecular nonrigidity
As discussed above in the section The molecular symmetry group, point groups are useful for classifying the vibrational and electronic states of rigid molecules (sometimes called semi-rigid molecules) which undergo only small oscillations about a single equilibrium geometry. Longuet-Higgins introduced the molecular symmetry group (a more general type of symmetry group) suitable not only for classifying the vibrational and electronic states of rigid molecules but also for classifying their rotational and nuclear spin states. Further, such groups can be used to classify the states of non-rigid (or fluxional) molecules that tunnel between equivalent geometries and to allow for the distorting effects of molecular rotation. The symmetry operations in the molecular symmetry group are so-called 'feasible' permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two, so that the group is sometimes called a "permutation-inversion group".
Examples of molecular nonrigidity abound. For example, ethane (C2H6) has three equivalent staggered conformations. Tunneling between the conformations occurs at ordinary temperatures by internal rotation of one methyl group relative to the other. This is not a rotation of the entire molecule about the C3 axis, although each conformation has D3d symmetry, as in the table above. Similarly, ammonia (NH3) has two equivalent pyramidal (C3v) conformations which are interconverted by the process known as nitrogen inversion.
Additionally, the methane (CH4) and H3+ molecules have highly symmetric equilibrium structures with Td and D3h point group symmetries respectively; they lack permanent electric dipole moments but they do have very weak pure rotation spectra because of rotational centrifugal distortion.
Sometimes it is necessary to consider together electronic states having different point group symmetries at equilibrium. For example, in its ground (N) electronic state the ethylene molecule C2H4 has D2h point group symmetry whereas in the excited (V) state it has D2d symmetry. To treat these two states together it is necessary to allow torsion and to use the double group of the molecular symmetry group G16.
See also
Character table
Crystallographic point group
Point groups in three dimensions
Symmetry of diatomic molecules
Symmetry in quantum mechanics
References
External links
The molecular symmetry group @ The University of Western Ontario
Point group symmetry @ Newcastle University
Molecular symmetry @ Imperial College London
Molecular Point Group Symmetry Tables
Character tables for point groups for chemistry
Molecular Symmetry Online @ The Open University of Israel
An internet lecture course on molecular symmetry @ Bergische Universitaet
DECOR – Symmetry @ The Cambridge Crystallographic Data Centre
Symmetry
Theoretical chemistry | Molecular symmetry | [
"Physics",
"Chemistry",
"Mathematics"
] | 5,067 | [
"Theoretical chemistry",
"nan",
"Geometry",
"Symmetry"
] |
10,987,985 | https://en.wikipedia.org/wiki/Exact%20category | In mathematics, specifically in category theory, an exact category is a category equipped with short exact sequences. The concept is due to Daniel Quillen and is designed to encapsulate the properties of short exact sequences in abelian categories without requiring that morphisms actually possess kernels and cokernels, which is necessary for the usual definition of such a sequence.
Definition
An exact category E is an additive category possessing a class E of "short exact sequences": triples of objects connected by arrows

$$M' \rightarrowtail M \twoheadrightarrow M''$$

satisfying the following axioms inspired by the properties of short exact sequences in an abelian category:
E is closed under isomorphisms and contains the canonical ("split exact") sequences:

$$M' \rightarrowtail M' \oplus M'' \twoheadrightarrow M'';$$
Suppose $q: M \twoheadrightarrow M''$ occurs as the second arrow of a sequence in E (it is an admissible epimorphism) and $f: N \to M''$ is any arrow in E. Then their pullback $M \times_{M''} N$ exists and its projection to $N$ is also an admissible epimorphism (see the diagram sketch below). Dually, if $i: M' \rightarrowtail M$ occurs as the first arrow of a sequence in E (it is an admissible monomorphism) and $g: M' \to N$ is any arrow, then their pushout exists and its coprojection from $N$ is also an admissible monomorphism. (We say that the admissible epimorphisms are "stable under pullback", resp. the admissible monomorphisms are "stable under pushout".);
Admissible monomorphisms are kernels of their corresponding admissible epimorphisms, and dually. The composition of two admissible monomorphisms is admissible (likewise admissible epimorphisms);
Suppose $f: M \to M'$ is a map in E which admits a kernel in E, and suppose $g: M'' \to M$ is any map such that the composition $fg$ is an admissible epimorphism. Then so is $f$. Dually, if $f: M' \to M$ admits a cokernel and $g: M \to M''$ is such that $gf$ is an admissible monomorphism, then so is $f$.
Admissible monomorphisms are generally denoted $\rightarrowtail$ and admissible epimorphisms are denoted $\twoheadrightarrow$. These axioms are not minimal; in fact, the last one has been shown to be redundant.
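The pullback axiom can be pictured as a square. The following LaTeX sketch (using the tikz-cd package, with illustrative object names) shows an admissible epimorphism, marked with a two-headed arrow, being pulled back along an arbitrary map:

```latex
% Stability of admissible epimorphisms under pullback; requires
% \usepackage{tikz-cd}. Object names are illustrative.
\begin{tikzcd}
M \times_{M''} N \arrow[r] \arrow[d, two heads] & M \arrow[d, two heads, "q"] \\
N \arrow[r, "f"'] & M''
\end{tikzcd}
```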
One can speak of an exact functor between exact categories exactly as in the case of exact functors of abelian categories: an exact functor $F$ from an exact category D to another one E is an additive functor such that if

$$M' \rightarrowtail M \twoheadrightarrow M''$$

is exact in D, then

$$F(M') \rightarrowtail F(M) \twoheadrightarrow F(M'')$$

is exact in E. If D is a subcategory of E, it is an exact subcategory if the inclusion functor is fully faithful and exact.
Motivation
Exact categories come from abelian categories in the following way. Suppose A is abelian and let E be any strictly full additive subcategory which is closed under taking extensions in the sense that given an exact sequence

$$0 \to M' \to M \to M'' \to 0$$

in A, then if $M'$ and $M''$ are in E, so is $M$. We can take the class E to be simply the sequences in E which are exact in A; that is,

$$M' \rightarrowtail M \twoheadrightarrow M''$$

is in E iff

$$0 \to M' \to M \to M'' \to 0$$

is exact in A. Then E is an exact category in the above sense. We verify the axioms:
E is closed under isomorphisms and contains the split exact sequences: these are true by definition, since in an abelian category, any sequence isomorphic to an exact one is also exact, and since the split sequences are always exact in A.
Admissible epimorphisms (respectively, admissible monomorphisms) are stable under pullbacks (resp. pushouts): given an exact sequence of objects in E,

$$0 \to M' \to M \to M'' \to 0,$$

and a map $f: N \to M''$ with $N$ in E, one verifies that the following sequence is also exact; since E is stable under extensions, this means that $M \times_{M''} N$ is in E:

$$0 \to M' \to M \times_{M''} N \to N \to 0.$$
Every admissible monomorphism is the kernel of its corresponding admissible epimorphism, and vice versa: this is true as morphisms in A, and E is a full subcategory.
If $f: M \to M'$ admits a kernel in E and $g: M'' \to M$ is such that $fg$ is an admissible epimorphism, then so is $f$.
Conversely, if E is any exact category, we can take A to be the category of left-exact functors from E into the category of abelian groups, which is itself abelian and in which E is a natural subcategory (via the Yoneda embedding, since Hom is left exact), stable under extensions, and in which a sequence is in E if and only if it is exact in A.
Examples
Any abelian category is exact in the obvious way, according to the construction of #Motivation.
A less trivial example is the category Abtf of torsion-free abelian groups, which is a strictly full subcategory of the (abelian) category Ab of all abelian groups. It is closed under extensions: if

$$0 \to A \to B \to C \to 0$$

is a short exact sequence of abelian groups in which $A$ and $C$ are torsion-free, then $B$ is seen to be torsion-free by the following argument: if $b \in B$ is a torsion element, then its image in $C$ is zero, since $C$ is torsion-free. Thus $b$ lies in the kernel of the map to $C$, which is $A$, but that is also torsion-free, so $b = 0$. By the construction of #Motivation, Abtf is an exact category; an example of an exact sequence in it is

$$0 \to \Omega_{\mathrm{ex}} \to \Omega_{\mathrm{cl}} \to H^1_{\mathrm{dR}}(S^1) \to 0,$$

inspired by de Rham cohomology ($\Omega_{\mathrm{cl}}$ and $\Omega_{\mathrm{ex}}$ being the closed and exact differential forms on the circle group); in particular, it is known that the cohomology group is isomorphic to the real numbers. This category is not abelian.
The following example is in some sense complementary to the above. Let Abt be the category of abelian groups with torsion (and also the zero group). This is additive and a strictly full subcategory of Ab again. It is even easier to see that it is stable under extensions: if

$$0 \to A \to B \to C \to 0$$

is an exact sequence in which $A$ and $C$ have torsion, then $B$ naturally has all the torsion elements of $A$. Thus it is an exact category.
References
Additive categories
Homological algebra | Exact category | [
"Mathematics"
] | 1,204 | [
"Mathematical structures",
"Additive categories",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
10,988,372 | https://en.wikipedia.org/wiki/Statistical%20interference | When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much.
This technique can be used for geometric dimensioning of mechanical parts, determining when an applied load exceeds the strength of a structure, and in many other situations. This type of analysis can also be used to estimate the probability of failure or the failure rate.
Dimensional interference
Mechanical parts are usually designed to fit precisely together. For example, if a shaft is designed to have a "sliding fit" in a hole, the shaft must be a little smaller than the hole. (Traditional tolerancing may suggest that all dimensions fall within the intended tolerances. A process capability study of actual production, however, may reveal normal distributions with long tails.) Both the shaft and hole sizes will usually form normal distributions with some average (arithmetic mean) and standard deviation.
With two such normal distributions, a distribution of interference can be calculated. The derived distribution will also be normal, and its average will be equal to the difference between the means of the two base distributions. The variance of the derived distribution will be the sum of the variances of the two base distributions.
This derived distribution can be used to determine how often the difference in dimensions will be less than zero (i.e., the shaft cannot fit in the hole), how often the difference will be less than the required sliding gap (the shaft fits, but too tightly), and how often the difference will be greater than the maximum acceptable gap (the shaft fits, but not tightly enough).
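As a sketch of this calculation, the following Python snippet (using scipy; all dimensions and fit limits are assumed, illustrative values in millimetres) derives the clearance distribution and the three probabilities just described:

```python
from scipy.stats import norm

# Interference between a shaft and a hole, both normally distributed.
# All dimensions and fit limits are assumed, illustrative values (mm).

hole_mean, hole_sd = 10.00, 0.01
shaft_mean, shaft_sd = 9.97, 0.01

# The clearance (hole minus shaft) is also normal: means subtract,
# variances add.
gap_mean = hole_mean - shaft_mean
gap_sd = (hole_sd**2 + shaft_sd**2) ** 0.5
gap = norm(gap_mean, gap_sd)

min_gap, max_gap = 0.005, 0.055   # required sliding-fit limits (assumed)

print(f"P(shaft does not fit) = {gap.cdf(0.0):.4%}")
print(f"P(fit too tight)      = {gap.cdf(min_gap) - gap.cdf(0.0):.4%}")
print(f"P(fit too loose)      = {gap.sf(max_gap):.4%}")
```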
Physical property interference
Physical properties and the conditions of use are also inherently variable. For example, the applied load (stress) on a mechanical part may vary. The measured strength of that part (tensile strength, etc.) may also be variable. The part will break when the stress exceeds the strength.
With two normal distributions, the statistical interference may be calculated as above. (This problem is also workable for transformed units such as the log-normal distribution). With other distributions, or combinations of different distributions, a Monte Carlo method or simulation is often the most practical way to quantify the effects of statistical interference.
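A minimal Monte Carlo sketch of the stress-strength case, with assumed distribution choices and parameters, might look like this:

```python
import numpy as np

# Monte Carlo estimate of failure probability when the stress and strength
# distributions are of different families (here a normal stress and a
# log-normal strength; all parameters are assumed, illustrative values).

rng = np.random.default_rng(42)
n = 1_000_000

stress = rng.normal(300.0, 30.0, n)                # applied stress, MPa
strength = rng.lognormal(np.log(400.0), 0.08, n)   # part strength, MPa

p_fail = np.mean(stress > strength)                # fraction of failures
print(f"estimated failure probability: {p_fail:.5f}")
```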
See also
Interference fit
Interval estimation
Joint probability distribution
Probabilistic design
Process capability
Reliability engineering
Specification
Tolerance (engineering)
References
Statistical theory
Survival analysis
Reliability engineering
Probability theory
Applied probability | Statistical interference | [
"Mathematics",
"Engineering"
] | 519 | [
"Applied mathematics",
"Systems engineering",
"Reliability engineering",
"Applied probability"
] |
10,989,135 | https://en.wikipedia.org/wiki/Electroanalytical%20methods | Electroanalytical methods are a class of techniques in analytical chemistry which study an analyte by measuring the potential (volts) and/or current (amperes) in an electrochemical cell containing the analyte. These methods can be broken down into several categories depending on which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), amperometry (electric current is the analytical signal), coulometry (charge passed during a certain time is recorded).
Potentiometry
Potentiometry passively measures the potential of a solution between two electrodes, affecting the solution very little in the process. One electrode is called the reference electrode and has a constant potential, while the other one is an indicator electrode whose potential changes with the sample's composition. Therefore, the difference in potential between the two electrodes gives an assessment of the sample's composition. In fact, since the potentiometric measurement is a non-destructive measurement, assuming that the electrode is in equilibrium with the solution, we are measuring the solution's potential.
Potentiometry usually uses indicator electrodes made selectively sensitive to the ion of interest, such as fluoride in fluoride selective electrodes, so that the potential solely depends on the activity of this ion of interest.
The time it takes the electrode to establish equilibrium with the solution will affect the sensitivity and accuracy of the measurement. In aquatic environments, platinum is often used due to its high electron-transfer kinetics, although an electrode made from several metals can be used to enhance the electron-transfer kinetics. The most common potentiometric electrode is by far the glass-membrane electrode used in a pH meter.
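For an ideally behaving indicator electrode, the measured potential varies linearly with the logarithm of the ion activity (a Nernstian response). The following Python sketch illustrates that idealized behaviour; the cell constant and activities are assumed values, and real electrodes deviate from this ideal:

```python
import math

# Idealized (Nernstian) response of an ion-selective electrode: the
# potential varies linearly with the logarithm of the ion activity.
# E0 and the activities are assumed, illustrative values.

R, F = 8.314, 96485.0   # gas constant (J/mol/K), Faraday constant (C/mol)
T = 298.15              # temperature, K
z = -1                  # charge of the ion of interest (e.g. fluoride)
E0 = 0.200              # cell constant, V (assumed)

slope = 2.303 * R * T / (z * F)   # about -59.2 mV per decade for z = -1
for activity in (1e-5, 1e-4, 1e-3):
    E = E0 + slope * math.log10(activity)
    print(f"a = {activity:.0e}:  E = {E * 1000:7.1f} mV")
```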
A variant of potentiometry is chronopotentiometry, which consists in applying a constant current and measuring the potential as a function of time. The technique was initiated by Weber.
Amperometry
Amperometry is the collective term for electrochemical techniques in which a current is measured as a function of an independent variable, typically time (in chronoamperometry) or electrode potential (in voltammetry). Chronoamperometry is the technique in which the current is measured, at a fixed potential, at different times since the start of polarisation. Chronoamperometry is typically carried out in unstirred solution and at a fixed electrode, i.e., under experimental conditions that avoid convection as a mode of mass transfer to the electrode. Voltammetry, on the other hand, is a subclass of amperometry in which the current is measured while the potential applied to the electrode is varied. The waveform that describes how the potential varies as a function of time defines the different voltammetric techniques.
Chronoamperometry
In a chronoamperometry, a sudden step in potential is applied at the working electrode and the current is measured as a function of time. Since this is not an exhaustive method, microelectrodes are used and the amount of time used to perform the experiments is usually very short, typically 20 ms to 1 s, as to not consume the analyte.
Voltammetry
Voltammetry consists of applying a constant and/or varying potential at an electrode's surface and measuring the resulting current with a three-electrode system. This method can reveal the reduction potential of an analyte and its electrochemical reactivity. This method, in practical terms, is non-destructive, since only a very small amount of the analyte is consumed at the two-dimensional surface of the working and auxiliary electrodes. In practice, the analyte solution is usually disposed of, since it is difficult to separate the analyte from the bulk electrolyte and the experiment requires only a small amount of analyte. A normal experiment may involve 1-10 mL of solution with an analyte concentration between 1 and 10 mmol/L. More advanced voltammetric techniques can work with microliter volumes and down to nanomolar concentrations. Chemically modified electrodes are employed for the analysis of organic and inorganic samples.
Polarography
Polarography is a subclass of voltammetry that uses a dropping mercury electrode as the working electrode.
Coulometry
Coulometry uses applied current or potential to convert an analyte from one oxidation state to another completely. In these experiments, the total current passed is measured directly or indirectly to determine the number of electrons passed. Knowing the number of electrons passed can indicate the concentration of the analyte or when the concentration is known, the number of electrons transferred in the redox reaction. Typical forms of coulometry include bulk electrolysis, also known as Potentiostatic coulometry or controlled potential coulometry, as well as a variety of coulometric titrations.
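The underlying arithmetic is Faraday's law of electrolysis, Q = nFN; the following Python sketch converts an assumed total charge into moles of analyte:

```python
# In coulometry the total charge passed fixes the amount of analyte
# converted, through Faraday's law: Q = n * F * N, where n is the number
# of electrons per molecule, F the Faraday constant and N the moles
# converted. The charge and n below are assumed, illustrative values.

F = 96485.0     # Faraday constant, C/mol
Q = 12.5        # total charge passed, C (assumed)
n = 2           # electrons transferred per molecule (assumed)

N = Q / (n * F)   # moles of analyte converted
print(f"analyte converted: {N:.3e} mol")
```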
References
Bibliography | Electroanalytical methods | [
"Chemistry"
] | 1,000 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
10,990,255 | https://en.wikipedia.org/wiki/Oocyte%20cryopreservation | Oocyte cryopreservation is a procedure to preserve a woman's eggs (oocytes). The technique is often used to delay pregnancy. At the time pregnancy is desired, the eggs can be thawed, fertilized, and transferred to the uterus as embryos. Many studies have suggested infertility problems as germ cell deterioration related to aging. The procedure's success rate varies according to the woman's age (with higher odds of success in younger women), health, and genetic factors. The first human birth of oocyte cryopreservation was reported in 1986.
Indications
Women diagnosed with cancer who have not yet begun chemotherapy or radiotherapy can benefit from oocyte cryopreservation. Chemotherapy and radiotherapy are toxic to oocytes, reducing the number of viable eggs, so egg freezing can be used to preserve eggs before such treatment begins.
Those undergoing treatment with assisted reproductive technologies who do not consider embryo freezing an option often look to oocyte cryopreservation as an alternative.
Women who would like to preserve their future ability to have children often use oocyte cryopreservation to freeze their eggs and delay childbearing, allowing them to have children later in life.
Women with a family history of early menopause may have an interest in fertility preservation to preserve viable eggs that could deteriorate at an earlier onset.
Those with ovarian diseases such as Polycystic Ovary Syndrome could opt for this method.
Oocyte cryopreservation is one of many options for individuals undergoing IVF; some may prefer it over embryo freezing, which is otherwise the primary procedure.
Method
The egg retrieval process for oocyte cryopreservation is the same as that for in vitro fertilization (IVF). This includes one to several weeks of hormone injections that stimulate ovaries to ripen multiple eggs. When the eggs are mature, final maturation induction is performed. The eggs are subsequently removed from the body by transvaginal oocyte retrieval. The procedure is usually conducted under sedation. The eggs are immediately frozen.
The egg is the largest cell in the human body and contains a large amount of water. When the egg is frozen, the ice crystals that form can destroy the integrity of the cell. To prevent this, the egg must be dehydrated before freezing. This is done using cryoprotectants which replace most of the water within the cell and inhibit the formation of ice crystals.
Eggs (oocytes) are frozen using either a controlled rate, a slow-cooling method, or a newer flash-freezing process known as vitrification. Vitrification is much faster but requires higher concentrations of cryoprotectants to be added. The result of vitrification is a solid glass-like cell, free of ice crystals. Vitrification has been developed and successfully applied in IVF treatment with the first live birth following the vitrification of oocytes achieved in 1999. Vitrification eliminates ice formation inside and outside of oocytes on cooling, during cryostorage, and as the oocytes warm. Vitrification is associated with higher survival rates and enhanced development compared to slow-cooling when applied to oocytes in metaphase II. Vitrification has also become the method of choice for pronuclear oocytes, although prospective randomized controlled trials are still lacking.
During the freezing process, the zona pellucida, or shell of the egg, can be modified preventing fertilization. Thus, when eggs are thawed and pregnancy is desired, a fertilization procedure known as ICSI (Intracytoplasmic Sperm Injection) is performed by an embryologist whereby sperm is injected directly into the egg with a needle rather than allowing sperm to penetrate naturally by placing it around the egg in a dish.
Immature oocytes have been grown until maturation in vitro, but it is not yet clinically available.
Success rates
Early work investigating the percentage of transferred cycles showed lower frozen cycles compared with fresh cycles (approx. 30% and 50%), however more recent studies show "fertilization and pregnancy rates are similar to IVF/ICSI (in vitro fertilization/intracytoplasmic sperm injection) with fresh oocytes when [both] vitrified and warmed oocytes are used as part of IVF/ICSI". These studies were completed mostly in young patients.
In a 2013 meta-analysis of more than 2,200 cycles using frozen eggs, scientists found the probability of having a live birth after three cycles was 31.5% for women who froze their eggs at age 25, 25.9% at age 30, 19.3% at age 35, and 14.8% at age 40.
Studies have shown that the rate of birth defects and chromosomal defects when using cryopreserved oocytes is consistent with that of natural conception.
Recent modifications in the protocol regarding cryoprotectant composition, temperature, and storage methods have had a large impact on the technology, and while it is still considered an experimental procedure, it is quickly becoming an option for women. Slow freezing traditionally has been the most commonly used method to cryopreserve oocytes and is the method that has resulted in the most babies born from frozen oocytes worldwide. Ultra-rapid freezing or vitrification represents a potential alternative freezing method.
In the fall of 2009, The American Society for Reproductive Medicine (ASRM) issued an opinion on oocyte cryopreservation concluding that the science holds "great promise for applications in oocyte donation and fertility preservation" because recent laboratory modifications have resulted in improved oocyte survival, fertilization, and pregnancy rates from frozen-thawed oocytes in IVF. The ASRM noted that from the limited research performed to date, there does not appear to be an increase in chromosomal abnormalities, birth defects, or developmental deficits in the children born from cryopreserved oocytes. The ASRM recommended that pending further research, oocyte cryopreservation should be introduced into clinical practice on an investigational basis and under the guidance of an Institutional Review Board (IRB). As with any new technology, safety and efficacy must be evaluated and demonstrated through continued research.
In October 2012, the ASRM lifted the experimental label from the technology for women with a medical need, citing success rates in live births, among other findings. However, they also warned against using it only to delay child-bearing.
In 2014, a Cochrane systematic review was published. It compared vitrification (the newest technology) versus slow freezing (the oldest one). Key results of that review showed that the clinical pregnancy rate was almost 4 times higher in the oocyte vitrification group than in the slow-freezing group, with moderate quality of evidence.
Immature oocytes have been grown until maturation in vitro at a 10% survival rate, but no experiment has been performed to fertilize such oocytes.
Cost
The cost of the egg-freezing procedure (without embryo transfer) in the United States, the United Kingdom, and other European countries varies in between $5,000 and $12,000. The cost of egg storage can vary from $100 to more than $1,000. Provisional health programs do not cover social egg freezing. Furthermore, no provinces provide funding for IVF after social egg freezing.
Medical tourism may have lower costs than performing egg freezing in high-cost countries like the US. Some well-established medical tourism and IVF countries such as the Czech Republic, Ukraine, Greece and Cyprus offer egg freezing at competitive prices. It is a lower-cost alternative to typical US options for egg freezing. Spain and the Czech Republic are popular destinations for this treatment.
Iranian insurance started to pay insurance incentives for women freezing their eggs in 2024.
History
Cryopreservation itself has always played a central role in assisted reproductive technology. With the first cryopreservation of sperm in 1953 and of embryos twenty five years later, these techniques have become routine. Dr. Christopher Chen of Singapore reported the world's first pregnancy in 1986 using previously frozen oocytes. This report stood alone for several years followed by studies reporting success rates using frozen eggs to be much lower than those of traditional in vitro fertilization (IVF) techniques using fresh oocytes. Providing the lead to a new direction in cryobiology, Dr. Lilia Kuleshova was the first scientist to achieve vitrification of human oocytes that resulted in a live birth in 1999. Articles published in the journal Fertility and Sterility reported that pregnancy rates using frozen oocytes that were comparable to those of cryopreserved embryos and even fresh embryos.
Elective oocyte cryopreservation
Elective oocyte cryopreservation, also known as social egg freezing, is non-essential egg freezing to preserve fertility for delayed child-bearing when natural conception becomes more problematic. The frequency of this procedure has steadily increased since October 2012 when the American Society for Reproductive Medicine (ASRM) lifted the 'experimental' label from the process. There was a spike in interest in 2014 when global corporations Apple and Meta Platforms announced they were going to pay for the procedure of egg freezing as a benefit for their female employees. This announcement was controversial as some women found it empowering and practical, while others viewed the message these companies were sending to women trying to have a successful long-term career and a family as harmful and alienating. A string of "egg-freezing parties" hosted by third-party companies have also helped popularize the concept among young women.
Social science research suggests that women use elective egg freezing to disentangle their search for a romantic partner from their plans to have children.
In 2016, then US Secretary of Defense Ash Carter announced that the Department of Defense would cover the cost of freezing sperm or eggs through a pilot program for active duty service members, to preserve their ability to start a family even if they sustain certain combat injuries.
There are still warnings for women using this technology to fall pregnant at an older age as the risk of pregnancy complications increases with a mother's age. However, studies have shown that the risk of congenital abnormalities in babies born from frozen oocytes is not increased further when compared to naturally conceived babies.
Risks
The risks associated with egg freezing relate to the administration of medications to stimulate the ovaries and the procedure of egg collection.
The main risk associated with the administration of medications to stimulate the ovaries is ovarian hyperstimulation syndrome (OHSS). This is a transient syndrome in which there is increased permeability of the blood vessels, resulting in fluid loss from the vessels into the surrounding tissues. In most cases, the syndrome is mild, with symptoms such as abdominal bloating, mild discomfort, and nausea. In moderate OHSS there is increased abdominal bloating resulting in pain and vomiting. Reduced urine output may occur. Severe OHSS is serious with even further bloating so that the abdomen appears very distended, and thirst and dehydration occur with minimal urine output. There may be shortness of breath and there is an increased risk of DVT and/or pulmonary embolism. Kidney and liver function can be compromised. Hospitalization under specialist care is indicated. There is no treatment for OHSS, supportive care until the symptoms naturally resolve is required. If an hCG trigger has been used with no embryo transfer, OHSS usually resolves in 7–10 days. If an embryo transfer has occurred and pregnancy results, the symptoms may persist for several weeks. Doctors reduce the likelihood of OHSS occurring by decreasing the doses of gonadotropins (FSH) administered, using a GnRH agonist trigger (instead of an hCG trigger), and freezing all embryos for transfer rather than conducting a fresh embryo transfer.
Risks associated with the egg collection procedure relate to bleeding and infection. The collection procedure involves passing a needle through the wall of the vagina into the highly vascular, stimulated ovaries, so a small amount of bleeding is inevitable. In rare cases, there is excessive bleeding into the abdomen requiring surgery. Women undergoing the procedure must advise their specialist of all medications, including herbal supplements, they are using, so the specialist can assess whether any of these will affect the ability of the blood to clot. As for infection, provided the woman does not have additional risk factors (a suppressed immune system, use of immunosuppressive medications, or large ovarian endometriomas), the risk of infection is very low.
An additional risk arising from the temporary enlargement of the ovaries is ovarian torsion, in which an enlarged ovary twists around on itself, cutting off its blood supply. The condition is excruciatingly painful and requires urgent surgery to prevent ischemic loss of the ovary.
See also
Egg donation
Semen cryopreservation
In vitro fertilization
References
External links
How egg freezing works, Human Fertilisation and Embryology Authority
National Cancer Institute – Sexuality and Reproductive Issues
Mature oocyte cryopreservation: a guideline American Society for Reproductive Medicine (PDF)
American Society for Reproductive Medicine
World Association of Reproductive Medicine
Assisted reproductive technology
Cryopreservation
Human embryology | Oocyte cryopreservation | [
"Chemistry",
"Biology"
] | 2,758 | [
"Cryopreservation",
"Cryobiology",
"Assisted reproductive technology",
"Medical technology"
] |
10,993,126 | https://en.wikipedia.org/wiki/Baldwin%E2%80%93Lomax%20model | The Baldwin–Lomax model is a 0-equation turbulence model used in computational fluid dynamics analysis of turbulent boundary layer flows.
External links
Baldwin-Lomax model at cfd-online.com
Fluid dynamics
Mathematical modeling | Baldwin–Lomax model | [
"Chemistry",
"Mathematics",
"Engineering"
] | 45 | [
"Mathematical modeling",
"Applied mathematics",
"Chemical engineering",
"Applied mathematics stubs",
"Piping",
"Fluid dynamics"
] |
10,993,199 | https://en.wikipedia.org/wiki/Cebeci%E2%80%93Smith%20model | The Cebeci–Smith model, developed by Tuncer Cebeci and Apollo M. O. Smith in 1967, is a 0-equation eddy viscosity model used in computational fluid dynamics analysis of turbulence in boundary layer flows. The model gives eddy viscosity, , as a function of the local boundary layer velocity profile. The model is suitable for high-speed flows with thin attached boundary layers, typically present in aerospace applications. Like the Baldwin-Lomax model, it is not suitable for large regions of flow separation and significant curvature or rotation. Unlike the Baldwin-Lomax model, this model requires the determination of a boundary layer edge.
Equations
In a two-layer model, the boundary layer is considered to comprise two layers: inner (close to the surface) and outer. The eddy viscosity is calculated separately for each layer and combined using:

$$\mu_t = \begin{cases} \mu_{t,\text{inner}} & \text{if } y \le y_\text{crossover} \\ \mu_{t,\text{outer}} & \text{if } y > y_\text{crossover} \end{cases}$$

where $y_\text{crossover}$ is the smallest distance from the surface where $\mu_{t,\text{inner}}$ is equal to $\mu_{t,\text{outer}}$.
The inner-region eddy viscosity is given by:

$$\mu_{t,\text{inner}} = \rho \ell^2 \left[\left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2\right]^{1/2}$$

where

$$\ell = \kappa y \left(1 - e^{-y^+/A^+}\right)$$

with the von Karman constant $\kappa$ usually being taken as 0.4, and with

$$A^+ = 26\left[1 + y\,\frac{dP/dx}{\rho u_\tau^2}\right]^{-1/2}$$
The eddy viscosity in the outer region is given by:

$$\mu_{t,\text{outer}} = \alpha \rho U_e \delta_v^* F_K(y;\delta)$$

where $\alpha = 0.0168$, $\delta_v^*$ is the displacement thickness, given by

$$\delta_v^* = \int_0^\delta \left(1 - \frac{u}{U_e}\right)\,dy$$

and $F_K$ is the Klebanoff intermittency function given by

$$F_K(y;\delta) = \left[1 + 5.5\left(\frac{y}{\delta}\right)^6\right]^{-1}$$
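As a rough illustration of how these pieces fit together, the sketch below evaluates the inner and outer eddy viscosities and switches between them at the crossover point. It assumes a one-seventh-power-law velocity profile and made-up flow parameters, and neglects the $\partial v/\partial x$ term; it is a minimal numerical sketch under those assumptions, not a validated CFD implementation.

```python
import numpy as np

kappa, alpha = 0.40, 0.0168          # von Karman constant, outer coefficient
rho, nu = 1.2, 1.5e-5                # density [kg/m^3], kinematic viscosity [m^2/s]
delta, U_e = 0.01, 30.0              # boundary-layer thickness [m], edge velocity [m/s]
u_tau, dpdx = 1.2, 0.0               # assumed friction velocity; zero pressure gradient

y = np.linspace(1e-6, delta, 500)
u = U_e * (y / delta) ** (1.0 / 7.0)          # assumed 1/7-power-law profile
dudy = np.gradient(u, y)

# Inner layer: mixing length with Van Driest damping
y_plus = y * u_tau / nu
A_plus = 26.0 / np.sqrt(1.0 + y * dpdx / (rho * u_tau**2))
ell = kappa * y * (1.0 - np.exp(-y_plus / A_plus))
mu_t_inner = rho * ell**2 * np.abs(dudy)      # dv/dx term neglected for a 1-D profile

# Outer layer: Clauser-type formula with Klebanoff intermittency
f = 1.0 - u / U_e
delta_star = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))   # trapezoid rule
F_K = 1.0 / (1.0 + 5.5 * (y / delta) ** 6)
mu_t_outer = alpha * rho * U_e * delta_star * F_K

# Combine: inner value up to the first crossover, outer value beyond it
crossover = np.argmax(mu_t_inner > mu_t_outer)
mu_t = np.where(np.arange(y.size) < crossover, mu_t_inner, mu_t_outer)
print(f"crossover at y ~ {y[crossover]:.2e} m, max eddy viscosity ~ {mu_t.max():.2e} Pa*s")
```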
References
Smith, A.M.O. and Cebeci, T., 1967. Numerical solution of the turbulent boundary layer equations. Douglas aircraft division report DAC 33735
Cebeci, T. and Smith, A.M.O., 1974. Analysis of turbulent boundary layers. Academic Press,
Wilcox, D.C., 1998. Turbulence Modeling for CFD. , 2nd Ed., DCW Industries, Inc.
External links
This article was based on the Cebeci Smith model article in CFD-Wiki
Turbulence models
Fluid dynamics
Mathematical modeling | Cebeci–Smith model | [
"Chemistry",
"Mathematics",
"Engineering"
] | 371 | [
"Mathematical modeling",
"Applied mathematics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
10,993,822 | https://en.wikipedia.org/wiki/Plumbane | Plumbane is an inorganic chemical compound with the chemical formula PbH. It is a colorless gas. It is a metal hydride and group 14 hydride composed of lead and hydrogen. Plumbane is not well characterized or well known, and it is thermodynamically unstable with respect to the loss of a hydrogen atom. Derivatives of plumbane include lead tetrafluoride, PbF, and tetraethyllead, (CHCH)Pb.
History
Until recently, it was uncertain whether plumbane had ever actually been synthesized, although the first reports date back to the 1920s; in 1963, Saalfeld and Svec reported the observation of PbH₄ by mass spectrometry. Plumbane has repeatedly been the subject of Dirac–Hartree–Fock relativistic calculation studies, which investigate the stabilities, geometries, and relative energies of hydrides of the formula MH₄ or MH₂.
Properties
Plumbane is an unstable colorless gas and the heaviest group 14 hydride. It has a tetrahedral (Td) structure with an equilibrium distance between lead and hydrogen of 1.73 Å. By weight, plumbane is 1.91% hydrogen and 98.09% lead. In plumbane, the formal oxidation states of hydrogen and lead are +1 and −4, respectively, because the electronegativity of lead(IV) is higher than that of hydrogen. The stability of hydrides MH₄ (M = C–Pb) decreases as the atomic number of M increases.
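The quoted mass fractions follow directly from standard atomic weights; a quick check (an illustrative script, not from the source):

```python
# Check of the quoted composition of PbH4 (1.91% H, 98.09% Pb by weight),
# using standard atomic weights; rounding explains any tiny discrepancy.
M_Pb, M_H = 207.2, 1.008             # g/mol
M_PbH4 = M_Pb + 4 * M_H

w_H = 4 * M_H / M_PbH4
w_Pb = M_Pb / M_PbH4
print(f"H: {100 * w_H:.2f}%   Pb: {100 * w_Pb:.2f}%")   # -> H: 1.91%  Pb: 98.09%
```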
Preparation
Early studies of PbH₄ revealed that the molecule is unstable as compared to its lighter congeners silane, germane, and stannane. It cannot be made by methods used to synthesize GeH₄ or SnH₄.
In 1999, plumbane was synthesized from lead(II) nitrate, Pb(NO₃)₂, and sodium borohydride, NaBH₄. A non-nascent mechanism for plumbane synthesis was reported in 2005.
In 2003, Wang and Andrews carefully studied the preparation of PbH₄ by laser ablation and additionally identified the infrared (IR) bands.
Congeners
Congeners of plumbane include:
Methane, CH₄
Silane, SiH₄
Germane, GeH₄
Stannane, SnH₄
References
Metal hydrides
Lead compounds | Plumbane | [
"Chemistry"
] | 484 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
5,611,195 | https://en.wikipedia.org/wiki/Uranium-232 | Uranium-232 () is an isotope of uranium. It has a half-life of around
69 years and is a side product in the thorium cycle. It has been cited as an obstacle to nuclear proliferation using 233U as the fissile material, because the intense gamma radiation emitted by 208Tl (a daughter of 232U, produced relatively quickly) makes the 233U contaminated with it more difficult to handle.
Production of 233U (through the neutron irradiation of 232Th) invariably produces small amounts of 232U as an impurity, because of parasitic (n,2n) reactions on uranium-233 itself, or on protactinium-233, or on thorium-232:
232Th (n,γ) 233Th (β−) 233Pa (β−) 233U (n,2n) 232U
232Th (n,γ) 233Th (β−) 233Pa (n,2n) 232Pa (β−) 232U
232Th (n,2n) 231Th (β−) 231Pa (n,γ) 232Pa (β−) 232U
Another channel involves neutron capture reaction on small amounts of thorium-230, which is a tiny fraction of natural thorium present due to the decay of uranium-238:
230Th (n,γ) 231Th (β−) 231Pa (n,γ) 232Pa (β−) 232U
The decay chain of 232U quickly yields strong gamma radiation emitters:
232U (α, 68.9 years)
228Th (α, 1.9 years)
224Ra (α, 3.6 days, 0.24 MeV) (from this point onwards, the decay chain is identical to that of 232Th; thorium-232 is nevertheless much less dangerous because its extremely long half-life of about 14 billion years means that not as much of its dangerous daughters builds up)
220Rn (α, 55 s, 0.54 MeV)
216Po (α, 0.15 s)
212Pb (β−, 10.64 h)
212Bi (α, 61 min, 0.78 MeV)
208Tl (β−, 3 min, 2.6 MeV) (35.94% branching ratio)
208Pb (stable)
This makes manual handling in a glove box with only light shielding (as commonly done with plutonium) too hazardous, (except possibly in a short period immediately following chemical separation of the uranium from its decay products) and instead requiring remote manipulation for fuel fabrication.
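To make the "produced relatively quickly" point concrete, the sketch below applies the two-member Bateman solution to estimate the ingrowth of 228Th activity in freshly separated 232U; treating the short-lived members below 228Th (and hence the 208Tl gamma emission) as effectively tracking the 228Th activity is an assumption of this illustration.

```python
import numpy as np

LN2 = np.log(2.0)
lam_U = LN2 / 68.9           # 232U decay constant [1/yr]
lam_Th = LN2 / 1.9           # 228Th decay constant [1/yr]

def th228_activity_ratio(t_years):
    """Activity of 228Th relative to the initial 232U activity (Bateman, 2 members)."""
    return lam_Th / (lam_Th - lam_U) * (np.exp(-lam_U * t_years)
                                        - np.exp(-lam_Th * t_years))

# The gamma hazard builds up within a few years of chemical separation
for t in (0.5, 1, 2, 5, 10):
    print(f"t = {t:4} yr : A(228Th)/A0(232U) = {th228_activity_ratio(t):.2f}")
```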
Unusually for an isotope with even mass number, 232U has a significant neutron absorption cross section for fission (thermal neutrons , resonance integral ) as well as for neutron capture (thermal , resonance integral ).
References
Isotopes of uranium
Actinides
Nuclear materials
Fissile materials | Uranium-232 | [
"Physics",
"Chemistry"
] | 581 | [
"Isotopes",
"Fissile materials",
"Materials",
"Nuclear materials",
"Isotopes of uranium",
"Explosive chemicals",
"Matter"
] |
5,611,262 | https://en.wikipedia.org/wiki/Uranium%20in%20the%20environment | Uranium in the environment is a global health concern, and comes from both natural and man-made sources. Beyond naturally occurring uranium, mining, phosphates in agriculture, weapons manufacturing, and nuclear power are anthropogenic sources of uranium in the environment.
In the natural environment, radioactivity of uranium is generally low, but uranium is a toxic metal that can disrupt normal functioning of the kidney, brain, liver, heart, and numerous other systems. Chemical toxicity can cause public health issues when uranium is present in groundwater, especially if concentrations in food and water are increased by mining activity. The biological half-life (the average time it takes for the human body to eliminate half the amount in the body) for uranium is about 15 days.
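A minimal sketch of what a 15-day biological half-life implies for retention, assuming simple first-order elimination (a simplification; real uranium biokinetics are multi-compartment):

```python
# Fraction of an absorbed uranium intake remaining after t days,
# assuming a single ~15-day biological half-life applies uniformly.
T_HALF_DAYS = 15.0

def fraction_remaining(t_days):
    return 0.5 ** (t_days / T_HALF_DAYS)

for t in (15, 30, 60, 90):
    print(f"{t:3d} days: {100 * fraction_remaining(t):5.1f}% remaining")
```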
Uranium's radioactivity can present health and environmental issues in the case of nuclear waste produced by nuclear power plants or weapons manufacturing.
Uranium is weakly radioactive and remains so because of its long physical half-life (4.468 billion years for uranium-238). The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects.
Natural occurrence
Uranium is a naturally occurring element found at low levels within all rock, soil, and water. This is the highest-numbered element to be found naturally in significant quantities on Earth. According to the United Nations Scientific Committee on the Effects of Atomic Radiation the normal concentration of uranium in soil is 300 μg/kg to 11.7 mg/kg.
It is considered to be more plentiful than antimony, beryllium, cadmium, gold, mercury, silver, or tungsten and is about as abundant as tin, arsenic or molybdenum. It is found in many minerals including uraninite (the most common uranium ore), autunite, uranophane, torbernite, and coffinite. There are significant concentrations of uranium in some substances, such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores. (It is recovered commercially from these sources.) Coal fly ash from uranium-bearing coal is particularly rich in uranium, and there have been several proposals to "mine" this waste product for its uranium content. Because some of the ash produced in a coal power plant escapes through the smokestack, the radioactive contamination released by coal power plants in normal operation is actually higher than that of nuclear power plants.
Seawater contains about 3.3 parts per billion (3.3 μg/kg of uranium by weight or 3.3 micrograms per liter).
Sources of uranium
Mining and milling
Mining is the largest source of uranium contamination in the environment. Uranium milling creates radioactive waste in the form of tailings, which contain uranium, radium, and polonium. Consequently, uranium mining results in "the unavoidable radioactive contamination of the environment by solid, liquid and gaseous wastes".
Seventy percent of global uranium resources are on or adjacent to traditional lands belonging to Indigenous people, and perceived environmental risks associated with uranium mining have resulted in environmental conflicts involving multiple actors, in which local campaigns have become national or international debates.
Some of these environmental conflicts have limited uranium exploration. Incidents at Ranger Uranium Mine in the Northern Territory of Australia and disputes over Indigenous land rights led to increased opposition to development of the nearby Jabiluka deposits and suspension of that project in the early 2000s. Similarly, environmental damage from Uranium mining on traditional Navajo lands in the southwestern United States resulted in restrictions on additional mining in Navajo lands in 2005.
Occupational hazards
The radiation hazards of uranium mining and milling were not appreciated in the early years, resulting in workers being exposed to high levels of radiation. Inhalation of radon gas caused sharp increases in lung cancers among underground uranium miners employed in the 1940s and 1950s.
Military activity
Military activity is a source of uranium, especially at nuclear or munitions testing sites. Depleted uranium (DU) is a byproduct of uranium enrichment that is used for defensive armor plating and armor-piercing projectiles. Uranium contamination has been found at testing sites in the UK, in Kazakhstan, and in several countries as a result of DU munitions used in the Gulf War and the Yugoslav wars. During a three-week period of conflict in 2003 in Iraq, 1,000 to 2,000 tonnes of DU munitions were used.
Combustion and impact of DU munitions can produce aerosols that disperse uranium metal into the air and water where it can be inhaled or ingested by humans. A United Nations Environment Programme (UNEP) study has expressed concerns about groundwater contamination from these munitions. Studies of DU aerosol exposure suggest that uranium particles would quickly settle out of the air, and thus should not affect populations more than a few kilometres from target areas.
Nuclear energy and waste
The nuclear power industry is also a source of uranium in the environment in the form of radioactive waste or through nuclear accidents such as Three Mile Island or the Chernobyl disaster. Perceived risks of contamination associated with this industry contribute to the anti-nuclear movement.
In 2020, there were over 250,000 metric tons of high-level radioactive waste being stored globally in temporary containers. This waste is produced by nuclear power plants and weapons facilities, and is a serious human health and environmental issue. There are plans to permanently dispose of high-level waste in deep geological repositories, but none of these are operational. Corrosion of aging temporary containers has caused some waste to leak into the environment.
As spent uranium dioxide fuel is very insoluble in water, it is likely to release uranium (and fission products) even more slowly than borosilicate glass when in contact with water.
Health effects
Soluble uranium salts are toxic, though less so than those of other heavy metals such as lead or mercury. The organ which is most affected is the kidney. Soluble uranium salts are readily excreted in the urine, although some accumulation in the kidneys does occur in the case of chronic exposure. The World Health Organization has established a daily "tolerated intake" of soluble uranium salts for the general public of 0.5 μg/kg body weight (or 35 μg for a 70 kg adult): exposure at this level is not thought to lead to any significant kidney damage.
Tiron may be used to remove uranium from the human body, in a form of chelation therapy. Bicarbonate may also be used as uranium (VI) forms complexes with the carbonate ion.
Public health
Uranium mining produces toxic tailings that are radioactive and may contain other toxic elements such as radon. Dust and water leaving tailing sites may carry long-lived radioactive elements that enter water sources and the soil, increase background radiation, and eventually be ingested by humans and animals. A 2013 analysis in a medical journal found that, "The effects of all these sources of contamination on human health will be subtle and widespread, and therefore difficult to detect both clinically and epidemiologically." A 2019 analysis of the global uranium industry said that the industry was shifting mining activities toward the Global South where environmental regulations are typically less stringent; and that people in impacted communities would "surely experience adverse environmental consequences" and public health issues arising from mining activities carried out by powerful multi-national corporations or mining companies based in foreign countries.
Cancer
In 1950, the US Public Health service began a comprehensive study of uranium miners, leading to the first publication of a statistical correlation between cancer and uranium mining, released in 1962. The federal government eventually regulated the standard amount of radon in mines, setting the level at 0.3 WL on January 1, 1969.
Out of 69 present and former uranium milling sites in 12 states, 24 have been abandoned, and are the responsibility of the US Department of Energy. Accidental releases from uranium mills include the 1979 Church Rock uranium mill spill in New Mexico, called the largest accident of nuclear-related waste in US history, and the 1986 Sequoyah Corporation Fuels Release in Oklahoma.
In 1990, Congress passed the Radiation Exposure Compensation Act (RECA), granting reparations for those affected by mining, with amendments passed in 2000 to address criticisms with the original act.
Depleted uranium exposure studies
The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects. Normal functioning of the kidney, brain, liver, heart, and numerous other systems can be affected by uranium exposure, because uranium is a toxic metal. Some people have raised concerns about the use of DU munitions because of its mutagenicity, teratogenicity in mice, neurotoxicity, and its suspected carcinogenic potential. Additional concerns address unexploded DU munitions leaching into groundwater over time.
The toxicity of DU is a point of medical controversy. Multiple studies using cultured cells and laboratory rodents suggest the possibility of leukemogenic, genetic, reproductive, and neurological effects from chronic exposure.
A 2005 epidemiology review concluded: "In aggregate the human epidemiological evidence is consistent with increased risk of birth defects in offspring of persons exposed to DU." The World Health Organization states that no risk of reproductive, developmental, or carcinogenic effects have been reported in humans due to DU exposure. This report has been criticized by Dr. Keith Baverstock for not including possible long-term effects.
Birth defects
Most scientific studies have found no link between uranium and birth defects, but some claim statistical correlations between soldiers exposed to DU, and those who were not, concerning reproductive abnormalities.
One study found epidemiological evidence for increased risk of birth defects in the offspring of persons exposed to DU. Several sources have attributed an increased rate of birth defects in the children of Gulf War veterans and in Iraqis to inhalation of depleted uranium. A 2001 study of 15,000 Gulf War combat veterans and 15,000 control veterans found that the Gulf War veterans were 1.8 (fathers) to 2.8 (mothers) times more likely to have children with birth defects.
A study of Gulf War veterans from the UK found a 50% increased risk of malformed pregnancies reported by male Gulf War veterans compared with non-Gulf War veterans. The study did not find correlations between Gulf War deployment and other birth defects such as stillbirth, chromosomal malformations, or congenital syndromes. The father's service in the Gulf War was associated with an increased rate of miscarriage, but the mother's service was not.
In animals
Uranium causes reproductive defects and other health problems in rodents, frogs and other animals. Uranium was also shown to have cytotoxic, genotoxic and carcinogenic effects in animals. It has been shown in rodents and frogs that water-soluble forms of uranium are teratogenic.
In soil and microbiology
Bacteria such as Geobacter species and the Pseudomonadota species Burkholderia fungorum (strain Rifle) can reduce and fix uranium in soil and groundwater. These bacteria change soluble U(VI) into the highly insoluble complex-forming U(IV) ion, hence stopping chemical leaching.
It has been suggested that it is possible to form a reactive barrier by adding something to the soil which will cause the uranium to become fixed. One method of doing this is to use a mineral (apatite) while a second method is to add a food substance such as acetate to the soil. This will enable bacteria to reduce the uranium(VI) to uranium(IV), which is much less soluble. In peat-like soils, the uranium will tend to bind to the humic acids; this tends to fix the uranium in the soil.
References
Element toxicology
Nuclear technology
Radiobiology
Radioactivity
Soil contamination
Uranium | Uranium in the environment | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 2,391 | [
"Element toxicology",
"Biology and pharmacology of chemical elements",
"Radiobiology",
"Environmental chemistry",
"Nuclear technology",
"Soil contamination",
"Nuclear physics",
"Radioactivity"
] |
11,965,603 | https://en.wikipedia.org/wiki/Stellar%20rotation | Stellar rotation is the angular motion of a star about its axis. The rate of rotation can be measured from the spectrum of the star, or by timing the movements of active features on the surface.
The rotation of a star produces an equatorial bulge due to centrifugal force. As stars are not solid bodies, they can also undergo differential rotation. Thus the equator of the star can rotate at a different angular velocity than the higher latitudes. These differences in the rate of rotation within a star may have a significant role in the generation of a stellar magnetic field.
In its turn, the magnetic field of a star interacts with the stellar wind. As the wind moves away from the star its angular speed decreases. The magnetic field of the star interacts with the wind, which applies a drag to the stellar rotation. As a result, angular momentum is transferred from the star to the wind, and over time this gradually slows the star's rate of rotation.
Measurement
Unless a star is being observed from the direction of its pole, sections of the surface have some amount of movement toward or away from the observer. The component of movement that is in the direction of the observer is called the radial velocity. For the portion of the surface with a radial velocity component toward the observer, the radiation is shifted to a higher frequency because of Doppler shift. Likewise the region that has a component moving away from the observer is shifted to a lower frequency. When the absorption lines of a star are observed, this shift at each end of the spectrum causes the line to broaden. However, this broadening must be carefully separated from other effects that can increase the line width.
The component of the radial velocity observed through line broadening depends on the inclination of the star's pole to the line of sight. The derived value is given as $v_e \sin i$, where $v_e$ is the rotational velocity at the equator and $i$ is the inclination. However, $i$ is not always known, so the result gives a minimum value for the star's rotational velocity. That is, if $i$ is not a right angle, then the actual velocity is greater than $v_e \sin i$. This is sometimes referred to as the projected rotational velocity. In fast rotating stars polarimetry offers a method of recovering the actual velocity rather than just the rotational velocity; this technique has so far been applied only to Regulus.
For giant stars, the atmospheric microturbulence can result in line broadening that is much larger than the effects of rotation, effectively drowning out the signal. However, an alternate approach can be employed that makes use of gravitational microlensing events. These occur when a massive object passes in front of the more distant star and functions like a lens, briefly magnifying the image. The more detailed information gathered by this means allows the effects of microturbulence to be distinguished from rotation.
If a star displays magnetic surface activity such as starspots, then these features can be tracked to estimate the rotation rate. However, such features can form at locations other than equator and can migrate across latitudes over the course of their life span, so differential rotation of a star can produce varying measurements. Stellar magnetic activity is often associated with rapid rotation, so this technique can be used for measurement of such stars. Observation of starspots has shown that these features can actually vary the rotation rate of a star, as the magnetic fields modify the flow of gases in the star.
Physical effects
Equatorial bulge
Gravity tends to contract celestial bodies into a perfect sphere, the shape where all the mass is as close to the center of gravity as possible. But a rotating star is not spherical in shape; it has an equatorial bulge.
As a rotating proto-stellar disk contracts to form a star, its shape becomes more and more spherical, but the contraction does not proceed all the way to a perfect sphere. At the poles all of the gravity acts to increase the contraction, but at the equator the effective gravity is diminished by the centrifugal force. The final shape of the star after star formation is an equilibrium shape, in the sense that the effective gravity in the equatorial region (being diminished) cannot pull the star into a more spherical shape. The rotation also gives rise to gravity darkening at the equator, as described by the von Zeipel theorem.
An extreme example of an equatorial bulge is found on the star Regulus A (α Leonis A). The equator of this star has a measured rotational velocity of 317 ± 3 km/s, corresponding to a rotation period of 15.9 hours; this velocity is 86% of the velocity at which the star would break apart. The equatorial radius of this star is 32% larger than its polar radius. Other rapidly rotating stars include Alpha Arae, Pleione, Vega and Achernar.
The break-up velocity of a star is an expression that is used to describe the case where the centrifugal force at the equator is equal to the gravitational force. For a star to be stable the rotational velocity must be below this value.
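As an illustration, a spherical-star estimate of the break-up velocity, $v_\text{crit} = \sqrt{GM/R}$, can be computed for Regulus-like parameters. The mass and radius below are assumed round numbers, and the spherical approximation only gives an order-of-magnitude check (the real star is strongly oblate, so a Roche-model treatment would give a somewhat different value):

```python
import math

G = 6.674e-11                  # gravitational constant [m^3 kg^-1 s^-2]
M_SUN, R_SUN = 1.989e30, 6.957e8

M = 3.8 * M_SUN                # assumed mass, roughly Regulus-like
R_eq = 4.2 * R_SUN             # assumed equatorial radius

# Equator where centrifugal force balances gravity for a spherical star
v_crit = math.sqrt(G * M / R_eq)
print(f"v_crit ~ {v_crit / 1e3:.0f} km/s")   # a few hundred km/s
```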
Differential rotation
Surface differential rotation is observed on stars such as the Sun when the angular velocity varies with latitude. Typically the angular velocity decreases with increasing latitude. However the reverse has also been observed, such as on the star designated HD 31993. The first such star, other than the Sun, to have its differential rotation mapped in detail is AB Doradus.
The underlying mechanism that causes differential rotation is turbulent convection inside a star. Convective motion carries energy toward the surface through the mass movement of plasma. This mass of plasma carries a portion of the angular velocity of the star. When turbulence occurs through shear and rotation, the angular momentum can become redistributed to different latitudes through meridional flow.
The interfaces between regions with sharp differences in rotation are believed to be efficient sites for the dynamo processes that generate the stellar magnetic field. There is also a complex interaction between a star's rotation distribution and its magnetic field, with the conversion of magnetic energy into kinetic energy modifying the velocity distribution.
Rotation braking
During formation
Stars are believed to form as the result of a collapse of a low-temperature cloud of gas and dust. As the cloud collapses, conservation of angular momentum causes any small net rotation of the cloud to increase, forcing the material into a rotating disk. At the dense center of this disk a protostar forms, which gains heat from the gravitational energy of the collapse.
As the collapse continues, the rotation rate can increase to the point where the accreting protostar can break up due to centrifugal force at the equator. Thus the rotation rate must be braked during the first 100,000 years to avoid this scenario. One possible explanation for the braking is the interaction of the protostar's magnetic field with the stellar wind in magnetic braking. The expanding wind carries away the angular momentum and slows down the rotation rate of the collapsing protostar.
Most main-sequence stars with a spectral class between O5 and F5 have been found to rotate rapidly. For stars in this range, the measured rotation velocity increases with mass. This increase in rotation peaks among young, massive B-class stars. "As the expected life span of a star decreases with increasing mass, this can be explained as a decline in rotational velocity with age."
After formation
For main-sequence stars, the decline in rotation can be approximated by a mathematical relation:

$$\Omega_e \propto t^{-\frac{1}{2}}$$

where $\Omega_e$ is the angular velocity at the equator and $t$ is the star's age. This relation is named Skumanich's law after Andrew P. Skumanich who discovered it in 1972.
Gyrochronology is the determination of a star's age based on the rotation rate, calibrated using the Sun.
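A minimal gyrochronology sketch follows, assuming Skumanich's law holds exactly and calibrating with rounded solar values; real gyrochronology relations also depend on stellar mass and colour, so this is only illustrative:

```python
# Inverting Skumanich's law, Omega ~ t^(-1/2)  =>  t ~ t_sun * (P / P_sun)^2,
# calibrated with assumed solar values P_sun = 25.4 d and t_sun = 4.6 Gyr.
P_SUN_DAYS, T_SUN_GYR = 25.4, 4.6

def skumanich_age_gyr(period_days):
    return T_SUN_GYR * (period_days / P_SUN_DAYS) ** 2

for p in (5.0, 12.0, 25.4):
    print(f"P = {p:5.1f} d  ->  t ~ {skumanich_age_gyr(p):.1f} Gyr")
```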
Stars slowly lose mass by the emission of a stellar wind from the photosphere. The star's magnetic field exerts a torque on the ejected matter, resulting in a steady transfer of angular momentum away from the star. Stars with a rate of rotation greater than 15 km/s also exhibit more rapid mass loss, and consequently a faster rate of rotation decay. Thus as the rotation of a star is slowed because of braking, there is a decrease in rate of loss of angular momentum. Under these conditions, stars gradually approach, but never quite reach, a condition of zero rotation.
At the end of the main sequence
Ultracool dwarfs and brown dwarfs experience faster rotation as they age, due to gravitational contraction. These objects also have magnetic fields similar to the coolest stars. However, the discovery of rapidly rotating brown dwarfs such as the T6 brown dwarf WISEPC J112254.73+255021.5 lends support to theoretical models that show that rotational braking by stellar winds is over 1000 times less effective at the end of the main sequence.
Close binary systems
A close binary star system occurs when two stars orbit each other with an average separation that is of the same order of magnitude as their diameters. At these distances, more complex interactions can occur, such as tidal effects, transfer of mass and even collisions. Tidal interactions in a close binary system can result in modification of the orbital and rotational parameters. The total angular momentum of the system is conserved, but the angular momentum can be transferred between the orbital periods and the rotation rates.
Each of the members of a close binary system raises tides on the other through gravitational interaction. However the bulges can be slightly misaligned with respect to the direction of gravitational attraction. Thus the force of gravity produces a torque component on the bulge, resulting in the transfer of angular momentum (tidal acceleration). This causes the system to steadily evolve, although it can approach a stable equilibrium. The effect can be more complex in cases where the axis of rotation is not perpendicular to the orbital plane.
For contact or semi-detached binaries, the transfer of mass from a star to its companion can also result in a significant transfer of angular momentum. The accreting companion can spin up to the point where it reaches its critical rotation rate and begins losing mass along the equator.
Degenerate stars
After a star has finished generating energy through thermonuclear fusion, it evolves into a more compact, degenerate state. During this process the dimensions of the star are significantly reduced, which can result in a corresponding increase in angular velocity.
White dwarf
A white dwarf is a star that consists of material that is the by-product of thermonuclear fusion during the earlier part of its life, but lacks the mass to burn those more massive elements. It is a compact body that is supported by a quantum mechanical effect known as electron degeneracy pressure that will not allow the star to collapse any further. Generally most white dwarfs have a low rate of rotation, most likely as the result of rotational braking or by shedding angular momentum when the progenitor star lost its outer envelope. (See planetary nebula.)
A slow-rotating white dwarf star can not exceed the Chandrasekhar limit of 1.44 solar masses without collapsing to form a neutron star or exploding as a Type Ia supernova. Once the white dwarf reaches this mass, such as by accretion or collision, the gravitational force would exceed the pressure exerted by the electrons. If the white dwarf is rotating rapidly, however, the effective gravity is diminished in the equatorial region, thus allowing the white dwarf to exceed the Chandrasekhar limit. Such rapid rotation can occur, for example, as a result of mass accretion that results in a transfer of angular momentum.
Neutron star
A neutron star is a highly dense remnant of a star that is primarily composed of neutrons—a particle that is found in most atomic nuclei and has no net electrical charge. The mass of a neutron star is in the range of 1.2 to 2.1 times the mass of the Sun. As a result of the collapse, a newly formed neutron star can have a very rapid rate of rotation; on the order of a hundred rotations per second.
Pulsars are rotating neutron stars that have a magnetic field. A narrow beam of electromagnetic radiation is emitted from the poles of rotating pulsars. If the beam sweeps past the direction of the Solar System then the pulsar will produce a periodic pulse that can be detected from the Earth. The energy radiated by the magnetic field gradually slows down the rotation rate, so that older pulsars can require as long as several seconds between each pulse.
Black hole
A black hole is an object with a gravitational field that is sufficiently powerful that it can prevent light from escaping. When they are formed from the collapse of a rotating mass, they retain all of the angular momentum that is not shed in the form of ejected gas. This rotation causes the space within an oblate spheroid-shaped volume, called the "ergosphere", to be dragged around with the black hole. Mass falling into this volume gains energy by this process and some portion of the mass can then be ejected without falling into the black hole. When the mass is ejected, the black hole loses angular momentum (the "Penrose process").
See also
Rossiter–McLaughlin effect
References
External links
Rotation
Rotation
Concepts in stellar astronomy | Stellar rotation | [
"Physics",
"Astronomy"
] | 2,658 | [
"Physical phenomena",
"Concepts in astrophysics",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Concepts in stellar astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
11,967,974 | https://en.wikipedia.org/wiki/White%20Bear%20Forest | The White Bear Forest is an old growth forest, located in Temagami, Ontario, Canada. The forest is named after Chief White Bear, who was the last chief of the Teme-Augama Anishnabai before Europeans appeared in the region. In some parts of the White Bear Forest trees commonly reach 200 to 300 years in age, while the oldest tree accurately aged in White Bear Forest was a red pine that was 400 years old in 1999. The White Bear Forest contains one of Canada's oldest portages, dating back some 3,000 years. Today, more than of trails access the White Bear Forest. A trail guide is available online at http://ancientforest.org/whitebear.html.
Caribou Mountain contains a renovated fire lookout tower that visitors can climb for a small fee.
History
In 1928, the Gillies Bros. logging company logged about of the White Bear Forest surrounding Cassels Lake and Rabbit Lake. A log dam was constructed at the narrows connecting Cassels Lake and Rabbit Lake to float logs from the surrounding area out to the Ottawa River. The water level in numerous lakes in the Temagami area was raised by several feet. The Gillies Bros. logging company then cut the trees from the flooded forest area, leaving behind the snags and stumps seen in the water. The area now called the White Bear Forest escaped the first wave of logging partly because the mill owner enjoyed the view of this forest, which was situated directly across the lake from the mill site. In 1992, the White Bear Forest was once again spared from logging because of local opposition, and it is now promoted by the town of Temagami as a tourist attraction. In June 1996, the White Bear Forest was declared a Conservation Area by the Ministry of Natural Resources.
Trails
The trails of the White Bear Forest offer hikers, canoeists and adventurers the opportunity to travel through a portion of Ontario's forest that has changed little over time. The majority of the trails are located in an area that has never been logged or mined. The trails vary from a leisurely one-hour hike to all-day or weekend trips. Many species of birds and wildlife can be observed in their natural surroundings. There are at least seven named trails in the White Bear Forest.
Old Ranger Trail was formerly used by fire rangers to get from Caribou Lake Portage to the fire tower on top of Caribou Mountain. They would haul their canoes through the trail and pull themselves up Caribou Mountain using an old water hose. Remnants of this hose can still be found around Caribou Mountain. The trail is about long.
Another trail adjacent to the fire tower is the White Bear Trail. It is long, coming out on the Ontario Hydro line in the east and adjoining the Old Ranger Trail in the west.
Red Fox Trail is about long, crossing the Ontario Hydro line at two locations. At Pleasant Lake, the Red Fox Trail immediately goes into the old growth forest. The Red Fox Trail comes out at two locations; one at the Beaver Pond and the other at the end of the Caribou Trail at Pinque Lake.
To the west and southwest is the long Caribou Trail. It has at least three entrances; the Trans Canada Pipeline on O'Connor Drive, the Red Fox Trail and across from Finlayson Point on Highway 11. The trail extends along the shores of both Caribou Lake and Pingue Lake.
Peregrine Trail extends along the shores of Cassels Lake and through the heart of the White Bear Forest. It is about long and comes out at three locations; Cassels Lake, Pecours Bay of Snake Island Lake and the Red Fox Trail.
Otter Trail is in length, extending largely along Cassels Lake and Pecours Bay of Snake Island Lake.
Beaver Trail loops through the heart of the White Bear Forest. In contrast to most other White Bear Forest trails, the Beaver Trail contains rocky terrain and steep cliffs. Its two southern ends come out on the Peregrine Trail whereas its northern end comes out on the Otter Trail.
See also
List of old growth forests
References
External links
Town of Temagami
Friends of Temagami
Old-growth forests
Geography of Temagami
Protected areas of Nipissing District | White Bear Forest | [
"Biology"
] | 867 | [
"Old-growth forests",
"Ecosystems"
] |
11,969,823 | https://en.wikipedia.org/wiki/Nanosystems%20Initiative%20Munich | The Nanosystems Initiative Munich (NIM) is a German research cluster in the field of nano sciences. It is one of the excellence clusters being funded within the German Excellence Initiative of the Deutsche Forschungsgemeinschaft.
The cluster joins the scientific work of about 60 research groups in the Munich region and combines several disciplines: physics, biophysics, physical chemistry, biochemistry, pharmacology, biology, electrical engineering and medical science. Using the expertise in all these fields the cluster aims to create new nanosystems for information technology as well as for life sciences.
The participating institutions of the Nanosystems Initiative Munich are the Ludwig Maximilians University, the Technical University of Munich, the University of Augsburg, the Max Planck Institutes of Quantum Optics and Biochemistry, the Munich University of Applied Sciences, the Walther Meissner Institute and the "Center for New Technologies" at Deutsches Museum.
References
External links
Nanosystems Initiative Munich
NIM on the LMU Excellent website of Ludwig Maximilians University of Munich
https://www.dfg.de/forschungsfoerderung/koordinierte_programme/exzellenzinitiative/exzellenzcluster/liste/exc_detail_4.html
http://idw-online.de/pages/de/news179797
Ludwig Maximilian University of Munich
Munich University of Applied Sciences
Nanotechnology institutions
Research institutes in Munich
University of Augsburg | Nanosystems Initiative Munich | [
"Materials_science"
] | 298 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
11,974,030 | https://en.wikipedia.org/wiki/List%20of%20Chinese%20mushrooms%20and%20fungi | East Asian mushrooms and fungi are often used in East Asian cuisine, either fresh or dried. According to Chinese traditional medicine, many types of mushroom affect the eater's physical and emotional wellbeing.
List of mushrooms and fungi
See also
List of mushroom dishes
Chinese
Chinese cuisine
Chinese edible mushrooms | List of Chinese mushrooms and fungi | [
"Biology"
] | 59 | [
"Fungi",
"Lists of fungi"
] |
7,330,494 | https://en.wikipedia.org/wiki/Hepatocyte%20growth%20factor | Hepatocyte growth factor (HGF) or scatter factor (SF) is a paracrine cellular growth, motility and morphogenic factor. It is secreted by mesenchymal cells and targets and acts primarily upon epithelial cells and endothelial cells, but also acts on haemopoietic progenitor cells and T cells. It has been shown to have a major role in embryonic organ development, specifically in myogenesis, in adult organ regeneration, and in wound healing.
Function
Hepatocyte growth factor regulates cell growth, cell motility, and morphogenesis by activating a tyrosine kinase signaling cascade after binding to the proto-oncogenic c-Met receptor. Hepatocyte growth factor is secreted by platelets and mesenchymal cells and acts as a multi-functional cytokine on cells of mainly epithelial origin. Its ability to stimulate mitogenesis, cell motility, and matrix invasion gives it a central role in angiogenesis, tumorigenesis, and tissue regeneration.
Structure
It is secreted as a single inactive polypeptide and is cleaved by serine proteases into a 69-kDa alpha-chain and 34-kDa beta-chain. A disulfide bond between the alpha and beta chains produces the active, heterodimeric molecule. The protein belongs to the plasminogen subfamily of S1 peptidases but has no detectable protease activity.
Clinical significance
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.
As well as the well-characterised effects of HGF on epithelial cells, endothelial cells and haemopoietic progenitor cells, HGF also regulates the chemotaxis of T cells into heart tissue. Binding of HGF by c-Met, expressed on T cells, causes the upregulation of c-Met, CXCR3, and CCR4 which in turn imbues them with the ability to migrate into heart tissue. HGF also promotes angiogenesis in ischemia injury.
HGF may further play a role as an indicator for prognosis of chronicity for Chikungunya virus induced arthralgia. High HGF levels correlate with high rates of recovery.
Excessive local expression of HGF in the breasts has been implicated in macromastia. HGF is also importantly involved in normal mammary gland development.
HGF has been implicated in a variety of cancers, including of the lungs, pancreas, thyroid, colon, and breast.
Increased expression of HGF has been associated with the enhanced and scarless wound healing capabilities of fibroblast cells isolated from the oral mucosa tissue.
Circulating plasma levels
Plasma from patients with advanced heart failure shows increased levels of HGF, which correlate with a negative prognosis and a high risk of mortality. Circulating HGF has also been identified as a prognostic marker of severity in patients with hypertension, and has been suggested as an early biomarker for the acute phase of bowel inflammation.
Pharmacokinetics
Exogenous HGF administered by intravenous injection is cleared rapidly from circulation by the liver, with a half-life of approximately 4 minutes.
Modulators
Dihexa is an orally active, centrally penetrant small-molecule compound that directly binds to HGF and potentiates its ability to activate its receptor, c-Met. It is a strong inducer of neurogenesis and is being studied for the potential treatment of Alzheimer's disease and Parkinson's disease.
Interactions
Hepatocyte growth factor has been shown to interact with the protein product of the c-Met oncogene, identified as the HGF receptor (HGFR). Both overexpression of the Met/HGFR receptor protein and autocrine activation of Met/HGFR by simultaneous expression of the hepatocyte growth factor ligand have been implicated in oncogenesis.
Hepatocyte growth factor interacts with the sulfated glycosaminoglycans heparan sulfate and dermatan sulfate. The interaction with heparan sulfate allows hepatocyte growth factor to form a complex with c-Met that is able to transduce intracellular signals leading to cell division and cell migration.
See also
Epidermal growth factor
Insulin-like growth factor 1
Epithelial–mesenchymal transition
Madin-Darby Canine Kidney Cells
References
Further reading
External links
Hepatocyte growth factor on the Atlas of Genetics and Oncology
UCSD Signaling Gateway Molecule Page on HGF
Growth factors
Developmental genes and proteins
Cytokines | Hepatocyte growth factor | [
"Chemistry",
"Biology"
] | 1,017 | [
"Growth factors",
"Signal transduction",
"Cytokines",
"Developmental genes and proteins",
"Induced stem cells"
] |
7,330,572 | https://en.wikipedia.org/wiki/Apoptosis-inducing%20factor | Apoptosis inducing factor is involved in initiating a caspase-independent pathway of apoptosis (positive intrinsic regulator of apoptosis) by causing DNA fragmentation and chromatin condensation. Apoptosis inducing factor is a flavoprotein. It also acts as an NADH oxidase. Another AIF function is to regulate the permeability of the mitochondrial membrane upon apoptosis. Normally it is found behind the outer membrane of the mitochondrion and is therefore secluded from the nucleus. However, when the mitochondrion is damaged, it moves to the cytosol and to the nucleus. Inactivation of AIF leads to resistance of embryonic stem cells to death following the withdrawal of growth factors indicating that it is involved in apoptosis.
Function
Apoptosis Inducing Factor (AIF) is a protein that triggers chromatin condensation and DNA fragmentation in a cell in order to induce programmed cell death. The mitochondrial AIF protein was found to be a caspase-independent death effector that can cause isolated nuclei to undergo apoptotic changes. The process triggering apoptosis starts when the mitochondrion releases AIF, which exits through the mitochondrial membrane, enters the cytosol, and moves to the nucleus of the cell, where it signals the cell to condense its chromosomes and fragment its DNA molecules in order to prepare for cell death. Recently, researchers have discovered that the activity of AIF depends on the type of cell, the apoptotic insult, and its DNA-binding ability. AIF also plays a significant role in the mitochondrial respiratory chain and metabolic redox reactions.
Synthesis
The gene encoding AIF spans 16 exons on the X chromosome in humans. AIF1 (the most abundant type of AIF) is translated in the cytosol and recruited to the mitochondrial membrane and intermembrane space by its N-terminal mitochondrial localization signal (MLS). Inside the mitochondrion, AIF folds into its functional configuration with the help of the cofactor flavin adenine dinucleotide (FAD).
A protein called Scythe (BAT3), which is used to regulate organogenesis, can increase the AIF lifetime in the cell. As a result, decreased amounts of Scythe lead to a quicker fragmentation of AIF. The X-linked inhibitor of apoptosis (XIAP) has the power to influence the half-life of AIF along with Scythe. Together, the two do not affect the AIF attached to the inner mitochondrial membrane, however they influence the stability of AIF once it exits the mitochondrion.
Role in mitochondria
It was thought that a recombinant version of AIF lacking the first 120 N-terminal amino acids of the protein would function as an NADH and NADPH oxidase. However, it was instead discovered that recombinant AIF lacking the last 100 N-terminal amino acids has only limited NADH and NADPH oxidase activity. Researchers therefore concluded that the AIF N-terminus may function in interactions with other proteins or control AIF redox reactions and substrate specificity.
Mutations of AIF due to deletions have stimulated the creation of the mouse model of complex I deficiency. Complex I deficiency is the reason behind over thirty percent of human mitochondrial diseases. For example, complex I mitochondriopathies mostly affect infants by causing symptoms such as seizures, blindness, deafness, etc. These AIF-deficient mouse models are important for fixing complex I deficiencies. The identification of AIF-interacting proteins in the inner mitochondrial membrane and intermembrane space will help researchers identify the mechanism of the signalling pathway that monitors the function of AIF in the mitochondria.
Isozymes
Human genes encoding apoptosis inducing factor isozymes include:
AIFM1
AIFM2
AIFM3
Evolution
The apoptotic function of AIFs has been shown in organisms from different eukaryotic lineages, including the human factors mentioned above, AIFM1, AIFM2, and AIFM3 (Xie et al., 2005), the yeast factors NDI1 and AIF1, as well as the AIF of Tetrahymena. Phylogenetic analysis indicates that the divergence of the AIFM1, AIFM2, AIFM3, and NDI sequences occurred before the divergence of eukaryotes.
Role in cancer
Despite an involvement in cell death, AIF plays a contributory role to the growth and aggressiveness of a variety of cancer types including colorectal, prostate, and pancreatic cancers through its NADH oxidase activity. AIF enzymatic activity regulates metabolism but can also increase ROS levels promoting oxidative stress activated signaling molecules including the MAPKs. AIF-mediated redox signaling promotes the activation of JNK1, which in turn can trigger the cadherin switch.
See also
Apoptosis
Parthanatos
References
External links
Programmed cell death
Cell signaling
Apoptosis | Apoptosis-inducing factor | [
"Chemistry",
"Biology"
] | 1,045 | [
"Senescence",
"Programmed cell death",
"Apoptosis",
"Signal transduction"
] |
7,330,787 | https://en.wikipedia.org/wiki/Quasi-periodic%20oscillation%20%28astronomy%29 | In X-ray astronomy, quasi-periodic oscillation (QPO) is the manner in which the X-ray light from an astronomical object flickers about certain frequencies. In these situations, the X-rays are emitted near the inner edge of an accretion disk in which gas swirls onto a compact object such as a white dwarf, neutron star, or black hole.
The QPO phenomenon promises to help astronomers understand the innermost regions of accretion disks and the masses, radii, and spin periods of white dwarfs, neutron stars, and black holes. QPOs could help test Albert Einstein's theory of general relativity which makes predictions that differ most from those of Newtonian gravity when the gravitational force is strongest or when rotation is fastest (when a phenomenon called the Lense–Thirring effect comes into play). However, the various explanations of QPOs remain controversial and the conclusions reached from their study remain provisional.
A QPO is identified by computing a power spectrum of the X-ray time series. A constant level of white noise is expected from the random variation of sampling the object's light. Systems that show QPOs sometimes also show nonperiodic noise that appears as a continuous curve in the power spectrum. A periodic pulsation appears in the power spectrum as a peak of power at exactly one frequency (a Dirac delta function given a long enough observation). A QPO, on the other hand, appears as a broader peak, sometimes with a Lorentzian shape.
What sort of variation with time could cause a QPO? For example, the power spectrum of an oscillating shot appears as a continuum of noise together with a QPO. An oscillating shot is a sinusoidal variation that starts suddenly and decays exponentially. A scenario in which oscillating shots cause the observed QPOs could involve "blobs" of gas in orbit around a rotating, weakly magnetized neutron star. Each time a blob comes near a magnetic pole, more gas accretes and the X-rays increase. At the same time, the blob's mass decreases so that the oscillation decays.
Often power spectra are formed from several time intervals and then added together before the QPO can be seen to be statistically significant.
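The toy simulation below shows why this averaging helps: segments of white noise containing randomly timed, exponentially decaying "oscillating shots" are generated, and the averaged periodogram develops a broad peak near the shot frequency, as a QPO would. All parameters here are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, n_seg = 1.0 / 256, 1024, 64      # sampling step [s], segment length, segments
t = np.arange(n) * dt
f_qpo, tau = 40.0, 0.1                  # shot oscillation frequency [Hz], decay time [s]

avg_power = np.zeros(n // 2 + 1)
for _ in range(n_seg):
    counts = rng.normal(0.0, 1.0, n)              # white noise background
    for t0 in rng.uniform(0, t[-1], 5):           # five randomly timed shots per segment
        s = t - t0
        shot = np.where(s > 0, np.exp(-s / tau) * np.sin(2 * np.pi * f_qpo * s), 0.0)
        counts += 3.0 * shot
    avg_power += np.abs(np.fft.rfft(counts)) ** 2
avg_power /= n_seg                                # average periodogram over segments

freqs = np.fft.rfftfreq(n, dt)
peak = freqs[np.argmax(avg_power[1:]) + 1]        # skip the zero-frequency bin
print(f"broad peak near {peak:.1f} Hz (input {f_qpo} Hz)")
```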
History
QPOs were first identified in white dwarf systems and then in neutron star systems.
At first the neutron star systems found to have QPOs were of a class (Z sources and atoll sources) not known to have pulsations. The spin periods of these neutron stars were unknown as a result. These neutron stars are thought to have relatively low magnetic fields so the gas does not fall mostly onto their magnetic poles, as in accreting pulsars. Because their magnetic fields are so low, the accretion disk can get very close to the neutron star before being disrupted by the magnetic field.
The spectral variability of these neutron stars was seen to correspond to changes in the QPOs. Typical QPO frequencies were found to be between about 1 and 60 Hz. The fastest oscillations were found in a spectral state called the Horizontal Branch, and were thought to be a result of the combined rotation of the matter in the disk and the rotation of the collapsed star (the "beat frequency model"). During the Normal Branch and Flaring Branch, the star was thought to approach its Eddington luminosity at which the force of the radiation could repel the accreting gas. This could give rise to a completely different kind of oscillation.
Observations starting in 1996 with the Rossi X-ray Timing Explorer could detect faster variability, and it was found that neutron stars and black holes emit X-rays that have QPOs with frequencies up to 1000 Hz or so. Often "twin peak" QPOs were found in which two oscillations of roughly the same power appeared at high amplitudes. These higher frequency QPOs may show behavior related to that of the lower frequency QPOs.
Measuring black holes
QPOs can be used to determine the mass of black holes. The technique uses a relationship between black holes and the inner part of their surrounding disks, where gas spirals inward before reaching the event horizon. The hot gas piles up near the black hole and radiates a torrent of X-rays, with an intensity that varies in a pattern that repeats itself over a nearly regular interval. This signal is the QPO. Astronomers have long suspected that a QPO's frequency depends on the black hole's mass. The congestion zone lies close in for small black holes, so the QPO clock ticks quickly. As black holes increase in mass, the congestion zone is pushed farther out, so the QPO clock ticks slower and slower.
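A hedged sketch of this scaling: if, as described above, the characteristic QPO frequency varies inversely with black-hole mass, a reference system can be used to estimate another black hole's mass. The calibration point below is an invented illustrative value, not a measured quantity.

```python
# Assumed inverse scaling f ~ 1/M, anchored to a hypothetical reference system.
F_REF_HZ, M_REF_MSUN = 300.0, 10.0      # assumed calibration point

def mass_from_qpo(f_hz):
    return M_REF_MSUN * (F_REF_HZ / f_hz)

for f in (300.0, 30.0, 3.0):
    print(f"f = {f:6.1f} Hz  ->  M ~ {mass_from_qpo(f):.0f} Msun")
```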
See also
Broad iron K line
Quasiperiodicity
Neutron-star oscillation
References
X-rays
Observational astronomy | Quasi-periodic oscillation (astronomy) | [
"Physics",
"Astronomy"
] | 1,014 | [
"X-rays",
"Spectrum (physical sciences)",
"Observational astronomy",
"Electromagnetic spectrum",
"Astronomical sub-disciplines"
] |
7,330,887 | https://en.wikipedia.org/wiki/Myogenin | Myogenin, is a transcriptional activator encoded by the MYOG gene.
Myogenin is a muscle-specific basic-helix-loop-helix (bHLH) transcription factor involved in the coordination of skeletal muscle development or myogenesis and repair. Myogenin is a member of the MyoD family of transcription factors, which also includes MyoD, Myf5, and MRF4.
In mice, myogenin is essential for the development of functional skeletal muscle. Myogenin is required for the proper differentiation of most myogenic precursor cells during the process of myogenesis. When the DNA coding for myogenin was knocked out of the mouse genome, severe skeletal muscle defects were observed. Mice lacking both copies of myogenin (homozygous-null) suffer from perinatal lethality due to the lack of mature secondary skeletal muscle fibers throughout the body.
In cell culture, myogenin can induce myogenesis in a variety of non-muscle cell types.
Interactions
Myogenin has been shown to interact with:
MDFI,
POLR2C,
Serum response factor
Sp1 transcription factor, and
TCF3.
References
Further reading
External links
Gene expression
Human proteins | Myogenin | [
"Chemistry",
"Biology"
] | 249 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
7,330,907 | https://en.wikipedia.org/wiki/Twist-related%20protein%201 | Twist-related protein 1 (TWIST1) also known as class A basic helix–loop–helix protein 38 (bHLHa38) is a basic helix-loop-helix transcription factor that in humans is encoded by the TWIST1 gene.
Function
Basic helix-loop-helix (bHLH) transcription factors have been implicated in cell lineage determination and differentiation. The protein encoded by this gene is a bHLH transcription factor and shares similarity with another bHLH transcription factor, Dermo1 (a.k.a. TWIST2). The strongest expression of this mRNA is in placental tissue; in adults, mesodermally derived tissues express this mRNA preferentially.
Twist1 is thought to regulate osteogenic lineage.
Clinical significance
Mutations in the TWIST1 gene are associated with Saethre–Chotzen syndrome, breast cancer, and Sézary syndrome.
Craniosynostosis
TWIST1 mutations are involved in a number of craniosynostosis presentations. It can present in nonsyndromic forms (isolated scaphocephaly, right unicoronal synostosis, and turricephaly), but also in syndromic forms such as:
Acrocephalosyndactyly type 1 (Apert syndrome) (primary FGFR2)
Beare-Stevenson cutis gyrata syndrome (primary FGFR2)
Crouzon syndrome (primary FGFR2)
Crouzon syndrome-acanthosis nigricans syndrome (primary FGFR3)
Jackson-Weiss syndrome (primary FGFR1 or FGFR2)
Muenke syndrome (primary FGFR3)
Pfeiffer syndrome (primary FGFR1 or FGFR2)
As an oncogene
Twist plays an essential role in cancer metastasis. Over-expression of Twist or methylation of its promoter is common in metastatic carcinomas. Hence, targeting Twist holds great promise as a cancer therapeutic. In cooperation with N-Myc, Twist-1 acts as an oncogene in several cancers including neuroblastoma.
Twist is activated by a variety of signal transduction pathways, including Akt, signal transducer and activator of transcription 3 (STAT3), mitogen-activated protein kinase, Ras, and Wnt signaling. Activated Twist upregulates N-cadherin and downregulates E-cadherin, which are the hallmarks of EMT. Moreover, Twist plays an important role in some physiological processes involved in metastasis, like angiogenesis, invadopodia, extravasation, and chromosomal instability. Twist also protects cancer cells from apoptotic cell death. In addition, Twist is responsible for the maintenance of cancer stem cells and the development of chemotherapy resistance. Twist1 is extensively studied for its role in head and neck cancers. Here and in epithelial ovarian cancer, Twist1 has been shown to be involved in evading apoptosis, making the tumour cells resistant to platinum-based chemotherapeutic drugs like cisplatin. Moreover, Twist1 has been shown to be expressed under conditions of hypoxia, corresponding to the observation that hypoxic cells respond less to chemotherapeutic drugs.
Another process in which Twist1 is involved is tumour metastasis. The underlying mechanism is not completely understood, but it has been implicated in the upregulation of matrix metalloproteinases and inhibition of TIMP.
Recently, Twist has gained interest as a target for cancer therapeutics. Its inactivation by small interfering RNA or chemotherapeutic approaches has been demonstrated in vitro. Moreover, several inhibitors antagonistic to molecules upstream or downstream of Twist signaling pathways have also been identified.
Interactions
Twist transcription factor has been shown to interact with EP300, TCF3 and PCAF.
See also
Transcription factor
TWIST2
References
Further reading
External links
GeneReviews/NCBI.NIH.UW entry on Saethre–Chotzen syndrome
Transcription factors | Twist-related protein 1 | [
"Chemistry",
"Biology"
] | 862 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
7,330,969 | https://en.wikipedia.org/wiki/Microphthalmia-associated%20transcription%20factor | Microphthalmia-associated transcription factor also known as class E basic helix-loop-helix protein 32 or bHLHe32 is a protein that in humans is encoded by the MITF gene.
MITF is a basic helix-loop-helix leucine zipper transcription factor involved in lineage-specific pathway regulation of many types of cells including melanocytes, osteoclasts, and mast cells. The term "lineage-specific", as it relates to MITF, refers to genes or traits found only in a certain cell type. Therefore, MITF may be involved in the rewiring of signaling cascades that are specifically required for the survival and physiological function of their normal cell precursors.
MITF, together with transcription factor EB (TFEB), TFE3 and TFEC, belongs to a subfamily of related bHLHZip proteins, termed the MiT-TFE family of transcription factors. The factors are able to form stable DNA-binding homo- and heterodimers. The gene that encodes MITF resides at the mi locus in mice, and its protumorigenic targets include factors involved in cell death, DNA replication, repair, mitosis, microRNA production, membrane trafficking, and mitochondrial metabolism. Mutation of this gene results in deafness, bone loss, small eyes, and poorly pigmented eyes and skin. In human subjects, because it is known that MITF controls the expression of various genes that are essential for normal melanin synthesis in melanocytes, mutations of MITF can lead to diseases such as melanoma, Waardenburg syndrome, and Tietz syndrome. Its function is conserved across vertebrates, including in fishes such as zebrafish and Xiphophorus.
An understanding of MITF is necessary to understand how certain lineage-specific cancers and other diseases progress. In addition, current and future research can lead to potential avenues to target this transcription factor mechanism for cancer prevention.
Clinical significance
Mutations
As mentioned above, changes in MITF can result in serious health conditions. For example, mutations of MITF have been implicated in both Waardenburg syndrome and Tietz syndrome.
Waardenburg syndrome is a rare genetic disorder. Its symptoms include deafness, minor defects, and abnormalities in pigmentation. Mutations in the MITF gene have been found in certain patients with Waardenburg syndrome, type II. Mutations that change the amino acid sequence of MITF and result in an abnormally small protein have been found. These mutations disrupt dimer formation and, as a result, cause insufficient development of melanocytes. The shortage of melanocytes causes some of the characteristic features of Waardenburg syndrome.
Tietz syndrome, first described in 1923, is a congenital disorder often characterized by deafness and leucism. It is caused by a mutation in the MITF gene that deletes or changes a single amino acid in the basic motif region of the MITF protein. The altered MITF protein is unable to bind to DNA, so melanocyte development and, subsequently, melanin production are disrupted. A reduced number of melanocytes can lead to hearing loss, and decreased melanin production can account for the light skin and hair color that make Tietz syndrome so noticeable.
Melanoma
Melanocytes are commonly known as cells that are responsible for producing the pigment melanin, which gives coloration to the hair, skin, and nails. The exact mechanisms by which melanocytes become cancerous are relatively unclear, but there is ongoing research to learn more about the process. For example, it has been found that the DNA of certain genes is often damaged in melanoma cells, most likely as a result of damage from UV radiation, which in turn increases the likelihood of developing melanoma. Specifically, a large percentage of melanomas have mutations in the B-RAF gene, which leads to melanoma by triggering an MEK-ERK kinase cascade when activated. In addition to B-RAF, MITF is also known to play a crucial role in melanoma progression, since it is a transcription factor involved in the regulation of genes related to invasiveness, migration, and metastasis.
Target genes
MITF recognizes E-box (CAYRTG) and M-box (TCAYRTG or CAYRTGA) sequences in the promoter regions of target genes. Known target genes (confirmed by at least two independent sources) of this transcription factor include the following.
Additional genes identified by a microarray study (which confirmed the above targets) include the following.
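The E-box and M-box consensus sequences above lend themselves to a simple motif scan. The following is a minimal sketch using Python regular expressions; the promoter fragment is a made-up example, not a real MITF target sequence.

```python
import re

# IUPAC degeneracies: Y = C or T, R = A or G.
E_BOX = re.compile(r"CA[CT][AG]TG")                 # CAYRTG
M_BOX = re.compile(r"TCA[CT][AG]TG|CA[CT][AG]TGA")  # TCAYRTG or CAYRTGA

promoter = "GGTCATGTGAACCACGTGATT"  # hypothetical promoter fragment

for name, pattern in (("E-box", E_BOX), ("M-box", M_BOX)):
    for match in pattern.finditer(promoter):
        print(f"{name} {match.group()} at position {match.start()}")
```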
The LysRS-Ap4A-MITF signaling pathway
The LysRS-Ap4A-MITF signaling pathway was first discovered in mast cells, in which the mitogen-activated protein kinase (MAPK) pathway is activated upon allergen stimulation. The binding of immunoglobulin E to the high-affinity IgE receptor (FcεRI) provides the stimulus that starts the cascade.
Lysyl-tRNA synthetase (LysRS) normally resides in the multisynthetase complex. This complex consists of nine different aminoacyl-tRNA synthetases and three scaffold proteins and has been termed the "signalosome" due to its non-catalytic signalling functions. After activation, LysRS is phosphorylated on Serine 207 in a MAPK-dependent manner. This phosphorylation causes LysRS to change its conformation, detach from the complex and translocate into the nucleus, where it associates with histidine triad nucleotide-binding protein 1 (HINT1), forming the MITF-HINT1 inhibitory complex. The conformational change also switches LysRS activity from aminoacylation of lysine tRNA to diadenosine tetraphosphate (Ap4A) production. Ap4A, which is an adenosine joined to another adenosine through a 5'-5' tetraphosphate bridge, binds to HINT1, and this releases MITF from the inhibitory complex, allowing it to transcribe its target genes. Specifically, Ap4A causes a polymerization of the HINT1 molecule into filaments. The polymerization blocks the interface for MITF and thus prevents the binding of the two proteins. This mechanism is dependent on the precise length of the phosphate bridge in the Ap4A molecule, so other nucleotides such as ATP or AMP will not affect it.
MITF is also an integral part of melanocytes, where it regulates the expression of a number of proteins with melanogenic potential. Continuous expression of MITF at a certain level is one of the necessary factors for melanoma cells to proliferate, survive and avoid detection by host immune cells through the T-cell recognition of the melanoma-associated antigen (melan-A). Post-translational modifications of the HINT1 molecules have been shown to affect MITF gene expression as well as the binding of Ap4A. Mutations in HINT1 itself have been shown to be the cause of axonal neuropathies. The regulatory mechanism relies on the enzyme diadenosine tetraphosphate hydrolase, a member of the Nudix type 2 enzymatic family (NUDT2), to cleave Ap4A, allow the binding of HINT1 to MITF and thus suppress the expression of the MITF transcribed genes. NUDT2 itself has also been shown to be associated with human breast carcinoma, where it promotes cellular proliferation. The enzyme is 17 kDa in size and can freely diffuse between the nucleus and cytosol, explaining its presence in the nucleus. It has also been shown to be actively transported into the nucleus by directly interacting with the N-terminal domain of importin-β upon immunological stimulation of the mast cells. Growing evidence indicates that the LysRS-Ap4A-MITF signalling pathway is an integral aspect of controlling MITF transcriptional activity.
Activation of the LysRS-Ap4A-MITF signalling pathway by isoproterenol has been confirmed in cardiomyocytes. A heart specific isoform of MITF is a major regulator of cardiac growth and hypertrophy responsible for heart growth and for the physiological response of the cardiomyocytes to beta-adrenergic stimulation.
Phosphorylation
MITF is phosphorylated on several serine and tyrosine residues. Serine phosphorylation is regulated by several signaling pathways including MAPK/BRAF/ERK, receptor tyrosine kinase KIT, GSK-3 and mTOR. In addition, several kinases including PI3K, AKT, SRC and P38 are also critical activators of MITF phosphorylation. In contrast, tyrosine phosphorylation is induced by the presence of the KIT oncogenic mutation D816V. This KITD816V pathway is dependent on SRC protein family activation signaling. The induction of serine phosphorylation by the frequently altered MAPK/BRAF pathway and the GSK-3 pathway in melanoma regulates MITF nuclear export, thereby decreasing MITF activity in the nucleus. Similarly, tyrosine phosphorylation mediated by the presence of the KIT oncogenic mutation D816V also increases the presence of MITF in the cytoplasm.
Interactions
Most transcription factors function in cooperation with other factors by protein–protein interactions. Association of MITF with other proteins is a critical step in the regulation of MITF-mediated transcriptional activity. Some commonly studied MITF interactions include those with MAZR, PIAS3, Tfe3, hUBC9, PKC1, and LEF1. Looking at the variety of structures gives insight into MITF's varied roles in the cell.
The Myc-associated zinc-finger protein related factor (MAZR) interacts with the Zip domain of MITF. When expressed together, MAZR and MITF transactivate the mMCP-6 gene and increase its promoter activity. MAZR also plays a role in the phenotypic expression of mast cells in association with MITF.
PIAS3 is a transcriptional inhibitor that acts by inhibiting STAT3's DNA binding activity. PIAS3 directly interacts with MITF, and STAT3 does not interfere with the interaction between PIAS3 and MITF. PIAS3 functions as a key molecule in suppressing the transcriptional activity of MITF. This is important when considering mast cell and melanocyte development.
MITF, TFE3 and TFEB are part of the basic helix-loop-helix-leucine zipper family of transcription factors. Each protein encoded by this family of transcription factors can bind DNA. MITF is necessary for melanocyte and eye development, and newer research suggests that TFE3 is also required for osteoclast development, a function redundant with that of MITF. The combined loss of both genes results in severe osteopetrosis, pointing to an interaction between MITF and other members of its transcription factor family. In turn, TFEB has been termed the master regulator of lysosome biogenesis and autophagy. Separate roles for MITF, TFEB and TFE3 in modulating starvation-induced autophagy have been described in melanoma. Moreover, the MITF and TFEB proteins directly regulate each other's mRNA and protein expression, while their subcellular localization and transcriptional activity are subject to similar modulation, such as by the mTOR signaling pathway.
UBC9 is a ubiquitin-conjugating enzyme that associates with MITF. Although hUBC9 is known to act preferentially on SENTRIN/SUMO1, an in vitro analysis demonstrated greater actual association with MITF. hUBC9 is a critical regulator of melanocyte differentiation. To carry out this role, it targets MITF for proteasome degradation.
Protein kinase C-interacting protein 1 (PKC1) associates with MITF. Their association is reduced upon cell activation. When this happens MITF disengages from PKC1. PKC1 by itself, found in the cytosol and nucleus, has no known physiological function. It does, however, have the ability to suppress MITF transcriptional activity and can function as an in vivo negative regulator of MITF induced transcriptional activity.
The functional cooperation between MITF and the lymphoid enhancing factor (LEF-1) results in a synergistic transactivation of the dopachrome tautomerase gene promoter, which is an early melanoblast marker. LEF-1 is involved in the process of regulation by Wnt signaling. LEF-1 also cooperates with MITF-related proteins like TFE3. MITF is a modulator of LEF-1, and this regulation ensures efficient propagation of Wnt signals in many cells.
Translational regulation
Translational regulation of MITF is still an unexplored area, with only two peer-reviewed papers (as of 2019) highlighting its importance. During glutamine starvation of melanoma cells, ATF4 transcripts increase, as does translation of the mRNA due to eIF2α phosphorylation. This chain of molecular events leads to two levels of MITF suppression: first, ATF4 protein binds and suppresses MITF transcription, and second, eIF2α blocks MITF translation, possibly through the inhibition of eIF2B by eIF2α.
MITF can also be directly translationally regulated by the RNA helicase DDX3X. The 5' UTR of MITF contains an important regulatory element, an internal ribosome entry site (IRES), that is recognized, bound and activated by DDX3X. Although the 5' UTR of MITF consists of a stretch of only 123 nucleotides, this region is predicted to fold into energetically favorable RNA secondary structures, including multibranched loops and asymmetric bulges, that are characteristic of IRES elements. Activation of this cis-regulatory sequence by DDX3X promotes MITF expression in melanoma cells.
See also
Microphthalmia
Splashed white
References
External links
Transcription factors
Gene expression
Human proteins | Microphthalmia-associated transcription factor | [
"Chemistry",
"Biology"
] | 3,006 | [
"Gene expression",
"Signal transduction",
"Molecular genetics",
"Cellular processes",
"Induced stem cells",
"Molecular biology",
"Biochemistry",
"Transcription factors"
] |
7,331,570 | https://en.wikipedia.org/wiki/Pressure%20switch | A pressure switch is a form of switch that operates an electrical contact when a certain set fluid pressure has been reached on its input. The switch may be designed to make contact either on pressure rise or on pressure fall. Pressure switches are widely used in industry to automatically supervise and control systems that use pressurized fluids.
Another type of pressure switch detects mechanical force; for example, a pressure-sensitive mat is used to automatically open doors on commercial buildings. Such sensors are also used in security alarm applications such as pressure sensitive floors.
Construction and types
A pressure switch for sensing fluid pressure contains a capsule, bellows, Bourdon tube, diaphragm or piston element that deforms or displaces proportionally to the applied pressure. The resulting motion is applied, either directly or through amplifying levers, to a set of switch contacts. Since pressure may be changing slowly and contacts should operate quickly, some kind of over-center mechanism such as a miniature snap-action switch is used to ensure quick operation of the contacts. One sensitive type of pressure switch uses mercury switches mounted on a Bourdon tube; the shifting weight of the mercury provides a useful over-center characteristic.
The pressure switch may be adjustable, by moving the contacts or adjusting tension in a counterbalance spring. Industrial pressure switches may have a calibrated scale and pointer to show the set point of the switch. A pressure switch will exhibit hysteresis, that is, a differential range around its setpoint, known as the switch's deadband, inside which small changes of pressure do not influence the state of the contacts. Some types allow adjustment of the differential.
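As an illustration of deadband behavior, this minimal sketch models a switch that closes on falling pressure and opens on rising pressure, as in a pump-up water system; the cut-in and cut-out pressures are arbitrary example values, not standard settings.

```python
class PressureSwitch:
    """Toy model of a pressure switch with a deadband (hysteresis)."""

    def __init__(self, cut_in=2.8, cut_out=4.1):  # pressures in bar
        self.cut_in = cut_in    # close the contact at or below this pressure
        self.cut_out = cut_out  # open the contact at or above this pressure
        self.closed = False

    def update(self, pressure):
        if pressure <= self.cut_in:
            self.closed = True
        elif pressure >= self.cut_out:
            self.closed = False
        # Inside the deadband the state is unchanged: this is the hysteresis.
        return self.closed

switch = PressureSwitch()
for p in (4.5, 3.5, 2.7, 3.5, 4.2):
    print(f"{p:.1f} bar -> contact {'closed' if switch.update(p) else 'open'}")
```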
The pressure-sensing element of a pressure switch may be arranged to respond to the difference of two pressures. Such switches are useful when the difference is significant, for example, to detect a clogged filter in a water supply system. The switches must be designed to respond only to the difference and not to false-operate for changes in the common mode pressure.
The contacts of the pressure switch may be rated at a few tenths of an ampere to around 15 amperes, with smaller ratings found on more sensitive switches. Often a pressure switch will operate a relay or other control device, but some types can directly control small electric motors or other loads.
Since the internal parts of the switch are exposed to the process fluid, they must be chosen to balance strength and life expectancy against compatibility with process fluids. For example, rubber diaphragms are commonly used in contact with water, but would quickly degrade if used in a system containing mineral oil.
Switches designed for use in hazardous areas with flammable gas have enclosures designed to prevent an arc at the contacts from igniting the surrounding gas. Switch enclosures may also be required to be weatherproof, corrosion resistant, or submersible.
An electronic pressure switch incorporates some variety of pressure transducer (strain gauge, capacitive element, or other) and an internal circuit to compare the measured pressure to a set point. Such devices may provide improved repeatability, accuracy and precision over a mechanical switch.
Pneumatic
Uses of pneumatic pressure switches include:
Switching a household well water pump automatically when water is drawn from the pressure tank
Switching off an electrically driven gas compressor when a set pressure is achieved in the reservoir
Switching off a gas compressor whenever there is no feed in the suction stage
In-cell charge control in a battery
Switching on an alarm light in the cockpit of an aircraft if cabin pressure (based on altitude) is critically low
Air-filled hoses that activate switches when vehicles drive over them, common for counting traffic and at gas stations
Hydraulic
Hydraulic pressure switches have various uses in automobiles, for example, to warn if the engine's oil pressure falls below a safe level, or to control automatic transmission torque converter lock-up. Prior to the 1960s, a pressure switch was used in the hydraulic braking circuit to control power to the brake lights; more recent automobiles use a switch directly activated by the brake pedal.
In dust control systems (bag filter), a pressure switch is mounted on the header which will raise an alarm when air pressure in the header is less than necessary. A differential pressure switch may be installed across a filter element to sense increased pressure drop, indicating the need for filter cleaning or replacement.
Examples
Pressure sensitive mat
A pressure sensitive mat provides a contact signal when force is applied anywhere within the area of the mat. Some mats provide a single signal, while others can resolve the position of the applied force within the mat. Pressure sensitive mats can be used to operate electrically operated doors, or as part of an interlock system to ensure machine operators are clear of dangerous areas of a process before it operates. Pressure sensitive mats can be used to detect persons walking over a particular point, as part of a security alarm system or to count attendance, or for other purposes.
See also
Dynamic pressure
List of sensors
Pressure sensor
References
External links
Pneumatic tools
Hydraulic tools
Security technology | Pressure switch | [
"Physics"
] | 1,013 | [
"Physical systems",
"Hydraulics",
"Hydraulic tools"
] |
7,333,367 | https://en.wikipedia.org/wiki/Industrial%20control%20system | An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves.
Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.
Discrete controllers
The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel-mounted, which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would be pneumatic controllers, a few of which are still in use, but nearly all are now electronic.
Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective.
Distributed control systems
A distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, DCS becomes more cost effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralized control rooms and local on-plant monitoring and control.
A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders and allows the control equipment to be networked and thereby located locally to the equipment being controlled to reduce cabling.
A DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.
The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.
The field inputs and outputs can either be continuously changing analog signals (e.g., a current loop) or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch.
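For the analog case, the common 4-20 mA current loop maps linearly onto the instrument's measurement span, so conversion to engineering units is a simple scaling. The sketch below assumes a hypothetical transmitter spanning 0-10 bar; the span values and the under-range threshold are illustrative.

```python
def current_loop_to_units(current_ma, low=0.0, high=10.0):
    """Convert a 4-20 mA current-loop reading to engineering units.

    low and high are the values mapped to 4 mA and 20 mA; here an
    assumed 0-10 bar pressure transmitter.
    """
    if current_ma < 3.8:  # readings well below 4 mA suggest a broken loop
        raise ValueError("signal under-range: possible broken loop")
    return low + (current_ma - 4.0) * (high - low) / 16.0

print(current_loop_to_units(12.0))  # 12 mA is mid-scale -> 5.0 bar
```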
Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals.
SCADA systems
Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.
The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks.
The SCADA software operates on a supervisory level as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded.
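The division of labour described here can be sketched in a few lines: a PLC-level routine adjusts a valve on every scan to drive the flow toward the setpoint, while the supervisory layer merely writes a new setpoint. The gain, the crude process model, and all names below are illustrative assumptions, not any vendor's API.

```python
class CoolingLoop:
    """Toy PLC loop: incremental control of a valve toward a flow setpoint."""

    def __init__(self, setpoint=50.0, kp=0.4):
        self.setpoint = setpoint  # desired flow; writable by the SCADA layer
        self.kp = kp              # control gain
        self.valve = 0.0          # valve position, 0-100 % open

    def scan(self, measured_flow):
        """One PLC scan: nudge the valve in proportion to the current error."""
        error = self.setpoint - measured_flow
        self.valve = max(0.0, min(100.0, self.valve + self.kp * error))
        return self.valve

loop = CoolingLoop()
flow = 0.0
for _ in range(20):
    valve = loop.scan(flow)
    flow = 0.9 * valve        # crude process model: flow follows the valve

loop.setpoint = 70.0          # SCADA-level action: operator raises the setpoint
```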
Programmable logic controllers
PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
History
Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and consolidated overview of the process.
However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised.
The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks.
While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. It was soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.
SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. Most RTU systems always had some capacity to handle local control while the master station is not available. However, over the years RTU systems have grown more and more capable of handling local control.
The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed.
In 1993, with the release of IEC-1131, later to become IEC-61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs and fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions.
Security
SCADA and PLCs are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems, such as power, water and wastewater, and safety controls, from cyber attacks that affect the physical environment. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems.
See also
Automation
Plant process and emergency shutdown systems
MTConnect
OPC Foundation
Safety instrumented system (SIS)
Control system security
Operational Technology
References
Further reading
Guide to Industrial Control Systems (ICS) Security, SP800-82 Rev2, National Institute of Standards and Technology, May 2015.
External links
Proview, an open source process control system
Telemetry
Control system
Control engineering
Manufacturing | Industrial control system | [
"Engineering"
] | 2,427 | [
"Manufacturing",
"Automation",
"Industrial engineering",
"Control engineering",
"Mechanical engineering",
"Industrial automation"
] |
1,586,105 | https://en.wikipedia.org/wiki/Lattice%20protein | Lattice proteins are highly simplified models of protein-like heteropolymer chains on lattice conformational space which are used to investigate protein folding. Simplification in lattice proteins is twofold: each whole residue (amino acid) is modeled as a single "bead" or "point" of a finite set of types (usually only two), and each residue is restricted to be placed on vertices of a (usually cubic) lattice. To guarantee the connectivity of the protein chain, adjacent residues on the backbone must be placed on adjacent vertices of the lattice. Steric constraints are expressed by imposing that no more than one residue can be placed on the same lattice vertex.
Because proteins are such large molecules, there are severe computational limits on the simulated timescales of their behaviour when modeled in all-atom detail. The millisecond regime for all-atom simulations was not reached until 2010, and it is still not possible to fold all real proteins on a computer. Simplification significantly reduces the computational effort in handling the model, although even in this simplified scenario the protein folding problem is NP-complete.
Overview
Different versions of lattice proteins may adopt different types of lattice (typically square and triangular ones), in two or three dimensions, but it has been shown that generic lattices can be used and handled via a uniform approach.
Lattice proteins are made to resemble real proteins by introducing an energy function, a set of conditions which specify the interaction energy between beads occupying adjacent lattice sites. The energy function mimics the interactions between amino acids in real proteins, which include steric, hydrophobic and hydrogen bonding effects. The beads are divided into types, and the energy function specifies the interactions depending on the bead type, just as different types of amino acids interact differently. One of the most popular lattice models, the hydrophobic-polar model (HP model), features just two bead types—hydrophobic (H) and polar (P)—and mimics the hydrophobic effect by specifying a favorable interaction between H beads.
For any sequence in any particular structure, an energy can be rapidly calculated from the energy function. For the simple HP model, this is an enumeration of all the contacts between H residues that are adjacent in the structure but not in the chain. Most researchers consider a lattice protein sequence protein-like only if it possesses a single structure with an energetic state lower than in any other structure, although there are exceptions that consider ensembles of possible folded states. This is the energetic ground state, or native state. The relative positions of the beads in the native state constitute the lattice protein's tertiary structure. Lattice proteins do not have genuine secondary structure; however, some researchers have claimed that they can be extrapolated onto real protein structures which do include secondary structure, by appealing to the same law by which the phase diagrams of different substances can be scaled onto one another (the theorem of corresponding states).
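A minimal sketch of this energy calculation for the HP model on a 2D square lattice follows; the sequence and conformation are toy examples. Each H-H pair that is adjacent on the lattice but not consecutive in the chain contributes -1.

```python
def hp_energy(sequence, coords):
    """sequence: string of 'H'/'P'; coords: one lattice (x, y) per residue."""
    assert len(sequence) == len(coords) == len(set(coords))  # self-avoiding
    position = {xy: i for i, xy in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        # Check only right and up neighbors so each pair is counted once.
        for neighbor in ((x + 1, y), (x, y + 1)):
            j = position.get(neighbor)
            if j is not None and abs(i - j) > 1:  # adjacent, not chain-bonded
                if sequence[i] == "H" and sequence[j] == "H":
                    energy -= 1
    return energy

seq = "HPPH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]  # chain folded into a 2x2 square
print(hp_energy(seq, fold))              # -1: one non-bonded H-H contact
```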
By varying the energy function and the bead sequence of the chain (the primary structure), effects on the native state structure and the kinetics of folding can be explored, and this may provide insights into the folding of real proteins. Examples include studies of folding processes in lattice proteins that have been argued to resemble the two-phase folding kinetics of proteins: lattice proteins were shown to collapse quickly into a compact state and then slowly rearrange into the native state. Attempts to resolve the Levinthal paradox in protein folding are another effort in the field. As an example, a study conducted by Fiebig and Dill examined a search method involving constraints on forming residue contacts in lattice proteins, to provide insight into how a protein finds its native structure without a global exhaustive search. Lattice protein models have also been used to investigate the energy landscapes of proteins, i.e., the variation of their internal free energy as a function of conformation.
Lattices
A lattice is a set of orderly points that are connected by "edges". These points are called vertices and are connected to a certain number of other vertices in the lattice by edges. The number of vertices each individual vertex is connected to is called the coordination number of the lattice, and it can be scaled up or down by changing the shape or dimension (2-dimensional to 3-dimensional, for example) of the lattice. This number is important in shaping the characteristics of the lattice protein because it controls the number of other residues allowed to be adjacent to a given residue. It has been shown that for most proteins the coordination number of the lattice used should fall between 3 and 20, although most commonly used lattices have coordination numbers at the lower end of this range.
Lattice shape is an important factor in the accuracy of lattice protein models. Changing lattice shape can dramatically alter the shape of the energetically favorable conformations. It can also add unrealistic constraints to the protein structure such as in the case of the parity problem where in square and cubic lattices residues of the same parity (odd or even numbered) cannot make hydrophobic contact. It has also been reported that triangular lattices yield more accurate structures than other lattice shapes when compared to crystallographic data. To combat the parity problem, several researchers have suggested using triangular lattices when possible, as well as a square matrix with diagonals for theoretical applications where the square matrix may be more appropriate. Hexagonal lattices were introduced to alleviate sharp turns of adjacent residues in triangular lattices. Hexagonal lattices with diagonals have also been suggested as a way to combat the parity problem.
Hydrophobic-polar model
The hydrophobic-polar protein model is the original lattice protein model. It was first proposed by Dill et al. in 1985 as a way to overcome the significant cost and difficulty of predicting protein structure, using only the hydrophobicity of the amino acids in the protein. It is considered to be the paradigmatic lattice protein model. The method was able to quickly give an estimate of protein structure by representing proteins as "short chains on a 2D square lattice" and has since become known as the hydrophobic-polar model. It breaks the protein folding problem into three separate problems: modeling the protein conformation, defining the energetic properties of the amino acids as they interact with one another to find said conformation, and developing an efficient algorithm for the prediction of these conformations. This is done by classifying amino acids in the protein as either hydrophobic or polar and assuming that the protein is being folded in an aqueous environment. The lattice statistical model seeks to recreate protein folding by minimizing the free energy of the contacts between hydrophobic amino acids. Hydrophobic amino acid residues are predicted to group around each other, while hydrophilic residues interact with the surrounding water.
Different lattice types and algorithms have been used to study protein folding with the HP model. Efforts have been made to obtain higher approximation ratios using approximation algorithms on 2-dimensional and 3-dimensional square and triangular lattices. As an alternative to approximation algorithms, genetic algorithms have also been applied on square, triangular, and face-centered-cubic lattices.
Problems and alternative models
The simplicity of the hydrophobic-polar model has caused it to have several problems that people have attempted to correct with alternative lattice protein models. Chief among these problems is the issue of degeneracy, which occurs when there is more than one minimum-energy conformation for the modeled protein, leading to uncertainty about which conformation is the native one. Attempts to address this include the HPNX model, which classifies amino acids as hydrophobic (H), positive (P), negative (N), or neutral (X) according to the charge of the amino acid, adding additional parameters to reduce the number of low-energy conformations and allowing for more realistic protein simulations. Another model is the Crippen model, which uses protein characteristics taken from crystal structures to inform the choice of native conformation.
Another issue with lattice models is that they generally don't take into account the space taken up by amino acid side chains, instead considering only the α-carbon. The side chain model addresses this by adding a side chain to the vertex adjacent to the α-carbon.
References
Protein structure
NP-complete problems | Lattice protein | [
"Chemistry",
"Mathematics"
] | 1,660 | [
"Protein structure",
"Computational problems",
"Structural biology",
"Mathematical problems",
"NP-complete problems"
] |
1,586,291 | https://en.wikipedia.org/wiki/Fr%C3%A9chet%20filter | In mathematics, the Fréchet filter, also called the cofinite filter, on a set X is a certain collection of subsets of X (that is, it is a particular subset of the power set of X).
A subset A of X belongs to the Fréchet filter if and only if the complement of A in X is finite.
Any such set A is said to be cofinite in X, which is why it is alternatively called the cofinite filter on X.
The Fréchet filter is of interest in topology, where filters originated, and relates to order and lattice theory because a set's power set is a partially ordered set under set inclusion (more specifically, it forms a lattice).
The Fréchet filter is named after the French mathematician Maurice Fréchet (1878-1973), who worked in topology.
Definition
A subset A of a set X is said to be cofinite in X if its complement in X (that is, the set X ∖ A) is finite.
If the empty set is allowed to be in a filter, the Fréchet filter on X, denoted by F, is the set of all cofinite subsets of X.
That is: F = {A ⊆ X : X ∖ A is finite}.
If X is not a finite set, then every cofinite subset of X is necessarily non-empty, so that in this case, it is not necessary to make the empty set assumption made before.
This makes F a filter on the lattice (P(X), ⊆), the power set of X ordered by set inclusion. Given that X ∖ A denotes the complement of a set A in X, the following two conditions hold:
Intersection condition: If two sets A and B are finitely complemented in X, then so is their intersection, since X ∖ (A ∩ B) = (X ∖ A) ∪ (X ∖ B), and the union of two finite sets is finite.
Upper-set condition: If a set A is finitely complemented in X, then so is any superset B ⊇ A in X, since X ∖ B ⊆ X ∖ A.
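Both conditions can be checked mechanically if each cofinite set is represented by its finite complement. The following minimal Python sketch uses that representation; the function names and example sets are illustrative.

```python
# Represent a cofinite subset A of an infinite X by its finite
# complement, stored as a frozenset; both conditions then reduce
# to facts about finite sets.

def intersection_complement(comp_a, comp_b):
    """Complement of the intersection is the union of the complements."""
    return comp_a | comp_b  # a union of two finite sets is finite

def superset_is_cofinite(comp_a, comp_b):
    """If A is a subset of B, B's complement sits inside A's, so it is finite."""
    return comp_b <= comp_a

comp_a = frozenset({0, 2, 4})  # A = X without {0, 2, 4}
comp_b = frozenset({2})        # B = X without {2}; note A is a subset of B

print(intersection_complement(comp_a, comp_b))  # frozenset({0, 2, 4})
print(superset_is_cofinite(comp_a, comp_b))     # True: B is cofinite
```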
Properties
If the base set X is finite, then F = P(X), since every subset of X, and in particular every complement, is then finite.
This case is sometimes excluded by definition, or else called the improper filter on X. Allowing X to be finite creates a single exception to the Fréchet filter's being free and non-principal, since a filter on a finite set cannot be free and a non-principal filter cannot contain any singletons as members.
If X is infinite, then every member of F is infinite, since it is simply X minus finitely many of its members.
Additionally, F is infinite, since one of its subsets is the set of all X ∖ {x} where x ∈ X.
The Fréchet filter is both free and non-principal, excepting the finite case mentioned above, and is included in every free filter.
It is also the dual filter of the ideal of all finite subsets of (infinite) X.
The Fréchet filter is not necessarily an ultrafilter (or maximal proper filter).
Consider the power set P(ℕ), where ℕ is the set of natural numbers.
The set of even numbers is the complement of the set of odd numbers. Since neither of these sets is finite, neither set is in the Fréchet filter on ℕ.
However, an ultrafilter (and any other non-degenerate filter) is free if and only if it includes the Fréchet filter.
The ultrafilter lemma states that every non-degenerate filter is contained in some ultrafilter.
The existence of free ultrafilters was established by Tarski in 1930, relying on a theorem equivalent to the axiom of choice, and is used in the construction of the hyperreals in nonstandard analysis.
Examples
If X is a finite set, assuming that the empty set can be in a filter, then the Fréchet filter on X consists of all the subsets of X.
On the set ℕ of natural numbers, the set of infinite intervals B = {(n, ∞) : n ∈ ℕ}
is a Fréchet filter base, that is, the Fréchet filter on ℕ consists of all supersets of elements of B.
See also
References
External links
J.B. Nation, Notes on Lattice Theory, course notes, revised 2017.
Order theory
Topology | Fréchet filter | [
"Physics",
"Mathematics"
] | 759 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Order theory"
] |
1,586,721 | https://en.wikipedia.org/wiki/ABO%20blood%20group%20system | The ABO blood group system is used to denote the presence of one, both, or neither of the A and B antigens on erythrocytes (red blood cells). For human blood transfusions, it is the most important of the 44 different blood type (or group) classification systems currently recognized by the International Society of Blood Transfusion (ISBT) as of
December 2022. A mismatch in this serotype (or in various others) can cause a potentially fatal adverse reaction after a transfusion, or an unwanted immune response to an organ transplant. Such mismatches are rare in modern medicine. The associated anti-A and anti-B antibodies are usually IgM antibodies, produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses.
The ABO blood types were discovered by Karl Landsteiner in 1901; he received the Nobel Prize in Physiology or Medicine in 1930 for this discovery. ABO blood types are also present in other primates, such as apes and Old World monkeys.
History
Discovery
The ABO blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that red blood cells would clump together (agglutinate) when mixed in test tubes with sera from different persons, and that some human blood also agglutinated with animal blood. He wrote a two-sentence footnote:
This was the first evidence that blood variations exist in humans; it was previously believed that all humans have similar blood. The next year, in 1901, he made the definitive observation that the blood serum of an individual would agglutinate only with that of certain individuals. Based on this he classified human blood into three groups, namely group A, group B, and group C. He found that group A blood agglutinates with group B, but never with its own type. Similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B.
This was the discovery of blood groups for which Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930. In his paper, he referred to the specific blood group interactions as isoagglutination, and also introduced the concept of agglutinins (antibodies), which is the actual basis of antigen-antibody reaction in the ABO system. He asserted:
Thus, he discovered two antigens (agglutinogens A and B) and two antibodies (agglutinins anti-A and anti-B). His third group (C) indicated the absence of both A and B antigens but contained both anti-A and anti-B. The following year, his students Adriano Sturli and Alfred von Decastello discovered the fourth type (without naming it, simply referring to it as "no particular type").
In 1910, Ludwik Hirszfeld and Emil Freiherr von Dungern introduced the term 0 (null) for the group Landsteiner designated as C, and AB for the type discovered by Adriano Sturli and Alfred von Decastello (https://www.rockefeller.edu/our-scientists/karl-landsteiner/2554-nobel-prize/). They were also the first to explain the genetic inheritance of the blood groups.
Classification systems
Czech serologist Jan Janský independently introduced blood type classification in 1907 in a local journal. He used the Roman numerals I, II, III, and IV (corresponding to modern O, A, B, and AB). Unknown to Janský, an American physician William L. Moss devised a slightly different classification using the same numerals; his I, II, III, and IV correspond to modern AB, A, B, and O.
These two systems created confusion and potential danger in medical practice. Moss's system was adopted in Britain, France, and the US, while Janský's was preferred in most European countries and some parts of the US. To resolve the chaos, the American Association of Immunologists, the Society of American Bacteriologists, and the Association of Pathologists and Bacteriologists made a joint recommendation in 1921 that the Janský classification be adopted based on priority. But it was not followed, particularly where Moss's system had been used.
In 1927, Landsteiner had moved to the Rockefeller Institute for Medical Research in New York. As a member of a committee of the National Research Council concerned with blood grouping, he suggested substituting Janský's and Moss's systems with the letters O, A, B, and AB. (There was further confusion over the use of the figure 0 for German null, as introduced by Hirszfeld and von Dungern, because others used the letter O for ohne, meaning without or zero; Landsteiner chose the latter.) This classification was adopted by the National Research Council and became variously known as the National Research Council classification, the International classification, and most popularly the "new" Landsteiner classification. The new system was gradually accepted, and by the early 1950s it was universally followed.
Other developments
The first practical use of blood typing in transfusion was by an American physician, Reuben Ottenberg, in 1907. Large-scale application began during the First World War (1914–1915) when citric acid began to be used for blood clot prevention. Felix Bernstein demonstrated the correct blood group inheritance pattern of multiple alleles at one locus in 1924. Watkins and Morgan, in England, discovered that the ABO epitopes were conferred by sugars: specifically, N-acetylgalactosamine for the A type and galactose for the B type. After much published literature claiming that the ABH substances were all attached to glycosphingolipids, Finne et al. (1978) found that human erythrocyte glycoproteins contain polylactosamine chains that contain ABH substances and represent the majority of the antigens. The main glycoproteins carrying the ABH antigens were identified to be the Band 3 and Band 4.5 proteins and glycophorin. Later, Yamamoto's group showed the precise glycosyl transferase set that confers the A, B and O epitopes.
Genetics
Blood groups are inherited from both parents. The ABO blood type is controlled by a single gene (the ABO gene) with three types of alleles inferred from classical genetics: i, IA, and IB. The I designation stands for isoagglutinogen, another term for antigen. The gene encodes a glycosyltransferase—that is, an enzyme that modifies the carbohydrate content of the red blood cell antigens. The gene is located on the long arm of the ninth chromosome (9q34).
The IA allele gives type A, IB gives type B, and i gives type O. As both IA and IB are dominant over i, only ii people have type O blood. Individuals with IAIA or IAi have type A blood, and individuals with IBIB or IBi have type B. IAIB people have both phenotypes, because A and B express a special dominance relationship: codominance, which means that type A and B parents can have an AB child. A couple with type A and type B can also have a type O child if they are both heterozygous (IBi and IAi). The cis-AB phenotype has a single enzyme that creates both A and B antigens. The resulting red blood cells do not usually express A or B antigen at the same level that would be expected on common group A1 or B red blood cells, which can help solve the problem of an apparently genetically impossible blood group.
Individuals with the rare Bombay phenotype (hh) produce antibodies against the A, B, and O groups and can only receive transfusions from other hh individuals. The table above summarizes the various blood groups that children may inherit from their parents. Genotypes are shown in the second column and in small print for the offspring: AO and AA both test as type A; BO and BB test as type B. The four possibilities represent the combinations obtained when one allele is taken from each parent; each has a 25% chance, but some occur more than once. The text above them summarizes the outcomes.
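The inheritance logic described above can be sketched as a small Punnett-square calculation; the genotype encoding and function names below are illustrative.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def phenotype(allele_pair):
    """Map an unordered pair of alleles ('A', 'B', 'O') to a blood type."""
    alleles = set(allele_pair)
    if alleles == {"A", "B"}:
        return "AB"          # codominance of A and B
    if "A" in alleles:
        return "A"           # A dominant over O
    if "B" in alleles:
        return "B"           # B dominant over O
    return "O"               # only OO gives type O

def offspring_types(parent1, parent2):
    """Probability of each child blood type from two parental genotypes."""
    counts = Counter(phenotype(pair) for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {t: Fraction(n, total) for t, n in counts.items()}

# Heterozygous type A and type B parents can have a type O child:
print(offspring_types(("A", "O"), ("B", "O")))  # each type with probability 1/4
```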
Historically, ABO blood tests were used in paternity testing, but in 1957 only 50% of American men falsely accused were able to use them as evidence against paternity. Occasionally, the blood types of children are not consistent with expectations—for example, a type O child can be born to an AB parent—due to rare situations, such as Bombay phenotype and cis AB.
Subgroups
The A blood type contains about 20 subgroups, of which A1 and A2 are the most common (over 99%). A1 makes up about 80% of all A-type blood, with A2 making up almost all of the rest. These two subgroups are not always interchangeable as far as transfusion is concerned, as some A2 individuals produce antibodies against the A1 antigen. Complications can arise in rare cases when typing the blood.
With the development of DNA sequencing, it has been possible to identify a much larger number of alleles at the ABO locus, each of which can be categorized as A, B, or O in terms of the reaction to transfusion, but which can be distinguished by variations in the DNA sequence. In white individuals, there are six common alleles of the ABO gene that produce one's blood type:
The same study also identified 18 rare alleles, which generally have a weaker glycosylation activity. People with weak alleles of A can sometimes express anti-A antibodies, though these are usually not clinically significant as they do not stably interact with the antigen at body temperature.
Cis AB is another rare variant, in which A and B genes are transmitted together from a single parent.
Distribution and evolutionary history
The distribution of the blood groups A, B, O and AB varies across the world according to the population. There are also variations in blood type distribution within human subpopulations.
In the UK, the distribution of blood type frequencies through the population still shows some correlation to the distribution of placenames and to the successive invasions and migrations including Celts, Norsemen, Danes, Anglo-Saxons, and Normans who contributed the morphemes to the placenames and the genes to the population. The native Celts tended to have more type O blood, while the other populations tended to have more type A.
The two common O alleles, O01 and O02, share their first 261 nucleotides with the group A allele A01; unlike the group A allele, however, both subsequently have a guanosine base deleted. The resulting frameshift mutation produces a premature stop codon. This variant is found worldwide, and likely predates human migration from Africa. The O01 allele is considered to predate the O02 allele.
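To make the frameshift mechanism concrete, here is a small illustrative Python sketch (the sequence is invented for the example, not the actual ABO coding sequence, and only stop codons are modeled):

```python
# Toy demonstration of how deleting a single guanosine shifts the reading
# frame and exposes a premature stop codon, as in the common O alleles.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def first_stop(dna):
    """Return the index of the first in-frame stop codon, or None."""
    for i in range(0, len(dna) - 2, 3):
        if dna[i:i + 3] in STOP_CODONS:
            return i // 3
    return None

functional = "ATGGTCCTGAAACGTGCC"              # made-up coding fragment, no stop
frameshifted = functional.replace("G", "", 1)  # delete one guanosine

print(first_stop(functional))    # None: the original frame reads through
print(first_stop(frameshifted))  # 2: a premature stop appears in the new frame
```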
Some evolutionary biologists theorize that there are four main lineages of the ABO gene and that mutations creating type O have occurred at least three times in humans. From oldest to youngest, these lineages comprise the following alleles: A101/A201/O09, B101, O02 and O01. The continued presence of the O alleles is hypothesized to be the result of balancing selection. Both theories contradict the previously held theory that type O blood evolved first.
Origin theories
It is possible that food and environmental antigens (bacterial, viral, or plant antigens) have epitopes similar enough to the A and B glycoprotein antigens. The antibodies created against these environmental antigens in the first years of life can cross-react with ABO-incompatible red blood cells encountered during blood transfusion later in life. Anti-A antibodies are hypothesized to originate from an immune response towards the influenza virus, whose epitopes are similar enough to the α-D-N-acetylgalactosamine on the A glycoprotein to be able to elicit a cross-reaction. Anti-B antibodies are hypothesized to originate from antibodies produced against Gram-negative bacteria, such as E. coli, cross-reacting with the α-D-galactose on the B glycoprotein.
However, it is more likely that the force driving evolution of allele diversity is simply negative frequency-dependent selection; cells with rare variants of membrane antigens are more easily distinguished by the immune system from pathogens carrying antigens from other hosts. Thus, individuals possessing rare types are better equipped to detect pathogens. The high within-population diversity observed in human populations would, then, be a consequence of natural selection on individuals.
Clinical relevance
The carbohydrate molecules on the surfaces of red blood cells have roles in cell membrane integrity, cell adhesion, and membrane transport of molecules, and act as receptors for extracellular ligands and enzymes. ABO antigens have similar roles on epithelial cells as well as red blood cells.
Bleeding and thrombosis (von Willebrand factor)
The ABO antigen is also expressed on the von Willebrand factor (vWF) glycoprotein, which participates in hemostasis (control of bleeding). In fact, having type O blood predisposes individuals to bleeding, as 30% of the total genetic variation observed in plasma vWF is explained by the effect of the ABO blood group, and individuals with group O blood normally have significantly lower plasma levels of vWF (and Factor VIII) than non-O individuals. In addition, vWF is degraded more rapidly in group O, due to the higher prevalence in this blood group of the Cys1584 variant of vWF (an amino acid polymorphism in vWF); the gene for ADAMTS13 (the vWF-cleaving protease) maps to human chromosome 9 band q34.2, the same locus as the ABO blood type. Higher levels of vWF are more common amongst people who have had ischemic stroke (from blood clotting) for the first time; the occurrence was not affected by ADAMTS13 polymorphism, and the only significant genetic factor was the person's blood group.
ABO(H) blood group antigens are also carried by other hemostatically relevant glycoproteins, such as platelet glycoprotein Ibα, which is a ligand for vWF on platelets. The significance of ABO(H) antigen expression on these other hemostatic glycoproteins is not fully defined, but may also be relevant for bleeding and thrombosis.
ABO hemolytic disease of the newborn
ABO blood group incompatibilities between the mother and child do not usually cause hemolytic disease of the newborn (HDN), because antibodies to the ABO blood groups are usually of the IgM type, which do not cross the placenta. However, mothers of type O produce anti-A and anti-B antibodies of the IgG type, which can cross the placenta, so their babies can potentially develop ABO hemolytic disease of the newborn.
Clinical applications
In human cells, the ABO alleles and their encoded glycosyltransferases have been described in several oncologic conditions. Using anti-GTA/GTB monoclonal antibodies, it was demonstrated that a loss of these enzymes was correlated with malignant bladder and oral epithelia. Furthermore, the expression of ABO blood group antigens in normal human tissues is dependent on the type of differentiation of the epithelium. In most human carcinomas, including oral carcinoma, a significant event as part of the underlying mechanism is decreased expression of the A and B antigens. Several studies have observed that a relative down-regulation of GTA and GTB occurs in oral carcinomas in association with tumor development. More recently, a genome-wide association study (GWAS) has identified variants in the ABO locus associated with susceptibility to pancreatic cancer.
In addition, another large GWAS study has associated ABO-histo blood groups as well as FUT2 secretor status with the presence in the intestinal microbiome of specific bacterial species. In this case the association was with Bacteroides and Faecalibacterium spp. Bacteroides of the same OTU (operational taxonomic unit) have been shown to be associated with inflammatory bowel disease, thus the study suggests an important role for the ABO histo-blood group antigens as candidates for direct modulation of the human microbiome in health and disease.
Clinical marker
A multi-locus genetic risk score study based on a combination of 27 loci, including the ABO gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmö Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22).
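At its core, such a multi-locus score is a weighted sum of risk-allele counts across the loci. A minimal illustrative sketch (the loci and weights below are hypothetical, not those of the cited study):

```python
# Minimal sketch of a multi-locus genetic risk score: a weighted sum of
# risk-allele counts (0, 1, or 2 per locus). Loci and weights are invented
# for illustration; they are not the 27 loci of the cited study.
def genetic_risk_score(allele_counts, weights):
    return sum(weights[locus] * count for locus, count in allele_counts.items())

weights = {"ABO": 0.19, "LOCUS_2": 0.12, "LOCUS_3": 0.08}  # hypothetical
person = {"ABO": 2, "LOCUS_2": 1, "LOCUS_3": 0}            # allele counts

print(f"risk score = {genetic_risk_score(person, weights):.2f}")  # 0.50
```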
Alteration of ABO antigens for transfusion
In April 2007, an international team of researchers announced in the journal Nature Biotechnology an inexpensive and efficient way to convert types A, B, and AB blood into type O. This is done by using glycosidase enzymes from specific bacteria to strip the blood group antigens from red blood cells. The removal of A and B antigens still does not address the problem of the Rh blood group antigen on the blood cells of Rh-positive individuals, so blood from Rh-negative donors must be used. The modified blood is named "enzyme converted to O" (ECO blood), but despite the early success in converting B-type RBCs to O-type, and clinical trials in which converted cells were transfused into A- and O-type patients without adverse effects, the technology has not yet become clinical practice.
Another approach to the blood antigen problem is the manufacture of artificial blood, which could act as a substitute in emergencies.
Pseudoscience
In Japan and other parts of East Asia, there is a popular belief in blood type personality theory, which claims that blood types predict or influence personality. The claim has no scientific basis: there is scientific consensus that no such link exists, and the scientific community considers the theory a pseudoscience and a superstition.
The belief originated in the 1930s, when it was introduced as part of Japan's eugenics program. Its popularity faded after Japan's defeat in World War II, as Japanese support for eugenics faltered, but the idea was revived in the 1970s by the journalist Masahiko Nomi. Despite its status as a pseudoscience, it remains widely popular throughout East Asia.
Other popular ideas are blood type-specific dietary needs, that group A causes severe hangovers, that group O is associated with better teeth, and that those with group A2 have the highest IQ scores. As with blood type personality theory, these and other popular ideas lack scientific evidence, and many are discredited or pseudoscientific.
See also
Secretor status — secretion of ABO antigens in body fluids
References
Further reading
External links
ABO at BGMUT Blood Group Antigen Gene Mutation Database at NCBI, NIH
Encyclopædia Britannica, ABO blood group system
National Blood Transfusion Service
Blood antigen systems
Transfusion medicine
Antigenic determinant
Hematopathology
Glycoproteins
Serology
Genes on human chromosome 9 | ABO blood group system | [
"Chemistry"
] | 4,125 | [
"Glycoproteins",
"Glycobiology"
] |
1,587,123 | https://en.wikipedia.org/wiki/Land%C3%A9%20g-factor | In physics, the Landé g-factor is a particular example of a g-factor, namely for an electron with both spin and orbital angular momenta. It is named after Alfred Landé, who first described it in 1921.
In atomic physics, the Landé g-factor is a multiplicative term appearing in the expression for the energy levels of an atom in a weak magnetic field. The quantum states of electrons in atomic orbitals are normally degenerate in energy, with these degenerate states all sharing the same angular momentum. When the atom is placed in a weak magnetic field, however, the degeneracy is lifted.
Description
The factor comes about during the calculation of the first-order perturbation in the energy of an atom when a weak uniform magnetic field (that is, weak in comparison to the system's internal magnetic field) is applied to the system. Formally we can write the factor as

$$g_J = g_L \frac{J(J+1) - S(S+1) + L(L+1)}{2J(J+1)} + g_S \frac{J(J+1) + S(S+1) - L(L+1)}{2J(J+1)}.$$

The orbital $g_L$ is equal to 1, and under the approximation $g_S = 2$, the above expression simplifies to

$$g_J = \frac{3}{2} + \frac{S(S+1) - L(L+1)}{2J(J+1)}.$$

Here, J is the total electronic angular momentum, L is the orbital angular momentum, and S is the spin angular momentum. Because $S = 1/2$ for electrons, one often sees this formula written with 3/4 in place of $S(S+1)$. The quantities $g_L$ and $g_S$ are other g-factors of an electron: for an atom with $S = 0$, $g_J = g_L = 1$, and for an atom with $L = 0$, $g_J = g_S \approx 2$.

If we wish to know the g-factor for an atom with total atomic angular momentum $\vec{F} = \vec{I} + \vec{J}$ (nucleus + electrons), such that the total atomic angular momentum quantum number can take values of $F = |J - I|, |J - I| + 1, \ldots, J + I$, giving

$$g_F = g_J \frac{F(F+1) - I(I+1) + J(J+1)}{2F(F+1)} + g_I \frac{\mu_N}{\mu_B} \frac{F(F+1) + I(I+1) - J(J+1)}{2F(F+1)} \approx g_J \frac{F(F+1) - I(I+1) + J(J+1)}{2F(F+1)}.$$

Here $\mu_B$ is the Bohr magneton and $\mu_N$ is the nuclear magneton. This last approximation is justified because $\mu_N$ is smaller than $\mu_B$ by the ratio of the electron mass to the proton mass.
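As a quick numerical check of the formulas above (an illustrative sketch, not part of the standard presentation), one can evaluate $g_J$ for a couple of familiar levels:

```python
# Landé g-factor from the formula above, with g_L = 1 and g_S ~ 2.0023.
def lande_g(J, L, S, gL=1.0, gS=2.0023):
    jj, ll, ss = J * (J + 1), L * (L + 1), S * (S + 1)
    return gL * (jj - ss + ll) / (2 * jj) + gS * (jj + ss - ll) / (2 * jj)

# Sodium D2 upper level 3p 2P_{3/2}: L = 1, S = 1/2, J = 3/2
print(round(lande_g(1.5, 1, 0.5), 4))  # 1.3341 (exactly 4/3 if g_S = 2)

# A pure spin state (L = 0, J = S): g_J reduces to g_S
print(round(lande_g(0.5, 0, 0.5), 4))  # 2.0023
```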
A derivation
The following working is a common derivation.
Both the orbital angular momentum and the spin angular momentum of an electron contribute to the magnetic moment. In particular, each of them alone contributes to the magnetic moment by the following form

$$\vec{\mu}_L = -g_L \mu_B \frac{\vec{L}}{\hbar}, \qquad \vec{\mu}_S = -g_S \mu_B \frac{\vec{S}}{\hbar}, \qquad \vec{\mu}_J = \vec{\mu}_L + \vec{\mu}_S,$$

where $g_L = 1$ and $g_S \approx 2$.

Note that the negative signs in the above expressions are because an electron carries negative charge, and the value of $g_S$ can be derived naturally from Dirac's equation. The total magnetic moment $\vec{\mu}_J$, as a vector operator, does not lie along the direction of the total angular momentum $\vec{J} = \vec{L} + \vec{S}$, because the g-factors for the orbital and spin parts are different. However, due to the Wigner–Eckart theorem, its expectation value does effectively lie along the direction of $\vec{J}$, which can be employed in the determination of the g-factor according to the rules of angular momentum coupling. In particular, the g-factor is defined as a consequence of the theorem itself:

$$\langle J, J_z | \vec{\mu}_J | J, J_z' \rangle = -g_J \mu_B \frac{\langle J, J_z | \vec{J} | J, J_z' \rangle}{\hbar}.$$

Therefore,

$$\langle \vec{\mu}_J \cdot \vec{J} \rangle = -g_J \mu_B \frac{\langle \vec{J} \cdot \vec{J} \rangle}{\hbar} = -g_J \mu_B \hbar \, J(J+1),$$

while, on the other hand,

$$\langle \vec{\mu}_J \cdot \vec{J} \rangle = -\frac{\mu_B}{\hbar} \left( g_L \langle \vec{L} \cdot \vec{J} \rangle + g_S \langle \vec{S} \cdot \vec{J} \rangle \right),$$

with $\langle \vec{L} \cdot \vec{J} \rangle = \tfrac{\hbar^2}{2} \left[ J(J+1) + L(L+1) - S(S+1) \right]$ and $\langle \vec{S} \cdot \vec{J} \rangle = \tfrac{\hbar^2}{2} \left[ J(J+1) - L(L+1) + S(S+1) \right]$.

One gets

$$g_J = g_L \frac{J(J+1) + L(L+1) - S(S+1)}{2J(J+1)} + g_S \frac{J(J+1) - L(L+1) + S(S+1)}{2J(J+1)}.$$
See also
Einstein–de Haas effect
Zeeman effect
g-factor (physics)
References
Atomic physics
Nuclear physics | Landé g-factor | [
"Physics",
"Chemistry"
] | 546 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Nuclear physics",
"Atomic",
" and optical physics"
] |
1,587,656 | https://en.wikipedia.org/wiki/Virtual%20private%20database | A virtual private database or VPD masks data in a larger database so that only a subset of the data appears to exist, without actually segregating data into different tables, schemas or databases. A typical application is constraining sites, departments, individuals, etc. to operate only on their own records and at the same time allowing more privileged users and operations (e.g. reports, data warehousing, etc.) to access on the whole table.
The term is typical of the Oracle DBMS, where the implementation is very general: tables can be associated with SQL functions, which return a predicate as a SQL expression. Whenever a query is executed, the relevant predicates for the involved tables are transparently collected and used to filter rows. SELECT, INSERT, UPDATE and DELETE can have different rules.
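The mechanism can be sketched in a few lines of Python (an illustration only: in Oracle the policies are registered through the DBMS_RLS package, and the function names below are hypothetical):

```python
# Sketch of the VPD idea: each table is associated with policy functions that
# return a predicate string, which is transparently appended to queries.
policies = {}

def add_policy(table, predicate_fn):
    policies.setdefault(table, []).append(predicate_fn)

def rewrite_query(sql, table, session):
    """Collect the predicates for `table` and append them to the statement."""
    predicates = [fn(session) for fn in policies.get(table, [])]
    if not predicates:
        return sql
    joiner = " AND " if " WHERE " in sql.upper() else " WHERE "
    return sql + joiner + " AND ".join(predicates)

# Constrain each department to its own rows, as in the application above.
add_policy("orders", lambda session: f"dept_id = {session['dept_id']}")

session = {"dept_id": 42}
print(rewrite_query("SELECT * FROM orders", "orders", session))
# SELECT * FROM orders WHERE dept_id = 42
```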
External links
Using Virtual Private Database to Implement Application Security Policies
http://www.oracle-base.com/articles/8i/VirtualPrivateDatabases.php
Data security
Types of databases | Virtual private database | [
"Engineering"
] | 212 | [
"Cybersecurity engineering",
"Data security"
] |
1,589,701 | https://en.wikipedia.org/wiki/Nemawashi | Nemawashi () is a Japanese business informal process of laying the foundation for some proposed change or project by talking to the people concerned and gathering support and feedback before a formal announcement. It is considered an important element in any major change in the Japanese business environment before any formal steps are taken. Successful nemawashi enables changes to be carried out with the consent of all sides, avoiding embarrassment.
Nemawashi literally translates as "turning the roots", from ne (根, "root") and mawasu (回す, "to turn something, to put something around something else"). Its original meaning was literal: in preparation for transplanting a tree, one would carefully dig around it some time beforehand and trim the roots to encourage the growth of smaller roots that would help the tree become established in its new location.
Nemawashi is often cited as an example of a Japanese word which is difficult to translate effectively, because it is tied so closely to Japanese culture itself, although it is often translated as "laying the groundwork."
In Japan, high-ranking people expect to be let in on new proposals prior to an official meeting. If they find out about something for the first time during the meeting, they will feel that they have been ignored, and they may reject it for that reason alone. It is therefore important to approach these people individually before the meeting: this provides an opportunity to introduce the proposal to them, gauge their reaction, and hear their input.
The term is associated with forming a consensus, along with ringiseido (which is a more formal process). There is debate over whether nemawashi is truly cooperative, or whether those consulted sometimes have little choice but to agree. The process can be time-consuming.
See also
Japanese management culture
Lobbying
Polder model - Dutch form of consensus building
Toyota Production System
References
External links
Kirai, a geek in Japan: Nemawashi
Japanese words and phrases
Japanese business terms
Economy of Japan
Lean manufacturing | Nemawashi | [
"Engineering"
] | 415 | [
"Lean manufacturing"
] |
8,830,237 | https://en.wikipedia.org/wiki/Subspace%20theorem | In mathematics, the subspace theorem says that points of small height in projective space lie in a finite number of hyperplanes. It is a result obtained by .
Statement
The subspace theorem states that if L1,...,Ln are linearly independent linear forms in n variables with algebraic coefficients and if ε>0 is any given real number, then
the non-zero integer points x with

$$|L_1(x) \cdots L_n(x)| < \|x\|^{-\varepsilon},$$

where $\|x\|$ denotes the maximum of the absolute values of the coordinates of x, lie in a finite number of proper subspaces of Qn.
A quantitative form of the theorem, which determines the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei (1977) to allow more general absolute values on number fields.
Applications
The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and solution of the S-unit equation.
A corollary on Diophantine approximation
The following corollary to the subspace theorem is often itself referred to as the subspace theorem.
If a1,...,an are algebraic such that 1,a1,...,an are linearly independent over Q and ε>0 is any given real number, then there are only finitely many rational n-tuples (x1/y,...,xn/y) with

$$\left| a_i - \frac{x_i}{y} \right| < y^{-(1 + 1/n + \varepsilon)}, \qquad i = 1, \ldots, n.$$
The specialization n = 1 gives the Thue–Siegel–Roth theorem. One may also note that the exponent 1 + 1/n + ε is best possible by Dirichlet's theorem on Diophantine approximation.
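As a numerical illustration of the n = 1 case (a sketch, not a proof), the continued-fraction convergents of $\sqrt{2}$ approximate it with error roughly $y^{-2}$; the effective exponent decreases toward 2 from above, consistent with the exponent $1 + 1/n + \varepsilon = 2 + \varepsilon$ being best possible:

```python
# Convergents p/q of sqrt(2) = [1; 2, 2, 2, ...] satisfy the recurrence
# p_i = 2*p_{i-1} + p_{i-2} (and likewise for q). The "effective exponent"
# e in |sqrt(2) - p/q| = q^(-e) stays just above 2 and tends to 2.
from fractions import Fraction
from math import log, sqrt

def sqrt2_convergents(k):
    p_prev, q_prev, p, q = 1, 1, 3, 2
    convergents = [Fraction(p, q)]
    for _ in range(k - 1):
        p_prev, p = p, 2 * p + p_prev
        q_prev, q = q, 2 * q + q_prev
        convergents.append(Fraction(p, q))
    return convergents

for c in sqrt2_convergents(10):
    err = abs(sqrt(2) - c.numerator / c.denominator)
    print(f"{c}: exponent = {-log(err) / log(c.denominator):.3f}")
```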
References
Diophantine approximation
Theorems in number theory | Subspace theorem | [
"Mathematics"
] | 326 | [
"Mathematical theorems",
"Theorems in number theory",
"Mathematical relations",
"Mathematical problems",
"Diophantine approximation",
"Approximations",
"Number theory"
] |
8,834,198 | https://en.wikipedia.org/wiki/Heptagonal%20tiling | In geometry, a heptagonal tiling is a regular tiling of the hyperbolic plane. It is represented by Schläfli symbol of {7,3}, having three regular heptagons around each vertex.
Images
Related polyhedra and tilings
This tiling is topologically related as part of a sequence of regular polyhedra with Schläfli symbol {n,3}.
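Which geometry a regular tiling {p, q} lives in can be read off from the quantity (p - 2)(q - 2): less than 4 gives a spherical polyhedron, exactly 4 a Euclidean tiling, and more than 4 a hyperbolic tiling. A quick illustrative check in Python:

```python
# Classify a regular tiling {p, q}: q regular p-gons fit flat around a vertex
# exactly when (p - 2) * (q - 2) == 4; smaller is spherical, larger hyperbolic.
def tiling_geometry(p, q):
    product = (p - 2) * (q - 2)
    if product == 4:
        return "Euclidean"
    return "spherical" if product < 4 else "hyperbolic"

print(tiling_geometry(5, 3))  # spherical  (the dodecahedron)
print(tiling_geometry(6, 3))  # Euclidean  (the hexagonal tiling)
print(tiling_geometry(7, 3))  # hyperbolic (this tiling)
```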
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
Hurwitz surfaces
The symmetry group of the tiling is the (2,3,7) triangle group, and a fundamental domain for this action is the (2,3,7) Schwarz triangle. This is the smallest hyperbolic Schwarz triangle, and thus, by the proof of Hurwitz's automorphisms theorem, the tiling is the universal tiling that covers all Hurwitz surfaces (the Riemann surfaces with maximal symmetry group), giving them a tiling by heptagons whose symmetry group equals their automorphism group as Riemann surfaces. The smallest Hurwitz surface is the Klein quartic (genus 3, automorphism group of order 168), and the induced tiling has 24 heptagons, meeting at 56 vertices.
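These counts can be verified with elementary arithmetic via the Euler characteristic (a quick sketch added for illustration):

```python
# Klein quartic tiling: 24 heptagons, 3 meeting at each vertex.
faces = 24
edges = faces * 7 // 2          # each heptagon has 7 edges, each shared by 2
vertices = faces * 7 // 3       # 3 heptagon corners meet at each vertex
chi = vertices - edges + faces  # Euler characteristic V - E + F
genus = (2 - chi) // 2          # chi = 2 - 2g for a closed orientable surface
print(vertices, edges, chi, genus)  # 56 84 -4 3
```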
The dual order-7 triangular tiling has the same symmetry group, and thus yields triangulations of Hurwitz surfaces.
See also
Hexagonal tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Regular tilings | Heptagonal tiling | [
"Physics"
] | 447 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
8,834,365 | https://en.wikipedia.org/wiki/Order-7%20triangular%20tiling | In geometry, the order-7 triangular tiling is a regular tiling of the hyperbolic plane with a Schläfli symbol of {3,7}.
Hurwitz surfaces
The symmetry group of the tiling is the (2,3,7) triangle group, and a fundamental domain for this action is the (2,3,7) Schwarz triangle. This is the smallest hyperbolic Schwarz triangle, and thus, by the proof of Hurwitz's automorphisms theorem, the tiling is the universal tiling that covers all Hurwitz surfaces (the Riemann surfaces with maximal symmetry group), giving them a triangulation whose symmetry group equals their automorphism group as Riemann surfaces.
The smallest of these is the Klein quartic, the most symmetric genus 3 surface, together with a tiling by 56 triangles, meeting at 24 vertices, with symmetry group the simple group of order 168, known as PSL(2,7). The resulting surface can in turn be polyhedrally immersed into Euclidean 3-space, yielding the small cubicuboctahedron.
The dual order-3 heptagonal tiling has the same symmetry group, and thus yields heptagonal tilings of Hurwitz surfaces.
Related polyhedra and tiling
It is related to two star-tilings by the same vertex arrangement: the order-7 heptagrammic tiling, {7/2,7}, and heptagrammic-order heptagonal tiling, {7,7/2}.
This tiling is topologically related as part of a sequence of regular polyhedra with Schläfli symbol {3,p}.
This tiling is a part of regular series {n,7}:
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
See also
Order-7 tetrahedral honeycomb
List of regular polytopes
List of uniform planar tilings
Tilings of regular polygons
Triangular tiling
Uniform tilings in hyperbolic plane
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-7 tilings
Regular tilings
Triangular tilings | Order-7 triangular tiling | [
"Physics"
] | 560 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
8,834,541 | https://en.wikipedia.org/wiki/Frataxin | Frataxin is a protein that in humans is encoded by the FXN gene.
It is located in the mitochondrion, and frataxin mRNA is mostly expressed in tissues with a high metabolic rate. The function of frataxin is not clear, but it is involved in the assembly of iron-sulfur clusters. It has been proposed to act as either an iron chaperone or an iron storage protein. Reduced expression of frataxin is the cause of Friedreich's ataxia.
Structure
X-ray crystallography has shown that human frataxin consists of a β-sheet that supports a pair of parallel α-helices, forming a compact αβ sandwich. Frataxin homologues in other species are similar, sharing the same core structure. However, the frataxin tail sequences, extending from the end of one helix, diverge in sequence and differ in length. Human frataxin has a longer tail sequence than frataxin found in bacteria or yeast. It is hypothesized that the purpose of the tail is to stabilize the protein.
Like most mitochondrial proteins, frataxin is synthesized in cytoplasmic ribosomes as large precursor molecules with mitochondrial targeting sequences. Upon entry into mitochondria, the molecules are broken down by a proteolytic reaction to yield mature frataxin.
Function
Frataxin is localized to the mitochondrion. The function of frataxin is not entirely clear, but it seems to be involved in assembly of iron-sulfur clusters. It has been proposed to act as either an iron chaperone or an iron storage protein.
Frataxin mRNA is predominantly expressed in tissues with a high metabolic rate (including liver, kidney, brown fat and heart). Mouse and yeast frataxin homologues contain a potential N-terminal mitochondrial targeting sequence, and human frataxin has been observed to co-localise with a mitochondrial protein. Furthermore, disruption of the yeast gene has been shown to result in mitochondrial dysfunction. Friedreich's ataxia is thus believed to be a mitochondrial disease caused by a mutation in the nuclear genome (specifically, expansion of an intronic GAA triplet repeat in the FXN gene, which encodes the protein frataxin).
Clinical significance
Reduced expression of frataxin is the cause of Friedreich's ataxia (FRDA), a neurodegenerative disease. The reduction in frataxin gene expression may be attributable either to the silencing of transcription of the frataxin gene by epigenetic modifications of the chromosome, or to the inability to splice the expanded GAA repeats in the first intron of the pre-mRNA (as seen in bacteria and human cells), or to both. The expansion of the intronic trinucleotide repeat GAA results in Friedreich's ataxia. This expanded repeat causes R-loop formation, and using a repeat-targeted oligonucleotide to disrupt the R-loop can reactivate frataxin expression.
96% of FRDA patients have a GAA trinucleotide repeat expansion in intron 1 of both alleles of their FXN gene. Overall, this leads to a decrease in frataxin mRNA synthesis and a decrease (but not absence) in frataxin protein in people with FRDA. (A subset of FRDA patients have GAA expansion in one chromosome and a point mutation in the FXN exon in the other chromosome.) In the typical case, the length of the allele with the shorter GAA expansion inversely correlates with frataxin levels. FRDA patients’ peripheral tissues typically have less than 10% of the frataxin levels exhibited by unaffected people. Lower levels of frataxin result in earlier disease onset and faster progression.
FRDA is characterized by ataxia, sensory loss, and cardiomyopathy. The reason frataxin deficiency causes these symptoms is not entirely clear. On a cellular level, it is linked to iron accumulation in the mitochondria and increased oxidant sensitivity. For reasons that are not well understood, this primarily affects the tissue of the dorsal root ganglia, cerebellum, and heart muscle.
Animal studies
In mice, complete inactivation of the FXN homolog (Frda) is lethal in the early embryonic stage. Although nearly all organisms express a frataxin homologue, the GAA repeat in intron 1 only exists in humans and other primates, so the mutation that causes FRDA cannot occur naturally in other animals. Scientists have developed several options to model this disease in mice. One approach is to silence frataxin expression in just one specific tissue type of interest: the heart (mice modified this way are called MCK), all neurons (NSE), or just the spinal cord and cerebellum (PRP). Another approach involves inserting a GAA expansion into the first intron of the mouse FXN gene, which should inhibit frataxin production, just like in humans. Mice that are homozygous for this modified gene are called KIKI (knock-in knock-in), and the compound heterozygotes formed by crossing KIKI mice with frataxin knockout mice are called KIKO (knock-in knock-out). However, even KIKO mice still express 25-36% of the normal frataxin level, and show very mild symptoms. The final approach involves creating transgenic mice with a GAA-expanded version of the human frataxin gene. These mice are called YG22R (one GAA sequence of 190 repeats) and YG8R (two GAA sequences of 90 and 190 repeats). These mice show symptoms similar to human patients.
Overexpression of frataxin in Drosophila has been shown to increase antioxidant capability, resistance to oxidative stress insults, and longevity, supporting the theory that the role of frataxin is to protect the mitochondria from oxidative stress and the ensuing cellular damage.
Fibroblasts from a mouse model of FRDA and FRDA patient fibroblasts show increased levels of DNA double-strand breaks. A lentivirus gene delivery system was used to deliver the frataxin gene to the FRDA mouse model and human patient cells, and this resulted in long-term restored expression of frataxin mRNA and frataxin protein. This restored expression of the frataxin gene was accompanied by a substantial reduction in the number of DNA double-strand breaks. The impaired frataxin in FRDA cells appears to cause reduced capacity for repair of DNA damage and this may contribute to neurodegeneration.
Interactions
Frataxin has been shown to biologically interact with the enzyme PMPCB.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Friedreich Ataxia
Proteins | Frataxin | [
"Chemistry"
] | 1,416 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
8,837,004 | https://en.wikipedia.org/wiki/Dihydroartemisinin | Dihydroartemisinin (also known as dihydroqinghaosu, artenimol or DHA) is a drug used to treat malaria. Dihydroartemisinin is the active metabolite of all artemisinin compounds (artemisinin, artesunate, artemether, etc.) and is also available as a drug in itself. It is a semi-synthetic derivative of artemisinin and is widely used as an intermediate in the preparation of other artemisinin-derived antimalarial drugs. It is sold commercially in combination with piperaquine and has been shown to be equivalent to artemether/lumefantrine.
Medical use
Dihydroartemisinin is used to treat malaria, generally as a combination drug with piperaquine.
In a systematic review of randomized controlled trials, both dihydroartemisinin-piperaquine and artemether-lumefantrine are very effective at treating malaria (high quality evidence). However, dihydroartemisinin-piperaquine cures slightly more patients than artemether-lumefantrine, and it also prevents further malaria infections for longer after treatment (high quality evidence). Dihydroartemisinin-piperaquine and artemether-lumefantrine probably have similar side effects (moderate quality evidence). The studies were all conducted in Africa. In studies of people living in Asia, dihydroartemisinin-piperaquine is as effective as artesunate plus mefloquine at treating malaria (moderate quality evidence). Artesunate plus mefloquine probably causes more nausea, vomiting, dizziness, sleeplessness, and palpitations than dihydroartemisinin-piperaquine (moderate quality evidence).
Pharmacology and mechanism
The proposed mechanism of action of artemisinin involves cleavage of endoperoxide bridges by iron, producing free radicals (hypervalent iron-oxo species, epoxides, aldehydes, and dicarbonyl compounds) which damage biological macromolecules, causing oxidative stress in the cells of the parasite. Malaria is caused by apicomplexans, primarily Plasmodium falciparum, which largely reside in red blood cells and contain iron-rich heme groups (in the form of hemozoin). In 2015 artemisinin was shown to bind to a large number of targets, suggesting that it acts in a promiscuous manner. Recent mechanistic research has shown that artemisinin targets a broad spectrum of proteins in the human cancer cell proteome through heme-activated radical alkylation.
Chemistry
Dihydroartemisinin has a low solubility in water of less than 0.1 g/L. Consequently, its use may result in side effects caused by minor, yet much more soluble, additives (excipients) such as Cremophor EL.
The lactone of artemisinin can selectively be reduced with mild hydride-reducing agents, such as sodium borohydride, potassium borohydride, and lithium borohydride, to dihydroartemisinin (a lactol) in over 90% yield. This is a novel reduction, because lactones normally cannot be reduced with sodium borohydride under the same reaction conditions (0-5 °C in methanol). Reduction with LiAlH4 leads to some rearranged products. It was surprising to find that the lactone was reduced but that the peroxy group survived. However, the lactone of deoxyartemisinin resisted reduction with sodium borohydride and could only be reduced with diisobutylaluminium hydride to the lactol deoxydihydroartemisinin. These results show that the peroxy group assists the reduction of the lactone with sodium borohydride to a lactol, but not to the alcohol, which would be the over-reduction product; clear evidence for how this assistance operates is still lacking.
Society and culture
In combination with piperaquine, brands include:
D-Artepp (GPSC)
Artekin (Holleykin)
Diphos (Genix Pharma)
TimeQuin (Sami Pharma)
Eurartesim (Sigma Tau; by Good Manufacturing Practices)
Duocotecxin (Holley Pharm)
Alone:
Cotecxin (Zhejiang Holley Nanhu Pharmaceutical Co.)
Research
Accumulative research suggests that dihydroartemisinin and other artemisinin-based endoperoxide compounds may display activity as experimental cancer chemotherapeutics. Recent pharmacological evidence demonstrates that dihydroartemisinin targets human metastatic melanoma cells with induction of NOXA-dependent mitochondrial apoptosis that occurs downstream of iron-dependent generation of cytotoxic oxidative stress.
References
Further reading
Antimalarial agents
Organic peroxides
Trioxanes
Chinese discoveries
Oxygen heterocycles
Heterocyclic compounds with 4 rings
Tetracyclic compounds
Lactols | Dihydroartemisinin | [
"Chemistry"
] | 1,082 | [
"Organic compounds",
"Lactols",
"Functional groups",
"Organic peroxides"
] |
9,508,538 | https://en.wikipedia.org/wiki/Eukaryotic%20initiation%20factor | Eukaryotic initiation factors (eIFs) are proteins or protein complexes involved in the initiation phase of eukaryotic translation. These proteins help stabilize the formation of ribosomal preinitiation complexes around the start codon and are an important input for post-transcription gene regulation. Several initiation factors form a complex with the small 40S ribosomal subunit and Met-tRNAiMet called the 43S preinitiation complex (43S PIC). Additional factors of the eIF4F complex (eIF4A, E, and G) recruit the 43S PIC to the five-prime cap structure of the mRNA, from which the 43S particle scans 5'-->3' along the mRNA to reach an AUG start codon. Recognition of the start codon by the Met-tRNAiMet promotes gated phosphate and eIF1 release to form the 48S preinitiation complex (48S PIC), followed by large 60S ribosomal subunit recruitment to form the 80S ribosome. There exist many more eukaryotic initiation factors than prokaryotic initiation factors, reflecting the greater biological complexity of eukaryotic translation. There are at least twelve eukaryotic initiation factors, composed of many more polypeptides, and these are described below.
eIF1 and eIF1A
eIF1 and eIF1A both bind to the 40S ribosome subunit-mRNA complex. Together they induce an "open" conformation of the mRNA binding channel, which is crucial for scanning, tRNA delivery, and start codon recognition. In particular, eIF1 dissociation from the 40S subunit is considered to be a key step in start codon recognition.
eIF1 and eIF1A are small proteins (13 and 16 kDa, respectively in humans) and are both components of the 43S PIC. eIF1 binds near the ribosomal P-site, while eIF1A binds near the A-site, in a manner similar to the structurally and functionally related bacterial counterparts IF3 and IF1, respectively.
eIF2
eIF2 is the main protein complex responsible for delivering the initiator tRNA to the P-site of the preinitiation complex, as a ternary complex containing Met-tRNAiMet and GTP (the eIF2-TC). eIF2 has specificity for the methionine-charged initiator tRNA, which is distinct from other methionine-charged tRNAs used for elongation of the polypeptide chain. The eIF2 ternary complex remains bound to the P-site while the mRNA attaches to the 40S ribosome and the complex begins to scan the mRNA. Once the AUG start codon is recognized and located in the P-site, eIF5 stimulates the hydrolysis of eIF2-GTP, effectively switching it to the GDP-bound form via gated phosphate release. The hydrolysis of eIF2-GTP provides the conformational change that converts the scanning complex into the 48S initiation complex, with the initiator tRNA-Met anticodon base-paired to the AUG. After the 48S initiation complex is formed, eIF2 along with most of the other initiation factors dissociates, allowing the 60S subunit to bind. eIF1A and eIF5B-GTP remain bound in the A site, and GTP hydrolysis by eIF5B is required for their release and the proper initiation of elongation.
eIF2 has three subunits, eIF2-α, β, and γ. The α-subunit is a target of regulatory phosphorylation and is of particular importance for cells that may need to turn off protein synthesis globally as a response to cell signaling events. When phosphorylated, it sequesters eIF2B (not to be confused with eIF2β), a GEF. Without this GEF, GDP cannot be exchanged for GTP, and translation is repressed. One example of this is the eIF2α-induced translation repression that occurs in reticulocytes starved for iron. In the case of viral infection, protein kinase R (PKR) phosphorylates eIF2α when dsRNA is detected in many multicellular organisms, leading to cell death.
The proteins eIF2A and eIF2D are both technically named 'eIF2' but neither are part of the eIF2 heterotrimer and they seem to play unique functions in translation. Instead, they appear to be involved in specialized pathways, such as 'eIF2-independent' translation initiation or re-initiation, respectively.
eIF3
eIF3 independently binds the 40S ribosomal subunit, multiple initiation factors, and cellular and viral mRNA.
In mammals, eIF3 is the largest initiation factor, made up of 13 subunits (a-m). It has a molecular weight of ~800 kDa and controls the assembly of the 40S ribosomal subunit on mRNAs that have a 5' cap or an IRES. eIF3 may use the eIF4F complex, or alternatively during internal initiation, an IRES, to position the mRNA strand near the exit site of the 40S ribosomal subunit, thus promoting the assembly of a functional pre-initiation complex.
In many human cancers, eIF3 subunits are overexpressed (subunits a, b, c, h, i, and m) and underexpressed (subunits e and f). One potential mechanism to explain this dysregulation comes from the finding that eIF3 binds a specific set of cell proliferation regulator mRNA transcripts and regulates their translation. eIF3 also mediates cellular signaling through S6K1 and mTOR/Raptor to effect translational regulation.
eIF4
The eIF4F complex is composed of three subunits: eIF4A, eIF4E, and eIF4G. Each subunit has multiple human isoforms and there exist additional eIF4 proteins: eIF4B and eIF4H.
eIF4G is a 175.5-kDa scaffolding protein that interacts with eIF3 and the poly(A)-binding protein (PABP), as well as the other members of the eIF4F complex. eIF4E recognizes and binds to the 5' cap structure of mRNA, while eIF4G binds PABP, which binds the poly(A) tail, potentially circularizing and activating the bound mRNA. eIF4A, a DEAD-box RNA helicase, is important for resolving mRNA secondary structures.
eIF4B contains two RNA-binding domains: one non-specifically interacts with mRNA, whereas the second specifically binds the 18S portion of the small ribosomal subunit. It acts as an anchor, as well as a critical co-factor, for eIF4A. It is also a substrate of S6K, and when phosphorylated, it promotes the formation of the pre-initiation complex. In vertebrates, eIF4H is an additional initiation factor with similar function to eIF4B.
eIF5, eIF5A and eIF5B
eIF5 is a GTPase-activating protein, which helps the large ribosomal subunit associate with the small subunit. It is required for GTP-hydrolysis by eIF2.
eIF5A is the eukaryotic homolog of EF-P. It helps with elongation and also plays a role in termination. eIF5A contains the unusual amino acid hypusine.
eIF5B is a GTPase, and is involved in assembly of the full ribosome. It is the functional eukaryotic analog of bacterial IF2.
eIF6
eIF6 performs the same inhibition of ribosome assembly as eIF3, but binds to the large subunit.
See also
Eukaryotic translation
Ded1/DDX3
DHX29
References
Further reading
External links
Helicases
Molecular biology
Protein biosynthesis
Gene expression | Eukaryotic initiation factor | [
"Chemistry",
"Biology"
] | 1,690 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
9,508,543 | https://en.wikipedia.org/wiki/Bacterial%20initiation%20factor | A bacterial initiation factor (IF) is a protein that stabilizes the initiation complex for polypeptide translation.
Translation initiation is essential to protein synthesis and regulates mRNA translation fidelity and efficiency in bacteria. The 30S ribosomal subunit, initiator tRNA, and mRNA form an initiation complex for elongation. This complex process requires three essential protein factors in bacteria – IF1, IF2, and IF3. These factors bind to the 30S subunit and promote correct initiation codon selection on the mRNA. IF1, the smallest factor at 8.2 kDa, blocks elongator tRNA binding at the A-site. IF2 is the major component that transports initiator tRNA to the P-site. IF3 checks P-site codon-anticodon pairing and rejects incorrect initiation complexes.
The orderly mechanism of initiation starts with IF3 attaching to the 30S subunit and changing its shape. IF1 joins next, followed by mRNA binding and placement of the start codon in the P-site. IF2 enters with the initiator tRNA and places it on the start codon. GTP hydrolysis by IF2 releases it and IF3, enabling 50S subunit joining. The coordinated binding and activities of IF1, IF2, and IF3 are essential for rapid and precise translation initiation in bacteria. They facilitate start codon selection and assemble an active, protein-synthesis-ready 70S ribosome.
IF1
Bacterial initiation factor 1 associates with the 30S ribosomal subunit in the A site and prevents an aminoacyl-tRNA from entering. It modulates IF2 binding to the ribosome by increasing its affinity. It may also prevent the 50S subunit from binding, stopping the formation of the 70S subunit. It also contains a β-domain fold common for nucleic acid-binding proteins. It is a homolog of eIF1A. Initiation factor IF-1 is the smallest translation factor at only 8.2 kDa. Beyond blocking the A-site, it affects the dynamics of ribosome association and dissociation. IF-1 enhances dissociation together with IF-3, likely by inducing conformational changes in the 30S subunit. It also increases the binding affinity of IF-2 to the 30S subunit, possibly by altering the subunit configuration. Though IF-1 occupies the A-site, it does so in a way that is distinct from tRNA binding. Structural studies show IF-1 inserts a loop into the minor groove of helix 44 of 16S rRNA, flipping out bases A1492 and A1493. This insertion repositions nucleotides of helix 44, transmitting a conformational change over a 70 Å distance and rotating the head of the 30S subunit. IF-1 mutants can exhibit cold-sensitive phenotypes, indicating a role for the factor in cold shock adaptation. Certain mutations also lead to altered expression of genes at low temperatures, suggesting IF-1 is involved in gene regulation. IF-1 thus actively modifies ribosome structure and dynamics during initiation, in addition to just blocking the A-site.
IF2
The IF2 initiation factor is a crucial component of protein synthesis. The largest of the three indispensable translation initiation factors is IF-2, which has a molecular mass of 97 kDa. The protein has many domains, including an N-terminal domain, a GTPase domain, a linker region, and the C1, C2, and C-terminal domains. The GTPase domain encompasses the G1-G5 motifs, which are responsible for the binding and hydrolysis of GTP. The activity of IF2 is regulated by conformational changes induced by the binding and hydrolysis of GTP. The primary function of IF-2 is to transport the initiator fMet-tRNA to the P-site of the 30S ribosomal subunit. The C2 domain of IF2 has a unique recognition and binding affinity towards the initiator tRNA. IF-2 has been observed to form a ternary complex upon interacting with GTP and fMet-tRNA, and this complex has been found to interact with the 30S subunit. The initiation of mRNA translation then involves the placement of the start codon in the P-site through base pairing between the codon and the tRNA anticodon. IF2 regulates start codon selection accuracy and inhibits the binding of elongator tRNAs by selectively binding to fMet-tRNA. Additionally, it relocates the initiator tRNA on the 30S subunit to enhance optimal contact with the P-site. Furthermore, IF2 exhibits RNA chaperone activity, which enables it to rectify misfolded RNA structures. In general, the IF2 protein plays a crucial role in coordinating many steps of translation initiation, including the binding of mRNA and fMet-tRNA at the start codon, the joining of subunits, and the activation of GTPase activity.
IF3
Initiation factor IF3 is a small protein of 21 kDa containing two compact α/β domains (IF3C and IF3N) connected by a flexible lysine-rich linker. Most IF3 functions are mediated by the IF3C domain, while IF3N regulates 30S subunit binding. Bacterial initiation factor 3 (infC) is not universally found in all bacterial species, but in E. coli it is required for the 30S subunit to bind to the initiation site in mRNA. IF3 is required by the small subunit to form initiation complexes, but has to be released to allow the 50S subunit to bind. IF3 attaches to the platform side of the 30S subunit, close to helices 23, 24, 25, 26 and 45 of 16S rRNA, as well as ribosomal proteins S7, S11, and S12. The IF3C domain interacts with the 30S subunit via its conserved basic residues R99, R116, R147 and R168. A major function of IF3 is inspecting codon-anticodon pairing at the P-site during start codon selection: it accelerates the dissociation of non-canonical initiation complexes containing mismatched or incorrect tRNAs. IF3 also inspects the initiator tRNA, rejecting elongator tRNAs, and it promotes the dissociation of the 70S ribosome into subunits, providing a pool of free 30S subunits for initiation. Another key role of IF3 is repositioning mRNA on the 30S subunit from a standby site to the P-site decoding site for start codon selection. IF3 works cooperatively with IF1 and IF2 during initiation, modulating IF2 binding and enhancing the fidelity of start codon selection.
References
External links
Protein biosynthesis
Gene expression | Bacterial initiation factor | [
"Chemistry",
"Biology"
] | 1,414 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
9,511,813 | https://en.wikipedia.org/wiki/Zero%20waste%20agriculture | Zero waste agriculture is a type of sustainable agriculture which optimizes use of the five natural kingdoms, i.e. plants, animals, bacteria, fungi and algae, to produce biodiverse-food, energy and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process.
History
The integration of shallow oxidation ponds of microalgae was demonstrated by Golueke & Oswald in the 1960s. The widespread global implementation of these systems can be largely credited to Prof George Lai Chan-Yu-Thim (2 March 1924 – 8 October 2016, Mauritius) from ZERI. Zero waste agriculture is now practiced in China (ecological farming), Colombia (integrated food and waste management systems), Fiji (integrated farming systems), India (integrated biogas farming), South Africa (BEAT Coop and the African Agroecological Biotechnology Initiative) and Mauritius. The Brazilian government has adopted the integrated farming system as a major social technology for uplifting marginalized and subsistence farmers, in coordination with TECPAR.
Zero waste agriculture combines mature ecological farming practices that deliver an integrated balance of job creation, poverty relief, food security, energy security, water conservation, climate change relief, and land security and stewardship.
Practice
Zero waste agriculture is optimally practiced on small, 1-5 ha, family-owned and family-managed farms, and it complements traditional farming and animal husbandry as practiced in most third world communities. Zero waste agriculture also preserves local indigenous systems and existing agrarian cultural values and practices.
Zero waste agriculture presents a balance of economic, social and ecological benefits as it:
optimizes food production in an ecologically sound manner
reduces water consumption through recycling and reduced evaporation
provides energy security through the harvesting of biomethane (biogas) and the extraction of biodiesel from micro-algae, as a by-product of food production
provides climate change relief through the substantial reduction in greenhouse gas emissions from both traditional agriculture practices and fossil fuel usage
reduces the use of pesticides through biodiverse farming
Certification of such farming practices presents both a challenge and an opportunity.
See also
Agricultural technology a/k/a Agritech
Integrated Multi-Trophic Aquaculture
Miniwaste
References
Further reading
Sustainable agriculture
Waste
Food waste | Zero waste agriculture | [
"Physics"
] | 459 | [
"Materials",
"Waste",
"Matter"
] |
9,512,445 | https://en.wikipedia.org/wiki/Manufacturing%20Engineering%20Centre | The Manufacturing Engineering Centre (MEC) is an international R&D Centre of Excellence for Advanced Manufacturing and Information Technology. The MEC was founded in 1996 under the directorship of Professor Duc Truong Pham. The Centre forms part of Cardiff University, which dates back to 1883 and is one of Britain's major civic universities.
The MEC's purpose is to conduct research and development in all major areas of Advanced Manufacturing and use the output to promote the introduction of new manufacturing technology and practice to industry. It was the first autonomous research centre created by Cardiff University.
Research
The MEC conducts basic, strategic and applied research as well as technology transfer with partners from 22 countries in Europe, Asia and the Americas. The research spans a broad spectrum of subjects, including robotics and microsystems, sensor systems, high-speed automation and intelligent control, rapid manufacturing, micromanufacturing, nanotechnology, quality engineering, multimedia, virtual reality and enterprise information management.
Since 1996, the Centre has received over £50 million in grants and contracts and has attracted hundreds of industrial partners. In 2004, the MEC won two EC 6th Framework Networks of Excellence contracts totalling 15M Euros in value. The two Networks of Excellence led by the MEC, I*PROMS and 4M, involve some 50 centres of excellence in the field of Advanced Manufacturing across the EU.
As a Centre of Excellence for Technology and Industrial Collaboration (CETIC) sponsored by the Welsh Assembly Government (WAG) and the European Regional Development Fund (ERDF), the MEC has contributed significantly to the Welsh economy, having completed thousands of projects with local companies and helped to generate and safeguard jobs in the region.
Awards
Under Professor Pham's leadership, the MEC was awarded the DTI University/Industry First Prize by the Secretary of State for Trade and Industry for its success in building research partnerships with industry (March 1999), and the Queen's Anniversary Prize for Higher and Further Education in recognition of its contribution made to the economy (February 2001).
References
Cardiff University
Engineering universities and colleges in the United Kingdom
Industrial engineering
Nanotechnology institutions | Manufacturing Engineering Centre | [
"Materials_science",
"Engineering"
] | 432 | [
"Nanotechnology",
"Nanotechnology institutions",
"Industrial engineering"
] |
9,516,170 | https://en.wikipedia.org/wiki/Polyanhydride | Polyanhydrides are a class of biodegradable polymers characterized by anhydride bonds that connect repeat units of the polymer backbone chain. Their main application is in the medical device and pharmaceutical industry. In vivo, polyanhydrides degrade into non-toxic diacid monomers that can be metabolized and eliminated from the body. Owing to their safe degradation products, polyanhydrides are considered to be biocompatible.
Applications
The characteristic anhydride bonds in polyanhydrides are water-labile (the polymer chain breaks apart at the anhydride bond). This results in two carboxylic acid groups which are easily metabolized and biocompatible.
Biodegradable polymers, such as polyanhydrides, are capable of releasing physically entrapped or encapsulated drugs with well-defined kinetics, and are a growing area of medical research. Polyanhydrides have been investigated as an important material for the short-term release of drugs or bioactive agents. The rapid degradation and limited mechanical properties of polyanhydrides render them ideal as controlled drug delivery devices.
One example, Gliadel, is a device in clinical use for the treatment of brain cancer. This product is made of a polyanhydride wafer containing a chemotherapeutic agent. After removal of a cancerous brain tumor, the wafer is inserted into the brain, releasing the chemotherapy agent at a controlled rate proportional to the degradation rate of the polymer. Localized delivery of the chemotherapy protects the rest of the body, including the immune system, from exposure to high drug levels.
Other applications of polyanhydrides include the use of unsaturated polyanhydrides in bone replacement, as well as polyanhydride copolymers as vehicles for vaccine delivery.
Classes
There are three main classes of polyanhydrides: aliphatic, unsaturated, and aromatic. These classes are determined by examining their R groups (the chemistry of the molecule between the anhydride bonds).
Aliphatic polyanhydrides consist of R groups containing carbon atoms bonded in straight or branched chains. This class of polymers is characterized by a crystalline structure, melting temperature range of 50–90 °C, and solubility in chlorinated hydrocarbons. They degrade and are eliminated from the body within weeks of being introduced to the bodily environment.
Unsaturated polyanhydrides consist of organic R groups with one or more double bonds (or degrees of unsaturation). This class of polymers has a highly crystalline structure and is insoluble in common organic solvents.
Aromatic polyanhydrides consist of R groups containing a benzene (aromatic) ring. Properties of this class include a crystalline structure, insolubility in common organic solvents, and melting points greater than 100 °C. They are very hydrophobic and therefore degrade slowly when in the bodily environment. This slow degradation rate makes aromatic polyanhydrides less suitable for drug delivery when used as homopolymers, but they can be copolymerized with the aliphatic class to achieve the desired degradation rate.
Synthesis and characterization
Polyanhydrides are synthesized using either melt condensation or solution polymerization. Depending on the synthesis method used, various characteristics of polyanhydrides can be altered to achieve the desired product. Characterization of polyanhydrides determines the structure, composition, molecular weight, and thermal properties of the molecule. These properties are determined by using various light-scattering and size-exclusion methods.
Polymerization
Polyanhydrides can be easily prepared by using available, low cost resources. The process can be varied to achieve desirable characteristics. Traditionally, polyanhydrides have been prepared by melt condensation polymerization, which results in high molecular weight polymers. Melt condensation polymerization involves reacting dicarboxylic acid monomers with excess acetic anhydride at a high temperature and under a vacuum to form the polymers. Catalysts may be used to achieve higher molecular weights and shorter reaction times. Generally, a one-step synthesis (method involving only one reaction) is used which does not require purification.
There are many other methods used to synthesize polyanhydrides. Some of the other methods include: microwave heating, high-throughput synthesis (synthesis of polymers in parallel), ring opening polymerization (removal of cyclic monomers), interfacial condensation (high temperature reaction of two monomers), dehydrative coupling agents (removing the water group from two carboxyl groups), and solution polymerization (reacting in a solution).
Chemical structure and composition analysis
The chemical structure and composition of polyanhydrides can be determined using nuclear magnetic resonance (NMR) spectroscopy. The positions of peaks in proton NMR spectroscopy are determined by the class of polyanhydride (aromatic, aliphatic, or unsaturated), and so provide information regarding structural features of the polymer, including whether a copolymer has a random or block-like structure. Molecular weight and degradation rate can also be determined spectroscopically.
Molecular weight analysis
Aside from using NMR to determine a polyanhydride’s molecular weight, gel permeation chromatography (GPC), and viscosity measurements may also be used.
Thermal properties
Differential scanning calorimetry (DSC) is used to determine the thermal properties of polyanhydrides. Glass transition temperature, melting temperature, and heat of fusion can all be determined by DSC. Crystallinity of a polyanhydride can be determined using DSC, small-angle X-ray scattering (SAXS), nuclear magnetic resonance (NMR), and X-ray diffraction.
Degradation
The erosion and degradation of a polymer describe how the polymer physically loses mass (degrades). The two common erosion mechanisms are surface and bulk erosion. Polyanhydrides are surface eroding polymers. Surface eroding polymers do not allow water to penetrate into the material. They erode layer by layer, like a lollipop. The hydrophobic backbone with hydrolytically labile anhydride linkages allows hydrolytic degradation to be controlled by manipulating the polymer composition. This manipulation can occur by adding a hydrophilic group to the polyanhydride to make a copolymer. Polyanhydride copolymers with hydrophilic groups exhibit bulk eroding characteristics. Bulk eroding polymers take in water like a sponge (throughout the material) and erode inside and on the surface of the polymer.
Drug release from bulk eroding polymers is difficult to characterize because the primary mode of release from these polymers is diffusion. Unlike surface eroding polymers, bulk eroding polymers show a very weak relationship between the rate of polymer degradation and the rate of drug release. Therefore, the development of surface eroding polyanhydrides incorporated into the bulk eroding polymers is of increased importance.
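To make the contrast concrete, the sketch below generates idealized mass-loss curves for the two mechanisms. This is an illustration only: the rate constants are hypothetical, chosen to contrast the shapes (linear, zero-order loss for surface erosion versus exponential, first-order loss for bulk erosion), and are not fitted to any real polymer.

```python
import numpy as np

# Hypothetical rate constants, chosen only to contrast the two shapes.
t = np.linspace(0.0, 30.0, 301)                  # time in days
m_surface = np.clip(1.0 - t / 20.0, 0.0, None)   # zero-order (surface) loss
m_bulk = np.exp(-0.15 * t)                       # first-order (bulk) loss

# For a surface-eroding polyanhydride slab, drug release tracks the linear
# mass-loss curve, which is why these devices can give near-constant release.
print(m_surface[[0, 100, 200, 300]])
print(m_bulk[[0, 100, 200, 300]])
```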
Biocompatibility
Biocompatibility and toxicity of a polymeric material is evaluated by examining systemic toxic responses, local tissue responses, carcinogenic and mutagenic responses, and allergic responses to the material's degradation products. Animal studies are conducted to test the polymer’s effect on each of these negative responses. Polyanhydrides and their degradation products have not been found to cause significant harmful responses and are considered to be biocompatible.
References
Domb, A., Amselem, S., Langer, R., and Maniar, M. "Chapter 3: Polyanhydrides as Carriers of Drugs." Biomedical Polymers: Designed-to-Degrade Systems. Hanser Publishers: Munich, Vienna, NY, 1994.
Kumar, N., Langer, R., and Domb, A. “Polyanhydrides: an overview.” Advanced Drug Delivery Reviews, 2002.
“Polyanhydride Synthesis Techniques.” Wyatt Technology Corp.
Tamada, J. and Langer, R. “The development of polyanhydrides for drug delivery applications.” Journal of Biomaterials Science, Polymer Ed. Vol. 3, No. 4, pp. 315–353, 1992.
Torres, M. P.; Determan, A. S.; Mallapragada, S. K.; Narasimhan, B. "Polyanhydrides." Encyclopedia of Chemical Processing. 2006.
B.M. Vogel, S.K. Mallapragada, and B. Narasimhan, “Rapid Synthesis of Polyanhydrides By Microwave Polymerization”, Macromolecular Rapid Communications 25, 330-333, 2004.
B.M. Vogel, S.K. Mallapragada, “Synthesis of Novel Biodegradable Polyanhydrides Containing Aromatic and Glycol Functionality for Tailoring of Hydrophilicity in Controlled Drug Delivery Devices”, Biomaterials, 26, 721-728, 2004.
B.M. Vogel, Naomi Eidelman, S.K. Mallapragada and B. Narasimhan, “Parallel Synthesis and Dissolution Testing of Polyanhydride Random Copolymers”, Journal of Combinatorial Chemistry, 7, 921-928, 2005.
B.M. Vogel and S.K. Mallapragada, “The Synthesis of Polyanhydrides”, in Handbook of Biodegradable Materials and their Applications, edited by S.K. Mallapragada and Balaji Narasimhan, ASP Publishers, Vol. 1, 1-19, 2005.
P.Guruprasad Reddy and A.J.Domb, “Polyanhydride Chemistry”. Biomacromolecules, 2022, 23(12), 4959-4984. doi: 10.1021/acs.biomac.2c01180.
Biomaterials
Biological engineering
Polymers | Polyanhydride | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 2,090 | [
"Biomaterials",
"Biological engineering",
"Materials",
"Polymer chemistry",
"Polymers",
"Matter",
"Medical technology"
] |
1,026,848 | https://en.wikipedia.org/wiki/Weyl%20tensor | In differential geometry, the Weyl curvature tensor, named after Hermann Weyl, is a measure of the curvature of spacetime or, more generally, a pseudo-Riemannian manifold. Like the Riemann curvature tensor, the Weyl tensor expresses the tidal force that a body feels when moving along a geodesic. The Weyl tensor differs from the Riemann curvature tensor in that it does not convey information on how the volume of the body changes, but rather only how the shape of the body is distorted by the tidal force. The Ricci curvature, or trace component of the Riemann tensor contains precisely the information about how volumes change in the presence of tidal forces, so the Weyl tensor is the traceless component of the Riemann tensor. This tensor has the same symmetries as the Riemann tensor, but satisfies the extra condition that it is trace-free: metric contraction on any pair of indices yields zero. It is obtained from the Riemann tensor by subtracting a tensor that is a linear expression in the Ricci tensor.
In general relativity, the Weyl curvature is the only part of the curvature that exists in free space—a solution of the vacuum Einstein equation—and it governs the propagation of gravitational waves through regions of space devoid of matter. More generally, the Weyl curvature is the only component of curvature for Ricci-flat manifolds and always governs the characteristics of the field equations of an Einstein manifold.
In dimensions 2 and 3 the Weyl curvature tensor vanishes identically. In dimensions ≥ 4, the Weyl curvature is generally nonzero. If the Weyl tensor vanishes in dimension ≥ 4, then the metric is locally conformally flat: there exists a local coordinate system in which the metric tensor is proportional to a constant tensor. This fact was a key component of Nordström's theory of gravitation, which was a precursor of general relativity.
Definition
The Weyl tensor can be obtained from the full curvature tensor by subtracting out various traces. This is most easily done by writing the Riemann tensor as a (0,4) valence tensor (by contracting with the metric). The (0,4) valence Weyl tensor is then
$$C = R - \frac{1}{n-2}\left(\mathrm{Ric} - \frac{s}{n}\,g\right)\odot g - \frac{s}{2n(n-1)}\,g\odot g$$
where n is the dimension of the manifold, g is the metric, R is the Riemann tensor, Ric is the Ricci tensor, s is the scalar curvature, and $h \odot k$ denotes the Kulkarni–Nomizu product of two symmetric (0,2) tensors:
$$(h\odot k)(v_1,v_2,v_3,v_4) = h(v_1,v_3)\,k(v_2,v_4) + h(v_2,v_4)\,k(v_1,v_3) - h(v_1,v_4)\,k(v_2,v_3) - h(v_2,v_3)\,k(v_1,v_4).$$
In tensor component notation, this can be written as
$$C_{abcd} = R_{abcd} - \frac{2}{n-2}\left(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\right) + \frac{2}{(n-1)(n-2)}\,s\,g_{a[c}g_{d]b}.$$
The ordinary (1,3) valent Weyl tensor is then given by contracting the above with the inverse of the metric.
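The decomposition above is easy to sanity-check numerically. The following sketch is an illustration added here (not drawn from the article's sources); it assumes NumPy, implements the Kulkarni–Nomizu product and the trace subtraction on components, and verifies on a constant-curvature model, where $R = \tfrac{\kappa}{2}\,g\odot g$, that the Weyl part vanishes and is trace-free.

```python
import numpy as np

def kn(h, k):
    """Kulkarni-Nomizu product (h o k)_{abcd} of two symmetric (0,2) tensors."""
    return (np.einsum('ac,bd->abcd', h, k) + np.einsum('bd,ac->abcd', h, k)
            - np.einsum('ad,bc->abcd', h, k) - np.einsum('bc,ad->abcd', h, k))

def weyl(R, Ric, s, g):
    """(0,4) Weyl components via the trace subtraction above (requires n > 2)."""
    n = g.shape[0]
    return R - kn(Ric - (s / n) * g, g) / (n - 2) - s / (2 * n * (n - 1)) * kn(g, g)

# Constant-curvature model in dimension 4: R = (kappa/2) g o g.
n, kappa = 4, 1.0
g = np.eye(n)
g_inv = np.linalg.inv(g)
R = (kappa / 2) * kn(g, g)
Ric = np.einsum('ac,abcd->bd', g_inv, R)   # Ric_{bd} = g^{ac} R_{abcd}
s = np.einsum('bd,bd->', g_inv, Ric)       # scalar curvature
C = weyl(R, Ric, s, g)
assert np.allclose(C, 0)                                   # conformally flat
assert np.allclose(np.einsum('ac,abcd->bd', g_inv, C), 0)  # trace-free
```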
The decomposition above expresses the Riemann tensor as an orthogonal direct sum, in the sense that
$$|R|^2 = |C|^2 + \left|\frac{1}{n-2}\left(\mathrm{Ric} - \frac{s}{n}\,g\right)\odot g\right|^2 + \left|\frac{s}{2n(n-1)}\,g\odot g\right|^2.$$
This decomposition, known as the Ricci decomposition, expresses the Riemann curvature tensor into its irreducible components under the action of the orthogonal group. In dimension 4, the Weyl tensor further decomposes into invariant factors for the action of the special orthogonal group, the self-dual and anti-self-dual parts C+ and C−.
The Weyl tensor can also be expressed using the Schouten tensor, which is a trace-adjusted multiple of the Ricci tensor,
$$S = \frac{1}{n-2}\left(\mathrm{Ric} - \frac{s}{2(n-1)}\,g\right).$$
Then
$$C = R - S \odot g.$$
In indices,
$$C_{abcd} = R_{abcd} - \frac{2}{n-2}\left(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\right) + \frac{2}{(n-1)(n-2)}\,s\,g_{a[c}g_{d]b}$$
where $R_{abcd}$ is the Riemann tensor, $R_{ab}$ is the Ricci tensor, $s$ is the Ricci scalar (the scalar curvature) and brackets around indices refer to the antisymmetrized part. Equivalently,
$$C_{abcd} = R_{abcd} - 2\left(g_{a[c}S_{d]b} - g_{b[c}S_{d]a}\right),$$
where S denotes the Schouten tensor.
Properties
Conformal rescaling
The Weyl tensor has the special property that it is invariant under conformal changes to the metric. That is, if $\tilde g = f\,g$ for some positive scalar function $f$, then the (1,3)-valent Weyl tensor satisfies $\tilde W = W$. For this reason the Weyl tensor is also called the conformal tensor. It follows that a necessary condition for a Riemannian manifold to be conformally flat is that the Weyl tensor vanish. In dimensions ≥ 4 this condition is sufficient as well. In dimension 3 the vanishing of the Cotton tensor is a necessary and sufficient condition for the Riemannian manifold being conformally flat. Any 2-dimensional (smooth) Riemannian manifold is conformally flat, a consequence of the existence of isothermal coordinates.
Indeed, the existence of a conformally flat scale amounts to solving the overdetermined partial differential equation
In dimension ≥ 4, the vanishing of the Weyl tensor is the only integrability condition for this equation; in dimension 3, it is the Cotton tensor instead.
Symmetries
The Weyl tensor has the same symmetries as the Riemann tensor. This includes:
$$C_{abcd} = -C_{bacd} = -C_{abdc}, \qquad C_{abcd} = C_{cdab}, \qquad C_{a[bcd]} = 0.$$
In addition, of course, the Weyl tensor is trace free:
$$\operatorname{tr}\,C(u,\cdot,v,\cdot) = 0$$
for all u, v. In indices these four conditions are
$$C_{abcd} = -C_{bacd} = -C_{abdc}, \qquad C_{abcd} = C_{cdab}, \qquad C_{a[bcd]} = 0, \qquad g^{ac}\,C_{abcd} = 0.$$
Bianchi identity
Taking traces of the usual second Bianchi identity of the Riemann tensor eventually shows that
$$\nabla_a\,C^{a}{}_{bcd} = 2(n-3)\,\nabla_{[c}S_{d]b},$$
where S is the Schouten tensor. The valence (0,3) tensor on the right-hand side is the Cotton tensor, apart from the initial factor.
See also
Curvature of Riemannian manifolds
Christoffel symbols provides a coordinate expression for the Weyl tensor.
Lanczos tensor
Peeling theorem
Petrov classification
Plebanski tensor
Weyl curvature hypothesis
Weyl scalar
Notes
References
Curvature tensors
Riemannian geometry
Tensors in general relativity | Weyl tensor | [
"Physics",
"Engineering"
] | 1,101 | [
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Curvature tensors",
"Tensors in general relativity"
] |
1,027,403 | https://en.wikipedia.org/wiki/G-code | G-code (also RS-274) is the most widely used computer numerical control (CNC) and 3D printing programming language. It is used mainly in computer-aided manufacturing to control automated machine tools, as well as for 3D-printer slicer applications. The G stands for geometry. G-code has many variants.
G-code instructions are provided to a machine controller (industrial computer) that tells the motors where to move, how fast to move, and what path to follow. The two most common situations are that, within a machine tool such as a lathe or mill, a cutting tool is moved according to these instructions through a toolpath, cutting away material to leave only the finished workpiece; or that an unfinished workpiece is precisely positioned in any of up to nine axes around the three dimensions relative to a toolpath; either or both can move relative to each other. The same concept also extends to noncutting tools such as forming or burnishing tools, photoplotting, additive methods such as 3D printing, and measuring instruments.
History
The first implementation of a numerical control programming language was developed at the MIT Servomechanisms Laboratory in the 1950s. In the decades that followed, many implementations were developed by numerous organizations, both commercial and noncommercial. Elements of G-code had often been used in these implementations. The first standardized version of G-code used in the United States, RS-274, was published in 1963 by the Electronic Industries Alliance (EIA; then known as Electronic Industries Association). In 1974, EIA approved RS-274-C, which merged RS-273 (variable block for positioning and straight cut) and RS-274-B (variable block for contouring and contouring/positioning). A final revision of RS-274 was approved in 1979, as RS-274-D. In other countries, the standard ISO 6983 (finalized in 1982) is often used, but many European countries use other standards. For example, DIN 66025 is used in Germany, and PN-73M-55256 and PN-93/M-55251 were formerly used in Poland.
During the 1970s through 1990s, many CNC machine tool builders attempted to overcome compatibility difficulties by standardizing on machine tool controllers built by Fanuc. Siemens was another market dominator in CNC controls, especially in Europe. In the 2010s, controller differences and incompatibility were mitigated with the widespread adoption of CAD/CAM applications that were capable of outputting machine operations in the appropriate G-code for a specific machine through a software tool called a post-processor (sometimes shortened to just a "post").
Syntax
G-code began as a limited language that lacked constructs such as loops, conditional operators, and programmer-declared variables with natural-word-including names (or the expressions in which to use them). It was unable to encode logic but was just a way to "connect the dots" where the programmer figured out many of the dots' locations longhand. The latest implementations of G-code include macro language capabilities somewhat closer to a high-level programming language. Additionally, all primary manufacturers (e.g., Fanuc, Siemens, Heidenhain) provide access to programmable logic controller (PLC) data, such as axis positioning data and tool data, via variables used by NC programs. These constructs make it easier to develop automation applications.
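As a minimal illustration of that "connect the dots" character, the Python sketch below emits a toolpath tracing a 20 mm square using a handful of widely supported core commands (G21, G90, G0, G1, M2). The generator function is hypothetical, and exact dialects vary by controller.

```python
# Illustrative generator: emits a toolpath tracing a square with core commands.

def square_path(side=20.0, feed=300.0):
    lines = [
        "G21",       # units in millimetres
        "G90",       # absolute positioning
        "G0 X0 Y0",  # rapid move to the origin
    ]
    for x, y in [(side, 0.0), (side, side), (0.0, side), (0.0, 0.0)]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}")  # linear feed move
    lines.append("M2")  # program end
    return "\n".join(lines)

print(square_path())
```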
Extensions and variations
Extensions and variations have been added independently by control manufacturers and machine tool manufacturers, and operators of a specific controller must be aware of the differences between each manufacturer's product.
One standardized version of G-code, known as BCL (Binary Cutter Language), is used only on very few machines. Developed at MIT, BCL was designed to control CNC machines in terms of straight lines and arcs.
Some CNC machines use "conversational" programming, which is a wizard-like programming mode that either hides G-code or completely bypasses the use of G-code. Some popular examples are Okuma's Advanced One Touch (AOT), Southwestern Industries' ProtoTRAK, Mazak's Mazatrol, Hurco's Ultimax and Winmax, Haas' Intuitive Programming System (IPS), and Mori Seiki's CAPS conversational software.
See also
Canned cycle
Direct Numerical Control
LinuxCNC
List of computer-aided manufacturing software
References
Bibliography
External links
CNC G-Code and M-Code Programming
http://museum.mit.edu/150/86 – has several links (including history of MIT Servo Lab)
Complete list of G-code used by most 3D printers at reprap.org
Fanuc and Haas G-code Reference
Fanuc and Haas G-code Tutorial
Haas Milling Manual
G Code For Lathe & Milling
M Code for Lathe & Milling
Computer-aided engineering
Domain-specific programming languages
Encodings
Metalworking | G-code | [
"Engineering"
] | 1,013 | [
"Construction",
"Industrial engineering",
"Computer-aided engineering"
] |
1,027,466 | https://en.wikipedia.org/wiki/Xprize%20Foundation | XPRIZE Foundation is a non-profit organization that designs and hosts public competitions intended to encourage technological development. The XPRIZE mission is to bring about "radical breakthroughs for the benefit of humanity" through incentivized competition. It aims to motivate individuals, companies, and organizations to develop ideas and technologies.
The Ansari X Prize relating to spacecraft development was awarded in 2004, intended to inspire research and development into technology for space exploration.
Background
The first XPRIZE, the Ansari XPRIZE, was inspired by the Orteig Prize, a $25,000 prize offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis. In total, nine teams spent $400,000 in pursuit of the Orteig Prize.
In 1996, entrepreneur Peter Diamandis offered a $10-million prize to the first privately financed team that could build and fly a three-passenger vehicle 100 kilometers into space twice within two weeks. The contest, later titled the Ansari XPRIZE for Suborbital Spaceflight, motivated 26 teams from seven nations to invest more than $100 million in pursuit of the $10 million purse. On October 4, 2004, the Ansari XPRIZE was won by Mojave Aerospace Ventures, who successfully completed the contest in their spacecraft SpaceShipOne. The prize was awarded in a ceremony at the Saint Louis Science Center in St. Louis, Missouri.
The foundation has also created the XPRIZE Cup rocket challenge competition.
XPRIZE unifying principles
XPRIZES are monetary rewards to incentivize three primary goals:
Attract investments from outside the sector that take new approaches to difficult problems.
Create significant results that are real and meaningful. Competitions have measurable goals, and are created to promote adoption of innovation.
Cross national and disciplinary boundaries to encourage teams around the world to invest the intellectual and financial capital required to solve difficult challenges.
Other organizations such as the Nobel Prize committee award prizes and financial rewards to individuals or organizations that produce novel advances in science, medicine and technology. One difference between the XPRIZE foundation and other similar organizations is the awarding of prizes based on the first to achieve objective 'finish line' requirements rather than a selection committee discussing the relative merits of different endeavors. For instance, the Archon Genomics XPRIZE target was to sequence 100 human genomes in 10 days or less, with less than one error per 100,000 DNA base pairs, covering 98% of the genome and costing less than $10,000 per genome (this prize was canceled because it was outpaced by innovation).
The prize can increase attention to endeavors that otherwise might not receive much publicity. XPRIZE is currently developing new prizes in Exploration (Space and Oceans), Life Sciences, Energy & Environment, Education and Global Development. The prizes will aim to help improve lives, create equity of opportunity and stimulate new, important discoveries.
Prizes and events overseen
Past contests
1996–2004 Ansari XPRIZE for Suborbital Spaceflight
The Ansari XPRIZE for Suborbital Spaceflight was the first prize from the foundation. It successfully challenged teams to build private spaceships capable of carrying three people and flying twice within two weeks to open the space frontier. The first part of the Ansari XPRIZE requirements was fulfilled by Mike Melvill on September 29, 2004, on SpaceShipOne, a spacecraft designed by Burt Rutan and financed by Paul Allen, co-founder of Microsoft. On that ship, Melvill broke the 100-kilometer (62.5 mi) mark, internationally recognized as the boundary of outer space. Brian Binnie completed the second part of the requirements on October 4, 2004, winning the prize. As a result, US$10 million was awarded to the winner, but more than $100 million was invested in new technologies in pursuit of the prize.
Awarding this first prize gave XPRIZE as much publicity as the winners themselves. After the 2004 success there was ample media coverage to afford both Scaled Composites and XPRIZE additional support for them to expand and continue to pursue their aims. Following this early success several other XPRIZES were announced.
The Ansari XPRIZE won the Space Foundation's Douglas S. Morrow Public Outreach Award in 2005. The award is given annually to an individual or organization that has made significant contributions to public awareness of space programs.
2007–2010 Progressive Insurance Automotive XPRIZE
The goal of the Progressive Insurance Automotive XPRIZE was to design, build and race super-efficient vehicles that achieve 100 MPGe (2.35 liter/100 kilometer) efficiency, produce less than 200 grams/mile well-to-wheel CO2 equivalent emissions, and could be manufactured for the mass market.
The winners of the competition were announced on September 16, 2010.
Team Edison2 won the $5 million Mainstream competition with its four-passenger Very Light Car, obtaining 102.5 MPGe running on E85 fuel.
Team Li-Ion Motors won the $2.5 million Alternative Side-by-Side competition with their aerodynamic Wave-II electric vehicle achieving 187 MPGe.
Team X-Tracer Switzerland won the $2.5 million Alternative Tandem competition with their 205.3 MPGe faired electric motorcycle.
2010–2011 Wendy Schmidt Oil Cleanup XCHALLENGE
The Wendy Schmidt Oil Cleanup XCHALLENGE was introduced on July 29, 2010. The $1 million prize had a goal to inspire a new generation of innovative solutions that will speed the pace of cleaning up seawater surface oil resulting from spillage from ocean platforms, tankers, and other sources. The team of Elastec/American Marine won the challenge by developing a device that skims oil off water three times faster than previously existing technology.
2006–2009 Northrop Grumman Lunar Lander XCHALLENGE
The Northrop Grumman Lunar Lander XCHALLENGE (NGLLXPC) was a competition (co-hosted by NASA) to build precise, efficient small rocket systems. It was introduced in 2006 and the US$1 million top prize was awarded on November 5, 2009 to Masten Space Systems, led by David Masten; while Armadillo Aerospace, led by id Software founder John Carmack took home the second place prize of US$500,000, plus an additional $500,000 in 2008.
2012–2014 The Nokia Sensing XCHALLENGE
The Nokia Sensing XCHALLENGE goal is accelerating the use of sensors and sensing technology to tackle health care problems and find ways for people to monitor and maintain their personal well-being. It was composed of two distinct Challenges held in 2013 and 2014. It was announced in 2012 and 12 finalists announced in 2013. On November 11, 2014, the winner was named to be team DMI, led by Eugene Y. Chan, MD, whose entry was the rHEALTH technology which used lasers and nanostrips to perform vast multiplexing on samples. In this competition, prize purses totaling $2.25 million were awarded.
2013–2015 The Wendy Schmidt Ocean Health XPRIZE
The Wendy Schmidt Ocean Health XPRIZE is a $2 million competition to improve our understanding of ocean acidification. On July 20, 2015, the winners of the challenge were announced.
2011–2017 Qualcomm Tricorder XPRIZE
The Qualcomm Tricorder XPRIZE was announced on May 10, 2011, and is sponsored by Qualcomm Foundation. It was officially launched on January 10, 2012. The $10 million prize is awarded for creating a mobile device that can "diagnose patients better than or equal to a panel of board certified physicians". The name is taken from the tricorder device in Star Trek which can be used to instantly diagnose ailments. No team met all the requirements needed to win the full prize purse. Reduced prizes were made to the strongest performers (US$2.6 million for Final Frontier Medical Devices, US$1 million for Dynamical Biomarkers, and $100,000 for Cloud DX, named "Bold Epic Innovator"). For the first time at any XPRIZE, the leftover funds from the main prize purse were diverted for consumer testing for commercialization ($3.8 million) and for adapting tricorders for use in hospitals in developing countries ($1.6 million).
2016–2018 Anu & Naveen Jain Women's Safety XPRIZE
The Anu & Naveen Jain Women's Safety XPRIZE was launched on October 24, 2016, and has a $1 million purse. The goal for competing teams is to develop a safety device for women that can autonomously and inconspicuously trigger an emergency alert while transmitting information to a network of community responders. On June 7, 2018, Leaf Wearables received the grand prize winner of the $1M.
2016–2018 Water Abundance XPRIZE
On October 20, 2018, the XPRIZE Foundation awarded The Water Abundance XPRIZE, which launched on October 24, 2016, with a purse of $1.75 million provided by the Tata Group and Australian Aid, to the Skysource/Skywater Alliance based in Venice, California, who received a grand prize of $1.5 million. An additional award of $150,000 went to the second place team, JMCC WING, based in South Point, Hawaii, to acknowledge the team's ingenuity in developing a unique technological approach. Over a 24-hour period, the Skysource/Skywater Alliance successfully extracted over 2,000 liters of water using only renewable energy, at a cost of US$0.02 per liter. The team, led by architect David Hertz, intends to use the award to productize the system to address water scarcity in the developing world.
2014–2019 The Global Learning XPRIZE
The Global Learning XPRIZE, launched in September 2014, is a $15-million prize to create mobile apps to improve reading, writing, and arithmetic in developing nations. Each application will be developed during an 18-month period and the top five teams will receive $1 million each, with each of the winning apps being made available under an open-source license. The finalist of the group, that then develops an app producing the highest performance gains, will win an additional $10 million top prize. On May 15, 2019, the grand prize winners were announced; there was a tie between Kitkit School from South Korea and the United States, and one billion from Kenya and the United Kingdom.
2015–2019 Shell Ocean Discovery XPRIZE
On December 14, 2015, XPRIZE Founder Peter Diamandis announced the launch of a new $7 million prize that will be a three-year global competition that challenges researchers to build better technologies for mapping Earth's seafloor. On May 31, 2019, the grand prize winner, receiving a total of $4M, was GEBCO-NF Alumni, an international team based in the United States, while KUROSHIO, from Japan, claimed $1M as the runner-up. GEBCO-NF Alumni used the unmanned boat Maxlimer to autonomously map the seafloor.
2015–2019 Adult Literacy XPRIZE
The challenge set was to find or create solutions for improving the literacy proficiency of adults in reading within a 12-month period. The challenge was announced on June 8, 2015, and awarded $7 million by Barbara Bush Foundation for Family Literacy and the Dollar General Literacy Foundation. The winners, Learning Upgrade and People ForWords were announced February 7, 2019.
2020 Next-Gen Mask Challenge
The $1 million Next-Gen Mask Prize is open to only 16–24 year olds and was sponsored by Marc Benioff and Jim Cramer, the host of Mad Money on CNBC. On December 23, 2020, The Luminosity Lab was named the winning team with their anti-fog mask design, taking home $500,000.
2020–2021 Pandemic Response XPRIZE
This was a four-month challenge focused on the development of AI-driven systems to predict COVID-19 infection rates and to prescribe intervention plans. The $500,000 award was funded by Cognizant. The winners, VALENCIA IA4COVID19 from Spain and JSI vs COVID from Slovenia were announced on March 9, 2021.
2018–2022 ANA Avatar XPRIZE
The $10M ANA Avatar XPRIZE aimed to create avatar systems that can transport human presence to remote locations in real time. The participants of this competition developed robotic systems that allow operators to see, hear, and interact with a remote environment in a way that feels as if they are truly there. On the other hand, people in the remote environment were given the impression that the operator was present inside the avatar robot. At the competition finals, held in November 2022 in Long Beach, CA, USA, the avatar systems were evaluated on their support for remotely interacting with humans, exploring new environments, and employing specialized skills.
The winners of the competition were:
Team NimbRo, University of Bonn, Germany won the grand prize of $5,000,000
Pollen Robotics, France won $2,000,000
Team Northeastern, Northeastern University, USA won $1,000,000
Canceled contests
2006–2013 Archon Genomics XPRIZE
The Archon Genomics XPRIZE, the second XPRIZE to be offered by the foundation, was announced on October 4, 2006. The goal of the Archon Genomics XPRIZE was to greatly reduce the cost and increase the speed of human genome sequencing to create a new era of personalized, predictive, and preventive medicine, eventually transforming medical care from reactive to proactive. The $10 million prize purse was promised to the first team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $1,000 per genome.
If more than one team attempted the competition at the same time, and more than one team fulfilled all the criteria, then teams would have been ranked according to the time of completion. No more than three teams would have been ranked and would have shared the purse in the following manner: $7.5 million to the winner and $2.5 million to the second place team if two teams were successful, or $7 million, $2 million and $1 million if three teams are successful.
Actual competition events were originally scheduled to occur twice a year, with all eligible teams given the opportunity to attempt, starting at precisely the same time as the other teams. This was changed to a single competition scheduled for September 5, 2013, to October 1, 2013, which was canceled on August 22, 2013. The CEO articulated the rationale for the change, "companies can do this for less than $5,000 per genome, in a few days or less – and are moving quickly towards the goals we set for the prize. For this reason, we have decided to cancel an XPRIZE for the first time ever." A public debate concerning the validity and potential implications of the cancellation was published March 27, 2014.
2007–2018 Google Lunar XPRIZE
The Google Lunar XPRIZE was introduced on September 13, 2007. The goal of the prize was similar to that of the Ansari XPRIZE, to inspire a new generation of private investment in space exploration and technology. The challenge called for teams to compete in successfully launching, landing, and operating a rover on the lunar surface. The prize would award $20 million to the first team to land a rover on the Moon that successfully roved more than 500 meters and transmitted back high-definition images and video. There was a $5 million second prize, as well as $5 million in potential bonus prizes for extra features such as roving long distances (greater than 5,000 meters), capturing images of man-made objects on the Moon, or surviving a lunar night.
On January 23, 2018, the prize ended when no team could schedule, confirm, and pay for a launch attempt. The XPRIZE Foundation announced that "no team would be able to make a launch attempt to reach the Moon by the March 31, 2018 deadline... and the US $30 million Google Lunar XPRIZE will go unclaimed."
2019-2024 Rainforest XPRIZE
On November 19, 2019, the $10 million Rainforest XPrize was announced. Registration opened in February 2020 and the first round began in September 2020. On November 18, 2024, the winning teams were announced. Limelight Rainforest won the $5 million first place award, Map of Life Rapid Assessments won the $2 million second place award, the Brazilian Team won the $1 million third place award, and a special award of $250,000 for integration of technology and outreach was awarded to ETH BiodivX.
Active contests
2014 IBM Watson A.I. XPRIZE
The A.I. XPRIZE was announced as having the aim of using an artificial intelligence system to deliver a compelling TED talk. Diamandis hopes to contrast the benevolent value of AI against the dystopian point of view that sometimes enters AI conversations. The winning team of the contest, which was scheduled for 2020, will be determined by the audience.
2015 NRG Cosia Carbon XPRIZE
On September 29, 2015, Peter Diamandis, chairman and CEO of XPRIZE, announced the launch of a $20 million prize for a 4.5-year competition testing technologies that convert CO2 into products with the highest net value, to reduce the carbon dioxide emissions of either coal or natural gas power plants. Round three began in April 2018 as the 27 semifinalists were cut down to ten finalists, each receiving an equal share of $5 million in milestone prize money. Five teams were to compete at a coal-fired power plant in Gillette, Wyoming, and the remaining five at a natural gas-fired power plant in Alberta, Canada. This operational round was scheduled to conclude in February 2020, with winners announced the following month.
A delay occurred, and in April 2021, the winners were announced: CarbonCure Technologies (Canada) and CarbonBuilt (United States).
2020 Rapid Covid Testing
XPRIZE Rapid Covid Testing is a $6 million, six-month competition to develop faster, cheaper, and easier to use COVID-19 testing methods at scale.
2020 Feed the next billion XPRIZE
In 2020, the XPRIZE "Feed the next billion" challenge was launched as a $15 million 3-year competition with the goal of developing authentic chicken breast or fish filet alternatives, made from non-animal based ingredients. The challenge is currently in the final round, with 6 teams competing for the finals in July 2024.
2021 Gigaton Scale Carbon Removal
Funded by Elon Musk and the Musk Foundation, the $100 million carbon removal competition is the largest incentive prize in history to date. It aims "to inspire and help scale efficient solutions to collectively achieve the 10 gigaton per year carbon removal target by 2050, to help fight climate change and restore the Earth’s carbon balance".
In April 2022, XPRIZE and the Musk Foundation announced that in celebration of Earth Day, 15 teams had been designated as milestone winners in the $100 million XPRIZE carbon removal competition. The milestone winners have received $1 million each, with the overall winners to be awarded $80 million in 2025.
2023 XPRIZE Healthspan
In November 2023, XPRIZE announced the largest prize to date of $101 million for medical interventions targeting the biology of aging that show a restoration of 10 or more years of function in muscle, cognitive, and immune clinical endpoints. Winners are planned to be announced at the end of 2030.
See also
DARPA Grand Challenge
Elevator:2010
Global Security Challenge
H-Prize
Hutter Prize
Inducement prize contest
L Prize
Methuselah prize
Orteig Prize
References
External links
Non-profit organizations based in California
Scientific research foundations in the United States
Challenge awards
Transhumanism
1995 establishments in California
Organizations based in Culver City, California | Xprize Foundation | [
"Technology",
"Engineering",
"Biology"
] | 4,131 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
1,028,314 | https://en.wikipedia.org/wiki/Penman%20equation | The Penman equation describes evaporation (E) from an open water surface, and was developed by Howard Penman in 1948. Penman's equation requires daily mean temperature, wind speed, air pressure, and solar radiation to predict E. Simpler hydrometeorological equations continue to be used where obtaining such data is impractical, to give comparable results within specific contexts, e.g. humid vs arid climates.
Details
Numerous variations of the Penman equation are used to estimate evaporation from water and land. Specifically, the Penman–Monteith equation refines weather-based potential evapotranspiration (PET) estimates of vegetated land areas. It is widely regarded as one of the most accurate models for estimating potential evapotranspiration.
The original equation was developed by Howard Penman at the Rothamsted Experimental Station, Harpenden, UK.
The equation for evaporation given by Penman is:
$$E_{\text{mass}} = \frac{m R_n + \rho_a c_p\,(\delta e)\,g_a}{\lambda_v\,(m + \gamma)}$$
where:
m = Slope of the saturation vapor pressure curve (Pa K−1)
Rn = Net irradiance (W m−2)
ρa = density of air (kg m−3)
cp = heat capacity of air (J kg−1 K−1)
δe = vapor pressure deficit (Pa)
ga = momentum surface aerodynamic conductance (m s−1)
λv = latent heat of vaporization (J kg−1)
γ = psychrometric constant (Pa K−1)
which (if the SI units in parentheses are used) will give the evaporation Emass in units of kg/(m2·s), kilograms of water evaporated every second for each square meter of area.
Removing λv from the equation makes clear that it is fundamentally an energy balance. Replacing λv with Lv expresses the evaporation in familiar precipitation units ETvol, where Lv = λvρwater. This has units of m/s, or more commonly mm/day, because it is a volume flux, m3/s per m2 of area = m/s.
This equation assumes a daily time step so that net heat exchange with the ground is insignificant, and a unit area surrounded by similar open water or vegetation so that net heat and vapor exchange with the surrounding area cancels out. Sometimes people replace Rn with A, the total net available energy, when a situation warrants accounting for additional heat fluxes.
Temperature, wind speed, and relative humidity impact the values of m, ga, cp, ρa, and δe.
Shuttleworth (1993)
In 1993, W. Jim Shuttleworth modified and adapted the Penman equation to use SI units, which made calculating evaporation simpler. The resultant equation, implemented in the code sketch after the variable list below, is:
$$E_{\text{mass}} = \frac{m R_n + \gamma \cdot 6.43\,(1 + 0.536\,U_2)\,\delta e}{\lambda_v\,(m + \gamma)}$$
where:
Emass = Evaporation rate (mm day−1)
m = Slope of the saturation vapor pressure curve (kPa K−1)
Rn = Net irradiance (MJ m−2 day−1)
γ = psychrometric constant = 0.0016286 × P / λv (kPa K−1), where P is the atmospheric pressure in kPa
U2 = wind speed (m s−1)
δe = vapor pressure deficit (kPa)
λv = latent heat of vaporization (MJ kg−1)
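A direct transcription of the Shuttleworth form into Python follows; this is an illustration, the function name and the example inputs are hypothetical, and the psychrometric constant is computed from the pressure relation given in the variable list.

```python
def shuttleworth_evaporation(m, Rn, U2, de, lam=2.45, P=101.3):
    """E_mass (mm/day) from the Shuttleworth form above.

    m   -- slope of the saturation vapour pressure curve (kPa/K)
    Rn  -- net irradiance (MJ m^-2 day^-1)
    U2  -- wind speed (m/s)
    de  -- vapour pressure deficit (kPa)
    lam -- latent heat of vaporization (MJ/kg)
    P   -- atmospheric pressure (kPa), for the psychrometric constant
    """
    gamma = 0.0016286 * P / lam  # psychrometric constant (kPa/K)
    return (m * Rn + gamma * 6.43 * (1 + 0.536 * U2) * de) / (lam * (m + gamma))

# Illustrative inputs for a mild day (m ~ 0.145 kPa/K near 20 degC):
print(round(shuttleworth_evaporation(m=0.145, Rn=12.0, U2=2.0, de=1.0), 2))
```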
Some useful relationships
δe = (es - ea) = (1 – relative humidity) es
es = saturated vapor pressure of air, as is found inside plant stoma.
ea = vapor pressure of free flowing air.
es, mmHg = exp(21.07-5336/Ta), approximation by Merva, 1975
Therefore $m = \frac{de_s}{dT_a} = \frac{5336}{T_a^{2}}\,\exp\!\left(21.07 - \frac{5336}{T_a}\right)$, mmHg/K
Ta = air temperature in kelvins
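These relationships translate directly into code. A small sketch (function names are illustrative; temperatures in kelvins, outputs in the mmHg units quoted above, with 1 mmHg ≈ 0.1333 kPa for conversion):

```python
import math

def es_mmHg(Ta):
    """Saturated vapour pressure es (mmHg) at air temperature Ta (kelvins),
    using the Merva (1975) approximation quoted above."""
    return math.exp(21.07 - 5336.0 / Ta)

def slope_mmHg_per_K(Ta):
    """Slope m = d(es)/dTa (mmHg/K), the derivative of the expression above."""
    return (5336.0 / Ta**2) * es_mmHg(Ta)

# Example at 20 degC (293.15 K):
print(round(es_mmHg(293.15), 1), round(slope_mmHg_per_K(293.15), 2))
# -> roughly 17.6 mmHg and 1.09 mmHg/K
```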
See also
Pan evaporation
Evapotranspiration
Thornthwaite model
Blaney–Criddle equation
Penman–Monteith equation
Notes
References
Jarvis, P.G. (1976) The interpretation of the variations in leaf water potential and stomatal conductance found in canopies in the field. Phil. Trans. R. Soc. Lond. B. 273, 593–610.
Neitsch, S.L.; J.G. Arnold; J.R. Kliniry; J.R. Wolliams. 2005. Soil and Water Assessment Tool Theoretical Document; Version 2005. Grassland, Soil and Water Research Laboratory; Agricultural Research Service. and Blackland Research Center; Texas Agricultural Experiment Station. Temple, Texas. https://web.archive.org/web/20090116193356/http://www.brc.tamus.edu/swat/downloads/doc/swat2005/SWAT%202005%20theory%20final.pdf
Penman, H.L. (1948): Natural evaporation from open water, bare soil and grass. Proc. Roy. Soc. London A(194), S. 120–145.
Agronomy
Equations
Hydrology | Penman equation | [
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 973 | [
"Hydrology",
"Mathematical objects",
"Equations",
"Environmental engineering"
] |
1,028,388 | https://en.wikipedia.org/wiki/Nitrosamine | Nitrosamines (or more formally N-nitrosamines) are organic compounds produced by industrial processes. The chemical structure is R2N−N=O, where R is usually an alkyl group. They feature a nitroso group (−N=O) bonded to a deprotonated amine, and many are considered "probable human carcinogens". Most nitrosamines are carcinogenic in animals. A 2006 systematic review supports a "positive association between nitrite and nitrosamine intake and gastric cancer, between meat and processed meat intake and gastric cancer and oesophageal cancer, and between preserved fish, vegetable and smoked food intake and gastric cancer, but is not conclusive".
Chemistry
The organic chemistry of nitrosamines is well developed with regard to their syntheses, their structures, and their reactions. They usually are produced by the reaction of nitrous acid (HNO2) and secondary amines, although other nitrosyl sources (e.g. NOCl, N2O3, RONO) have the same effect:
R2NH + HNO2 → R2N−NO + H2O
The nitrous acid usually arises from protonation of a nitrite. This synthesis method is relevant to the generation of nitrosamines under some biological conditions. The nitrosation is also reversible, particularly in acidic solutions of nucleophiles. Aryl nitrosamines rearrange to give a para-nitroso aryl amine in the Fischer-Hepp rearrangement.
With regards to structure, the core of nitrosamines is planar, as established by X-ray crystallography. The N-N and N-O distances are 132 and 126 pm, respectively, in dimethylnitrosamine, one of the simplest members of a large class of N-nitrosamines.
Nitrosamines are not directly carcinogenic. Metabolic activation is required to convert them to the alkylating agents that modify bases in DNA, inducing mutations. The specific alkylating agents vary with the nitrosamine, but all are proposed to feature alkyldiazonium centers.
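Because the concern attaches to the R2N−N=O substructure itself, candidate molecules can be screened for it with a cheminformatics toolkit. The sketch below assumes the open-source RDKit library is available; the SMARTS pattern is one plausible encoding of the N-nitrosamine core, offered as an illustration rather than a regulatory filter.

```python
from rdkit import Chem  # assumes the open-source RDKit toolkit is installed

nitrosamine_core = Chem.MolFromSmarts("[NX3][NX2]=[OX1]")  # R2N-N=O pattern

for smiles in ("CN(C)N=O",      # N-nitrosodimethylamine (NDMA)
               "O=NN1CCCCC1",   # N-nitrosopiperidine
               "CCO"):          # ethanol, a negative control
    mol = Chem.MolFromSmiles(smiles)
    print(smiles, mol.HasSubstructMatch(nitrosamine_core))
```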
History and occurrence
In 1956, two British scientists, John Barnes and Peter Magee, reported that a simple member of the large class of N-nitrosamines, dimethylnitrosamine, produced liver tumours in rats. Subsequent studies showed that approximately 90% of the 300 nitrosamines tested were carcinogenic in a wide variety of animals.
Tobacco exposure
A common way ordinary consumers are exposed to nitrosamines is through tobacco use and cigarette smoke. Tobacco-specific nitrosamines also can be found in American dip snuff, chewing tobacco, and to a much lesser degree, snus (127.9 ppm for American dip snuff compared to 2.8 ppm in Swedish snuff or snus).
Dietary exposure
Medication impurities
There have been recalls of various medications due to the presence of nitrosamine impurities, including angiotensin II receptor blockers, ranitidine, valsartan, duloxetine, and others.
The US Food and Drug Administration published guidance about the control of nitrosamine impurities in medicines. Health Canada published guidance about nitrosamine impurities in medications and a list of established acceptable intake limits of nitrosamine impurities in medications.
Examples
See also
Hydrazines derived from these nitrosamines, e.g. UDMH, are also carcinogenic.
Possible health hazards of pickled vegetables
Tobacco-specific nitrosamines
Additional reading
References
External links
Oregon State University, Linus Pauling Institute article on Nitrosamines and cancer, including info on history of meat laws
Risk factors in Pancreatic Cancer
Nitrogen cycle
Functional groups
Garde manger
Carcinogens
IARC Group 1 carcinogens | Nitrosamine | [
"Chemistry",
"Biology",
"Environmental_science"
] | 785 | [
"Digestive system",
"Toxicology",
"Functional groups",
"Organ systems",
"Nitrogen cycle",
"Carcinogens",
"Metabolism"
] |
1,028,589 | https://en.wikipedia.org/wiki/Normal%20basis | In mathematics, specifically the algebraic theory of fields, a normal basis is a special kind of basis for Galois extensions of finite degree, characterised as forming a single orbit for the Galois group. The normal basis theorem states that any finite Galois extension of fields has a normal basis. In algebraic number theory, the study of the more refined question of the existence of a normal integral basis is part of Galois module theory.
Normal basis theorem
Let $K/F$ be a Galois extension with Galois group $G$. The classical normal basis theorem states that there is an element $\beta \in K$ such that $\{\sigma(\beta) : \sigma \in G\}$ forms a basis of K, considered as a vector space over F. That is, any element $\alpha \in K$ can be written uniquely as $\alpha = \sum_{\sigma \in G} a_\sigma\,\sigma(\beta)$ for some elements $a_\sigma \in F.$
A normal basis contrasts with a primitive element basis of the form $\{1, \beta, \beta^2, \ldots, \beta^{n-1}\}$, where $\beta \in K$ is an element whose minimal polynomial has degree $n = [K:F]$.
Group representation point of view
A field extension $K/F$ with Galois group G can be naturally viewed as a representation of the group G over the field F in which each automorphism is represented by itself. Representations of G over the field F can be viewed as left modules for the group algebra F[G]. Every homomorphism of left F[G]-modules $\phi : F[G] \to K$ is of the form $\phi(r) = r\beta$ for some $\beta \in K$. Since $\{\sigma : \sigma \in G\}$ is a linear basis of F[G] over F, it follows easily that $\phi$ is bijective iff $\beta$ generates a normal basis of K over F. The normal basis theorem therefore amounts to the statement saying that if $K/F$ is a finite Galois extension, then $K \cong F[G]$ as left $F[G]$-modules. In terms of representations of G over F, this means that K is isomorphic to the regular representation.
Case of finite fields
For finite fields this can be stated as follows: Let $F = \mathrm{GF}(q)$ denote the field of q elements, where $q = p^m$ is a prime power, and let $K = \mathrm{GF}(q^n)$ denote its extension field of degree $n \geq 1$. Here the Galois group is $G = \mathrm{Gal}(K/F) = \{1, \Phi, \Phi^2, \ldots, \Phi^{n-1}\}$, a cyclic group generated by the q-power Frobenius automorphism $\Phi(\beta) = \beta^q$, with $\Phi^n = 1$. Then there exists an element $\beta \in K$ such that
$$\{\beta, \Phi(\beta), \Phi^2(\beta), \ldots, \Phi^{n-1}(\beta)\} = \{\beta, \beta^q, \beta^{q^2}, \ldots, \beta^{q^{n-1}}\}$$
is a basis of K over F.
Proof for finite fields
In case the Galois group is cyclic as above, generated by $\sigma = \Phi$ with $\sigma^n = 1$, the normal basis theorem follows from two basic facts. The first is the linear independence of characters: a multiplicative character is a mapping χ from a group H to a field K satisfying $\chi(h_1 h_2) = \chi(h_1)\chi(h_2)$; then any distinct characters $\chi_1, \ldots, \chi_m$ are linearly independent in the K-vector space of mappings. We apply this to the Galois group automorphisms $1, \sigma, \sigma^2, \ldots, \sigma^{n-1}$, thought of as mappings from the multiplicative group $K^{\times}$ to $K$. Now $K \cong F^n$ as an F-vector space, so we may consider $\sigma$ as an element of the matrix algebra Mn(F); since its powers $1, \sigma, \ldots, \sigma^{n-1}$ are linearly independent (over K and a fortiori over F), its minimal polynomial must have degree at least n, i.e. it must be $X^n - 1$.
The second basic fact is the classification of finitely generated modules over a PID such as $F[X]$. Every such module M can be represented as $M \cong \bigoplus_{i=1}^{k} F[X]/(f_i(X))$, where the $f_i$ may be chosen so that they are monic polynomials or zero and $f_{i+1}$ is a multiple of $f_i$. Here $f_k$ is the monic polynomial of smallest degree annihilating the module, or zero if no such non-zero polynomial exists. In the first case $\dim_F M = \sum_i \deg f_i$, in the second case $\dim_F M = \infty$. In our case of cyclic G of size n generated by $\sigma$ we have an F-algebra isomorphism $F[G] \cong F[X]/(X^n - 1)$ where X corresponds to $\sigma$, so every $F[G]$-module may be viewed as an $F[X]$-module with multiplication by X being multiplication by $\sigma$. In case of K this means $X\alpha = \sigma(\alpha)$, so the monic polynomial of smallest degree annihilating K is the minimal polynomial of $\sigma$. Since K is a finite dimensional F-space, the representation above is possible with $f_k = X^n - 1$. Since $\dim_F K = n$, we can only have $k = 1$, and $K \cong F[X]/(X^n - 1)$ as F[X]-modules. (Note this is an isomorphism of F-linear spaces, but not of rings or F-algebras.) This gives the isomorphism of $F[G]$-modules $K \cong F[G]$ that we talked about above, and under it the basis $\{1, \sigma, \ldots, \sigma^{n-1}\}$ on the right side corresponds to a normal basis of K on the left.
Note that this proof would also apply in the case of a cyclic Kummer extension.
Example
Consider the field $K = \mathrm{GF}(2^3) = \mathrm{GF}(8)$ over $F = \mathrm{GF}(2)$, with Frobenius automorphism $\sigma(\beta) = \beta^2$. The proof above clarifies the choice of normal bases in terms of the structure of K as a representation of G (or F[G]-module). The irreducible factorization
$$X^3 - 1 = (X + 1)(X^2 + X + 1) \in F[X]$$
means we have a direct sum of F[G]-modules (by the Chinese remainder theorem):
$$K \cong F[X]/(X^3 - 1) \cong F[X]/(X + 1) \oplus F[X]/(X^2 + X + 1).$$
The first component is just $F \subset K$, the kernel of $\sigma - 1$; the second, the kernel of $\sigma^2 + \sigma + 1$ (the elements of trace zero), is a two-dimensional F[G]-module isomorphic to $F[X]/(X^2 + X + 1)$. (Thus $K \cong F \oplus F[X]/(X^2 + X + 1)$ as F[G]-modules, but not as F-algebras.)
The elements which can be used for a normal basis are precisely those outside either of the submodules, so that $(\sigma - 1)(\beta) \neq 0$ and $(\sigma^2 + \sigma + 1)(\beta) \neq 0$. In terms of the G-orbits of K, which correspond to the irreducible factors of:
$$t^8 - t = t(t + 1)(t^3 + t + 1)(t^3 + t^2 + 1) \in F[t],$$
the elements of $F = \mathrm{GF}(2)$ are the roots of $t(t + 1)$, the nonzero elements of the trace-zero submodule are the roots of $t^3 + t + 1$, while the normal basis, which in this case is unique, is given by the roots of the remaining factor $t^3 + t^2 + 1$.
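This example is small enough to check by brute force. The sketch below models GF(8) as GF(2)[x]/(x^3 + x + 1) — the choice of modulus is an assumption of the sketch — and lists the elements β for which {β, β^2, β^4} is a basis over GF(2); exactly the three roots of t^3 + t^2 + 1 turn up.

```python
MOD = 0b1011  # x^3 + x + 1, one standard modulus for GF(8) (an assumption here)

def gf8_mul(a, b):
    """Carry-less multiplication of 3-bit polynomials, reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def is_normal(beta):
    """True iff {beta, beta^2, beta^4} is linearly independent over GF(2)."""
    b2 = gf8_mul(beta, beta)
    rows = [beta, b2, gf8_mul(b2, b2)]
    for i in range(3):  # Gaussian elimination on the 3x3 bit matrix
        pivot = next((j for j in range(i, 3) if rows[j] >> (2 - i) & 1), None)
        if pivot is None:
            return False
        rows[i], rows[pivot] = rows[pivot], rows[i]
        for j in range(3):
            if j != i and rows[j] >> (2 - i) & 1:
                rows[j] ^= rows[i]
    return True

print([b for b in range(1, 8) if is_normal(b)])  # [3, 5, 7]: roots of t^3+t^2+1
```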
By contrast, for the extension field $L = \mathrm{GF}(2^4)$ over $F = \mathrm{GF}(2)$, in which $n = 4$ is divisible by the characteristic $p = 2$, we have the F[G]-module isomorphism
$$L \cong F[X]/(X^4 - 1) = F[X]/(X + 1)^4.$$
Here the operator $\sigma$ is not diagonalizable, the module L has nested submodules given by generalized eigenspaces of $\sigma$, and the normal basis elements β are those outside the largest proper generalized eigenspace, the elements with $(\sigma - 1)^3(\beta) \neq 0$.
Application to cryptography
The normal basis is frequently used in cryptographic applications based on the discrete logarithm problem, such as elliptic curve cryptography, since arithmetic using a normal basis is typically more computationally efficient than using other bases.
For example, in the field $K = \mathrm{GF}(2^3)$ above, we may represent elements as bit-strings:
$$x = (a_2 a_1 a_0) = a_2\beta^4 + a_1\beta^2 + a_0\beta,$$
where the coefficients are bits $a_i \in \mathrm{GF}(2)$. Now we can square elements by doing a left circular shift, $x^2 = (a_1 a_0 a_2)$, since squaring $\beta^4$ gives $\beta^8 = \beta$. This makes the normal basis especially attractive for cryptosystems that utilize frequent squaring.
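In software this squaring really is a single rotate operation; a minimal sketch with 3-bit words (the helper name is illustrative):

```python
def square_in_normal_basis(x, n=3):
    """Squaring = left circular shift of the n-bit word (a_{n-1} ... a_0)."""
    return ((x << 1) | (x >> (n - 1))) & ((1 << n) - 1)

x = 0b110                               # x = beta^4 + beta^2
print(bin(square_in_normal_basis(x)))   # 0b101, i.e. x^2 = beta^4 + beta
```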
Proof for the case of infinite fields
Suppose $K/F$ is a finite Galois extension of the infinite field F. Let $[K : F] = n$, $G = \mathrm{Gal}(K/F) = \{\sigma_1, \ldots, \sigma_n\}$, where $\sigma_1 = \mathrm{Id}$. By the primitive element theorem there exists $\alpha \in K$ such that $K = F(\alpha)$. Let us write $\alpha_i = \sigma_i(\alpha)$. $\alpha$'s (monic) minimal polynomial f over F is the irreducible degree n polynomial given by the formula
$$f(X) = \prod_{i=1}^{n} (X - \alpha_i).$$
Since f is separable (it has simple roots) we may define
$$g(X) = \frac{f(X)}{(X - \alpha)\,f'(\alpha)}.$$
In other words,
$$g(X) = \prod_{i=2}^{n} \frac{X - \alpha_i}{\alpha - \alpha_i}.$$
Note that $g(\alpha) = 1$ and $g(\alpha_i) = 0$ for $i \neq 1$. Next, define an $n \times n$ matrix A of polynomials over K and a polynomial D by
$$A_{ij}(X) = \sigma_i\sigma_j(g)(X), \qquad D(X) = \det A(X).$$
Observe that $A_{ij}(\alpha) = \sigma_k(g)(\alpha)$, where k is determined by $\sigma_k = \sigma_i \sigma_j$; in particular $A_{ij}(\alpha) = 1$ iff $\sigma_i \sigma_j = \mathrm{Id}$. It follows that $A(\alpha)$ is the permutation matrix corresponding to the permutation of G which sends each $\sigma$ to $\sigma^{-1}$. (We denote by $A(\alpha)$ the matrix obtained by evaluating $A(X)$ at $X = \alpha$.) Therefore, $D(\alpha) = \det A(\alpha) = \pm 1$. We see that D is a non-zero polynomial, and therefore it has only a finite number of roots. Since we assumed F is infinite, we can find $a \in F$ such that $D(a) \neq 0$. Define
$$\beta = g(a).$$
We claim that $\{\sigma_1(\beta), \ldots, \sigma_n(\beta)\}$ is a normal basis. We only have to show that $\sigma_1(\beta), \ldots, \sigma_n(\beta)$ are linearly independent over F, so suppose $\sum_j x_j\,\sigma_j(\beta) = 0$ for some $x_1, \ldots, x_n \in F$. Applying the automorphism $\sigma_i$ yields $\sum_j x_j\,\sigma_i\sigma_j(g)(a) = 0$ for all i. In other words, $A(a)\,\bar{x} = \bar{0}$. Since $\det A(a) = D(a) \neq 0$, we conclude that $\bar{x} = \bar{0}$, which completes the proof.
It is tempting to take $a = \alpha$ because $D(\alpha) \neq 0$. But this is impermissible because we used the fact that $a \in F$ to conclude that for any F-automorphism $\sigma$ and polynomial $h$ over $K$, the value of the polynomial $\sigma(h)$ at a equals $\sigma(h(a))$.
Primitive normal basis
A primitive normal basis of an extension of finite fields $E/F$ is a normal basis for $E/F$ that is generated by a primitive element of E, that is, a generator of the multiplicative group $E^\times$. (Note that this is a more restrictive definition of primitive element than that mentioned above after the general normal basis theorem: one requires powers of the element to produce every non-zero element of E, not merely a basis.) Lenstra and Schoof (1987) proved that every extension of finite fields possesses a primitive normal basis, the case when F is a prime field having been settled by Harold Davenport.
Free elements
If $K/F$ is a Galois extension and x in K generates a normal basis over F, then x is free in $K/F$. If x has the property that for every subgroup H of the Galois group G, with fixed field $K^H$, x is free for $K/K^H$, then x is said to be completely free in $K/F$. Every Galois extension has a completely free element.
See also
Dual basis in a field extension
Polynomial basis
Zech's logarithm
References
Linear algebra
Field (mathematics)
Abstract algebra
Cryptography | Normal basis | [
"Mathematics",
"Engineering"
] | 1,705 | [
"Cybersecurity engineering",
"Cryptography",
"Applied mathematics",
"Linear algebra",
"Abstract algebra",
"Algebra"
] |
1,028,841 | https://en.wikipedia.org/wiki/Simplex%20category | In mathematics, the simplex category (or simplicial category or nonempty finite ordinal category) is the category of non-empty finite ordinals and order-preserving maps. It is used to define simplicial and cosimplicial objects.
Formal definition
The simplex category is usually denoted by $\Delta$. There are several equivalent descriptions of this category. $\Delta$ can be described as the category of non-empty finite ordinals as objects, thought of as totally ordered sets, and (non-strictly) order-preserving functions as morphisms. The objects are commonly denoted $[n] = \{0, 1, \ldots, n\}$ (so that $[n]$ is the ordinal $n + 1$). The category is generated by coface and codegeneracy maps, which amount to inserting or deleting elements of the orderings, as in the sketch below. (See simplicial set for relations of these maps.)
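A concrete way to see these generators is to model them as functions on {0, ..., n}. The sketch below (with illustrative names) defines the coface and codegeneracy maps and spot-checks two of the standard cosimplicial identities.

```python
def coface(i):
    """delta_i: [n] -> [n+1], the order-preserving injection whose image omits i."""
    return lambda k: k if k < i else k + 1

def codegeneracy(i):
    """sigma_i: [n+1] -> [n], the order-preserving surjection that hits i twice."""
    return lambda k: k if k <= i else k - 1

n = 4  # spot-check on the ordinal [n] = {0, ..., n}
# Cosimplicial identity: delta_j . delta_i = delta_i . delta_{j-1} for i < j.
for j in range(n + 2):
    for i in range(j):
        assert all(coface(j)(coface(i)(k)) == coface(i)(coface(j - 1)(k))
                   for k in range(n + 1))
# Mixed identity: sigma_i . delta_i is the identity map.
for i in range(n + 1):
    assert all(codegeneracy(i)(coface(i)(k)) == k for k in range(n + 1))
print("identities hold")
```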
A simplicial object is a presheaf on $\Delta$, that is, a contravariant functor from $\Delta$ to another category. For instance, simplicial sets are contravariant with the codomain category being the category of sets. A cosimplicial object is defined similarly as a covariant functor originating from $\Delta$.
Augmented simplex category
The augmented simplex category, denoted by $\Delta_+$, is the category of all finite ordinals and order-preserving maps, thus $\Delta_+ = \Delta \cup \{[-1]\}$, where $[-1] = \emptyset$. Accordingly, this category might also be denoted FinOrd. The augmented simplex category is occasionally referred to as the algebraists' simplex category and the above version is called the topologists' simplex category.
A contravariant functor defined on $\Delta_+$ is called an augmented simplicial object and a covariant functor out of $\Delta_+$ is called an augmented cosimplicial object; when the codomain category is the category of sets, for example, these are called augmented simplicial sets and augmented cosimplicial sets respectively.
The augmented simplex category, unlike the simplex category, admits a natural monoidal structure. The monoidal product is given by concatenation of linear orders, and the unit is the empty ordinal (the lack of a unit prevents this from qualifying as a monoidal structure on $\Delta$). In fact, $\Delta_+$ is the monoidal category freely generated by a single monoid object, given by $[0]$ with the unique possible unit and multiplication. This description is useful for understanding how any comonoid object in a monoidal category gives rise to a simplicial object, since it can then be viewed as the image of a functor from $\Delta_+^{\mathrm{op}}$ to the monoidal category containing the comonoid; by forgetting the augmentation we obtain a simplicial object. Similarly, this also illuminates the construction of simplicial objects from monads (and hence adjoint functors), since monads can be viewed as monoid objects in endofunctor categories.
See also
Simplicial category
PROP (category theory)
Abstract simplicial complex
References
External links
What's special about the Simplex category?
Algebraic topology
Homotopy theory
Categories in category theory
Free algebraic structures | Simplex category | [
"Mathematics"
] | 637 | [
"Mathematical structures",
"Algebraic topology",
"Basic concepts in set theory",
"Families of sets",
"Category theory",
"Algebraic structures",
"Simplicial sets",
"Categories in category theory",
"Topology",
"Fields of abstract algebra",
"Free algebraic structures"
] |
1,028,926 | https://en.wikipedia.org/wiki/Architectural%20acoustics | Architectural acoustics (also known as building acoustics) is the science and engineering of achieving a good sound within a building and is a branch of acoustical engineering. The first application of modern scientific methods to architectural acoustics was carried out by the American physicist Wallace Sabine in the Fogg Museum lecture room. He applied his newfound knowledge to the design of Symphony Hall, Boston.
Architectural acoustics can be about achieving good speech intelligibility in a theatre, restaurant or railway station, enhancing the quality of music in a concert hall or recording studio, or suppressing noise to make offices and homes more productive and pleasant places to work and live in. Architectural acoustic design is usually done by acoustic consultants.
Building skin envelope
This science analyzes noise transmission from the building exterior envelope to the interior and vice versa. The main noise paths are roofs, eaves, walls, windows, doors and penetrations. Sufficient control ensures space functionality and is often required based on building use and local municipal codes. An example would be providing a suitable design for a home which is to be constructed close to a high-volume roadway, or under the flight path of a major airport, or near the airport itself.
Inter-space noise control
The science of limiting and/or controlling noise transmission from one building space to another to ensure space functionality and speech privacy. The typical sound paths are ceilings, room partitions, acoustic ceiling panels (such as wood dropped ceiling panels), doors, windows, flanking, ducting and other penetrations. Technical solutions depend on the source of the noise and the path of acoustic transmission, for example noise by steps or noise by (air, water) flow vibrations. An example would be providing suitable party wall design in an apartment complex to minimize the mutual disturbance due to noise by residents in adjacent apartments.
Inter-space noise control takes a different form in the acoustics of European football stadiums. One goal in stadium acoustics is to make the crowd as loud as possible; inter-space noise control still plays a role, but here it is used to reflect noise, creating more reverberation and a louder decibel level throughout the stadium. Many outdoor soccer stadiums, for example, have roofs over the fan sections which create more reverberation and echoing, helping to raise the general volume in the stadium.
Interior space acoustics
This is the science of controlling a room's surfaces based on sound absorbing and reflecting properties. Excessive reverberation time, which can be calculated, can lead to poor speech intelligibility.
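The classical calculation is Sabine's formula, RT60 = 0.161·V/A in metric units, where V is the room volume (m3) and A = Σ Si·αi is the total absorption in metric sabins. A minimal sketch follows, with room dimensions and absorption coefficients that are purely illustrative assumptions:

```python
def sabine_rt60(volume_m3, surfaces):
    """Reverberation time T60 (seconds) from Sabine's formula 0.161*V/A,
    where A = sum(S_i * alpha_i) is the total absorption in metric sabins."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical 10 m x 8 m x 3 m room: (surface area m^2, absorption coeff.)
room_rt60 = sabine_rt60(240.0, [(80.0, 0.30),    # carpeted floor
                                (80.0, 0.02),    # plaster ceiling
                                (108.0, 0.03)])  # painted walls
print(f"{room_rt60:.2f} s")  # roughly 1.3 s - long for clear speech
```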
Sound reflections create standing waves that produce natural resonances that can be heard as a pleasant sensation or an annoying one. Reflective surfaces can be angled and coordinated to provide good coverage of sound for a listener in a concert hall or music recital space. To illustrate this concept consider the difference between a modern large office meeting room or lecture theater and a traditional classroom with all hard surfaces.
Interior building surfaces can be constructed of many different materials and finishes. Ideal acoustical panels are those without a face or finish material that interferes with the acoustical infill or substrate. Fabric-covered panels are one way to increase acoustical absorption. Perforated metal also shows sound-absorbing qualities. Finish material is used to cover over the acoustical substrate. Mineral fiber board, or Micore, is a commonly used acoustical substrate. Finish materials often consist of fabric, wood or acoustical tile. Fabric can be wrapped around substrates to create what is referred to as a "pre-fabricated panel" and often provides good noise absorption if laid onto a wall.
Prefabricated panels are limited by the size of the substrate. Fabric retained in a wall-mounted perimeter track system is referred to as "on-site acoustical wall panels". These are constructed by framing the perimeter track into shape, infilling the acoustical substrate and then stretching and tucking the fabric into the perimeter frame system. On-site wall panels can be constructed to accommodate door frames, baseboard, or any other intrusion. Large panels can be created on walls and ceilings with this method. Wood finishes can consist of punched or routed slots and provide a natural look to the interior space, although acoustical absorption may not be great.
There are four ways to improve workplace acoustics and solve workplace sound problems – the ABCDs.
A = Absorb (via drapes, carpets, ceiling tiles, etc.)
B = Block (via panels, walls, floors, ceilings and layout)
C = Cover-up, or Control (background sound levels and spectra) (via masking sound)
D = Diffuse (cause the sound energy to spread by radiating in many directions)
Mechanical equipment noise
Building services noise control is the science of controlling noise produced by:
HVAC (heating, ventilation, air conditioning) systems
Elevators
Electrical generators positioned within or attached to a building
Any other building service infrastructure component that emits sound.
Inadequate control may lead to elevated sound levels within the space, which can be annoying and reduce speech intelligibility. Typical improvements are vibration isolation of mechanical equipment and sound attenuators in ductwork. Sound masking can also be created by adjusting HVAC noise to a predetermined level.
See also
Noise health effects
Noise mitigation
Noise Reduction Coefficient
Noise regulation
Noise, vibration, and harshness
Sound transmission class
References
Further reading
Thompson, Emily (2002). The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America, 1900–1933. Cambridge, Mass.: MIT Press.
Acoustics
Building engineering
Acoustic problems
Sound | Architectural acoustics | [
"Physics",
"Materials_science",
"Engineering"
] | 1,120 | [
"Building engineering",
"Classical mechanics",
"Acoustics",
"Civil engineering",
"Building defects",
"Mechanical failure",
"Architecture"
] |
1,029,211 | https://en.wikipedia.org/wiki/Metabolomics | Metabolomics is the scientific study of chemical processes involving metabolites, the small molecule substrates, intermediates, and products of cell metabolism. Specifically, metabolomics is the "systematic study of the unique chemical fingerprints that specific cellular processes leave behind", the study of their small-molecule metabolite profiles. The metabolome represents the complete set of metabolites in a biological cell, tissue, organ, or organism, which are the end products of cellular processes. Messenger RNA (mRNA), gene expression data, and proteomic analyses reveal the set of gene products being produced in the cell, data that represents one aspect of cellular function. Conversely, metabolic profiling can give an instantaneous snapshot of the physiology of that cell, and thus, metabolomics provides a direct "functional readout of the physiological state" of an organism. There are indeed quantifiable correlations between the metabolome and the other cellular ensembles (genome, transcriptome, proteome, and lipidome), which can be used to predict metabolite abundances in biological samples from, for example mRNA abundances. One of the ultimate challenges of systems biology is to integrate metabolomics with all other -omics information to provide a better understanding of cellular biology.
History
The concept that individuals might have a "metabolic profile" that could be reflected in the makeup of their biological fluids was introduced by Roger Williams in the late 1940s, who used paper chromatography to suggest that characteristic metabolic patterns in urine and saliva were associated with diseases such as schizophrenia. However, it was only through technological advancements in the 1960s and 1970s that it became feasible to quantitatively (as opposed to qualitatively) measure metabolic profiles. The term "metabolic profile" was introduced by Horning et al. in 1971 after they demonstrated that gas chromatography-mass spectrometry (GC-MS) could be used to measure compounds present in human urine and tissue extracts. The Horning group, along with those of Linus Pauling and Arthur B. Robinson, led the development of GC-MS methods to monitor the metabolites present in urine through the 1970s.
Concurrently, NMR spectroscopy, which was discovered in the 1940s, was also undergoing rapid advances. In 1974, Seeley et al. demonstrated the utility of using NMR to detect metabolites in unmodified biological samples. This first study on muscle highlighted the value of NMR in that it was determined that 90% of cellular ATP is complexed with magnesium. As sensitivity has improved with the evolution of higher magnetic field strengths and magic angle spinning, NMR continues to be a leading analytical tool to investigate metabolism. Recent efforts to utilize NMR for metabolomics have been largely driven by the laboratory of Jeremy K. Nicholson at Birkbeck College, University of London and later at Imperial College London. In 1984, Nicholson showed that 1H NMR spectroscopy could potentially be used to diagnose diabetes mellitus, and later pioneered the application of pattern recognition methods to NMR spectroscopic data.
In 1994 and 1996, liquid chromatography mass spectrometry metabolomics experiments were performed by Gary Siuzdak while working with Richard Lerner (then president of the Scripps Research Institute) and Benjamin Cravatt, to analyze the cerebrospinal fluid of sleep-deprived animals. One molecule of particular interest, oleamide, was observed and later shown to have sleep-inducing properties. This work is one of the earliest such experiments combining liquid chromatography and mass spectrometry in metabolomics.
In 2005, the first metabolomics tandem mass spectrometry database, METLIN, for characterizing human metabolites was developed in the Siuzdak laboratory at the Scripps Research Institute. METLIN has since grown, and as of December 2023 it contains MS/MS experimental data on over 930,000 molecular standards and other chemical entities, each compound having experimental tandem mass spectrometry data generated from molecular standards at multiple collision energies and in positive and negative ionization modes. METLIN is the largest repository of tandem mass spectrometry data of its kind. The dedicated academic journal Metabolomics first appeared in 2005, founded by its current editor-in-chief Roy Goodacre.
In 2005, the Siuzdak lab was engaged in identifying metabolites associated with sepsis; in an effort to address the issue of statistically identifying the most relevant dysregulated metabolites across hundreds of LC/MS datasets, the first algorithm was developed to allow for the nonlinear alignment of mass spectrometry metabolomics data. Called XCMS, it has since (2012) been developed as an online tool, and as of 2019 (with METLIN) it has over 30,000 registered users.
On 23 January 2007, the Human Metabolome Project, led by David S. Wishart, completed the first draft of the human metabolome, consisting of a database of approximately 2,500 metabolites, 1,200 drugs and 3,500 food components. Similar projects have been underway in several plant species, most notably Medicago truncatula and Arabidopsis thaliana for several years.
As late as mid-2010, metabolomics was still considered an "emerging field". Further, it was noted that progress in the field depended in large part on addressing otherwise "irresolvable technical challenges" through technical evolution of mass spectrometry instrumentation.
In 2015, real-time metabolome profiling was demonstrated for the first time.
Metabolome
The metabolome refers to the complete set of small-molecule (<1.5 kDa) metabolites (such as metabolic intermediates, hormones and other signaling molecules, and secondary metabolites) to be found within a biological sample, such as a single organism. The word was coined in analogy with transcriptomics and proteomics; like the transcriptome and the proteome, the metabolome is dynamic, changing from second to second. Although the metabolome can be defined readily enough, it is not currently possible to analyse the entire range of metabolites by a single analytical method.
In January 2007, scientists at the University of Alberta and the University of Calgary completed the first draft of the human metabolome. The Human Metabolome Database (HMDB) is perhaps the most extensive public metabolomic spectral database to date and is a freely available electronic database (www.hmdb.ca) containing detailed information about small molecule metabolites found in the human body. It is intended to be used for applications in metabolomics, clinical chemistry, biomarker discovery and general education. The database is designed to contain or link three kinds of data:
Chemical data,
Clinical data and
Molecular biology/biochemistry data.
The database contains 220,945 metabolite entries including both water-soluble and lipid-soluble metabolites. Additionally, 8,610 protein sequences (enzymes and transporters) are linked to these metabolite entries. Each MetaboCard entry contains 130 data fields, with two thirds of the information devoted to chemical/clinical data and the other third devoted to enzymatic or biochemical data. Version 3.5 of the HMDB contains >16,000 endogenous metabolites, >1,500 drugs and >22,000 food constituents or food metabolites. This information, available at the Human Metabolome Database and based on analysis of information available in the current scientific literature, is far from complete. In contrast, much more is known about the metabolomes of other organisms. For example, over 50,000 metabolites have been characterized from the plant kingdom, and many thousands of metabolites have been identified and/or characterized from single plants.
Each type of cell and tissue has a unique metabolic 'fingerprint' that can elucidate organ- or tissue-specific information. Bio-specimens used for metabolomics analysis include, but are not limited to, plasma, serum, urine, saliva, feces, muscle, sweat, exhaled breath and gastrointestinal fluid. The ease of collection facilitates high temporal resolution, and because these fluids are always at dynamic equilibrium with the body, they can describe the host as a whole. The genome can tell what could happen, the transcriptome can tell what appears to be happening, the proteome can tell what makes it happen, and the metabolome can tell what has happened and what is happening.
Metabolites
Metabolites are the substrates, intermediates and products of metabolism. Within the context of metabolomics, a metabolite is usually defined as any molecule less than 1.5 kDa in size. However, there are exceptions to this depending on the sample and detection method. For example, macromolecules such as lipoproteins and albumin are reliably detected in NMR-based metabolomics studies of blood plasma. In plant-based metabolomics, it is common to refer to "primary" and "secondary" metabolites. A primary metabolite is directly involved in normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has an important ecological function. Examples include antibiotics and pigments. By contrast, in human-based metabolomics, it is more common to describe metabolites as being either endogenous (produced by the host organism) or exogenous. Metabolites of foreign substances such as drugs are termed xenometabolites.
The metabolome derives from a large network of metabolic reactions, where outputs from one enzymatic chemical reaction are inputs to other chemical reactions. Such systems have been described as hypercycles.
Metabonomics
Metabonomics is defined as "the quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification". The word origin is from the Greek μεταβολή meaning change and nomos meaning a rule set or set of laws. This approach was pioneered by Jeremy Nicholson at Murdoch University and has been used in toxicology, disease diagnosis and a number of other fields. Historically, the metabonomics approach was one of the first methods to apply the scope of systems biology to studies of metabolism.
There has been some disagreement over the exact differences between 'metabolomics' and 'metabonomics'. The difference between the two terms is not related to choice of analytical platform: although metabonomics is more associated with NMR spectroscopy and metabolomics with mass spectrometry-based techniques, this is simply because of usages amongst different groups that have popularized the different terms. While there is still no absolute agreement, there is a growing consensus that 'metabolomics' places a greater emphasis on metabolic profiling at a cellular or organ level and is primarily concerned with normal endogenous metabolism. 'Metabonomics' extends metabolic profiling to include information about perturbations of metabolism caused by environmental factors (including diet and toxins), disease processes, and the involvement of extragenomic influences, such as gut microflora. This is not a trivial difference; metabolomic studies should, by definition, exclude metabolic contributions from extragenomic sources, because these are external to the system being studied. However, in practice, within the field of human disease research there is still a large degree of overlap in the way both terms are used, and they are often in effect synonymous.
Exometabolomics
Exometabolomics, or "metabolic footprinting", is the study of extracellular metabolites. It uses many techniques from other subfields of metabolomics, and has applications in biofuel development, bioprocessing, determining drugs' mechanism of action, and studying intercellular interactions.
Analytical technologies
The typical workflow of metabolomics studies is as follows. First, samples are collected from tissue, plasma, urine, saliva, cells, etc. Next, metabolites are extracted, often with the addition of internal standards, and derivatized. During sample analysis, metabolites are quantified (by liquid chromatography or gas chromatography coupled with MS, and/or NMR spectroscopy). The raw output data can be used for metabolite feature extraction and further processed before statistical analysis (such as principal component analysis, PCA). Many bioinformatic tools and software packages are available to identify associations with disease states and outcomes, determine significant correlations, and characterize metabolic signatures with existing biological knowledge.
Separation methods
Initially, analytes in a metabolomic sample comprise a highly complex mixture. This complex mixture can be simplified prior to detection by separating some analytes from others. Separation achieves various goals: analytes which cannot be resolved by the detector may be separated in this step; in MS analysis, ion suppression is reduced; the retention time of the analyte serves as information regarding its identity. This separation step is not mandatory and is often omitted in NMR and "shotgun" based approaches such as shotgun lipidomics.
Gas chromatography (GC), especially when interfaced with mass spectrometry (GC-MS), is a widely used separation technique for metabolomic analysis. GC offers very high chromatographic resolution, and can be used in conjunction with a flame ionization detector (GC/FID) or a mass spectrometer (GC-MS). The method is especially useful for identification and quantification of small and volatile molecules. However, a practical limitation of GC is the requirement of chemical derivatization for many biomolecules as only volatile chemicals can be analysed without derivatization. In cases where greater resolving power is required, two-dimensional chromatography (GCxGC) can be applied.
High performance liquid chromatography (HPLC) has emerged as the most common separation technique for metabolomic analysis. With the advent of electrospray ionization, HPLC was coupled to MS. In contrast with GC, HPLC has lower chromatographic resolution, but requires no derivatization for polar molecules, and separates molecules in the liquid phase. Additionally HPLC has the advantage that a much wider range of analytes can be measured with a higher sensitivity than GC methods.
Capillary electrophoresis (CE) has a higher theoretical separation efficiency than HPLC (although requiring much more time per separation), and is suitable for use with a wider range of metabolite classes than is GC. As for all electrophoretic techniques, it is most appropriate for charged analytes.
In direct-infusion mass spectrometry (DI-MS), sample is directly introduced into the spectrometer and separation steps are skipped. DI-MS can be employed to perform single cell metabolic analysis of human cells.
Detection methods
Mass spectrometry (MS) is used to identify and quantify metabolites after optional separation by GC, HPLC, or CE. GC-MS was the first hyphenated technique to be developed. Identification leverages the distinct patterns in which analytes fragment. These patterns can be thought of as a mass spectral fingerprint. Libraries exist that allow identification of a metabolite according to this fragmentation pattern. MS is sensitive and can be very specific. There are also a number of techniques which use MS as a stand-alone technology: the sample is infused directly into the mass spectrometer with no prior separation, and the MS provides sufficient selectivity to both separate and detect metabolites.
For analysis by mass spectrometry, the analytes must be imparted with a charge and transferred to the gas phase. Electron ionization (EI) is the most common ionization technique applied to GC separations, as it is amenable to low pressures. EI also fragments the analyte, providing structural information while increasing the complexity of the data and possibly obscuring the molecular ion. Atmospheric-pressure chemical ionization (APCI) is an atmospheric pressure technique that can be applied to all the above separation techniques. APCI is a gas phase ionization method which provides slightly more aggressive ionization than ESI, and is suitable for less polar compounds. Electrospray ionization (ESI) is the most common ionization technique applied in LC/MS. This soft ionization is most successful for polar molecules with ionizable functional groups. Another commonly used soft ionization technique is secondary electrospray ionization (SESI).
In the 2000s, surface-based mass analysis has seen a resurgence, with new MS technologies focused on increasing sensitivity, minimizing background, and reducing sample preparation. The ability to analyze metabolites directly from biofluids and tissues continues to challenge current MS technology, largely because of the limits imposed by the complexity of these samples, which contain thousands to tens of thousands of metabolites. Among the technologies being developed to address this challenge is Nanostructure-Initiator MS (NIMS), a desorption/ionization approach that does not require the application of matrix and thereby facilitates small-molecule (i.e., metabolite) identification. MALDI is also used; however, the application of a MALDI matrix can add significant background that complicates analysis of the low-mass range (i.e., metabolites). In addition, the size of the resulting matrix crystals limits the spatial resolution that can be achieved in tissue imaging. Because of these limitations, several other matrix-free desorption/ionization approaches have been applied to the analysis of biofluids and tissues.
Secondary ion mass spectrometry (SIMS) was one of the first matrix-free desorption/ionization approaches used to analyze metabolites from biological samples. SIMS uses a high-energy primary ion beam to desorb and generate secondary ions from a surface. The primary advantage of SIMS is its high spatial resolution (as small as 50 nm), a powerful characteristic for tissue imaging with MS. However, SIMS has yet to be readily applied to the analysis of biofluids and tissues because of its limited sensitivity and the analyte fragmentation generated by the high-energy primary ion beam. Desorption electrospray ionization (DESI) is a matrix-free technique for analyzing biological samples that uses a charged solvent spray to desorb ions from a surface. Advantages of DESI are that no special surface is required and the analysis is performed at ambient pressure with full access to the sample during acquisition. A limitation of DESI is spatial resolution, because "focusing" the charged solvent spray is difficult. However, a recent development termed laser ablation ESI (LAESI) is a promising approach to circumvent this limitation. Most recently, ion trap techniques such as orbitrap mass spectrometry have also been applied to metabolomics research.
Nuclear magnetic resonance (NMR) spectroscopy is the only detection technique which does not rely on separation of the analytes, and the sample can thus be recovered for further analyses. All kinds of small molecule metabolites can be measured simultaneously - in this sense, NMR is close to being a universal detector. The main advantages of NMR are high analytical reproducibility and simplicity of sample preparation. Practically, however, it is relatively insensitive compared to mass spectrometry-based techniques.
Although NMR and MS are the most widely used modern-day techniques for detection, there are other methods in use. These include Fourier-transform ion cyclotron resonance, ion-mobility spectrometry, electrochemical detection (coupled to HPLC), Raman spectroscopy and radiolabel (when combined with thin-layer chromatography).
Statistical methods
The data generated in metabolomics usually consist of measurements performed on subjects under various conditions. These measurements may be digitized spectra or a list of metabolite features. In its simplest form, this generates a matrix with rows corresponding to subjects and columns corresponding to metabolite features (or vice versa). Several statistical programs are currently available for analysis of both NMR and mass spectrometry data, and a great number of free software packages are available for the analysis of metabolomics data. Some statistical tools designed for NMR data analysis are also useful for MS data. For mass spectrometry data, software is available that identifies molecules that vary in subject groups on the basis of mass-over-charge value and sometimes retention time, depending on the experimental design.
Once the metabolite data matrix is determined, unsupervised data reduction techniques (e.g. PCA) can be used to elucidate patterns and connections. In many studies, including those evaluating drug-toxicity and some disease models, the metabolites of interest are not known a priori. This makes unsupervised methods, those with no prior assumptions of class membership, a popular first choice. The most common of these methods is principal component analysis (PCA), which can efficiently reduce the dimensions of a dataset to a few that explain the greatest variation. When analyzed in the lower-dimensional PCA space, clustering of samples with similar metabolic fingerprints can be detected. PCA algorithms aim to replace all correlated variables with a much smaller number of uncorrelated variables (referred to as principal components (PCs)) while retaining most of the information in the original dataset. This clustering can elucidate patterns and assist in the determination of disease biomarkers – metabolites that correlate most with class membership.
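As a minimal sketch of this unsupervised step, the following Python example (using scikit-learn on a synthetic stand-in for a real metabolite feature matrix) autoscales the data and projects it onto the first two principal components:

```python
# Minimal PCA sketch for a metabolite feature matrix (subjects x features).
# The data are synthetic stand-ins, not real measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))   # 40 subjects, 200 metabolite features
X[:20] += 0.5                    # crude group effect for half the subjects

X_scaled = StandardScaler().fit_transform(X)   # autoscale each feature
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
# Plotting PC1 vs PC2 of `scores` would reveal clustering of samples
# with similar metabolic fingerprints.
```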
Linear models are commonly used for metabolomics data but are affected by multicollinearity. On the other hand, multivariate statistical methods are well suited to high-dimensional correlated metabolomics data, the most popular being projection to latent structures (PLS) regression and its classification version, PLS-DA. Other data mining methods, such as random forests and support-vector machines, are receiving increasing attention for untargeted metabolomics data analysis. In the case of univariate methods, variables are analyzed one by one using classical statistical tools (such as Student's t-test, ANOVA or mixed models), and only those with sufficiently small p-values are considered relevant. However, since there is no standard method for measuring the total amount of metabolites directly in untargeted metabolomics, correction strategies should be used to reduce false discoveries when multiple comparisons are conducted. For multivariate analysis, models should always be validated to ensure that the results can be generalized.
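For the univariate route, a common correction strategy is the Benjamini–Hochberg false discovery rate procedure. A sketch on synthetic data (the group sizes, feature counts and shifted-metabolite effect below are illustrative assumptions):

```python
# Per-metabolite t-tests with Benjamini-Hochberg FDR correction (synthetic data).
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
group_a = rng.normal(size=(20, 200))   # 20 controls, 200 metabolite features
group_b = rng.normal(size=(20, 200))   # 20 cases
group_b[:, :10] += 1.0                 # 10 genuinely shifted metabolites

_, p_values = stats.ttest_ind(group_a, group_b, axis=0)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} metabolites significant after FDR correction")
```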
Machine learning and data mining
Machine learning is a powerful tool that can be used in metabolomics analysis. Recently, scientists have developed retention time prediction software. These tools allow researchers to apply artificial intelligence to the retention time prediction of small molecules in complex mixtures, such as human plasma, plant extracts, foods, or microbial cultures. Retention time prediction increases the identification rate in liquid chromatography and can lead to an improved biological interpretation of metabolomics data.
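A hedged sketch of the retention-time-prediction idea: train a regressor on molecular descriptors of compounds with known retention times, then predict for new compounds. The descriptor matrix and retention times below are synthetic placeholders; published tools use much richer descriptor sets or learned molecular representations.

```python
# Sketch: predict chromatographic retention time from molecular descriptors.
# Descriptors and retention times here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
descriptors = rng.normal(size=(500, 30))   # stand-ins for logP, MW, TPSA, ...
retention = descriptors @ rng.normal(size=30) + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    descriptors, retention, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out compounds:", round(model.score(X_test, y_test), 3))
```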
Key applications
Toxicity assessment/toxicology by metabolic profiling (especially of urine or blood plasma samples) detects the physiological changes caused by toxic insult of a chemical (or mixture of chemicals). In many cases, the observed changes can be related to specific syndromes, e.g. a specific lesion in liver or kidney. This is of particular relevance to pharmaceutical companies wanting to test the toxicity of potential drug candidates: if a compound can be eliminated before it reaches clinical trials on the grounds of adverse toxicity, it saves the enormous expense of the trials.
For functional genomics, metabolomics can be an excellent tool for determining the phenotype caused by a genetic manipulation, such as gene deletion or insertion. Sometimes this can be a sufficient goal in itself—for instance, to detect any phenotypic changes in a genetically modified plant intended for human or animal consumption. More exciting is the prospect of predicting the function of unknown genes by comparison with the metabolic perturbations caused by deletion/insertion of known genes. Such advances are most likely to come from model organisms such as Saccharomyces cerevisiae and Arabidopsis thaliana. The Cravatt laboratory at the Scripps Research Institute has recently applied this technology to mammalian systems, identifying the N-acyltaurines as previously uncharacterized endogenous substrates for the enzyme fatty acid amide hydrolase (FAAH) and the monoalkylglycerol ethers (MAGEs) as endogenous substrates for the uncharacterized hydrolase KIAA1363.
Metabologenomics is a novel approach to integrate metabolomics and genomics data by correlating microbial-exported metabolites with predicted biosynthetic genes. This bioinformatics-based pairing method enables natural product discovery at a larger-scale by refining non-targeted metabolomic analyses to identify small molecules with related biosynthesis and to focus on those that may not have previously well known structures.
Fluxomics is a further development of metabolomics. The disadvantage of metabolomics is that it only provides the user with abundances or concentrations of metabolites, while fluxomics determines the reaction rates of metabolic reactions and can trace metabolites in a biological system over time.
Nutrigenomics is a generalised term which links genomics, transcriptomics, proteomics and metabolomics to human nutrition. In general, in a given body fluid, a metabolome is influenced by endogenous factors such as age, sex, body composition and genetics as well as underlying pathologies. The large bowel microflora are also a very significant potential confounder of metabolic profiles and could be classified as either an endogenous or exogenous factor. The main exogenous factors are diet and drugs. Diet can then be broken down to nutrients and non-nutrients. Metabolomics is one means to determine a biological endpoint, or metabolic fingerprint, which reflects the balance of all these forces on an individual's metabolism. Thanks to recent cost reductions, metabolomics has now become accessible for companion animals, such as pregnant dogs.
Plant metabolomics is designed to study the overall changes in metabolites of plant samples and then conduct deep data mining and chemometric analysis. Specialized metabolites are considered components of plant defense systems biosynthesized in response to biotic and abiotic stresses. Metabolomics approaches have recently been used to assess the natural variance in metabolite content between individual plants, an approach with great potential for the improvement of the compositional quality of crops.
See also
Epigenomics
Fluxomics
Genomics
Lipidomics
Molecular epidemiology
Molecular medicine
Molecular pathology
Precision medicine
Proteomics
Transcriptomics
XCMS Online, a bioinformatics software designed for statistical analysis of mass spectrometry data
References
Further reading
External links
Human Metabolome Database (HMDB)
METLIN
XCMS
LCMStats
Metabolights
NIH Common Fund Metabolomics Consortium
Metabolomics Workbench
Golm Metabolome Database
Metabolon
Metabolism
Systems biology
Omics | Metabolomics | [
"Chemistry",
"Biology"
] | 5,643 | [
"Bioinformatics",
"Omics",
"Cellular processes",
"Biochemistry",
"Metabolism",
"Systems biology"
] |
1,029,949 | https://en.wikipedia.org/wiki/Printed%20circuit%20board%20milling | Printed circuit board milling (also: isolation milling) is the milling process used for removing areas of copper from a sheet of printed circuit board (PCB) material to recreate the pads, signal traces and structures according to patterns from a digital circuit board plan known as a layout file. Similar to the more common and well known chemical PCB etch process, the PCB milling process is subtractive: material is removed to create the electrical isolation and ground planes required. However, unlike the chemical etch process, PCB milling is typically a non-chemical process and as such it can be completed in a typical office or lab environment without exposure to hazardous chemicals. High quality circuit boards can be produced using either process. In the case of PCB milling, the quality of a circuit board is chiefly determined by the system's true, or weighted, milling accuracy and control as well as the condition (sharpness, temper) of the milling bits and their respective feed/rotational speeds. By contrast, in the chemical etch process, the quality of a circuit board depends on the accuracy and/or quality of the mask used to protect the copper from the chemicals and the state of the etching chemicals.
Advantages
PCB milling has advantages for both prototyping and some special PCB designs. The biggest benefit is that one does not have to use chemicals to produce PCBs.
When creating a prototype, outsourcing a board takes time. An alternative is to make a PCB in-house. Using the wet process, in-house production presents problems with chemicals and their disposal. High-resolution boards using the wet process are hard to achieve, and even when done, one still has to drill and eventually cut out the PCB from the base material.
CNC machine prototyping can provide a fast-turnaround board production process without the need for wet processing. If a CNC machine is already used for drilling, this single machine can carry out all parts of the process: drilling, milling and cutting.
Many boards that are simple for milling would be very difficult to process by wet etching and manual drilling afterward in a laboratory environment without using top-of-the-line systems that usually cost many times more than CNC milling machines.
In mass production, milling is unlikely to replace etching although the use of CNC is already standard practice for drilling the boards.
Hardware
A PCB milling system is a single machine that can perform all of the required actions to create a prototype board, with the exception of inserting vias and through-hole plating. Most of these machines require only a standard AC mains outlet and a shop-type vacuum cleaner for operation.
Software
Software for milling PCBs is usually delivered by the CNC machine manufacturer. Most of the packages can be split into two main categories – raster and vector.
Software that produces tool paths using a raster calculation method tends to have lower processing resolution than vector-based software, since it relies on the raster information it receives.
Mechanical system
The mechanics behind a PCB milling machine are fairly straightforward and have their roots in CNC milling technology. A PCB milling system is similar to a miniature and highly accurate NC milling table. For machine control, positioning information and machine control commands are sent from the controlling software via a serial or parallel port connection to the milling machine's on-board controller. The controller is then responsible for driving and monitoring the various positioning components which move the milling head and gantry and control the spindle speed. Spindle speeds can range from 30,000 RPM to 100,000 RPM depending on the milling system, with higher spindle speeds equating to better accuracy; in general, the smaller the tool diameter, the higher the RPM needed. Typically this drive system comprises non-monitored stepper motors for the X/Y axes, an on-off non-monitored solenoid, pneumatic piston, or lead screw for the Z-axis, and a DC motor control circuit for spindle speed, none of which provide positional feedback. More advanced systems provide a monitored stepper-motor Z-axis drive for greater control during milling and drilling, as well as more advanced RF spindle motor control circuits that provide better control over a wider range of speeds.
X and Y-axis control
For the X and Y-axis drive systems most PCB milling machines use stepper motors that drive a precision lead screw. The lead screw is in turn linked to the gantry or milling head by a special precision-machined connection assembly. To maintain correct alignment during milling, the gantry or milling head's direction of travel is guided using linear or dovetailed bearings. Most X/Y drive systems provide user control, via software, of the milling speed, which determines how fast the stepper motors drive their respective axes.
Z-axis control
Z-axis drive and control are handled in several ways. The first and most common is a simple solenoid that pushes against a spring. When the solenoid is energized it pushes the milling head down against a spring stop that limits the downward travel. The rate of descent as well as the amount of force exerted on the spring stop must be manually set by mechanically adjusting the position of the solenoid's plunger.
The second type of Z-axis control is through the use of a pneumatic cylinder and a software-driven gate valve. Due to the small cylinder size and the amount of air pressure used to drive it, there is little range of control between the up and down stops. Neither the solenoid nor the pneumatic system can position the head anywhere other than the endpoints, and they are therefore useful for only simple 'up/down' milling tasks. The final type of Z-axis control uses a stepper motor that allows the milling head to be moved in small accurate steps up or down. Further, the speed of these steps can be adjusted to allow tool bits to be eased into the board material rather than hammered into it. The depth (number of steps required) as well as the downward/upward speed is under user control via the controlling software.
One of the major challenges with milling PCBs is handling variations in flatness. Since conventional etching techniques rely on optical masks that sit right on the copper layer, they can conform to any slight bends in the material, so all features are replicated faithfully.
When milling PCBs, however, any minute height variations encountered will cause conical bits to either sink deeper (creating a wider cut) or rise off the surface, leaving an uncut section. Before cutting, some systems probe across the board to map height variations and adjust the Z values in the G-code accordingly.
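A minimal sketch of that correction step, assuming a grid of probed surface heights: bilinearly interpolate the board height at each toolpath point and offset the commanded Z. The probe grid and the G-code handling below are simplified illustrations, not any particular machine's format.

```python
# Sketch: offset G-code Z values using a probed height map (bilinear interpolation).
# The probe grid and the single G-code line below are simplified illustrations.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Probed surface heights (mm) on a 3x3 grid over a 100 mm x 80 mm board.
xs = np.array([0.0, 50.0, 100.0])
ys = np.array([0.0, 40.0, 80.0])
heights = np.array([[0.00, 0.02, 0.05],
                    [0.01, 0.03, 0.06],
                    [0.02, 0.04, 0.08]])   # heights[i][j] at (xs[i], ys[j])
surface = RegularGridInterpolator((xs, ys), heights)  # bilinear by default

def corrected_z(x, y, commanded_z):
    """Offset the commanded Z by the interpolated board height at (x, y)."""
    return commanded_z + float(surface([[x, y]])[0])

# e.g. a trace cut commanded at Z = -0.10 mm near the board centre:
print(f"G1 X55 Y35 Z{corrected_z(55.0, 35.0, -0.10):.3f}")
```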
Tooling
PCBs may be machined with conventional endmills, conical D-bit cutters, and spade mills. D-bits and spade mills are cheap, and their small point allows the traces to be close together. Taylor's tool life equation, Vc·T^n = C, can predict tool life for a given surface speed.
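A quick worked example of Taylor's equation, rearranged as T = (C/Vc)^(1/n); the constants n and C below are hypothetical values chosen only to show the calculation, not data for any real tool.

```python
# Taylor tool life: Vc * T**n = C  =>  T = (C / Vc)**(1 / n).
# The constants n and C are hypothetical, for illustration only.
import math

def tool_life_minutes(vc_m_per_min, n=0.25, C=100.0):
    return (C / vc_m_per_min) ** (1.0 / n)

# A 0.2 mm bit at 60,000 RPM gives Vc = pi * d * RPM, about 37.7 m/min.
vc = math.pi * 0.0002 * 60000   # metres per minute
print(f"Vc = {vc:.1f} m/min, predicted tool life = {tool_life_minutes(vc):.0f} min")
```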
References
External links
Software review and how-to's on RepRap wiki
Printed circuit board manufacturing | Printed circuit board milling | [
"Engineering"
] | 1,467 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
1,030,401 | https://en.wikipedia.org/wiki/Humic%20substance | Humic substances (HS) are colored relatively recalcitrant organic compounds naturally formed during long-term decomposition and transformation of biomass residues. The color of humic substances varies from bright yellow to light or dark brown leading to black. The term comes from humus, which in turn comes from the Latin word humus, meaning "soil, earth". Humic substances represent the major part of organic matter in soil, peat, coal, and sediments, and are important components of dissolved natural organic matter (NOM) in lakes (especially dystrophic lakes), rivers, and sea water. Humic substances account for 50 – 90% of cation exchange capacity in soils.
"Humic substances" is an umbrella term covering humic acid, fulvic acid and humin, which differ in solubility. By definition, humic acid (HA) is soluble in water at neutral and alkaline pH, but insoluble at acidic pH < 2. Fulvic acid (FA) is soluble in water at any pH. Humin is not soluble in water at any pH.
This definition of humic substances is largely operational. It is rooted in the history of soil science and, more precisely, in the tradition of alkaline extraction, which dates back to 1786, when Franz Karl Achard treated peat with a solution of potassium hydroxide and, after subsequent addition of an acid, obtained an amorphous dark precipitate (i.e., humic acid). Aquatic humic substances were isolated for the first time in 1806, from spring water by Jöns Jakob Berzelius.
In terms of chemistry, FA, HA, and humin share more similarities than differences and represent a continuum of humic molecules. All of them are constructed from similar aromatic, polyaromatic, aliphatic, and carbohydrate units and contain the same functional groups (mainly carboxylic, phenolic, and ester groups), albeit in varying proportions.
Water solubility of humic substances is primarily governed by the interplay of two factors: the amount of ionizable (mainly carboxylic) functional groups and molecular weight (MW). In general, fulvic acid has a higher amount of carboxylic groups and a lower average molecular weight than does humic acid. Measured average molecular weights vary with source; however, the molecular weight distributions of HA and FA overlap significantly.
Age and origin of the source material determine the chemical structure of humic substances. In general, humic substances derived from soil and peat (which takes hundreds to thousands of years to form) have higher molecular weight, higher amounts of O and N, more carbohydrate units, and fewer polyaromatic units than humic substances derived from coal and leonardite (which takes millions of years to form).
Isolation of HS is the result of an alkaline extraction from solid sources of NOM or the adsorption of HS on a resin. A newer view of humic substances is that they are not mostly high-molecular-weight macropolymers but rather represent a heterogeneous mixture of relatively small molecular components of the soil organic matter, auto-assembled in supramolecular associations and composed of a variety of compounds of biological origin synthesized by abiotic and biotic reactions in soil and surface waters. It is the large molecular complexity of the soil humeome that confers to humic matter its bioactivity in soil, its stability in ecosystems, and its role as a plant growth promoter (in particular of plant roots).
The academic definition of humic substances is under debate; some researchers argue against the traditional concepts of humification and seek to forgo the alkali extraction method and analyze the soil directly.
Concepts of humic substances
The formation of HS in nature is one of the least understood aspects of humus chemistry and one of the most intriguing. Historically, there have been three main theories to explain it: the lignin theory of Waksman (1932), the polyphenol theory, and the sugar-amine condensation theory of Maillard (1911). Humic substances are formed by the microbial degradation of dead biotic matter, such as lignin, cellulose, lignocellulose, and charcoal. Laboratory-extracted humic substances are resistant to further biodegradation. Their structure, elemental composition, and content of functional groups in a given sample depend on the water or soil source and on the specific procedures and conditions of extraction. Nevertheless, the average properties of lab-extracted HS from different sources are remarkably similar.
Fractionation
Historically, scientists have used variations of similar methods for extracting HS from NOM and separating the extracts into HA and FA. The International Humic Substances Society advocates the use of standard laboratory methods for the preparation of humic and fulvic acids. Humic substances are extracted from soil and other solid sources using 0.1 M NaOH under a nitrogen atmosphere, to prevent abiotic oxidation of some of the components of HS. The HA is then precipitated at pH 1, and the soluble fraction is treated on a resin column to separate fulvic acid components from other acid-soluble compounds. The fraction of NOM not extracted by 0.1 M NaOH is humin. Humic acid plus fulvic acid is extracted from natural waters using a resin column after microfiltration and acidification to pH 2. The humic materials are eluted from the column with NaOH, and humic acid is precipitated at pH 1. After adjusting the pH to 2, fulvic acid is separated from other acid-soluble compounds using a resin column, as with solid-phase sources. An analytical method for quantifying humic acid and fulvic acid in commercial ores and humic products has been developed based on the IHSS humic acid and fulvic acid preparation methods.
Scientists associated with the IHSS have also isolated the entire NOM from black water streams using reverse osmosis. The retentate from this process contains both humic and fulvic acids, predominantly humic acid. The NOM from hard-water streams has been isolated using reverse osmosis and electrodialysis in tandem.
Extracted humic acid is not a single acid; rather, it is a complex mixture of many different acids containing carboxyl and phenolate groups, so that the mixture behaves functionally as a dibasic acid or, occasionally, as a tribasic acid. Commercial humic acid used to amend soil is manufactured using these same well-established procedures. Humic acids can form complexes with ions that are commonly found in the environment, creating humic colloids.
A sequential chemical fractionation called Humeomics can be used to isolate more homogeneous humic fractions and determine their molecular structures by advanced spectroscopic and chromatographic methods. Substances identified in humic extracts and directly in soil include mono-, di-, and tri-hydroxycarboxylic acids, fatty acids, dicarboxylic acids, linear alcohols, phenolic acids, terpenoids, carbohydrates, and amino acids. This suggests humic molecules may form supramolecular structures held together by non-covalent forces, such as van der Waals forces, π-π, and CH-π interactions.
Chemical characteristics
Since the dawn of modern chemistry, humic substances have been among the most studied natural materials. Despite long study, their molecular structure remains debatable. The traditional view has been that humic substances are heteropolycondensates, in varying associations with clay. A more recent view is that relatively small molecules also play a major role.
A typical humic substance is a mixture of many molecules, some of which are based on a motif of aromatic nuclei with phenolic and carboxylic substituents linked together. The functional groups that contribute most to the surface charge and reactivity of humic substances are phenolic and carboxylic groups. Humic substances commonly behave as mixtures of dibasic acids, with a pK1 value around 4 for protonation of carboxyl groups and around 8 for protonation of phenolate groups in HA. Fulvic acids are more acidic than HA. There is considerable overall similarity among individual humic acids. For this reason, measured pK values for a given sample are average values relating to the constituent species. The other important characteristic is charge density.
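The average pK values quoted above determine how ionized these groups are at a given pH via the Henderson–Hasselbalch relation. A minimal sketch, using the representative values pK ≈ 4 (carboxyl) and pK ≈ 8 (phenolate):

```python
# Ionized fraction of an acidic group from the Henderson-Hasselbalch relation:
# fraction = 1 / (1 + 10**(pK - pH)); pK values are the averages quoted above.
def ionized_fraction(pH, pK):
    return 1.0 / (1.0 + 10.0 ** (pK - pH))

for pH in (3.0, 5.0, 7.0, 9.0):
    carboxyl = ionized_fraction(pH, pK=4.0)
    phenolate = ionized_fraction(pH, pK=8.0)
    print(f"pH {pH}: carboxyl {carboxyl:.0%} ionized, phenolate {phenolate:.0%} ionized")
```

At neutral pH most carboxyl groups are deprotonated while most phenolic groups are not, consistent with the solubility behaviour described in the lead.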
More recent determinations show that the molecular weights of HS are not as great as once thought. Reported number-average molecular weights of soil HA are < 6000, but the material is highly polydisperse, with some components of much larger and some of much smaller measured molecular weight. Measured number-average molecular weights of aquatic HS are lower, with HA ≤ 1700 and FA < 900; aquatic HA and FA are also highly polydisperse. The number of individually distinct components in HS, as measured by mass spectrometry, is in the thousands. The average composition of HA and FA can be represented by model structures.
The presence of carboxylate and phenolate groups gives the humic acids the ability to form complexes with ions such as Mg2+, Ca2+, Fe2+, and Fe3+, creating humic colloids. Many humic acids have two or more of these groups arranged so as to enable the formation of chelate complexes. The formation of (chelate) complexes is an important aspect of the biological role of humic acids in regulating the bioavailability of metal ions.
Criticism
Decomposition products of dead plant materials form intimate associations with minerals, making it difficult to isolate and characterize soil organic constituents. 18th-century soil chemists successfully used alkaline extraction to isolate a portion of the organic constituents in soil. This led to the theory that a 'humification' process created distinct 'humic substances' like 'humic acid', 'fulvic acid', and 'humin'. However, modern chemical analysis methods applied to unprocessed mineral soil have not directly observed large humic molecules. This suggests that the extraction and fractionation techniques used to isolate humic substances alter the original chemical composition of the organic matter. Since the definition of humic substances like humic and fulvic acids relies on their separation through these methods, it raises the question of whether the distinction between these compounds accurately reflects the natural state of organic matter in soil. Despite these concerns, the 'humification' theory persists in the field and even in textbooks, and attempts to redefine 'humic substances' in soil have resulted in a proliferation of conflicting definitions. This lack of consensus makes it difficult to communicate scientific understanding of soil processes and properties accurately.
Determination of humic acids in water samples
The presence of humic acid in water intended for potable or industrial use can have a significant impact on the treatability of that water and the success of chemical disinfection processes. For instance, humic and fulvic acids can react with the chemicals used in the chlorination process to form disinfection byproducts such as dihaloacetonitriles, which are toxic to humans. Accurate methods of establishing humic acid concentrations are therefore essential in maintaining water supplies, especially from upland peaty catchments in temperate climates.
As many different bio-organic molecules in very diverse physical associations are mixed together in natural environments, it is cumbersome to measure their exact concentrations in the humic superstructure. For this reason, concentrations of humic acid are traditionally estimated from concentrations of organic matter, typically from concentrations of total organic carbon (TOC) or dissolved organic carbon (DOC).
Extraction procedures are bound to alter some of the chemical linkages present in the soil humic substances (mainly ester bonds in biopolyesters such as cutins and suberins). The humic extracts are composed of large numbers of different bio-organic molecules that have not yet been totally separated and identified. However, single classes of residual biomolecules have been identified by selective extractions and chemical fractionation, and are represented by alkanoic and hydroxy alkanoic acids, resins, waxes, lignin residues, sugars, and peptides.
Ecological effects
Organic matter soil amendments have been known by farmers to be beneficial to plant growth for longer than recorded history. However, the chemistry and function of the organic matter have been a subject of controversy since humans began postulating about it in the 18th century. Until the time of Liebig, it was supposed that humus was used directly by plants, but, after Liebig showed that plant growth depends upon inorganic compounds, many soil scientists held the view that organic matter was useful for fertility only as it was broken down with the release of its constituent nutrient elements into inorganic forms.
At the present time, soil scientists hold a more holistic view and at least recognize that humus influences soil fertility through its effect on the water-holding capacity of the soil. Also, since plants have been shown to absorb and translocate the complex organic molecules of systemic insecticides, they can no longer discredit the idea that plants may be able to absorb the soluble forms of humus; this may in fact be an essential process for the uptake of otherwise insoluble iron oxides.
A study on the effects of humic acid on plant growth was conducted at Ohio State University which said in part "humic acids increased plant growth" and that there were "relatively large responses at low application rates".
A 1998 study by scientists at the North Carolina State University College of Agriculture and Life Sciences showed that addition of humate to soil significantly increased root mass in creeping bentgrass turf.
A 2018 study by scientists at the University of Alberta showed that humic acids can reduce prion infectivity in laboratory experiments, but that this effect may be uncertain in the environment due to minerals in the soil that buffer the effect.
Anthropogenic production
Humans can affect the production of humic substances via a variety of ways: by making use of natural processes by composting lignin or adding biochar (see soil rehabilitation), or by industrial synthesis of artificial humic substances from organic feedstocks directly. These artificial substances may be similarly divided into artificial humic acid (A-HA) and artificial fulvic acid (A-FA).
Lignosulfonates, a by-product from the sulfite pulping of wood, are valorized in the industrial fabrication of concrete, where they serve as water reducers, or concrete superplasticizers, to decrease the water-cement ratio (w/c) of fresh concrete while preserving its workability. The w/c ratio of concrete is one of the main parameters controlling the mechanical strength of hardened concrete and its durability. The same wood pulping process can also be applied to obtain humus-like substances by hydrolysis and oxidation. A kind of artificial "lignohumate" can be directly produced from wood in this way.
Agricultural litter can be turned into an artificial humic substance by a hydrothermal reaction. The resulting mixture can increase the content of dissolved organic matter (DOM) and total organic carbon (TOC) in soil.
Lignite (brown coal) may also be oxidized to produce humic substances, reversing the natural process of coal formation under anoxic and reducing conditions. This form of "mineral-derived fulvic acid" is widely used in China. This process also occurs in nature, producing leonardite.
Economic geology
In economic geology, the term humate refers to geological materials, such as weathered coal beds (leonardite), mudrock, or pore material in sandstones, that are rich in humic acids. Humate has been mined from the Fruitland Formation of New Mexico for use as a soil amendment since the 1970s, with nearly 60,000 metric tons produced by 2016. Humate deposits may also play an important role in the genesis of uranium ore bodies.
Technological applications
The heavy-metal binding abilities of humic acids have been exploited to develop remediation technologies for removing lead from waste water. To this end, Yurishcheva et al. coated magnetic nanoparticles with humic acids. After capturing lead ions, the nanoparticles can then be captured using a magnet.
Ancient masonry
Archeology finds that ancient Egypt used mudbricks reinforced with straw and humic acids.
See also
Black water (drink)
Humin
Humus
Polycyclic aromatic hydrocarbon
Soil
References
External links
International Humic Substances Society
Composting
Organic acids
Soil chemistry | Humic substance | [
"Chemistry"
] | 3,420 | [
"Organic acids",
"Soil chemistry",
"Acids",
"Organic compounds"
] |
1,030,420 | https://en.wikipedia.org/wiki/Equidistributed%20sequence | In mathematics, a sequence (s1, s2, s3, ...) of real numbers is said to be equidistributed, or uniformly distributed, if the proportion of terms falling in a subinterval is proportional to the length of that subinterval. Such sequences are studied in Diophantine approximation theory and have applications to Monte Carlo integration.
Definition
A sequence (s1, s2, s3, ...) of real numbers is said to be equidistributed on a non-degenerate interval [a, b] if for every subinterval [c, d] of [a, b] we have

$$\lim_{n\to\infty}\frac{\left|\{s_1,\dots,s_n\}\cap[c,d]\right|}{n}=\frac{d-c}{b-a}.$$
(Here, the notation |{s1,...,sn} ∩ [c, d]| denotes the number of elements, out of the first n elements of the sequence, that are between c and d.)
For example, if a sequence is equidistributed in [0, 2], since the interval [0.5, 0.9] occupies 1/5 of the length of the interval [0, 2], as n becomes large, the proportion of the first n members of the sequence which fall between 0.5 and 0.9 must approach 1/5. Loosely speaking, one could say that each member of the sequence is equally likely to fall anywhere in its range. However, this is not to say that (sn) is a sequence of random variables; rather, it is a determinate sequence of real numbers.
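A quick numerical check of this example (a Python sketch; it uses the multiples-of-an-irrational sequence from the equidistribution theorem below, rescaled to [0, 2]):

```python
# Empirical check: sn = 2 * frac(n * sqrt(2)) should be equidistributed on [0, 2],
# so the fraction of points in [0.5, 0.9] should approach (0.9 - 0.5) / 2 = 0.2.
import math

def frac(x):
    return x - math.floor(x)

for n in (100, 10_000, 1_000_000):
    hits = sum(1 for k in range(1, n + 1) if 0.5 <= 2 * frac(k * math.sqrt(2)) <= 0.9)
    print(f"n = {n:>9}: proportion in [0.5, 0.9] = {hits / n:.4f}")
```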
Discrepancy
We define the discrepancy DN for a sequence (s1, s2, s3, ...) with respect to the interval [a, b] as

$$D_N=\sup_{a\le c\le d\le b}\left|\frac{\left|\{s_1,\dots,s_N\}\cap[c,d]\right|}{N}-\frac{d-c}{b-a}\right|.$$
A sequence is thus equidistributed if the discrepancy DN tends to zero as N tends to infinity.
Equidistribution is a rather weak criterion to express the fact that a sequence fills the segment leaving no gaps. For example, the draws of a random variable uniform over a segment will be equidistributed in the segment, but there will be large gaps compared to a sequence which first enumerates multiples of ε in the segment, for some small ε, in an appropriately chosen way, and then continues to do this for smaller and smaller values of ε. For stronger criteria and for constructions of sequences that are more evenly distributed, see low-discrepancy sequence.
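The discrepancy of a finite point set can be estimated numerically. The following sketch approximates DN on [0, 1] by scanning subintervals whose endpoints lie on the sample points; it is only an approximation, since it ignores intervals touching the endpoints of [0, 1]:

```python
# Approximate the discrepancy D_N on [0, 1] by scanning subintervals whose
# endpoints lie on the sample points (an O(N^2) sketch, fine for small N).
import math

def discrepancy(points):
    pts = sorted(points)
    N = len(pts)
    worst = 0.0
    for i in range(N):
        for j in range(i, N):
            length = pts[j] - pts[i]
            count = j - i + 1   # points pts[i..j] lie in the closed interval
            worst = max(worst,
                        abs(count / N - length),              # closed [pts[i], pts[j]]
                        abs(max(count - 2, 0) / N - length))  # open (pts[i], pts[j])
    return worst

seq = [math.modf(k * math.sqrt(2))[0] for k in range(1, 501)]
print(f"D_500 = {discrepancy(seq):.4f}")
```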
Riemann integral criterion for equidistribution
Recall that if f is a function having a Riemann integral in the interval [a, b], then its integral is the limit of Riemann sums taken by sampling the function f in a set of points chosen from a fine partition of the interval. Therefore, if some sequence is equidistributed in [a, b], it is expected that this sequence can be used to calculate the integral of a Riemann-integrable function. This leads to the following criterion for an equidistributed sequence:
Suppose (s1, s2, s3, ...) is a sequence contained in the interval [a, b]. Then the following conditions are equivalent:
The sequence is equidistributed on [a, b].
For every Riemann-integrable (complex-valued) function f, the following limit holds:
\lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} f(s_n) = \frac{1}{b-a} \int_a^b f(x)\,dx.
{| class="toccolours collapsible collapsed" width="90%" style="text-align:left"
!Proof
|-
First note that the definition of an equidistributed sequence is equivalent to the integral criterion whenever f is the indicator function of an interval: If f = 1[c, d], then the left hand side is the proportion of points of the sequence falling in the interval [c, d], and the right hand side is exactly
\frac{d-c}{b-a}.
This means 2 ⇒ 1 (since indicator functions are Riemann-integrable), and 1 ⇒ 2 for f being an indicator function of an interval. It remains to assume that the integral criterion holds for indicator functions and prove that it holds for general Riemann-integrable functions as well.
Note that both sides of the integral criterion equation are linear in f, and therefore the criterion holds for linear combinations of interval indicators, that is, step functions.
To show it holds for f being a general Riemann-integrable function, first assume f is real-valued. Then by using Darboux's definition of the integral, we have for every ε > 0 two step functions f1 and f2 such that f1 ≤ f ≤ f2 and
\frac{1}{b-a} \int_a^b \left( f_2(x) - f_1(x) \right) dx \le \varepsilon.
Notice that:
\frac{1}{N} \sum_{n=1}^{N} f_1(s_n) \le \frac{1}{N} \sum_{n=1}^{N} f(s_n) \le \frac{1}{N} \sum_{n=1}^{N} f_2(s_n).
By subtracting, we see that the limit superior and limit inferior of \frac{1}{N} \sum_{n=1}^{N} f(s_n) differ by at most ε. Since ε is arbitrary, we have the existence of the limit, and by Darboux's definition of the integral, it is the correct limit.
Finally, for complex-valued Riemann-integrable functions, the result follows again from linearity, and from the fact that every such function can be written as f = u + vi, where u, v are real-valued and Riemann-integrable. ∎
This criterion leads to the idea of Monte-Carlo integration, where integrals are computed by sampling the function over a sequence of random variables equidistributed in the interval.
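As a quick numerical check of the criterion (a sketch added here for illustration, not from the original text), one can compare the running averages of a Riemann-integrable function along an equidistributed sequence with its exact integral:

```python
import numpy as np

f = lambda x: np.cos(2 * np.pi * x) ** 2     # integral over [0, 1] is 1/2
alpha = np.sqrt(2)                           # any irrational number works
for n in (10**2, 10**4, 10**6):
    s = (alpha * np.arange(1, n + 1)) % 1.0  # equidistributed in [0, 1]
    print(n, np.mean(f(s)))                  # tends to 0.5 as n grows
```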
It is not possible to generalize the integral criterion to a class of functions bigger than just the Riemann-integrable ones. For example, if the Lebesgue integral is considered and f is taken to be in L1, then this criterion fails. As a counterexample, take f to be the indicator function of some equidistributed sequence. Then in the criterion, the left hand side is always 1, whereas the right hand side is zero, because the sequence is countable, so f is zero almost everywhere.
In fact, the de Bruijn–Post Theorem states the converse of the above criterion: If f is a function such that the criterion above holds for any equidistributed sequence in [a, b], then f is Riemann-integrable in [a, b].
Equidistribution modulo 1
A sequence (a1, a2, a3, ...) of real numbers is said to be equidistributed modulo 1 or uniformly distributed modulo 1 if the sequence of the fractional parts of an, denoted by {an} or by an − ⌊an⌋, is equidistributed in the interval [0, 1].
Examples
The equidistribution theorem: The sequence of all multiples of an irrational α,
0, α, 2α, 3α, 4α, ...
is equidistributed modulo 1.
More generally, if p is a polynomial with at least one irrational coefficient other than the constant term, then the sequence p(n) is uniformly distributed modulo 1.
This was proven by Weyl and is an application of van der Corput's difference theorem.
The sequence log(n) is not uniformly distributed modulo 1. This fact is related to Benford's law.
The sequence of all multiples of an irrational α by successive prime numbers,
2α, 3α, 5α, 7α, 11α, ...
is equidistributed modulo 1. This is a famous theorem of analytic number theory, published by I. M. Vinogradov in 1948.
The van der Corput sequence is equidistributed.
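The base-b van der Corput sequence is simple to generate: reflect the base-b digits of n about the radix point. A short Python sketch (illustrative; the helper name is arbitrary):

```python
def van_der_corput(n, base=2):
    """n-th van der Corput term: mirror the base-b digits of n
    across the radix point, e.g. 6 = 110_2 -> 0.011_2 = 0.375."""
    q, scale = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        q += digit * scale
        scale /= base
    return q

print([van_der_corput(n) for n in range(1, 9)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```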
Weyl's criterion
Weyl's criterion states that the sequence an is equidistributed modulo 1 if and only if for all non-zero integers ℓ,
\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} e^{2\pi i \ell a_j} = 0.
The criterion is named after, and was first formulated by, Hermann Weyl. It allows equidistribution questions to be reduced to bounds on exponential sums, a fundamental and general method.
{| class="toccolours collapsible collapsed" width="90%" style="text-align:left"
!Sketch of proof
|-
If the sequence is equidistributed modulo 1, then we can apply the Riemann integral criterion (described above) on the function f(x) = e^{2\pi i \ell x}, which has integral zero on the interval [0, 1]. This gives Weyl's criterion immediately.
Conversely, suppose Weyl's criterion holds. Then the Riemann integral criterion holds for functions f as above, and by linearity of the criterion, it holds for f being any trigonometric polynomial. By the Stone–Weierstrass theorem and an approximation argument, this extends to any continuous function f.
Finally, let f be the indicator function of an interval. It is possible to bound f from above and below by two continuous functions on the interval, whose integrals differ by an arbitrary ε. By an argument similar to the proof of the Riemann integral criterion, it is possible to extend the result to any interval indicator function f, thereby proving equidistribution modulo 1 of the given sequence. ∎
Generalizations
A quantitative form of Weyl's criterion is given by the Erdős–Turán inequality.
Weyl's criterion extends naturally to higher dimensions, assuming the natural generalization of the definition of equidistribution modulo 1:
The sequence vn of vectors in Rk is equidistributed modulo 1 if and only if for any non-zero vector ℓ ∈ Zk,
\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} e^{2\pi i\, \ell \cdot v_j} = 0.
Example of usage
Weyl's criterion can be used to easily prove the equidistribution theorem, stating that the sequence of multiples 0, α, 2α, 3α, ... of some real number α is equidistributed modulo 1 if and only if α is irrational.
Suppose α is irrational and denote our sequence by aj = jα (where j starts from 0, to simplify the formula later). Let ℓ ≠ 0 be an integer. Since α is irrational, ℓα can never be an integer, so e^{2\pi i \ell \alpha} can never be 1. Using the formula for the sum of a finite geometric series,
\left| \sum_{j=0}^{n-1} e^{2\pi i \ell j \alpha} \right| = \left| \frac{e^{2\pi i \ell n \alpha} - 1}{e^{2\pi i \ell \alpha} - 1} \right| \le \frac{2}{\left| e^{2\pi i \ell \alpha} - 1 \right|},
a finite bound that does not depend on n. Therefore, after dividing by n and letting n tend to infinity, the left hand side tends to zero, and Weyl's criterion is satisfied.
Conversely, notice that if α is rational then this sequence is not equidistributed modulo 1, because there are only a finite number of options for the fractional part of aj = jα.
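Both directions are easy to observe numerically. The sketch below (added for illustration; not part of the original argument) evaluates the averaged Weyl sums for an irrational and a rational α; for α = 3/4 the criterion fails at ℓ = 4:

```python
import numpy as np

def weyl_average(alpha, ell, n):
    """|(1/n) * sum_{k<n} exp(2*pi*i*ell*k*alpha)|, which Weyl's criterion
    requires to vanish, as n grows, for every nonzero integer ell."""
    k = np.arange(n)
    return abs(np.mean(np.exp(2j * np.pi * ell * k * alpha)))

n = 100_000
for alpha in (np.sqrt(2), 0.75):
    print(alpha, [round(weyl_average(alpha, ell, n), 5) for ell in (1, 4)])
# sqrt(2): both averages are near 0
# 0.75:    the ell = 4 average equals 1, so the sequence is not equidistributed
```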
Complete uniform distribution
A sequence (a1, a2, a3, ...) of real numbers is said to be k-uniformly distributed mod 1 if not only the sequence of fractional parts a′n = an − ⌊an⌋ is uniformly distributed in [0, 1] but also the sequence (bn), where bn is defined as bn = (a′n, a′n+1, …, a′n+k−1) ∈ [0, 1]^k, is uniformly distributed in [0, 1]^k.
A sequence of real numbers is said to be completely uniformly distributed mod 1 if it is k-uniformly distributed for each natural number k ≥ 1.
For example, the sequence (nα) is uniformly distributed mod 1 (or 1-uniformly distributed) for any irrational number α, but is never even 2-uniformly distributed. In contrast, the sequence (α^n) is completely uniformly distributed for almost all α > 1 (i.e., for all α except for a set of measure 0).
van der Corput's difference theorem
A theorem of Johannes van der Corput states that if for each h the sequence sn+h − sn is uniformly distributed modulo 1, then so is sn.
A van der Corput set is a set H of integers such that if for each h in H the sequence sn+h − sn is uniformly distributed modulo 1, then so is sn.
Metric theorems
Metric theorems describe the behaviour of a parametrised sequence for almost all values of some parameter α: that is, for values of α not lying in some exceptional set of Lebesgue measure zero.
For any sequence of distinct integers bn, the sequence (bnα) is equidistributed mod 1 for almost all values of α.
The sequence (α^n) is equidistributed mod 1 for almost all values of α > 1.
It is not known whether the sequences (e^n) or (π^n) are equidistributed mod 1. However, it is known that the sequence (α^n) is not equidistributed mod 1 if α is a PV number.
Well-distributed sequence
A sequence (s1, s2, s3, ...) of real numbers is said to be well-distributed on [a, b] if for any subinterval [c, d] of [a, b] we have
\lim_{n\to\infty} \frac{\left| \{ s_{k+1}, \dots, s_{k+n} \} \cap [c, d] \right|}{n} = \frac{d-c}{b-a}
uniformly in k. Clearly every well-distributed sequence is uniformly distributed, but the converse does not hold. The definition of well-distributed modulo 1 is analogous.
Sequences equidistributed with respect to an arbitrary measure
For an arbitrary probability measure space (X, μ), a sequence of points (xn) is said to be equidistributed with respect to μ if the mean of point measures converges weakly to μ:
\frac{1}{n} \sum_{k=1}^{n} \delta_{x_k} \Rightarrow \mu.
For any Borel probability measure on a separable, metrizable space, there exists an equidistributed sequence with respect to the measure; indeed, this follows immediately from the fact that such a space is standard.
The general phenomenon of equidistribution arises frequently for dynamical systems associated with Lie groups, for example in Margulis' solution to the Oppenheim conjecture.
See also
Equidistribution theorem
Low-discrepancy sequence
Erdős–Turán inequality
Notes
References
Further reading
External links
Lecture notes by Charles Walkden with proof of Weyl's Criterion
Diophantine approximation
Dynamical systems
Ergodic theory | Equidistributed sequence | [
"Physics",
"Mathematics"
] | 2,832 | [
"Ergodic theory",
"Mechanics",
"Mathematical relations",
"Diophantine approximation",
"Approximations",
"Number theory",
"Dynamical systems"
] |
345,017 | https://en.wikipedia.org/wiki/Finite%20volume%20method | The finite volume method (FVM) is a method for representing and evaluating partial differential equations in the form of algebraic equations.
In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem.
These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages.
"Finite volume" refers to the small volume surrounding each node point on a mesh.
Finite volume methods can be compared and contrasted with the finite difference methods, which approximate derivatives using nodal values, or finite element methods, which create local approximations of a solution using local data, and construct a global approximation by stitching them together. In contrast a finite volume method evaluates exact expressions for the average value of the solution over some volume, and uses this data to construct approximations of the solution within cells.
Example
Consider a simple 1D advection problem:
\frac{\partial\rho}{\partial t} + \frac{\partial f}{\partial x} = 0, \qquad t \ge 0.
Here, \rho = \rho(x, t) represents the state variable and f = f(\rho(x, t)) represents the flux or flow of \rho. Conventionally, positive f represents flow to the right while negative f represents flow to the left. If we assume that this equation represents a flowing medium of constant area, we can sub-divide the spatial domain, x, into finite volumes or cells with cell centers indexed as i. For a particular cell, i, we can define the volume average value of \rho at time t = t_1, \bar\rho_i(t_1), as
\bar\rho_i(t_1) = \frac{1}{x_{i+\frac12} - x_{i-\frac12}} \int_{x_{i-\frac12}}^{x_{i+\frac12}} \rho(x, t_1)\,dx,
and at time t = t_2 as,
\bar\rho_i(t_2) = \frac{1}{x_{i+\frac12} - x_{i-\frac12}} \int_{x_{i-\frac12}}^{x_{i+\frac12}} \rho(x, t_2)\,dx,
where x_{i-\frac12} and x_{i+\frac12} represent locations of the upstream and downstream faces or edges respectively of the i-th cell.
Integrating the advection equation in time, we have:
\rho(x, t_2) = \rho(x, t_1) - \int_{t_1}^{t_2} f_x(x, t)\,dt,
where f_x = \frac{\partial f}{\partial x}.
To obtain the volume average of \rho(x, t) at time t = t_2, we integrate \rho(x, t_2) over the cell volume and divide the result by \Delta x_i = x_{i+\frac12} - x_{i-\frac12}, i.e.
\bar\rho_i(t_2) = \frac{1}{\Delta x_i} \int_{x_{i-\frac12}}^{x_{i+\frac12}} \left( \rho(x, t_1) - \int_{t_1}^{t_2} f_x(x, t)\,dt \right) dx.
We assume that f is well behaved and that we can reverse the order of integration. Also, recall that flow is normal to the unit area of the cell. Now, since in one dimension f_x \equiv \nabla \cdot f, we can apply the divergence theorem, i.e. \int_{v} \nabla \cdot f\,dv = \oint_{S} f\,dS, and substitute for the volume integral of the divergence with the values of f(x, t) evaluated at the cell surface (edges x_{i-\frac12} and x_{i+\frac12}) of the finite volume as follows:
\bar\rho_i(t_2) = \bar\rho_i(t_1) - \frac{1}{\Delta x_i} \left( \int_{t_1}^{t_2} f_{i+\frac12}\,dt - \int_{t_1}^{t_2} f_{i-\frac12}\,dt \right),
where f_{i\pm\frac12} = f\!\left( x_{i\pm\frac12}, t \right).
We can therefore derive a semi-discrete numerical scheme for the above problem with cell centers indexed as i, and with cell edge fluxes indexed as i\pm\frac12, by differentiating the above with respect to time to obtain:
\frac{d\bar\rho_i}{dt} + \frac{1}{\Delta x_i} \left( f_{i+\frac12} - f_{i-\frac12} \right) = 0,
where values for the edge fluxes, f_{i\pm\frac12}, can be reconstructed by interpolation or extrapolation of the cell averages. This semi-discrete equation is exact for the volume averages; i.e., no approximations have been made during its derivation.
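As a minimal illustration of this scheme (a sketch under simplifying assumptions, namely a linear flux f = vρ with v > 0, a uniform grid, periodic boundaries, first-order upwind edge fluxes, and forward-Euler time stepping, rather than anything prescribed by the text above), consider:

```python
import numpy as np

def advect_upwind(rho, v, dx, dt, steps):
    """March the cell averages of d(rho)/dt + d(v*rho)/dx = 0 forward
    in time; each cell's edge flux is taken from its upwind side."""
    for _ in range(steps):
        f_left = v * np.roll(rho, 1)    # flux through each upstream edge
        f_right = v * rho               # flux through each downstream edge
        rho = rho - (dt / dx) * (f_right - f_left)
    return rho

nx = 200
x = (np.arange(nx) + 0.5) / nx                # cell centers on [0, 1]
rho0 = np.exp(-200.0 * (x - 0.3) ** 2)        # initial cell averages
rho1 = advect_upwind(rho0, v=1.0, dx=1.0 / nx, dt=0.4 / nx, steps=500)
print(rho0.sum(), rho1.sum())  # identical sums: the scheme is conservative
```

Because the edge fluxes telescope, the total of the cell averages changes only through the boundary fluxes, which is exactly the conservation property discussed below.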
This method can also be applied to a 2D situation by considering the north and south faces along with the east and west faces around a node.
General conservation law
We can also consider the general conservation law problem, represented by the following PDE,
\frac{\partial \mathbf{u}}{\partial t} + \nabla \cdot \mathbf{f}\!\left( \mathbf{u} \right) = \mathbf{0}.
Here, \mathbf{u} represents a vector of states and \mathbf{f} represents the corresponding flux tensor. Again we can sub-divide the spatial domain into finite volumes or cells. For a particular cell, i, we take the volume integral over the total volume of the cell, v_i, which gives,
\int_{v_i} \frac{\partial \mathbf{u}}{\partial t}\,dv + \int_{v_i} \nabla \cdot \mathbf{f}\!\left( \mathbf{u} \right) dv = \mathbf{0}.
On integrating the first term to get the volume average and applying the divergence theorem to the second, this yields
v_i \frac{d\bar{\mathbf{u}}_i}{dt} + \oint_{S_i} \mathbf{f}\!\left( \mathbf{u} \right) \cdot \mathbf{n}\,dS = \mathbf{0},
where S_i represents the total surface area of the cell and \mathbf{n} is a unit vector normal to the surface and pointing outward. So, finally, we are able to present the general result equivalent to the one-dimensional scheme above, i.e.
\frac{d\bar{\mathbf{u}}_i}{dt} + \frac{1}{v_i} \oint_{S_i} \mathbf{f}\!\left( \mathbf{u} \right) \cdot \mathbf{n}\,dS = \mathbf{0}.
Again, values for the edge fluxes can be reconstructed by interpolation or extrapolation of the cell averages. The actual numerical scheme will depend upon problem geometry and mesh construction. MUSCL reconstruction is often used in high resolution schemes where shocks or discontinuities are present in the solution.
Finite volume schemes are conservative as cell averages change through the edge fluxes. In other words, one cell's loss is always another cell's gain!
See also
Finite element method
Flux limiter
Godunov's scheme
Godunov's theorem
High-resolution scheme
KIVA (software)
MIT General Circulation Model
MUSCL scheme
Sergei K. Godunov
Total variation diminishing
Finite volume method for unsteady flow
References
Further reading
Eymard, R. Gallouët, T. R., Herbin, R. (2000) The finite volume method Handbook of Numerical Analysis, Vol. VII, 2000, p. 713–1020. Editors: P.G. Ciarlet and J.L. Lions.
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley.
Laney, Culbert B. (1998), Computational Gas Dynamics, Cambridge University Press.
LeVeque, Randall (1990), Numerical Methods for Conservation Laws, ETH Lectures in Mathematics Series, Birkhauser-Verlag.
LeVeque, Randall (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University Press.
Patankar, Suhas V. (1980), Numerical Heat Transfer and Fluid Flow, Hemisphere.
Tannehill, John C., et al., (1997), Computational Fluid mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
Wesseling, Pieter (2001), Principles of Computational Fluid Dynamics, Springer-Verlag.
External links
Finite volume methods by R. Eymard, T Gallouët and R. Herbin, update of the article published in Handbook of Numerical Analysis, 2000
FiPy: A Finite Volume PDE Solver Using Python from NIST.
CLAWPACK: a software package designed to compute numerical solutions to hyperbolic partial differential equations using a wave propagation approach
Numerical differential equations
Computational fluid dynamics
Numerical analysis | Finite volume method | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,227 | [
"Computational fluid dynamics",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Fluid dynamics"
] |
345,141 | https://en.wikipedia.org/wiki/Product%20detector | A product detector is a type of demodulator used for AM and SSB signals. Rather than converting the envelope of the signal into the decoded waveform like an envelope detector, the product detector takes the product of the modulated signal and a local oscillator, hence the name. A product detector is a frequency mixer.
Product detectors can be designed to accept either IF or RF frequency inputs. A product detector which accepts an IF signal would be used as a demodulator block in a superheterodyne receiver, and a detector designed for RF can be combined with an RF amplifier and a low-pass filter into a direct-conversion receiver.
A simple product detector
The simplest form of product detector mixes (or heterodynes) the RF or IF signal with a locally derived carrier (the Beat Frequency Oscillator, or BFO) to produce an audio frequency copy of the original audio signal and a mixer product at twice the original RF or IF frequency. This high-frequency component can then be filtered out, leaving the original audio frequency signal.
Mathematical model of the simple product detector
If m(t) is the original message, the AM signal can be shown to be
x(t) = \left( C + m(t) \right) \cos(\omega t).
Multiplying the AM signal x(t) by an oscillator at the same frequency as, and in phase with, the carrier yields
x(t)\cos(\omega t) = \left( C + m(t) \right) \cos^2(\omega t),
which can be re-written as
x(t)\cos(\omega t) = \tfrac{1}{2}\left( C + m(t) \right) + \tfrac{1}{2}\left( C + m(t) \right)\cos(2\omega t).
After filtering out the high-frequency component based around cos(2ωt) and the DC component C, the original message will be recovered.
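The whole chain of modulation, mixing, and low-pass filtering fits in a few lines of Python. This is only an illustrative sketch (the sample rate, carrier frequency, and crude moving-average filter are arbitrary choices, not part of the original text):

```python
import numpy as np

fs, fc = 48_000, 4_800                  # sample rate and carrier, in Hz
t = np.arange(0, 0.05, 1 / fs)
m = 0.5 * np.sin(2 * np.pi * 300 * t)   # message m(t)
x = (1.0 + m) * np.cos(2 * np.pi * fc * t)     # AM signal with C = 1

mixed = x * np.cos(2 * np.pi * fc * t)  # product with the in-phase BFO
k = fs // fc                            # 10 samples = one carrier period,
lp = np.convolve(mixed, np.ones(k) / k, mode="same")  # nulls the cos(2wt) term
recovered = 2.0 * lp - 1.0              # undo the 1/2 factor and remove C/2
print(np.max(np.abs(recovered[k:-k] - m[k:-k])))      # small residual error
```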
Drawbacks of the simple product detector
Although this simple detector works, it has two major drawbacks:
The frequency of the local oscillator must be the same as the frequency of the carrier, or else the output message will fade in and out in the case of AM, or be frequency shifted in the case of SSB
Once the frequency is matched, the phase of the carrier must be obtained, or else the demodulated message will be attenuated, but the noise will not be.
The local oscillator can be synchronized with the carrier using a phase-locked loop in a synchronous detector arrangement. For SSB, the only solution is to construct a highly stable oscillator.
Another example
There are many other kinds of product detectors as well, which are practical if one has access to digital signal processing equipment. For instance, it is possible to multiply the incoming signal by the carrier, times the square of another carrier 90° out of phase with it. This will produce a copy of the original message, and another AM signal at the fourth harmonic, by means of the trigonometric identity
\cos^2(\omega t)\,\sin^2(\omega t) = \tfrac{1}{8}\left( 1 - \cos(4\omega t) \right).
The high-frequency component can again be filtered out, leaving the original signal.
Mathematical model of the detector
If m(t) is the original message, the AM signal can be shown to be
x(t) = \left( C + m(t) \right) \cos(\omega t).
Multiplying the AM signal by the new set of frequencies yields
x(t)\cos(\omega t)\sin^2(\omega t) = \tfrac{1}{8}\left( C + m(t) \right) - \tfrac{1}{8}\left( C + m(t) \right)\cos(4\omega t).
After filtering out the component based around cos(4ωt) and the DC component C, the original message will be recovered.
A more sophisticated product detector
A more sophisticated product detector can be constructed in a way much like a single-sideband modulator. Two copies of the modulated input signals are created. The first copy is mixed with a local oscillator and low-pass filtered. The second copy is mixed with a 90° phase-shifted copy of the oscillator and the output of this mixer is also 90° phase-shifted and then low-pass filtered. These copies are then combined to produce the original message. This operation is similar to that performed by a dual-phase lock-in amplifier.
Example: I-Q Demodulator
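The figure that originally accompanied this heading is not reproduced here. In its place, a minimal complex-baseband (I-Q) sketch in Python, with arbitrary parameters, shows why the quadrature pair makes the detector insensitive to the local oscillator's phase:

```python
import numpy as np

fs, fc = 48_000, 4_800
t = np.arange(0, 0.05, 1 / fs)
m = 0.5 * np.sin(2 * np.pi * 300 * t)
x = (1.0 + m) * np.cos(2 * np.pi * fc * t + 0.7)  # carrier, unknown phase 0.7

lo = np.exp(-2j * np.pi * fc * t)       # I and Q oscillators as one complex LO
k = fs // fc
bb = np.convolve(x * lo, np.ones(k) / k, mode="same")  # low-passed baseband
i_part, q_part = bb.real, bb.imag       # in-phase and quadrature channels
envelope = 2.0 * np.abs(bb) - 1.0       # recovers m(t) regardless of the 0.7
```

The magnitude of the complex baseband combines the I and Q channels, so the unknown carrier phase only rotates the baseband in the complex plane without attenuating the recovered message.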
Advantages and disadvantages
The product demodulator has some advantages over an envelope detector for AM signal reception.
The product demodulator can decode overmodulated AM and AM with suppressed carrier.
A signal demodulated with a product detector will have a higher signal-to-noise ratio than the same signal demodulated with an envelope detector.
On the other hand, the envelope detector is a simple and relatively inexpensive circuit, and it can provide higher fidelity, since there is no possibility of mistuning the local oscillator.
A product detector (or equivalent) is needed to demodulate SSB signals.
Frequency mixers
Communication circuits
Demodulation
| Product detector | [
"Engineering"
] | 909 | [
"Radio electronics",
"Telecommunications engineering",
"Demodulation",
"Frequency mixers",
"Communication circuits"
] |
345,188 | https://en.wikipedia.org/wiki/VEGAS%20algorithm | The VEGAS algorithm, due to G. Peter Lepage, is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the integrand that make the greatest contribution to the final integral.
The VEGAS algorithm is based on importance sampling. It samples points from the probability distribution described by the function |f|, so that the points are concentrated in the regions that make the largest contribution to the integral. The GNU Scientific Library (GSL) provides a VEGAS routine.
Sampling method
In general, if the Monte Carlo integral of f over a volume V is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate
E_g(f; N) = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{g(x_i)}.
The variance of the new estimate is then
\mathrm{Var}_g(f; N) = \frac{\mathrm{Var}(f/g)}{N},
where \mathrm{Var}(f/g) is the variance of the original estimate, \mathrm{Var}(f/g) = \left\langle (f/g)^2 \right\rangle - \left\langle f/g \right\rangle^2.
If the probability distribution is chosen as g = |f| / \int_V |f(x)|\,dx then it can be shown that the variance vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution.
Approximation of probability distribution
The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like K^d with dimension d, the probability distribution is approximated by a separable function: g(x_1, x_2, \ldots) = g_1(x_1)\,g_2(x_2)\cdots, so that the number of bins required is only Kd. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS.
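The rebinning idea is easiest to see in one dimension. Below is a deliberately simplified Python sketch of a VEGAS-style adaptive grid (illustrative only; the production algorithm, e.g. in GSL, damps the grid refinement, handles several dimensions, and combines iterations in a weighted average):

```python
import numpy as np

def vegas_1d(f, iters=10, nbins=50, samples=2000, seed=0):
    """Toy VEGAS on [0, 1]: adapt bin edges so each bin carries roughly
    equal |f| weight, then importance-sample from the adapted grid."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, nbins + 1)
    estimate = np.nan
    for _ in range(iters):
        widths = np.diff(edges)
        b = rng.integers(0, nbins, samples)    # each bin equally probable
        x = edges[b] + widths[b] * rng.random(samples)
        g = 1.0 / (nbins * widths[b])          # sampling density g(x)
        fx = f(x)
        estimate = np.mean(fx / g)
        # histogram |f| per bin, then move the edges to equalize the weight
        w = np.bincount(b, weights=np.abs(fx) * widths[b], minlength=nbins)
        cdf = np.concatenate(([0.0], np.cumsum(w + 1e-12)))
        edges = np.interp(np.linspace(0.0, 1.0, nbins + 1), cdf / cdf[-1], edges)
    return estimate

peak = lambda x: np.exp(-((x - 0.5) / 0.01) ** 2)  # sharply peaked integrand
print(vegas_1d(peak))    # ~ 0.01 * sqrt(pi) ~ 0.01772
```

After a few passes the bins crowd around the peak, so most samples land where the integrand contributes, which is exactly the importance-sampling behaviour described above.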
See also
Las Vegas algorithm
Monte Carlo integration
Importance sampling
References
Monte Carlo methods
Computational physics
Statistical algorithms
Variance reduction | VEGAS algorithm | [
"Physics"
] | 418 | [
"Monte Carlo methods",
"Computational physics stubs",
"Computational physics"
] |
345,396 | https://en.wikipedia.org/wiki/List%20of%20differential%20geometry%20topics | This is a list of differential geometry topics. See also glossary of differential and metric geometry and list of Lie group topics.
Differential geometry of curves and surfaces
Differential geometry of curves
List of curves topics
Frenet–Serret formulas
Curves in differential geometry
Line element
Curvature
Radius of curvature
Osculating circle
Curve
Fenchel's theorem
Differential geometry of surfaces
Theorema egregium
Gauss–Bonnet theorem
First fundamental form
Second fundamental form
Gauss–Codazzi–Mainardi equations
Dupin indicatrix
Asymptotic curve
Curvature
Principal curvatures
Mean curvature
Gauss curvature
Elliptic point
Types of surfaces
Minimal surface
Ruled surface
Conical surface
Developable surface
Nadirashvili surface
Foundations
Calculus on manifolds
See also multivariable calculus, list of multivariable calculus topics
Manifold
Differentiable manifold
Smooth manifold
Banach manifold
Fréchet manifold
Tensor analysis
Tangent vector
Tangent space
Tangent bundle
Cotangent space
Cotangent bundle
Tensor
Tensor bundle
Vector field
Tensor field
Differential form
Exterior derivative
Lie derivative
pullback (differential geometry)
pushforward (differential)
jet (mathematics)
Contact (mathematics)
jet bundle
Frobenius theorem (differential topology)
Integral curve
Differential topology
Diffeomorphism
Large diffeomorphism
Orientability
characteristic class
Chern class
Pontrjagin class
spin structure
differentiable map
submersion
immersion
Embedding
Whitney embedding theorem
Critical value
Sard's theorem
Saddle point
Morse theory
Lie derivative
Hairy ball theorem
Poincaré–Hopf theorem
Stokes' theorem
De Rham cohomology
Sphere eversion
Frobenius theorem (differential topology)
Distribution (differential geometry)
integral curve
foliation
integrability conditions for differential systems
Fiber bundles
Fiber bundle
Principal bundle
Frame bundle
Hopf bundle
Associated bundle
Vector bundle
Tangent bundle
Cotangent bundle
Line bundle
Jet bundle
Fundamental structures
Sheaf (mathematics)
Pseudogroup
G-structure
synthetic differential geometry
Riemannian geometry
Fundamental notions
Metric tensor
Riemannian manifold
Pseudo-Riemannian manifold
Levi-Civita connection
Non-Euclidean geometry
Non-Euclidean geometry
Elliptic geometry
Spherical geometry
Sphere-world
Angle excess
hyperbolic geometry
hyperbolic space
hyperboloid model
Poincaré disc model
Poincaré half-plane model
Poincaré metric
Angle of parallelism
Geodesic
Prime geodesic
Geodesic flow
Exponential map (Lie theory)
Exponential map (Riemannian geometry)
Injectivity radius
Geodesic deviation equation
Jacobi field
Symmetric spaces (and related topics)
Riemannian symmetric space
Margulis lemma
Space form
Constant curvature
taut submanifold
Uniformization theorem
Myers theorem
Gromov's compactness theorem
Riemannian submanifolds
Gauss–Codazzi equations
Darboux frame
Hypersurface
Induced metric
Nash embedding theorem
minimal surface
Helicoid
Catenoid
Costa's minimal surface
Hsiang–Lawson's conjecture
Curvature of Riemannian manifolds
Theorema Egregium
Gauss–Bonnet theorem
Chern–Gauss–Bonnet theorem
Chern–Weil homomorphism
Gauss map
Second fundamental form
Curvature form
Riemann curvature tensor
Geodesic curvature
Scalar curvature
Sectional curvature
Ricci curvature, Ricci flat
Ricci decomposition
Schouten tensor
Weyl curvature
Ricci flow
Einstein manifold
Holonomy
Theorems in Riemannian geometry
Gauss–Bonnet theorem
Hopf–Rinow theorem
Cartan–Hadamard theorem
Myers theorem
Rauch comparison theorem
Morse index theorem
Synge theorem
Weinstein theorem
Toponogov theorem
Sphere theorem
Hodge theory
Uniformization theorem
Yamabe problem
Isometry
Killing vector field
Myers-Steenrod theorem
Laplace–Beltrami operator
Hodge star operator
Weitzenböck identity
Laplacian operators in differential geometry
Formulas and other tools
List of coordinate charts
List of formulas in Riemannian geometry
Christoffel symbols
Related structures
Intrinsic metric
Pseudo-Riemannian manifold
Sub-Riemannian manifold
Finsler geometry
General relativity
G2 manifold
Information geometry
Fisher information metric
Lie groups
Connections
covariant derivative
exterior covariant derivative
Levi-Civita connection
parallel transport
Development (differential geometry)
connection form
Cartan connection
affine connection
conformal connection
projective connection
method of moving frames
Cartan's equivalence method
Vierbein, tetrad
Cartan connection applications
Einstein–Cartan theory
connection (vector bundle)
connection (principal bundle)
Ehresmann connection
curvature
curvature form
holonomy, local holonomy
Chern–Weil homomorphism
Curvature vector
Curvature form
Curvature tensor
Cocurvature
torsion (differential geometry)
Complex manifolds
Riemann surface
Complex projective space
Kähler manifold
Dolbeault operator
CR manifold
Stein manifold
Almost complex structure
Hermitian manifold
Newlander–Nirenberg theorem
Generalized complex manifold
Calabi–Yau manifold
Hyperkähler manifold
K3 surface
hypercomplex manifold
Quaternion-Kähler manifold
Symplectic geometry
Symplectic topology
Symplectic space
Symplectic manifold
Symplectic structure
Symplectomorphism
Contact structure
Contact geometry
Hamiltonian system
Sasakian manifold
Poisson manifold
Conformal geometry
Möbius transformation
Conformal map
conformal connection
tractor bundle
Weyl curvature
Weyl–Schouten theorem
ambient construction
Willmore energy
Willmore flow
Index theory
Atiyah–Singer index theorem
de Rham cohomology
Dolbeault cohomology
elliptic complex
Hodge theory
pseudodifferential operator
Homogeneous spaces
Klein geometry, Erlangen programme
symmetric space
space form
Maurer–Cartan form
Examples
hyperbolic space
Gauss–Bolyai–Lobachevsky space
Grassmannian
Complex projective space
Real projective space
Euclidean space
Stiefel manifold
Upper half-plane
Sphere
Systolic geometry
Loewner's torus inequality
Pu's inequality
Gromov's inequality for complex projective space
Wirtinger inequality (2-forms)
Gromov's systolic inequality for essential manifolds
Essential manifold
Filling radius
Filling area conjecture
Bolza surface
First Hurwitz triplet
Hermite constant
Systoles of surfaces
Systolic freedom
Systolic category
Other
Envelope (mathematics)
Bäcklund transform
Differential geometry
Differential geometry
Differential geometry | List of differential geometry topics | [
"Mathematics"
] | 1,215 | [
"nan"
] |
345,919 | https://en.wikipedia.org/wiki/Three%20prime%20untranslated%20region | In molecular genetics, the three prime untranslated region (3′-UTR) is the section of messenger RNA (mRNA) that immediately follows the translation termination codon. The 3′-UTR often contains regulatory regions that post-transcriptionally influence gene expression.
During gene expression, an mRNA molecule is transcribed from the DNA sequence and is later translated into a protein. Several regions of the mRNA molecule are not translated into a protein including the 5' cap, 5' untranslated region, 3′ untranslated region and poly(A) tail. Regulatory regions within the 3′-untranslated region can influence polyadenylation, translation efficiency, localization, and stability of the mRNA. The 3′-UTR contains binding sites for both regulatory proteins and microRNAs (miRNAs). By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also has silencer regions which bind to repressor proteins and will inhibit the expression of the mRNA.
Many 3′-UTRs also contain AU-rich elements (AREs). Proteins bind AREs to affect the stability or decay rate of transcripts in a localized manner or affect translation initiation. Furthermore, the 3′-UTR contains the sequence AAUAAA that directs addition of several hundred adenine residues called the poly(A) tail to the end of the mRNA transcript. Poly(A) binding protein (PABP) binds to this tail, contributing to regulation of mRNA translation, stability, and export. For example, poly(A) tail bound PABP interacts with proteins associated with the 5' end of the transcript, causing a circularization of the mRNA that promotes translation.
The 3′-UTR can also contain sequences that attract proteins to associate the mRNA with the cytoskeleton, transport it to or from the cell nucleus, or perform other types of localization. In addition to sequences within the 3′-UTR, the physical characteristics of the region, including its length and secondary structure, contribute to translation regulation. These diverse mechanisms of gene regulation ensure that the correct genes are expressed in the correct cells at the appropriate times.
Physical characteristics
The 3′-UTR of mRNA has a great variety of regulatory functions that are controlled by the physical characteristics of the region. One such characteristic is the length of the 3′-UTR, which in the mammalian genome has considerable variation. This region of the mRNA transcript can range from 60 nucleotides to about 4000. On average the length for the 3′-UTR in humans is approximately 800 nucleotides, while the average length of 5'-UTRs is only about 200 nucleotides. The length of the 3′-UTR is significant since longer 3′-UTRs are associated with lower levels of gene expression. One possible explanation for this phenomenon is that longer regions have a higher probability of possessing more miRNA binding sites that have the ability to inhibit translation. In addition to length, the nucleotide composition also differs significantly between the 5' and 3′-UTR. The mean G+C percentage of the 5'-UTR in warm-blooded vertebrates is about 60% as compared to only 45% for 3′-UTRs. This is important because an inverse correlation has been observed between the G+C% of 5' and 3′-UTRs and their corresponding lengths. The UTRs that are GC-poor tend to be longer than those located in GC-rich genomic regions.
Sequences within the 3′-UTR also have the ability to degrade or stabilize the mRNA transcript. Modifications that control a transcript's stability allow expression of a gene to be rapidly controlled without altering translation rates. One group of elements in the 3′-UTR that can help destabilize an mRNA transcript are the AU-rich elements (AREs). These elements range in size from 50 to 150 base pairs and generally contain multiple copies of the pentanucleotide AUUUA. Early studies indicated that AREs can vary in sequence and fall into three main classes that differ in the number and arrangement of motifs. Another set of elements that is present in both the 5' and 3′-UTR are iron response elements (IREs). The IRE is a stem-loop structure within the untranslated regions of mRNAs that encode proteins involved in cellular iron metabolism. The mRNA transcript containing this element is either degraded or stabilized depending upon the binding of specific proteins and the intracellular iron concentrations.
The 3′-UTR also contains sequences that signal additions to be made, either to the transcript itself or to the product of translation. For example, there are two different polyadenylation signals present within the 3′-UTR that signal the addition of the poly(A) tail. These signals initiate the synthesis of the poly(A) tail at a defined length of about 250 base pairs. The primary signal used is the nuclear polyadenylation signal (PAS) with the sequence AAUAAA located toward the end of the 3′-UTR. However, during early development cytoplasmic polyadenylation can occur instead and regulate the translational activation of maternal mRNAs. The element that controls this process is called the CPE, which is AU-rich and located in the 3′-UTR as well. The CPE generally has the structure UUUUUUAU and is usually within 100 base pairs of the nuclear PAS. Another specific addition signaled by the 3′-UTR is the incorporation of selenocysteine at UGA codons of mRNAs encoding selenoproteins. Normally the UGA codon encodes a stop of translation, but in this case a conserved stem-loop structure called the selenocysteine insertion sequence (SECIS) causes the insertion of selenocysteine instead.
Role in gene expression
The 3′-untranslated region plays a crucial role in gene expression by influencing the localization, stability, export, and translation efficiency of an mRNA. It contains various sequences that are involved in gene expression, including microRNA response elements (MREs), AU-rich elements (AREs), and the poly(A) tail. In addition, the structural characteristics of the 3′-UTR as well as its use of alternative polyadenylation play a role in gene expression.
MicroRNA response elements
The 3′-UTR often contains microRNA response elements (MREs), which are sequences to which miRNAs bind. miRNAs are short, non-coding RNA molecules capable of binding to mRNA transcripts and regulating their expression. One miRNA mechanism involves partial base pairing of the 5' seed sequence of an miRNA to an MRE within the 3′-UTR of an mRNA; this binding then causes translational repression.
AU-rich elements
In addition to containing MREs, the 3′-UTR also often contains AU-rich elements (AREs), which are 50 to 150 bp in length and usually include many copies of the sequence AUUUA. ARE binding proteins (ARE-BPs) bind to AU-rich elements in a manner that is dependent upon tissue type, cell type, timing, cellular localization, and environment. In response to different intracellular and extracellular signals, ARE-BPs can promote mRNA decay, affect mRNA stability, or activate translation. This mechanism of gene regulation is involved in cell growth, cellular differentiation, and adaptation to external stimuli. It therefore acts on transcripts encoding cytokines, growth factors, tumor suppressors, proto-oncogenes, cyclins, enzymes, transcription factors, receptors, and membrane proteins.
Poly(A) tail
The poly(A) tail contains binding sites for poly(A) binding proteins (PABPs). These proteins cooperate with other factors to affect the export, stability, decay, and translation of an mRNA. PABPs bound to the poly(A) tail may also interact with proteins, such as translation initiation factors, that are bound to the 5' cap of the mRNA. This interaction causes circularization of the transcript, which subsequently promotes translation initiation. Furthermore, it allows for efficient translation by causing recycling of ribosomes. While the presence of a poly(A) tail usually aids in triggering translation, the absence or removal of one often leads to exonuclease-mediated degradation of the mRNA. Polyadenylation itself is regulated by sequences within the 3′-UTR of the transcript. These sequences include cytoplasmic polyadenylation elements (CPEs), which are uridine-rich sequences that contribute to both polyadenylation activation and repression. CPE-binding protein (CPEB) binds to CPEs in conjunction with a variety of other proteins in order to elicit different responses.
Structural characteristics
While the sequence that constitutes the 3′-UTR contributes greatly to gene expression, the structural characteristics of the 3′-UTR also play a large role. In general, longer 3′-UTRs correspond to lower expression rates since they often contain more miRNA and protein binding sites that are involved in inhibiting translation. Human transcripts possess 3′-UTRs that are on average twice as long as other mammalian 3′-UTRs. This trend reflects the high level of complexity involved in human gene regulation. In addition to length, the secondary structure of the 3′-untranslated region also has regulatory functions. Protein factors can either aid or disrupt folding of the region into various secondary structures. The most common structure is a stem-loop, which provides a scaffold for RNA binding proteins and non-coding RNAs that influence expression of the transcript.
Alternative polyadenylation
Another mechanism involving the structure of the 3′-UTR is called alternative polyadenylation (APA), which results in mRNA isoforms that differ only in their 3′-UTRs. This mechanism is especially useful for complex organisms as it provides a means of expressing the same protein but in varying amounts and locations. It is utilized by about half of human genes. APA can result from the presence of multiple polyadenylation sites or mutually exclusive terminal exons. Since it can affect the presence of protein and miRNA binding sites, APA can cause differential expression of mRNA transcripts by influencing their stability, export to the cytoplasm, and translation efficiency.
Methods of study
Scientists use a number of methods to study the complex structures and functions of the 3′ UTR. Even if a given 3′-UTR in an mRNA is shown to be present in a tissue, the effects of localization, functional half-life, translational efficiency, and trans-acting elements must be determined to understand the 3′-UTR's full functionality. Computational approaches, primarily by sequence analysis, have shown the existence of AREs in approximately 5 to 8% of human 3′-UTRs and the presence of one or more miRNA targets in as many as 60% or more of human 3′-UTRs. Software can rapidly compare millions of sequences at once to find similarities between various 3′ UTRs within the genome. Experimental approaches have been used to define sequences that associate with specific RNA-binding proteins; specifically, recent improvements in sequencing and cross-linking techniques have enabled fine mapping of protein binding sites within the transcript. Induced site-specific mutations, for example those that affect the termination codon, polyadenylation signal, or secondary structure of the 3′-UTR, can show how mutated regions can cause translation deregulation and disease. These types of transcript-wide methods should help our understanding of known cis elements and trans-regulatory factors within 3′-UTRs.
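At its simplest, motif counting of this kind is a string-matching exercise. The Python sketch below is purely illustrative (the sequence is invented, and genuine ARE and PAS discovery relies on statistical models rather than bare pattern matching); it finds overlapping AUUUA pentamers and a candidate AAUAAA signal:

```python
import re

def motif_positions(rna, motif):
    """0-based start positions of motif in an RNA string,
    counting overlapping occurrences."""
    return [m.start() for m in re.finditer(f"(?={motif})", rna)]

utr = "AGCAUUUAUUUAGGCCAAUAAAGCU"       # made-up toy 3'-UTR
print(motif_positions(utr, "AUUUA"))    # [3, 7] -- overlapping ARE pentamers
print(motif_positions(utr, "AAUAAA"))   # [16]   -- candidate poly(A) signal
```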
Disease
3′-UTR mutations can be very consequential because one alteration can be responsible for the altered expression of many genes. Transcriptionally, a mutation may affect only the allele and genes that are physically linked. However, since 3′-UTR binding proteins also function in the processing and nuclear export of mRNA, a mutation can also affect other unrelated genes. Dysregulation of ARE-binding proteins (AUBPs) due to mutations in AU-rich regions can lead to diseases including tumorigenesis (cancer), hematopoietic malignancies, leukemogenesis, and developmental delay/autism spectrum disorders. An expanded number of trinucleotide (CTG) repeats in the 3’-UTR of the dystrophia myotonica protein kinase (DMPK) gene causes myotonic dystrophy. Retro-transposal 3-kilobase insertion of tandem repeat sequences within the 3′-UTR of fukutin protein is linked to Fukuyama-type congenital muscular dystrophy. Elements in the 3′-UTR have also been linked to human acute myeloid leukemia, alpha-thalassemia, neuroblastoma, Keratinopathy, Aniridia, IPEX syndrome, and congenital heart defects. The few UTR-mediated diseases identified only hint at the countless links yet to be discovered.
Future development
Despite current understanding of 3′-UTRs, much about them remains unknown. Since mRNAs usually contain several overlapping control elements, it is often difficult to specify the identity and function of each 3′-UTR element, let alone the regulatory factors that may bind at these sites. Additionally, each 3′-UTR contains many alternative AU-rich elements and polyadenylation signals. These cis- and trans-acting elements, along with miRNAs, offer a virtually limitless range of control possibilities within a single mRNA. Future research through the increased use of deep-sequencing based ribosome profiling will reveal more regulatory subtleties as well as new control elements and AUBPs.
See also
Five prime untranslated region
UTRdb
UTRome
References
Further reading
External links
Brief introduction to mRNA regulatory elements
UTResource 3′ UTR analysis
UTRome.org 3′ UTRs in nematodes
Medical Subject Heading: 3′ Untranslated Regions
RNA
Gene expression | Three prime untranslated region | [
"Chemistry",
"Biology"
] | 2,936 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
345,929 | https://en.wikipedia.org/wiki/SECIS%20element | In biology, the SECIS element (SECIS: selenocysteine insertion sequence) is an RNA element around 60 nucleotides in length that adopts a stem-loop structure. This structural motif (pattern of nucleotides) directs the cell to translate UGA codons as selenocysteines (UGA is normally a stop codon). SECIS elements are thus a fundamental aspect of messenger RNAs encoding selenoproteins, proteins that include one or more selenocysteine residues.
In bacteria the SECIS element appears soon after the UGA codon it affects. In archaea and eukaryotes, it occurs in the 3' UTR of an mRNA, and can cause multiple UGA codons within the mRNA to code for selenocysteine. One archaeal SECIS element, in Methanococcus, is located in the 5' UTR.
The SECIS element appears defined by sequence characteristics, i.e. particular nucleotides tend to be at particular positions in it, and a characteristic secondary structure. The secondary structure is the result of base-pairing of complementary RNA nucleotides, and causes a hairpin-like structure. The eukaryotic SECIS element includes non-canonical A-G base pairs, which are uncommon in nature, but are critically important for correct SECIS element function. Although the eukaryotic, archaeal and bacterial SECIS elements each share a general hairpin structure, they are not alignable, e.g. an alignment-based scheme to recognize eukaryotic SECIS elements will not be able to recognize archaeal SECIS elements. However, in Lokiarcheota, SECIS elements are more similar to eukaryotic elements.
In bioinformatics, several computer programs have been created that search for SECIS elements within a genome sequence, based on the sequence and secondary structure characteristics of SECIS elements. These programs have been used in searches for novel selenoproteins.
Species distribution
The SECIS element is found in a wide variety of organisms from all three domains of life (including their viruses).
References
External links
Gene expression
Cis-regulatory RNA elements | SECIS element | [
"Chemistry",
"Biology"
] | 462 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
346,030 | https://en.wikipedia.org/wiki/Improper%20integral | In mathematical analysis, an improper integral is an extension of the notion of a definite integral to cases that violate the usual assumptions for that kind of integral. In the context of Riemann integrals (or, equivalently, Darboux integrals), this typically involves unboundedness, either of the set over which the integral is taken or of the integrand (the function being integrated), or both. It may also involve bounded but not closed sets or bounded but not continuous functions. While an improper integral is typically written symbolically just like a standard definite integral, it actually represents a limit of a definite integral or a sum of such limits; thus improper integrals are said to converge or diverge. If a regular definite integral (which may retronymically be called a proper integral) is worked out as if it is improper, the same answer will result.
In the simplest case of a real-valued function of a single variable integrated in the sense of Riemann (or Darboux) over a single interval, improper integrals may be in any of the following forms:
\int_a^{\infty} f(x)\,dx
\int_{-\infty}^{b} f(x)\,dx
\int_{-\infty}^{\infty} f(x)\,dx
\int_a^b f(x)\,dx, where f is undefined or discontinuous somewhere on [a, b]
The first three forms are improper because the integrals are taken over an unbounded interval. (They may be improper for other reasons, as well, as explained below.) Such an integral is sometimes described as being of the "first" type or kind if the integrand otherwise satisfies the assumptions of integration. Integrals in the fourth form that are improper because f has a vertical asymptote somewhere on the interval may be described as being of the "second" type or kind. Integrals that combine aspects of both types are sometimes described as being of the "third" type or kind.
In each case above, the improper integral must be rewritten using one or more limits, depending on what is causing the integral to be improper. For example, in case 1, if f is continuous on the entire interval [a, ∞), then
\int_a^{\infty} f(x)\,dx = \lim_{b\to\infty} \int_a^b f(x)\,dx.
The limit on the right is taken to be the definition of the integral notation on the left.
If f is only continuous on (a, ∞) and not at a itself, then typically this is rewritten as
\int_a^{\infty} f(x)\,dx = \lim_{t\to a^+} \int_t^c f(x)\,dx + \lim_{b\to\infty} \int_c^b f(x)\,dx,
for any choice of c > a. Here both limits must converge to a finite value for the improper integral to be said to converge. This requirement avoids the ambiguous case of adding positive and negative infinities (i.e., the "∞ − ∞" indeterminate form). Alternatively, an iterated limit could be used or a single limit based on the Cauchy principal value.
If f is continuous on [a, d) and (d, b], with a discontinuity of any kind at d, then
\int_a^b f(x)\,dx = \lim_{t\to d^-} \int_a^t f(x)\,dx + \lim_{u\to d^+} \int_u^b f(x)\,dx.
The previous remarks about indeterminate forms, iterated limits, and the Cauchy principal value also apply here.
The function f can have more discontinuities, in which case even more limits would be required (or a more complicated principal value expression).
Cases 2–4 are handled similarly. See the examples below.
Improper integrals can also be evaluated in the context of complex numbers, in higher dimensions, and in other theoretical frameworks such as Lebesgue integration or Henstock–Kurzweil integration. Integrals that are considered improper in one framework may not be in others.
Examples
The original definition of the Riemann integral does not apply to a function such as 1/x^2 on the interval [1, ∞), because in this case the domain of integration is unbounded. However, the Riemann integral can often be extended by continuity, by defining the improper integral instead as a limit
\int_1^{\infty} \frac{dx}{x^2} = \lim_{b\to\infty} \int_1^b \frac{dx}{x^2} = \lim_{b\to\infty} \left( 1 - \frac{1}{b} \right) = 1.
The narrow definition of the Riemann integral also does not cover the function 1/\sqrt{x} on the interval [0, 1]. The problem here is that the integrand is unbounded in the domain of integration. In other words, the definition of the Riemann integral requires that both the domain of integration and the integrand be bounded. However, the improper integral does exist if understood as the limit
\int_0^1 \frac{dx}{\sqrt{x}} = \lim_{a\to 0^+} \int_a^1 \frac{dx}{\sqrt{x}} = \lim_{a\to 0^+} \left( 2 - 2\sqrt{a} \right) = 2.
Sometimes integrals may have two singularities where they are improper. Consider, for example, the function 1/((x + 1)√x) integrated from 0 to ∞ (shown right). At the lower bound of the integration domain, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, 2 arctan(√t) − π/2. This has a finite limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1/3 by an arbitrary positive value s (with s < 1) is equally safe, giving π/2 − 2 arctan(√s). This, too, has a finite limit as s goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is
\int_0^{\infty} \frac{dx}{(x + 1)\sqrt{x}} = \pi.
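The two-sided limit process can be mimicked numerically. A Python sketch (illustrative; it assumes SciPy's quad routine is available) truncates both singular endpoints and watches the proper integrals approach π:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / ((1.0 + x) * np.sqrt(x))

print(quad(f, 1.0, 3.0)[0], np.pi / 6)    # the ordinary piece from 1 to 3

for s, t in [(1e-2, 1e2), (1e-4, 1e4), (1e-6, 1e6)]:
    approx = quad(f, s, 1.0, limit=200)[0] + quad(f, 1.0, t, limit=200)[0]
    print(s, t, approx)                   # tends to pi as s -> 0, t -> infinity
```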
This process does not guarantee success; a limit might fail to exist, or might be infinite. For example, over the bounded interval from 0 to 1 the integral of 1/x does not converge; and over the unbounded interval from 1 to ∞ the integral of 1/x does not converge.
It might also happen that an integrand is unbounded near an interior point, in which case the integral must be split at that point. For the integral as a whole to converge, the limit integrals on both sides must exist and must be bounded. For example:
\int_{-1}^{1} \frac{dx}{\sqrt[3]{x}} = \lim_{s\to 0^-} \int_{-1}^{s} \frac{dx}{\sqrt[3]{x}} + \lim_{t\to 0^+} \int_{t}^{1} \frac{dx}{\sqrt[3]{x}} = -\tfrac{3}{2} + \tfrac{3}{2} = 0.
But the similar integral
\int_{-1}^{1} \frac{dx}{x}
cannot be assigned a value in this way, as the integrals above and below zero in the integral domain do not independently converge. (However, see Cauchy principal value.)
Convergence of the integral
An improper integral converges if the limit defining it exists. Thus for example one says that the improper integral
\lim_{t\to\infty} \int_a^t f(x)\,dx
exists and is equal to L if the integrals under the limit exist for all sufficiently large t, and the value of the limit is equal to L.
It is also possible for an improper integral to diverge to infinity. In that case, one may assign the value of ∞ (or −∞) to the integral. For instance
\lim_{b\to\infty} \int_1^b \frac{dx}{x} = \infty.
However, other improper integrals may simply diverge in no particular direction, such as
\lim_{b\to\infty} \int_1^b \sin(x)\,dx,
which does not exist, even as an extended real number. This is called divergence by oscillation.
A limitation of the technique of improper integration is that the limit must be taken with respect to one endpoint at a time. Thus, for instance, an improper integral of the form
\int_{-\infty}^{\infty} f(x)\,dx
can be defined by taking two separate limits; to wit
\int_{-\infty}^{\infty} f(x)\,dx = \lim_{a\to -\infty}\, \lim_{b\to\infty} \int_a^b f(x)\,dx,
provided the double limit is finite. It can also be defined as a pair of distinct improper integrals of the first kind:
\lim_{a\to -\infty} \int_a^c f(x)\,dx + \lim_{b\to\infty} \int_c^b f(x)\,dx,
where c is any convenient point at which to start the integration. This definition also applies when one of these integrals is infinite, or both if they have the same sign.
An example of an improper integral where both endpoints are infinite is the Gaussian integral \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}. An example which evaluates to infinity is \int_{-\infty}^{\infty} e^{x}\,dx. But one cannot even define other integrals of this kind unambiguously, such as \int_{-\infty}^{\infty} x\,dx, since the double limit is infinite and the two-integral method
\lim_{a\to -\infty} \int_a^c x\,dx + \lim_{b\to\infty} \int_c^b x\,dx
yields an indeterminate form, \infty - \infty. In this case, one can however define an improper integral in the sense of Cauchy principal value:
\operatorname{p.v.} \int_{-\infty}^{\infty} x\,dx = \lim_{b\to\infty} \int_{-b}^{b} x\,dx = 0.
The questions one must address in determining an improper integral are:
Does the limit exist?
Can the limit be computed?
The first question is an issue of mathematical analysis. The second one can be addressed by calculus techniques, but also in some cases by contour integration, Fourier transforms and other more advanced methods.
Types of integrals
There is more than one theory of integration. From the point of view of calculus, the Riemann integral theory is usually assumed as the default theory. In using improper integrals, it can matter which integration theory is in play.
For the Riemann integral (or the Darboux integral, which is equivalent to it), improper integration is necessary both for unbounded intervals (since one cannot divide the interval into finitely many subintervals of finite length) and for unbounded functions with finite integral (since, supposing it is unbounded above, then the upper integral will be infinite, but the lower integral will be finite).
The Lebesgue integral deals differently with unbounded domains and unbounded functions, so that often an integral which only exists as an improper Riemann integral will exist as a (proper) Lebesgue integral, such as \int_1^{\infty} \frac{dx}{x^2}. On the other hand, there are also integrals that have an improper Riemann integral but do not have a (proper) Lebesgue integral, such as \int_0^{\infty} \frac{\sin x}{x}\,dx. The Lebesgue theory does not see this as a deficiency: from the point of view of measure theory, \int_0^{\infty} \frac{\sin x}{x}\,dx = \infty - \infty and cannot be defined satisfactorily. In some situations, however, it may be convenient to employ improper Lebesgue integrals as is the case, for instance, when defining the Cauchy principal value. The Lebesgue integral is more or less essential in the theoretical treatment of the Fourier transform, with pervasive use of integrals over the whole real line.
For the Henstock–Kurzweil integral, improper integration is not necessary, and this is seen as a strength of the theory: it encompasses all Lebesgue integrable and improper Riemann integrable functions.
Improper Riemann integrals and Lebesgue integrals
In some cases, the integral
\int_a^c f(x)\,dx
can be defined as an integral (a Lebesgue integral, for instance) without reference to the limit
\lim_{b\to c^-} \int_a^b f(x)\,dx
but cannot otherwise be conveniently computed. This often happens when the function f being integrated from a to c has a vertical asymptote at c, or if c = ∞ (see Figures 1 and 2). In such cases, the improper Riemann integral allows one to calculate the Lebesgue integral of the function. Specifically, the following theorem holds:
If a function f is Riemann integrable on [a,b] for every b ≥ a, and the partial integrals
\int_a^b |f(x)|\,dx
are bounded as b → ∞, then the improper Riemann integrals
\int_a^{\infty} f(x)\,dx \quad\text{and}\quad \int_a^{\infty} |f(x)|\,dx
both exist. Furthermore, f is Lebesgue integrable on [a, ∞), and its Lebesgue integral is equal to its improper Riemann integral.
For example, the integral
\int_0^{\infty} \frac{dx}{1 + x^2}
can be interpreted alternatively as the improper integral
\lim_{b\to\infty} \int_0^b \frac{dx}{1 + x^2} = \lim_{b\to\infty} \arctan b = \frac{\pi}{2},
or it may be interpreted instead as a Lebesgue integral over the set (0, ∞). Since both of these kinds of integral agree, one is free to choose the first method to calculate the value of the integral, even if one ultimately wishes to regard it as a Lebesgue integral. Thus improper integrals are clearly useful tools for obtaining the actual values of integrals.
In other cases, however, a Lebesgue integral between finite endpoints may not even be defined, because the integrals of the positive and negative parts of f are both infinite, but the improper Riemann integral may still exist. Such cases are "properly improper" integrals, i.e. their values cannot be defined except as such limits. For example,
\int_0^{\infty} \frac{\sin x}{x}\,dx
cannot be interpreted as a Lebesgue integral, since
\int_0^{\infty} \left| \frac{\sin x}{x} \right| dx = \infty.
But f(x) = \frac{\sin x}{x} is nevertheless integrable between any two finite endpoints, and its integral between 0 and ∞ is usually understood as the limit of the integral:
\int_0^{\infty} \frac{\sin x}{x}\,dx = \lim_{b\to\infty} \int_0^b \frac{\sin x}{x}\,dx = \frac{\pi}{2}.
Singularities
One can speak of the singularities of an improper integral, meaning those points of the extended real number line at which limits are used.
Cauchy principal value
Consider the difference in values of two limits:
\lim_{a\to 0^+} \left( \int_{-1}^{-a} \frac{dx}{x} + \int_a^1 \frac{dx}{x} \right) = 0,
\lim_{a\to 0^+} \left( \int_{-1}^{-a} \frac{dx}{x} + \int_{2a}^1 \frac{dx}{x} \right) = -\ln 2.
The former is the Cauchy principal value of the otherwise ill-defined expression
\int_{-1}^{1} \frac{dx}{x}.
Similarly, we have
\lim_{b\to\infty} \int_{-b}^{b} \frac{2x\,dx}{x^2 + 1} = 0,
but
\lim_{b\to\infty} \int_{-2b}^{b} \frac{2x\,dx}{x^2 + 1} = -\ln 4.
The former is the principal value of the otherwise ill-defined expression
\int_{-\infty}^{\infty} \frac{2x\,dx}{x^2 + 1}.
All of the above limits are cases of the indeterminate form \infty - \infty.
These pathologies do not affect "Lebesgue-integrable" functions, that is, functions the integrals of whose absolute values are finite.
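Numerically, the principal value is just the symmetric version of the truncation. A short sketch (illustrative; it assumes SciPy is available) reproduces both limits from the first example above:

```python
from scipy.integrate import quad

f = lambda x: 1.0 / x
for a in (1e-2, 1e-4, 1e-6):
    left = quad(f, -1.0, -a)[0]
    symmetric = left + quad(f, a, 1.0)[0]      # -> 0, the principal value
    skewed = left + quad(f, 2 * a, 1.0)[0]     # -> -ln 2 ~ -0.6931
    print(symmetric, skewed)
```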
Summability
An improper integral may diverge in the sense that the limit defining it may not exist. In this case, there are more sophisticated definitions of the limit which can produce a convergent value for the improper integral. These are called summability methods.
One summability method, popular in Fourier analysis, is that of Cesàro summation. The integral
\int_0^{\infty} f(x)\,dx
is Cesàro summable (C, α) if
\lim_{\lambda\to\infty} \int_0^{\lambda} \left( 1 - \frac{x}{\lambda} \right)^{\alpha} f(x)\,dx
exists and is finite. The value of this limit, should it exist, is the (C, α) sum of the integral.
An integral is (C, 0) summable precisely when it exists as an improper integral. However, there are integrals which are (C, α) summable for α > 0 which fail to converge as improper integrals (in the sense of Riemann or Lebesgue). One example is the integral
∫₀^∞ sin x dx,
which fails to exist as an improper integral, but is (C, α) summable for every α > 0. This is an integral version of Grandi's series.
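The (C, 1) means of this divergent integral can be computed directly; a sketch assuming SciPy (names illustrative):

```python
import numpy as np
from scipy.integrate import quad

def cesaro_c1(lam):
    """(C, 1) mean of the integral of sin x: int_0^lam (1 - x/lam) sin(x) dx."""
    val, _ = quad(lambda x: (1.0 - x / lam) * np.sin(x), 0.0, lam, limit=2000)
    return val

for lam in (10.0, 100.0, 1000.0):
    print(lam, cesaro_c1(lam))
# The partial integrals int_0^b sin x dx = 1 - cos(b) oscillate forever,
# but the (C, 1) means equal 1 - sin(lam)/lam and converge to 1.
```

Analytically the mean is 1 − sin(λ)/λ, so the (C, 1) sum of this particular integral is 1.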
Multivariable improper integrals
The improper integral can also be defined for functions of several variables. The definition is slightly different, depending on whether one requires integrating over an unbounded domain, such as ℝ², or is integrating a function with singularities, like f(x, y) = log(x² + y²).
Improper integrals over arbitrary domains
If f : ℝⁿ → ℝ is a non-negative function that is Riemann integrable over every compact cube of the form [−a, a]ⁿ, for a > 0, then the improper integral of f over ℝⁿ is defined to be the limit
lim_{a→∞} ∫_{[−a,a]ⁿ} f,
provided it exists.
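A numerical illustration of this limit over growing squares, assuming a recent SciPy (where constant bounds may be passed to dblquad directly); the Gaussian integrand is chosen only as a standard example:

```python
import numpy as np
from scipy.integrate import dblquad

f = lambda y, x: np.exp(-(x**2 + y**2))  # non-negative on all of R^2

for a in (1.0, 2.0, 4.0, 8.0):
    val, _ = dblquad(f, -a, a, -a, a)    # integral over the square [-a, a]^2
    print(a, val)
print("limit:", np.pi)                   # the improper integral over R^2 is pi
```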
A function f on an arbitrary domain A in ℝⁿ is extended to a function f̃ on ℝⁿ by zero outside of A:
f̃(x) = f(x) if x ∈ A, and f̃(x) = 0 otherwise.
The Riemann integral of a function over a bounded domain A is then defined as the integral of the extended function f̃ over a cube I containing A:
∫_A f = ∫_I f̃.
More generally, if A is unbounded, then the improper Riemann integral over an arbitrary domain in ℝⁿ is defined as the limit:
∫_A f = lim_{a→∞} ∫_{A ∩ [−a,a]ⁿ} f = lim_{a→∞} ∫_{[−a,a]ⁿ} f̃.
Improper integrals with singularities
If f is a non-negative function which is unbounded in a domain A, then the improper integral of f is defined by truncating f at some cutoff M, integrating the resulting function, and then taking the limit as M tends to infinity. That is, for M > 0, set f_M = min{f, M}. Then define
∫_A f = lim_{M→∞} ∫_A f_M,
provided this limit exists.
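A sketch of the truncation procedure for the standard example f(x) = 1/√x on (0, 1], whose improper integral is 2 (the helper name is illustrative):

```python
from scipy.integrate import quad

def truncated(M):
    """Integral of min(1/sqrt(x), M) over (0, 1], split at x = 1/M**2."""
    cut = 1.0 / M**2               # for x < cut the truncation at M is active
    flat = M * cut                 # integral of the constant M over (0, cut)
    tail, _ = quad(lambda x: x**-0.5, cut, 1.0)
    return flat + tail

for M in (10, 100, 1000):
    print(M, truncated(M))         # equals 2 - 1/M, approaching 2 as M grows
```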
Functions with both positive and negative values
These definitions apply for functions that are non-negative. A more general function f can be decomposed as a difference of its positive part f⁺ = max{f, 0} and negative part f⁻ = max{−f, 0}, so
f = f⁺ − f⁻
with f⁺ and f⁻ both non-negative functions. The function f has an improper Riemann integral if each of f⁺ and f⁻ has one, in which case the value of that improper integral is defined by
∫_A f = ∫_A f⁺ − ∫_A f⁻.
In order to exist in this sense, the improper integral necessarily converges absolutely, since
∫_A |f| = ∫_A f⁺ + ∫_A f⁻.
Notes
Bibliography
External links
Numerical Methods to Solve Improper Integrals at Holistic Numerical Methods Institute
Integral calculus | Improper integral | [
"Mathematics"
] | 2,995 | [
"Integral calculus",
"Calculus"
] |
346,611 | https://en.wikipedia.org/wiki/Axiom%20of%20countability | In mathematics, an axiom of countability is a property of certain mathematical objects that asserts the existence of a countable set with certain properties. Without such an axiom, such a set might not provably exist.
Important examples
Important countability axioms for topological spaces include:
sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set
first-countable space: every point has a countable neighbourhood basis (local base)
second-countable space: the topology has a countable base
separable space: there exists a countable dense subset
Lindelöf space: every open cover has a countable subcover
σ-compact space: there exists a countable cover by compact spaces
Relationships with each other
These axioms are related to each other in the following ways:
Every first-countable space is sequential.
Every second-countable space is first countable, separable, and Lindelöf.
Every σ-compact space is Lindelöf.
Every metric space is first countable.
For metric spaces, second-countability, separability, and the Lindelöf property are all equivalent.
Related concepts
Other examples of mathematical objects obeying axioms of countability include sigma-finite measure spaces, and lattices of countable type.
References
General topology
Mathematical axioms | Axiom of countability | [
"Mathematics"
] | 274 | [
"General topology",
"Mathematical logic",
"Topology",
"Mathematical axioms"
] |
346,681 | https://en.wikipedia.org/wiki/%CE%A3-compact%20space | In mathematics, a topological space is said to be σ-compact if it is the union of countably many compact subspaces.
A space is said to be σ-locally compact if it is both σ-compact and (weakly) locally compact. That terminology can be somewhat confusing as it does not fit the usual pattern of σ-(property) meaning a countable union of spaces satisfying (property); that's why such spaces are more commonly referred to explicitly as σ-compact (weakly) locally compact, which is also equivalent to being exhaustible by compact sets.
Properties and examples
Every compact space is σ-compact, and every σ-compact space is Lindelöf (i.e. every open cover has a countable subcover). The reverse implications do not hold, for example, standard Euclidean space (Rn) is σ-compact but not compact, and the lower limit topology on the real line is Lindelöf but not σ-compact. In fact, the countable complement topology on any uncountable set is Lindelöf but neither σ-compact nor locally compact. However, it is true that any locally compact Lindelöf space is σ-compact.
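For instance, the σ-compactness of Euclidean space mentioned above comes from an explicit exhaustion by compact cubes (each compact by the Heine–Borel theorem); stated here as a worked illustration of the definition:

```latex
\mathbb{R}^n \;=\; \bigcup_{k=1}^{\infty} [-k,\,k]^n .
```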
The space of irrational numbers is not σ-compact.
A Hausdorff, Baire space that is also σ-compact, must be locally compact at at least one point.
If G is a topological group and G is locally compact at one point, then G is locally compact everywhere. Therefore, the previous property tells us that if G is a σ-compact, Hausdorff topological group that is also a Baire space, then G is locally compact. This shows that for Hausdorff topological groups that are also Baire spaces, σ-compactness implies local compactness.
The previous property implies for instance that Rω is not σ-compact: if it were σ-compact, it would necessarily be locally compact since Rω is a topological group that is also a Baire space.
Every hemicompact space is σ-compact. The converse, however, is not true; for example, the space of rationals, with the usual topology, is σ-compact but not hemicompact.
The product of a finite number of σ-compact spaces is σ-compact. However the product of an infinite number of σ-compact spaces may fail to be σ-compact.
A σ-compact space X is second category (respectively Baire) if and only if the set of points at which X is locally compact is nonempty (respectively dense) in X.
See also
Notes
References
Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970).
Compactness (mathematics)
General topology
Properties of topological spaces | Σ-compact space | [
"Mathematics"
] | 578 | [
"General topology",
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology"
] |
346,769 | https://en.wikipedia.org/wiki/Order%20of%20approximation | In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is.
Usage in science and engineering
In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions: a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero-order approximation is also common. Cardinal numerals are occasionally used in expressions like an order-zero approximation, an order-one approximation, etc.
The omission of the word order leads to phrases that have less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity. The phrase to a zeroth approximation indicates a wild guess. The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or to the order of magnitude. However, this may be confusing, as these formal expressions do not directly refer to the order of derivatives.
The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. Higher order of approximation is not always more useful than the lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy.
In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree. The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion. This affects accuracy. The error usually varies within the interval. Thus the terms (zeroth, first, second, etc.) used above meaning do not directly give information about percent error or significant figures. For example, in the Taylor series expansion of the exponential function,
the zeroth-order term is 1, the first-order term is x, the second-order term is x²/2, and so forth. If x < 1, each higher-order term is smaller than the previous. If x ≪ 1, then the first-order approximation,
eˣ ≈ 1 + x,
is often sufficient. But at x = 1 the first-order term, x, is not smaller than the zeroth-order term, 1. And at x = 2 even the second-order term, x²/2, is greater than the zeroth-order term.
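These partial sums are easy to tabulate; a minimal sketch in Python (the sample points are chosen only for illustration):

```python
import math

def exp_taylor(x, order):
    """Partial sum of the exponential series through the given order."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

for x in (0.01, 1.0, 2.0):
    approxs = [exp_taylor(x, n) for n in (0, 1, 2)]
    print(x, math.exp(x), approxs)
# At x = 0.01 the first-order value 1 + x is already accurate to several digits;
# at x = 2 even the second-order term x**2/2 = 2 exceeds the zeroth-order term 1.
```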
Zeroth-order
Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined.
A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0. For example,
y ≈ 3.67
could be – if data point accuracy were reported – an approximate fit to the data, obtained by simply averaging the x values and the y values. However, data points represent results of measurements and they do differ from points in Euclidean geometry. Thus quoting an average value containing three significant digits in the output with just one significant digit in the input data could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth order approximation could at best yield the result for y of ~3.7 ± 2.0 in the interval of x from −0.5 to 2.5, considering the standard deviation.
If the data points are reported as
x = {0.00, 1.00, 2.00}, y = {3.00, 3.00, 5.00},
the zeroth-order approximation results in
y ≈ 3.67.
The accuracy of the result justifies an attempt to derive a multiplicative function for that average.
One should be careful though, because the multiplicative function will be defined for the whole interval. If only three data points are available, one has no knowledge about the rest of the interval, which may be a large part of it. This means that y could have another component which equals 0 at the ends and in the middle of the interval. A number of functions having this property are known, for example y = sin πx. Taylor series is useful and helps predict an analytic solution, but the approximation alone does not provide conclusive evidence.
First-order
First-order approximation is the term scientists use for a slightly better answer. Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4×10³, or four thousand, residents"). In the case of a first-order approximation, at least one number given is exact. In the zeroth-order example above, the quantity "a few" was given, but in the first-order example, the number "4" is given.
A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation, a straight line with a slope: a polynomial of degree 1. For example:
y ≈ x + 2.67
is an approximate fit to the data.
In this example there is a zeroth-order approximation that is the same as the first-order, but the method of getting there is different; i.e. a wild stab in the dark at a relationship happened to be as good as an "educated guess".
Second-order
Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3.9×10³, or thirty-nine hundred, residents") is generally given. As in the examples above, the term "2nd order" refers to the number of exact numerals given for the imprecise quantity. In this case, "3" and "9" are given as the two successive levels of precision, instead of simply the "4" from the first order, or "a few" from the zeroth order found in the examples above.
A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically, a parabola: a polynomial of degree 2. For example:
y ≈ x² − x + 3
is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, the data points for most of the interval are not available, which advises caution (see "zeroth order").
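The three fits can be reproduced with a least-squares polynomial fit; a sketch assuming NumPy, with hypothetical data points chosen to be consistent with the fits quoted above:

```python
import numpy as np

# Hypothetical data points (three measurements), used only for illustration.
x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

for degree in (0, 1, 2):
    coeffs = np.polyfit(x, y, degree)   # least-squares fit of the given degree
    print(degree, np.poly1d(coeffs))
# degree 0: y ~ 3.67            (the average of the y values)
# degree 1: y ~ 1.0 x + 2.67    (best-fit line)
# degree 2: y ~ x^2 - x + 3     (passes exactly through all three points)
```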
Higher-order
While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number.
Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation.
Colloquial usage
These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g. "Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it." or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration.") In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect. The terminology, in this context, represents a high level of precision required to account for an effect which is inferred to be very small when compared to the overall subject matter. The higher the order, the more precision is required to measure the effect, and therefore the smallness of the effect in comparison to the overall measurement.
See also
Linearization
Perturbation theory
Taylor series
Chapman–Enskog method
Big O notation
Order of accuracy
References
Perturbation theory
Numerical analysis | Order of approximation | [
"Physics",
"Mathematics"
] | 1,738 | [
"Computational mathematics",
"Quantum mechanics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Perturbation theory"
] |
346,906 | https://en.wikipedia.org/wiki/Gauss%20gun | The Gauss gun (often called a Gauss rifle or Gauss cannon) is a device that uses permanent magnets and the physics of Newton's cradle to accelerate a projectile. Gauss guns are distinct from and predate coil guns, although many works of science fiction (and occasionally educators) have confused the two. Typical use of the Gauss rifle is to demonstrate the effects of energy and momentum transfer; however, self-assembling microbots based on the principle have been proposed for tissue penetration.
Mechanism
In its frequent incarnation as a physics demonstration, a Gauss gun usually consists of a series of ferromagnetic balls on a nonmagnetic track. On the track is a permanent magnet with a ball, the projectile, stuck to the front of it. Between the projectile and the magnet is a spacer, usually consisting of one or more additional balls. Yet another ball, the trigger ball, is released from behind the magnet. It is attracted to and accelerates toward the magnet. When it strikes the back of the magnet, it transfers its momentum to the projectile ball, which is knocked off the front of the stack, as in a Newton's cradle. Because the spacer kept it far away from the magnet, the projectile loses less energy escaping from the magnet's influence than the trigger ball gave it, so it leaves the stack with a higher velocity than the trigger ball entered with.
Once the ball is launched, the trigger ball must be pried off the back of the magnet before it can be used again. This is where the energy to shoot the gun ultimately comes from.
Multi-stage Gauss guns are also possible, with the projectile of each stage becoming the trigger for the next, carrying its energy forward so that each stage contributes energy to the final projectile.
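The energy bookkeeping described above can be sketched with a toy calculation; all numbers below are hypothetical, and the collision is idealized as lossless:

```python
# Idealized single-stage Gauss gun energy audit (all values hypothetical).
m = 0.008         # ball mass, kg
v_in = 0.5        # trigger-ball speed far from the magnet, m/s
U_near = -0.050   # magnetic binding energy touching the back of the magnet, J
U_far = -0.005    # binding energy of the projectile behind the spacer, J

# Kinetic energy of the trigger ball at impact: its initial KE plus the
# energy gained falling into the deep magnetic well.
ke_impact = 0.5 * m * v_in**2 - U_near

# Newton's-cradle transfer: the projectile receives that energy but only
# pays the much smaller escape cost from its spacer-distance position.
ke_out = ke_impact + U_far
v_out = (2.0 * ke_out / m) ** 0.5
print(f"launch speed ~ {v_out:.2f} m/s, vs. trigger speed {v_in} m/s")
```

With these numbers the projectile leaves several times faster than the trigger ball arrived; the difference is paid for later, when the trigger ball is pried off the magnet.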
See also
Newton's Cradle
List of science demonstrations
Galilean cannon
References
Sources
External links
A Youtube video explaining the Gauss gun
Science demonstrations
Physics education | Gauss gun | [
"Physics"
] | 396 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
346,992 | https://en.wikipedia.org/wiki/Dimension%20theorem%20for%20vector%20spaces | In mathematics, the dimension theorem for vector spaces states that all bases of a vector space have equally many elements. This number of elements may be finite or infinite (in the latter case, it is a cardinal number), and defines the dimension of the vector space.
Formally, the dimension theorem for vector spaces states that:
Given a vector space V, any two bases have the same cardinality.
As a basis is a generating set that is linearly independent, the dimension theorem is a consequence of the following theorem, which is also useful:
In a vector space V, if G is a generating set and L is a linearly independent set, then the cardinality of L is not larger than the cardinality of G.
In particular if V is finitely generated, then all its bases are finite and have the same number of elements.
While the proof of the existence of a basis for any vector space in the general case requires Zorn's lemma and is in fact equivalent to the axiom of choice, the uniqueness of the cardinality of the basis requires only the ultrafilter lemma, which is strictly weaker (the proof given below, however, assumes trichotomy, i.e., that all cardinal numbers are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary R-modules for rings R having invariant basis number.
In the finitely generated case the proof uses only elementary arguments of algebra, and does not require the axiom of choice nor its weaker variants.
Proof
Let V be a vector space, L be a linearly independent set of elements of V, and G be a generating set. One has to prove that the cardinality of L is not larger than that of G.
If G is finite, this results from the Steinitz exchange lemma. (Indeed, the Steinitz exchange lemma implies every finite subset of L has cardinality not larger than that of G, hence L is finite with cardinality not larger than that of G.) If G is finite, a proof based on matrix theory is also possible.
Assume that G is infinite. If L is finite, there is nothing to prove. Thus, we may assume that L is also infinite. Let us suppose that the cardinality of L is larger than that of G. We have to prove that this leads to a contradiction.
By Zorn's lemma, every linearly independent set is contained in a maximal linearly independent set M. This maximality implies that M spans V and is therefore a basis (the maximality implies that every element of V is linearly dependent from the elements of M, and therefore is a linear combination of elements of M). As the cardinality of M is greater than or equal to the cardinality of L, one may replace L with M, that is, one may suppose, without loss of generality, that L is a basis.
Thus, every g ∈ G can be written as a finite sum
g = Σ_{l ∈ E_g} λ_l l,
where E_g is a finite subset of L. As G is infinite, ∪_{g ∈ G} E_g has cardinality not larger than that of G. Therefore ∪_{g ∈ G} E_g has cardinality smaller than that of L. So there is some l₀ ∈ L which does not appear in any E_g. The corresponding l₀ can be expressed as a finite linear combination of gs, which in turn can be expressed as a finite linear combination of elements of L not involving l₀. Hence l₀ is linearly dependent on the other elements of L, which provides the desired contradiction.
Kernel extension theorem for vector spaces
This application of the dimension theorem is sometimes itself called the dimension theorem. Let
T : U → V
be a linear transformation. Then
dim(U) = dim(range(T)) + dim(kernel(T)),
that is, the dimension of U is equal to the dimension of the transformation's range plus the dimension of the kernel. See rank–nullity theorem for a fuller discussion.
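This identity is easy to verify numerically for a concrete linear map; a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)  # a linear map from R^6 to R^4

rank = np.linalg.matrix_rank(A)       # dimension of the range of A
nullity = null_space(A).shape[1]      # dimension of the kernel of A
print(rank, nullity, rank + nullity)  # rank + nullity equals 6, the dimension of the domain
```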
Notes
References
Theorems in abstract algebra
Theorems in linear algebra
Articles containing proofs | Dimension theorem for vector spaces | [
"Mathematics"
] | 687 | [
"Theorems in algebra",
"Theorems in linear algebra",
"Articles containing proofs",
"Theorems in abstract algebra"
] |
347,049 | https://en.wikipedia.org/wiki/Amorphous%20metal | An amorphous metal (also known as metallic glass or glassy metal) is a solid metallic material, usually an alloy, with disordered atomic-scale structure. Most metals are crystalline in their solid state, which means they have a highly ordered arrangement of atoms. Amorphous metals are non-crystalline, and have a glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity and can show metallic luster.
Amorphous metals can be produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. Small batches of amorphous metals have been produced through a variety of quick-cooling methods, such as amorphous metal ribbons produced by sputtering molten metal onto a spinning metal disk (melt spinning). The rapid cooling (millions of degrees Celsius per second) comes too fast for crystals to form and the material is "locked" in a glassy state. Alloys with cooling rates low enough to allow formation of an amorphous structure in thick layers (over ) have been produced; these are known as bulk metallic glasses. Batches of amorphous steel with three times the strength of conventional steel alloys have been produced. New techniques such as 3D printing, also characterised by high cooling rates, are an active research topic.
History
The first reported metallic glass was Au75Si25, produced at Caltech by Klement, Willens, and Duwez in 1960. This and other early glass-forming alloys had to be rapidly cooled (on the order of one megakelvin per second, 106 K/s) to avoid crystallization. An important consequence of this was that metallic glasses could be produced in a few forms (typically ribbons, foils, or wires) in which one dimension was small so that heat could be extracted quickly enough to achieve the required cooling rate. As a result, metallic glass specimens (with a few exceptions) were limited to thicknesses of less than one hundred microns.
In 1969, an alloy of 77.5% palladium, 6% copper, and 16.5% silicon was found to have critical cooling rate between 100 and 1000 K/s.
In 1976, Liebermann and Graham developed a method of manufacturing thin ribbons of amorphous metal on a supercooled fast-spinning wheel. This was an alloy of iron, nickel, and boron. The material, known as Metglas, was commercialized in the early 1980s and became used for low-loss power distribution transformers (amorphous metal transformer). Metglas-2605 is composed of 80% iron and 20% boron, has a Curie temperature of and a room temperature saturation magnetization of 1.56 teslas.
In the early 1980s, glassy ingots with a diameter of were produced with an alloy of 55% palladium, 22.5% lead, and 22.5% antimony, by surface etching followed with heating-cooling cycles. Using boron oxide flux, the achievable thickness increased to one centimeter.
In 1982, a study on amorphous metal structural relaxation indicated a relationship between the specific heat and temperature of (Fe0.5Ni0.5)83P17. As the material was heated, the two properties displayed a negative relationship starting at 375 K, due to the change in relaxed amorphous states. When the material was annealed for periods from 1 to 48 hours, the properties instead displayed a positive relationship starting at 475 K for all annealing periods, since the annealing-induced structure disappears at that temperature. In this study, amorphous alloys demonstrated glass transition and a supercooled liquid region. Between 1988 and 1992, more studies found more glass-type alloys with glass transition and a supercooled liquid region. From those studies, bulk glass alloys were made of La, Mg, and Zr, and these alloys demonstrated plasticity even with ribbon thickness from 20 μm to 50 μm. The plasticity stood in stark contrast to past amorphous metals, which became brittle at those thicknesses.
In 1988, alloys of lanthanum, aluminium, and copper were revealed to be glass-forming. Al-based metallic glasses containing scandium exhibited a record-type tensile mechanical strength of about .
Bulk amorphous alloys of several millimeters in thickness were rare, although Pd-based amorphous alloys had been formed into rods with a diameter by quenching, and spheres with a diameter were formed by repetition flux melting with B2O3 and quenching.
New techniques were found in 1990, producing alloys that form glasses at cooling rates as low as one kelvin per second. These cooling rates can be achieved by simple casting into metallic molds. These alloys can be cast into parts several centimeters thick while retaining an amorphous structure. The best glass-forming alloys were based on zirconium and palladium, but alloys based on iron, titanium, copper, magnesium, and other metals are known. The process exploited a phenomenon called "confusion". Such alloys contain many elements (often four or more) such that upon cooling sufficiently quickly, constituent atoms cannot achieve an equilibrium crystalline state before their mobility is lost. In this way, the random disordered state of the atoms is "locked in".
In 1992, the commercial amorphous alloy, Vitreloy 1 (41.2% Zr, 13.8% Ti, 12.5% Cu, 10% Ni, and 22.5% Be), was developed at Caltech, as a part of Department of Energy and NASA research of new aerospace materials.
By 2000, research in Tohoku University and Caltech yielded multicomponent alloys based on lanthanum, magnesium, zirconium, palladium, iron, copper, and titanium, with critical cooling rate between 1 K/s and 100 K/s, comparable to oxide glasses.
In 2004, bulk amorphous steel was successfully produced by two groups: one at Oak Ridge National Laboratory, which refers to its product as "glassy steel", and another at the University of Virginia, which named its product "DARVA-Glass 101". The product is non-magnetic at room temperature and significantly stronger than conventional steel.
In 2018, a team at SLAC National Accelerator Laboratory, the National Institute of Standards and Technology (NIST) and Northwestern University reported the use of artificial intelligence to predict and evaluate samples of 20,000 different likely metallic glass alloys in a year.
Properties
Amorphous metal is usually an alloy rather than a pure metal. The alloys contain atoms of significantly different sizes, leading to low free volume (and therefore up to orders of magnitude higher viscosity than other metals and alloys) in the molten state. The viscosity prevents the atoms from moving enough to form an ordered lattice. The material displays low shrinkage during cooling, and resistance to plastic deformation. The absence of grain boundaries, the weak spots of crystalline materials, leads to better wear resistance and less corrosion. Amorphous metals, while technically glasses, are much tougher and less brittle than oxide glasses and ceramics. Amorphous metals are either non-ferromagnetic, if they are composed of Ln, Mg, Zr, Ti, Pd, Ca, Cu, Pt and Au, or ferromagnetic, if they are composed of Fe, Co, and Ni.
Thermal conductivity is lower than in crystalline metals. As formation of amorphous structure relies on fast cooling, this limits the thickness of amorphous structures. To form amorphous structure despite slower cooling, the alloy has to be made of three or more components, leading to complex crystal units with higher potential energy and lower odds of formation. The atomic radius of the components has to be significantly different (over 12%), to achieve high packing density and low free volume. The combination of components should have negative mixing heat, inhibiting crystal nucleation and prolonging the time the molten metal stays in supercooled state.
As temperatures change, the electrical resistivity of amorphous metals behaves very differently from that of regular metals. While resistivity in crystalline metals generally increases with temperature, following Matthiessen's rule, resistivity in many amorphous metals decreases with increasing temperature. This effect can be observed in amorphous metals of high resistivities, between 150 and 300 microohm-centimeters. In these metals, the scattering events causing the resistivity of the metal are not statistically independent, thus explaining the breakdown of Matthiessen's rule. The fact that the thermal change of the resistivity in amorphous metals can be negative over a large range of temperatures and correlated to their absolute resistivity values was identified by Mooij in 1973 and is now known as Mooij's rule.
Alloys of boron, silicon, phosphorus, and other glass formers with magnetic metals (iron, cobalt, nickel) have high magnetic susceptibility, with low coercivity and high electrical resistance. Usually the electrical conductivity of a metallic glass is of the same low order of magnitude as of a molten metal just above the melting point. The high resistance leads to low losses by eddy currents when subjected to alternating magnetic fields, a property useful for e.g. transformer magnetic cores. Their low coercivity also contributes to low loss.
Buckel and Hilsch discovered the superconductivity of amorphous metal thin films experimentally in the early 1950s. For certain metallic elements the superconducting critical temperature Tc can be higher in the amorphous state (e.g. upon alloying) than in the crystalline state, and in several cases Tc increases upon increasing the structural disorder. This behavior can be explained by the effect of structural disorder on electron-phonon coupling.
Amorphous metals have higher tensile yield strengths and higher elastic strain limits than polycrystalline metal alloys, but their ductilities and fatigue strengths are lower.
Amorphous alloys have a variety of potentially useful properties. In particular, they tend to be stronger than crystalline alloys of similar chemical composition, and they can sustain larger reversible ("elastic") deformations than crystalline alloys. Amorphous metals derive their strength directly from their non-crystalline structure, which does not have defects (such as dislocations) that limit their strength. Vitreloy is an amorphous metal with a tensile strength almost double that of high-grade titanium. However, metallic glasses at room temperature are not ductile and tend to fail suddenly and surprisingly when loaded in tension, which limits applicability in reliability-critical applications. Metal matrix composites consisting of a ductile crystalline metal matrix containing dendritic particles or fibers of an amorphous glass metal are an alternative.
Perhaps the most useful property of bulk amorphous alloys is that they are true glasses, which means that they soften and flow upon heating. This allows for easy processing, such as by injection molding, in much the same way as polymers. As a result, amorphous alloys have been commercialized for use in sports equipment, medical devices, and as cases for electronic equipment.
Thin films of amorphous metals can be deposited as protective coatings via high velocity oxygen fuel.
Applications
Commercial
The most important application exploits the magnetic properties of some ferromagnetic metallic glasses. The low magnetization loss is used in high-efficiency transformers at line frequency and in some higher-frequency transformers. Amorphous steel is very brittle, which makes it difficult to punch into motor laminations. Electronic article surveillance (such as passive ID tags) often uses metallic glasses because of these magnetic properties.
Ti-based metallic glass, when made into thin pipes, have a high tensile strength of , elastic elongation of 2% and high corrosion resistance. A Ti–Zr–Cu–Ni–Sn metallic glass was used to improve the sensitivity of a Coriolis flow meter. This flow meter is about 28-53 times more sensitive than conventional meters, which can be applied in fossil-fuel, chemical, environmental, semiconductor and medical science industries.
Zr-Al-Ni-Cu based metallic glass can be shaped into pressure sensors for automobile and other industries. Such sensors are smaller, more sensitive, and possess greater pressure endurance than conventional stainless steel. Additionally, this alloy was used to make the world's smallest geared motor with diameter at the time.
Potential
Amorphous metals exhibit unique softening behavior above their glass transition, and this softening has been increasingly explored for thermoplastic forming of metallic glasses. Such a low softening temperature supports simple methods for making nanoparticle composites (e.g. with carbon nanotubes) and bulk metallic glasses. It has been shown that metallic glasses can be patterned at extremely small length scales, down to 10 nm. This may solve problems of nanoimprint lithography where expensive nano-molds made of silicon break easily. Nano-molds made from metallic glasses are easy to fabricate and more durable than silicon molds. The superior electronic, thermal and mechanical properties of bulk metallic glasses compared to polymers make them a good option for developing nanocomposites for electronic applications such as field electron emission devices.
Ti40Cu36Pd14Zr10 is believed to be noncarcinogenic, is about three times stronger than titanium, and its elastic modulus nearly matches bones. It has a high wear resistance and does not produce abrasion powder. The alloy does not undergo shrinkage on solidification. A surface structure can be generated that is biologically attachable by surface modification using laser pulses, allowing better joining with bone.
Laser powder bed fusion (LPBF) has been used to process Zr-based bulk metallic glass (BMG) for biomedical applications. Zr-based BMGs shows good biocompatibility, supporting osteoblastic cell growth similar to Ti-6Al-4V alloy. The favorable response coupled with the ability to tailor surface properties through SLM highlights the promise of SLM Zr- based BMGs like AMLOY-ZR01 for orthopaedic implant applications. However, their degradation under inflammatory conditions requires further investigation.
Mg60Zn35Ca5 is under investigation as a biomaterial for implantation into bones as screws, pins, or plates, to fix fractures. Unlike traditional steel or titanium, this material dissolves in organisms at a rate of roughly 1 millimeter per month and is replaced with bone tissue. This speed can be adjusted by varying the zinc content.
Bulk metallic glasses seem to exhibit superior properties. SAM2X5-630 is claimed to have the highest recorded plasticity for any steel alloy, essentially the highest threshold at which a material can withstand an impact without deforming permanently. The alloy can withstand pressure and stress of up to without permanent deformation. This is the highest impact resistance of any bulk metallic glass ever recorded. This makes it an attractive option for armour material and other applications that require high stress tolerance.
Additive manufacturing
One challenge when synthesising a metallic glass is that the techniques often only produce very small samples, due to the need for high cooling rates. 3D-printing methods have been suggested as a method to create larger bulk samples. Selective laser melting (SLM) is one example of an additive manufacturing method that has been used to make iron based metallic glasses. Laser foil printing (LFP) is another method where foils of the amorphous metals are stacked and welded together, layer by layer.
Modeling and theory
Bulk metallic glasses have been modeled using atomic-scale simulations (within the density functional theory framework) in a similar manner to high-entropy alloys. This has allowed predictions to be made about their behavior, stability and many more properties. As such, new bulk metallic glass systems can be tested and tailored for a specific purpose (e.g. bone replacement or aero-engine component) without as much empirical searching of the phase space or experimental trial and error. Ab-initio molecular dynamics (MD) simulation confirmed the atomic surface structure of a Ni–Nb metallic glass observed by scanning tunneling microscopy, which acts as a kind of spectroscopy: at negative applied bias it visualizes only one sort of atoms (Ni), owing to the structure of the electronic density of states calculated using ab-initio MD simulation.
One common way to try and understand the electronic properties of amorphous metals is by comparing them to liquid metals, which are similarly disordered, and for which established theoretical frameworks exist. For simple amorphous metals, good estimations can be reached by semi-classical modelling of the movement of individual electrons using the Boltzmann equation and approximating the scattering potential as the superposition of the electronic potential of each nucleus in the surrounding metal. To simplify the calculations, the electronic potentials of the atomic nuclei can be truncated to give a muffin-tin pseudopotential. In this theory, there are two main effects that govern the change of resistivity with increasing temperatures. Both are based on the induction of vibrations of the atomic nuclei of the metal as temperatures increase. One is, that the atomic structure gets increasingly smeared out as the exact positions of the atomic nuclei get less and less well defined. The other is the introduction of phonons. While the smearing out generally decreases the resistivity of the metal, the introduction of phonons generally adds scattering sites and therefore increases resistivity. Together, they can explain the anomalous decrease of resistivity in amorphous metals, as the first part outweighs the second. In contrast to regular crystalline metals, the phonon contribution in an amorphous metal does not get frozen out at low temperatures. Due to the lack of a defined crystal structure, there are always some phonon wavelengths that can be excited. While this semi-classical approach holds well for many amorphous metals, it generally breaks down under more extreme conditions. At very low temperatures, the quantum nature of the electrons leads to long range interference effects of the electrons with each other in what is called "weak localization effects". In very strongly disordered metals, impurities in the atomic structure can induce bound electronic states in what is called "Anderson localization", effectively binding the electrons and inhibiting their movement.
See also
Bioabsorbable metallic glass
Glass-ceramic-to-metal seals
Liquidmetal
Materials science
Structure of liquids and glasses
Amorphous brazing foil
References
Further reading
External links
Liquidmetal Design Guide
"Metallic glass: a drop of the hard stuff" at New Scientist
Glass-Like Metal Performs Better Under Stress Physical Review Focus, June 9, 2005
"Overview of metallic glasses"
New Computational Method Developed By Carnegie Mellon University Physicist Could Speed Design and Testing of Metallic Glass (2004) (the alloy database developed by Marek Mihalkovic, Michael Widom, and others)
New tungsten-tantalum-copper amorphous alloy developed at the Korea Advanced Institute of Science and Technology Digital Chosunilbo (English Edition) : Daily News in English About Korea
Amorphous Metals in Electric-Power Distribution Applications
Amorphous and Nanocrystalline Soft Magnets
Metallic glasses and those composites, Materials Research Forum LLC, Millersville, PA, USA, (2018), p. 336
Alloys
Metallurgy
Glass | Amorphous metal | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,975 | [
"Glass",
"Metallurgy",
"Unsolved problems in physics",
"Materials science",
"Homogeneous chemical mixtures",
"Amorphous metals",
"Alloys",
"Chemical mixtures",
"nan",
"Amorphous solids"
] |
347,049 | https://en.wikipedia.org/wiki/Valuation%20%28algebra%29 | In algebra (in particular in algebraic geometry or algebraic number theory), a valuation is a function on a field that provides a measure of the size or multiplicity of elements of the field. It generalizes to commutative algebra the notion of size inherent in consideration of the degree of a pole or multiplicity of a zero in complex analysis, the degree of divisibility of a number by a prime number in number theory, and the geometrical concept of contact between two algebraic or analytic varieties in algebraic geometry. A field with a valuation on it is called a valued field.
Definition
One starts with the following objects:
a field K and its multiplicative group K×,
an abelian totally ordered group (Γ, +, ≥).
The ordering and group law on Γ are extended to the set Γ ∪ {∞} by the rules
∞ ≥ α for all α ∈ Γ,
∞ + α = α + ∞ = ∞ for all α ∈ Γ.
Then a valuation of K is any map
v : K → Γ ∪ {∞}
that satisfies the following properties for all a, b in K:
v(a) = ∞ if and only if a = 0,
v(ab) = v(a) + v(b),
v(a + b) ≥ min(v(a), v(b)), with equality if v(a) ≠ v(b).
A valuation v is trivial if v(a) = 0 for all a in K×, otherwise it is non-trivial.
The second property asserts that any valuation is a group homomorphism on K×. The third property is a version of the triangle inequality on metric spaces adapted to an arbitrary Γ (see Multiplicative notation below). For valuations used in geometric applications, the first property implies that any non-empty germ of an analytic variety near a point contains that point.
The valuation can be interpreted as the order of the leading-order term. The third property then corresponds to the order of a sum being the order of the larger term, unless the two terms have the same order, in which case they may cancel and the sum may have larger order.
For many applications, Γ is an additive subgroup of the real numbers, in which case ∞ can be interpreted as +∞ in the extended real numbers; note that min(a, +∞) = a for any real number a, and thus +∞ is the unit under the binary operation of minimum. The real numbers (extended by +∞) with the operations of minimum and addition form a semiring, called the min tropical semiring, and a valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together.
Multiplicative notation and absolute values
The concept was developed by Emil Artin in his book Geometric Algebra, writing the group in multiplicative notation as (Γ, ·, ≥):
Instead of ∞, we adjoin a formal symbol O to Γ, with the ordering and group law extended by the rules
O ≤ α for all α ∈ Γ,
O · α = α · O = O for all α ∈ Γ.
Then a valuation of K is any map
| · | : K → Γ ∪ {O}
satisfying the following properties for all a, b ∈ K:
|a| = O if and only if a = 0,
|ab| = |a| · |b|,
|a + b| ≤ max(|a|, |b|), with equality if |a| ≠ |b|.
(Note that the directions of the inequalities are reversed from those in the additive notation.)
If Γ is a subgroup of the positive real numbers under multiplication, the last condition is the ultrametric inequality, a stronger form of the triangle inequality |a + b| ≤ |a| + |b|, and | · | is an absolute value. In this case, we may pass to the additive notation with value group Γ by taking v(a) = −log |a|.
Each valuation on K defines a corresponding linear preorder: a ≼ b ⟺ |a| ≤ |b|. Conversely, given a "≼" satisfying the required properties, we can define the valuation |a| = {b : b ≼ a and a ≼ b}, with multiplication and ordering based on K and ≼.
Terminology
In this article, we use the terms defined above, in the additive notation. However, some authors use alternative terms:
our "valuation" (satisfying the ultrametric inequality) is called an "exponential valuation" or "non-Archimedean absolute value" or "ultrametric absolute value";
our "absolute value" (satisfying the triangle inequality) is called a "valuation" or an "Archimedean absolute value".
Associated objects
There are several objects defined from a given valuation v : K → Γ ∪ {∞};
the value group or valuation group Γv = v(K×), a subgroup of Γ (though v is usually surjective so that Γv = Γ);
the valuation ring Rv is the set of a ∈ K with v(a) ≥ 0,
the prime ideal mv is the set of a ∈ K with v(a) > 0 (it is in fact a maximal ideal of Rv),
the residue field kv = Rv/mv,
the place of K associated to v, the class of v under the equivalence defined below.
Basic properties
Equivalence of valuations
Two valuations v1 and v2 of K with valuation groups Γ1 and Γ2, respectively, are said to be equivalent if there is an order-preserving group isomorphism φ : Γ1 → Γ2 such that v2(a) = φ(v1(a)) for all a in K×. This is an equivalence relation.
Two valuations of K are equivalent if and only if they have the same valuation ring.
An equivalence class of valuations of a field is called a place. Ostrowski's theorem gives a complete classification of places of the field of rational numbers ℚ: these are precisely the equivalence classes of valuations for the p-adic completions of ℚ.
Extension of valuations
Let v be a valuation of K and let L be a field extension of K. An extension of v (to L) is a valuation w of L such that the restriction of w to K is v. The set of all such extensions is studied in the ramification theory of valuations.
Let L/K be a finite extension and let w be an extension of v to L. The index of Γv in Γw, e(w/v) = [Γw : Γv], is called the reduced ramification index of w over v. It satisfies e(w/v) ≤ [L : K] (the degree of the extension L/K). The relative degree of w over v is defined to be f(w/v) = [Rw/mw : Rv/mv] (the degree of the extension of residue fields). It is also less than or equal to the degree of L/K. When L/K is separable, the ramification index of w over v is defined to be e(w/v)pi, where pi is the inseparable degree of the extension Rw/mw over Rv/mv.
Complete valued fields
When the ordered abelian group Γ is the additive group of the integers, the associated valuation is equivalent to an absolute value, and hence induces a metric on the field K. If K is complete with respect to this metric, then it is called a complete valued field. If K is not complete, one can use the valuation to construct its completion, as in the examples below, and different valuations can define different completion fields.
In general, a valuation induces a uniform structure on K, and K is called a complete valued field if it is complete as a uniform space. There is a related property known as spherical completeness: it is equivalent to completeness if the value group is the integers, but stronger in general.
Examples
p-adic valuation
The most basic example is the p-adic valuation νp associated to a prime integer p, on the rational numbers ℚ, with valuation ring ℤ(p), where ℤ(p) is the localization of ℤ at the prime ideal (p). The valuation group is the additive integers ℤ. For an integer a, the valuation νp(a) measures the divisibility of a by powers of p:
νp(a) = max{k ≥ 0 : p^k divides a};
and for a fraction, νp(a/b) = νp(a) − νp(b).
Writing this multiplicatively yields the p-adic absolute value, which conventionally has as base 1/p, so |a|p = p^(−νp(a)).
The completion of ℚ with respect to νp is the field of p-adic numbers.
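A direct implementation of νp on the rationals, using only the standard library (the function name is illustrative):

```python
from fractions import Fraction

def nu_p(q, p):
    """p-adic valuation of a rational q; returns infinity for q = 0."""
    q = Fraction(q)
    if q == 0:
        return float("inf")
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:       # count factors of p in the numerator
        num //= p
        v += 1
    while den % p == 0:       # factors of p in the denominator count negatively
        den //= p
        v -= 1
    return v

print(nu_p(72, 2), nu_p(72, 3))     # 3 2, since 72 = 2**3 * 3**2
print(nu_p(Fraction(5, 8), 2))      # -3
a, b = Fraction(3, 4), Fraction(8, 9)
print(nu_p(a * b, 3), nu_p(a, 3) + nu_p(b, 3))  # v(ab) = v(a) + v(b): -1 -1
```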
Order of vanishing
Let K = F(x), the rational functions on the affine line X = F¹, and take a point a ∈ X. For a polynomial f with f(x) = (x − a)^k h(x) and h(a) ≠ 0, define va(f) = k, the order of vanishing at x = a; and va(f/g) = va(f) − va(g). Then the valuation ring R consists of rational functions with no pole at x = a, and the completion is the formal Laurent series ring F((x−a)). This can be generalized to the field of Puiseux series K{{t}} (fractional powers), the Levi-Civita field (its Cauchy completion), and the field of Hahn series, with valuation in all cases returning the smallest exponent of t appearing in the series.
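The order of vanishing can be computed by repeatedly dividing out the factor (x − a); a sketch assuming SymPy (the helper name is illustrative):

```python
import sympy as sp

x = sp.symbols("x")

def v_a(expr, a):
    """Order of vanishing of a polynomial at x = a (multiplicity of the root)."""
    p = sp.Poly(expr, x)
    k = 0
    while not p.is_zero and p.eval(a) == 0:
        p = sp.Poly(sp.quo(p.as_expr(), x - a, x), x)  # divide out one (x - a)
        k += 1
    return k

f = (x - 2)**3 * (x + 1)
g = (x - 2) * (x**2 + 1)
print(v_a(f, 2), v_a(g, 2))   # 3 1
print(v_a(f, 2) - v_a(g, 2))  # v_2(f/g) = 2
```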
π-adic valuation
Generalizing the previous examples, let R be a principal ideal domain, K be its field of fractions, and π be an irreducible element of R. Since every principal ideal domain is a unique factorization domain, every non-zero element a of R can be written (essentially) uniquely as
a = π^(ea) · p1^(e1) · p2^(e2) ⋯ pn^(en),
where the e's are non-negative integers and the pi are irreducible elements of R that are not associates of π. In particular, the integer ea is uniquely determined by a.
The π-adic valuation of K is then given by
vπ(0) = ∞, and vπ(a/b) = ea − eb for non-zero a, b ∈ R.
If π' is another irreducible element of R such that (π') = (π) (that is, they generate the same ideal in R), then the π-adic valuation and the π'-adic valuation are equal. Thus, the π-adic valuation can be called the P-adic valuation, where P = (π).
P-adic valuation on a Dedekind domain
The previous example can be generalized to Dedekind domains. Let R be a Dedekind domain, K its field of fractions, and let P be a non-zero prime ideal of R. Then, the localization of R at P, denoted RP, is a principal ideal domain whose field of fractions is K. The construction of the previous section applied to the prime ideal PRP of RP yields the P-adic valuation of K.
Vector spaces over valuation fields
Suppose that Γ ∪ {0} is the set of non-negative real numbers under multiplication. Then we say that the valuation is non-discrete if its range (the valuation group) is infinite (and hence has an accumulation point at 0).
Suppose that X is a vector space over K and that A and B are subsets of X. Then we say that A absorbs B if there exists an α ∈ K such that λ ∈ K and |λ| ≥ |α| implies that B ⊆ λA. A is called radial or absorbing if A absorbs every finite subset of X. Radial subsets of X are invariant under finite intersection. Also, A is called circled if λ ∈ K and |λ| ≤ 1 implies λA ⊆ A. The set of circled subsets of X is invariant under arbitrary intersections. The circled hull of A is the intersection of all circled subsets of X containing A.
Suppose that X and Y are vector spaces over a non-discrete valuation field K, let A ⊆ X, B ⊆ Y, and let f : X → Y be a linear map. If B is circled or radial then so is f⁻¹(B). If A is circled then so is f(A), but if A is radial then f(A) will be radial under the additional condition that f is surjective.
See also
Discrete valuation
Euclidean valuation
Field norm
Absolute value (algebra)
Notes
References
A masterpiece on algebra written by one of the leading contributors.
Chapter VI of
External links
Algebraic geometry
Field (mathematics) | Valuation (algebra) | [
"Mathematics"
] | 2,315 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
347,113 | https://en.wikipedia.org/wiki/Tiger%20team | A tiger team is a team of specialists assembled to work on a specific goal, or to solve a particular problem.
Origin of the term
A 1964 paper entitled Program Management in Design and Development used the term tiger teams and defined it as "a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem or simulation". Walter C. Williams gave this definition in response to the question "How best can advancements in reliability/maintainability state-of-the-art be attained and used with compressed schedules?" Williams was an engineer at the Manned Spacecraft Center and part of the Edwards Air Force Base National Advisory Committee for Aeronautics.
The paper consists of anecdotes and answers to questions from a panel on issues in program management concerning testing and quality assurance in aerospace vehicle development and production. The panel consisted of Williams, Col. J. R. Dempsey of General Dynamics, Lt. Gen. W. A. Davis from the Ballistic Systems Div., Norton Air Force Base, and A. S. Crossfield from North American Aviation.
Examples
A tiger team was crucial to the Apollo 13 crewed lunar mission in 1970. During the mission, part of the Apollo 13 Service Module malfunctioned and exploded. A team of specialists was formed to address the resulting problems and bring the astronauts back to Earth safely, led by NASA Flight and Mission Operations Director Gene Kranz. Kranz and the members of his "White Team", later designated the "Tiger Team", received the Presidential Medal of Freedom for their efforts in the Apollo 13 mission.
In security work, a tiger team is a group that tests an organization's ability to protect its assets by attempting to defeat its physical or information security. In this context, the tiger team is often a permanent team as security is typically an ongoing priority. For example, one implementation of an information security tiger team approach divides the team into two co-operating groups: one for vulnerability research, which finds and researches the technical aspects of a vulnerability, and one for vulnerability management, which manages communication and feedback between the team and the organization, as well as ensuring each discovered vulnerability is tracked throughout its life-cycle and ultimately resolved.
An initiative involving tiger teams was implemented by the United States Department of Energy (DOE) under then-Secretary James D. Watkins. From 1989 through 1992 the DOE formed tiger teams to assess 35 DOE facilities for compliance with environment, safety, and health requirements. Beginning in October 1991 smaller tiger teams were formed to perform more detailed follow-up assessments to focus on the most pressing issues.
The NASA Engineering and Safety Center (NESC) puts together "tiger teams" of engineers and scientists from multiple NASA centers to assist solving complex problems when requested by a project or program.
See also
Penetration test
Red team
References
Hacking (computer security)
Software testing
Emergency management
Aerospace engineering
Biological engineering
Problem solving | Tiger team | [
"Engineering",
"Biology"
] | 607 | [
"Software engineering",
"Biological engineering",
"Software testing",
"Aerospace engineering"
] |
347,322 | https://en.wikipedia.org/wiki/Holotype | A holotype (Latin: holotypus) is a single physical example (or illustration) of an organism used when the species (or lower-ranked taxon) was formally described. It is either the single such physical example (or illustration) or one of several examples, but explicitly designated as the holotype. Under the International Code of Zoological Nomenclature (ICZN), a holotype is one of several kinds of name-bearing types. In the International Code of Nomenclature for algae, fungi, and plants (ICN) and ICZN, the definitions of types are similar in intent but not identical in terminology or underlying concept.
For example, the holotype for the butterfly Plebejus idas longinus is a preserved specimen of that subspecies, held by the Museum of Comparative Zoology at Harvard University. In botany and mycology, an isotype is a duplicate of the holotype, generally pieces from the same individual plant or samples from the same genetic individual.
A holotype is not necessarily "typical" of that taxon, although ideally it is. Sometimes just a fragment of an organism is the holotype, particularly in the case of a fossil. For example, the holotype of Pelorosaurus humerocristatus (Duriatitan), a large herbivorous dinosaur from the early Cretaceous period, is a fossil leg bone stored at the Natural History Museum in London. Even if a better specimen is subsequently found, the holotype is not superseded.
Replacements for holotypes
Under the ICN, an additional and clarifying type could be designated an epitype under article 9.8, where the original material is demonstrably ambiguous or insufficient.
A conserved type (ICN article 14.3) is sometimes used to correct a problem with a name which has been misapplied; this specimen replaces the original holotype.
In the absence of a holotype, another type may be selected, out of a range of different kinds of type, depending on the case, a lectotype or a neotype.
For example, in both the ICN and the ICZN a neotype is a type that was later appointed in the absence of the original holotype. Additionally, under the ICZN the commission is empowered to replace a holotype with a neotype, when the holotype turns out to lack important diagnostic features needed to distinguish the species from its close relatives. For example, the crocodile-like archosaurian reptile Parasuchus hislopi Lydekker, 1885 was described based on a premaxillary rostrum (part of the snout), but this is no longer sufficient to distinguish Parasuchus from its close relatives. This made the name Parasuchus hislopi a nomen dubium. Indian-American paleontologist Sankar Chatterjee proposed that a new type specimen, a complete skeleton, be designated. The International Commission on Zoological Nomenclature considered the case and agreed to replace the original type specimen with the proposed neotype.
The procedures for the designation of a new type specimen when the original is lost come into play for some recent, high-profile species descriptions in which the specimen designated as the holotype was a living individual that was allowed to remain in the wild (e.g. a new species of capuchin monkey, genus Cebus, the bee species Marleyimyia xylocopae, or the Arunachal macaque Macaca munzala). In such a case, there is no actual type specimen available for study, and the possibility exists that—should there be any perceived ambiguity in the identity of the species—subsequent authors can invoke various clauses in the ICZN Code that allow for the designation of a neotype. Article 75.3.7 of the ICZN requires that the designation of a neotype must be accompanied by "a statement that the neotype is, or immediately upon publication has become, the property of a recognized scientific or educational institution, cited by name, that maintains a research collection, with proper facilities for preserving name-bearing types, and that makes them accessible for study", but there is no such requirement for a holotype.
See also
Allotype (zoology)
Genetypes—genetic sequence data from type specimens
Paratype
Type (biology)
Type species
References
External links
BOA Photographs of type specimens of Neotropical Rhopalocera.
Zoological nomenclature
Botanical nomenclature | Holotype | [
"Biology"
] | 916 | [
"Botanical nomenclature",
"Zoological nomenclature",
"Botanical terminology",
"Biological nomenclature"
] |
347,560 | https://en.wikipedia.org/wiki/Inheritance%20%28genetic%20algorithm%29 | In genetic algorithms, inheritance is the ability of modeled objects to mate, mutate (similar to biological mutation), and propagate their problem-solving genes to the next generation, in order to produce an evolved solution to a particular problem. The selection of objects that will be inherited from in each successive generation is determined by a fitness function, which varies depending upon the problem being addressed.
The traits of these objects are passed on through chromosomes by a means similar to biological reproduction. These chromosomes are generally represented by a series of genes, which in turn are usually represented using binary numbers. This propagation of traits between generations is similar to the inheritance of traits between generations of biological organisms. The process can also be viewed as a form of reinforcement learning, because the evolution of the objects is driven by the passing on of traits from successful objects, which acts as a reward for their success and thereby promotes beneficial traits.
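For concreteness, a chromosome can be stored as a plain bit string and scored by a problem-specific fitness function. The Python sketch below uses a toy "count the ones" fitness and simple truncation selection purely as illustrative assumptions; a real application would substitute its own fitness function and selection scheme.

```python
def fitness(chromosome: str) -> int:
    # Toy fitness function: the number of '1' genes.
    # Real problems substitute their own scoring here.
    return chromosome.count('1')

def select(population: list[str], k: int) -> list[str]:
    # Truncation selection: keep the k fittest objects for
    # reproduction (one of several possible selection schemes).
    return sorted(population, key=fitness, reverse=True)[:k]
```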
Process
Once a new generation is to be created, the individuals that have been successful enough to be chosen for reproduction are randomly paired together. The traits of these individuals are then passed on through a combination of crossover and mutation. This process follows these basic steps:
Pair off successful objects for mating.
Determine randomly a crossover point for each pair.
Switch the genes after the crossover point in each pair.
Determine randomly if any genes are mutated in the child objects.
After following these steps, two child objects will be produced for every pair of parent objects used. Then, after evaluating the success of the objects in the new generation, this process can be repeated with whichever new objects were most successful. This is usually repeated until either a set number of generations has been produced or an object that meets a minimum desired result from the fitness function is found.
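A minimal Python sketch of one such generation step, assuming fixed-length bit-string chromosomes and a small per-bit mutation probability (both choices are illustrative, not prescribed by the algorithm):

```python
import random

def mutate(bits: str, rate: float) -> str:
    # Step 4: flip each gene independently with probability `rate`.
    out = []
    for b in bits:
        if random.random() < rate:
            b = '1' if b == '0' else '0'
        out.append(b)
    return ''.join(out)

def next_generation(selected: list[str], rate: float = 0.01) -> list[str]:
    # Step 1: pair off the objects chosen for reproduction at random.
    parents = selected[:]
    random.shuffle(parents)
    children = []
    for p1, p2 in zip(parents[::2], parents[1::2]):
        # Step 2: pick a random crossover point for this pair.
        point = random.randint(1, len(p1) - 1)
        # Step 3: swap the genes after the crossover point.
        c1 = p1[:point] + p2[point:]
        c2 = p2[:point] + p1[point:]
        children.append(mutate(c1, rate))
        children.append(mutate(c2, rate))
    return children
```

Calling `next_generation` repeatedly on the fittest survivors of each round, until a generation limit or fitness threshold is reached, implements the loop described above.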
While crossover and mutation are the common genetic operators used in inheritance, there are also other operators such as regrouping and colonization-extinction.
Example
Assume these two strings of bits represent the traits being passed on by two parent objects:
Object 1: 1100011010110001
Object 2: 1001100110011001
Now, consider that the crossover point is randomly positioned after the fifth bit:
Object 1: 11000 | 11010110001
Object 2: 10011 | 00110011001
During crossover, the two objects will swap all of the bits after the crossover point, leading to:
Object 1: 11000 | 00110011001
Object 2: 10011 | 11010110001
Finally, mutation is simulated by randomly flipping zero or more bits in each object. Assuming the tenth bit of object 1 is mutated, and the second and seventh bits of object 2 are mutated, the final children produced by this inheritance would be:
Object 1: 1100000111011001
Object 2: 1101110010110001
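The following Python sketch reproduces this worked example exactly, with the crossover point and the mutated bit positions hard-coded (1-indexed) to match the text; the assertions confirm the results above. In an actual run these values would be chosen at random, as in the generation-step sketch earlier.

```python
def crossover(a: str, b: str, point: int) -> tuple[str, str]:
    # Swap all genes after the (1-indexed) crossover point.
    return a[:point] + b[point:], b[:point] + a[point:]

def flip_bits(bits: str, positions: list[int]) -> str:
    # Flip the given (1-indexed) bit positions; a real run would
    # instead flip each bit with a small random probability.
    out = list(bits)
    for p in positions:
        out[p - 1] = '1' if out[p - 1] == '0' else '0'
    return ''.join(out)

parent1 = "1100011010110001"
parent2 = "1001100110011001"

child1, child2 = crossover(parent1, parent2, 5)
child1 = flip_bits(child1, [10])     # tenth bit mutated
child2 = flip_bits(child2, [2, 7])   # second and seventh bits mutated

assert child1 == "1100000111011001"
assert child2 == "1101110010110001"
```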
See also
Artificial intelligence
Bioinformatics
Speciation (genetic algorithm)
References
External links
BoxCar 2D An interactive example of the use of a genetic algorithm to construct 2-dimensional cars.
Genetic algorithms | Inheritance (genetic algorithm) | [
"Biology"
] | 607 | [
"Genetics techniques",
"Genetic algorithms"
] |