Dataset columns: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items).
5,653,768
https://en.wikipedia.org/wiki/XEUS
XEUS (X-ray Evolving Universe Spectroscopy) was a space observatory plan developed by the European Space Agency (ESA) as a successor to the successful XMM-Newton X-ray satellite telescope. It was merged into the International X-ray Observatory (IXO) around 2008, but when that project ran into difficulties in 2011, the ESA component was forked off into the Advanced Telescope for High Energy Astrophysics (Athena). XEUS consisted of a mirror spacecraft that carried a large X-ray telescope, with a mirror area of about 5 m² and an imaging resolution better than 5 arcsec for X-ray radiation with an energy of 1 keV. A detector spacecraft would have flown in formation with the telescope at a distance of approximately 50 m, at the focus of the telescope. The detectors would have included a wide-field X-ray imager with an energy resolution of 150 eV at 6 keV, as well as a cryogenic narrow-field imager with an energy resolution of 2 eV at 1 keV. XEUS could have measured the X-ray spectrum, and thereby the composition, temperature, and velocities, of hot matter in the early universe. It would have addressed diverse questions such as the origin and nature of black holes, their relation to star formation, the evolution of baryons, and the formation of the heavy elements in the Universe. The technology developed for XEUS carried over to its follow-on project, the International X-ray Observatory, eventually leading to the Advanced Telescope for High Energy Astrophysics (Athena), which is currently under development. XEUS was one of the candidates for the Cosmic Vision programme of the European Space Agency.
Recent developments
In May 2008, ESA and NASA established a coordination group involving three agencies (ESA, NASA and JAXA) with the intent of exploring a joint mission merging the ongoing XEUS and Constellation-X (Con-X) projects. This group proposed the start of a joint study for the International X-ray Observatory (IXO).
See also
Calorimeter
References
External links
ESA International X-Ray Observatory Mission Site
Press Release: Micronit and Cosine Develop Next-Generation ESA X-ray Telescope
NASA International X-Ray Observatory Mission Site
European Space Agency space probes
Cancelled spacecraft
Space telescopes
X-ray telescopes
XEUS
[ "Astronomy" ]
456
[ "Space telescopes" ]
5,653,826
https://en.wikipedia.org/wiki/Urea-to-creatinine%20ratio
In medicine, the urea-to-creatinine ratio (UCR), known in the United States as the BUN-to-creatinine ratio, is the ratio of the blood levels of urea (BUN) (mmol/L) and creatinine (Cr) (μmol/L). Because BUN reflects only the nitrogen content of urea (MW 28) while the urea measurement reflects the whole molecule (MW 60), urea is just over twice BUN (60/28 = 2.14). In the United States, both quantities are given in mg/dL. The ratio may be used to determine the cause of acute kidney injury or dehydration. The principle behind this ratio is that both urea (BUN) and creatinine are freely filtered by the glomerulus; however, urea reabsorption by the renal tubules can be regulated (increased or decreased), whereas creatinine reabsorption remains minimal.
Definition
Urea and creatinine are nitrogenous end products of metabolism. Urea is the primary metabolite derived from dietary protein and tissue protein turnover. Creatinine is the product of muscle creatine catabolism. Both are relatively small molecules (60 and 113 daltons, respectively) that distribute throughout total body water. In Europe, the whole urea molecule is assayed, whereas in the United States only the nitrogen component of urea (the blood or serum urea nitrogen, i.e., BUN or SUN) is measured. The BUN, then, is roughly one-half (7/15, or 0.466) of the blood urea. The normal range of urea nitrogen in blood or serum is 5 to 20 mg/dl, or 1.8 to 7.1 mmol urea per liter. The range is wide because of normal variations due to protein intake, endogenous protein catabolism, state of hydration, hepatic urea synthesis, and renal urea excretion. A BUN of 15 mg/dl would represent significantly impaired function for a woman in the thirtieth week of gestation: her higher glomerular filtration rate (GFR), expanded extracellular fluid volume, and anabolism in the developing fetus contribute to a relatively low BUN of 5 to 7 mg/dl. In contrast, a rugged rancher who eats in excess of 125 g of protein each day may have a normal BUN of 20 mg/dl. The normal serum creatinine (sCr) varies with the subject's muscle mass and with the technique used to measure it. For the adult male, the normal range is 0.6 to 1.2 mg/dl, or 53 to 106 μmol/L, by the kinetic or enzymatic method, and 0.8 to 1.5 mg/dl, or 70 to 133 μmol/L, by the older manual Jaffé reaction. For the adult female, with her generally lower muscle mass, the normal range is 0.5 to 1.1 mg/dl, or 44 to 97 μmol/L, by the enzymatic method.
Technique
Multiple methods for analysis of BUN and creatinine have evolved over the years. Most of those in current use are automated and give clinically reliable and reproducible results. There are two general methods for the measurement of urea nitrogen. The diacetyl, or Fearon, reaction develops a yellow chromogen with urea, which is quantified by photometry. It has been modified for use in autoanalyzers and generally gives relatively accurate results. It still has limited specificity, however, as illustrated by spurious elevations with sulfonylurea compounds and by colorimetric interference from hemoglobin when whole blood is used. In the more specific enzymatic methods, the enzyme urease converts urea to ammonia and carbonic acid. These products, which are proportional to the concentration of urea in the sample, are assayed in a variety of systems, some of which are automated. One system measures the decrease in absorbance at 340 nm when the ammonia reacts with alpha-ketoglutaric acid.
The Astra system measures the rate of increase in conductivity of the solution in which urea is hydrolyzed. Even though the test is now performed mostly on serum, the term BUN is still retained by convention. The specimen should not be collected in tubes containing sodium fluoride, because the fluoride inhibits urease. Chloral hydrate and guanethidine have also been observed to increase BUN values. The 1886 Jaffé reaction, in which creatinine is treated with an alkaline picrate solution to yield a red complex, is still the basis of the most commonly used methods for measuring creatinine. This reaction is nonspecific and subject to interference from many noncreatinine chromogens, including acetone, acetoacetate, pyruvate, ascorbic acid, glucose, cephalosporins, barbiturates, and protein. It is also sensitive to pH and temperature changes. One or another of the many modifications designed to nullify these sources of error is used in most clinical laboratories today. For example, the recent kinetic-rate modification, which isolates the brief time interval during which only true creatinine contributes to total color formation, is the basis of the Astra modular system. More specific, non-Jaffé assays have also been developed. One of these, an automated dry-slide enzymatic method, measures the ammonia generated when creatinine is hydrolyzed by creatinine iminohydrolase. Its simplicity, precision, and speed recommend it highly for routine use in the clinical laboratory; only 5-fluorocytosine interferes significantly with the test. Creatinine must be determined in plasma or serum, not whole blood, because erythrocytes contain considerable amounts of noncreatinine chromogens. To minimize the conversion of creatine to creatinine, specimens must be as fresh as possible and maintained at pH 7 during storage. The amount of urea produced varies with substrate delivery to the liver and the adequacy of liver function. It is increased by a high-protein diet, by gastrointestinal bleeding (based on a plasma protein level of 7.5 g/dl and a hemoglobin of 15 g/dl, 500 ml of whole blood is equivalent to 100 g of protein), by catabolic processes such as fever or infection, and by antianabolic drugs such as tetracyclines (except doxycycline) or glucocorticoids. It is decreased by a low-protein diet, by malnutrition or starvation, and by impaired metabolic activity in the liver due to parenchymal liver disease or, rarely, to congenital deficiency of urea cycle enzymes. The normal subject on a 70 g protein diet produces about 12 g of urea each day. This newly synthesized urea distributes throughout total body water. Some of it is recycled through the enterohepatic circulation. Usually, a small amount (less than 0.5 g/day) is lost through the gastrointestinal tract, lungs, and skin; during exercise, a substantial fraction may be excreted in sweat. The bulk of the urea, about 10 g each day, is excreted by the kidney in a process that begins with glomerular filtration. At high urine flow rates (greater than 2 ml/min), 40% of the filtered load is reabsorbed; at flow rates lower than 2 ml/min, reabsorption may increase to 60%. Low flow, as in urinary tract obstruction, allows more time for reabsorption and is often associated with increases in antidiuretic hormone (ADH), which increases the permeability of the terminal collecting tubule to urea. During ADH-induced antidiuresis, urea secretion contributes to the intratubular concentration of urea.
The subsequent buildup of urea in the inner medulla is critical to the process of urinary concentration. Reabsorption is also increased by volume contraction, by reduced renal plasma flow as in congestive heart failure, and by decreased glomerular filtration. Creatinine formation begins with the transamidination from arginine to glycine to form glycocyamine, or guanidoacetic acid (GAA). This reaction occurs primarily in the kidneys, but also in the mucosa of the small intestine and the pancreas. The GAA is transported to the liver, where it is methylated by S-adenosyl methionine (SAM) to form creatine. Creatine enters the circulation, and 90% of it is taken up and stored by muscle tissue.
Interpretation
Normal serum values
Serum ratios
The reference interval for a normal BUN:creatinine serum ratio is 12:1 to 20:1. An elevated BUN:Cr due to a low or low-normal creatinine with a BUN within the reference range is unlikely to be of clinical significance.
Specific causes of elevation
Acute kidney injury (previously termed acute renal failure)
The ratio is predictive of prerenal injury when BUN:Cr exceeds 20 or when urea:Cr exceeds 100. In prerenal injury, urea increases disproportionately to creatinine due to enhanced proximal tubular reabsorption that follows the enhanced transport of sodium and water.
Gastrointestinal bleeding
The ratio is useful for the diagnosis of bleeding from the gastrointestinal (GI) tract in patients who do not present with overt vomiting of blood. In children, a BUN:Cr ratio of 30 or greater has a sensitivity of 68.8% and a specificity of 98% for upper gastrointestinal bleeding. A common assumption is that the ratio is elevated because of amino acid digestion: blood (excluding water) consists largely of the protein hemoglobin, which is broken down by digestive enzymes of the upper GI tract into amino acids; these are reabsorbed in the GI tract and broken down into urea. However, elevated BUN:Cr ratios are not observed when other high-protein loads (e.g., steak) are consumed. Renal hypoperfusion secondary to the blood lost from the GI bleed has been postulated to explain the elevated BUN:Cr ratio, but other research has found that renal hypoperfusion cannot fully explain the elevation.
Advanced age
Because of decreased muscle mass, elderly patients may have an elevated BUN:Cr at baseline.
Other causes
Hypercatabolic states, high-dose glucocorticoids, and resorption of large hematomas have all been cited as causes of a disproportionate rise in BUN relative to the creatinine.
References
External links
Diagnostic nephrology
Gastroenterology
Ratios
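The unit conversions and decision thresholds described in this article are simple arithmetic. The following minimal sketch (Python; the function names are illustrative, and US-style units of mg/dl for both BUN and creatinine are assumed, none of which comes from the article) shows the urea-to-BUN mass relationship and the 12:1 to 20:1 interpretive interval:

```python
# Hypothetical helpers illustrating the arithmetic in the article.
UREA_MW, UREA_N_MW = 60.0, 28.0  # g/mol: whole urea molecule vs. its nitrogen content

def bun_from_urea(urea_mg_dl: float) -> float:
    """BUN reflects only the nitrogen fraction of urea: 28/60 of its mass."""
    return urea_mg_dl * UREA_N_MW / UREA_MW  # hence urea ~= 2.14 x BUN

def interpret_bun_cr(bun_mg_dl: float, creatinine_mg_dl: float) -> str:
    """Apply the 12:1 to 20:1 reference interval; >20 suggests a prerenal cause."""
    ratio = bun_mg_dl / creatinine_mg_dl
    if ratio > 20:
        return f"{ratio:.1f}: elevated, suggests a prerenal cause"
    if ratio < 12:
        return f"{ratio:.1f}: below the usual reference interval"
    return f"{ratio:.1f}: within the usual reference interval"

print(bun_from_urea(42.0))          # ~19.6 mg/dl of urea nitrogen
print(interpret_bun_cr(32.0, 1.0))  # 32.0: elevated, suggests a prerenal cause
```

Note that the ratio is unit-dependent: with urea in mmol/L and creatinine in μmol/L, as in the opening definition, the corresponding prerenal threshold is a urea:Cr ratio above 100 rather than 20.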
Urea-to-creatinine ratio
[ "Mathematics" ]
2,318
[ "Arithmetic", "Ratios" ]
5,654,027
https://en.wikipedia.org/wiki/Synapse
In the nervous system, a synapse is a structure that allows a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron or to a target effector cell. Synapses can be classified as either chemical or electrical, depending on the mechanism of signal transmission between neurons. In the case of electrical synapses, neurons are coupled bidirectionally with each other through gap junctions and have a connected cytoplasmic milieu. These types of synapses are known to produce synchronous network activity in the brain, but can also result in complicated, chaotic network-level dynamics. Therefore, signal directionality cannot always be defined across electrical synapses. Synapses are essential for the transmission of neuronal impulses from one neuron to the next, playing a key role in enabling rapid and direct communication by creating circuits. In addition, a synapse serves as a junction where both the transmission and processing of information occur, making it a vital means of communication between neurons. At the synapse, the plasma membrane of the signal-passing neuron (the presynaptic neuron) comes into close apposition with the membrane of the target (postsynaptic) cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process. In many synapses, the presynaptic part is located on the terminals of axons and the postsynaptic part is located on a dendrite or soma. Astrocytes also exchange information with the synaptic neurons, responding to synaptic activity and, in turn, regulating neurotransmission. Synapses (at least chemical synapses) are stabilized in position by synaptic adhesion molecules (SAMs) projecting from both the pre- and postsynaptic neuron and sticking together where they overlap; SAMs may also assist in the generation and functioning of synapses. Moreover, SAMs coordinate the formation of synapses, with various types working together to achieve the remarkable specificity of synapses. SAMs function in both excitatory and inhibitory synapses, likely serving as mediators of signal transmission.
History
Santiago Ramón y Cajal proposed that neurons are not continuous throughout the body, yet still communicate with each other, an idea known as the neuron doctrine. The word "synapse" was introduced in 1897 by the English neurophysiologist Charles Sherrington in Michael Foster's Textbook of Physiology. Sherrington struggled to find a good term that emphasized a union between two separate elements, and the actual term "synapse" was suggested by the English classical scholar Arthur Woollgar Verrall, a friend of Foster. The word was derived from the Greek synapsis, meaning "conjunction", which in turn derives from synaptein, from syn, "together", and haptein, "to fasten". The synaptic gap, however, remained a theoretical construct: it was sometimes reported as a discontinuity between contiguous axonal terminations and dendrites or cell bodies, but histological methods using the best light microscopes of the day could not visually resolve the separation, which is now known to be about 20 nm. It took the electron microscope in the 1950s to show the finer structure of the synapse, with its separate, parallel pre- and postsynaptic membranes and processes, and the cleft between the two.
Types
Chemical and electrical synapses are the two modes of synaptic transmission.
In a chemical synapse, electrical activity in the presynaptic neuron is converted (via the activation of voltage-gated calcium channels) into the release of a chemical called a neurotransmitter, which binds to receptors located in the plasma membrane of the postsynaptic cell. The neurotransmitter may initiate an electrical response or a second-messenger pathway that may either excite or inhibit the postsynaptic neuron. Chemical synapses can be classified according to the neurotransmitter released: glutamatergic (often excitatory), GABAergic (often inhibitory), cholinergic (e.g. the vertebrate neuromuscular junction), and adrenergic (releasing norepinephrine). Because of the complexity of receptor signal transduction, chemical synapses can have complex effects on the postsynaptic cell. In an electrical synapse, the presynaptic and postsynaptic cell membranes are connected by special channels called gap junctions that are capable of passing an electric current, causing voltage changes in the presynaptic cell to induce voltage changes in the postsynaptic cell. Gap junctions allow the direct flow of electrical current, without the need for neurotransmitters, and also pass small molecules such as calcium ions. The main advantage of an electrical synapse is thus the rapid transfer of signals from one cell to the next. Mixed chemical-electrical synapses are synaptic sites that feature both a gap junction and neurotransmitter release; this combination allows a signal to have both a fast component (electrical) and a slow component (chemical). The formation of neural circuits in nervous systems appears to depend heavily on interactions between chemical and electrical synapses, which together govern synaptic transmission. Synaptic communication is distinct from ephaptic coupling, in which communication between neurons occurs via indirect electric fields. An autapse is a chemical or electrical synapse that forms when the axon of one neuron synapses onto the dendrites of the same neuron.
Excitatory and inhibitory
Excitatory synapse: Enhances the probability of depolarization in the postsynaptic neuron and the initiation of an action potential.
Inhibitory synapse: Diminishes the probability of depolarization in the postsynaptic neuron and the initiation of an action potential.
Excitatory neurotransmitters open cation channels, driving an influx of Na+ that depolarizes the postsynaptic membrane toward the action potential threshold. In contrast, inhibitory neurotransmitters cause the postsynaptic membrane to become less depolarized by opening either Cl- or K+ channels, reducing firing. Depending on their release location, the receptors they bind to, and the ionic circumstances they encounter, various transmitters can be either excitatory or inhibitory. For instance, acetylcholine can either excite or inhibit depending on the type of receptors it binds to. Glutamate serves as an excitatory neurotransmitter, in contrast to GABA, which acts as an inhibitory neurotransmitter; dopamine exerts dual effects, displaying both excitatory and inhibitory impacts through binding to distinct receptors. The membrane potential prevents Cl- from entering the cell, even when its concentration is much higher outside than inside. The reversal potential for Cl- in many neurons is quite negative, nearly equal to the resting potential.
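As a worked illustration of that last claim (the concentrations used here are typical textbook values, not figures from the article), the reversal potential follows from the Nernst equation, with z = -1 for chloride and RT/F of about 26.7 mV at body temperature. For extracellular Cl- of about 120 mM and intracellular Cl- of about 10 mM:

$$E_{\mathrm{Cl}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{Cl}^-]_{\mathrm{out}}}{[\mathrm{Cl}^-]_{\mathrm{in}}} = -26.7\ \mathrm{mV} \times \ln\frac{120}{10} \approx -66\ \mathrm{mV},$$

which is indeed close to a typical resting potential of -65 to -70 mV.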
Opening Cl- channels tends to buffer the membrane potential: as the membrane starts to depolarize, more negatively charged Cl- ions enter the cell and oppose the depolarization. Consequently, it becomes more difficult to depolarize the membrane and excite the cell when Cl- channels are open. Similar effects result from the opening of K+ channels. The significance of inhibitory neurotransmitters is evident from the effects of toxins that impede their activity. For instance, strychnine binds to glycine receptors, blocking the action of glycine and leading to muscle spasms, convulsions, and death.
Interfaces
Synapses can be classified by the type of cellular structures serving as the pre- and postsynaptic components. The vast majority of synapses in the mammalian nervous system are classical axo-dendritic synapses (an axon synapsing upon a dendrite); however, a variety of other arrangements exist. These include but are not limited to axo-axonic, dendro-dendritic, axo-secretory, axo-ciliary, somato-dendritic, dendro-somatic, and somato-somatic synapses. An axon can synapse onto a dendrite, onto a cell body, or onto another axon or axon terminal, as well as into the bloodstream or diffusely into the adjacent nervous tissue.
Conversion of chemical into electrical signals
Neurotransmitters are small signal molecules stored in membrane-enclosed synaptic vesicles and released via exocytosis; a change in electrical potential in the presynaptic cell triggers their release. The neurotransmitter rapidly diffuses across the synaptic cleft and, by attaching to transmitter-gated ion channels, causes an electrical alteration in the postsynaptic cell. Once released, the neurotransmitter is swiftly eliminated: it is either reabsorbed by the nerve terminal that produced it, taken up by nearby glial cells, or broken down by specific enzymes in the synaptic cleft. Numerous Na+-dependent neurotransmitter carrier proteins recycle the neurotransmitters and enable the cells to maintain rapid rates of release. At chemical synapses, transmitter-gated ion channels play a vital role in rapidly converting extracellular chemical signals into electrical signals. These channels are located in the postsynaptic cell's plasma membrane at the synapse region, and they open temporarily in response to neurotransmitter binding, causing a momentary alteration in the membrane's permeability. Transmitter-gated channels are comparatively less sensitive to the membrane potential than voltage-gated channels, which is why they cannot generate self-amplifying excitation on their own. Instead, they produce graded variations in membrane potential through local permeability changes, influenced by the amount and duration of neurotransmitter released at the synapse. Recently, mechanical tension, a phenomenon never previously thought relevant to synapse function, has been found to be required for synapses on hippocampal neurons to fire.
Release of neurotransmitters
Neurotransmitters bind to ionotropic receptors on postsynaptic neurons, causing them to open or close. Variations in the quantity of neurotransmitter released from the presynaptic neuron may play a role in regulating the effectiveness of synaptic transmission; the concentration of cytoplasmic calcium is involved in regulating this release.
Chemical transmission involves several sequential processes:
Synthesizing neurotransmitters within the presynaptic neuron.
Loading the neurotransmitters into secretory vesicles.
Controlling the release of neurotransmitters into the synaptic cleft.
Binding of neurotransmitters to postsynaptic receptors.
Ceasing the activity of the released neurotransmitters.
Synaptic polarization
The function of neurons depends upon cell polarity. The distinctive structure of nerve cells allows action potentials to travel directionally (from dendrites to cell body, down the axon), and for these signals to then be received and carried on by postsynaptic neurons or received by effector cells. Nerve cells have long been used as models for cellular polarization, and of particular interest are the mechanisms underlying the polarized localization of synaptic molecules. PIP2 signaling regulated by IMPase plays an integral role in synaptic polarity. Phosphoinositides (PIP, PIP2, and PIP3) are molecules that have been shown to affect neuronal polarity. A gene (ttx-7) was identified in Caenorhabditis elegans that encodes myo-inositol monophosphatase (IMPase), an enzyme that produces inositol by dephosphorylating inositol phosphate. Organisms with mutant ttx-7 genes demonstrated behavioral and localization defects, which were rescued by expression of IMPase. This led to the conclusion that IMPase is required for the correct localization of synaptic protein components. The egl-8 gene encodes a homolog of phospholipase Cβ (PLCβ), an enzyme that cleaves PIP2. When ttx-7 mutants also had a mutant egl-8 gene, the defects caused by the faulty ttx-7 gene were largely reversed. These results suggest that PIP2 signaling establishes the polarized localization of synaptic components in living neurons.
Presynaptic modulation
Modulation of neurotransmitter release by G-protein-coupled receptors (GPCRs) is a prominent presynaptic mechanism for regulation of synaptic transmission. The activation of GPCRs located at the presynaptic terminal can decrease the probability of neurotransmitter release. This presynaptic depression involves activation of Gi/o-type G-proteins that mediate different inhibitory mechanisms, including inhibition of voltage-gated calcium channels, activation of potassium channels, and direct inhibition of the vesicle fusion process. Endocannabinoids and their cognate receptors, including the GPCR CB1 receptor located at the presynaptic terminal, are involved in this modulation through a retrograde signaling process: these compounds are synthesized in and released from postsynaptic neuronal elements and travel back to the presynaptic terminal to act on the CB1 receptor, producing short-term or long-term synaptic depression, that is, a short- or long-lasting decrease in neurotransmitter release.
Effects of drugs on ligand-gated ion channels
Transmitter-gated ion channels have long been considered crucial targets for drugs. The majority of medications used to treat schizophrenia, anxiety, depression, and sleeplessness act at chemical synapses, and many of these pharmaceuticals function by binding to transmitter-gated channels. For instance, drugs such as barbiturates and tranquilizers bind to GABA receptors and enhance the inhibitory effect of the neurotransmitter GABA, so that a reduced concentration of GABA suffices to open Cl- channels.
Furthermore, psychoactive drugs could potentially target many other components of the synaptic signalling machinery. Numerous neurotransmitters are removed from the synaptic cleft by Na+-driven carrier proteins; by inhibiting such carriers, synaptic transmission is strengthened, as the action of the transmitter is prolonged. For example, Prozac is an antidepressant medication that works by preventing the reuptake of the neurotransmitter serotonin, and other antidepressants operate by inhibiting the reuptake of both serotonin and norepinephrine.
Biogenesis
In nerve terminals, synaptic vesicles are produced quickly to compensate for their rapid depletion during neurotransmitter release. Their biogenesis involves segregating synaptic vesicle membrane proteins from other cellular proteins and packaging those distinct proteins into vesicles of appropriate size. It also entails the endocytosis of synaptic vesicle membrane proteins from the plasma membrane. Synaptoblastic and synaptoclastic refer to synapse-producing and synapse-removing activities within the biochemical signalling chain. This terminology is associated with the Bredesen Protocol for treating Alzheimer's disease, which conceptualizes Alzheimer's as an imbalance between these processes. As of October 2023, studies concerning this protocol remain small, and few results have been obtained within a standardized control framework.
Role in memory
Potentiation and depression
It is widely accepted that the synapse plays a key role in the formation of memory. The stability of long-term memory can persist for many years; nevertheless, synapses, the neurological basis of memory, are very dynamic. The formation of synaptic connections depends significantly on activity-dependent synaptic plasticity, observed in various synaptic pathways. Indeed, the connection between memory formation and alterations in synaptic efficacy enables the reinforcement of interactions between neurons. As neurotransmitters activate receptors across the synaptic cleft, the connection between two neurons is strengthened when both are active at the same time, as a result of the receptor's signaling mechanisms. The strength of two connected neural pathways is thought to result in the storage of information, and thereby memory. This process of synaptic strengthening is known as long-term potentiation (LTP). Plasticity can be controlled in the presynaptic cell by altering the release of neurotransmitters, while the postsynaptic cell can be regulated by altering the function and number of its receptors. Changes in postsynaptic signaling are most commonly associated with N-methyl-d-aspartic acid receptor (NMDAR)-dependent LTP and long-term depression (LTD), driven by the influx of calcium into the postsynaptic cell; these are the most analyzed forms of plasticity at excitatory synapses.
Mechanism of protein kinase
Ca2+/calmodulin (CaM)-dependent protein kinase II (CaMKII) is best recognized for its roles in the brain, particularly in the neocortex and hippocampal regions, because it serves as a ubiquitous mediator of cellular Ca2+ signals. CaMKII is abundant in the nervous system, concentrated mainly at the synapses of nerve cells. Indeed, CaMKII has been definitively identified as a key regulator of cognitive processes, such as learning, and of neural plasticity.
Concrete experimental evidence for the long-assumed function of CaMKII in memory storage has since been demonstrated. While Ca2+/CaM binding stimulates CaMKII activity, Ca2+-independent autonomous CaMKII activity can also be produced by a number of other processes. CaMKII becomes active by autophosphorylating itself upon Ca2+/calmodulin binding; it remains active and continues to phosphorylate itself even after Ca2+/CaM dissociates, and the brain uses this mechanism to store long-term memories. Conversely, when the CaMKII enzyme is dephosphorylated by a phosphatase, it becomes inactive, and memories are lost. Hence, CaMKII plays a vital role in both the induction and maintenance of LTP.
Experimental models
For technical reasons, synaptic structure and function have historically been studied at unusually large model synapses, for example:
Squid giant synapse
Neuromuscular junction (NMJ), a cholinergic synapse in vertebrates, glutamatergic in insects
Ciliary calyx in the ciliary ganglion of chicks
Calyx of Held in the brainstem
Ribbon synapse in the retina
Schaffer collateral synapses in the hippocampus. These synapses are small, but their pre- and postsynaptic neurons are well separated (CA3 and CA1, respectively).
Synapses and diseases
Synapses function as ensembles within particular brain networks to control the amount of neuronal activity, which is essential for memory, learning, and behavior. Consequently, synaptic disruptions can have negative effects: alterations in cell-intrinsic molecular systems or modifications to environmental biochemical processes can lead to synaptic dysfunction. The synapse is the primary unit of information transfer in the nervous system, and correct synaptic contact formation during development is essential for normal brain function. Several mutations have been connected to neurodevelopmental disorders, and compromised function at different synapse locations is a hallmark of neurodegenerative diseases. Synaptic defects are causally associated with early-appearing neurological diseases, including autism spectrum disorders (ASD), schizophrenia (SCZ), and bipolar disorder (BP). In late-onset degenerative pathologies, on the other hand, such as Alzheimer's (AD), Parkinson's (PD), and Huntington's (HD) diseases, synaptopathy is thought to be the inevitable end result of an ongoing pathophysiological cascade. These diseases are identified by a gradual loss of cognitive and behavioral function and a steady loss of brain tissue. These deteriorations have been linked mostly to the gradual build-up of protein aggregates in neurons, whose composition may vary with the pathology; all have the same deleterious effects on neuronal integrity. Furthermore, the high number of mutations linked to synaptic structure and function, as well as dendritic spine alterations in post-mortem tissue, has led to the association between synaptic defects and neurodevelopmental disorders, such as ASD and SCZ, characterized by abnormal behavioral or cognitive phenotypes. Nevertheless, due to limited access to human tissue at late disease stages and a lack of thorough assessment of the essential components of human diseases in the available experimental animal models, it has been difficult to fully grasp the origin and role of synaptic dysfunction in neurological disorders.
See also
Active zone
Autapse
Cooperative synapse formation
Exocytosis
Immunological synapse
Neurotransmitter vesicle
Neurexin
Postsynaptic density
Synaptopathy
References
Signal transduction
Synapse
[ "Chemistry", "Biology" ]
4,570
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
5,654,032
https://en.wikipedia.org/wiki/HTC%20Typhoon
The HTC Typhoon is a smartphone that runs the Microsoft Windows Mobile operating system. The phone is manufactured by the Taiwanese HTC Corporation (HTC). At the time the Typhoon was made, HTC was not in the business of selling devices to end users; instead, the company had many partners who would rebrand and distribute its devices. It is based on the ARM-based Texas Instruments OMAP 730 processor running at 200 MHz. It has 32 MB of internal RAM and 64 MB of flash ROM, expandable via a miniSD slot. It has a TFT display with 65,536 colours at a resolution of 176x220. It runs Microsoft Windows Mobile 2003 SE as its operating system; however, it is also capable of running Windows Mobile 5.0, after a version was leaked onto the internet. It supports Java applications. Additionally, hacked, or "cooked", versions of Windows Mobile 6, 6.1 and 6.5 have circulated on the internet.
Versions
"Typhoon" is the HTC codename for this device, which has been rebranded by several distributors and cell phone carriers under the following names:
Audiovox SMT5600
Dopod 565
i-mate SP3
Krome Intellekt iQ700
Orange SPV C500
Qtek 8010
Vitelcom/Movistar TSM520
O2 Xphone IIm
External links
Review of the C500
Windows Mobile Standard devices
Typhoon
References
HTC Typhoon
[ "Technology" ]
295
[ "Mobile technology stubs", "Mobile phone stubs" ]
5,654,154
https://en.wikipedia.org/wiki/Immunological%20synapse
In immunology, an immunological synapse (or immune synapse) is the interface between an antigen-presenting cell or target cell and a lymphocyte such as a T cell, B cell, or natural killer cell. The interface was originally named after the neuronal synapse, with which it shares the main structural pattern. An immunological synapse consists of molecules involved in T cell activation, which compose typical patterns: activation clusters. Immunological synapses are the subject of much ongoing research.
Structure and function
The immune synapse is also known as the supramolecular activation cluster, or SMAC. This structure is composed of concentric rings, each containing segregated clusters of proteins, often referred to as the bull's-eye model of the immunological synapse:
c-SMAC (central SMAC), composed of the θ isoform of protein kinase C, CD2, CD4, CD8, CD28, Lck, and Fyn.
p-SMAC (peripheral SMAC), within which the lymphocyte function-associated antigen-1 (LFA-1) and the cytoskeletal protein talin are clustered.
d-SMAC (distal SMAC), enriched in CD43 and CD45 molecules.
Newer investigations, however, have shown that a bull's eye is not present in all immunological synapses; for example, different patterns appear in the synapse between a T cell and a dendritic cell. This complex as a whole is postulated to have several functions, including but not limited to:
Regulation of lymphocyte activation
Transfer of peptide-MHC complexes from APCs to lymphocytes
Directing secretion of cytokines or lytic granules
Recent research has proposed a striking parallel between the immunological synapse and the primary cilium, based mainly on similar actin rearrangement, the orientation of the centrosome towards the structure, and the involvement of similar transport molecules (such as IFT20, Rab8, and Rab11). This structural and functional homology is the topic of ongoing research.
Formation
The initial interaction occurs between LFA-1, present in the p-SMAC of a T cell, and non-specific adhesion molecules (such as ICAM-1 or ICAM-2) on a target cell. When bound to a target cell, the T cell can extend pseudopodia and scan the surface of the target cell to find a specific peptide:MHC complex. The process of formation begins when the T-cell receptor (TCR) binds to the peptide:MHC complex on the antigen-presenting cell and initiates signaling activation through the formation of microclusters/lipid rafts. Specific signaling pathways lead to polarization of the T cell by orienting its centrosome toward the site of the immunological synapse. The symmetric centripetal flow of actin is the basis of the formation of the p-SMAC ring. The accumulation and polarization of actin is triggered by TCR/CD3 interactions with integrins and small GTPases (such as Rac1 or Cdc42). These interactions activate large multi-molecular complexes (containing WAVE (Scar), HSP300, ABL2, SRA1, NAP1, and others) that associate with Arp2/3, which directly promotes actin polymerization. As actin is accumulated and reorganized, it promotes clustering of TCRs and integrins; the process thereby upregulates itself via positive feedback. Some parts of this process may differ in CD4+ and CD8+ cells. For example, synapse formation is quick in CD8+ T cells, because it is fundamental for CD8+ T cells to eliminate pathogens quickly; in CD4+ T cells, however, the whole process of immunological synapse formation can take up to 6 hours. In CD8+ T cells, synapse formation leads to killing of the target cell via secretion of cytolytic enzymes.
CD8+ T lymphocytes contain lytic granules: specialized secretory lysosomes filled with perforin, granzymes, lysosomal hydrolases (for example, cathepsins B and D, and β-hexosaminidase), and other cytolytic effector proteins. Once these proteins are delivered to the target cell, they induce its apoptosis. The effectiveness of killing depends on the strength of the TCR signal: even after receiving weak or short-lived signals, the MTOC polarizes towards the immunological synapse, but in that case the lytic granules are not trafficked, and the killing effect is therefore missing or poor.
NK-cell synapse
NK cells are known to form synapses with cytolytic effect towards the target cell. In the initiation step, the NK cell approaches the target cell, either by chance or directed by chemotactic signalling. First, the sialyl Lewis X present on the surface of the target cell is recognized by CD2 on the NK cell. If the KIR receptors of the NK cell find their cognate antigen on the surface of the target cell, formation of the lytic synapse is inhibited. If such a signal is missing, tight adhesion via LFA1 and MAC1 is promoted and enhanced by additional signals such as CD226-ligand and CD96-CD155 interactions. Lytic granules are secretory organelles filled with perforin, granzymes, and other cytolytic enzymes. After initiation of the cell-cell contact, the lytic granules of the NK cell move along microtubules towards the centrosome, which itself relocalizes towards the site of the synapse. The contents of the lytic granules are then released and transferred to the target cell via vesicles bearing SNARE proteins.
Inhibitory immunological synapse of NK cells
When an NK cell encounters a self cell, it forms a so-called inhibitory immunological synapse to prevent unwanted cytolysis of the target cell. In this process, the killer-cell immunoglobulin-like receptors (KIRs), which contain long cytoplasmic tails with immunoreceptor tyrosine-based inhibitory motifs (ITIMs), cluster at the site of the synapse, bind their ligand on the surface of the target cell, and form the supramolecular inhibitory cluster (SMIC). The SMIC then acts to prevent rearrangement of actin, block the recruitment of activatory receptors to the site of the synapse, and, finally, promote detachment from the target cell. This process is essential in protecting NK cells from killing self cells.
History
Immunological synapses were first discovered by Abraham Kupfer at the National Jewish Medical and Research Center in Denver. Their name was coined by Michael Dustin at NYU, who studied them in further detail. Daniel M. Davis and Jack Strominger showed structured immune synapses for a different lymphocyte, the natural killer cell, and published this around the same time. Abraham Kupfer first presented his findings during a Keystone Symposia meeting in 1995, when he showed three-dimensional images of immune cells interacting with one another. Key molecules in the synapse are the T cell receptor and its counterpart, the major histocompatibility complex (MHC). Also important are LFA-1, ICAM-1, CD28, and CD80/CD86.
References
External links
Immunological Synapse - Cell Centered Database
Immune system
Immunological synapse
[ "Biology" ]
1,606
[ "Immune system", "Organ systems" ]
5,654,239
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%203
Bone morphogenetic protein 3, also known as osteogenin, is a protein that in humans is encoded by the BMP3 gene. The protein encoded by this gene is a member of the transforming growth factor beta superfamily. Unlike other bone morphogenetic proteins (BMPs), it inhibits the ability of other BMPs to induce bone and cartilage development. It is a disulfide-linked homodimer. It negatively regulates bone density, acting as an antagonist of other BMPs in the differentiation of osteogenic progenitors. It is highly expressed in fractured tissues.
Cancer
BMP3 is hypermethylated in many cases of colorectal cancer (CRC) and hence, along with other hypermethylated genes, may be used as a biomarker to detect early-stage CRC.
References
External links
Further reading
Bone morphogenetic protein
Developmental genes and proteins
TGFβ domain
Bone morphogenetic protein 3
[ "Biology" ]
200
[ "Induced stem cells", "Developmental genes and proteins" ]
5,654,541
https://en.wikipedia.org/wiki/Software%20security%20assurance
Software security assurance is a process that helps design and implement software that protects the data and resources contained in and controlled by that software. Software is itself a resource and thus must be afforded appropriate security.
What is software security assurance?
Software security assurance (SSA) is the process of ensuring that software is designed to operate at a level of security consistent with the potential harm that could result from the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that it uses, controls, and protects. The software security assurance process begins by identifying and categorizing the information that is to be contained in, or used by, the software. The information should be categorized according to its sensitivity. For example, in the lowest category, the impact of a security violation is minimal (i.e., the impact on the software owner's mission, functions, or reputation is negligible). For a top category, however, the impact may pose a threat to human life; may have an irreparable impact on the software owner's missions, functions, image, or reputation; or may result in the loss of significant assets or resources. Once the information is categorized, security requirements can be developed. The security requirements should address access control, including network access and physical access; data management and data access; environmental controls (power, air conditioning, etc.) and off-line storage; human resource security; and audit trails and usage records.
What causes software security problems?
All security vulnerabilities in software are the result of security bugs, or defects, within the software. In most cases, these defects are created by two primary causes: (1) non-conformance, or a failure to satisfy requirements; and (2) an error or omission in the software requirements.
Non-conformance, or a failure to satisfy requirements
A non-conformance may be simple (the most common is a coding error or defect) or more complex (e.g., a subtle timing error or input validation error). The important point about non-conformances is that verification and validation techniques are designed to detect them, while security assurance techniques are designed to prevent them. Improvements in these methods, through a software security assurance program, can improve the security of software.
Errors or omissions in software requirements
The most serious security problems with software-based systems are those that develop when the software requirements are incorrect, inappropriate, or incomplete for the system situation. Unfortunately, errors or omissions in requirements are more difficult to identify. For example, the software may perform exactly as required under normal use, but the requirements may not correctly deal with some system state. When the system enters this problem state, unexpected and undesirable behavior may result. This type of problem cannot be handled within the software discipline; it results from a failure of the system and software engineering processes that developed and allocated the system requirements to the software.
Software security assurance activities
There are two basic types of software security assurance activities. Some focus on ensuring that information processed by an information system is assigned a proper sensitivity category, and that the appropriate protection requirements have been developed and met in the system.
Others focus on ensuring the control and protection of the software, as well as that of the software support tools and data. At a minimum, a software security assurance program should ensure that:
A security evaluation has been performed for the software.
Security requirements have been established for the software.
Security requirements have been established for the software development and/or operations and maintenance (O&M) processes.
Each software review, or audit, includes an evaluation of the security requirements.
A configuration management and corrective action process is in place to provide security for the existing software and to ensure that any proposed changes do not inadvertently create security violations or vulnerabilities.
Physical security for the software is adequate.
Building in security
Improving the software development process and building better software are ways to improve software security, by producing software with fewer defects and vulnerabilities. A first-order approach is to identify the critical software components that control security-related functions and pay special attention to them throughout the development and testing process. This approach helps to focus scarce security resources on the most critical areas.
Tools and techniques
There are many commercial off-the-shelf (COTS) software packages available to support software security assurance activities. However, before they are used, these tools must be carefully evaluated and their effectiveness assured.
Common weakness enumeration
One way to improve software security is to gain a better understanding of the most common weaknesses that can affect software security. With that in mind, the Common Weakness Enumeration project, a community-based program sponsored by the MITRE Corporation, identifies and describes such weaknesses. The list, which is currently in a preliminary form, contains descriptions of common software weaknesses, faults, and flaws.
Security architecture/design analysis
Security architecture/design analysis verifies that the software design correctly implements the security requirements. Generally speaking, four basic techniques are used for security architecture/design analysis.
Logic analysis
Logic analysis evaluates the equations, algorithms, and control logic of the software design.
Data analysis
Data analysis evaluates the description and intended usage of each data item used in the design of the software component. The use of interrupts and their effect on data should receive special attention to ensure that interrupt handling routines do not alter critical data used by other routines.
Interface analysis
Interface analysis verifies the proper design of a software component's interfaces with other components of the system, including computer hardware, software, and end users.
Constraint analysis
Constraint analysis evaluates the design of a software component against restrictions imposed by requirements and real-world limitations. The design must be responsive to all known or anticipated restrictions on the software component. These restrictions may include timing, sizing, and throughput constraints; input and output data limitations; equation and algorithm limitations; and other design limitations.
Secure code reviews, inspections, and walkthroughs
Code analysis verifies that the software source code is written correctly, implements the desired design, and does not violate any security requirements.
Generally speaking, the techniques used in code analysis mirror those used in design analysis. Secure code reviews are conducted during and at the end of the development phase to determine whether established security requirements, security design concepts, and security-related specifications have been satisfied. These reviews typically consist of the presentation of material to a review group. Secure code reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed.
Informal reviews
Informal secure code reviews can be conducted on an as-needed basis. To conduct an informal review, the developer simply selects one or more reviewers and provides and/or presents the material to be reviewed. The material may be as informal as pseudocode or hand-written documentation.
Formal reviews
Formal secure code reviews are conducted at the end of the development phase for each software component. The client of the software appoints the formal review group, which may make or affect a "go/no-go" decision to proceed to the next step of the software development life cycle.
Inspections and walkthroughs
A secure code inspection or walkthrough is a detailed examination of a product on a step-by-step or line-by-line (of source code) basis, with the purpose of finding errors. Typically, the group that conducts an inspection or walkthrough is composed of peers from development, security engineering, and quality assurance.
Security testing
Software security testing, which includes penetration testing, confirms the results of design and code analysis, investigates software behaviour, and verifies that the software complies with security requirements. Special security testing, conducted in accordance with a security test plan and procedures, establishes the compliance of the software with the security requirements. Security testing focuses on locating software weaknesses and identifying extreme or unexpected situations that could cause the software to fail in ways that would cause a violation of security requirements. Security testing efforts are often limited to the software requirements that are classified as "critical" security items.
See also
Secure by design
Computer security
Security engineering
Software protection
References
Security engineering
Software quality
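As a concrete illustration of two themes in this article, coding non-conformances such as input validation errors, and the security testing that probes for them, here is a minimal, self-contained sketch (Python with the standard-library sqlite3 and unittest modules; the table, queries, and test case are hypothetical, not drawn from the article). It contrasts a query built by string concatenation, a classic defect enabling SQL injection, with a parameterized query, plus one test of the kind a security test plan might include:

```python
import sqlite3
import unittest

def find_user_unsafe(conn, name):
    # Defect: concatenating input into SQL lets crafted input rewrite the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(conn, name):
    # Fix: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

class InjectionTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO users VALUES (1, 'alice')")

    def test_malicious_input_matches_nothing(self):
        # Security test: input crafted to subvert the query must not match rows.
        payload = "x' OR '1'='1"   # would return every row via the unsafe path
        self.assertEqual(find_user_safe(self.conn, payload), [])

if __name__ == "__main__":
    unittest.main()
```

A secure code review of the kind described above would flag find_user_unsafe on sight; the unit test demonstrates the behavioral check that security testing adds on top of review.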
Software security assurance
[ "Engineering" ]
1,688
[ "Systems engineering", "Security engineering" ]
5,654,597
https://en.wikipedia.org/wiki/Shadow%20system
Shadow system is a term used in information services for any application relied upon for business processes that is not under the jurisdiction of a centralized information systems department; that is, the information systems department did not create it, is not aware of it, and does not support it.
Overview
Shadow systems (also known as shadow data systems, data shadow systems, shadow information technology, shadow accounting systems, or, in short, shadow IT) consist of small-scale databases and/or spreadsheets developed for and used by end users, outside the direct control of an organization's IT department. The design and development process for these systems tends to fall into one of two categories. In the first case, the systems are developed on an ad hoc basis rather than as part of a formal project, and are not tested, documented, or secured with the same rigor as more formally engineered reporting solutions. This makes them comparatively quick and cheap to develop, but unsuitable in most cases for long-term use. In the second case, the systems are developed by experienced software developers who are not part of the organization's information systems department. These systems may be off-the-shelf software products or custom solutions developed by contract programmers. Depending on the expertise of the developers, these solutions may exceed the reliability of those created by the organization's information systems department. The term can also refer to legitimate, managed replicas of operational databases that are isolated from the user base of the main system. These sub-systems can be used to track illegitimate changes made to the primary data store through 'back doors' exploited by expert but unauthorized users. As stated in the PricewaterhouseCoopers report on spreadsheet risk management, "The Use of Spreadsheets: Considerations for Section 404 of the Sarbanes-Oxley Act": "Many companies rely on spreadsheets as a key component in their financial reporting and operational processes. However, it is clear that the flexibility of spreadsheets has sometimes come at a cost. It is important that management identify where control breakdowns could lead to potential material misstatements and that controls for significant spreadsheets be documented, evaluated and tested. And, perhaps more importantly, management should evaluate whether it is possible to implement adequate controls over significant spreadsheets to sufficiently mitigate this risk, or if spreadsheets related to significant accounts or with higher complexity should be migrated to an application system with a more formalized information technology control environment."
Cause
An organization that has a centralized information services department usually imposes rigorous guidelines for developing a new system or application. At the same time, with the rise of powerful desktop applications that give savvy end users the ability to author sophisticated tools on their own, a business group often finds it more expedient to create the application itself.
Pressure to analyze information in new ways
Any organization faces a multitude of pressures to change and respond to new government regulations, customer demands, and actions by competitors. In order to respond to these changes, organizations need to be able to understand all aspects of their business, and often must ask questions of themselves that have never been asked before.
Ongoing pressure for change creates an ongoing pressure to analyze data in new ways and get information quickly into the hands of the people who need it. Only through creative and flexible reporting are businesses able to spot new trends and identify new opportunities rapidly enough to take full advantage of them. The type of data analysis that most frequently necessitates the development or purchase of shadow systems usually stems from the needs of the user. Since the centralized information systems department usually reports to the organization's CFO or COO, the systems it develops are designed for their needs. The needs of departmental managers are often quite different, requiring more detailed analyses that incorporate variables not contained in the solution designed by the central information systems department.
Increased power of personal computer hardware and software
The greatly increased power of personal computer hardware and software analysis tools has meant that individual users now have all of the computing power they need right in front of them. Large databases containing all of an organization's customer, supplier, or accounting information, the kind that could once be stored only on a central corporate mainframe, can now easily be contained on a single laptop.
Rigorous controls and the breadth of required skills lead to unresponsive information technology (IT) departments
Quite properly, when a reporting system is put together by IT professionals, they need to consider all aspects of how the system will be used. In addition to just putting the information together, they need to consider the following:
How can it be ensured that the data produced is accurate?
Who is authorized to see this information? How can security be enforced?
How is the system to be backed up/replicated in case of failure?
User documentation must be written so that the system can be given to new users.
Technical documentation must be produced so that support staff can maintain it.
The load that any new tool places on existing systems needs to be managed and minimized.
The various skills required to achieve all of this mean that, inevitably, a number of different people will be involved in the task of creating the new report. This increases the amount of time and effort it takes to put a rigorously engineered solution in place. Shadow systems typically ignore this kind of rigor, making them much faster to implement but less reliable and more difficult to maintain.
Problems
When shadow data systems are created by end users whose main area of expertise is something other than software engineering, they are subject to the following problems:
Poorly designed
Shadow data systems often suffer from poor design. Errors may be hard to find, modifications may be difficult, and long-term support may be troublesome.
Not scalable
Typically, shadow data systems are used by only one or two people. Unless they are developed by experienced programmers, it may be difficult to scale them up to support tens or hundreds of users.
Poorly documented
Shadow data systems often lack adequate documentation. Knowledge about the system is passed by word of mouth and can be confined to a very small number of people; this knowledge is then lost completely if one or two staff members leave.
Untested
Around two-thirds of the effort involved in professional software development is expended in testing.
Shadow Data Systems undergo much more cursory testing and may have latent errors that only become apparent after a long period of production use. May allow unauthorized access to sensitive information Shadow Data Systems hold substantial chunks of company data and can include confidential information about customers, suppliers or staff. The access control processes for these systems are often much more lax than for a centralized company database and may not even exist at all. Physically locating sensitive data on desktop or laptop computers can leave an organization very exposed if the computer is stolen. Easy to introduce errors Data in local databases and spreadsheets can very easily be modified, either intentionally or otherwise. Once changed, it can be hard to track what changes have been made and what the original data looked like. Where the system manipulates the data, it can introduce more subtle errors that remain completely undetected for long periods. Back up Shadow systems existing on a single computer are often not regularly backed up. It is best to have these systems on a computer that is regularly backed up or on a server. Several versions of the truth There may be many different shadow systems within an organization reporting against the same data. Each one may add filters and manipulate the data in different ways. This can lead to apparent inconsistencies in their output. Where two shadow systems disagree, either or both of them may be wrong. Advantages When Shadow Data Systems are created by an experienced programmer or software engineer with significant input from departmental management, the resulting solution frequently exceeds the capability of those created by the organization's centralized information systems department. The experience of the programmer or software engineer removes most of the previously stated problems, and when combined with the input of departmental management, the resulting product actually meets the needs of the end user. References Information technology management
Shadow system
[ "Technology" ]
1,601
[ "Information technology", "Information technology management" ]
5,654,744
https://en.wikipedia.org/wiki/Quintaglio%20Ascension%20Trilogy
The Quintaglio Ascension Trilogy is a series of novels written by Canadian science fiction author Robert J. Sawyer. The books depict an Earth-like world on a moon which orbits a gas giant, inhabited by a species of highly evolved, sentient Tyrannosaurs, among various other creatures from the late Cretaceous period, imported to this moon by aliens 65 million years prior to the story. The series consists of three books: Far-Seer, Fossil Hunter, and Foreigner. The trilogy The Quintaglios The Quintaglios are a fictional species of sapient theropods which first appeared in Robert J. Sawyer's short story "Uphill Climb", and later starred in his Quintaglio Ascension Trilogy. Descended from Earth's dinosaurs (specifically, Tyrannosaurs), they live on a moon orbiting a gas giant that they refer to as "The Face of God". Evolution As stated above, Quintaglios are Tyrannosaurs. It is stated in Fossil Hunter that they are directly descended from Nanotyrannus (although debate among paleontologists since the series' initial creation suggests Nanotyrannus might simply be a juvenile form of Tyrannosaurus rex; if so, that species would be the Quintaglios' true direct ancestor). Isolation on the Quintaglio Moon (along with some genetic modifications to their Tyrannosaur ancestors by the aliens that transplanted them there to push them in the right direction) ensured that one day they would evolve into a sentient species. Sixty-five million years later, they did, and the Quintaglio race emerged. Physiology and comparison to Tyrannosaurs Quintaglios resemble miniature Tyrannosaurs, and share many features in common with their ancestors. They eat only meat, and have massive heads with jaws filled with rows of sharp, serrated teeth. They have short, muscular necks, stocky torsos, solid black forward-facing eyes, thick muscular tails, and powerful hind legs ending with three birdlike talons. However, due to the 65 million years they have had to evolve, Quintaglios differ from Tyrannosaurs in several significant ways. Tyrannosaurs have very stubby forelimbs, with only two visible fingers. Quintaglios, on the other hand, have longer, more well-developed arms with dexterous five-fingered hands (four fingers and an opposable digit), similar to a human's (unlike humans, though, most Quintaglios are left-handed). The fingers terminate in curved, retractable claws, which can be extended when the Quintaglio is threatened, though they are capable of extending and retracting them at will. They are much smaller than a Tyrannosaur, although an old adult Quintaglio is still quite large. Rather than standing with their backs parallel to the ground like a normal theropod, they usually maintain a semi-erect posture, although while running they do stoop forward into a traditional theropod-like stance. They are far more intelligent than their ancestors. While they are indeed dinosaurs, Quintaglios possess a variety of traits that are more reminiscent of lizards. They are capable of limited regeneration; a Quintaglio can grow back a severed limb or tail, although complex, vital structures such as organs cannot be grown back. Male Quintaglios possess a dewlap sack on their throats, similar to a frog's or those on some types of birds, which they can inflate with air when they are sexually aroused or in "dagamant" (likely "battle frenzy", though no literal translation is given; dagamant is brought on by extreme territorial encroachment and occasionally other factors).
Quintaglios continue to grow throughout their entire lives, like crocodiles, although their growth rate slows with age. Similar to certain lizards, Quintaglios have a small salt-secretion gland beneath the surface of the muzzle, but the opening for it is simply a very tiny hole halfway down the side of the muzzle. Except in an extreme close-up view, it would be all but invisible. Quintaglio hide is tough and leathery. Just as humans have lost most body hair, Quintaglios have lost most scales and scutes, but these may be present in some individuals. Quintaglio skin is almost entirely green, although it may be freckled, mottled, or splotched with brown or yellow in some individuals, and with black in old individuals. Oddly enough, Quintaglios cannot lie; their muzzles turn blue when they say something untrue, and for this reason the colour blue is reviled among Quintaglios as "The Liars' Tint". Those who can lie without their muzzles turning blue are called demons or "Aug-Ta-Rot", which literally means "Those who can lie in the light of day", and though there exist Quintaglios capable of doing so, this is not widely known, and Aug-Ta-Rot are believed to exist only in mythology. Lifestyle Quintaglios are exclusively carnivorous, like their Tyrannosaur ancestors. They bring down larger prey by hunting in packs. While they are advanced enough to use weapons to kill prey, their culture forbids it; Quintaglios hunt the old-fashioned way, running prey down and dispatching it via tooth and claw. Quintaglio hunts are led by special female Quintaglios who perpetually emit the pheromones characteristic of being in heat, though they are typically also sterile. Quintaglios live a mostly nomadic lifestyle, travelling in packs and following herds of animals. They set up camp for a short while, then move on so that prey can repopulate extensively hunted areas. Quintaglio population density is kept fairly low for this reason, and also due to the culling of egglings by the Bloodpriests. The culling of eggs is necessary because Quintaglios have a strong sense of territoriality; socially acceptable close quarters means anywhere from 7 to 10 paces between each person. Quintaglios who feel that their space has been violated, along with other factors (such as having not hunted in a long time), tend to become fierce, and if not calmed might enter dagamant and fight the intruder(s) to the death. Quintaglios are sparsely clad, wearing little beyond simple sashes, hats, belts and jewellery. Priests, however, wear robes; junior priests wear robes of black and red, senior priests wear colourful, banded robes coloured like The Face of God (these are changed to white robes after Far-Seer), and Bloodpriests wear purple robes. Religion and tradition Quintaglios are a very religious people. Their original creation myth tells of a goddess who laid the "Eight Eggs of Creation". From the first egg came all the water, and from the second came the land. From the third came all the air, and from the fourth came the sun. From the fifth came the stars, planets and moons, and from the sixth came all the plants. Finally, from the seventh came all the herbivores (Ceratopsians, Ankylosaurs, Sauropods and Hadrosaurs, among others), and from the eighth and final egg came the carnivores which preyed upon them (Tyrannosaurs, Ornithomimids and Dromaeosaurids, among others). God created the five original female hunters by biting off her left arm, each of her fingers becoming a Quintaglio.
The hunters wished to create as God did, so God bit off her right arm as well, and the fingers of that arm became the first males, the mates of the original hunters. Quintaglio tradition states that a Quintaglio must go on at least one proper hunt in their life in order to go through the rites of passage. After a successful hunt, the Quintaglio receives a hunting tattoo which symbolises their passage into adulthood. Adults with no hunting tattoo are accorded no status at all. The Quintaglio mythos was further expanded when, roughly 150 years prior to the story, the Quintaglio prophet Larsk discovered what he believed to be the face of God, and a religion was built around the worship of The Face of God. Larsk's descendants became the Royal Family, and rule all of Land. Dy-Dybo, former prince and current Emperor, is a part of the Royal Family and a direct descendant of Larsk. This religion adds a new tradition to Quintaglio society: that of sailing across the ocean to retrace Larsk's voyage and gaze upon The Face of God. The story of the first book in the series revolves around Afsan (the main character) discrediting this notion while on one of these voyages, and challenging Quintaglio tradition by proving that the Face of God was nothing more than a planet. There also exists a cult known as the Lubalites. This cult is based around the worship of the original five hunters, particularly Lubal, and rejects the notion that The Face of God is actually God, adhering more closely to the original creation myths set forth in the sacred scrolls. Worship of the original five was banned by Larsk, but after Far-Seer the Larskian faith is discredited and the Lubalites are free to engage in worship of the original five without persecution. A special order in the Quintaglio priesthood is known as the "Halpataars", or "Bloodpriests". In order to prevent overpopulation, a Bloodpriest is assigned to devour seven out of every eight Quintaglio hatchlings a day after they hatch. The Bloodpriest first goes into a trance, then dons a purple robe and enters the nest; there, he chases the hatchlings and eats all but the fastest, strongest one. The order of the Bloodpriests is exclusively male; the original Bloodpriest was Mekt, one of the original five hunters, but she passed on the tradition to males because she felt it was inappropriate for one who lays eggs to dispatch hatchlings. Territoriality and dagamant While Quintaglios consider themselves to be civilised beings, deep down their thoughts and actions are ruled by their primal, territorial instincts. Quintaglios hate physical contact with one another, value their privacy and have a wide circumference of personal space. Spending too much time in the company of others, or extended time in close quarters with other Quintaglios, can cause them to enter an animalistic frenzy known as dagamant. This also happens if they are exposed to sufficiently alien stimuli; it is implied that this is triggered by the uncanny valley effect after some Quintaglios are driven to immediate dagamant by the merest sight of the Yellow Quintaglios. When a Quintaglio is in dagamant, he or she will bob their torso up and down, and males will inflate their dewlap sacks. A Quintaglio under dagamant loses all conscious control over their actions, and will attack with unrelenting viciousness and bloodlust until it wears off or one or both combatants are killed; for this reason, to kill another while in dagamant is not considered murder by Quintaglios.
Seeing a Quintaglio in dagamant can trigger it in others; in overpopulated areas, or on crowded ships, "mass dagamant" has been known to occur. The books frequently refer to a past event of a mass dagamant aboard the ship Galadoreter, in which the entire crew went into a territorial frenzy and everybody on board was killed. Some Quintaglios are exempt from dagamant; Toroca, Afsan's son, has a subdued territorial instinct, and it is implied that Dybo is less susceptible to it as well, although not entirely. However, this is extremely rare, and nearly all other Quintaglios have the territorial instinct. Even Afsan himself, among the most level-headed and rational of Quintaglios, has killed while under the madness of dagamant, not once, but twice; the first time was in Far-Seer, aboard the Dasheter, where he and Dybo were challenged by a sailor named Nor-Gampar in full dagamant; Afsan killed Gampar and nearly attacked Dybo before coming to his senses. The second time was during the mass dagamant caused by the bloodpriest dispute in Fossil Hunter, when he killed Rodlox's aide Pod-Oro. It is later discovered that dagamant is an almost entirely psychological condition. Due to the horrifying experience of the Bloodpriest ritual, experienced within a day of birth, the Quintaglio brain imprints with an extreme fear of anything outside itself. Anyone who has not been through the ritual suffers from little if any tendency to enter dagamant, and is capable of interacting with and living with people in close quarters with little ill effect. Technology Quintaglios have a level of technology comparable to our own Renaissance. They have sailing ships, but electricity has not been discovered yet, nor have fossil fuels or solar power. The telescope and the microscope are recent inventions, and modern-style medicine is in its infancy. Aviation is also in its infancy, and has not advanced beyond gliders yet. Thanks to Afsan and Toroca, Quintaglios are also aware of the solar system and evolution. Buildings are made out of stone or mud, and not much attention is given to them; due to frequent earthquakes, structures rarely stand for very long. Quintaglios have no firearms, and due to their laws and customs pertaining to using weapons to kill food, it is highly unlikely that they ever will. In the epilogue at the end of Foreigner, the Quintaglios' technological prowess has increased dramatically due to new technology recovered from the ancient alien spaceship. The Quintaglios have apparently achieved spaceflight, built computers and set up temporary colonies on nearby moons. They are on their way to colonising new worlds, and it is implied that they are on their way to Earth. Reproduction and family structures The estrous cycle of female Quintaglios is 18 thousand days, and a female will normally mate once per cycle, around four times in her lifetime. There are some exceptions, though: Novato, and one of her daughters with Afsan, came into heat when only 16 thousand days old. Some females also frequently fail to enter the estrous cycle, and thus have very low fertility. While males can mate at almost any time, they have to wait for a chance, when a female is about to enter the cycle and accepts the invitation. Females do not have to stick with one male every time; instead, they follow their instinct and choose the right mate for the moment. Despite living on their own territories, Quintaglios often maintain long-lasting friendships with previous mates, with the possibility of some form of pair bonding.
The female will usually lay exactly eight eggs some time after mating. The eggs are then placed in the custody of the public nursery. When all of the eggs have hatched, a Bloodpriest will pick one hatchling, often the strongest and swiftest, to survive, and devour all the others. As the child is meant to belong to the tribe, the parents and the child often do not know each other for their whole lifetimes; kinship can only be guessed at through their ages, as well as some shared traits. Only the nursery's records would reveal the truth. There are some exceptions: in the royal family, and in Afsan's family, all children were allowed to survive, and the lineage of the members is quite clear. Later, it was discovered that siblings grow anxious when together, which may cause serious incidents of sibling rivalry. In Foreigner, it is decided to end the tradition of killing hatchlings and instead simply not allow seven out of the eight eggs to hatch. This decision came from Toroca, after it was proven that the Bloodpriests' tradition was actually psychologically scarring the survivor and causing increased territoriality. It was also decided that the survivor should be chosen randomly among the unhatched eggs, in order to increase the variability of the population. It is also hinted that in the new era the Quintaglios begin to universally develop parent-child relationships. The Others In Foreigner, a second species of sentient theropods is discovered on an archipelago on the other side of the world; they are simply known as "The Others". They are similar to the Quintaglios, but both genders have dewlaps, they are yellow with grey highlights, and they are smaller in stature. They can lie without physical changes, and lay about four eggs in each clutch. Character histories The following is a list of the main and secondary characters featured in the books. Main characters Sal-Afsan: The main character of the series. He was the apprentice to the court astrologer, Tak-Saleed. Afsan is described as being thin for a Quintaglio; ironically, the name Afsan means "Meaty thighbone". Afsan is a mathematical genius; as a child, he never had any problem solving puzzles set forth by his teaching master. Afsan is open-minded, inquisitive, and rational: the necessary qualities for a good scientist. On his pilgrimage, Afsan discovers that the Face of God is just a planet, and also discovers that their world is a moon that orbits it; furthermore, the Quintaglio moon is too close to the planet, and the resulting stresses will one day reduce it to a ring of rubble surrounding the planet. When he declares his findings, he is met with opposition from the Master of the Faith, Det-Yenalb, and is blinded as punishment for his "heresy". Afsan's ability to keep a level-headed, strategic perspective has also made him an expert hunter. His kills were few, but memorable and dramatic: on his first hunt, he killed the biggest Sauropod ever seen; he slew the water serpent Kal-Ta-Goot during his pilgrimage; and he single-handedly killed a Fangjaw on his trip back to Capital City. It is this hunting prowess, as well as his proclamation of the end of the world, that caused the Lubalites (practitioners of the Cult of the Original Five Hunters) to believe that he is "The One" foretold in their ancient prophecy. Afsan also mated with Wab-Novato, and produced a clutch of eight eggs with her. Their egglings were spared the culling of the bloodpriests because the Lubalites believed Afsan was The One, and so would not eat them. Afsan's son, Toroca, is a prominent character in Fossil Hunter and Foreigner.
In the second book, Afsan is made Dybo's court astrologer, and is given a "seeing eye" goanna named Gork by his aide, former palace butcher Pal-Cadool. Though Afsan is now blind, this does not stop him from trying to track down a murderer when his children start to get killed off one by one. He also proposes a solution for Emperor Dy-Dybo when his brother, Dy-Rodlox, challenges him for the position of Emperor. In the beginning of the third book, he suffers an accident which nearly kills him, but as a result he regenerates his eyes; however, he is still blind, despite having eyes that are physically fully functional. Afsan undergoes extensive psychoanalysis with Nav-Mokleb to try and solve this problem. Afsan is wounded by a gunshot during a confrontation with "The Others"; the doctors, having no experience with the effects of injuries caused by weapons, leave the bullet inside him, not realizing that it will kill him. He lives long enough for his eyesight to come back, and he says goodbye to his friends and loved ones before his death. In the epilogue of Foreigner, Afsan is long dead, but a computer emulating his exact thoughts and mannerisms has been built. It journeys with the last of the Quintaglios to return to their original home: Earth. Tak-Saleed: Afsan's mentor and the former court astrologer under Empress Len-Lends. He is a gruff, crotchety, ancient Quintaglio, and a creche-mate of Var-Keenir. He went through six apprentice astrologers before he took on Afsan. He had actually found the truth about the celestial bodies before Afsan, and before his own death was glad to know that Afsan had discovered more. Dy-Dybo: Former prince, now the Emperor of all of Land, and Afsan's close friend. Despite not being the fastest or the strongest of Lends' egglings (he was actually the weakest and the most easily manipulated), he nevertheless became the greatest ruler the Quintaglio race has ever seen. He has a good sense of humour, and a legendary appetite. In Fossil Hunter, a political scandal concerning his legitimacy as Emperor is one of the main plot threads. Wab-Novato: Inventor of the far-seer and the first astronaut. She led the vital project of the great escape, and discovered many valuable ancient remains. She and Afsan mated and laid eight eggs together; one of the offspring died in infancy, and three others died from sibling rivalry in adolescence. She also later had a daughter with another male friend, but the daughter was indifferent to her and eventually died in an accident. After that, she chose Afsan again as her mate for the third time. Pal-Cadool: The palace butcher, also a Lubalite and, later, the loyal assistant and close friend of Afsan. Var-Keenir: The captain of the Dasheter and a childhood friend of Tak-Saleed. He starts off as a cold, Captain Ahab-like character, obsessed with hunting the Elasmosaur Kal-Ta-Goot. After this event he mellows and becomes a warmer character and a close friend of Afsan. Det-Yenalb: A high priest and the antagonist of the first book. He attempted to have Afsan executed, for he saw him and his ideas as a threat to Quintaglio civilisation. Yenalb dies in battle against Pal-Cadool. Kee-Toroca: Afsan and Novato's son, and the protagonist of the second book. He is a geologist, and eventually becomes the Quintaglio version of Charles Darwin when he formulates a theory of natural selection. He is the first Quintaglio to make contact with the alien dinosaurs and learn their language. He has a foster alien son, and a biological son, possibly with Babnol, who share the same name.
Wab-Babnol: Toroca's love interest, and a member of the Geological Survey team. She was born with a prominent horn on the tip of her snout, a physical feature which causes her great embarrassment. Dy-Rodlox: The antagonist of the second book. A brother of Dybo and the lord of Edz'Toolar, he is powerful and aggressive, and is the strongest of Empress Lends' children. He believes he was meant to be the true emperor, not Dybo, and challenges him for his right to rule. Nav-Mokleb: An important character in the third book, she is the inventor of psychoanalysis, and her studies eventually reveal the startling truth behind the Quintaglio territorial impulse. She is also noteworthy for being one of the minority of females constantly emitting sex pheromones, though she was ineligible to be a huntress, the usual profession of such females, due to a debilitating childhood injury. The world of the Quintaglios Geography The Quintaglios live on a moon that orbits a gas giant called "Galatjaroob", which means "The Face of God". The moon is mostly covered by water, but has a single huge continent on the far side, a small archipelago of islands on the other, and southern and northern ice caps. The continent the Quintaglios live on is simply called "Land", and is split up into provinces. There are eight provinces, called (from west to east): Jam'Toolar, Fra'Toolar, Arj'Toolar, Chu'Toolar, Mar'Toolar, Edz'Toolar, Kev'Toolar, and Capital. Capital City lies on the far eastern end of Land, in the shadow of the Ch'Mar volcanoes. It is where most of the action takes place. Arj'Toolar is Afsan's home province. Featured dinosaurs and other creatures In addition to Quintaglios, many other creatures inhabit the Quintaglio Moon. All originally came from Earth. Some have remained pretty much unchanged since the Cretaceous period, whereas others have evolved since then into completely new species. There are no mammals on the Quintaglio moon, and birds are extinct. Following is a list and a brief description of creatures known to inhabit the Quintaglio moon, given first with the Quintaglio term and then the human one. Shovelmouth (Hadrosaurs): Large, duckbilled dinosaurs, they are hunted as food by Quintaglios and occasionally used as beasts of burden. Despite having stringy meat, they form the staple of the Quintaglio diet. Corythosaurus, Parasaurolophus and Lambeosaurus are seen (although the latter is apparently extinct), as well as completely new varieties of shovelmouth/hadrosaur, including one with a three-pointed crest and a breed from Arj'Toolar that is orange with blue stripes, and reportedly the tastiest kind of all. Armourback (Ankylosaurs): A few Armourbacks are seen and mentioned in passing. They are noted as being extremely difficult to kill. The Lubalites and palace staff use a few Armourbacks as riding mounts during the final battle in Far-Seer, and the Quintaglio version of the Turtles All the Way Down story substitutes armourbacks for turtles. Mekt, one of the Original Five and the first bloodpriest, apparently killed an Armourback. Hornface (Ceratopsians): Hunted as food and occasionally domesticated by Quintaglios. Three species are confirmed to exist. The most commonly seen variety is the "Triple Hornface" (Triceratops), but "Spikefrills" (Styracosaurus) and "Boss-Nosed Hornfaces" (Pachyrhinosaurus) are also depicted. Einiosaurus is also mentioned but, like Lambeosaurus, is apparently extinct. A Triple Hornface apparently killed Lubal, one of the Original Five.
Det-Yenalb rides a Spikefrill in the final battle in Far-Seer, and Lub-Galpook (Afsan's daughter and a hunt leader) brings along a caravan of Boss-Nosed Hornfaces to act as beasts of burden during the capture of a Blackdeath. Thunderbeast (Sauropods, possibly Alamosaurus): Thunderbeasts are the biggest herbivores living on the Quintaglio Moon. Afsan's hunting party went after a staggeringly huge one on Afsan's first hunt, and he made a big impression by being the one to actually kill it, by climbing all the way up its neck and biting out its throat. Runningbeast (Ornithomimids): The fastest creatures in all of Land, they are used by Quintaglios like horses. Two varieties exist: a green type and a beige type. Afsan rides one on his trip back to the capital. Fangjaw: Fangjaws are unique to the Quintaglio moon, a fleet-footed, quadrupedal, carnivorous dinosaur that evolved from an unspecified theropod species. They have elongated jaws with two big teeth sticking up from the lower jaw, and apparently hunt Thunderbeasts, Shovelmouths and Runningbeasts. Afsan killed one on his last hunt before being blinded, impressing everybody by managing to bring one down on his first try. Wingfinger (Pterosaurs): Because birds are extinct on the Quintaglio Moon, pterosaurs were able to rule the skies of the dinosaurs' new home unchallenged, and evolved into a huge variety of new species. The Quintaglios' southern ice cap is inhabited exclusively by pterosaurs, which have evolved since then into completely new creatures, such as "Divers" (pterosaurs similar to penguins) and "Stilts" (a bizarre pterosaur derivative which uses its long arms like legs). It is the pterosaurs of the South Pole that give Toroca his idea of evolution. Fish Lizard (Ichthyosaurs): Fish lizards inhabit the seas of the Quintaglio moon. Baby ones are often hauled aboard and eaten on sailing trips (the dorsal fin and the tail are apparently the best parts). Toroca fights an adult Fish Lizard when swimming back to the Dasheter from the Others' archipelago. Kal-Ta-Goot/Water Serpent (Plesiosaurs): A single plesiosaur is seen in the first book, Far-Seer. Its name is Kal-Ta-Goot. In a sub-plot remarkably similar to (and probably a reference to) Moby-Dick, Captain Var-Keenir is obsessed with killing it after it bites off his tail in an encounter prior to the story. Afsan is the one who kills the creature in the end. As Kal is the only plesiosaur seen or referenced in the book, it is unknown whether Kal-Ta-Goot is the name of the species or of the individual plesiosaur Keenir was obsessed with killing. If not for Keenir's obsession with Kal, the Dasheter never would have gone out of sight of The Face of God, and Afsan would never have been able to sail across the entire ocean and prove the world was round. Terrorclaw (Dromaeosaurs): They are never seen, although an event mentioned in passing has Novato apparently having a kill of hers stolen by a pack of Terrorclaws, escaping from them by climbing up a tree. Blackdeath (Tyrannosaurus rex): The apex predator on the Quintaglio Moon, they are so named after their pitch-black hide. The males possess a dewlap sack, the same as Quintaglios do. Blackdeaths are impossible for a Quintaglio to kill without the aid of weapons, which Quintaglio custom forbids; thus they are completely inaccessible as prey. However, Lub-Galpook's hunting party is able to capture one alive.
Emperor Dy-Dybo, Dy-Rodlox and the other apprentice governors are forced to fight the same Blackdeath in an arena at the end of Fossil Hunter, to repay their exemption from the culling of the Bloodpriests; it manages to kill all of them except for Dybo and Spenress. A Blackdeath also takes the place of the Giant in "Rewdan and the Vine", the Quintaglio version of Jack and the Beanstalk. Lizards (a monitor lizard named "Gork" becomes Afsan's pet and "seeing eye" lizard after he is blinded by Yenalb) Frogs Salamanders Snakes Turtles Alligators Fish Sharks Insects The Others. A race of sentient dinosaurs discovered in Foreigner, similar to the Quintaglios, though of a different species. Jijaki. The Jijaki are an advanced alien species descended from Opabinia, transplanted to another world. The Jijaki spread life throughout the universe, seeding many species, including the Quintaglios themselves. The Jijaki became extinct millions of years prior to the story. Themes and allegory The Quintaglio Ascension Trilogy is intended as an allegory of the human race's Age of Enlightenment. Each book features a Quintaglio equivalent to a prominent human thinker. Sal-Afsan is a Quintaglio version of Galileo, his son Toroca is the Quintaglio equivalent of Charles Darwin, and Mokleb is a Quintaglio Sigmund Freud. The Quintaglio Ascension Trilogy has an underlying theme of standing up for the truth in the face of overwhelming opposition, of dedication to a cause no matter what. It champions new, innovative ideas overcoming fundamentalist dogma, and rationality overcoming mysticism. These themes are explored in other books by Robert J. Sawyer. Reception The Quintaglio Ascension Trilogy has generally been very well received; the Toronto Star called Far-Seer "one of the year's outstanding SF books", Far-Seer, Fossil Hunter and Foreigner consistently receive four- to five-star ratings in user reviews on amazon.com, and both Far-Seer and Fossil Hunter received Homer Awards for "Best Novel" upon their initial release. The books have been praised for their creativity, endearing characters, and social relevance. Sawyer has remarked in his short story anthology Iterations that the Quintaglio Ascension has generated the most fan mail of anything he has written. However, the series has received some negative criticism. Some reviewers have said that the Quintaglios act too human, while others point out the implausibility of a technological civilisation developing from a nomadic hunting society. Sawyer defends his work by stating that the human-like behavior of the Quintaglios was necessary for readers to connect with the characters, and that agriculture is not necessarily a pre-requisite for a developed civilisation (a point he explores in greater detail in his Neanderthal Parallax trilogy). See also "Uphill Climb" Robert J. Sawyer References External links Robert J. Sawyer's web site The first chapter of Far-Seer Novels by Robert J. Sawyer Speculative evolution
Quintaglio Ascension Trilogy
[ "Biology" ]
7,071
[ "Biological hypotheses", "Speculative evolution", "Hypothetical life forms" ]
5,654,862
https://en.wikipedia.org/wiki/Fire%20lookout%20tower
A fire lookout tower, fire tower, or lookout tower is a tower that provides housing and protection for a person known as a "fire lookout", whose duty it is to search for wildfires in the wilderness. It is a small building, usually located on the summit of a mountain or other high vantage point, to maximize the viewing distance and range, known as the viewshed. From this vantage point the fire lookout can see smoke that may develop, determine the location by using a device known as an Osborne Fire Finder, and call for wildfire suppression crews. Lookouts also report weather changes and plot the location of lightning strikes during storms. The location of the strike is monitored for a period of days afterwards, in case of ignition. A typical fire lookout tower consists of a small room, known as a cab, atop a large steel or wooden tower. Historically, the tops of tall trees have also been used to mount permanent platforms. Sometimes natural rock may be used to create a lower platform. In cases where the terrain makes a tower unnecessary, the structure is known as a ground cab; ground cabs are still called towers, even if they do not sit on one. Towers gained popularity in the early 1900s, and fires were reported using telephones, carrier pigeons and heliographs. Although many fire lookout towers have fallen into disrepair from neglect, abandonment and declining budgets, some fire service personnel have made efforts to preserve older fire towers, arguing that a person watching the forest for wildfire can be an effective and cheap fire control measure. History United States The history of fire lookout towers predates the United States Forest Service, founded in 1905. Many townships, private lumber companies, and state forestry organizations operated fire lookout towers on their own accord. The Great Fire of 1910, also known as the Big Blowup, burned through the states of Washington, Idaho, and Montana. The smoke from this fire drifted across the entire country to Washington, D.C., both physically and politically, and it challenged the five-year-old Forest Service to address new policies regarding fire suppression; the fire did much to create the modern system of fire rules, organizations, and policies. One of the rules resulting from the 1910 fire stated that "all fires must be extinguished by 10 a.m. the following morning." To prevent and suppress fires, the U.S. Forest Service made another rule that townships, corporations and states would bear the cost of contracting fire suppression services, because at the time there was not the large Forest Service fire department that exists today. As a result of the above rules, early fire detection and suppression became a priority. Towers began to be built across the country. While earlier lookouts used tall trees and high peaks with tents for shelters, by 1911 permanent cabins and cupolas were being constructed on mountaintops. Beginning in 1910, the New Hampshire Timberlands Owners Association, a fire protection group, was formed, and soon after similar organizations were set up in Maine and Vermont. A leader of these efforts, W.R. Brown, an officer of the Brown Company, which owned over 400,000 acres of timberland, set up a series of effective forest-fire lookout towers, possibly the first in the nation, and by 1917 helped establish a forest-fire insurance company. In 1933, during the Great Depression, President Franklin Delano Roosevelt formed the Civilian Conservation Corps (CCC), consisting of young men and veterans of World War I.
It was during this time that the CCC set about building fire lookout towers and access roads to those towers. The U.S. Forest Service took great advantage of the CCC workforce and initiated a massive program of construction projects, including fire lookout towers. In California alone, some 250 lookout towers and cabs were built by CCC workers between 1933 and 1942. The heyday of fire lookout towers was from 1930 through 1950. During World War II, the Aircraft Warning Service was established, operating from mid-1941 to mid-1944. Fire lookouts were assigned additional duty as enemy aircraft spotters, especially on the West Coast of the United States. From the 1960s through the 1990s the towers took a back seat to new technology, aircraft, and improvements in radios. The promise of space satellite fire detection and modern cell phones competed with the remaining fire lookout towers, but in several environments the technology failed: by the time a fire is detected from space, it is already too large for accurate control assessments, and cell phones in wilderness areas still suffer from lack of signal. Today, some fire lookout towers remain in service, because human eyes that can detect smoke and call in a fire report allow fire management officials to decide early how the fire is to be managed. The more modern policy is to "manage fire", not simply to suppress it. Fire lookout towers provide a reduction in the time between fire detection and fire management assessment. Idaho had the most known lookout sites (966); 196 of them still exist, with roughly 60 staffed each summer. Kansas is the only U.S. state that has never had a lookout. A number of fire lookout tower stations, including many in New York State near the Adirondack Forest Preserve and Catskill Park, have been listed on the National Register of Historic Places. Japan During the Edo period, towns in Japan housed fire lookout towers known as hinomi yagura. Usually the fire lookout tower was built near a fire station, and was equipped with a ladder, a lookout platform, and an alarm bell (hanshō). From these towers watchmen could observe the entire town, and in the event of a fire they would ring the alarm bell, calling up firemen and warning town residents. In some towns the bells were also used to mark the time. While the fire lookout towers remained fully equipped into the Shōwa period, they were later replaced by telephone and radio broadcasting systems in many cities. Canada Like the United States, fire towers were built across Canada to protect the valuable trees for the forestry industry. Most towers were built from the early 1920s to the 1950s and were a mix of wood and steel structures. A total of 325 towers dotted the landscape of Ontario in the 1960s; approximately 156 towers remain in the province today, but only a handful remained in use after the 1970s. They are still in use in British Columbia, Alberta, Saskatchewan, Manitoba, Ontario and a few of the Maritime Provinces. Nova Scotia decommissioned the last of its 32 fire towers in 2015 and had them torn down by a contractor. Germany The first fire lookout tower was built to the plans of Forstmeister Walter Seitz between 1890 and 1900, located in the "Muskauer Forst" near Weißwasser. Warnings were transmitted by light signal. For transmission of the location, Seitz divided the forest area into so-called "Jagen", numbered areas, with that number to be transmitted to the city. He received a patent for this system in 1902. Seitz traveled to the 1904 Louisiana Purchase Exposition for a presentation of his idea in the USA.
Russia As wood had been a key building material in Russia for centuries, urban fires were a constant threat to the towns and cities. To address that issue, in the early 19th century a program was launched to construct fire stations equipped with lookout towers called kalancha, overlooking the mostly low-rise quarters. Watchmen standing vigil there could communicate with other stations as well as their own using simple signals. Surviving towers are often local landmarks. Today Australia Fire towers are still in use in Australia, particularly in the mountainous regions of the south-eastern states. Victoria's Forest Fire Management operates 72 towers across the state during the fire season, with towers being constructed as recently as 2016. Jimna Fire Tower in Southeastern Queensland is the tallest fire tower in the country, at 47 meters above the ground, and is included on the state heritage register. United States Today hundreds of towers are still in service with paid staff and/or volunteer citizens. In some areas, the fire lookout operator often receives hundreds of forest visitors during a weekend and provides a needed "pre-fire suppression" message, supported by handouts from the "Smokey Bear" or "Woodsy Owl" education campaigns. This educational information is often distributed to young hikers who make their way up to the fire lookout tower. In this aspect, the towers are remote way stations and interpretive centers. The fire lookout tower also acts as a sentinel in the forest, attracting lost or injured hikers who make their way to the tower knowing they can get help. In some locations around the country, fire lookout towers can be rented by public visitors who obtain a permit. These locations provide a unique experience for the camper, and in some rental locations the check-out time is enforced when the fire lookout operator returns for duty and takes over the cab for the day shift. Fire lookout towers are an important part of American history, and several organizations have been founded to save, rebuild, restore, and operate fire lookout towers. Germany Starting in 2002, traditional fire watch was replaced by "FireWatch", optical sensors located on old lookout towers or mobile phone masts. Based on a system developed by the DLR for analyzing gases and particles in space, a terrestrial version for forest fire smoke detection was developed by DLR and IQ Wireless. Currently, about 200 of these sensors are installed around Germany, while similar systems have been deployed in other European countries, Mexico, Kazakhstan and the USA. Canada Several Canadian provinces have fire lookout towers. Dorset, Ontario's Scenic Tower was built on the site of a former fire lookout tower (1922-1962). Types Wooden towers Many fire lookout towers are simply cabs that have been fitted atop tall railroad water-tank towers. One of the last wooden fire lookout towers in Southern California was the South Mount Hawkins Fire Lookout, in the Angeles National Forest. A civilian effort is underway to rebuild the tower after its loss in the Curve Fire of September 2002. Example — South Mount Hawkins before the fire Example — Boucher Hill Lookout, Palomar Mountain State Park, San Diego CA Steel towers Steel towers can vary in size and height. They are very sturdy, but tend to sway in the wind more than wooden towers.
Example — Los Pinos Lookout, Cleveland National Forest, San Diego CA Example — Red Mountain Lookout, San Bernardino National Forest, Riverside CA Example — High Point Lookout, Cleveland National Forest, Palomar Mountain, San Diego CA Example — Mount Lofty Fire Tower, South Australia Aermotors The Aermotor Company, originally of Chicago, Illinois, was the first and leading manufacturer of steel fire towers from the 1910s to the mid-1920s. These towers have very small cabs, as the towers are based on Aermotor windmill towers. These towers are often found in the U.S. Midwest and South, but a few are in the mountainous West. In the northeast, all of the towers in the Adirondack Mountains and most in the Catskills were Aermotor towers erected between 1916 and 1921. The typical Aermotor cab had a fire-locating device mounted in the center. Access was by way of a trap door in the floor. Lakota Peak Lookout Summit Ridge Lookout The Fire Towers of New York Example — Adirondack Towers Ground cabs Ground cabs are still known as "towers" even though there may be no such tower under the cab. These towers can be one, two or three stories tall, with foundations made of natural stone or concrete. These towers vary greatly in size, but many are simple wooden or steel tower cabs that were constructed using the same plans, sans the tower. Example — Tahquitz Peak Lookout Example — Winchester Mountain Lookout Example — Mt. Tamalpais Lookout in California Lookout trees The simplest kind consists of a ladder to a suitable height. Such trees could have platforms on the ground next to them for maps and a fire finder. A more elaborate version, such as the Gloucester tree in Australia, added a permanent platform to the tree by building a wooden or, later, metal structure at the top of the tree, with metal spikes hammered into the trunk to form a spiral ladder. These 'platform trees' were often equipped with telephones, fire finder tables, seats and guy-wires. Other types There are many different types of lookouts. In the early days, the fire lookout operator simply climbed a denuded tree and sat on a platform chair atop that tree. An old fishing boat was once dragged to the top of a high hill and used as a fire lookout tower. Very little is known about the horse-mounted fire lookout, but they, too, rode the ridges patrolling the forest for smoke. Records Tallest lookout tower in the world: Warren Bicentennial Tree Lookout, Western Australia. Tallest all-steel lookout tower in the world: Beard Tower, SE of Manjimup, Western Australia. Tallest lookout tower in the U.S.: Woodworth Tower, Alexandria, Louisiana. Highest lookout site in the world: Fairview Peak Lookout, Colorado. Lowest lookout sites in the world: Pine Island L.O., Florida & Evans Pines L.O., Florida.
Countries continuing to use fire lookout towers Australia Belgium Brazil Canada (Alberta, B.C., Manitoba, Nova Scotia, Ontario, Saskatchewan) France Germany Greece Indonesia Israel Italy Latvia Mexico New Zealand Norway Poland Portugal South Africa Spain Turkey United States Uruguay See also List of fire lookout towers Lookout tree Watchtower Drill tower, used in firefighting practice Hose tower, used in some fire stations to dry firehoses Fire control tower, used to control gun fire from coastal batteries List of New Jersey Forest Fire Service fire towers Firewatch, a game centered around a fire lookout tower in Shoshone National Forest References The Lookout Network newsletter External links Fire Lookouts US Forest Service History Pages, Forest History Society Forest Fire Lookout Association Ontario’s Fire Tower Lookouts Fire Lookout Towers in Australia Eyes of the Forest: Idaho's Fire Lookouts Documentary produced by Idaho Public Television "A Day in the Life of a Fire Lookout" in Marin County, California Wildfire suppression Towers
Fire lookout tower
[ "Engineering" ]
2,874
[ "Structural engineering", "Towers" ]
5,655,191
https://en.wikipedia.org/wiki/Successive-approximation%20ADC
A successive-approximation ADC is a type of analog-to-digital converter (ADC) that digitizes each sample from a continuous analog waveform using a binary search through all possible quantization levels. Algorithm The successive-approximation analog-to-digital converter circuit typically contains four chief subcircuits: A sample-and-hold circuit that acquires the input voltage \(V_\text{in}\). An analog voltage comparator that compares \(V_\text{in}\) to the output of a digital-to-analog converter (DAC). A successive-approximation register (SAR) that is updated by results of the comparator to provide the DAC with a digital code whose accuracy increases each successive iteration. A DAC that supplies the comparator with an analog voltage relative to the reference voltage \(V_\text{ref}\) (which corresponds to the full-scale range of the ADC) and proportional to the digital code of the SAR. The successive-approximation register is initialized with 1 in the most significant bit (MSB) and zeroes in the lower bits. The register's code is fed into the DAC, which provides an analog equivalent of its digital code (initially \(V_\text{ref}/2\)) to the comparator for comparison with the sampled input voltage. If this analog voltage exceeds \(V_\text{in}\), then the comparator causes the SAR to reset this bit; otherwise, the bit is left as 1. Then the next bit is set to 1 and the same test is done, continuing this binary search until every bit in the SAR has been tested. The resulting code is the digital approximated output of the sampled input voltage. The algorithm's objective for the \(i\)-th iteration is to approximately digitize the input voltage to an accuracy of \(1/2^i\) relative to the reference voltage. To show this mathematically, the normalized input voltage is represented as \(x = V_\text{in}/V_\text{ref}\), so that \(x \in [0, 1)\). The algorithm starts with an initial approximation of \(x_0 = 0\) and during each iteration \(i\) produces the following approximation: \(x_i = x_{i-1} + s(x - x_{i-1}) \cdot 2^{-i}\), where the binary signum function \(s(v) = +1\) for \(v \ge 0\) and \(s(v) = -1\) for \(v < 0\) mathematically represents the comparison of the previous iteration's approximation with the normalized input voltage. It follows using mathematical induction that the approximation of the \(i\)-th iteration theoretically has a bounded accuracy of \(|x - x_i| \le 2^{-i}\). Inaccuracies in non-ideal analog circuits When implemented as a real analog circuit, circuit inaccuracies and noise may cause the binary search algorithm to incorrectly remove values it believes cannot be the input, so a successive-approximation ADC might not output the closest value. It is very important for the DAC to accurately produce all analog values for comparison against the unknown \(V_\text{in}\) in order to produce a best-match estimate. The maximal error can easily exceed several LSBs, especially as the error between the actual and ideal DAC output becomes large. Manufacturers may characterize the accuracy using an effective number of bits (ENOB) smaller than the actual number of output bits. Historically, the component-matching limitations of the DAC generally limited the linearity to about 12 bits in practical designs and mandated some form of trimming or calibration to achieve the necessary linearity for more than 12 bits. More recently, SAR ADCs have been limited to 18 bits, while delta-sigma ADCs (which can be 24 bits) are better suited if more than 16 bits are needed. SAR ADCs are commonly found on microcontrollers because they are easy to integrate into a mixed-signal process, but suffer from inaccuracies from the internal reference voltage resistor ladder and clock and signal noise from the rest of the microcontroller, so external ADC chips may provide better accuracy.
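The binary search described above can be sketched directly in software. The following is a minimal behavioral model assuming an ideal DAC and comparator; the function and variable names are illustrative, not taken from any particular device or library.

```python
# Minimal behavioral sketch of an N-bit successive-approximation ADC.
# The DAC and comparator are idealized: the DAC is an exact ratio of
# code to full scale, and the comparator is a noiseless ">" test.

def sar_adc(v_in: float, v_ref: float, n_bits: int = 8) -> int:
    """Digitize v_in (0 <= v_in < v_ref) by binary search over the code space."""
    code = 0
    for bit in reversed(range(n_bits)):
        code |= 1 << bit                       # tentatively set the current bit
        v_dac = v_ref * code / (1 << n_bits)   # ideal DAC output for the trial code
        if v_dac > v_in:                       # comparator: DAC output exceeds input?
            code &= ~(1 << bit)                # reset the bit; otherwise leave it as 1
    return code

# Example: 8-bit conversion of 3.2 V against a 5 V reference.
print(sar_adc(3.2, 5.0))  # 163, i.e. 163/256 * 5 V ~ 3.184 V
```

Each loop iteration corresponds to one comparator decision, mirroring the one-clock-cycle-per-bit behavior described in the examples that follow.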
Examples Example 1: The steps of converting an analog input to 9-bit digital, using successive approximation, are shown here for all voltages from 5 V to 0 V in 0.1 V steps. Since the reference voltage is 5 V, when the input voltage is also 5 V, all bits are set. As the voltage is decreased to 4.9 V, only some of the least significant bits are cleared. The MSB will remain set until the input is one half the reference voltage, 2.5 V. The binary weights assigned to each bit, starting with the MSB, are 2.5, 1.25, 0.625, 0.3125, 0.15625, 0.078125, 0.0390625, 0.01953125, 0.009765625. All of these add up to 4.990234375, meaning binary 111111111, or one LSB less than 5. When the analog input is being compared to the internal DAC output, it effectively is being compared to each of these binary weights, starting with the 2.5 V one, and either keeping it or clearing it as a result. Then, by adding the next weight to the previous result, comparing again, and repeating until all the bits and their weights have been compared to the input, the result, a binary number representing the analog input, is found. Example 2: The working of a 4-bit successive-approximation ADC is illustrated below. The MSB is initially set to 1 whereas the remaining digits are set to zero. If the input voltage is lower than the value stored in the register, on the next clock cycle the register changes its value to that illustrated in the figure by following the green line. If the input voltage is higher, then on the next clock cycle the register changes its value to that illustrated in the figure by following the red line. The simplified structure of this type of ADC, acting on an input range of 0 to \(V_\text{ref}\) volts, can be expressed as an algorithm: Initialize the register with the MSB set to 1 and all other bits set to zero. In the \(n\)-th clock cycle, if the input voltage is higher than the analog equivalent of the number in the register, the \((n+1)\)-th digit from the left is set to 1. If the input voltage is lower than the analog equivalent, then the \(n\)-th digit from the left is reset to zero and the next digit is set to 1. To perform a conversion, an N-bit ADC requires N such clock cycles, excluding the initial state. The successive-approximation ADC can be alternatively explained by first uniformly assigning each digital output to corresponding ranges as shown. It can be seen that the algorithm essentially divides the voltage range into two regions and checks which of the two regions the input voltage belongs to. Successive steps involve taking the identified region from before and further dividing the region into two and continuing identification. This occurs until all possible choices of digital representations are exhausted, leaving behind an identified region that corresponds to only one of the digital representations. Variants Counter-type ADC: The D-to-A converter can easily be turned around to provide the inverse function, A-to-D conversion. The principle is to adjust the DAC's input code until the DAC's output comes within half an LSB of the analog input which is to be converted to binary digital form. Servo tracking ADC: It is an improved version of a counting ADC. The circuit consists of an up-down counter with the comparator controlling the direction of the count. The analog output of the DAC is compared with the analog input. If the input is greater than the DAC output signal, the output of the comparator goes high and the counter is caused to count up. The tracking ADC has the advantage of being simple.
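A quick numerical check of Example 1, in the same sketch style as above; the 4.9 V conversion at the end is an added illustration of the statement that only some of the least significant bits are cleared, not a figure from the source.

```python
# Verify the 9-bit binary weights for a 5 V reference quoted in Example 1.
v_ref, n_bits = 5.0, 9
weights = [v_ref / 2**(i + 1) for i in range(n_bits)]
print(weights)       # [2.5, 1.25, 0.625, ..., 0.009765625]
print(sum(weights))  # 4.990234375 -> code 111111111, one LSB below 5 V

# A 4.9 V input clears only low-order bits: code 111110101.
code = int(4.9 / v_ref * 2**n_bits)   # the floor code the SAR search converges to
print(format(code, "09b"))            # 111110101
```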
The disadvantage, however, is that the time needed to stabilize at a new conversion value is directly proportional to the rate at which the analog signal changes. Charge-redistribution successive-approximation ADC One of the most common SAR ADC implementations uses a charge-scaling DAC consisting of an array of individually switched capacitors sized in powers of two, plus an additional duplicate of the smallest capacitor, for a total of \(N+1\) capacitors for \(N\) bits. Thus if the largest capacitance is \(C\), then the array's total capacitance is \(2C\). The switched capacitor array acts as both the sample-and-hold element and the DAC. Redistributing their charge will adjust their net voltage, which is fed into the negative input of a comparator (whose positive input is always grounded) to perform the binary search using the following steps: Discharge: The capacitors are discharged. (Note: discharging to the comparator's offset voltage will automatically provide offset cancellation.) Sampling: The capacitors are switched to the input signal \(V_\text{in}\). After a brief sampling period, the capacitors will hold a charge equal to their respective capacitance times \(V_\text{in}\) (minus the offset voltage upon each of them), so the array holds a total charge of \(2C \cdot V_\text{in}\). Hold: The capacitors are then switched to ground. This provides the comparator's negative input with a voltage of \(-V_\text{in}\). Conversion: the actual conversion process proceeds with the following steps in each iteration, starting with the largest capacitor as the test capacitor for the MSB, and then testing each next smaller capacitor in order for each bit of lower significance: Redistribution: The current test capacitor is switched to \(V_\text{ref}\). The test capacitor forms a charge divider with the remainder of the array whose ratio depends on the capacitor's relative size. In the first iteration, the ratio is 1/2, so the comparator's negative input becomes \(-V_\text{in} + V_\text{ref}/2\). On the \(i\)-th iteration, the ratio will be \(2^{-i}\), so the \(i\)-th repetition of this redistribution step effectively adds \(V_\text{ref} \cdot 2^{-i}\) to the voltage. Comparison: The comparator's output determines the bit value corresponding to the current test capacitor. In the first iteration, if \(V_\text{in}\) is greater than \(V_\text{ref}/2\), then the comparator will output a digital 1 and otherwise output a digital 0. Update Switch: A digital 1 result will leave the current test capacitor connected to \(V_\text{ref}\) for subsequent iterations, while a digital 0 result will switch the capacitor back to ground. Thus, each iteration may or may not leave a contribution of \(V_\text{ref} \cdot 2^{-i}\) added to the comparator's negative input voltage; for instance, the voltage at the end of the first iteration will be \(-V_\text{in} + V_\text{ref}/2\) if the MSB is 1, and \(-V_\text{in}\) otherwise. End Of Conversion: After all capacitors are tested in the same manner, the comparator's negative input voltage will have converged as close as possible (given the resolution of the DAC) to the comparator's offset voltage. See also Quantization noise Digital-to-analog converter References Further reading CMOS Circuit Design, Layout, and Simulation, 3rd Edition; R. J. Baker; Wiley-IEEE; 1208 pages; 2010; Data Conversion Handbook; Analog Devices; Newnes; 976 pages; 2004; External links Understanding SAR ADCs: Their Architecture and Comparison with Other ADCs - Maxim Choose the right A/D converter for your application - TI Electronic circuits Digital signal processing Analog circuits Approximations
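An idealized numerical model of the charge-redistribution steps above, tracking only the comparator's negative-input node; the capacitor array is abstracted into voltage increments, and the names are illustrative assumptions rather than a description of a specific device.

```python
# Idealized simulation of an N-bit charge-redistribution SAR conversion.
# After sample and hold, the comparator's negative input sits at -Vin;
# each redistribution step adds Vref / 2**i, which is kept (bit = 1) or
# reverted (bit = 0) depending on the comparator decision.

def charge_redistribution_sar(v_in: float, v_ref: float, n_bits: int = 8) -> int:
    node = -v_in              # node voltage after the hold step
    code = 0
    for i in range(1, n_bits + 1):
        step = v_ref / 2**i       # contribution of the i-th test capacitor
        node += step              # redistribution: switch test capacitor to Vref
        if node < 0:              # comparator (positive input grounded): input still larger
            code = (code << 1) | 1    # keep capacitor at Vref -> bit = 1
        else:
            node -= step          # switch capacitor back to ground -> bit = 0
            code <<= 1
    return code

print(charge_redistribution_sar(3.2, 5.0))  # 163
```

For a 3.2 V input against a 5 V reference this returns the same code (163) as the plain binary-search sketch earlier, as expected: the capacitor array merely implements that search in charge rather than in logic.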
Successive-approximation ADC
[ "Mathematics", "Engineering" ]
2,172
[ "Analog circuits", "Electronic engineering", "Mathematical relations", "Approximations" ]
5,655,393
https://en.wikipedia.org/wiki/Mineral%20hydration
In inorganic chemistry, mineral hydration is a reaction which adds water to the crystal structure of a mineral, usually creating a new mineral, commonly called a hydrate. In geological terms, the process of mineral hydration is known as retrograde alteration and is a process occurring in retrograde metamorphism. It commonly accompanies metasomatism and is often a feature of wall rock alteration around ore bodies. Hydration of minerals occurs generally in concert with hydrothermal circulation, which may be driven by tectonic or igneous activity. Processes There are two main ways in which minerals hydrate. One is conversion of an oxide to a double hydroxide, as with the hydration of calcium oxide, CaO, to calcium hydroxide, Ca(OH)2. The other is the incorporation of water molecules directly into the crystalline structure of a new mineral, as with the hydration of feldspars to clay minerals, garnet to chlorite, or kyanite to muscovite. Mineral hydration is also a process in the regolith that results in conversion of silicate minerals into clay minerals. Some mineral structures, for example montmorillonite, are capable of including a variable amount of water without significant change to the mineral structure. Hydration is the mechanism by which hydraulic binders such as Portland cement develop strength. A hydraulic binder is a material that can set and harden submerged in water by forming insoluble products in a hydration reaction. The term hydraulicity or hydraulic activity is indicative of the chemical affinity of the hydration reaction. Examples of hydrated minerals Examples of hydrated minerals include: silicates phyllosilicates, clay minerals "commonly found on Earth as weathering products of rocks or in hydrothermal systems" chlorite muscovite non-silicates oxides and oxy-hydroxides brucite, Mg(OH)2, and goethite, FeO(OH) carbonates hydromagnesite and ikaite, CaCO3·6H2O, the unstable hexahydrate form of calcium carbonate hydroxylated minerals saponite talc hydroxysulfides (mixed sulfides-hydroxides) tochilinite, a hydroxysulfide or hydrated sulfide mineral of iron(II) and magnesium valleriite, an uncommon sulfide-hydroxide mineral of iron(II) and copper See also Clay-water interaction Hydration reaction Iron(III) oxide-hydroxide Ferrihydrite References Metamorphic petrology Inorganic reactions
Mineral hydration
[ "Chemistry" ]
554
[ "Hydrate minerals", "Inorganic reactions", "Hydrates" ]
5,655,436
https://en.wikipedia.org/wiki/Antibody%20microarray
An antibody microarray (also known as antibody array) is a specific form of protein microarray. In this technology, a collection of capture antibodies are spotted and fixed on a solid surface such as glass, plastic, membrane, or silicon chip, and the interaction between the antibody and its target antigen is detected. Antibody microarrays are often used for detecting protein expression from various biofluids including serum, plasma and cell or tissue lysates. Antibody arrays may be used for both basic research and medical and diagnostic applications. Background The concept and methodology of antibody microarrays were first introduced by Tse Wen Chang in 1983 in a scientific publication and a series of patents, when he was working at Centocor in Malvern, Pennsylvania. Chang coined the term "antibody matrix" and discussed the "array" arrangement of minute antibody spots on small glass or plastic surfaces. He demonstrated that a 10×10 (100 in total) and a 20×20 (400 in total) grid of antibody spots could be placed on a 1×1 cm surface. He also estimated that if an antibody is coated at a 10 μg/mL concentration, which is optimal for most antibodies, 1 mg of antibody can make 2,000,000 dots of 0.25 mm diameter. Chang's invention focused on the employment of antibody microarrays for the detection and quantification of cells bearing certain surface antigens, such as CD antigens and HLA allotypic antigens, particulate antigens, such as viruses and bacteria, and soluble antigens. The principle of "one sample application, multiple determinations", along with the assay configuration and mechanics for placing absorbent dots described in the paper and patents, should be generally applicable to different kinds of microarrays. When Tse Wen Chang and Nancy T. Chang were setting up Tanox, Inc. in Houston, Texas in 1986, they purchased the rights to the antibody matrix patents from Centocor as part of the technology base for their new startup. Their first product in development was an assay, termed "immunosorbent cytometry", which could be employed to monitor the immune status, i.e., the concentrations and ratios of CD3+, CD4+, and CD8+ T cells, in the blood of HIV-infected individuals. The theoretical background for protein microarray-based ligand binding assays was further developed by Roger Ekins and colleagues in the late 1980s. According to the model, antibody microarrays would not only permit simultaneous screening of an analyte panel, but would also be more sensitive and rapid than conventional screening methods. Interest in screening large protein sets only arose as a result of the achievements in genomics by DNA microarrays and the Human Genome Project. The first array approaches attempted to miniaturize biochemical and immunobiological assays usually performed in 96-well microtiter plates. While 96-well plate-based antibody arrays have high-throughput capability, the small surface area in each well limits the number of antibody spots and thus the number of analytes detected. Other solid supports, such as glass slides and nitrocellulose membranes, were subsequently utilized to develop arrays which could accommodate larger panels of antibodies. Nitrocellulose membrane-based arrays are flexible, easy to handle, and have increased protein binding capacity, but are less amenable to high-throughput or automated processing. Chemically derivatized glass slides allow for printing of sub-microliter sized antibody spots, reducing the array surface area without sacrificing spot density.
This in turn reduces the volume of sample consumed. Glass slide-based arrays, owing to their smooth and rigid structure, can also be easily fitted to high-throughput liquid handling systems. Most antibody array systems employ 1 of 2 non-competitive methods of immunodetection: single-antibody (label-based) detection and 2-antibody (sandwich-based) detection. The latter method, in which analyte detection requires the binding of 2 distinct antibodies (a capture antibody and a reporter antibody, each binding to a unique epitope), confers greater specificity and lower background signal compared with label-based immunodetection (where only 1 capture antibody is used and detection is achieved by chemically labeling all proteins in the starting sample). Sandwich-based antibody arrays usually attain the highest specificity and sensitivity (ng – pg levels) of any array format; their reproducibility also enables quantitative analysis to be performed. Due to the difficulty of developing matched antibody pairs that are compatible with all other antibodies in the panel, small arrays often make use of a sandwich approach. Conversely, high-density arrays are easier to develop at a lower cost using the single antibody label-based approach. In this methodology, one set of specific antibodies is used and all the proteins in a sample are labelled directly by fluorescent dyes or haptens. Initial uses of antibody-based array systems included detecting IgGs and specific subclasses, analyzing antigens, screening recombinant antibodies, studying yeast protein kinases, analyzing autoimmune antibodies, and examining protein-protein interactions. The first approach to simultaneously detect multiple cytokines from physiological samples using antibody array technology was by Ruo-Pan Huang and colleagues in 2001. Their approach used Hybond ECL membranes to detect a small panel of 24 cytokines from cell culture conditioned media and patient's sera and was able to profile cytokine expression at physiological levels. Huang took this technology and started a new business, RayBiotech, Inc., the first to successfully commercialize a planar antibody array. In the last ten years, the sensitivity of the method was improved by an optimization of the surface chemistry as well as dedicated protocols for their chemical labeling. Currently, the sensitivity of antibody arrays is comparable to that of ELISA and antibody arrays are regularly used for profiling experiments on tissue samples, plasma or serum samples and many other sample types. One main focus in antibody array based profiling studies is biomarker discovery, specifically for cancer. For cancer-related research, the development and application of an antibody array comprising 810 different cancer-related antibodies was reported in 2010. Also in 2010, an antibody array comprising 507 cytokines, chemokines, adipokines, growth factors, angiogenic factors, proteases, soluble receptors, soluble adhesion molecules, and other proteins was used to screen the serum of ovarian cancer patients and healthy individuals and found a significant difference in protein expression between normal and cancer samples. More recently, antibody arrays have helped determine specific allergy-related serum proteins whose levels are associated with glioma and can reduce the risk years before diagnosis. Protein profiling with antibody arrays have also proven successful in areas other than cancer research, specifically in neurological diseases such as Alzheimer's. 
A number of studies have attempted to identify biomarker panels that can distinguish Alzheimer's patients, and many have used antibody arrays in this process. Jaeger and colleagues measured nearly 600 circulatory proteins to discover biological pathways and networks affected in Alzheimer's and explored the positive and negative relationships of the levels of those individual proteins and networks with the cognitive performance of Alzheimer's patients. Currently the largest commercially available sandwich-based antibody array detects 1000 different proteins. In addition, antibody microarray based protein profiling services are available that analyze the protein abundance and the protein phosphorylation or ubiquitinylation status of 1030 proteins in parallel. Antibody arrays are often used for detecting protein expression from many sample types, including samples with various preparations. Jiang and colleagues nicely illustrated the correlation between array protein expression in two different blood preparations: serum and dried blood spots. These different blood sample preparations were analyzed using three antibody array platforms: sandwich-based, quantitative, and label-based. A strong correlation in protein expression was found, suggesting that dried blood spots, which are a more convenient, safe, and inexpensive means of obtaining blood, especially in non-hospitalized public health settings, can be used effectively with antibody array analysis for biomarker discovery, protein profiling, and disease screening, diagnosis, and treatment. Applications The use of antibody microarrays in different medical diagnostic areas has attracted researchers' attention. The digital bioassay is one example of such a research domain. In this technology, an array of microwells on a glass/polymer chip is seeded with magnetic beads (coated with fluorescently tagged antibodies), subjected to targeted antigens and then characterised under a microscope by counting fluorescing wells. A cost-effective fabrication platform (using OSTE polymers) for such microwell arrays has recently been demonstrated and the bio-assay model system has been successfully characterised. Furthermore, immunoassays on thiol-ene "synthetic paper" micropillar scaffolds have been shown to generate a superior fluorescence signal. See also ELISA Protein microarray DNA microarray Tissue microarray Chemical compound microarray Microarray imprinting and surface energy patterning References Microarrays 1983 introductions Reagents for biochemistry
Antibody microarray
[ "Chemistry", "Materials_science", "Biology" ]
1,873
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques", "Biochemistry", "Reagents for biochemistry" ]
5,655,722
https://en.wikipedia.org/wiki/Aquadag
Aquadag is a trade name for a water-based colloidal graphite coating commonly used in cathode ray tubes (CRTs). It is manufactured by Acheson Industries, a subsidiary of ICI. The name is a shortened form of "Aqueous Deflocculated Acheson Graphite", but has become a generic term for conductive graphite coatings used in vacuum tubes. Other related products include Oildag, Electrodag and Molydag. Deflocculation refers to the distribution of powdered high purity graphite in an aqueous solution containing approximately 2% to 10% by weight of various Tannic/Gallotannic acid variants and separating the colloidal graphite suspension from the remaining unsuspended graphite particulates. The product names are often printed with DAG in upper case (e.g. AquaDAG). It is used as an electrically conductive coating on insulating surfaces, and as a lubricant. Properties Aquadag consists of a dispersion of colloidal graphite in distilled water. It is provided in concentrated paste form and is usually diluted with distilled water to a desired consistency before application. It can be applied by brushing, swabbing, spraying, or dipping, after which the surface is dried, leaving a layer of pure graphite. After drying the coating is electrically conductive. Its resistance and other electrical properties vary with degree of dilution and application method. When diluted 1:1 and applied by brush its resistance is: Air-dried ~800 ohms per square Heated to 200 °C ~500 ohms per square Heated to 300 °C ~20–30 ohms per square Use in cathode ray tubes A conductive aquadag coating applied to the inside of the glass envelope of cathode ray tubes, serves as a high-voltage electrode. The coating covers the inside walls of the "bell" of the CRT tube, from just inside the neck, and stops just short of the screen. Due to the graphite, it is electrically conductive and forms part of the high-voltage positive electrode, the second anode, which accelerates the electron beam. The second anode is a metal cylinder inside the neck of the tube, connected to a high positive voltage of 18 to 25 kilovolts. It has spring clips, which press against the walls of the tube, making contact with the aquadag coating so it also carries this high positive voltage. The electron beam from the electron gun in the neck of the tube is accelerated by the high voltage on the anode and passes through it to strike the screen. The aquadag coating has two functions: it maintains a uniform electric field inside the tube near the screen, so the electron beam remains collimated and is not distorted by external fields, and it collects the electrons after they have hit the screen, serving as the return path for the cathode current. When the electron beam hits the screen, in addition to causing the fluorescent phosphor coating to give off light, it also knocks other electrons out of the surface. These secondary electrons are attracted to the high positive voltage of the coating and return through it to the anode power supply. Without the coating a negative space charge would develop near the screen, deflecting the electron beam. A typical value of beam current collected by the anode coating is 0.6 mA. In some CRTs the aquadag coating performs a third function, as a filter capacitor for the high-voltage anode supply. A second conductive coating is applied to part of the outside of the tube facing the inside coating. This outside coating is connected to the ground side of the anode supply, thus the full anode voltage is applied between the coatings. 
The sandwich of the two coatings separated by the dielectric glass wall of the tube forms a capacitor that filters out ripple from the anode supply. Although the capacitance is small, around 500 pF, due to the low anode current it is sufficient to act as a filter capacitor. In the television tube manufacturing industry, the manufacturing step that applies the aquadag is called "dagging". Other uses Aside from its use in the production of CRTs, Aquadag is used in many types of high-voltage lab apparatus where a conductive coating is needed on an insulating surface. The surfaces of some metals (most notably aluminum) can develop nonconductive oxide layers, which tend to disrupt the electrostatic field produced around the surface of the metal when used as an electrode. Aquadag is not subject to such effects and provides a completely uniform equipotential surface for electrostatics. Producers of continuous filament fiberglass will coat their product with Aquadag when a conductive property is required. Aquadag was also used in the production of some copper oxide rectifiers, to help make the ohmic connections to their counterelectrodes. Other dags There are also deflocculated graphite products dispersed in liquids other than water. Acheson has extended the use of the dag brand name to non-graphite products, e.g. the copper-based Electrodag 437 conductive paint. References External links AquaDAG Product data sheet Vacuum tubes
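As a rough sanity check on the claim that roughly 500 pF suffices as a filter, here is a back-of-the-envelope sketch. The ripple frequency is an assumption made for this illustration (the horizontal flyback rate of about 15.7 kHz, from which many CRT anode supplies were derived), as is the 20 kV anode voltage; the 500 pF and 0.6 mA figures come from the article above.

```python
import math

C = 500e-12   # aquadag sandwich capacitance (F), from the article
f = 15.7e3    # assumed ripple frequency: horizontal flyback rate (Hz)
V = 20e3      # assumed anode voltage (V), mid-range of 18 to 25 kV
I = 0.6e-3    # typical anode beam current (A), from the article

X_c = 1 / (2 * math.pi * f * C)  # capacitive reactance at the ripple frequency
R_load = V / I                   # effective load presented by the electron beam

print(f"X_c = {X_c / 1e3:.0f} kohm, load = {R_load / 1e6:.0f} Mohm")
# X_c is about 20 kohm against a roughly 33 Mohm load: the capacitor
# shunts ripple about three orders of magnitude more strongly than the
# load draws current, which is why such a small capacitance is enough.
```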
Aquadag
[ "Physics" ]
1,103
[ "Vacuum tubes", "Vacuum", "Matter" ]
5,656,245
https://en.wikipedia.org/wiki/Removable%20User%20Identity%20Module
Removable User Identity Module (R-UIM, usually pronounced as "R-yuim") is a card developed for cdmaOne/CDMA2000 ("CDMA") handsets that extends the GSM SIM card to CDMA phones and networks. To work in CDMA networks, the R-UIM contains an early version of the CSIM application. The card also contains a SIM (GSM) application, so it can work on both networks. It is physically compatible with GSM SIMs and can fit into existing GSM phones, as it is an extension of the GSM 11.11 standard. This interface brings one of the main advantages of GSM to CDMA network phones. By having a removable identity card, CDMA users can change phones while keeping their phone numbers by simply swapping the cards. This simplifies many situations such as phone upgrades, phone replacements due to damage, or using the same phone on a different provider's CDMA network. The R-UIM card has been superseded by CSIM on UICC. This technique allows all three applications (SIM, CSIM, and USIM) to coexist on a single smartcard, allowing the card to be used in virtually any phone worldwide that supports smart cards. The CSIM application, a port of R-UIM functionality to the UICC, is defined in the 3GPP2 standard C.S0065. This form of card is widely used in China under the CDMA service of China Telecom (which acquired China Unicom's CDMA network in 2008). However, it is also used elsewhere, such as in India, Indonesia, Japan, Taiwan, Thailand, and the US. See also CDMA subscriber identity module (CSIM) Subscriber identity module (SIM) Universal subscriber identity module (USIM) W-SIM MEID References External links Qualcomm TIA Standardizes Removable User Identity Modules Why do CDMA Subscribers Need the R-UIM? – PDF whitepaper Dual-mode R-UIM Mobile telecommunications standards 3rd Generation Partnership Project 2 standards
Removable User Identity Module
[ "Technology" ]
421
[ "Mobile telecommunications", "Mobile telecommunications standards" ]
5,656,456
https://en.wikipedia.org/wiki/Java%20Analysis%20Studio
Java Analysis Studio (JAS) is an object oriented data analysis package developed for the analysis of particle physics data. The latest major version is JAS3. JAS3 is a fully AIDA-compliant data analysis system. It is popular for data analysis in areas of particle physics which are familiar with the Java programming language. The Studio uses many other libraries from the FreeHEP project. External links Java Analysis Studio 3 website AIDA: Abstract Interfaces for Data Analysis — open interfaces and formats for particle physics data processing Data analysis software Experimental particle physics Free software programmed in Java (programming language) Free statistical software Numerical software Physics software
Java Analysis Studio
[ "Physics", "Mathematics" ]
128
[ "Mathematical software", "Computational physics", "Experimental physics", "Particle physics", "Numerical software", "Experimental particle physics", "Particle physics stubs", "Computational physics stubs", "Physics software" ]
11,048,177
https://en.wikipedia.org/wiki/Ellerman%20bomb
In solar physics, Ellerman bombs are intense, small-scale brightenings in the Sun's photosphere. They are only observed in the wings of the Hα, Hβ, and Hγ hydrogen spectral lines and take place in emerging flux regions where emerging magnetic fields interact with existing fields. They are named after Ferdinand Ellerman, who studied them in detail in 1917. History Intense brightenings resembling what would later be referred to as Ellerman bombs were first reported by Walter M. Mitchell in 1909. In 1917, observations of this phenomenon made at the Mount Wilson Solar Observatory were described in detail by Ferdinand Ellerman. He referred to them as "solar hydrogen bombs" in reference to the phenomenon only appearing in observations of hydrogen spectral lines. Description As originally described in Ellerman's 1917 paper, Ellerman bombs are intense brightenings in the wings of the Hα, Hβ, and Hγ hydrogen spectral lines with no brightening of the line cores or of other spectral lines. They occur in intergranular lanes in the photosphere exclusively at the sites of emerging flux regions where emerging vertical magnetic fields interact with the existing intergranular field. This interaction is suggested to result in magnetic reconnection, producing the brightenings associated with Ellerman bombs. The lack of observed brightening of the Hα core is attributed to Ellerman bombs being a photospheric phenomenon. In growing active regions, dense chromospheric Hα fibrils form a canopy above the photosphere, blocking Hα emission from Ellerman bombs below. As a result, only emission in the Hα wings passes through and is observed. References Solar phenomena Sun
Ellerman bomb
[ "Physics" ]
345
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
11,048,263
https://en.wikipedia.org/wiki/Ember
An ember, also called a hot coal, is a hot lump of smouldering solid fuel, typically glowing, composed of greatly heated wood, coal, or other carbon-based material. Embers (hot coals) can exist within, remain after, or sometimes precede, a fire. Embers are, in some cases, as hot as the fire which created them. They radiate a substantial amount of heat long after the fire has been extinguished, and if not taken care of properly they can rekindle a fire that was thought to be completely extinguished, posing a fire hazard. In order to avoid the danger of accidentally spreading a fire, many campers pour water on the embers or cover them in dirt. Alternatively, embers can be used to relight a fire after it has gone out without the need to rebuild the fire; in a conventional fireplace, a fire can easily be relit up to 12 hours after it goes out, provided that there is enough space for air to circulate between the embers and the introduced fuel. Embers are often used for cooking, such as in charcoal barbecues, because they radiate a more consistent form of heat than an open fire, which is constantly changing along with the heat it radiates. An ember forms when a fire has only partially burnt a piece of fuel, leaving usable chemical energy in that piece of fuel. This happens because the usable chemical energy is so deep in the center that air (specifically oxygen) does not reach it, so it does not combust (carbon-based fuel + O2 → CO2 + H2O + C + other products). The ember stays hot and does not lose its thermal energy quickly because combustion is still happening at a low level. The small yellow, orange and red lights often seen among the embers are actually combustion; the combustion is just not happening at a fast enough rate to create a flame. Once the embers are completely 'burned through', the remains are incombustible minerals containing elements such as calcium and phosphorus. At that point they are called ashes. Embers play a large role in forest fires, wildland fires or wildland-urban interface fires. Because embers, often burnt leaves, are small and lightweight, they can easily become airborne. During a large fire, with the right conditions, embers can be blown far ahead of the fire front, starting spot fires several kilometres/miles away. A number of practical measures can be undertaken by homeowners to reduce the consequences of such an "ember attack", which bombards structures (especially wooden ones) and starts property fires. References External links Fire
Ember
[ "Chemistry" ]
567
[ "Combustion", "Fire" ]
11,048,453
https://en.wikipedia.org/wiki/SiReNT
The Singapore Satellite Positioning Reference Network (SiReNT) is an infrastructure network launched by the Survey Services section of the Singapore Land Authority in 2006. Its purpose is to define Singapore's official spatial reference framework and to support the cadastral system in SVY21. It is a multi-purpose, high-precision positioning infrastructure which provides both post-processed Differential Global Positioning System (DGPS) services and real-time DGPS services. The system supports all types of GPS positioning modes and formats. SiReNT comprises five GPS reference stations connected to a data control centre hosted in a government data centre. Four of the five reference stations are located at the extreme corners of the island of Singapore, with the fifth located in the centre of the island. The four external reference stations are located at Nanyang Technological University, Keppel Club, Loyang, and Senoko, with the designations SNTU, SKEP, SLOY, and SSEK, respectively. The central location is at Nanyang Polytechnic, designated SNYP. The entire set-up is made up of advanced GPS equipment and sophisticated computer hardware, software, communications and networking. SiReNT supports a great variety of applications. It provides data reliability, efficiency and productivity of survey work for land surveyors with the aid of GPS technology. It also offers a wide range of GPS data services with various accuracy levels, ranging from metres to centimetres, to suit different applications from positioning to tracking and monitoring. These GPS reference stations receive satellite signals 24 hours a day and transmit GPS data continuously to the data control centre for storage and processing. Corrections processed from the data are then streamed to subscribed users. SiReNT offers four types of services, namely Post Processing (PP) On-Demand, Post Processing (PP) Archive, Real Time Kinematic (RTK) and low-accuracy Differential Global Positioning System (DGPS), to suit different applications. In 2010, SiReNT introduced support for telematics and structural monitoring solutions. References Global Positioning System
SiReNT
[ "Technology", "Engineering" ]
404
[ "Global Positioning System", "Aerospace engineering", "Wireless locating", "Aircraft instruments" ]
11,048,579
https://en.wikipedia.org/wiki/NGC%204449
NGC 4449, also known as Caldwell 21, is an irregular Magellanic type galaxy in the constellation Canes Venatici, located about 13 million light-years away. It is part of the M94 Group or Canes Venatici I Group, which is relatively close to the Local Group that hosts our Milky Way galaxy. Characteristics This galaxy is similar in nature to the Milky Way's satellite galaxy, the Large Magellanic Cloud (LMC), though it is neither as bright nor as large. NGC 4449 has a general bar shape, also characteristic of the LMC, with scattered young blue star clusters. Unlike the Large Magellanic Cloud, however, NGC 4449 is considered a starburst galaxy due to its high rate of star formation (twice that of the LMC) and includes several massive, young star clusters, one of them in the galaxy's center. Photos of the galaxy show the pinkish glow of atomic hydrogen gas, the telltale tracer of massive star forming regions. NGC 4449 is surrounded by a large envelope of neutral hydrogen that extends over an area of 75 arc minutes (14 times larger than the optical diameter of the galaxy). The envelope shows distortions and irregularities likely caused by interactions with nearby galaxies. Interactions with nearby galaxies are thought to have influenced star formation in NGC 4449 and, in fact, in 2012 two small galaxies were discovered interacting with this galaxy: a very low surface brightness disrupted dwarf spheroidal with the same stellar mass as NGC 4449's halo but with a ratio of dark matter to stellar matter between 5 and 10 times that of NGC 4449, and a highly flattened globular cluster with two tails of young stars that may be the nucleus of a gas-rich galaxy. Both satellites have apparently been disrupted by NGC 4449 and are now being absorbed by it. At least one ultraluminous X-ray source (ULX) is known in NGC 4449, called NGC 4449 X7. Three candidates have been identified as optical counterparts to NGC 4449 X7 (i.e. they may be associated with the ULX). They are all early-type (B-type to F-type) supergiants that are estimated to be about 40 to 50 million years old and about 8 times the mass of the Sun. In May 2024, the James Webb Space Telescope captured a detailed image of NGC 4449, highlighting widespread starburst activity. This infrared image revealed intricate structures of gas, dust, and newly forming stars, further enriching our understanding of star formation processes influenced by interactions with nearby galaxies. This discovery emphasizes NGC 4449's role as a key site for studying galaxy evolution and stellar birth. References External links Astronomy Picture of the Day – May 3, 2007, 10 July 2007, and 25 February 2011 Irregular galaxies Barred irregular galaxies Canes Venatici 4449 07592 40973 021b M94 Group
NGC 4449
[ "Astronomy" ]
606
[ "Canes Venatici", "Constellations" ]
11,050,053
https://en.wikipedia.org/wiki/Artemotil
Artemotil (INN; also known as β-arteether), is a fast acting blood schizonticide specifically indicated for the treatment of chloroquine-resistant Plasmodium falciparum malaria and cerebral malaria cases. It is a semi-synthetic derivative of artemisinin, a natural product of the Chinese plant Artemisia annua. It is currently only used as a second line drug in severe cases of malaria. References Antimalarial agents Ethers Organic peroxides Sesquiterpenes Trioxanes Ethoxy compounds Heterocyclic compounds with 4 rings
Artemotil
[ "Chemistry" ]
130
[ "Organic compounds", "Functional groups", "Ethers", "Organic peroxides" ]
11,050,227
https://en.wikipedia.org/wiki/Plutonium-241
Plutonium-241 (241Pu or Pu-241) is an isotope of plutonium formed when plutonium-240 captures a neutron. Like some other plutonium isotopes (especially 239Pu), 241Pu is fissile, with a neutron absorption cross section about one-third greater than that of 239Pu, and a similar probability of fissioning on neutron absorption, around 73%. In the non-fission case, neutron capture produces plutonium-242. In general, isotopes with an odd number of neutrons are both more likely to absorb a neutron and more likely to undergo fission on neutron absorption than isotopes with an even number of neutrons. Decay properties Plutonium-241 is a beta emitter with a half-life of 14.3 years, corresponding to the decay of about 5% of 241Pu nuclei over a one-year period. This beta decay has a low Q-value of about 20.8 keV and a mean emitted energy of about 5.2 keV, and does not emit gamma rays. The longer spent nuclear fuel waits before reprocessing, the more 241Pu decays to americium-241, which is nonfissile (although fissionable by fast neutrons) and an alpha emitter with a half-life of 432 years; 241Am is a major contributor to the radioactivity of nuclear waste on a scale of hundreds or thousands of years. In its fully ionized state, the beta-decay half-life of 241Pu94+ decreases to 4.2 days, and only bound-state beta decay is possible. Plutonium-241 also has a rare alpha decay branch to uranium-237, occurring in about 0.002% of decays. With a Q-value of about 5.1 MeV, this alpha decay can be accompanied by Auger electrons and associated X-rays, unlike the beta-decay process. Role in nuclear fuel Americium has lower valence and lower electronegativity than plutonium, neptunium or uranium, so in most nuclear reprocessing, americium tends to fractionate with the alkaline fission products – lanthanides, strontium, caesium, barium, yttrium – rather than with other actinides. Americium is therefore not recycled into nuclear fuel unless special efforts are made. In a thermal reactor, 241Am captures a neutron to become americium-242, which quickly becomes curium-242 via beta decay (or, 17.3% of the time, 242Pu via electron capture). Both 242Cm and 242Pu are much less likely to absorb a neutron, and even less likely to fission; however, 242Cm is short-lived (half-life 160 days) and almost always undergoes alpha decay to 238Pu rather than capturing another neutron. In short, 241Am needs to absorb two neutrons before again becoming a fissile isotope. References Actinides Isotopes of plutonium Fissile materials
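To make the decay timescales concrete, here is a minimal sketch (written for this illustration; the function and variable names are invented) of the two-member Bateman equations for the 241Pu → 241Am chain, using the half-lives quoted above.

```python
import math

T_PU241 = 14.3    # half-life of Pu-241 in years (beta decay to Am-241)
T_AM241 = 432.0   # half-life of Am-241 in years (alpha decay)

def pu241_am241(t, n0=1.0):
    """Return the Pu-241 remaining and the Am-241 grown in after t years."""
    lp = math.log(2) / T_PU241
    la = math.log(2) / T_AM241
    pu = n0 * math.exp(-lp * t)
    am = n0 * lp / (la - lp) * (math.exp(-lp * t) - math.exp(-la * t))
    return pu, am

for t in (1.0, 14.3, 50.0):
    pu, am = pu241_am241(t)
    print(f"after {t:5.1f} y: Pu-241 = {pu:.3f}, Am-241 = {am:.3f}")
# After 1 year about 4.7% of the Pu-241 has decayed, matching the
# "about 5% per year" figure above; after one half-life (14.3 y)
# roughly half the initial Pu-241 has become Am-241, and because
# Am-241 itself decays so slowly almost all of it is still present.
```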
Plutonium-241
[ "Chemistry" ]
591
[ "Explosive chemicals", "Fissile materials", "Isotopes of plutonium", "Isotopes" ]
11,050,437
https://en.wikipedia.org/wiki/Youth%20philanthropy
Youth philanthropy is the donation of time, energy or resources, including money, by children and youth towards philanthropic causes. According to one study, "youth philanthropy is, at the broadest level, youth giving of their time, talents and treasure." It is seen as an effective means in which youth develop knowledge of and participate in philanthropic projects such as volunteering, grant writing, and community service. About Youth philanthropy educates young people about social change in order to identify community problems and design the most appropriate solutions in a systemic way. Philanthropy in this case is defined as anything young people do to make the world around them a better place. Focused on youth-adult partnerships and youth voice, youth philanthropy is seen as a successful application of service learning. Youth philanthropy helps young people develop skills, knowledge, confidence and leadership abilities. Youth philanthropy is also identified as a particularly effective means for educating children and youth about volunteerism and civic engagement. Within the Jewish community institutions such as synagogues, day schools, Jewish federations and other organizations have created Jewish youth philanthropy programs to provide Jewish youth with opportunities to engage in grantmaking activities through a Jewish lens. The Jewish Teen Funders Network serves as a central address for Jewish youth philanthropy, and aims to help grow and strengthen the burgeoning field. See also Youth programs A Kid's Guide to Giving Kids Helping Kids References External links Learning to Give The Learning By Giving Foundation National Center for Family Philanthropy Philanthropy P
Youth philanthropy
[ "Biology" ]
292
[ "Philanthropy", "Behavior", "Altruism" ]
11,050,610
https://en.wikipedia.org/wiki/Journal%20of%20Land%20Use%20and%20Environmental%20Law
The Journal of Land Use & Environmental Law is published twice a year at the Florida State University College of Law. Founded in 1983, it is Florida's first and only student publication in the field. The law review ranks among the top environmental and land use law journals based on citations. The Journal is edited and published entirely by law students at Florida State University College of Law. It is managed by an executive board popularly elected annually by the members. References External links American law journals Florida State University Academic journals established in 1983 Environmental law journals Urban studies and planning journals Law journals edited by students Land use
Journal of Land Use and Environmental Law
[ "Environmental_science" ]
120
[ "Environmental science journals", "Environmental social science stubs", "Environmental social science", "Environmental science journal stubs" ]
11,050,654
https://en.wikipedia.org/wiki/Pharmaconomist
In Denmark (including Greenland and Faroe Islands), pharmaconomists (farmakonomer) are experts in pharmaceuticals (lægemiddelkyndige) who have trained with a 3-year tertiary degree. Pharmaconomy (farmakonomi) describes either their professional practice or their training courses. Work The majority of the Danish pharmaconomists work at community pharmacies (chemists' shops or drug stores) and at hospital pharmacies and hospitals. Some pharmaconomists work within the chemical industry, the pharmaceutical industry and in medical or clinical laboratories. Other pharmaconomists teach pharmacy students and pharmaconomy students at colleges or universities, such as at the University of Copenhagen's Faculty of Health and Medical Sciences or at the Pharmakon—Danish College of Pharmacy Practice. Pharmaconomists are also employed by the Danish Ministry of Interior and Health, Danish Medicines Agency and Danish Association of Pharmacies. Some pharmaconomists work as pharmaceutical consultants. Education The 3-year higher education corresponds to 180 ECTS points (European Credit Transfer and Accumulation System). Pharmakon—Danish College of Pharmacy Practice During his or her education programme at Pharmakon—Danish College of Pharmacy Practice, the pharmaconomist student studies human and animal anatomy, physiology, pathology, pharmacology, pharmaconomy, pharmacy practice, pharmaceutics, toxicology, pharmacognosy, clinical pharmacy, pharmacotherapy, pharmaceutical sciences, chemistry, pharmaceutical chemistry, biochemistry, biology, microbiology, molecular biology, genetics, cytology, medicine, veterinary medicine, zoology, diagnosis, medical prescription, pharmacy law, medical sociology, patient safety, health care, psychology, psychiatry, pedagogy, communication, information technology (IT), bioethics, medical ethics, safety, leadership, organization, logistics, economy, quality assurance (QA), sales and marketing. Difference between a pharmaconomist and a pharmacist There are two different professional groups with pharmaceutical education in Denmark: Pharmaconomists (with a 3-year higher tertiary education) Pharmacists (with a 5-year higher tertiary education) Due to his or her higher education as a health professional, the pharmaconomist has by law the same independent competence in all Danish pharmacies as a pharmacist, i.e., for example, to dispense and check medical prescriptions, to counsel and advise patients/customers about the use of medicine/pharmaceuticals and to dispense, sell and provide information about medical prescriptions and about prescription medicine and over-the-counter medicine (OTC). The pharmaconomist also undertakes specialist and managerial operation of pharmacies and undertakes managerial duty service. The only difference by law is that only a pharmacist may own a Danish pharmacy, i.e. become a pharmacy owner. Like pharmacists, pharmaconomists can work as pharmacy managers and HR managers (or as chief pharmaconomists). Trade union The Danish Association of Pharmaconomists is a trade union that represents about 5,700 pharmaconomists in Denmark (i.e. 98% of all Danish pharmaconomists). Translation into other languages The Danish title farmakonom (pharmaconomist) comes from the Greek "pharmakon" (meaning "pharmaceuticals") and "nom" (meaning "expert in"). In Denmark a pharmaconomist is also referred to as lægemiddelkyndig (expert in pharmaceuticals). Lægemiddelkyndig comes from the Danish "lægemiddel" (meaning "pharmaceuticals") and "kyndig" (meaning "expert in").
The title "pharmaconomist" in other languages: English: pharmaconomist (plural: pharmaconomists) Danish: farmakonom (plural: farmakonomer) Faroese: farmakonomur (plural: farmakonomar) French: pharmaconome (plural: pharmaconomes) German: Pharmakonom (plural: Pharmakonomen) Greenlandic: farmakonomit (plural: farmakonominullu) Italian: farmaconomista (plural: farmaconomisti) Spanish: farmaconomista (plural: farmaconomistas) Swedish: farmakonom (plural: farmakonomer) The title "expert in pharmaceuticals" in other languages: English: expert in pharmaceuticals (plural: experts in pharmaceuticals) Danish: lægemiddelkyndig (plural: lægemiddelkyndige) French: expert en medicaments (plural: experts en medicaments) German: Arzneimittelexperte (plural: Arzneimittelexperten) Italian: esperto in farmaci (plural: esperti in farmaci) Spanish: experto en fármacos (plural: expertos en fármacos) Swedish: läkemedelsexpert (plural: läkemedelsexperter) The academic discipline of "pharmaconomy" in other languages: English: pharmaconomy Danish: farmakonomi German: Pharmakonomie French: pharmaconomie Italian: farmaconomia Spanish: farmaconomía Swedish: farmakonomi See also Professional Further Education in Clinical Pharmacy and Public Health History of pharmacy Sources Pharmakon—Danish College of Pharmacy Practice The Danish Association of Pharmaconomists The Danish Pharmaceutical Association Official Curriculum of the Danish Education of Pharmaconomists (September 2007) Official Executive Order on the Education of Pharmaconomists (June 2007) Information about pharmaconomists Pharmaceutical industry Education in Denmark Pharmacy in Denmark Pharmaconomists
Pharmaconomist
[ "Chemistry", "Biology" ]
1,256
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry" ]
11,050,673
https://en.wikipedia.org/wiki/Strategic%20move
A strategic move in game theory is an action taken by a player outside the defined actions of the game in order to gain a strategic advantage and increase one's payoff. Strategic moves can either be unconditional moves or response rules. The key characteristics of a strategic move are that it involves a commitment from the player, meaning the player can only restrict their own choices and that the commitment has to be credible, meaning that once employed it must be in the interest of the player to follow through with the move. Credible moves should also be observable to the other players. Strategic moves are not warnings or assurances. Warnings and assurances are merely statements of a player's interest, rather than an actual commitment from the player. The term was coined by Thomas Schelling in his 1960 book, The Strategy of Conflict, and has gained wide currency in political science and industrial organization. References Thomas Schelling: The Strategy of Conflict, Harvard University press (1960). Avinash Dixit & Barry Nalebuff: Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life, W.W. Norton (1991) Game theory
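As a small illustration of how a credible commitment changes an outcome, consider a stylized entry-deterrence game. This sketch is not from Schelling's text; the payoffs and names are invented for illustration, and the code simply solves the game by backward induction with and without the incumbent's commitment to fight.

```python
# Payoffs are (entrant, incumbent). Without entry the incumbent earns 10.
# If entry occurs, the incumbent either fights (both lose) or
# accommodates (both earn a modest profit).
PAYOFFS = {
    ("enter", "fight"): (-1, 0),
    ("enter", "accommodate"): (5, 5),
    ("stay out", None): (0, 10),
}

def solve(committed_to_fight: bool):
    """Backward induction: the entrant anticipates the incumbent's move."""
    # Incumbent's best response to entry, unless a commitment binds it.
    response = "fight" if committed_to_fight else max(
        ("fight", "accommodate"),
        key=lambda r: PAYOFFS[("enter", r)][1],
    )
    # Entrant compares entering, given the anticipated response, to staying out.
    entrant_in = PAYOFFS[("enter", response)][0]
    entrant_out = PAYOFFS[("stay out", None)][0]
    return ("enter", response) if entrant_in > entrant_out else ("stay out", None)

print(solve(committed_to_fight=False))  # ('enter', 'accommodate')
print(solve(committed_to_fight=True))   # ('stay out', None)
```

By restricting its own choices the incumbent raises its payoff from 5 to 10. In the terminology above, the bare threat to fight is only a warning; to become a strategic move, the incumbent must commit in a way that makes fighting its genuine best response once entry occurs, and that commitment must be observable to the entrant.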
Strategic move
[ "Mathematics" ]
237
[ "Game theory" ]
11,051,153
https://en.wikipedia.org/wiki/Sensu
Sensu is a Latin word meaning "in the sense of". It is used in a number of fields including biology, geology, linguistics, semiotics, and law. Commonly it refers to how strictly or loosely an expression is used in describing any particular concept, but it also appears in expressions that indicate the convention or context of the usage. Common qualifiers Sensu is the ablative case of the noun sensus, here meaning "sense". It is often accompanied by an adjective (in the same case). Three such phrases are: sensu stricto – "in the strict sense", abbreviation s.s. or s.str.; sensu lato – "in the broad sense", abbreviation s.l.; sensu amplo – "in a relaxed, generous (or 'ample') sense", a similar meaning to sensu lato. Søren Kierkegaard uses the phrase sensu eminenti to mean "in the pre-eminent [or most important or significant] sense". When appropriate, comparative and superlative adjectives may also be used to convey the meaning of "more" or "most". Thus sensu stricto becomes sensu strictiore ("in the stricter sense" or "more strictly speaking") and sensu strictissimo ("in the strictest possible sense" or "most strictly speaking"). Current definitions of the plant kingdom (Plantae) offer a biological example of when such phrases might be used. One definition of Plantae is that it consists of all green plants (comprising green algae and land plants), all red algae and all glaucophyte algae. A stricter definition excludes the red and glaucophyte algae; the group defined in this way could be called Plantae in sensu stricto. An even stricter definition excludes green algae, leaving only land plants; the group defined in this way could be called Plantae in sensu strictiore. Conversely, where convenient, some authors derive expressions such as "sensu non strictissimo", meaning "not in the narrowest possible sense". A similar form is in use to indicate the sense of a particular context, such as "Nonmonophyletic groups are ... nonnatural (sensu cladistics) in that ..." or "... computation of a cladogram (sensu phenetics) ..." Also the expression sensu auctorum (abbreviation: sensu auct.) is used to mean "in the sense of certain authors", who can be designated or described. It normally refers to a sense which is considered invalid and may be used in place of the author designation of a taxon in such a case (for instance, "Tricholoma amethystinum sensu auct." is an erroneous name for a mushroom which should really be "Lepista personata (Fr.) Cooke"). Qualifiers and contexts A related usage is in a concept-author citation ("sec. Smith", or "sensu Smith"), indicating that the intended meaning is the one defined by that author. (Here "sec." is an abbreviation of "secundum", meaning "following" or "in accordance with".) Such an author citation is different from the citation of the nomenclatural "author citation" or "authority citation". In biological taxonomy the author citation following the name of a taxon simply identifies the author who originally published the name and applied it to the type, the specimen or specimens that one refers to in case of doubt about the definition of a species. Given that an author (such as Linnaeus, for example) was the first to supply a definite type specimen and to describe it, it is to be hoped that his description would stand the tests of time and criticism, but even if it does not, then as far as practical the name that he had assigned will apply. 
It still will apply in preference to any subsequent names or descriptions that anyone proposes, whether his description was correct or not, and whether he had correctly identified its biological affinities or not. This does not always happen of course; all sorts of errors occur in practice. For example, a collector might scoop a netful of small fish and describe them as a new species; it then might turn out that he had failed to notice that there were several (possibly unrelated) species in the net. It then is not clear what he had named, so his name can hardly be taken seriously, either s.s. or s.l. After a species has been established in this manner, specialist taxonomists may work on the subject and make certain types of changes in the light of new information. In modern practice it is greatly preferred that the collector of the specimens immediately passes them to specialists for naming; it is rarely possible for non-specialists to tell whether their specimens are of new species or not, and in modern times not many publications or their referees would accept an amateur description. In any event, the person who finally classifies and describes a species has the task of taxonomic circumscription. Circumscription means in essence that anyone competent in the matter can tell which creatures are included in the species described, and which are excluded. It is in this process of species description that the question of the sense arises, because that is where the worker produces and argues their view of the proper circumscription. Equally, or perhaps even more strongly, the arguments for deciding questions concerning higher taxa such as families or orders, require very difficult circumscription, where changing the sense applied could totally upset an entire scheme of classification, either constructively or disastrously. Note that the principles of circumscription apply in various ways in non-biological senses. In biological taxonomy the usual assumption is that circumscription reflects the shared ancestry perceived as most likely in the light of the currently available information; in geology or legal contexts far wider and more arbitrary ranges of logical circumscription commonly apply, not necessarily formally uniformly. However, the usage of expressions incorporating sensu remains functionally similarly intelligible among the fields. In geology for example, in which the concept of ancestry is looser and less pervasive than in biology, one finds usages such as: "This ambiguity ... has led to a ... dual interpretation of the Kimmeridgian Stage; the longer sensu anglico meaning, or the shorter sensu gallico meaning." Here the "anglico" or English meaning referred to interpretations by English geologists, derived from English materials and conditions, whereas "gallico" referred to interpretations by French and German geologists, derived from continental materials and conditions. "...genetic stratigraphic sequences sensu Galloway (1989)" meaning those sequences so referred to by Galloway, much as in the biological usage in referring to the terminology of particular authorities. "The second progradational unit plus PAN-4 are correlatable to the Pontian sensu stricto (sensu Sacchi 2001)." Here we have a meta-reference: the Pontian in the sense that Sacchi had applied it as sensu stricto. Examples in practical taxonomy Sensu is used in the taxonomy of living creatures to specify which circumscription of a given taxon is meant, where more than one circumscription can be defined. 
"The family Malvaceae s.s. is cladistically monophyletic." This means that the members of the entire family of plants under the name Malvaceae (strictly speaking), over 1000 species, including the closest relatives of cotton and hibiscus, all descend from a shared ancestor, specifically, that they, and no other extant plant taxa, share a notional most recent common ancestor (MRCA). If this is correct, that ancestor might have been a single species of plant. Conversely the assertion also means that the family includes all surviving species descended from that ancestor. Other species of plants that some people might (broadly speaking or s.l.) have included in the family would not have shared that MRCA (or ipso facto they too would have been members of the family Malvaceae s.s. In short, this circumscription s.s. includes all and only plants that have descended from that particular ancestral stock. "In the broader APG circumscription the family Malvaceae s.l. includes Malvaceae s.s. and also the families Bombacaceae, Sterculiaceae and Tiliaceae." Here the circumscription is broader, stripped of some of its constraints by saying sensu lato; that is what speaking more broadly amounts to. Discarding such constraints might be for historical reasons, for example when people usually speak of the polyphyletic taxon because the members were long believed to form a "true" taxon and the standard literature still refers to them together. Alternatively a taxon might include members simply because they form a group that is convenient to work with in practice. In this example, we can know from additional sources that we are dealing with the latter case: by adding other groups of plants to the family Malvaceae s.l., including those related to cacao, cola, durian, and jute, the APG circumscription omits some of the criteria by which the new members previously had been excluded. The s.l. group remains monophyletic. "The 'clearly non-monophyletic' series Cyrtostylis sensu A.S. George has been virtually dismantled..." This remark specifies Alex George's particular description of that series. It is a different kind of circumscription, alluding to the fact that A.S. George called them a series. "Sensu A.S. George" means that A.S. George discussed the Cyrtostylis in that series, and that members of that series are the ones under discussion in the same sense—how A. S. George saw them; the current author might or might not approve George's circumscription, but George's is the circumscription currently under consideration. See also Glossary of scientific naming References External links Scientific terminology Latin biological phrases Botanical nomenclature
Sensu
[ "Biology" ]
2,156
[ "Botanical nomenclature", "Botanical terminology", "Biological nomenclature", "Latin biological phrases" ]
11,051,720
https://en.wikipedia.org/wiki/Solar-powered%20desalination%20unit
A solar-powered desalination unit produces potable water from saline water through direct or indirect methods of desalination powered by sunlight. Solar energy is the most promising renewable energy source due to its ability to drive the more popular thermal desalination systems directly through solar collectors and to drive physical and chemical desalination systems indirectly through photovoltaic cells. Direct solar desalination produces distillate directly in the solar collector. An example would be a solar still, which traps the Sun's energy to obtain freshwater through the process of evaporation and condensation. Indirect solar desalination incorporates solar energy collection systems with conventional desalination systems such as multi-stage flash distillation, multiple effect evaporation, freeze separation or reverse osmosis to produce freshwater. Direct solar desalination Solar stills One type of solar desalination unit is the solar still, which is also similar to a condensation trap. A solar still is a simple way of distilling water, using the heat of the Sun to drive evaporation from humid soil, and ambient air to cool a condenser film. Two basic types of solar stills are box and pit stills. In a pit still, impure water is contained outside the collector, where it is evaporated by sunlight shining through clear plastic. The pure water vapor condenses on the cool inside plastic surface and drips down from the weighted low point, where it is collected and removed. The box type is more sophisticated. The basic principles of solar water distillation are simple, yet effective, as distillation replicates the way nature makes rain. The sun's energy heats water to the point of evaporation. As the water evaporates, water vapor rises, condensing on the glass surface for collection. This process removes impurities, such as salts and heavy metals, and eliminates microbiological organisms. The end result is water cleaner than the purest rainwater. Indirect solar desalination Indirect solar desalination systems comprise two sub-systems: a solar collection system and a desalination system. The solar collection system is used either to collect heat using solar collectors and supply it via a heat exchanger to a thermal desalination process, or to convert electromagnetic solar radiation to electricity using photovoltaic cells to power an electricity-driven desalination process. Solar-powered reverse osmosis Osmosis is a natural phenomenon in which water passes through a membrane from a lower to a higher concentration solution. The flow of water can be reversed if a pressure larger than the osmotic pressure is applied on the higher concentration side. In reverse osmosis desalination systems, seawater pressure is raised above the natural osmotic pressure, forcing pure water through membrane pores to the fresh water side. Reverse osmosis (RO) is the most common desalination process in terms of installed capacity due to its superior energy efficiency compared to thermal desalination systems, despite requiring extensive water pre-treatment. Furthermore, part of the consumed mechanical energy can be reclaimed from the concentrated brine effluent with an energy recovery device. Solar-powered RO desalination is common in demonstration plants due to the modularity and scalability of both photovoltaic (PV) and RO systems. A detailed economic analysis and a thorough optimisation strategy of PV powered RO desalination were carried out, with favorable results reported.
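To give a sense of the pressures a reverse osmosis system must overcome, the following sketch estimates the osmotic pressure of seawater with the van 't Hoff relation (π = iMRT). The salinity, dissociation factor, and operating-pressure figures are illustrative assumptions made for this sketch, not values from the text.

```python
R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K
i = 2        # van 't Hoff factor for NaCl (assumed fully dissociated)
M = 600.0    # assumed NaCl concentration of seawater, mol/m^3 (~35 g/L)

pressure = i * M * R * T  # osmotic pressure, Pa
print(f"osmotic pressure = {pressure / 1e5:.0f} bar (approx.)")  # about 30 bar

# Seawater RO plants therefore typically run at applied pressures of
# roughly 55-80 bar, comfortably above the osmotic pressure, which is
# why pumping dominates the energy budget and why the energy recovery
# devices mentioned above pay off.
```

This is why PV-driven RO sizing centres on the high-pressure pump: the electrical load scales with the applied pressure and the feed flow.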
Economic and reliability considerations are the main challenges to improving PV powered RO desalination systems. However, the quickly dropping PV panel costs are making solar-powered desalination ever more feasible. A solar powered desalination unit designed for remote communities has been tested in the Northern Territory of Australia. The "reverse-osmosis solar installation" (ROSI) uses membrane filtration to provide a reliable and clean drinking water stream from sources such as brackish groundwater. Solar energy overcomes the usually high energy operating costs, as well as the greenhouse emissions, of conventional reverse osmosis systems. ROSI can also remove trace contaminants such as arsenic and uranium that may cause certain health problems, and minerals such as calcium carbonate which causes water hardness. Project leader Dr Andrea Schaefer from the University of Wollongong's Faculty of Engineering said ROSI has the potential to bring clean water to remote communities throughout Australia that do not have access to a town water supply and/or the electricity grid. Groundwater (which may contain dissolved salts or other contaminants) or surface water (which may have high turbidity or contain microorganisms) is pumped into a tank with an ultrafiltration membrane, which removes viruses and bacteria. This water is fit for cleaning and bathing. Ten percent of that water undergoes nanofiltration and reverse osmosis in the second stage of purification, which removes salts and trace contaminants, producing drinking water. A photovoltaic solar array tracks the Sun and powers the pumps needed to process the water, using the plentiful sunlight available in remote regions of Australia not served by the power grid. Solar photovoltaic power is considered a viable option to power a reverse osmosis desalination plant. The techno-economics both in standalone mode and in PV-biodiesel hybrid mode for capacities from 0.05 MLD to 300 MLD were examined by researchers at IIT Madras. As a technology demonstrator, a plant of 500 litre/day capacity has been designed, installed and made operational there. Energy storage While the intermittent nature of sunlight and its variable intensity throughout the day makes desalination during nighttime challenging, several energy storage options can be used to permit 24 hour operation. Batteries can store solar energy for use at night. Thermal energy storage systems ensure constant performance at night or on cloudy days, improving overall efficiency. Alternatively, stored gravitational energy can be harnessed to provide energy to a solar-powered reverse osmosis unit during non-sunlight hours. See also Solar desalination Solar still References Solar energy in Australia Appropriate technology Water treatment Solar-powered devices Membrane technology
Solar-powered desalination unit
[ "Chemistry", "Engineering", "Environmental_science" ]
1,242
[ "Separation processes", "Water treatment", "Water pollution", "Membrane technology", "Environmental engineering", "Water technology" ]
11,051,988
https://en.wikipedia.org/wiki/National%20Biodiversity%20Network
The National Biodiversity Network (UK) (NBN) is a collaborative venture set up in 2000 in the United Kingdom committed to making biodiversity information available through various media, including on the internet via the NBN Atlas—the data search website of the NBN. Description It is estimated that up to 60,000 people routinely record biodiversity information in the UK and Ireland. Most of this effort is voluntary and is organised through about 2,000 national societies and recording schemes. The UK government through its agencies also collects biodiversity data and one of the principal elements for the collation and interpretation of this data is the network of Local Environmental Records Centres. In 2012, it was listed among the top 1,000 UK charities by donations raised. NBN Trust The NBN Trust—the organisation facilitating the building of the Network—supports agreed standards for the collection, collation and exchange of biodiversity data and encourages improved access. The present partnership consists of over 200 public and voluntary organisations and individual members. The NBN Atlas currently holds over 300 million species records from over 1000 different datasets (August 2024). Data on the NBN Atlas can be accessed by anyone interested in UK, Northern Ireland and Isle of Man wildlife and can be searched at many different levels, as it allows the viewing of distribution maps and the downloading of data by using a variety of interactive tools. The maps can be customised by date range and can show changes in a species' distribution. The organisation believes that, by providing tools to make wildlife data accessible in a digitised and exchangeable form and by providing easy access to the information people need, wise and informed decisions can be made to ensure the natural environment is protected now and for future generations. Team The National Biodiversity Network Trust employs a team to facilitate and co-ordinate its growth and development and is governed by a Board of Trustees. The NBN Trust is a registered charity. See also Biological recording References External links NBN Atlas (data) View the NBN Trust Strategy 2022 - 2027 Association of Local Environmental Records Centres - for more information on Local Environmental Records Centres National Forum for Biological Recording Biodiversity Ecology organizations Ecological experiments Biodiversity databases
National Biodiversity Network
[ "Biology", "Environmental_science" ]
438
[ "Biodiversity databases", "Environmental science databases", "Biodiversity" ]
11,052,041
https://en.wikipedia.org/wiki/Determination%20of%20equilibrium%20constants
Equilibrium constants are determined in order to quantify chemical equilibria. When an equilibrium constant is expressed as a concentration quotient, it is implied that the activity quotient is constant. For this assumption to be valid, equilibrium constants must be determined in a medium of relatively high ionic strength. Where this is not possible, consideration should be given to possible activity variation. For a general equilibrium $\alpha A + \beta B \rightleftharpoons \sigma S + \tau T$, the equilibrium expression $K = \frac{[S]^\sigma[T]^\tau}{[A]^\alpha[B]^\beta}$ is a function of the concentrations [A], [B] etc. of the chemical species in equilibrium. The equilibrium constant value can be determined if any one of these concentrations can be measured. The general procedure is that the concentration in question is measured for a series of solutions with known analytical concentrations of the reactants. Typically, a titration is performed with one or more reactants in the titration vessel and one or more reactants in the burette. Knowing the analytical concentrations of reactants initially in the reaction vessel and in the burette, all analytical concentrations can be derived as a function of the volume (or mass) of titrant added. The equilibrium constants may be derived by best-fitting of the experimental data with a chemical model of the equilibrium system. Experimental methods There are four main experimental methods. For less commonly used methods, see Rossotti and Rossotti. In all cases the range can be extended by using the competition method. An example of the application of this method can be found in palladium(II) cyanide. Potentiometric measurements A free concentration [A] or activity {A} of a species A is measured by means of an ion selective electrode such as the glass electrode. If the electrode is calibrated using activity standards, it is assumed that the Nernst equation applies in the form $E = E^0 + \frac{RT}{nF}\ln\{A\}$, where $E^0$ is the standard electrode potential. When buffer solutions of known pH are used for calibration, the meter reading will be a pH. At 298 K, 1 pH unit is approximately equal to 59 mV. When the electrode is calibrated with solutions of known concentration, by means of a strong acid–strong base titration, for example, a modified Nernst equation, $E = E^{0\prime} + s\log_{10}[A]$, is assumed, where $s$ is an empirical slope factor. A solution of known hydrogen ion concentration may be prepared by standardization of a strong acid against borax. Constant-boiling hydrochloric acid may also be used as a primary standard for hydrogen ion concentration. Range and limitations The most widely used electrode is the glass electrode, which is selective for the hydrogen ion. It is suitable for all acid–base equilibria. pKa values between about 2 and 11 can be measured directly by potentiometric titration using a glass electrode. This enormous range of stability constant values (ca. 10² to 10¹¹) is possible because of the logarithmic response of the electrode. The limitations arise because the Nernst equation breaks down at very low or very high pH. When a glass electrode is used to obtain the measurements on which the calculated equilibrium constants depend, the precision of the calculated parameters is limited by secondary effects such as variation of liquid junction potentials in the electrode. In practice it is virtually impossible to obtain a precision for log β better than ±0.001. Spectrophotometric measurements Absorbance It is assumed that the Beer–Lambert law applies in the form $A = \ell\sum_i \varepsilon_i c_i$, where $\ell$ is the optical path length, $\varepsilon_i$ is a molar absorbance at unit path length and $c_i$ is a concentration. More than one of the species may contribute to the absorbance.
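The additivity in the Beer–Lambert expression above is easy to state in code; the concentrations and molar absorbances in this Python sketch are hypothetical values chosen for illustration:

```python
# Total absorbance as a sum of species contributions: A = l * sum(eps_i * c_i).
def absorbance(path_length_cm, eps, conc):
    """Beer-Lambert law for a mixture; eps in L/(mol*cm), conc in mol/L."""
    return path_length_cm * sum(e * c for e, c in zip(eps, conc))

# Two absorbing species (hypothetical molar absorbances and concentrations):
print(absorbance(1.0, [1200.0, 350.0], [2.0e-4, 5.0e-4]))  # -> 0.415
```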
In principle absorbance may be measured at one wavelength only, but in present-day practice it is common to record complete spectra. Range and limitations An upper limit on log β of 4 is usually quoted, corresponding to the precision of the measurements, but it also depends on how intense the effect is. Spectra of contributing species should be clearly distinct from each other. Fluorescence (luminescence) intensity It is assumed that the scattered light intensity is a linear function of species' concentrations, $I = \sum_i \varphi_i c_i$, where $\varphi_i$ is a proportionality constant. Range and limitations The magnitude of the constant $\varphi$ may be higher than the value of the molar extinction coefficient, ε, for a species. When this is so, the detection limit for that species will be lower. At high solute concentrations, fluorescence intensity becomes non-linear with respect to concentration due to self-absorption of the scattered radiation. NMR chemical shift measurements Chemical exchange is assumed to be rapid on the NMR time-scale. An individual chemical shift is the mole-fraction-weighted average of the shifts of nuclei in contributing species. Example: the pKa of the hydroxyl group in citric acid has been determined from 13C chemical shift data to be 14.4. Neither potentiometry nor ultraviolet–visible spectroscopy could be used for this determination. Range and limitations Limited precision of chemical shift measurements also puts an upper limit of about 4 on log β. The method is limited to diamagnetic systems. 1H NMR cannot be used with solutions of compounds in 1H2O. Calorimetric measurements Simultaneous measurement of the equilibrium constant K and the reaction enthalpy ΔH for 1:1 adducts is routinely carried out using isothermal titration calorimetry. Extension to more complex systems is limited by the availability of suitable software. Range and limitations Insufficient evidence is currently available. The competition method The competition method may be used when a stability constant value is too large to be determined by a direct method. It was first used by Schwarzenbach in the determination of the stability constants of complexes of EDTA with metal ions. For simplicity, consider the determination of the stability constant $K_{AB}$ of a binary complex, AB, of a reagent A with another reagent B, $K_{AB} = \frac{[AB]}{[A][B]}$, where the [X] represents the concentration, at equilibrium, of a species X in a solution of given composition. A ligand C is chosen which forms a weaker complex, AC, with A, $K_{AC} = \frac{[AC]}{[A][C]}$. The stability constant, $K_{AC}$, is small enough to be determined by a direct method. For example, in the case of EDTA complexes A is a metal ion and C may be a polyamine such as diethylenetriamine. The stability constant, K, for the competition reaction $AC + B \rightleftharpoons AB + C$ can be expressed as $K = \frac{[AB][C]}{[AC][B]}$. It follows that $K_{AB} = K \times K_{AC}$, where K is the stability constant for the competition reaction. Thus, the value of the stability constant $K_{AB}$ may be derived from the experimentally determined values of K and $K_{AC}$. Computational methods It is assumed that the collected experimental data comprise a set of data points. At the $i$th data point, the analytical concentrations of the reactants, $T_A$, $T_B$, etc., are known along with a measured quantity, $y_i$, that depends on one or more of these analytical concentrations.
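Before turning to the general procedure, the mole-fraction-weighted average that underlies the NMR method above can be made concrete; the limiting shifts in this Python sketch are hypothetical values for illustration:

```python
def observed_shift(pH, pKa, delta_HA, delta_A):
    """Fast-exchange NMR: the observed shift is the mole-fraction-weighted
    average of the limiting shifts of the protonated and deprotonated forms."""
    x_HA = 1.0 / (1.0 + 10 ** (pH - pKa))  # mole fraction of HA (Henderson-Hasselbalch)
    return x_HA * delta_HA + (1.0 - x_HA) * delta_A

# Hypothetical limiting 13C shifts (ppm); at pH = pKa the shift is midway.
print(observed_shift(14.4, 14.4, 182.0, 179.0))  # -> 180.5
```

Fitting such a curve to shifts measured as a function of pH is what yields the pKa in the citric acid example above.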
A general computational procedure has four main components: (1) definition of a chemical model of the equilibria; (2) calculation of the concentrations of all the chemical species in each solution; (3) refinement of the equilibrium constants; (4) model selection. The value of the equilibrium constant for the formation of a 1:1 complex, such as a host–guest species, may be calculated with a dedicated spreadsheet application, Bindfit. In this case step 2 can be performed with a non-iterative procedure and the pre-programmed routine Solver can be used for step 3. The chemical model The chemical model consists of a set of chemical species present in solution, both the reactants added to the reaction mixture and the complex species formed from them. Denoting the reactants by A, B..., each complex species is specified by the stoichiometric coefficients that relate the particular combination of reactants forming it: $p\,A + q\,B + \cdots \rightleftharpoons A_pB_q\cdots$, with cumulative association constant $\beta_{pq} = \frac{[A_pB_q\cdots]}{[A]^p[B]^q\cdots}$. When using general-purpose computer programs, it is usual to use cumulative association constants, as shown above. Electrical charges are not shown in general expressions such as this and are often omitted from specific expressions, for simplicity of notation. In fact, electrical charges have no bearing on the equilibrium processes other than there being a requirement for overall electrical neutrality in all systems. With aqueous solutions the concentrations of proton (hydronium ion) and hydroxide ion are constrained by the self-dissociation of water, $H_2O \rightleftharpoons H^+ + OH^-$. With dilute solutions the concentration of water is assumed constant, so the equilibrium expression is written in the form of the ionic product of water, $K_W = [H^+][OH^-]$. When both H+ and OH− must be considered as reactants, one of them is eliminated from the model by specifying that its concentration be derived from the concentration of the other. Usually the concentration of the hydroxide ion is given by $[OH^-] = K_W[H^+]^{-1}$. In this case the equilibrium constant for the formation of hydroxide has the stoichiometric coefficient −1 in regard to the proton and zero for the other reactants. This has important implications for all protonation equilibria in aqueous solution and for hydrolysis constants in particular. It is quite usual to omit from the model those species whose concentrations are considered negligible. For example, it is usually assumed that there is no interaction between the reactants and/or complexes and the electrolyte used to maintain constant ionic strength or the buffer used to maintain constant pH. These assumptions may or may not be justified. Also, it is implicitly assumed that there are no other complex species present. When complexes are wrongly ignored a systematic error is introduced into the calculations. Equilibrium constant values are usually estimated initially by reference to data sources. Speciation calculations A speciation calculation is one in which concentrations of all the species in an equilibrium system are calculated, knowing the analytical concentrations, TA, TB etc. of the reactants A, B etc. This means solving a set of nonlinear equations of mass-balance for the free concentrations [A], [B] etc. When the pH (or equivalent e.m.f., E) is measured, the free concentration of hydrogen ions, [H], is obtained from the measured value as $[H] = 10^{-pH}$ or $[H] = 10^{(E - E^0)/s}$, and only the free concentrations of the other reactants are calculated. The concentrations of the complexes are derived from the free concentrations via the chemical model.
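Given the free concentrations, the chemical model turns cumulative constants into species concentrations by a one-line formula; the (p, q, log β) triples below are hypothetical:

```python
# Chemical model as (p, q, log_beta) triples for species ApBq (hypothetical values).
model = [(1, 1, 3.0), (1, 2, 5.5)]

def complex_concentrations(A_free, B_free, model):
    """[ApBq] = beta_pq * [A]**p * [B]**q for each species in the model."""
    return [(p, q, 10**log_beta * A_free**p * B_free**q)
            for p, q, log_beta in model]

print(complex_concentrations(1e-4, 1e-3, model))
```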
Some authors include the free reactant terms in the sums by declaring identity (unit) constants for which the stoichiometric coefficients are 1 for the reactant concerned and zero for all other reactants. For example, with 2 reagents the mass-balance equations assume the simpler form $T_A = \sum_k p_k\beta_k[A]^{p_k}[B]^{q_k}$ and $T_B = \sum_k q_k\beta_k[A]^{p_k}[B]^{q_k}$. In this manner, all chemical species, including the free reactants, are treated in the same way, having been formed from the combination of reactants that is specified by the stoichiometric coefficients. In a titration system the analytical concentrations of the reactants at each titration point are obtained from the initial conditions, the burette concentrations and volumes. The analytical (total) concentration of a reactant R at the $i$th titration point is given by $T_R = \frac{R_0 + [R]v_i}{v_0 + v_i}$, where $R_0$ is the initial amount of R in the titration vessel, $v_0$ is the initial volume, [R] is the concentration of R in the burette and $v_i$ is the volume added. The burette concentration of a reactant not present in the burette is taken to be zero. In general, solving these nonlinear equations presents a formidable challenge because of the huge range over which the free concentrations may vary. At the beginning, values for the free concentrations must be estimated. Then, these values are refined, usually by means of Newton–Raphson iterations. The logarithms of the free concentrations may be refined rather than the free concentrations themselves. Refinement of the logarithms of the free concentrations has the added advantage of automatically imposing a non-negativity constraint on the free concentrations. Once the free reactant concentrations have been calculated, the concentrations of the complexes are derived from them and the equilibrium constants. Note that the free reactant concentrations can be regarded as implicit parameters in the equilibrium constant refinement process. In that context the values of the free concentrations are constrained by forcing the conditions of mass-balance to apply at all stages of the process. Equilibrium constant refinement The objective of the refinement process is to find equilibrium constant values that give the best fit to the experimental data. This is usually achieved by minimising an objective function, $U$, by the method of non-linear least-squares. First the residuals are defined as $r_i = y_i^{obs} - y_i^{calc}$. Then the most general objective function is given by $U = \sum_i\sum_j r_i W_{ij} r_j$. The matrix of weights, $\mathbf{W}$, should be, ideally, the inverse of the variance-covariance matrix of the observations. It is rare for this to be known. However, when it is, the expectation value of U is one, which means that the data are fitted within experimental error. Most often only the diagonal elements are known, in which case the objective function simplifies to $U = \sum_i W_{ii} r_i^2$, with $W_{ij} = 0$ when $j \ne i$. Unit weights, $W_{ii} = 1$, are often used but, in that case, the expectation value of $U$ is the root mean square of the experimental errors. The minimization may be performed using the Gauss–Newton method. Firstly the objective function is linearised by approximating it as a first-order Taylor series expansion about an initial parameter set, $\mathbf{p}^0$. The increments $\delta p_k$ are added to the corresponding initial parameters such that $U$ is less than $U^0$. At the minimum the derivatives $\partial U/\partial p_k$, which are simply related to the elements of the Jacobian matrix, $J_{ik} = \partial y_i^{calc}/\partial p_k$, where $p_k$ is the $k$th parameter of the refinement, are equal to zero. One or more equilibrium constants may be parameters of the refinement.
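A minimal speciation solver for the two-reagent mass-balance equations above can be condensed into a short NumPy routine. This sketch refines the logarithms of the free concentrations by Newton–Raphson, as described; the model and total concentrations are hypothetical:

```python
import numpy as np

# Species as (p, q, log_beta); identity constants (log_beta = 0) represent free A and B.
model = [(1, 0, 0.0), (0, 1, 0.0), (1, 1, 4.0), (1, 2, 7.0)]
T = np.array([1e-3, 2e-3])  # total concentrations of A and B (hypothetical)

def speciation(T, model, max_iter=100, tol=1e-12):
    P = np.array([(p, q) for p, q, _ in model], dtype=float)   # stoichiometry matrix
    ln_beta = np.array([lb for _, _, lb in model]) * np.log(10)
    ln_c = np.log(T / 2.0)                 # crude initial estimates of ln[A], ln[B]
    for _ in range(max_iter):
        c = np.exp(ln_beta + P @ ln_c)     # [ApBq] = beta * [A]**p * [B]**q
        T_calc = P.T @ c                   # mass-balance sums
        # Jacobian of T_calc with respect to ln[A], ln[B]
        J = np.einsum('ka,kb,k->ab', P, P, c)
        ln_c += np.linalg.solve(J, T - T_calc)
        if np.max(np.abs(T - T_calc)) < tol * T.max():
            break
    return np.exp(ln_c)                    # free [A], [B]

print(speciation(T, model))
```

The equilibrium constants are held fixed inside an inner loop like this; refining them is the outer least-squares problem.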
However, the measured quantities (see above) represented by $y$ are not expressed in terms of the equilibrium constants, but in terms of the species concentrations, which are implicit functions of these parameters. Therefore, the Jacobian elements must be obtained using implicit differentiation. The parameter increments $\delta\mathbf{p}$ are calculated by solving the normal equations, derived from the conditions that $\partial U/\partial p_k = 0$ at the minimum. The increments are added iteratively to the parameters, $\mathbf{p}^{n+1} = \mathbf{p}^n + \delta\mathbf{p}$, where $n$ is an iteration number. The species concentrations and $y^{calc}$ values are recalculated at every data point. The iterations are continued until no significant reduction in $U$ is achieved, that is, until a convergence criterion is satisfied. If, however, the updated parameters do not result in a decrease of the objective function, that is, if divergence occurs, the increment calculation must be modified. The simplest modification is to use a fraction, $f$, of the calculated increment, so-called shift-cutting. In this case, the direction of the shift vector, $\delta\mathbf{p}$, is unchanged. With the more powerful Levenberg–Marquardt algorithm, on the other hand, the shift vector is rotated towards the direction of steepest descent, by modifying the normal equations to $(\mathbf{J}^T\mathbf{W}\mathbf{J} + \lambda\mathbf{I})\,\delta\mathbf{p} = \mathbf{J}^T\mathbf{W}\mathbf{r}$, where $\lambda$ is the Marquardt parameter and $\mathbf{I}$ is an identity matrix. Other methods of handling divergence have been proposed. A particular issue arises with NMR and spectrophotometric data. For the latter, the observed quantity is absorbance, $A$, and the Beer–Lambert law can be written as $A_\lambda = \ell\sum_i \varepsilon_{\lambda,i} c_i$. It can be seen that, assuming that the concentrations, c, are known, the absorbance, $A$, at a given wavelength, $\lambda$, and path length $\ell$, is a linear function of the molar absorptivities, $\varepsilon$. With 1 cm path-length, in matrix notation $\mathbf{A} = \mathbf{C}\boldsymbol{\varepsilon}$. There are two approaches to the calculation of the unknown molar absorptivities. (1) The $\varepsilon$ values are considered parameters of the minimization and the Jacobian is constructed on that basis. However, the $\varepsilon$ values themselves are calculated at each step of the refinement by linear least-squares, $\boldsymbol{\varepsilon} = (\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T\mathbf{A}$, using the refined values of the equilibrium constants to obtain the speciation. The matrix $(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T$ is an example of a pseudo-inverse. Golub and Pereyra showed how the pseudo-inverse can be differentiated so that parameter increments for both molar absorptivities and equilibrium constants can be calculated by solving the normal equations. (2) The Beer–Lambert law is written, one wavelength at a time, as $A_\lambda = \sum_i \varepsilon_{\lambda,i} c_i$. The unknown molar absorbances of all "coloured" species are found by using the non-iterative method of linear least-squares, one wavelength at a time. The calculations are performed once every refinement cycle, using the stability constant values obtained at that refinement cycle to calculate the species' concentration values in the matrix $\mathbf{C}$. Parameter errors and correlation In the region close to the minimum of the objective function, $U$, the system approximates to a linear least-squares system, for which $\mathbf{p} = (\mathbf{J}^T\mathbf{W}\mathbf{J})^{-1}\mathbf{J}^T\mathbf{W}\mathbf{y}$. Therefore, the parameter values are (approximately) linear combinations of the observed data values and the errors on the parameters, $\sigma_p$, can be obtained by error propagation from the observations, $\mathbf{y}$, using the linear formula. Let the variance-covariance matrix for the observations be denoted by $\mathbf{M}_y$ and that of the parameters by $\mathbf{M}_p$. Then, $\mathbf{M}_p = (\mathbf{J}^T\mathbf{W}\mathbf{J})^{-1}\mathbf{J}^T\mathbf{W}\,\mathbf{M}_y\,\mathbf{W}\mathbf{J}(\mathbf{J}^T\mathbf{W}\mathbf{J})^{-1}$. When $\mathbf{W} = \mathbf{M}_y^{-1}$, this simplifies to $\mathbf{M}_p = (\mathbf{J}^T\mathbf{W}\mathbf{J})^{-1}$. In most cases the errors on the observations are un-correlated, so that $\mathbf{M}_y$ is diagonal. If so, each weight should be the reciprocal of the variance of the corresponding observation.
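Those error-propagation formulas are one-liners in NumPy; the Jacobian and observation errors below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.random((20, 3))            # Jacobian: 20 observations, 3 parameters
sigma_y = np.full(20, 0.01)        # assumed observation errors (e.g. 0.01 pH units)

W = np.diag(1.0 / sigma_y**2)      # weights: reciprocals of the variances
M_p = np.linalg.inv(J.T @ W @ J)   # parameter variance-covariance matrix

sd = np.sqrt(np.diag(M_p))         # standard deviations of the parameters
corr = M_p / np.outer(sd, sd)      # parameter correlation matrix
print(sd)
print(corr)                        # off-diagonals are non-zero: parameters correlate
```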
For example, in a potentiometric titration, the weight at a titration point, $w_i$, can be given by $w_i = 1\big/\big(\sigma_E^2 + (\partial E/\partial v)^2\sigma_v^2\big)$, where $\sigma_E$ is the error in electrode potential or pH, $\partial E/\partial v$ is the slope of the titration curve and $\sigma_v$ is the error on added volume. When unit weights are used ($\mathbf{W} = \mathbf{I}$, $w_i = 1$) it is implied that the experimental errors are uncorrelated and all equal: $\mathbf{M}_y = \sigma^2\mathbf{I}$, where $\sigma^2$ is known as the variance of an observation of unit weight, and $\mathbf{I}$ is an identity matrix. In this case $\sigma^2$ is approximated by $\sigma^2 \approx U_{min}/(n_d - n_p)$, where $U_{min}$ is the minimum value of the objective function and $n_d$ and $n_p$ are the number of data and parameters, respectively. In all cases, the variance of the parameter $p_k$ is given by $(\mathbf{M}_p)_{kk}$ and the covariance between parameters $p_k$ and $p_l$ is given by $(\mathbf{M}_p)_{kl}$. Standard deviation is the square root of variance. These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors—which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are always correlated. Derived constants When cumulative constants have been refined it is often useful to derive stepwise constants from them. The general procedure is to write down the defining expressions for all the constants involved and then to equate concentrations. For example, suppose that one wishes to derive the pKa for removing one proton from a tribasic acid, LH3, such as citric acid. The stepwise association constant for formation of LH3 is given by $K = \frac{[LH_3]}{[LH_2][H]}$. Substitute the expressions for the concentrations of LH3, $[LH_3] = \beta_3[L][H]^3$, and LH2, $[LH_2] = \beta_2[L][H]^2$, into this equation, whence $K = \beta_3/\beta_2$, and since $pK_a = \log K$ its value is given by $pK_{a1} = \log\beta_3 - \log\beta_2$. Note the reverse numbering for pK and log β. When calculating the error on the stepwise constant, the fact that the cumulative constants are correlated must be accounted for. By error propagation, $\sigma^2_{\log K} = \sigma^2_{\log\beta_3} + \sigma^2_{\log\beta_2} - 2\operatorname{cov}(\log\beta_3, \log\beta_2)$, and $\sigma^2_{pK_a} = \sigma^2_{\log K}$. Model selection Once a refinement has been completed the results should be checked to verify that the chosen model is acceptable. Generally speaking, a model is acceptable when the data are fitted within experimental error, but there is no single criterion to use to make the judgement. The following should be considered. The objective function When the weights have been correctly derived from estimates of experimental error, the expectation value of $U$ is 1. It is therefore very useful to estimate experimental errors and derive some reasonable weights from them as this is an absolute indicator of the goodness of fit. When unit weights are used, it is implied that all observations have the same variance. $\sigma^2$, estimated as above, is expected to be equal to that variance. Parameter errors One would want the errors on the stability constants to be roughly commensurate with experimental error. For example, with pH titration data, if pH is measured to 2 decimal places, the errors on log β should not be much larger than 0.01. In exploratory work where the nature of the species present is not known in advance, several different chemical models may be tested and compared. There will be models where the uncertainties in the best estimate of an equilibrium constant may be somewhat or even significantly larger than this, especially with those constants governing the formation of comparatively minor species, but the decision as to how large is acceptable remains subjective. The decision process as to whether or not to include comparatively uncertain equilibria in a model, and for the comparison of competing models in general, can be made objective and has been outlined by Hamilton.
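The correlated error propagation for the derived stepwise constant above can be checked numerically; the log β values and covariance matrix in this sketch are hypothetical:

```python
import numpy as np

# Hypothetical refined cumulative constants and their variance-covariance matrix.
log_beta = np.array([11.2, 15.9, 19.0])              # log beta_1, beta_2, beta_3
M_p = np.array([[1.0e-4, 0.8e-4, 0.6e-4],
                [0.8e-4, 2.0e-4, 1.5e-4],
                [0.6e-4, 1.5e-4, 3.0e-4]])

log_K = log_beta[2] - log_beta[1]                    # stepwise log K = log b3 - log b2
var_log_K = M_p[2, 2] + M_p[1, 1] - 2.0 * M_p[1, 2]  # covariance term included
print(log_K, np.sqrt(var_log_K))                     # 3.1 +/- 0.014
```

Ignoring the covariance term would overestimate the error here by roughly a factor of 1.6, which is why the correlation between cumulative constants must be carried through.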
Distribution of residuals At the minimum the system can be approximated to a linear one; the residuals in the case of unit weights are related to the observations by $\mathbf{r} = (\mathbf{I} - \mathbf{J}(\mathbf{J}^T\mathbf{J})^{-1}\mathbf{J}^T)\,\mathbf{y}$. The symmetric, idempotent matrix $\mathbf{J}(\mathbf{J}^T\mathbf{J})^{-1}\mathbf{J}^T$ is known in the statistics literature as the hat matrix, $\mathbf{H}$. Thus, $\mathbf{r} = (\mathbf{I} - \mathbf{H})\mathbf{y}$ and $\mathbf{M}_r = (\mathbf{I} - \mathbf{H})\,\mathbf{M}_y\,(\mathbf{I} - \mathbf{H})$, where $\mathbf{I}$ is an identity matrix and $\mathbf{M}_r$ and $\mathbf{M}_y$ are the variance-covariance matrices of the residuals and observations, respectively. This shows that even though the observations may be uncorrelated, the residuals are always correlated. The diagram at the right shows the result of a refinement of the stability constants of Ni(Gly)+, Ni(Gly)2 and Ni(Gly)3− (where GlyH = glycine). The observed values are shown as blue diamonds and the species concentrations, as a percentage of the total nickel, are superimposed. The residuals are shown in the lower box. The residuals are not distributed as randomly as would be expected. This is due to the variation of liquid junction potentials and other effects at the glass/liquid interfaces. Those effects are very slow compared to the rate at which equilibrium is established. Physical constraints Some physical constraints are usually incorporated in the calculations. For example, all the concentrations of free reactants and species must have positive values and association constants must have positive values. With spectrophotometric data the calculated molar absorptivity (or emissivity) values should all be positive. Most computer programs do not impose this constraint on the calculations. Chemical constraints When determining the stability constants of metal–ligand complexes, it is common practice to fix ligand protonation constants at values that have been determined using data obtained from metal-free solutions. Hydrolysis constants of metal ions are usually fixed at values which were obtained using ligand-free solutions. When determining the stability constants for ternary complexes, MpAqBr, it is common practice to fix the values for the corresponding binary complexes Mp′Aq′ and Mp′′Bq′′ at values which have been determined in separate experiments. Use of such constraints reduces the number of parameters to be determined, but may result in the calculated errors on refined stability constant values being under-estimated. Other models If the model is not acceptable, a variety of other models should be examined to find one that best fits the experimental data, within experimental error. The main difficulty is with the so-called minor species. These are species whose concentration is so low that the effect on the measured quantity is at or below the level of error in the experimental measurement. The constant for a minor species may prove impossible to determine if there is no means to increase the concentration of the species. Thermodynamic principles of host–guest interactions The thermodynamics of the host–guest interaction can be assessed by NMR spectroscopy, UV/visible spectroscopy, and isothermal titration calorimetry. Quantitative analysis of binding constant values provides useful thermodynamic information. An association constant, $K_a$, can be defined by the expression $K_a = \frac{\{HG\}}{\{H\}\{G\}} = \frac{[HG]}{[H][G]} \times \frac{\gamma_{HG}}{\gamma_H\gamma_G}$, where {HG} is the thermodynamic activity of the complex at equilibrium, {H} represents the activity of the host and {G} the activity of the guest. The quantities [HG], [H] and [G] are the corresponding concentrations and $\frac{\gamma_{HG}}{\gamma_H\gamma_G}$ is a quotient of activity coefficients. In practice the equilibrium constant is usually defined in terms of concentrations.
When this definition is used, it is implied that the quotient of activity coefficients has a numerical value of one. It then appears that the equilibrium constant, $K$, has the dimension 1/concentration, but that cannot be true since the standard Gibbs free energy change, $\Delta G^\circ = -RT\ln K$, is proportional to the logarithm of $K$. This apparent paradox is resolved when the dimension of the quotient of activity coefficients is defined to be the reciprocal of the dimension of the quotient of concentrations. The implication is that the quotient is regarded as having a constant value under all relevant experimental conditions. Nevertheless it is common practice to attach a dimension, such as millimole per litre or micromole per litre, to a value of K that has been determined experimentally. A large $K$ value indicates that host and guest molecules interact strongly to form the host–guest complex. Determination of binding constant values and kinetic constants Simple host–guest complexation When the host and guest molecules combine to form a single complex, the equilibrium is represented as $H + G \rightleftharpoons HG$ and the equilibrium constant, K, is defined as $K = \frac{[HG]}{[H][G]}$, where [X] denotes the concentration of a chemical species X (all activity coefficients are assumed to have a numerical value of 1). The mass-balance equations at any data point, $T_H = [H] + [HG]$ and $T_G = [G] + [HG]$, where $T_H$ and $T_G$ represent the total concentrations of host and guest, can be reduced to a single quadratic equation in, say, [G], and so can be solved analytically for any given value of K. The concentrations [H] and [HG] can then be derived. The next step in the calculation is to calculate the value, $y_i^{calc}$, of a quantity corresponding to the observed quantity $y_i^{obs}$. Then, a sum of squares, U, over all data points, np, can be defined as $U = \sum_i (y_i^{obs} - y_i^{calc})^2$, and this can be minimized with respect to the stability constant value, K, and a parameter such as the chemical shift of the species HG (NMR data) or its molar absorbance (UV/vis data). The minimization can be performed in a spreadsheet application such as EXCEL by using the in-built SOLVER utility. This procedure is applicable to 1:1 adducts. General complexation reaction For each equilibrium involving a host, H, and a guest G, the equilibrium constant, $\beta_{pq}$, is defined as $\beta_{pq} = \frac{[H_pG_q]}{[H]^p[G]^q}$. The values of the free concentrations, [H] and [G], are obtained by solving the equations of mass balance with known or estimated values for the stability constants. Then, the concentrations of each complex species may also be calculated as $[H_pG_q] = \beta_{pq}[H]^p[G]^q$. The relationship between a species' concentration and the measured quantity is specific for the measurement technique, as indicated in each section above. Using this relationship, the set of parameters, the stability constant values and values of properties such as molar absorptivity or specified chemical shifts, may be refined by a non-linear least-squares refinement process. For a more detailed exposition of the theory see Determination of equilibrium constants. Some dedicated computer programs are listed at Implementations. Cooperativity In cooperativity, the initial ligand binding affects the host's affinity for subsequent ligands. In positive cooperativity, the first binding event enhances the affinity of the host for another ligand. Examples of positive and negative cooperativity are hemoglobin and aspartate receptor, respectively. The thermodynamic properties of cooperativity have been studied in order to define mathematical parameters that distinguish positive or negative cooperativity. The traditional Gibbs free energy equation states $\Delta G = \Delta H - T\Delta S$. However, to quantify cooperativity in a host–guest system, the binding energy needs to be considered.
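The 1:1 calculation described above comes down to a single quadratic; a minimal Python sketch, with hypothetical K and total concentrations:

```python
import math

def free_guest(K, T_H, T_G):
    """Free guest [G] for 1:1 binding H + G <=> HG.
    Eliminating [H] and [HG] from the mass balances gives
    K*G**2 + (K*(T_H - T_G) + 1)*G - T_G = 0; return the positive root."""
    b = K * (T_H - T_G) + 1.0
    return (-b + math.sqrt(b * b + 4.0 * K * T_G)) / (2.0 * K)

K, T_H, T_G = 1.0e4, 1.0e-3, 5.0e-4   # hypothetical values
G = free_guest(K, T_H, T_G)
HG = T_G - G                          # complex from the guest mass balance
H = T_H - HG                          # free host
print(G, H, HG)
```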
The schematic on the right shows the binding of A, binding of B, positive cooperative binding of A–B, and lastly, negative cooperative binding of A–B. Therefore, an alternate form of the Gibbs free energy equation would be $\Delta\Delta G = \Delta G^{A\text{-}B} - (\Delta G^A + \Delta G^B)$, where: $\Delta G^A$ = free energy of binding A; $\Delta G^B$ = free energy of binding B; $\Delta G^{A\text{-}B}$ = free energy of binding for A and B tethered; $\Delta G^A + \Delta G^B$ = sum of the free energies of binding. It is considered that if $\Delta G^{A\text{-}B}$ is more than the sum of $\Delta G^A$ and $\Delta G^B$, it is positively cooperative. If it is less, then it is negatively cooperative. Host–guest chemistry is not limited to receptor–ligand interactions. It is also demonstrated in ion-pairing systems. Such interactions are studied in an aqueous medium utilizing synthetic organometallic hosts and organic guest molecules. For example, a poly-cationic receptor containing copper (the host) is coordinated with molecules such as tetracarboxylates, tricarballylate, aspartate, and acetate (the guests). This study illustrates that entropy rather than enthalpy determines the binding energy of the system, leading to negative cooperativity. The large change in entropy originates from the displacement of solvent molecules surrounding the ligand and the receptor. When multiple acetates bind to the receptor, it releases more water molecules to the environment than a tetracarboxylate. This led to a decrease in free energy, implying that the system is negatively cooperative. In a similar study, utilizing guanidinium and Cu(II) and polycarboxylate guests, it is demonstrated that positive cooperativity is largely determined by enthalpy. In addition to thermodynamic studies, host–guest chemistry also has biological applications. Implementations Some simple systems are amenable to spreadsheet calculations. A large number of general-purpose computer programs for equilibrium constant calculation have been published. See for a bibliography. The most frequently used programs are: potentiometric data: Hyperquad, BEST, PSEQUAD, ReactLab pH PRO; spectrophotometric data: HypSpec, SQUAD, Specfit, ReactLab EQUILIBRIA; NMR data: HypNMR, EQNMR; calorimetric data: HypΔH, Affinimeter. Commercial isothermal titration calorimeters are usually supplied with software with which an equilibrium constant and standard formation enthalpy for the formation of a 1:1 adduct can be obtained. Some software for handling more complex equilibria may also be supplied. References Equilibrium chemistry Analytical chemistry
Determination of equilibrium constants
[ "Chemistry" ]
6,026
[ "Equilibrium chemistry", "nan" ]
11,052,418
https://en.wikipedia.org/wiki/Tropical%20peat
Tropical peat is a type of histosol that is found in tropical latitudes, including South East Asia, Africa, and Central and South America. Tropical peat mostly consists of dead organic matter from trees instead of the Sphagnum moss commonly found in temperate peat. These soils usually contain a high organic matter content, exceeding 75%, and have a low dry bulk density. Areas of tropical peat are found mostly in South America (about 46% by area) although they are also found in Africa, Central America, Asia and elsewhere around the tropics. Tropical peatlands are significant carbon sinks, storing large amounts of carbon, and their destruction can have a significant impact on the amount of atmospheric carbon dioxide. Tropical peatlands are vulnerable to destabilisation through human- and climate-induced changes. Estimates of the area (and hence volume) of tropical peatlands vary. Although tropical peatlands only cover about 0.25% of the Earth's land surface, they contain 50,000–70,000 million tonnes of carbon (about 3% of global soil carbon). In addition, tropical peatlands support diverse ecosystems and are home to a number of endangered species including the orangutan. The native peat swamp forests contain a number of valuable timber-producing trees plus a range of other products of value to local communities, such as bark, resins and latex. Land-use changes and fire, mainly associated with plantation development and logging (deforestation and drainage), are reducing this carbon store and contributing to greenhouse gas (GHG) emissions. The problems that result from development of tropical peatlands stem mainly from a lack of understanding of the complexities of this ecosystem and the fragility of the relationship between peat and forest. Once the forest is removed and the peat is drained, the surface peat oxidises and loses stored carbon rapidly to the atmosphere (as carbon dioxide). This results in progressive loss of the peat surface, leading to local flooding and, due to the large areas involved, global climate change. Failure to account for such emissions results in underestimates of the rate of increase in atmospheric GHGs and the extent of human-induced climate change. See also Peat Peat swamp forest References External links BBC - Borneo healing plants threatened BBC - Asian peat fires add to warming Wise Use of Tropical Peatlands: Focus on Southeast Asia CARBOPEAT Project International Peatland Society Pedology Types of soil Wetlands
Tropical peat
[ "Environmental_science" ]
495
[ "Hydrology", "Wetlands" ]
11,052,939
https://en.wikipedia.org/wiki/SPRESI%20database
The SPRESI data collection is one of the largest databases for organic chemistry worldwide. The database covers the scientific literature from 1974 to 2014, focusing on organic synthesis. It contains information on 5.8 million chemical structures and 4.6 million chemical reactions abstracted from 700,000 references. History Since 1974 the data collection has been jointly built by VINITI (All-Russian Institute of Scientific and Technical Information of the Russian Academy of Sciences, based in Moscow) and ZIC (Zentrale Informationsverarbeitung Chemie, based in East Berlin, up to 1989), and the data are now maintained by the VINITI Institute. Since 1990 InfoChem GmbH, part of DeepMatter Group, based in Munich, Germany, has been the distributor of this data collection and developed the database SPRESIweb and the app SPRESImobile. Database Content The SPRESI database contains information on organic substances, including coverage of reactions, structures and properties. Over 32 million records of factual data, such as physical properties (boiling/melting points, refractive indexes, etc.), reaction conditions (catalysts, yields, etc.) and keywords have also been abstracted. Links to the literature in which the substances are described are also given. Access The SPRESI data collection can be accessed online via the web-application SPRESIweb, developed and distributed by InfoChem. Alternatively the complete set or subsets of the database can be acquired as raw data in SDF/RDF chemical file format. References (SPRESIweb) Chemical databases
SPRESI database
[ "Chemistry" ]
326
[ "Chemical databases" ]
11,053,456
https://en.wikipedia.org/wiki/Dictation%20machine
A dictation machine is a sound recording device most commonly used to record speech for playback or to be typed into print. It includes digital voice recorders and tape recorders. The name "Dictaphone" is a trademark of the company of the same name, but it has also become a common term for all dictation machines, as a genericized trademark. History Alexander Graham Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work at Bell's Volta Laboratory in Washington, D.C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax. Thomas A. Edison had invented the phonograph in 1877, but the fame bestowed on him for this invention — sometimes called his most original — was not due to its efficiency. Recording with his tinfoil phonograph was too difficult to be practical, as the tinfoil tore easily, even when the stylus was properly adjusted. Although Edison had hit upon the secret of sound recording, immediately after his discovery he did not improve it, allegedly because of an agreement to spend the next five years developing the New York City electric light and power systems. By 1881 the Volta associates had success in improving an Edison tinfoil machine to some extent. Wax was put in the grooves of the heavy iron cylinder, and no tinfoil was used. The basic distinction between Edison's first phonograph patent and the Bell and [Charles Sumner] Tainter patent of 1886 was the method of recording. Edison's method was to indent the sound waves on a piece of tinfoil, while Bell and Tainter's invention called for cutting, or 'engraving', the sound waves into a wax record with a sharp recording stylus. Among the later improvements by the Volta Associates, the Graphophone used a cutting stylus to create lateral zig-zag grooves of uniform depth in the wax-coated cardboard cylinders, rather than the up-down vertically-cut grooves of Edison's contemporary phonograph machine designs. Notably, Bell and Tainter developed wax-coated cardboard cylinders for their record cylinders, instead of Edison's cast iron cylinder covered with a removable film of tinfoil (the actual recording medium), which was prone to damage during installation or removal. Tainter received a separate patent for a tube assembly machine to automatically produce the coiled cardboard tubes, which served as the foundation for the wax cylinder records. Besides being far easier to handle, the wax recording medium also allowed for lengthier recordings and created superior playback quality. Additionally, the Graphophones initially deployed foot treadles to rotate the recordings, then wind-up clockwork drive mechanisms, and finally migrated to electric motors, instead of the manual crank that was used on Edison's phonograph. The numerous improvements allowed for a sound quality that was significantly better than that of Edison's machine. Shortly after Thomas Edison invented the phonograph, the first device for recording sound, in 1877, he thought that the main use for the new device would be for recording speech in business settings. (Given the low audio frequency of the earliest versions of the phonograph, recording music may not have seemed to be a major application.) Some early phonographs were indeed used this way, but this did not become common until the production of reusable wax cylinders in the late 1880s.
The differentiation of office dictation devices from other early phonographs, which commonly had attachments for making one's own recordings, was gradual. The machine marketed by the Edison Records company was trademarked as the "Ediphone". Following the invention of the audion tube in 1906, electric microphones gradually replaced the purely acoustical recording methods of earlier dictaphones by the late 1930s. In 1945, the SoundScriber, Gray Audograph and Edison Voicewriter, which cut grooves into plastic discs, were introduced, and two years later Dictaphone replaced wax cylinders with their Dictabelt technology, which cut a mechanical groove into a plastic belt instead of into a wax cylinder. This was later replaced by magnetic tape recording. While reel-to-reel tape was used for dictation, the inconvenience of threading tape spools led to development of more convenient formats, notably the Compact Cassette, Mini-Cassette, and Microcassette. Digital dictation Digital dictation became possible in the 1990s, as falling computer memory prices made possible pocket-sized digital voice recorders that stored sound on computer memory chips without moving parts. Many early 21st-century digital cameras and smartphones have this capability built in. In the 1990s, improvements in voice recognition technology began to allow computers to transcribe recorded audio dictation into text form, a task that previously required human secretaries or transcribers. The files generated with digital recorders vary in size, depending on the manufacturer and the format the user chooses. The most common file formats that digital recorders generate have one of the extensions WAV, WMA and MP3. Many dictation machines record in the DSS and DS2 formats. Dictation audio can be recorded in various audio file formats. Most digital dictation systems use a lossy form of audio compression based on modelling of the vocal tract to minimize hard disk space and optimize network utilization as files are transferred between users. (Note that WAV is not an audio encoding format but a file format, and has little or no bearing on the encoding rate (kbit/s), size or audio quality of the resulting file.) Digital dictation offers several advantages over traditional cassette tape based dictation: The user can instantly rewind or fast forward to any point within the dictation file to review or edit. The random access ability of digital audio allows inserting audio at any point without overwriting the following text. Dictation produces a file which can be transferred electronically, e.g. via WAN, LAN, USB, e-mail, telephony, FTP, etc. Large dictation files can be shared with multiple typists. Sound may be CD quality and can improve transcription accuracy and speed. Digital dictation provides the ability to report on the volume or type of dictation and transcription outstanding or completed within an organization. Despite the advances in technology, analog media are still widely used in dictation recording due to their flexibility, permanence, and robustness. In some cases, where sound quality is paramount and transcription is unnecessary (e.g. for broadcasting a theatre play), recording techniques closer to high-fidelity music recording are more appropriate. Methods Portable recorder Portable, hand-held digital recorders are the modern replacement for cassette-based dictation machines. Digital portables allow transfer of recordings by docking or plugging into a computer. Digital recorders eliminate the need for cassette tapes.
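The point made above, that the encoding rate rather than the WAV container determines file size, can be illustrated with a quick calculation; the bit rates in this Python sketch are nominal figures assumed for illustration:

```python
# Approximate size of a dictation recording from its encoding rate:
# the encoding (kbit/s), not the container, determines the size.
def file_size_mb(bitrate_kbps, minutes):
    return bitrate_kbps * 1000 * minutes * 60 / 8 / 1e6  # bits -> megabytes

for fmt, kbps in [("uncompressed PCM (16-bit, 44.1 kHz)", 1411),
                  ("MP3", 128),
                  ("DSS (roughly)", 13)]:
    print(f"10 min as {fmt}: ~{file_size_mb(kbps, 10):.1f} MB")
```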
Professional digital hand-held recorders are available with slide-switch, push-button, fingerprint-locking, and barcode-scanning options. Computer Another common way to record digital dictation is with a computer dictation microphone. There are several different types of computer dictation microphones available, but each one has similar features and operation. Olympus Direct Rec, Philips SpeechMike, and Dictaphone Powermic are all digital computer dictation microphones that also feature push-button control for operating dictation or speech recognition software. The dictation microphone operates through a USB connection on the computer it is used with. Call-in dictation system Call-in dictation systems allow users to record their dictations over the phone. With call-in dictation systems, the author dials a phone number, enters a PIN and starts dictating. Touch-tone controls allow for start, pause, playback, and sending of the dictation audio file. Call-in dictation systems usually feature a pod that can be plugged into a phone line. The pod can then be plugged into a computer to store dictation audio recordings in compatible transcription or management software. Mobile phone Currently there are several digital dictation applications available for mobile phones. With mobile dictation apps, one can record, edit, and send dictation files over networks. Wireless transfer of dictation files decreases turnaround time. Mobile dictation applications allow users to stay connected to dictation workflows through a network, such as the Internet. Software There are two types of digital dictation software: Standalone digital sound recording software: basic software whereby the audio is recorded as a simple file. Most digital sound recording applications are designed for individuals or a very small number of users, as they do not offer a network-efficient way of transferring the audio files other than email; they also do not encrypt or password-protect the audio file. Digital dictation workflow software: advanced software for commercial organizations where the audio is still played back by a typist but the audio file can be securely and efficiently transferred. The workflow element of these advanced systems also allows users to share audio files instantly, create virtual teams, outsource transcription securely, and set up confidential send options or 'ethical walls'. Digital dictation workflow software is normally Active Directory integrated and can be used in conjunction with document, practice or case management systems. Typical businesses using workflow software are law firms, healthcare organizations, accountancies or surveying firms. Recordings can be made over the telephone, on a computer or via a hand-held dictation device that is "docked" to a computer. Transcription Digital dictation is different from speech recognition, where audio is analyzed by a computer using speech algorithms in an attempt to transcribe the document. With digital dictation the process of converting digital audio to text may be done using digital transcription software, typically controlled by a foot switch which allows the transcriber to PLAY, STOP, REWIND, and BACKSPACE. Nevertheless, there are digital transcription kits that allow integration with speech recognition software. This gives the typist the option either to type a document manually or to send a document to be converted to text by software such as Dragon NaturallySpeaking.
Common dictation formats Phonograph cylinder (1890s) Gray Audograph (1945) SoundScriber (1945) Edison Voicewriter (late 1940s) Dictabelt (1947) Compact Cassette (1963) Mini-Cassette (1967) Microcassette (1969) Digital dictation (1990s) See also Digital pen IBM dictation machines Speech recognition Volta Laboratory and Bureau References Audiovisual introductions in 1886 Sound recording technology Audio storage Office equipment Alexander Graham Bell Transcription (linguistics)
Dictation machine
[ "Technology" ]
2,190
[ "Recording devices", "Sound recording technology" ]
11,053,630
https://en.wikipedia.org/wiki/Municipal%20annexation%20in%20the%20United%20States
Municipal annexation is a process by which a municipality acquires new territory, most commonly by expanding its boundaries into an adjacent unincorporated area. This has been a common response of cities to urbanization in neighboring areas. It may be done because the neighboring urban areas seek municipal services or because a city seeks control over its suburbs or neighboring unincorporated areas. In the United States, all local governments are considered "creatures of the state" according to Dillon's Rule, which resulted from the work of John Forrest Dillon on the law of municipal corporations. Dillon's Rule implies, among other things, that the boundaries of any jurisdiction falling under state government can be modified by state government action. For this reason, examples of municipal annexation are distinct from annexations involving sovereign states. Shoestring annexation A "shoestring annexation" is a term used for an annexation by a city, town or other municipality in which it acquires new territory that is contiguous to the existing territory but is only connected to it by a thin strip of land. It is sometimes called a "flagpole annexation" because the territory resembles a flagpole, in which the connection is the "pole" and the annexed territory the "flag". Reasons In some states, municipalities are prohibited from annexing land not directly connected to their existing territory. A shoestring or flagpole annexation allows the municipality to do so. Such annexations are sometimes used when a municipality seeks to acquire unincorporated developed land, such as a newly built subdivision separated from it by undeveloped open space. They may also be used when a municipality desires to annex a commercial or industrial area without taking over intervening residential areas, so as to collect tax revenues from the businesses or industry without having to provide services (such as electricity and garbage collection) to residents. Such uses of the technique are often criticized and derided as a form of gerrymandering, and have in fact been used for the purpose of manipulating vote distribution among election precincts and districts. A related strategy is called strip annexation, which involves annexing a narrow strip that encloses a large block of unincorporated land. Strip annexation was widely used by the municipalities of the Phoenix metropolitan area during the 1970s to preemptively gain control of large areas of land before other municipalities could do so, without having to annex more than a thin strip surrounding a large so-called county island. The strip protected the county island from being annexed by other municipalities, thus giving the strip-annexing municipality the ability to slowly annex portions of the county island over time. One such annexation by Chandler in 1974 spurred nearby Gilbert to create the largest county island to date by annexing a strip no more than 200 feet wide that enclosed 51 square miles of unincorporated Maricopa County. The annexation was challenged in court and, although found legal, eventually led to legislation in 1980 outlawing strip annexation. Some municipalities rushed to annex before the law took effect, such as Scottsdale, which annexed a 10-foot-wide strip enclosing an 86-square-mile county island.
Examples Port of Los Angeles The Port of Los Angeles, together with the San Pedro, Wilmington and Harbor City neighborhoods of Los Angeles, is connected to the main part of the city by what is called locally the "Shoestring Strip" between Figueroa Street and Vermont Avenue and between Western and Normandie avenues to the south. O'Hare Airport O'Hare Airport is municipally connected to the city of Chicago via a narrow strip of land, approximately 200 feet wide, along Foster Avenue from the Des Plaines River to the airport. This land was annexed in the 1950s to assure the airport was contiguous with the city to keep it under city control. The strip is bounded on the north by Rosemont and the south by Schiller Park. Allston-Brighton The Boston neighborhoods of Allston and Brighton were part of the independent town of Brighton before being annexed by Boston. They are presently connected to the remainder of the city by the Boston University campus. At the time of the annexation, Brookline extended to the Charles River and separated Boston and Brighton. As a result, a shoestring annexation was obtained by Boston from Brookline when Brighton joined Boston. This was made necessary by Brookline's refusal to join Boston a year before Brighton's annexation. Santa Barbara Municipal Airport Santa Barbara Municipal Airport is connected to the city of Santa Barbara, despite being located in the center of the city of Goleta, through a 300-foot-wide strip of land mostly located under the Pacific Ocean. South San Diego South San Diego, located next to the Mexico–United States border, is physically separated from the rest of San Diego by the cities of National City and Chula Vista. A narrow strip of land at the bottom of San Diego Bay connects these southern neighborhoods with the rest of the city. West Grove West Grove, the western portion of the city of Garden Grove in Orange County, California, is separated from the rest of the city by the city of Stanton. The two portions of the city are connected by a narrow strip of land along Garden Grove Boulevard from Beach Boulevard to Hoover Street. See also Municipal annexation Municipal deannexation in the United States Amalgamation (politics) Enclave and exclave Boroughitis Paper township References Notes Further reading Staff. MRSC PUBLICATIONS › Annexation Handbook Publication, Municipal Research & Services Center of Washington. Annexation Political geography Metropolitan areas of the United States Urban planning Local government in the United States
Municipal annexation in the United States
[ "Engineering" ]
1,091
[ "Urban planning", "Architecture" ]
11,053,695
https://en.wikipedia.org/wiki/EMBO%20Reports
EMBO Reports is a peer-reviewed scientific journal covering research related to biology at a molecular level. It publishes primary research papers, reviews, essays and opinion pieces. It also features commentaries on the social impact of advances in the life sciences and the converse influence of society on science. A sister journal to The EMBO Journal, EMBO Reports was established in 2000 and was published on behalf of the European Molecular Biology Organization by Nature Publishing Group from 2003. It is now published by EMBO Press. External links Molecular biology Molecular and cellular biology journals Monthly journals English-language journals Academic journals established in 2000 European Molecular Biology Organization academic journals
EMBO Reports
[ "Chemistry", "Biology" ]
127
[ "Biochemistry", "Molecular and cellular biology journals", "Molecular biology" ]
11,053,791
https://en.wikipedia.org/wiki/Mach%20reflection
Mach reflection is a supersonic fluid dynamics effect, named for Ernst Mach, and is a shock wave reflection pattern involving three shocks. Introduction Mach reflection can exist in steady, pseudo-steady and unsteady flows. When a shock wave moving with constant velocity propagates over a solid wedge, the flow generated by the shock impinges on the wedge, thus generating a second reflected shock which ensures that the velocity of the flow is parallel to the wedge surface. Viewed in the frame of the reflection point, this flow is locally steady, and the flow is referred to as pseudosteady. When the angle between the wedge and the primary shock is sufficiently large, a single reflected shock is not able to turn the flow to a direction parallel to the wall and a transition to Mach reflection occurs. In a steady flow situation, if a wedge is placed into a steady supersonic flow in such a way that its oblique attached shock impinges on a flat wall parallel to the free stream, the shock turns the flow toward the wall and a reflected shock is required to turn the flow back to a direction parallel to the wall. When the shock angle exceeds a certain value, the deflection achievable by a single reflected shock is insufficient to turn the flow back to a direction parallel to the wall and transition to Mach reflection is observed. Mach reflection consists of three shocks, namely the incident shock, the reflected shock and a Mach stem, as well as a slip plane. The point where the three shocks meet is known as the 'triple point' in two dimensions, or a shock-shock in three dimensions. Types of Mach reflection The only type of Mach reflection possible in steady flow is direct-Mach reflection, in which the Mach stem is convex away from the oncoming flow, and the slip plane slopes towards the reflecting surface. Recent results have identified a further configuration of shock waves in steady flow: one with a negative angle of reflection. Numerical simulations demonstrate two forms of this configuration, depending on the transition path: one with a kinked reflected shock wave, and an unstable double-Mach configuration. In pseudo-steady flows, the triple point moves away from the reflecting surface and the reflection is a direct-Mach reflection. In unsteady flows, it is also possible that the triple point remains stationary relative to the reflecting surface (stationary-Mach reflection), or moves toward the reflecting surface (inverse-Mach reflection). In inverse Mach reflection, the Mach stem is convex toward the oncoming flow, and the slip plane curves away from the reflecting surface. Each one of these configurations can assume one of the following three possibilities: single-Mach reflection, transitional-Mach reflection and double-Mach reflection. See also Gas dynamics Shock wave Shock polar is a graphical tool to determine whether Mach reflection occurs. References External links The discovery of the Mach reflection effect and its demonstration in an auditorium Google Scholar search Fluid dynamics
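The transition discussed above, where a single reflected shock can no longer supply the required flow deflection, can be explored with the classical theta-beta-M oblique-shock relation; a minimal Python sketch for a perfect gas with gamma = 1.4:

```python
import math

def deflection_angle(M1, beta, gamma=1.4):
    """Flow deflection angle theta (radians) produced by an oblique shock
    of wave angle beta in a stream of Mach number M1 (theta-beta-M relation)."""
    num = (2.0 / math.tan(beta)) * (M1**2 * math.sin(beta)**2 - 1.0)
    den = M1**2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(num / den)

# Maximum deflection a single attached shock can supply at M1 = 3 (coarse scan):
M1 = 3.0
theta_max = max(deflection_angle(M1, math.radians(b)) for b in range(20, 90))
print(math.degrees(theta_max))  # ~34 degrees; larger required turnings cannot be
                                # met by one reflected shock, suggesting Mach reflection
```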
Mach reflection
[ "Chemistry", "Engineering" ]
591
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
11,053,817
https://en.wikipedia.org/wiki/Ontology%20learning
Ontology learning (ontology extraction, ontology augmentation, ontology generation, or ontology acquisition) is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Typically, the process starts by extracting terms and concepts or noun phrases from plain text using linguistic processors such as part-of-speech tagging and phrase chunking. Then statistical or symbolic techniques are used to extract relation signatures, often based on pattern-based or definition-based hypernym extraction techniques. Procedure Ontology learning (OL) is used to (semi-)automatically extract whole ontologies from natural language text. The process is usually split into the following eight tasks, which are not all necessarily applied in every ontology learning system. Domain terminology extraction During the domain terminology extraction step, domain-specific terms are extracted, which are used in the following step (concept discovery) to derive concepts. Relevant terms can be determined, e.g., by calculation of the TF/IDF values or by application of the C-value / NC-value method. The resulting list of terms has to be filtered by a domain expert. In a subsequent step, similar to coreference resolution in information extraction, the OL system identifies synonyms, since they share the same meaning and therefore correspond to the same concept. The most common methods for this are clustering and the application of statistical similarity measures. Concept discovery In the concept discovery step, terms are grouped into meaning-bearing units, which correspond to an abstraction of the world and therefore to concepts. The grouped terms are the domain-specific terms and their synonyms, which were identified in the domain terminology extraction step. Concept hierarchy derivation In the concept hierarchy derivation step, the OL system tries to arrange the extracted concepts in a taxonomic structure. This is mostly achieved with unsupervised hierarchical clustering methods. Because the result of such methods is often noisy, a supervision step, e.g., user evaluation, is added. A further method for deriving a concept hierarchy uses lexico-syntactic patterns that indicate a sub- or supersumption relationship. Patterns like “X, that is a Y” or “X is a Y” indicate that X is a subclass of Y. Such patterns can be matched efficiently, but they often occur too infrequently to extract enough sub- or supersumption relationships. Instead, bootstrapping methods are developed, which learn these patterns automatically and therefore ensure broader coverage. Learning of non-taxonomic relations In the learning of non-taxonomic relations step, relationships are extracted that do not express any sub- or supersumption. Such relationships are, e.g., works-for or located-in. There are two common approaches to solve this subtask. The first is based upon the extraction of anonymous associations, which are named appropriately in a second step. The second approach extracts verbs, which indicate a relationship between entities, represented by the surrounding words. The results of both approaches need to be evaluated by an ontologist to ensure accuracy.
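As a minimal illustration of the pattern-based hierarchy derivation described above, the following Python sketch matches two classic Hearst-style patterns ("X is a Y" and "Ys such as X") to propose subclass candidates from raw text. The patterns, helper names, and sample sentences are invented for illustration; they are not drawn from any particular OL system, and real systems use far richer pattern sets plus bootstrapping.

```python
import re

# Two classic Hearst-style patterns for hypernym (is-a) extraction.
PATTERNS = [
    # "X is a Y" -> X is a subclass of Y
    (re.compile(r"\b([A-Za-z]+) is an? ([A-Za-z]+)"), lambda m: (m.group(1), m.group(2))),
    # "Ys such as X" -> X is a subclass of Y
    (re.compile(r"\b([A-Za-z]+)s,? such as ([A-Za-z]+)"), lambda m: (m.group(2), m.group(1))),
]

def extract_subclass_pairs(text):
    """Return (subclass, superclass) candidate pairs found in the text.
    The output is noisy by design and would be filtered by a domain expert."""
    pairs = []
    for regex, order in PATTERNS:
        for match in regex.finditer(text):
            pairs.append(tuple(word.lower() for word in order(match)))
    return pairs

text = "A trout is a fish. Enzymes such as urease speed up reactions."
print(extract_subclass_pairs(text))
# [('trout', 'fish'), ('urease', 'enzyme')]
```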
Rule discovery During rule discovery, axioms (formal descriptions of concepts) are generated for the extracted concepts. This can be achieved, e.g., by analyzing the syntactic structure of a natural language definition and applying transformation rules to the resulting dependency tree. The result of this process is a list of axioms, which are then combined into a concept description. This output is then evaluated by an ontologist. Ontology population At this step, the ontology is augmented with instances of concepts and properties. For the augmentation with instances of concepts, methods based on the matching of lexico-syntactic patterns are used. Instances of properties are added through the application of bootstrapping methods, which collect relation tuples. Concept hierarchy extension In this step, the OL system tries to extend the taxonomic structure of an existing ontology with further concepts. This can be performed in a supervised manner with a trained classifier or in an unsupervised manner via the application of similarity measures. Frame and Event detection During frame/event detection, the OL system tries to extract complex relationships from text, e.g., who departed from where to what place and when. Approaches range from applying SVM with kernel methods to semantic role labeling (SRL) to deep semantic parsing techniques. Tools Dog4Dag (Dresden Ontology Generator for Directed Acyclic Graphs) is an ontology generation plugin for Protégé 4.1 and OBO-Edit 2.1. It allows for term generation, sibling generation, definition generation, and relationship induction. Integrated into Protégé 4.1 and OBO-Edit 2.1, DOG4DAG allows ontology extension for all common ontology formats (e.g., OWL and OBO). Limited largely to EBI and Bio Portal lookup service extensions. See also Automatic taxonomy construction Computational linguistics Domain ontology Information extraction Natural language understanding Semantic Web Text mining Bibliography P. Buitelaar, P. Cimiano (Eds.). Ontology Learning and Population: Bridging the Gap between Text and Knowledge, Frontiers in Artificial Intelligence and Applications, IOS Press, 2008. P. Buitelaar, P. Cimiano, and B. Magnini (Eds.). Ontology Learning from Text: Methods, Evaluation and Applications, Frontiers in Artificial Intelligence and Applications, IOS Press, 2005. Wong, W. (2009), "Learning Lightweight Ontologies from Text across Different Domains using the Web as Background Knowledge". Doctor of Philosophy thesis, University of Western Australia. Wong, W., Liu, W. & Bennamoun, M. (2012), "Ontology Learning from Text: A Look back and into the Future". ACM Computing Surveys, Volume 44, Issue 4, Pages 20:1-20:36. Thomas Wächter, Götz Fabian, Michael Schroeder: DOG4DAG: semi-automated ontology generation in OBO-Edit and Protégé. SWAT4LS London, 2011. References Natural language processing Ontology learning (computer science)
Ontology learning
[ "Technology" ]
1,327
[ "Natural language processing", "Natural language and computing" ]
11,054,533
https://en.wikipedia.org/wiki/Archard%20equation
The Archard wear equation is a simple model used to describe sliding wear and is based on the theory of asperity contact. The Archard equation was developed much later than Reye's hypothesis (sometimes also known as the energy dissipative hypothesis), though both came to the same physical conclusion: that the volume of the removed debris due to wear is proportional to the work done by friction forces. Theodor Reye's model became popular in Europe and it is still taught in university courses of applied mechanics. Until recently, Reye's theory of 1860 has, however, been totally ignored in English and American literature, where subsequent works by Ragnar Holm and John Frederick Archard are usually cited. In 1960, Mikhail Khrushchov and Mikhail Alekseevich Babichev published a similar model as well. In modern literature, the relation is therefore also known as the Reye–Archard–Khrushchov wear law. In 2022, the steady-state Archard wear equation was extended into the running-in regime using the bearing ratio curve representing the initial surface topography. Equation The equation reads Q = KWL/H, where: Q is the total volume of wear debris produced K is a dimensionless constant W is the total normal load L is the sliding distance H is the hardness of the softest contacting surfaces Note that WL is proportional to the work done by the friction forces as described by Reye's hypothesis. Also, K is obtained from experimental results and depends on several parameters. Among them are surface quality, chemical affinity between the material of two surfaces, surface hardness process, heat transfer between two surfaces and others. Derivation The equation can be derived by first examining the behavior of a single asperity. The local load δW, supported by an asperity assumed to have a circular cross-section with a radius a, is: δW = Pπa², where P is the yield pressure for the asperity, assumed to be deforming plastically. P will be close to the indentation hardness, H, of the asperity. If the volume of wear debris, δV, for a particular asperity is a hemisphere sheared off from the asperity, it follows that: δV = (2/3)πa³. This fragment is formed by the material having slid a distance 2a. Hence δQ, the wear volume of material produced from this asperity per unit distance moved, is: δQ = δV/2a = πa²/3 ≈ δW/3P ≈ δW/3H, making the approximation that P ≈ H. However, not all asperities will have had material removed when sliding a distance 2a. Therefore, the total wear debris produced per unit distance moved will be lower than the ratio of W to 3H. This is accounted for by the addition of a dimensionless constant K, which also incorporates the factor 3 above. These operations produce the Archard equation as given above. Archard interpreted the K factor as a probability of forming wear debris from asperity encounters. Typically for 'mild' wear, K ≈ 10⁻⁸, whereas for 'severe' wear, K ≈ 10⁻². Recently, it has been shown that there exists a critical length scale that controls the wear debris formation at the asperity level. This length scale defines a critical junction size, where bigger junctions produce debris, while smaller ones deform plastically. See also References Further reading https://patents.google.com/patent/DE102005060024A1/de (Mentions the term "Reye-Hypothese") Surfaces Materials science Equations Tribology
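As a quick numerical illustration of the equation above, the sketch below evaluates Q = KWL/H for one hypothetical steel-on-steel contact; the parameter values are made-up placeholders, not measured data.

```python
def archard_wear_volume(k, load_n, distance_m, hardness_pa):
    """Total wear volume Q = K*W*L/H (in m^3) from the Archard equation.

    k           -- dimensionless wear coefficient (empirical)
    load_n      -- total normal load W in newtons
    distance_m  -- sliding distance L in metres
    hardness_pa -- hardness H of the softer surface in pascals
    """
    return k * load_n * distance_m / hardness_pa

# Hypothetical example: mild wear (K ~ 1e-8), 100 N load, 1 km of sliding,
# hardness ~2 GPa (roughly representative of a hardened steel).
q = archard_wear_volume(1e-8, 100.0, 1000.0, 2e9)
print(f"{q:.2e} m^3")  # 5.00e-13 m^3, i.e. 5.0e-4 mm^3
```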
Archard equation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
681
[ "Tribology", "Applied and interdisciplinary physics", "Mathematical objects", "Materials science", "Surface science", "Equations", "nan", "Mechanical engineering" ]
11,054,590
https://en.wikipedia.org/wiki/Atlas%20Machine%20and%20Supply%2C%20Inc.
Atlas Machine and Supply, Inc., founded in 1907, is one of the largest heavy-capacity industrial machinery engineering, manufacturing and remanufacturing centers in the United States. The company also performs field machining repairs onsite for industries located throughout the United States and around the world. Atlas also designs and repairs industrial compressors and related equipment. History In 1907, the company was officially formed by the Swiss immigrant Gimmel family who immigrated to the United States roughly 30 years prior. Manufacturing began with elevators at a small shop in downtown Louisville. In the decades that followed, Atlas gradually expanded its machine shop and industrial engineering capabilities to include the repair, design, and remanufacturing of heavy industrial machinery. Atlas also became a compressor distributor in the 1940s and continues to provide compressed air solutions today. Other expanded capabilities included the launch of a Field Machining division in 2011 that mirrors its machine shop capabilities for customers with on-site repair needs. Atlas further expanded its onsite expertise with the addition of a new Laser Tracking metrology service in 2012 which allows for completing large-scale machining jobs at the customer's job site. Locations As of August 2023, Atlas had approximately 260 employees. The company's headquarters is located in Louisville, Kentucky and consists of a machine shop totaling more than 100,000 square feet. Additional facilities are located in Cincinnati, Ohio; Columbus, Ohio; Evansville, Indiana; Indianapolis, Indiana; Lexington, Kentucky; Harned, Kentucky; and Nashville, Tennessee. Leadership In 2014 Richard Gimmel III became the fourth generation of his family to lead the company as president. His father and former president, Richard Gimmel Jr., was chairman until his retirement in 2019. In 2023, the company reorganized its operational structure to better manage the company's accelerated growth. The changes included Atlas President Richard Gimmel lll assuming a new leadership role as Chief Executive Officer. References External links Company website Industrial supply companies Manufacturing companies based in Louisville, Kentucky Gas compressors 1907 establishments in Kentucky Manufacturing companies established in 1907 American companies established in 1907
Atlas Machine and Supply, Inc.
[ "Chemistry" ]
421
[ "Gas compressors", "Turbomachinery" ]
11,054,648
https://en.wikipedia.org/wiki/West%20Baden%20Springs%20Hotel
The West Baden Springs Hotel, formerly the West Baden Inn, is part of the French Lick Resort and is a national historic landmark hotel in West Baden Springs, Orange County, Indiana. It has a dome over its atrium. Prior to the completion of the Coliseum in Charlotte, North Carolina, in 1955, the dome was the largest free-spanning dome in the United States. From 1902 to 1913 it was the largest dome in the world. Listed on the National Register of Historic Places in 1974, the hotel became a National Historic Landmark in 1987. It is a National Historic Civil Engineering Landmark and one of the hotels in the National Trust for Historic Preservation's Historic Hotels of America program. Early history Roaming bear and herds of deer and buffalo once visited the salt lick near the present-day site of the West Baden Springs Hotel as they traveled along the Buffalo Trace in southern Indiana. Native Americans also used the area as hunting grounds. Following the arrival of French traders and settlers in the vicinity, the site became known as French Lick. When George Rogers Clark passed through southern Indiana in 1778, he camped less than a mile from the salt licks and mineral springs in Orange County that became known as French Lick and West Baden Springs. The presence of salt deposits enticed the state government to consider mining large quantities of salt for early pioneers to use in preserving meat, but when it was determined that the saline content was insufficient to support large-scale salt mining, the property was offered for sale around 1832. William A. Bowles, a local physician, purchased the land that included the mineral springs and built a small inn. Constructed around 1840–45, it developed into the French Lick Springs Hotel, a popular health resort. Bowles served as a commissioned officer in the U.S. Army during the Mexican–American War. Before his departure for military service in 1846, Bowles signed a five-year lease with John A. Lane, a physician/patent medicine salesman, who agreed to enlarge and improve the facility at French Lick. The business deal would allow Bowles to enjoy an improved facility with the potential for increased business at the lease's end, and Lane would make a potential profit from his investment. Part of Bowles' land included the mineral springs known as Mile Lick, north of French Lick. Much of the property surrounding the Mile Lick springs was marshy, subject to yearly flooding, and unsuitable to farming, but Lane envisioned it as a business area that would surpass French Lick. In 1851 he purchased from Bowles. Lane assembled a sawmill, erected a bridge to traverse Lick Creek, and built a hotel larger than the French Lick Springs, beginning the competition between the two Orange County sites. West Baden Lane opened a hotel around 1852 near the settlement of Mile Lick and named it the Mile Lick Inn. In 1855, when the community was renamed West Baden in reference to Wiesbaden (or Baden-Baden), a spa town in Germany that was known for its mineral springs, Lane changed the hotel's name to the West Baden Inn. By the 1860s it was known as the West Baden Springs Hotel. The property was managed by Lane and Mr. and Mrs. Hugh Wilkins, with the assistance of W. F. Osborn, until 1883, when it was sold to a group of investors who made additional improvements. 
In 1887 the Monon Railroad built an extension of its line to transport guests to the hotels and springs at French Lick and West Baden, where the two sites competed to offer the best service, entertainment, food and mineral water. By the late 1800s, guests arrived from across the country on seven separate railroads for relaxation and the alleged curative powers of the mineral water. The area's mineral water and baths were alleged to cure more than fifty ailments. Sidewalks led from the hotel to seven numbered springs, all of which were covered by open wooden shelters. West Baden marketed water from its onsite springs under the Sprudel Water brand name. (A gnome named Sprudel was also a part of its logo.) French Lick sold Pluto Water using a red devil as a part of its trademark. In 1888 an investment group called Sinclair and Rhodes, which included Lee Wiley Sinclair from Salem, Indiana, and E. B. Rhodes, acquired the West Baden hotel and of land for $23,000. Although the hotel was destroyed by fire in 1891, it was rebuilt, and over the next several years Sinclair bought his partners' interest in the hotel and became its sole owner. Sinclair turned the facility into an elaborate resort. Advertised as the Carlsbad of America, the cosmopolitan resort included a casino, an opera house, and a covered, two-deck, one-third-mile oval bicycle and pony track. A lighted baseball diamond in the center of the track was used as the spring training grounds for several major league teams including the Cincinnati Reds, Chicago Cubs, and Pittsburgh Pirates, among others. The hotel caught fire on June 14, 1901, but no guests were injured. Sinclair invited Thomas Taggart, the new owner of the French Lick Springs Hotel, to buy the West Baden property, but Taggart rebuffed the offer, boasting that he would expand his facility to handle more guests. Sinclair, who was outraged, decided to build a new, circular-shaped hotel that would be fireproof and have a large dome. His goal was to open the new hotel within a year. Most building professionals rejected the idea of a dome, but Harrison Albright an architect from West Virginia, designed the building. Oliver Westcott, a bridge engineer, designed the dome's trusses. To complete the structure before the first anniversary of the fire, a 500-man crew worked six days a week in ten-hour shifts for 270 days at a total cost of $414,000. Eighth Wonder The new hotel opened on September 15, 1902, to rave reviews. Its formal dedication took place on April 16, 1903, with Indiana governor Winfield T. Durbin and U.S. Senator Charles W. Fairbanks delivering speeches at the event. Advertisements called it the Eighth Wonder of the World. Hotel amenities included a gambling casino and live theater performance every night, as well as opera, concerts, movies, bowling, and billiards. Palm trees grew in the huge atrium, where birds had free range and guests relaxed on overstuffed furniture grouped in clusters under the dome. The massive fireplace in the atrium could accommodate logs as long as . Outdoors, guests had their choice of a natatorium, two golf courses, horseback riding, baseball, several hiking trails, or bicycling on a covered, double-decked oval track. (At , the track was the largest in the country.) To cater to their well-heeled clientele, the hotel's facilities also included a bank and a stock brokerage. A trolley transported guests from the hotel's front door to nearby French Lick. 
Some early advertisements claimed the hotel had more than 700 rooms, but most sources report the total was around 500. The main building contained six floors. The ground floor held the lobby, hotel management offices, the dining area, shops and meeting rooms; saunas and mineral baths were located on the top floor; guest rooms, built in two concentric circles around the atrium, were located on the second through fifth floors. Rooms on the inner ring offered a view of the atrium, while forty rooms on floors four and six had balconies overlooking the atrium. The hotel rooms were small by modern standards. Most had one or two twin beds and lacked a private bathroom. Notable guests Over the years the West Baden hotel attracted many notable guests. Beginning in the late 1880s, when southern Indiana became a favorite destination of the wealthy, the famous, infamous, and near-famous came to relax, play golf, gamble, enjoy fine dining, and be entertained. As Chris Bundy, author of West Baden Springs: Legacy of Dreams, explained, "These hotels were the Disney World of their time. In those days, it was assumed that if you could afford to come to America [for vacation], you would go to French Lick. It was that well-known overseas." Paul Dresser composed Indiana's state song "On the Banks of the Wabash, Far Away" at the hotel. Boxers John L. Sullivan and James J. Corbett trained there. Diamond Jim Brady and Al Capone and his bodyguards were frequent guests. Politicians who visited the hotel included Chicago's mayor, "Big Bill" Thompson, and New York's governor, Al Smith. General John J. Pershing, writer George Ade, and entertainer Eva Tanguay were also guests. Professional baseball teams that included the Chicago Cubs, Cincinnati Reds, Philadelphia Phillies, Pittsburgh Pirates, St. Louis Browns and St. Louis Cardinals held spring training in the region. Renovation Minor renovations to the property began in 1913, but a fire on February 11, 1917, destroyed the hotel's bottling plant, opera house, bowling alley and hospital, forcing their replacement. Several years prior to the fires, hotel owner Lee Sinclair's health began to fail and his daughter, Lillian, and her husband, Charles Rexford, took over the hotel's operation. When Sinclair died in 1916, management of the hotel was left in the hands of the Rexfords. Charles Rexford opposed any major enhancements, but Lillian ignored his wishes and began a major restoration of the hotel in a Greco-Roman architectural style. Between 1917 and 1919, Italian artisans installed a mosaic terrazzo tile floor composed of two million one-inch squares of marble in the atrium. The atrium fireplace was refaced with glazed ceramic tiles from the Rookwood Pottery Company. Marble wainscotting was added to the atrium's ground level walls, while the brick support columns were wrapped with canvas and painted to resemble marble. Outside, an elaborate veranda was constructed. Wooden shelters at the springs were replaced with brick structures, and a sunken garden was created with a fountain featuring an angel. Edward Ballard, who financed the hotel's improvements, began his career as a bowling alley worker in the hotel, but made a fortune by operating an illegal gambling business in the area. Ballard also owned several nationally recognized touring circuses, including the Hagenbeck-Wallace Circus. Between 1918 and 1919, while the hotel was being refurbished, it served as a U.S. Army hospital for wounded soldiers returning from World War I. 
Lillian Rexford and Lieutenant Charles Cooper fell in love during his stay at the hotel-hospital. The Rexfords divorced in 1922, and Lillian sold the property to Ballard for $1 million in 1923. Half the money repaid the debt owed to Ballard; Lillian kept the remainder. Business at the hotel boomed in the 1920s; however, as ownership of automobiles increased and tourism destinations in Florida and the western United States became more popular, West Baden declined despite Ballard's efforts to attract more guests with trade shows and conventions. The Wall Street Crash of 1929 began a downward spiral for the hotel. As word of the plummeting stock market spread, people congregated in the brokerage firm's offices at the hotel to confirm the news. Within hours the guests began to depart. Ballard kept the facility open for more than two years, but few people stayed in luxury hotels during the Great Depression. Ballard finally closed the hotel in June 1932. In 1934 he donated the $7 million resort to the Society of Jesus (Jesuits). Schools Jesuit seminary Beginning in 1934 the Jesuits began renovating the property to convert it into an austere seminary named West Baden College, an affiliate of Loyola University Chicago, and most of the hotel's luxurious fixtures, furnishings, and decorations were removed. The lobby was converted into a chapel with the addition of French doors and stained-glass windows. The former hotel's four Moorish towers were removed from the exterior after they fell into disrepair. Truckloads of stone were dumped into the mineral spring pools, then capped with concrete and turned into shrines for the saints. The seminary operated for thirty years, but was closed following the 1963–64 school year due to low enrollment and escalating maintenance costs. The Jesuits sold the property in 1966 and returned to the Chicago area. During their time at West Baden the Jesuits established a cemetery for the seminary's priests that received thirty-nine interments. When the Jesuits sold the facility, they retained ownership of the cemetery land, which the Catholic church in French Lick agreed to maintain. Northwood Institute On November 2, 1966, the Jesuits sold the property to Macauley and Helen Dow Whiting, who donated it to Northwood Institute, a private, coeducational college founded in Midland, Michigan. The former hotel/Jesuit seminary was operated as a satellite campus of Northwood's business management school from 1968 to 1983. By its third year at West Baden Springs, the school's enrollment exceeded 400 students. Basketball legend Larry Bird, who was born in West Baden, held basketball clinics and staged games in the atrium. He briefly attended Northwood, after leaving Indiana University, before resuming his studies and collegiate basketball career at Indiana State University. After the school closed, H. Eugene MacDonald, a former Springs Valley resident, purchased the property in October 1983. MacDonald, who had owned other hotels, wanted to operate the property as a hotel, but lacked the financial resources for the restoration work. He executed a sale-and-leaseback deal with Marlin Properties, a Los Angeles historical renovation developer, for $1.5 million, but a $250,000 payment from Marlin was returned for nonsufficient funds in 1985. Before MacDonald could begin foreclosure proceedings, Marlin declared bankruptcy and the hotel's ownership was tied up in litigation for nearly a decade. 
Preservation The Jesuits and Northwood's owners maintained the building's structure, leaving it in reasonably good shape when MacDonald purchased it in 1983. The property was declared a National Historic Landmark in 1987, but Marlin failed to preserve the building while it was in bankruptcy. Visitors continued to tour the structure until 1989, when it was declared unsafe, and closed. During the winter of 1991, ice built up on the roof and in drainpipes, leading to the partial collapse of an exterior wall. In 1992 the National Trust for Historic Preservation listed the hotel as one of America's most endangered places and the Historic Landmarks Foundation of Indiana matched an anonymous $70,000 donation to pay for work to stabilize the main structure. Tie rods were installed, the roof was patched, drainage was improved on roof parapets, and the structure around the partially collapsed wall was secured. HLFI also created promotional materials to help find a buyer and promoted the establishment of a local zoning and redevelopment commission. Minnesota Investment Partners purchased the property in May 1994 for $500,000 from the bankruptcy receiver. Grand Casinos, Inc., an MIP investor, provided the funding and held an option on the hotel, but was unsuccessful in its efforts to pass "Boat on a Moat" legislation in 1995 to extend riverboat gambling to a proposed man-made lake adjacent to the hotel. When Grand Casinos walked away from their option, MIP tried to sell the property for $800,000, but a year passed with no interest. In July 1996 MIP accepted a purchase offer of $250,000 from HLFI West Baden, Inc., a new affiliate of HLFI, using funds provided by an anonymous donor. Bill Cook, a billionaire entrepreneur, and his wife, Gayle, from Bloomington, Indiana, have been involved with several historic preservation projects. The Cook Group initiated efforts to stabilize the hotel's structural integrity and begin exterior restoration during the summer of 1996. The thirty-month first phase of the project was completed in early 1999 at a cost of $30 million—two-and-a-half times their initial commitment. In addition to the exteriors of the hotel and outbuildings, the garden was recreated, and the interior atrium, lobby, dining room and adjoining rooms were also completely restored. Over the next five years, the Cook Group spent another $5 million for maintenance. The reconstruction project was featured in West Baden Springs: Save of the Century (1999), a documentary produced by Eugene Brancolini for WTIU Public Television. It chronicled the rise, demise, and restoration of the hotel. Using historical documents, photos, and archival footage, Brancolini's documentary explained how the property regained and even surpassed its former luxury. Casino resort HLFI West Baden unsuccessfully marketed the property nationally for more than five years before realizing that casino gaming would be the key to their success. HLFI joined the Cook Group, Boykin Lodging (owner of the French Lick Springs Hotel), and Orange County citizens to lobby the Indiana legislature to allow casino gambling in the area. The coalition members spent so much time in Indianapolis lobbying for their cause that they became known as "The Orange Shirts", in reference to the color of their T-shirts bearing the slogan, "Save French Lick and West Baden Springs". Legislation was finally approved in 2003 and the required local referendum easily passed. 
The Trump Organization was initially granted the gambling license by the Indiana Gaming Commission, but Trump's subsequent bankruptcy caused the selection process to begin again. The Cook family decided to form a new company, Blue Sky, LLC, and submitted its application, before purchasing the French Lick Springs Hotel from Boykin Lodging. Blue Sky was awarded the gambling license during the summer of 2005 and stepped up the planning and permitting process for the casino. Construction of the French Lick Resort Casino and renovation of the French Lick Springs Hotel occurred simultaneously in the fall of 2005. Restoration In the spring of 2006, HLFI West Baden deeded the West Baden Springs Hotel to the Cook Group for a token amount in appreciation for the $35 million already invested. Restoration of the hotel resumed in the summer of 2006. The French Lick Springs Hotel and French Lick Resort Casino opened together on November 3, 2006. A gala event on June 23, 2007, marked the reopening of the West Baden Springs Hotel, seventy-five years after it closed. The West Baden hotel's reconfigured space contained 243 rooms and suites, fewer than half of the total in the original structure. The hotel's natatorium was rebuilt using historic photographs as a guide. The complete restoration of the West Baden Springs Hotel cost almost $100 million. Indiana Landmarks holds a perpetual preservation easement on the West Baden Springs Hotel that requires prior approval to make any changes to the hotel's exterior or grounds, even if ownership changes. Recognition The West Baden Springs Hotel was listed on the National Register of Historic Places in 1974 and named a National Historic Landmark in 1987. In 2008 Condé Nast magazine ranked the hotel twenty-first on its list of the "Top 75 Mainland U.S. Resorts." In 2009 the American Automobile Association recognized the hotel as one of the top ten historic hotels in the United States, and awarded it a four-diamond rating. A Zagat Survey in 2009 included the hotel on its list of "Top U.S. Hotels, Resorts & Spas." The National Trust for Historic Preservation has included the hotel in its Historic Hotels of America program. The American Society of Civil Engineers designated the hotel as a National Historic Civil Engineering Landmark. In popular culture In the 1900s and 1910s the African-American employees of the West Baden Springs Hotel played on an early Negro league baseball team called the West Baden Sprudels. They played their rivals, the French Lick Plutos of the nearby French Lick Springs Hotel. The hotel is the setting for Michael Koryta's thriller, So Cold the River (2010), as well as its 2021 movie adaptation. Gallery Notes References * Reprint of History of Lawrence, Orange and Washington Counties (1884). (subscription needed) Further reading External links West Baden Springs Hotel at the Historic Landmarks Foundation of Indiana West Baden Springs Hotel National Historic Landmark Listing Photos of West Baden Springs Hotel Hotel buildings on the National Register of Historic Places in Indiana Railway hotels in the United States Resorts in Indiana Buildings and structures in Orange County, Indiana Historic American Engineering Record in Indiana Hotel buildings completed in 1902 National Register of Historic Places in Orange County, Indiana National Historic Landmarks in Indiana Domes Historic Civil Engineering Landmarks Tourist attractions in Orange County, Indiana 1901 establishments in Indiana Historic Hotels of America
West Baden Springs Hotel
[ "Engineering" ]
4,149
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
11,054,720
https://en.wikipedia.org/wiki/Sudanese%20goat%20marriage%20incident
In 2006, a South Sudanese man named Charles Tombe was forced to "marry" a goat with which he was caught engaging in sexual activity, in the Hai Malakal suburb of Juba, at the time part of Sudan. The owner of the goat subdued the perpetrator and asked village elders to consider the matter. One elder noted that he and the other elders found the perpetrator, tied up by the owner, at the door of the goat shed. The goat's owner reported that, "They said I should not take him to the police, but rather let him pay a dowry for my goat because he used it as his wife." The perpetrator was thus ordered to "marry" the goat, pay the cost of the goat and pay a dowry of SD 15,000 (equating to US$50 in 2006, the GDP per capita was US$1,522 for 2008), with half of the dowry up front. The goat apparently acquired the name "Rose" during the elders' deliberations as part of a joke. On 3 May 2007, it was reported that the goat had died, having choked on a plastic bag. The goat was survived by a four-month-old male kid. In November 2013, the South Sudan Law Society called for a review of all South Sudan's laws to abolish bizarre or cruel practices under customary law, such as "a man being forced to marry a goat called 'Rose' after deflowering her." Press attention The story, first published on 24 February 2006 on the BBC website, attracted media attention and was republished on numerous newspapers, blogs and other websites. Even a year after publication, the story was consistently among the BBC's 10 most emailed articles, with many visitors to the BBC news site passing the tale on to friends. The story received over 100,000 hits on five successive days long after its original publication, and was read by millions of people. The BBC, astonished at this popularity, wondered if there was a campaign to keep the tale at the top of its rankings; however, an investigation by its senior software engineer, Gareth Owen, determined that the demand was genuine. The BBC honoured the goat with a mock obituary when it died in 2007. The death was also reported in many other news outlets, including The Times and Fox News. See also Human–animal marriage References 2007 animal deaths Human–animal interaction Zoophilia Rose Forced marriage
Sudanese goat marriage incident
[ "Biology" ]
500
[ "Human–animal interaction", "Animals", "Humans and other species" ]
11,054,805
https://en.wikipedia.org/wiki/Flyback%20diode
A flyback diode is any diode connected across an inductor used to eliminate flyback, which is the sudden voltage spike seen across an inductive load when its supply current is suddenly reduced or interrupted. It is used in circuits in which inductive loads are controlled by switches, and in switching power supplies and inverters. Flyback circuits have been used since 1930 and were refined starting in 1950 for use in television receivers. The word flyback comes from the horizontal movement of the electron beam in a cathode ray tube, because the beam flew back to begin the next horizontal line. This diode is known by many other names, such as snubber diode, commutating diode, freewheeling diode, flywheel diode, suppressor diode, clamp diode, or catch diode. Operation Fig. 1 shows an inductor connected to a battery - a constant voltage source. The resistor represents the small static resistance of the inductor's wire windings. When the switch is closed, the voltage from the battery is applied to the inductor, causing current from the battery's positive terminal to flow down through the inductor and resistor. The increase in current causes a back EMF (voltage) across the inductor due to Faraday's law of induction which opposes the change in current. Since the voltage across the inductor is limited to the battery's voltage of 24 volts, the rate of increase of the current is limited to an initial value of dI/dt = V_B/L, where V_B is the battery voltage and L the inductance, so the current through the inductor increases slowly as energy from the battery is stored in the inductor's magnetic field. As the current rises, more voltage is dropped across the resistor and less across the inductor, until the current reaches a steady value of I = V_B/R, where R is the circuit resistance, with all the battery voltage across the resistance and none across the inductance. However, the current drops rapidly when the switch is opened in Fig. 2. The inductor resists the drop in current by developing a very large induced voltage of polarity in the opposite direction of the battery, positive at the lower end of the inductor and negative at the upper end. This voltage pulse, sometimes called the inductive "kick", which can be much larger than the battery voltage, appears across the switch contacts. It causes electrons to jump the air gap between the contacts, causing a momentary electric arc to develop across the contacts as the switch is opened. The arc continues until the energy stored in the inductor's magnetic field is dissipated as heat in the arc. The arc can damage the switch contacts, causing pitting and burning, eventually destroying them. If a transistor is used to switch the current, such as in switching power supplies, the high reverse voltage can destroy the transistor. To prevent the inductive voltage pulse on turnoff, a diode is connected across the inductor, as shown in Fig. 3. The diode doesn't conduct current while the switch is closed because it is reverse-biased by the battery voltage, so it doesn't interfere with the normal operation of the circuit. However, when the switch is opened, the induced voltage across the inductor of opposite polarity forward biases the diode, and it conducts current, limiting the voltage across the inductor and thus preventing the arc from forming at the switch. The inductor and diode momentarily form a loop or circuit powered by the stored energy in the inductor. This circuit supplies a current path to the inductor to replace the current from the battery, so the inductor current does not drop abruptly and does not develop a high voltage.
The voltage across the inductor is limited to the forward voltage of the diode, around 0.7 - 1.5V. This "freewheeling" or "flyback" current through the diode and inductor decreases slowly to zero as the magnetic energy in the inductor is dissipated as heat in the series resistance of the windings. This may take a few milliseconds in a small inductor. These images show the voltage spike and its elimination through the use of a flyback diode (1N4007). In this case, the inductor is a solenoid connected to a 24V DC power supply. Each waveform was taken using a digital oscilloscope set to trigger when the voltage across the inductor dipped below zero. Note the different scaling: left image 50V/division, right image 1V/division. In Figure 1, the voltage as measured across the switch, bounces/spikes to around -300 V. In Figure 2, a flyback diode was added in antiparallel with the solenoid. Instead of spiking to -300 V, the flyback diode only allows approximately -1.4 V of potential to be built up (-1.4 V is a combination of the forward bias of the 1N4007 diode (1.1 V) and the foot of wiring separating the diode and the solenoid). The waveform in Figure 2 is also smoother than the waveform in Figure 1, perhaps due to arcing at the switch for Figure 1. In both cases, the total time for the solenoid to discharge is a few milliseconds, though the lower voltage drop across the diode will slow relay dropout. Design When used with a DC coil relay, a flyback diode can cause delayed drop-out of the contacts when power is removed, due to the continued circulation of current in the relay coil and diode. When rapid opening of the contacts is important, a resistor or reverse-biased Zener diode can be placed in series with the diode to help dissipate the coil energy faster, at the expense of higher voltage at the switch. Schottky diodes are preferred in flyback diode applications for switching power converters because they have the lowest forward drop (~0.2 V rather than >0.7 V for low currents) and are able to quickly respond to reverse bias (when the inductor is being re-energized). They, therefore, dissipate less energy while transferring energy from the inductor to a capacitor. Induction at the opening of a contact According to Faraday's law of induction, if the current through an inductance changes, this inductance induces a voltage, so the current will flow as long as there is energy in the magnetic field. If the current can only flow through the air, the voltage is so high that the air conducts. That is why in mechanically switched circuits, the near-instantaneous dissipation which occurs without a flyback diode is often observed as an arc across the opening mechanical contacts. Energy is dissipated in this arc primarily as intense heat, which causes undesirable premature erosion of the contacts. Another way to dissipate energy is through electromagnetic radiation. Similarly, for non-mechanical solid-state switching (i.e., a transistor), large voltage drops across an unactivated solid-state switch can destroy the component in question (either instantaneously or through accelerated wear and tear). Some energy is also lost from the system as a whole and from the arc as a broad spectrum of electromagnetic radiation, in the form of radio waves and light. These radio waves can cause undesirable clicks and pops on nearby radio receivers. 
To minimise the antenna-like radiation of this electromagnetic energy from wires connected to the inductor, the flyback diode should be connected as physically close to the inductor as practicable. This approach also minimises those parts of the circuit that are subject to an unwanted high-voltage — a good engineering practice. Derivation The voltage at an inductor is, by the law of electromagnetic induction and the definition of inductance: v_L = L di/dt. If there is no flyback diode but only something with great resistance (such as the air between two metal contacts), say R_air, we will approximate it as: v_L = −i R_air. If we open the switch and ignore the battery voltage and the small coil resistance, we get: L di/dt = −i R_air, or di/i = −(R_air/L) dt, which is a differential equation with the solution: i(t) = i₀ e^(−(R_air/L)t). We observe that the current will decrease faster if the resistance is high, such as with air. Now if we open the switch with the diode in place, we only need to consider the inductance L, the coil resistance R and the diode. For i > 0, we can assume a constant diode forward drop: v_D = V_D, so: L di/dt = −iR − V_D, which is: di/(iR + V_D) = −dt/L, whose (first order differential equation) solution is: i(t) = (i₀ + V_D/R) e^(−(R/L)t) − V_D/R. We can calculate the time it needs to switch off by determining for which t it is i(t) = 0, which gives t = (L/R) ln(1 + i₀R/V_D). If i₀R = V_D, then t = (L/R) ln 2 ≈ 0.69 L/R. Applications Flyback diodes are commonly used when semiconductor devices switch inductive loads off: in relay drivers, H-bridge motor drivers, and so on. A switched-mode power supply also exploits this effect, but the energy is not dissipated to heat and is instead used to pump a packet of additional charge into a capacitor, in order to supply power to a load. When the inductive load is a relay, the flyback diode can noticeably delay the release of the relay by keeping the coil current flowing longer. A resistor in series with the diode will make the circulating current decay faster at the drawback of an increased reverse voltage. A zener diode in series but with reverse polarity with regard to the flyback diode has the same properties, albeit with a fixed reverse voltage increase. Both the transistor voltages and the resistor or zener diode power ratings should be checked in this case. See also 1N400x general-purpose diodes 1N4148 signal diode 1N58xx Schottky diodes Lenz's law References Further reading External links Relay Technical Notes - American Zettler Relay Application Notes - TE Connectivity Relay RC Circuit - Evox Rifa Application Circuits of Miniature Signal Relays - NEC/Tokin Diode Turn-on/off Time and Relay Snubbing - Clifton Laboratories "diode for relay coil spikes and motor shutoff spikes?" - sci.electronics.design Flyback Switch Mode Regulator Calculator - All About Circuits Analog circuits Diodes
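To make the derivation concrete, here is a small Python sketch that evaluates the freewheeling current decay and the turn-off time t = (L/R) ln(1 + i₀R/V_D) under the idealised constant-drop diode model above. The component values are invented for illustration.

```python
import math

def freewheel_current(t, i0, inductance, resistance, v_diode):
    """Coil current i(t) after the switch opens, with the flyback diode
    modelled as a constant forward drop v_diode (idealised model)."""
    tau = inductance / resistance
    return (i0 + v_diode / resistance) * math.exp(-t / tau) - v_diode / resistance

def turn_off_time(i0, inductance, resistance, v_diode):
    """Time for the freewheeling current to decay to zero."""
    return (inductance / resistance) * math.log(1.0 + i0 * resistance / v_diode)

# Hypothetical relay coil: 100 mH, 120 ohm, 24 V supply -> i0 = 0.2 A,
# silicon diode forward drop ~0.7 V.
i0 = 24.0 / 120.0
print(f"{turn_off_time(i0, 0.1, 120.0, 0.7) * 1e3:.2f} ms")  # ~2.97 ms

# Adding a series resistor or zener raises the effective drop and shortens
# the decay, at the cost of a higher voltage across the switch; here the
# effective drop is raised by 24 V as an example.
print(f"{turn_off_time(i0, 0.1, 120.0, 0.7 + 24.0) * 1e3:.2f} ms")  # ~0.57 ms
```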
Flyback diode
[ "Engineering" ]
2,104
[ "Analog circuits", "Electronic engineering" ]
11,055,023
https://en.wikipedia.org/wiki/Cystatin%20C
Cystatin C or cystatin 3 (formerly gamma trace, post-gamma-globulin, or neuroendocrine basic polypeptide), a protein encoded by the CST3 gene, is mainly used as a biomarker of kidney function. Recently, it has been studied for its role in predicting new-onset or deteriorating cardiovascular disease. It also seems to play a role in brain disorders involving amyloid (a specific type of protein deposition), such as Alzheimer's disease. In humans, all cells with a nucleus (cell core containing the DNA) produce cystatin C as a chain of 120 amino acids. It is found in virtually all tissues and body fluids. It is a potent inhibitor of lysosomal proteinases (enzymes from a special subunit of the cell that break down proteins) and probably one of the most important extracellular inhibitors of cysteine proteases (it prevents the breakdown of proteins outside the cell by a specific type of protein degrading enzymes). Cystatin C belongs to the type 2 cystatin gene family. Role in medicine Kidney function Glomerular filtration rate (GFR), a marker of kidney health, is most accurately measured by injecting compounds such as inulin, radioisotopes such as 51chromium-EDTA, 125I-iothalamate, 99mTc-DTPA or radiocontrast agents such as iohexol, but these techniques are complicated, costly, time-consuming and have potential side-effects. Creatinine is the most widely used biomarker of kidney function. It is inaccurate at detecting mild renal impairment, and levels can vary with muscle mass but not with protein intake. Urea levels might change with protein intake. Formulas such as the Cockcroft and Gault formula and the MDRD formula (see Renal function) try to adjust for these variables. Cystatin C has a low molecular weight (approximately 13.3 kilodaltons), and it is removed from the bloodstream by glomerular filtration in the kidneys. If kidney function and glomerular filtration rate decline, the blood levels of cystatin C rise. Cross-sectional studies (based on a single point in time) suggest that serum levels of cystatin C are a more precise test of kidney function (as represented by the glomerular filtration rate, GFR) than serum creatinine levels. Longitudinal studies (following cystatin C over time) are sparse, but some show promising results. Although studies are somewhat divergent, most studies find that cystatin C levels are less dependent on age, gender, ethnicity, diet, and muscle mass compared to creatinine, and that cystatin C is equal or superior to the other available biomarkers in a range of different patient populations, including diabetic patients, in chronic kidney disease (CKD), and after kidney transplant. It has been suggested that cystatin C might predict the risk of developing CKD, thereby signaling a state of 'preclinical' kidney dysfunction. Additionally, the age-related rise in serum cystatin C is a powerful predictor of adverse age-related health outcomes, including all-cause mortality, death from cardiovascular disease, multimorbidity, and declining physical and cognitive function. The UK's National Institute for Health and Care Excellence (NICE) guideline for the assessment and management of CKD in adults concluded that using serum cystatin C to estimate GFR is more specific for important disease outcomes than use of serum creatinine, and may reduce overdiagnosis in patients with a borderline diagnosis, reducing unnecessary appointments, patient worries, and the overall burden of CKD in the population. 
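As a concrete illustration of cystatin C-based GFR estimation, the following Python sketch implements the 2012 CKD-EPI cystatin C equation (Inker et al.), one of several published estimating equations; the coefficients are quoted from memory of that publication and the example inputs are invented, so this is an illustrative sketch rather than a clinical tool.

```python
def egfr_ckd_epi_cystatin_c(scys_mg_l, age_years, female):
    """Estimated GFR (mL/min/1.73 m^2) from serum cystatin C alone,
    per the 2012 CKD-EPI cystatin C equation (Inker et al., NEJM 2012)."""
    ratio = scys_mg_l / 0.8
    egfr = 133.0 * min(ratio, 1.0) ** -0.499 * max(ratio, 1.0) ** -1.328
    egfr *= 0.996 ** age_years  # age factor
    if female:
        egfr *= 0.932           # sex factor
    return egfr

# Hypothetical example: a 60-year-old woman with serum cystatin C of 1.10 mg/L.
print(round(egfr_ckd_epi_cystatin_c(1.10, 60, female=True)))  # ~64
```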
Studies have also investigated cystatin C as a marker of kidney function in the adjustment of medication dosages. Cystatin C levels have been reported to be altered in patients with cancer, (even subtle) thyroid dysfunction and glucocorticoid therapy in some but not all situations. Other reports have found that levels are influenced by cigarette smoking and levels of C-reactive protein. However, inflammation does not cause an increase in the production of cystatin C, since elective surgical procedures, producing a strong inflammatory response in patients, do not change the plasma concentration of cystatin C. Levels seem to be increased in HIV infection, which might or might not reflect actual renal dysfunction. The role of cystatin C in monitoring GFR during pregnancy remains controversial. Like creatinine, the elimination of cystatin C via routes other than the kidney increases with worsening GFR. Death and cardiovascular disease Kidney dysfunction increases the risk of death and cardiovascular disease. Several studies have found that increased levels of cystatin C are associated with the risk of death, several types of cardiovascular disease (including myocardial infarction, stroke, heart failure, peripheral arterial disease and metabolic syndrome) and healthy aging. Some studies have found cystatin C to be better in this regard than serum creatinine or creatinine-based GFR equations. Because the association of cystatin C with long term outcomes has appeared stronger than what could be expected for GFR, it has been hypothesized that cystatin C might also be linked to mortality in a way independent of kidney function. In keeping with its housekeeping gene properties, it has been suggested that cystatin C might be influenced by the basal metabolic rate. Proposed shrunken pore syndrome The glomerular sieving coefficients for 10–30 kDa plasma proteins in the human kidney are relatively high, with coefficients between 0.9 and 0.07. These relatively high sieving coefficients, combined with the high production of ultrafiltrate in health, mean that proteins less than or equal to 30 kDa in plasma normally are mainly cleared by the kidneys and at least 85% of the clearance of cystatin C occurs in the kidney. If the pores of the glomerular membrane shrink, the filtration of bigger molecules, e.g. cystatin C, will decrease, whereas the filtration of small molecules, like water and creatinine, will be less affected. In this case, cystatin C-based estimates of GFR (eGFR_cystatin C) will be lower than creatinine-based estimates (eGFR_creatinine), so that a hypothesized condition, named shrunken pore syndrome, is identified by a low eGFR_cystatin C/eGFR_creatinine ratio. This syndrome is associated with a very strong increase in mortality. Neurologic disorders Mutations in the cystatin 3 gene are responsible for the Icelandic type of hereditary cerebral amyloid angiopathy, a condition predisposing to intracerebral haemorrhage, stroke and dementia. The condition is inherited in a dominant fashion. The monomeric cystatin C forms dimers and oligomers by domain swapping and the structures of both the dimers and oligomers have been determined. Since cystatin 3 also binds amyloid β and reduces its aggregation and deposition, it is a potential target in Alzheimer's disease. Although not all studies have confirmed this, the overall evidence is in favor of a role for CST3 as a susceptibility gene for Alzheimer's disease. Cystatin C levels have been reported to be higher in subjects with Alzheimer's disease.
The role of cystatin C in multiple sclerosis and other demyelinating diseases (characterized by a loss of the myelin nerve sheath) remains controversial. Other roles Cystatin C levels are decreased in atherosclerotic (so-called 'hardening' of the arteries) and aneurysmal (saccular bulging) lesions of the aorta. Genetic and prognostic studies also suggest a role for cystatin C. Breakdown of parts of the vessel wall in these conditions is thought to result from an imbalance between proteinases (cysteine proteases and matrix metalloproteinases, increased) and their inhibitors (such as cystatin C, decreased). A few studies have looked at the role of cystatin C or the CST3 gene in age-related macular degeneration. Cystatin C has also been investigated as a prognostic marker in several forms of cancer. Its role in pre-eclampsia remains to be confirmed. Laboratory measurement Cystatin C can be measured in a random sample of serum (the fluid in blood from which the red blood cells and clotting factors have been removed) using immunoassays such as nephelometry or particle-enhanced turbidimetry. It is a more expensive test than serum creatinine (around $2 or $3, compared to $0.02 to $0.15), which can be measured with a Jaffe reaction. Reference values differ in many populations and with sex and age. Across different studies, the mean reference interval (as defined by the 5th and 95th percentile) was between 0.52 and 0.98 mg/L. For women, the average reference interval is 0.52 to 0.90 mg/L with a mean of 0.71 mg/L. For men, the average reference interval is 0.56 to 0.98 mg/L with a mean of 0.77 mg/L. The normal values decrease until the first year of life, remaining relatively stable before they increase again, especially beyond age 50. Creatinine levels increase until puberty and differ according to gender from then on, making their interpretation problematic for pediatric patients. In a large study from the United States National Health and Nutrition Examination Survey, the reference interval (as defined by the 1st and 99th percentile) was between 0.57 and 1.12 mg/L. This interval was 0.55 - 1.18 for women and 0.60 - 1.11 for men. Non-Hispanic blacks and Mexican Americans had lower normal cystatin C levels. Other studies have found that in patients with an impaired renal function, women have lower and blacks have higher cystatin C levels for the same GFR. For example, the cut-off values of cystatin C for CKD for a 60-year-old white women would be 1.12 mg/L and 1.27 mg/L in a black man (a 13% increase). For serum creatinine values adjusted with the MDRD equation, these values would be 0.95 mg/dL to 1.46 mg/dL (a 54% increase). Based on a threshold level of 1.09 mg/L (the 99th percentile in a population of 20- to 39-year-olds without hypertension, diabetes, microalbuminuria or macroalbuminuria or higher than stage 3 chronic kidney disease), the prevalence of increased levels of cystatin C in the United States was 9.6% in subjects of normal weight, increasing in overweight and obese individuals. In Americans aged 60 and 80 and older, serum cystatin is increased in 41% and more than 50%. Molecular biology The cystatin superfamily encompasses proteins that contain multiple cystatin-like sequences. Some of the members are active cysteine protease inhibitors, while others have lost or perhaps never acquired this inhibitory activity. There are three inhibitory families in the superfamily, including the type 1 cystatins (stefins), type 2 cystatins and the kininogens. 
The type 2 cystatin proteins are a class of cysteine proteinase inhibitors found in a variety of human fluids and secretions, where they appear to provide protective functions. The cystatin locus on the short arm of chromosome 20 contains the majority of the type 2 cystatin genes and pseudogenes. The CST3 gene is located in the cystatin locus and comprises 3 exons (coding regions, as opposed to introns, non-coding regions within a gene), spanning 4.3 kilo-base pairs. It encodes the most abundant extracellular inhibitor of cysteine proteases. It is found in high concentrations in biological fluids and is expressed in virtually all organs of the body (CST3 is a housekeeping gene). The highest levels are found in semen, followed by breastmilk, tears and saliva. The hydrophobic leader sequence indicates that the protein is normally secreted. There are three polymorphisms in the promoter region of the gene, resulting in two common variants. Several single nucleotide polymorphisms have been associated with altered cystatin C levels. Cystatin C is a non-glycosylated, basic protein (isoelectric point at pH 9.3). The crystal structure of cystatin C is characterized by a short alpha helix and a long alpha helix which lies across a large antiparallel, five-stranded beta sheet. Like other type 2 cystatins, it has two disulfide bonds. Around 50% of the molecules carry a hydroxylated proline. Cystatin C forms dimers (molecule pairs) by exchanging subdomains; in the paired state, each half is made up of the long alpha helix and one beta strand of one partner, and four beta strands of the other partner. History Cystatin C was first described as 'gamma-trace' in 1961 as a trace protein together with other ones (such as beta-trace) in the cerebrospinal fluid and in the urine of people with kidney failure. Grubb and Löfberg first reported its amino acid sequence. They noticed it was increased in patients with advanced kidney failure. It was first proposed as a measure of glomerular filtration rate by Grubb and coworkers in 1985. Use of serum creatinine and cystatin C was found very effective in accurately reflecting the GFR in a study reported in the July 5, 2012, issue of the New England Journal of Medicine. References External links The MEROPS online database for peptidases and their inhibitors: I25.004 Proteins Nephrology
Cystatin C
[ "Chemistry" ]
2,965
[ "Proteins", "Biomolecules by chemical classification", "Molecular biology" ]
11,055,227
https://en.wikipedia.org/wiki/Admissible%20representation
In mathematics, admissible representations are a well-behaved class of representations used in the representation theory of reductive Lie groups and locally compact totally disconnected groups. They were introduced by Harish-Chandra. Real or complex reductive Lie groups Let G be a connected reductive (real or complex) Lie group. Let K be a maximal compact subgroup. A continuous representation (π, V) of G on a complex Hilbert space V is called admissible if π restricted to K is unitary and each irreducible unitary representation of K occurs in it with finite multiplicity. The prototypical example is that of an irreducible unitary representation of G. An admissible representation π induces a (𝔤, K)-module, which is easier to deal with as it is an algebraic object. Two admissible representations are said to be infinitesimally equivalent if their associated (𝔤, K)-modules are isomorphic. Though for general admissible representations this notion is different from the usual equivalence, it is an important result that the two notions of equivalence agree for unitary (admissible) representations. Additionally, there is a notion of unitarity of (𝔤, K)-modules. This reduces the study of the equivalence classes of irreducible unitary representations of G to the study of infinitesimal equivalence classes of admissible representations and the determination of which of these classes are infinitesimally unitary. The problem of parameterizing the infinitesimal equivalence classes of admissible representations was fully solved by Robert Langlands and is called the Langlands classification. Totally disconnected groups Let G be a locally compact totally disconnected group (such as a reductive algebraic group over a nonarchimedean local field or over the finite adeles of a global field). A representation (π, V) of G on a complex vector space V is called smooth if the subgroup of G fixing any vector of V is open. If, in addition, the space of vectors fixed by any compact open subgroup is finite-dimensional, then π is called admissible. Admissible representations of p-adic groups admit a more algebraic description through the action of the Hecke algebra of locally constant functions on G. Deep studies of admissible representations of p-adic reductive groups were undertaken by Casselman and by Bernstein and Zelevinsky in the 1970s. Progress was made more recently by Howe, Moy, Gopal Prasad and Bushnell and Kutzko, who developed a theory of types and classified the admissible dual (i.e. the set of equivalence classes of irreducible admissible representations) in many cases. Notes References Chapter VIII of Representation theory
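A compact formal restatement of the two admissibility conditions above, in LaTeX notation (with \widehat{K} denoting the set of irreducible unitary representations of K, and V^{K_0} the subspace of vectors fixed by a compact open subgroup K_0):

% Real or complex reductive case: every K-type occurs with finite multiplicity.
\dim_{\mathbb{C}} \operatorname{Hom}_{K}\bigl(\tau, \pi|_{K}\bigr) < \infty
  \quad \text{for every } \tau \in \widehat{K}.
% Totally disconnected case: smoothness plus finite-dimensional fixed-vector spaces.
V = \bigcup_{K_0} V^{K_0}, \qquad
  \dim_{\mathbb{C}} V^{K_0} < \infty \quad \text{for every compact open } K_0 \leq G.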
Admissible representation
[ "Mathematics" ]
524
[ "Representation theory", "Fields of abstract algebra" ]
11,055,618
https://en.wikipedia.org/wiki/Tableau%20de%20Concordance
The Tableau de Concordance was the main French diplomatic code used during World War I; the term also refers to any message sent using the code. It was a superenciphered four-digit code that was changed three times between 1 August 1914 and 15 January 1915. The Tableau de Concordance is considered superenciphered because there is more than one step required to use it. First, each word in a message is replaced by four digits via a codebook. These four digits are divided into three groups (one digit, two digits, one digit) so that when the whole message has been translated into code, the four-digit sets can be put together so it looks like the entire message is made up of two-digit pairs. This is called a "Straddle Gimmick." Then, in turn, each of these two digit pairs (and the single digits at the beginning and end) are replaced by two letters. The letters are then combined with no spaces for the final ciphertext. The manual for the Tableau de Concordance included the instruction that if there was not adequate time for completely enciphering the message, it should simply be sent in clear, because a partially enciphered message would have provided insight into the inner workings of the code. Sources The Codebreakers, by David Kahn, copyright 1967, 1996 France in World War I Cryptography
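Since the genuine codebook and letter tables are not reproduced in this article, the following Python sketch uses invented stand-in tables purely to illustrate the three enciphering steps described above: codebook lookup, the "Straddle Gimmick" regrouping, and the digit-to-letter substitution.

import string

# Stand-in codebook: each plaintext word maps to four digits. The real
# Tableau de Concordance codebook is not public here, so these entries
# are invented for illustration only.
CODEBOOK = {"attack": "1234", "at": "5678", "dawn": "9053"}

letters = string.ascii_uppercase
# One two-letter group per two-digit pair 00..99, and one per lone digit
# at the start and end of the stream (again, invented assignments).
PAIRS = {f"{n:02d}": letters[n // 10] + letters[n % 10] for n in range(100)}
SINGLES = {str(n): "Z" + letters[n] for n in range(10)}

def encipher(plaintext: str) -> str:
    # Step 1: codebook lookup gives four digits per word.
    digits = "".join(CODEBOOK[word] for word in plaintext.lower().split())
    # Step 2: the straddle -- splitting each word's digits 1-2-1 and then
    # concatenating is equivalent to a lone leading digit, a run of
    # two-digit pairs, and a lone trailing digit.
    head, tail = digits[0], digits[-1]
    pairs = [digits[i:i + 2] for i in range(1, len(digits) - 1, 2)]
    # Step 3: replace each pair (and the two lone digits) with two letters.
    return "".join([SINGLES[head]] + [PAIRS[p] for p in pairs] + [SINGLES[tail]])

print(encipher("attack at dawn"))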
Tableau de Concordance
[ "Mathematics", "Engineering" ]
286
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
11,055,676
https://en.wikipedia.org/wiki/Christian%20views%20on%20environmentalism
Christian views on environmentalism vary greatly amongst different Christians and Christian denominations. Green Christianity is a broad field that encompasses Christian theological reflection on nature, liturgy, and spiritual practices centered on environmental issues, as well as Christian-based activism in the environmental movement. Within the activism arena, green Christianity refers to a diverse group of Christians who emphasize the biblical or theological basis for protecting, celebrating and partnering with the environment. The term indicates not a particular denomination but a shared territory of concern. In the 21st century, and in response to the crises of nature and climate, many major Christian denominations recognise the Biblical calling for responsible, even sacrificial, care of, and partnership with, the rest of God's Creation, primarily interpreted as referring to life on Earth. Some branches of Christianity have become environmentally aware relatively recently, and such ideas may not be followed by all members and parishioners. According to some social science research, conservative Christians and members of the Christian right are typically less concerned about issues of environmentalism than the general public, and some fundamentalist Christians deny global warming and climate change. Roots of modern debate The status of nature in Christianity has been hotly debated, primarily since Lynn Townsend White Jr. delivered a lecture on the subject to the American Academy of Arts and Sciences in 1966, which was subsequently published in the journal Science. In the article, White places blame for the modern ecological crisis on Christian beliefs perpetuated from the Middle Ages. His conclusion rests primarily on the dominance of the Christian worldview in the West, which he sees as exploiting nature in an unsustainable manner. He asserts that Judeo-Christians are anti-ecological and hostile towards nature, having imposed a division between humans and nature and an attitude of exploiting nature in an unsustainable way, in which people thought of themselves as separate from nature. This exploitative attitude, combined with the technology of the industrial revolution, wreaked havoc on the ecology. Colonial forestry is a prime example of this destruction of ecology and native faiths. White concluded that Western Christianity bears a substantial "burden of guilt" for the contemporary environmental crisis. The many nationally-based grassroots movements of green or eco-Christians, such as Eco-Church [England & Wales], EcoCongregation Scotland, and various organisations in mainland Europe and the global north, would find themselves at odds with such a negative approach, preferring to see in the traditions and scriptures of mainstream Christianity a resource for discernment and spiritual resilience. The 'rule' of humanity, historically taken to justify domination, is then interpreted after the model of Jesus' 'servant leadership', of a 'good shepherd' who lays down their life for the sheep [Matthew 20]. Attention to the semantic marginalisation of creation in 20th-century translations leads in practice to activism based on faith, rather than Christians finding themselves constrained or limited by Enlightenment or colonial views of human relation to fellow creatures. Basic beliefs Christianity has a long historical tradition of reflection on nature and human responsibility, while historically having a strong tendency toward anthropocentrism.
It should be said that the spirituality of Christians in indigenous cultures sees creation without this polarisation of human and other creatures. While some Christians favor a more biocentric approach, Catholic officials and others seek to retain an emphasis on humanity while incorporating environmental concerns within a framework of Creation Care. Christian environmentalists emphasize the ecological responsibilities of all Christians as partners and guardians of all life on God's earth. Beginning with the verses Genesis 1:26–28, God instructs humanity to manage the creation in particular ways. Adam's early purpose was to give care to the Garden of Eden, in which he was placed to "work it and take care of it" (Genesis 2:15). Green Christians point out that the biblical emphasis is on safeguarding, not ownership – that the earth remains the Lord's (Psalms 24:1) and does not belong to its human inhabitants. Leviticus 25:23 states that the land must not be sold permanently, because the land belongs to God and its human occupants reside on it only as foreigners and strangers. As a result of these teachings, which emphasise relationship, partnership and safeguarding, Christian environmentalists oppose policies and practices that threaten the health or survival of the planet. Of particular concern to such Christians are the current widespread reliance on non-renewable resources, habitat destruction, pollution, and all other factors that contribute to climate change or otherwise threaten the health of the ecosystem. Due to these positions, many Christian environmentalists have broken ties with conservative political leaders. Beliefs by denomination Anglican Church The Anglican Communion and the Episcopal Church have strong beliefs about the need for environmental awareness and action. Reducing carbon footprints and moving toward sustainable living are priorities. The British have played a leading role in the modern environmentalist movement, and Prince Philip, Duke of Edinburgh, created the Alliance of Religions and Conservation, which from 1995 to 2019 raised awareness of environmental issues and global warming in religious communities. From 2024, the Anglican Church has had an environmental programme which supports the Anglican Communion Environmental Network, including initiatives such as Eco Church with the network A Rocha. Orthodox Churches Eastern Orthodox Patriarch Bartholomew I of Constantinople has voiced support for aspects of the environmentalist movement. Fr. John Chryssavgis serves as advisor to the Ecumenical Patriarch, currently Bartholomew I, on environmental issues such as global warming. Bartholomew I views climate change as a spiritual and ethical issue, stating that addressing it requires global collaboration and drastic lifestyle changes, as it affects everyone and cannot be ignored. Orthodox Christian theology is generally more mystical than the traditions which developed in the Christian West, emphasizing the renewal and transfiguration of the whole creation through Christ's redemptive work. Many Eastern Christian monastics, such as those at Mount Athos, are known for cultivating unusually close relationships with wild animals. Armenian Apostolic Church The late Catholicos Karekin I stated that the Armenian Apostolic Church is committed to the defense of creation, because harming God's gift, which humanity is called to care for, is a sin. Under Catholicos Karekin II, the Armenian Church produced a seven-year ecological action plan. Ethiopian Orthodox 'Tewahedo' Church Traditionally, Ethiopian Orthodox monasteries and some churches have preserved small sacred forests around their buildings in memory of the Garden of Eden.
This has allowed many endangered species to survive where their habitat has otherwise been lost. Lutheran Lutherans approach environmentalism with a deep-rooted theological framework that emphasizes the biblical mandate for stewardship, the interconnectedness of all creation, and the redemptive purpose of God's work in the world. By integrating these principles into their faith practice, Lutherans strive to fulfill their calling to care for God's creation and promote sustainable living practices for the flourishing of all life on earth. Major Lutheran Synods acknowledge that the Bible calls people to care for God's creation, and that the dominion that God gave his human creatures has often been abused to the detriment of creation: loss of biodiversity, resource depletion, environmental damage, etc. Christians are called to live according to God's wisdom in creation with his other creatures, and as such, sustainable living is needed. Lutherans draw upon Genesis 2:15 and Psalm 24:1 (see above), which emphasize the importance of respecting and safeguarding God's creation. Additionally, Lutherans frequently cite passages such as Romans 8:19–22, which speaks of creation eagerly awaiting its redemption from bondage to decay. This passage underscores the interconnectedness of humanity and the natural world, highlighting the shared destiny of all creation in God's redemptive plan. It prompts believers to work towards the restoration and reconciliation of all things, including the environment, in anticipation of God's ultimate renewal of creation. In Lutheran theology, the concept of vocation plays a significant role in shaping attitudes towards care of Creation. Martin Luther emphasized the idea that every Christian has a vocation or calling, and this includes responsibilities towards the care of creation. Thus, for Lutherans, environmental stewardship is not merely an optional virtue but an essential aspect of faithful Christian living. Presbyterian churches Care for Creation remains a deep commitment for many Presbyterians. Many mid-twentieth century progressive conservationists were Presbyterian or raised in the Presbyterian faith. Naturalist John Muir and landscape artist William Keith were raised in staunch Calvinist Presbyterian homes in Scotland during the nineteenth century. With Presbyterian photographer Carleton Watkins, they built public support for the US national parks. Their Reformed spiritual upbringing informed their ideas about nature and about humanity's role as God's keeper of the land. Calvinist theology, which emphasizes God's sovereignty over creation, inspired such environmentalists to see God's glory in nature. Because Calvinists, including Presbyterians, believe in God's sustaining power, they hold that the Divine relates intimately to the created order through providence. In his Institutes of the Christian Religion, John Calvin further taught that nature acted as the most apparent medium of God's revelation outside of scripture. The Westminster Confession of Faith echoes this teaching in the first chapter on holy scripture and the fourth on creation. Quakerism The Religious Society of Friends, or Quakers, has a history of environmental concern. Inspired by the testimony of stewardship, Friends have sought to practice ethical economics and creation care since the earliest days of the Society's founding. Numerous organizations and initiatives unite Quakers in the cause of environmental sustainability.
Quaker Earthcare Witness, founded in 1987 as the Friends Committee on Unity with Nature, is an organization which calls attention to the current ecological crises. Based on Quaker convictions, the organization argues that the deeper cause of environmental problems is a more profound spiritual crisis of human separation from the land. The Earth Quaker Action Team (EQAT) is a non-violent protest organization that engages in the fight for ecojustice. Energy companies that they view as ecologically harmful are often the targets of opposition. For example, in 2016, they pressured Philadelphia-based power company PECO to utilize solar energy. In 2010, Bank Like Appalachia Matters (BLAM!) campaigned for PNC Bank to stop financing industries engaged in mountaintop coal mining. By 2015, the bank ceased financing such enterprises. Roman Catholic Church Catholic environmental activists have found support in teachings by Pope Paul VI (Octogesima adveniens, #21) and Pope John Paul II (e.g., the encyclical Centesimus annus, #37–38). In 2015, Pope Francis issued the first encyclical letter devoted exclusively to environmental concerns, entitled Laudato si' (Be Praised). In it, he encourages humans to protect the Earth. He endorses climate action and has made the case for Christian environmentalism several times. "Take good care of creation. St. Francis wanted that. People occasionally forgive, but nature never does. If we don't care for the environment, there's no way of getting around it." During a lecture at the University of Molise in July 2014, Francis characterized environmental damage as "one of the greatest challenges of our times". In his letter he also acknowledges some diversity within earlier Catholic thought: "Some committed and prayerful Christians, with the excuse of realism and pragmatism, tend to ridicule expressions of concern for the environment [while others] are passive: they choose not to change their habits and thus become inconsistent". American churches Evangelical churches As the scientific community has presented evidence of climate change, some members of the evangelical community and other Christian groups have emphasized the need for Christian ecology, often employing the phrase "creation care" to indicate the religious basis of their project. Some of these groups are now interdenominational, having begun from an evangelical background and then gained international and interdenominational prominence with increased public awareness of environmental issues. Organizations that have their roots in the evangelical Christian community include A Rocha, the Evangelical Climate Initiative, and the Evangelical Environmental Network. Some prominent members of the Christian right political faction broke with the Bush administration and other conservative politicians over the issue of climate change. Christianity Today endorsed the McCain-Lieberman Bill, which was eventually defeated by the Republican Congress and opposed by Bush. According to the magazine, "Christians should make it clear to governments and businesses that we are willing to adapt our lifestyles and support steps towards changes that protect our environment." The increasing Christian support for strong positions on climate change and related issues has been referred to as "the greening of evangelicals." Many Christians have expressed dissatisfaction with leadership they feel places the interests of big businesses over Christian doctrine.
In reaction to the rise of environmentalism, many conservative evangelical Christians have embraced climate change denialism or maintained a neutral stance due to the lack of internal consensus on such issues. The Cornwall Alliance is a Christian right group that promotes free-market environmentalism. The National Association of Evangelicals has stated that "global warming is not a consensus issue" and is internally divided on the Christian response to climate change. Mormonism The Latter Day Saint movement has a complex relationship with environmental concerns, involving not only religion but politics and economics. Mormon environmentalists find theological reasons for stewardship and conservationism through biblical and additional scriptural references including a passage from the Doctrine and Covenants: "And it pleaseth God that he hath given all these things unto man; for unto this end were they made to be used, with judgment, not to excess, neither by extortion". In terms of environmentally friendly policies, The Church of Jesus Christ of Latter-day Saints (LDS Church) has some history of conservationist policies for their meetinghouses and other buildings. The church first placed solar panels on a church meetinghouse in the Tuamotu Islands in 2007. In 2010, the church unveiled five LEED-certified meetinghouse prototypes that are being used for future meetinghouse designs around the world. While the LDS Church has implemented some environmentally friendly policies, not all members of the church identify as environmentalists or support the environmentalism movement. A 2023 survey found that less than half of LDS Church members believe that climate change is caused by human activity and only one in ten view it as a crisis. Presbyterian Church (USA) The mainline Presbyterian Church (USA) has been an outspoken supporter of modern environmental causes. In 2018, it approved a policy for combating environmental racism. Other initiatives include establishing Presbyterian Earth Care Congregations and Green Leaf Seal camps, which involve many member churches and conference centers across the United States. The church's 2010 Earth Care Pledge summarizes critical aspects of creation restoration in four resolutions: worship, education, energy-efficient church facilities, and community outreach for environmental justice. Denominational resources on earth-care for local congregations stay available for distribution. Seventh-day Adventists The Seventh-day Adventist church has stated its commitment to environmental stewardship as well as taking action to avoid the dangers of climate change. Its official statement advocates a "simple, wholesome lifestyle" that does not chase consumerism and the resultant waste. It calls for a "lifestyle reformation ... based on respect for nature, restraint in using the world's resources, reevaluating one's needs, and reaffirming the dignity of created life." In 2010, the Loma Linda University Center for Biodiversity and Conservation Studies was introduced to address the comparative lack of environmental concern among Christians in education, scientific research, and general awareness. Southern Baptist Southern Baptists were among the first Christian groups in the United States to campaign for government control of pollution in the late 1960s. Concerns about possible worship of nature led to a move away from this campaign in the 1980s. In 2008, several pastors revised their views and published a statement on the duty of Christians to care for the environment. 
The Southern Baptist Environment and Climate Initiative is an independent coalition of Southern Baptist pastors, leaders, and laypersons who believe in stewardship that is both biblically rooted and intellectually informed, and the Convention has published positions on scripturally-mandated stewardship of the environment. United Methodist Church The United Methodist Church believes in the need for environmental stewardship. For Christians, the idea of sustainability flows directly from the biblical call to human beings to be stewards of God's creation. Through various initiatives and programs, the United Methodist Church encourages its members to engage in environmental stewardship practices. This includes supporting sustainable agriculture, advocating for environmental policies, and promoting energy conservation within church facilities. See also Catholic Earthcare Australia Christian vegetarianism Ecojesuit Ecotheology The Green Bible Pollution and the Death of Man Presbyterian Church (USA) Carbon Neutral Resolution References Further reading Allen, R. S., E. Castano, and P. D. Allen. (2007) Conservatism and concern for the environment. Quarterly Journal of Ideology 30(3/4):1-25. Elizabeth Breuilly (Author) with editor Martin Palmer. (1992) Christianity and Ecology Ialenti, Vincent & Meridian 180. "Toward a Global Intellectual Response to Pope Francis' Environmental Thought." Religious Left Law. 1/18/2016. Konisky, D. M., J. Milyo, and L. E. Richardson, Jr. (2008) Environmental policy attitudes: issues, geographic scale, and political trust. Social Science Quarterly 89:1066–1085. Frederick Krueger, American editor. (2012) Greening the Orthodox Parish: A Handbook for Christian Ecological Practice Guth, J. L., J. C. Green, L. A. Kellstedt, and C. E. Smidt. (1995) Faith and the environment: religious beliefs and attitudes on environmental policy. American Journal of Political Science 39:364–382. McCright, A. M., and R. E. Dunlap. (2003) Defeating Kyoto: the conservative movement's impact on U.S. climate change policy. Social Problems 50:348–373. Merritt, Jonathan. (2010) Green Like God: Unlocking the Divine Plan for Our Planet Schultz, P. W., L. Zelezny, and N. J. Dalrymple. (2000) A multinational perspective on the relation between Judeo-Christian religious beliefs and attitudes of environmental concern. Environment and Behavior 32:576–591. Wilkinson, Katharine K. (2012) Between God & Green Oxford University Press External links Creation Care Reading Room, Tyndale Seminary resources for Christian environmental ethics Various resources relating to Christianity and the environment A Rocha – An international Christian nature conservation organization Christian Environmental Association Care of Creation Inc., an evangelical environmental organization Religion and Foreign Policy Initiative, Council on Foreign Relations, http://cfr.org/religion. "Conservative Evangelicals embrace God and green: Why some right-leaning evangelical Christians have become true believers in climate change. God and green go together, these conservatives say" Islam, Christianity, and the Environment Climate For Change: What the church can do about global warming by Elizabeth Groppe America 26 March 2012 Radio Interview with Dr. Heather Eaton on the issue of Christianity, Ecological Literacy and the Environmental Crisis, University of Toronto, 13 July 2007. 
Global Heat Wave 10 September 2012 issue commentary by the Editors America published by Jesuits The Catholic Climate Covenant Evangelical Leaders Urge Action on Climate Change on NPR Prince Charles discusses the environment with the Pope, Associated Press, 27 April 2009. Creationism.org - Christian Stewardship of the Environment Sarx website for Christian Animal Welfare Environmental ethics
Christian views on environmentalism
[ "Environmental_science" ]
3,912
[ "Environmental ethics" ]
11,055,985
https://en.wikipedia.org/wiki/Allison%20Randal
Allison Randal is a software developer and author. She was the chief architect of the Parrot virtual machine, a member of the board of directors for The Perl Foundation, a director of the Python Software Foundation from 2010 to 2012, and the chairman of the Parrot Foundation. She is also the lead developer of Punie, the port of Perl 1 to Parrot. She is co-author of Perl 6 and Parrot Essentials and the Synopses of Perl 6. She was employed by O'Reilly Media. From August 2010 till February 2012, Randal was the Technical Architect of Ubuntu at Canonical. In 2009, Randal was chair of O'Reilly's Open Source Convention (OSCON). She was elected a fellow of the Python Software Foundation in 2010. She is currently a director of the Open Source Initiative and was its president between 2015 and 2017, taking over from and handing back to Simon Phipps. She also serves on the OpenStack Foundation board of directors. References External links "here be unicorns", Allison Randal's blog An Interview with Allison Randal by Simon Cozens of perl.com Interview with Allison Randal by The Perl Review The Perl Programming Language Year of birth missing (living people) Living people Perl people Perl writers O'Reilly writers American women computer scientists American computer scientists Members of the Open Source Initiative board of directors Python (programming language) people Open source advocates 21st-century American women
Allison Randal
[ "Technology" ]
302
[ "Computing stubs", "Computer specialist stubs" ]
11,056,803
https://en.wikipedia.org/wiki/Fire%20control%20tower
A fire control tower is a structure located near the coastline, used to detect and locate enemy vessels offshore, direct fire upon them from coastal batteries, or adjust the aim of guns by spotting shell splashes. Fire control towers came into general use in coastal defence systems in the late 19th century, as rapid development significantly increased the range of both naval guns and coastal artillery. This made fire control more complex. These towers were used in a number of countries' coastal defence systems through 1945, much later in a few cases such as Sweden. The Atlantic Wall in German-occupied Europe during World War II included fire control towers. The U.S. Coast Artillery fire control system included many fire control towers. These were introduced in the U.S. with the Endicott Program, and were used between about 1900 and the end of WWII. A typical fire control tower A fire control tower usually contained several fire control stations, known variously as observation posts (OPs), base end stations, or spotting stations from which observers searched for enemy ships, fed data on target location to a plotting room, or spotted the fall of fire from their battery, so the aim of the guns could be adjusted. For example, the fire control tower at Site 131-1A contained one OP, two base end stations, and two spotting stations. A shorthand notation was used to identify the stations. For instance, the top story of Site 131-1A was planned to contain base end station #3 and spotting station #3 for Battery #15. The overall plan document for the harbor defenses contained a list that linked the tactical numbers of all batteries to their names. That document also contained an organization chart that identified all the Command (C) and Group (G) codes, like "G3." These towers were arrayed in networks along the coast on either side of the artillery batteries they supported. The number and height of the towers was determined by the range of the guns involved. Many fire control towers were also part of a harbor's antiaircraft warning system. Spotters occupied cramped "crow's nests" on the top floors of the towers that enabled them to lift a trapdoor in the tower's roof and scan the sky for approaching aircraft. When an enemy surface craft was detected, bearings to it were measured from a pair of towers, using instruments like azimuth scopes or depression position finders. Since the distances along the line between the towers (called a baseline) had already been precisely measured by surveyors, the length of this baseline, plus the two bearing angles from two stations at the ends of the line (also called base end stations) to the target, could be used to plot the position of the target by a mathematical process called triangulation. A fire control tower was usually five to ten stories tall, depending on the height of the site at which it was built and the area it had to cover. Often made of poured concrete, its lower floors were usually unoccupied and were capped by occupied observation levels. Staircases ran up to the lowest observation level, and wooden ladders were then used to climb to higher levels. But some fire control structures built atop coastal hills or bluffs only needed to be one- or two-story buildings, and were built of wood or brick. Sometimes these buildings were camouflaged as private homes, and were referred to as fire control "cottages." 
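The horizontal-base triangulation described above reduces to the law of sines. The following Python sketch is a minimal illustration, assuming stations A and B at either end of a surveyed baseline, each measuring the angle between the baseline and its line of sight to the target; the function name and sample numbers are invented for the example.

import math

def plot_target(baseline_m: float, angle_a_deg: float, angle_b_deg: float):
    """Return the target's (x, y) position in metres, with station A at
    the origin and station B at (baseline_m, 0) on the x-axis."""
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    # The angle at the target is 180 degrees minus the two station angles,
    # so by the law of sines the range from A is
    # baseline_m * sin(b) / sin(a + b).
    range_from_a = baseline_m * math.sin(b) / math.sin(a + b)
    return range_from_a * math.cos(a), range_from_a * math.sin(a)

# An illustrative 2,000 m baseline with sightings of 62 and 75 degrees:
print(plot_target(2000.0, 62.0, 75.0))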
The center of the octagonal concrete mounting pad on the eighth floor of 131-1A (which was meant to support a depression position finder) was usually the surveyed point at the end of the baseline (and thus the precise location of the base end station). A survey marker embedded in the tower's roof directly above this pad defined this point. Other observing instruments on lower floors of the tower were usually lined up directly beneath the eighth floor mounting pad and the rooftop marker, so they shared the same latitude and longitude. The pipe stands shown on floors six and seven of the Nahant Site 131-1A tower probably held azimuth scopes, which were less complex telescopes that determined bearings to a target but not its range from the tower. Site 131-1A had electric lights, phones, and radio communications, and a time interval bell that was used for coordinating fire control information. Some fire control towers were also the mounting points for coast surveillance or fire control radar antennas. Although our sample tower has a simple, square appearance, some versions of these towers in New England had round or partly octagonal plans. A network of fire control structures Each major battery of Coast Artillery guns was supported by a network of fire control structures (towers, cottages, or buildings) which were spread out along the nearby coast. Guns of longer range had larger numbers of fire control stations in their networks. Depending on where the target ship was located and upon other tactical conditions, one or more of these stations would be selected to control the fire from a given battery on that target. For a WW2-era example, take Battery Murphy, a two-gun battery in Nahant, MA. Battery Murphy used ten fire control stations, which made up a fire control network spread out over about forty miles of coastline, running from Station 1 (Fourth Cliff) in the south to Station 10 (Castle Hill) in the north. Half of these stations were located in tall towers, and half in low-rise cottages. The length of the baselines running between each pair of stations was known very precisely. Station #1 and Station #2, for example, were a precisely surveyed distance apart. These distances were plugged into the triangulation equations for the pair of stations involved in sighting on a particular target in order to compute its position. As the target ship moved along the coast, different pairs of fire control stations (and therefore different baselines) would come into play. Very precise measurements were also taken of the distance between the directing point of each battery (often the pintle center of its Gun #1) and each fire control station's observing point. These distances could also be used for target location, if one of the observations was taken from the battery itself and another from the distant station.
Fire control tower
[ "Engineering" ]
1,314
[ "Structural engineering", "Towers" ]
11,056,914
https://en.wikipedia.org/wiki/Job%20safety%20analysis
A job safety analysis (JSA) is a procedure that helps integrate accepted safety and health principles and practices into a particular task or job operation. The goal of a JSA is to identify potential hazards of a specific role and recommend procedures to control or prevent these hazards. Other terms often used to describe this procedure are job hazard analysis (JHA), hazardous task analysis (HTA) and job hazard breakdown. The terms "job" and "task" are commonly used interchangeably to mean a specific work assignment. Examples of work assignments include "operating a grinder," "using a pressurized water extinguisher" or "changing a flat tire." Each of these tasks has different safety hazards that can be highlighted and fixed by using the job safety analysis. Terminology and definitions Workplace hazard categories Workplace hazards can be allocated to six categories: Safety hazards: Ex. spills, working from heights, confined spaces Biological hazards: Ex. bodily fluids, animal droppings, pathogens Physical hazards: Ex. radiation, extreme temperatures, loud noises Ergonomic hazards: Ex. awkward postures, incorrect lifting, vibration Chemical hazards: Ex. vapors and fumes, pesticides, flammable liquids Work organization hazards: Ex. workload demands, job stress, lack of respect Mechanism of injury Mechanism of injury (MOI) is the means by which an injury occurs. It is important because in the absence of an MOI there is no hazard. Common mechanisms of injury are "slips, trips and falls", for example: Hazard: Ex. a tool bag (in walkway) Mechanism of injury: Ex. trip (over tool bag) Injury = Bone fracture Other common mechanisms of injury include: Struck against or by Contact with or by Caught in, on, by or between Exposure to Fall to same or lower level Likelihood Likelihood is how often an event is reasonably and realistically expected to occur in a given time, and may be expressed as a probability, frequency or percentage. Consequence Consequence is the outcome of an event expressed qualitatively or quantitatively, being a loss, injury, disadvantage or gain. There may be a range of possible outcomes associated with an event. Consequence is the severity of the injury or harm that can be reasonably and realistically expected from exposure to the mechanism of injury of the hazard being rated. An implemented control may affect the severity of the injury, but it has no effect on the way the injury occurred. Therefore, when rating risk, the consequence remains the same for both the initial rating and the residual rating. People inherently tend to overestimate severity of consequence when rating risk, but the rating should be both reasonable and realistic. Risk Risk is the combination of likelihood and consequence; the risk at hand ties directly into the likelihood and severity of an incident (a simple risk matrix combining the two is sketched at the end of this section). Risk authority The risk authority is the organizational level of the person authorized to accept a specified level of risk. Different levels of risk authority may be assigned at successive organizational levels, with higher levels of risk requiring acceptance by more senior personnel. "As low as reasonably practicable" (ALARP) As low as reasonably practicable when applied to job safety analysis means that it is not necessary to reduce risk beyond the point where the cost of further control becomes disproportionate to any achievable safety benefit. The "ALARA" acronym ("As low as reasonably achievable") is also in common usage.
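Because risk is defined above as the combination of likelihood and consequence, organizations typically tabulate the two on a risk matrix. The following Python sketch shows one common 5 x 5 arrangement; the scale labels and band thresholds are illustrative assumptions, since each organization defines its own.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "catastrophic"]

def risk_score(likelihood: str, consequence: str) -> int:
    """Combine likelihood and consequence ranks into a 1-25 risk score."""
    return (LIKELIHOOD.index(likelihood) + 1) * (CONSEQUENCE.index(consequence) + 1)

def risk_level(score: int) -> str:
    """Map a score onto illustrative low/moderate/high/extreme bands."""
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

# The grinding example later in this article rates likelihood as 'possible'
# with a severe consequence; on this illustrative matrix that rates 'high'.
print(risk_level(risk_score("possible", "major")))  # 3 * 4 = 12 -> "high"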
Reasonably practicable In relation to a duty to ensure health and safety, reasonably practicable means that which is, or was at a particular time, reasonably able to be done to ensure health and safety, taking into account and weighing up all relevant matters including: The likelihood of the hazard or the risk concerned occurring The degree of harm that might result from the hazard or the risk What the person concerned knows, or ought reasonably to know, about the hazard or risk, and about the ways of eliminating or minimizing the risk The availability and suitability of ways to eliminate or minimize the risk After assessing the extent of the risk and the available ways of eliminating or minimizing the risk, the cost associated with available ways of eliminating or minimizing the risk, including whether the cost is grossly disproportionate to the expected reduction of risk Work process The way in which work is performed is called the work process. This entails all actions taken to do a specific role in the workplace. PEPE PEPE is used to assist in identifying hazards. It is an acronym for the four elements that are present in every task of the work process: Process, Environment, People, EMT, which is itself an acronym for 'equipment, materials and tools'. Process In this context, process is about procedures, standards, legislation, safe work instructions, permits and permit systems, risk assessments and policies. Key factors for effective process are that the relevant components are in place, easy to follow and regularly reviewed and updated. Environmental hazards People may be exposed to issues related to: Access and egress Obstructions Weather Dust, heat, cold, noise Darkness Contaminants Isolated workers Other workers Personnel hazards To assist people to be safe in their workplace, they need to be provided with sufficient information, training, instructions and supervision. People may be: Untrained Not yet competent Uncertified Inexperienced Unsupervised Affected by alcohol or other drugs Fatigued Inadequately instructed Suffering from stress from home life or workplace bullying Have a poor attitude to, or refuse to follow procedures Equipment, materials and tools (EMT) The right equipment, materials and tools must be selected for the task, and incorrect selections may be hazardous in themselves. The EMT may be hazardous, e.g. sharp, hot, vibrating, heavy, fragile, contain pinch points, or involve a hazardous substance containing hydrocarbons, acids, alkalis, glues, solvents, asbestos, et cetera. There may be a need for isolating personnel from energy sources such as electrical, hydraulic, pneumatic, radiation and gravitational sources. Is the EMT in date? Does it require certification and/or calibration, or testing and tagging? Obstructions should be kept out of walkways, and leads and hoses should be suspended. Hazard controls Controls are the barriers between people and/or assets and the hazards. Controls can also be thought of as "guardrails" that prevent negative impacts from occurring. A hard control provides a physical barrier between the person and the hazard. Hard controls include machine guards, restraint equipment, fencing/barricading. A soft control does not provide a physical barrier between the person and the hazard. Soft controls include signage, procedures, permits, verbal instructions etc. Control effectiveness criteria The effectiveness of a control is measured by its ability to reduce the likelihood of a hazard causing injury or damage. A control is either effective or not.
To gauge this effectiveness, several control criteria are used, which: Address the relevant aspects of process, environment, people, and equipment, materials and tools (PEPE), Reduce likelihood to as low as reasonably practicable (ALARP), Select hard controls in preference to soft controls, and Contain a 'doing word'. There is no commonly used mathematical way in which multiple controls for a single hazard can be combined to give a score that meets an organization's acceptable risk level. In instances where the residual risk is greater than the organisation's acceptable risk level, consultation with the organization's relevant risk authority should occur. Hierarchy of controls Hierarchy of control is a system used in industry to minimize or eliminate exposure to hazards. It is a widely accepted system promoted by numerous safety organizations. This concept is taught to managers in industry, to be promoted as a standard practice in the workplace. Various illustrations are used to depict this system, most commonly a triangle. The hierarchy of hazard controls is, in descending order of effectiveness: Elimination, substitution, engineering controls, administrative controls, and personal protective equipment. Scope of application A job safety analysis is a documented risk assessment developed when company policy directs employees to do so. Workplace hazard identification and an assessment of those hazards may be required before every job. Analyses are usually developed when directed to do so by a supervisor, when indicated by the use of a first-tier risk assessment, and when a hazard associated with a task has a likelihood rating of 'possible' or greater. Generally, high-consequence, high-likelihood task hazards are addressed by way of a job safety analysis. These may include, but are not limited to, those with a history of, or potential for, injury, harm or damage such as those involving: Fire, chemicals or a toxic or oxygen-deficient atmosphere Tasks carried out in new environments Rarely performed tasks Tasks that may impact on the integrity or output of a processing system It is important that employees understand that it is not the JSA form that will keep them safe on the job, but rather the process it represents. It is of little value to identify hazards and devise controls if the controls are not put in place. Workers should never be tempted to "sign on" the bottom of a JSA without first reading and understanding it. JSAs are quasi-legal documents, and are often used in incident investigations and court cases. Structure of a job safety analysis The analysis is usually created by the work group who will perform the task. The more minds and experience applied to analysing the hazards in a job, the more successful the work group is likely to be in controlling them. Sometimes it is expedient to review a JSA that was prepared when the same task was performed on a previous occasion, but care should be taken to ensure that all of the hazards for the job are controlled for the new occasion. The JSA is usually recorded in a standardized tabular format with three to as many as five or six columns. The more columns used, the more in-depth the job safety analysis will be. The depth of the analysis depends on what the role being investigated entails. The headings of the three basic columns are: Job step, hazard and controls. A hazard is any factor that can cause damage to personnel, property or the environment (some companies include loss of production or downtime in the definition as well).
A control is any process for controlling a hazard. The job is broken down into its component steps. Then, for each step, hazards are identified. Finally, for each hazard identified, controls are listed. As an example, the hazards might be analyzed for the task of erecting scaffolding and welding lifting lugs (see the illustrative worksheet sketch at the end of this section). Assessing risk levels Some organizations add columns for risk levels. The risk rating of the hazard prior to applying the control is known as the 'inherent risk rating'. The risk rating of the hazard with the control in place is known as the 'residual' risk rating. Risk, within the occupational health and safety sphere, is defined as the 'effect of uncertainty on objectives'. In the context of rating a risk, it is the correlation of 'likelihood' and 'consequence', where likelihood is a quantitative evaluation of frequency of occurrences over time, and consequence is a qualitative evaluation of both the "Mechanism of Injury" and the reasonable and realistic estimate of "severity of injury". Example: There is historical precedent to reasonably and realistically evaluate that the likelihood of an adverse event occurring while operating a hot-particle-producing tool (a grinder) is 'possible'; therefore, the activity of grinding meets the workplace hazard criteria. It would also be reasonable and realistic to assume that the mechanism of injury of an eye being struck at high speed with hot metal particles may result in a permanent disability, whether it be the eye of the grinder operator, a crew member or any person passing or working adjacent to, above or below the grinding operation. The severity of reasonably and realistically expected injury may be blindness. Therefore, grinding warrants a high severity rating. Wearing eye protection while in the vicinity of grinding operations reduces the likelihood of this adverse event occurring. If the eye protection were momentarily not used, incorrectly fitted, or failed, and hot high-speed particles struck an eye, the expected mechanism of injury (adverse event) has still occurred; hence the consequence rating remains the same for both the inherent and residual consequence rating. It is accepted that the control may affect the severity of injury; however, the rated consequence remains the same, as the effect is not predictable. One of the known risk rating anomalies is that likelihood and the severity of injury can be scaled, but mechanism of injury cannot be scaled. This is the reason why the mechanism of injury is bundled with severity, to allow a rating to be given. The MOI is an important factor as it suggests the obvious controls. Identifying responsibilities Another column that is often added to a JSA form or worksheet is the Responsible column. The Responsible column is for the name of the individual who will put the particular control in place. Defining who is responsible for actually putting the controls in place that have been identified on the JSA worksheet ensures that an individual is accountable for doing so.
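The worksheet example referred to above (erecting scaffolding and welding lifting lugs) did not survive in this copy of the text, so the Python sketch below reconstructs the column structure with invented, purely illustrative rows, including the optional Responsible column just discussed.

from dataclasses import dataclass

@dataclass
class JSARow:
    job_step: str
    hazard: str
    controls: str
    responsible: str  # the optional fourth column described above

worksheet = [
    JSARow("Unload scaffold components", "Manual handling strain",
           "Use two-person lifts and mechanical aids", "Rigger"),
    JSARow("Erect scaffold above 2 m", "Fall from height",
           "Wear a fall-arrest harness anchored to a rated point", "Scaffolder"),
    JSARow("Weld lifting lugs", "Hot particles striking the eyes",
           "Wear a welding shield; barricade the area below", "Welder"),
]

for row in worksheet:
    print(f"{row.job_step} | {row.hazard} | {row.controls} | {row.responsible}")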
If at any time during the task circumstances change, then work should be stopped (sometimes called a "time-out for safety"), and the hazards and controls described in the JSA should be reassessed and additional controls used or alternative methods devised. Again, work should only continue when every member of the work group agrees it is safe to do so. When the task is complete it is often of benefit to have a close-out or "tailgate" meeting, to discuss any lessons learned so that they may be incorporated into the JSA the next time the task is undertaken. References Occupational safety and health Hazard analysis
Job safety analysis
[ "Engineering" ]
2,849
[ "Safety engineering", "Hazard analysis" ]
11,056,991
https://en.wikipedia.org/wiki/Jurassic%20Park
Jurassic Park, later also referred to as Jurassic World, is an American science fiction media franchise created by Michael Crichton and centered on a disastrous attempt to create a theme park of cloned dinosaurs. It began in 1990 when Universal Pictures and Amblin Entertainment bought the rights to Crichton's novel Jurassic Park before it was published. The book was successful, as was Steven Spielberg's 1993 film adaptation. The film received a theatrical 3D re-release in 2013, and was selected in 2018 for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". Crichton's 1995 sequel novel, The Lost World, was followed by a 1997 film adaptation, also directed by Spielberg. Crichton did not write any further sequels in the series, although Spielberg would return as executive producer for each subsequent film, starting with Jurassic Park III (2001). In 2015, a second trilogy of films began with the fourth film in the series, Jurassic World. The film was financially successful, and was followed by Jurassic World: Fallen Kingdom (2018) and Jurassic World Dominion (2022). The Jurassic World films were co-written by Colin Trevorrow, who also directed the first and third installments in the trilogy. Jurassic World Rebirth, a new film set after the preceding trilogy, is scheduled for release in 2025, without Trevorrow's involvement. Numerous video games and comic books based on the franchise have been created since the release of the 1993 film, and several water rides have been opened at various Universal Studios theme parks. Lego has produced several animated projects based on the Jurassic World films, including Lego Jurassic World: Legend of Isla Nublar, a miniseries released in 2019. DreamWorks Animation also produced two animated series for Netflix, Jurassic World Camp Cretaceous (2020–2022) and Jurassic World: Chaos Theory (2024), both set during the Jurassic World trilogy. As of 2000, the franchise had generated $5 billion in revenue, making it one of the highest-grossing media franchises of all time. The film series is also one of the highest-grossing of all time, having earned over $6 billion at the worldwide box office as of 2022. The original Jurassic Park was the first to surpass $1 billion, doing so during its 2013 re-release. This was followed by each installment in the Jurassic World trilogy. Background Premise and dinosaurs The Jurassic Park franchise focuses on genetically engineered dinosaurs running amok on an island theme park in Costa Rica. The dinosaurs are cloned by extracting ancient DNA from mosquitoes, which sucked the blood of dinosaurs and then became fossilized in amber, preserving the DNA. Scientists then fill gaps in the genome using frog DNA. Although the films primarily take place on fictional islands located in the Pacific coast of Central America, Jurassic World: Fallen Kingdom (2018) and Jurassic World Dominion (2022) see the dinosaurs relocated throughout the world, including the U.S. mainland. The film series is notable for its recreation of dinosaurs, achieved primarily through animatronics and computer-generated imagery. The first film was praised for its dinosaur effects, and created an increased interest in the field of paleontology, while changing the public perception of dinosaurs with its modern portrayal. The World trilogy largely ignored recent paleontological findings to maintain continuity with the Park trilogy, leading to criticism among paleontologists. 
Both Jurassic Park III and Jurassic World include a scene where a character states that any inaccuracies in the dinosaurs can be attributed to the fact that they are genetically-engineered animals. To better reflect modern discoveries, Jurassic World Dominion (2022) expanded upon the concept of feathered dinosaurs which was first introduced in Jurassic Park III (2001). InGen International Genetic Technologies, Inc. (InGen) is the fictional company responsible for cloning the dinosaurs. According to the novels, it is based in Palo Alto, California, and has one location in Europe as well. Nevertheless, most of InGen's research took place on the fictional islands of Isla Sorna and Isla Nublar, near Costa Rica. While the first novel indicated InGen was just one of any number of small 1980s genetic engineering start-ups, the events of the novel and film revealed to a select group that InGen had discovered a method for cloning dinosaurs, which would be placed in an island theme park attraction. InGen was well established in the first novel as the entity behind the park, but for simplicity the first film emphasized the Jurassic Park brand. The InGen name is visible in the film — on computer screens, helicopters, etc. — but is never spoken. InGen's corporate identity is more prominent in the second film. By the time that Jurassic World takes place, InGen and all its intellectual property have been absorbed by the Masrani Global Corporation. Beacham's Encyclopedia of Popular Fiction describes InGen as comparable to other "sleazy organizations". Other sources reference the company's receiving a baby T. rex (in The Lost World: Jurassic Park) as an allusion to other exploitative entrepreneurs depicted in the 1933 film King Kong. Ken Gelder describes InGen as "resolutely secretive", like the tax firm in John Grisham's 1991 novel The Firm. Biosyn In the novels, Biosyn Corporation (or Biosyn for short) is InGen's corporate rival. The company is controversial for its industrial espionage in the genetics industry. Lewis Dodgson, an employee of Biosyn, helps the company in its theft of corporate secrets. Biosyn is interested in acquiring InGen's dinosaur DNA, believing the animals present a variety of uses such as hunting trophies and pharmaceutical test subjects. Dodgson makes only a minor appearance in the first film, and his employer is not named. However, Biosyn is featured in several video games. The company, as Biosyn Genetics, makes its film debut in Jurassic World Dominion (2022). By the time that the film takes place, Dodgson has become the company's CEO. Biosyn's employees now include geneticist Dr. Henry Wu and mathematician Dr. Ian Malcolm, the latter working as the company's in-house philosopher. With dinosaurs loose around the world and captured by governments, Biosyn has a contract to house the animals at its headquarters in the Dolomites mountain range in Italy. In addition to performing pharmaceutical research on the dinosaurs, the company has also captured 14-year-old orphan Maisie Lockwood and unleashed giant locusts to devour their rivals' crops. By the end of the film, this plot is foiled and exposed to the public. The film's director, Colin Trevorrow, described Biosyn not as an "evil" corporation, but rather an entity with thousands of employees who have the best intentions in mind, only to feel betrayed by Dodgson upon learning of his actions. 
Isla Nublar Isla Nublar () is a fictional Central American island that serves as the main setting in the first novel and its film adaptation, as well as Jurassic World. According to the novel, its name means "Cloud Island" in Spanish. The tropical island is located west of Costa Rica and has an inactive volcano. In the first novel and film, Isla Nublar is the location of Jurassic Park, a dinosaur theme park proposed by InGen, but it fails to open after the animals escape. In the novel, the Costa Rican government declares the island unsafe and has it napalmed; in the film series, the island continues to exist until the Jurassic World trilogy. In Jurassic World, the theme park idea has been carried out successfully by Masrani Global Corporation. By the end of the film however, the island is overrun by dinosaurs once more following the Indominus rex incident. In Jurassic World: Fallen Kingdom, Isla Nublar is destroyed when its volcano becomes active again and erupts. In the films, several Hawaiian islands stood in as Isla Nublar, including Oahu and Kauai. Some filming also took place on sound stages, in California for the original film, and in Louisiana for Jurassic World. Isla Sorna Isla Sorna, also called Site B, is another fictional Central American island. It is southwest of Isla Nublar, and west of Costa Rica. It is the main setting for the second novel and its film adaptation, as well as the third film. Isla Sorna is where InGen conducted much of its dinosaur research. It is here that the dinosaurs were bred before being shipped off to Isla Nublar; a laboratory on the latter island was built only as a showroom for tourists. Isla Sorna is significantly larger than Isla Nublar and has various climates including tropical, highland tropical and temperate rainforest. It is part of a five-island chain known as Las Cinco Muertes (Spanish for "The Five Deaths"), although the other islands do not play a role in the novels or films. However, they are used as the main setting for the 2018 video game Jurassic World Evolution. InGen abandons Isla Sorna after the events of the first novel and film, and the dinosaurs are left to live freely and reproduce. At the end of the second film, it is stated that Isla Sorna has been set up as a biological preserve for the animals, after a failed attempt to relocate them to a new theme park in San Diego. The status of Isla Sorna is not mentioned in Jurassic World or Jurassic World: Fallen Kingdom, but a promotional website for the latter film states that the island ecosystem suffered a breakdown after illegally-cloned animals were introduced there. The surviving dinosaurs were relocated to Isla Nublar for the opening of the Jurassic World theme park, leaving Sorna abandoned. Jurassic World Dominion shows the two adult Tyrannosaurus from Isla Sorna encountering the Tyrannosaurus from Isla Nublar. In the same film, Ramsay Cole mentions that Isla Sorna's dinosaurs have been relocated to Biosyn's valley along with those from Isla Nublar that have been rounded up. The island briefly appears in video footage from 1986 shown to Maisie Lockwood by Henry Wu. For the second film, Humboldt County, California served as the primary location for scenes set on Isla Sorna, giving it a forest climate. Filming also took place on sound stages at Universal Studios Hollywood, and a beach scene was shot on Kauai. The third film largely uses Oahu and Kauai to represent Isla Sorna, as the original film had done for Isla Nublar. 
A jungle set was also built on a sound stage at Universal Studios. Novels Jurassic Park (1990) In 1983, Michael Crichton originally conceived a screenplay about a pterosaur being cloned from fossil DNA. After wrestling with this idea for a while, he came up with the story of Jurassic Park. Crichton worked on the book for several years; he decided his first draft would have a theme park for the setting (similar to his 1973 film Westworld) and a young boy as the main character. Feedback on this draft was extremely negative, so Crichton rewrote the story to make it from an adult's point of view, which resulted in more positive feedback. Steven Spielberg learned of the novel in October 1989 while he and Crichton were discussing a screenplay that would become the TV series ER. Warner Bros. Pictures, Columbia Pictures, 20th Century Fox, and Universal Pictures bid for the rights to the novel before its publication. In May 1990, Universal acquired the rights, with the backing of Spielberg's Amblin Entertainment. Crichton set a non-negotiable fee of $1.5 million as well as a substantial percentage of the gross. Universal further paid Crichton $500,000 to adapt his own novel (Malia Scotch Marmo, who was a writer on Spielberg's 1991 film Hook, wrote the next draft of Jurassic Park, but was not credited; David Koepp wrote the final draft, which left out much of the novel's exposition and violence, and made numerous changes to the characters). Universal desperately needed money to keep the company alive, a need Jurassic Park partially met by becoming a critical and commercial success. The Lost World (1995) After the film adaptation of Jurassic Park was released to home video, Crichton was pressured by many sources for a sequel novel. Crichton declined all offers until Spielberg himself told him that he would be keen to direct a film adaptation of the sequel, if one were written. Crichton began work almost immediately and in 1995 published The Lost World. Crichton confirmed that his novel had elements taken from the novel of the same name by Sir Arthur Conan Doyle. The book was also an outstanding success, with both professional and amateur critics. The film adaptation, The Lost World: Jurassic Park, began production in September 1996. Jurassic Park Adventures (2001–2002) Scott Ciencin wrote a trilogy of spin-off novels based upon Jurassic Park III. The series contained Jurassic Park Adventures: Survivor and Jurassic Park Adventures: Prey, both released in 2001, and Jurassic Park Adventures: Flyers, released the following year. The Evolution of Claire (2018) The Evolution of Claire (Jurassic World) is a young adult novel written by Tess Sharpe. It is based upon the Jurassic World trilogy, and was released in 2018 in conjunction with the release of Jurassic World: Fallen Kingdom. It is a spin-off set in 2004, prior to the opening of the Jurassic World theme park. The novel is about college freshman Claire Dearing during her summer internship at the park. Maisie Lockwood Adventures (2022) Maisie Lockwood Adventures (Jurassic World) is a children's book series written by Tess Sharpe and illustrated by Chloe Dominique. It is based upon the Jurassic World trilogy, and was released in 2022 in conjunction with the release of Jurassic World Dominion. Consisting of the novels Off the Grid and The Yosemite Six, the series tells the adventures of Maisie Lockwood as she navigates a world filled with dinosaurs both ferocious and friendly.
Films Jurassic Park trilogy Jurassic Park (1993) John Hammond (Richard Attenborough) is the owner of Jurassic Park, a theme park located on Isla Nublar. After an incident with a velociraptor, Hammond brings in three specialists to sign off on the park to calm investors. The specialists, paleontologist Alan Grant (Sam Neill), paleobotanist Ellie Sattler (Laura Dern), and chaos theorist Ian Malcolm (Jeff Goldblum), are surprised to see that the island park's main attractions are living, breathing dinosaurs, created with a mixture of fossilized DNA and genetic cross-breeding/cloning. When lead programmer Dennis Nedry (Wayne Knight) turns off the park's power to sneak out with samples of the dinosaur embryos to sell to a corporate rival, the dinosaurs break free, and the survivors are forced to find a way to turn the power back on and make it out alive. The film also stars Bob Peck, Martin Ferrero, BD Wong, Ariana Richards, Joseph Mazzello, and Samuel L. Jackson. Spielberg cited Godzilla as an inspiration for Jurassic Park, specifically Godzilla, King of the Monsters! (1956), which he grew up watching. During production, Spielberg described Godzilla as "the most masterful of all the dinosaur movies because it made you believe it was really happening". Jurassic Park's biggest impact on subsequent films was a result of its breakthrough use of computer-generated imagery. The film is regarded as a landmark for visual effects. It received positive reviews from critics, who praised the effects, though reactions to other elements of the picture, such as character development, were mixed. During its release, the film grossed more than $914 million worldwide, becoming the most successful film released up to that time (surpassing E.T. the Extra-Terrestrial and surpassed four years later by Titanic), and it is currently the 17th-highest-grossing feature film (taking inflation into account, it is the 20th-highest-grossing film in North America). It is the most financially successful film for NBCUniversal and Steven Spielberg. Jurassic Park has also been proposed for recognition as Intangible Geoheritage due to its cultural impact on public views of dinosaurs, including a change in the popular iconography of carnivorous dinosaurs. Jurassic Park had two re-releases: the first on September 23, 2011, in the United Kingdom, and the second, converted into 3D, on April 5, 2013, for its 20th anniversary, which resulted in the film passing the $1 billion mark at the worldwide box office. In 2018, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". The Lost World: Jurassic Park (1997) Before The Lost World was published, a film adaptation was already in pre-production, with its release occurring in May 1997. The film was a commercial success, breaking many box-office records when released. The film opened to mixed reviews, drawing criticism of its characterization similar to that leveled at its predecessor. Critical response to The Lost World has since become more favorable, with some publications calling it the best Jurassic Park sequel. Much like the first film, The Lost World: Jurassic Park made a number of changes to the plot and characters from the book, replacing the corporate rivals with an internal power struggle and changing the roles or characterizations of several protagonists.
When a vacationing family stumbles upon the dinosaurs of Isla Sorna, a secondary island where the animals were bred en masse and allowed to grow before being transported to the park, Ian Malcolm (Jeff Goldblum) is called in by John Hammond (Richard Attenborough) to lead a team documenting the island so that it can be turned into a preserve, where the animals can roam free without interference from the outside world. Malcolm agrees to go when he discovers his girlfriend, paleontologist Sarah Harding (Julianne Moore), is already on the island, while at the same time Hammond's nephew, Peter Ludlow (Arliss Howard), has taken over his uncle's company and leads a team of hunters to capture the creatures and bring them back to a theme park in San Diego. The two groups clash and are ultimately forced to work together to evade the predatory creatures and survive the second island. The film also stars Pete Postlethwaite, Richard Schiff, Vince Vaughn, Vanessa Lee Chester, Peter Stormare, and a young Camilla Belle. Jurassic Park III (2001) Joe Johnston had been interested in directing the sequel to Jurassic Park and approached his friend Steven Spielberg about the project. While Spielberg wanted to direct the first sequel, he agreed that if there was ever a third film, Johnston could direct. Spielberg, nevertheless, stayed involved in this film by becoming its executive producer. Production began on August 30, 2000, with filming in California and the Hawaiian islands of Kauai, Oahu, and Molokai. It is the first Jurassic Park film not to be based on a novel, although it does incorporate some unused plot elements from the Crichton novels, such as the river escape and the pterosaur aviary. Jurassic Park III had a troubled production, and received mixed reviews from critics. When their son Eric (Trevor Morgan) goes missing while parasailing at Isla Sorna, the Kirbys (William H. Macy and Téa Leoni) hire Alan Grant (Sam Neill) under false pretenses to help them navigate the island. Believing the job to be nothing more than sight-seeing, with him acting as a dinosaur guide from the safety of their plane, Grant is startled when they land on the island, where they are stalked by a Spinosaurus, which destroys their plane. As they search for the Kirbys' son, the situation grows dire as Velociraptors hunt their group and they must find a way off the island. The film also stars Alessandro Nivola, Michael Jeter, Mark Harelik, and Laura Dern. Jurassic World series Jurassic World (2015) Steven Spielberg devised a story idea for a fourth film in 2001, during production of Jurassic Park III. In 2002, William Monahan was hired to write the script, with the film's release scheduled for 2005. Early aspects of the plot included dinosaurs escaping to the mainland, and an army of genetically modified dinosaur-human mercenaries. Monahan finished the first draft of the script in 2003. Sam Neill and Richard Attenborough were set to reprise their characters, while Keira Knightley was in talks for two separate roles. In 2004, John Sayles wrote two drafts of the script. Sayles' first draft involved a team of Deinonychus being trained for use in rescue missions. Both drafts were scrapped, and a new script was being worked on in 2006. Laura Dern was contacted to reprise her role, with the film expected for release in 2008. The film was further delayed by the 2007–08 Writers Guild of America strike. Mark Protosevich wrote two film treatments in 2011, which were rejected.
Rise of the Planet of the Apes screenwriters Rick Jaffa and Amanda Silver were hired in 2012 to write an early draft of the script. In 2013, Colin Trevorrow was announced as director and co-writer, with the film scheduled for release on June 12, 2015. The film was shot in Univisium 2.00:1. The film features a new park, Jurassic World, built on the remains of the original park on Isla Nublar. The film sees the park run by Simon Masrani (Irrfan Khan) and Masrani Corp, and features the return of Dr. Henry Wu (BD Wong) from the first film. Chris Pratt, Bryce Dallas Howard, and Jake Johnson star, while Vincent D'Onofrio portrayed the main antagonist, Vic Hoskins. The cast also includes Lauren Lapkus, Ty Simpkins, Nick Robinson, Omar Sy, and Judy Greer. The primary dinosaur antagonist is Indominus rex, a genetically modified hybrid of Tyrannosaurus rex and several other species, including Velociraptor, cuttlefish, tree frog, and pit viper. Indominus rex also features a chameleon-like camouflage ability, which was a plot element from the second Crichton novel unused in previous films. Jurassic World received generally positive reviews. It was the first film ever to gross over $500 million worldwide in its opening weekend, and grossed over $1.6 billion through the course of its theatrical run, making it the world's third-highest-grossing film at the time. It became the second-highest-grossing film of 2015, and remains the ninth-highest-grossing film of all time. When adjusted for monetary inflation, Jurassic World is the highest-grossing film in the franchise. Jurassic World: Fallen Kingdom (2018) A sequel to Jurassic World was released on June 22, 2018. The film was directed by J. A. Bayona and written by Trevorrow and Connolly, with Trevorrow and Spielberg as executive producers. The film stars Chris Pratt, Bryce Dallas Howard, Rafe Spall, Justice Smith, Daniella Pineda, James Cromwell, Toby Jones, Ted Levine, BD Wong, Isabella Sermon, and Geraldine Chaplin, with Jeff Goldblum reprising his role as Dr. Ian Malcolm. During early conversations on Jurassic World, Spielberg told Trevorrow that he was interested in having several more films made. In April 2014, Trevorrow announced that sequels to Jurassic World had been discussed: "We wanted to create something that would be a little bit less arbitrary and episodic, and something that could potentially arc into a series that would feel like a complete story". Trevorrow, who said he would direct the film if asked, later told Spielberg that he would only focus on directing one film in the series. Trevorrow believed that different directors could bring different qualities to future films. Bayona was once considered to direct Jurassic World, but he declined as he felt there was not enough time for production. Filming took place from February to July 2017, in the United Kingdom and Hawaii. Former Jurassic World manager Claire Dearing and Velociraptor handler Owen Grady join a mission to rescue Isla Nublar's dinosaurs from a volcanic eruption by relocating them to a new island sanctuary. They discover that the mission is part of a scheme to sell the captured dinosaurs on the black market in order to fund further genetic research. The captured dinosaurs are brought to an estate in northern California, where several of the creatures are auctioned and subsequently shipped to their new owners.
A new hybrid dinosaur, the Indoraptor (one of the primary antagonists of the film), escapes and terrorizes people at the estate, forcing Owen and Claire to survive the ensuing chaos and rampage. A subplot about human cloning was introduced in the film. Fallen Kingdom, similar to the second installment, The Lost World, re-explores themes of the aftermath of the dinosaur park's demise on Isla Nublar and the exploitation of dinosaurs by humans. Fallen Kingdom grossed over $1.3 billion, making it the third Jurassic film to pass the billion-dollar mark. It is the third-highest-grossing film of 2018, and the 22nd-highest-grossing film of all time. It received mixed reviews from critics. Jurassic World Dominion (2022) Jurassic World Dominion was released on June 10, 2022. It was directed by Trevorrow, with a screenplay written by him and Emily Carmichael, based on a story by Trevorrow and Connolly. Trevorrow and Spielberg serve as executive producers for the film, with Frank Marshall and Patrick Crowley as producers. The film stars Chris Pratt and Bryce Dallas Howard, returning from the previous Jurassic World films. Sam Neill, Laura Dern, and Jeff Goldblum also reprise their characters for major roles, marking the trio's first film appearance together since the original Jurassic Park film. In addition, Daniella Pineda, Justice Smith, Isabella Sermon, and Omar Sy reprise their roles from the previous two films. Other actors include Mamoudou Athie, DeWanda Wise, Dichen Lachman, and Scott Haze. Campbell Scott portrays the character Lewis Dodgson from the first film, originally played by Cameron Thor. Planning for the film dates to 2014. Trevorrow and Carmichael were writing the script as of April 2018. Trevorrow said the film would focus on the dinosaurs that went "open source" after being sold and spread around the world in Jurassic World: Fallen Kingdom, allowing people other than Dr. Henry Wu to create their own dinosaurs. Trevorrow stated that the film would be set around the world, and said that the idea of Henry Wu being the only person who knows how to create a dinosaur was far-fetched "after 30 years of this technology existing" within the films' universe. Additionally, the film would focus on the dinosaurs that were freed at the end of Jurassic World: Fallen Kingdom, but it would not depict dinosaurs terrorizing cities and going to war against humans; Trevorrow considered such ideas unrealistic. Instead, Trevorrow was interested in a world where "dinosaur interaction is unlikely but possible—the same way we watch out for bears or sharks". Certain scenes and ideas regarding the integration of dinosaurs into the world were ultimately removed from the Jurassic World: Fallen Kingdom script to be saved for the third film. Filming locations included Canada, England's Pinewood Studios, and the country of Malta. Jurassic World Dominion began filming in February 2020, but was put on hiatus several weeks later as a safety precaution due to the COVID-19 pandemic. Production later resumed that July, with numerous health precautions in place, including COVID-19 testing and social distancing. Filming wrapped four months later. Jurassic World Dominion uses more animatronics than the previous films. The animatronic dinosaurs were created by John Nolan and his team. It is also the second film in the franchise to feature feathered dinosaurs, after the feathered Velociraptors in Jurassic Park III.
The film is set four years after the events of Fallen Kingdom, with dinosaurs now living alongside humans around the world. It follows Owen Grady and Claire Dearing as they embark on a rescue mission, while Alan Grant and Ellie Sattler reunite with Ian Malcolm to expose a conspiracy by the genomics corporation Biosyn, a onetime rival of the now-defunct InGen. Dominion grossed over $1 billion, becoming the third-highest-grossing film of 2022, and the fourth in the franchise to pass the billion-dollar mark. It received mixed-to-negative reviews from critics. An extended version of Jurassic World Dominion was released on 4K Ultra HD, Blu-ray, and DVD on August 16, 2022. The extended edition received more favorable reviews and is considered an improvement over the theatrical cut. Jurassic World Rebirth (2025) Jurassic World Dominion concluded the second film trilogy as well as the storyline that began in the original trilogy, although future films in the franchise were not ruled out. Marshall said in May 2020 that Jurassic World Dominion would mark "the start of a new era", in which humans have to adjust to dinosaurs being on the mainland. Marshall reiterated in January 2022 that there could be more films: "We're going to sit down, and we're going to see what the future is". Trevorrow, noting that he spent nine years working on the Jurassic World trilogy, said in May 2022 that he would likely not return for another film, except in a possible advisory role. He expressed interest in having Howard direct a future film. He also suggested that several characters introduced in Dominion could return for future installments, including Kayla Watts (portrayed by DeWanda Wise), Ramsay Cole (Mamoudou Athie), and Soyona Santos (Dichen Lachman). Pratt and Howard do not expect to reprise their roles again, and Neill said Dominion would be the last film for Dern, Goldblum, and himself. In January 2024, a new installment was revealed to be officially in development. The film is scheduled for a July 2, 2025, release. David Koepp returned as screenwriter, while Frank Marshall and Patrick Crowley once again serve as producers. Steven Spielberg will serve as executive producer on the project, a joint-venture production between Amblin Entertainment and Universal. Development of the project had been underway for some time prior to its announcement. The film will be directed by Gareth Edwards. It stars Scarlett Johansson, Mahershala Ali, and Jonathan Bailey, with Rupert Friend, Manuel Garcia-Rulfo, Luna Blaise, and David Iacono. Filming took place in Thailand, Malta, and the U.K. The film is set five years after the events of Dominion. Short films As of 2022, two short films have been released. Both take place between Jurassic World: Fallen Kingdom and Jurassic World Dominion, and are considered canon with the film series. Battle at Big Rock (2019) Battle at Big Rock is the first live-action short film in the franchise, and was released on September 15, 2019. The eight-minute film was directed by Colin Trevorrow, and was co-written by him and Emily Carmichael. The film stars André Holland, Natalie Martinez, Melody Hurd, and Pierson Salvador. The film is set one year after the events of Jurassic World: Fallen Kingdom. In the film, a family goes on a camping trip at the fictional Big Rock National Park in northern California, not far from where the dinosaurs of Fallen Kingdom were let loose. The film chronicles the first major confrontation between humans and the dinosaurs.
Jurassic World Dominion prologue (2021) A five-minute Jurassic World Dominion prologue was released in 2021, serving as the franchise's second live-action short film. It was originally intended as the film's opening sequence before being removed from the final cut. It features a prehistoric segment showcasing dinosaurs in their natural habitats, then cuts to the present day as a T. rex wreaks havoc at a drive-in theater. The prologue is used as the opening sequence in the extended edition of Jurassic World Dominion. Television Lego animated projects Lego produced various CGI-animated projects, including the two-part television special Lego Jurassic World: The Secret Exhibit, which aired on NBC on November 29, 2018. A 13-episode miniseries, Lego Jurassic World: Legend of Isla Nublar, premiered in 2019. It was broadcast on Family Channel in Canada and on Nickelodeon in the U.S. Jurassic World Camp Cretaceous (2020–2022) Jurassic World Camp Cretaceous is a CGI-animated series that premiered globally on Netflix on September 18, 2020. It ran for five seasons and 49 episodes, concluding on July 21, 2022. It is a joint project between Netflix, Universal Studios, Amblin Entertainment, and DreamWorks Animation. Scott Kreamer and Lane Lueras are the showrunners, and executive produce the series with Spielberg, Marshall, and Trevorrow, while Zack Stentz serves as consulting producer. The series is set during the events of the Jurassic World trilogy, and is about six teenagers attending an adventure camp on Isla Nublar. When the park's dinosaurs escape, the teenagers are stranded and must work together to escape the island. The voice cast includes Paul-Mikél Williams, Jenna Ortega, Ryan Potter, Raini Rodriguez, Sean Giambrone, Kausar Mohammed, Jameela Jamil, and Glen Powell. An interactive special, titled Hidden Adventure, debuted a few months after the series ended. Jurassic World: Chaos Theory (2024) Jurassic World: Chaos Theory, an animated sequel series to Camp Cretaceous, was released on Netflix on May 24, 2024. It is set between the events of Jurassic World: Fallen Kingdom and Jurassic World Dominion. Most of the voice actors reprised their roles, although Potter's and Ortega's roles were recast. Live-action series A live-action television series based on the Jurassic World trilogy was reportedly in development as of March 2020. However, Marshall said two years later that such a series had not been discussed, and that his focus was on the films. Speaking about Camp Cretaceous, Marshall said: "I think that's plenty for now". Cancelled projects Escape from Jurassic Park In June 1993, after the theatrical release of Jurassic Park, spokesmen for Amblin and MCA confirmed that an animated series based on the film was in development and awaiting Spielberg's final approval. The series, titled Escape from Jurassic Park, would have consisted of 23 episodes for its first season. The series would have centered on John Hammond's attempts to finish Jurassic Park and open it to the public, while InGen's corporate rival Biosyn simultaneously plans to open its own dinosaur theme park in Brazil, which ultimately ends with its dinosaurs escaping into the jungles. It was believed that, if produced, the project would be the most expensive animated series up to that time. Jeff Segal, president of Universal Cartoon Studios, said: "We are developing a TV series that we anticipate would be computer animated and very sophisticated.
However, Spielberg has not had a chance yet to look at either the material or the format for the series". Segal said Universal was considering the possibility of developing the series for prime time, also commenting about the series' storyline: "It would essentially pick up from the closing moments of the movie and it would continue the story in a very dramatic way. The intention would be to continue with the primary characters and also introduce new characters". Segal also said the series would be aimed specifically at the same target audience as the film, while hoping that it would also appeal to young children. Animation veteran and comic artist Will Meugniot (then working at Universal Cartoon Studios on various projects, including Exosquad) contacted artist William Stout to ask if he would be interested in designing the animated series. According to Stout: "This was not going to be a kiddy show (although kids of all ages, including myself, could enjoy it). They wanted the show to be a mature prime time series with top writers and state-of-the-art television animation augmented with quite a bit of CG animation". Universal Cartoon Studios wanted the show to have the look of a graphic novel. Stout was hired to work on the series and subsequently made a trailer to demonstrate how the series would look, and how it would combine traditional animation with computer animation. The series required Spielberg's final approval before it could go into production. However, Spielberg had grown tired of the massive promotion and merchandise revolving around the film, and never watched the trailer. On July 13, 1993, Margaret Loesch, president of the Fox Children's Network, confirmed that discussions had been held with Spielberg about an animated version of the film. Loesch also said: "At least for now and in the foreseeable future, there will not be an animated Jurassic Park. That's Steven Spielberg's decision, and we respect that decision". Jurassic Park: Chaos Effect Part three of the four-part comic adaptation of The Lost World: Jurassic Park, published by Topps Comics in July 1997, confirmed to readers that a cartoon series based on the film was in development. It was commissioned by Spielberg and would be developed by DreamWorks Animation. In November 1997, it was reported that the cartoon would be accompanied by Jurassic Park: Chaos Effect, a series of dinosaur toys produced by Kenner and based on a premise that scientists had created dinosaur hybrids consisting of DNA from different creatures. The new toys were based on the upcoming cartoon. It was also reported that the cartoon could be ready by March 1998, as a mid-season replacement. The Chaos Effect toyline was released in June 1998, but the animated series was never produced, for unknown reasons. Merchandise and other media Toys For the 1993 film, Kenner produced a line of action figures and dinosaurs, marketed with the slogan, "If it's not 'Jurassic Park', it's extinct". Paleontologist Jack Horner, who offered his advice for the film's dinosaurs, was also hired as a scientific advisor for the dinosaur toys. Kenner had two years to develop the toys, which sold successfully. Dakin also produced stuffed dinosaurs based on the film. Kenner produced another toy line for the 1997 sequel. The company also released Jurassic Park: Chaos Effect a year later.
The toy line's premise involved scientists who had created new dinosaur species by combining the DNA of existing dinosaurs. Kenner's parent company, Hasbro, took over toy production for Jurassic Park III, released in 2001. At that time, Playskool also released a toy line aimed at young children, under the name Jurassic Park Junior. Jurassic Park III toys were also released under the Lego Studios brand. Hasbro also created a toy line for Jurassic World. Some of the toy dinosaurs had been referred to on packaging as males, despite being females in the film. This drew criticism accusing Hasbro of catering solely to a male demographic. Hasbro updated the pronouns shortly after the toy line's release. The Lego Jurassic World line was also released in 2015. In 2016, Mattel acquired the toy license from Hasbro, in a deal that took effect one year later. Mattel produced various toys for Jurassic World: Fallen Kingdom, including dinosaurs, action figures, and Barbie dolls. Mattel's dinosaur toys included symbols which could be scanned with a cell phone, providing facts about each animal through a mobile app known as Jurassic World Facts. Lego and Funko also created toys based on the film. In addition, Mattel released the Jurassic World Legacy Collection, which included toys based on characters and dinosaurs from the Jurassic Park trilogy. In 2019, Mattel unveiled the Amber Collection, a toy line of posable characters and dinosaurs that had been featured in the first film. The Amber Collection continued for several years. In 2020, Mattel also released toys based on Jurassic World Camp Cretaceous. A year later, the company partnered with Target Corporation to sell a line of toys custom-made by the fan site Jurassic Outpost. In 2022, Mattel also released a new series of toys known as the Hammond Collection, focusing on the first three films. New toys were also released in 2023, commemorating the 30th anniversary of the first film. Board games Board games were released by Milton Bradley for the first two Jurassic Park films. Hasbro and Milton Bradley also released two board games for Jurassic Park III. Jurassic Park: Danger!, released by Ravensburger in 2018, pits humans and dinosaurs against each other. Meanwhile, Mondo was working on a board game to be known as Jurassic Park: The Chaos Gene, although it was canceled during development. In 2019, Mondo announced that characters and dinosaurs from the Jurassic Park franchise would be released as playable characters for its Unmatched board game. The first set of characters was released in 2020. In 2021, Hasbro released a version of Monopoly based on the original Jurassic Park film. Jurassic World: The Legacy of Isla Nublar is a legacy board game released in 2022. It was designed by Funko's design studio, Prospero Hall. Comics Topps Comics From June 1993 to August 1997, the now-defunct Topps Comics published comic adaptations of Jurassic Park and The Lost World: Jurassic Park, as well as several tie-in series. Jurassic Park #0–4 (June–September 1993). Adaptation of the film, adapted by Walter Simonson and pencilled by Gil Kane. Each issue had two covers – a main cover by Gil Kane, with the variant by Dave Cockrum. Issue #0 features two prequel stories to the film, and was only available with the trade paperback of the film adaptation. Jurassic Park: Raptor #1–2 (November–December 1993). Written by Steve Englehart and pencilled by Armando Gil and Dell Barras. Jurassic Park: Raptors Attack #1–4 (March–June 1994).
Written by Steve Englehart, pencilled by Armando Gil (#1) and Chaz Truog, with covers by Michael Golden. Jurassic Park: Raptors Hijack #1–4 (July–October 1994). Written by Steve Englehart, pencilled by Neil Vokes, with covers by Michael Golden. Jurassic Park Annual #1 (May 1995). Featuring two stories, one being a sequel and one being a prequel. Written by Bob Almond, Michael Golden, and Renée Witterstaetter, pencilled by Claude St. Aubin and Ed Murr, with a cover by Michael Golden. Return to Jurassic Park #1–9 (April 1995 – February 1996). An ongoing series; the first four issues were written by Steve Englehart and pencilled by Joe Staton. The next four issues were written by Tom Bierbaum and Mary Bierbaum, and drawn by Armando Gil. The first 8 issues had covers by Michael Golden. The ninth and final issue was a jam book written by Keith Giffen and Dwight Jon Zimmerman, featuring artwork by such acclaimed artists as Jason Pearson, Adam Hughes, Paul Gulacy, John Byrne, Kevin Maguire, Mike Zeck, George Pérez, and Paul Chadwick, with a cover by John Bolton. The Lost World: Jurassic Park #1–4 (May–August 1997). Adaptation of the second film, adapted by Don McGregor and pencilled by Jeff Butler (#1–2) and Claude St. Aubin (#3–4). Each issue of the series featured two covers – one by Walter Simonson and a photo cover. IDW Comics In June 2010, IDW Publishing began publishing Jurassic Park comics. They also acquired the rights to reprint the issues published by Topps in the 1990s, which they began to do in trade paperback format starting in November 2010. After a four-year hiatus, IDW announced a comic series based on Jurassic World that was to be released in 2017. Jurassic Park: Redemption #1–5 (June 2010 – October 2010). Five-issue series written by Bob Schreck with art by Nate van Dyke. Each issue has a main cover penciled by Tom Yeates, with variant covers by Frank Miller, Arthur Adams, Paul Pope, Bernie Wrightson, and Bill Stout, respectively. Jurassic Park: The Devils in the Desert #1–4 (January 2011 – April 2011). Four-issue series written and illustrated by John Byrne. Jurassic Park: Dangerous Games #1–5 (September 2011 – January 2012). Five-issue series written by Greg Bear and Erik Bear, with art by Jorge Jiménez and a variant cover by Geof Darrow. These series have also been collected in trade paperback format. Motion comic series In late 2019, a Jurassic World motion comic series was released by Universal on YouTube. The four-part series is set after the events of Jurassic World: Fallen Kingdom and explores various dinosaur attacks throughout the world. Video games Since 1993, numerous Jurassic Park video games have been produced. To accompany the release of the first film, Sega and Ocean Software published several different games for various consoles, including the NES and Sega Genesis. In 1994, Ocean produced a game sequel titled Jurassic Park 2: The Chaos Continues, while Sega released Jurassic Park: Rampage Edition. In addition, Universal Interactive Studios produced Jurassic Park Interactive for the 3DO system. In 1997, several games were released for the second film in the franchise, including some by DreamWorks Interactive. A subsequent game, Trespasser, was released as a "digital sequel" to The Lost World: Jurassic Park. The player assumes the role of Anne, who is the sole survivor of a plane crash on InGen's "Site B" one year after the events of the film. It was released for Microsoft Windows in 1998.
The third film spawned six video games for PC and Game Boy Advance. A number of lightgun arcade games were also released for all three films. Jurassic Park: The Game is an episodic video game that takes place during and after the events of the original film. It follows a new group of survivors trying to escape Isla Nublar. It was developed by Telltale Games in a deal with Universal and was released in 2011. Lego Jurassic World is a 2015 action-adventure video game developed by Traveller's Tales and published by Warner Bros. Interactive Entertainment. It follows the plots of the series' first four films. Several park-building games have been released, including Jurassic Park: Operation Genesis (2003), Jurassic World: The Game (2015), Jurassic World Evolution (2018), and Jurassic World Evolution 2 (2021). Jurassic World Aftermath, a virtual reality game, was released in 2020. In December 2023, it was announced that a video game titled Jurassic Park: Survival was in development. Set one day after the events of the first film, the installment follows a young InGen scientist who has been left behind on Isla Nublar. It is being developed by Saber Interactive and will feature action-adventure gameplay from a first-person perspective. Attractions Theme park rides Several water rides based on the series have opened at Universal's theme parks. On June 21, 1996, Universal Studios Hollywood opened Jurassic Park: The Ride. Universal Studios Japan later opened this attraction, and Universal Islands of Adventure opened the attraction under the name Jurassic Park River Adventure as part of its Jurassic Park-themed land, which also features an interactive play area and a recreation of the Visitor Center from the first film. The rides are heavily themed around the first three films. Another ride based on the series has also opened at Universal Studios Singapore (Jurassic Park Rapids Adventure). In 2018, Jurassic Park: The Ride at Universal Studios Hollywood closed for preparations to become Jurassic World: The Ride, which opened on July 12, 2019. A roller coaster, known as VelociCoaster, opened at Universal Islands of Adventure in June 2021. Exhibitions In June 1993, the American Museum of Natural History in New York debuted The Dinosaurs of Jurassic Park, an exhibition featuring dinosaurs that were created for use in the first film. The exhibition opening coincided with the film. Other museums were threatened with legal action for using the word "Jurassic" in exhibit titles. A travelling exhibition, The Lost World: The Life and Death of Dinosaurs, went on tour in 1997. The exhibit was produced in connection with the second film, and its centerpiece was a 70-foot-long recreation of a Mamenchisaurus, a dinosaur featured in the film. Another travelling exhibit, The Dinosaurs of Jurassic Park and The Lost World, went on tour in 1998. It was created by Don Lessem, and featured dinosaurs that were made for the first two films, as well as sets and props, and a video narrated by Jeff Goldblum. It also featured the 70-foot Mamenchisaurus. The exhibit was ongoing as of 2001. Jurassic Park: The Life and Death of Dinosaurs was an exhibition that traveled around the United States during 2002. It was also created by Lessem and included dinosaur sculptures from the films, as well as cast skeletons and fossils. In 2001, Universal Studios and Amblin Entertainment created the Jurassic Park Institute, an educational program that included a website, as well as travelling dinosaur exhibits in later years.
The exhibit toured in Japan under the name Jurassic Park Institute Tour, and a video game, Jurassic Park Institute Tour: Dinosaur Rescue, was released to accompany it. The tour, designed by Thinkwell Design & Production, won a Thea Award in 2005 for Outstanding Achievement. Jurassic World: The Exhibition was located at the Melbourne Museum in Australia for six months during 2016. The travelling exhibition was also held in 2017, at the Franklin Institute in Philadelphia, and at the Field Museum in Chicago. A new North American tour was launched in 2021, starting in Texas. Live show A live show, titled Jurassic World Live, started touring in 2019. See also List of films featuring dinosaurs American book series American film series American science fiction adventure films Amusement parks in fiction Book series introduced in 1990 Fiction about cloning Fiction about dinosaurs Fiction about genetic engineering Fiction about modern-day dinosaurs Mythopoeia Films adapted into comics Films adapted into television shows Film series introduced in 1993 Films set in amusement parks Islands in fiction Mass media franchises introduced in 1990 American fantasy films American science fiction films Science fiction film franchises Science fiction franchises Universal Pictures franchises Works by Michael Crichton
Jurassic Park
[ "Engineering", "Biology" ]
10,337
[ "Genetic engineering", "Fiction about genetic engineering" ]
11,057,200
https://en.wikipedia.org/wiki/Digital%20Trends
Digital Trends is a Portland, Oregon-based tech news, lifestyle, and information website that publishes news, reviews, guides, how-to articles, descriptive videos, and podcasts about technology and consumer electronics products. With offices in Portland, Oregon, New York City, Chicago, and other locations, Digital Trends is operated by Digital Trends Media Group, a media company that also publishes Digital Trends Español, focusing on Spanish speakers worldwide, and a men's lifestyle site, The Manual. The site offers reviews and information on a wide array of products that have been shaped by technology. That includes consumer electronics products such as smartphones, video games and systems, laptops, PCs and peripherals, televisions, home theater systems, digital cameras, video cameras, tablets, and more. According to third-party web analytics provider SimilarWeb, the site received over 40 million visits per month. From 2014 to 2021, Digital Trends' editorial team was led by Editor-in-Chief Jeremy Kaplan and guided by co-founders Ian Bell and Dan Gaul. Kaplan left the site in May 2021. The website's About Us page lists former Mobile Section Editor Andrew Martonik as "interim editor in chief." History Ian Bell and Dan Gaul founded Digital Trends in June 2006 in Lake Oswego, Oregon. In May 2009, Digital Trends moved its headquarters from Lake Oswego into the US Bancorp Tower in Downtown Portland, Oregon. The company opened a second office in New York City in 2012. Digital Trends is a privately funded and owned corporation. Digital Trends en Español, a Spanish-language version of the site that offers original reporting focused on Spanish-speaking consumers worldwide, was launched in December 2014. Editor-in-Chief Juan Garcia leads an international team, among them Milenka Pena, an Emmy Award nominee and Silver Dome Award recipient, who works as the News Editor for the Spanish site. Digital Trends saw a surge in popularity in recent years; the site claimed a 100-percent increase in traffic in September 2015, reaching over 24 million unique readers globally and more than 13 million U.S. readers. It currently reaches approximately 30 million readers per month, who view over 100 million pages. In addition to growth, 2015 saw a series of changes for Digital Trends. The site expanded its awards program to include several international trade shows, including Mobile World Congress in Barcelona and IFA in Berlin. It also launched its first car of the year awards and Smart Home awards, underscoring the site's growing investment in these areas. The company also launched DT Design, an in-house creative ad agency, to focus on branded content and high-impact advertising units. In late summer 2016, Re/Code reported on a deal with Conde Nast to acquire Digital Trends for $120 million, noting that the site was expected to generate $30 million in revenue that year and around $6 million in profit. Bell denied that his company was in talks, but acknowledged that the company "is periodically approached by would-be buyers." Digiday wrote about the deal as well, comparing the site's traffic to "such properties as the Purch network, CNET and The Verge, and ahead of USA Today Tech, Yahoo! Tech, and Business Insider's Tech Insider." In 2018, Facebook executive Bob Gruters joined Digital Trends as its CRO. In June 2020, as Digital Trends posted statements in support of Black Lives Matter, employees pointed to racial bias at a 2018 "Gin and Juice" party and to harassment at a 2017 holiday party.
CEO Ian Bell noted, "I'm not a proponent of cancel culture." In 2020, Gresham, Oregon, Mayor Travis Stovall joined Digital Trends' board of directors. As of 2021, Digital Trends built its advertising business around data, including intent-based audience segmentation. The company partnered with Valnet, the parent of Screen Rant, to pool resources and target larger news audiences. See also List of companies based in Oregon American companies established in 2001 American technology news websites Computing websites Companies based in Portland, Oregon Mass media companies based in New York City 2001 establishments in Oregon Internet properties established in 2006
Digital Trends
[ "Technology" ]
848
[ "Computing websites" ]
11,057,338
https://en.wikipedia.org/wiki/Architects%20Sketch
The "Architects Sketch" is a Monty Python sketch, first seen in episode 17 of Monty Python's Flying Circus, "The Buzz Aldrin Show". The episode was recorded on 18 September 1970 and originally broadcast on 20 October 1970. The following year, an audio version was recorded for Another Monty Python Record. Description The sketch is introduced by a group of Gumbies (on film) who shout "The Architects Sketch" until Mr. Tid (Graham Chapman) yells at them to shut up. They then repeat "Sorry!" until Mr. Tid throws a bucket of water on them from above. The sketch proper begins (on videotape) with Tid in an office with two City gents (Michael Palin and Terry Jones). On a table near the window stand two architectural models of tower blocks. Mr. Tid informs the City gents that he has invited the architects responsible to explain the advantages of their respective designs. First to arrive is Mr. Wiggin (John Cleese), who describes his architectural design and modern construction, and then explains his killing technique starting with a conveyor belt and "rotating knives". It turns out that Mr. Wiggin mainly designs slaughterhouses and has misunderstood the owners' attitude to their tenants. When Mr. Wiggin fails to persuade them to accept his "real beaut" of a design, he launches into an impassioned tirade against "you non-creative garbage" and blackballing Freemasons. When they still reject his design, however, he begs the increasingly uncomfortable City gents to accept him into the Freemasons. Once Wiggin has been persuaded to leave, the second architect, Mr. Leavey (Eric Idle), arrives. As Mr. Leavey describes the strong construction and safety features of his design, a tall tower block, his model collapses and catches fire in the manner of the then recent Ronan Point disaster, accompanied by a large on-screen caption reading "SATIRE". The City gents assure Mr. Leavey that provided the tenants are "of light build and relatively sedentary" there should be no need to make expensive changes to the design. After his design is accepted, the model explodes. The City gents exchange bizarre Masonic handshakes with Leavey. Wiggin reappears at the doorway, breaking the fourth wall to tell the audience, "It opens doors, I'm telling you." This leads into a filmed section about "How to Recognise a Mason", in which Masons are shown engaging in such bizarre behaviour as hopping down Threadneedle Street with their trousers around their ankles. Finally, there follows an animation in which an announcer attempts to "cure" a Mason (an animated cutout of Chapman) through behavioural therapy with a picture of a nude woman; when the subject says, "No", the enraged announcer crushes him with a giant hammer. References Architects Sketch Architects Sketch Architects Sketch 1970 works
Architects Sketch
[ "Engineering" ]
605
[ "Works about architecture", "Architecture" ]
11,057,402
https://en.wikipedia.org/wiki/Sulfolene
Sulfolene, or butadiene sulfone, is a cyclic organic chemical with a sulfone functional group. It is a white, odorless, crystalline, indefinitely storable solid, which dissolves in water and many organic solvents. The compound is used as a source of butadiene. Production Sulfolene is formed by the cheletropic reaction between butadiene and sulfur dioxide. The reaction is typically conducted in an autoclave. Small amounts of hydroquinone or pyrogallol are added to inhibit polymerization of the diene. The reaction proceeds at room temperature over the course of days. At 130 °C, only 30 minutes are required. An analogous procedure gives the isoprene-derived sulfone. Reactions Acid-base reactivity The compound is unaffected by acids. It can even be recrystallized from concentrated HNO3. The protons in the 2- and 5-positions rapidly exchange with deuterium oxide under alkaline conditions. Sodium cyanide catalyzes this reaction. Isomerization to 2-sulfolene In the presence of base or cyanide, 3-sulfolene isomerizes to a mixture of 2-sulfolene and 3-sulfolene. At 50 °C, an equilibrium mixture is obtained containing 42% 3-sulfolene and 58% 2-sulfolene. The thermodynamically more stable 2-sulfolene can be isolated from the mixture of isomers as a pure substance in the form of white plates (m.p. 48–49 °C) by heating for several days at 100 °C, because of the thermal decomposition of 3-sulfolene at temperatures above 80 °C. Hydrogenation Catalytic hydrogenation yields sulfolane, a solvent used in the petrochemical industry for the extraction of aromatics from hydrocarbon streams. The hydrogenation of 3-sulfolene over Raney nickel at approximately 20 bar and 60 °C gives sulfolane in yields of only up to 65%, because the catalyst is poisoned by sulfur compounds. Halogenation 3-Sulfolene reacts in aqueous solution with bromine to give 3,4-dibromotetrahydrothiophene-1,1-dioxide, which can be dehydrobrominated to thiophene-1,1-dioxide with silver carbonate. Thiophene-1,1-dioxide, a highly reactive species, is also accessible via the formation of 3,4-bis(dimethylamino)tetrahydrothiophene-1,1-dioxide and successive double quaternization with methyl iodide and Hofmann elimination with silver hydroxide. A less cumbersome two-step synthesis is the two-fold dehydrobromination of 3,4-dibromotetrahydrothiophene-1,1-dioxide with either powdered sodium hydroxide in tetrahydrofuran (THF) or with ultrasonically dispersed metallic potassium. Diels-Alder reactions 3-Sulfolene is mainly valued as a stand-in for butadiene. The in situ production and immediate consumption of 1,3-butadiene largely avoids contact with the diene, which is a gas at room temperature. One potential drawback, aside from expense, is that the evolved sulfur dioxide can cause side reactions with acid-sensitive substrates. The Diels-Alder reaction between 1,3-butadiene and dienophiles of low reactivity usually requires prolonged heating above 100 °C. Such procedures are rather dangerous. If neat butadiene is used, special equipment for work under elevated pressure is required. With sulfolene, no buildup of butadiene pressure is expected, as the liberated diene is consumed in the cycloaddition; the equilibrium of the reversible extrusion reaction therefore acts as an internal "safety valve". 3-Sulfolene reacts with maleic anhydride in boiling xylene to give cis-4-cyclohexene-1,2-dicarboxylic anhydride in yields of up to 90%.
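This butadiene-surrogate role can be summarized in two steps. The following is a schematic sketch in LaTeX notation, using only the species, conditions, and yields named above: the reversible cheletropic extrusion liberates the diene, and the irreversible cycloaddition consumes it, pulling the equilibrium to the right.

```latex
% Reversible cheletropic extrusion of SO2 from 3-sulfolene on heating:
\[
\text{3-sulfolene} \;\rightleftharpoons\; \text{1,3-butadiene} + \mathrm{SO_2}
\]
% Irreversible Diels-Alder trapping of the liberated diene, e.g. by maleic
% anhydride in boiling xylene (up to 90% yield), which continuously removes
% butadiene and drives the extrusion equilibrium forward:
\[
\text{1,3-butadiene} + \text{maleic anhydride} \;\longrightarrow\;
\textit{cis}\text{-4-cyclohexene-1,2-dicarboxylic anhydride}
\]
```

Because the diene is consumed as fast as it is released, its partial pressure stays low, which is the "safety valve" behavior noted above.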
3-Sulfolene also reacts with trans-configured dienophiles (such as diethyl fumarate) at 110 °C, with elimination of SO2, to give diethyl trans-4-cyclohexene-1,2-dicarboxylate in 66–73% yield. 6,7-Dibromo-1,4-epoxy-1,4-dihydronaphthalene (6,7-dibromonaphthalene-1,4-endoxide, accessible from 1,2,4,5-tetrabromobenzene in 70% yield by debromination with one equivalent of n-butyllithium and Diels-Alder reaction with furan) reacts with 3-sulfolene in boiling xylene to give a tricyclic adduct. This precursor yields, after treatment with perchloric acid, a dibromo dihydroanthracene, which is dehydrogenated in the last step with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) to 2,3-dibromoanthracene. 1,3-Butadiene (formed in the retro-cheletropic reaction of 3-sulfolene) reacts with dehydrobenzene (benzyne, obtained by thermal decomposition of benzenediazonium-2-carboxylate) in a Diels-Alder reaction in 9% yield to give 1,4-dihydronaphthalene. 2- and 3-Sulfolenes as dienophiles In the presence of very reactive dienes (for example, 1,3-diphenylisobenzofuran), butadiene sulfone behaves as a dienophile and forms the corresponding Diels-Alder adduct. As early as 1938, Kurt Alder and co-workers reported Diels-Alder adducts of the isomeric 2-sulfolene with 1,3-butadiene and with cyclopentadiene. Other cycloadditions The base-catalyzed reaction of 3-sulfolene with carbon dioxide at 3 bar pressure produces 3-sulfolene-3-carboxylic acid in 45% yield. With diazomethane, 3-sulfolene forms a 1,3-dipolar cycloadduct. Polymerization In 1935, H. Staudinger and co-workers found that the reaction of butadiene and SO2 at room temperature gives a second product in addition to 3-sulfolene. This second product is an amorphous solid polymer. By free-radical polymerization of 3-sulfolene in peroxide-containing diethyl ether, up to 50% insoluble high-molecular-weight poly-sulfolene was obtained. The polymer resists degradation by sulfuric and nitric acids. In subsequent investigations, polymerization of 3-sulfolene was initiated above 100 °C with the radical initiator azobis(isobutyronitrile) (AIBN). 3-Sulfolene does not copolymerize with vinyl compounds, however. On the other hand, 2-sulfolene does not homopolymerize, but forms copolymers with vinyl compounds, e.g. acrylonitrile and vinyl acetate. 3-Sulfolene as a recyclable solvent The reversibility of the interconversion of 3-sulfolene with buta-1,3-diene and sulfur dioxide suggests the use of sulfolene as a recyclable dipolar aprotic solvent in place of dimethyl sulfoxide (DMSO), which is often used but difficult to separate and poorly reusable. As a model reaction, the reaction of benzyl azide with 4-toluenesulfonyl cyanide to form 1-benzyl-5-(4-toluenesulfonyl)tetrazole was investigated. The formation of the tetrazole can also be carried out as a one-pot reaction, without isolation of the benzyl azide, in 72% overall yield. After the reaction, the solvent 3-sulfolene is decomposed at 135 °C, and the volatile butadiene (b.p. −4.4 °C) and sulfur dioxide (b.p. −10.1 °C) are deposited in a cooling trap at −76 °C charged with excess sulfur dioxide. After the addition of hydroquinone as a polymerization inhibitor, 3-sulfolene re-forms quantitatively upon warming to room temperature. It appears questionable, though, whether 3-sulfolene, with a useful liquid-phase range of only 64 to about 100 °C, can replace DMSO (easy handling, low cost, environmental compatibility) in industrial practice.
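The solvent-recovery loop just described can be written compactly; the sketch below uses only the temperatures and boiling points given in the text, and everything else is notation rather than additional claims.

```latex
% Thermal decomposition of the spent solvent, cryogenic trapping of the
% volatiles, and quantitative re-formation of the solvent:
\[
\text{3-sulfolene}
\;\xrightarrow{\;135\,^{\circ}\mathrm{C}\;}\;
\underbrace{\mathrm{C_4H_6}}_{\text{b.p. } -4.4\,^{\circ}\mathrm{C}}
+ \underbrace{\mathrm{SO_2}}_{\text{b.p. } -10.1\,^{\circ}\mathrm{C}}
\;\xrightarrow[\text{hydroquinone, warm to RT}]{\;\text{trap at } -76\,^{\circ}\mathrm{C}\;}\;
\text{3-sulfolene}
\]
```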
Uses Aside from its synthetic versatility (see above), sulfolene is used as an additive in electrochemical fluorination. It can increase the yield of perfluorooctanesulfonyl fluoride by about 70%. It is "highly soluble in anhydrous HF and increases the conductivity of the electrolyte solution". In this application, it undergoes a ring opening and is fluorinated to form perfluorobutanesulfonyl fluoride. Reagents for organic chemistry Sulfones
Sulfolene
[ "Chemistry" ]
2,031
[ "Sulfones", "Functional groups", "Reagents for organic chemistry" ]
11,057,890
https://en.wikipedia.org/wiki/E-box
An E-box (enhancer box) is a DNA response element found in some eukaryotes that acts as a protein-binding site and has been found to regulate gene expression in neurons, muscles, and other tissues. Its specific DNA sequence, CANNTG (where N can be any nucleotide), with a palindromic canonical sequence of CACGTG, is recognized and bound by transcription factors to initiate gene transcription. Once the transcription factors bind to the promoters through the E-box, other enzymes can bind to the promoter and facilitate transcription from DNA to mRNA. Discovery The E-box was discovered in a collaboration between Susumu Tonegawa's and Walter Gilbert's laboratories in 1985 as a control element in the immunoglobulin heavy-chain enhancer. They found that a region of 140 base pairs in the tissue-specific transcriptional enhancer element was sufficient for different levels of transcription enhancement in different tissues and sequences. They suggested that proteins made by specific tissues acted on these enhancers to activate sets of genes during cell differentiation. In 1989, David Baltimore's lab discovered the first two E-box binding proteins, E12 and E47. These proteins could bind to the immunoglobulin enhancers as heterodimers through their bHLH domains. In 1990, another E-protein, ITF-2A (later renamed E2-2Alt), was discovered and found to bind to immunoglobulin light-chain enhancers. Two years later, the third E-box binding protein, HEB, was discovered by screening a cDNA library from HeLa cells. A splice variant of E2-2 was discovered in 1997 and was found to inhibit the promoter of a muscle-specific gene. Since then, researchers have established that the E-box affects gene transcription in several eukaryotes and found E-box binding factors that identify E-box consensus sequences. In particular, several experiments have shown that the E-box is an integral part of the transcription-translation feedback loop that comprises the circadian clock. Binding E-box binding proteins play a major role in regulating transcriptional activity. These proteins usually contain the basic helix-loop-helix (bHLH) protein structural motif, which allows them to bind as dimers. This motif consists of two amphipathic α-helices, separated by a small sequence of amino acids, that form one or more β-turns. The hydrophobic interactions between these α-helices stabilize dimerization. In addition, each bHLH monomer has a basic region, which helps mediate recognition between the bHLH monomer and the E-box (the basic region interacts with the major groove of the DNA). Depending on the DNA motif ("CAGCTG" versus "CACGTG"), the bHLH protein has a different set of basic residues. In mice, E-box binding is modulated by Zn2+. The CT-rich region (CTRR), located about 23 nucleotides upstream of the E-box, is important in E-box binding, transactivation (an increased rate of gene expression), and transcription of circadian genes by the BMAL1/NPAS2 and BMAL1/CLOCK complexes. The binding specificity of different E-boxes is found to be essential to their function. E-boxes with different functions have different numbers and types of binding factors. The consensus sequence of the E-box is usually CANNTG; however, there exist other E-boxes of similar sequences, called noncanonical E-boxes.
These include, but are not limited to: a CACGTT sequence 20 bp upstream of the mouse Period2 (PER2) gene that regulates its expression; a CAGCTT sequence found within the MyoD core enhancer; and a CACCTCGTGAC sequence in the proximal promoter region of human and rat APOE, which encodes a protein component of lipoproteins. Role in the circadian clock The link between E-box-regulated genes and the circadian clock was discovered in 1997, when Hao, Allen, and Hardin (Department of Biology at Texas A&M University) analyzed rhythmicity in the period (per) gene in Drosophila melanogaster. They found a circadian transcriptional enhancer upstream of the per gene within a 69 bp DNA fragment. Depending upon PER protein levels, the enhancer drove high levels of mRNA transcription in both LD (light-dark) and DD (constant darkness) conditions. The enhancer was found to be necessary for high-level gene expression but not for circadian rhythmicity. It also works independently as a target of the BMAL1/CLOCK complex. The E-box plays an important role in circadian genes; so far, nine E/E'-box-controlled circadian genes have been identified: PER1, PER2, BHLHB2, BHLHB3, CRY1, DBP, Nr1d1, Nr1d2, and RORC. As the E-box is connected to several circadian genes, it is possible that the genes and proteins associated with it are "crucial and vulnerable points in the (circadian) system." The E-box is one of the top five transcription factor families associated with the circadian phase and is found in most tissues. A total of 320 E-box-controlled genes are found in the SCN (suprachiasmatic nucleus), liver, aorta, adrenal, WAT (white adipose tissue), brain, atria, ventricle, prefrontal cortex, skeletal muscle, BAT (brown adipose tissue), and calvarial bone. E-box-like CLOCK-related elements (EL-boxes; GGCACGAGGC) are also important in maintaining circadian rhythmicity in clock-controlled genes. Like the E-box, the EL-box can mediate transcriptional activation by BMAL1/CLOCK, which can then lead to expression of other EL-box-containing genes (Ank, DBP, Nr1d1). However, there are differences between the EL-box and the regular E-box. Suppressing DEC1 and DEC2 has a stronger effect on the E-box than on the EL-box. Furthermore, HES1, which can bind to a different consensus sequence (CACNAG, known as the N-box), shows a suppressive effect on the EL-box, but not on the E-box. Both non-canonical E-boxes and E-box-like sequences are crucial for circadian oscillation. Recent research supports the hypothesis that a canonical or non-canonical E-box followed by an E-box-like sequence, with a 6 base pair interval in between, is a necessary combination for circadian transcription. In silico analysis also suggests that such an interval exists in other known clock-controlled genes. Role of proteins which bind to E-boxes There are several proteins that bind to the E-box and affect gene transcription. CLOCK-ARNTL complex The CLOCK-ARNTL (BMAL1) complex is an integral part of the mammalian circadian cycle and vital in maintaining circadian rhythmicity. Knowing that binding activates transcription of the per gene in the promoter region, researchers discovered in 2002 that DEC1 and DEC2 (bHLH transcription factors) repressed the CLOCK-BMAL1 complex through direct interaction with BMAL1 and/or competition for E-box elements. They concluded that DEC1 and DEC2 were regulators of the mammalian molecular clock. 
In 2006, Ripperger and Schibler discovered that the binding of this complex to the E-box drove circadian DBP transcription and chromatin transitions (a change from chromatin to facultative heterochromatin). It was concluded that CLOCK regulates DBP expression by binding to E-box motifs in enhancer regions located in the first and second introns. MYC (c-Myc, an oncogene) MYC (c-Myc), a gene that codes for the transcription factor Myc, is important in regulating mammalian cell proliferation and apoptosis. In 1991, researchers tested whether c-Myc could bind to DNA by dimerizing it to E12. Dimers of E6, the chimeric protein, were able to bind to an E-box element (GGCCACGTGACC) which was recognized by other HLH proteins. Expression of E6 suppressed the function of c-Myc, which showed a link between the two. In 1996, it was found that Myc heterodimerizes with MAX and that this heterodimeric complex could bind to the CAC(G/A)TG E-box sequence and activate transcription. In 1998, it was concluded that the function of c-Myc depends upon activating transcription of particular genes through E-box elements. MYOD1 (MyoD) MyoD comes from the Mrf bHLH family and its main role is myogenesis, the formation of muscular tissue. Other members of this family include myogenin, Myf5, Myf6, Mist1, and Nex-1. When MyoD binds to the E-box motif CANNTG, muscle differentiation and expression of muscle-specific proteins are initiated. The researchers ablated various parts of the recombinant MyoD sequence and concluded that MyoD used encompassing elements to bind the E-box and the tetraplex structure of the promoter sequences of the muscle-specific genes α7 integrin and sarcomeric sMtCK. MyoD regulates HB-EGF (heparin-binding EGF-like growth factor), a member of the EGF (epidermal growth factor) family that stimulates cell growth and proliferation. It plays a role in the development of hepatocellular carcinoma, prostate cancer, breast cancer, esophageal cancer, and gastric cancer. MyoD can also bind to noncanonical E-boxes of MyoG and regulate its expression. MyoG (Myogenin) MyoG belongs to the MyoD transcription factor family. MyoG-E-box binding is necessary for neuromuscular synapse formation, as an HDAC-Dach2-myogenin signaling pathway in skeletal muscle gene expression has been identified. Decreased MyoG expression has been shown in patients with muscle wasting symptoms. MyoG and MyoD have also been shown to be involved in myoblast differentiation. They act by transactivating cathepsin B promoter activity and inducing its mRNA expression. TCF3 (E47) E47 is produced by alternative splicing of E2A at E47-specific bHLH-encoding exons. Its role is to regulate tissue-specific gene expression and differentiation. Many kinases have been associated with E47, including 3pk and MK2. These two proteins form a complex with E47 and reduce its transcription activity. CKII and PKA have also been shown to phosphorylate E47 in vitro. Similar to other E-box binding proteins, E47 also binds to the CANNTG sequence in the E-box. In homozygous E2A knock-out mice, B cell development stops before the DJ rearrangement stage and the B cells fail to mature. E47 has been shown to bind either as a heterodimer (with E12) or, more weakly, as a homodimer. Recent research Although the structural basis for how BMAL1/CLOCK interact with the E-box is unknown, recent research has shown that the bHLH protein domains of BMAL1/CLOCK are highly similar to other bHLH-containing proteins, e.g. Myc/Max, which have been crystallized with E-boxes. 
It is surmised that specific bases are necessary to support this high-affinity binding. Furthermore, the sequence constraints on the region around the circadian E-box are not fully understood: it is believed to be necessary but not sufficient for E-boxes to be randomly spaced from each other in the genetic sequence in order for circadian transcription to occur. Recent research involving the E-box has been aimed at trying to find more binding proteins as well as discovering more mechanisms for inhibiting binding. Researchers at the Medical School of Nanjing University found that the expression amplitude of FBXL3 (an F-box/leucine-rich repeat protein) is driven via an E-box. They studied mice with FBXL3 deficiency and found that it regulates feedback loops in circadian rhythms by affecting circadian period length. A study published April 4, 2013 by researchers at Harvard Medical School found that the nucleotides on either side of an E-box influence which transcription factors can bind to the E-box itself. These nucleotides determine the 3-D spatial arrangement of the DNA strand and restrict the size of binding transcription factors. The study also found differences in binding patterns between in vivo and in vitro strands. References External links Regulatory sequences DNA
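The consensus matching described above lends itself to a simple computational illustration. Below is a minimal sketch, not drawn from the article, that scans a DNA string for the CANNTG consensus and flags the palindromic canonical form CACGTG; the sequence and function names are invented for illustration.

```python
import re

# Lookahead regex so overlapping E-box sites are not missed.
CANONICAL = re.compile(r"(?=(CA[ACGT]{2}TG))")
PALINDROMIC = "CACGTG"  # canonical palindromic E-box from the article

def find_eboxes(seq: str) -> list[tuple[int, str]]:
    """Return (position, motif) pairs for every CANNTG match in seq."""
    seq = seq.upper()
    return [(m.start(), m.group(1)) for m in CANONICAL.finditer(seq)]

dna = "TTGACACGTGATCCAGCTGAA"  # toy sequence, not real genomic data
for pos, motif in find_eboxes(dna):
    kind = "palindromic" if motif == PALINDROMIC else "non-palindromic"
    print(f"position {pos:3d}: {motif} ({kind})")
# Prints the CACGTG site at position 4 and the CAGCTG site at position 13.
```

A fuller analysis would also scan the reverse complement and the noncanonical variants (such as CACGTT and CAGCTT) listed above.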
E-box
[ "Chemistry" ]
2,695
[ "Gene expression", "Regulatory sequences" ]
11,058,303
https://en.wikipedia.org/wiki/Trizol
TRIzol is a chemical solution widely used in the extraction of DNA, RNA, and proteins from cells. The solution was first used and published by Piotr Chomczyński and Nicoletta Sacchi in 1987. TRIzol is the brand name of guanidinium thiocyanate from the Ambion part of Life Technologies, and Tri-Reagent is the brand name from MRC, which was founded by Chomczynski. Uses in extraction The correct name of the method is guanidinium thiocyanate-phenol-chloroform extraction. The use of TRIzol can result in DNA yields comparable to other extraction methods, and it leads to a >50% greater RNA yield. An alternative method for RNA extraction is phenol extraction and TCA/acetone precipitation. Chloroform should be replaced with 1-bromo-3-chloropropane when using the new generation TRI Reagent. DNA and RNA from TRIzol and TRI reagent can also be extracted using the Direct-zol Miniprep kit by Zymo Research. This method eliminates the use of chloroform and 1-bromo-3-chloropropane completely, bypassing the phase-separation and precipitation steps. TRIzol is light-sensitive and is often stored in a dark-colored, glass container covered in foil. It is stored at room temperature. In appearance it resembles cough syrup and is bright pink; the smell of the phenol is extremely strong. TRIzol works by maintaining RNA integrity during tissue homogenization, while at the same time disrupting and breaking down cells and cell components. Hazards Vigilant caution should be taken while using TRIzol (due to the phenol and chloroform). TRIzol is labeled for acute oral, dermal, and inhalation toxicity, as well as skin corrosion/irritation, in the manufacturer's MSDS. Exposure to TRIzol can be a serious health hazard: it can lead to serious chemical burns, permanent scarring and kidney failure. Experiments should be performed under a chemical hood, with a lab coat, nitrile gloves and a plastic apron. TRIzol waste should never be mixed with bleach or acids: the guanidinium thiocyanate in TRIzol reacts to form highly toxic gases. References External links RNA extraction using trizol protocol on OpenWetWare Molecular biology Biochemistry methods
Trizol
[ "Chemistry", "Biology" ]
516
[ "Biochemistry methods", "Biochemistry", "Molecular biology" ]
11,058,555
https://en.wikipedia.org/wiki/Mobile%20emission%20reduction%20credit
A mobile emission reduction credit (MERC) is an emission reduction credit generated within the transportation sector. The term “mobile sources” refers to motor vehicles, engines, and equipment that move, or can be moved, from place to place. Mobile sources include vehicles that operate on roads and highways ("on-road" or "highway" vehicles), as well as nonroad vehicles, engines, and equipment. Examples of mobile sources are passenger cars, light trucks, large trucks, buses, motorcycles, earth-moving equipment, nonroad recreational vehicles (such as dirt bikes and snowmobiles), farm and construction equipment, cranes, lawn and garden power tools, marine engines, ships, railroad locomotives, and airplanes. In California, mobile sources account for about 60 percent of all ozone forming emissions and for over 90 percent of all carbon monoxide (CO) emissions from all sources. Background Government agencies worldwide have struggled with finding new and innovative approaches to address the growing problem of air pollution and global warming. Experts in the field have recognized the importance of developing solutions to reduce greenhouse gas (GHG) emissions. Most proposed strategies to mitigate global climate change focus on reducing the dominant source of GHG emissions to the atmosphere – combustion of fossil fuels, which releases carbon dioxide. Carbon dioxide emissions represent about 84 percent of total U.S. GHG emissions. In the United States, most carbon dioxide (98 percent) is emitted as a result of the combustion of fossil fuels; consequently, carbon dioxide emissions and energy use are highly correlated. General emission reduction strategies The two main approaches that have been developed to address this problem include a command-and-control regulatory system and Emissions credit trading. Three broad types of emissions credit trading programs have emerged: reduction credit, averaging, and cap-and-trade programs. In such programs, a central authority, such as an air pollution control district or a government agency, sets limits or "caps" on certain pollutants. Companies or fleets of vehicles that intend to exceed these limits may buy emission reduction credits (ERCs) from entities that are able to remain below the designated limits. This transfer is usually referred to as a trade. International approach to emission reduction credits Emission trading is contemplated on an international level. The Kyoto Protocol is an agreement made under the United Nations Framework Convention on Climate Change (UNFCCC). The Kyoto Protocol binds ratifying nations to a similar system, with the UNFCCC setting caps for each nation, and utilizes a clean development mechanism (CDM) system. The primary reduction strategy under the Kyoto Protocol is a trading system that essentially makes carbon credits a commodity like oil or gas. United States approach to emission reduction credits The United States (which did not ratify the Kyoto Protocol) has the most experience with domestic emissions trading markets. The Clean Air Act (1970) is a federal law that requires the United States Environmental Protection Agency (EPA) to develop and enforce regulations to protect the general public from exposure to airborne contaminants that are known to be hazardous to human health. The Clean Air Act (1990) or Clean Air Act amendments of 1990 authorized the use of market-based approaches such as emission trading to assist states in attaining and maintaining air quality for all criteria pollutants. 
EPA's subsequent interpretive rulings expressly allow owners of new sources to obtain emission credits from other companies that operate facilities located in the same air quality control region. To implement an emissions offset program, many states have developed regulations allowing sources to register their emissions reduction credits as ERCs that can be sold to companies required to offset emissions from new or modified sources. Brokerage companies typically handle sales between companies having surplus ERCs and those wanting to acquire such credits. All commonly accepted ERCs in the United States must meet each of five criteria before they can be certified by the relevant regulatory authority as an ERC. Namely, the emission reduction must be real, permanent over the period of credit generation, quantifiable, enforceable, and surplus to emission reductions that are already needed to comply with an existing requirement (local, state, or Federal) or air quality plan. These criteria are intended to ensure that the emission reduction is a permanent reduction from the emissions that would otherwise be allowed, to offset the permanent increase in emissions from the new or expanding source. Steps to create a MERC The steps involved in creating a MERC are as follows: Identify an emissions reduction technology for a pollutant Identify a mobile source Use a Portable Emissions Measurement System to measure emissions of the pollutant and take first measurements of the pollutant from the mobile source Analyze the measurements to develop a baseline emissions amount Apply the emissions reduction technology to the mobile source to provide a modified mobile source Connect the Portable Emissions Measurement System to the modified mobile source and take second measurements of the modified mobile source Analyze the second measurements to develop a modified emissions amount Quantify the mobile emissions reduction produced by the emissions reduction technology Convert the mobile emissions reduction into a tradable commodity Monetization of a MERC The process of converting the mobile emissions reduction into a tradable commodity consists of converting the reduction or a portion of the reduction of emissions into at least one tradable credit, and marketing and monetizing the credit. This is followed by receiving information to identify a customer account, assigning the mobile emissions reduction to the customer account, calculating a MERC from the mobile emissions reduction, and crediting the MERC to the customer account. What follows is the exchange of the MERC in the customer account for monetary assets; this includes the following steps: Debiting the MERC from the customer account Receiving information to identify a second customer or purchaser Calculating an emissions amount of the pollutant for the purchaser Assigning a liability value to the emissions amount for the purchaser Accepting payment from the purchaser Using the payment to purchase at least one MERC for the purchaser Crediting the MERC as assets against the liability value assigned to the second customer for the emissions amount, whereby the emissions amount and the liability value in the second customer account are reduced accordingly Target pollutants of mobile emission reduction credits At present, the pollutant may be selected from a group consisting of nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), hydrocarbons (HC), sulfur oxides (SOx), particulate matter (PM) and volatile organic compounds (VOCs). 
The emissions reduction technology may be selected from a group consisting of alternative fuels, vehicle repairs, vehicle replacements, vehicle retrofits and hybrid engines. The mobile source may be selected from a group consisting of passenger cars, light trucks, large trucks, buses, motorcycles, off-road recreational vehicles, farm equipment, construction equipment, lawn and garden equipment, marine engines, aircraft, locomotives and water vessels. See also Emissions trading Carbon credit Flexible mechanisms Emission factor Joint implementation Chicago Climate Exchange European Climate Exchange European Union Emissions Trading Scheme International Petroleum Exchange List of futures exchanges Personal carbon trading References External links MERCS program - California Air Resources Board's Environmental Protection agency's Rule 27 Banking of Mobile Source Emission Reduction Credits £400 million carbon credit trade with China Emissions trading Emissions reduction
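The quantification described in the steps above (a baseline emissions amount minus a post-retrofit amount, converted into creditable tons) can be sketched in a few lines. This is a hedged illustration only: the field names, activity estimate, and crediting rule are invented assumptions, not any agency's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class EmissionsTest:
    pollutant: str          # e.g. "NOx"
    grams_per_mile: float   # rate averaged from PEMS measurements
    annual_miles: float     # assumed activity level of the mobile source

    def annual_tons(self) -> float:
        return self.grams_per_mile * self.annual_miles / 907_184.74  # grams per short ton

def merc_tons(baseline: EmissionsTest, modified: EmissionsTest) -> float:
    """Creditable annual reduction; eligibility (real, surplus, etc.) is checked elsewhere."""
    assert baseline.pollutant == modified.pollutant
    return max(baseline.annual_tons() - modified.annual_tons(), 0.0)

before = EmissionsTest("NOx", grams_per_mile=12.0, annual_miles=60_000)
after = EmissionsTest("NOx", grams_per_mile=4.5, annual_miles=60_000)
print(f"Creditable NOx reduction: {merc_tons(before, after):.2f} tons/year")  # ~0.50
```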
Mobile emission reduction credit
[ "Chemistry" ]
1,457
[ "Greenhouse gases", "Emissions reduction" ]
11,059,015
https://en.wikipedia.org/wiki/Steam%20crane
A steam crane is a crane powered by a steam engine. It may be fixed or mobile and, if mobile, it may run on rail tracks, caterpillar tracks, road wheels, or be mounted on a barge. It usually has a vertical boiler placed at the back so that the weight of the boiler counterbalances the weight of the jib and load. They were very common as railway breakdown cranes, and several have been preserved on heritage railways in the United Kingdom. Manufacturers Black Hawthorn of Gateshead (unrestored example at Beamish Museum) Joseph Booth & Bros of Leeds Coles Cranes of Derby (restored example at Beamish Museum) Cowans, Sheldon & Company of Carlisle (rail cranes) Craven Brothers William Fairbairn & Sons of Manchester Ransomes & Rapier of Ipswich Ruston Proctor of Lincoln Stothert & Pitt of Bath Thomas Smith & Sons (Rodley) Ltd. of Leeds See also Crane Crane (railroad) Crane tank Fairbairn steam crane Steam engine Steam shovel References External links Steam cranes inc. Ransomes & Rapier Cowans & Sheldon steam crane Nine Elms steam crane Ransomes & Rapier wartime-ordered 45-ton Steam Breakdown Cranes Cowans Sheldon 15-ton Steam Cranes Model steam crane Cranes (machines) Crane
Steam crane
[ "Engineering" ]
255
[ "Engineering vehicles", "Mechanical engineering stubs", "Cranes (machines)", "Mechanical engineering" ]
11,059,086
https://en.wikipedia.org/wiki/Light-weight%20Identity
Light-weight Identity (LID), or Light Identity Management (LIdM), is an identity management system for online digital identities developed in part by NetMesh. It was first published in early 2005, and is the original URL-based identity system, later followed by OpenID. LID uses URLs as a verification of the user's identity, and makes use of several open-source protocols such as OpenID, Yadis, and PGP/GPG. References See also Digital identity Online identity Online identity management SAML-based products and services Federated identity Identity management
Light-weight Identity
[ "Technology" ]
121
[ "Computer security stubs", "Computing stubs", "Computer network stubs" ]
11,059,367
https://en.wikipedia.org/wiki/MedInfo
MedInfo is the name of the international medical informatics conference organized, initially every three years and now every other year, by the International Medical Informatics Association. It is the most important international conference in the field, with health and medical informatics professionals attending from all over the world. MedInfo also serves to bring all officers of the International Medical Informatics Association (IMIA) Board together with national representatives in the General Assembly of IMIA. The General Assembly elects the officers of IMIA. The IMIA Board consists of the President (together with the Past President or President-Elect), the Treasurer and the Secretary as its officers. In addition it has other Vice Presidents for targeted areas: Membership, MedInfo, Services, Special Affairs, Strategic Plan Implementation, and Working Groups. With the exception of the President and the Vice President of MedInfo, all officers serve a three-year term that can be extended for a second three-year term. The President and Vice President are on a two-year term, and the Vice President of MedInfo has one two-year term and is elected the year before the next MedInfo meeting so that he/she can be mentored through one MedInfo cycle. MedInfo conferences MedInfo was held every three years from its inception in 1974 until 2013; it is now held every two years. The table below gives an overview of these conferences. Other definitions MedInfo is also an acronym for Medical Informatics See also International Medical Informatics Association References External links MedInfo 2021 Proceedings - https://ebooks.iospress.nl/ISBN/978-1-64368-265-5 MedInfo 2019 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2019-health-and-wellbeing-e-networks-for-all-proceedings-of-the-17th-world-congress-on-medical-and-health-informatics MedInfo 2017 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2019-health-and-wellbeing-e-networks-for-all-proceedings-of-the-17th-world-congress-on-medical-and-health-informatics MedInfo 2015 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2015-ehealth-enabled-health-proceedings-of-the-15th-world-congress-on-health-and-biomedical-informatics Medinfo 2013 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2013-proceedings-of-the-14th-world-congress-on-medical-and-health-informatics MedInfo 2010 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2010 MedInfo 2007 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2007 MedInfo 2004 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2004 MedInfo 2001 Proceedings - https://ebooks.iospress.nl/volume/medinfo-2001 MedInfo’98 Proceedings - https://ebooks.iospress.nl/volume/medinfo-98-9th-world-congress-on-medical-informatics Health informatics Health informatics and eHealth associations
MedInfo
[ "Biology" ]
717
[ "Health informatics", "Medical technology" ]
11,059,393
https://en.wikipedia.org/wiki/John%20MacQueen%20Ward
Sir John MacQueen Ward (born 1 August 1940) is a Scottish businessman. Early life The son of Marcus Waddie Ward and Catherine MacQueen, Ward was educated at Edinburgh Academy and Fettes College. Career He began a career with IBM at its Greenock Manufacturing Plant in 1966, and in 1991 was appointed Managing Director of UK Government and Public Service Business, having worked for the company throughout the world. He subsequently held a wide range of business and public sector jobs, including Chairman of CBI Scotland, Chairman of the Scottish Qualifications Authority, Chairman of the Quality Scotland Foundation, Chairman of the Governing Body (Court) of Edinburgh’s Queen Margaret University, and Chairman of Scottish Homes. Ward joined Macfarlane Group as a non-executive director in 1995 and took over the chairman's role when its eponymous founder retired in 1998. His later appointments included the Chairmanship of Scottish Enterprise and of European Assets Trust NV. He was also a Trustee of the National Museums of Scotland between 2005 and 2012 and was Chairman of Dunfermline Building Society between 1995 and 2007. Awards Ward was appointed a CBE in 1995, the same year he received an Honorary Degree from the University of Strathclyde. He received a Knighthood in the 2002 New Year Honours for Services to Public Life in Scotland. Ward also received an Honorary Doctorate from Heriot-Watt University in 1998 and was elected a Fellow of the Royal Society of Edinburgh in 2004 for private sector leadership. He appeared at number 10 on The Scotsman's 100 Most Powerful list the same year. References External links Living people 1940 births Scottish businesspeople People educated at Fettes College Fellows of the Institution of Engineering and Technology Fellows of the Royal Society of Edinburgh Knights Bachelor Businesspeople awarded knighthoods Commanders of the Order of the British Empire People educated at Edinburgh Academy Businesspeople from Edinburgh
John MacQueen Ward
[ "Engineering" ]
368
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
11,059,398
https://en.wikipedia.org/wiki/Adult%20diaper
An adult diaper (or adult nappy in Australian English, British English, and Hiberno-English) is a diaper made to be worn by a person with a body larger than that of an infant or toddler. Diapers can be necessary for adults with various conditions, such as incontinence, mobility impairment, severe diarrhea or dementia. Adult diapers are made in various forms, including those resembling traditional child diapers, underpants, and pads resembling sanitary napkins (known as incontinence pads). Superabsorbent polymer is primarily used to absorb bodily wastes and liquids. Alternative terms such as "briefs", "incontinence briefs", or "incontinence products" are also used. Global market The size of the adult diaper market in 2016 was $9.8 billion, an increase from $9.2 billion in 2015. Adult diaper sales in the United States were projected to rise 48 percent from 2015 to 2020, compared to 2.6 percent for baby diapers. The adult incontinence market in Japan was $1.8 billion in 2016, about 20 percent of the world market. Uses Health care People with medical conditions which cause them to experience urinary or fecal incontinence often require diapers or similar products because they are unable to control their bladders or bowels. People who are bedridden or in wheelchairs, including those with good bowel and bladder control, may also wear diapers because they are unable to access the toilet independently. Those with cognitive impairment, such as dementia, may require diapers because they may not recognize their need to reach a toilet. Absorbent incontinence products come in a wide range of types (drip collectors, pads, underwear and adult diapers), each with varying capacities and sizes. The largest volume of products that is consumed falls into the lower absorbency range, and even when it comes to adult diapers, the cheapest and least absorbent brands are used the most. This is not because people choose to use the cheapest and least absorbent brands, but rather because medical facilities are the largest consumer of adult diapers, and they have requirements to change patients as often as every two hours. As such, they select products that meet their frequent-changing needs, rather than products that could be worn longer or more comfortably. Specialty diapers are required for swimming or pool therapy. These are known as swim diapers or containment swim briefs. They are intended mainly for fecal incontinence; however, they can also be useful for temporary urine containment, to maintain dignity while transferring from the change room to the pool. Manufacturers such as Discovery Trekking, Splash About and Theraquatics commonly utilize a stretch fabric to allow increased adjustability for a snug fit. They are washable and reusable. Law The case Hiltibran et al v. Levy et al in the United States District Court for the Western District of Missouri resulted in that court issuing an order in 2011. That order requires adult diapers funded by Medicaid to be given by Missouri to adults who would be institutionalized without them. Astronauts Astronauts wear trunklike diapers called "Maximum Absorbency Garments", or MAGs, during liftoff and landing. On space shuttle missions, each crew member receives three diapers—for launch, reentry and a spare in case reentry has to be waved off and tried later. The super-absorbent fabric used in disposable diapers, which can hold up to 400 times its weight, was developed so Apollo astronauts could stay on spacewalks and extra-vehicular activity for at least six hours. 
Originally, only female astronauts would wear Maximum Absorbency Garments, as the collection devices used by men were unsuitable for women; however, reports of their comfort and effectiveness eventually convinced men to start wearing the diapers as well. Public awareness of astronaut diapers rose significantly following the arrest of Lisa Nowak, a NASA astronaut charged with attempted murder, who gained notoriety in the media when the police reported she had driven 900 miles wearing an adult diaper so she would not have to stop to urinate. The diapers became fodder for many television comedians, as well as being included in an adaptation of the story in Law & Order: Criminal Intent, despite Nowak's denial that she wore them. Fetishism Adult diapers are also associated with a number of sexual fetishes including diaper fetishism, in which the diaper itself is considered the main object of erotic enhancement, comfort, style, and other positive emotions. Diapers are also a common component of paraphilic infantilism and omorashi, and are occasionally a part of certain BDSM scenes. Increasingly, some companies that make or sell adult diapers have begun to supply products that specifically target and appeal to the kink community, often with higher absorbency or vibrant, cute or playful designs. Other Other situations in which diapers are worn because access to a toilet is unavailable or not allowed for longer than even a normal urinary bladder can hold out include: Guards who must stay on duty and are not permitted to leave their posts; this is sometimes called the "watchman's urinal". It has long been suggested that legislators don a diaper before an extended filibuster, so often that it has been jokingly called "taking to the diaper." Some death row inmates who are about to be executed wear "execution diapers" to collect body fluids expelled during and after their death. People diving in diving suits (in former times often standard diving dress) may wear diapers because they are underwater continuously for several hours. Similarly, pilots may wear them on long flights. In 2003, Hazards magazine reported that workers in various industries were taking to wearing diapers because their bosses denied them toilet breaks during working hours. One woman said that she was having to spend 10% of her pay on incontinence pads for this reason. Chinese media reported in 2006 that diapers are a popular way to avoid long queues for the toilets on railway trains during the Lunar New Year traveling season. In Germany, younger patients in a drunken coma are placed in hospital diapers. In 2020, during the COVID-19 pandemic, the Civil Aviation Administration of China recommended that flight attendants wear disposable adult diapers to avoid using the lavatories, barring special circumstances, to avoid infection risks while working onboard aircraft. In New York City during the holidays (such as New Year's Eve), people wear them so that they are able to relieve themselves without losing their spot. The adult diaper market in Japan is growing. On September 25, 2008, Japanese manufacturers of adult diapers conducted the world's first all-diaper fashion show, dramatizing many informative scenarios addressing issues relevant to older people in diapers. "It was great to see so many different types of diapers all in one showing," said Aya Habuka, 26. "I learned a lot. This is the first time that diapers are being considered as fashion." 
In May 2010, the Japanese adult diaper market expanded to be used as an alternative fuel source. The used diapers are shredded, dried, and sterilized to be turned into fuel pellets for boilers. The fuel pellets amount to one-third of the original weight and contain about 5,000 kcal of heat per kilogram. In September 2012, a Japanese magazine described the trend of wearing diapers among Japanese women. There are those who believe diapers are a preferable alternative to using the toilet. According to Dr Dipak Chatterjee of the Mumbai newspaper Daily News and Analysis, public toilet facilities are so unhygienic that it is actually safer for people—especially women—who are vulnerable to infections to wear adult diapers instead. Seann Odoms of Men's Health magazine believes that wearing diapers can help people of all ages to maintain healthy bowel function. He himself claims to wear diapers full-time for this purported health benefit. "Diapers," he states, "are nothing other than a more practical and healthy form of underwear. They are the safe and healthy way of living." Author Paul Davidson argues that it should be socially acceptable for everyone to wear diapers permanently, claiming that they provide freedom and remove the unnecessary hassle of going to the toilet, just as social advancement has offered solutions to other complications. He writes, "Make the elderly finally feel embraced instead of ridiculed and remove the teasing from the adolescent equation that affects so many children in a negative way. Give every person in this world the opportunity to live, learn, grow and urinate anywhere and anytime without societal pressure to "hold themselves in."" Dignity issues The usage of adult diapers can be a source of embarrassment, and products are often marketed under euphemisms such as incontinence pads. In 2006, seventeen students taking a geriatrics pharmacotherapy course participated in a voluntary "diaper experience" exercise to help them understand the impact incontinence has on older adults. The students, who wore adult diapers for a day before writing a paper about it, described the experience as unfamiliar and physically challenging, noting that being in diapers had a largely negative impact on them and that better solutions to incontinence are required. However, they praised the exercise for giving them insight into incontinence and the effect it has on peoples' lives. In 2008, Ontario's Minister of Health George Smitherman revealed that he was considering wearing adult diapers himself to test their absorbency following complaints that nursing home residents were forced to remain in unchanged diapers for days at a time. Smitherman's proposal earned him criticism from unions, who argued that the priority was not the capacity of the diapers but rather staff shortages affecting how often they were changed, and he later apologized. See also Incontinence pad References Diapers Incontinence ABDL
Adult diaper
[ "Biology" ]
2,067
[ "Incontinence", "Diapers", "Excretion" ]
11,059,653
https://en.wikipedia.org/wiki/Sprite%20%28lightning%29
Sprites or red sprites are large-scale electric discharges that occur in the mesosphere, high above thunderstorm clouds, or cumulonimbus, giving rise to a varied range of visual shapes flickering in the night sky. They are usually triggered by the discharges of positive lightning between an underlying thundercloud and the ground. Precis Sprites appear as luminous red-orange flashes. They often occur in clusters above the troposphere at an altitude range of 50 to 90 km. Sporadic visual reports of sprites go back at least to 1886. They were first photographed on July 6, 1989, by scientists from the University of Minnesota and have subsequently been captured in video recordings thousands of times. Sprites are sometimes inaccurately called upper-atmospheric lightning. However, they are cold plasma phenomena that lack the hot channel temperatures of tropospheric lightning, so they are more akin to fluorescent tube discharges than to lightning discharges. Sprites are associated with various other upper-atmospheric optical phenomena including blue jets and ELVES. History The earliest known report is by Toynbee and Mackenzie in 1886. Nobel laureate C. T. R. Wilson had suggested in 1925, on theoretical grounds, that electrical breakdown could occur in the upper atmosphere, and in 1956 he witnessed what possibly could have been a sprite. They were first documented photographically on July 6, 1989, when scientists from the University of Minnesota, using a low-light video camera, accidentally captured the first image of what would subsequently become known as a sprite. Several years after their discovery, they were named sprites (air spirits) after the mythological entity, on account of their elusive nature. Since the 1989 video capture, sprites have been imaged from the ground, from aircraft and from space, and have become the subject of intensive investigations. A featured high-speed video captured by Thomas Ashcraft, Jacob L Harley, Matthew G McHarg, and Hans Nielsen in 2019 at about 100,000 frames per second is fast enough to provide better detail of how sprites develop. However, according to NASA's APOD blog, despite being recorded in photographs and videos for more than 30 years, the "root cause" of sprite lightning remains unknown, "apart from a general association with positive cloud-to-ground lightning." NASA also notes that not all storms exhibit sprite lightning. In 2016, sprites were observed during Hurricane Matthew's passage through the Caribbean. The role of sprites in tropical cyclones is presently unknown. Characteristics Sprites have been observed over North America, Central America, South America, Europe, Central Africa (Zaire), Australia, the Sea of Japan and Asia and are believed to occur during most large thunderstorm systems. Rodger (1999) categorized three types of sprites based on their visual appearance. Jellyfish sprite – very large, up to about 50 by 50 km. Column sprite (C-sprite) – large-scale electrical discharges above the earth that are still not totally understood. Carrot sprite – a column sprite with long tendrils. Sprites are colored reddish-orange in their upper regions, with bluish hanging tendrils below, and can be preceded by a reddish halo. They last longer than normal lower stratospheric discharges, which last typically a few milliseconds, and are usually triggered by the discharges of positive lightning between the thundercloud and the ground, although sprites generated by negative ground flashes have also been observed. 
They often occur in clusters of two or more, and typically span the altitude range 50 to 90 km, with what appear to be tendrils hanging below, and branches reaching above. Optical imaging using a 10,000-frame-per-second high-speed camera showed that sprites are actually clusters of small, decameter-scale balls of ionization that are launched at an altitude of about 80 km and then move downward at speeds of up to ten percent of the speed of light, followed a few milliseconds later by a separate set of upward-moving balls of ionization. Sprites may be horizontally displaced by up to 50 km from the location of the underlying lightning strike, with a time delay following the lightning that is typically a few milliseconds, but on rare occasions may be up to 100 milliseconds. In order to film sprites from Earth, special conditions must be present: an unobstructed view over a long distance to a powerful thunderstorm with positive lightning between cloud and ground, red-sensitive recording equipment, and a black unlit sky. Mechanism Sprites occur near the top of the mesosphere at about 80 km altitude in response to the electric field generated by lightning flashes in underlying thunderstorms. When a sufficiently large positive lightning strike carries charge to the ground, the cloud top is left with a strongly negative net charge. This can be modeled as a quasi-static electric dipole, and for less than 10 milliseconds a strong electric field is generated in the region above the thunderstorm. In the low pressure of the upper mesosphere the breakdown voltage is drastically reduced, allowing an electron avalanche to occur. Sprites get their characteristic red color from excitation of nitrogen in the low-pressure environment of the upper mesosphere. At such low pressures, quenching by atomic oxygen is much faster than that of nitrogen, allowing nitrogen emissions to dominate despite no difference in composition. Sprite halo Sprites are sometimes preceded, by about 1 millisecond, by a sprite halo, a pancake-shaped region of weak, transient optical emissions approximately 50 km across and 10 km thick. The halo is centered at about 70 km altitude above the initiating lightning strike. These halos are thought to be produced by the same physical process that produces sprites, but for which the ionization is too weak to cross the threshold required for streamer formation. They are sometimes mistaken for ELVES, due to their visual similarity and short duration. Research carried out at Stanford University in 2000 indicates that, unlike sprites with bright vertical columnar structure, the occurrence of sprite halos is not unusual in association with normal (negative) lightning discharges. Research in 2004 by scientists from Tohoku University found that very low frequency emissions occur at the same time as the sprite, indicating that a discharge within the cloud may generate the sprites. Related aircraft damage Sprites have been blamed for otherwise unexplained accidents involving high-altitude vehicular operations above thunderstorms. One example of this is the malfunction of a NASA stratospheric balloon launched on June 6, 1989, from Palestine, Texas. The balloon suffered an uncommanded payload release while flying over a thunderstorm near Graham, Texas. Months after the accident, an investigation concluded that a "bolt of lightning" traveling upward from the clouds provoked the incident. The attribution of the accident to a sprite was made retroactively, since this term was not coined until late 1993. 
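The quasi-static dipole picture in the Mechanism section can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the charge moment change, breakdown field, and scale height are assumed round numbers, and the point-dipole model ignores conductivity relaxation and other real effects.

```python
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m
E_BREAK_SEA_LEVEL = 3.2e6   # approx. conventional breakdown field at sea level, V/m
SCALE_HEIGHT_M = 7.0e3      # approx. atmospheric density scale height, m

def quasi_static_field(charge_moment_c_km: float, altitude_m: float) -> float:
    """On-axis dipole field; the factor 2 in p_eff accounts for the ground image charge."""
    p_eff = 2.0 * charge_moment_c_km * 1e3   # C*km -> effective dipole moment, C*m
    return 2.0 * p_eff / (4.0 * math.pi * EPS0 * altitude_m ** 3)

def breakdown_field(altitude_m: float) -> float:
    """Breakdown field scaled down with the exponential density profile."""
    return E_BREAK_SEA_LEVEL * math.exp(-altitude_m / SCALE_HEIGHT_M)

for h_km in (60, 70, 80):   # assumed 1000 C*km charge moment change
    h = h_km * 1e3
    print(f"{h_km} km: quasi-static {quasi_static_field(1000, h):7.1f} V/m, "
          f"breakdown {breakdown_field(h):7.1f} V/m")
```

With these assumed values, the quasi-static field first exceeds the breakdown field between 70 and 80 km, consistent with sprites initiating near the top of the mesosphere.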
See also Upper-atmospheric lightning (includes Blue Jets) Aurora (astronomy) Catatumbo lightning Cosmic ray visual phenomena References External links "Red Sprites & Blue Jets" – a digital capture of the VHS video distributed in 1994 by the University of Alaska Fairbanks that popularized the terms – webpage by University of Alaska Fairbanks Ground and Balloon-Borne Observations of Sprites and Jets Darwin Sprites '97 Space Physics Group, University of Otago Sprites, jets and TLE pictures and articles Sprites in Europe: European contributors blog Short professional bio of Dr 'Geoff' McHarg Photography website Petapixel posted a link to a very rare and very clear photograph of a sprite taken by photographer Mike Hollingshead. Article at Photographer Captures Rare Photograph of a Sprite with an Aurora At the Edge of Space – a NOVA program that examines the phenomenon of Sprites Red Sprites Over Adriatic Sea Seen from the Czech Republic (14 January 2019) articles containing video clips electrical phenomena lightning terrestrial plasmas
Sprite (lightning)
[ "Physics" ]
1,611
[ "Physical phenomena", "Electrical phenomena", "Lightning" ]
2,253,990
https://en.wikipedia.org/wiki/Renard%20series
Renard series are a system of preferred numbers dividing an interval from 1 to 10 into 5, 10, 20, or 40 steps. This set of preferred numbers was proposed ca. 1877 by French army engineer Colonel Charles Renard and reportedly published in an 1886 instruction for captive balloon troops, thus receiving the current name in the 1920s. His system was adopted by the ISO in 1949 to form the ISO Recommendation R3, first published in 1953 or 1954, which evolved into the international standard ISO 3. The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence. This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. One application of the Renard series of numbers is the current rating of electric fuses. Another common use is the voltage rating of capacitors (e.g. 100 V, 160 V, 250 V, 400 V, 630 V). Base series The most basic R5 series consists of these five rounded numbers, which are powers of the fifth root of 10, rounded to two digits. The Renard numbers are not always rounded to the three-digit number closest to the theoretical geometric sequence: R5: 1.00 1.60 2.50 4.00 6.30 Examples If some design constraints were assumed so that two screws in a gadget should be placed between 32 mm and 55 mm apart, the resulting length would be 40 mm, because 4 is in the R5 series of preferred numbers. If a set of nails with lengths between roughly 15 and 300 mm should be produced, then the application of the R5 series would lead to a product repertoire of 16 mm, 25 mm, 40 mm, 63 mm, 100 mm, 160 mm, and 250 mm long nails. If traditional English wine cask sizes had been metricated, the rundlet (18 gallons, ca 68 liters), barrel (31.5 gal., ca 119 liters), tierce (42 gal., ca 159 liters), hogshead (63 gal., ca 239 liters), puncheon (84 gal., ca 318 liters), butt (126 gal., ca 477 liters) and tun (252 gal., ca 954 liters) could have become 63 (or 60 by R″5), 100, 160 (or 150), 250, 400, 630 (or 600) and 1000 liters, respectively. Alternative series If a finer resolution is needed, another five numbers are added to the series, one after each of the original R5 numbers, and one ends up with the R10 series. These are rounded to a multiple of 0.05. Where an even finer grading is needed, the R20, R40, and R80 series can be applied. The R20 series is usually rounded to a multiple of 0.05, and the R40 and R80 values interpolate between the R20 values, rather than being powers of the 80th root of 10 rounded correctly. In the table below, the additional R80 values are written to the right of the R40 values in the column named "R80 add'l". The R40 numbers 3.00 and 6.00 are higher than they "should" be by interpolation, in order to give rounder numbers. In some applications more rounded values are desirable, either because the numbers from the normal series would imply an unrealistically high accuracy, or because an integer value is needed (e.g., the number of teeth in a gear). For these needs, more rounded versions of the Renard series have been defined in ISO 3. In the table below, rounded values that differ from their less rounded counterparts are shown in bold. As the Renard numbers repeat after every 10-fold change of the scale, they are particularly well-suited for use with SI units. 
It makes no difference whether the Renard numbers are used with metres or millimetres. But one would need to use an appropriate number base to avoid ending up with two incompatible sets of nicely spaced dimensions, if for instance they were applied with both inches and feet. In the case of inches and feet a root of 12 would be desirable, that is, steps of the n-th root of 12, where n is the desired number of divisions within the major step size of twelve. Similarly, a base of two, eight, or sixteen would fit nicely with the binary units commonly found in computer science. Each of the Renard sequences can be reduced to a subset by taking every nth value in a series, which is designated by adding the number n after a slash. For example, "R10″/3 (1…1000)" designates a series consisting of every third value in the R″10 series from 1 to 1000, that is, 1, 2, 4, 8, 15, 30, 60, 120, 250, 500, 1000. Such narrowing of the general original series suggests the opposite idea of deepening the series and redefining it by a strict, simple formula. Since the selected series {1, 2, 4, 8, ...} is binary, the R10 series can be reformulated as a binary series bR3 built on powers of the cube root of 2, generating just 9 values of R10 because of this kind of periodicity: the 3 values of the first period are repeated, multiplied by 2, in each successive octave. This eliminates rounding, as the defining characteristic now holds exactly: any value multiplied by 2 is also a member of the series. The usual drawback, however, is that the thousandfold product of such multiplication is shifted slightly: instead of the decadic 1000, the binary 1024 appears, a classic value in IT. Multiplication by 2 is possible in R10 too, to obtain further members, but the long fractional numbers compromise the R10 accuracy. See also Preferred numbers Preferred metric sizes 1-2-5 series E series (preferred numbers) Logarithm Decibel Neper Phon Nominal Pipe Size (NPS) Geometric progression References Further reading Numbers Industrial design Logarithmic scales of measurement
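Since the construction above is purely arithmetic, it can be illustrated with a short script. The following is a minimal sketch, with invented function names, that generates unrounded Renard values, snaps an arbitrary dimension to the nearest preferred value (minimizing relative error, as described above), and derives the R10/3 subset; the conventional ISO roundings are not reproduced here.

```python
def renard(n: int, decades: int = 1) -> list[float]:
    """Unrounded Renard values: powers of the n-th root of 10, covering 1 to 10**decades."""
    return [10 ** (k / n) for k in range(n * decades + 1)]

def round_to_renard(x: float, series: list[float]) -> float:
    """Replace x by the preferred value that minimizes the relative error."""
    return min(series, key=lambda r: abs(x / r - 1))

# R5 values before the conventional rounding to 1.0, 1.6, 2.5, 4.0, 6.3:
print([round(v, 2) for v in renard(5)])   # [1.0, 1.58, 2.51, 3.98, 6.31, 10.0]

# A 45 mm dimension snaps to ~39.8, i.e. the preferred value 40 (cf. the screw example):
print(round_to_renard(45, renard(5, decades=2)))

# Every third value of R10 gives the R10/3 subset, approximately 1, 2, 4, 8, ...:
print([round(v) for v in renard(10, decades=3)[::3]])
```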
Renard series
[ "Physics", "Mathematics", "Engineering" ]
1,319
[ "Industrial design", "Design engineering", "Physical quantities", "Quantity", "Mathematical objects", "Logarithmic scales of measurement", "Arithmetic", "Design", "Numbers" ]
2,254,029
https://en.wikipedia.org/wiki/Astrophysical%20plasma
Astrophysical plasma is plasma outside of the Solar System. It is studied as part of astrophysics and is commonly observed in space. The accepted view of scientists is that much of the baryonic matter in the universe exists in this state. When matter becomes sufficiently hot and energetic, it becomes ionized and forms a plasma. This process breaks matter into its constituent particles, which include negatively charged electrons and positively charged ions. These electrically charged particles are susceptible to influence by local electromagnetic fields. This includes strong fields generated by stars, and weak fields which exist in star-forming regions, in interstellar space, and in intergalactic space. Similarly, electric fields are observed in some stellar astrophysical phenomena, but they are inconsequential in very low-density gaseous media. Astrophysical plasma is often differentiated from space plasma, which typically refers to the plasma of the Sun, the solar wind, and the ionospheres and magnetospheres of the Earth and other planets. Observing and studying astrophysical plasma Plasmas in stars can both generate and interact with magnetic fields, resulting in a variety of dynamic astrophysical phenomena. These phenomena are sometimes observed in spectra due to the Zeeman effect. Other forms of astrophysical plasmas can be influenced by preexisting weak magnetic fields, whose interactions may only be determined directly by polarimetry or other indirect methods. In particular, the intergalactic medium, the interstellar medium, the interplanetary medium and solar winds consist of diffuse plasmas. Possible related phenomena Scientists are interested in active galactic nuclei because such astrophysical plasmas could be directly related to the plasmas studied in laboratories. Many of these phenomena seemingly exhibit an array of complex magnetohydrodynamic behaviors, such as turbulence and instabilities. In Big Bang cosmology, the entire universe was in a plasma state prior to recombination. Early history Norwegian explorer and physicist Kristian Birkeland predicted that space is filled with plasma. Writing in 1913, he assumed that most of the mass in the universe should be found in "empty" space. References External links "US / Russia Collaboration in Plasma Astrophysics" Space plasmas Space physics Solar phenomena Stellar phenomena
Astrophysical plasma
[ "Physics", "Astronomy" ]
447
[ "Space plasmas", "Physical phenomena", "Outer space", "Astrophysics", "Solar phenomena", "Stellar phenomena", "Space physics" ]
2,254,056
https://en.wikipedia.org/wiki/C-element
In digital computing, the Muller C-element (C-gate, hysteresis flip-flop, coincident flip-flop, or two-hand safety circuit) is a small binary logic circuit widely used in the design of asynchronous circuits and systems. It outputs 0 when all inputs are 0, it outputs 1 when all inputs are 1, and it retains its output state otherwise. It was specified formally in 1955 by David E. Muller and first used in the ILLIAC II computer. In terms of the theory of lattices, the C-element is a semimodular distributive circuit, whose operation in time is described by a Hasse diagram. The C-element is closely related to the rendezvous and join elements, where an input is not allowed to change twice in succession. In some cases, when relations between delays are known, the C-element can be realized as a sum-of-product (SOP) circuit. Earlier techniques for implementing the C-element include the Schmitt trigger, the Eccles-Jordan flip-flop and the last moving point flip-flop. Truth table and delay assumptions For two input signals the C-element is defined by the equation y(n+1) = x1·x2 + (x1 + x2)·y(n), which corresponds to the following truth table: the output is 0 when both inputs are 0, 1 when both inputs are 1, and the previous output value y(n) when the inputs differ. This table can be turned into a circuit using the Karnaugh map. However, the implementation so obtained is naive, since nothing is said about delay assumptions. To understand under what conditions the obtained circuit is workable, it is necessary to do additional analysis, which reveals that delay1 is a propagation delay from node 1 via the environment to node 3, delay2 is a propagation delay from node 1 via the internal feedback to node 3, and delay1 must be greater than delay2. Thus, the naive implementation is correct only for a slow environment. Implementations of the C-element Depending on the requirements on switching speed and power consumption, the C-element can be realized as a coarse- or fine-grain circuit. Also, one should distinguish between single-output and dual-rail realizations of the C-element. A dual-rail C-element can be realized on 2-input NANDs (NORs) only. A single-output realization is workable if and only if: The circuit, where each input of a C-element is connected through a separate inverter to its output, is semimodular relative to the state where all the inverters are excited. This state is live for the output gate of the C-element. Static and semistatic implementations In his report Muller proposed to realize the C-element as a majority gate with feedback. However, to avoid hazards linked with skews of internal delays, the majority gate must have as small a number of transistors as possible. Generally, C-elements with different timing assumptions can be built on an AND-OR-Invert (AOI) gate or its dual, an OR-AND-Invert (OAI) gate, and an inverter. Yet another option, patented by Varshavsky et al., is to shunt the input signals when they are not equal to each other. Being very simple, these realizations dissipate more power due to the short-circuits. Connecting an additional majority gate to the inverted output of the C-element yields the inclusive OR (EDLINCOR) function. Some simple asynchronous circuits like pulse distributors can be built solely on majority gates. The semistatic C-element stores its previous state using two cross-coupled inverters, similar to an SRAM cell. One of the inverters is weaker than the rest of the circuit, so it can be overpowered by the pull-up and pull-down networks. If both inputs are 0, then the pull-up network changes the latch's state, and the C-element outputs a 0. If both inputs are 1, then the pull-down network changes the latch's state, making the C-element output a 1. 
Otherwise, the input of the latch is not connected to either the supply or ground, and so the weak inverter dominates and the latch outputs its previous state. There are also versions of the semistatic C-element built on devices with negative differential resistance (NDR). NDR is usually defined for small signals, so it is difficult to expect that such a C-element will operate in the full range of voltages or currents. Gate-level implementations There are a number of different single-output circuits of the C-element built on logic gates. In particular, the so-called Maevsky's implementation is a semimodular, but non-distributive (OR-causal) circuit loosely based on an earlier design. The NAND3 gate in this circuit can be replaced by two NAND2 gates. Note that Maevsky's C-element is actually a Join element, whose input signals cannot switch twice. Yet another circuit with OR-causality also operates as a Join element. A realization of the C-element on two-input gates only has been proposed by Tsirlin and then synthesized by Starodoubtsev et al. using the Taxogram language. This circuit coincides with that attributed to Bartky, and can operate without the input latch. Note that both the Maevsky and Tsirlin circuits are actually based on the so-called David cell. Its fast transistor-level implementation is used in a proposed semistatic C-element. Yet another semistatic circuit using pass transistors (actually a MUX 2:1) has been proposed. Yet another version of the C-element, built on two SR latches, has been synthesized by Murphy using the Petrify tool. However, this circuit includes an inverter connected to one of the inputs. This inverter should have a small delay. There are, however, realizations of RS latches that already have one inverted input. Some speed-independent approaches assume that zero-delay input inverters are available on all gates, which is a violation of true speed-independence but is fairly safe in practice. Other examples of using this assumption also exist. Non-transistor implementations Other technologies suitable for realizing asynchronous primitives, including the C-element, are: carbon nanotubes, single-electron tunneling devices, quantum dots, and molecular nanotechnology. Generalization for multiple-valued logic The definition of the C-element can be generalized for multiple-valued logic, or even for continuous signals: the output follows the inputs when they agree and retains its previous state otherwise. For example, in a balanced ternary C-element with two inputs, the output becomes −1, 0, or +1 when both inputs take that value, and holds its previous state otherwise. Since the majority gate is a particular case of a threshold gate, any of the known realizations of threshold gates can in principle be used for building a C-element. In the multiple-valued case, however, connecting the output of the majority gate to one or several inputs may have no desirable effect. For example, using a ternary majority function with feedback does not lead to the ternary C-element specified above, if the sum is not split into pairs. However, even without such a splitting, two ternary majority functions are suitable for building a ternary inclusive OR gate. References External links Workcraft tool: Synthesis and verification of C-element Logic gates Digital electronics
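As a behavioral illustration of the defining equation above, here is a minimal software sketch of a two-input C-element; it models only the logical behavior, not delays or any of the transistor-level circuits discussed in this section.

```python
class CElement:
    """Two-input Muller C-element: y(n+1) = x1*x2 + (x1 + x2)*y(n)."""

    def __init__(self, initial: int = 0):
        self.y = initial

    def step(self, x1: int, x2: int) -> int:
        # Follows the inputs when they agree; holds the old output otherwise.
        self.y = (x1 & x2) | ((x1 | x2) & self.y)
        return self.y

c = CElement()
for x1, x2 in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    print(f"x1={x1} x2={x2} -> y={c.step(x1, x2)}")
# Output: 0, 0 (hold), 1, 1 (hold), 0, matching the truth table above.
```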
C-element
[ "Engineering" ]
1,463
[ "Electronic engineering", "Digital electronics" ]
2,254,112
https://en.wikipedia.org/wiki/Asymmetric%20C-element
Asymmetric C-elements are extended C-elements with inputs that affect the operation of the element only when it is transitioning in one of the two directions. Asymmetric inputs are attached to either the minus (-) or plus (+) strips of the symbol. The common inputs, which affect both transitions, are connected to the centre of the symbol. When transitioning from zero to one, the C-element will take into account the common and the asymmetric plus inputs. All these inputs must be high for the up transition to take place. Similarly, when transitioning from one to zero, the C-element will take into account the common and the asymmetric minus inputs. All these inputs must be low for the down transition to happen. The figure shows the gate-level and transistor-level implementations and symbol of the asymmetric C-element. In the figure the plus inputs are marked with a 'P', the minus inputs are marked with an 'm' and the common inputs are marked with a 'C'. In addition, it is possible to extend the asymmetric input convention to inverted C-elements, where a plus (minus) on an input port means that an input is required for the inverted output to fall (rise). References Digital electronics
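The transition rules above can be captured in a few lines. A minimal Python sketch (behavioural only; the class and parameter names are this sketch's own, and delays are not modelled):

```python
class AsymmetricCElement:
    """Behavioral sketch of an asymmetric C-element (illustrative).

    'common' inputs affect both transitions; 'plus' inputs are
    consulted only for the 0->1 transition and 'minus' inputs only
    for the 1->0 transition, as described above.
    """

    def __init__(self, initial=0):
        self.state = initial

    def step(self, common, plus=(), minus=()):
        if self.state == 0 and all(common) and all(plus):
            self.state = 1   # rise needs common AND plus inputs high
        elif self.state == 1 and not any(common) and not any(minus):
            self.state = 0   # fall needs common AND minus inputs low
        return self.state


ac = AsymmetricCElement()
ac.step(common=[1], plus=[0])    # plus input low -> stays 0
ac.step(common=[1], plus=[1])    # rises to 1
ac.step(common=[0], minus=[1])   # minus input high -> stays 1
ac.step(common=[0], minus=[0])   # falls to 0
```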
Asymmetric C-element
[ "Engineering" ]
262
[ "Electronic engineering", "Digital electronics" ]
2,254,244
https://en.wikipedia.org/wiki/Dogbane
Dogbane, dog-bane, dog's bane, and other variations, some of them regional and some transient, are names for certain plants that are reputed to kill or repel dogs; "bane" originally meant "slayer", and was later applied to plants to indicate that they were poisonous to particular creatures. History of the term The earliest reference to such names in common English usage was in the 16th century, when they were applied to various plants in the Apocynaceae, in particular Apocynum. Some plants in the Asclepiadoideae, now a subfamily of the Apocynaceae, but until recently regarded as the separate family Asclepiadaceae, were also called dogbane even before the two families were united. It is not clear how much earlier the name had been in use in the English language, which had originated about 1000 years earlier, in mediaeval times. However, centuries before the appearance of the English language, Pedanius Dioscorides, in his De Materia Medica, had already described members of the Apocynaceae, such as Apocynum and Cynanchum, by names equivalent to "dogbane"; Apocynum literally means "dog killer" or "dog remover", and "Cynanchum" means "dog strangler". In modern times some species of Nerium, Periploca and Trachelospermum, also in the Apocynaceae, are called dogbane or variants such as "climbing dogbane". Modern significance of the term "dogbane family" Some modern sources note "dogbane" as strictly being the species known as 'Indian hemp', Apocynum, though it is doubtful that such a narrow definition could be justified, even if it were enforceable. Still others consider Asclepias (milkweeds) to be the "true" dogbanes; however, when the majority of authors, horticulturists or gardeners refer to the "dogbanes", they are generally referring to the family Apocynaceae as a whole. "Dogbane" as a term outside the family Apocynaceae Common names, either informal or vernacular, are seldom definitive, let alone stable. Some poisonous or offensive plants in practically unrelated families had similar common names in the vernacular and writings of various times; for example, an edition of De Materia Medica, apparently of the early sixteenth century, mentions that species of Aconitum (family Ranunculaceae) were known as either "dog killer" (or murderer) or "wolf killer" ("...Sunt qui Cynoctonon: qui Lycoctonon... uocent"). Again, in modern times Isocoma menziesii in the family Asteraceae is known in some regions as dogbane. Recent aberrant application of the term The term "dogbane" (as well as "cat-scat")—either out of genuine confusion or as a deliberate sales ploy for gardeners desiring a natural animal repellent—has been applied without obvious justification to several other groups of plants, such as some species of Plectranthus (ironically, a genus in the catnip subfamily Nepetoideae of the mint family Lamiaceae). While none have been reported to be especially harmful, or even useful against nuisance animals, in the garden, many—such as Plectranthus (Coleus) caninus—have very fragrant, oily leaves which give an intensely pungent, skunk- or Cannabis-like aroma when brushed, disturbed or touched. At times, simply the wind blowing can trigger the release of the essential oils into the surrounding area. The smell has been reported, by some sources, to keep nuisance animals at bay; however, if a plant is not poisonous or otherwise offensive to them, many animals quickly become accustomed to various botanical aromas and remain unbothered.
Oftentimes, these plants are more effective at repelling humans from a given area, as the essential oils are strong, sticky, and exude a distinct aroma of marijuana or skunk-spray, which may linger for hours on the skin, gloves, clothing, or any other surface it contacts. References Apocynaceae Plant common names
Dogbane
[ "Biology" ]
894
[ "Plant common names", "Common names of organisms", "Plants" ]
2,254,416
https://en.wikipedia.org/wiki/Business%20record
A business record is a document (hard copy or digital) that records an "act, condition, or event" related to business. Business records include meeting minutes, memoranda, employment contracts, and accounting source documents. It must be retrievable at a later date so that the business dealings can be accurately reviewed as required. Since business is dependent upon confidence and trust, not only must the record be accurate and easily retrieved, but the processes surrounding its creation and retrieval must be perceived by customers and the business community to consistently deliver a full and accurate record with no gaps or additions. Most business records have specified retention periods based on legal requirements and/or internal company policies. This is important because in many countries (including the United States), many documents may be required by law to be disclosed to government regulatory agencies or to the general public. Likewise, they may be discoverable if the business is sued. Under the business records exception in the Federal Rules of Evidence, certain types of business records, particularly those made and kept with regularity, may be considered admissible in court despite containing hearsay. See also Records management Information governance Regulation Fair Disclosure Sarbanes-Oxley Act References Resources ARMA International - Association of Records Managers and Administrators AIIM - Association for Information and Image Management Business documents Information management Records management Information governance
Business record
[ "Technology" ]
270
[ "Information systems", "Information management" ]
2,254,600
https://en.wikipedia.org/wiki/Peak%20programme%20meter
A peak programme meter (PPM) is an instrument used in professional audio that indicates the level of an audio signal. Different kinds of PPM fall into broad categories: True peak programme meter. This shows the peak level of the waveform no matter how brief its duration. Quasi peak programme meter (QPPM). This only shows the true level of the peak if it exceeds a certain duration, typically a few milliseconds. On peaks of shorter duration, it indicates less than the true peak level. The extent of the shortfall is determined by the 'integration time'. Sample peak programme meter (SPPM). This is a PPM for digital audio. It shows only peak sample values, not true waveform peaks (which may fall between samples and may be higher in amplitude). It may have either a 'true' or a 'quasi' integration characteristic. Over-sampling peak programme meter. This is a sample PPM that first oversamples the signal, typically by a factor of four, to alleviate the problems of a basic sample PPM. In professional use, which requires consistent level measurements across an industry, audio level meters often comply with a formal standard. This ensures that all compliant meters indicate the same level for a given audio signal. The principal standard for PPMs is IEC 60268-10. It describes two different quasi-PPM designs that have roots in meters originally developed in the 1930s for the AM radio broadcasting networks of Germany (Type I) and the United Kingdom (Type II). The term Peak Programme Meter usually refers to these IEC-specified types and similar designs. Though originally designed for monitoring analogue audio signals, these PPMs are now also used with digital audio. PPMs do not provide effective loudness monitoring. Newer types of meter do, and there is now a push within the broadcasting industry to move away from the traditional level meters in this article to two new types: loudness meters based on EBU Tech. 3341 and oversampling true PPMs. The former would be used to standardise broadcast loudness to −23 LUFS and the latter to prevent digital clipping. Design characteristics Display technologies In common with many other types of audio level meter, PPMs originally used electro-mechanical displays. These took the form of moving-coil panel meters or mirror galvanometers with demanding 'ballistics': the key requirement being that the indicated level should rise as quickly as possible with negligible overshoot. These displays require active driver electronics. Nowadays PPMs are often implemented as 'bargraph' incremental displays using solid-state illuminated segments in a vertical or horizontal array. For these, IEC 60268-10 requires a minimum of 100 segments and a resolution better than 0.5 dB at the higher levels. Many operators prefer the moving-coil meter type of display, in which a needle moves in an arc, because they feel the angular movement is easier for the human eye to monitor than the linear movement of a bar graph. PPMs can also be implemented in software—in a general-purpose computer or by a dedicated device that inserts a PPM image into a picture signal for display on a picture monitor. Level definitions A variety of terms such as 'line-up level' and 'operating level' exist, and their meaning may vary from place to place. In an attempt to bring clarity to level definitions in the context of programme transmission from one country to another, where different technical practices may apply, ITU-R Rec.
BS.645 defined three reference levels: Measurement Level (ML), Alignment Level (AL) and Permitted Maximum Level (PML). This document shows the reading corresponding to these levels for several types of meter. Alignment Level is the level of a steady sine-wave alignment tone. Permitted Maximum Level refers to the permitted maximum meter indication that operators should aim for on speech, music etc., not tone. Scales and scale marks PPMs often use white-on-black displays, to minimise eyestrain especially with extended periods of use. PPMs are usually calibrated in one of these ways: In decibels relative to Alignment Level (e.g., Nordic, EBU) In decibels relative to Permitted Maximum Level (e.g., DIN, ABC, SABC) In decibels relative to 0 dBu (e.g., CBC) In decibels relative to 0 dBFS (e.g., IEC 60268-18) In simple numerical marks that can be correlated with any of the above (e.g., British) Whichever scheme is used, usually there is a scale mark corresponding to Alignment Level. Most PPMs have an approximately logarithmic scale, i.e., roughly linear in decibels, to provide useful indications over a wide dynamic range. Integration time Quasi-PPMs use a short integration time so they can register peaks longer than a few milliseconds in duration. In the original context of AM radio broadcasting in the 1930s, overloads due to shorter peaks were considered unimportant on the grounds that the human ear could not detect distortion due to momentary clipping. Ignoring momentary clipping made it possible to increase average modulation levels. In modern digital audio practice, where quality standards are much higher than those of AM radio in the 1930s, clipping of even short peaks is usually regarded as something to avoid. On typical, real-world audio signals, a quasi-PPM under-reads the true peak by 6 to 8 dB. Nevertheless, quasi-PPMs are still widely used in the digital age because of their usefulness in achieving programme balance. Overloads are avoided by allowing, typically, 9 dB of headroom when controlling digital levels with a quasi-PPM. The extent to which quasi-PPMs show less than the true amplitude of momentary peaks is determined by the 'integration time'. This is defined by IEC as "...the duration of a burst of sinusoidal voltage of 5000 Hz at reference level that results in an indication 2 dB below reference indication." This standard also contains tables showing the difference between indicated and true peaks for tone bursts of other durations. The longer the integration time, the greater the difference between the true and indicated peaks. In earlier standards, different methods of measurement and criteria were used, such as 0.2 Neper or 80% voltage instead of 2 dB, but the practical difference between them was small. A Type I PPM has an integration time of 5 milliseconds and a Type II PPM has an integration time of 10 milliseconds. Return time All PPMs have a return time much longer than the integration time, to give the operator more time to see the peaks and reduce eye strain. Type I PPMs fall back 20 dB in 1.7 seconds. Type II PPMs fall back 24 dB in 2.8 seconds. History and national variants The PPM was originally developed, independently in both Germany and the United Kingdom, for use in AM radio broadcasting networks in the 1930s. These were quasi-peak meters with some features in common but otherwise substantially different. They are superior to earlier types of meter that were not good for monitoring peak audio levels.
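The integration and return times above are enough to build a rough behavioural model. The following Python sketch is a first-order approximation of Type II quasi-PPM ballistics, with the attack constant derived from the 2 dB / 10 ms definition and the release from the 24 dB in 2.8 s figure; it is illustrative only, and all function and parameter names are this sketch's own rather than taken from any standard:

```python
import numpy as np

def quasi_ppm_envelope(signal, fs, integration_ms=10.0,
                       return_db=24.0, return_s=2.8):
    """First-order sketch of Type II quasi-PPM ballistics (illustrative).

    Attack is set so that a tone burst lasting `integration_ms` at
    reference level reads 2 dB low, matching the IEC definition of
    integration time; release gives a linear-in-dB fall of
    `return_db` dB in `return_s` seconds. Real meter drive circuits
    are more elaborate; this is only a behavioral approximation.
    """
    # solve 1 - exp(-T/tau) = 10**(-2/20) at T = integration time
    t_int = integration_ms / 1000.0
    tau = t_int / np.log(1.0 / (1.0 - 10.0 ** (-2.0 / 20.0)))
    attack = 1.0 - np.exp(-1.0 / (fs * tau))
    # exponential amplitude decay -> linear fall in dB
    release = 10.0 ** (-return_db / (20.0 * fs * return_s))

    env = 0.0
    out = np.empty(len(signal))
    for i, x in enumerate(np.abs(np.asarray(signal, dtype=float))):
        env = env + attack * (x - env) if x > env else env * release
        out[i] = env
    return out  # linear envelope; display as 20*log10(out) dB
```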
IEC 60268-10 Type I PPM Germany In about 1936 and 1937, German broadcasters developed a peak programme meter with a mirror galvanometer known as a "Lichtzeigerinstrument" (light pointer) for the display. The system consisted of a drive amplifier (e.g., ARD types U21 and U71) and a separate display unit (e.g., ARD types J47 and J48). A stereo version, known as a "Doppel-Lichtzeigerinstrument", contained two mirror galvanometer displays in a single housing. Such displays were still used until the 1970s, when solid-state bargraph displays became the norm. The design became standardised as DIN 45406. It evolved into the Type I meter in IEC 60268-10 and it is still known colloquially as a DIN PPM. Compared to the Type II designs it has faster integration and return times, a much wider dynamic range and a semi-logarithmic scale, and is calibrated in dB relative to Permitted Maximum Level. It remains in use in much of northern Europe. In German broadcasting, the nominal analogue signal corresponding to Permitted Maximum Level was standardised by ARD at 1.55 volts (+6 dBu), and this is the usual sensitivity of a DIN-type PPM for an indication of 0 dB. Alignment Level (−3 dBu) is shown on the meter by a scale mark at −9. Scandinavia In Scandinavia a variant of the DIN PPM known as 'Nordic' is used. It has the same integration and return times but a different scale, with 'TEST' corresponding to Alignment Level (0 dBu) and +9 corresponding to Permitted Maximum Level (+9 dBu). Compared to the DIN scale, the Nordic scale is more logarithmic and covers a somewhat smaller dynamic range. IEC 60268-10 Type II PPM United Kingdom The BBC used a number of methods of measuring programme volume in its early years, including the 'volume indicator' and 'slide-back voltmeter'. By 1932, when the BBC moved to purpose-built facilities in Broadcasting House, the first audio meter called a 'programme meter' was introduced. It was developed by Charles Holt-Smith of the Research Department and became known as the 'Smith meter'. This was the first meter with white markings on a black background. It was driven by a circuit that gave a roughly logarithmic transfer characteristic, so it could be calibrated in decibels. The overall characteristics were the product of the driver circuit and the movement's ballistics. The first of the PPMs was designed by C. G. Mayo, also of the BBC's Research Department. It came into service in 1938. It kept the Smith meter's logarithmic, white-on-black display, and included all the key design features that are still used to this day with only slight modification: full-wave rectification, fast integration and slow return times, and a simple scale calibrated from 1 to 7. Mayo and others determined the integration and return times by a series of experiments. At first, they intended to create a true peak meter to prevent transmitters from exceeding 100% modulation. They created a prototype meter with an integration time of about 1 ms. They found that the ear tolerates distortion lasting only a few ms, and that a 'registration time' of 4 ms is sufficient. They made the return time a compromise between a rapid return, which was tiring to the eye, and a slow return, which made control difficult. Engineers decided that the meter should take between 2 s and 3 s to drop back 26 dB. The BBC PPM became the subject of several formal standards: BS 4297:1968 (superseded); BS 5428:Part 9:1981 (superseded) and then BS 6840-10:1991. The text of the latter is identical to the Type IIa PPM in IEC 60268-10:1991.
Alignment level (0 dBu) and Permitted Maximum Level (+8 dBu) correspond to scale marks '4' and '6' respectively. The BBC PPM was adopted by commercial broadcasters in the UK. Other organisations around the world, including the EBU, CBC and ABC used the same dynamics but with slightly different scales. Modern British PPMs have a 4 dB spacing between the scale marks. Older designs had a 6 dB spacing between '1' and '2'. This discrepancy can sometimes also be found at the equivalent position on the derived CBC and ABC scales. From its inception in 1939 until 2009, the PPM display was available in the form of an electro-mechanical, moving-coil meter movement with a demanding ballistic specification. For many years these were manufactured by Ernest Turner and Company, and in later years by Sifam, based in Torquay. In 2009, Sifam announced it was ending production of the Type 74 dual-needle meter movement. In 2010, Sifam ended all PPM meter movement manufacturing. Three major users—Bryant Unlimited, Canford Audio, and TSL—placed final orders with Sifam for large stocks of the meters to supply manufacturing and maintenance activities for several years. Stereo British PPMs In the UK, twin-needle PPMs are sometimes used for stereo. Red and green needles are used for left and right. White and yellow needles are used for sum and difference (M and S). A more recent variation is to use a black needle with a dayglo orange tip for S instead of yellow. The sensitivity of the S indication can be increased on some meter installations by 20 dB; this is to aid line-up procedures, e.g., of stereo mic pairs, or the azimuth of analogue tape machine heads, which rely on cancellation of the S signal. M3 and M6 M and S meters are normally aligned to the 'M6' standard in which M = (L + R) − 6 dB and S = (L − R) − 6 dB. In other words, the sum and difference signals are each attenuated by 6 dB before being displayed on the meter. As a result, signals of identical amplitude and phase in the left and right channels make the M meter show exactly the same deflection as for the individual L and R meters. This is because summing two identical signals produces a result 6 dB louder than either source, but the M and S meters show summed signals attenuated by 6 dB to compensate. The M6 standard means that dual mono sources (e.g. a presenter panned to the centre of a stereo sound stage) can be peaked to 6 in both channels, with the M meter also showing 6. The M6 format has largely replaced the earlier 'M3' standard in which the sum and difference attenuation is only 3 dB. This M3 format is designed to give a more accurate indication of the level of the summed mono signal when working with conventional stereo material. The premise is that in summing two signals of similar level but carrying non-phase-coherent sounds (i.e. typical stereo material), the result averages 3 dB more than either source channel (rather than 6 dB more). The M3 standard means that true stereo material can be peaked to 6 in both channels, with the M meter also showing 6. However, dual-mono sources can only be peaked to 5.25 in each channel to keep the M meter at 6. Note: the chosen M6/M3 metering standard does not affect the relative audible balance of sounds panned to one side versus the centre – that is determined solely by the panning law of the mixing console's pan-pot. 
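The arithmetic behind the two conventions is simple enough to sketch directly. The following Python fragment is illustrative only; the function name and signature are this sketch's own:

```python
def ms_meter_feeds(left, right, standard="M6"):
    """Derive M (sum) and S (difference) meter feeds (illustrative).

    Under M6 the sum and difference are each attenuated by 6 dB;
    under M3 the attenuation is 3 dB, as described above.
    """
    atten_db = -6.0 if standard == "M6" else -3.0
    gain = 10.0 ** (atten_db / 20.0)
    m = gain * (left + right)
    s = gain * (left - right)
    return m, s

# Dual-mono at equal level: under M6 the M feed equals either channel
# (to within the ~0.2% by which 10**(-6/20) differs from exactly 0.5).
m, s = ms_meter_feeds(1.0, 1.0)  # m ~ 1.0, s == 0.0
```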
The M6 standard is deemed a simpler form of metering for untrained broadcasters to use as it keeps the M meter at '4' for Alignment Level and '6' for peaks, without the operator having to remember to subtract 3 dB. Commercial broadcasting in the UK initially used M3 but had switched to M6 by 1980. This was mandated by the IBA's Engineering Code of Practice. BBC installations used M3 until 1999. The BBC now uses M6 in both radio and TV, although much legacy equipment is still configured for the 'traditional' M3 standard. European Broadcasting Union The EBU PPM is a variant of the British PPM designed for the control of programme levels in international programme exchange. It is formalised as the Type IIb PPM in IEC 60268-10. It is identical to the British PPM except for the scale plate, which is calibrated in dB relative to Alignment Level, which is marked 'TEST'. There are also ticks at 2 dB intervals and at +9 dB, corresponding to Permitted Maximum Level. United States In the late 1930s PPMs were considered for use in the US, but rejected in favour of a 'Standard Volume Indicator' (VU meter) on grounds of cost. Joint research by CBS, NBC and Bell Labs found that using an experimental design of PPM (with a relatively long integration time of 25 ms) in the control of programme levels gave only a 1 dB advantage over the VU meter, in terms of average output level for a given amount of distortion. It was felt that this was too small to justify the much greater expense. It was also found that VU meters gave more consistent readings than PPMs when comparing programme levels at the sending and receiving end of long lines subject to group delay, which altered the waveform. This finding has been disputed by others. A widely believed myth is that the PPM was developed as a superior alternative to the VU meter. In fact, the PPM came first, and if anything the VU meter was developed as an economical alternative to the PPM. By 1980, ABC had about 100 PPMs in use in control rooms in New York and its Washington News Bureau, and was ordering new consoles with PPMs fitted. These were Type II PPMs with the seven marks labelled −22, −16, −12, −8, −4, 0 and +4. ABC found that a modified version of the EBU meter based on the VU-meter 'A scale' was best, since it let operators use their usual jargon such as 'zero level' etc. The appearance is similar to an EBU scale except that the numbers are 8 dB lower. To aid alignment on both VU meters and PPMs, ABC in New York used a special test signal known as ATS. A 440 Hz tone alternated between steady tone at +8 dBu (indicated at 0 VU and −8 PPM) and tone bursts at +16 dBu (indicated at 0 VU and 0 PPM). Canada By 1978 PPMs were in use at the Canadian Broadcasting Corporation's Vancouver plant. Some 30 or 40 PPMs were in use, with just one or two VU meters retained for settling telco disputes. These are Type II PPMs with the seven marks labelled −6, 0, +4, +8, +12, +16 and +20: this scaling shows absolute levels in dBu (or dBm into 600 Ω). The appearance is similar to the ABC PPM except that all the numbers are 16 dB higher. South Africa The South African Broadcasting Corporation (SABC) uses a Type II PPM modified with a black-on-white scale plate calibrated in percentage and dB relative to Permitted Maximum Level, which is +6 dBu. Alignment Level is 0 dBu or 50%.
IEC 60268-18 Digital PPM IEC 60268-18 is a partial standard for a PPM designed for use with digital audio in both professional and consumer applications, using "incremental dot or bar type displays or numerical displays". Such a display shows level relative to 0 dBFS. The integration time can have any value less than 5 ms; thus both true-peak and quasi-peak meters can comply, and different meters may indicate very different levels despite compliance with the standard. The return time has the same value as a Type I meter: 1.7±0.3 seconds for a 20 dB fall. Table of characteristics IEC 60268-10 specifies three variants: Types I, IIa and IIb, known colloquially as the DIN, British and EBU types respectively. Types IIa and IIb differ only in the scale marks. The Nordic, ABC, CBC and SABC variants are not specified in IEC 60268-10. The Nordic PPM uses Type I ballistics with a different scale. The ABC, CBC and SABC variants use Type II ballistics with different scales. Parameters for the VU meter and Nagra modulometer are included in the table below for comparison. Some information has been obtained from ITU-T Rec. J.15. Nagra modulometer The 'modulometer' is a proprietary type of quasi-PPM found on Nagra products. It has an integration time (−2 dB) of 7.5 ms, and a semi-logarithmic scale with an appearance between that of a VU meter and a DIN-type PPM. A stereo version ("double modulometer") uses a meter movement with two coaxial needles. In typical practice for Nagra analogue tape recorders, Alignment Level is regarded as −8 and maximum level 0. Thus sound recordists using location mixers would typically send a tone at 0 VU or PPM 4 (British) and adjust the Nagra recorder's gain to read −8 on the modulometer. Some newer digital recorders, e.g., the Nagra VI, have modulometers displayed as bargraphs calibrated in dBFS. For these, Alignment Level is as for any other digital PPM, i.e., −18 dBFS (EBU) or −20 dBFS (SMPTE). Usage of meter by sound balancers To use PPMs effectively to control sound levels it is necessary to understand their design rationale and limitations. Many engineers prefer the PPM to the much slower VU meter used in the US, but it does require some interpretation in use. Though it gives a useful overload warning, it does not represent either true peak level or subjective loudness. The BBC have tables showing recommended settings for different types of programme, such as speech, classical music etc., which attempt to take account of the latter. Regardless of the kind of programme, there is usually a nominal Permitted Maximum Level, as indicated on a PPM. Operators are expected to keep levels below it, within reason. Practices vary between countries and organisations. In the UK, the Permitted Maximum Level is 8 dB above Alignment Level, corresponding to '6' on the British PPM scale. ITU-T standards for international sound programme circuits specify a Permitted Maximum Level of 9 dB above Alignment Level. Accordingly, +9 dB is represented by a mark on the EBU PPM scale. Digital audio levels Because quasi-peak PPMs indicate neither loudness nor true peaks but something between the two, it is important to allow sufficient headroom when using them in the control of digital audio levels. The EBU convention (R68) provides for this by defining Alignment Level as −18 dBFS. Thus a peak to the Permitted Maximum Level as indicated on a quasi-PPM corresponds to −9 or −10 dBFS.
This 9–10 dB margin allows for operator error, for the fact that the true peak is typically several dB higher than the PPM indication, and for the possibility that subsequent signal processing (e.g., sample rate conversion) may increase the amplitude. SMPTE RP 0155 recommends a different alignment level, corresponding to 0 VU, of −20 dBFS. The two conventions result in line-up tone levels that differ by 2 dB, but in practice the level of programme modulation tends to be similar. The SMPTE and the EBU agree that regardless of whether −18 or −20 dBFS is used as the Alignment Level, that level should be declared and that in both cases programme should peak to a Permitted Maximum Level of −9 dBFS when measured on an IEC 60268-10 quasi-PPM with an integration time of 10 milliseconds. Consumer use IEC 60268-10 is concerned mainly with the highly specified Type I and Type II PPMs used in broadcasting. It does however also contain a brief section on PPMs for 'secondary and consumer' applications. The requirements include a minimum of a 12-segment bargraph type display covering a range of −42 dB to +6 dB relative to nominal maximum level, and the same integration and return times as a Type I PPM. References Audio engineering BBC Research & Development British inventions Audiovisual introductions in 1932 German inventions Broadcast engineering Sound production technology Sound recording
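The relationship between the analogue (dBu) and digital (dBFS) scales described under "Digital audio levels" is a single offset fixed by the chosen alignment convention. A minimal Python sketch (function and parameter names are this sketch's own):

```python
def dbu_to_dbfs(level_dbu, alignment_dbu=0.0, alignment_dbfs=-18.0):
    """Map an analogue level in dBu onto the digital scale (sketch).

    Defaults follow the EBU R68 convention described above, where
    Alignment Level (0 dBu) corresponds to -18 dBFS. For the SMPTE
    convention, pass alignment_dbu=4.0 (0 VU at +4 dBu) and
    alignment_dbfs=-20.0.
    """
    return level_dbu - alignment_dbu + alignment_dbfs

# A +9 dBu peak (Permitted Maximum Level on an EBU PPM) lands at -9 dBFS:
print(dbu_to_dbfs(9.0))  # -9.0
```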
Peak programme meter
[ "Engineering" ]
4,945
[ "Electronic engineering", "Electrical engineering", "Audio engineering", "Broadcast engineering" ]
2,254,751
https://en.wikipedia.org/wiki/LBV%201806%E2%88%9220
LBV 1806−20 is a candidate luminous blue variable (LBV) and likely binary star located towards the center of the Milky Way, at a great distance from the Sun. It has an estimated mass of around 36 solar masses and an estimated variable luminosity of around two million times that of the Sun. It is highly luminous but is invisible from the Solar System at visual wavelengths because less than one billionth of its visible light reaches us. When first discovered, LBV 1806−20 was considered both the most luminous and most massive star known, which challenged scientific understanding of the formation of massive stars. Recent estimates place it somewhat nearer to Earth, which, combined with its binary nature, means that it is now well within the expected range of parameters for extremely luminous stars in the galaxy. Its luminosity, around two million times that of the Sun, makes it one of the most luminous stars in the galaxy. Location LBV 1806−20 lies at the core of radio nebula G10.0–0.3, which is believed to be primarily powered by its stellar wind. It is a member of the 1806−20 open cluster, itself a component of W31, one of the largest H II regions in the Milky Way. Cluster 1806−20 is made up of some highly unusual stars, including four Wolf–Rayet stars, several OB stars, and a magnetar (SGR 1806−20). Spectrum The spectral type of LBV 1806−20 is uncertain and possibly variable. It has been constrained to between O9 and B2 on the basis of an infrared HeI line equivalent width. The spectrum shows strong emission in the Paschen and Brackett series of hydrogen, but also emission lines of helium, FeII, MgII, and NaI. The lines are broad and have uneven profiles, some showing P Cygni profiles. High resolution spectra show that some HeI absorption lines are doubled. Properties Intervening dust in the direction of the Galactic Center absorbs an estimated 35 magnitudes at visual wavelengths, and so most observations are conducted using infrared telescopes. On the basis of its luminosity and spectral type it is suspected of being an LBV, but despite the name the characteristic photometric and spectroscopic variations have not yet been observed, so it remains just a candidate. Binary To account for the doubled HeI lines in its spectrum and the inconsistent mass, luminosity and age estimates, LBV 1806−20 has been proposed to be a binary. The emission lines are single, so only one star appears to have a dense stellar wind as might be expected from an LBV. Notes References Luminous blue variables Sagittarius (constellation) J18084031-2024411
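The quoted extinction figure can be checked against the standard magnitude-to-flux relation. A one-line Python sketch (the function name is this sketch's own; the relation itself is the usual astronomical definition):

```python
# Fraction of light transmitted through A magnitudes of extinction:
# f = 10 ** (-A / 2.5). For the ~35 magnitudes of visual extinction
# quoted above, the transmitted fraction is 10**-14, far below the
# "less than one billionth" (10**-9) figure stated in the article.
def transmitted_fraction(extinction_mag):
    return 10.0 ** (-extinction_mag / 2.5)

print(transmitted_fraction(35.0))  # 1e-14
```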
LBV 1806−20
[ "Astronomy" ]
545
[ "Sagittarius (constellation)", "Constellations" ]
2,254,781
https://en.wikipedia.org/wiki/Phycobilisome
Phycobilisomes are light-harvesting antennae that transmit the energy of harvested photons to photosystem II and photosystem I in cyanobacteria and in the chloroplasts of red algae and glaucophytes. They were lost during the evolution of the chloroplasts of green algae and plants. General structure Phycobilisomes are protein complexes (up to 600 polypeptides) anchored to thylakoid membranes. They are made of stacks of chromophorylated proteins, the phycobiliproteins, and their associated linker polypeptides. Each phycobilisome consists of a core made of allophycocyanin, from which several outwardly oriented rods radiate, made of stacked disks of phycocyanin and (if present) phycoerythrin(s) or phycoerythrocyanin. The spectral properties of phycobiliproteins are mainly dictated by their prosthetic groups, which are linear tetrapyrroles known as phycobilins, including phycocyanobilin, phycoerythrobilin, phycourobilin and phycobiliviolin. The spectral properties of a given phycobilin are influenced by its protein environment. Function Each phycobiliprotein has a specific absorption and fluorescence emission maximum in the visible range of light. Therefore, their presence and particular arrangement within the phycobilisomes allow absorption and unidirectional transfer of light energy to chlorophyll a of photosystem II. In this way, the cells take advantage of the available wavelengths of light (in the 500–650 nm range), which are inaccessible to chlorophyll, and utilize their energy for photosynthesis. This is particularly advantageous deeper in the water column, where light with longer wavelengths is less transmitted and therefore less available directly to chlorophyll. The geometrical arrangement of a phycobilisome is a very elegant antenna-like assembly, resulting in 95% efficiency of energy transfer. Evolution and diversity There are many variations to the general phycobilisome structure. Their shape can be hemidiscoidal (in cyanobacteria) or hemiellipsoidal (in red algae). Species lacking phycoerythrin have at least two disks of phycocyanin per rod, which is sufficient for maximum photosynthesis. The phycobiliproteins themselves show little sequence evolution due to their highly constrained function (absorption and transfer of specific wavelengths). In some species of cyanobacteria, when both phycocyanin and phycoerythrin are present, the phycobilisome can undergo significant restructuring in response to light color. In green light the distal portions of the rods are made of red colored phycoerythrin, which absorbs green light better. In red light, this is replaced by blue colored phycocyanin, which absorbs red light better. This reversible process is known as complementary chromatic adaptation. The phycobilisome is a component of the photosynthetic system of cyanobacteria, occurring as a particle to which various structures (e.g., the thylakoid membrane) are linked. Applications Phycobilisomes can be used in prompt fluorescence, flow cytometry, Western blotting and protein microarrays. Some phycobilisomes have an absorption and emission profile similar to Cy5, allowing them to be used in many of the same applications. They can also be up to 200 times brighter and with a larger Stokes shift, providing a larger signal per binding event. This property allows the detection of low-level target molecules or rare events. References Further reading External links Columbia Biosciences - Phycobilisome Resource Cell anatomy Bacteriology Photosynthesis Prokaryotic cell anatomy Organelles
Phycobilisome
[ "Chemistry", "Biology" ]
847
[ "Biochemistry", "Photosynthesis" ]
2,254,792
https://en.wikipedia.org/wiki/Methyl%20isobutyl%20ketone
Methyl isobutyl ketone (MIBK, 4-methylpentan-2-one) is an organic compound with the condensed chemical formula (CH3)2CHCH2C(O)CH3. This ketone is a colourless liquid that is used as a solvent for gums, resins, paints, varnishes, lacquers, and nitrocellulose. Production At laboratory scale, MIBK can be produced via a three-step process using acetone as the starting material. Self-condensation, a type of aldol reaction, produces diacetone alcohol, which readily dehydrates to give 4-methylpent-3-en-2-one (commonly, mesityl oxide). Mesityl oxide is then hydrogenated to give MIBK. Industrially, these three steps are combined. Acetone is treated with a strongly acidic, palladium catalyst-doped cation exchange resin under medium pressure of hydrogen. Several million kilograms are produced annually. Uses MIBK is used as a solvent for nitrocellulose, lacquers, and certain polymers and resins. Precursor to 6PPD Another major use is as a precursor to N-(1,3-dimethylbutyl)-N′-phenyl-p-phenylenediamine (6PPD), an antiozonant used in tires. 6PPD is prepared by reductive coupling of MIBK with 4-aminodiphenylamine. Solvent and niche applications Unlike the other common ketone solvents, acetone and MEK, MIBK has quite low solubility in water, making it useful for liquid-liquid extraction. It has a similar polarity to ethyl acetate, but greater stability towards aqueous acid and base. It can be used to extract gold, silver and other precious metals from cyanide solutions, such as those used in gold mines, to determine the levels of those dissolved metals. Diisobutyl ketone (DIBK), a related lipophilic ketone, is also used for this purpose. Methyl isobutyl ketone is also used as a denaturing agent for denatured alcohol. When mixed with water or isopropyl alcohol, MIBK serves as a developer for PMMA electron beam lithography resist. MIBK is used as a solvent for CS in the preparation of the CS spray currently used by American and British police forces. References External links International Chemical Safety Card 0511 National Pollutant Inventory - Methyl isobutyl ketone fact sheet NIOSH Pocket Guide to Chemical Hazards Hazardous air pollutants Hexanones Ketone solvents Commodity chemicals IARC Group 2B carcinogens
Methyl isobutyl ketone
[ "Chemistry" ]
587
[ "Commodity chemicals", "Products of chemical industry" ]
2,254,890
https://en.wikipedia.org/wiki/Hypervariable
Hypervariable may refer to: Hypervariable sequence, a segment of a chromosome characterised by considerable variation in the number of tandem repeats at one or more loci Hypervariable locus, a locus with many alleles; especially those whose variation is due to variable numbers of tandem repeats Hypervariable region (HVR), a chromosomal segment characterized by multiple alleles within a population for a single genetic locus Genetics
Hypervariable
[ "Biology" ]
87
[ "Genetics" ]
2,255,218
https://en.wikipedia.org/wiki/Catalog%20server
A catalog server provides a single point of access that allows users to centrally search for information across a distributed network. In other words, it indexes databases, files and information across a large network and allows keyword, Boolean and other searches. A catalog server is a standard solution for providing a comprehensive search service across an intranet, extranet or even the Internet. References Databases
Catalog server
[ "Technology" ]
81
[ "Computing stubs", "Computer network stubs" ]
2,255,444
https://en.wikipedia.org/wiki/Retention%20basin
A retention basin, sometimes called a retention pond, wet detention basin, or storm water management pond (SWMP), is an artificial pond with vegetation around the perimeter and a permanent pool of water in its design. It is used to manage stormwater runoff, to protect against flooding, to control erosion, and to serve as an artificial wetland and improve the water quality in adjacent bodies of water. It is distinguished from a detention basin, sometimes called a "dry pond", which temporarily stores water after a storm, but eventually empties out at a controlled rate to a downstream water body. It also differs from an infiltration basin which is designed to direct stormwater to groundwater through permeable soils. Wet ponds are frequently used for water quality improvement, groundwater recharge, flood protection, aesthetic improvement, or any combination of these. Sometimes they act as a replacement for the natural absorption of a forest or other natural process that was lost when an area is developed. As such, these structures are designed to blend into neighborhoods and be viewed as an amenity. In urban areas, impervious surfaces (roofs, roads) reduce the time rainfall takes to enter the stormwater drainage system. If left unchecked, this will cause widespread flooding downstream. The function of a stormwater pond is to contain this surge and release it slowly. This slow release mitigates the size and intensity of storm-induced flooding on downstream receiving waters. Stormwater ponds also collect suspended sediments, which are often found in high concentrations in stormwater due to upstream construction and sand applications to roadways. Design features Storm water is typically channeled to a retention basin through a system of street and/or parking lot storm drains, and a network of drain channels or underground pipes. The basins are designed to allow relatively large flows of water to enter, but discharges to receiving waters are limited by outlet structures that function only during very large storm events. Retention ponds are often landscaped with a variety of grasses, shrubs, and/or aquatic plants to provide bank stability and aesthetic benefits. Vegetation also provides water quality benefits by removing soluble nutrients through uptake. In some areas the ponds can attract nuisance types of wildlife like ducks or Canada geese, particularly where there is minimal landscaping and grasses are mowed. This reduces the ability of foxes, coyotes, and other predators to approach their prey unseen. Such predators tend to hide in the cattails and other tall, thick grass surrounding natural water features. Proper depth of retention ponds is important for removal of pollutants and maintenance of fish populations. Urban fishing continues to be one of the fastest growing fishing segments as new suburban neighborhoods are built around these aquatic areas. Other meanings A retention basin can also be a part of a nuclear reactor used to contain a core meltdown. See also Balancing lake (UK) Nationwide Urban Runoff Program (NURP) – US stormwater research project Settling basin – for treating agricultural and industrial wastewater Stream restoration Surface runoff Sustainable drainage system Urban runoff Water pollution Jonenbach flood retention basin References External links Virginia retention basin standards Detention vs.
retention – Harris County, Texas Flood Control District Stormwater Ecological Enhancement Project – University of Florida The use of retention ponds in residential settings International Stormwater BMP Database – Performance Data on Urban Stormwater BMPs Environmental engineering Hydraulic engineering Hydrology Infrastructure Stormwater management
Retention basin
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
672
[ "Hydrology", "Water treatment", "Stormwater management", "Chemical engineering", "Water pollution", "Physical systems", "Construction", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering", "Infrastructure" ]
2,255,524
https://en.wikipedia.org/wiki/Motor%20unit%20recruitment
Motor unit recruitment is the activation of additional motor units to accomplish an increase in contractile strength in a muscle. A motor unit consists of one motor neuron and all of the muscle fibers it stimulates. All muscles consist of a number of motor units and the fibers belonging to a motor unit are dispersed and intermingle amongst fibers of other units. The muscle fibers belonging to one motor unit can be spread throughout part, or most of the entire muscle, depending on the number of fibers and size of the muscle. When a motor neuron is activated, all of the muscle fibers innervated by the motor neuron are stimulated and contract. The activation of one motor neuron will result in a weak but distributed muscle contraction. The activation of more motor neurons will result in more muscle fibers being activated, and therefore a stronger muscle contraction. Motor unit recruitment is a measure of how many motor neurons are activated in a particular muscle, and therefore is a measure of how many muscle fibers of that muscle are activated. The higher the recruitment the stronger the muscle contraction will be. Motor units are generally recruited in order of smallest to largest (smallest motor neurons to largest motor neurons, and thus slow to fast twitch) as contraction increases. This is known as Henneman's size principle. Neuronal mechanism of recruitment Henneman proposed that the mechanism underlying the size principle was that the smaller motor neurons had a smaller surface area and therefore a higher membrane resistance. He predicted that the current generated by an excitatory postsynaptic potential (EPSP) would result in a higher voltage change (depolarization) across the neuronal membrane of the smaller motor neurons, and therefore larger EPSPs in smaller motoneurons. Burke later demonstrated that there was a graded decrease of both EPSP and inhibitory postsynaptic potential (IPSP) amplitudes from small to large motoneurons. This seemed to confirm Henneman's idea, but Burke disagreed, pointing out that larger neurons with a larger surface area had space for more synapses. Burke eventually showed (in a very small sample of neurons) that smaller motoneurons have a greater number of synaptic inputs from a single input source. The topic is still regarded as controversial. In their 1982 paper, Burke and colleagues propose that the small cell size and high surface-to-volume ratio of S motor units allows for greater metabolic activity, optimized for the "highest duty cycles" of motoneurons, while other motor unit types may be involved in "lower duty cycles." However, they state that the evidence is not conclusive "to support or deny the intuitively appealing notion that there is a correlation between metabolic activity, motoneuron size, and motor unit type." Under some circumstances, the normal order of motor unit recruitment may be altered, such that small motor units cease to fire and larger ones may be recruited. This is thought to be due to the interaction of excitatory and inhibitory motoneuronal inputs. Recruitment of motor unit types Another topic of controversy resides in the way in which Burke and colleagues categorized motor unit types. They designated three general groups by which motor units could be categorized: S (slow – slow twitch), FR (fast, resistant – fast twitch, fatigue-resistant), and FF (fast, fatigable – fast twitch, fatigable).
These designations have served as the basis for motor unit categorization since their conception, but modern research indicates that human motor units are more complex and possibly do not directly fit this model. However, it is important to note that Burke himself recognized the risk in classifying motor units: My friend the late Elwood Henneman told me several times in conversation that he thought classifying motor units into distinct categories was probably a bad idea because, unless used with care, classifications tend to distort reality. I agreed, and still do, that taxonomies can lead to overly rigid thinking (and sometimes even lack of thinking) but they are necessary for communication, which requires that things be named; and scientific communication demands that things be named precisely, according to their attributes. If a correlation were to be drawn between Henneman's size principle and the motor unit categorization of Burke regarding the order of motor unit recruitment, it would resemble the following order: the smallest units, S (slow, slow-oxidative), would be recruited first, followed by the larger FR (fast, fatigue-resistant, fast-oxidative) units, and lastly the largest FF (fast, fatigable, fast-glycolytic) units, reserved for high-energy tasks that require additional motor unit recruitment. Rate coding of muscle force The force produced by a single motor unit is determined in part by the number of muscle fibers in the unit. Another important determinant of force is the frequency with which the muscle fibers are stimulated by their innervating axon. The rate at which the nerve impulses arrive is known as the motor unit firing rate and may vary from frequencies low enough to produce a series of single twitch contractions to frequencies high enough to produce a fused tetanic contraction. Generally, this allows a 2- to 4-fold change in force. In general, the motor unit firing rate of each individual motor unit increases with increasing muscular effort until a maximum rate is reached. This smooths out the incremental force changes which would otherwise occur as each additional unit was recruited. Proportional control of muscle force The distribution of motor unit size is such that there is an inverse relationship between the number of motor units and the force each generates (i.e., the number of muscle fibers per motor unit). Thus, there are many small motor units and progressively fewer larger motor units. This means that at low levels of recruitment, the force increment due to recruitment is small, whereas in forceful contractions, the force increment becomes much larger. Thus the ratio between the force increment produced by adding another motor unit and the force threshold at which that unit is recruited remains relatively constant. Electrodiagnostic testing In medical electrodiagnostic testing for a patient with weakness, careful analysis of the "motor unit action potential" (MUAP) size, shape, and recruitment pattern can help in distinguishing a myopathy from a neuropathy. See also Motor unit number estimation Myopathy Neuropathy References External links Somatic motor system Motor control
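To make the interplay of recruitment order and rate coding concrete, the following Python sketch is a deliberately simplified toy model, not physiological data; the function name, the proportional thresholds, and the linear rate ramp are all this sketch's own assumptions:

```python
import numpy as np

def total_force(drive, twitch_forces, rate_range=3.0):
    """Toy model of size-principle recruitment plus rate coding.

    Units are recruited smallest first (Henneman's size principle):
    here a unit's recruitment threshold is simply proportional to
    its twitch force. Once recruited, a unit's output grows with
    drive up to `rate_range` times its twitch force, reflecting the
    roughly 2- to 4-fold force change available through firing rate.
    """
    forces = np.sort(np.asarray(twitch_forces, dtype=float))
    thresholds = forces / forces.max()   # small units: low threshold
    active = drive >= thresholds
    # firing-rate factor ramps from 1x to rate_range x above threshold
    rate = 1.0 + (rate_range - 1.0) * np.clip(drive - thresholds, 0.0, 1.0)
    return float(np.sum(forces * rate * active))

# Many small units and progressively fewer large ones, as noted above
units = [1] * 20 + [5] * 6 + [20] * 2
for d in (0.1, 0.5, 1.0):
    print(d, total_force(d, units))
```

At low drive only the small units contribute, so each added unit changes the total force only slightly; the large units join in forceful contractions, consistent with the proportional-control argument above.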
Motor unit recruitment
[ "Biology" ]
1,314
[ "Behavior", "Motor control" ]
2,255,858
https://en.wikipedia.org/wiki/Somatic%20marker%20hypothesis
The somatic marker hypothesis, formulated by Antonio Damasio and associated researchers, proposes that emotional processes guide (or bias) behavior, particularly decision-making. "Somatic markers" are feelings in the body that are associated with emotions, such as the association of rapid heartbeat with anxiety or of nausea with disgust. According to the hypothesis, somatic markers strongly influence subsequent decision-making. Within the brain, somatic markers are thought to be processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala. The hypothesis has been tested in experiments using the Iowa gambling task. Background In economic theory, human decision-making is often modeled as being devoid of emotions, involving only logical reasoning based on cost-benefit calculations. In contrast, the somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex and uncertain situations. Patients with frontal lobe damage, such as Phineas Gage, provided the first evidence that the frontal lobes were associated with decision-making. Frontal lobe damage, particularly to the vmPFC, results in impaired abilities to organize and plan behavior and learn from previous mistakes, without affecting intellect in terms of working memory, attention, and language comprehension and expression. vmPFC patients also have difficulty expressing and experiencing appropriate emotions. This led Antonio Damasio to hypothesize that decision-making deficits following vmPFC damage result from the inability to use emotions to help guide future behavior based on past experiences. Consequently, vmPFC damage forces those affected to rely on slow and laborious cost-benefit analyses for every given choice situation. Hypothesis When individuals make decisions, they must assess the incentive value of the choices available to them, using cognitive and emotional processes. When the individuals face complex and conflicting choices, they may be unable to decide using only cognitive processes, which may become overloaded. Emotions, consequently, are hypothesized to guide decision-making. Emotions, as defined by Damasio, are changes in both body and brain states in response to stimuli. Physiological changes (such as muscle tone, heart rate, endocrine activity, posture, facial expression, and so forth) occur in the body and are relayed to the brain where they are transformed into an emotion that tells the individual something about the stimulus that they have encountered. Over time, emotions and their corresponding bodily changes, which are called "somatic markers", become associated with particular situations and their past outcomes. When making subsequent decisions, these somatic markers and their evoked emotions are consciously or unconsciously associated with their past outcomes, and influence decision-making in favor of some behaviors instead of others. For instance, when a somatic marker associated with a positive outcome is perceived, the person may feel happy and thereby motivated to pursue that behavior. When a somatic marker associated with a negative outcome is perceived, the person may feel sad, which acts as an internal alarm to warn the individual to avoid that course of action. These situation-specific somatic states, based on and reinforced by past experiences, help to guide behavior in favor of more advantageous choices and are therefore adaptive. According to the hypothesis, two distinct pathways reactivate somatic marker responses.
In the first pathway, emotion can be evoked by changes in the body that are projected to the brain – called the "body loop". For instance, encountering a feared object like a snake may initiate the fight-or-flight response and cause fear. In the second pathway, cognitive representations of the emotions (imagining an unpleasant situation "as if" you were in that particular situation) can be activated in the brain without being directly elicited by a sensory stimulus – called the "as-if body loop". Thus, the brain can anticipate expected bodily changes, which allows the individual to respond faster to external stimuli without waiting for an event to actually occur. The amygdala and vmPFC (a subsection of the orbital and medial prefrontal cortex or OMPFC) are essential components of this hypothesized mechanism, and therefore damage to either structure will disrupt decision-making. Experimental evidence In an effort to produce a simple neuropsychological tool that would assess deficits in emotional processing, decision-making, and social skills of OMPFC-lesioned individuals, Bechara and collaborators created the Iowa gambling task. The task measures a form of emotion-based learning. Studies using the gambling task have found deficits in various neurological populations (such as those with amygdala and OMPFC lesions) and psychiatric populations (such as those with schizophrenia, mania, or drug dependence). The Iowa gambling task is a computerized test in which participants are presented with four decks of cards from which they repeatedly choose. Each deck yields rewards of either $50 or $100, and occasional losses that are greater in the decks with higher rewards. The participants do not know where the penalty cards are located, and are told to pick cards that will maximize their winnings. The most profitable strategy turns out to be to choose cards only from the small reward/small penalty decks, because although the reward is smaller, the penalty is proportionally much smaller than in the high reward/high penalty decks. Over the course of a session, most healthy participants come to adopt the profitable low-penalty deck strategy. Participants with brain damage, however, are unable to determine the better deck to choose from, and continue to choose from the high reward/high penalty decks. Since the Iowa gambling task measures participants' quickness in "developing anticipatory emotional responses to guide advantageous choices", it is helpful in testing the somatic marker hypothesis. According to the hypothesis, somatic markers give rise to anticipation of the emotional consequences of a decision being made. Consequently, persons who perform well on the task are thought to be aware of the penalty cards and of the negative emotions associated with drawing such cards, and to realize which deck is less likely to yield a penalty. This experiment has been used to analyze the impairments of people with damage to the vmPFC, which has been known to affect neural signaling of prospective rewards or punishments. Such persons perform less well on the task. Functional magnetic resonance imaging (fMRI) has been used to analyze the brain during the Iowa gambling task. The brain regions that were activated during the Iowa gambling task were also the ones hypothesized to be triggered by somatic markers during decision-making.
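The structure of the task lends itself to a toy simulation. In the Python sketch below, the payoff schedule only approximates the commonly cited one (rewards of $100 or $50 per draw, with penalties making the high-reward decks lose about $25 per draw on average and the low-reward decks gain about $25); the function and the two example strategies are this sketch's own:

```python
import random

def iowa_gambling_task(choose, trials=100, seed=0):
    """Minimal simulation of the Iowa gambling task (illustrative).

    Decks A and B pay $100 per draw, C and D pay $50; penalty
    probabilities and sizes are set so that A/B lose ~$25 per draw
    on average while C/D gain ~$25, a simplification of the
    published schedule. `choose(t)` returns a deck name per trial.
    """
    rng = random.Random(seed)
    decks = {
        "A": (100, 0.5, 250),    # (reward, penalty probability, penalty)
        "B": (100, 0.1, 1250),
        "C": (50, 0.5, 50),
        "D": (50, 0.1, 250),
    }
    total = 0
    for t in range(trials):
        reward, p, penalty = decks[choose(t)]
        total += reward - (penalty if rng.random() < p else 0)
    return total

# An "advantageous" strategy sticking to the low-reward decks:
print(iowa_gambling_task(lambda t: "C" if t % 2 else "D"))
# A "disadvantageous" strategy chasing the high-reward decks:
print(iowa_gambling_task(lambda t: "A" if t % 2 else "B"))
```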
Evolutionary significance Damasio has posited that the ability of humans to perform abstract thinking quickly and efficiently coincides with both the development of the vmPFC and with the use of somatic markers to guide human behavior during evolution. Patients with damage to the vmPFC are more likely to engage in behaviors that negatively impact personal relationships in the distant future, but they never engage in actions that would lead to immediate harm to themselves or others. The evolution of the prefrontal cortex was associated with the ability to represent events that may occur in the future. Application to risky behavior The somatic marker hypothesis has been applied to understanding risky behaviors, such as risky sexual behavior and drug addiction. According to the hypothesis, riskier sexual behaviors are more exhilarating and pleasurable, and therefore they are more likely to stimulate repetitive engagement in such behaviors. When this idea was tested in individuals who were infected with HIV and were substance dependent, differences were found between persons who scored well in the Iowa gambling test, and those who scored poorly. The high scorers showed a correlation between the amount of distress they reported having over their HIV status, and their acceptance of risk during sexual behavior – the greater the distress, the greater the risk that these people would take. The low scorers, on the other hand, showed no such correlation. These results were interpreted as indicating that persons with intact decision-making abilities are better able to rely on past emotional experiences when weighing risks, than are persons who are deficient in such abilities, and that acceptance of risk serves to ameliorate emotional distress. Drug abusers are thought to ignore the negative consequences of addiction while seeking drugs. According to the somatic marker hypothesis, such abusers are impaired in their ability to recall and consider past unpleasant experiences when weighing whether to engage in drug-seeking behaviors. Researchers analyzed the neuroendocrine responses of substance-dependent individuals and healthy individuals after they were shown pleasant or unpleasant images. In response to unpleasant images, drug users showed decreased levels of several neuroendocrine markers, including norepinephrine, cortisol, and adrenocorticotropic hormone. Addicts showed lesser responses to both pleasant and unpleasant images, suggesting that they may have a diminished emotional response. Neuroimaging studies utilizing fMRI indicate that drug-related stimuli have the ability to activate brain regions involved in emotional evaluation and reward processing. When shown a film of people smoking cocaine, cocaine users showed greater activation of the anterior cingulate cortex, the right inferior parietal lobe, and the caudate nucleus than did non-users. Conversely, the cocaine users showed lesser activation when viewing a sex film than did non-users. Criticism Some researchers believe that the use of somatic markers (i.e., afferent feedback) would be a very inefficient method of influencing behavior. Damasio's notion of the as-if experience dependent feedback route, whereby bodily responses are re-represented utilizing the somatosensory cortex (postcentral gyrus), also proposes an inefficient method of affecting explicit behavior.
Edmund Rolls (1999) stated that "it would be very inefficient and noisy to place in the execution route a peripheral response, and transducers to attempt to measure that peripheral response, itself a notoriously difficult procedure" (p. 73). Reinforcement association located in the orbitofrontal cortex and amygdala, where the incentive value of stimuli is decoded, is sufficient to elicit emotion-based learning and to affect behavior via, for example, the orbitofrontal-striatal pathway. This process can occur via implicit or explicit processes. The somatic marker hypothesis represents a model of how feedback from the body may contribute to both advantageous and disadvantageous decision-making in situations of complexity and uncertainty. Much of its supporting evidence comes from the Iowa gambling task. While the Iowa gambling task has proven to be an ecologically valid measure of decision-making impairment, there are three assumptions that need to hold true. First, the claim that it assesses implicit learning is inconsistent with data showing accurate participant knowledge of the task contingencies, and with evidence that mechanisms such as working memory have a strong influence. Second, the claim that this knowledge arises through anticipatory somatic marker signals is not supported by competing explanations of the generated psychophysiological profile. Lastly, the claim that the impairment is due to a "myopia for the future" is undermined by more plausible psychological mechanisms explaining deficits on the tasks, such as reversal learning, risk-taking, and working-memory deficits. There may also be more variability in control performance than previously thought, thus complicating the interpretation of the findings. Furthermore, although the somatic marker hypothesis has accurately identified many of the brain regions involved in decision-making, emotion, and body-state representation, it has failed to clearly demonstrate how these processes interact at a psychological and evolutionary level. There are many experiments that could be implemented to further test the somatic marker hypothesis. One way would be to develop variants of the Iowa gambling task that control for some of the methodological issues and interpretation ambiguities it generates; for example, removing the reversal learning confound would make the task more difficult to consciously comprehend. Additionally, causal tests of the somatic marker hypothesis could be pursued in a greater range of populations with altered peripheral feedback, such as patients with facial paralysis. In conclusion, the somatic marker hypothesis needs to be tested in more experiments. Until a wider range of empirical approaches is employed to test it, the framework remains an intriguing idea in need of better supporting evidence. Despite these issues, the somatic marker hypothesis and the Iowa gambling task reestablish the notion that emotion has the potential to be a benefit as well as a problem during the decision-making process in humans. See also James–Lange theory Somatization References External links Neuropsychology Behavior Emotion Hypotheses Somatic psychology
Somatic marker hypothesis
[ "Biology" ]
2,580
[ "Emotion", "Behavior", "Human behavior" ]
2,256,072
https://en.wikipedia.org/wiki/Selenic%20acid
Selenic acid is the inorganic compound with the formula H2SeO4. It is an oxoacid of selenium, and its structure is more accurately described as (HO)2SeO2. It is a colorless compound. Although it has few uses, one of its salts, sodium selenate, is used in the production of glass and animal feeds. Structure and bonding The molecule is tetrahedral, as predicted by VSEPR theory. The Se–O bond length is 161 pm. In the solid state, it crystallizes in an orthorhombic structure. Preparation It is prepared by oxidising selenium compounds in lower oxidation states. One method involves the oxidation of selenium dioxide with hydrogen peroxide: SeO2 + H2O2 → H2SeO4. Unlike the production of sulfuric acid by hydration of sulfur trioxide, the hydration of selenium trioxide is an impractical method. Instead, selenic acid may also be prepared by the oxidation of selenous acid (H2SeO3) with halogens, such as chlorine or bromine, or with potassium permanganate. Using chlorine or bromine as the oxidising agents also produces hydrochloric or hydrobromic acid as a side-product, which needs to be removed from the solution since they can reduce the selenic acid to selenous acid. To obtain the anhydrous acid as a crystalline solid, the resulting solution is evaporated at temperatures below 140 °C in a vacuum. Reactions Like sulfuric acid, selenic acid is a strong acid that is hygroscopic and extremely soluble in water. Concentrated solutions are viscous. Crystalline mono- and di-hydrates are known. The monohydrate melts at 26 °C, and the dihydrate melts at −51.7 °C. Selenic acid is a stronger oxidizer than sulfuric acid, capable of liberating chlorine from chloride ions, being reduced to selenous acid in the process: H2SeO4 + 2 HCl → H2SeO3 + Cl2 + H2O. It decomposes above 200 °C, liberating oxygen gas and being reduced to selenous acid: 2 H2SeO4 → 2 H2SeO3 + O2. Selenic acid reacts with barium salts to precipitate solid BaSeO4, analogous to the sulfate. In general, selenate salts resemble sulfate salts, but are more soluble. Many selenate salts have the same crystal structure as the corresponding sulfate salts. Treatment with fluorosulfuric acid gives selenoyl fluoride: H2SeO4 + 2 HSO3F → SeO2F2 + 2 H2SO4. Hot, concentrated selenic acid reacts with gold, forming a reddish-yellow solution of gold(III) selenate: 2 Au + 6 H2SeO4 → Au2(SeO4)3 + 3 H2SeO3 + 3 H2O. Applications Selenic acid is used as a specialized oxidizing agent. References Oxidizing acids Chalcogen oxoacids Selenates
Selenic acid
[ "Chemistry" ]
547
[ "Acids", "Oxidizing acids", "Oxidizing agents" ]
2,256,109
https://en.wikipedia.org/wiki/Situation%20calculus
The situation calculus is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963. The main version of the situation calculus that is presented in this article is based on that introduced by Ray Reiter in 1991. It is followed by sections about McCarthy's 1986 version and a logic programming formulation. Overview The situation calculus represents changing scenarios as a set of first-order logic formulae. The basic elements of the calculus are: The actions that can be performed in the world The fluents that describe the state of the world The situations A domain is formalized by a number of formulae, namely: Action precondition axioms, one for each action Successor state axioms, one for each fluent Axioms describing the world in various situations The foundational axioms of the situation calculus A simple robot world will be modeled as a running example. In this world there is a single robot and several inanimate objects. The world is laid out according to a grid so that locations can be specified in terms of coordinate points. It is possible for the robot to move around the world, and to pick up and drop items. Some items may be too heavy for the robot to pick up, or fragile so that they break when they are dropped. The robot also has the ability to repair any broken items that it is holding. Elements The main elements of the situation calculus are the actions, fluents and the situations. A number of objects are also typically involved in the description of the world. The situation calculus is based on a sorted domain with three sorts: actions, situations, and objects, where the objects include everything that is not an action or a situation. Variables of each sort can be used. While actions, situations, and objects are elements of the domain, the fluents are modeled as either predicates or functions. Actions The actions form a sort of the domain. Variables of sort action can be used and also functions whose result is of sort action. Actions can be quantified. In the example robot world, possible action terms would be move(x, y) to model the robot moving to a new location (x, y), and pickup(o) to model the robot picking up an object o. A special predicate Poss is used to indicate when an action is executable. Situations In the situation calculus, a dynamic world is modeled as progressing through a series of situations as a result of various actions being performed within the world. A situation represents a history of action occurrences. In the Reiter version of the situation calculus described here, a situation does not represent a state, contrarily to the literal meaning of the term and contrarily to the original definition by McCarthy and Hayes. This point has been summarized by Reiter as follows: A situation is a finite sequence of actions. Period. It's not a state, it's not a snapshot, it's a history. The situation before any actions have been performed is typically denoted S0 and called the initial situation. The new situation resulting from the performance of an action is denoted using the function symbol do (some other references also use result). This function symbol has a situation and an action as arguments, and a situation as a result, the latter being the situation that results from performing the given action in the given situation. The fact that situations are sequences of actions and not states is enforced by an axiom stating that do(a, s) is equal to do(a′, s′) if and only if a = a′ and s = s′. This condition makes no sense if situations were states, as two different actions executed in two different states can result in the same state.
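As a rough illustration of the situations-as-histories reading (a sketch, not part of the original article), the following Python fragment encodes a situation as the tuple of actions performed so far; the move action and the robot's default starting location follow the running example, while the tuple encoding itself is an assumption:

# S0, the initial situation, is the empty history of actions.
S0 = ()

def do(action, situation):
    # do(a, s): the situation reached by performing action a in situation s.
    return situation + (action,)

def location(situation):
    # Functional fluent (discussed below): the robot's location is determined
    # by the last move action in the history, defaulting to (0, 0) in S0.
    for act in reversed(situation):
        if act[0] == "move":
            return (act[1], act[2])
    return (0, 0)

s1 = do(("move", 2, 3), do(("move", 1, 1), S0))
s2 = do(("move", 2, 3), S0)
print(location(s1) == location(s2))  # True: both histories yield the same state
print(s1 == s2)                      # False: they are different situations

Tuple equality here plays the role of the foundational axiom: two situations are equal exactly when they record the same action sequence, even when the resulting states coincide.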
In the example robot world, if the robot's first action is to move to location (2, 3), the first action is move(2, 3) and the resulting situation is do(move(2, 3), S0). If its next action is to pick up the ball, the resulting situation is do(pickup(Ball), do(move(2, 3), S0)). Situation terms like do(move(2, 3), S0) and do(pickup(Ball), do(move(2, 3), S0)) denote the sequences of executed actions, and not the description of the state that results from execution. Fluents Statements whose truth value may change are modeled by relational fluents, predicates that take a situation as their final argument. Also possible are functional fluents, functions that take a situation as their final argument and return a situation-dependent value. Fluents may be thought of as "properties of the world". In the example, the fluent is_carrying(o, s) can be used to indicate that the robot is carrying a particular object o in a particular situation s. If the robot initially carries nothing, is_carrying(Ball, S0) is false while is_carrying(Ball, do(pickup(Ball), S0)) is true. The location of the robot can be modeled using a functional fluent location(s) that returns the location of the robot in a particular situation. Formulae The description of a dynamic world is encoded in second-order logic using three kinds of formulae: formulae about actions (preconditions and effects), formulae about the state of the world, and foundational axioms. Action preconditions Some actions may not be executable in a given situation. For example, it is impossible to put down an object unless one is in fact carrying it. The restrictions on the performance of actions are modeled by literals of the form Poss(a, s), where a is an action, s a situation, and Poss is a special binary predicate denoting executability of actions. In the example, the condition that dropping an object is only possible when one is carrying it is modeled by: Poss(drop(o), s) ↔ is_carrying(o, s). As a more complex example, the following models that the robot can carry only one object at a time, and that some objects are too heavy for the robot to lift (indicated by the predicate heavy): Poss(pickup(o), s) ↔ (∀z ¬is_carrying(z, s)) ∧ ¬heavy(o). Action effects Given that an action is possible in a situation, one must specify the effects of that action on the fluents. This is done by the effect axioms. For example, the fact that picking up an object causes the robot to be carrying it can be modeled as: Poss(pickup(o), s) → is_carrying(o, do(pickup(o), s)). It is also possible to specify conditional effects, which are effects that depend on the current state. The following models that some objects are fragile (indicated by the predicate fragile) and dropping them causes them to be broken (indicated by the fluent broken): Poss(drop(o), s) ∧ fragile(o) → broken(o, do(drop(o), s)). While this formula correctly describes the effect of the actions, it is not sufficient to correctly describe the action in logic, because of the frame problem. The frame problem While the above formulae seem suitable for reasoning about the effects of actions, they have a critical weakness: they cannot be used to derive the non-effects of actions. For example, it is not possible to deduce that after picking up an object, the robot's location remains unchanged. This requires a so-called frame axiom, a formula like: Poss(pickup(o), s) → location(do(pickup(o), s)) = location(s). The need to specify frame axioms has long been recognised as a problem in axiomatizing dynamic worlds, and is known as the frame problem. As there are generally a very large number of such axioms, it is very easy for the designer to leave out a necessary frame axiom, or to forget to modify all appropriate axioms when a change to the world description is made. The successor state axioms The successor state axioms "solve" the frame problem in the situation calculus.
According to this solution, the designer must enumerate as effect axioms all the ways in which the value of a particular fluent can be changed. The effect axioms affecting the value of fluent F(x, s) can be written in generalised form as a positive and a negative effect axiom: Poss(a, s) ∧ γF+(x, a, s) → F(x, do(a, s)) and Poss(a, s) ∧ γF−(x, a, s) → ¬F(x, do(a, s)). The formula γF+(x, a, s) describes the conditions under which action a in situation s makes the fluent F become true in the successor situation do(a, s). Likewise, γF−(x, a, s) describes the conditions under which performing action a in situation s makes fluent F false in the successor situation. If this pair of axioms describes all the ways in which fluent F can change value, they can be rewritten as a single axiom: Poss(a, s) → [F(x, do(a, s)) ↔ γF+(x, a, s) ∨ (F(x, s) ∧ ¬γF−(x, a, s))]. In words, this formula states: "given that it is possible to perform action a in situation s, the fluent F would be true in the resulting situation do(a, s) if and only if performing a in s would make it true, or it is true in situation s and performing a in s would not make it false." By way of example, the value of the fluent broken introduced above is given by the following successor state axiom: Poss(a, s) → [broken(o, do(a, s)) ↔ (a = drop(o) ∧ fragile(o)) ∨ (broken(o, s) ∧ a ≠ repair(o))]. States The properties of the initial or any other situation can be specified by simply stating them as formulae. For example, a fact about the initial state is formalized by making assertions about S0 (which is not a state, but a situation). The following statements model that initially, the robot carries nothing, is at location (0, 0), and there are no broken objects: ¬∃o is_carrying(o, S0), location(S0) = (0, 0), and ¬∃o broken(o, S0). Foundational axioms The foundational axioms of the situation calculus formalize the idea that situations are histories by having do(a, s) = do(a′, s′) ↔ a = a′ ∧ s = s′. They also include other properties such as the second-order induction on situations. Regression Regression is a mechanism for proving consequences in the situation calculus. It is based on expressing a formula containing the situation do(a, s) in terms of a formula containing the action a and the situation s, but not the situation do(a, s). By iterating this procedure, one can end up with an equivalent formula containing only the initial situation S0. Proving consequences is supposedly simpler from this formula than from the original one. GOLOG GOLOG is a logic programming language based on the situation calculus. The original version of the situation calculus The main difference between the original situation calculus by McCarthy and Hayes and the one in use today is the interpretation of situations. In the modern version of the situation calculus, a situation is a sequence of actions. Originally, situations were defined as "the complete state of the universe at an instant of time". It was clear from the beginning that such situations could not be completely described; the idea was simply to give some statements about situations, and derive consequences from them. This is also different from the approach that is taken by the fluent calculus, where a state can be a collection of known facts, that is, a possibly incomplete description of the universe. In the original version of the situation calculus, fluents are not reified. In other words, conditions that can change are represented by predicates and not by functions. Actually, McCarthy and Hayes defined a fluent as a function that depends on the situation, but they then proceeded always using predicates to represent fluents. For example, the fact that it is raining at place p in the situation s is represented by the literal raining(p, s). In the 1986 version of the situation calculus by McCarthy, functional fluents are used. For example, the position of an object o in the situation s is represented by the value of location(o, s), where location is a function.
Statements about such functions can be given using equality: location(o, s) = location(o, s′) means that the location of the object o is the same in the two situations s and s′. The execution of actions is represented by the function result: the execution of the action a in the situation s is the situation result(a, s). The effects of actions are expressed by formulae relating fluents in situation s and fluents in situations result(a, s). For example, that the action of opening the door results in the door being open if not locked is represented by: ¬locked(s) → open(result(opens, s)). The predicates locked and open represent the conditions of a door being locked and open, respectively. Since these conditions may vary, they are represented by predicates with a situation argument. The formula says that if the door is not locked in a situation, then the door is open after executing the action of opening, this action being represented by the constant opens. These formulae are not sufficient to derive everything that is considered plausible. Indeed, fluents at different situations are only related if they are preconditions and effects of actions; if a fluent is not affected by an action, there is no way to deduce it did not change. For example, the formula above does not imply that ¬locked(result(opens, s)) follows from ¬locked(s), which is what one would expect (the door is not made locked by opening it). In order for inertia to hold, formulae called frame axioms are needed. These formulae specify all non-effects of actions: ¬locked(s) → ¬locked(result(opens, s)). In the original formulation of the situation calculus, the initial situation, later denoted by S0, is not explicitly identified. The initial situation is not needed if situations are taken to be descriptions of the world. For example, to represent the scenario in which the door was closed but not locked and the action of opening it is performed is formalized by taking a constant S0 to mean the initial situation and making statements about it (e.g., ¬open(S0) ∧ ¬locked(S0)). That the door is open after the change is reflected by the formula open(result(opens, S0)) being entailed. The initial situation is instead necessary if, like in the modern situation calculus, a situation is taken to be a history of actions, as the initial situation represents the empty sequence of actions. The version of the situation calculus introduced by McCarthy in 1986 differs from the original one by the use of functional fluents (e.g., location(o, s) is a term representing the position of o in the situation s) and by an attempt to use circumscription to replace the frame axioms. The situation calculus as a logic program It is also possible (e.g. Kowalski 1979, Apt and Bezem 1990, Shanahan 1997) to write the situation calculus as a logic program: holds(f, result(a, s)) ← poss(a, s) ∧ initiates(a, f, s) and holds(f, result(a, s)) ← poss(a, s) ∧ holds(f, s) ∧ not terminates(a, f, s). Here holds is a meta-predicate and the variable f ranges over fluents. The predicates poss, initiates and terminates correspond to the predicates Poss, γF+ and γF−, respectively. The left arrow ← is half of the equivalence ↔. The other half is implicit in the completion of the program, in which negation is interpreted as negation as failure. Induction axioms are also implicit, and are needed only to prove program properties. Backward reasoning as in SLD resolution, which is the usual mechanism used to execute logic programs, implements regression implicitly. See also Frame problem Event calculus References J. McCarthy and P. Hayes (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence, 4:463–502. Edinburgh University Press, 1969. R. Kowalski (1979). Logic for Problem Solving - Elsevier North Holland. K.R. Apt and M. Bezem (1990). Acyclic Programs. In: 7th International Conference on Logic Programming. MIT Press. Jerusalem, Israel. R.
Reiter (1991). The frame problem in the situation calculus: a simple solution (sometimes) and a completeness result for goal regression. In Vladimir Lifshitz, editor, Artificial intelligence and mathematical theory of computation: papers in honour of John McCarthy, pages 359–380, San Diego, CA, USA. Academic Press Professional, Inc. 1991. M. Shanahan (1997). Solving the Frame Problem: a Mathematical Investigation of the Common Sense Law of Inertia. MIT Press. H. Levesque, F. Pirri, and R. Reiter (1998). Foundations for the situation calculus. Electronic Transactions on Artificial Intelligence, 2(3–4):159-178. F. Pirri and R. Reiter (1999). Some contributions to the metatheory of the Situation Calculus. Journal of the ACM, 46(3):325–361. R. Reiter (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. The MIT Press. 1963 introductions Logic programming Logical calculi
Situation calculus
[ "Mathematics" ]
3,043
[ "Mathematical logic", "Logical calculi" ]
2,256,337
https://en.wikipedia.org/wiki/Radio%20spectrum%20pollution
Radio spectrum pollution is the straying of waves in the radio and electromagnetic spectrums outside their allocations that cause problems for some activities. It is of particular concern to radio astronomers. Radio spectrum pollution is mitigated by effective spectrum management. Within the United States, the Communications Act of 1934 grants authority for spectrum management to the President for all federal use (47 U.S.C. 305). The National Telecommunications and Information Administration (NTIA) manages the spectrum for the Federal Government. Its rules are found in the "NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management". The Federal Communications Commission (FCC) manages and regulates all domestic non-federal spectrum use (47 U.S.C. 301). Each country typically has its own spectrum regulatory organization. Internationally, the International Telecommunication Union (ITU) coordinates spectrum policy. See also Electromagnetic radiation and health Frequency allocation Radio quiet zone Spectrum management References External links Time Warner Cable's TV channel shift draws interference from Verizon LTE smartphones December 4, 2013 Fierce Wireless Electromagnetic spectrum Light pollution Radio communications Radio astronomy Radio spectrum Pollution
Radio spectrum pollution
[ "Physics", "Astronomy", "Engineering" ]
246
[ "Telecommunications engineering", "Radio spectrum", "Spectrum (physical sciences)", "Radio communications", "Electromagnetic spectrum", "Astronomy stubs", "Radio astronomy", "Astronomical sub-disciplines" ]
2,256,597
https://en.wikipedia.org/wiki/Safe-life%20design
In safe-life design, products are intended to be removed from service at a specific design life. Safe-life is particularly relevant to simple metal aircraft, where airframe components are subjected to alternating loads over the lifetime of the aircraft which makes them susceptible to metal fatigue. In certain areas such as in wing or tail components, structural failure in flight would be catastrophic. The safe-life design technique is employed in critical systems which are either very difficult to repair or whose failure may cause severe damage to life and property. These systems are designed to work for years without requirement of any repairs. The disadvantage of the safe-life design philosophy is that serious assumptions must be made regarding the alternating loads imposed on the aircraft, so if those assumptions prove to be inaccurate, cracks may commence prior to the component being removed from service. To counter this disadvantage, alternative design philosophies like fail-safe design and fault-tolerant design were developed. The automotive industry One application of the safe-life approach is in planning for and ensuring the durability of mechanisms in the automotive industry. When the repetitive loading on mechanical structures intensified with the advent of the steam engine, back in the mid-1800s, this approach was established (Oja 2013). According to Michael Oja, "Engineers and academics began to understand the effect that cyclic stress (or strain) has on the life of a component; a curve was developed relating the magnitude of the cyclic stress (S) to the logarithm of the number of cycles to failure (N)" (Oja 2013). The S-N curve became the fundamental relation in safe-life designs. The curve is reliant on many conditions, including the ratio of maximum load to minimum load (R-ratio), the type of material being inspected, and the regularity at which the cyclic stresses (or strains) are applied. Today, the curve is still derived by experimentally testing laboratory specimens at many continuous cyclic load levels, and detecting the number of cycles to failure (Oja 2013). Michael Oja states that, "Unsurprisingly, as the load decreases, the life of the specimen increases" (Oja 2013). The practical limit of experimental testing has been due to the frequency limits of hydraulic-powered test machines. The load at which this high-cycle life occurs has come to be recognized as the fatigue strength of the material (Oja 2013).
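In the high-cycle regime, the S-N curve is often summarized by Basquin's empirical power-law relation S = A·N^b. The Python sketch below is illustrative only; Basquin's equation itself is standard, but the coefficient A and exponent b used here are invented, not material data from the sources cited above:

def cycles_to_failure(stress, A=1000.0, b=-0.1):
    # Invert Basquin's relation S = A * N**b for N, the cycles to failure.
    # A and b are hypothetical fitting constants for a made-up material.
    return (stress / A) ** (1.0 / b)

for s in (500, 400, 300):
    print(f"S = {s} MPa -> N = {cycles_to_failure(s):,.0f} cycles")

As the quoted observation predicts, each reduction in cyclic stress buys a disproportionately longer life, which is the trade-off a safe-life retirement schedule is built on.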
Aerospace Aircraft structure There are two generic types of aircraft structure, safe life and fail safe. The former is one that has low residual strength if a primary load-bearing member should fail, whereas the latter has alternative load paths so that if a primary load-bearing member cracks, residual strength remains because the loads can be assumed by adjacent members. In modern aircraft, fail-safe structures with up to three alternative load paths are provided, but back in 1947 the main load-bearing structure was safe life. This did not matter on an interim airframe designed for operations in the calm upper air, but at around 500 ft the loads and stresses were more volatile. Helicopter structure The safe-life design philosophy is applied to all helicopter structures. In the current generation of Army helicopters, such as the UH-60 Black Hawk, composite materials make up as much as 17 percent of the airframe and rotor weight (Reddick). Harold Reddick states that, “With the advent of major helicopter composite structures R&D projects, such as the Advanced Composite Airframe Program (ACAP), and Manufacturing Methods and Technology (MM&T) projects, such as UH-60 Low Cost Composite Blade Program, it is estimated that within a few years composite materials could be applied to as much as 80% of the airframe and rotor weight of a helicopter in a production program” (Reddick). Along with this application, it is essential that sound, definitive design criteria be developed so that the composite structures have high fatigue lives for economy of ownership and good damage tolerance for flight safety. Safe-life and damage-tolerant criteria are applied to all helicopter flight-critical components (Reddick). See also Fail-safe Fault-tolerant design Safety engineering Damage tolerance 1945 Australian National Airways Stinson crash References Citations Oja, Michael (2013-03-18). "Structural Design Concepts: Overview of Safe Life and Damage Tolerance". Vextec.com | Reducing Life Cycle Costs From Design To Field Service. Retrieved 2019-06-11. "Fatigue (material)", Wikipedia, 2019-06-04, retrieved 2019-06-11 Reddick, Harold. "Safe-Life and Damage-Tolerant Design Approaches for Helicopter Structures" (PDF). NASA. Retrieved June 11, 2019. External links Design
Safe-life design
[ "Engineering" ]
962
[ "Design stubs", "Design" ]
2,256,654
https://en.wikipedia.org/wiki/Iterative%20deepening%20A%2A
Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find the shortest path between a designated start node and any member of a set of goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search that borrows the idea to use a heuristic function to conservatively estimate the remaining cost to get to the goal from the A* search algorithm. Since it is a depth-first search algorithm, its memory usage is lower than in A*, but unlike ordinary iterative deepening search, it concentrates on exploring the most promising nodes and thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not utilize dynamic programming and therefore often ends up exploring the same nodes many times. While the standard iterative deepening depth-first search uses search depth as the cutoff for each iteration, IDA* uses the more informative f(node) = g(node) + h(node), where g(node) is the cost to travel from the root to node and h(node) is a problem-specific heuristic estimate of the cost to travel from node to the goal. The algorithm was first described by Richard Korf in 1985. Description Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold. As in A*, the heuristic has to have particular properties to guarantee optimality (shortest paths). See Properties below. Pseudocode

path                current search path (acts like a stack)
node                current node (last node in current path)
g                   the cost to reach current node
f                   estimated cost of the cheapest path (root..node..goal)
h(node)             estimated cost of the cheapest path (node..goal)
cost(node, succ)    step cost function
is_goal(node)       goal test
successors(node)    node expanding function, expand nodes ordered by g + h(node)
ida_star(root)      return either NOT_FOUND or a pair with the best path and its cost

procedure ida_star(root)
    bound := h(root)
    path := [root]
    loop
        t := search(path, 0, bound)
        if t = FOUND then return (path, bound)
        if t = ∞ then return NOT_FOUND
        bound := t
    end loop
end procedure

function search(path, g, bound)
    node := path.last
    f := g + h(node)
    if f > bound then return f
    if is_goal(node) then return FOUND
    min := ∞
    for succ in successors(node) do
        if succ not in path then
            path.push(succ)
            t := search(path, g + cost(node, succ), bound)
            if t = FOUND then return FOUND
            if t < min then min := t
            path.pop()
        end if
    end for
    return min
end function

Properties Like A*, IDA* is guaranteed to find the shortest path leading from the given start node to any goal node in the problem graph, if the heuristic function h is admissible, that is h(n) ≤ h*(n) for all nodes n, where h*(n) is the true cost of the shortest path from n to the nearest goal (the "perfect heuristic"). IDA* is beneficial when the problem is memory constrained. A* search keeps a large queue of unexplored nodes that can quickly fill up memory. By contrast, because IDA* does not remember any node except the ones on the current path, it requires an amount of memory that is only linear in the length of the solution that it constructs.
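The pseudocode above translates almost line for line into Python. In the sketch below (not from the original description), the four-node weighted graph and its heuristic table are invented for illustration and chosen so that the heuristic is admissible:

import math

# Toy weighted graph: edges[node] = [(successor, step cost), ...]
edges = {
    "A": [("B", 1), ("C", 2)],
    "B": [("D", 3)],
    "C": [("D", 1)],
    "D": [],
}
# Admissible heuristic for goal "D": it never overestimates the true cost.
h = {"A": 2, "B": 2, "C": 1, "D": 0}

def ida_star(root, goal):
    bound = h[root]
    path = [root]
    while True:
        t = search(path, 0, bound, goal)
        if t == "FOUND":
            return path, bound
        if t == math.inf:
            return None
        bound = t  # next threshold: smallest f-value that exceeded the bound

def search(path, g, bound, goal):
    node = path[-1]
    f = g + h[node]
    if f > bound:
        return f
    if node == goal:
        return "FOUND"
    minimum = math.inf
    for succ, cost in edges[node]:
        if succ not in path:  # keep the current path acyclic
            path.append(succ)
            t = search(path, g + cost, bound, goal)
            if t == "FOUND":
                return "FOUND"
            minimum = min(minimum, t)
            path.pop()
    return minimum

print(ida_star("A", "D"))  # (['A', 'C', 'D'], 3)

The first iteration uses bound h(A) = 2 and fails, raising the bound to 3; the second iteration finds the optimal path A–C–D of cost 3.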
Its time complexity is analyzed by Korf et al. under the assumption that the heuristic cost estimate h is consistent, meaning that h(n) ≤ cost(n, n′) + h(n′) for all nodes n and all neighbors n′ of n; they conclude that compared to a brute-force tree search over an exponential-sized problem, IDA* achieves a smaller search depth (by a constant factor), but not a smaller branching factor. Recursive best-first search is another memory-constrained version of A* search that can be faster in practice than IDA*, since it requires less regenerating of nodes. Applications Applications of IDA* are found in such problems as planning. Solving the Rubik's Cube is an example of a planning problem that is amenable to solving with IDA*. References Graph algorithms Routing algorithms Search algorithms Game artificial intelligence Articles with example pseudocode
Iterative deepening A*
[ "Mathematics" ]
951
[ "Game theory", "Game artificial intelligence" ]
2,256,814
https://en.wikipedia.org/wiki/Multipath%20I/O
In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique that defines more than one physical path between the CPU in a computer system and its mass-storage devices through the buses, controllers, switches, and bridge devices connecting them. As an example, a SCSI hard disk drive may connect to two SCSI controllers on the same computer, or a disk may connect to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route the I/O through the remaining controller, port or switch transparently and with no changes visible to the applications, other than perhaps resulting in increased latency. Multipath software layers can leverage the redundant paths to provide performance-enhancing features, including dynamic load balancing, traffic shaping, automatic path management, and dynamic reconfiguration. See also Device mapper Linux DM Multipath External links Linux Multipathing, Linux Symposium 2005 p. 147 VxDMP white paper, Veritas Dynamic Multi pathing Linux Multipath Usage guide Computer data storage Computer storage technologies Fault-tolerant computer systems
Multipath I/O
[ "Technology", "Engineering" ]
223
[ "Reliability engineering", "Computer hardware stubs", "Computer systems", "Fault-tolerant computer systems", "Computing stubs" ]
2,256,844
https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s%20theorem%20%28conformal%20mapping%29
In mathematics, Carathéodory's theorem is a theorem in complex analysis, named after Constantin Carathéodory, which extends the Riemann mapping theorem. The theorem, published by Carathéodory in 1913, states that any conformal mapping sending the unit disk to some region in the complex plane bounded by a Jordan curve extends continuously to a homeomorphism from the unit circle onto the Jordan curve. The result is one of Carathéodory's results on prime ends and the boundary behaviour of univalent holomorphic functions. Proofs of Carathéodory's theorem The first proof of Carathéodory's theorem presented here is a summary of a short self-contained account from the literature; related proofs can be found in several standard references. Clearly if f admits an extension to a homeomorphism, then ∂U must be a Jordan curve. Conversely if ∂U is a Jordan curve, the first step is to prove f extends continuously to the closure of D. In fact this will hold if and only if f is uniformly continuous on D: for this is true if it has a continuous extension to the closure of D; and, if f is uniformly continuous, it is easy to check f has limits on the unit circle and the same inequalities for uniform continuity hold on the closure of D. Suppose that f is not uniformly continuous. In this case there must be an ε > 0 and a point ζ on the unit circle and sequences zn, wn tending to ζ with |f(zn) − f(wn)| ≥ 2ε. This is shown below to lead to a contradiction, so that f must be uniformly continuous and hence has a continuous extension to the closure of D. For 0 < r < 1, let γr be the curve given by the arc of the circle |z − ζ| = r lying within D. Then f ∘ γr is a Jordan curve. Its length L(r) can be estimated using the Cauchy–Schwarz inequality: L(r) = ∫γr |f′(z)| |dz|, so L(r)² ≤ 2πr ∫γr |f′(z)|² |dz|. Hence there is a "length-area estimate": ∫ L(r)²/r dr ≤ 2π ∬D |f′(z)|² dx dy = 2π · area f(D) < ∞, the integral on the left being taken over 0 < r < 1. The finiteness of the integral on the left hand side implies that there is a sequence rn decreasing to 0 with L(rn) tending to 0. But the length of a curve g(t) for t in (a, b) is given by the integral of |g′(t)| over (a, b). The finiteness of L(rn) therefore implies that the curve f ∘ γrn has limiting points an, bn at its two ends with |an − bn| ≤ L(rn), so this distance, as well as the diameter of the curve, tends to 0. These two limit points must lie on ∂U, because f is a homeomorphism between D and U and thus a sequence converging in U has to be the image under f of a sequence converging in D. By assumption there exists a homeomorphism β between the circle ∂D and ∂U. Since β−1 is uniformly continuous, the distance between the two points ξn and ηn corresponding to an and bn in ∂U must tend to 0. So eventually the smallest circular arc in ∂D joining ξn and ηn is defined. Denote by τn the image of this arc under β. By uniform continuity of β, the diameter of τn in ∂U tends to 0. Together τn and f ∘ γrn form a simple Jordan curve. Its interior Un is contained in U by the Jordan curve theorem for ∂U and ∂Un: to see this, notice that U is the interior of ∂U, as it is bounded, connected and it is both open and closed in the complement of ∂U; so the exterior region of ∂U is unbounded, connected and does not intersect ∂Un, hence its closure is contained in the closure of the exterior of ∂Un; taking complements, we get the desired inclusion. The diameter of ∂Un tends to 0 because the diameters of τn and f ∘ γrn tend to 0. Hence the diameter of Un tends to 0. (For the closure of Un is a compact set, hence contains two points u and v such that the distance between them is maximal. It is easy to see that u and v must lie in ∂Un, and the diameters of both Un and ∂Un equal |u − v|.) Now if Vn denotes the intersection of D with the disk |z − ζ| < rn, then for all sufficiently large n f(Vn) = Un.
Indeed, the arc γrn divides D into Vn and a complementary region V′n, so under the conformal homeomorphism f the curve f ∘ γrn divides U into f(Vn) and a complementary region f(V′n); Un is a connected component of U \ f ∘ γrn, as it is connected and is both open and closed in this set, hence equals either f(Vn) or f(V′n). The diameter of f(V′n) does not decrease with increasing n, for m ≥ n implies V′n ⊆ V′m. Since the diameter of Un tends to 0 as n goes to infinity, it is eventually less than the diameter of f(V′n) and then necessarily f(Vn) = Un. So the diameter of f(Vn) tends to 0. On the other hand, passing to subsequences of (zn) and (wn) if necessary, it may be assumed that zn and wn both lie in Vn. But this gives a contradiction since |f(zn) − f(wn)| ≥ 2ε. So f must be uniformly continuous on D. Thus f extends continuously to the closure of D. Since f(D) = U, by compactness f carries the closure of D onto the closure of U and hence ∂D onto ∂U. If f is not one-one, there are points u, v on ∂D with u ≠ v and f(u) = f(v). Let X and Y be the radial lines from 0 to u and v. Then f(X) ∪ f(Y) is a Jordan curve. Arguing as before, its interior V is contained in U and is a connected component of U \ (f(X) ∪ f(Y)). On the other hand, D \ (X ∪ Y) is the disjoint union of two open sectors W1 and W2. Hence, for one of them, W1 say, f(W1) = V. Let Z be the portion of ∂W1 on the unit circle, so that Z is a closed arc and f(Z) is a subset of both ∂U and the closure of V. But their intersection is a single point and hence f is constant on Z. By the Schwarz reflection principle, f can be analytically continued by conformal reflection across the circular arc. Since non-constant holomorphic functions have isolated zeros, this forces f to be constant, a contradiction. So f is one-one and hence a homeomorphism on the closure of D. Two further proofs of Carathéodory's theorem are described in the literature. The first proof follows Carathéodory's original method of proof from 1913 using properties of Lebesgue measure on the circle: the continuous extension of the inverse function g of f to ∂U is justified by Fatou's theorem on the boundary behaviour of bounded harmonic functions on the unit disk. The second proof is based on a sharpening of the maximum modulus inequality for bounded holomorphic functions h defined on a bounded domain V: if a lies in V, then |h(a)| ≤ m^t · M^(1−t), where 0 ≤ t ≤ 1, M is the maximum modulus of h for sequential limits on ∂V and m is the maximum modulus of h for sequential limits on the part of ∂V lying in a sector centred on a subtending an angle 2πt at a. Continuous extension and the Carathéodory-Torhorst theorem An extension of the theorem states that a conformal isomorphism f : D → U, where U is a simply connected subset of the Riemann sphere, extends continuously to the unit circle if and only if the boundary of U is locally connected. This result is often also attributed to Carathéodory, but was first stated and proved by Marie Torhorst in her 1918 thesis, under the supervision of Hans Hahn, using Carathéodory's theory of prime ends. More precisely, Torhorst proved that local connectivity is equivalent to the domain having only prime ends of the first kind. By the theory of prime ends, the latter property, in turn, is equivalent to having a continuous extension. Notes References Conformal mappings Homeomorphisms Theorems in complex analysis
Carathéodory's theorem (conformal mapping)
[ "Mathematics" ]
1,689
[ "Theorems in mathematical analysis", "Topology", "Homeomorphisms", "Theorems in complex analysis" ]
2,256,924
https://en.wikipedia.org/wiki/Capacitor-input%20filter
A capacitor-input filter is a filter circuit in which the first element is a capacitor connected in parallel with the output of the rectifier in a linear power supply. The capacitor increases the DC voltage and decreases the ripple voltage components of the output. The capacitor is often referred to as a smoothing capacitor or reservoir capacitor. The capacitor is often followed by other alternating series and parallel filter elements to further reduce ripple voltage, or adjust DC output voltage. It may also be followed by a voltage regulator which virtually eliminates any remaining ripple voltage, and adjusts the DC voltage output very precisely to match the DC voltage required by the circuit. Operation While the rectifier is conducting and its output potential is higher than the voltage across the capacitor, the capacitor stores energy from the transformer; when the output of the rectifier falls below the voltage on the capacitor, the capacitor discharges energy into the circuit. Since the rectifier conducts current only in the forward direction, any energy discharged by the capacitor will flow into the load. This results in output of a DC voltage upon which is superimposed a waveform referred to as a sawtooth wave. The sawtooth wave is a convenient linear approximation to the actual waveform, which is exponential for both charge and discharge. The crests of the sawtooth waves will be more rounded when the DC resistance of the transformer secondary is higher. As a rough design estimate, for a full-wave rectifier supplying a load current I, the peak-to-peak ripple is approximately Vpp ≈ I/(2fC), where f is the supply frequency and C is the capacitance; doubling the capacitance therefore roughly halves the ripple. Ripple current A ripple current which is 90 degrees out of phase with the ripple voltage also passes through the capacitor. See also Rectifier#Capacitor input filter Choke-input filter References Linear filters Analog circuits Electronic filter topology
Capacitor-input filter
[ "Engineering" ]
360
[ "Analog circuits", "Electronic engineering" ]
2,256,943
https://en.wikipedia.org/wiki/International%20Academy%20of%20Quantum%20Molecular%20Science
The International Academy of Quantum Molecular Science (IAQMS) is an international scientific learned society covering all applications of quantum theory to chemistry and chemical physics. It was created in Menton in 1967. The founding members were Raymond Daudel, Per-Olov Löwdin, Robert G. Parr, John Pople and Bernard Pullman. Its foundation was supported by Louis de Broglie. Originally, the academy had 25 regular members under 65 years of age. This was later raised to 30, and then to 35. There is no limit on the number of members over 65 years of age. The members are "chosen among the scientists of all countries who have distinguished themselves by the value of their scientific work, their role of pioneer or leader of a school in the broad field of quantum chemistry, i.e. the application of quantum mechanics to the study of molecules and macromolecules". As of 2006, the academy consisted of 90 members. The academy organizes the International Congress of Quantum Chemistry every three years. The academy awards a medal to a young member of the scientific community who has distinguished themselves by a pioneering and important contribution. The award has been made every year since 1967. Presidents Presidents and vice-presidents of the academy since its inception: Members Ludwik Adamowicz Ali Alavi Millard H. Alexander Jean-Marie André Evert-Jan Baerends Vincenzo Barone Rodney J. Bartlett Mikhail V. Basilevsky Axel D. Becke Joel M. Bowman Jean-Luc Brédas Ria Broer-Braam A. David Buckingham Kieron Burke Petr Cársky Emily A. Carter Lorenz S. Cederbaum David M. Ceperley Garnet Kin-Lic Chan Jiří Čížek David Clary Enrico Clementi Ernest R. Davidson Wolfgang Domcke Thom Dunning Michel Dupuis Odile Eisenstein Jiali Gao Jürgen Gauß Peter Gill William A. Goddard, III Leticia González Mark S. Gordon Stefan Grimme George G. Hall Sharon Hammes-Schiffer Martin Head-Gordon Trygve Helgaker Eric J. Heller Kimihiko Hirao So Hirata Roald Hoffmann Kendall N. Houk Bogumil Jeziorski Poul Jørgensen William L. Jorgensen Joshua Jortner Martin Karplus Kwang S. Kim Wim Klopper Peter Knowles Ronnie Kosloff Georg Kresse Anna Krylov Werner Kutzelnigg Roland Lefebvre William A. Lester Raphael D. Levine Mel Levy Shuhua Li Jan Erik Linderberg Wenjian Liu Jean-Claude Lorquet Nancy Makri Jean-Paul Malrieu David E. Manolopoulos Rudolph A. Marcus Todd J. Martinez Roy McWeeny Benedetta Mennucci Wilfried E. Meyer Josef Michl William H. Miller Debashis Mukherjee Saburo Nagakura Shigeru Nagase Hiroshi Nakatsuji Frank Neese Willem C. Nieuwpoort Evgueni E. Nikitin Jozef Noga Marcel Nooijen Christian Ochsenfeld Jeppe Olsen Josef Paldus Michele Parrinello Ruben Pauncz John P. Perdew Sigrid D. Peyerimhoff Piotr Piecuch Peter Pulay Pekka Pyykkö Leo Radom Krishnan Raghavachari Mark A. Ratner Julia Rice Michael A. Robb Clemens C. J. Roothaan Ursula Röthlisberger Klaus Ruedenberg Lionel Salem Trond Saue Andreas Savin Henry F. Schaefer, III George C. Schatz H. Bernhard Schlegel Peter Schwerdtfeger Gustavo E. Scuseria Sason Shaik Zhigang Shuai Per E. M. Siegbahn John F. Stanton Péter Szalay Krzysztof Szalewicz Seiichiro Ten-no Walter Thiel Jacopo Tomasi Donald G. Truhlar John Tully Miroslav Urban Ad van der Avoird Alain Veillard Luuk Visscher Gregory Voth Arieh Warshel Hans-Joachim Werner Weitao Yang Rudolf Zahradnik Deceased members Reinhart Ahlrichs David R. Bates S. Francis Boys Louis de Broglie Charles A. Coulson David P. Craig Alexander Dalgarno Raymond Daudel Alexander S. Davydov Michael J. S. 
Dewar Henry Eyring Inga Fischer-Hjalmars Vladimir Aleksandrovich Fock Kenichi Fukui Rezsö Gaspar Nicholas C. Handy Hermann Hartmann Edgar Heilbronner Walter Heitler Gerhard Herzberg Joseph O. Hirschfelder Erich Hückel Friedrich Hund Michael Kasha Shigeki Kato Walter Kohn Wlodzimierz Kolos Masao Kotani Jaroslav Koutecky John C. Light William N. Lipscomb H. Christopher Longuet-Higgins Per-Olov Löwdin Frederick A. Matsen Harden M. McConnell Keiji Morokuma Robert S. Mulliken Robert G. Parr Linus Pauling John Pople Alberte Pullman Bernard Pullman Björn Olof Roos Camille Sandorfy Paul von Rague Schleyer Eolo Scrocco Isaiah Shavitt Massimo Simonetta John C. Slater Au-chin Tang Edward Teller John H. Van Vleck E. Bright Wilson Tom Ziegler References External links Official website Chemistry education Organizations established in 1967 Quantum chemistry Molecular physics International academies International scientific organizations
International Academy of Quantum Molecular Science
[ "Physics", "Chemistry" ]
1,116
[ "Quantum chemistry", "Molecular physics", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
2,256,979
https://en.wikipedia.org/wiki/Sigrid%20D.%20Peyerimhoff
Sigrid Doris Peyerimhoff (born 12 January 1937, in Rottweil) is a theoretical chemist and Emeritus Professor at the Institute of Physical and Theoretical Chemistry, University of Bonn, Germany. Education After completing her abitur, Peyerimhoff studied physics at the University of Gießen, completing her degree in 1961 and receiving her doctorate under the supervision of Bernhard Kockel in 1963. After researching at the University of Chicago, the University of Washington, and Princeton University, she returned to Germany and gained her habilitation at the University of Gießen in 1967. She became professor of theoretical chemistry at the University of Mainz in 1970, and at the University of Bonn in 1972. Quantum chemistry Her contributions have been to the development of ab initio quantum chemical methods, in particular, multireference configuration interaction, and to their application in many fields of physics and chemistry. Particular emphasis has been given to electronically excited states, molecular spectra and photochemistry. Many studies are on atmospheric molecules and ions, their lifetimes in excited states and decomposition due to radiative and non-radiative processes, and on stability and spectra of clusters. Some of her students became well known for their contributions to quantum chemistry, including Bernd Engels, Stefan Grimme, Bernd A. Hess, Christel Marian, Matthias Ernzerhoff and Bernd M. Nestmann. Awards and honors During her career, she received several awards and memberships: 1977 Medal of the International Academy of Quantum Molecular Science 1988 Gottfried Wilhelm Leibniz-Prize 1994 Cross of Merit of the Federal Republic of Germany 2007 Cothenius Medal of the Academy of Sciences Leopoldina 2008 Grand Cross of Merit of the Federal Republic of Germany 2011 Honorary doctor of the University of Ulm She is also a member of the International Academy of Quantum Molecular Science. Publications She is the author of over 400 original articles in various international journals and coauthor of Umweltstandards: Fakten und Bewertungsprobleme am Beispiel des Strahlenrisikos. Her history of computational chemistry in Germany is of particular note. She edited Interactions in Molecules. Partial bibliography Peyerimhoff, Sigrid D. Interactions in Molecules: Electronic and Steric Effects. Weinheim: Wiley-VCH, 2003. References External links Her International Academy of Quantum Molecular Science web page University of Bonn web page 1937 births Living people 20th-century German chemists Theoretical chemists Gottfried Wilhelm Leibniz Prize winners Commanders Crosses of the Order of Merit of the Federal Republic of Germany Members of the International Academy of Quantum Molecular Science Academic staff of the University of Bonn University of Giessen alumni Academic staff of Johannes Gutenberg University Mainz People from Rottweil (district) German women chemists Computational chemists 20th-century German women scientists 21st-century German chemists Recipients of the Cothenius Medal
Sigrid D. Peyerimhoff
[ "Chemistry" ]
591
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
2,257,041
https://en.wikipedia.org/wiki/Multireference%20configuration%20interaction
In quantum chemistry, the multireference configuration interaction (MRCI) method consists of a configuration interaction expansion of the eigenstates of the electronic molecular Hamiltonian in a set of Slater determinants which correspond to excitations of the ground state electronic configuration but also of some excited states. The Slater determinants from which the excitations are performed are called reference determinants. The higher excited determinants (also called configuration state functions (CSFs) or shortly configurations) are then chosen either by the program according to some perturbation theoretical ansatz with a threshold provided by the user, or simply by truncating excitations from these references to singly, doubly, ... excitations, resulting in MRCIS, MRCISD, etc. For the ground state, using more than one reference configuration means better correlation and so a lower energy. The problem of size inconsistency of truncated CI methods is not solved by taking more references. As a result of an MRCI calculation one gets a more balanced correlation of the ground and excited states. For quantitatively good energy differences (excitation energies) one has to be careful in selecting the references. Taking only the dominant configuration of an excited state into the reference space leads to a correlated (lower) energy of the excited state. The generally too-high excitation energies of CIS or CISD are lowered. But usually excited states have more than one dominant configuration, and so the ground state ends up better correlated due to: a) now including some configurations with higher excitations (triply and quadruply in MRCISD); b) the neglect of other dominant configurations of the excited states, which are still uncorrelated. Selecting the references can be done manually, automatically (all possible configurations within an active space of some orbitals) or semiautomatically (taking as references all configurations that have been shown to be important in a previous CI or MRCI calculation). This method was first implemented by Robert Buenker and Sigrid D. Peyerimhoff in the seventies under the name Multi-Reference Single and Double Configuration Interaction (MRSDCI). MRCI was further streamlined in 1988 by Hans-Joachim Werner and Peter Knowles, making previous MRCI procedures more generalizable. The MRCI method can also be implemented in semi-empirical methods. An example of this is the OM2/MRCI method developed by Walter Thiel's group. See also Configuration interaction References Quantum chemistry
Multireference configuration interaction
[ "Physics", "Chemistry" ]
517
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
2,257,262
https://en.wikipedia.org/wiki/Lusin%27s%20theorem
In the mathematical field of real analysis, Lusin's theorem (or Luzin's theorem, named for Nikolai Luzin) or Lusin's criterion states that an almost-everywhere finite function is measurable if and only if it is a continuous function on nearly all its domain. In the informal formulation of J. E. Littlewood, "every measurable function is nearly continuous". Classical statement For an interval [a, b], let f : [a, b] → ℂ be a measurable function. Then, for every ε > 0, there exists a compact E ⊆ [a, b] such that f restricted to E is continuous and μ(E) > b − a − ε. Note that E inherits the subspace topology from [a, b]; continuity of f restricted to E is defined using this topology. Also for any function f, defined on the interval [a, b] and almost-everywhere finite, if for any ε > 0 there is a function ϕ, continuous on [a, b], such that the measure of the set {x ∈ [a, b] : f(x) ≠ ϕ(x)} is less than ε, then f is measurable. General form Let (X, μ) be a Radon measure space and Y be a second-countable topological space equipped with a Borel algebra, and let f : X → Y be a measurable function. Given ε > 0, for every A ⊆ X of finite measure there is a closed set E with μ(A \ E) < ε such that f restricted to E is continuous. If X is locally compact and Y = ℝ, we can choose E to be compact and even find a continuous function fε with compact support that coincides with f on E and such that sup |fε| ≤ sup |f|. Informally, measurable functions into spaces with countable base can be approximated by continuous functions on an arbitrarily large portion of their domain. On the proof The proof of Lusin's theorem can be found in many classical books. Intuitively, one expects it as a consequence of Egorov's theorem and density of smooth functions. Egorov's theorem states that pointwise convergence is nearly uniform, and uniform convergence preserves continuity. Example The strength of Lusin's theorem might not be readily apparent, as can be demonstrated by example. Consider the Dirichlet function, that is, the indicator function on the unit interval [0, 1] taking the value one on the rationals and zero otherwise. This function is zero almost everywhere, but how can one find regions that are continuous, given that the rationals are dense in the reals? The requirements for Lusin's theorem can be satisfied with the following construction of a set E. Let a1, a2, … be any enumeration of the rationals in [0, 1]. Set Gn = (an − ε/2^(n+1), an + ε/2^(n+1)) and E = [0, 1] \ ⋃n Gn. Then the sequence of open sets Gn "knocks out" all of the rationals, leaving behind a compact, closed set E which contains no rationals, and has a measure of more than 1 − ε. References Sources N. Lusin. Sur les propriétés des fonctions mesurables, Comptes rendus de l'Académie des Sciences de Paris 154 (1912), 1688–1690. G. Folland. Real Analysis: Modern Techniques and Their Applications, 2nd ed. Chapter 7 W. Zygmunt. Scorza-Dragoni property (in Polish), UMCS, Lublin, 1990 M. B. Feldman, "A Proof of Lusin's Theorem", American Math. Monthly, 88 (1981), 191-2 Lawrence C. Evans, Ronald F. Gariepy, "Measure Theory and fine properties of functions", CRC Press Taylor & Francis Group, Textbooks in mathematics, Theorem 1.14 Citations Theorems in real analysis Theorems in measure theory Articles containing proofs
Lusin's theorem
[ "Mathematics" ]
710
[ "Theorems in mathematical analysis", "Theorems in real analysis", "Theorems in measure theory", "Articles containing proofs" ]
2,257,429
https://en.wikipedia.org/wiki/Prazosin
Prazosin, sold under the brand name Minipress among others, is a medication used to treat high blood pressure, symptoms of an enlarged prostate, and nightmares related to post-traumatic stress disorder (PTSD). It is an α1 blocker. It is a less-preferred treatment for high blood pressure. Other uses may include heart failure and Raynaud syndrome. It is taken by mouth. Common side effects include dizziness, sleepiness, nausea, and heart palpitations. Serious side effects may include low blood pressure with standing and depression. Prazosin is a non-selective inverse agonist of the α1-adrenergic receptors. It works to decrease blood pressure by dilating blood vessels and helps with an enlarged prostate by relaxing the outflow of the bladder. How it works in PTSD is not entirely clear. Prazosin was patented in 1965 and came into medical use in 1974. It is available as a generic medication. In 2021, it was the 183rd most commonly prescribed medication in the United States, with more than 2 million prescriptions. Medical uses Prazosin is active when taken by mouth and has a minimal effect on cardiac function due to its α1-adrenergic receptor selectivity. When prazosin is started, however, heart rate and contractility can increase in order to maintain the pre-treatment blood pressure, because the body has reached homeostasis at its abnormally high blood pressure. The blood-pressure-lowering effect becomes apparent when prazosin is taken for longer periods. The heart rate and contractility return toward baseline over time, and blood pressure decreases. The antihypertensive characteristics of prazosin make it a second-line choice for the treatment of high blood pressure. Prazosin is also useful in treating urinary hesitancy associated with benign prostatic hyperplasia, blocking α1-adrenergic receptors, which control constriction of both the prostate and urethra. Although not a first-line choice for either hypertension or benign prostatic hyperplasia, it is a choice for people who present with both problems concomitantly. During its use for urinary hesitancy in military veterans in the 1990s, Murray A. Raskind and colleagues discovered that prazosin appeared to be effective in reducing nightmares. Subsequent reviews indicate prazosin is effective in improving sleep quality and treating nightmares related to post-traumatic stress disorder (PTSD). Prazosin is used off-label in the treatment of insomnia for its sedative effects. Prazosin is an inverse agonist at α1-adrenergic receptors; these receptors are expressed on dendrites that noradrenergic neurons synapse onto in the brain. Some of the noradrenergic pathways in the central nervous system form part of the ascending reticular activating system, which promotes arousal when stimulated. Prazosin inhibits the output neurons of the noradrenergic pathways in that system, in turn causing sedation. The drug is usually recommended for severe stings from the Indian red scorpion. Adverse effects Common (4–10% frequency) side effects of prazosin include dizziness, headache, drowsiness, fatigue, weakness, palpitations, and nausea. Less frequent (1–4%) side effects include vomiting, diarrhea, constipation, edema, orthostatic hypotension, dyspnea, syncope, vertigo, depression, anxiety, nasal congestion, and rash. A very rare side effect of prazosin is priapism. 
One phenomenon associated with prazosin is known as the "first-dose response", in which the side effects of the drug (specifically orthostatic hypotension, dizziness, and drowsiness) are especially pronounced with the first dose. Orthostatic hypotension and syncope are associated with the body's poor ability to control blood pressure without active α-adrenergic receptors. The nasal congestion is exacerbated by changing body positions, because α1-adrenergic receptors also control nasal vascular blood flow and alpha blockers inhibit this, in the same way that alpha-adrenergic agonists have the opposite effect of acting as a decongestant. Pharmacology Pharmacodynamics Prazosin is an α1-blocker that acts as a non-selective inverse agonist at α1-adrenergic receptors, including the α1A-, α1B-, and α1D-adrenergic receptor subtypes. It binds to these receptors with affinity (Ki) values of 0.13 to 1.0 nM for the α1A-adrenergic receptor, 0.06 to 0.62 nM for the α1B-adrenergic receptor, and 0.06 to 0.38 nM for the α1D-adrenergic receptor. It has much lower affinity for the α2-adrenergic receptors (Ki = 210–5,012 nM for the α2A-adrenergic receptor, 13–398 nM for the α2B-adrenergic receptor, and 10–200 nM for the α2C-adrenergic receptor). The α1-adrenergic receptors are found in vascular smooth muscle, where they are responsible for the vasoconstrictive action of norepinephrine. They are also found throughout the central nervous system. α1-Adrenergic receptors have additionally been found on immune cells, where catecholamine binding can stimulate and enhance cytokine production. Pharmacokinetics Prazosin has an onset of action of 30 to 90 minutes, an elimination half-life of 2 to 3 hours, and a duration of action of 10 to 24 hours. Research Prazosin has been said to be the only selective α1-adrenergic receptor antagonist which has been used in the treatment of insomnia to any significant degree. It is used at doses of 1 to 12 mg for this purpose. The combination of prazosin and the beta blocker timolol may produce greater sedative effects than either of them alone. Prazosin has been shown to prevent death in animal models of cytokine storm. As a repurposed drug, prazosin is being investigated for the prevention of cytokine storm syndrome and complications of COVID-19, where it is thought to decrease cytokine dysregulation. References Alpha-1 blockers Antihypertensive agents Anxiolytics Carboxamides Catechol ethers 2-Furyl compounds Guanidines Drugs developed by Pfizer Piperazines Quinazolines Vasodilators Wikipedia medicine articles ready to translate
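As an illustration of the pharmacokinetic figures above, the fraction of a dose remaining under simple first-order elimination can be computed from the half-life. A minimal Python sketch (the sampling times are hypothetical, and real kinetics also involve absorption and distribution phases):

```python
# Fraction of drug remaining after t hours under first-order elimination
# with half-life t_half: C(t)/C0 = (1/2) ** (t / t_half).
def fraction_remaining(t_hours: float, t_half: float) -> float:
    return 0.5 ** (t_hours / t_half)

for t_half in (2.0, 3.0):  # the 2-3 hour elimination half-life quoted above
    fractions = {t: round(fraction_remaining(t, t_half), 3) for t in (2, 6, 12, 24)}
    print(f"t1/2 = {t_half} h:", fractions)
```

With a 3-hour half-life, only about 6% of a dose remains after 12 hours, which suggests that the stated 10-to-24-hour duration of action reflects more than plasma concentration alone.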
Prazosin
[ "Chemistry" ]
1,448
[ "Guanidines", "Functional groups" ]
2,257,488
https://en.wikipedia.org/wiki/Palm%20IIIc
The Palm IIIc was the first color PDA made by Palm, Inc., released in February 2000 for US$449. It ran Palm OS 3.5, the first Palm OS version to have native color support, and supported paletted 8-bit color modes. Using the Palm OS Upgrade Install CD, the Palm IIIc could be updated to Palm OS 4.1. The machine has a TFT LCD that is bright indoors. The Palm IIIc features the classic III-series connector, 8 MB of RAM, and a 20 MHz DragonBall EZ CPU. The unit also has a lithium-ion rechargeable battery and a slightly modified version of the original Palm III chassis. See also List of Palm OS Devices References External links Palm, Inc. Introduces The Palm IIIc Product Industry's Smallest, Lightest Color Handheld Computer, Palm Press Release, February 22, 2000 Palm OS devices 68k-based mobile devices
Palm IIIc
[ "Technology" ]
192
[ "Computing stubs", "Computer hardware stubs" ]
2,257,791
https://en.wikipedia.org/wiki/Nitrogen%20mustard
Nitrogen mustards (NMs) are cytotoxic organic compounds with the bis(2-chloroethyl)amino ((ClC2H4)2NR) functional group. Although originally produced as chemical warfare agents, they were the first chemotherapeutic agents for the treatment of cancer. Nitrogen mustards are nonspecific DNA alkylating agents. Name Nitrogen mustards are not related to the mustard plant or its pungent essence, allyl isothiocyanate; the name comes from the pungent smell of chemical weapons preparations. Chemical warfare During World War II, nitrogen mustards were studied at the Yale School of Medicine by Alfred Gilman and Louis Goodman, and in December 1942 they started classified human clinical trials of nitrogen mustards for the treatment of lymphoma. In early December 1943, an incident during the air raid on Bari, Italy, led to the release of mustard gas that affected several hundred soldiers and civilians. Medical examination of the survivors showed a decreased number of lymphocytes. After World War II was over, the Bari incident and the Yale group's studies eventually converged, prompting a search for other similar compounds. Owing to its use in these earlier studies, the nitrogen mustard known as "HN2" became the first chemotherapy drug, mustine. Examples The nitrogen mustard drug mustine (HN2) is no longer commonly used in its original IV formulation because of excessive toxicity. Other nitrogen mustards developed include cyclophosphamide, chlorambucil, uramustine, melphalan, and bendamustine. Bendamustine has recently re-emerged as a viable chemotherapeutic treatment. Nitrogen mustards that can be used for chemical warfare purposes are tightly regulated. Their weapon designations are: HN1: bis(2-chloroethyl)ethylamine HN2: bis(2-chloroethyl)methylamine HN3: tris(2-chloroethyl)amine Normustard (mustine without a methyl group on the nitrogen atom; bis(2-chloroethyl)amine) can be used in the synthesis of piperazine drugs such as mazapertine, aripiprazole and fluanisone. Canfosfamide was also made from normustard. Some nitrogen mustard derivatives of opiates were also prepared, although these are not known to be antineoplastic. Examples include chlornaltrexamine and chloroxymorphamine. Mechanism of action Nitrogen mustards form cyclic ammonium ions (aziridinium ions) by intramolecular displacement of the chloride by the amine nitrogen. This aziridinium group then alkylates DNA once it is attacked by the N-7 nucleophilic center of a guanine base. A second alkylation after the displacement of the second chlorine results in the formation of interstrand cross-links (ICLs), as was shown in the early 1960s. At that time, it was proposed that the ICLs were formed between the N-7 atoms of guanine residues in a 5'-d(GC) sequence. Later it was clearly demonstrated that nitrogen mustards form a 1,3 ICL in the 5'-d(GNC) sequence. The strong cytotoxic effect caused by the formation of ICLs is what makes NMs effective chemotherapeutic agents. Other compounds used in cancer chemotherapy that have the ability to form ICLs are cisplatin, mitomycin C, carmustine, and psoralen. These kinds of lesions are effective at forcing the cell to undergo apoptosis via p53, a protein which scans the genome for defects. Note that the alkylating damage itself is not cytotoxic and does not directly cause cell death. Safety Nitrogen mustards are powerful and persistent blister agents. HN1, HN2, and HN3 are therefore classified as Schedule 1 substances within the Chemical Weapons Convention. 
Production and use are therefore strongly restricted. See also Mustard gas Sulphur mustard References Further reading Blister agents IARC Group 2A carcinogens Cancer treatments
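As a small structural illustration, the three weaponized nitrogen mustards listed above can be built from their SMILES strings and checked for the shared bis(2-chloroethyl)amino group. A minimal sketch assuming the RDKit cheminformatics library is available:

```python
# Build HN1, HN2, HN3 from SMILES and confirm each carries the
# bis(2-chloroethyl)amino functional group, (ClC2H4)2N-.
from rdkit import Chem

mustards = {
    "HN1": "CCN(CCCl)CCCl",    # bis(2-chloroethyl)ethylamine
    "HN2": "CN(CCCl)CCCl",     # bis(2-chloroethyl)methylamine (mustine)
    "HN3": "ClCCN(CCCl)CCCl",  # tris(2-chloroethyl)amine
}
pattern = Chem.MolFromSmarts("ClCCN(CCCl)")  # the shared functional group
for name, smiles in mustards.items():
    mol = Chem.MolFromSmiles(smiles)
    print(name, mol.HasSubstructMatch(pattern))  # True for all three
```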
Nitrogen mustard
[ "Chemistry" ]
893
[ "Blister agents", "Chemical weapons" ]
2,257,802
https://en.wikipedia.org/wiki/Modified%20vaccinia%20Ankara
Modified vaccinia Ankara (MVA) is an attenuated (weakened) strain of the vaccinia virus. It is being used as a vaccine (called MVA-BN; brand names: Imvanex in the EU, Imvamune in Canada, and Jynneos in the US) against smallpox and mpox, having fewer side effects than smallpox vaccines derived from other poxviruses. This third-generation smallpox vaccine has the advantage that it cannot reproduce complete virions in human cells: "the block of the MVA life cycle occurs at the step of virion assembly resulting in assembly of immature virus particles that are not released from the infected cell." By inserting antigen genes into its genome, modified vaccinia Ankara virus is also used as an experimental viral vector for vaccines against non-poxvirus diseases. Development as a poxvirus vaccine The traditional smallpox vaccine, which was used in the smallpox eradication campaign of 1958–1977, consists of a live vaccinia virus which can replicate in humans but usually does not cause disease. It can, however, sometimes lead to serious side effects. Modified vaccinia Ankara virus is a highly attenuated strain of vaccinia virus that was developed in Munich, Germany, between 1953 and 1968. It was produced by more than 500 serial passages of vaccinia virus (from a wild strain discovered by the Turkish vaccine institute of Ankara) in chicken embryo fibroblasts. After testing of its safety and effectiveness as a vaccine, it was approved in Germany in 1977 and then given to about 120,000 people until 1980, when smallpox vaccinations ended in Germany. No severe adverse events were seen during this time. It was later found that, through the passaging, modified vaccinia virus Ankara had lost about 10% of the ancestral vaccinia genome and with it the ability to replicate efficiently in most mammalian cells. While it can enter host cells, express its genes and replicate its genome, it fails to assemble virus particles that are released from the cell. The vaccine was further developed and manufactured by the Danish company Bavarian Nordic, resulting in the vaccine MVA-BN, which is unable to replicate in human cells. The vaccine is given subcutaneously in two doses, at least 28 days apart. It was approved in Canada in 2013 as a smallpox vaccine, and in 2020 also against mpox and related orthopoxvirus infections. It was approved in the European Union in 2013 as a vaccine against smallpox, and in the US in September 2019 against smallpox and mpox. On 13 September 2024, the WHO granted prequalification status to the MVA-BN vaccine, making it the first vaccine against mpox to receive WHO prequalification. In August 2022, the US Food and Drug Administration (FDA) gave emergency use authorization for intradermal (rather than subcutaneous) mpox vaccination using a lower dose of Jynneos, which would increase the number of available doses up to five-fold. The vaccination would still be given in two doses, 28 days apart. A 2015 study had tested a regimen of one-fifth doses given intradermally. Development as a viral vector Modified vaccinia Ankara strains engineered to express foreign genes are vectors for production of recombinant proteins, the most common application being as a vaccine delivery system for antigens. A recombinant MVA-based vaccination vector carrying different fluorescent reporter genes (green, colorless, red) was developed; the reporters indicate the progress of genetic recombination with the transgene of an antigen. 
In animal models, MVA-based vector vaccines have been found to be immunogenic and protective against various infectious agents, including immunodeficiency viruses, influenza, parainfluenza, measles virus, flaviviruses, tuberculosis, and Plasmodium parasites, as well as certain cancers. MVA-B is an experimental vaccine to protect against HIV infection, produced by inserting HIV genes into the genome of modified vaccinia virus Ankara. In phase I clinical trials in 2013, it was found to be safe but produced only moderate levels of anti-HIV immunity. After removal of a certain MVA gene, the vaccine produced an improved immune response in mice. Research A US Centers for Disease Control and Prevention (CDC) analysis of the vaccination status of 5,402 individuals who had mpox infections during the summer of 2022 showed that unvaccinated people appeared to be 14 times more likely to be infected than those with a single (of two recommended) doses; the results were noted to be preliminary. References Further reading Genetically modified organisms Vaccines Vaccinia German inventions 1968 establishments in West Germany 1968 in medicine Products introduced in 1968
Modified vaccinia Ankara
[ "Engineering", "Biology" ]
971
[ "Vaccines", "Vaccination", "Genetic engineering", "Genetically modified organisms" ]
2,257,916
https://en.wikipedia.org/wiki/Tetragonal%20polycrystalline%20zirconia
Zirconia blended with approximately 3% yttria is called either tetragonal polycrystalline zirconia or tetragonal zirconia polycrystal (forming the initialisms TPZ and TZP, respectively) and has the finest grain size. These grades exhibit the highest toughness at room temperature, because they are nearly 100% tetragonal, but the toughness degrades severely between 200 and 500 °C, as irreversible crystal transformations in this temperature range also cause dimensional change. See also Zirconium dioxide References Yttrium compounds Zirconium dioxide Ceramic materials
Tetragonal polycrystalline zirconia
[ "Chemistry", "Engineering" ]
122
[ "Ceramic engineering", "Ceramic materials", "Inorganic compounds", "Inorganic compound stubs" ]
2,258,083
https://en.wikipedia.org/wiki/Dirac%20fermion
In physics, a Dirac fermion is a spin-½ particle (a fermion) which is different from its antiparticle. The vast majority of fermions fall under this category. Description In particle physics, all fermions in the standard model have distinct antiparticles (with the possible exception of neutrinos) and hence are Dirac fermions. They are named after Paul Dirac and can be modeled with the Dirac equation. A Dirac fermion is equivalent to two Weyl fermions. The counterpart to a Dirac fermion is a Majorana fermion, a particle that must be its own antiparticle. Dirac quasi-particles In condensed matter physics, low-energy excitations in graphene and topological insulators, among others, are fermionic quasiparticles described by a pseudo-relativistic Dirac equation. See also Dirac spinor, a wavefunction-like description of a Dirac fermion Dirac–Kähler fermion, a geometric formulation of Dirac fermions Majorana fermion, an alternate category of fermion, possibly describing neutrinos Spinor, mathematical details Semi-Dirac fermion, an unusual class of fermions References Fermions
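For reference, the equation named above: in natural units, a free Dirac fermion of mass m is described by a four-component spinor field ψ satisfying

```latex
(i\gamma^\mu \partial_\mu - m)\,\psi = 0 ,
```

where the γ^μ are the gamma matrices. A Majorana fermion obeys the same equation with an additional reality condition ψ = ψᶜ, identifying the particle with its antiparticle.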
Dirac fermion
[ "Physics", "Materials_science" ]
277
[ "Quantum field theory", "Matter", "Fermions", "Quantum physics stubs", "Quantum mechanics", "Condensed matter physics", "Subatomic particles" ]
2,258,103
https://en.wikipedia.org/wiki/Francis%20Polkinghorne%20Pascoe
Francis Polkinghorne Pascoe (1 September 1813 – 20 June 1893) was an English entomologist mainly interested in beetles. Biography He was born in Penzance, Cornwall, and trained at St. Bartholomew's Hospital, London. Appointed surgeon in the Navy, he served on Australian, West Indian and Mediterranean stations. He married a Miss Mary Glasson of Cornwall and settled at Trewhiddle near St Austell, where his wife's property produced china clay. Widowed in 1851, he settled in London, devoting himself to natural history and entomology in particular. The results of collecting trips to Europe, North Africa and the Lower Amazons were poor, and Pascoe worked mainly on insects collected by others. His entomological papers listed and described species collected by Alfred Russel Wallace (in Longicornia Malayana), Robert Templeton, and other assiduous collectors who were not themselves prolific writers on systematic entomology. He became a Fellow of the Entomological Society in 1854, was president from 1864 to 1865, was a Member of the Société Entomologique de France, and belonged to the Belgian and Stettin societies. He was also a Fellow of the Linnean Society (elected 1852) and was on the Council of the Ray Society. His 2,500 types are in the Natural History Museum, London. Evolution Pascoe accepted the fact of evolution but was an opponent of natural selection. Pascoe's 1890 book The Darwinian Theory of the Origin of Species was an attack on natural selection. It received a lengthy review in the journal Nature by Raphael Meldola, who disagreed with Pascoe's criticisms but noted that the work should be taken seriously, as Pascoe was a respected systematic entomologist. Works 1858 On new genera and species of longicorn Coleoptera. Part III. Trans. Entomol. Soc. London, (2)4:236–266. 1859 On some new genera and species of longicorn Coleoptera. Part IV. Trans. Entomol. Soc. London, (2)5:12–61. 1860 Notices of new or little-known genera and species of Coleoptera. J. Entomol., 1(1):36–64. 1860 Notices of new or little-known genera and species of Coleoptera, pt. II. J. Entomol., 1(2):98–131. 1862 Notices of new or little-known genera and species of Coleoptera. J. Entomol., 1:319–370. 1864–1869 Longicornia Malayana; or a descriptive catalogue of the species of the three longicorn families Lamiidae, Cerambycidae and Prionidae collected by Mr. A. R. Wallace in the Malay Archipelago. Trans. Entomol. Soc. London, (3)3:1–712. 1866 List of the Longicornia collected by the late Mr. P. Bouchard, at Santa Marta. Trans. Entomol. Soc. London, 5(3):279–296. 1867 Diagnostic characters of some new genera and species of Prionidae. Ann. Mag. Nat. Hist., (3)19:410–413. 1875 Notes on Coleoptera, with descriptions of new genera and species. Part III. Ann. Mag. Nat. Hist., (4)15:59–73. 1884 Notes on Natural Selection and the Origin of Species. Taylor & Francis. 1885 List of British Vertebrate Animals. Taylor & Francis. 1890 The Darwinian Theory of the Origin of Species. Gurney & Jackson. References Obituary in Natural Science: A Monthly Review of Scientific Progress, Volume 3, 1893, p. 159. A. Boucard, obituary in The Humming Bird: A Quarterly, Artistic and Industrial Review, Volume 5, Spring Vale, 1895, pp. 12–13. Fellows of the Linnean Society of London 1813 births 1893 deaths English scientists English coleopterists Non-Darwinian evolution People from Penzance Fellows of the Royal Entomological Society
Francis Polkinghorne Pascoe
[ "Biology" ]
827
[ "Non-Darwinian evolution", "Biology theories" ]
2,258,152
https://en.wikipedia.org/wiki/Pol%20Swings
Pol F. Swings (24 September 1906 – 28 October 1983) was a Belgian astrophysicist known for his studies of the composition and structure of stars and comets. He used spectroscopy to identify the elements in astronomical bodies, in particular comets. Swings studied at the University of Liège, where he was professor of spectroscopy and astrophysics from 1932 to 1975. He was also a visiting professor at the University of Chicago in the United States (1939–43, 1946–52). From his study of cometary atmospheres, he is credited with the discovery of the Swings bands and the Swings effect. Swings bands are emission lines resulting from the presence of certain forms of carbon; the Swings effect was discovered with the aid of a slit spectrograph and is attributed to fluorescence resulting partly from solar radiation. Moreover, Swings studied the spectroscopy of interstellar space and investigated the rotation of stars, as well as nebulae, novae, and variable stars. Pol Swings was awarded the Francqui Prize for Exact Sciences in 1948. He was elected to the American Academy of Arts and Sciences in 1965, and to the American Philosophical Society and the United States National Academy of Sciences in 1966. In 1981, Swings became a founding member of the World Cultural Council. 1637 Swings, a main-belt asteroid, is named after him. References 1906 births 1983 deaths Scientists from Charleroi 20th-century Belgian astronomers University of Liège alumni Academic staff of the University of Liège Founding members of the World Cultural Council Foreign associates of the National Academy of Sciences Presidents of the International Astronomical Union Members of the American Philosophical Society University of Chicago faculty
Pol Swings
[ "Astronomy" ]
328
[ "Astronomers", "Presidents of the International Astronomical Union" ]
2,258,380
https://en.wikipedia.org/wiki/Nambu%E2%80%93Jona-Lasinio%20model
In quantum field theory, the Nambu–Jona-Lasinio model (or more precisely: the Nambu and Jona-Lasinio model) is a complicated effective theory of nucleons and mesons constructed from interacting Dirac fermions with chiral symmetry, paralleling the construction of Cooper pairs from electrons in the BCS theory of superconductivity. The "complicatedness" of the theory has become more natural as it is now seen as a low-energy approximation of the still more basic theory of quantum chromodynamics, which does not work perturbatively at low energies. Overview The model is much inspired by the different field of solid-state theory, particularly by the BCS breakthrough of 1957. The model was introduced in a joint article by Yoichiro Nambu (who also contributed essentially to the theory of superconductivity, e.g., through the "Nambu formalism") and Giovanni Jona-Lasinio, published in 1961. A subsequent paper included chiral symmetry breaking, isospin and strangeness. Around that time, the same model was independently considered by the Soviet physicists Valentin Vaks and Anatoly Larkin. The model is quite technical, although based essentially on symmetry principles. It is an example of the importance of four-fermion interactions and is defined in a spacetime with an even number of dimensions. It is still important and is used primarily as an effective, although not rigorous, low-energy substitute for quantum chromodynamics. The dynamical creation of a condensate from fermion interactions inspired many theories of the breaking of electroweak symmetry, such as technicolor and the top-quark condensate. Starting with the one-flavor case, the Lagrangian density is \mathcal{L} = i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi + \frac{\lambda}{4}\left[(\bar{\psi}\psi)^{2} - (\bar{\psi}\gamma^{5}\psi)^{2}\right] or, equivalently, \mathcal{L} = i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi + \lambda\,(\bar{\psi}_{L}\psi_{R})(\bar{\psi}_{R}\psi_{L}). The terms proportional to λ are an attractive four-fermion interaction, which parallels the BCS-theory phonon-exchange interaction. The global symmetry of the model is U(1)Q×U(1)χ, where Q is the ordinary charge of the Dirac fermion and χ is the chiral charge. The coupling λ is actually an inverse squared mass, which represents short-distance physics or the strong-interaction scale, producing an attractive four-fermion interaction. There is no bare fermion mass term because of the chiral symmetry. However, there will be a chiral condensate (but no confinement) leading to an effective mass term and a spontaneous symmetry breaking of the chiral symmetry, but not of the charge symmetry. With N flavors and the flavor indices represented by the Latin letters a, b, the Lagrangian density becomes \mathcal{L} = \sum_{a} i\bar{\psi}_{a}\gamma^{\mu}\partial_{\mu}\psi_{a} + \lambda \sum_{a,b} (\bar{\psi}_{La}\psi_{Rb})(\bar{\psi}_{Rb}\psi_{La}). Chiral symmetry forbids a bare mass term, but there may be chiral condensates. The global symmetry here is SU(N)L×SU(N)R×U(1)Q×U(1)χ, where SU(N)L×SU(N)R, acting upon the left-handed and right-handed flavors respectively, is the chiral symmetry (in other words, there is no natural correspondence between the left-handed and the right-handed flavors), U(1)Q is the Dirac charge, which is sometimes called the baryon number, and U(1)χ is the axial charge. If a chiral condensate forms, then the chiral symmetry is spontaneously broken to a diagonal subgroup SU(N), since the condensate leads to a pairing of the left-handed and the right-handed flavors. The axial charge is also spontaneously broken. The broken symmetries lead to massless pseudoscalar bosons, which are sometimes called pions. See Goldstone boson. As mentioned, this model is sometimes used as a phenomenological model of quantum chromodynamics in the chiral limit. However, while it is able to model chiral symmetry breaking and chiral condensates, it does not model confinement. 
Also, the axial symmetry is broken spontaneously in this model, leading to a massless Goldstone boson, unlike in QCD, where it is broken anomalously. Since the Nambu–Jona-Lasinio model is nonrenormalizable in four spacetime dimensions, it can only be an effective field theory that needs to be UV-completed. See also Gross–Neveu model References External links Giovanni Jona-Lasinio and Yoichiro Nambu, Nambu-Jona-Lasinio model, Scholarpedia, 5(12):7487, (2010). doi:10.4249/scholarpedia.7487 Quantum chromodynamics Superconductivity
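The two one-flavor forms of the Lagrangian density quoted above are equivalent by the chiral decomposition of the scalar and pseudoscalar bilinears. A short check, with ψ_{L,R} = ½(1 ∓ γ⁵)ψ so that cross terms such as \bar{ψ}_L ψ_L vanish:

```latex
\bar{\psi}\psi = \bar{\psi}_L\psi_R + \bar{\psi}_R\psi_L ,
\qquad
\bar{\psi}\gamma^5\psi = \bar{\psi}_L\psi_R - \bar{\psi}_R\psi_L ,
\qquad\Longrightarrow\qquad
\frac{1}{4}\left[(\bar{\psi}\psi)^2 - (\bar{\psi}\gamma^5\psi)^2\right]
  = (\bar{\psi}_L\psi_R)(\bar{\psi}_R\psi_L) .
```

The product form also makes the U(1)χ invariance manifest: under a chiral rotation the two factors pick up opposite phases, which cancel.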
Nambu–Jona-Lasinio model
[ "Physics", "Materials_science", "Engineering" ]
970
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
2,258,610
https://en.wikipedia.org/wiki/FMRFamide
FMRFamide (H-Phe-Met-Arg-Phe-NH2) is a neuropeptide from a broad family of FMRFamide-related peptides (FaRPs), all sharing an -RFamide sequence at their C-terminus. First identified in the hard clam (Mercenaria mercenaria), it is thought to play an important role in the regulation of cardiac activity. Several FMRFamide-related peptides are known, regulating various cellular functions and possessing pharmacological actions such as anti-opiate effects. In Mercenaria mercenaria, FMRFamide has been isolated and demonstrated to increase both the force and frequency of the heartbeat through a biochemical pathway that is thought to involve an increase of cytoplasmic cAMP in the ventricular region. FMRFamide is an important neuropeptide in several phyla, such as Insecta, Nematoda, Mollusca, and Annelida. Along with the allatostatin and tachykinin families, it is the most abundant neuropeptide in endocrine cells of insect alimentary tracts; however, its function there is not known. Generally, the neuropeptide is encoded by several genes, such as flp-1 through flp-22 in C. elegans. The common precursor of the FaRPs is modified to yield many different neuropeptides, all carrying the same FMRFamide-type sequence. Moreover, these peptides are not functionally redundant. In invertebrates, the FMRFamide-related peptides are known to affect heart rate, blood pressure, gut motility, feeding behaviour and reproduction. In vertebrates such as mice, they are known to affect opioid receptors, eliciting naloxone-sensitive antinociception and reducing morphine-induced antinociception. Detection of this neuropeptide is important because its expression lays down the foundation of the CNS in the early stages of invertebrate development. In recent years, the neuromodulatory actions of FMRFamide in invertebrates have become more apparent; this is due, in part, to the extensive studies done on the planorbid and lymnaeid families of pond snails. See also Neuropeptide VF precursor References External links FMRFamide antibody (ab10352) datasheet Biomolecules Neurotransmitters Tetrapeptides
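As a trivial illustration of the shared C-terminal motif described above, peptide sequences can be screened for the family-defining Arg-Phe ending. A minimal Python sketch (one-letter amino-acid codes; the C-terminal amide is not representable in the bare string, and the non-FMRF sequences below are merely illustrative):

```python
# Classify peptides as FMRFamide-related (FaRPs) by the C-terminal
# Arg-Phe ("RF", amidated in vivo) motif shared across the family.
def is_farp(sequence: str) -> bool:
    return sequence.upper().endswith("RF")

peptides = {
    "FMRF": "FMRFamide itself (Phe-Met-Arg-Phe-NH2)",
    "FLRF": "an illustrative FaRP-like tetrapeptide",
    "YGGF": "not a FaRP: no C-terminal Arg-Phe",
}
for seq, note in peptides.items():
    print(seq, is_farp(seq), "-", note)
```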
FMRFamide
[ "Chemistry", "Biology" ]
525
[ "Natural products", "Neurotransmitters", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Neurochemistry", "Molecular biology" ]
2,258,663
https://en.wikipedia.org/wiki/Jason%20Cranford%20Teague
Jason Cranford Teague is a web designer and author. He designed Computer-Mediated Communications Magazine, the first online magazine, in 1994. He is best known for his books CSS3 Visual Quickstart (2013) and Fluid Web Typography (2012). Cranford Teague started as a web designer in 1994. Notable clients include the EPA, the IRS, Sargento, the USDA, the Aspen Institute, Marriott, Bank of America, Cisco, Coca-Cola, Virgin Group, CNN, Kodak, and WebMD. Books published Teague has written several books and articles about web design and media. His books include the best-selling DHTML and CSS for the World Wide Web (originally 1999, fifth printing 2013), Final Cut Pro 4 and the Art of Filmmaking (2004), Photoshop at Your Fingertips (2004), and Speaking In Styles (2009). Published CSS3 Visual Quickstart, 6th edition Fluid Web Typography: A Guide Speaking In Styles: A CSS Primer for Web Designers CSS3 Visual Quickstart, 5th edition CSS, DHTML, & Ajax: Visual Quickstart Guide, 4th DHTML & CSS Advanced Out of print Photoshop at Your Fingertips, 2nd edition Photoshop at Your Fingertips Final Cut Express Essentials SVG for Web Designers Final Cut Pro and the Art of Filmmaking, 2nd Edition Final Cut Pro and the Art of Filmmaking DHTML & CSS Visual QuickStart, 3rd edition DHTML & CSS Visual QuickStart, 2nd edition DHTML Visual Quickstart How to program HTML Frames: Interface Design and Javascript Articles published Teague has contributed numerous articles to Apple Developers Connection, Computer Arts Magazine, and Macworld Magazine. He writes regularly about technology, politics, and culture on webbedENVIRONMENTS. He has also appeared on TechTV's The Screen Savers. References dmx zone webbedENVIRONMENTS Jason Cranford Teague's blog Technology writers Living people Year of birth missing (living people)
Jason Cranford Teague
[ "Technology" ]
422
[ "Computing stubs", "Computer specialist stubs" ]