Dataset schema: id (int64, 39 – 79M); url (string, 31–227 chars); text (string, 6–334k chars); source (string, 1–150 chars); categories (list, 1–6 items); token_count (int64, 3–71.8k); subcategories (list, 0–30 items).
2,216,892
https://en.wikipedia.org/wiki/Scalable%20Linear%20Recording
Scalable Linear Recording (SLR) is the name used by Tandberg Data for its line of QIC-based tape drives. The earliest SLR drive, the SLR1, has a capacity of 250 MB, while the latest drive, the SLR140, has a capacity of 70 GB. The term SLR is often used to refer to QIC tapes themselves, as for many years SLR drives were the only drives that used such tapes before Tandberg discontinued production around 2015. Generations Quarter-inch formats NOTE: MLR stands for Multi-channel Linear Recording. Eight-millimeter formats External links SLR5 specsheet SLR7 specsheet SLR24 specsheet SLR32 specsheet SLR40 specsheet SLR50 specsheet SLR60 specsheet SLR75 specsheet SLR100 specsheet SLR140 specsheet Tandberg
Scalable Linear Recording
[ "Technology" ]
179
[ "Computing stubs", "Computer hardware stubs" ]
2,217,163
https://en.wikipedia.org/wiki/Linear%20amplifier
A linear amplifier is an electronic circuit whose output is proportional to its input, but capable of delivering more power into a load. The term usually refers to a type of radio-frequency (RF) power amplifier, some of which have output power measured in kilowatts and are used in amateur radio. Other types of linear amplifier are used in audio and laboratory equipment. Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input. A linear amplifier responds to different frequency components independently and tends not to generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity, however, because the amplifying devices (transistors or vacuum tubes) have nonlinear transfer functions; circuit techniques are used to reduce these effects. There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy. Explanation Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input, generally at increased power levels. Load impedance, supply voltage, input base current, and power output capabilities can affect the efficiency of the amplifier. Class-A amplifiers can be designed to have good linearity in both single-ended and push-pull topologies. Amplifiers of classes AB1, AB2 and B can be linear only when a tuned tank circuit is employed, or in the push-pull topology, in which two active elements (tubes, transistors) are used to amplify the positive and negative parts of the RF cycle respectively. Class-C amplifiers are not linear in any topology. Amplifier classes There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy. Their use in RF applications is listed briefly below: Class-A amplifiers are very inefficient; they can never have an efficiency better than 50%. The semiconductor or vacuum tube conducts throughout the entire RF cycle. The mean anode current for a vacuum tube should be set to the middle of the linear section of the curve of anode current versus grid bias potential. Class-B amplifiers can be 60–65% efficient. The semiconductor or vacuum tube conducts through half the cycle but requires large drive power. Class AB1 is where the grid is more negatively biased than it is in class A. Class AB2 is where the grid is often more negatively biased than in AB1, and the size of the input signal is often larger. When the drive is able to make the grid become positive, the grid current will increase. Class-C amplifiers can be about 75% efficient with a conduction range of about 120°, but they are very nonlinear. They can only be used for non-AM modes, such as FM, CW, or RTTY. The semiconductor or vacuum tube conducts through less than half the RF cycle. The increase in efficiency can allow a given vacuum tube to deliver more RF power than it could in class A or AB. For instance, two 4CX250B tetrodes operating at 144 MHz can deliver 400 watts in class A, but when biased into class C they can deliver 1,000 watts without fear of overheating, although even more grid current will be needed. Class-D amplifiers use switching technology to achieve high efficiency, often exceeding 90%, and therefore require less power to operate than other amplifier types.
Because of the digital pulse train used to drive the amplifier, many do not consider the Class-D amplifier a linear amplifier, yet many audio and radio manufacturers have incorporated its design into linear applications. Although class-A power amplifiers (PAs) are the best in terms of linearity, their efficiency is rather poor compared with other amplification classes such as AB, C, and Doherty amplifiers. However, higher efficiency comes with higher nonlinearity, and the PA output will be distorted, often to an extent that fails the system performance requirements. Therefore, class-AB power amplifiers or other variations are used with some suitable form of linearization scheme such as feedback, feedforward, or analog or digital predistortion (DPD). In DPD power amplifier systems, the transfer characteristics of the amplifier are modeled by sampling the output of the PA, and the inverse characteristics are calculated in a digital signal processor (DSP). The digital baseband signal is multiplied by the inverse of the PA's nonlinear transfer characteristic, up-converted to RF frequencies, and applied to the PA input. With careful design of the PA response, DPD engines can correct the PA output distortion and achieve higher efficiencies. With advances in digital signal processing techniques, digital predistortion is now widely used for RF power amplifier subsystems. In order for DPD to function properly, the power amplifier characteristics need to be suitable, and circuit techniques are available to optimize PA performance. Amateur radio Some commercially manufactured one- to two-kilowatt linear amplifiers used in amateur radio still use vacuum tubes (valves) and can provide 10 to 20 times RF power amplification (10 to 13 dB). For example, a transmitter driving the input with 100 watts can be amplified to 2,000 watts (2 kW) of output to the antenna. Solid-state linear amplifiers are more common in the 1,000-watt range and can be driven by as little as 5 watts. Modern power devices using LDMOS technology allow for more efficient, cost-effective linear RF power amplifiers for the amateur radio community. Large vacuum-tube linear amplifiers generally rely on one or more vacuum tubes supplied by a very high voltage power supply to convert large amounts of electrical energy into radio-frequency energy. Linear amplifiers need to operate with class-A or class-AB biasing, which makes them relatively inefficient. While class C has far higher efficiency, a class-C amplifier is not linear and is only suitable for the amplification of constant-envelope signals. Such signals include FM, FSK, MFSK, and CW (Morse code). Broadcast radio stations The output stages of professional AM radio broadcast transmitters of up to 50 kW need to be linear and are now usually constructed using solid-state technology. Large vacuum tubes are still used for international long-, medium-, and shortwave broadcast transmitters from 500 kW up to 2 MW. See also Amplifiers Electronic amplifier References Electronic amplifiers Linear electronic circuits
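As a rough illustration of the digital predistortion scheme described above, the sketch below models a memoryless PA with a simple third-order polynomial, fits an inverse from sampled output, and applies it to the drive level. The gain values, coefficients, and function names are invented for the example; a real DPD engine works on complex baseband samples and usually also models memory effects.

```python
import numpy as np

# Hypothetical memoryless PA model: linear gain with a compressive 3rd-order term.
def pa(x, g1=10.0, g3=-0.8):
    return g1 * x + g3 * x**3

# "Sample the PA output and compute the inverse in a DSP":
# fit a polynomial mapping desired (normalized) output back to required drive.
x = np.linspace(0.0, 1.0, 200)            # baseband drive levels
y = pa(x)                                  # sampled PA output
target_gain = 10.0                         # desired linear gain
inverse_coeffs = np.polyfit(y / target_gain, x, deg=5)

def predistort(x_desired):
    """Return the drive level that should make the PA produce target_gain * x_desired."""
    return np.polyval(inverse_coeffs, x_desired)

x_test = 0.7
print(f"ideal: {target_gain * x_test:.3f}, predistorted PA output: {pa(predistort(x_test)):.3f}")
```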
Linear amplifier
[ "Technology" ]
1,275
[ "Electronic amplifiers", "Amplifiers" ]
1,010,088
https://en.wikipedia.org/wiki/Total%20suspended%20solids
Total suspended solids (TSS) is the dry-weight of suspended particles, that are not dissolved, in a sample of water that can be trapped by a filter that is analyzed using a filtration apparatus known as sintered glass crucible. TSS is a water quality parameter used to assess the quality of a specimen of any type of water or water body, ocean water for example, or wastewater after treatment in a wastewater treatment plant. It is listed as a conventional pollutant in the U.S. Clean Water Act. Total dissolved solids is another parameter acquired through a separate analysis which is also used to determine water quality based on the total substances that are fully dissolved within the water, rather than undissolved suspended particles. TSS is also referred to using the terms total suspended matter (TSM) and suspended particulate matter (SPM). All three terms describe the same essential measurement. TSS was previously called non-filterable residue (NFR), but was changed to TSS because of ambiguity in other scientific disciplines. Measurement TSS of a water or wastewater sample is determined by pouring a carefully measured volume of water (typically one litre; but less if the particulate density is high, or as much as two or three litres for very clean water) through a pre-weighed filter of a specified pore size, then weighing the filter again after the drying process that removes all water on the filter. Filters for TSS measurements are typically composed of glass fibres. The gain in weight is a dry weight measure of the particulates present in the water sample expressed in units derived or calculated from the volume of water filtered (typically milligrams per litre or mg/L). If the water contains an appreciable amount of dissolved substances (as certainly would be the case when measuring TSS in seawater), these will add to the weight of the filter as it is dried. Therefore, it is necessary to "wash" the filter and sample with deionized water after filtering the sample and before drying the filter. Failure to add this step is a fairly common mistake made by inexperienced laboratory technicians working with sea water samples, and will completely invalidate the results as the weight of salts left on the filter during drying can easily exceed that of the suspended particulate matter. Although turbidity purports to measure approximately the same water quality property as TSS, the latter is preferred when available as it provides an actual weight of the particulate material present in the sample. In water quality monitoring situations, a series of more labor-intensive TSS measurements will be paired with relatively quick and easy turbidity measurements to develop a site-specific correlation. Once satisfactorily established, the correlation can be used to estimate TSS from more frequently made turbidity measurements, saving time and effort. Because turbidity readings are somewhat dependent on particle size, shape, and color, this approach requires calculating a correlation equation for each location. Further, situations or conditions that tend to suspend larger particles through water motion (e.g., increase in a stream current or wave action) can produce higher values of TSS not necessarily accompanied by a corresponding increase in turbidity. This is because particles above a certain size (essentially anything larger than silt) are not measured by a bench turbidity meter (they settle out before the reading is taken), but contribute substantially to the TSS value. 
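The site-specific turbidity–TSS correlation mentioned above can be sketched with a small least-squares fit. The paired readings below are placeholder values, not real monitoring data; an actual calibration would use co-collected samples from the site of interest.

```python
# Hypothetical paired measurements from one monitoring site:
# (turbidity in NTU, TSS in mg/L from the filtration method).
pairs = [(5, 8), (12, 19), (20, 34), (35, 60), (50, 88)]

n = len(pairs)
mean_x = sum(t for t, _ in pairs) / n
mean_y = sum(s for _, s in pairs) / n
slope = sum((t - mean_x) * (s - mean_y) for t, s in pairs) / \
        sum((t - mean_x) ** 2 for t, _ in pairs)
intercept = mean_y - slope * mean_x

def estimate_tss(turbidity_ntu):
    """Estimate TSS (mg/L) from a quick turbidity reading using the site correlation."""
    return slope * turbidity_ntu + intercept

print(f"TSS estimate at 25 NTU: {estimate_tss(25):.1f} mg/L")
```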
Definition problems Although TSS appears to be a straightforward measure of particulate weight obtained by separating particles from a water sample using a filter, it suffers as a defined quantity from the fact that particles occur in nature in essentially a continuum of sizes. At the lower end, TSS relies on a cut-off established by properties of the filter being used. At the upper end, the cut-off should be the exclusion of all particulates too large to be "suspended" in water. However, this is not a fixed particle size but is dependent upon the energetics of the situation at the time of sampling: moving water suspends larger particles than does still water. Usually it is the additional suspended material caused by the movement of the water that is of interest. These problems in no way invalidate the use of TSS; consistency in method and technique can overcome shortcomings in most cases. But comparisons between studies may require a careful review of the methodologies used, to establish that the studies are in fact measuring the same thing. TSS in mg/L can be calculated as: (dry weight of residue and filter − dry weight of filter alone, in grams) / (volume of sample in mL) × 1,000,000 See also References Moran, Joseph M.; Morgan, Michael D., & Wiersma, James H. (1980). Introduction to Environmental Science (2nd ed.). New York: W.H. Freeman. Ramsey, Justin. 2001. Design of septic tanks design summary series. National Association of Wastewater Transporters. Scandia, MN (1998). Introduction to Proper Onsite Sewage Treatment. Anaerobic digestion Water pollution Water quality indicators
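A direct transcription of the TSS formula above into code, assuming hypothetical filter weights and a one-litre sample volume:

```python
def tss_mg_per_l(filter_plus_residue_g, filter_g, sample_volume_ml):
    """TSS (mg/L) = (dry weight of residue and filter - dry weight of filter, in g)
    / sample volume (mL) * 1,000,000."""
    return (filter_plus_residue_g - filter_g) / sample_volume_ml * 1_000_000

# Example: a 1 L sample leaving 12.5 mg of dried residue on the filter.
print(tss_mg_per_l(1.2525, 1.2400, 1000))  # -> 12.5 mg/L
```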
Total suspended solids
[ "Chemistry", "Engineering", "Environmental_science" ]
1,050
[ "Water pollution", "Water quality indicators", "Anaerobic digestion", "Environmental engineering", "Water technology" ]
1,010,127
https://en.wikipedia.org/wiki/Carbon-13
Carbon-13 (13C) is a natural, stable isotope of carbon with a nucleus containing six protons and seven neutrons. As one of the environmental isotopes, it makes up about 1.1% of all natural carbon on Earth. Detection by mass spectrometry A mass spectrum of an organic compound will usually contain a small peak of one mass unit greater than the apparent molecular ion peak (M) of the whole molecule. This is known as the M+1 peak and comes from the few molecules that contain a 13C atom in place of a 12C. A molecule containing one carbon atom will be expected to have an M+1 peak of approximately 1.1% of the size of the M peak, as 1.1% of the molecules will have a 13C rather than a 12C. Similarly, a molecule containing two carbon atoms will be expected to have an M+1 peak of approximately 2.2% of the size of the M peak, as there is double the previous likelihood that any molecule will contain a 13C atom. In the above, the mathematics and chemistry have been simplified, however it can be used effectively to give the number of carbon atoms for small- to medium-sized organic molecules. In the following formula the result should be rounded to the nearest integer: where C = number of C atoms, X = amplitude of the M ion peak, and Y = amplitude of the M +1 ion peak. 13C-enriched compounds are used in the research of metabolic processes by means of mass spectrometry. Such compounds are safe because they are non-radioactive. In addition, 13C is used to quantify proteins (quantitative proteomics). One important application is in stable isotope labeling by amino acids in cell culture (SILAC). 13C-enriched compounds are used in medical diagnostic tests such as the urea breath test. Analysis in these tests is usually of the ratio of 13C to 12C by isotope ratio mass spectrometry. The ratio of 13C to 12C is slightly higher in plants employing C4 carbon fixation than in plants employing C3 carbon fixation. Because the different isotope ratios for the two kinds of plants propagate through the food chain, it is possible to determine if the principal diet of a human or other animal consists primarily of C3 plants or C4 plants by measuring the isotopic signature of their collagen and other tissues. Uses in science Due to differential uptake in plants as well as marine carbonates of 13C, it is possible to use these isotopic signatures in earth science. Biological processes preferentially take up the lower mass isotope through kinetic fractionation. In aqueous geochemistry, by analyzing the δ13C value of carbonaceous material found in surface and ground waters, the source of the water can be identified. This is because atmospheric, carbonate, and plant derived δ13C values all differ. In biology, the ratio of carbon-13 and carbon-12 isotopes in plant tissues is different depending on the type of plant photosynthesis and this can be used, for example, to determine which types of plants were consumed by animals. Greater carbon-13 concentrations indicate stomatal limitations, which can provide information on plant behaviour during drought. Tree ring analysis of carbon isotopes can be used to retrospectively understand forest photosynthesis and how it is impacted by drought. In geology, the 13C/12C ratio is used to identify the layer in sedimentary rock created at the time of the Permian extinction 252 Mya when the ratio changed abruptly by 1%. More information about usage of 13C/12C ratio in science can be found in the article about isotopic signatures. 
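The formula referred to above did not survive extraction; a reconstruction consistent with the stated definitions (C = number of carbon atoms, X = amplitude of the M peak, Y = amplitude of the M+1 peak) and the quoted contribution of about 1.1% per carbon would be

\[ C \approx \frac{100\,Y}{1.1\,X}, \]

rounded to the nearest integer as stated. For example, a molecule whose M+1 peak is 6.6% of the height of its M peak would be estimated to contain 100 × 6.6 / (1.1 × 100) = 6 carbon atoms.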
Carbon-13 has a non-zero spin quantum number of 1/2, and hence allows the structure of carbon-containing substances to be investigated using carbon-13 nuclear magnetic resonance. The carbon-13 urea breath test is a safe and highly accurate diagnostic tool to detect the presence of Helicobacter pylori infection in the stomach. The urea breath test utilizing carbon-13 is preferred to carbon-14 for certain vulnerable populations due to its non-radioactive nature. Production Bulk carbon-13 for commercial use, e.g. in chemical synthesis, is enriched from its natural abundance of about 1%. Although carbon-13 can be separated from the major carbon-12 isotope via techniques such as thermal diffusion, chemical exchange, gas diffusion, and laser and cryogenic distillation, currently only cryogenic distillation of methane (boiling point −161.5 °C) or carbon monoxide (boiling point −191.5 °C) is an economically feasible industrial production technique. Industrial carbon-13 production plants represent a substantial investment: cryogenic distillation columns more than 100 meters tall are needed to separate the carbon-12- and carbon-13-containing compounds. The largest reported commercial carbon-13 production plant in the world as of 2014 has a production capability of ~400 kg of carbon-13 annually. In contrast, a 1969 carbon monoxide cryogenic distillation pilot plant at the Los Alamos Scientific Laboratory could produce 4 kg of carbon-13 annually. See also Isotopes of carbon Isotope fractionation Notes Isotopes of carbon Medical isotopes Environmental isotopes
Carbon-13
[ "Chemistry" ]
1,080
[ "Environmental isotopes", "Isotopes of carbon", "Isotopes", "Chemicals in medicine", "Medical isotopes" ]
1,010,141
https://en.wikipedia.org/wiki/Gilbreath%27s%20conjecture
Gilbreath's conjecture is a conjecture in number theory regarding the sequences generated by applying the forward difference operator to consecutive prime numbers and leaving the results unsigned, and then repeating this process on consecutive terms in the resulting sequence, and so forth. The statement is named after Norman L. Gilbreath who, in 1958, presented it to the mathematical community after observing the pattern by chance while doing arithmetic on a napkin. In 1878, eighty years before Gilbreath's discovery, François Proth had, however, published the same observations along with an attempted proof, which was later shown to be incorrect. Motivating arithmetic Gilbreath observed a pattern while playing with the ordered sequence of prime numbers 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ... Computing the absolute value of the difference between term n + 1 and term n in this sequence yields the sequence 1, 2, 2, 4, 2, 4, 2, 4, 6, 2, ... If the same calculation is done for the terms in this new sequence, and the sequence that is the outcome of this process, and again ad infinitum for each sequence that is the output of such a calculation, the following five sequences in this list are 1, 0, 2, 2, 2, 2, 2, 2, 4, ... 1, 2, 0, 0, 0, 0, 0, 2, ... 1, 2, 0, 0, 0, 0, 2, ... 1, 2, 0, 0, 0, 2, ... 1, 2, 0, 0, 2, ... What Gilbreath—and François Proth before him—noticed is that the first term in each series of differences appears to be 1. The conjecture Stating Gilbreath's observation formally is significantly easier to do after devising a notation for the sequences in the previous section. Toward this end, let denote the ordered sequence of prime numbers, and define each term in the sequence by where is positive. Also, for each integer greater than 1, let the terms in be given by Gilbreath's conjecture states that every term in the sequence for positive is equal to 1. Verification and attempted proofs François Proth released what he believed to be a proof of the statement that was later shown to be flawed. Andrew Odlyzko verified that is equal to 1 for in 1993, but the conjecture remains an open problem. Instead of evaluating n rows, Odlyzko evaluated 635 rows and established that the 635th row started with a 1 and continued with only 0s and 2s for the next n numbers. This implies that the next n rows begin with a 1. Generalizations In 1980, Martin Gardner published a conjecture by Hallard Croft that stated that the property of Gilbreath's conjecture (having a 1 in the first term of each difference sequence) should hold more generally for every sequence that begins with 2, subsequently contains only odd numbers, and has a sufficiently low bound on the gaps between consecutive elements in the sequence. This conjecture has also been repeated by later authors. However, it is false: for every initial subsequence of 2 and odd numbers, and every non-constant growth rate, there is a continuation of the subsequence by odd numbers whose gaps obey the growth rate but whose difference sequences fail to begin with 1 infinitely often. is more careful, writing of certain heuristic reasons for believing Gilbreath's conjecture that "the arguments above apply to many other sequences in which the first element is a 1, the others even, and where the gaps between consecutive elements are not too large and are sufficiently random." However, he does not give a formal definition of what "sufficiently random" means. 
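A short sketch of the difference-triangle computation described above: starting from the primes, repeatedly take unsigned differences of consecutive terms and check that each new row begins with 1. The prime limit and the number of rows checked are small illustrative choices.

```python
def prime_sieve(limit):
    """Return all primes below limit (simple sieve of Eratosthenes)."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i*i::i] = [False] * len(is_prime[i*i::i])
    return [i for i, p in enumerate(is_prime) if p]

row = prime_sieve(1000)           # row 0: the primes themselves
for k in range(1, 20):            # build 19 rows of unsigned differences
    row = [abs(b - a) for a, b in zip(row, row[1:])]
    assert row[0] == 1, f"conjecture would fail at row {k}"
print("first term is 1 in every row checked")
```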
See also Difference operator Prime gap Rule 90, a cellular automaton that controls the behavior of the parts of the rows that contain only the values 0 and 2 References Analytic number theory Conjectures about prime numbers Unsolved problems in number theory Triangles of numbers
Gilbreath's conjecture
[ "Mathematics" ]
835
[ "Analytic number theory", "Unsolved problems in mathematics", "Unsolved problems in number theory", "Combinatorics", "Triangles of numbers", "Mathematical problems", "Number theory" ]
1,010,167
https://en.wikipedia.org/wiki/Ceruloplasmin
Ceruloplasmin (or caeruloplasmin) is a ferroxidase enzyme that in humans is encoded by the CP gene. Ceruloplasmin is the major copper-carrying protein in the blood, and in addition plays a role in iron metabolism. It was first described in 1948. Another protein, hephaestin, is noted for its homology to ceruloplasmin, and also participates in iron and probably copper metabolism. Function Ceruloplasmin (CP) is an enzyme synthesized in the liver containing six atoms of copper in its structure. Ceruloplasmin carries more than 95% of the total copper in healthy human plasma; the rest is accounted for by macroglobulins. Ceruloplasmin exhibits a copper-dependent oxidase activity, which is associated with possible oxidation of Fe2+ (ferrous iron) into Fe3+ (ferric iron), thereby assisting in its transport in the plasma in association with transferrin, which can carry iron only in the ferric state. The molecular weight of human ceruloplasmin is reported to be 151 kDa. Despite extensive research, much is still unknown about the exact functions of CP; most of the functions attributed to CP depend on the presence of its Cu centers. These include copper transport to deliver Cu to extrahepatic tissues, amine oxidase activity that controls the level of biogenic amines in intestinal fluids and plasma, removal of oxygen radicals and other free radicals from plasma, and the export of iron from cells for transport through transferrin. Mutations are known to disrupt the binding of copper to CP, which disrupts iron metabolism and causes iron overload. Ceruloplasmin is a relatively large enzyme (~10 nm); the large size prevents the bound copper from being lost in a person's urine during transport. Active site structure The multicopper active site of CP contains a type I (T1) mononuclear copper site and a trinuclear copper center ~12–13 Å away. The tricopper center consists of two type III (T3) coppers and one type II (T2) copper ion. The two T3 copper ions are bridged by a hydroxide ligand, while another hydroxide ligand links the T2 copper ion to the protein. The T1 center is bridged to the tricopper center by two histidine residues (His1020, His1022) and one cysteine residue (Cys1021). The substrate binds near the T1 center and is oxidized by the T1 Cu2+ ion, forming the reduced Cu+ oxidation state. The reduced T1 Cu+ then transfers the electron through the one Cys and two His bridging residues to the tricopper center. After four electrons have been transferred from the substrates to the copper centers, an O2 binds at the tricopper center and undergoes a four-electron reduction to form two molecules of water. Regulation A cis-regulatory element called the GAIT element is involved in the selective translational silencing of the ceruloplasmin transcript. The silencing requires binding of a cytosolic inhibitor complex called IFN-gamma-activated inhibitor of translation (GAIT) to the GAIT element. Clinical significance As with any other plasma protein, ceruloplasmin levels drop in patients with hepatic disease due to reduced synthesizing capability.
Mechanisms of low ceruloplasmin levels: Gene expression genetically low (aceruloplasminemia) Copper levels are low in general Malnutrition/trace metal deficiency in the food source Zinc toxicity, due to induced copper deficiency Copper does not cross the intestinal barrier due to ATP7A deficiency (Menkes disease and Occipital horn syndrome) Delivery of copper into the lumen of the ER-Golgi network is absent in hepatocytes due to absent ATP7B (Wilson's disease) Copper availability doesn't affect the translation of the nascent protein. However, the apoenzyme without copper is unstable. Apoceruloplasmin is largely degraded intracellularly in the hepatocyte and the small amount that is released has a short circulation half life of 5 hours as compared to the 5.5 days for the holo-ceruloplasmin. Ceruloplasmin can be measured by means of a blood test; this can be done using immunoassays . The sample is spun and separated; it is stored around 4 °C Celsius for three days. This test is to determine if there are signs of Wilson disease. Another test that can be done is a urine copper level test; this has been found to be less accurate than the blood test. A liver tissue test can be done as well. Mutations in the ceruloplasmin gene (CP), which are very rare, can lead to the genetic disease aceruloplasminemia, characterized by hyperferritinemia with iron overload. In the brain, this iron overload may lead to characteristic neurologic signs and symptoms, such as cerebellar ataxia, progressive dementia, and extrapyramidal signs. Excess iron may also deposit in the liver, pancreas, and retina, leading to cirrhosis, endocrine abnormalities, and loss of vision, respectively. Deficiency Lower-than-normal ceruloplasmin levels may indicate the following: Wilson disease (a rare [UK incidence 2/100,000] copper storage disease). Menkes disease (Menkes kinky hair syndrome) (rare – UK incidence 1/100,000) Copper deficiency Aceruloplasminemia Zinc toxicity Excess Greater-than-normal ceruloplasmin levels may indicate or be noticed in: copper toxicity / zinc deficiency pregnancy oral contraceptive pill use lymphoma acute and chronic inflammation (it is an acute-phase reactant) rheumatoid arthritis Angina Alzheimer's disease Schizophrenia Obsessive-compulsive disorder Reference ranges Normal blood concentration of ceruloplasmin in humans is 20–50 mg/dL. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Aceruloplasminemia OMIM entries on Aceruloplasminemia Acute-phase proteins Chemical pathology EC 1.16.3 Hepatology Iron metabolism Copper enzymes
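As a small illustration of the reference range quoted above, the sketch below flags a serum ceruloplasmin result against the 20–50 mg/dL interval and the causes listed in the text. The function name and example value are invented, and this is not a clinical tool.

```python
def flag_ceruloplasmin(mg_per_dl, low=20.0, high=50.0):
    """Classify a serum ceruloplasmin result against the 20-50 mg/dL reference range."""
    if mg_per_dl < low:
        return "low (e.g. Wilson disease, Menkes disease, copper deficiency, aceruloplasminemia)"
    if mg_per_dl > high:
        return "high (e.g. acute-phase reaction, pregnancy, oral contraceptive use)"
    return "within reference range"

print(flag_ceruloplasmin(12.0))
```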
Ceruloplasmin
[ "Chemistry", "Biology" ]
1,351
[ "Biochemistry", "Chemical pathology" ]
1,010,189
https://en.wikipedia.org/wiki/Retinal
Retinal (also known as retinaldehyde) is a polyene chromophore. Retinal, bound to proteins called opsins, is the chemical basis of visual phototransduction, the light-detection stage of visual perception (vision). Some microorganisms use retinal to convert light into metabolic energy. One study suggests that approximately three billion years ago, most living organisms on Earth used retinal, rather than chlorophyll, to convert sunlight into energy. Because retinal absorbs mostly green light and transmits purple light, this gave rise to the Purple Earth hypothesis. Retinal itself is considered to be a form of vitamin A when eaten by an animal. There are many forms of vitamin A, all of which are converted to retinal, which cannot be made without them. The number of different molecules that can be converted to retinal varies from species to species. Retinal was originally called retinene, and was renamed after it was discovered to be vitamin A aldehyde. Vertebrate animals ingest retinal directly from meat, or they produce retinal from carotenoids – either from α-carotene or β-carotene – both of which are carotenes. They also produce it from β-cryptoxanthin, a type of xanthophyll. These carotenoids must be obtained from plants or other photosynthetic organisms. No other carotenoids can be converted by animals to retinal. Some carnivores cannot convert any carotenoids at all. The other main forms of vitamin A – retinol and a partially active form, retinoic acid – may both be produced from retinal. Invertebrates such as insects and squid use hydroxylated forms of retinal in their visual systems, which derive from conversion from other xanthophylls. Vitamin A metabolism Living organisms produce retinal by irreversible oxidative cleavage of carotenoids. For example: catalyzed by a beta-carotene 15,15'-monooxygenase or a beta-carotene 15,15'-dioxygenase. Just as carotenoids are the precursors of retinal, retinal is the precursor of the other forms of vitamin A. Retinal is interconvertible with retinol, the transport and storage form of vitamin A: catalyzed by retinol dehydrogenases (RDHs) and alcohol dehydrogenases (ADHs). Retinol is called vitamin A alcohol or, more often, simply vitamin A. Retinal can also be oxidized to retinoic acid: catalyzed by retinal dehydrogenases also known as retinaldehyde dehydrogenases (RALDHs) as well as retinal oxidases. Retinoic acid, sometimes called vitamin A acid, is an important signaling molecule and hormone in vertebrate animals. Vision Retinal is a conjugated chromophore. In the Vertebrate eyes, retinal begins in an 11-cis-retinal configuration, which — upon capturing a photon of the correct wavelength — straightens out into an all-trans-retinal configuration. This configuration change pushes against an opsin protein in the retina, which triggers a chemical signaling cascade, which results in perception of light or images by the brain. The absorbance spectrum of the chromophore depends on its interactions with the opsin protein to which it is bound, so that different retinal-opsin complexes will absorb photons of different wavelengths (i.e., different colors of light). Opsins Retinal is bound to opsins, which are G protein-coupled receptors (GPCRs). Opsins, like other GPCRs, have seven transmembrane alpha-helices connected by six loops. They are found in the photoreceptor cells in the retina of eye. The opsin in the vertebrate rod cells is rhodopsin. 
The rods form disks, which contain the rhodopsin molecules in their membranes and which are entirely inside of the cell. The N-terminus head of the molecule extends into the interior of the disk, and the C-terminus tail extends into the cytoplasm of the cell. The opsins in the cone cells are OPN1SW, OPN1MW, and OPN1LW. The cones form incomplete disks that are part of the plasma membrane, so that the N-terminus head extends outside of the cell. In opsins, retinal binds covalently to a lysine in the seventh transmembrane helix through a Schiff base. Forming the Schiff base linkage involves removing the oxygen atom from retinal and two hydrogen atoms from the free amino group of lysine, giving H2O. Retinylidene is the divalent group formed by removing the oxygen atom from retinal, and so opsins have been called retinylidene proteins. Opsins are prototypical G protein-coupled receptors (GPCRs). Cattle rhodopsin, the opsin of the rod cells, was the first GPCR to have its amino acid sequence and 3D-structure (via X-ray crystallography) determined. Cattle rhodopsin contains 348 amino acid residues. Retinal binds as chromophore at Lys296. This lysine is conserved in almost all opsins, only a few opsins have lost it during evolution. Opsins without the retinal binding lysine are not light sensitive. Such opsins may have other functions. Although mammals use retinal exclusively as the opsin chromophore, other groups of animals additionally use four chromophores closely related to retinal: 3,4-didehydroretinal (vitamin A2), (3R)-3-hydroxyretinal, (3S)-3-hydroxyretinal (both vitamin A3), and (4R)-4-hydroxyretinal (vitamin A4). Many fish and amphibians use 3,4-didehydroretinal, also called dehydroretinal. With the exception of the dipteran suborder Cyclorrhapha (the so-called higher flies), all insects examined use the (R)-enantiomer of 3-hydroxyretinal. The (R)-enantiomer is to be expected if 3-hydroxyretinal is produced directly from xanthophyll carotenoids. Cyclorrhaphans, including Drosophila, use (3S)-3-hydroxyretinal. Firefly squid have been found to use (4R)-4-hydroxyretinal. Visual cycle The visual cycle is a circular enzymatic pathway, which is the front-end of phototransduction. It regenerates 11-cis-retinal. For example, the visual cycle of mammalian rod cells is as follows: all-trans-retinyl ester + H2O → 11-cis-retinol + fatty acid; RPE65 isomerohydrolases; 11-cis-retinol + NAD+ → 11-cis-retinal + NADH + H+; 11-cis-retinol dehydrogenases; 11-cis-retinal + aporhodopsin → rhodopsin + H2O; forms Schiff base linkage to lysine, -CH=N+H-; rhodopsin + hν → metarhodopsin II (i.e., 11-cis photoisomerizes to all-trans): (rhodopsin + hν → photorhodopsin → bathorhodopsin → lumirhodopsin → metarhodopsin I → metarhodopsin II); metarhodopsin II + H2O → aporhodopsin + all-trans-retinal; all-trans-retinal + NADPH + H+ → all-trans-retinol + NADP+; all-trans-retinol dehydrogenases; all-trans-retinol + fatty acid → all-trans-retinyl ester + H2O; lecithin retinol acyltransferases (LRATs). Steps 3, 4, 5, and 6 occur in rod cell outer segments; Steps 1, 2, and 7 occur in retinal pigment epithelium (RPE) cells. RPE65 isomerohydrolases are homologous with beta-carotene monooxygenases; the homologous ninaB enzyme in Drosophila has both retinal-forming carotenoid-oxygenase activity and all-trans to 11-cis isomerase activity. 
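The rod visual cycle enumerated above can be written out as a small data structure, with each step's reaction, catalyst, and compartment taken from the text (RPE for retinal pigment epithelium, ROS for rod outer segment). The representation itself is just an illustrative choice.

```python
# Steps of the mammalian rod visual cycle, with the compartment stated in the text.
VISUAL_CYCLE = [
    ("all-trans-retinyl ester + H2O -> 11-cis-retinol + fatty acid", "RPE65 isomerohydrolase", "RPE"),
    ("11-cis-retinol + NAD+ -> 11-cis-retinal + NADH + H+", "11-cis-retinol dehydrogenase", "RPE"),
    ("11-cis-retinal + aporhodopsin -> rhodopsin + H2O", "Schiff base formation", "ROS"),
    ("rhodopsin + light -> metarhodopsin II", "photoisomerization, 11-cis to all-trans", "ROS"),
    ("metarhodopsin II + H2O -> aporhodopsin + all-trans-retinal", "hydrolysis", "ROS"),
    ("all-trans-retinal + NADPH + H+ -> all-trans-retinol + NADP+", "all-trans-retinol dehydrogenase", "ROS"),
    ("all-trans-retinol + fatty acid -> all-trans-retinyl ester + H2O", "lecithin retinol acyltransferase", "RPE"),
]

for i, (reaction, catalyst, place) in enumerate(VISUAL_CYCLE, start=1):
    print(f"{i}. [{place}] {reaction}  ({catalyst})")
```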
Microbial rhodopsins All-trans-retinal is also an essential component of microbial opsins such as bacteriorhodopsin, channelrhodopsin, and halorhodopsin, which are important in bacterial and archaeal anoxygenic photosynthesis. In these molecules, light causes the all-trans-retinal to become 13-cis retinal, which then cycles back to all-trans-retinal in the dark state. These proteins are not evolutionarily related to animal opsins and are not GPCRs; the fact that they both use retinal is a result of convergent evolution. History The American biochemist George Wald and others had outlined the visual cycle by 1958. For his work, Wald won a share of the 1967 Nobel Prize in Physiology or Medicine with Haldan Keffer Hartline and Ragnar Granit. See also Purple Earth hypothesis Sensory nervous system Visual perception Visual phototransduction References Further reading Good historical review. The oceans are full of type 1 rhodopsin. External links First Steps of Vision - National Health Museum Vision and Light-Induced Molecular Changes Retinal Anatomy and Visual Capacities Retinal, Imperial College v-chemlib Aldehydes Apocarotenoids Cyclohexenes Photosynthetic pigments Signal transduction Vision Vitamin A he:אופסין#רטינל
Retinal
[ "Chemistry", "Biology" ]
2,222
[ "Vitamin A", "Photosynthetic pigments", "Photosynthesis", "Signal transduction", "Biomolecules", "Biochemistry", "Neurochemistry" ]
1,010,229
https://en.wikipedia.org/wiki/Glycogen%20storage%20disease%20type%20II
Glycogen storage disease type II (GSD-II), also called Pompe disease, and formerly known as GSD-IIa or Limb–girdle muscular dystrophy 2V, is an autosomal recessive metabolic disorder which damages muscle and nerve cells throughout the body. It is caused by an accumulation of glycogen in the lysosome due to deficiency of the lysosomal acid alpha-glucosidase enzyme (GAA). The inability to break down glycogen within the lysosomes of cells leads to progressive muscle weakness throughout the body and affects various body tissues, particularly in the heart, skeletal muscles, liver and the nervous system. GSD-II and Danon disease are the only glycogen storage diseases characterised by a defect in lysosomal metabolism. It was first identified in 1932 by Dutch pathologist Joannes Cassianus Pompe, making it the first glycogen storage disease to be discovered. Signs and symptoms Infantile-Onset (IOPD) The infantile-onset (IOPD) form usually comes to medical attention within the first few months of life, either clinically or through newborn screening. The usual presenting features are cardiomyopathy, cardiomegaly, hypotonia, respiratory distress, muscle weakness, feeding difficulties and failure to thrive. IOPD patients can be further classified by Cross-Reactive Immunological Material (CRIM) status and has been found to be an important predictor of clinical response. Patients that produce no GAA protein are referred to as CRIM negative. Therefore, they can develop high sustained antibody titers to enzyme replacement therapy (ERT). Immunomodulation or immunotherapy has been found to be an effective treatment to prevent an immune response to ERT. The main clinical findings include floppy baby appearance, delayed motor milestones, and feeding difficulties. Moderate hepatomegaly may or may not be present. Facial features include macroglossia, hypernasal speech, hearing loss, and myopathic facies. Cardiopulmonary involvement is manifested by increased respiratory rate, use of accessory muscles for respiration, recurrent chest infections, decreased air entry in the left lower zone (due to cardiomegaly), arrhythmias, and evidence of heart failure. Before the development of a treatment, median age at death in untreated cases was 8.7 months, usually due to cardiorespiratory failure. However, this outcome is drastically changed since enzyme replacement therapy has become available, improving with early initiation of treatment. Late onset form The Late-onset (LOPD) form differs from the infantile-onset principally in the relative lack of cardiac involvement. The onset has a slower progression and can present at any decade of life. Cardiac involvement may occur but is milder than in the infantile form. Skeletal involvement is more prominent with a predilection for the lower limbs. Late onset features include impaired cough, recurrent chest infections, hypotonia, progressive muscle weakness, delayed motor milestones, difficulty swallowing or chewing and reduced vital capacity. One of the difficulties with attributing the illness solely to genetic deficiencies is that, even between people as genetically similar as identical twins, the symptoms may differ. For example, one may be in pain, whilst the other may not. Similarly, the rate of muscle deterioration of one may be faster than the other. Prognosis depends on the age of onset of symptoms with a better prognosis being associated with later onset disease. Cause Pompe disease has an autosomal recessive inheritance pattern. 
This means the defective gene is located on an autosome, and two faulty copies of the gene—one from each parent—are required to be born with the disorder. As with all cases of autosomal recessive inheritance, children have a one in four chance of inheriting the disorder when both parents carry the defective gene, and although both parents carry one copy of the defective gene, they are usually not affected by the disorder. The disease is caused by a mutation in a gene (acid alpha-glucosidase: also known as acid maltase) on long arm of chromosome 17 at 17q25.2-q25.3 (base pair 75,689,876 to 75,708,272). The number of mutations described is currently (in 2010) 289 with 67 being non-pathogenic mutations and 197 pathogenic mutations. The remainder are still being evaluated for their association with disease. The gene spans approximately 20 kb and contains 20 exons with the first exon being noncoding. The coding sequence of the putative catalytic site domain is interrupted in the middle by an intron of 101 bp. The promoter has features characteristic of a housekeeping gene. The GC content is high (80%) and distinct TATA and CCAAT motifs are lacking. Most cases appear to be due to three mutations. A transversion (T → G) mutation is the most common among adults with this disorder. This mutation interrupts a site of RNA splicing. The gene encodes a protein—acid alpha-glucosidase (EC 3.2.1.20)—which is a lysosomal hydrolase. The protein is an enzyme that normally degrades the alpha -1,4 and alpha -1,6 linkages in glycogen, maltose and isomaltose and is required for the degradation of 1–3% of cellular glycogen. The deficiency of this enzyme results in the accumulation of structurally normal glycogen in lysosomes and cytoplasm in affected individuals. Excessive glycogen storage within lysosomes may interrupt normal functioning of other organelles and lead to cellular injury. A putative homologue—acid alpha-glucosidase-related gene 1—has been identified in the nematode Caenorhabditis elegans. Diagnosis In the early-onset form, an infant will present with poor feeding causing failure to thrive, or with difficulty breathing. The usual initial investigations include chest X ray, electrocardiogram and echocardiography. Typical findings are those of an enlarged heart with non specific conduction defects. Biochemical investigations include serum creatine kinase (typically increased 10 fold) with lesser elevations of the serum aldolase, aspartate transaminase, alanine transaminase and lactic dehydrogenase. Diagnosis is made by estimating the acid alpha glucosidase activity in either skin biopsy (fibroblasts), muscle biopsy (muscle cells) or in white blood cells. The choice of sample depends on the facilities available at the diagnostic laboratory. In the late-onset form, an adult will present with gradually progressive arm and leg weakness, with worsening respiratory function. Electromyography may be used initially to distinguish Pompe from other causes of limb weakness. The findings on biochemical tests are similar to those of the infantile form, with the caveat that the creatine kinase may be normal in some cases. The diagnosis is by estimation of the enzyme activity in a suitable sample. On May 17, 2013, the Secretary's Discretionary Advisory Committee on Heritable Diseases in Newborns and Children (DACHDNC) approved a recommendation to the Secretary of Health and Human Services to add Pompe to the Recommended Uniform Screening Panel (RUSP). 
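The one-in-four figure mentioned above follows from enumerating the four equally likely allele combinations two carrier parents can pass on; a tiny sketch, using "A" for a working GAA allele and "a" for the faulty one:

```python
from itertools import product

# Each carrier parent passes either the normal allele ("A") or the faulty one ("a").
offspring = [a + b for a, b in product("Aa", repeat=2)]          # AA, Aa, aA, aa
affected = sum(child == "aa" for child in offspring)             # two faulty copies
carriers = sum("a" in child and child != "aa" for child in offspring)
print(f"affected: {affected}/4, carriers: {carriers}/4")         # affected: 1/4, carriers: 2/4
```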
The HHS secretary must first approve the recommendation before the disease is formally added to the panel. Classification There are exceptions, but levels of alpha-glucosidase determines the type of GSD-II an individual may have. More alpha-glucosidase present in the individual's muscles means symptoms occur later in life and progress more slowly. GSD-II is broadly divided into two onset forms based on the age symptoms occur. Infantile-onset form is usually diagnosed at 4–8 months; muscles appear normal but are limp and weak preventing the child from lifting their head or rolling over. As the disease progresses, heart muscles thicken and progressively fail. Without treatment, death usually occurs due to heart failure and respiratory weakness. Late or later onset form occurs later than one to two years and progresses more slowly than Infantile-onset form. One of the first symptoms is a progressive decrease in muscle strength starting with the legs and moving to smaller muscles in the trunk and arms, such as the diaphragm and other muscles required for breathing. Respiratory failure is the most common cause of death. Enlargement of the heart muscles and rhythm disturbances are not significant features but do occur in some cases. Treatment Cardiac and respiratory complications are treated symptomatically. Physical and occupational therapy may be beneficial for some patients. Alterations in diet may provide temporary improvement but will not alter the course of the disease. Genetic counseling can provide families with information regarding risk in future pregnancies. On April 28, 2006, the US Food and Drug Administration (FDA) approved a biologic license application (BLA) for alglucosidase alfa, rhGAA (Myozyme), the first treatment for patients with Pompe disease, developed by a team of Duke University researchers. This was based on enzyme replacement therapy using biologically active recombinant human alglucosidase alfa produced in Chinese Hamster Ovary cells. Myozyme falls under the FDA orphan drug designation and was approved under a priority review. The FDA approved Myozyme for administration by intravenous infusion of the solution. The safety and efficacy of Myozyme were assessed in two separate clinical trials in 39 infantile-onset patients with Pompe disease ranging in age from 1 month to 3.5 years at the time of the first infusion. Myozyme treatment prolongs ventilator-free survival and overall survival. Early diagnosis and early treatment leads to much better outcomes. The treatment is not without side effects which include fever, flushing, skin rash, increased heart rate and even shock; these conditions, however, are usually manageable. Myozyme costs an average of US$300,000 a year and must be taken for the patients' entire life, so some American health insurers have refused to pay for it. In August 2006, Health Canada approved Myozyme for the treatment of Pompe disease. In June 2007, the Canadian Common Drug Review issued their recommendations regarding public funding for Myozyme therapy. Their recommendation was to provide funding to treat a tiny subset of Pompe patients (Infants less one year of age with cardiomyopathy). In May 2010, the FDA approved Lumizyme for the treatment of late-onset Pompe disease. Lumizyme and Myozyme have the same generic ingredient (alglucosidase alfa) and manufacturer (Genzyme Corporation). The difference between these two products is in the manufacturing process. Myozyme is made using a 160-L bioreactor, while Lumizyme uses a 4000-L bioreactor. 
Because of the difference in the manufacturing process, the FDA claims that the two products are biologically different. Myozyme is FDA approved for replacement therapy for infantile-onset Pompe disease. In July 2021, the European Medicines Agency (EMA) recommended the authorization of avalglucosidase alfa. Avalglucosidase alfa (Nexviazyme) was approved for medical use in the United States in August 2021, and in the European Union in June 2022. In December 2022, the EMA recommended the authorization of cipaglucosidase alfa. The approval was given in June 2023. In the EU, the therapy is available to all age groups without restrictions on weight of patients. In September 2023, the FDA approved a two-component therapy of Pombiliti (cipaglucosidase alfa-atga) and Opfolda (miglustat) 65 mg capsules for adults living with late-onset Pompe disease weighing more than 40 kg and who are not improving on their current enzyme replacement therapy. Prognosis The prognosis for individuals with Pompe disease varies according to the onset and severity of symptoms, along with lifestyle factors. Without treatment the infantile form (which can typically be predicted by mutation analysis) of the disease is particularly lethal — in these cases, the time taken to begin treatment is critical, with evidence that days (not weeks or months) matter. Myozyme (alglucosidase alfa) is a recombinant form of the human enzyme acid alpha-glucosidase, and is also currently being used to replace the missing enzyme. In a study which included the largest cohort of patients with Pompe disease treated with enzyme replacement therapy (ERT) to date findings showed that Myozyme treatment clearly prolongs ventilator-free survival and overall survival in patients with infantile-onset Pompe disease as compared to an untreated historical control population. Furthermore, the study demonstrated that initiation of ERT prior to 6 months of age, which could be facilitated by newborn screening, shows great promise to reduce the mortality and disability associated with this devastating disorder. Taiwan and several states in the United States have started the newborn screening and results of such regimen in early diagnosis and early initiation of the therapy have dramatically improved the outcome of the disease; many of these babies have reached the normal motor developmental milestones. Another factor affecting the treatment response is generation of antibodies against the infused enzyme, which is particularly severe in Pompe infants who have complete deficiency of the acid alpha-glucosidase. Immune tolerance therapy to eliminate these antibodies has improved the treatment outcome. A Late Onset Treatment Study (LOTS) was published in 2010. The study was undertaken to evaluate the safety and efficacy of aglucosidase alfa in juvenile and adult patients with Pompe disease. LOTS was a randomized, double-blind, placebo-controlled study that enrolled 90 patients at eight primary sites in the United States and Europe. Participants received either aglucosidase alfa or a placebo every other week for 18 months. The average age of study participants was 44 years. The primary efficacy endpoints of the study sought to determine the effect of Myozyme on functional endurance as measured by the six-minute walk test and to determine the effect of aglucosidase alfa on pulmonary function as measured by percent predicted forced vital capacity. 
The results showed that, at 78 weeks, patients treated with aglucosidase alfa increased their distance walked in six minutes by an average of approximately 25 meters as compared with the placebo group which declined by 3 meters (P=0.03). The placebo group did not show any improvement from baseline. The average baseline distance walked in six minutes in both groups was approximately 325 meters. Percent predicted forced vital capacity in the group of patients treated with aglucosidase alfa increased by 1.2 percent at 78 weeks. In contrast, it declined by approximately 2.2 percent in the placebo group (P=0.006). There is an emerging recognition of the role that diet and exercise can play in functionally limiting symptom progression. This is an area for further study, as there is not a clear consensus guideline, but rather a body of case study work that suggests that appropriate physical activity can be an effective tool in managing disease progression. In one such study, side-alternating vibration training was used 3 times per week for 15 weeks. The results showed that, at 15 weeks, the patient had a 116-meter (70%) improvement to their 6MWT, which is significant compared with the results from the aforementioned LOTS study. Epidemiology The total birth prevalence of Pompe disease is 1:18,698. History The disease is named after Joannes Cassianus Pompe, who characterized it in 1932. Pompe described accumulation of glycogen in muscle tissue in some cases of a previously unknown disorder. This accumulation was difficult to explain as the enzymes involved in the usual metabolism of glucose and glycogen were all present and functioning. The basis for the disease remained a puzzle until Christian de Duve's discovery of lysosomes in 1955 for which he won the Nobel Prize in 1974. His co-worker Henri G. Hers realised in 1965 that the deficiency of a lysosomal enzyme (alpha glucosidase) for the breakdown of glycogen could explain the symptoms of Pompe disease. This discovery led to establishing the concept of lysosomal storage diseases, of which 49 have been described (to date). Despite recognizing the basis for the disease, treatment proved difficult. Administration of the enzyme lead to its uptake by the liver and not the muscle cells where it is needed. In the early 1990s Dutch scientists Arnold Reuser and Ans van der Ploeg were able to show that using alpha-glucosidase containing phosphorylated mannose residues purified from bovine testes increased the enzyme's activity in normal mouse muscles. Later in 1998, Yuan-Tsong Chen and colleagues at Duke University, using the enzyme produced in Chinese hamster ovary (CHO) cells demonstrated for the first time that the enzyme can clear the glycogen and improved the muscle function in Pompe disease quail. The results of the work at Duke were impressive with one treated bird recovering to the point of being able to fly again. This was followed by production of clinical grade alpha-glucosidase in Chinese hamster ovary (CHO) cells and in the milk of transgenic rabbits. This work eventually culminated in the start of clinical trials with the first clinical trial including four babies receiving enzyme from rabbit milk at Erasmus MC Sophia Children's Hospital and three babies receiving enzyme grown in CHO cells at Duke University in 1999. The currently approved Myozyme is manufactured by Genzyme Corp. in Cambridge, Massachusetts. Its development was a complex process. 
Genzyme first partnered with Pharming Group NV who had managed to produce acid alpha-glucosidase from the milk of transgenic rabbits. They also partnered with a second group based at Duke University using Chinese hamster ovary cells. In 2001, Genzyme acquired Novazyme which was also working on this enzyme. Genzyme also had its own product (Myozyme) grown in CHO cells under development. In November 2001, Genzyme chief executive Henri Termeer organised a systematic comparison of the various potential drugs in a mouse model of Pompe disease. It was found that the Duke enzyme was the most efficacious, followed by Myozyme. However, due to easier manufacture of Myozyme, work on the other products was discontinued. Funding for research in this field was in part provided by the Muscular Dystrophy Association and the Acid Maltase Deficiency Association in the US, and by the Association of Glycogen Storage Disease in the UK, as well as the International Pompe Association. John Crowley became involved in the fund-raising efforts in 1998 after two of his children were diagnosed with Pompe. He joined the company Novazyme in 1999, which was working on enzyme replacement treatment for Pompe. Novazyme was sold to Genzyme in 2001 for over US$100 million. The 2010 film Extraordinary Measures is based on Crowley's search for a cure. As of 2019, many biomedical companies are developing gene therapy in hopes of helping the body create alpha-glucosidase on its own. In 2021, in utero enzyme replacement therapy infusions were provided to the fetus of an Ottawa, Ontario, mother who had had two previous children with Pompe disease. The medical team was a collaboration between Ottawa Hospital, Children's Hospital of Eastern Ontario, the University of California, San Francisco, and Duke University. The child, born in June 2021, is thriving as of November 2022. See also Autophagic vacuolar myopathy Glycogen storage disease Danon disease (formerly GSD-IIb) Inborn errors of carbohydrate metabolism Lysosomal storage disease Metabolic myopathies References External links Understanding Pompe Disease - US National Institute of Arthritis and Musculoskeletal and Skin Diseases AGSD — Association of Glycogen Storage Disease in the United States AGSD-UK — Association of Glycogen Storage Disease in the UK AMDA — Acid Maltase Deficiency Association (Pompe disease) IPA — International Pompe Association IamGSD — International Association for Muscle Glycogen Storage Disease Autosomal recessive disorders Hepatology Inborn errors of carbohydrate metabolism Lysosomal storage diseases Rare diseases
Glycogen storage disease type II
[ "Chemistry" ]
4,302
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
1,010,309
https://en.wikipedia.org/wiki/Bleomycin
Bleomycin is a medication primarily used to treat cancer. This includes Hodgkin's lymphoma, non-Hodgkin's lymphoma, testicular cancer, ovarian cancer, and cervical cancer among others. Typically used with other cancer medications, it can be given intravenously, by injection into a muscle or under the skin. It may also be administered inside the chest to help prevent the recurrence of a pleural effusion due to cancer; however talc is better for this. It may sometimes be used to treat other difficult-to-treat skin lesions such as plantars warts in immunocompromised patients. Common side effects include fever, weight loss, vomiting, and rash. A severe type of anaphylaxis may occur. It may also cause inflammation of the lungs that can result in lung scarring. Chest X-rays every couple of weeks are recommended to check for this. Bleomycin may cause harm to the baby if used during pregnancy. It is believed to primarily work by preventing the synthesis of DNA. Bleomycin was discovered in 1962. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It is made by the bacterium Streptomyces verticillus. Medical uses Cancer Bleomycin is mostly used to treat cancer. This includes testicular cancer, ovarian cancer, and Hodgkin's disease, and less commonly non-Hodgkin's disease. It can be given intravenously, by intramuscular injection, or under the skin. Other uses It may also be put inside the chest to help prevent the recurrence of a pleural effusion due to cancer. However, for scarring down the pleura, talc appears to be the better option although indwelling pleural catheters are at least as effective in reducing the symptoms of an effusion(such as dyspnea). While potentially effective against bacterial infections, its toxicity prevents its use for this purpose. It has been studied in the treatment of warts but is of unclear benefit. Side effects The most common side effects are flu-like symptoms and include fever, rash, dermatographism, hyperpigmentation, alopecia (hair loss), chills, and Raynaud's phenomenon (discoloration of fingers and toes). The most serious complication of bleomycin, occurring upon increasing dosage, is pulmonary fibrosis and impaired lung function. It has been suggested that bleomycin induces sensitivity to oxygen toxicity and recent studies support the role of the proinflammatory cytokines IL-18 and IL-1beta in the mechanism of bleomycin-induced lung injury. Any previous treatment with bleomycin should therefore always be disclosed to the anaesthetist prior to undergoing a procedure requiring general anaesthesia. Due to the oxygen sensitive nature of bleomycin, and the theorised increased likelihood of developing pulmonary fibrosis following supplemental oxygen therapy, it has been questioned whether patients should take part in scuba diving following treatment with the drug. Bleomycin has also been found to disrupt the sense of taste. Lifetime cumulative dose Bleomycin should not exceed a lifetime cumulative dose greater than 400 units. Pulmonary toxicities, most commonly presenting as pulmonary fibrosis, are associated with doses of bleomycin greater than 400 units. Mechanism of action Bleomycin acts by induction of DNA strand breaks. Some studies suggest bleomycin also inhibits incorporation of thymidine into DNA strands. DNA cleavage by bleomycin depends on oxygen and metal ions, at least in vitro. 
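As an illustration of the 400-unit lifetime cumulative limit noted above, the sketch below totals a dose history and checks it against that cap. The dose values and function name are invented, and this is not dosing guidance.

```python
def lifetime_dose_ok(doses_units, lifetime_max=400):
    """Check a bleomycin dose history against the 400-unit lifetime cumulative limit."""
    total = sum(doses_units)
    return total, total <= lifetime_max

history = [30, 30, 30, 30, 30]   # hypothetical per-cycle doses, in units
total, ok = lifetime_dose_ok(history)
print(f"cumulative: {total} units, within lifetime limit: {ok}")
```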
The exact mechanism of DNA strand scission is unresolved, but it has been suggested that bleomycin chelates metal ions (primarily iron), producing a pseudoenzyme that reacts with oxygen to produce superoxide and hydroxide free radicals that cleave DNA. An alternative hypothesis states that bleomycin may bind at specific sites in the DNA strand and induce scission by abstracting the hydrogen atom from the base, resulting in strand cleavage as the base undergoes a Criegee-type rearrangement, or forms an alkali-labile lesion. Biosynthesis Biosynthesis of bleomycin is completed by glycosylation of the aglycones. Naturally occurring bleomycin analogues have two to three sugar molecules, and the DNA cleavage activities of these analogues have been assessed, primarily by the plasmid relaxation and break light assays. History Bleomycin was first discovered in 1962 when the Japanese scientist Hamao Umezawa found anticancer activity while screening culture filtrates of Streptomyces verticillus. Umezawa published his discovery in 1966. The drug was launched in Japan by Nippon Kayaku in 1969. In the US, bleomycin gained FDA approval in July 1973. It was initially marketed in the US by the Bristol-Myers Squibb precursor, Bristol Laboratories, under the brand name Blenoxane. Research Bleomycin is used in research to induce pulmonary fibrosis in mice. It accomplishes this by preventing alveolar cell proliferation, which in turn leads to cellular senescence. See also Flagellate pigmentation from bleomycin Pingyangmycin (Bleomycin A5) References Further reading Cancer research DNA intercalators DNA replication inhibitors Glycopeptide antibiotics IARC Group 2B carcinogens Sulfonium compounds World Health Organization essential medicines Eukaryotic selection compounds Hydroxymethyl compounds Japanese inventions
Bleomycin
[ "Chemistry" ]
1,197
[ "Glycopeptide antibiotics", "Glycopeptides" ]
1,010,311
https://en.wikipedia.org/wiki/Micromanagement%20%28gameplay%29
Micromanagement in gaming is the handling of detailed gameplay elements by the player. It appears in a wide range of games and genres, including strategy video games, construction and management simulations, and pet-raising simulations. Micromanagement has been perceived in different ways by game designers and players for many years: some perceive it as a useful addition to games that adds options and technique to the gameplay, something that is necessary if the game is to support top-level competitions; some enjoy opportunities to use tactical skill in strategic games; others regard it as an unwelcome distraction from higher levels of strategic thinking and dislike having to do a lot of detailed work. Some developers attempt to minimize micromanagement in a game's interface for this reason. Combat Detailed management of units in combat aims to maximize damage given to enemy units and minimize damage to the player's units. For standard combat units the most common techniques are: grouping units into formations, for example to keep lightly armored shooters behind and protected by more heavily armored melee units; concentrating the fire of all ranged units on one target and then a second, etc., to destroy threats as fast as possible; withdrawing seriously damaged units from combat, if repairing / healing them is cheaper than replacing them; "dancing" units that have taken some damage out of enemy weapons range and then back into combat once the enemy have locked on to another target; using military tactics such as flanking and counterattacks; exploiting nontransitive ("circle of death" or "rock-paper-scissors") power relationships between units; using cheap units to draw the enemy's fire away from more expensive units, gameplay especially typical of games of the real-time tactics type. Micromanagement is even more necessary for units with special abilities, that can only be used infrequently. "Micromanagement" in this sense is often abbreviated to "micro", which can be used as a noun or a verb. Versus macromanagement There is sometimes confusion regarding the difference between micromanagement and macromanagement, normally abbreviated as 'micro' and 'macro' respectively. Macro generally refers to managing large quantities of tasks at the same time. For example, building units from various structures throughout the game while also building more structures, scouting, creating new bases, etc. This is different from micro, which is generally controlling small numbers of units and giving them very specific orders. Economic The range of possible economic micromanagement techniques is much wider than for combat, because strategy games' economies work in so many different ways. If the game uses "worker" units to gather resources and / or build things (a common technique in real-time strategy games), one must make sure none are idle and that they are doing the right things, and must avoid letting enemy raiders destroy them. In some turn-based games one tells colonies what percentages of their efforts to put into various activities such as industrial growth, research, and building defenses or combat units; as colonies grow or the strategic situation changes, one has to check and adjust these ratios. In Sid Meier's Civilization series, it may be important for either economic or military reasons to build railroads as fast as possible, and doing this efficiently requires considerable micromanagement of Settler/Engineer units. 
Twitch vs trick Some forms of micromanagement involve continuous input of a large number of commands over a short period of time. This is known as twitch micromanagement. For example, a micromanagement technique known as kiting requires continuous input from the player in order to keep their character at an optimum distance from a target. Another example of twitch micromanagement can be found in racing games whereby a player is required to keep making split second adjustments to the position of their vehicle. In contrast to twitch micromanagement, some game elements need only occasional input from the player in order to exploit tricks in their behavior. In these situations, quick thinking is rewarded over continuous, quick reaction. This is known as trick micromanagement. Other types of games are based entirely on micromanagement, such as pet-raising simulations and games like Cake Mania, where the player's ability to micromanage is often the only skill being tested by the game. Policy-based Some games are designed in such a way that players must constantly set or check strategic parameters to ensure that operations are proceeding smoothly and efficiently. A typical city-building game or 4X game, for example, requires the player to regulate taxation and production levels in order to keep their industries and commerce flowing. The amount of detail that goes into a simulation like this may necessitate spending a disproportionate amount of time in adjusting relatively minor parameters in order to achieve maximum efficiency. Controversy Micromanagement can divert the player's attention from grand strategy by overloading the player with repetitive and mechanical work. Some commentators think that "Strategy is irrelevant in today's real-time strategy games when you're playing against a fourteen-year-old who can click twice as fast as you." Games in which constant micromanagement is needed are often described as "micromanagement hell". In turn-based games the need for economic micromanagement is generally regarded as a defect in the design, and more recent TBS games have tried to minimize it. But hands-on tactical combat is a feature of many turn-based games (e.g. Master of Orion II, Space Empires III, Heroes of Might and Magic III), and reviewers complained about the difficulty of controlling combat in Master of Orion 3. There is controversy between fans of different RTS games about whether micromanagement is: (a) a skill which involves making decisions quickly while under pressure; or (b) a chore which degenerates into a "clickfest" where a player who is faster with the mouse usually beats a player who is better at grand strategy. As a result, real time strategy games vary widely from e.g. Total Annihilation, which eliminates most economic micromanagement and reduces tactical micromanagement, to StarCraft, in which both economic and tactical micromanagement are considered important skills. Software has been developed to analyze players' actions per minute (commonly known as APM). 
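APM is simply a count of issued commands normalised to a one-minute window. As a rough illustration of how such analysis software works, here is a minimal sketch; the function name, the sliding 60-second window, and the example timings are assumptions for illustration, not details of any particular replay-analysis tool:

```python
from bisect import bisect_left

def actions_per_minute(timestamps, now):
    """Count commands issued in the trailing 60 seconds.

    timestamps: sorted list of command times, in seconds.
    now: current game time, in seconds (all timestamps <= now).
    Real analyzers may instead average over the whole game or
    filter out repeated "spam" clicks before counting.
    """
    window_start = now - 60.0
    # Number of timestamps that fall inside the last 60 seconds.
    return len(timestamps) - bisect_left(timestamps, window_start)

# Example: one command every half second gives an APM of 120.
times = [0.5 * i for i in range(300)]
print(actions_per_minute(times, now=150.0))  # -> 120
```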
Other games aim for differing levels of micromanagement of different types: for instance, the Relic Entertainment title Dawn of War 2 minimises economic micromanagement as much as possible, such that there is no base construction, all units are produced from a single source, and resources are accumulated automatically over time by controlling strategic battlefield locations, while on the other hand the game emphasises tactical micromanagement as its primary skill, with combat taking place principally between relatively small squads of highly effective and highly vulnerable units, with victory a function of the rapid deployment of special weapons and tactics in order to counter enemy manoeuvres and inflict maximum damage quickly while avoiding sustaining damage. A Gamasutra article pointed out that micromanagement in Civilization III resulted in the game becoming "a chore more than a game," explaining: "Computers can now animate more units than any player could reasonably want to control, and the number will continue to increase exponentially." Many role-playing video games and first-person shooters are developing more advanced hotkey layouts, allowing these genres to develop their own micromanagement skills. In popular culture The popular Internet-distributed mockumentary series Pure Pwnage coined the term "über-micro", a term describing unusually superior levels of micromanagement. In one episode, it was claimed that micromanagement was discovered in "The Battle of 1974". In South Korea, the real-time strategy game StarCraft is highly popular as a professional sport. The need to micromanage efficiently and multitask under pressure are regarded as features that make it suitable for top-level competitions. The game is broadcast on Korean national television, showing professional players' micromanagement skills. See also Actions per minute Macromanagement Micromanagement, in business Real-time strategy Real-time tactics Turn-based strategy References Video game terminology
Micromanagement (gameplay)
[ "Technology" ]
1,649
[ "Computing terminology", "Video game terminology" ]
1,010,454
https://en.wikipedia.org/wiki/Maldevelopment
Maldevelopment is the state of an organism or an organisation that did not develop in the "normal" way (used in medicine, e.g. "brain maldevelopment of a fetus"). It was introduced as a human and social development term in France in the 1990s by Samir Amin to challenge the concept of "underdevelopment." The word maldéveloppement did not exist before then (the medical terms are malformation or développement anormal), so the word is a neologism meant to be analogous to the difference between undernutrition and malnutrition. Maldevelopment is a global concept that includes human and social development. Under the philosophy of sustainable development, economic development is only a "tool" that allows for greater human and social development, not the final goal. Under-development is a quantitative notion, implying that a nation has a lack and must gain something to reach a particular reference state—the state of the nation that judges another nation as underdeveloped. So this notion also implies a unique development model—the one of the judging nation. Mal-development, or ill-development, is a qualitative notion that expresses a mismatch, a discrepancy between the conditions (economic, political, meteorological, cultural, etc.) and the needs and means of the people. See also Human development theory. References Human development
Maldevelopment
[ "Biology" ]
295
[ "Behavioural sciences", "Behavior", "Human development" ]
1,010,494
https://en.wikipedia.org/wiki/Enterprise%20information%20system
An Enterprise Information System (EIS) is any kind of information system which improves the functions of enterprise business processes by integration. This means typically offering high quality of service, dealing with large volumes of data and capable of supporting some large and possibly complex organization or enterprise. An EIS must be able to be used by all parts and all levels of an enterprise. The word enterprise can have various connotations. Frequently the term is used only to refer to very large organizations such as multi-national companies or public-sector organizations. However, the term may be used to mean virtually anything, by virtue of it having become a corporate-speak buzzword. Purpose Enterprise information systems provide a technology platform that enables organizations to integrate and coordinate their business processes on a robust foundation. An EIS is currently used in conjunction with customer relationship management and supply chain management to automate business processes. An enterprise information system provides a single system that is central to the organization that ensures information can be shared across all functional levels and management hierarchies. An EIS can be used to increase business productivity and reduce service cycles, product development cycles and marketing life cycles. It may be used to amalgamate existing applications. Other outcomes include higher operational efficiency and cost savings. Financial value is not usually a direct outcome from the implementation of an enterprise information system. Design stage At the design stage the main characteristic of EIS efficiency evaluation is the probability of timely delivery of various messages such as command, service, etc. Information systems Enterprise systems create a standard data structure and are invaluable in eliminating the problem of information fragmentation caused by multiple information systems within an organization. An EIS differentiates itself from legacy systems in that it is self-transactional, self-helping and adaptable to general and specialist conditions. Unlike an enterprise information system, legacy systems are limited to department-wide communications. A typical enterprise information system would be housed in one or more data centers, would run enterprise software, and could include applications that typically cross organizational borders such as content management systems. See also Executive information system Management information system Enterprise planning systems Enterprise software References Data management Enterprise architecture Enterprise modelling Website management
Enterprise information system
[ "Technology", "Engineering" ]
435
[ "Data management", "Systems engineering", "Enterprise modelling", "Data" ]
1,010,522
https://en.wikipedia.org/wiki/Disjunction%20and%20existence%20properties
In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005). Definitions The disjunction property is satisfied by a theory if, whenever a sentence A ∨ B is a theorem, then either A is a theorem, or B is a theorem. The existence property or witness property is satisfied by a theory if, whenever a sentence (∃x)A(x) is a theorem, where A(x) has no other free variables, then there is some term t such that the theory proves A(t). Related properties Rathjen (2005) lists five properties that a theory may possess. These include the disjunction property (DP), the existence property (EP), and three additional properties: The numerical existence property (NEP) states that if the theory proves (∃x)φ(x), where φ has no other free variables, then the theory proves φ(n) for some numeral n. Here a numeral is a term in the language representing a natural number. Church's rule (CR) states that if the theory proves (∀x)(∃y)φ(x, y) then there is a natural number e such that, letting f_e be the computable function with index e, the theory proves (∀x)φ(x, f_e(x)). A variant of Church's rule, CR1, states that if the theory proves (∃f)ψ(f), where f ranges over functions from the natural numbers to the natural numbers, then there is a natural number e such that the theory proves that f_e is total and proves ψ(f_e). These properties can only be directly expressed for theories that have the ability to quantify over natural numbers and, for CR1, quantify over functions from the natural numbers to the natural numbers. In practice, one may say that a theory has one of these properties if a definitional extension of the theory has the property stated above (Rathjen 2005). Results Non-examples and examples Almost by definition, a theory that accepts excluded middle while having independent statements does not have the disjunction property. So all effectively axiomatized, consistent classical theories expressing Robinson arithmetic fail to have it. Most classical theories, such as Peano arithmetic and ZFC, in turn do not validate the existence property either, e.g. because they prove least-number-principle existence claims without proving any particular witness. But some classical theories, such as ZFC plus the axiom of constructibility, do have a weaker form of the existence property (Rathjen 2005). Heyting arithmetic is well known for having the disjunction property and the (numerical) existence property. While the earliest results were for constructive theories of arithmetic, many results are also known for constructive set theories (Rathjen 2005). John Myhill (1973) showed that IZF with the axiom of replacement eliminated in favor of the axiom of collection has the disjunction property, the numerical existence property, and the existence property. Michael Rathjen (2005) proved that CZF has the disjunction property and the numerical existence property. Freyd and Scedrov (1990) observed that the disjunction property holds in free Heyting algebras and free topoi. In categorical terms, in the free topos, that corresponds to the fact that the terminal object, 1, is not the join of two proper subobjects. Together with the existence property it translates to the assertion that 1 is an indecomposable projective object—the functor it represents (the global-section functor) preserves epimorphisms and coproducts. Relationship between properties There are several relationships between the five properties discussed above. In the setting of arithmetic, the numerical existence property implies the disjunction property. The proof uses the fact that a disjunction A ∨ B can be rewritten as an existential formula quantifying over natural numbers: (∃n)((n = 0 → A) ∧ (n ≠ 0 → B)). Therefore, if A ∨ B is a theorem of the theory, so is this existential formula. 
Thus, assuming the numerical existence property, there is some numeral n such that (n = 0 → A) ∧ (n ≠ 0 → B) is a theorem. Since n is a numeral, one may concretely check its value: if n = 0 then A is a theorem, and if n ≠ 0 then B is a theorem. Harvey Friedman (1975) proved that in any recursively enumerable extension of intuitionistic arithmetic, the disjunction property implies the numerical existence property. The proof uses self-referential sentences in a way similar to the proof of Gödel's incompleteness theorems. The key step is to find a bound on the existential quantifier in a formula (∃x)A(x), producing a bounded existential formula (∃x<n)A(x). The bounded formula may then be written as a finite disjunction A(1)∨A(2)∨...∨A(n). Finally, disjunction elimination may be used to show that one of the disjuncts is provable. History Kurt Gödel (1932) stated without proof that intuitionistic propositional logic (with no additional axioms) has the disjunction property; this result was proven and extended to intuitionistic predicate logic by Gerhard Gentzen (1934, 1935). Stephen Cole Kleene (1945) proved that Heyting arithmetic has the disjunction property and the existence property. Kleene's method introduced the technique of realizability, which is now one of the main methods in the study of constructive theories (Kohlenbach 2008; Troelstra 1973). See also Constructive set theory Heyting arithmetic Law of excluded middle Realizability Existential quantifier References Peter J. Freyd and Andre Scedrov, 1990, Categories, Allegories. North-Holland. Harvey Friedman, 1975, The disjunction property implies the numerical existence property, State University of New York at Buffalo. Gerhard Gentzen, 1934, "Untersuchungen über das logische Schließen. I", Mathematische Zeitschrift v. 39 n. 2, pp. 176–210. Gerhard Gentzen, 1935, "Untersuchungen über das logische Schließen. II", Mathematische Zeitschrift v. 39 n. 3, pp. 405–431. Kurt Gödel, 1932, "Zum intuitionistischen Aussagenkalkül", Anzeiger der Akademie der Wissenschaften in Wien, v. 69, pp. 65–66. Stephen Cole Kleene, 1945, "On the interpretation of intuitionistic number theory," Journal of Symbolic Logic, v. 10, pp. 109–124. Ulrich Kohlenbach, 2008, Applied proof theory, Springer. John Myhill, 1973, "Some properties of Intuitionistic Zermelo-Fraenkel set theory", in A. Mathias and H. Rogers, Cambridge Summer School in Mathematical Logic, Lecture Notes in Mathematics v. 337, pp. 206–231, Springer. Michael Rathjen, 2005, "The Disjunction and Related Properties for Constructive Zermelo-Fraenkel Set Theory", Journal of Symbolic Logic, v. 70 n. 4, pp. 1233–1254. Anne S. Troelstra, ed. (1973), Metamathematical investigation of intuitionistic arithmetic and analysis, Springer. External links Proof theory Constructivism (mathematics)
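The argument in the Relationship between properties section can be compressed into a schematic derivation. The following is a sketch in standard notation; the symbol T for the theory and the numeral notation n̄ are supplied here for illustration and are not taken verbatim from the cited sources:

```latex
% NEP implies DP over arithmetic, shown schematically (requires amsmath).
\begin{align*}
T \vdash A \lor B
  &\;\Longrightarrow\;
  T \vdash \exists n \,\bigl((n = 0 \to A) \land (n \neq 0 \to B)\bigr)
  && \text{(rewrite the disjunction)}\\
  &\;\Longrightarrow\;
  T \vdash (\bar{n} = 0 \to A) \land (\bar{n} \neq 0 \to B)
  \ \text{for some numeral } \bar{n}
  && \text{(apply NEP)}\\
  &\;\Longrightarrow\;
  T \vdash A \ \text{ or } \ T \vdash B
  && \text{(inspect the value of } \bar{n}\text{)}
\end{align*}
```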
Disjunction and existence properties
[ "Mathematics" ]
1,473
[ "Mathematical logic", "Constructivism (mathematics)", "Proof theory" ]
1,010,567
https://en.wikipedia.org/wiki/Reflected-wave%20switching
Reflected-wave switching is a signalling technique used in backplane computer buses such as PCI. A backplane computer bus is a type of multilayer printed circuit board that has at least one (almost) solid layer of copper called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line. Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account. When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance. The impedance of the microstrip is its characteristic impedance, which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching. This is the system used in most computer buses predating PCI, such as the VME bus. When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors), the pulse is absorbed and its energy is converted to heat. This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflected back towards its source. As this reflected wave travels back along the microstrip, its amplitude is added to that of the original pulse. As the reflected wave passes the receiver for a second time, this time from the opposite direction, it now has enough amplitude to be detected. This is what happens in a reflected-wave switching bus. In incident-wave switching buses, reflections from the end of the bus are undesirable and must be prevented by adding termination. Terminating an incident-wave trace varies in complexity from a DC-balanced, AC-coupled termination to a single resistor series terminator, but all incident wave terminations consume both power and space (Johnson and Graham, 1993). However, incident-wave switching buses can be significantly longer than reflected-wave switching buses operating at the same frequency. If the limited bus length is acceptable, a reflected-wave switching bus will use less power, and fewer components to operate at a given frequency. The bus has to be short enough, such that a pulse may travel twice the length of the backplane (one complete journey for the incident wave, and another for the reflected wave), and stabilize sufficiently to be read in a single bus cycle. The travel time can be calculated by dividing the round-trip length of the bus by the speed of propagation of the signal (which is roughly one half to two-thirds of c, the speed of light in vacuum). References Johnson, Howard; Graham, Martin (1993). High Speed Digital Design. Prentice Hall. . Computer engineering Computer buses
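The travel-time calculation described above can be illustrated with a short sketch. The numbers below are illustrative assumptions (a generic clock frequency and the rule of thumb of budgeting about half the cycle for propagation), not values taken from the PCI specification:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_bus_length(clock_hz, velocity_factor=0.6, settle_fraction=0.5):
    """Estimate the longest reflected-wave bus for a given clock.

    The pulse must travel down the bus and back (twice the length)
    and still leave time to settle, so only `settle_fraction` of the
    cycle is budgeted for the round trip.  `velocity_factor` is the
    signal speed as a fraction of c (roughly 0.5 to 0.67 on microstrip).
    """
    cycle = 1.0 / clock_hz
    round_trip_time = cycle * settle_fraction
    round_trip_length = round_trip_time * velocity_factor * C
    return round_trip_length / 2.0

# Example: a 33 MHz bus with half the cycle budgeted for propagation.
print(f"{max_bus_length(33e6):.2f} m")  # roughly 1.4 m of track
```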
Reflected-wave switching
[ "Technology", "Engineering" ]
727
[ "Electrical engineering", "Computer engineering" ]
1,010,708
https://en.wikipedia.org/wiki/Neem%20oil
Neem oil, also known as margosa oil, is a vegetable oil pressed from the fruits and seeds of the neem (Azadirachta indica), a tree which is indigenous to the Indian subcontinent and has been introduced to many other areas in the tropics. It is the most important of the commercially available products of neem, and its chemical properties have found widespread use as a pesticide in organic farming. Composition Azadirachtin is the most well known and studied triterpenoid in neem oil. Nimbin is another triterpenoid which has been credited with some of neem oil's properties as an antiseptic, antifungal, antipyretic and antihistamine. Uses Ayurveda Neem oil has a history of use in Ayurvedic folk medicine. Pesticide Formulations that include neem oil have found wide usage as a biopesticide for horticulturists and for organic farming, as it repels a wide variety of insect pests including mealy bugs, beet armyworms, aphids, cabbage worms, thrips, whiteflies, mites, fungus gnats, beetles, moth larvae, mushroom flies, leaf miners, caterpillars, locusts, nematodes and Japanese beetles. When sufficiently diluted and not concentrated directly into their area of habitat or on their food source, neem oil is not known to be harmful to mammals, birds, earthworms or some beneficial insects such as butterflies, honeybees and ladybugs. It can be used as a household pesticide for ants, bedbugs, cockroaches, houseflies, sand flies, snails, termites and mosquitoes both as a repellent and as a larvicide. Neem extracts act as an antifeedant and block the action of the insect molting hormone ecdysone. Azadirachtin is the most active of these growth regulators (limonoids), occurring at 0.2–0.4% in the seeds of the neem tree. Toxicity The ingestion of neem oil is potentially toxic and can cause metabolic acidosis, seizures, kidney failure, encephalopathy and severe brain ischemia in infants and young children. Neem oil should not be consumed alone without any other solutions, particularly by pregnant women, women trying to conceive or children. It can also be associated with allergic contact dermatitis. References Plant toxin insecticides Vegetable oils
Neem oil
[ "Chemistry" ]
529
[ "Plant toxin insecticides", "Chemical ecology" ]
1,010,712
https://en.wikipedia.org/wiki/Closed%20convex%20function
In mathematics, a function f taking values in the extended real number line is said to be closed if, for each α ∈ ℝ, the sublevel set {x : f(x) ≤ α} is a closed set. Equivalently, f is closed if its epigraph, epi f = {(x, t) : x ∈ dom f, f(x) ≤ t}, is a closed set. This definition is valid for any function, but it is mostly used for convex functions. A proper convex function is closed if and only if it is lower semi-continuous. Properties If f is a continuous function and its domain dom f is closed, then f is closed. If f is a continuous function and its domain dom f is open, then f is closed if and only if f converges to +∞ along every sequence converging to a boundary point of dom f. A closed proper convex function f is the pointwise supremum of the collection of all affine functions h such that h ≤ f (called the affine minorants of f). References Convex analysis Types of functions
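A standard textbook-style example (not part of the article itself) shows how a convex function can fail to be closed only through its behaviour at the boundary of its effective domain:

```latex
% A convex function on [0,1] that is not closed (requires amsmath):
\[
f(x) =
\begin{cases}
x, & 0 < x \le 1,\\
1, & x = 0.
\end{cases}
\]
% The sublevel set \{ x : f(x) \le 1/2 \} = (0, 1/2] is not closed,
% so f is not closed; equivalently, f is not lower semi-continuous at 0.
% Redefining f(0) = 0 (its lower semi-continuous hull) makes f closed.
```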
Closed convex function
[ "Mathematics" ]
167
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations", "Types of functions" ]
1,010,745
https://en.wikipedia.org/wiki/Giemsa%20stain
Giemsa stain (), named after German chemist and bacteriologist Gustav Giemsa, is a nucleic acid stain used in cytogenetics and for the histopathological diagnosis of malaria and other parasites. Uses It is specific for the phosphate groups of DNA and attaches itself to regions of DNA where there are high amounts of adenine-thymine bonding. Giemsa stain is used in Giemsa banding, commonly called G-banding, to stain chromosomes and often used to create a karyogram (chromosome map). It can identify chromosomal aberrations such as translocations and rearrangements. It stains the trophozoite Trichomonas vaginalis, which presents with greenish discharge and motile cells on wet prep. Giemsa stain is also a differential stain, such as when it is combined with Wright stain to form Wright-Giemsa stain. It can be used to study the adherence of pathogenic bacteria to human cells. It differentially stains human and bacterial cells purple and pink respectively. It can be used for histopathological diagnosis of the Plasmodium species that cause malaria and some other spirochete and protozoan blood parasites. It is also used to stain Wolbachia cells in host tissue. Giemsa stain is a classic blood film stain for peripheral blood smears and bone marrow specimens. Erythrocytes stain pink, platelets show a light pale pink, lymphocyte cytoplasm stains sky blue, monocyte cytoplasm stains pale blue, and leukocyte nuclear chromatin stains magenta. It is also used to visualize the classic "safety pin" shape in Yersinia pestis. Giemsa stain is also used to visualize chromosomes. This is particularly relevant for detection of Cytomegalovirus infection, where the classical finding would be an "owl-eye" viral inclusion. Giemsa stains the fungus Histoplasma, Chlamydia bacteria, and can be used to identify mast cells. Generation Giemsa's solution is a mixture of methylene blue, eosin, and Azure B. The stain is usually prepared from commercially available Giemsa powder. A thin film of the specimen on a microscope slide is fixed in pure methanol for 30 seconds, by immersing it or by putting a few drops of methanol on the slide. The slide is immersed in a freshly prepared 5% Giemsa stain solution for 20–30 minutes (in emergencies 5–10 minutes in 10% solution can be used), then flushed with tap water and left to dry. See also Biological stains and staining protocols Histology Leishman stain Microscopy Romanowsky stain Wright's stain References Histopathology Histotechnology Staining dyes
Giemsa stain
[ "Chemistry" ]
593
[ "Histopathology", "Microscopy" ]
1,010,773
https://en.wikipedia.org/wiki/Heiligenschein
Heiligenschein (German for "halo", literally "holy glow") is an optical phenomenon in which a bright spot appears around the shadow of the viewer's head in the presence of dew. In photogrammetry and remote sensing, it is more commonly known as the hotspot. It is also occasionally known as Cellini's halo after the Italian artist and writer Benvenuto Cellini (1500–1571), who described the phenomenon in his memoirs in 1562. Nearly spherical dew droplets act as lenses to focus the light onto the surface behind them. When this light scatters or reflects off that surface, the same lens re-focuses that light into the direction from which it came. This configuration is similar to a cat's eye retroreflector. However, a cat's eye retroreflector needs a refractive index of around 2, while water has a much smaller refractive index of approximately 1.33. This means that the water droplets focus the light about 20% to 50% of the diameter beyond the rear surface of the droplet. When dew droplets are suspended on trichomes at approximately this distance away from the surface of a plant, the combination of droplet and plant acts as a retroreflector. Any retroreflective surface is brightest around the antisolar point. The opposition surge caused by particles other than water, and the glory seen in water vapour, are similar effects caused by different mechanisms. See also Aureole effect Brocken spectre, the magnified shadow of an observer cast upon the upper surfaces of clouds opposite the Sun Gegenschein, a faint brightening of the night sky at the antisolar point, caused by sunlight backscattered from interplanetary dust Retroreflector Subparhelic circle Sylvanshine References External links A site showing examples of a Heiligenschein What causes heiligenschein Atmospheric optical phenomena
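The focusing figure quoted above can be checked with the paraxial formula for a ball lens. The paraxial assumption and this particular formula are supplied here as an illustration; non-paraxial rays focus somewhat closer to the droplet, which is consistent with the 20% to 50% range in the text:

```python
def back_focal_distance(diameter, n):
    """Paraxial back focal distance of a ball lens.

    Effective focal length of a sphere: EFL = n * D / (4 * (n - 1)),
    measured from the centre; subtracting D/2 gives the distance
    from the rear surface to the focus.
    """
    efl = n * diameter / (4.0 * (n - 1.0))
    return efl - diameter / 2.0

d = 1.0  # droplet diameter, arbitrary units
for n in (1.33, 2.0):
    print(f"n={n}: focus {back_focal_distance(d, n):.2f} diameters behind the droplet")
# n=1.33 -> about 0.51 diameters (light focused well behind the drop)
# n=2.0  -> 0.00 diameters (focus on the rear surface: the cat's-eye condition)
```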
Heiligenschein
[ "Physics" ]
379
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
1,010,814
https://en.wikipedia.org/wiki/Minimally%20invasive%20education
Minimally invasive education (MIE) is a form of learning in which children operate in unsupervised environments. The methodology arose from an experiment done by Sugata Mitra while at NIIT in 1999, often called The Hole in the Wall, which has since gone on to become a significant project with the formation of Hole in the Wall Education Limited (HiWEL), a cooperative effort between NIIT and the International Finance Corporation, employed in some 300 'learning stations', covering some 300,000 children in India and several African countries. The programme has been feted with the digital opportunity award by WITSA, and been extensively covered in the media. History Background Professor Mitra, Chief Scientist at NIIT, is credited with proposing and initiating the Hole-in-the-Wall programme. As early as 1982, he had been toying with the idea of unsupervised learning and computers. Finally, in 1999, he decided to test his ideas in the field. The experiment On 26 January 1999, Mitra's team carved a "hole in the wall" that separated the NIIT premises from the adjoining slum in Kalkaji, New Delhi. Through this hole, a freely accessible computer was put up for use. This computer proved to be popular among the slum children. With no prior experience, the children learned to use the computer on their own. This prompted Mitra to propose the following hypothesis: The acquisition of basic computing skills by any set of children can be achieved through incidental learning provided the learners are given access to a suitable computing facility, with entertaining and motivating content and some minimal (human) guidance. In the following comment on the TED website Mitra explains how they saw to it that the computer in this experiment was accessible to children only: "... We placed the computers 3 feet off the ground and put a shade on top, so if you are tall, you hit your head on it. Then we put a protective plastic cowl over the keyboard which had an opening such that small hands would go in. Then we put a seating rod in front that was close to the wall so that, if you are of adult height, your legs would splay when you sit. Then we painted the whole thing in bright colours and put a sign saying 'for children under 15'. Those design factors prevented adult access to a very large extent." Results Mitra has summarised the results of his experiment as follows. Given free and public access to computers and the Internet, a group of children can Become computer literate on their own, that is, they can learn to use computers and the Internet for most of the tasks done by lay users. Teach themselves enough English to use email, chat and search engines. Learn to search the Internet for answers to questions in a few months time. Improve their English pronunciation on their own. Improve their mathematics and science scores in school. Answer examination questions several years ahead of time. Change their social interaction skills and value systems. Form independent opinions and detect indoctrination. Current status and expansion outside India The first adopter of the idea was the Government of National Capital Territory of Delhi. In 2000, the Government of Delhi set up 30 Learning Stations in a resettlement colony. This project is ongoing and said to be achieving significant results. Encouraged by the initial success of the Kalkaji experiment, freely accessible computers were set up in Shivpuri (a town in Madhya Pradesh) and in Madantusi (a village in Uttar Pradesh). 
These experiments came to be known as Hole-in-the-Wall experiments. The findings from Shivpuri and Madantusi confirmed the results of the Kalkaji experiments. It appeared that the children in these two places picked up computer skills on their own. Dr. Mitra defined this as a new way of learning: "Minimally Invasive Education". At this point in time, the International Finance Corporation joined hands with NIIT to set up Hole-in-the-Wall Education Ltd (HiWEL). The idea was to broaden the scope of the experiments and conduct research to prove and streamline Hole-in-the-Wall. The results show that children learn to operate as well as play with the computer with minimal intervention. They picked up skills and tasks by constructing their own learning environment. Today, more than 300,000 children have benefited from 300 Hole-in-the-Wall stations over the last 8 years. In India, Suhotra Banerjee (Head-Government Relations) has increased the reach of HiWEL learning stations in Nagaland, Jharkhand, Andhra Pradesh... and is slowly expanding their numbers. Besides India, HiWEL also has projects abroad. The first such project was established in Cambodia in 2004. The project currently operates in Botswana, Mozambique, Nigeria, Rwanda, Swaziland, Uganda, and Zambia, besides Cambodia. The idea, also called Open learning, is even being applied in Britain, albeit inside the classroom. HiWEL Hole-in-the-Wall Education Ltd. (HiWEL) is a joint venture between NIIT and the International Finance Corporation. Established in 2001, HiWEL was set up to research and propagate the idea of Hole-in-the-Wall, a path-breaking learning methodology created by Mitra, Chief Scientist of NIIT. Awards and recognition Digital Opportunity Award by the World Information Technology and Services Alliance (WITSA) in 2008. Reason: "groundbreaking work in developing computer literacy and improving the quality of education at a grass root level." Coverage in the media The project has received extensive coverage from sources as diverse as UNESCO, Business Week, CNN, Reuters, and The Christian Science Monitor, besides being featured at the annual TED conference in 2007. The project received international publicity when it was found that it was the inspiration behind the book Q & A, itself the inspiration for the Academy Award-winning film Slumdog Millionaire. HiWEL has been covered by the Indian Reader's Digest. In school Minimally Invasive Education in school adduces that there are many reasons why children may have difficulty learning, especially when the learning is imposed and the subject is something the student is not interested in, a frequent occurrence in modern schools. Schools also label children as "learning disabled" and place them in special education even if the child does not have a learning disability, because the schools have failed to teach the children basic skills. Minimally Invasive Education in school asserts there are many ways to study and learn. It argues that learning is a process you do, not a process that is done to you. The experience of schools holding this approach shows that there are many ways to learn without the intervention of teaching, that is to say, without the intervention of a teacher being imperative. In the case of reading, for instance, in these schools some children learn from being read to, memorizing the stories and then ultimately reading them. Others learn from cereal boxes, others from game instructions, others from street signs. Some teach themselves letter sounds, others syllables, others whole words. 
They adduce that in their schools not one child has ever been forced, pushed, urged, cajoled, or bribed into learning how to read or write, and they have had no cases of dyslexia. None of their graduates are real or functional illiterates, and no one who meets their older students could ever guess the age at which they first learned to read or write. In a similar way, students learn all the subjects, techniques and skills in these schools. Every person, children and youth included, has a different learning style and pace, and each person is unique, not only capable of learning but also capable of succeeding. These schools assert that applying the medical model of problem-solving to individual children who are pupils in the school system, and labeling these children as disabled—referring to a whole generation of non-standard children that have been labeled as dysfunctional, even though they suffer from nothing more than the disease of responding differently in the classroom than the average manageable student—systematically prevents the students' success and the improvement of the current educational system, thus requiring the prevention of academic failure through intervention. This, they clarify, does not refer to people who have a specific disability that affects their drives; nor is anything they say and write about education meant to apply to people who have specific mental impairments, which may need to be dealt with in special, clinical ways. Describing current instructional methods as homogenization and lockstep standardization, proponents of this view propose alternative approaches, such as the Sudbury model schools, an alternative approach in which children, enjoying personal freedom and thus encouraged to exercise personal responsibility for their actions, learn at their own pace rather than following a chronologically-based curriculum. These schools are organized to allow freedom from adult interference in the daily lives of students. As long as children do no harm to others, they can do whatever they want with their time in school. The adults in other schools plan a curriculum of study, teach the students the material and then test and grade their learning. The adults at Sudbury schools are "the guardians of the children's freedom to pursue their own interests and to learn what they wish," creating and maintaining a nurturing environment, in which children feel that they are cared for, and that does not rob children of their time to explore and discover their inner selves. They also are there to answer questions and to impart specific skills or knowledge when asked to by students. As with Sudbury schools, proponents of unschooling have also claimed that children raised in this method do not suffer from learning disabilities, thus not requiring the prevention of academic failure through intervention. "If learning is an emergent phenomenon, then the teacher needs to provide stimulus — lots of it – in the form of “big” questions. These must include questions to which the teacher, or perhaps anyone, does not have the answer. These should be the sorts of questions that will occupy children’s minds perpetually. The teacher needs to help each child cultivate a vision of the future. Thus, a new primary curriculum needs to teach only three skills: 1. Reading comprehension: This is perhaps the most crucial skill a child needs to acquire while growing up. 2. Information search and analysis: First articulated at the National Institute of Technology in India by professor J.R. 
Isaac in the early 1990s — decades ahead of its time — this skill set is vital for children searching for answers in an infinite cyberspace. 3. A rational system of belief: If children know how to search, and if they know how to read, then they must learn how to believe. Each one of us has a belief system. How soon can a child acquire one? A rational belief system will be our children’s protection against doctrine. Children who have these skills scarcely need schools as we define them today. They need a learning environment and a source of rich, big questions. Computers can give out answers, but they cannot, as of yet, make questions. Hence, the teacher’s role becomes bigger and stranger than ever before: She must ask her “learners” about things she does not know herself. Then she can stand back and watch as learning emerges." See also Open learning Didactic method Response to intervention Positive Behavior Interventions and Supports Sudbury school Problem-based learning Notes and references External links The Hole in the Wall site https://web.archive.org/web/20080523112413/http://www.ascilite.org.au/ajet/ajet21/mitra.html https://web.archive.org/web/20070816042917/http://www.egovmonitor.com/node/5865 Live Conversation with Professor Sugata Mitra at Wiz-IQ-dot-com WizIQ is a popular educational website equipped with state-of-art Virtual Classroom Classrooms in the cloud or castles in the air? Alternative education Computing and society Educational technology Human–computer interaction Pedagogy
Minimally invasive education
[ "Technology", "Engineering" ]
2,455
[ "Human–computer interaction", "Computing and society", "Human–machine interaction" ]
1,010,924
https://en.wikipedia.org/wiki/Tittle
The tittle or superscript dot is the dot on top of lowercase i and j. The tittle is an integral part of these glyphs, but diacritic dots can appear over other letters in various languages. In most languages, the tittle of i or j is omitted when a diacritic is placed in the tittle's usual position (as í or ĵ), but not when the diacritic appears elsewhere (as į, ɉ). Use The word tittle is rarely used. One notable occurrence is in the King James Bible at Matthew 5:18: "For verily I say unto you, Till heaven and earth pass, one jot or one tittle shall in no wise pass from the law, till all be fulfilled" (KJV). The quotation uses "jot and tittle" as examples of extremely small graphic details in "the Law", presumably referring to the Hebrew text of the Torah. In English the phrase "jot and tittle" indicates that every small detail has received attention. The Greek terms translated in English as "jot" and "tittle" in Matthew 5:18 are iota and keraia (). Iota is the smallest letter of the Greek alphabet (ι); the even smaller iota subscript was a medieval innovation. Alternatively, iota may represent yodh (י), the smallest letter of the Hebrew and Aramaic alphabets (to which iota is related). "Keraia" is a hook or serif, and in Matthew 5:18 may refer to Greek diacritics, or, if the reference is to the Hebrew text of the Torah, possibly refers to the pen strokes that distinguish between similar Hebrew letters, e.g., ב (Bet) versus כ (Kaph), or to ornamental pen strokes attached to certain Hebrew letters, or to the Hebrew letter Vav, since in Hebrew vav also means "hook". "Keraia" in Matt. 5:18 cannot refer to vowel marks known as Niqqud, which developed later than the date of Matthew's composition. Others have suggested that "Keraia" refers to markings in cursive scripts of languages derived from Aramaic, such as Syriac, written in Serṭā (, 'short line'). In printing modern Greek numerals a keraia is used. Tittles also exist in Cyrillic. Dotless and dotted i A number of alphabets use dotted and dotless I, both upper and lower case. In the modern Turkish alphabet, the absence or presence of a tittle distinguishes two different letters representing two different phonemes: the letter "I" / "ı", with the absence of a tittle also on the lower case letter, represents the close back unrounded vowel , while "İ" / "i", with the inclusion of a tittle even on the capital letter, represents the close front unrounded vowel . This practice has carried over to several other Turkic languages, like the Azerbaijani alphabet, Crimean Tatar alphabet, and Tatar alphabet. In some of the Dene languages of the Northwest Territories in Canada, specifically North Slavey, South Slavey, Tłı̨chǫ and Dëne Sųłıné, all instances of i are undotted to avoid confusion with tone-marked vowels í or ì. The other Dene language of the Northwest Territories, Gwich’in, always includes the tittle on lowercase i. There is only one letter I in Irish, but i is undotted in the traditional uncial Gaelic script to avoid confusion of the tittle with the buailte overdot found over consonants. Modern texts replace the buailte with the letter h, and use the same antiqua-descendant fonts, which have a tittle, as other Latin-alphabet languages. Bilingual road signs formerly used dotless i in lowercase Irish text to better distinguish i from í. The letter "j" is not used in Irish other than in foreign words. 
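In character encodings, the Turkish distinction described above is represented by separate Unicode code points, which is a common source of case-conversion bugs in software. The short sketch below simply prints the code points and shows why locale-unaware uppercasing is wrong for Turkish text; the example word is illustrative:

```python
# The four Turkish "i" letters and their Unicode code points.
letters = {
    "dotless lower": "ı",   # U+0131
    "dotless upper": "I",   # U+0049
    "dotted lower":  "i",   # U+0069
    "dotted upper":  "İ",   # U+0130
}
for name, ch in letters.items():
    print(f"{name}: {ch!r} U+{ord(ch):04X}")

# Unicode's default (locale-unaware) mapping uppercases 'i' to 'I',
# but Turkish orthography requires 'i' -> 'İ' and 'ı' -> 'I'.
print("istanbul".upper())  # 'ISTANBUL' -- loses the tittle expected in Turkish 'İSTANBUL'
```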
In most Latin-based orthographies, the lowercase letter i conventionally has its dot replaced when a diacritical mark atop the letter, such as a tilde or caron, is placed. The tittle is sometimes retained in some languages. In some Baltic languages sources, the lowercase letter i sometimes retains a tittle even when accented. In Vietnamese in the 17th century, the tittle is preserved atop ỉ and ị but not ì and í, as seen in the seminal quốc ngữ reference Dictionarium Annamiticum Lusitanum et Latinum. In modern Vietnamese, a tittle can be seen in ì, ỉ, ĩ, and í in cursive handwriting and some signage. This detail rarely occurs in computers and on the Internet, due to the obscurity of language-specific fonts. In any case, the tittle is always retained in ị. A particular and unique variant is in the Johnston typeface, long employed by and proprietary to the Transport for London organisation and its associates, in print and notices, where above a certain point size the dot (and full stop) are diamond shaped, this being among the most distinguishing features of the font. Phrases It is thought that the phrase "to a T" is derived from the word tittle because long before "to a T" became popular, the phrase "to a tittle" was used. The phrase "to dot the 'i's and cross the 't's" is used figuratively to mean "to put the finishing touches to" or "to be thorough". References Sources Dictionary.com – Tittle External links Henry George Liddell, Robert Scott, A Greek-English Lexicon "Tittle" on Everything2 Diacritics Christian terminology
Tittle
[ "Mathematics" ]
1,190
[ "Symbols", "Diacritics" ]
1,010,962
https://en.wikipedia.org/wiki/Turret%20ship
Turret ships were a 19th-century type of warship, the earliest to have their guns mounted in a revolving gun turret, instead of a broadside arrangement. Background Before the development of large-calibre, long-range guns in the mid-19th century, the classic ship of the line design used rows of port-mounted guns on each side of the ship, often mounted in casemates. Firepower was provided by a large number of guns which could only be aimed in a limited arc from one side of the ship. Because larger and heavier guns reduce a ship's stability, only a limited number of them can be carried. Also, the casemates often sat near the waterline, which made them vulnerable to flooding and restricted their use to calm seas. Turrets were weapon mounts designed to protect the crew and mechanism of the artillery piece, with the capability of being aimed and fired in many directions as a rotating weapon platform. This platform can be mounted on a fortified building or structure such as an anti-naval land battery, or on a combat vehicle, a naval ship, or a military aircraft. Origins Designs for a rotating gun turret date back to the late 18th century. Practical rotating turret warships were independently developed in Great Britain and the United States with the availability of steam power in the mid-19th century. British developments During the Crimean War, Captain Cowper Phipps Coles of the British Royal Navy constructed a raft with guns protected by a 'cupola' and used the raft, named Lady Nancy, to shell the Russian town of Taganrog in the Black Sea. Lady Nancy "proved a great success", and Coles patented his rotating turret after the war. Following Coles' patenting, the British Admiralty ordered a prototype of Coles' design in 1859, which was installed in the floating battery vessel HMS Trusty for trials in 1861, becoming the first vessel to be fitted with a revolving gun turret. Coles' design aim was to create a ship with the greatest possible all round arc of fire, as low in the water as possible to minimise the target. The British Admiralty accepted the principle of the gun turret as a useful innovation, and incorporated it into other new designs. Coles submitted a design for a ship having ten domed turrets each housing two large guns. The design was rejected as impractical, although the Admiralty remained interested in turret ships and instructed its own designers to create better designs. Coles enlisted the support of Prince Albert, who wrote to the First Lord of the Admiralty, the Duke of Somerset, supporting the construction of a turret ship. In January 1862, the Admiralty agreed to construct a ship, HMS Prince Albert, which had four turrets and a low freeboard, intended only for coastal defence. Coles was allowed to design the turrets, but the ship was the responsibility of the Chief Constructor Isaac Watts. Another of Coles's designs, HMS Royal Sovereign, was completed in August 1864. Its existing broadside guns were replaced with four turrets on a flat deck, and the ship was fitted with an armour belt around the waterline. Early ships like Prince Albert and Royal Sovereign had poor sea-keeping qualities and were limited to coastal waters. Coles, in collaboration with Sir Edward James Reed, went on to design and build HMS Monarch, the first seagoing warship to carry her guns in turrets. Laid down in 1866 and completed in June 1869, it carried two turrets, although the inclusion of a forecastle and poop deck prevented the guns firing fore and aft. American developments The gun turret was independently invented by the Swedish inventor John Ericsson in the United States. 
Ericsson designed USS Monitor in 1861. Ericsson's most prominent design feature was a large cylindrical gun turret mounted amidships above the low-freeboard upper hull, also called the "raft". The raft extended well past the sides of the more traditionally shaped lower hull. A small armoured pilot house was fitted on the upper deck towards the bow; however, its position prevented Monitor from firing her guns straight forward. One of Ericsson's prime goals in designing the ship was to present the smallest possible target to enemy gunfire. The turret's rounded shape helped to deflect cannon shot. A pair of donkey engines rotated the turret through a set of gears; a full rotation was made in 22.5 seconds during testing on 9 February 1862. This design was technologically inferior to Coles', and made fine control of the turret difficult. If turret rotation overshot its mark it was difficult to make a correction. Either the engine would have to be placed in reverse or another full rotation was necessary. Including the guns, the turret was extremely heavy; the entire weight rested on an iron spindle that had to be jacked up using a wedge before the turret could rotate. The spindle's diameter gave it ten times the strength needed to prevent the turret from sliding sideways. When not in use, the turret rested on a brass ring on the deck that was intended to form a watertight seal. In service, however, this proved to leak heavily, despite caulking by the crew. The gap between the turret and the deck proved to be a problem, as debris and shell fragments entered the gap and jammed the turrets of several Passaic-class monitors, which used the same turret design, during the First Battle of Charleston Harbor in April 1863. Direct hits on the turret with heavy shot also had the potential to bend the spindle, which could also jam the turret. The turret was intended to mount a pair of 15-inch smoothbore Dahlgren guns, but they were not ready in time and 11-inch guns were substituted. Monitor's guns used the standard propellant charge specified by the 1860 ordnance instructions for targets "distant", "near", and "ordinary", established by the gun's designer Dahlgren himself. They could fire a round shot or shell to their maximum range at an elevation of +15°. Culmination HMS Devastation of 1871 and HMS Thunderer of 1872 represented the culmination of this pioneering work. These ironclad turret ships were designed by Edward James Reed. They were also the world's first mastless battleships, built with a central superstructure layout, and became the prototype for all subsequent warships, leading directly to the modern battleship. Surviving examples The only preserved steam and sail turret ship in Europe is the mid-19th-century Dutch ironclad HNLMS Schorpioen. The Peruvian and later Chilean flagship Huáscar is preserved as a memorial at Talcahuano. A replica of the Chinese battleship Dingyuan was built as a museum ship in 2003. See also Barbette ship Footnotes References Notes Bibliography Ship types Shipbuilding
Turret ship
[ "Engineering" ]
1,333
[ "Shipbuilding", "Marine engineering" ]
1,011,112
https://en.wikipedia.org/wiki/Huda%20Salih%20Mahdi%20Ammash
Huda Salih Mahdi Ammash () (born 29 October 1953) is an Iraqi scientist and academic. Ammash was often referred to as "Mrs. Anthrax" due to her alleged association with an Iraqi biological weapons program. Ammash was number 53 on the Pentagon's list of the 55 most wanted, the "five of hearts" , in the U.S. deck of most-wanted Iraqi playing cards, and the only woman to be featured. She surrendered to coalition forces in May 2003 but was released in December 2005 without being charged. Life She received her undergraduate degree from the University of Baghdad, followed by a master's in microbiology from Texas Woman's University in Denton, Texas. She spent four years at the University of Missouri pursuing a doctorate in microbiology, which she received in December 1983. Her thesis focused on the effects of radiation, paraquat and the chemotherapy drug Adriamycin, on bacteria and mammals. She was appointed to the Revolutionary Command Council in May 2001. In one of several videos that Saddam released during the war, Ammash was the only woman among about a half-dozen men seated around a table. The videos were broadcast on Iraqi TV as invading forces drew closer to Baghdad: it is not known when the meeting took place or what the significance was of her appearance on camera. She served as president of Iraq's microbiology society and as dean at the University of Baghdad. U.S. officials said she was trained by Nassir al-Hindawi, described by United Nations inspectors as the "father of Iraq's biological weapons program". She conducted research into illnesses that may have been caused by depleted uranium from shells used in the 1991 Gulf War, and had published several papers on the health effects of the war and the subsequent sanctions. Capture Ammash surrendered to coalition forces on 9 May 2003 and was one of two Iraqi women known to be in U.S. custody as of April 2005. The other was the British-educated Rihab Taha, who led Iraq's biological weapons program until 1995. In August 2005, the American Association for the Advancement of Science called for Ammash to be either sent to trial or released: According to Times Higher Education, "The organisation [AAAS] has not issued the statement lightly. Senior figures including Alan Leshner, chief executive officer of the AAAS, were involved in drawing it up." Both women were released in December 2005 after they were among those an American-Iraqi board process found were no longer a security threat and would have no charges filed against them. Ammash was also said to be suffering from breast cancer. Family Ammash's father, Salih Mahdi Ammash, was a high-level Baath Party member in Iraq, who became defense minister in 1963, deputy prime minister in 1968, and an ambassador in 1977. References 1953 births Living people Texas Woman's University alumni University of Missouri alumni People from Baghdad Iraqi biological weapons program Iraqi microbiologists Iraqi women scientists University of Baghdad alumni Women in the Iraq War Members of the Regional Command of the Arab Socialist Ba'ath Party – Iraq Region Academic staff of the University of Baghdad Women microbiologists People related to biological warfare Most-wanted Iraqi playing cards Iraq War prisoners of war Iraqi prisoners of war Prisoners of war held by the United States
Huda Salih Mahdi Ammash
[ "Biology" ]
686
[ "People related to biological warfare", "Biological warfare" ]
1,011,231
https://en.wikipedia.org/wiki/Undercroft
An undercroft is traditionally a cellar or storage room, often brick-lined and vaulted, and used for storage in buildings since medieval times. In modern usage, an undercroft is generally a ground (street-level) area which is relatively open to the sides, but covered by the building above. History While some were used as simple storerooms, others were rented out as shops. For example, the undercroft rooms at Myres Castle in Fife, Scotland, were used as the medieval kitchen and a range of stores. Many of these early medieval undercrofts were vaulted or groined, such as the vaulted chamber at Beverston Castle in Gloucestershire or the groined stores at Myres Castle. The term is sometimes used to describe a crypt beneath a church, used for burial purposes. For example, there is a 14th-century undercroft or crypt extant at Muchalls Castle in Aberdeenshire in Scotland, even though the original chapel above it was destroyed in an act of war in 1746. Undercrofts were commonly built in England and Scotland throughout the thirteenth and early fourteenth centuries. They occur in cities such as London, Chester, Coventry and Southampton. The undercroft beneath the Houses of Parliament in London was rented to the conspirators behind the Gunpowder Plot. Modern usage In modern buildings, the term undercroft is often used to describe a ground-level parking area that occupies the footprint of the building (and sometimes extends to other service or garden areas around the structure). This type of parking is, however, discouraged by some urban design guidelines, as it prevents the ground floor from having activities (shops, restaurants or similar) that provide for a lively streetscape. See also The Undercroft, Guildford The Undercroft, Southbank Centre – skateboarding and graffiti centre in London Void deck References Rooms
Undercroft
[ "Engineering" ]
367
[ "Rooms", "Architecture" ]
1,011,242
https://en.wikipedia.org/wiki/Value%20investing
Value investing is an investment paradigm that involves buying securities that appear underpriced by some form of fundamental analysis. Modern value investing derives from the investment philosophy taught by Benjamin Graham and David Dodd at Columbia Business School starting in 1928 and subsequently developed in their 1934 text Security Analysis. The early value opportunities identified by Graham and Dodd included stock in public companies trading at discounts to book value or tangible book value, those with high dividend yields and those having low price-to-earning multiples or low price-to-book ratios. Proponents of value investing, including Berkshire Hathaway chairman Warren Buffett, have argued that the essence of value investing is buying stocks at less than their intrinsic value. The discount of the market price to the intrinsic value is what Benjamin Graham called the "margin of safety". Buffett further expanded the value investing concept with a focus on "finding an outstanding company at a sensible price" rather than generic companies at a bargain price. Hedge fund manager Seth Klarman has described value investing as rooted in a rejection of the efficient-market hypothesis (EMH). While the EMH proposes that securities are accurately priced based on all available data, value investing proposes that some equities are not accurately priced. Graham himself did not use the phrase value investing. The term was coined later to help describe his ideas. The term however has also led to misinterpretation of his principles - most notably the notion that Graham simply recommended cheap stocks. Columbia Business School is the current home for value investing. History Early predecessors The concept of intrinsic value for equities was recognized as early as the 1600s, as was the idea that paying substantially above intrinsic value was likely to be a poor long-term investment. Daniel Defoe observed in the 1690s how stock for the East India Company was trading at what he believed was an elevated price of over 300% more than face value, "without any material difference in Intrinsick [sic] value." Hetty Green (1834-1916) was retrospectively described as "America's first value investor." She had a habit of buying unwanted assets at low prices, which she held, as she stated in 1905, "until they go up [in price] and people are anxious to buy." The investing firm Tweedy, Browne was founded in 1920 and has been described as "the oldest value investing firm on Wall Street". Founder Forest Berwind "Bill" Tweedy initially focused on shares of smaller companies, often family owned, which traded in lower numbers and lower volume than stock for larger companies. This niche allowed Tweedy to buy stocks at a significant discount to estimated book value due to the limited options for sellers. Tweedy and Benjamin Graham eventually became friends and worked out of the same New York City office building at 52 broadway. Economist John Maynard Keynes is also recognized as an early value investor. While managing the endowment of King's College, Cambridge starting in the 1920s, Keynes first attempted a stock trading strategy based on market timing. When this method was unsuccessful, he turned to a strategy similar to value investing. In 2017, Joel Tillinghast of Fidelity Investments wrote: Instead of using big-picture economics, Keynes increasingly focused on a small number of companies that he knew very well. Rather than chasing momentum, he bought undervalued stocks with generous dividends. [...] 
Most were small and midsize companies in dull or out of favor industries, such as mining and autos in the midst of the Great Depression. Despite his rough start [by timing markets], Keynes beat the market averages by 6 percent a year over more than two decades. Keynes used similar terms and concepts as Graham and Dodd (e.g. an emphasis on the intrinsic value of equities). A review of his archives at King's College found no evidence of contact between Keynes and his American counterparts and Keynes is believed to have developed his investing theories independently. Keynes did not teach his concepts in classes or seminars, unlike Graham and Dodd, and details of his investing theories became widely known only decades after his death in 1946. There was "considerable overlap" of Keynes's ideas with those of Graham and Dodd, though their ideas were not entirely congruent. Benjamin Graham Value investing was established by Benjamin Graham and David Dodd. Both were professors at Columbia Business School. In Graham's book The Intelligent Investor, he advocated the concept of margin of safety. The concept was introduced in the book Security Analysis which he co-authored with David Dodd in 1934 and calls for an approach to investing that is focused on purchasing equities at prices less than their intrinsic values. In terms of picking or screening stocks, he recommended purchasing firms which have steady profits, are trading at low prices to book value, have low price-to-earnings (P/E) ratios and which have relatively low debt. Further evolution However, the concept of value (as well as "book value") has evolved significantly since the 1970s. Book value is most useful in industries where most assets are tangible. Intangible assets such as patents, brands, or goodwill are difficult to quantify, and may not survive the break-up of a company. When an industry is going through fast technological advancements, the value of its assets is not easily estimated. Sometimes, the production power of an asset can be significantly reduced due to competitive disruptive innovation and therefore its value can suffer permanent impairment. One good example of decreasing asset value is a personal computer. An example of where book value does not mean much is the service and retail sectors. One modern model of calculating value is the discounted cash flow model (DCF), where the value of an asset is the sum of its future cash flows, discounted back to the present. Quantitative value investing Quantitative value investing, also known as Systematic value investing, is a form of value investing that analyzes fundamental data such as financial statement line items, economic data, and unstructured data in a rigorous and systematic manner. Practitioners often employ quantitative applications such as statistical / empirical finance or mathematical finance, behavioral finance, natural language processing, and machine learning. Quantitative investment analysis can trace its origin back to Security Analysis by Benjamin Graham and David Dodd in which the authors advocated detailed analysis of objective financial metrics of specific stocks. Quantitative investing replaces much of the ad-hoc financial analysis used by human fundamental investment analysts with a systematic framework designed and programmed by a person but largely executed by a computer in order to avoid cognitive biases that lead to inferior investment decisions. 
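As a minimal sketch of the discounted cash flow model mentioned above, the function below discounts a stream of projected cash flows back to the present. The cash-flow figures and the 10% discount rate are illustrative assumptions only; a realistic model would also need a terminal value and a justified discount rate.

```python
def discounted_cash_flow(cash_flows, rate):
    """Present value of projected cash flows for years 1..n at a given
    annual discount rate (e.g. 0.10 for 10%)."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Illustrative projection only: five years of growing cash flows.
projected = [100, 110, 121, 133, 146]
print(round(discounted_cash_flow(projected, rate=0.10), 2))   # ~454.22
```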
In an interview, Benjamin Graham admitted that even by that time ad-hoc detailed financial analysis of single stocks was unlikely to produce good risk-adjusted returns. Instead, he advocated a rules-based approach focused on constructing a coherent portfolio based on a relatively limited set of objective fundamental financial factors. Joel Greenblatt's magic formula investing is a simple illustration of a quantitative value investing strategy. Many modern practitioners employ more sophisticated forms of quantitative analysis and evaluate numerous financial metrics, as opposed to just two as in the "magic formula". James O'Shaughnessy's What Works on Wall Street is a classic guide to quantitative value investing, containing backtesting performance data of various quantitative value strategies and value factors based on Compustat data from January 1927 until December 2009. Value investing performance Performance of value strategies Value investing has proven to be a successful investment strategy. There are several ways to evaluate the success. One way is to examine the performance of simple value strategies, such as buying low PE ratio stocks, low price-to-cash-flow ratio stocks, or low price-to-book ratio stocks. Numerous academics have published studies investigating the effects of buying value stocks. These studies have consistently found that value stocks outperform growth stocks and the market as a whole, not necessarily over short periods but when tracked over long periods, even going back to the 19th century. A review of 26 years of data (1990 to 2015) from US markets found that the over-performance of value investing was more pronounced in stocks for smaller and mid-size companies than for larger companies and recommended a "value tilt" with greater emphasis on value than growth investing in personal portfolios. Performance of value investors Since examining only the performance of the best known value investors introduces a selection bias (as typically investors might not become well known unless they are successful) a way to investigate the performance of a group of value investors was suggested by Warren Buffett in his 1984 speech The Superinvestors of Graham-and-Doddsville. In this speech, Buffett examined the performance of those investors who worked at Graham-Newman Corporation and were influenced by Benjamin Graham. Buffett's conclusion was that value investing is on average successful in the long run. This was also the conclusion of the academic research on simple value investing strategies. From 1965 to 1990 there was little published research and articles in leading journals on value investing. Well-known value investors The Graham-and-Dodd Disciples Ben Graham's students Benjamin Graham is regarded by many to be the father of value investing. Along with David Dodd, he wrote Security Analysis, first published in 1934. The most lasting contribution of this book to the field of security analysis was to emphasize the quantifiable aspects of security analysis (such as the evaluations of earnings and book value) while minimizing the importance of more qualitative factors such as the quality of a company's management. Graham later wrote The Intelligent Investor, a book that brought value investing to individual investors. Aside from Buffett, many of Graham's other students, such as William J. Ruane, Irving Kahn, Walter Schloss, and Charles Brandes went on to become successful investors in their own right. 
Irving Kahn was one of Graham's teaching assistants at Columbia University in the 1930s. He was a close friend and confidant of Graham's for decades and made research contributions to Graham's texts Security Analysis, Storage and Stability, World Commodities and World Currencies and The Intelligent Investor. Kahn was a partner at various finance firms until 1978 when he and his sons, Thomas Graham Kahn and Alan Kahn, started the value investing firm, Kahn Brothers & Company. Irving Kahn remained chairman of the firm until his death at age 109. Walter Schloss was another Graham-and-Dodd disciple. Schloss never had a formal education. When he was 18, he started working as a runner on Wall Street. He then attended investment courses taught by Ben Graham at the New York Stock Exchange Institute, and eventually worked for Graham in the Graham-Newman Partnership. In 1955, he left Graham’s company and set up his own investment firm, which he ran for nearly 50 years. Walter Schloss was one of the investors Warren Buffett profiled in his famous Superinvestors of Graham-and-Doddsville article. Christopher H. Browne of Tweedy, Browne was well known for value investing. According to The Wall Street Journal, Tweedy, Browne was the favorite brokerage firm of Benjamin Graham during his lifetime; also, the Tweedy, Browne Value Fund and Global Value Fund have both beat market averages since their inception in 1993. In 2006, Christopher H. Browne wrote The Little Book of Value Investing in order to teach ordinary investors how to value invest. Peter Cundill was a well-known Canadian value investor who followed the Graham teachings. His flagship Cundill Value Fund allowed Canadian investors access to fund management according to the strict principles of Graham and Dodd. Warren Buffett had indicated that Cundill had the credentials he's looking for in a chief investment officer. Warren Buffett and Charlie Munger Graham's most famous student, however, is Warren Buffett, who ran successful investing partnerships before closing them in 1969 to focus on running Berkshire Hathaway. Buffett was a strong advocate of Graham's approach and strongly credits his success back to his teachings. Another disciple, Charlie Munger, who joined Buffett at Berkshire Hathaway in the 1970s and has since worked as Vice Chairman of the company, followed Graham's basic approach of buying assets below intrinsic value, but focused on companies with robust qualitative qualities, even if they weren't statistically cheap. This approach by Munger gradually influenced Buffett by reducing his emphasis on quantitatively cheap assets, and instead encouraged him to look for long-term sustainable competitive advantages in companies, even if they weren't quantitatively cheap relative to intrinsic value. Buffett is often quoted saying, "It's better to buy a great company at a fair price, than a fair company at a great price." Buffett is a particularly skilled investor because of his temperament. He has a famous quote stating "be greedy when others are fearful, and fearful when others are greedy." In essence, he updated the teachings of Graham to fit a style of investing that prioritizes fundamentally good businesses over those that are deemed cheap by statistical measures. He is further known for a talk he gave titled the Super Investors of Graham and Doddsville. The talk was an outward appreciation for the fundamentals that Benjamin Graham instilled in him. Michael Burry Dr. 
Michael Burry, the founder of Scion Capital, is another strong proponent of value investing. Burry is famous for being the first investor to recognize and profit from the impending subprime mortgage crisis, as portrayed by Christian Bale in the movie The Big Short. Burry has said on multiple occasions that his investment style is built upon Benjamin Graham and David Dodd’s 1934 book Security Analysis: "All my stock picking is 100% based on the concept of a margin of safety." Other Columbia Business School value investors Columbia Business School has played a significant role in shaping the principles of the Value Investor, with professors and students making their mark on history and on each other. Ben Graham’s book, The Intelligent Investor, was Warren Buffett’s bible and he referred to it as "the greatest book on investing ever written.” A young Warren Buffett studied under Ben Graham, took his course and worked for his small investment firm, Graham Newman, from 1954 to 1956. Twenty years after Ben Graham, Roger Murray arrived and taught value investing to a young student named Mario Gabelli. About a decade or so later, Bruce Greenwald arrived and produced his own protégés, including Paul Sonkin—just as Ben Graham had Buffett as a protégé, and Roger Murray had Gabelli. Mutual Series and Franklin Templeton disciples Mutual Series has a well-known reputation of producing top value managers and analysts in this modern era. This tradition stems from two individuals: Max Heine, founder of the well regarded value investment firm Mutual Shares fund in 1949 and his protégé legendary value investor Michael F. Price. Mutual Series was sold to Franklin Templeton Investments in 1996. The disciples of Heine and Price quietly practice value investing at some of the most successful investment firms in the country. Franklin Templeton Investments takes its name from Sir John Templeton, another contrarian value oriented investor. Seth Klarman, a Mutual Series alum, is the founder and president of The Baupost Group, a Boston-based private investment partnership, and author of Margin of Safety, Risk Averse Investing Strategies for the Thoughtful Investor, which since has become a value investing classic. Now out of print, Margin of Safety has sold on Amazon for $1,200 and eBay for $2,000. Other value investors Laurence Tisch, who led Loews Corporation with his brother, Robert Tisch, for more than half a century, also embraced value investing. Shortly after his death in 2003 at age 80, Fortune wrote, "Larry Tisch was the ultimate value investor. He was a brilliant contrarian: He saw value where other investors didn't -- and he was usually right." By 2012, Loews Corporation, which continues to follow the principles of value investing, had revenues of $14.6 billion and assets of more than $75 billion. Michael Larson is the Chief Investment Officer of Cascade Investment, which is the investment vehicle for the Bill & Melinda Gates Foundation and the Gates personal fortune. Cascade is a diversified investment shop established in 1994 by Gates and Larson. Larson graduated from Claremont McKenna College in 1980 and the Booth School of Business at the University of Chicago in 1981. Larson is a well known value investor but his specific investment and diversification strategies are not known. Larson has consistently outperformed the market since the establishment of Cascade and has rivaled or outperformed Berkshire Hathaway's returns as well as other funds based on the value investing strategy. Martin J. 
Whitman is another well-regarded value investor. His approach is called safe-and-cheap, which was hitherto referred to as financial-integrity approach. Martin Whitman focuses on acquiring common shares of companies with extremely strong financial position at a price reflecting meaningful discount to the estimated NAV of the company concerned. Whitman believes it is ill-advised for investors to pay much attention to the trend of macro-factors (like employment, movement of interest rate, GDP, etc.) because they are not as important and attempts to predict their movement are almost always futile. Whitman's letters to shareholders of his Third Avenue Value Fund (TAVF) are considered valuable resources "for investors to pirate good ideas" by Joel Greenblatt in his book on special-situation investment You Can Be a Stock Market Genius. Joel Greenblatt achieved annual returns at the hedge fund Gotham Capital of over 50% per year for 10 years from 1985 to 1995 before closing the fund and returning his investors' money. He is known for investing in special situations such as spin-offs, mergers, and divestitures. Charles de Vaulx and Jean-Marie Eveillard are well known global value managers. For a time, these two were paired up at the First Eagle Funds, compiling an enviable track record of risk-adjusted outperformance. For example, Morningstar designated them the 2001 "International Stock Manager of the Year" and de Vaulx earned second place from Morningstar for 2006. Eveillard is known for his Bloomberg appearances where he insists that securities investors never use margin or leverage. The point made is that margin should be considered the anathema of value investing, since a negative price move could prematurely force a sale. In contrast, a value investor must be able and willing to be patient for the rest of the market to recognize and correct whatever pricing issue created the momentary value. Eveillard correctly labels the use of margin or leverage as speculation, the opposite of value investing. Other notable value investors include: Mason Hawkins, Thomas Forester, Whitney Tilson, Mohnish Pabrai, Li Lu, Guy Spier and Tom Gayner who manages the investment portfolio of Markel Insurance. San Francisco investing firm Dodge & Cox, founded in 1931 and with one of the oldest US mutual funds still in existence as of 2019, emphasizes value investing. Criticism Value stocks do not always beat growth stocks, as demonstrated in the late 1990s. Moreover, when value stocks perform well, it may not mean that the market is inefficient, though it may imply that value stocks are simply riskier and thus require greater returns. Furthermore, Foye and Mramor (2016) find that country-specific factors have a strong influence on measures of value (such as the book-to-market ratio). This leads them to conclude that the reasons why value stocks outperform are country-specific. Also, one of the biggest criticisms of price centric value investing is that an emphasis on low prices (and recently depressed prices) regularly misleads retail investors; because fundamentally low (and recently depressed) prices often represent a fundamentally sound difference (or change) in a company's relative financial health. To that end, Warren Buffett has regularly emphasized that "it's far better to buy a wonderful company at a fair price, than to buy a fair company at a wonderful price." 
In 2000, Stanford accounting professor Joseph Piotroski developed the F-score, which discriminates higher potential members within a class of value candidates. The F-score aims to discover additional value from signals in a firm's series of annual financial statements, after initial screening of static measures like book-to-market value. The F-score formula inputs financial statements and awards points for meeting predetermined criteria. Piotroski retrospectively analyzed a class of high book-to-market stocks in the period 1976–1996, and demonstrated that high F-score selections increased returns by 7.5% annually versus the class as a whole. The American Association of Individual Investors examined 56 screening methods in a retrospective analysis of the financial crisis of 2008, and found that only F-score produced positive results. Over-simplification of value The term "value investing" causes confusion because it suggests that it is a distinct strategy, as opposed to something that all investors (including growth investors) should do. In a 1992 letter to shareholders, Warren Buffett said, "We think the very term 'value investing' is redundant". In other words, there is no such thing as "non-value investing" because putting your money into assets that you believe are overvalued would be better described as speculation, conspicuous consumption, etc., but not investing. Unfortunately, the term still exists, and therefore the quest for a distinct "value investing" strategy leads to over-simplification, both in practice and in theory. Firstly, various naive "value investing" schemes, promoted as simple, are grossly inaccurate because they completely ignore the value of growth, or even of earnings altogether. For example, many investors look only at dividend yield. Thus they would prefer a 5% dividend yield at a declining company over a modestly higher-priced company that earns twice as much, reinvests half of earnings to achieve 20% growth, pays out the rest in the form of buybacks (which is more tax efficient), and has huge cash reserves. These "dividend investors" tend to hit older companies with huge payrolls that are already highly indebted and behind technologically, and can least afford to deteriorate further. By consistently voting for increased debt, dividends, etc., these naive "value investors" (and the type of management they tend to appoint) serve to slow innovation, and to prevent the majority of the population from working at healthy businesses. Furthermore, the method of calculating the "intrinsic value" may not be well-defined. Some analysts believe that two investors can analyze the same information and reach different conclusions regarding the intrinsic value of the company, and that there is no systematic or standard way to value a stock. In other words, a value investing strategy can only be considered successful if it delivers excess returns after allowing for the risk involved, where risk may be defined in many different ways, including market risk, multi-factor models or idiosyncratic risk. See also Contrarian investing Index investing Low-volatility investing Quality investing Value (economics) Value averaging Value premium References Further reading The Theory of Investment Value (1938), by John Burr Williams. The Intelligent Investor (1949), by Benjamin Graham. You Can Be a Stock Market Genius (1997), by Joel Greenblatt. . Contrarian Investment Strategies: The Next Generation (1998), by David Dreman. . The Essays of Warren Buffett (2001), edited by Lawrence A. 
Cunningham. . The Little Book That Beats the Market (2006), by Joel Greenblatt. . The Little Book of Value Investing (2006), by Chris Browne. . "The Rediscovered Benjamin Graham - selected writings of the wall street legend," by Janet Lowe. John Wiley & Sons "Benjamin Graham on Value Investing," Janet Lowe, Dearborn "Value Investing: From Graham to Buffett and Beyond" (2004), by Bruce C. N. Greenwald, Judd Kahn, Paul D. Sonkin, Michael van Biema "Modern Security Analysis: Understand Wall Street Fundamentals" (2013), by Fernando Diz and Martin J. Whitman, The Most Important Thing Illuminated (2013), by Howard Marks "Stocks and Exchange - the only Book you need" (2021), by Ladis Konecny, ISBN 9783848220656 Business terms Finance theories Financial risk Investment Market trends Mathematical finance Personal finance Securities (finance) Stock market Valuation (finance)
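The Piotroski F-score described in the Criticism section above can be sketched as a simple scoring function. The nine binary signals below are a common summary of Piotroski's criteria; the exact definitions in the original 2000 paper differ in detail, and the input field names here are hypothetical.

```python
def piotroski_f_score(cur, prev):
    """Toy Piotroski F-score: one point per satisfied signal (0-9).

    `cur` and `prev` are dicts of financial-statement items for the
    current and prior fiscal year; the field names are hypothetical.
    """
    roa      = cur["net_income"] / cur["total_assets"]
    roa_prev = prev["net_income"] / prev["total_assets"]
    signals = [
        roa > 0,                                        # positive return on assets
        cur["cfo"] > 0,                                 # positive operating cash flow
        roa > roa_prev,                                 # improving ROA
        cur["cfo"] / cur["total_assets"] > roa,         # cash flow exceeds accruals
        cur["lt_debt"] / cur["total_assets"]
            < prev["lt_debt"] / prev["total_assets"],   # falling leverage
        cur["current_ratio"] > prev["current_ratio"],   # improving liquidity
        cur["shares_outstanding"] <= prev["shares_outstanding"],  # no dilution
        cur["gross_margin"] > prev["gross_margin"],     # improving margin
        cur["asset_turnover"] > prev["asset_turnover"], # improving efficiency
    ]
    return sum(signals)
```

In Piotroski's study, stocks scoring 8 or 9 were treated as the strongest candidates among high book-to-market firms.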
Value investing
[ "Mathematics" ]
4,986
[ "Applied mathematics", "Mathematical finance" ]
1,011,245
https://en.wikipedia.org/wiki/KL-7
The TSEC/KL-7, also known as Adonis, was an off-line non-reciprocal rotor encryption machine. The KL-7 had eight rotors to encrypt the text, most of which moved in a complex pattern, controlled by notched rings. The non-moving rotor was fourth from the left of the stack. The KL-7 also encrypted the message indicator. History and development In 1945, the Army Security Agency (ASA) initiated the research for a new cipher machine, designated MX-507, planned as a successor to the SIGABA and the less secure Hagelin M-209. In 1949, its development was transferred to the newly formed Armed Forces Security Agency (AFSA), which named the machine AFSAM-7, standing for Armed Forces Security Agency Machine No. 7. It was the first rotor crypto machine developed under one centralized cryptologic organisation as a standard machine for all parts of the armed forces, and the first cipher machine to use electronics (vacuum tubes), apart from the British ROCKEX, which was developed during World War II. It was also the first cipher machine to use the re-entry (re-flexing, not to be confused with reflector) principle, conceived by Albert W. Small, which feeds the encryption output back into the encryption process so that some symbols are enciphered more than once. In 1953, AFSA's successor, the U.S. National Security Agency, introduced the machine to the US Army and Air Force, the FBI and the CIA. In 1955, the AFSAM-7 was renamed TSEC/KL-7, following the new standard crypto nomenclature. It was the most widely used crypto machine in the US armed forces until the mid-1960s and was the first machine capable of supporting large networks that was considered secure against known-plaintext attack. Some 25,000 machines were in use in the mid-1960s. From 1956 on, the KL-7 was also introduced to all NATO countries. The KL-7 used two encryption procedures, codenamed POLLUX and ADONIS. The POLLUX procedure sent the message indicator (i.e. start position of the rotors) in clear, and ADONIS sent the message indicator in encrypted form. Description The KL-7 was designed for off-line operation. It was about the size of a Teletype machine and had a similar three-row keyboard, with shift keys for letters and figures. The KL-7 produced printed output on narrow paper strips that were then glued to message pads. When encrypting, it automatically inserted a space between five-letter code groups. One of the reasons for the five-letter groups was that messages might be given to a Morse code operator; the number of five-letter groups was easily verified when transmitted. There was an adaptor available, the HL-1/X22, that allowed 5-level Baudot punched paper tape from Teletype equipment to be read for decryption. The standard KL-7 had no ability to punch tapes. A variant of the KL-7, the KL-47, could also punch paper tape for direct input to teleprinters. Product details Each rotor had 36 contacts. To establish a new encryption setting, operators would select a rotor and place it in a plastic outer ring at a certain offset. The ring and the offset to use for each position were specified in a printed key list. This process would be repeated eight times until all rotor positions were filled. Key settings were usually changed every day at midnight GMT. The basket containing the rotors was removable, and it was common to have a second basket and set of rotors, allowing the rotors to be set up prior to key change.
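The re-entry principle described above can be illustrated with a deliberately simplified toy model. Nothing below reproduces the actual KL-7, whose rotor wiring, stepping logic and permutor board are not public; the randomly wired 36-contact rotors, the fixed offsets standing in for a key list, and the choice of ten re-entry contacts are all invented for illustration.

```python
import random
import string

# Toy illustration of rotor "re-entry": 26 letter contacts (0-25) plus 10
# extra re-entry contacts (26-35).  The wiring, offsets and (absent)
# stepping are invented; the real KL-7 logic is not reproduced here.
N_CONTACTS = 36
RE_ENTRY = set(range(26, 36))

rng = random.Random(7)                     # fixed seed: reproducible toy wiring
ROTORS = [rng.sample(range(N_CONTACTS), N_CONTACTS) for _ in range(8)]

def encipher_letter(ch, offsets):
    """Send one letter through the rotor stack; a signal leaving on a
    re-entry contact is fed back for another complete pass."""
    signal = string.ascii_uppercase.index(ch)
    while True:
        for rotor, offset in zip(ROTORS, offsets):
            signal = rotor[(signal + offset) % N_CONTACTS]
        if signal not in RE_ENTRY:
            return string.ascii_uppercase[signal]
        # Re-entry: go around for another pass.  Each full pass is a fixed
        # permutation, so the orbit eventually reaches a letter contact and
        # the loop terminates.

offsets = [3, 1, 4, 1, 5, 9, 2, 6]          # stand-in for a daily key setting
print("".join(encipher_letter(c, offsets) for c in "REENTRY"))
```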
The old basket could then be kept intact for most of the day to decode messages sent the previous day, but received after midnight. Rotor wiring was changed every 1 to 3 years. The keyboard itself was a large sliding switch, also called permutor board. A signal, coming from a letter key, went through the rotors, back to the permutor board to continue to the printer. The KL-7 was non-reciprocal. Therefore, depending on the Encipher or Decipher position of the permutor board, the direction of the signal through the rotors was changed. The rotor basket had two sets of connectors, two with 26 pins and two with 10 pins, at each end that mated with the main assembly. Both 26 pin connectors were connected to the keyboard to enable the switching of the signal direction through the rotors. Both 10 pin connectors on each side were hard-wired with each other. If a signal that entered on one of the 26 pins left the rotor pack on one of these 10 pins, that signal was redirected back into the rotors on the entry side to perform a new pass through the rotors. This loop-back, the so-called re-entry, created complex scrambling of the signal and could result in multiple passes through the rotor pack, depending on the current state of the rotor wiring. There was also a switch pile-up under each movable rotor that was operated by cams on its plastic outer ring. Different outer rings had different arrangements of cams. The circuitry of the switches controlled solenoids which in turn enabled the movement of the rotors. The combination of cam rings and the controlling of a rotor by several switches created a most complex and irregular stepping. The exact wiring between switches and solenoids is still classified. The KL-7 was largely replaced by electronic systems such as the KW-26 ROMULUS and the KW-37 JASON in the 1970s, but KL-7s were kept in service as backups and for special uses. In 1967, when the U.S. Navy sailor John Anthony Walker walked into the embassy of the Soviet Union in Washington, DC seeking employment as a spy, he carried with him a copy of a key list for the KL-47. KL-7s were compromised at other times as well. A unit captured by North Vietnam is on display at NSA's National Cryptologic Museum. The KL-7 was withdrawn from service in June 1983, and Canada's last KL-7-encrypted message was sent on June 30, 1983, "after 27 years of service." The successor to the KL-7 was the KL-51, an off-line, paper tape encryption system that used digital electronics instead of rotors. See also NSA encryption systems Typex Enigma Notes Britannica (2005). Proc (2005) differs, saying that, "after the Walker family spy ring was exposed in the mid-1980s (1985)...immediately, all KL-7's were withdrawn from service" References Sources Jerry Proc's page on the KL-7, retrieved August 15, 2023. NSA Crypto Almanac 50th Anniversary - The development of the AFSAM-7, retrieved February 27, 2011. Technical details and history of the TSEC/KL-7, from Dirk Rijmenants' Cipher Machines & Cryptology, retrieved February 27, 2011. History of the TSEC/KL-7 - First U.S. tactical lightweight cipher machine using electronics, Cipher Machines & Cryptology, retrieved September 16, 2024. Patent for Rotor Re-entry by Albert W Small, filed 1944 from Free Patents On-line, retrieved February 27, 2011. "Cryptology", Encyclopædia Britannica. Retrieved 22 June 2005 from Encyclopædia Britannica Online. Card attached to KL-51 on display at the National Cryptologic Museum, 2005. 
External links TSEC/KL-7 with detailed information and many images on the Crypto Museum website Accurate TSEC/KL-7 Simulator (Windows), on Dirk Rijmenants' Cipher Machines & Cryptology Accurate TSEC/KL-7 Simulator (Java, platform-independent), released by MIT, on Crypto Museum website Rotor machines National Security Agency encryption devices Computer-related introductions in 1949 Products introduced in 1949
KL-7
[ "Physics", "Technology" ]
1,707
[ "Physical systems", "Machines", "Rotor machines" ]
1,011,270
https://en.wikipedia.org/wiki/Bourbaki%E2%80%93Witt%20theorem
In mathematics, the Bourbaki–Witt theorem in order theory, named after Nicolas Bourbaki and Ernst Witt, is a basic fixed-point theorem for partially ordered sets. It states that if X is a non-empty chain complete poset, and f : X → X is a function such that f(x) ≥ x for all x in X, then f has a fixed point. Such a function f is called inflationary or progressive. Special case of a finite poset If the poset X is finite then the statement of the theorem has a clear interpretation that leads to the proof. The sequence of successive iterates, xn+1 = f(xn), where x0 is any element of X, is monotone increasing. By the finiteness of X, it stabilizes: xn = x∞ for n sufficiently large. It follows that x∞ is a fixed point of f. Proof of the theorem Pick some x0 in X. Define a function K recursively on the ordinals as follows: K(0) = x0 and K(α + 1) = f(K(α)). If λ is a limit ordinal, then by construction {K(α) : α < λ} is a chain in X. Define K(λ) = sup {K(α) : α < λ}. This is now an increasing function from the ordinals into X. It cannot be strictly increasing, as if it were we would have an injective function from the ordinals into a set, violating Hartogs' lemma. Therefore the function must be eventually constant, so for some α we have K(α + 1) = K(α); that is, f(K(α)) = K(α). So letting x = K(α), we have our desired fixed point. Q.E.D. Applications The Bourbaki–Witt theorem has various important applications. One of the most common is in the proof that the axiom of choice implies Zorn's lemma. We first prove it for the case where X is chain complete and has no maximal element. Let g be a choice function on the collection of non-empty subsets of X. Define a function f : X → X by f(x) = g({y : y > x}). This is allowed as, by assumption, the set {y : y > x} is non-empty. Then f(x) > x, so f is an inflationary function with no fixed point, contradicting the theorem. This special case of Zorn's lemma is then used to prove the Hausdorff maximality principle, that every poset has a maximal chain, which is easily seen to be equivalent to Zorn's Lemma. Bourbaki–Witt has other applications. In particular in computer science, it is used in the theory of computable functions. It is also used to define recursive data types, e.g. linked lists, in domain theory. See also Kleene fixed-point theorem for Scott-continuous functions Knaster–Tarski theorem for complete lattices References Order theory Fixed-point theorems Theorems in the foundations of mathematics Articles containing proofs
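The finite special case above can be illustrated with a short sketch: iterate an inflationary map until the sequence of iterates stabilizes. The poset (divisors of 12 ordered by divisibility) and the particular map are invented for illustration and are not part of the theorem.

```python
# Finite special case of Bourbaki–Witt: iterating an inflationary map f
# (one with x <= f(x) for every x) on a finite poset reaches a fixed point.
DIVISORS = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    """Order relation: a <= b  iff  a divides b."""
    return b % a == 0

def f(x):
    """Inflationary map: jump to the smallest divisor of 12 that is a
    proper multiple of x; the top element 12 is sent to itself."""
    above = [d for d in DIVISORS if leq(x, d) and d != x]
    return min(above) if above else x

def fixed_point(f, x0):
    """Follow x0, f(x0), f(f(x0)), ... until the sequence stabilizes."""
    x = x0
    while f(x) != x:
        x = f(x)
    return x

print(fixed_point(f, 1))   # 1 -> 2 -> 4 -> 12, and f(12) = 12
```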
Bourbaki–Witt theorem
[ "Mathematics" ]
528
[ "Theorems in mathematical analysis", "Order theory", "Foundations of mathematics", "Mathematical logic", "Fixed-point theorems", "Theorems in topology", "Mathematical problems", "Articles containing proofs", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
1,011,332
https://en.wikipedia.org/wiki/Predicate%20variable
In mathematical logic, a predicate variable is a predicate letter which functions as a "placeholder" for a relation (between terms), but which has not been specifically assigned any particular relation (or meaning). Predicate variables are commonly denoted by capital roman letters, though lower case roman letters are also used. In first-order logic, they can be more properly called metalinguistic variables. In higher-order logic, predicate variables correspond to propositional variables which can stand for well-formed formulas of the same logic, and such variables can be quantified by means of (at least) second-order quantifiers. Notation Predicate variables should be distinguished from predicate constants, which could be represented either with a different (exclusive) set of predicate letters, or by their own symbols which really do have their own specific meaning in their domain of discourse. If letters are used for both predicate constants and predicate variables, then there must be a way of distinguishing between them. One possibility is to use letters W, X, Y, Z to represent predicate variables and letters A, B, C,..., U, V to represent predicate constants. If these letters are not enough, then numerical subscripts can be appended after the letter in question (as in X1, X2, X3). Another option is to use Greek lower-case letters to represent such metavariable predicates. Then, such letters could be used to represent entire well-formed formulae (wff) of the predicate calculus: any free variable terms of the wff could be incorporated as terms of the Greek-letter predicate. This is the first step towards creating a higher-order logic. Usage If the predicate variables are not defined as belonging to the vocabulary of the predicate calculus, then they are predicate metavariables, whereas the rest of the predicates are just called "predicate letters". The metavariables are thus understood to be used to code for axiom schemata and theorem schemata (derived from the axiom schemata). Whether the "predicate letters" are constants or variables is a subtle point: they are not constants in the same sense that predicate constants, or numerical constants, are. If "predicate variables" are only allowed to be bound to predicate letters of zero arity (which have no arguments), where such letters represent propositions, then such variables are propositional variables, and any predicate logic which allows second-order quantifiers to be used to bind such propositional variables is a second-order predicate calculus, or second-order logic. If predicate variables are also allowed to be bound to predicate letters which are unary or have higher arity, and when such letters represent propositional functions, such that the domain of the arguments is mapped to a range of different propositions, and when such variables can be bound by quantifiers to such sets of propositions, then the result is a higher-order predicate calculus, or higher-order logic. See also References Bibliography Rudolf Carnap and William H. Meyer. Introduction to Symbolic Logic and Its Applications. Dover Publications (June 1, 1958). Predicate logic Logic symbols
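A pair of small example formulas, invented purely for illustration, may help contrast the two uses described above: in the first, the Greek letter is a metavariable ranging over formulas of a schema; in the second, the predicate variable X is bound by a second-order quantifier inside the object language.

```latex
% Axiom schema: \varphi is a metavariable standing for any well-formed
% formula with one free variable; it is not quantified in the object language.
\forall x \, \bigl( \varphi(x) \rightarrow \exists y \, \varphi(y) \bigr)

% Second-order sentence: the predicate variable X is itself bound by a
% quantifier, which is what makes the logic (at least) second-order.
\forall X \, \forall x \, \bigl( X(x) \lor \neg X(x) \bigr)
```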
Predicate variable
[ "Mathematics" ]
700
[ "Predicate logic", "Mathematical logic", "Symbols", "Mathematical symbols", "Logic symbols", "Basic concepts in set theory" ]
1,011,379
https://en.wikipedia.org/wiki/Multilevel%20security
Multilevel security or multiple levels of security (MLS) is the application of a computer system to process information with incompatible classifications (i.e., at different security levels), permit access by users with different security clearances and needs-to-know, and prevent users from obtaining access to information for which they lack authorization. There are two contexts for the use of multilevel security. One context is to refer to a system that is adequate to protect itself from subversion and has robust mechanisms to separate information domains, that is, trustworthy. Another context is to refer to an application of a computer that will require the computer to be strong enough to protect itself from subversion, and have adequate mechanisms to separate information domains, that is, a system we must trust. This distinction is important because systems that need to be trusted are not necessarily trustworthy. Trusted operating systems An MLS operating environment often requires a highly trustworthy information processing system often built on an MLS operating system (OS), but not necessarily. Most MLS functionality can be supported by a system composed entirely from untrusted computers, although it requires multiple independent computers linked by hardware security-compliant channels (see section B.6.2 of the Trusted Network Interpretation, NCSC-TG-005). An example of hardware enforced MLS is asymmetric isolation. If one computer is being used in MLS mode, then that computer must use a trusted operating system. Because all information in an MLS environment is physically accessible by the OS, strong logical controls must exist to ensure that access to information is strictly controlled. Typically this involves mandatory access control that uses security labels, like the Bell–LaPadula model. Customers that deploy trusted operating systems typically require that the product complete a formal computer security evaluation. The evaluation is stricter for a broader security range, which are the lowest and highest classification levels the system can process. The Trusted Computer System Evaluation Criteria (TCSEC) was the first evaluation criteria developed to assess MLS in computer systems. Under that criteria there was a clear uniform mapping between the security requirements and the breadth of the MLS security range. Historically few implementations have been certified capable of MLS processing with a security range of Unclassified through Top Secret. Among them were Honeywell's SCOMP, USAF SACDIN, NSA's Blacker, and Boeing's MLS LAN, all under TCSEC, 1980s vintage and Intel 80386-based. Currently, MLS products are evaluated under the Common Criteria. In late 2008, the first operating system (more below) was certified to a high evaluated assurance level: Evaluation Assurance Level (EAL) - EAL 6+ / High Robustness, under the auspices of a U.S. government program requiring multilevel security in a high threat environment. While this assurance level has many similarities to that of the old Orange Book A1 (such as formal methods), the functional requirements focus on fundamental isolation and information flow policies rather than higher level policies such as Bell-La Padula. Because the Common Criteria decoupled TCSEC's pairing of assurance (EAL) and functionality (Protection Profile), the clear uniform mapping between security requirements and MLS security range capability documented in CSC-STD-004-85 has largely been lost when the Common Criteria superseded the Rainbow Series. 
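The Bell–LaPadula style of label-based mandatory access control mentioned above can be sketched as follows. The four hierarchical levels are an illustrative assumption, and a real implementation compares full labels (level plus compartment/category sets), not levels alone.

```python
# Toy Bell–LaPadula check: hierarchical levels only, no categories.
# The level names and their ordering are an illustrative assumption.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """True if label/clearance a dominates label b."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject_clearance, object_label):
    # Simple security property: no read up.
    return dominates(subject_clearance, object_label)

def may_write(subject_clearance, object_label):
    # *-property: no write down.
    return dominates(object_label, subject_clearance)

assert may_read("SECRET", "CONFIDENTIAL")        # read down: allowed
assert not may_read("SECRET", "TOP SECRET")      # read up: denied
assert may_write("SECRET", "TOP SECRET")         # write up: allowed
assert not may_write("SECRET", "UNCLASSIFIED")   # write down: denied
```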
Freely available operating systems with some features that support MLS include Linux with the Security-Enhanced Linux feature enabled and FreeBSD. Security evaluation was once thought to be a problem for these free MLS implementations for three reasons: It is always very difficult to implement kernel self-protection strategy with the precision needed for MLS trust, and these examples were not designed to or certified to an MLS protection profile so they may not offer the self-protection needed to support MLS. Aside from EAL levels, the Common Criteria lacks an inventory of appropriate high assurance protection profiles that specify the robustness needed to operate in MLS mode. Even if (1) and (2) were met, the evaluation process is very costly and imposes special restrictions on configuration control of the evaluated software. Notwithstanding such suppositions, Red Hat Enterprise Linux 5 was certified against LSPP, RBACPP, and CAPP at EAL4+ in June 2007. It uses Security-Enhanced Linux to implement MLS and was the first Common Criteria certification to enforce TOE security properties with Security-Enhanced Linux. Vendor certification strategies can be misleading to laypersons. A common strategy exploits the layperson's overemphasis of EAL level with over-certification, such as certifying an EAL 3 protection profile (like CAPP) to elevated levels, like EAL 4 or EAL 5. Another is adding and certifying MLS support features (such as role-based access control protection profile (RBACPP) and labeled security protection profile (LSPP)) to a kernel that is not evaluated to an MLS-capable protection profile. Those types of features are services run on the kernel and depend on the kernel to protect them from corruption and subversion. If the kernel is not evaluated to an MLS-capable protection profile, MLS features cannot be trusted regardless of how impressive the demonstration looks. It is particularly noteworthy that CAPP is specifically not an MLS-capable profile as it specifically excludes self-protection capabilities critical for MLS. General Dynamics offers PitBull, a trusted, MLS operating system. PitBull is currently offered only as an enhanced version of Red Hat Enterprise Linux, but earlier versions existed for Sun Microsystems Solaris, IBM AIX, and SVR4 Unix. PitBull provides a Bell LaPadula security mechanism, a Biba integrity mechanism, a privilege replacement for superuser, and many other features. PitBull has the security base for General Dynamics' Trusted Network Environment (TNE) product since 2009. TNE enables Multilevel information sharing and access for users in the Department of Defense and Intelligence communities operating a varying classification levels. It's also the foundation for the Multilevel coalition sharing environment, the Battlefield Information Collection and Exploitation Systems Extended (BICES-X). Sun Microsystems, now Oracle Corporation, offers Solaris Trusted Extensions as an integrated feature of the commercial OSs Solaris and OpenSolaris. In addition to the controlled access protection profile (CAPP), and role-based access control (RBAC) protection profiles, Trusted Extensions have also been certified at EAL4 to the labeled security protection profile (LSPP). The security target includes both desktop and network functionality. LSPP mandates that users are not authorized to override the labeling policies enforced by the kernel and X Window System (X11 server). The evaluation does not include a covert channel analysis. 
Because these certifications depend on CAPP, no Common Criteria certifications suggest this product is trustworthy for MLS. BAE Systems offers XTS-400, a commercial system that supports MLS at what the vendor claims is "high assurance". Predecessor products (including the XTS-300) were evaluated at the TCSEC B3 level, which is MLS-capable. The XTS-400 has been evaluated under the Common Criteria at EAL5+ against the CAPP and LSPP protection profiles. CAPP and LSPP are both EAL3 protection profiles that are not inherently MLS-capable, but the security target for the Common Criteria evaluation of this product contains an enriched set of security functions that provide MLS capability. Problem areas Sanitization is a problem area for MLS systems. Systems that implement MLS restrictions, like those defined by Bell–LaPadula model, only allow sharing when it obviously does not violate security restrictions. Users with lower clearances can easily share their work with users holding higher clearances, but not vice versa. There is no efficient, reliable mechanism by which a Top Secret user can edit a Top Secret file, remove all Top Secret information, and then deliver it to users with Secret or lower clearances. In practice, MLS systems circumvent this problem via privileged functions that allow a trustworthy user to bypass the MLS mechanism and change a file's security classification. However, the technique is not reliable. Covert channels pose another problem for MLS systems. For an MLS system to keep secrets perfectly, there must be no possible way for a Top Secret process to transmit signals of any kind to a Secret or lower process. This includes side effects such as changes in available memory or disk space, or changes in process timing. When a process exploits such a side effect to transmit data, it is exploiting a covert channel. It is extremely difficult to close all covert channels in a practical computing system, and it may be impossible in practice. The process of identifying all covert channels is a challenging one by itself. Most commercially available MLS systems do not attempt to close all covert channels, even though this makes it impractical to use them in high security applications. Bypass is problematic when introduced as a means to treat a system high object as if it were MLS trusted. A common example is to extract data from a secret system high object to be sent to an unclassified destination, citing some property of the data as trusted evidence that it is 'really' unclassified (e.g. 'strict' format). A system high system cannot be trusted to preserve any trusted evidence, and the result is that an overt data path is opened with no logical way to securely mediate it. Bypass can be risky because, unlike narrow bandwidth covert channels that are difficult to exploit, bypass can present a large, easily exploitable overt leak in the system. Bypass often arises out of failure to use trusted operating environments to maintain continuous separation of security domains all the way back to their origin. When that origin lies outside the system boundary, it may not be possible to validate the trusted separation to the origin. In that case, the risk of bypass can be unavoidable if the flow truly is essential. A common example of unavoidable bypass is a subject system that is required to accept secret IP packets from an untrusted source, encrypt the secret userdata and not the header and deposit the result to an untrusted network. 
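To make the covert channel idea above concrete, here is a deliberately crude sketch of a storage channel between two cooperating processes on one machine: a "High" sender signals bits through the existence of a file that a "Low" receiver can observe. The path, slot length and message are invented for illustration; real covert channels exploit far subtler shared state such as timing, memory pressure or disk usage, and a real analysis must account for noise.

```python
import os
import tempfile
import threading
import time

CHANNEL = os.path.join(tempfile.gettempdir(), "covert_demo_flag")
SLOT = 0.2  # seconds per transmitted bit (illustrative)

def high_send(bits):
    """High side: encode each bit by holding or releasing a shared resource."""
    for bit in bits:
        if bit:
            open(CHANNEL, "w").close()       # resource held -> observable "1"
        elif os.path.exists(CHANNEL):
            os.remove(CHANNEL)               # resource released -> "0"
        time.sleep(SLOT)

def low_receive(n_bits):
    """Low side: sample the observable side effect once per slot."""
    observed = []
    for _ in range(n_bits):
        observed.append(1 if os.path.exists(CHANNEL) else 0)
        time.sleep(SLOT)
    return observed

if os.path.exists(CHANNEL):
    os.remove(CHANNEL)                       # start from a clean state
secret = [1, 0, 1, 1, 0, 1, 0, 0]
sender = threading.Thread(target=high_send, args=(secret,))
sender.start()
time.sleep(SLOT / 2)                         # sample in the middle of each slot
received = low_receive(len(secret))
sender.join()
print("sent:    ", secret)
print("received:", received)
if os.path.exists(CHANNEL):
    os.remove(CHANNEL)                       # clean up the demo file
```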
The source lies outside the sphere of influence of the subject system. Although the source is untrusted (e.g. system high) it is being trusted as if it were MLS because it provides packets that have unclassified headers and secret plaintext userdata, an MLS data construct. Since the source is untrusted, it could be corrupt and place secrets in the unclassified packet header. The corrupted packet headers could be nonsense but it is impossible for the subject system to determine that with any reasonable reliability. The packet userdata is cryptographically well protected but the packet header can contain readable secrets. If the corrupted packets are passed to an untrusted network by the subject system they may not be routable but some cooperating corrupt process in the network could grab the packets and acknowledge them and the subject system may not detect the leak. This can be a large overt leak that is hard to detect. Viewing classified packets with unclassified headers as system high structures instead of the MLS structures they really are presents a very common but serious threat. Most bypass is avoidable. Avoidable bypass often results when system architects design a system before correctly considering security, then attempt to apply security after the fact as add-on functions. In that situation, bypass appears to be the only (easy) way to make the system work. Some pseudo-secure schemes are proposed (and approved!) that examine the contents of the bypassed data in a vain attempt to establish that bypassed data contains no secrets. This is not possible without trusting something about the data such as its format, which is contrary to the assumption that the source is not trusted to preserve any characteristics of the source data. Assured "secure bypass" is a myth, just as a so-called High Assurance Guard (HAG) that transparently implements bypass. The risk these introduce has long been acknowledged; extant solutions are ultimately procedural, rather than technical. There is no way to know with certainty how much classified information is taken from our systems by exploitation of bypass. Debate: "There is no such thing as MLS" Some laypersons are designing secure computing systems and drawing the conclusion that MLS does not exist. An explanation could be that there is a decline in COMPUSEC experts and the MLS term has been overloaded by two different meanings / uses. These two uses are: MLS as a processing environment vs MLS as a capability. The belief that MLS is non-existent is based on the belief that there are no products certified to operate in an MLS environment or mode and that therefore MLS as a capability does not exist. One does not imply the other. Many systems operate in an environment containing data that has unequal security levels and therefore is MLS by the Computer Security Intermediate Value Theorem (CS-IVT). The consequence of this confusion runs deeper. NSA-certified MLS operating systems, databases, and networks have existed in operational mode since the 1970s and that MLS products are continuing to be built, marketed, and deployed. Laypersons often conclude that to admit that a system operates in an MLS environment (environment-centric meaning of MLS) is to be backed into the perceived corner of having a problem with no MLS solution (capability-centric meaning of MLS). MLS is deceptively complex and just because simple solutions are not obvious does not justify a conclusion that they do not exist. 
This can lead to a crippling ignorance about COMPUSEC that manifests itself as whispers that "one cannot talk about MLS," and "There's no such thing as MLS." These MLS-denial schemes change so rapidly that they cannot be addressed. Instead, it is important to clarify the distinction between MLS-environment and MLS-capable. MLS as a security environment or security mode: A community whose users have differing security clearances may perceive MLS as a data sharing capability: users can share information with recipients whose clearance allows receipt of that information. A system is operating in MLS Mode when it has (or could have) connectivity to a destination that is cleared to a lower security level than any of the data the MLS system contains. This is formalized in the CS-IVT. Determination of security mode of a system depends entirely on the system's security environment; the classification of data it contains, the clearance of those who can get direct or indirect access to the system or its outputs or signals, and the system's connectivity and ports to other systems. Security mode is independent of capabilities, although a system should not be operated in a mode for which it is not worthy of trust. MLS as a capability: Developers of products or systems intended to allow MLS data sharing tend to loosely perceive it in terms of a capability to enforce data-sharing restrictions or a security policy, like mechanisms that enforce the Bell–LaPadula model. A system is MLS-capable if it can be shown to robustly implement a security policy. The original use of the term MLS applied to the security environment, or mode. One solution to this confusion is to retain the original definition of MLS and be specific about MLS-capable when that context is used. MILS architecture Multiple Independent Levels of Security (MILS) is an architecture that addresses the domain separation component of MLS. Note that UCDMO (the US government lead for cross domain and multilevel systems) created a term Cross Domain Access as a category in its baseline of DoD and Intelligence Community accredited systems, and this category can be seen as essentially analogous to MILS. Security models such as the Biba model (for integrity) and the Bell–LaPadula model (for confidentiality) allow one-way flow between certain security domains that are otherwise assumed to be isolated. MILS addresses the isolation underlying MLS without addressing the controlled interaction between the domains addressed by the above models. Trusted security-compliant channels mentioned above can link MILS domains to support more MLS functionality. The MILS approach pursues a strategy characterized by an older term, MSL (multiple single level), that isolates each level of information within its own single-level environment (System High). The rigid process communication and isolation offered by MILS may be more useful to ultra high reliability software applications than MLS. MILS notably does not address the hierarchical structure that is embodied by the notion of security levels. This requires the addition of specific import/export applications between domains each of which needs to be accredited appropriately. As such, MILS might be better called Multiple Independent Domains of Security (MLS emulation on MILS would require a similar set of accredited applications for the MLS applications). 
By declining to address out of the box interaction among levels consistent with the hierarchical relations of Bell-La Padula, MILS is (almost deceptively) simple to implement initially but needs non-trivial supplementary import/export applications to achieve the richness and flexibility expected by practical MLS applications. Any MILS/MLS comparison should consider if the accreditation of a set of simpler export applications is more achievable than accreditation of one, more complex MLS kernel. This question depends in part on the extent of the import/export interactions that the stakeholders require. In favour of MILS is the possibility that not all the export applications will require maximal assurance. MSL systems There is another way of solving such problems known as multiple single-level. Each security level is isolated in a separate untrusted domain. The absence of a medium of communication between the domains assures no interaction is possible. The mechanism for this isolation is usually physical separation in separate computers. This is often used to support applications or operating systems which have no possibility of supporting MLS such as Microsoft Windows. Applications Infrastructure such as trusted operating systems are an important component of MLS systems, but in order to fulfill the criteria required under the definition of MLS by CNSSI 4009 (paraphrased at the start of this article), the system must provide a user interface that is capable of allowing a user to access and process content at multiple classification levels from one system. The UCDMO ran a track specifically focused on MLS at the NSA Information Assurance Symposium in 2009, in which it highlighted several accredited (in production) and emergent MLS systems. Note the use of MLS in SELinux. There are several databases classified as MLS systems. Oracle has a product named Oracle Label Security (OLS) that implements mandatory access controls - typically by adding a 'label' column to each table in an Oracle database. OLS is being deployed at the US Army INSCOM as the foundation of an "all-source" intelligence database spanning the JWICS and SIPRNet networks. There is a project to create a labeled version of PostgreSQL, and there are also older labeled-database implementations such as Trusted Rubix. These MLS database systems provide a unified back-end system for content spanning multiple labels, but they do not resolve the challenge of having users process content at multiple security levels in one system while enforcing mandatory access controls. There are also several MLS end-user applications. The other MLS capability currently on the UCDMO baseline is called MLChat , and it is a chat server that runs on the XTS-400 operating system - it was created by the US Naval Research Laboratory. Given that content from users at different domains passes through the MLChat server, dirty-word scanning is employed to protect classified content, and there has been some debate about if this is truly an MLS system or more a form of cross-domain transfer data guard. Mandatory access controls are maintained by a combination of XTS-400 and application-specific mechanisms. Joint Cross Domain eXchange (JCDX) is another example of an MLS capability currently on the UCDMO baseline. 
JCDX is the only Department of Defense (DoD), Defense Intelligence Agency (DIA) accredited Multilevel Security (MLS) Command, Control, Communication, Computers and Intelligence (C4I) system that provides near real-time intelligence and warning support to theater and forward-deployed tactical commanders. The JCDX architecture is comprehensively integrated with a high-assurance Protection Level Four (PL4) secure operating system, utilizing data labeling to disseminate near real-time information on force activities and potential terrorist threats on and around the world's oceans. It is installed at locations in the United States and allied partner countries, where it is capable of providing data from Top Secret/SCI down to Secret-Releasable levels, all on a single platform. MLS applications not currently part of the UCDMO baseline include several applications from BlueSpace. BlueSpace has several MLS applications, including an MLS email client, an MLS search application, and an MLS C2 system. BlueSpace uses a middleware strategy to enable its applications to be platform neutral, orchestrating one user interface across multiple Windows OS instances (virtualized or remote terminal sessions). The US Naval Research Laboratory has also implemented a multilevel web application framework called MLWeb, which integrates the Ruby on Rails framework with a multilevel database based on SQLite3. Trends Perhaps the greatest change going on in the multilevel security arena today is the convergence of MLS with virtualization. An increasing number of trusted operating systems are moving away from labeling files and processes, and are instead moving towards UNIX containers or virtual machines. Examples include zones in Solaris 10 TX, the padded cell hypervisor in systems such as Green Hills' Integrity platform, and XenClient XT from Citrix. The High Assurance Platform from NSA, as implemented in General Dynamics' Trusted Virtualization Environment (TVE), is another example - it uses SELinux at its core, and can support MLS applications that span multiple domains. See also Bell–LaPadula model Biba model, Biba Integrity Model Clark–Wilson model Discretionary access control (DAC) Evaluation Assurance Level (EAL) Graham-Denning model Mandatory access control (MAC) Multi categories security (MCS) Multifactor authentication Non-interference (security) model Role-based access control (RBAC) Security modes of operation System high mode Take-grant model References Further reading Trusted Computer System Evaluation Criteria (a.k.a. the TCSEC or "Orange Book"). Trusted Network Interpretation (a.k.a. the TNI or "Red Book"). P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998. External links First RTOS Integrity 178B certified to support MILS INTEGRITY 178B product Page PitBull Trusted Operating System Computer security models
Multilevel security
[ "Engineering" ]
4,668
[ "Cybersecurity engineering", "Computer security models" ]
1,011,474
https://en.wikipedia.org/wiki/Potassium%20channel
Potassium channels are the most widely distributed type of ion channel found in virtually all organisms. They form potassium-selective pores that span cell membranes. Potassium channels are found in most cell types and control a wide variety of cell functions. Function Potassium channels function to conduct potassium ions down their electrochemical gradient, doing so both rapidly (up to the diffusion rate of K+ ions in bulk water) and selectively (excluding, most notably, sodium despite the sub-angstrom difference in ionic radius). Biologically, these channels act to set or reset the resting potential in many cells. In excitable cells, such as neurons, the delayed counterflow of potassium ions shapes the action potential. By contributing to the regulation of the cardiac action potential duration in cardiac muscle, malfunction of potassium channels may cause life-threatening arrhythmias. Potassium channels may also be involved in maintaining vascular tone. They also regulate cellular processes such as the secretion of hormones (e.g., insulin release from beta-cells in the pancreas) so their malfunction can lead to diseases (such as diabetes). Some toxins, such as dendrotoxin, are potent because they block potassium channels. Types There are four major classes of potassium channels: Calcium-activated potassium channel - open in response to the presence of calcium ions or other signalling molecules. Inwardly rectifying potassium channel - passes current (positive charge) more easily in the inward direction (into the cell). Tandem pore domain potassium channel - are constitutively open or possess high basal activation, such as the "resting potassium channels" or "leak channels" that set the negative membrane potential of neurons. Voltage-gated potassium channel - are voltage-gated ion channels that open or close in response to changes in the transmembrane voltage. The following table contains a comparison of the major classes of potassium channels with representative examples (for a complete list of channels within each class, see the respective class pages). For more examples of pharmacological modulators of potassium channels, see potassium channel blocker and potassium channel opener. Structure Potassium channels have a tetrameric structure in which four identical protein subunits associate to form a fourfold symmetric (C4) complex arranged around a central ion conducting pore (i.e., a homotetramer). Alternatively four related but not identical protein subunits may associate to form heterotetrameric complexes with pseudo C4 symmetry. All potassium channel subunits have a distinctive pore-loop structure that lines the top of the pore and is responsible for potassium selective permeability. There are over 80 mammalian genes that encode potassium channel subunits. However potassium channels found in bacteria are amongst the most studied of ion channels, in terms of their molecular structure. Using X-ray crystallography, profound insights have been gained into how potassium ions pass through these channels and why (smaller) sodium ions do not. The 2003 Nobel Prize for Chemistry was awarded to Rod MacKinnon for his pioneering work in this area. Selectivity filter Potassium ion channels remove the hydration shell from the ion when it enters the selectivity filter. The selectivity filter is formed by a five residue sequence, TVGYG, termed the signature sequence, within each of the four subunits. 
This signature sequence is within a loop between the pore helix and TM2/6, historically termed the P-loop. This signature sequence is highly conserved, with the exception that a valine residue in prokaryotic potassium channels is often substituted with an isoleucine residue in eukaryotic channels. This sequence adopts a unique main chain structure, structurally analogous to a nest protein structural motif. The four sets of electronegative carbonyl oxygen atoms are aligned toward the center of the filter pore and form a square antiprism similar to a water-solvating shell around each potassium binding site. The distance between the carbonyl oxygens and potassium ions in the binding sites of the selectivity filter is the same as between water oxygens in the first hydration shell and a potassium ion in water solution, providing an energetically-favorable route for de-solvation of the ions. Sodium ions, however, are too small to fill the space between the carbonyl oxygen atoms. Thus, it is energetically favorable for sodium ions to remain bound with water molecules in the extracellular space, rather than to pass through the potassium-selective ion pore. This width appears to be maintained by hydrogen bonding and van der Waals forces within a sheet of aromatic amino acid residues surrounding the selectivity filter. The selectivity filter opens towards the extracellular solution, exposing four carbonyl oxygens in a glycine residue (Gly79 in KcsA). The next residue toward the extracellular side of the protein is the negatively charged Asp80 (KcsA). This residue together with the five filter residues form the pore that connects the water-filled cavity in the center of the protein with the extracellular solution. Selectivity mechanism The mechanism of potassium channel selectivity remains under continued debate. The carbonyl oxygens are strongly electro-negative and cation-attractive. The filter can accommodate potassium ions at 4 sites usually labelled S1 to S4 starting at the extracellular side. In addition, one ion can bind in the cavity at a site called SC or one or more ions at the extracellular side at more or less well-defined sites called S0 or Sext. Several different occupancies of these sites are possible. Since the X-ray structures are averages over many molecules, it is, however, not possible to deduce the actual occupancies directly from such a structure. In general, there is some disadvantage due to electrostatic repulsion to have two neighboring sites occupied by ions. Proposals for the mechanism of selectivity have been made based on molecular dynamics simulations, toy models of ion binding, thermodynamic calculations, topological considerations, and structural differences between selective and non-selective channels. The mechanism for ion translocation in KcsA has been studied extensively by theoretical calculations and simulation. The prediction of an ion conduction mechanism in which the two doubly occupied states (S1, S3) and (S2, S4) play an essential role has been affirmed by both techniques. Molecular dynamics (MD) simulations suggest the two extracellular states, Sext and S0, reflecting ions entering and leaving the filter, also are important actors in ion conduction. Hydrophobic region This region neutralizes the environment around the potassium ion so that it is not attracted to any charges. In turn, it speeds up the reaction. 
Central cavity A central pore, 10 Å wide, is located near the center of the transmembrane channel, where the energy barrier is highest for the traversing ion due to the hydrophobicity of the channel wall. The water-filled cavity and the polar C-terminus of the pore helices ease the energetic barrier for the ion. Electrostatic repulsion from preceding potassium ions is thought to aid the throughput of the ions. The presence of the cavity can be understood intuitively as one of the channel's mechanisms for overcoming the dielectric barrier, or repulsion by the low-dielectric membrane, by keeping the K+ ion in a watery, high-dielectric environment. Regulation The flux of ions through the potassium channel pore is regulated by two related processes, termed gating and inactivation. Gating is the opening or closing of the channel in response to stimuli, while inactivation is the rapid cessation of current from an open potassium channel and the suppression of the channel's ability to resume conducting. While both processes serve to regulate channel conductance, each process may be mediated by a number of mechanisms. Generally, gating is thought to be mediated by additional structural domains which sense stimuli and in turn open the channel pore. These domains include the RCK domains of BK channels, and the voltage sensor domains of voltage-gated K+ channels. These domains are thought to respond to the stimuli by physically opening the intracellular gate of the pore domain, thereby allowing potassium ions to traverse the membrane. Some channels have multiple regulatory domains or accessory proteins, which can act to modulate the response to stimulus. While the mechanisms continue to be debated, there are known structures of a number of these regulatory domains, including RCK domains of prokaryotic and eukaryotic channels, the pH gating domain of KcsA, cyclic nucleotide gating domains, and voltage-gated potassium channels. N-type inactivation is typically the faster inactivation mechanism, and is termed the "ball and chain" model. N-type inactivation involves the N-terminus of the channel, or an associated protein, which interacts with the pore domain and occludes the ion conduction pathway like a "ball". Alternatively, C-type inactivation is thought to occur within the selectivity filter itself, where structural changes within the filter render it non-conductive. There are a number of structural models of C-type inactivated K+ channel filters, although the precise mechanism remains unclear. Pharmacology Blockers Potassium channel blockers inhibit the flow of potassium ions through the channel. They either compete with potassium binding within the selectivity filter or bind outside the filter to occlude ion conduction. An example of one of these competitors is quaternary ammonium ions, which bind at the extracellular face or central cavity of the channel. When blocking from the central cavity, quaternary ammonium ions are also known as open-channel blockers, as binding classically requires the prior opening of the cytoplasmic gate. Barium ions can also block potassium channel currents by binding with high affinity within the selectivity filter. This tight binding is thought to underlie barium toxicity by inhibiting potassium channel activity in excitable cells. Medically, potassium channel blockers, such as 4-aminopyridine and 3,4-diaminopyridine, have been investigated for the treatment of conditions such as multiple sclerosis. 
Off target drug effects can lead to drug induced Long QT syndrome, a potentially life-threatening condition. This is most frequently due to action on the hERG potassium channel in the heart. Accordingly, all new drugs are preclinically tested for cardiac safety. Activators Muscarinic potassium channel Some types of potassium channels are activated by muscarinic receptors and these are called muscarinic potassium channels (IKACh). These channels are a heterotetramer composed of two GIRK1 and two GIRK4 subunits. Examples are potassium channels in the heart, which, when activated by parasympathetic signals through M2 muscarinic receptors, cause an outward current of potassium, which slows down the heart rate. In fine art Roderick MacKinnon commissioned Birth of an Idea, a tall sculpture based on the KcsA potassium channel. The artwork contains a wire object representing the channel's interior with a blown glass object representing the main cavity of the channel structure. See also References External links in 3D Ion channels Electrophysiology Integral membrane proteins
Potassium channel
[ "Chemistry" ]
2,314
[ "Neurochemistry", "Ion channels" ]
1,011,550
https://en.wikipedia.org/wiki/Juan%20Carlos%20Wasmosy
Juan Carlos Wasmosy Monti (born December 15, 1938) is a Paraguayan former politician and engineer who was the 44th president of Paraguay from 1993 to 1998. He was a member of the Colorado Party, and the country's first freely elected president, as well as the first civilian president in 39 years. Biography Born in Asunción, Wasmosy trained as a civil engineer and became head of the Paraguayan consortium working on the Itaipu Dam. During this project, he amassed a large amount of wealth. He served as minister of integration under President Andrés Rodríguez. His ancestors, Dániel and József Vámosy, immigrated to South America from Debrecen, Hungary, in 1828. At that time, the surname of the family was Vámosy; it was Hispanicized to Wasmosy. His relative, Alceu Wamosy (1895–1923), a famous Brazilian writer, was also of this ancestry. Wasmosy visited his ancestors' home town in 1995 during an official visit to Hungary. Rodríguez endorsed Wasmosy as his successor in the 1993 elections. He won with almost 42 percent of the vote, in what is generally acknowledged to be the first (largely) free and fair election in the country's history (the country had gained independence in 1811), with Domingo Laino finishing a close second. Although there were confirmed cases of fraud, a team of international observers led by Jimmy Carter concluded that Wasmosy's margin of victory was large enough to offset any wrongdoing. Carter also noted that opposition candidates took 60 percent of the vote between them. This was a remarkable figure given Paraguay's long history of autocratic rule. For most of the country's history, particularly during Alfredo Stroessner's 35-year dictatorship, the opposition was barely tolerated when it was even permitted at all. At the time of Stroessner's ouster in 1989, the country had only known two years of true democracy in its entire history. However, Wasmosy became very unpopular when he appointed many of Stroessner's supporters to government posts. He also failed to continue the limited reforms of Rodríguez. A principal obstacle was the factional nature of the party, which contributed to the stalling of many of his priorities. He was a solid conservative who favored market-oriented policies. He oversaw the privatization of the national airline, merchant fleet, and steel company. Lino Oviedo, head of the Paraguayan army, allegedly attempted a coup in April 1996. Wasmosy countered by offering Oviedo a ministerial position, but soon imprisoned him. When he made the offer to Oviedo, many Paraguayans accused him of undermining the civilian government and organized massive demonstrations in the capital. Wasmosy was barred from running again in 1998; in response to Stroessner's authoritarian excesses, the 1992 constitution barred any sort of reelection for the president. Raúl Cubas stood for the Colorado Party presidential nomination and won. In 2002, Wasmosy was convicted of defrauding the Paraguayan state and was sentenced to four years in prison. On appeal, his sentence was reduced to bail and house arrest. As a former president of Paraguay, he was made a senator for life. References External links WASMOSY, Juan Carlos International Who's Who. accessed September 3, 2006. 
Biography by CIDOB 1938 births Living people Politicians from Asunción Paraguayan people of Hungarian descent Paraguayan people of Italian descent Colorado Party (Paraguay) politicians 20th-century presidents of Paraguay Government ministers of Paraguay Paraguayan engineers Civil engineers Paraguayan politicians convicted of crimes Prisoners and detainees of Paraguay Paraguayan prisoners and detainees Grand Cross of the Legion of Honour Recipients of the Medal of the Oriental Republic of Uruguay
Juan Carlos Wasmosy
[ "Engineering" ]
778
[ "Civil engineering", "Civil engineers" ]
1,011,654
https://en.wikipedia.org/wiki/Managerial%20grid%20model
The managerial grid model or managerial grid theory (1964) is a model, developed by Robert R. Blake and Jane Mouton, of leadership styles. This model originally identified five different leadership styles based on the concern for people and the concern for production. The optimal leadership style in this model is based on Theory Y. The grid theory has continued to evolve and develop. The theory was updated with two additional leadership styles and with a new element, resilience. In 1999, the grid managerial seminar began using a new text, The Power to Change. The model is represented as a grid with concern for production as the x-axis and concern for people as the y-axis; each axis ranges from 1 (Low) to 9 (High). The resulting leadership styles are as follows: The indifferent (previously called impoverished) style (1,1): evade and elude. In this style, managers have low concern for both people and production. Managers use this style to preserve their jobs and job seniority, protecting themselves by avoiding getting into trouble. The main concern for the manager is not to be held responsible for any mistakes, which results in less innovative decisions. The accommodating (previously, country club) style (1,9): yield and comply. This style has a high concern for people and a low concern for production. Managers using this style pay much attention to the security and comfort of the employees, in hopes that this will increase performance. The resulting atmosphere is usually friendly, but not necessarily very productive. The dictatorial (previously, produce or perish) style (9,1): control and dominate. Managers using this style pressure their employees through rules and punishments to achieve the company goals. This dictatorial style is based on Theory X of Douglas McGregor, and is commonly applied in companies on the edge of real or perceived failure. This style is often used in cases of crisis management. The status quo (previously, middle-of-the-road) style (5,5): balance and compromise. Managers using this style try to balance between company goals and workers' needs. By giving some concern to both people and production, managers who use this style hope to achieve suitable performance, but doing so concedes a bit of each concern, so that neither production needs nor people's needs are fully met. The sound (previously, team) style (9,9): contribute and commit. In this style, high concern is paid to both people and production. As suggested by the propositions of Theory Y, managers choosing to use this style encourage teamwork and commitment among employees. This method relies heavily on making employees feel themselves to be constructive parts of the company. The opportunistic style: exploit and manipulate. Individuals using this style, which was added to the grid theory before 1999, do not have a fixed location on the grid. They adopt whichever behaviour offers the greatest personal benefit. The paternalistic style: prescribe and guide. This style was added to the grid theory before 1999. In The Power to Change, it was redefined to alternate between the (1,9) and (9,1) locations on the grid. Managers using this style praise and support, but discourage challenges to their thinking. Behavioral elements Grid theory breaks behavior down into seven key elements. See also Behavior modification Leadership Three levels of leadership model References Organizational behavior Leadership
Managerial grid model
[ "Biology" ]
679
[ "Behavior", "Organizational behavior", "Human behavior" ]
1,011,848
https://en.wikipedia.org/wiki/Fixed-point%20theorem
In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms. In mathematical analysis The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point. By contrast, the Brouwer fixed-point theorem (1911) is a non-constructive result: it says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point (see also Sperner's lemma). For example, the cosine function is continuous in [−1, 1] and maps it into [−1, 1], and thus must have a fixed point. This is clear when examining a sketched graph of the cosine function; the fixed point occurs where the cosine curve y = cos(x) intersects the line y = x. Numerically, the fixed point (known as the Dottie number) is approximately x = 0.73908513321516 (thus x = cos(x) for this value of x). The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology is notable because it gives, in some sense, a way to count fixed points. There are a number of generalisations to Banach fixed-point theorem and further; these are applied in PDE theory. See fixed-point theorems in infinite-dimensional spaces. The collage theorem in fractal compression proves that, for many images, there exists a relatively small description of a function that, when iteratively applied to any starting image, rapidly converges on the desired image. In algebra and discrete mathematics The Knaster–Tarski theorem states that any order-preserving function on a complete lattice has a fixed point, and indeed a smallest fixed point. See also Bourbaki–Witt theorem. The theorem has applications in abstract interpretation, a form of static program analysis. A common theme in lambda calculus is to find fixed points of given lambda expressions. Every lambda expression has a fixed point, and a fixed-point combinator is a "function" which takes as input a lambda expression and produces as output a fixed point of that expression. An important fixed-point combinator is the Y combinator used to give recursive definitions. In denotational semantics of programming languages, a special case of the Knaster–Tarski theorem is used to establish the semantics of recursive definitions. While the fixed-point theorem is applied to the "same" function (from a logical point of view), the development of the theory is quite different. The same definition of recursive function can be given, in computability theory, by applying Kleene's recursion theorem. These results are not equivalent theorems; the Knaster–Tarski theorem is a much stronger result than what is used in denotational semantics. However, in light of the Church–Turing thesis their intuitive meaning is the same: a recursive function can be described as the least fixed point of a certain functional, mapping functions to functions. The above technique of iterating a function to find a fixed point can also be used in set theory; the fixed-point lemma for normal functions states that any continuous strictly increasing function from ordinals to ordinals has one (and indeed many) fixed points. 
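As an illustrative aside (not part of the original article), the iterative technique behind the Banach fixed-point theorem can be demonstrated numerically on the cosine example discussed above. The Python sketch below simply iterates x ↦ cos(x); because cosine is a contraction on [−1, 1], the iterates converge to the Dottie number. The starting value and tolerance are arbitrary choices made for this example.

```python
import math

# Fixed-point iteration x_{n+1} = cos(x_n).
# Cosine maps [-1, 1] into itself and is a contraction there (its derivative
# is bounded by sin(1) < 1), so by the Banach fixed-point theorem the
# iterates converge to the unique fixed point, the Dottie number.
x = 1.0
for _ in range(200):
    x_new = math.cos(x)
    if abs(x_new - x) < 1e-15:   # stop once successive iterates agree
        break
    x = x_new

print(x)   # approximately 0.7390851332151607
```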
Every closure operator on a poset has many fixed points; these are the "closed elements" with respect to the closure operator, and they are the main reason the closure operator was defined in the first place. Every involution on a finite set with an odd number of elements has a fixed point; more generally, for every involution on a finite set of elements, the number of elements and the number of fixed points have the same parity. Don Zagier used these observations to give a one-sentence proof of Fermat's theorem on sums of two squares, by describing two involutions on the same set of triples of integers, one of which can easily be shown to have only one fixed point and the other of which has a fixed point for each representation of a given prime (congruent to 1 mod 4) as a sum of two squares. Since the first involution has an odd number of fixed points, so does the second, and therefore there always exists a representation of the desired form. List of fixed-point theorems Atiyah–Bott fixed-point theorem Banach fixed-point theorem Bekić's theorem Borel fixed-point theorem Bourbaki–Witt theorem Browder fixed-point theorem Brouwer fixed-point theorem Rothe's fixed-point theorem Caristi fixed-point theorem Diagonal lemma, also known as the fixed-point lemma, for producing self-referential sentences of first-order logic Lawvere's fixed-point theorem Discrete fixed-point theorems Earle-Hamilton fixed-point theorem Fixed-point combinator, which shows that every term in untyped lambda calculus has a fixed point Fixed-point lemma for normal functions Fixed-point property Fixed-point theorems in infinite-dimensional spaces Injective metric space Kakutani fixed-point theorem Kleene fixed-point theorem Knaster–Tarski theorem Lefschetz fixed-point theorem Nielsen fixed-point theorem Poincaré–Birkhoff theorem proves the existence of two fixed points Ryll-Nardzewski fixed-point theorem Schauder fixed-point theorem Topological degree theory Tychonoff fixed-point theorem See also Trace formula Footnotes References External links Fixed Point Method Closure operators Iterative methods
Fixed-point theorem
[ "Mathematics" ]
1,231
[ "Theorems in mathematical analysis", "Closure operators", "Fixed-point theorems", "Theorems in topology", "Order theory" ]
1,011,891
https://en.wikipedia.org/wiki/Hexapawn
Hexapawn is a deterministic two-player game invented by Martin Gardner. It is played on a rectangular board of variable size, for example on a 3×3 board or on a regular chessboard. On a board of size n×m, each player begins with m pawns, one for each square in the row closest to them. The goal of each player is to either advance a pawn to the opposite end of the board or leave the other player with no legal moves, either by stalemate or by having all of their pieces captured. Hexapawn on the 3×3 board is a solved game; with perfect play, White will always lose in 3 moves (1.b2 axb2 2.cxb2 c2 3.a2 c1#). Indeed, Gardner specifically constructed it as a game with a small game tree in order to demonstrate how it could be played by a heuristic AI implemented by a mechanical computer based on Donald Michie's Matchbox Educable Noughts and Crosses Engine (MENACE). A variant of this game is octopawn, which is played on a 4×4 board with 4 pawns on each side. It is a forced win for White. Only 24 matchboxes are required for a hexapawn version of the Matchbox Educable Noughts and Crosses Engine. Rules As in chess, a pawn may be moved in two different ways: it may be moved one square vertically forward, or it may capture a pawn one square diagonally ahead of it. A pawn may not be moved forward if there is a pawn in the next square. Unlike chess, the first move of a pawn may not advance it by two spaces. A player loses if they have no legal moves or one of the other player's pawns reaches the end of the board. Dawson's chess Whenever a player advances a pawn to the penultimate rank and attacks an opposing pawn, there is a threat to proceed to the final rank by capture. The opponent's only sensible responses, therefore, are to either capture the advanced pawn or advance the threatened one, the latter only being sensible in the case that there is one threatened pawn rather than two. If one restricts 3×n hexapawn with the additional rule that capturing is always compulsory, the result is the game Dawson's chess. The game was invented by Thomas Rayner Dawson in 1935. Dawson's chess reduces to the impartial game denoted .137 in Conway's notation. This means that it is equivalent to a Nim-like game in which: on a turn, the player may remove one to three objects from a heap; removing just one object is a legal move only if the removed object is the only object in the heap; and when removing three objects from a heap of five or more, the player may also split the remainder into two heaps. The initial position is a single heap of size . The nim-sequence for this game is 0.1120311033224052233011302110452740 1120311033224455233011302110453748 1120311033224455933011302110453748 1120311033224455933011302110453748 1120311033224455933011302110453748 ..., where bold entries indicate the values that differ from the eventual periodic behavior of the sequence. References Sources Mathematical Games, Scientific American, March 1962, reprinted in The Unexpected Hanging and Other Mathematical Diversions, by Martin Gardner, pp. 93ff External links Hexapawn - an article by Robert Price. Hexapawn java applet - source code included. Hexapawn game for IOS Play Hexapawn - Play Online Mathematical games Chess variants 1962 in chess Board games introduced in 1962 Solved games
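As an editorial illustration (not from the article), the solved-game claim above can be checked by exhaustive search. The Python sketch below encodes the 3×3 board as three strings and explores the full game tree under the rules given; the board encoding and function names are choices made for this example, not a published algorithm. It reports False for the first player, i.e. the second player wins with perfect play.

```python
from functools import lru_cache

# Brute-force solver for 3x3 hexapawn. Rows are strings; row 0 is Black's
# home rank, row 2 is White's. 'W' pawns move toward row 0, 'B' toward row 2.
START = ('BBB', '...', 'WWW')

def apply_move(board, r, c, nr, nc, player):
    grid = [list(row) for row in board]
    grid[r][c] = '.'
    grid[nr][nc] = player
    return tuple(''.join(row) for row in grid)

def moves(board, player):
    """Yield every position reachable by `player` in one move."""
    direction = -1 if player == 'W' else 1
    opponent = 'B' if player == 'W' else 'W'
    for r in range(3):
        for c in range(3):
            if board[r][c] != player:
                continue
            nr = r + direction
            if not 0 <= nr < 3:
                continue
            if board[nr][c] == '.':                      # advance one square
                yield apply_move(board, r, c, nr, c, player)
            for nc in (c - 1, c + 1):                    # diagonal capture
                if 0 <= nc < 3 and board[nr][nc] == opponent:
                    yield apply_move(board, r, c, nr, nc, player)

@lru_cache(maxsize=None)
def can_win(board, player):
    """True if `player`, to move, can force a win from `board`."""
    opponent = 'B' if player == 'W' else 'W'
    for nxt in moves(board, player):
        far_rank = nxt[0] if player == 'W' else nxt[2]
        if player in far_rank:                           # pawn reached far side
            return True
        if not can_win(nxt, opponent):                   # opponent is lost
            return True
    return False                                         # no moves, or all moves lose

print(can_win(START, 'W'))   # False: the second player wins with perfect play
```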
Hexapawn
[ "Mathematics" ]
808
[ "Recreational mathematics", "Mathematical games" ]
1,011,901
https://en.wikipedia.org/wiki/Borazon
Borazon is a brand name of a cubic form of boron nitride (cBN). Its color ranges from black to brown and gold, depending on the chemical bond. It is one of the hardest known materials, along with various forms of diamond and other kinds of boron nitride. Borazon is a crystal created by heating equal quantities of boron and nitrogen at temperatures greater than 1800 °C (3300 °F) and pressures of 7 GPa (1 million lbf/in2). Borazon was first produced in 1957 by Robert H. Wentorf, Jr., a physical chemist working for General Electric. In 1969, General Electric adopted the name Borazon as its trademark for the material. The trademark is now owned by Diamond Innovations, doing business as Hyperion Materials & Technologies, Inc., and Borazon is manufactured only by Hyperion Materials & Technologies. Uses and production Borazon has a number of uses, such as: cutting tools, dies, punches, shears, knives, saw blades, bearing rings, needles, rollers, spacers, balls, pump and compressor parts, engine and drive train components (e.g. camshafts, crankshafts, gears, valve stems, drive shafts, CV joints, piston pins, fuel injectors, turbochargers, and aerospace and land-based gas turbine parts such as vanes, blades, nozzles, and seals), surgical knives, blades, scissors, honing, superfinishing, cylinder liners, connecting rods, grinding of steel and paper mill rolls, and gears. Prior to the production of Borazon, diamond was the preferred abrasive used for grinding very hard superalloys, but it could not be used effectively on steels because carbon tends to dissolve in iron at high temperatures. Aluminium oxide was the conventional abrasive used on hardened steel tools. Borazon replaced aluminium oxide for grinding hardened steels owing to its superior abrasive properties, comparable to those of diamond. Borazon is used in industrial applications to shape tools, as it can withstand temperatures greater than 2000 °C (3632 °F), much higher than the 871 °C (1600 °F) that pure diamond can withstand. Other uses include jewellery designing, glass cutting and laceration of diamonds. CBN-coated grinding wheels, referred to as Borazon wheels, are routinely used in the machining of hard ferrous metals, cast irons, and nickel-base and cobalt-base superalloys. They can grind more material, to a higher degree of accuracy, than any other abrasive. The limiting factor in the life of such tools is typically determined not by wear on the cutting surface but by its breakdown and separation from the metal core, resulting from failure of the bonding layer. Cultural references A Borazon drill is used in the TV miniseries and film Quatermass and the Pit. In this story an alien spacecraft is unearthed in London and the drill is used in an attempt to open a sealed bulkhead. The shell of the object is so hard that even the Borazon drill makes no impression, and when the attempt is made, vibrations cause severe distress in people around the object. In Ivan Yefremov's novel Andromeda: A Space-Age Tale (written 1954–1956, published Jan 1957), boron nitride named borazon is routinely used in sublight engine parts and spaceship surface coating. In Randall Garrett's short story "Thin Edge" (Analog, Dec 1963), a fictional borazon-tungsten cable of extraordinary tensile strength is a central plot element. The cable is developed by asteroid miners as a tow-rope for hauling asteroids. The protagonist cuts through cell bars and booby-traps his room using a single strand of the wire, similar to a monomolecular wire. 
See also Abrasive machining References External links Discovering a Material That's Harder Than Diamond by Robert H. Wentorf, Jr. Boron in Materials Technology Borazon(R) CBN Synthetic materials Superhard materials Boron compounds Nitrides
Borazon
[ "Physics", "Chemistry" ]
847
[ "Synthetic materials", "Materials", "Superhard materials", "Chemical synthesis", "Matter" ]
1,011,976
https://en.wikipedia.org/wiki/List%20of%20galaxy%20groups%20and%20clusters
This article lists some galaxy groups and galaxy clusters. Defining the limits of galaxy clusters is imprecise as many clusters are still forming. In particular, clusters close to the Milky Way tend to be classified as galaxy clusters even when they are much smaller than more distant clusters. Clusters exhibiting strong evidence of dark matter Some clusters exhibiting strong evidence of dark matter. Named groups and clusters This is a list of galaxy groups and clusters that are well known by something other than an entry in a catalog or list, or a set of coordinates, or a systematic designation. Clusters Groups The major nearby groups and clusters are generally named after the constellation they lie in. Many groups are named after the leading galaxy in the group. This represents an ad hoc systematic naming system. Groups and clusters visible to the unaided eye The Local Group contains the largest number of visible galaxies with the naked eye. However, its galaxies are not visually grouped together in the sky, except for the two Magellanic Clouds. The IC342/Maffei Group, the nearest galaxy group, would be visible by the naked eye if it were not obscured by the stars and dust clouds in the Milky Way's spiral arms. No galaxy cluster is visible to the unaided eye. Firsts Extremes Closest groups Closest clusters Farthest clusters In 2003 RDCS 1252-29 (RDCS1252.9–2927) at z=1.237, was found to be the most distant rich cluster, which lasted until 2005. In 2000, a cluster was announced in the field of quasar QSO 1213-0017 at z=1.31 (the quasar lies at z=2.69) In 1999, cluster RDCS J0849+4452 (RX J0849+4452, RXJ0848.9+4452) was found at z=1.261 In 1995 and 2001, the cluster around 3C 294 was announced, at z=1.786 In 1992, observations of the field of cluster Cl 0939+4713 found what appears to be a background cluster near a quasar, also in the background. The quasar was measured at z=2.055 and it was assumed that the cluster would be as well. In 1975, 3C 123 and its galaxy cluster was incorrectly determined to lie at z=0.637 (actually z=0.218) In 1958, cluster Cl 0024+1654 and Cl 1447+2619 were estimated to have redshifts of z=0.29 and z=0.35 respectively. However, they were not spectroscopically determined. Farthest protoclusters In 2002, a very large, very rich protocluster, or the most distant protosupercluster was found in the field of galaxy cluster MS 1512+36, around the gravitationally lensed galaxy MS 1512-cB58, at z=2.724 False clusters Sometimes clusters are put forward that are not genuine clusters or superclusters. Through the researching of member positions, distances, peculiar velocities, and binding mass, former clusters are sometimes found to be the product of a chance line-of-sight superposition. See also Lists of astronomical objects Galaxy cluster Galaxy group Illustris project List of galaxies Lists of groups and clusters Catalogue of Galaxies and of Clusters of Galaxies Hickson Compact Group List of Abell clusters List of galaxy superclusters Lyons Groups of Galaxies Virgo Supercluster References External links ; Abell's 1957 cluster list Galaxy groups and clusters
List of galaxy groups and clusters
[ "Astronomy" ]
737
[ "Galaxy clusters", "Astronomical objects" ]
1,011,995
https://en.wikipedia.org/wiki/Davis%E2%80%93Besse%20Nuclear%20Power%20Station
Davis–Besse Nuclear Power Station is an 894 megawatt (MW) nuclear power plant, located northeast of Oak Harbor, Ohio, United States. It has a single pressurized water reactor. Davis–Besse is operated by Vistra Corp. Throughout its operation, Davis–Besse has been the site of several safety incidents that affected the plant's operation. According to the Nuclear Regulatory Commission (NRC), Davis–Besse has been the source of two of the top five most dangerous nuclear incidents in the United States since 1979. The most severe occurred in March 2002, when maintenance workers discovered corrosion had eaten a football-sized hole into the reactor vessel head. The NRC kept Davis–Besse shut down until March 2004 so that FirstEnergy could perform all the maintenance necessary for safe operation. The NRC imposed a fine of more than $5 million, its largest ever against a nuclear power plant, on FirstEnergy for the actions that led to the corrosion. The company paid an additional $28 million in fines under a settlement with the United States Department of Justice (DOJ). Davis–Besse was expected to close in 2020 as it was no longer profitable to run when competing against natural gas plants. Plans were later updated to indicate a possible shutdown by May 31, 2020. However, Ohio House Bill 6, signed into law in July 2019, added a fee to residents' utility bills that funded subsidies of $150 million per year to Davis–Besse and the Perry Nuclear Generating Station to keep both plants operational. The bill was alleged to be part of the Ohio nuclear bribery scandal revealed by the DOJ in July 2020. Location and history The power station is located on the southwest shore of Lake Erie about north of Oak Harbor, Ohio, and is on the north side of Highway 2 just east of Highway 19 on a site in Carroll Township. The plant only utilizes , with devoted to the Ottawa National Wildlife Refuge. The entrance to the Magee Marsh Wildlife Area is approximately 5 miles east of the power station. The official name according to the U.S. Energy Information Administration is the Davis–Besse Nuclear Generating Station. It is the 57th commercial power reactor to begin construction in the United States (construction began on September 1, 1970) and the 50th to come online, on July 31, 1978. The plant was originally jointly owned by Cleveland Electric Illuminating (CEI) and Toledo Edison (TE) and was named for former TE Chairman John K. Davis and former CEI Chairman Ralph M. Besse. Unit One is an 879 MWe pressurized water reactor supplied by Babcock & Wilcox. The reactor was shut down from 2002 until early 2004 for safety repairs and upgrades. In 2012, the reactor supplied 7,101.7 GWh of electricity. In 1973, two more reactors were also ordered from Babcock & Wilcox. However, construction on Units Two and Three never commenced, and these two units were officially canceled in 1981. Electricity Production Incident history 1977 first stuck-open pilot-operated relief valve On September 24, 1977, the relief valve for the reactor pressurizer failed to close when the reactor, running at only 9% power, shut down because of a disruption in the feedwater system. This incident was later recognized as a precursor to the Three Mile Island accident, in which a pilot-operated relief valve also became stuck open, leaking thousands of gallons of coolant water into the basement of the reactor building. 
1985 loss of feedwater event On June 9, 1985, the main feedwater pumps, used to supply water to the reactor steam generators, shut down. A control room operator then attempted to start the auxiliary (emergency) feedwater pumps. These pumps both tripped on overspeed conditions because of operator error. This incident was originally classified an "NRC Unusual Event" (the lowest classification the NRC uses), but it was later determined that it should have been classified a "site area emergency". 1998 tornado On June 24, 1998, the station was struck by an F2 tornado. The plant's switchyard was damaged and access to external power was disabled. The plant's reactor automatically shut down at 8:42 pm and an alert (the next-to-lowest of four levels of severity) was declared at 9:18 pm. The plant's emergency diesel generators powered critical facility safety systems until external power could be restored. 2002 reactor head hole In March 2002, plant staff discovered that the borated water that serves as the reactor coolant had leaked from cracked control rod drive mechanisms directly above the reactor and eaten through more than six inches (150 mm) of the carbon steel reactor pressure vessel head over an area roughly the size of a football. This significant wastage on the exterior of the reactor vessel head left only the thin stainless steel cladding holding back the high-pressure (~2155 psi, 14.6 MPa) reactor coolant. A breach most likely would have resulted in a massive loss-of-coolant accident, in which reactor coolant would have jetted into the reactor's containment building and triggered emergency safety procedures intended to protect against core damage or meltdown. Because of the location of the reactor head damage, such a jet of reactor coolant might have damaged adjacent control rod drive mechanisms, hampering or preventing reactor shutdown. As part of the system reviews following the accident, significant safety issues were identified with other critical plant components, including the following: the containment sump that allows the reactor coolant to be reclaimed and reinjected into the reactor; the high-pressure injection pumps that would reinject such reclaimed reactor coolant; the emergency diesel generator system; the containment air coolers that would remove heat from the containment building; reactor coolant isolation valves; and the plant's electrical distribution system. The resulting corrective operational and system reviews and engineering changes took two years. Repairs and upgrades cost $600 million, and the Davis–Besse reactor was restarted in March 2004. To replace the reactor vessel head, FirstEnergy purchased one from the mothballed Midland Nuclear Power Plant in Midland, Michigan. The NRC determined that this incident was the fifth-most dangerous nuclear incident in the United States since 1979, and imposed its largest fine ever, more than $5 million, against FirstEnergy for the actions that led to the corrosion. Criminal prosecutions In January 2006, FirstEnergy, the owner of Davis–Besse, acknowledged a series of safety violations by former workers, and entered into a deferred prosecution agreement with the United States Department of Justice (DOJ). The deferred prosecution agreement related to the March 2002 incident. The deferment granted by the NRC was based on letters from Davis–Besse engineers stating that previous inspections were adequate. 
However, those inspections were not as thorough as the company suggested, as proved by the material deficiency discovered later. In any case, because FirstEnergy cooperated with investigators on the matter, they were able to avoid more serious penalties. The company paid $28 million under a settlement with the Justice Department. $23.7 million of that were fines, with an additional $4.3 million to be contributed to various groups, including the National Park Service, the U.S. Fish and Wildlife Service, Habitat for Humanity, and the University of Toledo as well as to pay some costs related to the federal investigation. Two former employees and one former contractor were indicted for statements made in multiple documents and one videotape, over several years, for hiding evidence that the reactor pressure vessel was being corroded by boric acid. The maximum penalty for the three was 25 years in prison. The indictment mentions that other employees also provided false information to inspectors, but does not name them. In 2007, one of these men was convicted and another acquitted of hiding information from and lying to the NRC. Another jury trial in 2008 convicted the remaining engineer of similar crimes. 2003 slammer worm computer virus In January 2003, the plant's private network became infected with the slammer worm, which resulted in a five-hour loss of safety monitoring at the plant. 2008 discovery of tritium leak The NRC and Ohio Environmental Protection Agency (Ohio EPA) were notified of a tritium leak accidentally discovered during an unrelated fire inspection on October 22, 2008. Preliminary indications suggest radioactive water did not infiltrate groundwater outside plant boundaries. 2010 replacement reactor head problems During a scheduled refueling outage in 2010, ultrasonic examinations performed on the control rod drive mechanism nozzles penetrating the reactor vessel closure head identified that two of the nozzles inspected did not meet acceptance criteria. FirstEnergy investigators subsequently found new cracks in 24 of 69 nozzles, including one serious enough to leak boric acid. Crack indications required repair prior to returning the vessel head to service. Control rod drive nozzles were repaired using techniques proven at other nuclear facilities. The plant resumed operation in 2010. The existing reactor vessel head was scheduled for replacement in 2011. 2011 shield building cracks An October 2011 shutdown of the plant for maintenance revealed a 30 foot long hairline crack in the concrete shield building around the containment vessel. 2012 reactor coolant pump seal pinhole leak On June 6, 2012, an approximately 0.1 gpm pinhole spray leakage was identified from a weld in a seal of the reactor coolant pump during a routine reactor coolant system walkdown inspection. The plant entered limited operations, and root cause analysis was undertaken. 2015 steam leak shutdown On May 9, 2015, a steam leak in the turbine building caused FirstEnergy operators to declare an 'Unusual Event' and shut the reactor down until repairs could be made. The plant was brought back online and synchronized with the local power grid at May 12 after repairs were completed. Future The facility's original nuclear operating license was set to expire on April 22, 2017. In August 2006, FirstEnergy Nuclear Operating Company (FENOC) submitted a letter of intent to renew. The submission date for the application was August 10, 2010. 
On December 8, 2015, the NRC granted a 20-year license extension to expire on April 22, 2037. On March 31, 2018, FirstEnergy Nuclear Operating Company filed for Bankruptcy protection. Around that time, the company indicated it would close the nuclear plant. In 2019, Ohio lawmakers debated a $9/MWh subsidy to keep Davis–Besse open. House Bill 6 was signed into law on July 23, 2019, and FirstEnergy announced it would refuel Davis–Besse and rescind its deactivation notice on July 24, 2019. It was later learned that the bill itself was a part of a public corruption scheme revealed by the Justice Department in July 2020. Seismic risk The Nuclear Regulatory Commission's estimate of the risk each year of an earthquake intense enough to cause core damage to the reactor at Davis–Besse was 1 in 149,254, according to an NRC study published in August 2010. Surrounding population The Nuclear Regulatory Commission defines two emergency planning zones around nuclear power plants: a plume exposure pathway zone with a radius of , concerned primarily with exposure to, and inhalation of, airborne radioactive contamination, and an ingestion pathway zone of about , concerned primarily with ingestion of food and liquid contaminated by radioactivity. The 2010 U.S. population within of Davis–Besse was 18,635, an increase of 14.2 percent in a decade, according to an analysis of U.S. Census data for msnbc.com. The 2010 U.S. population within was 1,791,856, an increase of 1.4 percent since 2000. Cities within include Sandusky, Ohio, ; Toledo, Ohio ; and Detroit, Michigan, (distance to the city centers). U.S. Census data for Canadian population within the area is not available, though Leamington, Ontario (population: 30,000) is away, and Windsor, Ontario (population: 241,000) is from Davis–Besse. The cooling tower for Davis–Besse stands at 493 feet above the surrounding area, making it a major landmark around the western end of Lake Erie. The tower is visible from the Michigan and Ontario shores and on clear days the condensing steam plume can be seen from Bowling Green, Ohio, over 40 miles away. See also Nuclear reactor accidents in the United States Pilot-operated relief valve References External links Davis–Besse Pressurized Water Reactor Information from the U.S. Nuclear Regulatory Commission Union of Concerned Scientists report on Davis–Besse U.S. Nuclear Regulatory Commission's "Davis–Besse Lessons Learned Task Force" (with links to the Task Force Report) Energy infrastructure completed in 1978 Towers completed in 1978 Civilian nuclear power accidents Disasters in Ohio Nuclear power plants in Ohio Buildings and structures in Ottawa County, Ohio Towers in Ohio Nuclear power stations using pressurized water reactors FirstEnergy Nuclear accidents and incidents in the United States Non-renewable resource companies established in 1978
Davis–Besse Nuclear Power Station
[ "Technology" ]
2,657
[ "Environmental impact of nuclear power", "Civilian nuclear power accidents" ]
1,012,083
https://en.wikipedia.org/wiki/Synaptophysin
Synaptophysin, also known as the major synaptic vesicle protein p38, is a protein that in humans is encoded by the SYP gene. Genomics The gene is located on the short arm of the X chromosome (Xp11.23-p11.22). It is 12,406 bases in length and lies on the minus strand. The encoded protein has 313 amino acids with a predicted molecular weight of 33.845 kDa. Molecular biology The protein is a 38 kDa synaptic vesicle glycoprotein with four transmembrane domains. It is present in neuroendocrine cells and in virtually all neurons in the brain and spinal cord that participate in synaptic transmission. It acts as a marker for neuroendocrine tumors, and its ubiquity at the synapse has led to the use of synaptophysin immunostaining for quantification of synapses. The exact function of the protein is unknown: it interacts with the essential synaptic vesicle protein synaptobrevin, but when the synaptophysin gene is experimentally inactivated in animals, they still develop and function normally. Recent research has shown, however, that elimination of synaptophysin in mice creates behavioral changes such as increased exploratory behavior, impaired object novelty recognition, and reduced spatial learning. Clinical importance This gene has been implicated in X-linked intellectual disability. Using immunohistochemistry, synaptophysin can be demonstrated in a range of neural and neuroendocrine tissues, including cells of the adrenal medulla and pancreatic islets. As a specific marker for these tissues, it can be used to identify tumours arising from them, such as neuroblastoma, retinoblastoma, phaeochromocytoma, carcinoid, small-cell carcinoma, medulloblastoma and medullary thyroid carcinoma, among others. Diagnostically, it is often used in combination with chromogranin A. Interactions Synaptophysin has been shown to interact with AP1G1 and SIAH2. See also List of human genes Merkel-cell carcinoma - although the origin of this tumor is unclear, it does express synaptophysin References Further reading External links Glycoproteins Tumor markers
Synaptophysin
[ "Chemistry", "Biology" ]
505
[ "Biomarkers", "Tumor markers", "Glycoproteins", "Glycobiology", "Chemical pathology" ]
1,012,146
https://en.wikipedia.org/wiki/Game%20mechanics
In tabletop games and video games, game mechanics define how a game works for players. Game mechanics are the rules or ludemes that govern and guide player actions, as well as the game's response to them. A rule is an instruction on how to play, while a ludeme is an element of play, such as the L-shaped move of the knight in chess. The interplay of various mechanics determines the game's complexity and how the players interact with the game. All games use game mechanics; however, different theories disagree about their degree of importance to a game. The process and study of game design includes efforts to develop game mechanics that engage players. Common examples of game mechanics include turn-taking, movement of tokens, set collection, bidding, capture, and spell slots. Definition of term There is no consensus on the precise definition of game mechanics. Competing definitions claim that game mechanics are: "systems of interactions between the player and the game" "the rules and procedures that guide the player and the game response to the player's moves or actions" "more than what the player may recognize, they are only those things that impact the play experience" Game mechanics vs. theme A game's mechanics are not its theme. Some games have a theme—some element of representation. For example, in Monopoly, the events of the game represent another activity, the buying and selling of properties. Two games that are mechanically similar can be thematically different, and vice versa. The tension between a game's mechanics and theme is known as ludonarrative dissonance. Abstract games do not have themes, because the action is not intended to represent anything. Go is an example of an abstract game. Game mechanics vs. gameplay Some game studies scholars distinguish between game mechanics and gameplay. In Playability and Player Experience Research, the authors define gameplay as "the interactive gaming process of the player with the game." In this definition, gameplay occurs when players interact with the game mechanics. Similarly, in Dissecting Play – Investigating the Cognitive and Emotional Motivations and Affects of Computer Gameplay, the authors define gameplay as "interacting with a game design in the performance of cognitive tasks". Video games researcher Carlo Fabricatore defines gameplay as: What the player can do What other entities can do, in response to the player's actions. In Ernest Adams and Andrew Rollings on game design, the authors define gameplay as the combination and interaction of many elements of a game. However, popular usage sometimes elides the two terms. For example, gamedesigning.org defines gameplay as the core game mechanics that determine a game's overall characteristics. Categorization Scholars organize game mechanics into categories, which they use (along with theme and gameplay) to classify games. For example, in Building Blocks of Tabletop Game Design, Geoffrey Engelstein and Isaac Shalev classify game mechanisms into categories based on game structure, turn order, actions, resolution, victory conditions, uncertainty, economics, auctions, worker placement, movement, area control, set collection, and card mechanisms. Examples of game mechanics The following examples of game mechanics are not a strict or complete taxonomy. This list is alphabetical. Action points Each player receives a budget of action points to use on each turn. These points may be spent on various actions according to the game rules, such as moving pieces, drawing cards, collecting money, etc.
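The bookkeeping behind an action-point budget can be sketched in a few lines of code. The snippet below is a minimal illustration only, not drawn from any particular game; the class name ActionPointPool and the example action costs are hypothetical.

```python
class ActionPointPool:
    """Tracks one player's action-point budget for the current turn (illustrative only)."""

    def __init__(self, points_per_turn):
        self.points_per_turn = points_per_turn
        self.remaining = points_per_turn

    def can_afford(self, cost):
        return cost <= self.remaining

    def spend(self, action, cost):
        # Reject actions the player cannot pay for; otherwise deduct the cost.
        if not self.can_afford(cost):
            raise ValueError(f"Not enough action points for {action!r}")
        self.remaining -= cost

    def start_new_turn(self):
        # Unused points are lost here; many designs instead allow limited carry-over.
        self.remaining = self.points_per_turn


# Example turn: a 4-point budget spent on hypothetical costs of 2 to move and 1 to draw.
pool = ActionPointPool(points_per_turn=4)
pool.spend("move piece", 2)
pool.spend("draw card", 1)
```

How much each action costs, and whether unused points carry over to the next turn, are themselves design decisions that shape how the mechanic feels in play.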
Alignment Alignment is a game mechanic in both tabletop role-playing games and role-playing video games. Alignment represents characters' moral and ethical orientation, such as good or evil. In some games, a player character's alignment permits or prohibits the use of additional game mechanics. For example, in Shin Megami Tensei: Strange Journey Redux, alignment determines which demon assistants a player can or cannot recruit, and in Star Wars Knights of the Old Republic II: The Sith Lords, players aligned with the light and dark sides of The Force gain different bonuses to attacks, healing, and speed. Auction or bidding Some games use an auction or bidding system in which the players make competitive bids to determine which player wins the right to perform particular actions. Such an auction can be based on different forms of payment: The winning bidder must pay for the won privilege with some form of game resource (game money, points, etc.). For example, Ra uses this mechanic. The auction is a form of a promise that the winner will achieve some outcome in the near future. If this outcome is not achieved, the bidder pays a penalty. Such a system is used in many trick-taking games, such as contract bridge. Capture/eliminate In some games, the number of tokens a player has on the playing surface represents their current strength in the game. A central goal is capturing an opponent's tokens, which removes them from the playing surface. Captures can be achieved in a number of ways: Moving one of one's own tokens into a space occupied by an opposing token (e.g. chess, parchisi), also known as a replacement capture or displacement capture. If the space immediately opposite must either be off the board or a marked trap space, it is known as a push capture. Jumping a token over the space immediately occupied by an opposing token (e.g. draughts), known as a jump or leap. When the opposing token can be any distance along an unobstructed line, it is known as a flying capture. Occupying the adjacent squares of an opposing token (e.g. tafl), also known as a custodian capture, custodianship or interception. Occupying one immediately adjacent square to an opposing token, also known as approach. The reverse of approach: capturing an adjacent opposing token by moving away from it in a straight line (e.g. fanorona), also known as withdrawal. Capturing two opposing tokens by occupying the single square separating them, also known as intervention. Declaring an "attack" on an opposing token, and then determining the outcome of the attack, either in a deterministic way by the game rules (e.g. Stratego, Illuminati), or by using a randomizing method (e.g. Illuminati: New World Order). Surrounding a token or region with one's own tokens in some manner (e.g. go), also known as enclosure. Playing cards or other game resources to capture tokens. Other specialized mechanisms that do not fall neatly into any of the above categories. In some games, captured tokens are simply removed and play no further part in the game (e.g. chess). In others, captured tokens are removed but can return to play later in the game under various rules (e.g. backgammon, pachisi). Some games allow the capturing player to take possession of the captured tokens and use them later in the game (e.g. Shogi, Reversi, Illuminati), also known as conversion. Many video games express the capture mechanism in the form of a kill count (sometimes referred to as "frags"), reflecting the number of opposing pawns eliminated during the game. 
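The simplest of the capture rules above reduce to small geometric checks. The sketch below shows how a displacement (replacement) capture and a draughts-style jump capture might be tested on a square grid; the function names and board representation are hypothetical and illustrative, not the rules of any specific game.

```python
# Minimal sketch of two capture checks on a square grid; coordinates are (row, column).
# The board is a dict mapping occupied squares to the owning player.

def is_replacement_capture(board, mover, src, dst):
    """Moving onto a square held by an opponent captures by displacement."""
    return dst in board and board[dst] != mover

def is_jump_capture(board, mover, src, dst):
    """Jumping over an adjacent enemy piece onto an empty square captures it (draughts-style)."""
    mid = ((src[0] + dst[0]) // 2, (src[1] + dst[1]) // 2)
    two_step = abs(src[0] - dst[0]) == 2 and abs(src[1] - dst[1]) == 2
    return two_step and dst not in board and mid in board and board[mid] != mover

board = {(2, 2): "white", (3, 3): "black"}
print(is_replacement_capture(board, "white", (2, 2), (3, 3)))  # True: lands on an enemy piece
print(is_jump_capture(board, "white", (2, 2), (4, 4)))         # True: leaps the piece on (3, 3)
```

Custodian, approach, withdrawal, and enclosure captures follow the same pattern but test different neighbourhoods of the destination square.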
Chance and randomization Dice The most common use of dice is to randomly determine the outcome of an interaction in a game. An example is a player rolling a die or dice to determine how many board spaces to move a game token. Dice often determine the outcomes of in-game conflict between players, with different outcomes of the die/dice roll of different benefit (or adverse effect) to each player involved. This occurs in games that simulate direct conflicts of interest. Different dice formulas are used to generate different probability curves. A single die has equal probability of landing on any particular side, and consequently produces a linear probability distribution curve. The sum of two or more dice, however, results in a bell curve-shaped probability distribution, with the addition of further dice resulting in a steeper bell curve, decreasing the likelihood of an extreme result. A linear curve is generally perceived by players as being more "swingy", whereas a bell curve is perceived as being more "fair". Risk and reward Some games include situations where players can "press their luck" in optional actions where the danger of a risk must be weighed against the chance of reward. For example, in Beowulf: The Legend, players may elect to take a "Risk", with success yielding cards and failure weakening the player's ultimate chance of victory. Crafting Crafting new in-game items is a game mechanic in open world survival video games such as Minecraft and Palworld, role-playing video games such as Divinity: Original Sin and Stardew Valley, tabletop role-playing games such as Dungeons & Dragons, and deck-building card games such as Mystic Vale. Crafting mechanics rely on set collection mechanics, since crafting new items requires obtaining specific sets of items, then transforming them into new ones. Modes A game mode is a distinct configuration that varies gameplay and affects how other game mechanics behave. A game with several modes presents different settings in each, changing how a particular element of the game is played. A common example is the choice between single-player and multiplayer modes in video games, where multiplayer can further be cooperative or competitive. A sandbox mode allows free play without predefined goals. In a Time Attack Mode, the player tries to score, progress or clear levels in a limited amount of time. Changing modes while the game is in progress can increase difficulty and provide additional challenge or reward player success. Power-ups are modes that last for a few moments or that change only one or a few game rules. For example, power pellets in Pac-Man give the player a temporary ability to eat enemies. A game mode may restrict or change the behavior of the available tools, such as allowing play with limited/unlimited ammo, new weapons, obstacles or enemies, or a timer, etc. A mode may establish different rules and game mechanics, such as altered gravity, win at first touch in a fighting game, or play with some cards face-up in a poker game. A mode may even change a game's overarching goals, such as following a story or character's career vs. playing a limited deathmatch or capture the flag set. Movement Many board games involve the movement of tokens. Movement mechanics govern how and when these tokens are allowed to move. Some game boards are divided into small, equally-sized areas that can be occupied by game tokens. (Often such areas are called squares, even if not square in shape.) 
Movement rules specify how and when a token can be moved to another area. For example, a player may be allowed to move a token to an adjacent area, but not one further away. Dice are sometimes used to randomize the allowable movements. Other games, such as miniatures games, are played on surfaces with no marked areas. Resource management Many games involve the management of resources. Examples of game resources include tokens, money, land, natural resources, human resources and game points. Players establish relative values for various types of available resources, in the context of the current state of the game and the desired outcome (i.e. winning the game). Game rules determine how players can increase, spend, or exchange resources. The skillful management of resources lets players influence the game's outcome. Set collection Engine building Engine building is a mechanism that involves building and optimizing a system to create a flow of resources. SimCity is an example of an engine-building video game: money activates building mechanisms, which in turn unlock feedback loops between many internal resources such as people, job vacancies, power, transport capacity, and zone types. In engine-building board games, the player adds and modifies combinations of abilities or resources to assemble a virtuous circle of increasingly powerful and productive outcomes. Tile-laying Many games use tiles - flat, rigid pieces of a regular shape - that can be laid down on a flat surface to form a tessellation. Usually, such tiles have patterns or symbols on their surfaces that combine when tessellated to form game-mechanically significant combinations. The tiles themselves are often drawn at random by the players, either immediately before placing them on the playing surface, or in groups to form a pool or hand of tiles from which the player may select one to play. Tiles can be used in two distinct ways: The playing of a tile itself is directly significant to the outcome of the game, in that where and when it is played contributes points or resources to the player. Tiles are used to build a board upon which other game tokens are placed, and the interaction of those tokens with the tiles provides game points or resources. Examples of tile mechanics include: Scrabble, in which players lay down lettered tiles to form words and score points, and Tikal, in which players lay jungle tiles on the play surface then move tokens through them to score points. Turns A turn is a segment of a game set aside for certain actions to happen before moving on to the next turn, where the sequence of events can largely repeat. Some games, such as Monopoly and chess, use player turns where one player performs their actions before another player can perform any on their turn. Some games use game turns, where all players contribute to the actions of a single turn. Some games combine the two. For example, Civilization uses a series of player turns followed by a trading round in which all players participate. Games with semi-simultaneous turns allow for some actions on another player's turn. Victory conditions Victory conditions control how a player wins the game. In many games, victory is achieved by a player who accumulates a sufficiently high score, or a higher score than any other player. Other examples of victory conditions include the necessity of completing a quest in a role-playing video game, or the player being suitably trained in a skill in a business game. 
Some games also feature a losing condition, such as being checkmated in chess, or being tagged in tag. In such a game, the winner is the only remaining player to have avoided loss. Games are not limited to one victory or loss condition, and can combine several at once. Tabletop role-playing games and sandbox games frequently have no victory condition. Catch-up Some games include a mechanism designed to make progress towards victory more difficult for players in the lead. The idea behind this is to allow trailing players a chance to catch up and potentially still win the game, rather than suffer an inevitable loss once they fall behind. For example, in The Settlers of Catan, a neutral piece (the robber) debilitates the resource generation of players whose territories it is near. Players occasionally get to move the robber, and frequently choose to position it where it will cause maximal disruption to the player currently winning the game. In some racing games, such as Chutes and Ladders, a player must roll or spin the exact number needed to reach the finish line; e.g., if a player is only four spaces from the finish line then they must roll a four on the die or land on the four with the spinner. If more than four is rolled, then the turn is forfeited to the next player. Worker placement Worker placement is a game mechanism where players allocate a limited number of tokens ("workers") to multiple stations that provide various defined actions. The worker placement mechanism originates with board games. Stewart Woods identifies Keydom (1998; later remade and updated as Aladdin's Dragons) as the first game to implement the mechanic. Worker placement was popularized by Caylus (2005) and became a staple of the Eurogame genre in the wake of the game's success. Other popular board games that use this mechanism include Stone Age and Agricola. Although the mechanism is chiefly associated with board games, the worker placement concept has been used in analysis of other game types. For instance, Adams and Dormans describe the assigning of tasks to SCV units in the real-time strategy game StarCraft as an example of the worker placement mechanic. See also Ludology Ludeme Chess clock Kingmaker scenario Pie rule Gamification Dynamic game difficulty balancing References External links Gamification Design Elements at Enterprise Gamification Wiki Board Game Mechanics Database at MechanicsBG List of games sorted by mechanic at BoardGameGeek SCVNGR's Secret Game Mechanics Playdeck at Tech Crunch Game Mechanic Explorer Game design Video game design Video game terminology
Game mechanics
[ "Technology", "Engineering" ]
3,389
[ "Computing terminology", "Design", "Video game terminology", "Game design" ]
1,012,418
https://en.wikipedia.org/wiki/Auer%20rod
Auer rods (or Auer bodies) are large, crystalline cytoplasmic inclusion bodies sometimes observed in myeloid blast cells during acute myeloid leukemia, acute promyelocytic leukemia, high-grade myelodysplastic syndromes and myeloproliferative disorders. Composed of fused lysosomes and rich in lysosomal enzymes, Auer rods are azurophilic and can resemble needles, commas, diamonds, rectangles, corkscrews, or (rarely) granules. Eponym Although Auer rods are named for American physiologist John Auer, they were first described in 1905 by Canadian physician Thomas McCrae, then at Johns Hopkins Hospital, as Auer himself acknowledged in his 1906 paper. Both McCrae and Auer mistakenly thought that the cells containing the rods were lymphoblasts. Additional images References External links Image at NIH/MedlinePlus Slides at wadsworth.org Image at University of Utah Leukemia Histopathology
Auer rod
[ "Chemistry" ]
212
[ "Histopathology", "Microscopy" ]
1,012,545
https://en.wikipedia.org/wiki/Regge%20calculus
In general relativity, Regge calculus is a formalism for producing simplicial approximations of spacetimes that are solutions to the Einstein field equation. The calculus was introduced by the Italian theoretician Tullio Regge in 1961. Overview The starting point for Regge's work is the fact that every four dimensional time orientable Lorentzian manifold admits a triangulation into simplices. Furthermore, the spacetime curvature can be expressed in terms of deficit angles associated with 2-faces where arrangements of 4-simplices meet. These 2-faces play the same role as the vertices where arrangements of triangles meet in a triangulation of a 2-manifold, which is easier to visualize. Here a vertex with a positive angular deficit represents a concentration of positive Gaussian curvature, whereas a vertex with a negative angular deficit represents a concentration of negative Gaussian curvature. The deficit angles can be computed directly from the various edge lengths in the triangulation, which is equivalent to saying that the Riemann curvature tensor can be computed from the metric tensor of a Lorentzian manifold. Regge showed that the vacuum field equations can be reformulated as a restriction on these deficit angles. He then showed how this can be applied to evolve an initial spacelike hyperslice according to the vacuum field equation. The result is that, starting with a triangulation of some spacelike hyperslice (which must itself satisfy a certain constraint equation), one can eventually obtain a simplicial approximation to a vacuum solution. This can be applied to difficult problems in numerical relativity such as simulating the collision of two black holes. The elegant idea behind Regge calculus has motivated the construction of further generalizations of this idea. In particular, Regge calculus has been adapted to study quantum gravity. See also Numerical relativity Quantum gravity Euclidean quantum gravity Piecewise linear manifold Euclidean simplex Path integral formulation Lattice gauge theory Wheeler–DeWitt equation Mathematics of general relativity Causal dynamical triangulation Ricci calculus Twisted geometries Notes References External links Regge calculus on ScienceWorld Mathematical methods in general relativity Simplicial sets Numerical analysis
Regge calculus
[ "Mathematics" ]
491
[ "Computational mathematics", "Basic concepts in set theory", "Families of sets", "Mathematical relations", "Simplicial sets", "Numerical analysis", "Approximations" ]
1,012,767
https://en.wikipedia.org/wiki/Coarse%20woody%20debris
Coarse woody debris (CWD) or coarse woody habitat (CWH) refers to fallen dead trees and the remains of large branches on the ground in forests and in rivers or wetlands. A dead standing tree – known as a snag – provides many of the same functions as coarse woody debris. The minimum size required for woody debris to be defined as "coarse" varies by author, ranging from in diameter. Since the 1970s, forest managers worldwide have considered it best environmental practice to allow dead trees and woody debris to remain in woodlands, recycling nutrients trapped in the wood and providing food and habitat for a wide range of organisms, thereby improving biodiversity. The amount of coarse woody debris is an important criterion for the evaluation and restoration of temperate deciduous forest. Coarse woody debris is also important in wetlands, particularly in deltas where woody debris accumulates. Sources Coarse woody debris comes from natural tree mortality, plant pathology, insects, wildfire, logging, windthrows and floods. Ancient, or old growth, forest, with its dead trees and woody remains lying where they fell to feed new vegetation, constitutes the ideal woodland in terms of recycling and regeneration. In healthy temperate forests, dead wood comprises up to thirty per cent of all woody biomass. In recent British studies, woods managed for timber had between a third and a seventh less fallen debris than unmanaged woods that had been left undisturbed for many years, while in recently coppiced woods the amount of CWD was almost zero. In old growth Douglas fir forests of the Pacific Northwest of North America, CWD concentrations were found to be from 72 metric tons/hectare (64,000 pounds/acre) in drier sites to 174 t/ha (155,000 lb/acre) in moister sites. Australian native forests have mean CWD concentrations ranging from 19 t/ha (17,000 lb/acre) to 134 t/ha (120,000 lb/acre), depending on forest type. Benefits Nutrient cycling Coarse woody debris and its subsequent decomposition recycles nutrients that are essential for living organisms, such as carbon, nitrogen, potassium, and phosphorus. Saprotrophic fungi and detritivores such as bacteria and insects directly consume dead wood, releasing nutrients by converting them into other forms of organic matter which may then be consumed by other organisms. Dead wood itself has almost no physiologically important nutrients, so it must first be enriched for consumption by the transport of nutrients from outside. Thus CWD is an important actor in soil nutrient cycles. CWD, while itself not particularly rich in nitrogen, contributes nitrogen to the ecosystem by acting as a host for nonsymbiotic free-living nitrogen-fixing bacteria. Scientific studies show that coarse woody debris can be a significant contributor to biological carbon sequestration. Trees store atmospheric carbon in their wood using photosynthesis. Once the trees die, fungi and other saprotrophs transfer some of that carbon from CWD into the soil. This sequestration can continue in old-growth forests for hundreds of years. Habitat By providing both food and microhabitats for many species, coarse woody debris helps to maintain the biodiversity of forest ecosystems. Up to forty percent of all forest fauna is dependent on CWD. Studies in western North America showed that only five per cent of living trees consisted of living cells by volume, whereas in dead wood it was as high as forty percent by volume, mainly fungi and bacteria.
Colonizing organisms that live on the remains of cambium and sapwood of dead trees aid decomposition and attract predators that prey on them and so continue the chain of metabolizing the biomass. The list of organisms dependent on CWD for habitat or as a food source includes bacteria, fungi, lichens, mosses and other plants, and in the animal kingdom, invertebrates such as termites, ants, beetles, and snails, amphibians such as salamanders, reptiles such as the slow-worm, as well as birds and small mammals. One third of all woodland birds live in the cavities of dead tree trunks. Woodpeckers, tits, chickadees, and owls all live in dead trees, and grouse shelter behind woody debris. Some plants use coarse woody debris as habitat. Mosses and lichens may cover logs, while ferns and trees may regenerate on the top of logs. Large fragments of CWD that provide such habitat for herbs, shrubs, and trees are called nurse logs. CWD can also protect young plants from herbivory damage by acting as barriers to browsing animals. The persistence of coarse woody debris can shelter organisms during a large disturbance to the ecosystem such as wildfire or logging. Rivers and wetlands Fallen debris and trees in streams provide shelter for fish, amphibians and mammals by modifying the flow of water and sediment. Turtles of many species may also use coarse woody debris for basking. Musk turtles may lay their eggs under logs near wetlands. Soil Coarse woody debris, particularly on slopes, stabilizes soils by slowing downslope movement of organic matter and mineral soil. Leaves and other debris collect behind CWD, allowing for decomposition to occur. Infiltration of precipitation is improved as well. During dry weather, CWD slows evaporation of soil moisture and provides damp microhabitats for moisture-sensitive organisms. Wildfire In fire-prone forests, coarse woody debris can be a significant fuel during a wildfire. High amounts of fuels can lead to increased fire severity and size. CWD may be managed to reduce fuel levels, particularly in forests where fire exclusion has resulted in the buildup of fuels. Reductions in CWD for fire safety should be balanced with the retention of CWD for habitat and other benefits. CWD of in diameter is classified as 1000-hour fuel by fire managers, referring to the amount of time needed for the moisture content in the wood to come to equilibrium with the surrounding environment. Regional examples In Glen Affric, Scotland, the Trees for Life group found the black tinder fungus beetle (Bolitothorus reticulatus) is dependent on a particular fungus (Fomes fomentarius), which itself grows only on dead birch. Another insect, the pine hoverfly (Blera fallax), requires rotting Scots pine in order to reproduce. In the temperate deciduous forests of eastern North America, CWD provides habitat ranging from salamanders to ferns. It is an important indicator for evaluating and restoring this type of forest. In certain subtropical areas such as Australia where bushfire constitutes a major hazard, the amount of CWD left standing or lying is determined by what may be considered safe in the course of reasonable fire prevention. When fires occur, some invertebrates find shelter either within or beneath dead tree logs. In Canada, bears seek out dead tree logs to tear open and look for and feed on ants and beetles, a fact that has encouraged the authorities to reserve a sufficient amount of coarse woody debris for these purposes. 
In North America, too, CWD is often used as barriers to prevent browsing deer and elk from damaging young trees. See also Large woody debris Plant litter Slash (logging) Soil life Tree hollow References Further reading Franklin J. F., Lindenmayer D., MacMahon J. A., McKee A., Magnuson J., Perry D. A., Waide R. & Foster D. (2000). "Threads of Continuity". Conservation Biology in Practice. [Malden, MA] Blackwell Science, Inc. 1(1) pp9–16. Proceedings of the Symposium on the Ecology and Management of Dead Wood in Western Forests. PSW-GTR-181. William F. Laudenslayer, Jr., Patrick J. Shea, Bradley E. Valentine, C. Phillip Weatherspoon, and Thomas E. Lisle Technical Coordinators. External links Dead wood Forest ecology Habitat Fungus ecology Sustainable forest management Wildfire ecology Habitat management equipment and methods
Coarse woody debris
[ "Biology" ]
1,615
[ "Fungus ecology", "Fungi" ]
1,012,800
https://en.wikipedia.org/wiki/Thermal%20design%20power
Thermal Design Power (TDP), also known as thermal design point, is the maximum amount of heat that a computer component (like a CPU, GPU or system on a chip) can generate and that its cooling system is designed to dissipate during normal operation at a non-turbo clock rate (base frequency). Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating. Calculation The average CPU power (ACP) is the power consumption of central processing units, especially server processors, under "average" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture (Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the energy consumption under high workload; it is numerically somewhat higher than the "average" ACP rating of the same processor. According to AMD the ACP rating includes the power consumption when running several benchmarks, including TPC-C, SPECcpu2006, SPECjbb2005 and STREAM Benchmark (memory bandwidth), which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures. The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system. In this case, CPUs either cause a system failure (a "therm-trip") or throttle their speed down. Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a no longer operational fan or an incorrectly mounted heat sink. For example, a laptop's CPU cooling system may be designed for a 20 W TDP, which means that it can dissipate up to 20 watts of heat without exceeding the maximum junction temperature for the laptop's CPU. A cooling system can do this using an active cooling method (e.g. conduction coupled with forced convection) such as a heat sink with a fan, or any of the two passive cooling methods: thermal radiation or conduction. Typically, a combination of these methods is used. Since safety margins and the definition of what constitutes a real application vary among manufacturers, TDP values between different manufacturers cannot be accurately compared (a processor with a TDP of, for example, 100 W will almost certainly use more power at full load than processors with a fraction of said TDP, and very probably more than processors with lower TDP from the same manufacturer, but it may or may not use more power than a processor from a different manufacturer with a not excessively lower TDP, such as 90 W). Additionally, TDPs are often specified for families of processors, with the low-end models usually using significantly less power than those at the high end of the family. Until around 2006 AMD used to report the maximum power draw of its processors as TDP. Intel changed this practice with the introduction of its Conroe family of processors. Intel calculates a specified chip's TDP according to the amount of power the computer's fan and heatsink need to be able to dissipate while the chip is under sustained load. 
Actual power usage can be higher or (much) lower than TDP, but the figure is intended to give guidance to engineers designing cooling solutions for their products. In particular, Intel's measurement also does not fully take into account Intel Turbo Boost due to the default time limits, while AMD does because AMD Turbo Core always tries to push for the maximum power. Alternatives TDP specifications for some processors may allow them to work under multiple different power levels, depending on the usage scenario, available cooling capacities and desired power consumption. Technologies that provide such variable TDPs include Intel's configurable TDP (cTDP) and scenario design power (SDP), and AMD's TDP power cap. Configurable TDP (cTDP), also known as programmable TDP or TDP power cap, is an operating mode of later generations of Intel mobile processors and AMD processors that allows adjustments in their TDP values. By modifying the processor behavior and its performance levels, the power consumption of a processor can be changed, altering its TDP at the same time. That way, a processor can operate at higher or lower performance levels, depending on the available cooling capacities and desired power consumption. cTDP typically provides (but is not limited to) three operating modes: Nominal TDP the processor's rated frequency and TDP. cTDP down when a cooler or quieter mode of operation is desired, this mode specifies a lower TDP and lower guaranteed frequency versus the nominal mode. cTDP up when extra cooling is available, this mode specifies a higher TDP and higher guaranteed frequency versus the nominal mode. For example, some of the mobile Haswell processors support cTDP up, cTDP down, or both modes. As another example, some of the AMD Opteron processors and Kaveri APUs can be configured for lower TDP values. IBM's POWER8 processor implements a similar power capping functionality through its embedded on-chip controller (OCC). Intel introduced scenario design power (SDP) for some low power Y-series processors starting in 2013. It is described as "an additional thermal reference point meant to represent thermally relevant device usage in real-world environmental scenarios." As a power rating, SDP is not an additional power state of a processor; it states the average power consumption of a processor using a certain mix of benchmark programs to simulate "real-world" scenarios. Ambiguities of the Thermal Design Power parameter As some authors and users have observed, the Thermal Design Power (TDP) rating is an ambiguous parameter. In fact, different manufacturers define the TDP using different calculation methods and different operating conditions, keeping these details almost undisclosed (with very few exceptions). This makes it highly problematic (if not impossible) to reasonably compare similar devices made by different manufacturers based on their TDP, and to optimize the design of a cooling system in terms of both heat management and cost. Thermal Management fundamentals To better understand the problem we must remember the basic concepts underlying Thermal management and Computer cooling. Let's consider the thermal conduction path from the CPU case to the ambient air through a Heat sink, with: Pd (Watt) = Thermal power generated by a CPU and to be dissipated into the ambient through a suitable Heat sink. It corresponds to the total power drain from the direct current supply rails of the CPU. Rca (°C/W) = Thermal resistance of the Heat sink, between the case of the CPU and the ambient air.
Tc (°C) = Maximum allowed temperature of the CPU's case (ensuring full performance). Ta (°C) = Maximum expected ambient temperature at the inlet of the Heat sink fan. All these parameters are linked together by the following equation: Pd = (Tc - Ta) / Rca. Hence, once we know the thermal power to be dissipated (Pd), the maximum allowed case temperature (Tc) of the CPU and the maximum expected ambient temperature (Ta) of the air entering the cooling fans, we can determine the fundamental characteristics of the required Heat sink, i.e. its thermal resistance Rca, as: Rca = (Tc - Ta) / Pd. This equation can be rearranged by writing Tc = Ta + Rca × Pd, wherein Pd can be replaced by the Thermal Design Power (TDP). Note that the heat dissipation path going from the CPU to the ambient air flowing through the printed circuit of the motherboard has a thermal resistance that is orders of magnitude greater than that of the Heat sink, therefore it can be neglected in these computations. Issues when dealing with the Thermal Design Power (TDP) Once all the input data is known, the previous formula allows one to choose a CPU's Heat sink with a suitable thermal resistance Rca between case and ambient air, sufficient to keep the maximum case temperature at or below a predefined value Tc. On the contrary, when dealing with the Thermal Design Power (TDP), ambiguities arise because the CPU manufacturers usually do not disclose the exact conditions under which this parameter has been defined. The maximum acceptable case temperature Tc to obtain the rated performance is usually missing, as well as the corresponding ambient temperature Ta, and, last but not least, details about the specific computational test workload. For instance, an Intel general support page states briefly that the TDP refers to "the power consumption under the maximum theoretical load". Here they also inform that starting from the 12th generation of their CPUs the term Thermal Design Power (TDP) has been replaced with Processor Base Power (PBP). In a support page dedicated to the Core i7-7700 processor, Intel defines the TDP as the maximum amount of heat that a processor can produce when running real life applications, without telling what these "real life applications" are. Another example: in a 2011 white paper where the Xeon processors are compared with AMD's competing devices, Intel defines TDP as the upper point of the thermal profile measured at maximum case temperature, but without specifying what this temperature should be (nor the computing load). It is important to note that all these definitions imply that the CPU is running at the base clock rate (non-turbo). In conclusion: Comparing the TDP between devices of different manufacturers is not very meaningful. The selection of a heat sink may end up with overheating (and reduced CPU performance) or overcooling (an oversized, expensive heat sink), depending on whether one chooses too high or too low a case temperature Tc (respectively with a too low or too high ambient temperature Ta), or if the CPU operates with different computational loads. A possible approach to ensure a long life of a CPU is to ask the manufacturer for the recommended maximum case temperature Tc and then to oversize the cooling system. For instance, a safety margin taking into account some turbo overclocking could consider a thermal power that is 1.5 times the rated TDP. In any case, the lower the silicon junction temperature, the longer the lifespan of the device, according to an acceleration factor very roughly expressed by means of the Arrhenius equation.
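As a concrete illustration of the case-to-ambient relation above, the short sketch below computes the maximum allowable heat-sink thermal resistance from a TDP rating and assumed temperatures, including the 1.5× allowance just suggested for turbo operation. The function name and the numeric inputs are illustrative assumptions, not manufacturer data.

```python
def required_heatsink_resistance(tdp_w, t_case_max_c, t_ambient_c, margin=1.5):
    """Return the maximum case-to-ambient thermal resistance Rca (°C/W) that keeps
    the case at or below t_case_max_c, assuming the dissipated power is margin * TDP
    (a conservative allowance for turbo operation)."""
    dissipated_w = margin * tdp_w
    return (t_case_max_c - t_ambient_c) / dissipated_w

# Illustrative numbers only: a 65 W TDP part, 70 °C case limit, 42 °C ambient.
print(round(required_heatsink_resistance(65, 70, 42), 3))       # 0.287 °C/W with the 1.5x margin
print(round(required_heatsink_resistance(65, 70, 42, 1.0), 3))  # 0.431 °C/W at exactly the TDP
```

A lower result means a better (and usually larger or more expensive) heat sink is required; the margin parameter makes explicit how much of the ambiguity in TDP is being absorbed by overdesign.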
Some disclosed details of AMD's Thermal Design Power (TDP) In October 2019, the GamersNexus Hardware Guides showed a table with case and ambient temperature values that they got directly from AMD, describing the TDPs of some Ryzen 5, 7 and 9 CPUs. The formula relating all these parameters, given by AMD, is the usual Rca = (Tc - Ta) / TDP. The declared TDPs of these devices range from 65 W to 105 W; the ambient temperature considered by AMD is +42 °C, and the case temperatures range from +61.8 °C to +69.3 °C, while the case-to-ambient thermal resistances range from 0.189 to 0.420 °C/W. See also Heat generation in integrated circuits Operating temperature Power rating Intel Turbo Boost AMD Turbo Core References External links Details on AMD Bulldozer: Opterons to Feature Configurable TDP, AnandTech, July 15, 2011, by Johan De Gelas and Kristian Vättö Making x86 Run Cool, April 15, 2001, by Paul DeMone Computer engineering Heat transfer
Thermal design power
[ "Physics", "Chemistry", "Technology", "Engineering" ]
2,399
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Computer engineering", "Thermodynamics", "Electrical engineering" ]
1,012,978
https://en.wikipedia.org/wiki/Whetstone%20%28benchmark%29
The Whetstone benchmark is a synthetic benchmark for evaluating the performance of computers. It was first written in ALGOL 60 in 1972 at the Technical Support Unit of the Department of Trade and Industry (later part of the Central Computer and Telecommunications Agency) in the United Kingdom. It was derived from statistics on program behaviour gathered on the KDF9 computer at the National Physical Laboratory (NPL), using a modified version of its Whetstone ALGOL 60 compiler. The workload on the machine was represented as a set of frequencies of execution of the 124 instructions of the Whetstone Code. The Whetstone Compiler was built at the Atomic Power Division of the English Electric Company in Whetstone, Leicestershire, England, hence its name. Dr. B. A. Wichmann at NPL produced a set of 42 simple ALGOL 60 statements, which in a suitable combination matched the execution statistics. To make a more practical benchmark, Harold Curnow of TSU wrote a program incorporating the 42 statements. This program worked in its ALGOL 60 version, but when translated into FORTRAN it was not executed correctly by the IBM optimizing compiler. Calculations whose results were not output were omitted. He then produced a set of program fragments which were more like real code and which collectively matched the original 124 Whetstone instructions. Timing this program gave a measure of the machine's speed in thousands of Whetstone instructions per second (KWIPS). The Fortran version became the first general purpose benchmark that set industry standards of computer system performance. Further development was carried out by Roy Longbottom, also of TSU/CCTA, who became the official design authority. The Algol 60 program ran under the Whetstone compiler in July 2010, for the first time since the last KDF9 was shut down in 1980, but now executed by a KDF9 emulator. Benchmark content and enhancements The benchmark employs 8 test procedures, with three executing standard floating point calculations, two exercising standard functions such as COS or EXP, and one each for integer arithmetic, branching and memory assignments. Output from the original comprised parameters used for each test, numeric results produced and the overall KWIPS performance rating. In 1978, the program was updated to log the running time of each of the tests, allowing MFLOPS (Millions of Floating Point Operations Per Second) to be included in reports, along with an estimation of Integer MIPS (Millions of Instructions Per Second). In 1987, MFLOPS calculations were included in the log for the three appropriate tests and MOPS (Millions of Operations Per Second) for the others. Code changes were also carried out, including by Bangor University, where necessary to identify unexpected behaviour, without changing the implementation of the original 124 Whetstone instructions. One necessary change was to maintain measurement accuracy at increasing CPU speeds, with self calibration to run for a noticeable finite time, typically set to 10 seconds, or 100 seconds for early PCs with low timer resolution. Note that there are other versions of the Whetstone Benchmark available online, some claiming copyright, without reference to CCTA or the design authority. Initial CCTA results In conjunction with the undertaking controlled by the Contracts Division, CCTA engineers had responsibility to design and supervise acceptance trials of all UK Government computers and those centrally funded for Universities and Research Councils, with systems varying from minicomputers to supercomputers.
This provided the opportunity to gather verified Whetstone Benchmark results. Other results were obtained via new computer system appraisal activities. CCTA records are now available in The UK National Archives, including technical reports. Original Whetstone Benchmark results are in the 1985 CCTA Technical Memorandum 1182, where overall speed is only shown as MWIPS. This contains more than 1000 results for 244 computers from 32 manufacturers, including the first for PCs and supercomputers. The report might well be accessible from the Archive. The details were later included in a publicly available report (see Available Reports below). Vector processing version Roy Longbottom converted the original Whetstone Benchmark to fully exploit capabilities of the new vector processors. Results were included in the paper "Performance of Multi-User Supercomputing Facilities" presented at the 1989 Fourth International Conference on Supercomputing, Santa Clara. This was also repeated in the Harold Curnow paper "Whither Whetstone? The synthetic benchmark after 15 years" presented at the "Evaluating supercomputers: strategies for exploiting, evaluating and benchmarking computers with advanced architecture" conference in 1990. Whetstone benchmark influences Harold also reported comments from the 1989 conference "Software for Parallel Computers" in a presentation by Gordon Bell, designer of the Digital Equipment Corporation VAX range of minicomputers, indicating that the range was designed to perform well on the Whetstone Benchmark. The Whetstone Benchmark also had high visibility concerning floating point performance of Intel CPUs and PCs, starting with the 1980 Intel 8087 coprocessor. This was reported in the 1986 Intel Application Report "High Speed Numerics with the 80186/80188 and 8087". The latter includes hardware functions for exponential, logarithmic or trigonometric calculations, as used in two of the eight Whetstone Benchmark tests, where these can dominate running time. Only two other benchmarks were included in the Intel procedures, showing huge gains over the earlier software based routines on all three programs. Later tests, by an SSEMC Laboratory, evaluated Intel 80486 compatible CPU chips using their Universal Chip Analyzer. Considering two floating point benchmarks, as used by Intel in the above report, they preferred Whetstone, stating "Whetstone utilizes the complete set of instructions available on early x87 FPUs". This might suggest that the Whetstone Benchmark influenced the hardware instruction set. By the 1990s the Whetstone Benchmark and results had become relatively popular. A notable quotation in 1985 was in "A portable seismic computing benchmark" quoting "The only commonly used benchmark to my knowledge is the venerable Whetstone benchmark, designed many years ago to test floating point operations" in the European Association of Geoscientists and Engineers Journal. Details of the Vector Whetstone Benchmark performance were also repeated, by Roy Longbottom, at the June 1990 Advanced Computing Seminar at Natural Environment Research Council Wallingford. This led to the Council for the Central Laboratory of the Research Councils Distributed Computing Support collecting results from running "on a variety of machines, including vector supercomputers, minisupers, super-workstations and workstations, together with that obtained on a number of vector CPUs and on single nodes of various MPP machines".
More than 200 results are included, up to 2006, in the report available on the Wayback Machine Archive in entries to at least the year 2007 section. The report also indicated "The wide variety of standard functions exercised (sqrt, exp, cos etc.) consume a far larger fraction of the reported times." The First 1 MIPS minicomputer and Dhrystone benchmark On achieving 1 MWIPS, the Digital Equipment Corporation VAX-11/780 minicomputer became accepted as the first commercially available 32-bit computer to demonstrate 1 MIPS (Millions of Instructions Per Second) (CERN), though this is not really appropriate for a benchmark dependent on floating point speed. This had an impact on the Dhrystone Benchmark, the second accepted general purpose computer performance measurement program, with no floating point calculations. This produced a result of 1757 Dhrystones Per Second on the VAX 11/780, leading to a revised measurement of 1 DMIPS (also known as VAX MIPS), obtained by dividing the original result by 1757. Later developments Following retirement from CCTA, Roy Longbottom continued providing free benchmarking and stress testing programs available on his web site, latterly roylongbottom.org.uk, with most development using C, via Microsoft Windows and Linux based operating systems on PCs. This was initially in conjunction with the Compuserve Benchmarks and Standards Forum, see Wayback Machine Archive, covering PC hardware 1997 to 2008, providing numerous new benchmark results. From 2008 to 2013 further PC results were collected privately. By then, PC processor operating clock speeds had reached 4000 MHz and did not increase that much by the 2020s, reducing the need to gather results of the original scalar benchmark. In 2017 "Whetstone Benchmark History and Results" was published for public access, with the year of first delivery and purchase prices added, and doubling the number of computers covered in the CCTA report. The most notable citation for this was by Tony Voellm, then Google Cloud Performance Engineering Manager, entitled "Cloud Benchmarking: Fight the black hole". This considered available benchmarks and performance by time with detailed graphs, including those from the Whetstone reports. At a later stage, 504 of the results, by year, were included in the report "Techniques used for analyzing basic performance measurements". During this period, versions of the Whetstone Benchmark were produced to assess multithreading, initially for PCs running under Microsoft Windows, the latest supporting up to 8 CPUs or CPU cores, particularly for those known as 4 core/8 thread varieties. Compiler and interpreter efficiency The History report includes new sections for PC results, with CPUs from 1979, particularly those produced by up to 12 different compilers or interpreters, covering C/C++ (up to 64 bit SSE level), Old Fortran, Basic and Java. These are based on the ratio MWIPS per MHz (multiplied by 100) to represent efficiency. The bottom line is for a Core i7 CPU, with ratings varying from 0.39 via the Basic interpreter, to 311 via C using 64 bit SSE options, then 1003 with the multithreading benchmarks using all four CPU cores. Results with individual test performance Another report "Whetstone Benchmark Detailed Later Results" was produced in 2017. This document provides a summary of speeds of the eight test loops in the benchmark, as MFLOPS or MOPS plus the MWIPS ratings.
There are 22 pages of results covering the same Windows based PCs as the Historic file with different compilers and compiling options, some with multithreaded versions. Later results cover PCs using Linux. Then there are others for a sample of Android phones and tablets and, at the time, the full range of Raspberry Pi computers. For the latter, Roy Longbottom had been recruited as a voluntary member of Raspberry Pi Foundation new products Alpha Testing Team. Cray 1 supercomputer performance comparisons Later scalar, vector and multithreading results were included in a 2022 report “Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets” . This included the following, originally in a report on the first Raspberry Pi computer: "In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1" This claim was based on the official average performance of the Livermore Loops Benchmark that was used to demonstrate that the first Cray 1 met the required contractual requirements. The scalar Whetstone Benchmark achieved a much higher gain of 16.7 times improvement. The report includes comparisons with other supercomputers, a modern fairly fast laptop PC and the 2020 Raspberry Pi 400, where the latter obtained MWIPS gains over the Cray 1 of 155 times scalar, 38 vector and 593 scalar multithreading (4 CPU cores versus 1). The quad core laptop, using advanced SIMD compilations, obtained gains of 400, 215 and 3520 times respectively. Detailed Results, Source and Executable Codes Whetstone Benchmark source codes, compiled programs and reports including results are currently (at the time of writing) on Roy Longbottom’s website roylongbottom.org.uk, but this has a limited lifetime. For main reference purposes the HTML based reports were converted to PDF format and uploaded to ResearchGate. Brief descriptions of all files are included in an indexing file (download via More v for menu choices). Unfortunately, the file structure was changed, disabling access to most older compressed files containing benchmark source codes and compiled programs. The original website provides the same indexing format but includes the links to access both local files and those at ResearchGate, the former having options to download program codes. Presently, and hopefully for longtime future access, the website has been captured numerous times by the Wayback Machine Internet Archive site, but all captures do not necessarily include compressed program files. If the file name is known, available captures can be found, such as for benchnt.zip (copy and modify link address), Other benchmarks and references The Whetstone benchmark primarily measures the floating-point arithmetic performance. A similar benchmark for integer and string operations is the Dhrystone. See also Dhrystone FLOPS Gibson Mix LINPACK benchmarks Million instructions per second (MIPS) References External links Benchmark Programs and Reports (see also Netlib) Whetstone Algol Revisited, or Confessions of a compiler writer PDF file (B. Randell, 1964) Benchmarks (computing) Blaby Computer-related introductions in 1972 History of computing in the United Kingdom Science and technology in Leicestershire
Whetstone (benchmark)
[ "Technology" ]
2,842
[ "Computing comparisons", "Computer performance", "History of computing in the United Kingdom", "Benchmarks (computing)", "History of computing" ]
1,012,996
https://en.wikipedia.org/wiki/Helium-4
Helium-4 (4He) is a stable isotope of the element helium. It is by far the more abundant of the two naturally occurring isotopes of helium, making up about 99.99986% of the helium on Earth. Its nucleus is identical to an alpha particle, and consists of two protons and two neutrons. Helium-4 makes up about one quarter of the ordinary matter in the universe by mass, with almost all of the rest being hydrogen. While nuclear fusion in stars also produces helium-4, most of the helium-4 in the Sun and in the universe is thought to have been produced during the Big Bang, known as "primordial helium". However, primordial helium-4 is largely absent from the Earth, having escaped during the high-temperature phase of Earth's formation. On Earth, most naturally occurring helium-4 is produced by the alpha decay of heavy elements in the Earth's crust, after the planet cooled and solidified. When liquid helium-4 is cooled to below about 2.17 K (its lambda point), it becomes a superfluid, with properties very different from those of an ordinary liquid. For example, if superfluid helium-4 is placed in an open vessel, a thin Rollin film will climb the sides of the vessel, causing the liquid to escape. The total spin of the helium-4 nucleus is an integer (zero), making it a boson. The superfluid behavior is a manifestation of Bose–Einstein condensation, which occurs only in collections of bosons. It is theorized that at 0.2 K and 50 atm, solid helium-4 may be a superglass (an amorphous solid exhibiting superfluidity). The helium-4 atom The helium atom is the second simplest atom (hydrogen is the simplest), but the extra electron introduces a third "body", so its wave equation becomes a "three-body problem", which has no analytic solution. However, numerical approximations of the equations of quantum mechanics have given a good estimate of the key atomic properties of helium-4, such as its size and ionization energy. The size of the 4He nucleus has long been known to be of the order of 1 fm. In an experiment involving the use of exotic helium atoms in which an atomic electron was replaced by a muon, the nucleus size has been estimated to be 1.67824(83) fm. Stability of the 4He nucleus and electron shell The nucleus of the helium-4 atom has a type of stability called doubly magic. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each canceling the other's intrinsic spin. Adding another of any of these particles would require angular momentum, and would release substantially less energy (in fact, no nucleus with five nucleons is stable). This arrangement is thus energetically extremely stable for all these particles, and this stability accounts for many crucial facts regarding helium in nature. For example, the stability and low energy of the electron cloud of helium causes helium's chemical inertness (the most extreme of all the elements), and also the lack of interaction of helium atoms with each other (producing the lowest melting and boiling points of all the elements). 
In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions involving both heavy-particle emission and fusion. Some stable helium-3 is produced in fusion reactions from hydrogen, but it is a very small fraction compared with the highly energetically favorable production of helium-4. The stability of helium-4 is the reason that hydrogen is converted to helium-4, and not deuterium (hydrogen-2) or helium-3 or other heavier elements, during fusion reactions in the Sun. It is also partly responsible for the alpha particle being by far the most common type of baryonic particle to be ejected from an atomic nucleus; in other words, alpha decay is far more common than cluster decay. The unusual stability of the helium-4 nucleus is also important cosmologically. It explains the fact that, in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about a 6:1 proton-to-neutron ratio cooled to the point where nuclear binding was possible, almost all of the atomic nuclei that formed were helium-4 nuclei. The binding of the nucleons in helium-4 is so tight that its production consumed nearly all the free neutrons in a few minutes, before they could beta decay, and left very few to form heavier atoms (especially lithium, beryllium, and boron). The nuclear binding energy per nucleon of helium-4 is greater than in any of those elements (see nucleogenesis and binding energy), and thus no energetic "drive" was available to make elements 3, 4, and 5 once helium had been formed. It is barely energetically favorable for helium to fuse into the next element with a higher binding energy per nucleon (carbon). However, due to the rarity of intermediate elements and the extreme instability of beryllium-8 (the product when two 4He nuclei fuse), this process needs three helium nuclei striking each other nearly simultaneously (see triple-alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure at which helium fusion to carbon was no longer possible. This left the early universe with a hydrogen–helium ratio very similar to that observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4. All heavier elements—including those necessary for rocky planets like the Earth, and for carbon-based or other life—thus had to be produced, since the Big Bang, in stars which were hot enough to fuse elements heavier than hydrogen. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, makes up about 23% of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen (1H). See also Big Bang nucleosynthesis References External links Superfluid Helium-4 Interactive Properties Helium-04
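As a rough check on the figures quoted above, assume that the initial 6:1 proton-to-neutron ratio had shifted to roughly 7:1 through neutron decay by the time fusion began, and that essentially every remaining neutron ended up bound in helium-4. A sample of 16 nucleons then contains 2 neutrons and 14 protons; binding the 2 neutrons with 2 protons into one helium-4 nucleus (mass 4) leaves 12 protons as hydrogen (mass 12), giving a helium mass fraction of 4/16 = 25% and a hydrogen-to-helium mass ratio of 12:4 = 3:1, consistent with the values given above.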
Helium-4
[ "Chemistry" ]
1,400
[ "Isotopes of helium", "Isotopes" ]
1,013,089
https://en.wikipedia.org/wiki/Solovay%E2%80%93Strassen%20primality%20test
The Solovay–Strassen primality test, developed by Robert M. Solovay and Volker Strassen in 1977, is a probabilistic primality test to determine if a number is composite or probably prime. The idea behind the test was discovered by M. M. Artjuhov in 1967 (see Theorem E in the paper). This test has been largely superseded by the Baillie–PSW primality test and the Miller–Rabin primality test, but has great historical importance in showing the practical feasibility of the RSA cryptosystem. The Solovay–Strassen test is essentially an Euler–Jacobi probable prime test. Concepts Euler proved that for any odd prime number p and any integer a, a^((p−1)/2) ≡ (a/p) (mod p), where (a/p) is the Legendre symbol. The Jacobi symbol (a/n) is a generalisation of the Legendre symbol in which n can be any odd integer. The Jacobi symbol can be computed in time O((log n)²) using Jacobi's generalization of the law of quadratic reciprocity. Given an odd number n one can contemplate whether or not the congruence a^((n−1)/2) ≡ (a/n) (mod n) holds for various values of the "base" a, given that a is relatively prime to n. If n is prime then this congruence is true for all a. So if we pick values of a at random and test the congruence, then as soon as we find an a which doesn't fit the congruence we know that n is not prime (but this does not tell us a nontrivial factorization of n). This base a is called an Euler witness for n; it is a witness for the compositeness of n. The base a is called an Euler liar for n if the congruence is true while n is composite. For every composite odd n, at least half of all bases are (Euler) witnesses, as the set of Euler liars is a proper subgroup of the multiplicative group (Z/nZ)*. For example, for n = 65, the set of Euler liars has order 8, while the group (Z/65Z)* has order 48. This contrasts with the Fermat primality test, for which the proportion of witnesses may be much smaller. Therefore, there are no (odd) composite n without many witnesses, unlike the case of Carmichael numbers for Fermat's test. Example Suppose we wish to determine if n = 221 is prime. We write (n−1)/2 = 110. We randomly select an a (greater than 1 and smaller than n): 47. Using an efficient method for raising a number to a power (mod n) such as binary exponentiation, we compute: a^((n−1)/2) mod n = 47^110 mod 221 = −1 mod 221, and (a/n) mod n = (47/221) mod 221 = −1 mod 221. This gives that either 221 is prime, or 47 is an Euler liar for 221. We try another random a, this time choosing a = 2: a^((n−1)/2) mod n = 2^110 mod 221 = 30 mod 221, while (a/n) mod n = (2/221) mod 221 = −1 mod 221. Hence 2 is an Euler witness for the compositeness of 221, and 47 was in fact an Euler liar. Note that this tells us nothing about the prime factors of 221, which are actually 13 and 17. Algorithm and running time The algorithm can be written in pseudocode as follows: inputs: n, a value to test for primality; k, a parameter that determines the accuracy of the test. output: composite if n is composite, otherwise probably prime
repeat k times:
   choose a randomly in the range [2, n − 1]
   x ← (a/n)
   if x = 0 or a^((n−1)/2) ≢ x (mod n) then return composite
return probably prime
Using fast algorithms for modular exponentiation, the running time of this algorithm is O(k·(log n)³), where k is the number of different values of a we test. Accuracy of the test It is possible for the algorithm to return an incorrect answer. If the input n is indeed prime, then the output will always correctly be probably prime. However, if the input n is composite then it is possible for the output to be incorrectly probably prime. The number n is then called an Euler–Jacobi pseudoprime. 
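The test is straightforward to implement. The following is a minimal Python sketch of the algorithm above; the Jacobi symbol is computed with the standard binary algorithm based on quadratic reciprocity, modular exponentiation uses Python's built-in three-argument pow, and the function names are illustrative only.

import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via the binary quadratic-reciprocity algorithm
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 when n = 3 or 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # 0 means gcd(a, n) > 1

def solovay_strassen(n, k=20):
    # Returns False if n is composite, True if n is probably prime
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n)                       # random base a in [2, n - 1]
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:   # a is an Euler witness
            return False
    return True                                          # error probability at most 2^-k

print(solovay_strassen(221))   # 221 = 13 * 17, so almost certainly False
print(solovay_strassen(223))   # 223 is prime, so True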
When n is odd and composite, at least half of all a with gcd(a,n) = 1 are Euler witnesses. We can prove this as follows: let {a_1, a_2, ..., a_m} be the Euler liars and a an Euler witness. Since each a_i is an Euler liar and a is an Euler witness, a_i^((n−1)/2) ≡ (a_i/n) (mod n) while a^((n−1)/2) ≢ (a/n) (mod n). Then, for i = 1, 2, ..., m: (a·a_i)^((n−1)/2) = a^((n−1)/2)·a_i^((n−1)/2) ≡ a^((n−1)/2)·(a_i/n) ≢ (a/n)·(a_i/n) (mod n). Because the Jacobi symbol is multiplicative, (a·a_i/n) = (a/n)·(a_i/n), so we now know that (a·a_i)^((n−1)/2) ≢ (a·a_i/n) (mod n). This gives that each a_i yields a number a·a_i which is also an Euler witness, and these products are distinct modulo n because a is invertible modulo n. So each Euler liar gives an Euler witness and so the number of Euler witnesses is larger than or equal to the number of Euler liars. Therefore, when n is composite, at least half of all a with gcd(a,n) = 1 are Euler witnesses. Hence, the probability of failure is at most 2^−k (compare this with the probability of failure for the Miller–Rabin primality test, which is at most 4^−k). For purposes of cryptography, the more bases a we test, i.e. the larger the value of k we pick, the better the accuracy of the test. Hence the chance of the algorithm failing in this way is so small that the (pseudo) prime is used in practice in cryptographic applications, but for applications for which it is important to have a prime, a test like ECPP or the Pocklington primality test should be used which proves primality. Average-case behaviour The bound 1/2 on the error probability of a single round of the Solovay–Strassen test holds for any input n, but those numbers n for which the bound is (approximately) attained are extremely rare. On the average, the error probability of the algorithm after k rounds is significantly smaller than the worst-case bound 2^−k when the test is applied to uniformly random odd inputs. The same is true of the related problem of the conditional probability of n being composite for a random number which has been declared prime in k rounds of the test. Complexity The Solovay–Strassen algorithm shows that the decision problem COMPOSITE is in the complexity class RP. References Further reading See also External links Solovay-Strassen Implementation of the Solovay–Strassen primality test in Maple Primality tests Modular arithmetic Randomized algorithms
Solovay–Strassen primality test
[ "Mathematics" ]
1,341
[ "Arithmetic", "Modular arithmetic", "Number theory" ]
1,013,233
https://en.wikipedia.org/wiki/Impossible%20bottle
An impossible bottle is a bottle containing an object that appears too large to fit through the bottle's mouth. The ship model in a bottle is a traditional and the most iconic type of impossible bottle. Other common objects include fruits, matchboxes, decks of cards, tennis balls, racketballs, Rubik's Cubes, padlocks, knots, and scissors. These may be placed inside the bottle using various mechanisms, including constructing an object inside the bottle from smaller parts, using a small object that expands or grows inside the bottle, or molding the glass around the object. Ship in a bottle There are two ways to place a model ship inside a bottle. The simpler way is to rig the masts of the ship and raise it up when the ship is inside the bottle. Masts, spars, and sails are built separately and then attached to the hull of the ship with strings and hinges so the masts can lie flat against the deck. The ship is then placed inside the bottle and the masts are pulled up using the strings attached to the masts. The hull of the ship must still be able to fit through the opening. Bottles with minor distortions and soft tints are often chosen to hide the small details of the ship such as hinges on the masts. Alternatively, with specialized long-handled tools, it is possible to build the entire ship inside the bottle. The oldest surviving ships in a bottle were crafted by Giovanni Biondo at the end of the eighteenth century; two, at least, reproduce Venetian ships of the line. These are quite large and expensive models: the bottles (intended to be displayed upside down, with the neck resting on a small pedestal) measure about 45 cm. The oldest (1784) is in a museum in Lübeck; another (1786) is held by a private collector; the third (1792), that apparently reproduces the heavy frigate PN Fama, is in the Navy Museum in Lisbon. Another old model (1795), from an unknown builder, is kept in a museum in Rotterdam. Ships in bottles became more popular as folk art in the second half of the nineteenth century, after the introduction of cheap, mass-produced bottles made with clear glass. A significant collection of ships in bottles is the Dashwood-Howard collection held by the Merseyside Maritime Museum. God-in-a-bottle God-in-a-bottle, or God-in-the-Bottle, is a symbolisation of the crucifixion of Jesus through the placing in a bottle of carved wooden items, including a cross and often others such as a ladder and spear . The crossbeam of the cross is attached to the vertical beam after both are in the bottle. The bottles were often filled with liquid, latterly sometimes with particles akin to a snow globe. Such bottles were used in 19th-century Irish Catholicism as devotional objects or as talismans akin to witch bottles. They were found elsewhere in Catholic Europe, and are related to older "Passion Bottles", made by glassblowers in their spare time, where a large variety of small glass symbols of the Passion of Jesus were inserted into a bottle. The making of Gods-in-bottles was exported through Irish diaspora, notably to mining communities in Northern England, where scenes with mining tools sometimes replaced the crucifixion. Examples are in the collections of the National Museum of Ireland – Country Life the Irish Agricultural Museum, Enniscorthy Castle museum, and the Beamish Museum in County Durham. Later makers were often Irish Travellers, whose craftworks often recycle discarded objects. 
Richard Power's 1964 novel The Land of Youth, set in a fictional version of the Aran Islands, mentions an outcast who uses driftwood for what is "known to generations of children as God-in-a-bottle." Although the Offaly Independent says that in the 1970s "almost every pub in Tullamore" displayed a bottle, by the 21st century they were largely unknown in Ireland. A 2023 episode of Nationwide reported on two men in the Irish midlands still practising the tradition. Small objects that expand naturally One variation of the impossible bottle takes advantage of pine cones opening as they dry out. In constructing the display, a closed, green cone of suitable size is inserted into a narrow-mouthed bottle and then allowed to dry inside the bottle. Fruits and vegetables inside bottles are grown by placing a bottle around the blossom or young fruit and securing it to the plant. The fruit then grows to full size inside the bottle. This technique is used to put pears into bottles of pear brandy (most famously the French eau de vie Poire Williams). See also Bonsai Kitten Chinese puzzle ball References External links Folk Art in Bottles Bottles Mechanical puzzles da:Flaskeskib fr:Bateau en bouteille nl:Flessescheepje no:Flaskeskute nn:Flaskeskip pl:Model statku w butelce ru:Корабль в бутылке sv:Flaskskepp
Impossible bottle
[ "Mathematics" ]
1,041
[ "Recreational mathematics", "Mechanical puzzles" ]
1,013,331
https://en.wikipedia.org/wiki/Component%20Manager
In Apple Macintosh computer programming, Component Manager was one of many approaches to sharing code that originated on the pre-PowerPC Macintosh. It was originally introduced as part of QuickTime, which remained the part of the classic Mac OS that used it most heavily. Technical details A component was a piece of code that provided various functions that may be invoked by clients. Each function was identified by a signed 16-bit integer ID code. Non-positive codes were reserved for predefined functions that should be understood by all components—open/close a component instance, query whether a function was supported, etc. The meanings of positive function codes depended on the type of component. A component instance was created by opening a component. This called the component's open function to allocate and initialize any necessary storage for the instance. Closing the instance got rid of this storage and invalidated all references to that instance. Components and component instances were referenced by 32-bit values that were not pointers. Instead, they were interpreted as keys into internal Component Manager tables. These references were generated in such a way that, once they became invalid, those values were unlikely to become valid again for a long time. This minimized the chance of obscure bugs due to dangling references. Components were identified by OSType codes giving their type, subtype and "manufacturer". For instance, a component type might be "raster image compressor", subtypes of which might exist for JPEG, H.261, Sorenson, and Intel Indeo, among others. It was possible to have multiple components registered with exactly the same identification codes, giving alternative implementations of the same algorithm for example using hardware versus software, trading off speed versus quality, or other criteria. It was possible for the applications to query the existence of such alternatives and make explicit choices between them, or let the system choose a default. Among the options available, a component could delegate parts of its functions to another component as a form of subclassing for code reuse. It was also possible for one component to capture another, which meant that all accesses to the captured component had to go through the capturing one. Mac OS Components Mac OS accumulated a great variety of component types: Within QuickTime, there were image codecs, media handlers, media data handlers, video digitizer drivers, file format importers and exporters, and many others. The Sound Manager moved to a predominantly component-based architecture in version 3.0: sound output devices were represented as components, and there were also component types for mixing multiple channels, converting between different sample rates and sample sizes, and encoding and decoding compressed formats. AppleScript introduced the concept of scripting languages implemented as components. ColorSync implemented different colour-matching methods as components. QuickDraw GX "font scalers" were renderers for the different font formats. References Macintosh operating systems development Component-based software engineering
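The idea of handing out table keys instead of raw pointers, with reference values that are unlikely to be reused soon after they become invalid, can be illustrated with a small generational handle table. The following Python sketch shows the general technique only; it is not the actual Component Manager data structure, and all names in it are invented for illustration.

class HandleTable:
    # Hands out integer handles that pack a slot index with a generation counter,
    # so that a stale handle is detected rather than silently resolving to a new object.
    SLOT_BITS = 16

    def __init__(self):
        self.objects = []      # payload per slot (None when the slot is free)
        self.generation = []   # bumped each time a slot is closed
        self.free_slots = []

    def open(self, obj):
        if self.free_slots:
            slot = self.free_slots.pop()
            self.objects[slot] = obj
        else:
            slot = len(self.objects)
            self.objects.append(obj)
            self.generation.append(0)
        return (self.generation[slot] << self.SLOT_BITS) | slot

    def resolve(self, handle):
        slot = handle & ((1 << self.SLOT_BITS) - 1)
        gen = handle >> self.SLOT_BITS
        if slot >= len(self.objects) or self.generation[slot] != gen or self.objects[slot] is None:
            raise ValueError("stale or invalid handle")
        return self.objects[slot]

    def close(self, handle):
        obj = self.resolve(handle)       # validates the handle first
        slot = handle & ((1 << self.SLOT_BITS) - 1)
        self.objects[slot] = None
        self.generation[slot] += 1       # older handles to this slot are now invalid
        self.free_slots.append(slot)
        return obj

table = HandleTable()
h = table.open("image decompressor instance")
print(table.resolve(h))    # "image decompressor instance"
table.close(h)
# table.resolve(h) now raises ValueError: the slot's generation no longer matches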
Component Manager
[ "Technology" ]
595
[ "Component-based software engineering", "Components" ]
1,013,492
https://en.wikipedia.org/wiki/Marcos%20Moshinsky
Marcos Moshinsky Borodiansky (1921–2009) was a Mexican physicist of Ukrainian-Jewish origin whose work in the field of elementary particles won him the Prince of Asturias Prize for Scientific and Technical Investigation in 1988 and the UNESCO Science Prize in 1997. Early life He was born in 1921 into a Jewish family in Kyiv, Ukrainian SSR. At the age of three, he emigrated as a refugee to Mexico, where he became a naturalized citizen in 1942. He received a bachelor's degree in physics from the National Autonomous University of Mexico (UNAM) and a doctorate in the same discipline from Princeton University under Nobel laureate Eugene Paul Wigner. Career In the 1950s he researched nuclear reactions and the structure of the atomic nucleus, introducing the concept of the transformation bracket for eigenstates of the quantum harmonic oscillator, which, together with the tables elaborated in collaboration with Thomas A. Brody, simplified calculations in the nuclear shell model and became an indispensable reference for the study of nuclear structure. In 1952, his work on the transient dynamics of matter waves led to the discovery of diffraction in time. After completing postdoctoral studies at the Henri Poincaré Institute in Paris, France, he returned to Mexico City to serve as a professor at UNAM. In 1967 he was chosen president of the Mexican Society of Physics and in 1972 he was admitted to the National College. He was the editor of several international scientific reviews, including the Bulletin of the Atomic Scientists, and authored four books and more than 200 technical papers. He received the Mexican National Prize for Science (1968), the Luis Elizondo Prize (1971), the Prince of Asturias Prize for Scientific and Technical Investigation (1988) and the UNESCO Science Prize (1997). In 1990 he was elected a Fellow of the American Physical Society "for his many fundamental contributions to the description of many-body quantum systems through the use of group-theoretical techniques". While practicing physics, he also wrote a weekly column on Mexican politics in the newspaper Excélsior. References This article began as a translation of the corresponding article in the Spanish-language Wikipedia. M. Moshinsky and Y. F. Smirnov, The harmonic oscillator in modern physics, Informa HealthCare, Amsterdam 1996. External links Profile at the Prince of Asturias Foundation Profile at the National College of Mexico. 2009 deaths 1921 births Scientists from Mexico City UNESCO Science Prize laureates Members of El Colegio Nacional (Mexico) Particle physicists 20th-century Mexican physicists Members of the Pontifical Academy of Sciences Members of the Brazilian Academy of Sciences National Autonomous University of Mexico alumni Academic staff of the National Autonomous University of Mexico Mexican people of Ukrainian-Jewish descent Ukrainian Jews Soviet emigrants to Mexico Members of the Mexican Academy of Sciences Mathematical physicists Fellows of the American Physical Society
Marcos Moshinsky
[ "Physics" ]
569
[ "Particle physicists", "Particle physics" ]
1,013,539
https://en.wikipedia.org/wiki/BMW%20iDrive
iDrive is an in-car communications and entertainment system, used to control most secondary vehicle systems in late-model BMW cars. It was launched in 2001, first appearing in the E65 7 Series. The system unifies an array of functions under a single control architecture consisting of an LCD panel mounted on the dashboard and a control knob mounted on the center console. iDrive introduced the first multiplexed MOST Bus/Byteflight optical fiber data busses with a very high bit rate in a production vehicle. These are used for high-speed applications such as controlling the television, DVD, or driver assistance systems like adaptive cruise control, infrared night vision or head-up display. iDrive allows the driver (and, in some models, front-seat passengers) to control the climate (air conditioner and heater), audio system (radio and CD player), navigation system, and communication system. iDrive is also used in modern Rolls-Royce models, as Rolls-Royce is owned by BMW, and in the 2019 onwards Toyota Supra is a collaboration between BMW and Toyota. BMW also owns the Mini brand, and a pared-down version of iDrive is available on those cars, branded as Connected. iDrive Generations iDrive (1st Gen) An early prototype iDrive (called the Intuitive Interaction Concept) was featured on the BMW Z9 concept in 1999. The production version debuted in September 2001 in the BMW 7 Series (E65) and was built on the VxWorks kernel while the Navigation computer used Microsoft Windows CE for Automotive; this can be seen when the system reboots or restarts after a software crash, displaying a "Windows CE" logo. The first generation of iDrive controllers in the 7 Series was equipped with only a rotary knob. The GPS computer ("NAV01", located in the trunk) was only capable of reading map CDs. In October 2003, a menu and a customizable button was added to the controller. The new GPS computer ("NAV02") was updated to read DVDs, featured a faster processor and the ability to display the map in bird's-eye view ("perspective"). In April 2005, the iDrive controller was changed again, the turn knob having a new leather top. The last hardware update of the GPS unit ("NAV03") got a faster processor again. The map display is antialiased. The 8.8" wide-screen display was updated, having a brighter screen and the ability to control a MP3 capable 6 CD-changer or a BMW iPod Interface. Possible options include a TV tuner, DVD changer, BMW Night Vision, side view camera and a rear view camera. iDrive Business (M-ASK) M-ASK stands for MMI Audio System controller and is manufactured by Becker. This is a limited version of the iDrive computer with a small 6.6" display and is only found on 5, 6 Series and the X5 or X6, without the navigation option. In addition, it can be ordered as an option in Europe on the 1 Series, 3 Series and 5 series as "Business navigation", which has basic navigation abilities. Early versions of the Business navigation could only display directional arrows, but the latest version can also display 2D maps. iDrive Business Navigation uses a different map DVD than iDrive Professional Navigation. In addition, as only one optical drive is available, one cannot use both navigation and listen to a CD simultaneously. When iDrive Professional is ordered the M-ASK system is replaced by iDrive CCC Professional with a dual slot dash mounted drive computer and larger 8.8" display. 
iDrive Business is available on the following cars; iDrive Business Navigation (optional) 1 Series E81/E82/E87/E88 3 Series E90/E91/E92/E93 5 Series E60/E61 iDrive Business (default when navigation is not ordered) 6 Series E63/E64 X5 E70 X6 E71 The above list can vary depending on the region. iDrive Professional Navigation (CCC) [iDrive 2.0] It debuted in 2003 with the E60/E61 5 Series and is based on Wind River VxWorks, a real-time operating system. CCC stands for Car Communication Computer and uses a larger 8.8" wide-screen display. It was available on the following cars as an option; 1-Series E81/E82/E87/E88 - 06/2004 – 09/2008 3-Series E90/E91/E92/E93 - 03/2005 – 09/2008 5-Series E60/E61 - 12/2003 – 11/2008 6-Series E63/E64 - 12/2003 – 11/2008 X5 E70 - 03/2007 – 10/2009 X6 E71 - 05/2008 – 10/2009 CCC based systems use a map DVD from Navteq in a dedicated DVD drive. CCC - Update 1 This is a minor update to iDrive Professional debuted in March 2007. It adds additional programmable buttons in the dashboard to directly access frequent functions and it removes the haptic feedback from the iDrive controller. It is available on the following cars as an option; 1 Series E81/E82/E87/E88 manufactured between March 2007 and September 2008 3 Series E90/E91/E92/E93 manufactured between March 2007 and August 2008 5 Series E60/E61 manufactured between March 2007 and August 2008 6 Series E63/E64 manufactured between March 2007 and August 2008 X5 E70 manufactured until MY2010 X6 E71 CCC - Update 2 This is a minor update debuted in September 2008 for Model Year 2009 cars equipped with iDrive Professional that did not get the new CIC based system. These cars get the new iDrive controller that is also used on cars with CIC. The actual iDrive computer (CCC) remains the same. This update is available on the following cars; 5 Series E60/E61 manufactured in September 2008 to February 2009 (to October 2008 for European production) 6 Series E63/E64 manufactured in September 2008 to February 2009 (to October 2008 for European production) iDrive Professional Navigation (CIC) [iDrive 3.0] It debuted in September 2008 with F01/F02 7 Series. CIC stands for Car Information Computer and is manufactured by Becker, utilizing the QNX operating system. It is available on the following cars as an option: 1-Series E81/E82/E87/E88 - 09/2008 – 08/2013 1-Series F20/F21 - 09/2011 – 03/2013 3-Series E90/E91/E92/E93 - 09/2008 – 10/2013 3-Series F30/F31/F34/F80 - 02/2012 – 11/2012 5-Series E60/E61 - 11/2008 – 05/2010 5-Series F07 - 10/2009 – 07/2012 5-Series F10 - 03/2010 – 09/2012 5-Series F11 - 09/2010 – 09/2012 6-Series E63/E64 - 11/2008 – 07/2010 6-Series F06 - 03/2012 – 03/2013 6-Series F12/F13 - 12/2010 – 03/2013 7-Series F01/F02/F03 - 11/2008 – 07/2013 7-Series F04 - 11/2008 – 06/2015 X1 E84 - 10/2009 – 06/2015 X3 F25 - 10/2010 – 04/2013 X5 E70 - 10/2009 – 06/2013 X6 E71 - 10/2009 – 08/2014 Z4 E89 - 02/2009 – 08/2016 The CIC system is a major update to iDrive, replacing the display, computer and the controller. The display is of a higher resolution, and is generally more responsive than CCC, to address one of the common complaints of iDrive. Internet access is also supported. CIC-based systems use maps from TeleAtlas that are installed on an internal 2.5" 80 GB Hard Disk Drive (HDD). This HDD can also store up to 8 GB of music files for playback. For facilitating the uploading of music files to the HDD, a USB port is provided in the glove box. Following 2009 LCI production, all CIC-based iDrive systems support DVD video. 
This, however, is only operational when the vehicle is in the "Park" position for automatic transmissions, or while the parking brake is set for vehicles that have a manual transmission. DVD audio will continue to play while driving. iDrive Professional NBT (Next Big Thing) [iDrive 4.0] BMW introduced a further update to the iDrive Professional System in early 2012, calling it the "Next Big Thing" (NBT). It was introduced in current generation cars as an option, including: 1-Series F20/F21 - 03/2013 – 03/2015 2-Series F22 - 11/2013 – 03/2015 3-Series F30/F31 - 11/2012 – 07/2015 3-Series F34 - 03/2013 – 07/2015 3-Series F80 - 03/2014 – 07/2015 4-Series F32 - 07/2013 – 07/2015 4-Series F33 - 11/2013 – 07/2015 4-Series F36 - 03/2014 – 07/2015 5-Series F07 - 07/2012 – 2016 5-Series F10/F11/F18 - 09/2012 – 2016 6-Series F06/F12/F13 - 03/2013 – 2016 7-Series F01/F02/F03 - 07/2012 – 06/2015 X3 F25 - 04/2013 – 08/2017 X4 F26 - 04/2014 – 08/2017 X5 F15 - 08/2014 – 07/2016 X5 F85 - 12/2014 – 07/2016 X6 F16 - 08/2014 – 07/2016 X6 F86 - 12/2014 – 07/2016 i3 - 09/2013 – 09/2017 i8 - 04/2014 – 09/2017 The update includes extensive hardware and software changes including cosmetic enhancements, faster processor, more memory, detailed 3D maps and improved routing. In addition, the capacity of the internal HDD has been increased from 10GB to 20GB. NBT also introduced a redesigned iDrive controller with optional handwriting recognition capabilities and gesture controls. This was achieved through a capacitive touch pad on top of the iDrive controller. NBT also removed the need for a separate COMBOX module for A2DP and USB media as those functions were integrated directly into the NBT Head Unit. BMW Online was also replaced with the newly introduced Connected Drive, which relied on a hardware TCB module with a built-in SIM card for mobile connectivity. iDrive Professional NBT EVO [iDrive 5.0/6.0] NBT EVO (Evolution) was released starting in 2016 and represented the first major change in the operational logic of iDrive since being introduced in 2001. The familiar vertical list of text menus was replaced by a horizontal set of dynamic tiles, each able to show real time information. This update saw major updates to the iDrive hardware, including the ability to interact with the system via touch screen for the first time. BMW's Connected Drive services were further enhanced with this upgrade to the iDrive system, and the TCB module was replaced with a newer, faster ATM module. NBT EVO also introduced basic gesture controls as an optional extra on select BMW models. Three Interface options exist for NBT EVO. ID4 looks like CIC NBT while ID5 and ID6 feature the new horizontal tile interface. 
NBT EVO was available on the following BMW Models: 1-Series F20/F21 - 03/2015 – 2019 2-Series F22 - 03/2015 – 2021 2-Series F23 - 11/2014 – 2021 3-Series F30/F31/F34/F80 - 07/2015 – 2018 3-Series G20 - 2019 - 2022 (base system, when no Live Cockpit Plus or Professional is equipped) 4-Series F32/F33/F36 - 07/2015 – 2019 5-Series G30 - 10/2016 – 2019 6-Series F06/F12/F13 - 03/2013 – 2018 6-Series G32 - 07/2017 – 2018 7-Series G12 - 07/2015 – 2019 X1 F48 - 06/2015 – 06/2022 X2 F39 - 11/2017 – 10/2022 X3 F25 - 03/2016 – 2017 X3 G01 - 11/2017 – present X4 F26 - 03/2016 – 2018 X5 F15/F85 - 07/2016 – 2018 X6 F16/F86 - 07/2016 – 2019 i3 (ID6.0) 09/2018–07/2022 i8 (ID6.0) 09/2018- 2020 BMW Live Cockpit series [iDrive 7.0] iDrive consists of the MGU hardware (Media Graphics Unit) running the 7th generation of iDrive called BMW Operating System 7.0. Two Live Cockpit configurations are available: Live Cockpit Plus and Live Cockpit Professional. The Live Cockpit Plus system uses a hybrid analog/digital instrument cluster with a 5.7-inch Driver's Information Display and a 8.8-inch main display. In the Live Cockpit Professional System these are upgraded to a 12.3-inch digital instrument cluster and a 10.25-inch main display. iDrive 7.0 is available on the following BMW models: BMW 1 Series (F40) BMW 2 Series (F44) BMW 3 Series (G20) BMW 4 Series (G22) BMW 5 Series (G30) BMW 6 Series (G32) BMW 7 Series (G11) BMW 8 Series (G15) BMW X3 (G01) BMW iX3 (G08) BMW X4 (G02) BMW X5 (G05) BMW X6 (G06) BMW X7 (G07) BMW Z4 (G29) BMW Curved Display [iDrive 8.0] BMW unveiled the 8th generation of iDrive in 2021 with BMW Curved Display. iDrive 8.0 is available on the following BMW models: BMW 1 Series (F70) BMW 2 Series Active Tourer (U06) BMW 2 Series Coupé (G42) (after summer 2022) BMW 3 Series (G20 facelift) BMW 5 Series (G60) BMW 7 Series (G70) BMW iX1 BMW i4 BMW iX BMW XM BMW X1 (U11) BMW X2 (U10) BMW X3 (G45) BMW X5 (G05 facelift) BMW X6 (G06 facelift) BMW X7 (G07 facelift) BMW is reportedly releasing iDrive 9 on a new infotainment head unit based on Android OS starting in March 2023. Rationale The design rationale of iDrive is to replace an array of controls for the above systems with an all-in-one unit. The controls necessary for vehicle control and safety, such as the headlights and turn signals, are still located in the immediate vicinity of the steering column. Since, in the rationale of the designers, the air conditioning, car audio, navigation and communication controls are not used equally often, they have been moved into a central location. The iDrive M-ASK and CCC systems were based around the points of a compass (north, south, east, west) with each direction corresponding with a specific area. These areas are also colour-coded providing identification as to which part of the system is currently being viewed. North (blue) for communication East (green) for navigation (In some models without navigation, this option is replaced by the On Board Computer) South (brown) for entertainment West (red) for climate control Starting in 2007, iDrive added programmable buttons (6 USA/Japan, 8 in Europe) to the dashboard, breaking tradition of having the entire system operated via the control knob. Each button can be programmed to instantly access any feature within iDrive (such as a particular navigation route, or one's favorite radio station). In addition, a dedicated AM/FM button, and a Mode button (to switch between entertainment sources) were added for North American-market vehicles. 
Older versions of iDrive used a widescreen display that was split into a 2/3 main window, and 1/3 "Assistance Window". This allowed the driver to use a function or menu, while simultaneously maintaining secondary information. For example, if the driver was not in the Navigation menu, they could still see a map on the assistance window. Other information that could be displayed included navigation route directions and a trip computer. Controversy iDrive caused significant controversy among users, the automotive media, and critics when it was first introduced. Many reviewers of BMW vehicles in automobile magazines disapproved of the system. Criticisms of iDrive included its steep learning curve and its tendency to cause the driver to look away from the road too much. Most users report that they adapt to the system after about one year of practice, and the advent of voice controls has reduced the learning curve greatly. A new iDrive system (CIC) was introduced in September 2008 to address most of the complaints. iDrive NBT, introduced in 2012, brought further improvements. References External links Third Generation BMW iDrive in the F01/F02 BMW 7 Series Operation Video Advanced driver assistance systems Automotive technology tradenames BMW Human–computer interaction In-car entertainment Rolls-Royce Vehicle telematics
BMW iDrive
[ "Engineering" ]
3,814
[ "Human–computer interaction", "Human–machine interaction" ]
1,013,680
https://en.wikipedia.org/wiki/Evaluation%20Assurance%20Level
The Evaluation Assurance Level (EAL1 through EAL7) of an IT product or system is a numerical grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. The increasing assurance levels reflect added assurance requirements that must be met to achieve Common Criteria certification. The intent of the higher levels is to provide higher confidence that the system's principal security features are reliably implemented. The EAL level does not measure the security of the system itself; it simply states at what level the system was tested. To achieve a particular EAL, the computer system must meet specific assurance requirements. Most of these requirements involve design documentation, design analysis, functional testing, or penetration testing. The higher EALs involve more detailed documentation, analysis, and testing than the lower ones. Achieving a higher EAL certification generally costs more money and takes more time than achieving a lower one. The EAL number assigned to a certified system indicates that the system completed all requirements for that level. Although every product and system must fulfill the same assurance requirements to achieve a particular level, they do not have to fulfill the same functional requirements. The functional features for each certified product are established in the Security Target document tailored for that product's evaluation. Therefore, a product with a higher EAL is not necessarily "more secure" in a particular application than one with a lower EAL, since they may have very different lists of functional features in their Security Targets. A product's fitness for a particular security application depends on how well the features listed in the product's Security Target fulfill the application's security requirements. If the Security Targets for two products both contain the necessary security features, then the higher EAL should indicate the more trustworthy product for that application. Assurance levels EAL1: Functionally Tested EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. It will be of value where independent assurance is required to support the contention that due care has been exercised with respect to the protection of personal or similar information. EAL1 provides an evaluation of the TOE (Target of Evaluation) as made available to the customer, including independent testing against a specification, and an examination of the guidance documentation provided. It is intended that an EAL1 evaluation could be successfully conducted without assistance from the developer of the TOE, and for minimal cost. An evaluation at this level should provide evidence that the TOE functions in a manner consistent with its documentation, and that it provides useful protection against identified threats. EAL2: Structurally Tested EAL2 requires the cooperation of the developer in terms of the delivery of design information and test results, but should not demand more effort on the part of the developer than is consistent with good commercial practice. As such it should not require a substantially increased investment of cost or time. EAL2 is therefore applicable in those circumstances where developers or users require a low to moderate level of independently assured security in the absence of ready availability of the complete development record. Such a situation may arise when securing legacy systems. 
EAL3: Methodically Tested and Checked EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices. EAL3 is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the TOE and its development without substantial re-engineering. EAL4: Methodically Designed, Tested and Reviewed EAL4 permits a developer to gain maximum assurance from positive security engineering based on good commercial development practices which, though rigorous, do not require substantial specialist knowledge, skills, and other resources. EAL4 is the highest level at which it is likely to be economically feasible to retrofit to an existing product line. EAL4 is therefore applicable in those circumstances where developers or users require a moderate to high level of independently assured security in conventional commodity TOEs and are prepared to incur additional security-specific engineering costs. Commercial operating systems that provide conventional, user-based security features are typically evaluated at EAL4. Examples with expired Certificate are AIX, HP-UX, Oracle Linux, NetWare, Solaris, SUSE Linux Enterprise Server 9, SUSE Linux Enterprise Server 10, Red Hat Enterprise Linux 5, Windows 2000 Service Pack 3, Windows 2003, Windows XP, Windows Vista, Windows 7, Windows Server 2008 R2, z/OS version 2.1 and z/VM version 6.3. Operating systems that provide multilevel security are evaluated at a minimum of EAL4. Examples with active Certificate include SUSE Linux Enterprise Server 15 (EAL 4+). Examples with expired Certificate are Trusted Solaris, Solaris 10 Release 11/06 Trusted Extensions, an early version of the XTS-400, VMware ESXi version 4.1, 3.5, 4.0, AIX 4.3, AIX 5L, AIX 6, AIX7, Red Hat 6.2 & SUSE Linux Enterprise Server 11 (EAL 4+). vSphere 5.5 Update 2 did not achieve EAL4+ level it was an EAL2+ and certified on June 30, 2015. EAL5: Semiformally Designed and Tested EAL5 permits a developer to gain maximum assurance from security engineering based upon rigorous commercial development practices supported by moderate application of specialist security engineering techniques. Such a TOE will probably be designed and developed with the intent of achieving EAL5 assurance. It is likely that the additional costs attributable to the EAL5 requirements, relative to rigorous development without the application of specialized techniques, will not be large. EAL5 is therefore applicable in those circumstances where developers or users require a high level of independently assured security in a planned development and require a rigorous development approach without incurring unreasonable costs attributable to specialist security engineering techniques. Numerous smart card devices have been evaluated at EAL5, as have multilevel secure devices such as the Tenix Interactive Link. XTS-400 (STOP 6) is a general-purpose operating system which has been evaluated at EAL5 augmented. LPAR on IBM System z is EAL5 Certified. EAL6: Semiformally Verified Design and Tested EAL6 permits developers to gain high assurance from application of security engineering techniques to a rigorous development environment in order to produce a premium TOE for protecting high-value assets against significant risks. 
EAL6 is therefore applicable to the development of security TOEs for application in high risk situations where the value of the protected assets justifies the additional costs. Green Hills Software's INTEGRITY-178B RTOS has been certified to EAL6 augmented. EAL7: Formally Verified Design and Tested EAL7 is applicable to the development of security TOEs for application in extremely high risk situations and/or where the high value of the assets justifies the higher costs. Practical application of EAL7 is currently limited to TOEs with tightly focused security functionality that is amenable to extensive formal analysis. The ProvenCore OS, developed by ProvenRun, has been certified to EAL7 in 2019 by the ANSSI. The Tenix Interactive Link Data Diode Device and the Fox-IT Fox Data Diode (one-way data communications device) claimed to have been evaluated at EAL7 augmented (EAL7+). Implications of assurance levels Technically speaking, a higher EAL means nothing more, or less, than that the evaluation completed a more stringent set of quality assurance requirements. It is often assumed that a system that achieves a higher EAL will provide its security features more reliably (and the required third-party analysis and testing performed by security experts is reasonable evidence in this direction), but there is little or no published evidence to support that assumption. Impact on cost and schedule In 2006, the US Government Accountability Office published a report on Common Criteria evaluations that summarized a range of costs and schedules reported for evaluations performed at levels EAL2 through EAL4. In the mid to late 1990s, vendors reported spending US$1 million and even US$2.5 million on evaluations comparable to EAL4. There have been no published reports of the cost of the various Microsoft Windows security evaluations. Augmentation of EAL requirements In some cases, the evaluation may be augmented to include assurance requirements beyond the minimum required for a particular EAL. Officially this is indicated by following the EAL number with the word augmented and usually with a list of codes to indicate the additional requirements. As shorthand, vendors will often simply add a "plus" sign (as in EAL4+) to indicate the augmented requirements. EAL notation The Common Criteria standards denote EALs as shown in this article: the prefix "EAL" concatenated with a digit 1 through 7 (Examples: EAL1, EAL3, EAL5). In practice, some countries place a space between the prefix and the digit (EAL 1, EAL 3, EAL 5). The use of a plus sign to indicate augmentation is an informal shorthand used by product vendors (EAL4+ or EAL 4+). References External links CCEVS Validated Products List Common Criteria Assurance Level information from IACS Cisco Common Criteria Certifications IBM AIX operating system certifications Microsoft Windows and the Common Criteria Certification SUSE Linux awarded government security cert XTS-400 information Understanding the Windows EAL4 Evaluation Computer security procedures Evaluation of computers Management cybernetics de:Evaluation Assurance Level
Evaluation Assurance Level
[ "Technology", "Engineering" ]
1,952
[ "Cybersecurity engineering", "Computers", "Computer security procedures", "Evaluation of computers" ]
1,013,718
https://en.wikipedia.org/wiki/RecQ%20helicase
RecQ helicase is a family of helicase enzymes initially found in Escherichia coli that has been shown to be important in genome maintenance. They function through catalyzing the reaction ATP + H2O → ADP + P and thus driving the unwinding of paired DNA and translocating in the 3' to 5' direction. These enzymes can also drive the reaction NTP + H2O → NDP + P to drive the unwinding of either DNA or RNA. Function In prokaryotes RecQ is necessary for plasmid recombination and DNA repair from UV-light, free radicals, and alkylating agents. This protein can also reverse damage from replication errors. In eukaryotes, replication does not proceed normally in the absence of RecQ proteins, which also function in aging, silencing, recombination and DNA repair. Structure RecQ family members share three regions of conserved protein sequence referred to as the: N-terminal – Helicase middle – RecQ-conserved (RecQ-Ct) and C-terminal – Helicase-and-RNase-D C-terminal (HRDC) domains. The removal of the N-terminal residues (Helicase and, RecQ-Ct domains) impairs both helicase and ATPase activity but has no effect on the binding ability of RecQ implying that the N-terminus functions as the catalytic end. Truncations of the C-terminus (HRDC domain) compromise the binding ability of RecQ but not the catalytic function. The importance of RecQ in cellular functions is exemplified by human diseases, which all lead to genomic instability and a predisposition to cancer. Clinical significance There are at least five human RecQ genes; and mutations in three human RecQ genes are implicated in heritable human diseases: WRN gene in Werner syndrome (WS), BLM gene in Bloom syndrome (BS), and RECQL4 in Rothmund–Thomson syndrome. These syndromes are characterized by premature aging, and can give rise to the diseases of cancer, type 2 diabetes, osteoporosis, and atherosclerosis, which are commonly found in old age. These diseases are associated with high incidence of chromosomal abnormalities, including chromosome breaks, complex rearrangements, deletions and translocations, site specific mutations, and in particular sister chromatid exchanges (more common in BS) that are believed to be caused by a high level of somatic recombination. Mechanism The proper function of RecQ helicases requires the specific interaction with topoisomerase III (Top 3). Top 3 changes the topological status of DNA by binding and cleaving single stranded DNA and passing either a single stranded or a double stranded DNA segment through the transient break and finally re-ligating the break. The interaction of RecQ helicase with topoisomerase III at the N-terminal region is involved in the suppression of spontaneous and damage induced recombination and the absence of this interaction results in a lethal or very severe phenotype. The emerging picture clearly is that RecQ helicases in concert with Top 3 are involved in maintaining genomic stability and integrity by controlling recombination events, and repairing DNA damage in the G2-phase of the cell cycle. The importance of RecQ for genomic integrity is exemplified by the diseases that arise as a consequence of mutations or malfunctions in RecQ helicases; thus it is crucial that RecQ is present and functional to ensure proper human growth and development. WRN helicase The Werner syndrome ATP-dependent helicase (WRN helicase) is unusual among RecQ DNA family helicases in having an additional exonuclease activity. WRN interacts with DNA-PKcs and the Ku protein complex. 
This observation, combined with evidence that WRN deficient cells produce extensive deletions at sites of joining of non-homologous DNA ends, suggests a role for WRN protein in the DNA repair process of non-homologous end joining (NHEJ). WRN also physically interacts with the major NHEJ factor X4L4 (XRCC4-DNA ligase 4 complex). X4L4 stimulates WRN exonuclease activity that likely facilitates DNA end processing prior to final ligation by X4L4. WRN also appears to play a role in resolving recombination intermediate structures during homologous recombinational repair (HRR) of DNA double-strand breaks. WRN participates in a complex with RAD51, RAD54, RAD54B and ATR proteins in carrying out the recombination step during inter-strand DNA cross-link repair. Evidence was presented that WRN plays a direct role in the repair of methylation induced DNA damage. The process likely involves the helicase and exonuclease activities of WRN that operate together with DNA polymerase beta in long patch base excision repair. WRN was found to have a specific role in preventing or repairing DNA damages resulting from chronic oxidative stress, particularly in slowly replicating cells. This finding suggested that WRN may be important in dealing with oxidative DNA damages that underlie normal aging (see DNA damage theory of aging). BLM helicase Cells from humans with Bloom syndrome are sensitive to DNA damaging agents such as UV and methyl methanesulfonate indicating deficient DNA repair capability. The budding yeast Saccharomyces cerevisiae encodes an ortholog of the Bloom syndrome (BLM) protein that is designated Sgs1 (Small growth suppressor 1). Sgs1(BLM) is a helicase that functions in homologous recombinational repair of DNA double-strand breaks. The Sgs1(BLM) helicase appears to be a central regulator of most of the recombination events that occur during S. cerevisiae meiosis. During normal meiosis Sgs1(BLM) is responsible for directing recombination towards the alternate formation of either early non-crossovers or Holliday junction joint molecules, the latter being subsequently resolved as crossovers. In the plant Arabidopsis thaliana, homologs of the Sgs1(BLM) helicase act as major barriers to meiotic crossover formation. These helicases are thought to displace the invading strand allowing its annealing with the other 3'overhang end of the double-strand break, leading to non-crossover recombinant formation by a process called synthesis-dependent strand annealing (SDSA) (see Wikipedia article "Genetic recombination"). It is estimated that only about 5% of double-strand breaks are repaired by crossover recombination. Sequela-Arnaud et al. suggested that crossover numbers are restricted because of the long-term costs of crossover recombination, that is, the breaking up of favorable genetic combinations of alleles built up by past natural selection. RECQL4 helicase In humans, individuals with Rothmund–Thomson syndrome, and carrying the RECQL4 germline mutation, have several clinical features of accelerated aging. These features include atrophic skin and pigment changes, alopecia, osteopenia, cataracts and an increased incidence of cancer. RECQL4 mutant mice also show features of accelerated aging. RECQL4 has a crucial role in DNA end resection that is the initial step required for homologous recombination (HR)-dependent double-strand break repair. When RECQL4 is depleted, HR-mediated repair and 5' end resection are severely reduced in vivo. 
RECQL4 also appears to be necessary for other forms of DNA repair including non-homologous end joining, nucleotide excision repair and base excision repair. The association of deficient RECQL4 mediated DNA repair with accelerated aging is consistent with the DNA damage theory of aging. See also Bloom syndrome References Further reading External links RecQ Helicases , introduction at UNC's Sekelsky Lab. BLM gene encodes a RecQ Helicase, description of the gene EC 3.6.1 Aging-related enzymes Helicases Senescence DNA repair
RecQ helicase
[ "Chemistry", "Biology" ]
1,746
[ "DNA repair", "Aging-related enzymes", "Senescence", "Molecular genetics", "Cellular processes", "Metabolism" ]
1,013,768
https://en.wikipedia.org/wiki/LAPACK
LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines. LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors, and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation. LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK. Netlib LAPACK is licensed under a three-clause BSD style license, a permissive free software license with few restrictions. Naming scheme Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit. A LAPACK subroutine name is in the form pmmaaa, where: p is a one-letter code denoting the type of numerical constants used. S, D stand for real floating-point arithmetic respectively in single and double precision, while C and Z stand for complex arithmetic with respectively single and double precision. The newer version, LAPACK95, uses generic subroutines in order to overcome the need to explicitly specify the data type. mm is a two-letter code denoting the kind of matrix expected by the algorithm. The codes for the different kind of matrices are reported below; the actual data are stored in a different format depending on the specific kind; e.g., when the code DI is given, the subroutine expects a vector of length n containing the elements on the diagonal, while when the code GE is given, the subroutine expects an array containing the entries of the matrix. aaa is a one- to three-letter code describing the actual algorithm implemented in the subroutine, e.g. SV denotes a subroutine to solve linear system, while R denotes a rank-1 update. For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV. Use with other programming languages and libraries Many programming environments today support the use of libraries with C binding (LAPACKE, a standardised C interface, has been part of LAPACK since version 3.4.0), allowing LAPACK routines to be used directly so long as a few restrictions are observed. Additionally, many other software libraries and tools for scientific and numerical computing are built on top of LAPACK, such as R, MATLAB, and SciPy. 
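For example, the double-precision general-matrix solver named DGESV by this scheme can be reached from Python through SciPy's low-level LAPACK wrappers. This is a small sketch; in ordinary code the higher-level scipy.linalg.solve wrapper would normally be used instead.

import numpy as np
from scipy.linalg import lapack

# DGESV: D = double precision, GE = general matrix, SV = solve a linear system A x = b.
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([[10.0],
              [12.0]])

lu, piv, x, info = lapack.dgesv(A, b)    # LU factors, pivot indices, solution, status code
if info == 0:
    print("solution:", x.ravel())        # expected: [1. 2.]
    print("residual:", (A @ x - b).ravel())
else:
    print("dgesv reported info =", info)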
Several alternative language bindings are also available: Armadillo for C++ IT++ for C++ LAPACK++ for C++ Lacaml for OCaml SciPy for Python Gonum for Go PDL::LinearAlgebra for Perl Data Language Math::Lapack for Perl NLapack for .NET CControl for C in embedded systems lapack for rust Implementations As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems. Some of the implementations are: Accelerate Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK. Netlib LAPACK The official LAPACK. Netlib ScaLAPACK Scalable (multicore) LAPACK, built on top of PBLAS. Intel MKL Intel's Math routines for their x86 CPUs. OpenBLAS Open-source reimplementation of BLAS and LAPACK. Gonum LAPACK A partial native Go implementation. Since LAPACK typically calls underlying BLAS routines to perform the bulk of its computations, simply linking to a better-tuned BLAS implementation can be enough to significantly improve performance. As a result, LAPACK is not reimplemented as often as BLAS is. Similar projects These projects provide a similar functionality to LAPACK, but with a main interface differing from that of LAPACK: Libflame A dense linear algebra library. Has a LAPACK-compatible wrapper. Can be used with any BLAS, although BLIS is the preferred implementation. Eigen A header library for linear algebra. Has a BLAS and a partial LAPACK implementation for compatibility. MAGMA Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK but for heterogeneous and hybrid architectures including multicore systems accelerated with GPGPUs. PLASMA The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement of LAPACK for multi-core architectures. PLASMA is a software framework for development of asynchronous operations and features out of order scheduling with a runtime scheduler called QUARK that may be used for any code that expresses its dependencies with a directed acyclic graph. See also List of numerical libraries Math Kernel Library (MKL) NAG Numerical Library SLATEC, a FORTRAN 77 library of mathematical and statistical routines QUADPACK, a FORTRAN 77 library for numerical integration References Fortran libraries Free software programmed in Fortran Numerical linear algebra Numerical software Software using the BSD license
LAPACK
[ "Mathematics" ]
1,248
[ "Numerical software", "Mathematical software" ]
1,013,793
https://en.wikipedia.org/wiki/Tube%20socket
Tube sockets are electrical sockets into which vacuum tubes (electronic valves) can be plugged, holding them in place and providing terminals, which can be soldered into the circuit, for each of the pins. Sockets are designed to allow tubes to be inserted in only one orientation. They were used in most tube electronic equipment to allow easy removal and replacement. When tube equipment was common, retailers such as drug stores had vacuum tube testers, and sold replacement tubes. Some Nixie tubes were also designed to use sockets. Throughout the tube era, as technology developed, sometimes differently in different parts of the world, many tube bases and sockets came into use. Sockets are not universal; different tubes may fit mechanically into the same socket, though they may not work properly and possibly become damaged. Tube sockets were typically mounted in holes on a sheet metal chassis and wires or other components were hand soldered to lugs on the underside of the socket. In the 1950s, printed circuit boards were introduced and tube sockets were developed whose contacts could be soldered directly to the printed wiring tracks. Looking at the bottom of a socket, or, equivalently, a tube from its bottom, the pins were numbered clockwise, starting at an index notch or gap, a convention that has persisted into the integrated circuit era. In the 1930s, tubes often had the connection to the control grid brought out through a metal top cap on the top of the tube. This was connected by using a clip with an attached wire lead. An example would be the 6A7 pentagrid converter. Later, some tubes, particularly those used as radio frequency (RF) power amplifiers or horizontal deflection amplifiers in TV sets, such as the 6DQ6, had the plate or anode lead protrude through the envelope. In both cases this allowed the tube's output circuitry to be isolated from the input (grid) circuit more effectively. In the case of the tubes with the plate brought out to a cap, this also allowed the plate to run at higher voltages (over 26,000 volts in the case of rectifiers for color television, such as the 3A3, as well as high-voltage regulator tubes.) A few unusual tubes had caps for both grid and plate; the caps were symmetrically placed, with divergent axes. The first tubes The earliest tubes, like the Deforest Spherical Audion from , used the typical light bulb Edison socket for the heater, and flying leads for the other elements. Other tubes directly used flying leads for all of their contacts, like the Cunningham AudioTron from 1915, or the Deforest Oscillion. Type C6A xenon thyratrons, used in servos for the U.S. Navy Stable Element Mark 6, had a mogul screw base and L-shaped stiff wires at the top for grid and anode connections. Mating connectors were machined pairs of brass blocks with clamping screws, attached to flying leads (free hanging). Early bases When tubes became more widespread, and new electrodes were added, more connections were required. Specially designed bases were created to account for this need. However, as the world was suffering from World War I, and the new electronics technology was just emerging, designs were far from being standardized. Usually, each company had their own tubes and sockets, which were not interchangeable with tubes from other companies. By the early 1920s, this situation was finally changing, and several standard bases were created. They consisted of a base (ceramic, metal, bakelite, etc.) 
with a number of prongs ranging from three to seven, with either a non-regular distribution or with one or two of the prongs of bigger diameter than the other, so that the tube could only be inserted in a certain position. Sometimes they relied on a bayonet on the side of the base. Examples of these are the very common USA bases UX4, UV4, UY5 and UX6, and the European B5, B6, B7, B8, C7, G8A, etc. Tubes in the USA typically had from four to seven pins in a circular array, with adjacent pairs of larger pins for heater connections. Before alternating current (AC) line/mains-powered radios were developed, some four-pin tubes (in particular, the very common UX-201A ('01A)) had a bayonet pin on the side of a cylindrical base. The socket used that pin for retaining the tube; insertion finished with a slight clockwise turn. Leaf springs, essentially all in the same plane, pressed upward on the bottoms of the pins, also keeping the bayonet pin engaged. The first hot-cathode CRT, the Western Electric 224-B, had a standard four-pin bayonet base, and the bayonet pin was a live connection. (Five effective pins: It was an electrostatic-deflection gas-focused type, with a diode gun and single-ended deflection. The anode and the other two plates were common.) An early exception to these types of bases is the Peanut 215, which instead of using prongs had a tiny bayonet base with four drop-like contacts. Another exception is the European Side Contact series commonly known as P, which instead of using a prong, relied on side contacts at 90 degrees from the tube axis with four to twelve contacts. Octal In April 1935, the General Electric Company introduced a new eight-pin tube base with their new metal envelope tubes. The new base became known as the octal base. The octal base provided one more conductor with a smaller overall size of the base than the previous line of U. S. tube bases which had provided a maximum of seven conductors. Octal bases, as defined in IEC 60067, diagram IEC 67-I-5a, have a 45-degree angle between pins, which form a diameter circle around a diameter keyed post (sometimes called a spigot) in the center. Octal sockets were designed to accept octal tubes, the rib in the keyed post fitting an indexing slot in the socket so the tube could only be inserted in one orientation. When used on metal tubes, pin 1 was always reserved for a connection to the metal shell, which was usually grounded for shielding purposes. This reservation prevented tubes such as the 6SL7/6SN7 dual triodes from being issued with metal envelopes, as such valves need three connections (cathode, grid, anode) for each triode (making six total) plus two connections for the paralleled heaters. The octal base soon caught on for glass tubes, where the large central post could also house and protect the "evacuation tip" of the glass tube. The eight available pins allowed more complex tubes than before, such as dual triodes, to be constructed. The glass envelope of an octal base tube was cemented into a bakelite or plastic base with a hollow post in the center, surrounded by eight metal pins. The wire leads from the tube were soldered into the pins, and the evacuation tip was protected inside the post. Matching plugs were also manufactured that let tube sockets be used as eight-pin electrical connectors; bases from discarded tubes could be salvaged for this purpose. Octal sockets were used to mount other components, particularly electrolytic capacitor assemblies and electrical relays; octal-mount relays are still common. 
Most octal tubes following the widespread European designation system have penultimate digit "3" as in ECC34 (full details in the Mullard–Philips tube designation article). There is a different, totally obsolete, pre-world-war-II German octal type. Octal and miniature tubes are still in use in tube-type audio hi-fi and guitar amplifiers. Relays were historically manufactured in a vacuum tube form, and industrial-grade relays continue to use the octal base for their pinout. Loctal A variant of the octal base, the B8G loctal base or lock-in base (sometimes spelled "loktal" — trademarked by Sylvania), was developed by Sylvania for ruggedized applications such as automobile radios. Along with B8B (a British designation out of date by 1958), these eight-pin locking bases are almost identical and the names usually taken as interchangeable (although there are some minor differences in specifications, such as spigot material and spigot taper, etc.). The pin geometry was the same as for octal, but the pins were thinner (although they will fit into a standard octal socket, they wobble and do not make good contact), the base shell was made of aluminium, and the center hole had an electrical contact that also mechanically locked (hence "loctal") the tube in place. Loctal tubes were only used widely by a few equipment manufacturers, most notably Philco, which used the tubes in many table radios. Loctal tubes have a small indexing mark on the side of the base skirt; they do not release easily from their sockets unless pushed from that side. Because the pins are actually the Fernico or Cunife lead-out wires from the tube, they are prone to intermittent connections caused by the build-up of electrolytic corrosion products due to the pin being of a different metallic composition to the socket contact. The loctal tube's structure was supported directly by the connecting pins passing through the glass "button" base. Octal tube structures were supported on a glass "pinch", formed by heating the bottom of the envelope to fusing temperature, then squeezing the pinch closed. Sealing the pinch embedded the connecting wires in the pinch's glass and gave a vacuum-tight seal. The connecting wires then passed through the hollow base pins, where they were soldered to make permanent connections. Loctal tubes had shorter connecting lengths between the socket pins and the internal elements than did their octal counterparts. This allowed them to operate at higher frequencies than octal tubes. The advent of miniature "all-glass" seven- and nine-pin tubes overtook both octals and loctals, so the loctal's higher-frequency potential was never fully exploited. Loctal tube type numbers in the USA typically begin with "7" (for 6.3-volt types) or "14" for 12.6-volt types. This was fudged by specifying the heater voltage as nominally 7 or 14 volts so that the tube nomenclature fitted. Battery types (mostly 1.4-volt) are coded "1Lxn", where x is a letter and "n" a number, such as "1LA4". Russian loctals end in L, e.g. 6J1L. European designations are ambiguous; all B8G loctals have numbers either in the range: 20–29, (such as EBL21, ECH21, EF22) except for early tubes in the series: DAC21, DBC21, DCH21, DF21, DF22, DL21, DLL21, DM21 which have either B9G or octal bases, the change to Sylvania's locktal standard coming in 1942 or 50–59 (special bases, including the European 9-pin lock-in base), but other types are in the same range (e.g. 
while EF51 is B8G loctal, the EF55 is 9-pin loctal, B9G, and the EL51 has a side-contact P8A base). Other loctals Nine-pin loctal bases, B9G, include the 1938 Philips EF50, EL60 and some type numbers in the European 20–29 and 50–59 range; There is a different "loctal Lorenz" in the Mullard–Philips tube designation . Miniature tubes Efforts to introduce small tubes into the marketplace date from the 1920s, when experimenters and hobbyists made radios with so-called peanut tubes like the Peanut 215 mentioned above. Because of the primitive manufacturing techniques of the time, these tubes were too unreliable for commercial use. RCA announced new miniature tubes in Electronics magazine, which proved reliable. The first ones, such as the 6J6 ECC91 VHF dual triode, were introduced in 1939. The bases commonly referred to as "miniature" are the seven-pin B7G type, and the slightly later nine-pin B9A (Noval). The pins are arranged evenly in a circle of eight or ten evenly spaced positions, with one pin omitted; this allows the tube to be inserted in only one orientation. Keying by omitting a pin is also used in 8- (subminiature), 10-, and 12-pin (Compactron) tubes (a variant 10-pin form, "Noval+1", is basically a nine-pin socket with an added center contact). As with loctal tubes, the pins of miniature tube are stiff wires protruding through the bottom of the glass envelope which plug directly into the socket. However, unlike all their predecessors, miniature tubes are not fitted with separate bases; the base is an integral part of the glass envelope. The pinched-off air evacuation nub is at the top of the tube, giving it its distinctive appearance. More than one functional section can be included in a single envelope; a dual triode configuration is particularly common. Seven- and nine-pin tubes were common, though miniature tubes with more pins, such as the Compactron series, were later introduced, and could fit up to three amplifying elements. Some miniature tube sockets had a skirt that mated with a cylindrical metal electrostatic shield that surrounded the tube, fitted with a spring to hold the tube in place if the equipment was subject to vibration. Sometimes the shield was also fitted with thermal contacts to transfer heat from the glass envelope to the shield and act as a heat sink, which was considered to improve tube life in higher power applications. Electrolytic effects from the differing metal alloys used for the miniature tube pins (usually Cunife or Fernico) and the tube base could cause intermittent contact due to local corrosion, especially in relatively low current tubes, such as were used in battery-operated radio sets. Malfunctioning equipment with miniature tubes can sometimes be brought back to life by removing and reinserting the tubes, disturbing the insulating layer of corrosion. Miniature tubes were widely manufactured for military use during World War II, and also used in consumer equipment. The Sonora Radio and Television Corporation produced the first radio using these miniature tubes, the "Candid", in April 1940. In June 1940 RCA released its battery-operated Model BP-10, the first superheterodyne receiver small enough to fit in a handbag or coat pocket. This model had the following tube lineup: 1R5 — pentagrid converter; 1T4 — I.F. amplifier; 1S5 — Detector/AVC/AF Amplifier; 1S4 — Audio Output. The BP-10 proved so popular that Zenith, Motorola, Emerson, and other radio manufacturers produced similar pocket radios based on RCA's miniature tubes. 
Several of these pocket radios were introduced in 1941 and sold until the suspension of radio production in April 1942 for the duration of World War II. After the war miniature tubes continued to be manufactured for civilian use, regardless of any technical advantage, as they were cheaper than octals and loctals. Miniature seven-pin base The B7G (or "small-button" or "heptal") seven-pin miniature tubes are smaller than Noval, with seven pins arranged at 45-degree spacing in a 9.53 mm (3/8th inch) diameter arc, the "missing" pin position being used to position the tube in its socket (unlike octal, loctal and rimlock sockets). Examples include the 6AQ5/EL90 and 6BE6/EK90. European tubes of this type have numbers 90-99, 100-109, 190-199, 900-999. A few in the 100-109 series have unusual, non-B7G bases, e.g., Wehrmacht base. Noval base The nine-pin miniature Noval B9A base, sometimes called button 9-pin, B9-1, offered a useful reduction in physical size compared to previous common types, such as octal (especially important in TV receivers where space was limited), while also providing a sufficient number of connections (unlike B7G) to allow effectively unrestricted access to all the electrodes, even of relatively complex tubes such as double triodes and triode-hexodes. It could also provide multiple connections to an electrode of a simpler device where useful, as in the four connections to the grid of a conventional grounded-grid UHF triode, e.g., 6AM4, to minimise the deleterious effects of lead inductance on the high-frequency performance. This base type was used by many of the United States and most of the European tubes, e.g., 12AX7-ECC83, EF86 and EL84, produced commercially towards the end of the era before transistors largely displaced their use. The IEC 67-I-12a specification calls for a 36-degree angle between the nine pins of 1.016 mm thickness, in an arc of diameter 11.89 mm. European tubes of this type have numbers 80-89, 180-189, 280-289, 800-899, 8000-8999. Duodecar base The Duodecar B12C base (IEC 67-I-17a) has 12 pins in a 19.1 mm diameter circle and dates from 1961. It was also called the Compactron T-9 construction/E12-70 base It is generally similar in form to a Noval socket, but larger. In the center is a clearance hole for a tube evacuation pip, which is typically on the bottom of a Compactron tube. (It should not be confused with the similar-sounding but differently sized Duodecal B12A base.) Rimlock base The Rimlock (B8A) base is an eight-pin design with a pin circle diameter close to Noval, and uses a nub on the side of the envelope to engage with a guide and retaining spring in the socket wall. This provides pin registration (since the pins are equi-spaced) and also a fair degree of retention. Early tubes with this base type typically had a metal skirt around the lower ~15mm of the envelope to match the socket wall, and this offered a degree of built-in screening, but these were fairly soon replaced by "skirtless" versions, which had a characteristic widening in the glass to compensate physically for the absence of the skirt. In the European naming scheme, rimlock tubes are numbered in the ranges 40-49, 110-119 (with exceptions), and 400-499, e.g., EF40. Although virtually unknown elsewhere, this was a very common base type in European radios of the late 1940s through the 1950s, but was eventually displaced by the ubiquitous B7G and Noval (B9A) base types. UHF tubes By 1935 new tube technologies were required for the development of radar and telecommunications. 
UHF requirements severely limited the existing tubes, so radical ideas were implemented which affected how these tubes connected to the host system. Two new bases appeared, the acorn tube and the lighthouse tube, both solving the same problems but with different approaches. Thompson, G.M. Rose, Saltzberg and Burnside from RCA created the acorn tube by using far smaller electrodes, with radial short connections. A different approach was taken by the designers of the lighthouse tube, such as the octal-base 2C43, which relied on using concentric cylindrical metal contacts in connections that minimized inductance, thus allowing a much higher frequency. Nuvistors were very small, reducing stray capacitances and lead inductances. The base and socket were so compact that they were widely used in UHF TV tuners. They could also be used in small-signal applications at lower frequencies, as in the Ampex MR-70, a costly studio tape recorder whose entire electronics section was based on nuvistors. Other socket styles There are many other socket types, of which a few are: Decal B10B base (IEC 67-I-41a) 10 pins with 1.02 mm diameter in an 11.89 mm diameter circle, e.g. PFL200 Decar B10G base (IEC E10-73) A 10th pin added to the center of a standard 9-pin miniature base, e.g. 6C9 Magnoval B9D base (IEC 67-I-36a) 9 pins with 1.27 mm diameter in a 17.45 mm pin circle diameter arc, e.g. EL503, EL509, PD500, etc. - not to be confused with... Novar B9E base, 9 pins with 1.02 mm diameter in a 17.45 mm pin circle diameter arc, one of several Compactron types, which looks similar to Magnoval (but a Novar tube in a Magnoval socket will not make good pin contact, and a Magnoval tube in a Novar socket may damage the socket). Sub-Magnal B11A base (American), 11-pins. Also used as industrial relay socket and HV power supplies. Amphenol / WirePro (WPI) / Eaton 78-series, Socket (female) part number: 78-S-11. Matching Plug (male) is part number: 86-CP-11 Neo Eightar base (IEC 67-I-31a) 8 pins in a 15.24 mm diameter circle 5-pin sub-miniature wire-ended B5A base (no socket used; e.g. EA76) A remarkably wide variety of tube and similar sockets is listed and described, with some informal application notes, at a commercial site, Pacific T.V., including nuvistor, eight-pin subminiature, vidicon, reflex klystron, nine-pin octal-like, 10-pin miniature (two types), 11-pin sub-magnal, diheptal 14-pin, and many display tubes such as Nixies and vacuum fluorescent types (and even more). As well, each socket has a link to a clear, high-quality picture. Some subminiature tubes with flexible wire leads all exiting in the same plane were connected by subminiature inline sockets. Some low-power reflex klystrons such as the 2K25 and 2K45 had small-diameter rigid coaxial outputs parallel to octal base pins. To accommodate the coax, one contact was replaced by a clearance hole. Vacuum tubes for high-power applications often required custom socket designs. A jumbo four-prong socket was used for various industrial tubes. A specialized seven-pin socket (Septar or B7A), with all pins in a circle with one pin wider than the others, was used for transmitting tubes. Subminiature tubes with long wire leads, introduced in the 1950s, were often soldered directly to printed circuit boards. Sockets were made for early transistors, but quickly fell out of favor as transistor reliability became established. This also happened with early integrated circuits; IC sockets later became used only for devices that may need to be upgraded. 
Summary of base details References See also Nuvistor Compactron Amphenol List of vacuum tubes Vacuum tubes
Tube socket
[ "Physics" ]
4,942
[ "Vacuum tubes", "Vacuum", "Matter" ]
1,013,797
https://en.wikipedia.org/wiki/Aqua%20Tofana
Aqua Tofana (also known as Acqua Toffana and Aqua Tufania and Manna di San Nicola) was a strong poison created in Sicily around 1630 that was reputedly widely used in Palermo, Naples, Perugia, and Rome, Italy. It has been associated with Giulia Tofana, or Tofania, a woman from Palermo, purportedly the leader of a ring of six poisoners in Rome, who sold Aqua Tofana to would-be murderers. Original creation The first recorded mention of Aqua Tofana is from 1632–33 when it was used by two women, Francesca la Sarda and Teofania di Adamo, to poison their victims. It may have been invented by, and named after, Teofania. She was executed for her crimes, but several women associated with her including Giulia Tofana (who may have been her daughter) and Gironima Spana moved on to Rome and continued manufacturing and distributing the poison. The 'tradename' "Manna di San Nicola" ("Manna of St. Nicholas of Bari") may have been a marketing device intended to divert the authorities, given that the poison was openly sold both as a cosmetic and a devotional object in vials that included a picture of St. Nicholas. Over 600 victims are alleged to have died from this poison, mostly husbands. Between 1666 and 1676, the Marchioness de Brinvilliers poisoned her father and two brothers, amongst others, and she was executed on July 16, 1676. Ingredients The active ingredients of the mixture are known, but not how they were blended. Aqua Tofana contained mostly arsenic and lead, and possibly belladonna. It was a colorless, tasteless liquid and therefore easily mixed with water or wine to be served during meals. Symptoms Poisoning by Aqua Tofana could go unnoticed, as the substance is clear and has no taste. It is slow-acting, with symptoms resembling progressive disease or other natural causes. The symptoms seen are similar to the effects of arsenic poisoning. Those poisoned by Aqua Tofana reported several symptoms. The first small dosage would produce cold-like symptoms. The victim was very ill by the third dose; symptoms included vomiting, dehydration, diarrhea, and a burning sensation in the digestive system. The fourth dose would kill the victim. As it was slow acting, it allowed victims time to prepare for their death, including writing a will and repenting. The antidote often given was vinegar and lemon juice. Legend about Mozart The legend that Wolfgang Amadeus Mozart (1756–1791) was poisoned using Aqua Tofana is completely unsubstantiated, even though it was Mozart himself who started this rumor. References External links Definition at thefreedictionary.com Definition at infoplease.com Poisons Arsenic
Aqua Tofana
[ "Environmental_science" ]
585
[ "Poisons", "Toxicology" ]
1,013,835
https://en.wikipedia.org/wiki/Hypolith
In Arctic and Antarctic ecology, a hypolith is a community of photosynthetic organisms, and extremophiles, that live underneath rocks in climatically extreme deserts such as Cornwallis Island and Devon Island in the Canadian high Arctic. The community itself is the hypolithon. Hypolithons are protected by their rock from harsh ultraviolet irradiation and wind scouring. The rocks can also trap moisture and are generally translucent, allowing light to penetrate while screening out incident ultraviolet light. Writing in Nature, ecologist Charles S. Cockell of the British Antarctic Survey and Dale Stokes (Scripps Institution of Oceanography) describe how hypoliths reported to date (until 2004) had been found under quartz, which is one of the most common translucent minerals. However, Cockell reported that on Cornwallis Island and Devon Island, 94-95% of a random sample of 850 opaque dolomitic rocks were colonized by hypoliths, and found that the communities were dominated by cyanobacteria. The rocks chosen were visually indistinguishable from those nearby, and were about 10 cm across; the hypolithon was visible as a greenish coloured band. Cockell proposed that rock sorting by periglacial action, including that during freeze–thaw cycles, improves light penetration around the edges of rocks (see granular material and Brazil nut effect). Cockell and Stokes went on to estimate the productivity of the Arctic communities by monitoring the uptake of sodium bicarbonate labelled with carbon-14 and found that (for Devon Island) productivity of the hypolithon was comparable to that of plants, lichens, and bryophytes combined (0.8 ± 0.3 g m⁻² y⁻¹ and 1 ± 0.4 g m⁻² y⁻¹ respectively) and concluded that the polar hypolithon may double previous estimates of the productivity of that region of the rocky polar desert. See also Endolith References Extremophiles
Hypolith
[ "Biology", "Environmental_science" ]
422
[ "Organisms by adaptation", "Extremophiles", "Environmental microbiology", "Bacteria" ]
1,013,923
https://en.wikipedia.org/wiki/Datum%20reference
A datum reference or just datum (plural: datums) is some important part of an object—such as a point, line, plane, hole, set of holes, or pair of surfaces—that serves as a reference in defining the geometry of the object and (often) in measuring aspects of the actual geometry to assess how closely they match with the nominal value, which may be an ideal, standard, average, or desired value. For example, on a car's wheel, the lug nut holes define a bolt circle that is a datum from which the location of the rim can be defined and measured. This matters because the hub and rim need to be concentric to within close limits (or else the wheel will not roll smoothly). The concept of datums is used in many fields, including carpentry, metalworking, needlework, geometric dimensioning and tolerancing (GD&T), aviation, surveying, geodesy (geodetic datums), and others. Uses In carpentry, an alternative, more common name is "face side" and "face edge". The artisan nominates two straight edges on a workpiece as the "datum edges", and they are marked accordingly. One convention is to mark the first datum edge with a single slanted line (/) and the second with double lines (//). For most work, the datum references of the workpiece need to be square. If necessary they may be cut, planed or filed to make them so. In subsequent marking out, all measurements are then taken from either of the two datum references. In aviation, an aircraft is designed to operate within a specified range of weight and (chiefly longitudinal) balance; an airman is responsible for determining these factors for each flight under his or her command. This requires the calculation of moment for each variable mass in the aircraft (fuel, passengers, cargo, etc.), by multiplying its weight by its distance from a datum reference. The datum for light airplanes is usually the engine firewall or the tip of the spinner, but in all cases it is a fixed plane perpendicular to the aircraft's longitudinal axis, and specified in its operating handbook. Engineering An engineering datum used in geometric dimensioning and tolerancing is a feature on an object used to create a reference system for measurement. In engineering and drafting, a datum is a reference point, surface, or axis on an object against which measurements are made. These are then referred to by one or more 'datum references' which indicate measurements that should be made with respect to the corresponding datum feature . In geometric dimensioning and tolerancing, datum reference frames are typically 3D. Datum reference frames are used as part of the feature control frame to show where the measurement is taken from. A typical datum reference frame is made up of three planes. For example, the three planes could be one "face side" and two "datum edges". These three planes are marked A, B and C, where A is the face side, B is the first datum edge, and C is the second datum edge. In this case, the datum reference frame is A/B/C. A/B/C is shown at the end of feature control frame to show from where the measurement is taken. (See the ASME standard Y14.5M-2009 for more examples and material modifiers.) The engineer selects A/B/C based on the dimensional function of the part. The datums should be functional per the ASME standard. Typically, a part is required to fit with other parts. So, the functional datums are chosen based on how the part attaches. Note: Typically, the functional datums are not used to manufacture the part. 
The manufacturing datums are typically different from the functional datums to save cost, improve process speed, and repeatability. A tolerance analysis may be needed in many cases to convert between the functional datums and the manufacturing datums. Computer software can be purchased for dimensional analysis. A trained engineer is required to run the software. There are typically 6 degrees of freedom that need to be considered by the engineer before choosing which feature is A, B, or C. For this example, A is the primary datum, B is the secondary, and C is the tertiary datum. The primary datum controls the most degrees of freedom. The tertiary datum controls the least degrees of freedom. For this example, of a block of wood, Datum A controls 3 degrees of freedom, B controls 2 degrees of freedom, and C controls 1 degree of freedom. 3+2+1 = 6, all 6 degrees of freedom are considered. The 6 degrees of freedom in this example are 3 translation and 3 rotation about the 3D coordinate system. Datum A controls 3: translation along the Z axis, rotation about the x axis, and rotation about the y axis. Datum B controls 2: translation along the y axis and rotation about the z axis. Finally, Datum C controls 1 degree of freedom, namely the translation along the x axis. See also Datum (geodesy) Exact constraint Reference frame Surface plate Spatial reference system Notes References Geometry Coordinate systems
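The 3-2-1 allocation of degrees of freedom described above can be tallied explicitly. The short sketch below is purely illustrative; the labels mirror the block-of-wood example in the text and are not taken from the ASME standard itself.

```python
# Degrees of freedom constrained by each datum in the A/B/C (3-2-1) example above.
# Illustrative tally only; labels follow the block-of-wood example, not ASME wording.
dof_constrained = {
    "A (primary, face side)":      ["translate Z", "rotate X", "rotate Y"],  # 3 DOF
    "B (secondary, first edge)":   ["translate Y", "rotate Z"],              # 2 DOF
    "C (tertiary, second edge)":   ["translate X"],                          # 1 DOF
}

all_dof = [d for dofs in dof_constrained.values() for d in dofs]
assert len(all_dof) == 6 and len(set(all_dof)) == 6  # 3 + 2 + 1 = 6, no DOF counted twice

for datum, dofs in dof_constrained.items():
    print(f"{datum}: {len(dofs)} -> {', '.join(dofs)}")
```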
Datum reference
[ "Mathematics" ]
1,088
[ "Geometry", "Coordinate systems" ]
1,013,950
https://en.wikipedia.org/wiki/Heilbronn%20triangle%20problem
In discrete geometry and discrepancy theory, the Heilbronn triangle problem is a problem of placing points in the plane, avoiding triangles of small area. It is named after Hans Heilbronn, who conjectured that, no matter how points are placed in a given area, the smallest triangle area will be at most inversely proportional to the square of the number of points. His conjecture was proven false, but the asymptotic growth rate of the minimum triangle area remains unknown. Definition The Heilbronn triangle problem concerns the placement of n points within a shape D in the plane, such as the unit square or the unit disk, for a given number n. Each triple of points forms the three vertices of a triangle, and among these triangles, the problem concerns the smallest triangle, as measured by area. Different placements of points will have different smallest triangles, and the problem asks: how should the n points be placed to maximize the area of the smallest triangle? More formally, the shape D may be assumed to be a compact set in the plane, meaning that it stays within a bounded distance from the origin and that points are allowed to be placed on its boundary. In most work on this problem, D is additionally a convex set of nonzero area. When three of the placed points lie on a line, they are considered as forming a degenerate triangle whose area is defined to be zero, so placements that maximize the smallest triangle will not have collinear triples of points. The assumption that the shape is compact implies that there exists an optimal placement of n points, rather than only a sequence of placements approaching optimality. The number Δ_D(n) may be defined as the area of the smallest triangle in this optimal placement. An example is shown in the figure, with six points in a unit square. These six points form 20 different triangles, four of which are shaded in the figure. Six of these 20 triangles, with two of the shaded shapes, have area 1/8; the remaining 14 triangles have larger areas. This is the optimal placement of six points in a unit square: all other placements form at least one triangle with area 1/8 or smaller. Therefore, Δ(6) = 1/8 for the unit square. Although researchers have studied the value of Δ_D(n) for specific shapes and specific small numbers of points, Heilbronn was concerned instead about its asymptotic behavior: if the shape D is held fixed, but n varies, how does the area of the smallest triangle vary with n? That is, Heilbronn's question concerns the growth rate of Δ_D(n) as a function of n. For any two shapes D and D′, the numbers Δ_D(n) and Δ_D′(n) differ only by a constant factor, as any placement of n points within D can be scaled by an affine transformation to fit D′, changing the minimum triangle area only by a constant. Therefore, in bounds on the growth rate of Δ(n) that omit the constant of proportionality of that growth, the choice of D is irrelevant and the subscript may be omitted. Heilbronn's conjecture and its disproof Heilbronn conjectured prior to 1951 that the minimum triangle area always shrinks rapidly as a function of n; more specifically, inversely proportional to the square of n. In terms of big O notation, this can be expressed as the bound Δ(n) = O(1/n²). In the other direction, Paul Erdős found examples of point sets with minimum triangle area proportional to 1/n², demonstrating that, if true, Heilbronn's conjectured bound could not be strengthened. Erdős formulated the no-three-in-line problem, on large sets of grid points with no three in a line, to describe these examples.
As Erdős observed, when n is a prime number, the set of n points (i, i² mod n), for 0 ≤ i < n, on an n × n integer grid has no three collinear points, and therefore by Pick's formula each of the triangles they form has area at least 1/2. When these grid points are scaled to fit within a unit square, their smallest triangle area is proportional to 1/n², matching Heilbronn's conjectured upper bound. If n is not prime, then a similar construction using a prime number close to n achieves the same asymptotic lower bound. Komlós, Pintz, and Szemerédi eventually disproved Heilbronn's conjecture, by using the probabilistic method to find sets of points whose smallest triangle area is larger than the ones found by Erdős. Their construction involves the following steps: Randomly place a number of points somewhat larger than n in the unit square. Remove all pairs of points that are unexpectedly close together. Prove that there are few remaining low-area triangles and therefore only a sublinear number of cycles formed by two, three, or four low-area triangles. Remove all points belonging to these cycles. Apply a triangle removal lemma for 3-uniform hypergraphs of high girth to show that, with high probability, the remaining points include a subset of n points that do not form any small-area triangles. The area resulting from their construction grows asymptotically as (log n)/n². The proof can be derandomized, leading to a polynomial-time algorithm for constructing placements with this triangle area. Upper bounds Every set of n points in the unit square forms a triangle of area at most inversely proportional to n. One way to see this is to triangulate the convex hull of the given point set and choose the smallest of the triangles in the triangulation. Another is to sort the points by their x-coordinates, and to choose the three consecutive points in this ordering whose x-coordinates are the closest together. In the first paper published on the Heilbronn triangle problem, in 1951, Klaus Roth proved a stronger upper bound of the form Δ(n) = O(1/(n √(log log n))). The best bound known until recently was of the form exp(c √(log n))/n^(8/7) for some constant c, proven by Komlós, Pintz, and Szemerédi. A newer upper bound, proportional to n^(−8/7 − 1/2000), was proven by Cohen, Pohoata, and Zakharov in 2023. Specific shapes and numbers Goldberg has investigated the optimal arrangements of n points in a square, for n up to 16. Goldberg's constructions for up to six points lie on the boundary of the square, and are placed to form an affine transformation of the vertices of a regular polygon. For larger values of n, later authors improved Goldberg's bounds, and for these values the solutions include points interior to the square. These constructions have been proven optimal for up to seven points. The proof used a computer search to subdivide the configuration space of possible arrangements of the points into 226 different subproblems, and used nonlinear programming techniques to show that in 225 of those cases, the best arrangement was not as good as the known bound. In the remaining case, including the eventual optimal solution, its optimality was proven using symbolic computation techniques. The following are the best known solutions for 7–12 points in a unit square, found through simulated annealing; the arrangement for seven points is known to be optimal. Instead of looking for optimal placements for a given shape, one may look for an optimal shape for a given number of points. Among convex shapes with area one, the regular hexagon is the one that maximizes Δ(6); for this shape, the optimum is achieved with six points placed at the hexagon vertices.
The convex shapes of unit area that maximize have Variations There have been many variations of this problem including the case of a uniformly random set of points, for which arguments based on either Kolmogorov complexity or Poisson approximation show that the expected value of the minimum area is inversely proportional to the cube of the number of points. Variations involving the volume of higher-dimensional simplices have also been studied. Rather than considering simplices, another higher-dimensional version adds another parameter, the subset size k, and asks for placements of n points in the unit hypercube that maximize the minimum volume of the convex hull of any subset of k points. For subsets of size k = d + 1, where d is the dimension, these subsets form simplices, but for larger values of k relative to d they can form more complicated shapes. When k is sufficiently large relative to d, randomly placed point sets have minimum convex hull volume proportional to k/n. No better bound is possible; any placement has k points with convex hull volume proportional to k/n, obtained by choosing some k consecutive points in coordinate order. This result has applications in range searching data structures. See also Danzer set, a set of points that avoids empty triangles of large area Notes References External links Erich's Packing Center, by Erich Friedman, including the best known solutions to the Heilbronn problem for small values of n for squares, circles, equilateral triangles, and convex regions of variable shape but fixed area Discrete geometry Triangle problems Area Discrepancy theory
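The Erdős lower-bound construction described above is simple enough to check by brute force. The sketch below is illustrative only; the prime p = 11 and the cubic-time search over all triples are chosen purely for demonstration, not efficiency.

```python
from itertools import combinations

def smallest_triangle_area(points):
    """Brute-force minimum triangle area over all triples (fine for small point sets)."""
    def area(p, q, r):
        # half the absolute cross product of the edge vectors
        return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0
    return min(area(p, q, r) for p, q, r in combinations(points, 3))

p = 11  # any prime; kept small so the O(n^3) search stays cheap
erdos_points = [(i / p, (i * i % p) / p) for i in range(p)]  # grid points scaled into the unit square

min_area = smallest_triangle_area(erdos_points)
print(min_area, 1 / (2 * p * p))
# No triple is collinear (min_area > 0), and every triangle has area at least 1/(2 p^2),
# i.e. proportional to 1/n^2, matching the lower bound discussed above.
assert min_area >= 1 / (2 * p * p) - 1e-12
```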
Heilbronn triangle problem
[ "Physics", "Mathematics" ]
1,634
[ "Scalar physical quantities", "Geometry problems", "Discrete mathematics", "Physical quantities", "Discrete geometry", "Quantity", "Size", "Combinatorics", "Discrepancy theory", "Wikipedia categories named after physical quantities", "Mathematical problems", "Area", "Triangle problems" ]
1,013,989
https://en.wikipedia.org/wiki/Tola%20%28unit%29
The tola (also transliterated as tolah or tole) is a traditional Ancient Indian and South Asian unit of mass, now standardised as 180 grains (11.664 grams) or exactly 3/8 troy ounce. It was the base unit of mass in the British Indian system of weights and measures introduced in 1833, although it had been in use for much longer. It was also used in Aden and Zanzibar: in the latter, one tola was equivalent to 175.90 troy grains (0.97722222 British tolas, or 11.33980925 grams). The tola is a Vedic measure, with the name derived from the Sanskrit tolā (from the root tul) meaning "weighing" or "weight". One tola was traditionally the weight of 100 Ratti (ruttee) seeds, and its exact weight varied according to locality. However, it is also a convenient mass for a coin: several pre-colonial coins, including the currency of Akbar the Great (1556–1605), had a mass of "one tola" within slight variation. The first rupee (rupayā), minted by Sher Shah Suri (1540–45), had a mass of 178 troy grains, or about 1% less than the British tola. The British East India Company issued a silver rupee coin of 180 troy grains, and this became the practical standard mass for the tola well into the 20th century. The British tola of 180 troy grains (from 1833) can be seen as more of a standardisation than a redefinition: the previous standard in the Bengal Presidency, the system of "sicca weights", was the mass of one Murshidabad rupee, 179.666 troy grains. For the larger weights used in commerce (in the Bengal Presidency), the variation in the pre-1833 standards was found to be greater than the adjustment. The tola formed the base for units of mass under the British Indian system, and was also the standard measure of gold and silver bullion. Although the tola has been officially replaced by metric units since 1956, it is still in current use, and is a popular denomination for gold bullion bars in Bangladesh, India, Nepal, Pakistan and Singapore, with a ten tola bar being the most commonly traded. In Nepal, minting of tola-size gold coins continues up to the present, even though the currency of Nepal is called rupee and has no official connection to the tola. It is also used in most gold markets (bazars/souks) in the United Arab Emirates and in all the Cooperation Council for the Arab States of the Gulf (GCC) countries. Tola is still used as a measure of charas (Indian hashish). On the black market, however, one tola equals a mass of approximately and not the actual mass of one tola. See also Troy ounce References External links Tola to Gram Calculator Tola unit converter Units of mass Customary units in India
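Because the standardised tola is defined exactly in grains, the figures above follow from simple arithmetic. The short sketch below is illustrative; it assumes the exact modern definition of the grain as 0.06479891 gram.

```python
GRAIN_IN_GRAMS = 0.06479891          # exact definition of the grain
TOLA_IN_GRAINS = 180                 # British Indian standard tola
TROY_OUNCE_IN_GRAINS = 480           # grains per troy ounce

tola_in_grams = TOLA_IN_GRAINS * GRAIN_IN_GRAMS
tola_in_troy_oz = TOLA_IN_GRAINS / TROY_OUNCE_IN_GRAINS

print(f"1 tola = {tola_in_grams:.7f} g")           # 11.6638038 g
print(f"1 tola = {tola_in_troy_oz} troy oz")        # 0.375, i.e. exactly 3/8 troy ounce
print(f"10 tola bar = {10 * tola_in_grams:.3f} g")  # the commonly traded bullion size
```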
Tola (unit)
[ "Physics", "Mathematics" ]
623
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
1,014,025
https://en.wikipedia.org/wiki/Blue%20roof
A blue roof is a roof of a building that is designed explicitly to provide initial temporary water storage and then gradual release of stored water, typically rainfall. Blue roofs are constructed on flat or low sloped roofs in urban communities where flooding is a risk due to a lack of permeable surfaces for water to infiltrate, or seep back into the ground. Water is stored in blue roof systems until it either evaporates or is released downstream after the storm event has passed. Blue roofs that are used for temporary rooftop storage can be classified as "active" or "passive" depending on the types of control devices used to regulate drainage of water from the roof. Blue roofs can provide a number of benefits depending on design. These benefits include temporary storage of rainfall to mitigate runoff impacts, storage for reuse such as irrigation or cooling water makeup, or recreational opportunities. Stormwater management and other benefits Flood mitigation Due to the density of urban development, there is a general lack of permeable surfaces in cities. This lack of area for stormwater to infiltrate back into the ground leaves cities vulnerable to flooding. A number of blue roof pilot projects have been implemented around the United States, the results from which highlight their efficacy in reducing stormwater runoff during and after severe weather events. Pollution reduction While blue roofs do not remove pollutants from water by temporarily detaining it, they do reduce the load severe rain events place on storm sewers, which stops emergency overflow from combined sewer systems from discharging untreated wastewater into rivers, streams, and coastal waters. A significant blue roof pilot project intended to evaluate the potential of the systems for mitigating combined sewer overflow impacts was conducted between 2010 and 2012 by the New York City Department of Environmental Protection. The NYCDEP blue-roof projects are the first to utilize a novel passive blue roof tray design which relies on the lateral transmissivity of non-woven filter fabric for drawdown control in a full scale pilot. Monitoring of these systems has demonstrated their performance as an effective means for mitigation of peak flows and alteration of timing in combined sewer systems. Water scarcity On the opposite side of the spectrum, cities with limited rainfall are vulnerable to drought and extended periods of restricted water usage. In drier climates, blue roofs act as a water conservation tool harvesting the water that falls on a roof's surface and collecting it at a controlled rate. Design compatibility Another major benefit of blue roofs is their ability to work alongside other rooftop systems such as solar panels (both solar thermal and photovoltaic), and HVAC mechanical equipment. Some recreational blue roofs integrate rooftop waterplay areas that can also be used to irrigate a green roof, or to cool the roof of a building on hot days, in order to eliminate or at least reduce the HVAC load placed on mechanical refrigeration equipment. Some blue roofs utilize stored water for beneficial on-site purposes such as cooling of solar panels and irrigation of a green roof. One example of a blue roof that provided ancillary services was the winning entry (First Place, 10,000 Euro prize) in the 2004 Coram Sustainable Design Award, by Steve Mann. Types Active blue roof Active blue roof systems control the rate at which water drains from a rooftop through mechanical means.
Sometimes referred to as automated roof runoff management systems, active blue roofs use valve configurations and controls to monitor and regulate the discharge of stormwater runoff from roofs. Water ponded on the roof can be released in several ways, including via a pneumatically or hydraulically actuated pinch valve, an electronically controlled valve connected to a timer, or manually opening the valve. Active blue roofs for stormwater detention using forecast integration were first proposed in 2008. Passive blue roof Passive blue roof systems control the rate at which water drains from a rooftop through non-mechanical means. Unlike active systems which inhibit water flow through drainage pipes, passive systems temporarily detain water on the surface of the roof by lengthening the path the water must take in order to reach outlet drains. Blue roofs can include open water surfaces, storage within or beneath a porous media or modular surface, or below a raised decking surface or cover. Roof-integrated passive blue roof designs are built to retain water directly on a roof's surface, protected by a waterproof membrane, for extended periods of time. This ponding of water can be done either within a porous media, such as gravel, or free standing on the roof surface. The release rate of the stored water is controlled by weirs on the roof drain. Roof-integrated designs are most effective in new construction as achievable storage volume on existing flat roofs is often quite limited. Modular tray designs allow existing roofs to be retrofitted for stormwater retention capabilities with the addition of plastic or metal trays. Similarly to roof-integrated designs, water collected in the trays can either be ponded within a porous media or free standing within the tray. Modular tray blue roofs allow for more flexibility in the size and location of detention areas on a rooftop than a roof-integrated design. This selective placement of trays makes avoiding roof areas which cannot support the additional structural load, as well as any roof obstructions easier than other blue roof designs. Trays also have the added advantage of not using the roof material itself as a component of the detention structure and thus decrease instead of increase the hydraulic head on the underlying roofing membrane. As the water drains from the trays, it is released onto the roof surface itself and drains normally. Roof-dams or roof-checks physically interrupt the flow path of the water as it travels towards the roof drain. Similar to roof-integrated designs, the roof surface is the primary location of water detention with these impermeable or slow-releasing dams forcing water to pond behind them. The height of the dam and the size of weep holes can be used to control the detention time of the structures. Blue-green roof designs are aesthetically similar to green roofs in that they are vegetative roofs, but functionally different in that they have additional water storage capacity beneath the growing media to facilitate in stormwater retention. Blue colored roof A different type of "blue roof" has been proposed by researchers at the Lawrence Berkeley National Laboratory, who researched a pigment used by the ancient Egyptians known as "Egyptian blue." This color, derived from calcium copper silicate, absorbs visible light, and emits light in the near-infrared range, helping keep roofs and walls cool. 
See also Eco-village Energy-efficient landscaping Greywater Rainwater harvesting Rainwater tank Sod roof Sustainable city References Environmental engineering Hydrology and urban planning Roofs Sustainable architecture Sustainable building
Blue roof
[ "Chemistry", "Technology", "Engineering", "Environmental_science" ]
1,351
[ "Structural engineering", "Sustainable building", "Hydrology", "Sustainable architecture", "Building engineering", "Chemical engineering", "Structural system", "Construction", "Civil engineering", "Hydrology and urban planning", "Environmental engineering", "Environmental social science", "R...
1,014,111
https://en.wikipedia.org/wiki/Uppsala%E2%80%93DLR%20Asteroid%20Survey
The Uppsala–DLR Asteroid Survey (UDAS, also known as UAO–DLR Asteroid Survey) is an astronomical survey, dedicated to the search for and follow-up characterization of asteroids and comets. UDAS puts a special emphasis on near-Earth objects (NEOs), in co-operation with and support of global efforts in NEO research initiated by the Working Group on Near-Earth Objects of the International Astronomical Union (IAU) and the Spaceguard Foundation. UDAS began regular observations in September 1999, with some test runs during 1998. Discoveries of NEOs are reported to the Minor Planet Center (MPC). It is a kind of follow-on programme to ODAS, which had to close due to lack of further financial support. It should also not be confused with the Uppsala–DLR Trojan Survey (UDTS), which was conducted a few years before UDAS was launched. UAO stands for Uppsala Astronomical Observatory, Uppsala, Sweden. DLR stands for Deutsches Zentrum für Luft- und Raumfahrt, the German Aerospace Center. Claes Wellton-Persson, the founder of Lap Power, has contributed to the project. List of discovered minor planets The MPC credits the Uppsala–DLR Asteroid Survey with the discovery of the following numbered minor planets during 1999–2005. See also List of asteroid-discovering observatories Uppsala–ESO Survey of Asteroids and Comets, UESAC References External links Official site MPC: Discovery Circumstances of Numbered Minor Planets Astronomical surveys Asteroid surveys Uppsala University
Uppsala–DLR Asteroid Survey
[ "Astronomy" ]
316
[ "Astronomical surveys", "Works about astronomy", "Astronomical objects" ]
1,014,142
https://en.wikipedia.org/wiki/David%20Braben
David John Braben (born 2 January 1964) is an English video game developer and designer, founder and President of Frontier Developments, and co-creator of the Elite series of space trading video games, first published in 1984. He is also a co-founder of and works as a trustee for the Raspberry Pi Foundation, which in 2012 launched a low-cost computer for education. Biography Early life Braben was born in West Bridgford, Nottingham. He attended Buckhurst Hill County High School in Chigwell, Essex. He studied Natural Sciences at Jesus College, Cambridge, specialising in Electrical Science in his final year. Career In 2008, Braben was an investor and non-executive director of Phonetic Arts, a speech generation company led by Paul Taylor. Phonetic Arts was acquired by Google in 2010, for an undisclosed sum. In May 2011, Braben announced a new prototype computer intended to stimulate the teaching of basic computer science in schools. Called Raspberry Pi, the computer is mounted in a package the size of a credit card, has a USB port on one end with a HDMI monitor socket on the other, and provides an ARM processor running Linux for an estimated price of about £15 for a configured system, cheap enough to give to a child to do whatever he or she wants with it. The Raspberry Pi Foundation is a charity whose aim is to "promote the study of computer science and related topics, especially at school level, and to put the fun back into learning computing". Game development Braben has been called "one of the most influential computer game programmers of all time", based on his early game development with the Elite series in the 1980s and 1990s. Next Generation listed him in their "75 Most Important People in the Games Industry of 1995", chiefly due to the original Elite. Elite was developed in conjunction with programmer Ian Bell while both were undergraduate students at Cambridge University. Elite was first released in September 1984 and is known as the first game to have 3D hidden-line removal. In 1987, Braben published Zarch for the Acorn Archimedes, ported in 1988 as Virus for the Atari ST, Commodore Amiga, and PC. After Zarch, Braben went on to develop the sequel to Elite, Frontier, published in 1993, and founded Frontier Developments, a games development company whose first project was a version of Frontier for the Amiga CD32. Braben is still the CEO and majority shareholder of the company, whose projects since 2000 have included Dog's Life, Kinectimals, RollerCoaster Tycoon 3, LostWinds, Planet Coaster, Elite: Dangerous, Jurassic World Evolution, Kinect Disneyland Adventures, Zoo Tycoon, Coaster Crazy, and games based on the Wallace & Gromit franchise. In 2006, Braben was working on an ambitious next-generation game called The Outsider, being developed by Frontier Developments. As said in an interview, he was planning to start working on Elite 4 – as a space MMORPG game – as soon as The Outsider went gold. Braben said explicitly that this title was of special value to him. The Outsider was abandoned due to the removal of publisher support and was never published. In 2012, Braben explained in an interview with developer website Gamasutra his opinion that the sale of secondhand games negatively affects the development of new titles, also holding the price of games in general much higher than they would otherwise be. However, later in 2014 he acknowledged: "Piracy goes hand in hand with sales. If a game is pirated a lot, it will be bought a lot. 
People want a connected experience, so with pirated games we still have a route in to get them to upgrade to the real version. And even if someone's version is pirated, they might evangelise and their mates will buy the real thing." On 6 November 2012, Braben's Frontier Developments announced a new Elite sequel called Elite: Dangerous on the Kickstarter crowdfunding site. Elite: Dangerous achieved its funding goal and was listed as one of the most funded Kickstarter campaigns. The game was released on 16 December 2014, and by April 2015 had sold over 500,000 copies. As of August 2017, the game had sold over 2.75 million copies. In August 2022, Frontier announced Braben's transition to the new role of President and Founder, stepping down as CEO. Personal life In May 1993, he married Katharin Dickinson in Cambridge. His current wife is Wendy Irvin-Braben, and he has two sons. According to the Sunday Times Rich List in 2020, Braben and his wife have an estimated combined worth of £182 million, an increase of £50 million from the previous year. Awards On 5 September 2005, Braben received the Development Legend Award at the Develop Industry Excellence Awards in Cambridge. In 2012, Braben was elected as a Fellow of the Royal Academy of Engineering. In 2013, Braben was co-winner of Tech Personality of the Year at the UK Tech Awards. In the same year, he was awarded an honorary degree by Abertay University. Braben was appointed Officer of the Order of the British Empire (OBE) in the 2014 Birthday Honours for services to the UK computer and video games industry. In January 2015, he received the Pioneer Award at the 2015 Game Developers Choice Awards (GDCA) for his work on the Raspberry Pi and for working more than 30 years as a game developer. On 12 March 2015, Braben was awarded the BAFTA Academy Fellowship Award in video gaming at the 11th British Academy Games Awards. Braben is the recipient of three honorary doctorates, from Abertay University (2013), the Open University (2014), and the University of York (2015). Games References External links The Guardian article Masters of Their Universe (2003) 1964 births Living people Alumni of Jesus College, Cambridge BAFTA fellows British computer programmers British technology company founders British video game designers English chief executives English company founders Officers of the Order of the British Empire People from Chigwell British video game programmers People educated at Buckhurst Hill County High School Fellows of the Royal Academy of Engineering Fellows of the Institution of Engineering and Technology Game Developers Conference Pioneer Award recipients
David Braben
[ "Engineering" ]
1,270
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
1,014,250
https://en.wikipedia.org/wiki/Social%20degeneration
Social degeneration was a widely influential concept at the interface of the social and biological sciences in the 18th and 19th centuries. During the 18th century, scientific thinkers including Georges-Louis Leclerc, Comte de Buffon, Johann Friedrich Blumenbach, and Immanuel Kant argued that humans shared a common origin but had degenerated over time due to differences in climate. This theory provided an explanation of where humans came from and why some people appeared different from others. In contrast, degenerationists in the 19th century feared that civilization might be in decline and that the causes of decline lay in biological change. These ideas derived from pre-scientific concepts of heredity ("hereditary taint") with Lamarckian emphasis on biological development through purpose and habit. Degeneration concepts were often associated with authoritarian political attitudes, including militarism and scientific racism, and a preoccupation with eugenics. The theory originated in racial concepts of ethnicity, recorded in the writings of such medical scientists as Johann Blumenbach and Robert Knox. From the 1850s, it became influential in psychiatry through the writings of Bénédict Morel, and in criminology with Cesare Lombroso. By the 1890s, in the work of Max Nordau and others, degeneration became a more general concept in social criticism. It also fed into the ideology of ethnic nationalism, attracting, among others, Maurice Barrès, Charles Maurras and the Action française. Alexis Carrel, a French Nobel Laureate in Medicine, cited national degeneration as a rationale for a eugenics programme in collaborationist Vichy France. The meaning of degeneration was poorly defined, but can be described as an organism's change from a more complex to a simpler, less differentiated form, and is associated with 19th-century conceptions of biological devolution. In scientific usage, the term was reserved for changes occurring at a histological level – i.e. in body tissues. Although rejected by Charles Darwin, the theory's application to the social sciences was supported by some evolutionary biologists, most notably Ernst Haeckel and Ray Lankester. As the 19th century wore on, the increasing emphasis on degeneration reflected an anxious pessimism about the resilience of European civilization and its possible decline and collapse. Theories of degeneration in the 18th century In the second half of the eighteenth century, degeneration theory gained prominence as an explanation of the nature and origin of human difference. Among the most notable proponents of this theory was Georges-Louis Leclerc, Comte de Buffon. A gifted mathematician and eager naturalist, Buffon served as the curator of the Parisian Cabinet du Roi. The collections of the Cabinet du Roi served as the inspiration for Buffon's encyclopedic Histoire Naturelle, of which he published thirty-six volumes between 1749 and his death in 1788. In the Histoire Naturelle, Buffon asserted that differences in climate created variety within species. He believed that these changes occurred gradually and initially affected only a few individuals before becoming widespread. Buffon relied on an argument from analogy to contend that this process of degeneration occurred among humans. He claimed to have observed the transformation of certain animals by their climate and concluded that such changes must have also shaped humankind. Buffon maintained that degeneration had particularly adverse consequences in the New World. He believed America to be both colder and wetter than Europe. 
This climate limited the number of species in the New World and prompted a decline in size and vigor among the animals which did survive. Buffon also applied these principles to the people of the New World. He wrote in the Histoire Naturelle that the indigenous people lacked the ability to feel strong emotions for others. For Buffon, these individuals were incapable of love as well as desire. Buffon's theory of degeneration attracted the ire of many early American elites who feared that Buffon's depiction of the New World would negatively influence European perceptions of their nation. In particular, Thomas Jefferson mounted a vigorous defense of the American natural world. He attacked the premises of Buffon's argument in his 1785 Notes on the State of Virginia, writing that the animals of the New World felt the same sun and walked upon the same soil as their European counterparts. Jefferson believed that he could permanently alter Buffon's views of the New World by showing him firsthand the majesty of American wildlife. While serving as minister to France, Jefferson wrote repeatedly to his compatriots in the United States, pleading with them to send a stuffed moose to Paris. After months of effort, General John Sullivan responded to Jefferson's request and shipped a moose to France. Buffon died only three months after the moose's arrival, and his theory of New World degeneration remained forever preserved in the pages of the Histoire Naturelle. In the years following Buffon's death, the theory of degeneration gained a number of new followers, many of whom were concentrated in German-speaking lands. The anatomist and naturalist Johann Friedrich Blumenbach praised Buffon in his lectures at the University of Göttingen. He adopted Buffon's theory of degeneration in his dissertation De Generis Humani Varietate Nativa (On the Natural Variety of Mankind). The central premise of this work was that all of mankind belonged to the same species. Blumenbach believed that a multitude of factors, including climate, air, and the strength of the sun, promoted degeneration and resulted in external differences between human beings. However, he also asserted that these changes could easily be undone and, thus, did not constitute the basis for speciation. In the essay "Über Menschen-Rassen und Schweine-Rassen" ("On Human Races and Pig Races"), Blumenbach clarified his understanding of the relationship between different human races by calling upon the example of the pig. He contended that, if the domestic pig and the wild boar were seen as belonging to the same species, then different humans, regardless of skin color or height, must also belong to the same species. For Blumenbach, all people of the world existed as different gradations on a spectrum. Nevertheless, the third edition of De Generis Humani Varietate Nativa, published in 1795, is famed among scholars for its introduction of a system of racial classification which divided humans into members of the Caucasian, Ethiopian, Mongolian, Malayan, or American races. Blumenbach's views on degeneration emerged in dialogue with the works of other thinkers concerned with race and origin in the late eighteenth century. In particular, Blumenbach participated in fruitful intellectual exchange with another prominent German scholar of his age, Immanuel Kant. Kant, a philosopher and professor at the University of Königsberg, taught a course on physical geography for some forty years, fostering an interest in biology and taxonomy. Like Blumenbach, Kant engaged closely with the writings of Buffon while developing his position on these subjects. 
In his 1777 essay Von der verschiedenen Racen der Menschen ("Of the Different Races of Human Beings"), Kant expressed the belief that all humans shared a common origin. He called upon the ability of humans to interbreed as evidence for this assertion. Additionally, Kant introduced the term "degeneration", which he defined as hereditary differences between groups with a shared root. Kant also arrived at a meaning of "race" from this definition of degeneration. He claimed that races developed when degenerations were preserved over a long period of time. A group could only constitute a race if breeding with a different degeneration resulted in "intermediate offspring." Although Kant advocated for a theory of shared human origin, he also contended that there was an innate hierarchy between existing races. In 1788, Kant wrote "Über den Gebrauch teleologischer Prinzipien" ("On the Use of Teleological Principles"). He maintained in this work that a human's place in nature was determined by the amount of sweat the individual produced, which revealed an innate ability to survive. Because sweat emerged from the skin, he reasoned, skin color indicated important distinctions between humans. History The concept of degeneration arose during the European enlightenment and the industrial revolution – a period of profound social change and a rapidly shifting sense of personal identity. Several influences were involved. The first related to the extreme demographic upheavals, including urbanization, in the early years of the 19th century. The disturbing experience of social change and urban crowds, largely unknown in the agrarian 18th century, was recorded in the journalism of William Cobbett, the novels of Charles Dickens and in the paintings of J. M. W. Turner. These changes were also explored by early writers on social psychology, including Gustave Le Bon and Georg Simmel. The psychological impact of industrialisation is comprehensively described in Humphrey Jennings' masterly anthology Pandaemonium 1660 – 1886. Victorian social reformers including Edwin Chadwick, Henry Mayhew and Charles Booth voiced concerns about the "decline" of public health in the urban life of the British working class, arguing for improved housing and sanitation, access to parks and recreational facilities, an improved diet and a reduction in alcohol intake. These contributions from the public health perspective were discussed by the Scottish physician Sir James Cantlie in his influential 1885 lecture Degeneration Amongst Londoners. The novel experience of everyday contact with the urban working classes gave rise to a kind of horrified fascination with their perceived reproductive energies which appeared to threaten middle-class culture. Secondly, the proto-evolutionary biology and transformatist speculations of Jean-Baptiste Lamarck and other natural historians—taken together with Baron Cuvier's theory of extinctions—played an important part in establishing a sense of the unsettled aspects of the natural world. The polygenic theories of multiple human origins, supported by Robert Knox in his book The Races of Men, were firmly rejected by Charles Darwin who, following James Cowles Prichard, generally agreed on a single African origin for the entire human species. Thirdly, the development of world trade and colonialism, the early European experience of globalization, resulted in an awareness of the varieties of cultural expression and the vulnerabilities of Western civilization. 
Finally, the growth of historical scholarship in the 18th century, exemplified by Edward Gibbon's The History of the Decline and Fall of The Roman Empire (1776–1789), excited a renewed interest in the narratives of historical decline. This resonated uncomfortably with the difficulties of French political life in the post-revolutionary nineteenth century. Degeneration theory achieved a detailed articulation in Bénédict Morel's Treatise on Degeneration of the Human Species (1857), a complicated work of clinical commentary from an asylum in Normandy (Saint Yon in Rouen) which, in the popular imagination at least, coalesced with de Gobineau's Essay on The Inequality of the Human Races (1855). Morel's concept of mental degeneration – in which he believed that intoxication and addiction in one generation of a family would lead to hysteria, epilepsy, sexual perversions, insanity, learning disability and sterility in subsequent generations – is an example of Lamarckian biological thinking, and Morel's medical discussions are reminiscent of the clinical literature surrounding syphilitic infection (syphilography). Morel's psychiatric theories were taken up and advocated by his friend Philippe Buchez, and through his political influence became an official doctrine in French legal and administrative medicine. Arthur de Gobineau came from an impoverished family (with a domineering and adulterous mother) which claimed an aristocratic ancestry; he was a failed author of historical romances, and his wife was widely rumored to be a Créole from Martinique. De Gobineau nevertheless argued that the course of history and civilization was largely determined by ethnic factors, and that interracial marriage ("miscegenation") resulted in social chaos. De Gobineau built a successful career in the French diplomatic service, living for extended periods in Iran and Brazil, and spent his later years travelling through Europe, lamenting his mistreatment at the hands of his wife and daughters. He died of a heart attack in 1882 while boarding a train in Turin. His work was well received in German translation—not least by the composer Richard Wagner—and the leading German psychiatrist Emil Kraepelin later wrote extensively on the dangers posed by degeneration to the German people. De Gobineau's writings exerted an enormous influence on the thinkers antecedent to the Third Reich – although they are curiously free of anti-Semitic prejudice. Quite different historical factors inspired the Italian Cesare Lombroso in his work on criminal anthropology with the notion of atavistic retrogression, probably shaped by his experiences as a young army doctor in Calabria during the Risorgimento. In Britain, degeneration received a scientific formulation from Ray Lankester, whose detailed discussions of the biology of parasitism were hugely influential; the poor physical condition of many British Army recruits for the Second Boer War (1899–1902) led to alarm in government circles. Psychiatrist Henry Maudsley initially argued that degenerate family lines would die out with little social consequence, but later became more pessimistic about the effects of degeneration on the general population; Maudsley also warned against the use of the term "degeneration" in a vague and indiscriminate way. Anxieties in Britain about the perils of degeneration found legislative expression in the Mental Deficiency Act 1913, which gained strong support from Winston Churchill, then a senior member of the Liberal government. 
In the fin-de-siècle period, Max Nordau scored an unexpected success with his bestselling Degeneration (1892). Sigmund Freud met Nordau in 1885 while studying in Paris and was notably unimpressed by him and hostile to the degeneration concept. Degeneration fell from popular and fashionable favor around the time of the First World War, although some of its preoccupations persisted in the writings of the eugenicists and social Darwinists (for example, R. Austin Freeman; Anthony Ludovici; Rolf Gardiner; and see also Dennis Wheatley's Letter to posterity). Oswald Spengler's The Decline of the West (1919) captured something of the degenerationist spirit in the aftermath of the war. Psychology and Emil Kraepelin Degeneration theory was, at its heart, a way of thinking, something taught rather than innate. A major influence on the theory was Emil Kraepelin, who aligned it with his psychiatric practice. The central idea of this concept was that in "degenerative" illness, there is a steady decline in mental functioning and social adaptation from one generation to the next. For example, there might be an intergenerational development from nervous character to major depressive disorder, to overt psychotic illness and, finally, to severe and chronic cognitive impairment, something akin to dementia. This theory was advanced decades before the rediscovery of Mendelian genetics and its application to medicine in general and to psychiatry in particular. Kraepelin and his colleagues drew on degeneration theory only in broad terms. He rarely made specific reference to it, and his attitude towards it was not straightforwardly positive but rather ambivalent. The concept of disease, especially chronic mental disease, fit very well into this framework insofar as these phenomena were regarded as signs of an evolution in the wrong direction, a degenerative process which diverts from the usual path of nature. However, he remained skeptical of over-simplistic versions of this concept: while commenting approvingly on the basic ideas of Cesare Lombroso's "criminal anthropology", he did not accept the popular idea of overt "stigmata of degeneration", by which individual persons could be identified as being "degenerated" simply by their physical appearance. While Kraepelin and his colleagues may not have focused on such stigmata, this did not stop others from advancing the converse idea. An early application of this theory was the Mental Deficiency Act supported by Winston Churchill in 1913. This entailed placing those deemed "idiots" into separate colonies, and included those who showed signs of "degeneration". While this did apply to those with psychiatric disorders, in practice the language was also applied to those deemed "morally weak" or "idiots". Belief in degeneration helped foster a sense of an inexplicable negative force at work and encouraged the search for sources of "rot" in society. This advanced the notion that society was structured in a way that produced regression, an outcome of the "darker side of progress". Those who used the label "degenerate" to cast difference in a negative light could thus present this "darker side of progress", the idea that society could "rot", as inevitable. 
During the nineteenth century, assumptions about the pervasiveness of an allegedly superior condition were frighteningly reinforced by the language and habits of this destructive thinking. As "dark side" of progress The idea of progress was at once a social, political and scientific theory. The theory of evolution, as described in Darwin's The Origin of Species, provided for many social theorists the necessary scientific foundation for the idea of social and political progress. The terms evolution and progress were often used interchangeably in the 19th century. According to the theory of degeneration, a host of individual and social pathologies in a finite network of diseases, disorders and moral habits could be explained by a biologically based affliction. The primary symptoms of the affliction were thought to be a weakening of the vital forces and willpower of its victim. In this way, a wide range of social and medical deviations, including crime, violence, alcoholism, prostitution, gambling, and pornography, could be explained by reference to a biological defect within the individual. The theory of degeneration was therefore predicated on evolutionary theory. The forces of degeneration opposed those of evolution, and those afflicted with degeneration were thought to represent a return to an earlier evolutionary stage. Development of the concept The earliest uses of the term degeneration can be found in the writings of Blumenbach and Buffon at the end of the 18th century, when these early writers on natural history considered scientific approaches to the human species. With the taxonomic mind-set of natural historians, they drew attention to the different ethnic groupings of mankind, and raised general enquiries about their relationships, with the idea that racial groupings could be explained by environmental effects on a common ancestral stock. This pre-Darwinian belief in the heritability of acquired characteristics does not accord with modern genetics. An alternative view of the multiple origins of different racial groups, called "polygenic theories", was also rejected by Charles Darwin, who favored explanations in terms of differential geographic migrations from a single, probably African, population. The theory of degeneration found its first detailed presentation in the writings of Bénédict Morel (1809–1873), especially in his Treatise on Degeneration of the Human Species (1857). This book was published two years before Darwin's Origin of Species. Morel was a highly regarded psychiatrist, the very successful superintendent of the Rouen asylum for almost twenty years and a fastidious recorder of the family histories of his variously disabled patients. Through the details of these family histories, Morel discerned a hereditary line of defective parents infected by pollutants and stimulants; a second generation liable to epilepsy, neurasthenia, sexual deviations and hysteria; a third generation prone to insanity; and a final generation doomed to congenital idiocy and sterility. In 1857, Morel proposed a theory of hereditary degeneracy, bringing together environmental and hereditary elements in an uncompromisingly pre-Darwinian mix. Morel's contribution was further developed by Valentin Magnan (1835–1916), who also stressed the role of alcohol—particularly absinthe—in the generation of psychiatric disorders. Morel's ideas were greatly extended by the Italian medical scientist Cesare Lombroso (1835–1909) whose work was defended and translated into English by Havelock Ellis. 
In his L'uomo delinquente (Criminal Man, 1876), Lombroso outlined a comprehensive natural history of the socially deviant person and detailed the stigmata of the person who was born to be criminally insane. These included a low, sloping forehead, hard and shifty eyes, large, handle-shaped ears, a flattened or upturned nose, a forward projection of the jaw, irregular teeth, prehensile toes and feet, long simian arms and a scanty beard and baldness. Lombroso also listed the features of the degenerate mentality, supposedly released by the disinhibition of the primitive neurological centres. These included apathy, the loss of moral sense, a tendency to impulsiveness or self-doubt, an unevenness of mental qualities such as unusual memory or aesthetic abilities, a tendency to mutism or to verbosity, excessive originality, preoccupation with the self, mystical interpretations placed on simple facts or perceptions, the abuse of symbolic meanings and the magical use of words, or mantras. Lombroso, with his concept of atavistic retrogression, suggested an evolutionary reversion, complementing hereditary degeneracy, and his work in the medical examination of criminals in Turin resulted in his theory of criminal anthropology—a constitutional notion of abnormal personality that was not actually supported by his own scientific investigations. In his later life, Lombroso developed an obsession with spiritualism, engaging with the spirit of his long-dead mother. In 1892, Max Nordau, an expatriate Hungarian living in Paris, published his extraordinary bestseller Degeneration, which greatly extended the concepts of Bénédict Morel and Cesare Lombroso (to whom he dedicated the book) to the entire civilization of western Europe, and transformed the medical connotations of degeneration into a generalized cultural criticism. Adopting some of Charcot's neurological vocabulary, Nordau identified a number of weaknesses in contemporary Western culture which he characterized in terms of ego-mania, i.e., narcissism and hysteria. He also emphasized the importance of fatigue, enervation and ennui. Nordau, horrified by the anti-Semitism surrounding the Dreyfus affair, devoted his later years to Zionist politics. Degeneration theory fell from favor around the time of the First World War because of an improved understanding of the mechanisms of genetics as well as the increasing vogue for psychoanalytic thinking. However, some of its preoccupations lived on in the world of eugenics and social Darwinism. It is notable that the Nazi attack on western liberal society was largely couched in terms of degenerate art with its associations of racial miscegenation and fantasies of racial purity—and included as its target almost all modernist cultural experiment. The role of women in furthering development of the concept of degeneration was reviewed by Anne McClintock, a professor of English at the University of Wisconsin, who found that women who were ambiguously placed on the so-called "imperial divide" (nurses, nannies, governesses, prostitutes and servants) happened to serve as boundary markers and mediators. These women were tasked with the purification and maintenance of boundaries, while occupying what were seen as "inferior" places in the society of the time. Degenerationist devices Towards the close of the 19th century, in the fin-de-siècle period, something of an obsession with decline, descent and degeneration invaded the European creative imagination, partly fuelled by widespread misconceptions of Darwinian evolutionary theory. 
Among the main examples are the symbolist literary work of Charles Baudelaire, the Rougon-Macquart novels of Émile Zola, Robert Louis Stevenson's Strange Case of Dr Jekyll and Mr Hyde—published in the same year (1886) as Richard von Krafft-Ebing's Psychopathia Sexualis—and, subsequently, Oscar Wilde's only novel (containing his aesthetic manifesto) The Picture of Dorian Gray (1891). In Tess of the d'Urbervilles (1891), Thomas Hardy explores the destructive consequences of a family myth of noble ancestry. Norwegian dramatist Henrik Ibsen showed a sensitivity to degenerationist thinking in his theatrical presentations of Scandinavian domestic crises. Arthur Machen's The Great God Pan (1890/1894), with its emphasis on the horrors of psychosurgery, is frequently cited as an essay on degeneration. A scientific twist was added by H. G. Wells in The Time Machine (1895) in which Wells prophesied the splitting of the human race into variously degenerate forms, and again in his The Island of Doctor Moreau (1896) wherein forcibly mutated animal-human hybrids keep reverting to their earlier forms. Joseph Conrad alludes to degeneration theory in his treatment of political radicalism in the 1907 novel The Secret Agent. In her influential study The Gothic Body, Kelly Hurley draws attention to the literary device of the abhuman as a representation of damaged personal identity, and to lesser-known authors in the field, including Richard Marsh (1857–1915), author of The Beetle (1897), and William Hope Hodgson (1877–1918), author of The Boats of the Glen Carrig, The House on the Borderland and The Night Land. In 1897, Bram Stoker published Dracula, an enormously influential Gothic novel featuring the parasitic vampire Count Dracula in an extended exercise of reversed imperialism. Unusually, Stoker makes explicit reference to the writings of Lombroso and Nordau in the course of the novel. Arthur Conan Doyle's Sherlock Holmes stories include a host of degenerationist tropes, perhaps best illustrated (drawing on the ideas of Serge Voronoff) in The Adventure of the Creeping Man. See also Behavioral sink Decadence Declinism Degenerate art Devolution Dysgenics Human extinction Idiocracy Last man "The Marching Morons" Societal collapse Notes References Further reading Bioethics History of psychiatry History of psychology History of mental health Lamarckism History of eugenics Pseudo-scholarship Declinism
Social degeneration
[ "Technology", "Biology" ]
5,442
[ "Bioethics", "Obsolete biology theories", "Lamarckism", "Phrenology", "Ethics of science and technology", "Non-Darwinian evolution", "Biology theories" ]
1,014,333
https://en.wikipedia.org/wiki/Flood%20%28Halo%29
The Flood is a fictional parasitic alien lifeform and one of the primary antagonists in the Halo multimedia franchise. First introduced in the 2001 video game Halo: Combat Evolved, it returns in later entries in the series such as Halo 2, Halo 3, and Halo Wars. The Flood is driven by a desire to infect any sentient life of sufficient size; Flood-infected creatures, also called Flood, in turn can infect other hosts. The parasite is depicted as such a threat that the ancient Forerunners constructed artificial ringworld superweapons known as Halos to contain it and, as a last resort, to kill all sentient life in the galaxy in an effort to stop the Flood's spread by starving it. The Flood's design and fiction were led by Bungie artist Robert McLees, who started from unused concepts from earlier Bungie games and was inspired by personal experiences. The setting of the first game, the ringworld Halo, was stripped of many of its large creatures in order to make the Flood's surprise appearance midway through the game more startling. Bungie environment artist Vic DeLeon spent six months of pre-production time refining the Flood's fleshy aesthetic and designing the organic interiors of Flood-infested spaceships for Halo 3. The player's discovery of the Flood in Halo: Combat Evolved is a major plot twist, and was one of the surprises reviewers noted positively. The Flood's return in Halo 2 and Halo 3 was less enthusiastically praised. Reaction to the Flood itself has been positive; video game magazines have consistently placed it amongst the greatest video game villains. Development The Flood is depicted as a parasitic organism that infects any sentient life of sufficient size. Small, bulbous infection forms seek out suitable hosts, living or dead, burrowing into the target and bringing it under Flood control. Depending on the size or condition of the body, the infection form mutates the hapless host into various specialized forms in the continual drive for more food. Larger hosts are turned into forms for combat, growing long whip-like tentacles, while mangled and disused hosts are turned into incubators for more infection forms. The Flood also creates forms known as "key minds" to coordinate the Flood; these include the apex of Flood evolution, known as "Graveminds". The Flood was added early in Bungie's development of the 2001 video game Halo: Combat Evolved. A design for one Flood form appeared as early as 1997. Commenting upon the inception of the Flood, Bungie staff member Chris Butcher noted that "the idea behind the Flood as the forgotten peril that ended a galaxy-spanning empire is a pretty fundamental tenet of good sci-fi. Yeah, and bad sci-fi too." One inspiration was Christopher Rowley's The Vang series. Early design for the Flood was done by Bungie artist and writer Robert McLees, who considers himself "the architect" of the Flood; the Flood's roots are reflected in concept art of a "fungal zombie" that McLees did for the earlier Bungie game Marathon 2: Durandal. McLees also did all the early concept art for the Flood. Based on the behavior of viruses and certain bacteria, the Flood was intended to be "disgusting and nasty"; McLees modeled one Flood form off the memory of his cousin's infected thumb, while the silhouette of the skittering infection forms came from a more innocuous source—an airborne palm tree from one of the Little Golden Books McLees had read as a child. 
The larger creatures were constructed from the corpses and bodies of former combatants, so the artists had to make sure the Flood soldiers were recognizable while changing their silhouette enough to differentiate them from the uninfected. Many concepts and ideas were discarded due to time constraints—initially, the Flood was intended to convert any species of the alien Covenant into soldiers. "We didn't have the resources to make it happen," McLees recalled, so they modified the game's fiction to suggest that some Covenant were too small or too frail to serve as combat troops. The technical inability to create different Flood forms procedurally informed the game's fiction that the Flood had optimized their host forms over years of trial and error, creating standardized templates that the developers used to obfuscate the repeated use of similar models. Likewise, the Flood enemy intelligence was intended to be as complicated as that of the other enemy faction in the game, but full implementation was cut for time. The dinosaur-like terrestrial wildlife that originally dwelled in Halo's environments was dropped due to gameplay constraints and fear that its presence would reduce the surprise and impact of the Flood. Bungie decided a new visual language for the Flood was needed for Halo 3. The task of developing the new Flood forms, organic Flood terrain, and other miscellaneous changes fell to Vic DeLeon, then Bungie's Senior Environment Artist. Early concepts of what became new morphing Flood types in the game called "pure forms" featured the creatures wielding an array of weapons via tendrils, while forms like the Flood infector and Flood transport concepts never made it into the final game. The pure forms had to morph between three radically different looks, and it proved challenging to make plausible transformations that also looked good once they were developed and animated in 3D. Artist Shi Kai Wang suggested that in the end, they had simply tried to do too much and the results were less than they wanted. Flood-infested structures were designed to be angular to counterbalance Flood biomass, as well as provide surfaces for the game's artificial intelligence to exploit and move on. New additions were designed to be multi-purpose; exploding "growth pods" that spew Flood forms were added to the game to adjust pacing, provide instant action, and add to the visuals. Endoscopic pictures provided further inspiration. Halo 3 added new capabilities to the Flood, including the ability for the parasite to infect enemies in real time. Bungie used Halo 3's improved capacity for graphics to make a host's sudden transformation into Flood form more dramatic; two different character models and skeletons were fused and swapped in real time. Appearances Games The Flood makes its first appearance more than halfway through Halo: Combat Evolved, during the story mission "343 Guilty Spark". A group of humans fleeing the enemy alien Covenant land on "Halo", a ringworld built by the alien Forerunners. The artificial intelligence Cortana sends the supersoldier Master Chief to find their captain, Jacob Keyes, who disappeared in a swamp while searching for a weapons cache. The Master Chief discovers that the Covenant have accidentally released the Flood. Keyes' squad is turned into soldiers for the parasite, while Keyes himself is interrogated by the Flood in an attempt to learn the location of Earth, and is ultimately assimilated. 
The emergence of the Flood prompts Halo's caretaker artificial intelligence 343 Guilty Spark to enlist the help of the Master Chief in activating Halo's defenses and preventing a Flood outbreak. When the Master Chief learns that activating Halo would instead wipe the galaxy of sentient life to prevent the Flood's spread, he and Cortana detonate the human ship Pillar of Autumn's engines, destroying the ring and preventing the Flood from escaping. The Flood returns in Halo 2 (2004), appearing on another Halo ring called "Delta Halo". The Flood on Delta Halo is led by the Gravemind, a massive Flood intelligence that dwells in the bowels of the ring. Gravemind brings together the Master Chief and the Covenant holy warrior known as the Arbiter and tasks them with stopping the Covenant leadership from activating the ring. In the meantime, Gravemind infests the human ship In Amber Clad and crashes it into the Covenant space station of High Charity. Once there, the Flood sweeps through the city, and the Gravemind captures Cortana. As the Flood spreads, the Covenant form a blockade in an effort to prevent the parasite from leaving its prison. The Flood reappears in Halo 3 (2007), on board a damaged ship that escapes the quarantine around Delta Halo. The infestation of Earth is prevented, but the Covenant leader Truth attempts to activate all the Halo rings from the Forerunner installation known as the Ark; the Gravemind manipulates the Master Chief and the Arbiter into an alliance with the Flood to stop the activation. Although the Arbiter kills Truth, the Gravemind betrays them. The Master Chief fights his way to the center of High Charity, freeing Cortana and destroying the city, but Gravemind attempts to rebuild itself on a Halo under construction at the Ark. Realizing that activating the ring will destroy only the local Flood infestation due to the Ark's location outside of the Milky Way, the Master Chief, the Arbiter, and Cortana proceed to Halo's control room, activate the ring and escape. The Flood also makes an appearance in the video game spinoffs Halo Wars and Halo Wars 2. In Halo Wars, they are encountered infesting a Forerunner installation and ultimately annihilated by the actions of the human ship Spirit of Fire's crew. In the Halo Wars 2 expansion "Awakening the Nightmare", the surviving Flood are accidentally released by the Banished while salvaging the remaining wreckage of High Charity. The expansion features new Flood types alongside those seen in previous games. The parasite also serves as an enemy in the cooperative "Firefight" mode of Halo Wars 2 and The Master Chief Collection. The Flood also appear in cooperative play in Halo: Spartan Assault. With Halo 3, the developers added a multiplayer gametype called "Infection", a last man standing mode based on a fan-created scenario where human players defend against Flood-infected players, with each slain human adding to the infected's ranks. The game mode returned in Halo: Reach (2010), Halo 4 (2012), where it was renamed "Flood", The Master Chief Collection (2014), Halo 5 (2015), and Halo Infinite (2021). Other appearances The 2006 anthology The Halo Graphic Novel expands upon the Flood's release during the events of Halo: Combat Evolved in two stories, Last Voyage of the Infinite Succor and "Breaking Quarantine". Whereas the game only hints that the Flood is intelligent, the Halo Graphic Novel shows that the Flood has a hive mind, rapidly assimilating the knowledge of its hosts. 
Lee Hammock, writer of The Last Voyage of the Infinite Succor, described the basis of the story as a way to showcase the true danger of the Flood as an intelligent menace, rather than something the player encounters and shoots. Hammock also stated that the story would prove the intelligent nature of the Flood, and "hopefully euthanize the idea that they are just space zombies". The threat of the Flood is also highlighted in a short story from the Halo Evolutions anthology, "The Mona Lisa," which was later adapted into a motion comic. The Flood also features heavily in Greg Bear's trilogy of novels, the Forerunner Saga, which takes place thousands of years before the events of the main games. The novel Halo: Silentium reveals that the Flood is what remains of the Precursors, an ancient race that was said to accelerate the evolution of a species and shape galaxies. The Forerunners overthrew the Precursors; on the verge of extinction, some Precursors reduced themselves to a biological powder that would regenerate into their past selves. Time rendered the powder defective, and it became mutagenic, reacting with other living organisms to produce what would eventually mutate into the Flood. The Flood would threaten ancient humanity and then the Forerunners, who ultimately built and activated the Halo Array to stop the parasite's spread. The Flood appear in the finale of the Halo live-action series' second season; a Polygon review noted that the show's presentation is more akin to traditional zombies than that of the games. Analysis The name of the Flood is one of many names taken from religious stories in the Halo franchise. The Flood and especially the Gravemind serve as demonic or satanic figures, and the Master Chief's descent into the bowels of Halo to encounter the Flood can be likened to a journey to hell. Academic P.C. Paulissen notes that the name "Flood" suggests a reference to the biblical deluge, with the Forerunner Ark offering shelter from the Flood's destructive and cleansing power much as the Ark does in the biblical account. The lifecycle and parasitic nature of the Flood has similarities to the behaviors of real-world parasites. The Flood's induced physiological changes recall the modified eyestalks of hosts infected by Leucochloridium paradoxum, or the malformed limbs of Ribeiroia-infected amphibians. The Flood's habit of altering its surroundings has parallels to the parasitoid wasp Hymenoepimecis argyraphaga's use of spiders' webs for protection. Cultural impact The surprise appearance of the Flood during Halo: Combat Evolved was seen as an important plot twist and a scary moment even after repeat playthroughs of the game. Gamasutra, writing about video game plots, cites the Flood not only as an important reversal in the story of Halo, but also as an example of how games are made more interesting by plot twists. Rolling Stone and Kotaku credited the appearance of the Flood as an excellent way the game kept players on their toes, forcing them to adjust their strategies; Rolling Stone called the twist as shocking "as if, several levels into a game of Pac-Man, the dots suddenly began to attack you". IGN described the Flood as one of its favorite video game monsters of all time, stating that "We like the Flood, but we hate them so very much." Despite the positive acclaim in Halo, the response to the presence of the Flood in Halo 2 and Halo 3 was mixed. A panel of online reviewers noted that the Flood appeared in Halo 2 for no obvious reason, and described the parasite as simply "aggravating" to play against. 
Daniel Weissenberger of Gamecritics.com noted in his review of Halo 3 that even though the Flood looked better than ever, its single strategy of rushing the player proved tedious over time. GamesRadar's Charlie Barratt listed the Flood as the worst part of Halo, contrasting what he considered fun, vibrant and open levels before the Flood's appearance with confined spaces and predictable enemies. The Flood has been recognized as one of the greatest game villains, making lists of greatest villains and enemies from Wizard Magazine, GameDaily, Guinness World Records Gamer's Edition, PC World, and Electronic Gaming Monthly. MTV considered Flood possession in Halo 3 a "great gaming moment" of 2007, stating that "with the power of the Xbox 360's graphics, this reanimation comes to vivid, distressing life, more memorably than it had in the earlier games. Here are the zombies of gaming doing what they do worst. [...] It's grisly and unforgettable." IGN listed the Flood as the 45th best video game villain, describing it as one of the most hated enemies in video games. The Flood feature in Halo merchandise, including action figures produced by Joyride Studios for Combat Evolved and Halo 2; one review of the figures expressed the sentiment that Joyride's models could not totally capture the ghoulish texture and detail of the Flood. Other Flood action figures and toys have been released by McFarlane Toys, Mega Bloks, and Jazwares. Other merchandise includes an Xbox 360 Avatar prop and a limited edition silver-plated statue of Master Chief fighting a Flood form. References External links Flood Archives at halo.bungie.org The Flood profile at Halowaypoint.com Halo (franchise) characters Fictional monsters Extraterrestrial characters in video games Fictional extraterrestrial species and races Fictional species and races Fictional superorganisms Mutant characters in video games Undead characters in video games Video game characters introduced in 2001 Video game species and races Zombie characters in video games Hive minds in fiction
Flood (Halo)
[ "Biology" ]
3,299
[ "Superorganisms", "Fictional superorganisms" ]
1,014,354
https://en.wikipedia.org/wiki/Uniporter
Uniporters, also known as solute carriers or facilitated transporters, are a type of membrane transport protein that passively transports solutes (small molecules, ions, or other substances) across a cell membrane. They use facilitated diffusion for the movement of solutes down their concentration gradient, from an area of high concentration to an area of low concentration. Unlike active transport, they do not require energy in the form of ATP to function. Uniporters are specialized to carry one specific ion or molecule and can be categorized as either channels or carriers. Facilitated diffusion may occur through three mechanisms: uniport, symport, or antiport. The mechanisms differ in how transport is coupled: uniport is the only one in which transport is not coupled to the transport of another solute. Uniporter carrier proteins work by binding to one molecule or substrate at a time. Uniporter channels open in response to a stimulus and allow the free flow of specific molecules. There are several ways in which the opening of uniporter channels may be regulated: Voltage – Regulated by the difference in voltage across the membrane Stress – Regulated by physical pressure on the transporter (as in the cochlea of the ear) Ligand – Regulated by the binding of a ligand to either the intracellular or extracellular side of the cell Uniporters are found in mitochondria, plasma membranes and neurons. The uniporter in the mitochondria is responsible for calcium uptake. The calcium channels are used for cell signaling and triggering apoptosis. The calcium uniporter transports calcium across the inner mitochondrial membrane and is activated when calcium rises above a certain concentration. The amino acid transporters function in transporting neutral amino acids for neurotransmitter production in brain cells. Voltage-gated potassium channels are also uniporters found in neurons and are essential for action potentials. These channels open when the membrane depolarizes past a certain voltage, allowing potassium to flow out of the cell and repolarize the membrane after the peak of the action potential; the underlying ionic gradients are maintained by sodium-potassium pumps. Glucose transporters are found in the plasma membrane and play a role in transporting glucose. They help to bring glucose from the blood or extracellular space into cells, usually to be used in energy-generating metabolic processes. Uniporters are essential for certain physiological processes in cells, such as nutrient uptake, waste removal, and maintenance of ionic balance. Discovery Early research in the 19th and 20th centuries on osmosis and diffusion provided the foundation for understanding the passive movement of molecules across cell membranes. In 1855, the physiologist Adolf Fick defined simple diffusion as the tendency for solutes to move from a region of higher concentration to one of lower concentration, a description now known as Fick's laws of diffusion. Through the work of Charles Overton in the 1890s, the concept that the biological membrane is semipermeable became important to understanding the regulation of substances in and out of cells. The discovery of facilitated diffusion by Wittenberg and Scholander suggested that proteins in the cell membrane aid in the transport of molecules. In the 1960s and 1970s, studies on the transport of glucose and other nutrients highlighted the specificity and selectivity of membrane transport proteins. 
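For reference, the relationship Fick described can be stated compactly in its modern one-dimensional form; this is a standard textbook expression rather than a formula given in this article:

J = -D \frac{dC}{dx}

where J is the diffusive flux, D is the diffusion coefficient of the solute, and dC/dx is the concentration gradient. The minus sign expresses that net movement runs from high to low concentration, which is the behavior uniporters exploit.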
Technological advancements in biochemistry helped isolate and characterize these proteins from cell membranes. Genetic studies on bacteria and yeast identified genes responsible for encoding transporters. This led to the discovery of glucose transporters (GLUT proteins), with GLUT1 being the first to be characterized. Identification of gene families encoding various transporters, such as the solute carrier (SLC) families, also advanced knowledge of uniporters and their functions. Newer research is focusing on techniques using recombinant DNA technology, electrophysiology and advanced imaging to understand uniporter functions. These experiments are designed to clone and express transporter genes in host cells to further analyze the three-dimensional structure of uniporters, as well as directly observe the movement of ions through proteins in real time. Mutations in uniporters have been linked to diseases such as GLUT1 deficiency syndrome, cystic fibrosis, Hartnup disease, primary hyperoxaluria and hypokalemic periodic paralysis. Types Glucose transporter (GLUTs) Glucose transporters (GLUTs) are a type of uniporter responsible for the facilitated diffusion of glucose molecules across cell membranes. Glucose is a vital energy source for most living cells; however, due to its size and polarity, it cannot freely move through the cell membrane. Glucose transporters are specialized for carrying glucose across the membrane. The GLUT proteins have several isoforms, each distributed in different tissues and exhibiting different kinetic properties. GLUTs are integral membrane proteins composed of 12 membrane-spanning α-helices. The GLUT proteins are encoded by the SLC2 genes and categorized into three classes based on amino acid sequence similarity. Humans have been found to express fourteen GLUT proteins. Class I GLUTs include GLUT1, one of the most studied isoforms, and GLUT2. GLUT1 is found in various tissues like the red blood cells, brain, and blood-brain barrier and is responsible for basal glucose uptake. GLUT2 is predominantly found in the liver, pancreas, and small intestines. It plays an important role in insulin secretion from pancreatic beta cells. Class II includes GLUT3 and GLUT4. GLUT3, primarily found in the brain, neurons and placenta, has a high affinity for glucose and facilitates glucose uptake into neurons. GLUT4 plays a role in insulin-regulated glucose uptake and is mainly found in insulin-sensitive tissues such as muscle and adipose tissue. Class III includes GLUT5, found in the small intestine, kidney, testes, and skeletal muscle. Unlike the other GLUTs, GLUT5 specifically transports fructose rather than glucose. Glucose transporters allow glucose molecules to move down their concentration gradient from areas of high glucose concentration to areas of low concentration. This process often involves bringing glucose from the extracellular space or blood into the cell. The glucose concentration gradient drives the process without the need for ATP. When glucose binds to the transporter, the protein undergoes a conformational change that carries the glucose across the membrane. Once the glucose unbinds, the protein returns to its original shape. Glucose transporters are essential for physiological processes with high energy demands in the brain, muscles, and kidneys, providing an adequate supply of energy substrate for metabolism. 
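Because each carrier must bind, translocate, and release its substrate, carrier-mediated facilitated diffusion through GLUTs saturates at high glucose concentrations, unlike simple diffusion. A standard way to sketch this, borrowed from Michaelis–Menten enzyme kinetics (a textbook idealization rather than a formula from this article), is:

v = \frac{V_{\max}\,[S]}{K_m + [S]}

where v is the transport rate, [S] is the external glucose concentration, V_max is the maximal rate reached when all carriers are occupied, and K_m is the concentration at which transport runs at half its maximum. In this sketch, the differing kinetic properties of the GLUT isoforms noted above correspond to different values of these constants.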
Diabetes, an example of a condition that involves glucose metabolism, highlights the importance of the regulation of glucose uptake in disease management. Mitochondrial uniporter (MCU) The mitochondrial calcium uniporter (MCU) is a protein complex located in the inner mitochondrial membrane that functions to take up calcium ions (Ca2+) from the cytoplasm into the matrix. The transport of calcium ions is specifically used in cellular function for regulating energy production in the mitochondria, cytosolic calcium signaling, and cell death. The uniporter becomes activated when cytoplasmic levels of calcium rise above 1 μM. The MCU complex comprises four parts: the pore-forming subunits, the regulatory subunits MICU1 and MICU2, and an auxiliary subunit, EMRE. These subunits work together to regulate the uptake of calcium in the mitochondria. Specifically, the EMRE subunit is required for the transport of calcium, while the MICU subunits tightly regulate the activity of MCU to prevent calcium overload in the mitochondria. Calcium is fundamental for signaling pathways in cells, as well as for cell death pathways. The function of the mitochondrial uniporter is critical for maintaining cellular homeostasis. The MICU1 and MICU2 subunits are a heterodimer connected by a disulfide bridge. When there are high levels of cytoplasmic calcium, the MICU1-MICU2 heterodimer undergoes a conformational change. The heterodimer subunits show cooperative activation, which means that calcium binding to one MICU subunit induces a conformational change in the other. The uptake of calcium is balanced by the sodium-calcium exchanger. Large neutral amino acid transporter (LAT1) The L-type amino acid transporter (LAT1) is a uniporter that mediates the transport of neutral amino acids like L-tryptophan, leucine, histidine, proline, and alanine, among others. LAT1 favors the transport of amino acids with large branched or aromatic side chains. The transporter moves essential amino acids across the intestinal epithelium, placenta, and blood-brain barrier for cellular processes such as metabolism and cell signaling. The transporter is of particular significance in the central nervous system as it provides the necessary amino acids for protein synthesis and neurotransmitter production in brain cells. Aromatic amino acids like phenylalanine and tryptophan are precursors for neurotransmitters like dopamine, serotonin, and norepinephrine. LAT1 is a membrane protein of the SLC7 family of transporters and works in conjunction with the SLC3 family member 4F2hc to form a heterodimeric complex (LAT1-4F2hc). The heterodimer consists of a light chain and a heavy chain covalently bonded by a disulfide bond. The light chain is the one that carries out transport, while the heavy chain is needed to stabilize the dimer. There is some controversy over whether LAT1 is a uniporter or an antiporter. The transporter has uniporter characteristics, transporting amino acids into cells in a unidirectional manner down the concentration gradient. However, it has recently been found that the transporter also has antiporter characteristics, exchanging neutral amino acids for abundant intracellular amino acids. Over-expression of LAT1 has been found in human cancers and is thought to play a role in cancer metabolism. 
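The cooperative activation of the MICU1-MICU2 heterodimer described in the MCU section above is often summarized with a Hill-type relation; this is a generic modeling sketch, assumed here for illustration rather than taken from this article:

f = \frac{[\mathrm{Ca}^{2+}]^n}{K^n + [\mathrm{Ca}^{2+}]^n}

where f is the fraction of maximal uniporter activity, K is the half-activation calcium concentration (on the order of the roughly 1 μM threshold mentioned above), and a Hill coefficient n greater than 1 captures the cooperativity between the MICU subunits.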
Equilibrative nucleoside transporters (ENTs) The nucleoside transporters, or equilibrative nucleoside transporters, are uniporters that transport nucleosides, nucleobases, and therapeutic drugs across the cell membrane. Nucleosides serve as building blocks for nucleic acid synthesis and are key components of energy metabolism in creating ATP/GTP. Nucleosides such as adenosine and inosine also act as ligands for purinergic receptors. ENTs allow the transport of nucleosides down their concentration gradient. They also have the ability to deliver nucleoside analogs to intracellular targets for the treatment of tumors and viral infections. ENTs are part of the Major Facilitator Superfamily (MFS) and are suggested to transport nucleosides using a clamp-and-switch model. In this model, the substrate first binds to the transporter, which leads to a conformational change that forms an occluded state (clamp). Then, the transporter switches to face the other side of the membrane and releases the bound substrate (switch). ENTs have been found in protozoa and mammals. In humans, four have been identified (hENT1–hENT4). ENTs are expressed across all tissue types, but certain ENT proteins have been found to be more abundant in specific tissues. hENT1 is found mostly in the adrenal glands, ovary, stomach and small intestines. hENT2 is expressed mostly in neurological tissues and small parts of the skin, placenta, urinary bladder, heart muscle and gallbladder. hENT3 is expressed highly in the cerebral cortex, lateral ventricle, ovary and adrenal gland. hENT4 is more commonly known as the plasma membrane monoamine transporter (PMAT), as it facilitates the movement of organic cations and biogenic amines across the membrane. Mechanism Uniporters transport molecules or ions passively across a cell membrane, down the concentration gradient. Upon binding and recognition of a specific substrate molecule on one side of the membrane, a conformational change is triggered in the transporter protein. This causes the transporter protein to change its three-dimensional shape, capturing the substrate molecule within the transporter protein's structure. The conformational change translocates the substrate to the other side of the membrane, where another conformational change releases the substrate molecule. The uniporter then returns to its original conformation to bind another molecule for transport. Unlike symporters and antiporters, uniporters transport one molecule or ion in a single direction based on the concentration gradient. The entire process relies on the substrate's concentration difference across the membrane as the driving force for transport. Cellular energy in the form of ATP is not required for this process. Physiological processes Uniporters play an essential role in carrying out various cellular functions. Each uniporter is specialized to facilitate the transport of a specific molecule or ion across the cell membrane. Examples of the physiological roles uniporters serve include: Nutrient uptake: Uniporters facilitate the transport of essential nutrients into the cell. Glucose transporters (GLUTs) are uniporters that take up glucose for energy production. 
Ion homeostasis: Uniporters help maintain the balance of ions within cells Metabolism: Uniporters are involved in the transport of essential ions, amino acids, and molecules required for metabolic pathways, protein synthesis, and energy production Cell signaling: Calcium uniporters help regulate intracellular calcium levels essential for signal transduction Waste removal: Uniporters aid in removing metabolic waste products and toxins from cells pH regulation: Transport of ions by uniporters also helps to maintain the overall acid-base balance within cells Mutations Mutations in genes encoding uniporters can produce dysfunctional transporter proteins. The resulting loss of function disrupts cellular processes and can lead to various diseases and disorders. See also Antiporter Symporter References Integral membrane proteins Transport phenomena
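The saturable, carrier-mediated transport described in the Mechanism section above can be illustrated with a minimal kinetic sketch. The model below is not taken from the article: the rate constants are arbitrary placeholders, and the Michaelis–Menten-style form is a standard simplification of facilitated diffusion rather than the measured behavior of any particular uniporter.

```python
# Minimal sketch (not from the article): net flux through a uniporter modeled
# with Michaelis-Menten-style saturable kinetics, compared with simple diffusion.
# Vmax and Km are arbitrary placeholder values, not measured constants.

def uniporter_flux(c_out: float, c_in: float, vmax: float = 100.0, km: float = 5.0) -> float:
    """Net inward flux (arbitrary units); the carrier saturates at high substrate levels."""
    inward = vmax * c_out / (km + c_out)   # binding/translocation cycle from outside
    outward = vmax * c_in / (km + c_in)    # reverse cycle from inside
    return inward - outward                # net movement follows the concentration gradient

def diffusion_flux(c_out: float, c_in: float, p: float = 1.0) -> float:
    """Simple diffusion for comparison: flux grows linearly with the gradient."""
    return p * (c_out - c_in)

if __name__ == "__main__":
    for c_out in (1.0, 10.0, 100.0, 1000.0):
        print(c_out, round(uniporter_flux(c_out, 1.0), 1), diffusion_flux(c_out, 1.0))
    # The carrier flux plateaus near Vmax while diffusion keeps rising; when the
    # concentrations are equal the net flux is zero, as for any passive process.
```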
Uniporter
[ "Physics", "Chemistry", "Engineering" ]
3,077
[ "Transport phenomena", "Chemical engineering", "Physical phenomena" ]
1,014,366
https://en.wikipedia.org/wiki/Potassium%20iodide
Potassium iodide is a chemical compound, medication, and dietary supplement. It is a medication used for treating hyperthyroidism, in radiation emergencies, and for protecting the thyroid gland when certain types of radiopharmaceuticals are used. It is also used for treating skin sporotrichosis and phycomycosis. It is a supplement used by people with low dietary intake of iodine. It is administered orally. Common side effects include vomiting, diarrhea, abdominal pain, rash, and swelling of the salivary glands. Other side effects include allergic reactions, headache, goitre, and depression. While use during pregnancy may harm the baby, its use is still recommended in radiation emergencies. Potassium iodide has the chemical formula KI. Commercially it is made by mixing potassium hydroxide with iodine. Potassium iodide has been used medically since at least 1820. It is on the World Health Organization's List of Essential Medicines. Potassium iodide is available as a generic medication and over the counter. Potassium iodide is also used for the iodization of salt. Medical uses Dietary supplement Potassium iodide is a nutritional supplement in animal feeds and also in the human diet. In humans it is the most common additive used for iodizing table salt (a public health measure to prevent iodine deficiency in populations that get little seafood). The oxidation of iodide causes slow loss of iodine content from iodised salts that are exposed to excess air. The alkali metal iodide salt, over time and exposure to excess oxygen and carbon dioxide, slowly oxidizes to metal carbonate and elemental iodine, which then evaporates. Potassium iodate (KIO3) is used to iodize some salts so that the iodine is not lost by oxidation. Dextrose or sodium thiosulfate are often added to iodized table salt to stabilize potassium iodide thus reducing loss of the volatile chemical. Thyroid protection in nuclear accidents Thyroid iodine uptake blockade with potassium iodide is used in nuclear medicine scintigraphy and therapy with some radioiodinated compounds that are not targeted to the thyroid, such as iobenguane (MIBG), which is used to image or treat neural tissue tumors, or iodinated fibrinogen, which is used in fibrinogen scans to investigate clotting. These compounds contain iodine, but not in the iodide form. Since they may be ultimately metabolized or break down to radioactive iodide, it is common to administer non-radioactive potassium iodide to ensure that iodide from these radiopharmaceuticals is not sequestered by the normal affinity of the thyroid for iodide. The World Health Organization (WHO) provides guidelines for potassium iodide use following a nuclear accident. The dosage of potassium iodide is age-dependent: neonates (<1 month) require 16 mg/day; children aged 1 month to 3 years need 32 mg/day; those aged 3-12 years need 65 mg/day; and individuals over 12 years and adults require 130 mg/day. These dosages list mass of potassium iodide rather than elemental iodine. Potassium iodide can be administered as tablets or as Lugol's iodine solution. The same dosage is recommended by the US Food and Drug Administration. A single daily dose is typically sufficient for 24-hour protection. However, in cases of prolonged or repeated exposure, health authorities may recommend multiple daily doses. Priority for prophylaxis is given to the most sensitive groups: pregnant and breastfeeding women, infants, and children under 18 years. 
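The age-banded doses quoted above can be expressed as a simple lookup. The sketch below is for illustration only and is not medical guidance; the thresholds simply follow the WHO/FDA figures given in the text, and the weight-based adjustment for larger adolescents discussed later in the article is not modeled.

```python
# Illustrative only: the age-banded potassium iodide (KI) doses quoted above,
# in mg of potassium iodide (not elemental iodine). Not medical guidance.

def ki_dose_mg(age_years: float) -> int:
    """Return the single daily KI dose in mg for the age bands given in the text."""
    if age_years < 1 / 12:      # neonates, under 1 month
        return 16
    if age_years < 3:           # 1 month to 3 years
        return 32
    if age_years < 12:          # 3 to 12 years
        return 65
    return 130                  # over 12 years and adults

assert ki_dose_mg(0.05) == 16
assert ki_dose_mg(2) == 32
assert ki_dose_mg(8) == 65
assert ki_dose_mg(30) == 130
```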
The recommended doses of potassium iodide, which contains a stable isotope of iodine, protect only the thyroid gland, and only against radioactive iodine. They do not offer protection against other radioactive substances. Some sources recommend alternative dosing regimens. Not all sources are in agreement on the necessary duration of thyroid blockade, although agreement appears to have been reached about the necessity of blockade for both scintigraphic and therapeutic applications of iobenguane. Commercially available iobenguane is labeled with iodine-123, and product labeling recommends administration of potassium iodide 1 hour prior to administration of the radiopharmaceutical for all age groups, while the European Association of Nuclear Medicine recommends (for iobenguane labeled with either isotope) that potassium iodide administration begin one day prior to radiopharmaceutical administration and continue until the day following the injection, with the exception of newborns, who do not require potassium iodide doses following radiopharmaceutical injection. Product labeling for diagnostic iodine-131 iobenguane recommends potassium iodide administration one day before injection and continuing 5 to 7 days following administration, in keeping with the much longer half-life of this isotope and its greater danger to the thyroid. Iodine-131 iobenguane used for therapeutic purposes requires a different pre-medication duration, beginning 24–48 hours prior to iobenguane injection and continuing 10–15 days following injection. In 1982, the U.S. Food and Drug Administration approved potassium iodide to protect the thyroid gland from radioactive iodine released in accidents or fission emergencies. In an accidental event or attack on a nuclear power plant, or in nuclear bomb fallout, volatile fission product radionuclides may be released. Of these products, iodine-131 is one of the most common and is particularly dangerous to the thyroid gland because it may lead to thyroid cancer. By saturating the body with a source of stable iodide prior to exposure, inhaled or ingested radioactive iodine tends to be excreted, which prevents radioiodine uptake by the thyroid. According to one 2000 study, "KI administered up to 48 h before exposure can almost completely block thyroid uptake and therefore greatly reduce the thyroid absorbed dose. However, KI administration 96 h or more before exposure has no significant protective effect. In contrast, KI administration after exposure to radioiodine induces a smaller and rapidly decreasing blockade effect." According to the FDA, KI should not be taken as a preventative before radiation exposure. Since KI protects for approximately 24 hours, it must be dosed daily until a risk of significant exposure to radioiodine no longer exists. Emergency 130-milligram potassium iodide doses provide 100 mg iodide (the other 30 mg is the potassium in the compound), which is roughly 700 times larger than the normal nutritional need (see recommended dietary allowance) for iodine, which is 150 micrograms (0.15 mg) of iodine (as iodide) per day for an adult. A typical tablet weighs 160 mg, with 130 mg of potassium iodide and 30 mg of excipients, such as binding agents. Potassium iodide cannot protect against any other mechanisms of radiation poisoning, nor can it provide any degree of protection against dirty bombs that produce radionuclides other than those of iodine. The potassium iodide in iodized salt is insufficient for this use. 
A likely lethal dose of salt (more than a kilogram) would be needed to equal the potassium iodide in one tablet. The World Health Organization does not recommend KI prophylaxis for adults over 40 years, unless the radiation dose from inhaled radioiodine is expected to threaten thyroid function, because the KI side effects increase with age and may exceed the KI protective effects; "...unless doses to the thyroid from inhalation rise to levels threatening thyroid function, that is of the order of about 5 Gy. Such radiation doses will not occur far away from an accident site." The U.S. Department of Health and Human Services restated these recommendations two years later: "The downward KI (potassium iodide) dose adjustment by age group, based on body size considerations, adheres to the principle of minimum effective dose. The recommended standard (daily) dose of KI for all school-age children is the same (65 mg). However, adolescents approaching adult size (i.e., >70 kg [154 lbs]) should receive the full adult dose (130 mg) for maximal block of thyroid radioiodine uptake. Neonates ideally should receive the lowest dose (16 mg) of KI." Side effects There is reason for caution in prescribing the ingestion of high doses of potassium iodide and iodate, because their unnecessary use can cause conditions such as the Jod-Basedow phenomenon, can trigger or worsen hyperthyroidism and hypothyroidism, and can cause temporary or even permanent thyroid conditions. It can also cause sialadenitis (an inflammation of the salivary gland), gastrointestinal disturbances, and rashes. Potassium iodide is also not recommended for people with dermatitis herpetiformis and hypocomplementemic vasculitis – conditions that are linked to a risk of iodine sensitivity. There have been some reports of potassium iodide treatment causing swelling of the parotid gland (one of the three glands that secrete saliva), due to its stimulatory effects on saliva production. A saturated solution of KI (SSKI) is typically given orally in adult doses several times a day (5 drops of SSKI, assumed to be about 1/3 mL) for thyroid blockade (to prevent the thyroid from releasing thyroid hormone), and occasionally this dose is also used when iodide is employed as an expectorant (the total dose is about one gram KI per day for an adult). The anti-radioiodine doses used for uptake blockade are lower, and range downward from 100 mg a day for an adult to less than this for children (see the age-based doses above). All of these doses should be compared with the far lower dose of iodine needed in normal nutrition, which is only 150 μg per day (150 micrograms, not milligrams). At maximal doses, and sometimes at much lower doses, side effects of iodide used for medical reasons, in doses of 1000 times the normal nutritional need, may include: acne, loss of appetite, or upset stomach (especially during the first several days, as the body adjusts to the medication). More severe side effects that require notification of a physician are: fever, weakness, unusual tiredness, swelling in the neck or throat, mouth sores, skin rash, nausea, vomiting, stomach pains, irregular heartbeat, numbness or tingling of the hands or feet, or a metallic taste in the mouth. In the event of a radioiodine release, the ingestion of prophylactic potassium iodide, if available, or even iodate, would rightly take precedence over perchlorate administration, and would be the first line of defence in protecting the population from a radioiodine release. 
However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylactic drugs, the addition of perchlorate ions to the water supply, or the distribution of perchlorate tablets, would serve as a cheap, efficacious second line of defense against carcinogenic radioiodine bioaccumulation. The ingestion of goitrogenic drugs, much like that of potassium iodide, is not without its dangers, such as hypothyroidism. In all these cases, however, despite the risks, the prophylactic benefits of intervention with iodide, iodate, or perchlorate outweigh the serious cancer risk from radioiodine bioaccumulation in regions where radioiodine has sufficiently contaminated the environment. Industrial uses KI is used with silver nitrate to make silver iodide (AgI), an important chemical in film photography. KI is a component in some disinfectants and hair treatment chemicals. KI is also used as a fluorescence quenching agent in biomedical research, an application that takes advantage of collisional quenching of fluorescent substances by the iodide ion. However, for several fluorophores, the addition of KI at μM–mM concentrations results in an increase in fluorescence intensity, so iodide can also act as a fluorescence enhancer. Potassium iodide is a component in the electrolyte of dye-sensitised solar cells (DSSCs) along with iodine. Potassium iodide finds its most important applications in organic synthesis, mainly in the preparation of aryl iodides by the Sandmeyer reaction, starting from aryl amines. Aryl iodides are in turn used to attach aryl groups to other organics by nucleophilic substitution, with iodide ion as the leaving group. Chemistry Potassium iodide is an ionic compound made of potassium cations (K+) and iodide anions (I−). It crystallises in the sodium chloride structure. It is produced industrially by treating KOH with iodine. It is a white salt, which is the most commercially significant iodide compound, with approximately 37,000 tons produced in 1985. It absorbs water less readily than sodium iodide, making it easier to work with. Aged and impure samples are yellow because of the slow oxidation of the salt to potassium carbonate and elemental iodine. Inorganic chemistry Since the iodide ion is a mild reducing agent, it is easily oxidised to iodine (I2) by powerful oxidising agents such as chlorine: 2 KI + Cl2 → 2 KCl + I2. This reaction is employed in the isolation of iodine from natural sources. Air will oxidize iodide, as evidenced by the observation of a purple extract when aged samples of KI are rinsed with dichloromethane. As formed under acidic conditions, hydriodic acid (HI) is a stronger reducing agent. Like other iodide salts, KI forms triiodide (I3−) when combined with elemental iodine. Unlike elemental iodine, triiodide salts can be highly water-soluble. Through this reaction, iodine is used in redox titrations. An aqueous solution of KI and iodine (Lugol's iodine) is used as a disinfectant and as an etchant for gold surfaces. Potassium iodide and silver nitrate are used to make silver(I) iodide, which is used for high-speed photographic film and for cloud seeding: KI + AgNO3 → AgI + KNO3. Organic chemistry KI serves as a source of iodide in organic synthesis. A useful application is in the preparation of aryl iodides from arenediazonium salts. KI, acting as a source of iodide, may also act as a nucleophilic catalyst for the alkylation of alkyl chlorides, bromides, or mesylates. History Potassium iodide has been used medically since at least 1820. Some of the earliest uses included treatments for syphilis and for lead and mercury poisoning. 
Chernobyl Potassium iodide's (KI) value as a radiation protective (thyroid blocking) agent was demonstrated following the Chernobyl nuclear reactor disaster in April 1986. A saturated solution of potassium iodide (SSKI) was administered to 10.5 million children and 7 million adults in Poland as a preventative measure against accumulation of radioactive iodine in the thyroid gland. Reports differ concerning whether people in the areas immediately surrounding Chernobyl itself were given the supplement. However, the US Nuclear Regulatory Commission (NRC) reported, "thousands of measurements of I-131 (radioactive iodine) activity...suggest that the observed levels were lower than would have been expected had this prophylactic measure not been taken. The use of KI...was credited with permissible iodine content in 97% of the evacuees tested." With the passage of time, people living in irradiated areas where KI was not available have developed thyroid cancer at epidemic levels, which is why the US Food and Drug Administration (FDA) reported "The data clearly demonstrate the risks of thyroid radiation... KI can be used [to] provide safe and effective protection against thyroid cancer caused by irradiation." Chernobyl also demonstrated that the need to protect the thyroid from radiation was greater than expected. Within ten years of the accident, it became clear that thyroid damage caused by released radioactive iodine was virtually the only adverse health effect that could be measured. As reported by the NRC, studies after the accident showed that "As of 1996, except for thyroid cancer, there has been no confirmed increase in the rates of other cancers, including leukemia, among the... public, that have been attributed to releases from the accident." But equally important to the question of KI is the fact that radioactivity releases are not "local" events. Researchers at the World Health Organization accurately located and counted the residents with cancer from Chernobyl and were startled to find that "the increase in incidence [of thyroid cancer] has been documented up to 500 km from the accident site... significant doses from radioactive iodine can occur hundreds of kilometers from the site, beyond emergency planning zones." Consequently, far more people than anticipated were affected by the radiation, which caused the United Nations to report in 2002 that "The number of people with thyroid cancer... has exceeded expectations. Over 11,000 cases have already been reported." Hiroshima and Nagasaki The Chernobyl findings were consistent with studies of the effects of previous radioactivity releases. In 1945, several hundred thousand people working and residing in the Japanese cities of Hiroshima and Nagasaki were exposed to high levels of radiation after atomic bombs were detonated over the two cities by the United States. Survivors of the A-bombings, also known as hibakusha, have markedly high rates of thyroid disease; a 2006 study of 4091 hibakusha found nearly half the participants (1833; 44.8%) had an identifiable thyroid disease. An editorial in The Journal of the American Medical Association regarding thyroid diseases in both hibakusha and those affected by the Chernobyl disaster reports that "[a] straight line adequately describes the relationship between radiation dose and thyroid cancer incidence" and states "it is remarkable that a biological effect from a single brief environmental exposure nearly 60 years in the past is still present and can be detected." 
Nuclear weapons testing The development of thyroid cancer among residents in the North Pacific from radioactive fallout following the United States' nuclear weapons testing in the 1950s (on islands nearly 200 miles downwind of the tests) was instrumental in the 1978 decision by the FDA to issue a request for the availability of KI for thyroid protection in the event of a release from a commercial nuclear power plant or weapons-related nuclear incident. Noting that KI's effectiveness was "virtually complete" and finding that iodine in the form of KI was substantially superior to other forms, including iodate (KIO3), in terms of safety, effectiveness, lack of side effects, and speed of onset, the FDA invited manufacturers to submit applications to produce and market KI. Fukushima It was reported on 16 March 2011 that potassium iodide tablets were given preventively to U.S. Naval air crew members flying within 70 nautical miles of the Fukushima Daiichi Nuclear Power Plant damaged in the earthquake (8.9/9.0 magnitude) and ensuing tsunami on 11 March 2011. The measures were seen as precautions, and the Pentagon said no U.S. forces had shown signs of radiation poisoning. By 20 March, the US Navy instructed personnel coming within 100 miles of the reactor to take the pills. The Netherlands In the Netherlands, the central storage of iodine pills is located in Zoetermeer, near The Hague. In 2017, the Dutch government distributed pills to hundreds of thousands of residents who lived within a certain distance of nuclear power plants and met some other criteria. Belgium By 2020, potassium iodide tablets were made available free of charge to all residents in all pharmacies throughout the country. Formulations Three companies (Anbex, Inc., Fleming Co, and Recipharm of Sweden) have met the strict FDA requirements for manufacturing and testing of KI, and they offer products (IOSAT, ThyroShield, and ThyroSafe, respectively) which are available for purchase. In 2012, Fleming Co. sold all its product rights and manufacturing facility to other companies and no longer exists. ThyroShield is currently not in production. Tablets of potassium iodide are supplied for emergency purposes related to blockade of radioiodine uptake, a common form of radiation poisoning due to environmental contamination by the short-lived fission product iodine-131. Potassium iodide may also be administered pharmaceutically for thyroid storm. For reasons noted above, therapeutic drops of SSKI, or 130 mg tablets of KI as used for nuclear fission accidents, are not used as nutritional supplements, since an SSKI drop or nuclear-emergency tablet provides 300 to 700 times more iodine than the daily adult nutritional requirement. Dedicated nutritional iodide tablets containing 0.15 mg (150 micrograms (μg)) of iodide, from KI or from various other sources (such as kelp extract), are marketed as supplements, but they are not to be confused with the much higher pharmaceutical dose preparations. Potassium iodide can be conveniently prepared as a saturated solution, abbreviated SSKI. This method of delivering potassium iodide does not require a way to weigh out the compound, allowing it to be used in an emergency situation. KI crystals are simply added to water until no more KI will dissolve and the excess instead sits at the bottom of the container. With pure water, the concentration of KI in the solution depends only on the temperature. Potassium iodide is highly soluble in water, thus SSKI is a concentrated source of KI. 
At 20 degrees Celsius, the solubility of KI is 140–148 grams per 100 grams of water. Because the volumes of KI and water are approximately additive, the resulting SSKI solution will contain about 1.00 gram (1000 mg) KI per milliliter (mL) of solution. This is 100% weight/volume (note units of mass concentration) of KI (one gram KI per mL of solution), which is possible because SSKI is significantly more dense than pure water, about 1.67 g/mL. Because KI is about 76.4% iodide by weight, SSKI contains about 764 mg iodide per mL. This concentration of iodide allows the calculation of the iodide dose per drop, if one knows the number of drops per milliliter. For SSKI, a solution more viscous than water, there are assumed to be 15 drops per mL; the iodide dose is therefore approximately 51 mg per drop. It is conventionally rounded to 50 mg per drop. The term SSKI is also used, especially by pharmacists, to refer to a U.S.P. pre-prepared solution formula, made by adding KI to water to prepare a solution containing 1000 mg KI per mL of solution (100% wt/volume KI solution), to closely approximate the concentration of SSKI made by saturation. This is essentially interchangeable with SSKI made by saturation, and also contains about 50 mg iodide per drop. Saturated solutions of potassium iodide can be an emergency treatment for hyperthyroidism (so-called thyroid storm), as high amounts of iodide temporarily suppress secretion of thyroxine from the thyroid gland. The dose typically begins with a loading dose, then 1/3 mL SSKI (5 drops, or 250 mg iodine as iodide), three times per day. Iodide solutions made from a few drops of SSKI added to drinks have also been used as expectorants to increase the water content of respiratory secretions and encourage effective coughing. SSKI has been proposed as a topical treatment for sporotrichosis, but no trials have been conducted to determine the efficacy or side effects of such treatment. Potassium iodide has been used for the symptomatic treatment of persistent erythema nodosum lesions whose cause remains unknown. It has been used in cases of erythema nodosum associated with Crohn's disease. Due to its high potassium content, SSKI is extremely bitter, and if possible it is administered in a sugar cube or small ball of bread. It may also be mixed into much larger volumes of juices. Neither SSKI nor KI tablets are used as nutritional supplements, since the nutritional requirement for iodine is only 150 micrograms (0.15 mg) of iodide per day. Thus, a drop of SSKI provides 50/0.15 = 333 times the daily iodine requirement, and a standard KI tablet provides twice this much. References External links World Health Organization's guidelines for iodine prophylaxis following a nuclear accident Potassium compounds Alkali metal iodides Disaster preparedness Expectorants Iodides Food additives Metal halides Photographic chemicals Radiobiology World Health Organization essential medicines Rock salt crystal structure Ophthalmology drugs
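The concentration arithmetic above can be reproduced step by step. The figures in this sketch (about 1 g KI per mL of saturated solution, 76.4% iodide by mass, 15 drops per mL, and a 0.15 mg daily iodine requirement) are taken from the article itself and are not independently verified here.

```python
# Sketch of the SSKI arithmetic worked through in the text above; the inputs are
# the article's own figures, repeated here only to show how the per-drop dose
# and the multiple of the daily requirement follow from them.

ki_per_ml_mg = 1000.0          # saturated solution: ~1 g KI per mL of solution
iodide_fraction = 0.764        # KI is ~76.4% iodide by mass (about 127 / 166)
drops_per_ml = 15              # conventional assumption for a viscous solution
rda_iodine_mg = 0.15           # adult nutritional requirement, 150 micrograms

iodide_per_ml_mg = ki_per_ml_mg * iodide_fraction      # ~764 mg iodide per mL
iodide_per_drop_mg = iodide_per_ml_mg / drops_per_ml   # ~51 mg, rounded to 50
multiple_of_rda = 50 / rda_iodine_mg                   # ~333 times the daily need

print(round(iodide_per_ml_mg), round(iodide_per_drop_mg, 1), round(multiple_of_rda))
# -> 764 50.9 333
```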
Potassium iodide
[ "Chemistry", "Biology" ]
5,158
[ "Inorganic compounds", "Radiobiology", "Salts", "Metal halides", "Radioactivity" ]
1,014,371
https://en.wikipedia.org/wiki/Resilient%20Packet%20Ring
Resilient Packet Ring (RPR), as defined by IEEE standard 802.17, is a protocol designed for the transport of data traffic over optical fiber ring networks. The standard began development in November 2000 and has undergone several amendments since its initial standard was completed in June 2004. The amended standards are 802.17a through 802.17d, the last of which was adopted in May 2011. It is designed to provide the resilience found in SONET and Synchronous Digital Hierarchy networks (50 ms protection) but, instead of setting up circuit oriented connections, provides a packet based transmission, in order to increase the efficiency of Ethernet and IP services. Technical details RPR works on a concept of dual counter rotating rings called ringlets. These ringlets are set up by creating RPR stations at nodes where traffic is supposed to drop, per flow (a flow is the ingress and egress of data traffic). RPR uses Media Access Control protocol (MAC) messages to direct the traffic, which can use either ringlet of the ring. The nodes also negotiate for bandwidth among themselves using fairness algorithms, avoiding congestion and failed spans. The avoidance of failed spans is accomplished by using one of two techniques known as steering and wrapping. Under steering, if a node or span is broken, all nodes are notified of a topology change and they reroute their traffic. In wrapping, the traffic is looped back at the last node prior to the break and routed to the destination station. Class of service and traffic queues All traffic on the ring is assigned a Class of Service (CoS) and the standard specifies three classes. Class A (or High) traffic is a pure committed information rate (CIR) and is designed to support applications requiring low latency and jitter, such as voice and video. Class B (or Medium) traffic is a mix of both a CIR and an excess information rate (EIR; which is subject to fairness queuing). Class C (or Low) is best effort traffic, utilizing whatever bandwidth is available. This is primarily used to support Internet access traffic. Spatial reuse Another concept within RPR is what is known as spatial reuse. Because RPR strips the signal once it reaches the destination (unlike a SONET UPSR/SDH SNCP ring, in which the bandwidth is consumed around the entire ring) it can reuse the freed space to carry additional traffic. The RPR standard also supports the use of learning bridges (IEEE 802.1D) to further enhance efficiency in point to multipoint applications and VLAN tagging (IEEE 802.1Q). One drawback of the first version of RPR was that it did not provide spatial reuse for frame transmission to/from MAC addresses not present in the ring topology. This was addressed by IEEE 802.17b, which defines an optional spatially aware sublayer (SAS). This allows spatial reuse for frame transmission to/from MAC address not present in the ring topology. See also Ethernet Automatic Protection Switching Spatial Reuse Protocol (Cisco) Metro Ring Protocol (Foundry Networks) Open Transport Network (Nokia Siemens Networks) Dynamic Packet Transport (Cisco) Ethernet Ring Protection Switching (ITU-T) References External links IEEE 802.17 Resilient Packet Ring Working Group IEEE 802 Network architecture IEEE standards
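The ringlet selection and steering behavior described above can be illustrated with a toy model. The sketch below is not the IEEE 802.17 algorithm: the shortest-hop heuristic, the station numbering, and the failure handling are simplifying assumptions chosen only to show how traffic can be steered onto the opposite ringlet when a span fails.

```python
# Toy model (not the IEEE 802.17 state machines): pick a ringlet for a flow by
# hop count on a dual counter-rotating ring, and "steer" around a failed span
# by falling back to the opposite ringlet. Station numbering is arbitrary.

N = 6  # stations 0..5 arranged in a ring

def hops(src: int, dst: int, ringlet: int) -> int:
    """Hop count from src to dst on ringlet 0 (clockwise) or ringlet 1 (counter-clockwise)."""
    return (dst - src) % N if ringlet == 0 else (src - dst) % N

def spans(src: int, dst: int, ringlet: int):
    """Spans (node pairs) crossed when travelling on the chosen ringlet."""
    step = 1 if ringlet == 0 else -1
    node, crossed = src, []
    for _ in range(hops(src, dst, ringlet)):
        nxt = (node + step) % N
        crossed.append((node, nxt))
        node = nxt
    return crossed

def steer(src: int, dst: int, failed_span=None) -> int:
    """Prefer the shorter ringlet; steer to the other one if the path crosses a failed span."""
    preferred = min((0, 1), key=lambda r: hops(src, dst, r))
    if failed_span and (failed_span in spans(src, dst, preferred)
                        or tuple(reversed(failed_span)) in spans(src, dst, preferred)):
        return 1 - preferred
    return preferred

print(steer(0, 2))                      # 0: the clockwise ringlet is shorter (2 hops vs 4)
print(steer(0, 2, failed_span=(1, 2)))  # 1: the preferred path crosses the break, so steer
```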
Resilient Packet Ring
[ "Technology", "Engineering" ]
680
[ "Network architecture", "Computer standards", "IEEE standards", "Computer networks engineering" ]
1,014,414
https://en.wikipedia.org/wiki/Antiporter
An antiporter (also called an exchanger or counter-transporter) is an integral membrane protein that uses secondary active transport to move two or more molecules in opposite directions across a phospholipid membrane. It is a type of cotransporter, which means that it uses the energetically favorable movement of one molecule down its electrochemical gradient to power the energetically unfavorable movement of another molecule up its electrochemical gradient. This is in contrast to symporters, another type of cotransporter, which move two or more ions in the same direction, and to primary active transport, which is directly powered by ATP. Transport may involve one or more of each type of solute. For example, the Na+/Ca2+ exchanger, found in the plasma membrane of many cells, moves three sodium ions in one direction and one calcium ion in the other. As with sodium in this example, antiporters rely on an established gradient that makes entry of one ion energetically favorable to force the unfavorable movement of a second molecule in the opposite direction. Through their diverse functions, antiporters are involved in various important physiological processes, such as regulation of the strength of cardiac muscle contraction, transport of carbon dioxide by erythrocytes, regulation of cytosolic pH, and accumulation of sucrose in plant vacuoles. Background Cotransporters are found in all organisms and fall under the broader category of transport proteins, a diverse group of transmembrane proteins that includes uniporters, symporters, and antiporters. Each of them is responsible for providing a means of movement for water-soluble molecules that otherwise would not be able to pass through the lipid-based plasma membrane. The simplest of these are the uniporters, which facilitate the movement of one type of molecule in the direction that follows its concentration gradient. In mammals, they are most commonly responsible for bringing glucose and amino acids into cells. Symporters and antiporters are more complex because they move more than one ion, and the movement of one of those ions is in an energetically unfavorable direction. As multiple molecules are involved, multiple binding processes must occur as the transporter undergoes a cycle of conformational changes to move them from one side of the membrane to the other. The mechanism used by these transporters limits their functioning to moving only a few molecules at a time. As a result, symporters and antiporters are characterized by a slower transport speed, moving between 10² and 10⁴ molecules per second. Compare this to ion channels, which provide a means for facilitated diffusion and allow between 10⁷ and 10⁸ ions to pass through the plasma membrane per second. Though ATP-powered pumps also move molecules in an energetically unfavorable direction and undergo conformational changes to do so, they fall under a different category of membrane proteins because they couple the energy derived from ATP hydrolysis to the transport of their respective ions. These ion pumps are very selective, consisting of a double gating system where at least one of the gates is always shut. The ion is allowed to enter from one side of the membrane while one of the gates is open, after which that gate shuts. Only then will the second gate open to allow the ion to leave on the membrane's opposite side. The interval between the alternating gate openings is referred to as the occluded state, in which the ions are bound and both gates are shut. 
These gating reactions limit the speed of these pumps, causing them to function even more slowly than the transporters described above, moving between 10⁰ and 10³ ions per second. Structure and function To function in active transport, a membrane protein must meet certain requirements. The first of these is that the interior of the protein must contain a cavity that can hold its corresponding molecule or ion. Next, the protein must be able to assume at least two different conformations, one with its cavity open to the extracellular space and the other with its cavity open to the cytosol. This is crucial for the movement of molecules from one side of the membrane to the other. Finally, the cavity of the protein must contain binding sites for its ligands, and these binding sites must have a different affinity for the ligand in each of the protein's conformations. Without this, the ligand would not be able to bind to the transporter on one side of the plasma membrane and be released from it on the other side. As transporters, antiporters have all of these features. Because antiporters are highly diverse, their structure can vary widely depending upon the type of molecules being transported and their location in the cell. However, there are some common features that all antiporters share. One of these is multiple transmembrane regions that span the lipid bilayer of the plasma membrane and form a channel through which hydrophilic molecules can pass. These transmembrane regions are typically structured from alpha helices and are connected by loops in both the extracellular space and the cytosol. These loops contain the binding sites for the molecules associated with the antiporter. These features allow antiporters to carry out their function in maintaining cellular homeostasis. They provide a space where a hydrophilic molecule can pass through the hydrophobic lipid bilayer, allowing it to bypass the hydrophobic interactions of the plasma membrane. This enables the efficient movement of molecules needed for the environment of the cell, such as in the acidification of organelles. The varying affinity of the antiporter for each ion or molecule on either side of the plasma membrane allows it to bind to and release its ligands on the appropriate side of the membrane, according to the electrochemical gradient of the ion being harnessed for its energetically favorable gradient. Mechanism The mechanism of antiporter transport involves several key steps and a series of conformational changes that are dictated by the structural elements described above: The substrate binds to its specific binding site on the extracellular side of the plasma membrane, forming a temporary substrate-bound open form of the antiporter. This becomes an occluded, substrate-bound state that is still facing the extracellular space. The antiporter undergoes a conformational change to become an occluded, substrate-bound protein that is now facing the cytosol. As it does so, it passes through a temporary fully occluded intermediate stage. The substrate is released from the antiporter as it takes on an open, inward-facing conformation. The antiporter can now bind its second substrate and transport it in the opposite direction by taking on its transient substrate-bound open state. This is followed by an occluded, substrate-bound state that is still facing the cytosol, a conformational change with a temporary fully occluded intermediate stage, and a return to the antiporter's open, outward-facing conformation. 
The second substrate is released and the antiporter can return to its original conformation state, where it is ready to bind to new molecules or ions and repeat its transport process. History Antiporters were discovered as scientists were exploring ion transport mechanisms across biological membranes. The early studies took place in the mid-20th century and were focused on the mechanisms that transported ions such as sodium, potassium, and calcium across the plasma membrane. Researchers made the observation that these ions were moved in opposite directions and hypothesized the existence of membrane proteins that could facilitate this type of transport. In the 1960's, biochemist Efraim Racker made a breakthrough in the discovery of antiporters. Through purification from bovine heart mitochondria, Racker and his colleagues found a mitochondrial protein that could exchange inorganic phosphate for hydroxide ions. The protein is located in the inner mitochondrial membrane and transports phosphate ions for use in oxidative phosphorylation. It became known as the phosphate-hydroxide antiporter, or mitochondrial phosphate carrier protein, and was the first example of an antiporter identified in living cells. As time went on, researchers discovered other antiporters in different membranes and in various organisms. This includes the sodium-calcium exchanger (NCX), another crucial antiporter that regulates intracellular calcium levels through the exchange of sodium ions for calcium ions across the plasma membrane. It was discovered in the 1970s and is now a well-characterized antiporter known to be found in many different types of cells. Advances in the fields of biochemistry and molecular biology have enabled the identification and characterization of a wide range of antiporters. Understanding the transport processes of various molecules and ions has provided insight into cellular transport mechanisms, as well as the role of antiporters in various physiological functions and in the maintenance of homeostasis Role in homeostasis Sodium-calcium exchanger The sodium-calcium exchanger, also known as the Na+/Ca2+ exchanger or NCX, is an antiporter responsible for removing calcium from cells. This title encompasses a class of ion transporters that are commonly found in the heart, kidney, and brain. They use the energy stored in the electrochemical gradient of sodium to exchange the flow of three sodium ions into the cell for the export of one calcium ion. Though this exchanger is most common in the membranes of the mitochondria and the endoplasmic reticulum of excitable cells, it can be found in many different cell types in various species. Although the sodium-calcium exchanger has a low affinity for calcium ions, it can transport a high amount of the ion in a short period of time. Because of these properties, it is useful in situations where there is an urgent need to export high amounts of calcium, such as after an action potential has occurred. Its characteristics also enable NCX to work with other proteins that have a greater affinity for calcium ions without interfering with their functions. NCX works with these proteins to carry out functions such as cardiac muscle relaxation, excitation-contraction coupling, and photoreceptor activity. They also maintain the concentration of calcium ions in the sarcoplasmic reticulum of cardiac cells, endoplasmic reticulum of excitable and nonexcitable cells, and the mitochondria. Another key characteristic of this antiporter is its reversibility. 
This means that if the cell is depolarized enough, the extracellular sodium level is low enough, or the intracellular level of sodium is high enough, NCX will operate in the reverse direction and begin bringing calcium into the cell. For example, when NCX functions during excitotoxicity, this characteristic allows it to have a protective effect because the accompanying increase in intracellular calcium levels enables the exchanger to work in its normal direction regardless of the sodium concentration. Another example is the depolarization of cardiac muscle cells, which is accompanied by a large increase in the intracellular sodium concentration that causes NCX to work in reverse. Because the concentration of calcium is carefully regulated during the cardiac action potential, this is only a temporary effect as calcium is pumped out of the cell. The sodium-calcium exchanger's role in maintaining calcium homeostasis in cardiac muscle cells allows it to help relax the heart muscle as it exports calcium during diastole. Therefore, its dysfunction can result in abnormal calcium movement and the development of various cardiac diseases. Abnormally high intracellular calcium levels can hinder diastole and cause abnormal systole and arrhythmias. Arrhythmias can occur when calcium is not properly exported by NCX, causing delayed afterdepolarizations and triggering abnormal activity that can possibly lead to atrial fibrillation and ventricular tachycardia. If the heart experiences ischemia, the inadequate oxygen supply can disrupt ion homeostasis. When the body tries to stabilize this by returning blood to the area, ischemia-reperfusion injury, a type of oxidative stress, occurs. If NCX is dysfunctional, it can exacerbate the increase of calcium that accompanies reperfusion, causing cell death and tissue damage. Similarly, NCX dysfunction has found to be involved in ischemic strokes. Its activity is upregulated, causing a increased cytosolic calcium level, which can lead to neuronal cell death. The Na+/Ca2+ exchanger has also been implicated in neurological disorders such as Alzheimer's disease and Parkinson's disease. Its dysfunction can result in oxidative stress and neuronal cell death, contributing to the cognitive decline that characterizes Alzheimer's disease. The dysregulation of calcium homeostasis has been found to be a key part of neuron death and Alzheimer's pathogenesis. For example, neurons that have neurofibrillary tangles contain high levels of calcium and show hyperactivation of calcium-dependent proteins. The abnormal calcium handling of atypical NCX function can also cause the mitochondrial dysfunction, oxidative stress, and neuronal cell death that characterize Parkinson's. In this case, if dopaminergic neurons of the substantia nigra are affected, it can contribute to the onset and development of Parkinson's disease. Although the mechanism is not entirely understood, disease models have shown a link between NCX and Parkinson's and that NCX inhibitors can prevent death of dopaminergic neurons. Sodium-hydrogen antiporter The sodium–hydrogen antiporter, also known as the sodium-proton exchanger, Na+/H+ exchanger, or NHE, is an antiporter responsible for transporting sodium into the cell and hydrogen out of the cell. As such, it is important in the regulation of cellular pH and sodium levels. There are differences among the types of NHE antiporter families present in eukaryotes and prokaryotes. 
The 9 isoforms of this transporter that are found in the human genome fall under several families, including the cation-proton antiporters (CPA 1, CPA 2, and CPA 3) and sodium-transporting carboxylic acid decarboxylase (NaT-DC). Prokaryotic organisms contain the Na+/H+ antiporter families NhaA, NhaB, NhaC, NhaD, and NhaE. Because enzymes can only function at certain pH ranges, it is critical for cells to tightly regulate cytosolic pH. When a cell's pH is outside of the optimal range, the sodium-hydrogen antiporter detects this and is activated to transport ions as a homeostatic mechanism to restore pH balance. Since ion flux can be reversed in mammalian cells, NHE can also be used to transport sodium out of the cell to prevent excess sodium from accumulating and causing toxicity. As suggested by its functions, this antiporter is located in the kidney for sodium reabsorption regulation and in the heart for intracellular pH and contractility regulation. NHE plays an important role in the nephron of the kidney, especially in the cells of the proximal convoluted tubule and collecting duct. The sodium-hydrogen antiporter's function is upregulated by Angiotensin II in the proximal convoluted tubule when the body needs to reabsorb sodium and excrete hydrogen. Plants are sensitive to high amounts of salt, which can halt certain necessary functions of the eukaryotic organism, including photosynthesis. For the organisms to maintain homeostasis and carry out crucial functions, Na+/H+ antiporters are used to rid the cytoplasm of excess sodium by pumping Na+ out of the cell. These antiporters can also close their channel to stop sodium from entering the cell, along with allowing excess sodium within the cell to enter into a vacuole. Dysregulation of the sodium-hydrogen antiporter's activity has been linked to cardiovascular diseases, renal disorders, and neurological conditions NHE inhibitors are being developed to treat these issues. One of the isoforms of the antiporter, NHE1, is essential to the function of the mammalian myocardium. NHE is involved in the case of hypertrophy and when damage to the heart muscle occurs, such as during ischemia and reperfusion. Studies have shown that NHE1 is more active in animal models experiencing myocardial infarction and left ventricular hypertrophy. During these cardiac events, the function of the sodium-hydrogen antiporter causes an increase in the sodium levels of cardiac muscle cells. In turn, the work of the sodium-calcium antiporter leads to more calcium being brought into the cell, which is what results in damage to the myocardium. Five isoforms of NHE are found in kidney's epithelial cells. The best studied one is NHE3, which is mainly located in the proximal tubules of the kidney and plays a key role in acid-base homeostasis. Issues with NHE3 disrupt the reabsorption of sodium and secretion of hydrogen. The main conditions that NHE3 dysregulation can cause are hypertension and renal tubular acidosis (RTA). Hypertension can occur when more sodium is reabsorbed in the kidneys because water will follow the sodium ions and create an elevated blood volume. This, in turn, leads to elevated blood pressure. RTA is characterized by the inability of the kidneys to acidify the urine due to underactive NHE3 and reduced secretion of hydrogen ions, resulting in metabolic acidosis. On the other hand, overactive NHE3 can lead to excess secretion of hydrogen ions and metabolic alkalosis, where the blood is too alkaline. NHE can also be linked to neurodegeneration. 
The dysregulation or loss of the isoform NHE6 can lead to pathological changes in the tau proteins of human neurons, which can have serious consequences. For example, Christianson Syndrome (CS) is an X-linked disorder caused by a loss-of-function mutation in NHE6, which leads to the over-acidification of endosomes. In studies done on postmortem brains of individuals with CS, lower NHE6 function was linked to higher levels of tau deposition. The level of tau phosphorylation was also found to be elevated, which leads to the formation of insoluble tangles that can cause neuronal damage and death. Tau proteins are also implicated in other neurodegenerative diseases, such as Alzheimer's and Parkinson's diseases. Chloride-bicarbonate antiporter The chloride-bicarbonate antiporter is crucial to maintaining pH and fluid balance through its function of exchanging bicarbonate and chloride ions across cell membranes. This exchange occurs in many different types of body cells. In the cardiac Purkinje fibers and the smooth muscle cells of the ureters, this antiporter is the main mechanism of chloride transport into the cells. Epithelial cells such as those of the kidney use chloride-bicarbonate exchange to regulate their volume, intracellular pH, and extracellular pH. Gastric parietal cells, osteoclasts, and other acid-secreting cells have chloride-bicarbonate antiporters that function in the basolateral membrane to dispose of excess bicarbonate left behind by the action of carbonic anhydrase and apical proton pumps. Base-secreting cells, in contrast, exhibit apical chloride-bicarbonate exchange and basolateral proton pumps. An example of a chloride-bicarbonate antiporter is the chloride anion exchanger, also known as down-regulated in adenoma (protein DRA). It is found in the intestinal mucosa, especially in the columnar epithelium and goblet cells of the apical surface of the membrane, where it carries out chloride and bicarbonate exchange. Protein DRA's reuptake of chloride is critical to creating an osmotic gradient that allows the intestine to reabsorb water. Another well-studied chloride-bicarbonate antiporter is anion exchanger 1 (AE1), which is also known as band 3 anion transport protein or solute carrier family 4 member 1 (SLC4A1). This exchanger is found in red blood cells, where it helps transport bicarbonate and carbon dioxide between the lungs and tissues to maintain acid-base homeostasis. AE1 is also expressed on the basolateral side of cells of the renal tubules. It is crucial in the collecting duct of the nephron, which is where its acid-secreting α-intercalated cells are located. These cells use carbon dioxide and water to generate hydrogen and bicarbonate ions, a reaction catalyzed by carbonic anhydrase. The hydrogen is exchanged across the membrane into the lumen of the collecting duct, and thus acid is excreted into the urine. Because of its importance to the reabsorption of water in the intestine, mutations in protein DRA cause a condition called congenital chloride diarrhea (CCD). This disorder is caused by an autosomal recessive mutation in the DRA gene on chromosome 7. CCD presents in newborns as chronic diarrhea with failure to thrive, and the disorder is characterized by diarrhea that causes metabolic alkalosis. Mutations of kidney AE1 can lead to distal renal tubular acidosis, a disorder characterized by the inability to secrete acid into the urine. This causes metabolic acidosis, where the blood is too acidic. 
A chronic state of metabolic acidosis can harm the health of the bones, kidneys, muscles, and cardiovascular system. Mutations in erythrocyte AE1 cause alterations of its function, leading to changes in red blood cell morphology and function. This can have serious consequences because the shape of red blood cells is closely tied to their function of gas exchange in the lungs and tissues. One such condition is hereditary spherocytosis, a genetic disorder characterized by spherical red blood cells. Another is Southeast Asian ovalocytosis, where a deletion in the AE1 gene generates oval-shaped erythrocytes. Finally, overhydrated hereditary stomatocytosis is a rare genetic disorder where red blood cells have an abnormally high volume, leading to changes in hydration status. The proper function of AE2, an isoform of AE1, is important in gastric secretion, osteoclast differentiation and function, and the synthesis of enamel. The hydrochloric acid secretion at the apical surface of both gastric parietal cells and osteoclasts relies on chloride-bicarbonate exchange at the basolateral surface. Studies found that mice with nonfunctional AE2 did not secrete hydrochloric acid, and it was concluded that the exchanger is necessary for hydrochloric acid loading in parietal cells. When AE2 expression was suppressed in an animal model, cell lines were unable to differentiate into osteoclasts and perform their functions. Additionally, cells that had osteoclast markers but were deficient in AE2 were abnormal compared to the wild-type cells and were unable to resorb mineralized tissue. This demonstrates the importance of AE2 in osteoclast function. Finally, as the hydroxyapatite crystals of enamel are being formed, a large amount of hydrogen is produced, which must be neutralized so that mineralization can proceed. Mice with inactivated AE2 were toothless and suffered from incomplete enamel maturation. Chloride-hydrogen antiporter The chloride-hydrogen antiporter facilitates the exchange of chloride ions for hydrogen ions across plasma membranes, thus playing a critical role in maintaining acid-base balance and chloride homeostasis. It is found in various tissues, including the gastrointestinal tract, kidneys, and pancreas. The well-known chloride-hydrogen antiporters belong to the CLC family, which has isoforms from CLC-1 to CLC-7, each with a distinct tissue distribution. Their structure involves two CLC proteins coming together to form a homodimer or a heterodimer in which both monomers contain an ion translocation pathway. CLC proteins can be either ion channels or anion-proton exchangers: CLC-1 and CLC-2 are membrane chloride channels, while CLC-3 through CLC-7 are chloride-hydrogen exchangers. CLC-4 is a member of the CLC family that is prominent in the brain, but is also located in the liver, kidneys, heart, skeletal muscle, and intestine. It likely resides in endosomes and participates in their acidification, but can also be expressed in the endoplasmic reticulum and plasma membrane. Its roles are not entirely clear, but CLC-4 has been found to possibly participate in endosomal acidification, transferrin trafficking, renal endocytosis, and the hepatic secretory pathway. CLC-5 is one of the best-studied members of this protein family. It shares 80% of its amino acid sequence with CLC-3 and CLC-4, but it is mainly found in the kidney, especially in the proximal tubule, collecting duct, and ascending limb of the loop of Henle. 
It functions to transport substances across the endosomal membrane, so it is crucial for pinocytosis, receptor-mediated endocytosis, and endocytosis of plasma membrane proteins from the apical surface. CLC-7 is another example of a CLC family protein. It is ubiquitously expressed as the chloride-hydrogen antiporter in lysosomes and in the ruffled border of osteoclasts. CLC-7 may be important for regulating the concentration of chloride in lysosomes. It is associated with a protein called Ostm1, forming a complex that allows CLC-7 to carry out its functions. For example, these proteins are crucial to the process of acidifying the resorption lacuna, which enables bone remodeling to occur. CLC-4 has been connected with intellectual disability involving seizure disorders, facial abnormalities, and behavior disorders. Studies found frameshift and missense mutations in patients exhibiting these symptoms. Because these symptoms were mostly exhibited in males, with less severe pathology in females, the condition is likely X-linked. Studies done on animal models have also shown the possibility of a connection between nonfunctional CLC-4 and impaired neural branching of hippocampal neurons. Defects in the CLC-5 gene were shown to be the cause of 60% of cases of Dent's disease, which is characterized by tubular proteinuria, formation of kidney stones, excess calcium in the urine, nephrocalcinosis, and chronic kidney failure. This is caused by abnormalities that occur in the endocytosis process when CLC-5 is mutated. Dent's disease itself is one of the causes of Fanconi syndrome, which occurs when the proximal convoluted tubules of the kidney do not perform an adequate level of reabsorption. It causes molecules produced by metabolic pathways, such as amino acids, glucose, and uric acid, to be excreted in the urine instead of being reabsorbed. The result is polyuria, dehydration, rickets in children, osteomalacia in adults, acidosis, and hypokalemia. CLC-7's role in osteoclast function was revealed by studies on knockout mice that developed severe osteopetrosis. These mice were smaller, had shortened long bones, disorganized trabecular structure, and a missing medullary cavity, and their teeth did not erupt. This was found to be caused by deletion mutations, missense mutations, and gain-of-function mutations that sped up the gating of CLC-7. CLC-7 is expressed in almost every neuronal cell type, and its loss led to widespread neurodegeneration in mice, especially in the hippocampus. In longer-lived models, the cortex and hippocampus had almost entirely disappeared after 1.5 years. Finally, because of its importance in lysosomes, altered expression of CLC-7 can lead to lysosomal storage disorders. Mice with a mutation introduced into the CLC-7 gene developed lysosomal storage disease and retinal degeneration. Reduced folate carrier protein The reduced folate carrier protein (RFC) is a transmembrane protein responsible for the transport of folate, or vitamin B9, into cells. It uses the large gradient of organic phosphate to move folate into the cell against its concentration gradient. The RFC protein can transport folates, reduced folates, the derivatives of reduced folate, and the drug methotrexate. The transporter is encoded by the SLC19A1 gene and is ubiquitously expressed in human cells. Its peak activity occurs at pH 7.4, with no activity occurring below pH 6.4. The RFC protein is critical because folates take the form of hydrophilic anions at physiological pH, so they do not diffuse naturally across biological membranes. 
Folate is essential for processes such as DNA synthesis, repair, and methylation, and without entry into cells, these could not occur. Because folates are essential for various life-sustaining processes, a deficiency in this molecule can lead to fetal abnormalities, neurological disorders, cardiovascular disease, and cancer. Folates cannot be synthesized in the body, so they must be taken in through the diet and moved into cells. Without the RFC protein facilitating this movement, processes such as embryological development and DNA repair cannot occur. Adequate folate levels are required for the development of the neural tube in the fetus. Folate deficiency during pregnancy increases the risk of defects such as spina bifida and anencephaly. In mouse models, inactivating both alleles of the RFC protein gene causes death of the embryo. Even when folate was supplemented during gestation, the mice died within two weeks of birth from the failure of hematopoietic tissues. Altered function of the RFC protein can worsen folate deficiency, contributing to cardiovascular disease, neurodegenerative diseases, and cancer. In terms of cardiovascular issues, folate contributes to homocysteine metabolism. Low folate levels result in elevated homocysteine levels, which is a risk factor for cardiovascular diseases. In terms of cancer, folate deficiency is related to an increased risk, especially that of colorectal cancers. Mouse models with altered RFC protein expression showed increased transcripts of genes related to colon cancer and increased proliferation of colonocytes. The cancer risk is likely related to the RFC protein's role in DNA synthesis because inadequate levels of folate can lead to DNA damage and aberrant DNA methylation. Vesicle neurotransmitter antiporters Vesicle neurotransmitter antiporters are responsible for packaging neurotransmitters into vesicles in neurons. They utilize the electrochemical gradient of protons across the membranes of synaptic vesicles to move neurotransmitters into them. This is essential for the process of synaptic transmission, which requires neurotransmitters to be released into the synapse to bind to receptors on the next neuron. One of the best characterized of these antiporters is the vesicular monoamine transporter (VMAT). It is responsible for the storage, sorting, and release of neurotransmitters, as well as for protecting them from autoxidation. VMAT's transport functions are dependent on the electrochemical gradient created by a vesicular proton-ATPase. VMAT1 and VMAT2 are two isoforms that can transport monoamines such as serotonin, norepinephrine, and dopamine in a proton-dependent fashion. VMAT1 can be found in neuroendocrine cells, while VMAT2 can be found in the neurons of the central and peripheral nervous systems, as well as in adrenal chromaffin cells. Another important vesicle neurotransmitter antiporter is the vesicular glutamate transporter (VGLUT). This family of proteins includes three isoforms, VGLUT1, VGLUT2, and VGLUT3, that are responsible for packaging glutamate - the most abundant excitatory neurotransmitter in the brain - into synaptic vesicles. These antiporters vary by location. VGLUT1 is found in areas of the brain related to higher cognitive functions, such as the neocortex. VGLUT2 works to regulate basic physiological functions and is expressed in subcortical regions such as the brainstem and hypothalamus. Finally, VGLUT3 can be seen in neurons that also express other neurotransmitters. 
VMAT2 has been found to contribute to neurological conditions such as mood disorders and Parkinson's disease. Studies done on an animal model of clinical depression showed that functional alterations of VMAT2 were associated with depression. The nucleus accumbens, pars compacta of the substantia nigra, and ventral tegmental area - all subregions of the brain involved in clinical depression - were found to have lower VMAT2 levels. The likely cause for this is VMAT's relationship with serotonin and norepinephrine, neurotransmitters that are related to depression. VMAT dysfunction may contribute to the altered levels of these neurotransmitters that occur in mood disorders. Lower expression of VMAT2 was found to correlate with a higher susceptibility to Parkinson's disease, and the antiporter's mRNA was found in all cell groups damaged by Parkinson's. This is likely because VMAT2 dysfunction can lead to a decrease in dopamine packaging into vesicles, accounting for the dopamine depletion that characterizes the disease. For this reason, the antiporter has been identified as a protective factor that could be targeted for the prevention of Parkinson's. Because alterations in glutamate release have been linked to the generation of seizures in epilepsy, alterations in the function of VGLUT may be implicated. A study was conducted in which the VGLUT1 gene was inactivated in the astrocytes and neurons of an animal model. When the gene was inactivated in astrocytes, there was an 80% loss of the antiporter protein itself and, in turn, a reduction in glutamate uptake. The mice in this condition experienced seizures, lower body mass, and higher mortality rates. The researchers concluded that VGLUT1 function in astrocytes is therefore critical to epilepsy resistance and normal weight gain. There is considerable evidence that the glutamate system plays a role in long-term cell growth and synaptic plasticity. Disturbances of these processes have been linked to the pathology of mood disorders. The link between the function of the glutamatergic neurotransmitter system and mood disorders makes VGLUT a potential target for treatment. See also Active transport Adenine nucleotide translocator Cotransporter Reduced folate carrier family Sodium-calcium exchanger Sodium-hydrogen antiporter Symporter Uniporter Vesicular monoamine transporter References Further reading External links Integral membrane proteins Transport phenomena
Antiporter
[ "Physics", "Chemistry", "Engineering" ]
7,406
[ "Transport phenomena", "Chemical engineering", "Physical phenomena" ]
1,014,475
https://en.wikipedia.org/wiki/Cotransporter
Cotransporters are a subcategory of membrane transport proteins (transporters) that couple the favorable movement of one molecule down its concentration gradient with the unfavorable movement of another molecule against its concentration gradient. They enable coupled transport, or cotransport (secondary active transport), and include antiporters and symporters. Cotransporters make up two of the three classes of integral membrane proteins known as transporters that move molecules and ions across biomembranes. Uniporters are also transporters but move only one type of molecule down its concentration gradient and are not classified as cotransporters. Background Cotransporters are capable of moving solutes either up or down gradients at rates of 1,000 to 100,000 molecules per second. They may act as channels or transporters, depending on the conditions under which they are assayed. The movement occurs by binding to two molecules or ions at a time and using the gradient of one solute's concentration to force the other molecule or ion against its gradient. Some studies show that cotransporters can function as ion channels, contradicting the classical models. For instance, the wheat HKT1 transporter shows two modes of transport by the same protein. Cotransporters can be classified as antiporters and symporters. Both use electrical potential and/or chemical gradients to move protons and ions against their concentration gradient. In plants, the proton is considered a secondary substance, and a high proton concentration in the apoplast powers the inward movement of certain ions by symporters. A proton gradient moves ions into the vacuole via the proton-sodium antiporter or the proton-calcium antiporter. In plants, sucrose is distributed throughout the plant by the proton pump, which, as discussed above, creates a gradient of protons so that there are many more on one side of the membrane than the other. As the protons diffuse back across the membrane, the free energy liberated by this diffusion is used to co-transport sucrose. In mammals, glucose is transported through sodium-dependent glucose transporters, which use energy in this process. Here, since both glucose and sodium are transported in the same direction across the membrane, these transporters are classified as symporters. The glucose transporter system was first hypothesized by Dr. Robert K. Crane in 1960 and is discussed later in the article. History Dr. Robert K. Crane, a Harvard graduate, had worked in the field of carbohydrate biochemistry for many years. His experience in the areas of glucose-6-phosphate biochemistry, carbon dioxide fixation, hexokinase and phosphate studies led him to hypothesize cotransport of glucose along with sodium through the intestine. Crane presented his drawing of the cotransporter system he proposed in 1960 at an international meeting on membrane transport and metabolism. His studies were confirmed by other groups and are now used as the classical model for understanding cotransporters. Mechanism Antiporters and symporters both transport two or more different types of molecules at the same time in a coupled movement. An energetically unfavored movement of one molecule is combined with an energetically favorable movement of another molecule(s) or ion(s) to provide the power needed for transport, as sketched in the free-energy relation below. 
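The energetics of this coupling can be written out explicitly. The relation below is the standard free-energy expression for moving a solute across a membrane, stated generically rather than for any particular transporter in this article; S, D, and n are placeholder symbols for the pumped solute, the driving ion, and the coupling stoichiometry.

```latex
\Delta G_{S} \;=\; RT \,\ln\frac{[S]_{\mathrm{in}}}{[S]_{\mathrm{out}}} \;+\; z_{S} F \,\Delta\psi ,
\qquad\qquad
n\,\Delta G_{D} \;+\; \Delta G_{S} \;<\; 0
```

Here R is the gas constant, T the absolute temperature, z_S the charge of the solute, F the Faraday constant, and Δψ the membrane potential (inside minus outside). A cotransporter that couples n driving ions D (moving downhill, ΔG_D < 0) to one molecule of S (moving uphill, ΔG_S > 0) can run in the uphill direction only while the inequality on the right holds.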
This type of transport is known as secondary active transport and is powered by the energy derived from the concentration gradient of the ions or molecules across the membrane in which the cotransporter protein is embedded. Cotransporters undergo a cycle of conformational changes by linking the movement of an ion down its concentration gradient (downhill movement) to the movement of a cotransported solute against its concentration gradient (uphill movement). In one conformation, the protein has the binding site (or sites, in the case of symporters) exposed to one side of the membrane. Upon binding of both the molecule to be transported uphill and the molecule to be transported downhill, a conformational change occurs. This conformational change exposes the bound substrates to the opposite side of the membrane, where the substrates dissociate. Both the molecule and the cation must be bound in order for the conformational change to occur. This mechanism was first introduced by Oleg Jardetzky in 1966. This cycle of conformational changes only transports one substrate ion at a time, which results in a fairly slow transport rate (10⁰ to 10⁴ ions or molecules per second) compared to other transport proteins such as ion channels. The rate at which this cycle of conformational changes occurs is called the turnover rate (TOR) and is expressed as the average number of complete cycles per second performed by a single cotransporter molecule. Types Antiporters Antiporters use the mechanism of cotransport (coupling the movement of one ion or molecule down its concentration gradient with the transport of another ion or molecule up its concentration gradient) to move the ions or molecules in opposite directions. In this situation, one of the ions moves from the exoplasmic space into the cytoplasmic space while the other moves from the cytoplasmic space into the exoplasmic space. An example of an antiporter is the sodium-calcium exchanger. The sodium-calcium exchanger removes excess calcium from the cytoplasmic space into the exoplasmic space against its concentration gradient by coupling its transport with the transport of sodium from the exoplasmic space down its concentration gradient (established by the active transport of sodium out of the cell by the sodium-potassium pump) into the cytoplasmic space. The sodium-calcium exchanger exchanges 3 sodium ions for 1 calcium ion and represents a cation antiporter. Cells also contain anion antiporters such as the Band 3 (or AE1) anion transport protein. This cotransporter is an important integral protein in mammalian erythrocytes and moves chloride and bicarbonate ions in a one-to-one ratio across the plasma membrane based only on the concentration gradients of the two ions. The AE1 antiporter is essential in the removal of carbon dioxide waste that is converted to bicarbonate inside the erythrocyte. Symporters In contrast to antiporters, symporters move ions or molecules in the same direction. In this case, both ions being transported are moved either from the exoplasmic space into the cytoplasmic space or from the cytoplasmic space into the exoplasmic space. An example of a symporter is the sodium-glucose linked transporter or SGLT. 
The SGLT couples the transport of sodium from the exoplasmic space down its concentration gradient (again established by the active transport of sodium out of the cell by the sodium-potassium pump) into the cytoplasmic space with the transport of glucose from the exoplasmic space into the cytoplasmic space against its concentration gradient. The SGLT couples the movement of 1 glucose molecule with the movement of 2 sodium ions. Examples of cotransporters Na+/glucose cotransporter (SGLT1) – also known as sodium-glucose cotransporter 1, it is encoded by the SLC5A1 gene. SGLT1 is an electrogenic transporter, as the sodium electrochemical gradient drives glucose uphill into the cells. SGLT1 is a high-affinity Na+/glucose cotransporter that has an important role in transferring sugar across the epithelial cells of the renal proximal tubules and of the intestine, in particular the small intestine. Na+/phosphate cotransporter (NaPi) – Sodium-phosphate cotransporters are from the SLC34 and SLC20 protein families. They are also found in the epithelial cells of the renal proximal tubule and of the small intestine, and they transfer inorganic phosphate into cells through active transport with the help of a Na+ gradient. Similar to SGLT1, they are classified as electrogenic transporters. NaPi transporters that couple 3 Na+ ions with 1 divalent Pi are classified as NaPi IIa and NaPi IIb. NaPi transporters that couple 2 Na+ with 1 divalent Pi are classified as NaPi IIc. Na+/I− symporter (NIS) – The sodium-iodide symporter is responsible for transferring iodide into the thyroid gland. NIS is primarily found in cells of the thyroid gland and also in the mammary glands. It is located on the basolateral membrane of thyroid follicular cells, where 2 Na+ ions and 1 I− ion are coupled to transfer the iodide. NIS activity helps in the diagnosis and treatment of thyroid disease, including the highly successful treatment of thyroid cancer with radioiodide after thyroidectomy. Na-K-2Cl symporter – This cotransporter regulates cell volume by controlling the water and electrolyte content within the cell. The Na-K-2Cl cotransporter is vital in salt secretion by secretory epithelial cells along with renal salt reabsorption. Two variations of the Na-K-2Cl symporter exist and are known as NKCC1 and NKCC2. The NKCC1 cotransport protein is found throughout the body, but NKCC2 is found only in the kidney, where it reclaims the sodium, potassium, and chloride found in the urine so that they can be absorbed into the blood. GABA transporter (GAT) – Neurotransmitter γ-aminobutyric acid (GABA) transporters are members of the solute carrier family 6 (SLC6) of sodium- and chloride-dependent neurotransmitter transporters that are located in the plasma membrane and regulate the concentration of GABA in the synaptic cleft. The SLC6A1 gene encodes GABA transporters. The transporters are electrogenic and couple 2 Na+, 1 Cl−, and 1 GABA for inward translocation. K+Cl− symporter – The K+-Cl− cotransporter family consists of four specific symporters known as KCC1, KCC2, KCC3, and KCC4. The KCC2 isoform is specific to neuronal tissue, and the other three can be found in various tissues throughout the body. This cotransporter family controls the concentrations of potassium and chloride within cells through the combined movement of K+/H+ and Cl−/HCO3− exchangers or through the combined movement of both ions via concentration-activated channels. 
The four known KCC proteins form two subfamilies, with KCC1 pairing with KCC3 and KCC2 pairing with KCC4 to facilitate ion movement. Associated diseases Table 1: List of diseases related to transporters. See also Na-K-2Cl symporter K-Cl cotransporter Sodium/phosphate cotransporter Sodium-glucose transport proteins Glucose transporter Cystic fibrosis References Integral membrane proteins Transport phenomena
Cotransporter
[ "Physics", "Chemistry", "Engineering" ]
2,353
[ "Transport phenomena", "Chemical engineering", "Physical phenomena" ]
5,575,498
https://en.wikipedia.org/wiki/Symbolic%20simulation
In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have an operational semantics which can be used directly for simulation. Symbolic simulation is a form of simulation where many possible executions of a system are considered simultaneously. This is typically achieved by augmenting the domain over which the simulation takes place. A symbolic variable can be used in the simulation state representation in order to index multiple executions of the system. For each possible valuation of these variables, there is a concrete system state that is being indirectly simulated. Because symbolic simulation can cover many system executions in a single simulation, it can greatly reduce the size of verification problems. Techniques such as symbolic trajectory evaluation (STE) and generalized symbolic trajectory evaluation (GSTE) are based on this idea of symbolic simulation. See also Symbolic execution Symbolic computation References Electronic design automation Formal methods
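To make the idea of indexing many runs with symbolic variables concrete, here is a minimal sketch in Python using SymPy's Boolean algebra as the symbolic domain. The toy circuit (a one-bit register toggled by an enable input), the variable names, and the use of SymPy are illustrative assumptions and are not tied to STE, GSTE, or any particular tool.

```python
# Symbolic simulation of a toy state-transition system: a 1-bit register
# whose next state is (state XOR enable).  Each step consumes a fresh
# symbolic input, so one run indexes every concrete input sequence at once.
from sympy import symbols
from sympy.logic.boolalg import Xor
from sympy.logic.inference import satisfiable

def symbolic_simulate(steps):
    state = False                       # concrete initial state: 0
    inputs = symbols(f"e0:{steps}")     # one symbolic enable bit per step
    trace = []
    for e in inputs:
        state = Xor(state, e)           # symbolic next-state function
        trace.append(state)
    return trace

trace = symbolic_simulate(3)
for step, expr in enumerate(trace, start=1):
    print(f"state after step {step}: {expr}")
# e.g. step 1 -> e0, step 2 -> e0 ^ e1, step 3 -> e0 ^ e1 ^ e2

# One symbolic run answers questions about all 2**3 concrete runs, e.g.
# "can the register hold 1 after three steps?"
print(satisfiable(trace[-1]))           # a satisfying assignment, if any
```

A single symbolic run of three steps stands in for all eight concrete input sequences, which is where the reduction in verification effort described above comes from.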
Symbolic simulation
[ "Technology", "Engineering" ]
221
[ "Computer science stubs", "Software engineering", "Computer science", "Computing stubs", "Formal methods" ]
5,575,501
https://en.wikipedia.org/wiki/Electrojet
An electrojet is an electric current that travels around the E region of the Earth's ionosphere. There are three electrojets: one above the magnetic equator (the equatorial electrojet), and one each near the Northern and Southern Polar Circles (the auroral electrojets). Electrojets are Hall currents carried primarily by electrons at altitudes from 100 to 150 km. In this region the electron gyrofrequency (Larmor frequency) is much greater than the electron-neutral collision frequency. In contrast, the principal E region ions (O2+ and NO+) have gyrofrequencies much lower than the ion-neutral collision frequency. Kristian Birkeland was the first to suggest that polar electric currents (or auroral electrojets) are connected to a system of filaments (now called "Birkeland currents") that flow along geomagnetic field lines into and away from the polar region. Equatorial Electrojet The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (100–130 km altitude). Resulting from this current is an electrostatic field directed E-W (dawn-dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current within ± 3 degrees of the magnetic equator, known as the equatorial electrojet. Auroral Electrojet The term 'auroral electrojet' is the name given to the large horizontal currents that flow in the D and E regions of the auroral ionosphere. Although horizontal ionospheric currents can be expected to flow at any latitude where horizontal ionospheric electric fields are present, the auroral electrojet currents are remarkable for their strength and persistence. There are two main factors in the production of the electrojet. First, the conductivity of the auroral ionosphere is generally larger than that at lower latitudes. Second, the horizontal electric field in the auroral ionosphere is also larger than that at lower latitudes. Since the strength of the current is directly proportional to the product of the conductivity and the horizontal electric field, the auroral electrojet currents are generally larger than those at lower latitudes. During magnetically quiet periods, the electrojet is generally confined to the auroral oval. However, during disturbed periods, the electrojet increases in strength and expands to both higher and lower latitudes. This expansion results from two factors: enhanced particle precipitation and enhanced ionospheric electric fields. The Auroral Electrojet Index measures magnetic activity as observed by a chain of high-latitude observatories. See also Magnetohydrodynamics Kennelly–Heaviside layer Ionosphere References "The Earth's Ionosphere: Plasma Physics and Electrodynamics," by Michael Kelley, Academic Press. External links https://web.archive.org/web/20100705021933/http://www-star.stanford.edu/~vlf/ejet/electrojet.html Ionosphere
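The frequency ordering that distinguishes electrons from ions in the electrojet region can be written explicitly; these are the usual textbook definitions of gyrofrequency and the Hall-current condition rather than values quoted from a specific source:

```latex
\omega_{ce} \;=\; \frac{eB}{m_{e}}, \qquad
\omega_{ci} \;=\; \frac{eB}{m_{i}}, \qquad
\omega_{ce} \;\gg\; \nu_{en}
\quad\text{while}\quad
\omega_{ci} \;\ll\; \nu_{in}
```

Here ν_en and ν_in are the electron-neutral and ion-neutral collision frequencies. Under this ordering the electrons E×B-drift while the ions remain collisionally coupled to the neutral gas, and the difference between the two motions is the Hall current that forms the electrojet.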
Electrojet
[ "Physics", "Materials_science", "Astronomy" ]
654
[ "Physical phenomena", "Materials science stubs", "Atmospheric electricity", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Electrical phenomena", "Electromagnetism stubs" ]
5,576,186
https://en.wikipedia.org/wiki/Index%20arbitrage
Index arbitrage is a subset of statistical arbitrage focusing on index components. An index (such as the S&P 500) is made up of several components (in the case of the S&P 500, 500 large US stocks picked by S&P to represent the US market), and the value of the index is typically computed as a linear function of the component prices, where the details of the computation (such as the weights of the linear function) are determined in accordance with the index methodology. The idea of index arbitrage is to exploit discrepancies between the market price of a product that tracks the index (such as a stock market index future or exchange-traded fund) and the market prices of the underlying index components, which are typically stocks. For example, an arbitrageur could take the current prices of traded stocks, calculate a synthetic index value using the relevant index methodology, and then apply an interest rate and dividend adjustment to calculate the "fair value" of the stock market index future; a cost-of-carry sketch of this calculation is given below. If the stock market index future is trading above its "fair value", the arbitrageur can buy the component stocks and sell the index future. Likewise, if the stock market index future is trading below its "fair value", the arbitrageur can short the component stocks and buy the index future. In both cases, the arbitrageur would be exposed to basis risk if the interest rate and dividend yield risks are left unhedged. In a different example, the arbitrageur can take the current prices of traded stocks, calculate the "fair value" of an ETF (based on its holdings, which are chosen to track the index), and arbitrage between the market price of the ETF and the market prices of the stock holdings. In this scenario, the arbitrageur would use the ETF creation and redemption process to net out the offsetting ETF and stock positions. See also Algorithmic trading Complex event processing Dark pool Electronic trading Implementation shortfall Investment strategy Quantitative trading Quote stuffing References Arbitrage Financial markets Mathematical finance
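A minimal sketch of the futures fair-value comparison described above, using the textbook cost-of-carry formula with continuous compounding. The rates, the transaction-cost band, and the function names are illustrative assumptions; real index-arbitrage desks model dividends, financing, and execution costs in far more detail.

```python
import math

def futures_fair_value(spot_index, r, q, t_years):
    """Cost-of-carry fair value: the spot index grown at the financing
    rate r, net of the dividend yield q, out to the futures expiry."""
    return spot_index * math.exp((r - q) * t_years)

def arbitrage_signal(futures_price, spot_index, r, q, t_years, threshold=0.15):
    """Return a rough trade signal and the mispricing in index points."""
    fair = futures_fair_value(spot_index, r, q, t_years)
    mispricing = futures_price - fair
    if mispricing > threshold:
        return "futures rich: sell future, buy component stocks", mispricing
    if mispricing < -threshold:
        return "futures cheap: buy future, short component stocks", mispricing
    return "within transaction-cost band: no trade", mispricing

# Example: a synthetic index at 5000 with three months to futures expiry
print(arbitrage_signal(futures_price=5021.0, spot_index=5000.0,
                       r=0.05, q=0.015, t_years=0.25))
```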
Index arbitrage
[ "Mathematics" ]
430
[ "Applied mathematics", "Mathematical finance" ]
5,576,472
https://en.wikipedia.org/wiki/North%20American%20Datum
The North American Datum (NAD) is the horizontal datum now used to define the geodetic network in North America. A datum is a formal description of the shape of the Earth along with an "anchor" point for the coordinate system. In surveying, cartography, and land-use planning, two North American Datums are in use for making lateral or "horizontal" measurements: the North American Datum of 1927 (NAD 27) and the North American Datum of 1983 (NAD 83). Both are geodetic reference systems based on slightly different assumptions and measurements. Vertical measurements, based on distances above or below Mean High Water (MHW), are calculated using the North American Vertical Datum of 1988 (NAVD 88). NAD 83, along with NAVD 88, is set to be replaced with a new GPS- and gravimetric geoid model-based geometric reference frame and geopotential datum, potentially in 2025. First North American Datum of 1901 In 1901 the United States Coast and Geodetic Survey adopted a national horizontal datum called the United States Standard Datum, based on the Clarke Ellipsoid of 1866. It was fitted to data previously collected for regional datums, which by that time had begun to overlap. In 1913, Canada and Mexico adopted that datum, so it was also renamed the North American Datum. North American Datum of 1927 As more data were gathered, discrepancies appeared, so the datum was recomputed in 1927, using the same spheroid and origin as its predecessor. The North American Datum of 1927 (NAD 27) was based on surveys of the entire continent from a common reference point that was chosen in 1901, because it was as near the center of the contiguous United States as could be calculated: It was based on a triangulation station at the junction of the transcontinental triangulation arc of 1899 on the 39th parallel north and the triangulation arc along the 98th meridian west that was near the geographic center of the contiguous United States. The datum declares the Meades Ranch Triangulation Station in Osborne County, Kansas to be 39°13′26.686″ north latitude, 98°32′30.506″ west longitude. NAD 27 is oriented by declaring the azimuth from Meades Ranch to Waldo Station (also in Osborne County, about northwest of Waldo, Russell County) to be 255°28′14.52″ from north. The latitude and longitude of every other point in North America is then based on its distance and direction from Meades Ranch: If a point was X meters in azimuth Y degrees from Meades Ranch, measured on the Clarke Ellipsoid of 1866, then its latitude and longitude on that ellipsoid were defined and could be calculated. These are the defining dimensions for NAD 27, but Clarke actually defined his 1866 spheroid as a = 20,926,062 British feet, b = 20,855,121 British feet. The conversion to meters uses Clarke's 1865 inch-meter ratio of 39.370432. The length of a foot or meter at the time could not practically be benchmarked to better than about 0.02 mm. Most USGS topographic maps were published in NAD 27 and many major projects by the United States Army Corps of Engineers and other agencies were defined in NAD 27, so the datum remains important, despite more refined datums being available. North American Datum of 1983 Because Earth deviates significantly from a perfect ellipsoid, the ellipsoid that best approximates its shape varies region by region across the world. Clarke 1866, and North American Datum of 1927 with it, were surveyed to best suit North America as a whole. 
Likewise, historically, most regions of the world used ellipsoids measured locally to best suit the vagaries of Earth's shape in their respective locales. While ensuring the most accuracy locally, this practice makes integrating and disseminating information across regions troublesome. As satellite geodesy and remote sensing technology reached high precision and were made available for civilian applications, it became feasible to acquire information referred to a single global ellipsoid. This is because satellites naturally deal with Earth as a monolithic body. Therefore, the GRS 80 ellipsoid was developed for best approximating the Earth as a whole, and it became the foundation for the North American Datum of 1983. Though GRS 80 and its close relative, WGS 84, are generally not the best fit for any given region, a need for the closest fit largely evaporates when a global survey is combined with computers, databases, and software able to compensate for local conditions. Comparing NAD 27 to NAD 83 A point having a given latitude and longitude in NAD 27 may be displaced on the order of many tens of meters from another point having the identical latitude and longitude in NAD 83, so it is important to specify the datum along with the coordinates. The North American Datum of 1927 is defined by the latitude and longitude of an initial point (Meades Ranch Triangulation Station in Kansas), the direction of a line between this point and a specified second point, and two dimensions that define the spheroid. The North American Datum of 1983 is based on a newer defined spheroid (GRS 80); it is an Earth-centered (or "geocentric") datum having no initial point or initial direction. NOAA provides a converter between the two systems. The practical impact is that if you use a modern GPS device set to work in NAD 83 or WGS 84 to navigate to NAD 27 coordinates (as from a topo map) near Seattle, you would be off by about 95 meters (not far enough west), and you'd be about 47 meters off near Miami (not far enough north-northeast), whereas you would be much closer for points near Chicago. Comparing NAD 83 to WGS 84 The definition of NAD 83(1986) is based on the GRS 80 spheroid, as was WGS 84, so many older publications indicate no difference. WGS 84 subsequently changed to a slightly less flattened spheroid. This change in flattening is about 0.1 mm, a difference so small that computational programs often do not distinguish between the two ellipsoids. However, due to differences in how the reference ellipsoids are centered and oriented, coordinates in the two datums differ from each other by amounts on the order of a meter over much of the United States. Each datum has undergone refinements with more accurate and later measurements. One well-known difference is the placement of the center of the Earth, with the two systems differing by about . In addition, NAD 83 is defined to remain constant over time for points on the North American Plate, whereas WGS 84 is defined with respect to the average of stations all over the world. Thus the two systems naturally diverge over time. For much of the United States the relative rate is on the order of 1 to 2 cm per year. Hawaii and the coastal portions of central and southern California west of the San Andreas Fault are not on the North American Plate, so their divergence rate differs. 
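Datum shifts of the size described above can be applied programmatically. The sketch below uses the open-source pyproj library (a wrapper around PROJ), assuming the relevant NADCON-style grid-shift files are installed; the sample coordinates are illustrative, and the numeric shift obtained depends on the grids available to PROJ.

```python
# Transform a point from NAD 27 (EPSG:4267) to NAD 83 (EPSG:4269).
# Accuracy depends on PROJ having the appropriate grid-shift files;
# without them the library may fall back to a less accurate transformation.
from pyproj import Transformer

nad27_to_nad83 = Transformer.from_crs("EPSG:4267", "EPSG:4269", always_xy=True)

lon27, lat27 = -122.3321, 47.6062        # a point near Seattle, in NAD 27
lon83, lat83 = nad27_to_nad83.transform(lon27, lat27)

print(f"NAD 27: {lat27:.6f}, {lon27:.6f}")
print(f"NAD 83: {lat83:.6f}, {lon83:.6f}")  # expect a shift of tens of meters here
```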
Current implementation of NAD 83 The United States National Spatial Reference System NAD 83(2011/MA11/PA11) epoch 2010.00, is a refinement of the NAD 83 datum using data from a network of very accurate GPS receivers at Continuously Operating Reference Stations (CORS). The NAD 83(2011) describes the main North American Plate, while the MA11 and PA11 solutions are for the Mariana Plate and the Pacific Plate respectively. New Datum of 2022 To improve the National Spatial Reference System, NAD 83, along with North American Vertical Datum of 1988 (NAVD 88), are set to be replaced with a new geometric reference frame and geopotential datum based on global navigation satellite systems (GNSS), such as the Global Positioning System (GPS), and new gravimetric geoid model, potentially in 2025 or 2026. The new gravimetric geoid model is the product of the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project. These new reference frames are intended to be easier to access and to maintain than NAD 83 and NAVD 88, which rely on physical survey marks that deteriorate over time. See also North American Vertical Datum of 1988 World Geodetic System References External links NOAA-NGS-coordinates CORS-active network - explanation of NAD 83(2011/MA11/PA11) epoch 2010.00 NOAA-NGS-coordinates passive network - explanation of most recent adjustment of passive network NADCON – a free utility for Microsoft Windows to convert between NAD 27 and NAD 83 nadcon.prl – a web-based utility for NADCON NAD 83: What Is It and Why You Should Care by Dane E. Ericksen, P.E., Hammett & Edison, Inc., Consulting Engineers. 1994 SBE National Convention and World Media Expo. Geodetic datums Surveying of the United States Surveying of Canada
North American Datum
[ "Mathematics" ]
1,894
[ "Geodetic datums", "Coordinate systems" ]
5,576,508
https://en.wikipedia.org/wiki/Vaporific%20effect
The vaporific effect is a flash fire resulting from the impact of high-velocity projectiles with metallic objects. Impacts produce particulate matter originating from either the projectile, the target, or both. Particles heated from the force of impact can burn in the presence of air (oxidizer) or water vapor. An explosion can result from the mixture of metal-dust and air, the resulting dust explosion causing significant overpressure within metallic enclosures (aircraft, vehicles, metallic enclosures, etc.). The vaporific effect is particularly pronounced when these enclosures are constructed of pyrophoric metals (metals that react upon contact with air, such as aluminium, magnesium, or their alloys). This effect is often referenced in movies, such as when a single bullet makes a helicopter explode. 1964 study A November 1964 study by the United States Department of Defense aimed at studying this phenomenon and what causes it. The "Warhead Mechanisms Study" found that dense, white smoke as well as aluminum oxide and iron oxide were present with each vaporific effect. This was greatly diminished in a nitrogen atmosphere (compared to air), thus it was found that the "oxidation of metal fragments is a major factor." See also ballistics explosives References Types of fire Ballistics
Vaporific effect
[ "Physics" ]
257
[ "Applied and interdisciplinary physics", "Ballistics" ]
5,576,639
https://en.wikipedia.org/wiki/Magnetic%20field%20viewing%20film
Magnetic field viewing film is used to show stationary or (less often) slowly changing magnetic fields; it shows their location and direction. It is a translucent thin flexible sheet, coated with micro-capsules containing nickel flakes suspended in oil. When magnetic lines of force are parallel to the surface of the carrier sheet, the surfaces of the flakes are reflective, and appear bright. When lines of force are perpendicular to the sheet, the flakes are edge-on, and appear significantly darker. When the film is placed on a magnet's pole, the latter case applies. Magnetic field viewing film together with a ruler can be used to measure the poles per inch of a magnet. See also Ferrofluid Magna Doodle References External links Magnetic devices
Magnetic field viewing film
[ "Materials_science" ]
153
[ "Materials science stubs", "Electromagnetism stubs" ]
5,577,048
https://en.wikipedia.org/wiki/Spring%20supply
A spring supply is a provision of piped mains water to a number of consumers direct from a natural spring. Spring supplies are therefore a source of groundwater, which in most instances has fewer micro-organisms (e.g. coliform bacteria and protozoa such as Giardia and Cryptosporidium) and chemical contaminants than a supply from surface water. The point at which the groundwater reaches the surface is prone to contamination, so must be protected using a structure called a spring box. This is often surrounded by a fence to keep animals out, with other common features being a ditch on the uphill side, an overflow pipe and a well fitting lid. Spring supplies can range from single property supplies that are privately owned, to large supplies that are managed by water companies and serve entire communities. As with any water supply, a spring supply may need to be treated in order to bring it up to drinking water standards. The method for doing this will vary according to the contaminant, but can include sand filters, pH balancing units and ultraviolet light. Countries In the United Kingdom, over half a million people live or work in a premises that relies on a private water supply such as a spring. The Drinking Water Inspectorate (DWI) of England and Wales produces an annual report on the quality of private water supplies. See also Water supply Improved water source Water well References Springs (hydrology) Water supply infrastructure Water supply
Spring supply
[ "Chemistry", "Engineering", "Environmental_science" ]
294
[ "Hydrology", "Water supply", "Springs (hydrology)", "Environmental engineering" ]
5,577,094
https://en.wikipedia.org/wiki/Klein%20Sexual%20Orientation%20Grid
The Klein Sexual Orientation Grid (KSOG) developed by Fritz Klein attempts to measure sexual orientation by expanding upon the earlier Kinsey scale. Fritz Klein founded the American Institute of Bisexuality in 1998 which is continuing his work by sponsoring bisexual-inclusive sex research, educating the general public on sexuality, and promoting the bisexual community. Klein first described the KSOG in his 1978 book The Bisexual Option. In response to the criticism of the Kinsey scale only measuring two dimensions of sexual orientation, Klein developed a multidimensional grid for describing sexual orientation. Unlike the Kinsey scale, the Klein grid investigates sexual orientation in the past, the present and in the idealized future with respect to seven factors each, for a total of twenty-one values. The KSOG uses values of 1–7, rather than the 0–6 scale of the Kinsey scale, to describe a continuum from exclusively opposite-sex to exclusively same-sex attraction. Overview The KSOG is often used as a tool in research. Introduced in Klein's book The Bisexual Option the KSOG uses a seven-point scale to assess seven different dimensions of sexuality at three different points in an individual's life: past (from early adolescence up to one year ago), present (within the last 12 months), and ideal (what would be chosen if it were voluntary). Studies using the KSOG have used cluster analysis to investigate patterns within the KSOG's twenty-one parameters, in one case suggesting a five-label (straight, bi-straight, bi-bi, bi-gay, gay) model of orientation. The KSOG has also been used in studies of conversion therapy. Shortcomings Klein, while recognizing that the grid explores many more dimensions of sexual orientation than previous scales, acknowledged that it omits the following "aspects" of sexual orientation: Age of partner Differentiation of love and friendship in the emotional preference variable Sexual attraction being distinguished between sexual desire and limerence Whether sexual activity referred to number of partners or number of occurrences Sex roles as well as masculine and feminine roles Additionally, factors not addressed by Klein include: Attraction to non-binary/transgender orientations. While Klein held the belief that including more dimensions of sexual orientation was better, Weinrich et al. (1993) found that all of the dimensions of the KSOG seemed to be measuring the same construct. The study conducted a factor analysis of the KSOG to see how many factors emerged in two different samples. In both groups, the first factor to emerge loaded substantially on all of the grid's 21 items, indicating that this factor accounted for a majority of the variance. They further found that a second factor emerged containing time dimensions of social and emotional preferences, suggesting that those dimensions may have also been measuring something other than sexual orientation. Therefore, despite the scale being helpful in promoting the concept of sexual orientation as being multidimensional and dynamic, the additional dimensions measured do not necessarily reveal any more of an accurate description of one's overall sexual orientation than the Kinsey scale. Another concern with the KSOG is that different dimensions of sexual orientation may not identify all people of a certain orientation in the same way. Measures of sexual attraction, sexual activity, and sexual identity identify different (though often overlapping) populations. Laumann et al. 
(1994) found that of the 8.6% of women reporting some same gender sexuality, 88% reported same gender sexual attraction, 41% reported some same gender sexual behaviour and 16% reported a lesbian or gay identity. See also Affectional orientation References Further reading Klein, Fritz, MD. The Bisexual Option, Second Edition . Binghamton, NY: The Haworth Press, 1993. . . Klein Sexual Orientation Grid—Online version of original Klein Sexual Orientation Grid. 1987 introductions Bisexuality Human sexuality LGBTQ and society Sexology Sexual orientation and science
Klein Sexual Orientation Grid
[ "Biology" ]
776
[ "Human sexuality", "Behavior", "Human behavior", "Sexology", "Behavioural sciences", "Sexuality" ]
5,577,197
https://en.wikipedia.org/wiki/Brahmasthan
A brahmasthan is a principle of Vedic architecture and community planning that designates the center point of a building or geographical area. Vedic architecture is based on Vastu Shastra. The brahmasthan is a special central zone in a building. It is free from any obstructions in the form of a wall, pillar or beam, furniture or fixtures and is often well lit from above, by skylights for instance. See also External links Maharishi Vastu buildings in harmony with Natural Law Architectural elements
Brahmasthan
[ "Technology", "Engineering" ]
105
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
5,577,839
https://en.wikipedia.org/wiki/InVesalius
InVesalius is a free medical software used to generate virtual reconstructions of structures in the human body. Based on two-dimensional images, acquired using computed tomography or magnetic resonance imaging equipment, the software generates virtual three-dimensional models correspondent to anatomical parts of the human body. After constructing three-dimensional DICOM images, the software allows the generation of STL (stereolithography) files. These files can be used for rapid prototyping. InVesalius was developed at CTI (Renato Archer Information Technology Center), a research institute of the Brazilian Science and Technology Center and is available at no cost at the homepage of Public Software Portal homepage. The software license is CC-GPL 2. It is available in English, Japanese, Czech, Portuguese (Brazil), Russian, Spanish, Italian, German, Portuguese, Turkish (Turkey), Romanian, French, Korean, Catalan, Chinese (Taiwan) and Greek. InVesalius was developed using Python and works under Linux, Windows and Mac OS X. It also uses graphic libraries VTK, wxPython, Numpy, Scipy and GDCM. The software's name is a tribute to Belgian physician Andreas Vesalius (1514–1564), considered the "father of modern anatomy". Developed since 2001 for attending Brazilian Public Hospitals demands, InVesalius development was directed for promoting social inclusion of individuals with severe facial deformities. Since then, however, it has been employed in various research areas of dentistry, medicine, veterinary medicine, paleontology and anthropology. It has been used not only in public hospitals, but also in private clinics and hospitals. Until 2017, the software had already been used for generating more than 5000 rapid prototyping models of anatomical structures at Promed project. External links Official InVesalius website Alternative InVesalius website InVesalius source code InVesalius Translation page at Transifex InVesalius at Ohloh InVesalius at Twitter Public Software Portal (Portuguese) Rapid Prototyping for Medicine(Portuguese) Related works Confex.com (in English) Studierfenster Free science software Medical software Neuroimaging software Free health care software Free DICOM software Software that uses VTK
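As an illustration of the DICOM-to-STL workflow this article describes, here is a generic sketch using the pydicom, scikit-image, and numpy-stl packages. It is not InVesalius code; the directory name, the fixed bone-like threshold, and the choice of libraries are assumptions made for the example.

```python
# Generic DICOM-to-STL sketch: load a CT series, extract an isosurface
# at a bone-like threshold, and write a mesh suitable for rapid prototyping.
import glob
import numpy as np
import pydicom
from skimage import measure
from stl import mesh  # numpy-stl

slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # order by slice position
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# Marching cubes at an assumed bone threshold in raw pixel values.
verts, faces, _, _ = measure.marching_cubes(volume, level=300)

surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    surface.vectors[i] = verts[face]
surface.save("bone_model.stl")
```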
InVesalius
[ "Biology" ]
470
[ "Medical software", "Medical technology" ]
5,578,011
https://en.wikipedia.org/wiki/Sulfur%20trioxide%20pyridine%20complex
Sulfur trioxide pyridine complex is the compound with the formula C5H5NSO3. It is a colourless solid that dissolves in polar organic solvents. It is the adduct formed from the Lewis base pyridine and the Lewis acid sulfur trioxide. The compound is mainly used as a source of sulfur trioxide, for example in the synthesis of sulfate esters from alcohols: ROH + C5H5NSO3 → [C5H5NH]+[ROSO3]− It also is useful for sulfamations: R2NH + C5H5NSO3 → C5H5N + R2NSO3H The compound is used for sulfonylation reactions, especially in the sulfonylation of furans. It is also an activating electrophile in a Parikh-Doering oxidation. References Sulfur(VI) compounds Pyridine complexes Reagents for organic chemistry
Sulfur trioxide pyridine complex
[ "Chemistry" ]
203
[ "Reagents for organic chemistry" ]
5,578,072
https://en.wikipedia.org/wiki/Polar%20surface%20area
The polar surface area (PSA) or topological polar surface area (TPSA) of a molecule is defined as the surface sum over all polar atoms or molecules, primarily oxygen and nitrogen, also including their attached hydrogen atoms. PSA is a commonly used medicinal chemistry metric for the optimization of a drug's ability to permeate cells. Molecules with a polar surface area of greater than 140 angstroms squared (Å2) tend to be poor at permeating cell membranes. For molecules to penetrate the blood–brain barrier (and thus act on receptors in the central nervous system), a PSA less than 90 Å2 is usually needed. TPSA is a valuable tool in drug discovery and development. By analyzing a drug candidate's TPSA, scientists can predict its potential for oral bioavailability and ability to reach target sites within the body. This prediction hinges on a drug's ability to permeate biological barriers. Permeating these barriers, such as the Blood-Brain Barrier (BBB), the Placental Barrier (PB), and the Blood-Mammary Barrier (BM), is crucial for many drugs to reach their intended targets. The BBB, for example, protects the brain from harmful substances. Drugs with a lower TPSA (generally below 90 Ų) tend to permeate the BBB more easily, allowing them to reach the brain and exert their therapeutic effects (Shityakov et al., 2013). Similarly, for drugs intended to treat the fetus, a lower TPSA (below 60 Ų) is preferred to ensure they can pass through the placenta (Augustiño-Roubina et al., 2019). Breastfeeding mothers also need consideration. Here, an optimal TPSA for a drug is around 60-80 Ų to allow it to reach the breast tissue for milk production, while drugs exceeding 90 Ų are less likely to permeate the Blood-Mammary Barrier. See also Biopharmaceutics Classification System Cheminformatics Chemistry Development Kit JOELib Implicit solvation Lipinski's rule of five References Literature Ertl, P. Polar Surface Area, in Molecular Drug Properties, R. Mannhold (ed), Wiley-VCH, pp. 111–126, 2007 External links Interactive Polar Surface Area calculator Free, Programmable TPSA Calculator Cheminformatics Medicinal chemistry
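As a sketch of how the TPSA thresholds above are applied in practice, the snippet below computes TPSA with the open-source RDKit toolkit and checks the rule-of-thumb cutoffs. The example molecules and the strict numeric cutoffs are illustrative; in real projects TPSA is weighed alongside many other properties.

```python
# Compute topological polar surface area (TPSA) and apply the
# rule-of-thumb permeability cutoffs discussed above.
from rdkit import Chem
from rdkit.Chem import Descriptors

examples = {
    "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    tpsa = Descriptors.TPSA(mol)
    likely_cns = tpsa < 90        # heuristic blood-brain barrier cutoff
    likely_permeant = tpsa <= 140  # heuristic membrane-permeation cutoff
    print(f"{name}: TPSA = {tpsa:.1f} Å², "
          f"CNS-permeant (heuristic): {likely_cns}, "
          f"cell-permeant (heuristic): {likely_permeant}")
```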
Polar surface area
[ "Chemistry", "Technology", "Biology" ]
502
[ "Biochemistry", "Computer science stubs", "Computer science", "Computational chemistry", "Cheminformatics", "Medicinal chemistry", "nan", "Computing stubs" ]
5,578,523
https://en.wikipedia.org/wiki/Witt%20group
In mathematics, a Witt group of a field, named after Ernst Witt, is an abelian group whose elements are represented by symmetric bilinear forms over the field. Definition Fix a field k of characteristic not equal to 2. All vector spaces will be assumed to be finite-dimensional. Two spaces equipped with symmetric bilinear forms are equivalent if one can be obtained from the other by adding a metabolic quadratic space, that is, zero or more copies of a hyperbolic plane, the non-degenerate two-dimensional symmetric bilinear form with a norm 0 vector. Each class is represented by the core form of a Witt decomposition. The Witt group of k is the abelian group W(k) of equivalence classes of non-degenerate symmetric bilinear forms, with the group operation corresponding to the orthogonal direct sum of forms. It is additively generated by the classes of one-dimensional forms. Although classes may contain spaces of different dimension, the parity of the dimension is constant across a class and so rk: W(k) → Z/2Z is a homomorphism. The elements of finite order in the Witt group have order a power of 2; the torsion subgroup is the kernel of the functorial map from W(k) to W(kpy), where kpy is the Pythagorean closure of k; it is generated by the Pfister forms with a non-zero sum of squares. If k is not formally real, then the Witt group is torsion, with exponent a power of 2. The height of the field k is the exponent of the torsion in the Witt group, if this is finite, or ∞ otherwise. Ring structure The Witt group of k can be given a commutative ring structure, by using the tensor product of quadratic forms to define the ring product. This is sometimes called the Witt ring W(k), though the term "Witt ring" is often also used for a completely different ring of Witt vectors. To discuss the structure of this ring one assumes that k is of characteristic not equal to 2, so that one may identify symmetric bilinear forms and quadratic forms. The kernel of the rank mod 2 homomorphism is a prime ideal, I, of the Witt ring termed the fundamental ideal. The ring homomorphisms from W(k) to Z correspond to the field orderings of k, by taking signature with respective to the ordering. The Witt ring is a Jacobson ring. It is a Noetherian ring if and only if there are finitely many square classes; that is, if the squares in k form a subgroup of finite index in the multiplicative group of k. If k is not formally real, the fundamental ideal is the only prime ideal of W and consists precisely of the nilpotent elements; W is a local ring and has Krull dimension 0. If k is real, then the nilpotent elements are precisely those of finite additive order, and these in turn are the forms all of whose signatures are 0; W has Krull dimension 1. If k is a real Pythagorean field then the zero-divisors of W are the elements for which some signature is 0; otherwise, the zero-divisors are exactly the fundamental ideal. If k is an ordered field with positive cone P then Sylvester's law of inertia holds for quadratic forms over k and the signature defines a ring homomorphism from W(k) to Z, with kernel a prime ideal KP. These prime ideals are in bijection with the orderings Xk of k and constitute the minimal prime ideal spectrum MinSpec W(k) of W(k). The bijection is a homeomorphism between MinSpec W(k) with the Zariski topology and the set of orderings Xk with the Harrison topology. The n-th power of the fundamental ideal is additively generated by the n-fold Pfister forms. 
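A small worked example of the defining equivalence (using the conventions above, with char k ≠ 2): for any nonzero a in k, the two-dimensional form ⟨a⟩ ⊥ ⟨−a⟩ is hyperbolic, which is what makes the classes of one-dimensional forms invertible in W(k).

```latex
q(x,y) \;=\; a x^{2} - a y^{2} \;\cong\; \langle a \rangle \perp \langle -a \rangle ,
\qquad q(1,1) = 0 \ \text{with}\ (1,1) \neq (0,0)
\;\;\Longrightarrow\;\;
[\langle a \rangle] + [\langle -a \rangle] = 0 \ \text{in } W(k).
```

Since a nondegenerate isotropic binary form over a field of characteristic not 2 is isometric to the hyperbolic plane, its class vanishes in the Witt group, giving the relation displayed on the right.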
Examples The Witt ring of C, and indeed any algebraically closed field or quadratically closed field, is Z/2Z. The Witt ring of R is Z. The Witt ring of a finite field Fq with q odd is Z/4Z if q ≡ 3 mod 4 and isomorphic to the group ring (Z/2Z)[F*/F*2] if q ≡ 1 mod 4. The Witt ring of a local field with maximal ideal of norm congruent to 1 modulo 4 is isomorphic to the group ring (Z/2Z)[V] where V is the Klein 4-group. The Witt ring of a local field with maximal ideal of norm congruent to 3 modulo 4 is (Z/4Z)[C2] where C2 is a cyclic group of order 2. The Witt ring of Q2 is of order 32 and is given by . Invariants Certain invariants of a quadratic form can be regarded as functions on Witt classes. Dimension mod 2 is a function on classes: the discriminant is also well-defined. The Hasse invariant of a quadratic form is again, a well-defined function on Witt classes with values in the Brauer group of the field of definition. Rank and discriminant A ring is defined over K, Q(K), as a set of pairs (d, e) with d in K*/K*2 and e in Z/2Z. Addition and multiplication are defined by: . Then there is a surjective ring homomorphism from W(K) to this obtained by mapping a class to discriminant and rank mod 2. The kernel is I2. The elements of Q may be regarded as classifying graded quadratic extensions of K. Brauer–Wall group The triple of discriminant, rank mod 2 and Hasse invariant defines a map from W(K) to the Brauer–Wall group BW(K). Witt ring of a local field Let K be a complete local field with valuation v, uniformiser π and residue field k of characteristic not equal to 2. There is an injection W(k) → W(K) which lifts the diagonal form ⟨a1,...an⟩ to ⟨u1,...un⟩ where ui is a unit of K with image ai in k. This yields identifying W(k) with its image in W(K). Witt ring of a number field Let K be a number field. For quadratic forms over K, there is a Hasse invariant ±1 for every finite place corresponding to the Hilbert symbols. The invariants of a form over a number field are precisely the dimension, discriminant, all local Hasse invariants and the signatures coming from real embeddings. The symbol ring is defined over K, Sym(K), as a set of triples (d, e, f ) with d in K*/K*2, e in Z/2 and f a sequence of elements ±1 indexed by the places of K, subject to the condition that all but finitely many terms of f are +1, that the value on acomplex places is +1 and that the product of all the terms in f is +1. Let [a, b] be the sequence of Hilbert symbols: it satisfies the conditions on f just stated. Addition and multiplication is defined as follows: . Then there is a surjective ring homomorphism from W(K) to Sym(K) obtained by mapping a class to discriminant, rank mod 2, and the sequence of Hasse invariants. The kernel is I3. The symbol ring is a realisation of the Brauer-Wall group. Witt ring of the rationals The Hasse–Minkowski theorem implies that there is an injection . One can make this concrete and compute the image by using the "second residue homomorphism" W(Qp) → W(Fp). Composed with the map W(Q) → W(Qp), one obtains a group homomorphism ∂p: W(Q) → W(Fp) (for p = 2, ∂2 is defined to be the 2-adic valuation of the discriminant, taken mod 2). One will then have a split exact sequence which can be written as an isomorphism where the first component is the signature. Witt ring and Milnor's K-theory Let k be a field of characteristic not equal to 2. 
The powers of the ideal I of forms of even dimension ("fundamental ideal") in form a descending filtration and one may consider the associated graded ring, that is the direct sum of quotients . Let be the quadratic form considered as an element of the Witt ring. Then is an element of I and correspondingly a product of the form is an element of . John Milnor in a 1970 paper proved that the mapping from to that sends to is multilinear and maps Steinberg elements (elements such that for some and such that one has ) to 0. This means that this mapping defines a homomorphism from the Milnor ring of k to the graded Witt ring. Milnor showed also that this homomorphism sends elements divisible by 2 to 0 and that it is surjective. In the same paper, he made a conjecture that this homomorphism is an isomorphism for all fields k (of characteristic different from 2). This became known as the Milnor conjecture on quadratic forms. The conjecture was proved by Dmitry Orlov, Alexander Vishik, and Vladimir Voevodsky in 1996 (published in 2007) for the case , leading to increased understanding of the structure of quadratic forms over arbitrary fields. Grothendieck-Witt ring The Grothendieck-Witt ring GW is a related construction generated by isometry classes of nonsingular quadratic spaces with addition given by orthogonal sum and multiplication given by tensor product. Since two spaces that differ by a hyperbolic plane are not identified in GW, the inverse for the addition needs to be introduced formally through the construction that was discovered by Grothendieck (see Grothendieck group). There is a natural homomorphism GW → Z given by dimension: a field is quadratically closed if and only if this is an isomorphism. The hyperbolic spaces generate an ideal in GW and the Witt ring W is the quotient. The exterior power gives the Grothendieck-Witt ring the additional structure of a λ-ring. Examples The Grothendieck-Witt ring of C, and indeed any algebraically closed field or quadratically closed field, is Z. The Grothendieck-Witt ring of R is isomorphic to the group ring Z[C2], where C2 is a cyclic group of order 2. The Grothendieck-Witt ring of any finite field of odd characteristic is Z ⊕ Z/2Z with trivial multiplication in the second component. The element (1, 0) corresponds to the quadratic form ⟨a⟩ where a is not a square in the finite field. The Grothendieck-Witt ring of a local field with maximal ideal of norm congruent to 1 modulo 4 is isomorphic to Z ⊕ (Z/2Z)3. The Grothendieck-Witt ring of a local field with maximal ideal of norm congruent to 3 modulo 4 it is Z' ⊕ Z/4Z ⊕ Z/2Z. Grothendieck-Witt ring and motivic stable homotopy groups of spheres Fabien Morel showed that the Grothendieck-Witt ring of a perfect field is isomorphic to the motivic stable homotopy group of spheres π0,0(S0,0) (see "A¹ homotopy theory"). Witt equivalence Two fields are said to be Witt equivalent if their Witt rings are isomorphic. For global fields there is a local-to-global principle: two global fields are Witt equivalent if and only if there is a bijection between their places such that the corresponding local fields are Witt equivalent. In particular, two number fields K and L are Witt equivalent if and only if there is a bijection T between the places of K and the places of L and a group isomorphism t between their square-class groups, preserving degree 2 Hilbert symbols. In this case the pair (T, t) is called a reciprocity equivalence or a degree 2 Hilbert symbol equivalence. 
Some variations and extensions of this condition, such as "tame degree l Hilbert symbol equivalence", have also been studied. Generalizations Witt groups can also be defined in the same way for skew-symmetric forms, and for quadratic forms, and more generally ε-quadratic forms, over any *-ring R. The resulting groups (and generalizations thereof) are known as the even-dimensional symmetric L-groups L2k(R) and even-dimensional quadratic L-groups L2k(R). The quadratic L-groups are 4-periodic, with L0(R) being the Witt group of (1)-quadratic forms (symmetric), and L2(R) being the Witt group of (−1)-quadratic forms (skew-symmetric); symmetric L-groups are not 4-periodic for all rings, hence they provide a less exact generalization. L-groups are central objects in surgery theory, forming one of the three terms of the surgery exact sequence. See also Reduced height of a field Notes References Further reading External links Witt rings in the Springer encyclopedia of mathematics Quadratic forms
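As a closing illustration, the isomorphism from the "Witt ring of the rationals" section can be written out explicitly. This is a standard formulation added here for orientation rather than a quotation of the article's sources, and conventions for the residue maps vary slightly between references. There is a split exact sequence
\[ 0 \longrightarrow W(\mathbf{Z}) \longrightarrow W(\mathbf{Q}) \xrightarrow{\ \oplus_p \partial_p\ } \bigoplus_{p\ \mathrm{prime}} W(\mathbf{F}_p) \longrightarrow 0, \]
and since \(W(\mathbf{Z}) \cong \mathbf{Z}\) (detected by the signature) and \(W(\mathbf{F}_2) \cong \mathbf{Z}/2\mathbf{Z}\), it can be rewritten as
\[ W(\mathbf{Q}) \;\cong\; \mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z} \oplus \bigoplus_{p\ \mathrm{odd}} W(\mathbf{F}_p), \]
where each odd-p summand is Z/4Z or (Z/2Z)[Fp*/Fp*2] according to whether p ≡ 3 or 1 mod 4, as listed in the Examples section above.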
Witt group
[ "Mathematics" ]
2,887
[ "Quadratic forms", "Number theory" ]
5,578,871
https://en.wikipedia.org/wiki/Drinfeld%20module
In mathematics, a Drinfeld module (or elliptic module) is roughly a special kind of module over a ring of functions on a curve over a finite field, generalizing the Carlitz module. Loosely speaking, they provide a function field analogue of complex multiplication theory. A shtuka (also called F-sheaf or chtouca) is a sort of generalization of a Drinfeld module, consisting roughly of a vector bundle over a curve, together with some extra structure identifying a "Frobenius twist" of the bundle with a "modification" of it. Drinfeld modules were introduced by , who used them to prove the Langlands conjectures for GL2 of an algebraic function field in some special cases. He later invented shtukas and used shtukas of rank 2 to prove the remaining cases of the Langlands conjectures for GL2. Laurent Lafforgue proved the Langlands conjectures for GLn of a function field by studying the moduli stack of shtukas of rank n. "Shtuka" is a Russian word штука meaning "a single copy", which comes from the German noun “Stück”, meaning “piece, item, or unit". In Russian, the word "shtuka" is also used in slang for a thing with known properties, but having no name in a speaker's mind. Drinfeld modules The ring of additive polynomials We let be a field of characteristic . The ring is defined to be the ring of noncommutative (or twisted) polynomials over , with the multiplication given by The element can be thought of as a Frobenius element: in fact, is a left module over , with elements of acting as multiplication and acting as the Frobenius endomorphism of . The ring can also be thought of as the ring of all (absolutely) additive polynomials in , where a polynomial is called additive if (as elements of ). The ring of additive polynomials is generated as an algebra over by the polynomial . The multiplication in the ring of additive polynomials is given by composition of polynomials, not by multiplication of commutative polynomials, and is not commutative. Definition of Drinfeld modules Let F be an algebraic function field with a finite field of constants and fix a place of F. Define A to be the ring of elements in F that are regular at every place except possibly . In particular, A is a Dedekind domain and it is discrete in F (with the topology induced by ). For example, we may take A to be the polynomial ring . Let L be a field equipped with a ring homomorphism . A Drinfeld A-module over L is a ring homomorphism whose image is not contained in L, such that the composition of with coincides with . The condition that the image of A is not in L is a non-degeneracy condition, put in to eliminate trivial cases, while the condition that gives the impression that a Drinfeld module is simply a deformation of the map . As L{τ} can be thought of as endomorphisms of the additive group of L, a Drinfeld A-module can be regarded as an action of A on the additive group of L, or in other words as an A-module whose underlying additive group is the additive group of L. Examples of Drinfeld modules Define A to be Fp[T], the usual (commutative!) ring of polynomials over the finite field of order p. In other words, A is the coordinate ring of an affine genus 0 curve. Then a Drinfeld module ψ is determined by the image ψ(T) of T, which can be any non-constant element of L{τ}. So Drinfeld modules can be identified with non-constant elements of L{τ}. (In the higher genus case the description of Drinfeld modules is more complicated.) 
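To make the twisted multiplication described above explicit, here is a hedged restatement, writing L for the coefficient field, p for its characteristic, and assuming the twist is by the p-power Frobenius (consistent with the choice A = Fp[T] used in these examples; some treatments twist by a power of p instead). The defining relation and the resulting product are
\[ \tau a = a^{p}\,\tau \quad (a \in L), \qquad \Bigl(\sum_i a_i \tau^{i}\Bigr)\Bigl(\sum_j b_j \tau^{j}\Bigr) \;=\; \sum_{i,j} a_i\, b_j^{\,p^{i}}\, \tau^{\,i+j}. \]
Under the identification of L{τ} with additive polynomials, the element \(\sum_i a_i \tau^{i}\) corresponds to \(x \mapsto \sum_i a_i x^{p^{i}}\), and the product above corresponds to composition of such polynomials.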
The Carlitz module is the Drinfeld module ψ given by ψ(T) = T+τ, where A is Fp[T] and L is a suitable complete algebraically closed field containing A. It was described by L. Carlitz in 1935, many years before the general definition of Drinfeld module. See chapter 3 of for more information about the Carlitz module. See also Carlitz exponential. Shtukas Suppose that X is a curve over the finite field Fp. A (right) shtuka of rank r over a scheme (or stack) U is given by the following data: Locally free sheaves E, E′ of rank r over U×X together with injective morphisms E → E′ ← (Fr×1)*E, whose cokernels are supported on certain graphs of morphisms from U to X (called the zero and pole of the shtuka, and usually denoted by 0 and ∞), and are locally free of rank 1 on their supports. Here (Fr×1)*E is the pullback of E by the Frobenius endomorphism of U. A left shtuka is defined in the same way except that the direction of the morphisms is reversed. If the pole and zero of the shtuka are disjoint then left shtukas and right shtukas are essentially the same. By varying U, we get an algebraic stack Shtukar of shtukas of rank r, a "universal" shtuka over Shtukar×X and a morphism (∞,0) from Shtukar to X×X which is smooth and of relative dimension 2r − 2. The stack Shtukar is not of finite type for r > 1. Drinfeld modules are in some sense special kinds of shtukas. (This is not at all obvious from the definitions.) More precisely, Drinfeld showed how to construct a shtuka from a Drinfeld module. See Drinfeld, V. G. Commutative subrings of certain noncommutative rings. Funkcional. Anal. i Prilovzen. 11 (1977), no. 1, 11–14, 96. for details. Applications The Langlands conjectures for function fields state (very roughly) that there is a bijection between cuspidal automorphic representations of GLn and certain representations of a Galois group. Drinfeld used Drinfeld modules to prove some special cases of the Langlands conjectures, and later proved the full Langlands conjectures for GL2 by generalizing Drinfeld modules to shtukas. The "hard" part of proving these conjectures is to construct Galois representations with certain properties, and Drinfeld constructed the necessary Galois representations by finding them inside the l-adic cohomology of certain moduli spaces of rank 2 shtukas. Drinfeld suggested that moduli spaces of shtukas of rank r could be used in a similar way to prove the Langlands conjectures for GLr; the formidable technical problems involved in carrying out this program were solved by Lafforgue after many years of effort. See also Level structure (algebraic geometry) Moduli stack of elliptic curves References Drinfeld modules . English translation in Math. USSR Sbornik 23 (1974) 561–592. . . Shtukas Drinfeld, V. G. Cohomology of compactified moduli varieties of F-sheaves of rank 2. (Russian) Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 162 (1987), Avtomorfn. Funkts. i Teor. Chisel. III, 107–158, 189; translation in J. Soviet Math. 46 (1989), no. 2, 1789–1821 . English translation: Functional Anal. Appl. 21 (1987), no. 2, 107–122. Algebraic number theory Algebraic geometry Finite fields
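As a short worked example of the definitions above (added for illustration; it only uses the twisted multiplication rule, and writes T both for the polynomial and for its image in L), the Carlitz module ψ with ψ(T) = T + τ acts on x ∈ L by
\[ \psi_T(x) = Tx + x^{p}, \qquad \psi_{T^{2}}(x) = \psi_T(\psi_T(x)) = T^{2}x + (T + T^{p})\,x^{p} + x^{p^{2}}, \]
reflecting the computation \((T + \tau)^{2} = T^{2} + (T + T^{p})\,\tau + \tau^{2}\) in L{τ}.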
Drinfeld module
[ "Mathematics" ]
1,650
[ "Fields of abstract algebra", "Algebraic number theory", "Number theory", "Algebraic geometry" ]
5,578,915
https://en.wikipedia.org/wiki/Norcamphor
Norcamphor is an organic compound, classified as a bicyclic ketone. It is an analog of camphor, but without the three methyl groups. A colorless solid, it is used as a building block in organic synthesis. Norcamphor is prepared from norbornene via the 2-formate ester, which is oxidized. It is a useful precursor to norborneols. See also Norbornane References Ketones Cyclopentanes Norbornanes
Norcamphor
[ "Chemistry" ]
102
[ "Ketones", "Functional groups" ]
5,578,920
https://en.wikipedia.org/wiki/Circular%20arc
A circular arc is the arc of a circle between a pair of distinct points. If the two points are not directly opposite each other, one of these arcs, the minor arc, subtends an angle at the center of the circle that is less than radians (180 degrees); and the other arc, the major arc, subtends an angle greater than radians. The arc of a circle is defined as the part or segment of the circumference of a circle. A straight line that connects the two ends of the arc is known as a chord of a circle. If the length of an arc is exactly half of the circle, it is known as a semicircular arc. Length The length (more precisely, arc length) of an arc of a circle with radius r and subtending an angle θ (measured in radians) with the circle center — i.e., the central angle — is This is because Substituting in the circumference and, with α being the same angle measured in degrees, since θ = , the arc length equals A practical way to determine the length of an arc in a circle is to plot two lines from the arc's endpoints to the center of the circle, measure the angle where the two lines meet the center, then solve for L by cross-multiplying the statement: measure of angle in degrees/360° = L/circumference. For example, if the measure of the angle is 60 degrees and the circumference is 24 inches, then This is so because the circumference of a circle and the degrees of a circle, of which there are always 360, are directly proportional. The upper half of a circle can be parameterized as Then the arc length from to is Sector area The area of the sector formed by an arc and the center of a circle (bounded by the arc and the two radii drawn to its endpoints) is The area A has the same proportion to the circle area as the angle θ to a full circle: We can cancel on both sides: By multiplying both sides by r, we get the final result: Using the conversion described above, we find that the area of the sector for a central angle measured in degrees is Segment area The area of the shape bounded by the arc and the straight line between its two end points is To get the area of the arc segment, we need to subtract the area of the triangle, determined by the circle's center and the two end points of the arc, from the area . See Circular segment for details. Radius Using the intersecting chords theorem (also known as power of a point or secant tangent theorem) it is possible to calculate the radius r of a circle given the height H and the width W of an arc: Consider the chord with the same endpoints as the arc. Its perpendicular bisector is another chord, which is a diameter of the circle. The length of the first chord is W, and it is divided by the bisector into two equal halves, each with length . The total length of the diameter is 2r, and it is divided into two parts by the first chord. The length of one part is the sagitta of the arc, H, and the other part is the remainder of the diameter, with length 2r − H. Applying the intersecting chords theorem to these two chords produces whence so The arc, chord, and sagitta derive their names respectively from the Latin words for bow, bowstring, and arrow. See also Biarc Circle of a sphere Circular-arc graph Circular interpolation Lemon (geometry) Meridian arc Circumference Circular motion Tangential speed External links Table of contents for Math Open Reference Circle pages Math Open Reference page on circular arcs With interactive animation Math Open Reference page on Radius of a circular arc or segment With interactive animation Circles Curves
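The formulas above can be turned into a short computational sketch. The following Java snippet is purely illustrative (the class and method names are invented for this example); it evaluates the arc length L = rθ, the sector area A = r²θ/2, and the radius r = W²/(8H) + H/2 recovered from a chord of width W and height (sagitta) H, and it reproduces the 60-degree, 24-inch-circumference worked example from the text.

public class CircularArc {
    // Arc length for a central angle theta given in radians: L = r * theta.
    static double arcLength(double r, double theta) {
        return r * theta;
    }

    // Area of the sector bounded by the arc and the two radii: A = r^2 * theta / 2.
    static double sectorArea(double r, double theta) {
        return 0.5 * r * r * theta;
    }

    // Radius from chord width W and sagitta H, using (W/2)^2 = H(2r - H), i.e. r = W^2/(8H) + H/2.
    static double radiusFromChord(double w, double h) {
        return (w * w) / (8.0 * h) + h / 2.0;
    }

    public static void main(String[] args) {
        // Worked example from the text: a 60-degree arc of a circle whose circumference is 24 inches.
        double r = 24.0 / (2.0 * Math.PI);
        double theta = Math.toRadians(60.0);
        System.out.println(arcLength(r, theta));        // ~4.0, i.e. (60/360) * 24
        System.out.println(sectorArea(r, theta));       // area of the corresponding sector
        System.out.println(radiusFromChord(6.0, 1.0));  // hypothetical chord W = 6, H = 1 gives r = 5.0
    }
}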
Circular arc
[ "Mathematics" ]
801
[ "Circles", "Pi" ]
5,579,181
https://en.wikipedia.org/wiki/Ethernet%20in%20the%20first%20mile
Ethernet in the first mile (EFM) refers to using one of the Ethernet family of computer network technologies between a telecommunications company and a customer's premises. From the customer's point of view, it is their first mile, although from the access network's point of view it is known as the last mile. A working group of the Institute of Electrical and Electronics Engineers (IEEE) produced the standards known as IEEE 802.3ah-2004, which were later included in the overall standard IEEE 802.3-2008. EFM is often used in active optical network deployments. Although it is often used for businesses, it can also be known as Ethernet to the home (ETTH). One family of standards known as Ethernet passive optical network (EPON) uses a passive optical network. History With wide, metro, and local area networks using various forms of Ethernet, the goal was to eliminate non-native transport such as Ethernet over Asynchronous Transfer Mode (ATM) from access networks. One early effort was the EtherLoop technology invented at Nortel Networks in 1996, and then spun off into the company Elastic Networks in 1998. Its principal inventor was Jack Terry. The hope was to combine the packet-based nature of Ethernet with the ability of digital subscriber line (DSL) technology to work over existing telephone access wires. The name comes from local loop, which traditionally describes the wires from a telephone company office to a subscriber. The protocol was half-duplex with control from the provider side of the loop. It adapted to line conditions with a peak of 10 Mbit/s advertised, but 4-6 Mbit/s more typical, at a distance of about . Symbol rates were 1 megabaud or 1.67 megabaud, with 2, 4, or 6 bits per symbol. The EtherLoop product name was registered as a trademark in the US and Canada. The EtherLoop technology was eventually purchased by Paradyne Networks in 2002, which was in turn purchased by Zhone Technologies in 2005. Another effort was the concept promoted by Michael Silverton of using Ethernet variants that used fiber-optic communication to residential as well as business customers. This was an example of what has become known as fiber to the home (FTTH). The Fiberhood Networks company provided this service from 1999 to 2001. Some early products around the year 2000, were marketed as 10BaseS by Infineon Technologies, although they did not technically use baseband signalling, but rather passband as in very-high-bit-rate digital subscriber line (VDSL) technology. A patent was filed in 1997 by Peleg Shimon, Porat Boaz, Noam Alroy, Rubinstain Avinoam and Sfadya Yackow. Long Reach Ethernet was the product name used by Cisco Systems starting in 2001. It supported modes of 5 Mbit/s, 10 Mbit/s, and 15 Mbit/s depending on distance. In October 2000 Howard Frazier issued a call for interest on "Ethernet in the Last Mile". At the November 2000 meeting, IEEE 802.3 created the "Ethernet in the First Mile" study group, and on July 16, 2001, the 802.3ah working group. In parallel participating vendors formed the Ethernet in the First Mile Alliance (EFMA) in December 2001 to promote Ethernet subscriber access technology and support the IEEE standard efforts. At an early meeting, the EtherLoop technology was called 100BASE-CU and another technology called EoVDSL for Ethernet over VDSL. The working group's EFM standard was approved on June 24, 2004, and published on September 7, 2004, as IEEE 802.3ah-2004. In 2005, it was included into the base IEEE 802.3 standard. 
In 2005, the EFMA was absorbed by the Metro Ethernet Forum. In early 2006, work began on an even higher-speed 10 gigabit/second Ethernet passive optical network (10G-EPON) standard, ratified in 2009 as IEEE 802.3av. The work on the EPON was continued by the IEEE P802.3bk Extended EPON Task Force, formed in March 2012. The major goals for this Task Force included adding support for PX30, PX40, PRX40, and PR40 power budget classes to both 1G-EPON and 10G-EPON. The 802.3bk amendment was approved by the IEEE-SA SB in August 2013 and published soon thereafter as the standard IEEE Std 802.3bk-2013. In November 2011, IEEE 802.3 began work on EPON Protocol over Coax (EPoC). On June 4, 2020, the IEEE approved IEEE 802.3ca which allows for symmetric or asymmetric operation with downstream speeds of 25 Gbit/s or 50 Gbit/s, and upstream speeds of 10 Gbit/s, 25 Gbit/s, or 50 Gbit/s over passive optical networks. Description EFM defines how Ethernet can be transmitted over new media types using new Ethernet physical layer (PHY) interfaces: Voice-grade copper. These new EFM copper (EFMCu), or Ethernet over copper, interfaces allow optional multi-pair aggregation Long-wavelength single optical fiber (as well as long-wavelength dual-strand fiber) Point-to-multipoint (P2MP) fiber. These new interfaces are known under the collective name of Ethernet over passive optical networks (EPON). EFM also addresses other issues, required for mass deployment of Ethernet services, such as operations, administration, and management (OA&M) and compatibility with existing technologies (such as plain old telephone service spectral compatibility for copper twisted pair). Copper wires 2BASE-TL – defined in clauses 61 and 63. Full-duplex long-reach point-to-point link over voice-grade copper wiring. 2BASE-TL PHY can deliver a minimum of 2 Mbit/s and a maximum of 5.69 Mbit/s over distances of up to 2700 m (9,000 ft), using ITU-T G.991.2 (G.SHDSL.bis) technology over a single copper pair. 10PASS-TS – defined in clauses 61 and 62. Full-duplex short-reach point-to-point link over voice-grade copper wiring. 10PASS-TS PHY can deliver a minimum of 10 Mbit/s over distances of up to 750 m (2460 ft), using ITU G.993.1 (VDSL) technology over a single copper pair. Active fiber optics 100BASE-LX10 defined in clause 58, providing point-to-point 100 Mbit/s Ethernet links over a pair of single-mode fibers up to at least 10 km. 100BASE-BX10 defined in clause 58, providing point-to-point 100 Mbit/s Ethernet links over an individual single-mode fiber up to at least 10 km. 1000BASE-LX10 defined in clause 59, providing point-to-point 1000 Mbit/s Ethernet links over a pair of single-mode fibers up to at least 10 km. 1000BASE-BX10 defined in clause 59, providing point-to-point 1000 Mbit/s Ethernet links over an individual single-mode fiber up to at least 10 km. Passive optical network Fiber to the home can use a passive optical network. 1000BASE-PX10 defined in Clause 60 (added by IEEE Std 802.3ah-2004), providing P2MP 1000 Mbit/s Ethernet links over PONs, at the distance of at least 10 km, at the split of at least 1:16. 1000BASE-PX20 defined in Clause 60 (added by IEEE Std 802.3ah-2004), providing P2MP 1000 Mbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:16. 1000BASE-PX30 defined in Clause 60 (added by IEEE Std 802.3bk-2013), providing P2MP 1000 Mbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:32. 
1000BASE-PX40 defined in Clause 60 (added by IEEE Std 802.3bk-2013), providing P2MP 1000 Mbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:64. 10GBASE-PR10 defined in Clause 91 (added by IEEE Std 802.3av-2009), providing P2MP 10 Gbit/s Ethernet links over PONs, at the distance of at least 10 km, at the split of at least 1:16. 10GBASE-PR20 defined in Clause 91 (added by IEEE Std 802.3av-2009), providing P2MP 10 Gbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:16. 10GBASE-PR30 defined in Clause 91 (added by IEEE Std 802.3av-2009), providing P2MP 10 Gbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:32. 10GBASE-PR40 defined in Clause 60 (added by IEEE Std 802.3bk-2013), providing P2MP 10 Gbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:64. 25GBASE and 50GBASE added by IEEE Std 802.3ca-2020, providing P2MP 25 Gbit/s Ethernet links over PONs, at the distance of at least 20 km, at the split of at least 1:32. 50 Gbit/s to a single end-point is achieved by using two different wavelengths of light. Additionally clause 57 defines link-level OA&M, including discovery, link monitoring, remote fault indication, loopbacks, and variable access. 2BASE-TL 2BASE-TL is an IEEE 802.3-2008 Physical Layer (PHY) specification for a full-duplex long-reach point-to-point Ethernet link over voice-grade copper wiring. Rates and distances Unlike 10/100/1000 PHYs, providing a single rate of 10, 100, or 1000 Mbit/s, the 2BASE-TL link rate can vary, depending on the copper media characteristics (such as length, wire diameter or gauge, number of pairs if the link is aggregated, amount of crosstalk between the pairs, etc.), desired link parameters (such as desired SNR margin, Power Back-Off, etc.), and regional spectral limitations. 2BASE-TL PHYs deliver a minimum of 2 Mbit/s over distances of up to , using ITU-T G.991.2 (G.SHDSL.bis) technology over a single copper pair. These PHYs may also support an optional aggregation or bonding of multiple copper pairs, called PME Aggregation Function (PAF). For a single pair, the minimum possible link bitrate is 192 kbit/s (3 x 64 kbit/s) and the maximum bitrate is 5.7 Mbit/s (89 x 64 kbit/s). On a 0.5 mm wire with 3 dB noise margin and no spectral limitations, the max bitrate can be achieved over distances of up to . At the maximum achievable bitrate is about 850 kbit/s. The throughput of a 2BASE-TL link is lower than the link's bitrate by an average 5%, due to 64/65-octet encoding and PAF overhead; both factors depend on packet size. 10PASS-TS 10PASS-TS is an IEEE 802.3-2008 Physical Layer (PHY) specification for a full-duplex short-reach point-to-point Ethernet link over voice-grade copper wiring. 10PASS-TS PHYs deliver a minimum of 10 Mbit/s over distances of up to , using ITU-T G.993.1 (VDSL) technology over a single copper pair. These PHYs may also support an optional aggregation or bonding of multiple copper pairs, called PME Aggregation Function (PAF). Details Unlike other Ethernet physical layers that provide a single rate such as 10, 100, or 1000 Mbit/s, the 10PASS-TS link rate can vary, similar to 2BASE-TL, depending on the copper channel characteristics, such as length, wire diameter (gauge), wiring quality, the number of pairs if the link is aggregated and other factors. 
VDSL is a short-range technology designed to provide broadband over less than 1 km of voice-grade copper twisted-pair line, but connection data rates deteriorate quickly as the line distance increases. This has led to VDSL being referred to as a "fiber to the curb" technology, because it requires fiber backhaul to connect with a carrier network over greater distances. VDSL-based Ethernet in the first mile services may be a useful way to standardise functionality on metro Ethernet networks, or potentially to distribute internet access services over voice-grade wiring in multi-dwelling-unit buildings. However, VDSL2 has already proven to be a versatile and faster standard with greater reach than VDSL. See also 10G-EPON PME Aggregation Function G.SHDSL 10BROAD36 – Ethernet over Cable-modem ITU G.993.2 VDSL2 Passive optical network References Further reading External links 802.3-2018 IEEE Standard for Ethernet – EFM is contained in section 5 Ethernet in the First Mile FAQ EFM Knowledge Base at the UNH-IOL Bonding protocols Physical layer protocols Data transmission First Mile Network access Local loop
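The 2BASE-TL figures quoted earlier (64 kbit/s granularity, 3 to 89 channels per pair, and roughly 5% average overhead from 64/65-octet encoding and PAF) lend themselves to a small arithmetic sketch. The Java snippet below is illustrative only; the names are invented and the 5% figure is the average mentioned in the text, not a precise model of the overhead.

public class TwoBaseTlEstimate {
    static final int CHANNEL_KBPS = 64;   // SHDSL payload granularity
    static final int MIN_CHANNELS = 3;    // 3 x 64 = 192 kbit/s minimum per pair
    static final int MAX_CHANNELS = 89;   // 89 x 64 = 5,696 kbit/s (about 5.7 Mbit/s) maximum per pair

    // Aggregate line rate in kbit/s for `pairs` bonded copper pairs, each carrying `channels` 64 kbit/s units.
    static int lineRateKbps(int channels, int pairs) {
        if (channels < MIN_CHANNELS || channels > MAX_CHANNELS) {
            throw new IllegalArgumentException("channels outside the 2BASE-TL range");
        }
        return channels * CHANNEL_KBPS * pairs;
    }

    // Rough Ethernet throughput after 64/65-octet encoding and PAF overhead (about 5% on average).
    static double approxThroughputKbps(int channels, int pairs) {
        return lineRateKbps(channels, pairs) * 0.95;
    }

    public static void main(String[] args) {
        System.out.println(lineRateKbps(89, 1));          // 5696 kbit/s on a single pair
        System.out.println(approxThroughputKbps(32, 4));  // four bonded pairs at 2,048 kbit/s each
    }
}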
Ethernet in the first mile
[ "Engineering" ]
2,915
[ "Electronic engineering", "Network access" ]
5,579,825
https://en.wikipedia.org/wiki/JTS%20Topology%20Suite
JTS Topology Suite (Java Topology Suite) is an open-source Java software library that provides an object model for Euclidean planar linear geometry together with a set of fundamental geometric functions. JTS is primarily intended to be used as a core component of vector-based geomatics software such as geographical information systems. It can also be used as a general-purpose library providing algorithms in computational geometry. JTS implements the geometry model and API defined in the OpenGIS Consortium Simple Features Specification for SQL. JTS defines a standards-compliant geometry system for building spatial applications; examples include viewers, spatial query processors, and tools for performing data validation, cleaning and integration. In addition to the Java library, the foundations of JTS and selected functions are maintained in a C++ port, for use in C-style linking on all major operating systems, in the form of the GEOS software library. Up to JTS 1.14, and the GEOS port, are published under the GNU Lesser General Public License (LGPL). With the LocationTech adoption future releases will be under the EPL/BSD licenses. Scope JTS provides the following functionality: Geometry model Geometry classes support modelling points, linestrings, polygons, and collections. Geometries are linear, in the sense that boundaries are implicitly defined by linear interpolation between vertices. Geometries are embedded in the 2-dimensional Euclidean plane. Geometry vertices may also carry a Z value. User-defined precision models are supported for geometry coordinates. Computation is performed using algorithms which provide robust geometric computation under all precision models. Geometric functions Topological validity checking Area and Distance functions Spatial Predicates based on the Egenhofer DE-9IM model Overlay functions (including intersection, difference, union, symmetric difference) Buffer computation (including different cap and join types) Convex hull Geometric simplification including the Douglas–Peucker algorithm Geometric densification Linear referencing Precision reduction Delaunay triangulation and constrained Delaunay triangulation Voronoi diagram generation Smallest enclosing rectangle Discrete Hausdorff distance Spatial structures and algorithms Robust line segment intersection Efficient line arrangement intersection Efficient point in polygon Spatial index structures including quadtree and STR-tree Planar graph structures and algorithms I/O capabilities Reading and writing of WKT, WKB and GML formats History Funding for the initial work on JTS was obtained in the Fall 2000 from GeoConnections and the Government of British Columbia, based on a proposal put forward by Mark Sondheim and David Skea. The work was carried out by Martin Davis (software design and lead developer) and Jonathan Aquino (developer), both of Vivid Solutions at the time. Since then JTS has been maintained as an independent software project by Martin Davis. Since late 2016/early 2017 JTS has been adopted by LocationTech. Projects using JTS GeoServer GeoTools OpenJUMP and forks uDig gvSIG Batik Hibernate Spatial Whitebox Geospatial Analysis Tools Platforms JTS is developed under the Java JDK 1.4 platform. It is 100% pure Java. It will run on all more recent JDKs as well. JTS has been ported to the .NET Framework as the Net Topology Suite. A JTS subset has been ported to C++, with entry points declared as C interfaces, as the GEOS library. 
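To make the scope described above concrete, the following is a minimal usage sketch in Java. It is illustrative rather than normative: it assumes a recent JTS release in which the packages live under org.locationtech.jts (older releases used com.vividsolutions.jts), and it exercises only a few of the functions listed in the Scope section (WKT input, overlay operations, spatial predicates and buffering).

import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.ParseException;
import org.locationtech.jts.io.WKTReader;

public class JtsExample {
    public static void main(String[] args) throws ParseException {
        WKTReader reader = new WKTReader();

        // Two overlapping squares described in Well-Known Text (WKT).
        Geometry a = reader.read("POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))");
        Geometry b = reader.read("POLYGON ((5 5, 15 5, 15 15, 5 15, 5 5))");

        // Overlay functions: intersection and union.
        System.out.println(a.intersection(b));      // the 5 x 5 square where they overlap
        System.out.println(a.union(b).getArea());   // 175.0

        // DE-9IM based spatial predicates and buffer computation.
        System.out.println(a.intersects(b));        // true
        Geometry buffered = a.buffer(2.0);          // expands the square by 2 units, rounded corners by default
        System.out.println(buffered.contains(a));   // true
    }
}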
C/C++ port: GEOS GEOS is the C/C++ port of a subset of JTS and selected functions. It is a foundation component in a software ecosystem of native, compiled executable binaries on Linux, Mac and Windows platforms. Because Java code runs inside the Java Virtual Machine (JVM), libraries written in Java cannot normally be used from the standard native cross-linking environment (which is usually built around C). Linux, Microsoft Windows and the BSD family, including macOS, use a linking model that lets libraries written in various languages be linked into a native executable; Java, by design, does not participate in this interoperability without unusual measures (JNI). Applications using GEOS GEOS links and ships internally in the popular applications listed below; and because it supplies standards-based geometry classes to GDAL, which is itself widely used as an internal engine in GIS software, GEOS is a core geometry implementation in even more applications: GDAL – OGR – raster and vector data munging QGIS – Desktop cross-platform, open source GIS PostGIS – spatial types and operations for PostgreSQL GeoDjango – Django's support for GIS-enabled databases Google Earth – A virtual globe and world imaging program GRASS GIS Library and Application MapServer – an open source development environment for building spatially enabled internet applications World Wind Java – NASA's open source virtual globe and world imaging technology Orfeo toolbox – A satellite image processing library R – Open source statistical software with extensions for spatial data analysis SAGA GIS – A cross-platform open source GIS software See also DE-9IM, a topological model Geospatial topology References External links Net Topology Suite Home page GEOS Home page Geometric algorithms Application programming interfaces Free software programmed in Java (programming language) Geographic data and information software Geometric topology
JTS Topology Suite
[ "Mathematics" ]
1,102
[ "Topology", "Geometric topology" ]
5,580,444
https://en.wikipedia.org/wiki/Amorphous%20carbonia
Amorphous carbonia, also called a-carbonia or a-CO2, is an exotic amorphous solid form of carbon dioxide that is analogous to amorphous silica glass. It was first made in the laboratory in 2006 by subjecting dry ice to high pressures (40–48 gigapascals, or 400,000 to 480,000 atmospheres) in a diamond anvil cell. Amorphous carbonia is not stable at ordinary pressures; it quickly reverts to normal CO2. While carbon dioxide normally forms molecular crystals, in which individual molecules are bound by van der Waals forces, in amorphous carbonia the atoms form a covalently bound three-dimensional network, with a structure analogous to that of silicon dioxide or germanium dioxide glass. Mixtures of a-carbonia and a-silica are a prospective very hard and stiff glass material that would be stable at room temperature. Such a glass could serve as a protective coating, e.g. in microelectronics. The discovery has implications for astrophysics, as the interiors of massive planets may contain amorphous solid carbon dioxide. Notes References External links Dry ice creates toughened glass Physicsweb: Dry ice forms ultrahard glass Carbon dioxide Amorphous solids Physical chemistry Astrophysics Glass compositions
Amorphous carbonia
[ "Physics", "Chemistry", "Astronomy" ]
263
[ "Glass chemistry", "Applied and interdisciplinary physics", "Glass compositions", "Unsolved problems in physics", "Astronomy stubs", "Astrophysics", "Physical chemistry", "Astrophysics stubs", "nan", "Greenhouse gases", "Carbon dioxide", "Amorphous solids", "Physical chemistry stubs", "Ast...
5,580,524
https://en.wikipedia.org/wiki/Sugarcane%20smut
Sugarcane smut is a fungal disease of sugarcane caused by the fungus Sporisorium scitamineum. The disease is known as culmicolous, which describes the outgrowth of fungus of the stalk on the cane. It attacks several sugarcane species and has been reported to occur on a few other grass species as well, but not to a critical amount. The most recognizable characteristic of this disease is a black or gray growth that is referred to as a "smut whip". Resistance to sugarcane smut is the best course of action for management, but also the use of disease free seed is important. On smaller scale operations treatments using hot water and removing infected plants can be effective. The main mode of spore dispersal is the wind but the disease also spreads through the use of infected cuttings. Sugarcane smut is a devastating disease in sugarcane growing areas globally. Hosts and symptoms Sugarcane smut infects all sugarcane species unless the species is resistant. The damage caused depends on the susceptibility of the species. Sugarcane fields are planted using vegetative cuttings from mother plants so they have the same genetic make-up of the parent plant. Seeds are not used in propagation because sugarcane is a multi-species hybrid and therefore is difficult to breed. Sugarcane smuts can also infect some other grass species outside of sugarcane. However, mostly it remains on plants of the genus Saccharum. Two to four months after the fungus has infected the plant, black whip-like structures, instead of a spindle leaf, emerge from the meristem, or growing point, of the plant. The developing whip is a mixture of plant tissue and fungal tissue. The whip reaches maturity between the sixth and the seventh month. When spores that are contained inside the whip are released, the core of the whip remains behind and is a straw-like color. Plants infected with the fungus usually appear to have thin stalks and are often stunted. They end up tillering much more than normal and this results in leaves that are more slender and much weaker. They sometimes appear more grass-like than non-infected plants. Less common symptoms of the disease are stem or leaf galls and proliferating buds. Disease cycle Sugarcane smut is disseminated via teliospores that are produced in the smut whip. These teliospores located either in the soil or on the plant, germinate in the presence of water. After germination they produce promycelium and undergo meiosis to create four haploid sporidia. Sugarcane smut is bipolar and therefore produces two different mating types of sporida. For infection to occur, two sporida from different mating types must come together and form a dikaryon. This dikaryon then produces hyphae that penetrate the bud scales of the sugarcane plant and infect the meristematic tissue. The fungus grows within the meristematic tissue and induces formation of flowering structures which it colonises to produce its teliopores. The flowering structures, usually typical grass arrows, are transformed into a whip like sorus that grows out between the leaf sheaths. At first it is covered by a thin silvery peridium (this is the host tissue) which easily peels back when desiccated to expose the sooty black-brown teliospores. These teliospores are then dispersed via wind and the cycle continues. The spores are reddish brown, round and subovoid and may be smooth to moderately echinulate. The size varies from 6.5 to 8 um. Sugarcane cultivars intended for distribution to other geographical areas should be tested for susceptibility to S. 
scitamineum populations in each area. Environment Sugarcane smut is a very widespread disease and is prevalent in Central and South America, Africa, and South-Western Asia. Sugarcane smut has been reported in all countries that lie between 20 degrees north and south of the equator. The pathogen does well in hot dry weather for most of the disease cycle but requires wet conditions for teliospores to germinate. Gene expression Plant disease resistance is the result of coevolution between the plant and pathogen. During Ustilago scitaminea infection, the fungus grows within the meristematic tissue and induces formation of flowering structures, which it colonises to produce its teliopores. The flowering structures, usually typical grass panicles, are transformed into a whip-like sorus that grows rapidly and protrudes out between the leaf sheaths. The development of sugarcane smut depends on the interaction among environment, the sugarcane variety and the pathogen itself. If the interaction between smut-resistant varieties and the pathogen is nonaffinity, disease resistance occurs; however, if the interaction between smut-susceptible varieties and the pathogen is affinity, disease susceptibility occurs. A series of physiological and biochemical changes, together with the molecular response, occur during the period between the appearance of the stress on plant from the invasion of the pathogen and the subsequent plant-pathogen interaction. Progress has been made in studies of the molecular basis of sugarcane smut resistance. According to one study, the type of resistance is a single gene resistance at the N52/219 gene site. Furthermore this study talked about several different strains or races of Ustilago scitaminea. Despite what has been learned, more studies on the molecular interaction in this pathosystem are needed to discover the mechanisms of smut resistance. Protein expression Despite what has been learned, little is known about the proteomic background of the interaction between pathogen and host in this pathosystem. Management The management of sugarcane smut is done through the use of resistant cultivars, fungicide and using disease free planting stock. Control is mainly accomplished through the use of resistant cultivars in areas where the disease is present. Fungicides also are used in the control of this disease, but typically resistant cultivars are preferred due to the cost of fungicides. In areas where this disease is not yet found it is important to use disease-free planting stock so as not to introduce the pathogen. Important regulations are sometimes implemented by governments to help prevent the spread of the disease. Quarantines are also implemented in areas that are infected. Importance Historically, sugarcane smut was first noted in 1877, in the Natal region of South Africa. The disease has been a problem in almost all countries where sugarcane is grown. Sugarcane smut did not make it to the western hemisphere until the 1940s when it reached Argentina. Australia was the last major producer of sugarcane to be infected. In 1998, the western coast was infected but the major production centers for Australia are on the country's east coast. Now infected plants have been found on both sides of the country, making sugarcane smut an issue in all production centers. At times the disease would go unnoticed or undetected until it would completely wipe out huge tracts of the crop. Sugarcane smut can cause any amount of loss to susceptible varieties. 
Anywhere from 30% to total crop failure could be seen. The reduction in yield is mainly dependent on the races of the pathogen present, the variety of sugarcane, and the environmental conditions. Sugarcane plants are ratoon, meaning the plant resprouts after it is harvested providing the next crop. Because of this perennial nature, a total crop failure can lead to the need to replant a field. Now, it is typical to replace areas that have been infected with resistant varieties of sugarcane. See also Smut (fungus) References Sugarcane diseases Fungal plant pathogens and diseases Ustilaginomycotina Taxa named by Hans Sydow Fungus species
Sugarcane smut
[ "Biology" ]
1,560
[ "Fungi", "Fungus species" ]
5,580,651
https://en.wikipedia.org/wiki/Jesmonite
Jesmonite is a composite material used in fine arts, crafts, and construction. It consists of a gypsum-based material in an acrylic resin. It was invented in the United Kingdom in 1984 by Peter Hawkins. Usage Jesmonite is a versatile material and is used in several ways. It is typically used for creating sculptures and other three-dimensional works, but can be used with other materials as a ground for painting. It can be used as a surface material in building and construction. It is considered an attractive alternative to other resin-based materials, such as polyester and fiberglass. It can be used for casting and laminating. Besides its popularity in sculpture, jesmonite is popular in other areas where casting and moulding are common, such as architectural stone and plasterwork that has a requirement to be very lightweight, taxidermy, archaeology, and palaeontology. A 2016 Financial Times article described jesmonite's increasing use in interior design, seeing it as a natural-looking alternative to plastic for "high-end" goods. In 2017, jesmonite was named "Material of the Year" by the London Design Fair. Properties Jesmonite is considered durable, flame resistant, and resistant to impact. It can be used to fabricate both small and large objects. When mixed, it accepts coloured pigments and metal powders. Its surface can be finished to resemble plaster, stone, metal, and wood. Jesmonite is considered a low-hazard material. The finished composite emits no toxic fumes. The mixing process requires no harmful solvents. However, the mixing should be performed with rubber gloves, eye protection, and dust mask, and should take place in a well-ventilated area. Cleanup is performed with water. 2012 Thames Diamond Jubilee Pageant In the 2012 Thames Diamond Jubilee Pageant, the ornate prow sculptures on the Royal barges Gloriana and MV Spirit of Chartwell were carved and moulded in Jesmonite and decorated with gold leaf. These included dolphins, relief plaques and Old Father Thames. A Spire A Spire is a cast jesmonite sculpture by British-Japanese sculptor Simon Fujiwara, commissioned to stand outside the new Laidlaw Library of the University of Leeds, England, in 2015. The lower sections incorporate particles of coal, to acknowledge the city's early industries, and the upper stages show cables and leaves reflecting today's digital and natural world. The cylindrical form relates to two nearby church spires on Woodhouse Lane. References External links Jesmonite official website Jesmonite Palestine Distributor website Woodberg Jesmonite North America Distributor website Jesmonite Middle East Distributor website Resinarthub Composite materials Building materials Ceramic materials Architectural elements
Jesmonite
[ "Physics", "Technology", "Engineering" ]
566
[ "Building engineering", "Composite materials", "Architecture", "Construction", "Materials", "Architectural elements", "Ceramic materials", "Ceramic engineering", "Components", "Matter", "Building materials" ]
5,580,714
https://en.wikipedia.org/wiki/Proliferating%20cell%20nuclear%20antigen
Proliferating cell nuclear antigen (PCNA) is a DNA clamp that acts as a processivity factor for DNA polymerase δ in eukaryotic cells and is essential for replication. PCNA is a homotrimer and achieves its processivity by encircling the DNA, where it acts as a scaffold to recruit proteins involved in DNA replication, DNA repair, chromatin remodeling and epigenetics. Many proteins interact with PCNA via the two known PCNA-interacting motifs PCNA-interacting peptide (PIP) box and AlkB homologue 2 PCNA interacting motif (APIM). Proteins binding to PCNA via the PIP-box are mainly involved in DNA replication whereas proteins binding to PCNA via APIM are mainly important in the context of genotoxic stress. Function The protein encoded by this gene is found in the nucleus and is a cofactor of DNA polymerase delta. The encoded protein acts as a homotrimer and helps increase the processivity of leading strand synthesis during DNA replication. In response to DNA damage, this protein is ubiquitinated and is involved in the RAD6-dependent DNA repair pathway. Two transcript variants encoding the same protein have been found for this gene. Pseudogenes of this gene have been described on chromosome 4 and on the X chromosome. PCNA is also found in archaea, as a processivity factor of polD, the single multi-functional DNA polymerase in this domain of life. Expression in the nucleus during DNA synthesis PCNA was originally identified as an antigen that is expressed in the nuclei of cells during the DNA synthesis phase of the cell cycle. Part of the protein was sequenced and that sequence was used to allow isolation of a cDNA clone. PCNA helps hold DNA polymerase delta (Pol δ) to DNA. PCNA is clamped to DNA through the action of replication factor C (RFC), which is a heteropentameric member of the AAA+ class of ATPases. Expression of PCNA is under the control of E2F transcription factor-containing complexes. Role in DNA repair Since DNA polymerase epsilon is involved in resynthesis of excised damaged DNA strands during DNA repair, PCNA is important for both DNA synthesis and DNA repair. PCNA is also involved in the DNA damage tolerance pathway known as post-replication repair (PRR). In PRR, there are two sub-pathways: (1) a translesion synthesis pathway, which is carried out by specialised DNA polymerases that are able to incorporate damaged DNA bases into their active sites (unlike the normal replicative polymerase, which stall), and hence bypass the damage, and (2) a proposed "template switch" pathway that is thought to involve damage bypass by recruitment of the homologous recombination machinery. PCNA is pivotal to the activation of these pathways and the choice as to which pathway is utilised by the cell. PCNA becomes post-translationally modified by ubiquitin. Mono-ubiquitin of lysine number 164 on PCNA activates the translesion synthesis pathway. Extension of this mono-ubiquitin by a non-canonical lysine-63-linked poly-ubiquitin chain on PCNA is thought to activate the template switch pathway. Furthermore, sumoylation (by small ubiquitin-like modifier, SUMO) of PCNA lysine-164 (and to a lesser extent, lysine-127) inhibits the template switch pathway. This antagonistic effect occurs because sumoylated PCNA recruits a DNA helicase called Srs2, which has a role in disrupting Rad51 nucleoprotein filaments fundamental for initiation of homologous recombination. PCNA-binding proteins PCNA interacts with many proteins. 
Apoptotic factors ATPases Base excision repair enzymes Cell-cycle regulators Chromatin remodeling factor Clamp loader Cohesin DNA ligase DNA methyltransferase DNA polymerases E2 SUMO-conjugating enzyme E3 ubiquitin ligases Flap endonuclease Helicases Histone acetyltransferase Histone chaperone Histone deacetylase Mismatch repair enzymes Licensing factor NKp44 receptor Nucleotide excision repair enzyme Poly ADP ribose polymerase Procaspases Protein kinases TCP protein domain Topoisomerase Interactions PCNA has been shown to interact with: Annexin A2 CAF-1 CDC25C CHTF18 Cyclin D1 Cyclin O Cyclin-dependent kinase 4 Cyclin-dependent kinase inhibitor 1C DNMT1 EP300 Establishment of Sister Chromatid Cohesion 2 Flap structure-specific endonuclease 1 GADD45A GADD45G HDAC1 HUS1 ING1 KCTD13 KIAA0101 Ku70 Ku80 MCL1 MSH3 MSH6 MUTYH P21 POLD2 POLD3 POLDIP2 POLH POLL RFC1 RFC2 RFC3 RFC4 RFC5 Ubiquitin C Werner syndrome ATP-dependent helicase XRCC1 Y box binding protein 1 Proteins interacting with PCNA via APIM include human AlkB homologue 2, TFIIS-L, TFII-I, Rad51B, XPA, ZRANB3, and FBH1. Uses Antibodies against proliferating cell nuclear antigen (PCNA) or monoclonal antibody termed Ki-67 can be used for grading of different neoplasms, e.g. astrocytoma. They can be of diagnostic and prognostic value. Imaging of the nuclear distribution of PCNA (via antibody labeling) can be used to distinguish between early, mid and late S phase of the cell cycle. However, an important limitation of antibodies is that cells need to be fixed leading to potential artifacts. On the other hand, the study of the dynamics of replication and repair in living cells can be done by introducing translational fusions of PCNA. To eliminate the need for transfection and bypass the problem of difficult to transfect and/or short lived cells, cell permeable replication and/or repair markers can be used. These peptides offer the distinct advantage that they can be used in situ in living tissue and even distinguish cells undergoing replication from cells undergoing repair. caPCNA, a post-translationally modified isoform of PCNA common in cancer cells, is a potential therapeutic target in cancer therapy. In 2023 City of Hope National Medical Center published preclinical research on a targeted chemotherapy using AOH1996 that appears to suppress tumor growth without causing discernable side effects. See also Ki-67 – cellular marker for proliferation Transcription References Further reading External links ANA: Cell cycle related (Mitotic): PCNA type 1 and type 2 Antibody Patterns—Antibody Patterns.com Cell cycle regulators DNA replication DNA repair Proteins
Proliferating cell nuclear antigen
[ "Chemistry", "Biology" ]
1,455
[ "Genetics techniques", "Biomolecules by chemical classification", "DNA repair", "Signal transduction", "DNA replication", "Molecular genetics", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle regulators" ]
5,580,736
https://en.wikipedia.org/wiki/Jean%20Stevens
Emily Jean Stevens (1900–1967) was a New Zealand iris hybridiser in the 1940s and 1950s who created the 'Pinnacle' iris as well as a number of other outstanding amoenas (iris with white standards and colored falls). Childhood Emily Jean Burgess was born on 3 September 1900 at Stratford, New Zealand, to Alfred Henry Burgess and Fanny Eleanor Hollard. Her parents were farmers, and the family moved to Kaiti, Gisborne, where Jean attended Kaiti School and won a scholarship in 1913. The following year, when their youngest daughter fell ill, the family moved to Auckland, where Jean briefly attended Auckland Girls' Grammar School. A subsequent move took the family to Waikanae in 1915, where Jean's parents established a new bulb-growing and cut-flower business. Jean stayed home to care for her youngest sister and also worked in the family business. Early hybrids In 1921, Alfred Burgess imported some hybrid cultivars of tall bearded iris, and two years later Jean was given responsibility for their propagation and sale. Her interest in iris awakened, she began experimenting with new crosses and quickly showed aptitude for iris breeding. Her early efforts were guided in part by a paper on the subject by the English iris breeder A. J. Bliss. She joined the Iris Society (later the British Iris Society) and in 1928 sent selections of her own crosses to overseas experts for assessment. Her first success was the Destiny hybrid, which Geoffrey Pilkington, the secretary of the Iris Society, promoted for release on the British market. In 1934, it became the first southern hemisphere–bred iris to receive the society's bronze medal. In 1936, Jean married Wallace Rex Stevens, a partner in Stevens Brothers nursery, Bulls, whom she had met at a flower show. They had one child, Jocelyn, in 1937. Amoena hybrids In 1937, Stevens Brothers began including bearded iris in its catalogues. Between 1936 and 1939, three of Jean's irises won awards of merit from the Royal Horticultural Society, and a fourth prompted the American iris breeder Robert Schreiner to introduce some of her cultivars into the North American market. Their association would continue for another 30 years. In 1945, Jean and Wallace moved the Stevens Brothers nursery to Bastia Hill, Wanganui. Although the business name remained unchanged, Jean had taken over from Wallace's brother as a full partner. Jean started working on a new challenge: to widen the colour range of tall bearded irises known as amoenas—that is, those with white standards and violet, violet-blue, or purple falls. This involved the difficult task of working with recessive genes in plants with poor germination. In 1949, Stevens introduced Pinnacle, a very fine white and yellow amoena that gained international recognition and became one of the world's most popular iris cultivars. Both the American Iris Society (1951) and the Royal Horticultural Society (1959) granted Stevens an award of merit for its creation. It has been suggested that the originality of 'Pinnacle' would have won Jean the AIS's highest award, the Dykes Medal, if she had been eligible for it. Jean went on to create amoenas in a range of other colours, including deeper yellow, pale blue, plum, and pink shades. 
In 1967, her amoena 'Sunset Snows' with its cocoa-tinged pink falls took third place at an international iris competition in Florence and won cups for the best early variety and for the most original colour, marking the first time a prize in the competition had gone to the southern hemisphere and the first time that a single cultivar had collected three different prizes. Of all of Stevens' introductions, 'Sunset Snows' has been the most used by other hybridisers especially those searching for pink amoenas. She worked with other iris groups as well and is thought to have made some of the earliest crosses between Iris juncea and Iris boissieri, as well as between Iris wattii and Iris tectorum. Leadership and publications Stevens was active in various horticultural associations. She was a founding member of the Australian Iris Society in 1948, and the following year she became federal president of the renamed Australian and New Zealand Iris Society. Administrative difficulties led to her recommending a separation of the two bodies, and one result of the split was that she cofounded the New Zealand Iris Society with C. A. Teschner and D'Arcy Blackburn in 1949. She served as its president twice (1949–1951; 1956-1957) and was elected a life member in 1959. Stevens was also the editor of the New Zealand Iris Society for 10 years and registrar of New Zealand cultivars from 1957 until her death. Her writings appeared in New Zealand gardening magazines and in iris publications overseas, and in 1952 her handbook for Southern Hemisphere growers, "The Iris and Its Culture," was published in Australia. Jean and Wallace Stevens also led the way in developing native Australasian and South African flora for cut-flower production, especially proteas and Leucadendron. Jean made the first known crosses between Leucadendron laureolum and Leucadendron salignum, and at her prompting her son-in-law Ian Bell (who joined the partnership around 1961) began a more extensive hybridisation programme from which came 'Safari Sunset', a leucadendron with deep red bracts that became an important export flower. In the early 1960s, the Stevenses faced losing part of their land to a proposed primary school, but their appeal was supported by New Zealand and British horticultural authorities and was upheld. The Queen Mother visited the Stevens' gardens during her 1966 tour and reportedly left 'with an armful of slips and cuttings'. Stevens continued to win prestigious awards for her cultivars, including the British Iris Society's Foster Memorial Plaque (1953) and the American Iris Society's hybridisers' medal (1955). Between 1949 and 1961 her cultivars won two awards of merit and six honourable mentions in American iris competitions. She was guest speaker at the American society's annual convention in 1956—the first woman to be so honored—and was appointed an honorary judge in 1962. Early in 1967 Stevens was elected an associate of honour of the Royal New Zealand Institute of Horticulture. Legacy Jean Stevens died in Wanganui on 8 August 1967, having registered nearly 400 iris hybrids in her lifetime. The wholesale floristry business was continued by her husband until he died in 1974, and afterwards remained in the family with Ian and Jocelyn Bell. In 1970, the New Zealand Iris Society inaugurated an annual lecture series, the Jean Stevens Memorial Lecture. References Plant breeding New Zealand horticulturists 1900 births 1967 deaths People from Stratford, New Zealand
Jean Stevens
[ "Chemistry" ]
1,385
[ "Plant breeding", "Molecular biology" ]
5,580,754
https://en.wikipedia.org/wiki/Penis%20fencing
Penis fencing is a mating behavior engaged in by many species of flatworm, such as Pseudobiceros hancockanus. Species which engage in the practice are hermaphroditic; each individual has both egg-producing ovaries and sperm-producing testes. The flatworms "fence" using extendable two-headed dagger-like stylets. These stylets are pointed (and in some species hooked) in order to pierce their mate's epidermis and inject sperm into the haemocoel in an act known as intradermal hypodermic insemination, or traumatic insemination. Pairs can either compete, with only one individual transferring sperm to the other, or the pair can transfer sperm bilaterally. Both forms of sperm transfer can occur in the same species, depending on various factors. Unilateral sperm transfer One organism will inseminate the other, with the inseminating individual being the father. The sperm is absorbed through pores or sometimes wounds in the skin from the partner's stylet, causing fertilization in the other, who becomes the mother. The battle may last for up to an hour in some species. Parturition, while necessary for successful offspring production, requires a considerable parental investment in time and energy, and according to Bateman's principle, almost always burdens the mother. Thus, from an optimality model it is usually preferable for an organism to inseminate than to be inseminated. However, in many species that engage in this form of copulatory competition, each father will continue to fence with other partners until it is inseminated. In Alderia modesta, individuals will store sperm from several "fencing matches" before laying their eggs, and smaller individuals will more often inseminate a larger partner, with larger individuals spending more energy on laying eggs when paired with a smaller partner on the occasion that they transfer sperm unilaterally. In the absence of potential mates, some species such as Neobenedenia melleni are capable of reproducing through self-insemination. Bilateral sperm transfer Commonly, many hermaphroditic species mutually inseminate, or trade sperm, rather than compete, Chelidonura sandrana as an example. The tiger flatworm, Maritigrella crozieri, also transfers sperm bilaterally. In many species that engage in bilateral insemination, sperm trading is conditional. If one partner "cheats", and does not transfer sperm, the other partner will either prematurely abandon the partner, or will engage in typical mating behavior without transferring sperm. Other species will alternate which partner transfers sperm, engaging in multiple bouts of fencing with the same partner over time. In A. modesta, bilateral sperm transfer is the most common, especially in similarly sized mate pairs. Other uses The term is also applied, usually informally, to homosexual activity between two males among bonobos; same-sex genital-genital rubbing is used in bonobo society to cement bonds, reduce conflict, and express communal excitement over food. Several whale species also engage in penis fencing. See also Frot Love dart Sexual conflict Traumatic insemination References External links Bizarre Animal Mating Rituals Mating Platyhelminth biology Sexual acts Penis
Penis fencing
[ "Biology" ]
669
[ "Behavior", "Sexual acts", "Ethology", "Sexuality", "Mating" ]
5,580,827
https://en.wikipedia.org/wiki/Henry%20Wilbraham
Henry Wilbraham (25 July 1825 – 13 February 1883) was an English mathematician. He is known for discovering and explaining the Gibbs phenomenon nearly fifty years before J. Willard Gibbs did. Gibbs and Maxime Bôcher, as well as nearly everyone else, were unaware of Wilbraham's paper on the Gibbs phenomenon. Biography Henry Wilbraham was born to George and Lady Anne Wilbraham at Delamere, Cheshire. His family was privileged, with his father a parliamentarian and his mother the daughter of the Earl Fortescue. He attended Harrow School before being admitted to Trinity College, Cambridge at the age of 16. He received a BA in 1846 and an MA in 1849 from Cambridge. At the age of 22 he published his paper on the Gibbs phenomenon. He remained at Trinity as a Fellow until 1856. In 1864 he married Mary Jane Marriott, and together they had seven children. In the last years of his life, he was the District Registrar of the Chancery Court at Manchester. References Paul J. Nahin, Dr. Euler's Fabulous Formula, Princeton University Press, 2006. Ch. 4, Sect. 4. 1825 births 1883 deaths 19th-century English mathematicians Mathematical analysts People educated at Harrow School Alumni of Trinity College, Cambridge People from Cheshire
Henry Wilbraham
[ "Mathematics" ]
260
[ "Mathematical analysis", "Mathematical analysts" ]
5,581,581
https://en.wikipedia.org/wiki/Flight%20information%20display%20system
A flight information display system (FIDS) is a computer system used in airports to display flight information to passengers, in which a central computer controls mechanical or electronic display boards or monitors in order to show arriving and departing flight information in real time. The displays are located inside or around an airport terminal. A virtual version of an FIDS can also be found on most airport websites and teletext systems. In large airports, there are different sets of FIDS for each terminal or even each major airline. FIDS are used to inform passengers of boarding gates, departure/arrival times, destinations, notifications of flight delays/flight cancellations, partner airlines, and so on. Each line on an FIDS indicates a different flight number accompanied by: the airline name/logo and/or its IATA or ICAO airline designator (can also include names/logos of interlining/codesharing airlines or partner airlines, e.g. HX252/BR2898.) the city of origin or destination, and any intermediate points the expected arrival or departure time and/or the updated time (reflecting any delays) the status of the flight, such as "Landed", "Delayed", "Boarding", etc. And in the case of departing flights: the check-in counter numbers or the name of the airline handling the check-in the gate number Due to code sharing, a flight may be represented by a series of different flight numbers. For example, Star Alliance partners Lufthansa and Air Canada codeshare on a route as LH 474 and AC 9099: a single aircraft, operated by either airline, serves the route at any given time. Lines may be sorted by time, airline name, or city. Most FIDS are now displayed on LCD or LED screens, although some airports still use split-flap displays. References Display technology Airport infrastructure
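As a rough illustration of the fields just listed, the sketch below models a single FIDS line as a plain data record in Python. It is purely illustrative: the class name FIDSRow, the field names, and the sample values (times, cities, gates) are invented here and do not describe any real FIDS product; only the flight numbers LH 474 and AC 9099 come from the text above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FIDSRow:
    """One line on an arrivals/departures board, using the fields described above."""
    flight_number: str                                    # e.g. "LH 474"
    codeshares: List[str] = field(default_factory=list)   # marketing flight numbers, e.g. ["AC 9099"]
    city: str = ""                                        # origin (arrivals) or destination (departures)
    via: Optional[str] = None                              # intermediate point, if any
    scheduled_time: str = ""                               # published time
    estimated_time: Optional[str] = None                   # updated time reflecting any delay
    status: str = ""                                       # "Boarding", "Delayed", "Landed", ...
    gate: Optional[str] = None                              # departures only
    check_in: Optional[str] = None                           # counter numbers or handling airline

# Sample rows with made-up schedule details.
rows = [
    FIDSRow("XY 123", city="Springfield", scheduled_time="14:05", status="Boarding", gate="A4"),
    FIDSRow("LH 474", codeshares=["AC 9099"], city="Frankfurt",
            scheduled_time="13:40", estimated_time="14:10", status="Delayed", gate="C23"),
]

# Lines may be sorted by time, airline, or city, as noted above; here, by scheduled time.
for r in sorted(rows, key=lambda r: r.scheduled_time):
    shown = "/".join([r.flight_number] + r.codeshares)
    print(f"{r.scheduled_time}  {shown:<18} {r.city:<12} {r.status}  gate {r.gate}")
```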
Flight information display system
[ "Engineering" ]
386
[ "Airport infrastructure", "Electronic engineering", "Display technology", "Aerospace engineering" ]
5,581,765
https://en.wikipedia.org/wiki/Fascia%20%28architecture%29
Fascia () is an architectural term for a vertical frieze or band under a roof edge, or which forms the outer surface of a cornice, visible to an observer. Typically consisting of a wooden board, unplasticized PVC (uPVC), or non-corrosive sheet metal, many of the non-domestic fascias made of stone form an ornately carved or pieced together cornice, in which case the term fascia is rarely used. The word fascia derives from Latin fascia meaning "band, bandage, ribbon, swathe". The term is also used, although less commonly, for other such band-like surfaces like a wide, flat trim strip around a doorway, different and separate from the wall surface. The horizontal "fascia board" which caps the end of rafters outside a building may be used to hold the rain gutter. The finished surface below the fascia and rafters is called the soffit or eave. In classical architecture, the fascia is the plain, wide band (or bands) that make up the architrave section of the entablature, directly above the columns. The guttae or drip edge was mounted on the fascia in the Doric order, below the triglyph. The term fascia can also refer to the flat strip below the cymatium. See also Bargeboard, a board fastened to a projecting gable Eaves, a roof projection beyond the line of a building Soffit, the surface or surfaces, often structural under a roof projection. The term used in other structures such as for the underside of an arch. References Columns and entablature Architectural elements
Fascia (architecture)
[ "Technology", "Engineering" ]
346
[ "Building engineering", "Structural system", "Architectural elements", "Columns and entablature", "Components", "Architecture" ]
5,582,419
https://en.wikipedia.org/wiki/Haloacetic%20acids
Haloacetic acids or HAAs are carboxylic acids in which one or more halogen atoms take the place of hydrogen atoms in the methyl group of acetic acid. In a monohaloacetic acid, a single halogen replaces a hydrogen atom: for example, in bromoacetic acid. Further substitution of hydrogen atoms with halogens can occur, as in dichloroacetic acid and trichloroacetic acid. Haloacetic acids are a common contaminant in treated drinking water, particularly water subjected to chlorination. Contaminants in treated water Haloacetic acids (HAAs) are a common undesirable by-product of water treatment by chlorination. Exposure to such disinfection by-products in drinking water, at high levels over many years, has been associated with a number of health outcomes by epidemiological studies. HAAs can be formed following chlorination, ozonation, or chloramination of water, as chlorine from the water disinfection process can react with organic matter and small amounts of bromide present in water. HAAs are highly chemically stable, and therefore persist in water after formation. A study published in August 2006 found that total levels of HAAs in drinking water were not affected by storage or boiling, but that filtration was effective in decreasing levels. HAA5 In the United States, the EPA regulates the five HAAs most commonly found in drinking water, collectively referred to as "HAA5." These are: Chloroacetic acid (CH2ClCOOH) Dichloroacetic acid (CHCl2COOH) Trichloroacetic acid (CCl3COOH) Bromoacetic acid (CH2BrCOOH) Dibromoacetic acid (CHBr2COOH) The regulation limit for these five acids combined is 60 parts per billion (ppb). The sum of bromodichloroacetic acid, dibromochloroacetic acid and tribromoacetic acid concentrations is known as HAA3. HAA9 The designation "HAA9" refers to a larger group of HAAs, including all of the acids in HAA5, along with: Bromochloroacetic acid (CHBrClCOOH) Bromodichloroacetic acid (CBrCl2COOH) Dibromochloroacetic acid (CBr2ClCOOH) Tribromoacetic acid (CBr3COOH) The level of these four acids in drinking water is not regulated by the EPA. HAA6 refers to the sum of HAA5 and bromochloroacetic acid concentrations. Health effects Haloacetic acids are readily absorbed by the human body after being ingested, and can be absorbed slightly through the skin. At high concentrations, HAAs have irritating and corrosive properties; however, typical concentrations of HAAs found in drinking water are extremely low. HAAs are typically eliminated from the body through normal processes between 1 day and 2 weeks after ingestion, depending on the type of acid. Highly concentrated HAAs have been found to cause toxicity in various organs, including the liver and pancreas, in animal studies. This includes an increased risk of cancer, particularly of the liver and bladder. For this reason, the EPA considers a few HAAs (namely DCA and TCA) as potential human carcinogens. They may also cause developmental and reproductive problems during pregnancy. However, short-term adverse health effects are unlikely after ingesting dilute quantities of HAAs, and the long-term low-level risks associated with drinking treated water with residual HAAs are much lower than the risks of drinking untreated water. Chemistry Haloacetic acids have the general chemical formula CX3COOH, where X is hydrogen or halogen, and at least one X is a halogen. The inductive effect caused by the electronegative halogens often results in the higher acidity of these compounds by stabilising the negative charge of the conjugate base.
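To make the HAA5 figure concrete: it is simply the sum of the measured concentrations of the five regulated species, compared against the 60 ppb limit. The minimal Python sketch below illustrates that arithmetic; the sample concentrations are invented, and the variable names are not from any standard or library.

```python
# Hypothetical measured concentrations in micrograms per litre (equivalent to ppb),
# one entry per regulated HAA5 species listed above.
sample_ug_per_L = {
    "chloroacetic acid":     4.1,
    "dichloroacetic acid":  18.0,
    "trichloroacetic acid": 12.5,
    "bromoacetic acid":      2.3,
    "dibromoacetic acid":    1.8,
}

HAA5_LIMIT_PPB = 60.0  # combined US EPA limit for the five acids

total = sum(sample_ug_per_L.values())
print(f"HAA5 total: {total:.1f} ppb")
print("Within limit" if total <= HAA5_LIMIT_PPB else "Exceeds limit")
```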
See also Fluoroacetic acid (CH2FCOOH) Difluoroacetic acid (CHF2COOH) Trifluoroacetic acid (CF3COOH) Iodoacetic acid (CH2ICOOH) Diiodoacetic acid (CHI2COOH) Triiodoacetic acid (CI3COOH) Bromodifluoroacetic acid (CBrF2COOH) References Further reading ANSI National Institute of Health External links Haloacetic Acids (For Private Water and Health Regulated Public Water Supplies) "Drinking Water Contaminants – Standards and Regulations". US Environmental Protection Agency. Carboxylic acids Organohalides
Haloacetic acids
[ "Chemistry" ]
909
[ "Organic compounds", "Carboxylic acids", "Functional groups", "Organohalides" ]
5,582,812
https://en.wikipedia.org/wiki/Schmidt%20decomposition
In linear algebra, the Schmidt decomposition (named after its originator Erhard Schmidt) refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization and in state purification, and plasticity. Theorem Let and be Hilbert spaces of dimensions n and m respectively. Assume . For any vector in the tensor product , there exist orthonormal sets and such that , where the scalars are real, non-negative, and unique up to re-ordering. Proof The Schmidt decomposition is essentially a restatement of the singular value decomposition in a different context. Fix orthonormal bases and . We can identify an elementary tensor with the matrix , where is the transpose of . A general element of the tensor product can then be viewed as the n × m matrix By the singular value decomposition, there exist an n × n unitary U, m × m unitary V, and a positive semidefinite diagonal m × m matrix Σ such that Write where is n × m and we have Let be the m column vectors of , the column vectors of , and the diagonal elements of Σ. The previous expression is then Then which proves the claim. Some observations Some properties of the Schmidt decomposition are of physical interest. Spectrum of reduced states Consider a vector of the tensor product in the form of Schmidt decomposition Form the rank 1 matrix . Then the partial trace of , with respect to either system A or B, is a diagonal matrix whose non-zero diagonal elements are . In other words, the Schmidt decomposition shows that the reduced states of on either subsystem have the same spectrum. Schmidt rank and entanglement The strictly positive values in the Schmidt decomposition of are its Schmidt coefficients, or Schmidt numbers. The total number of Schmidt coefficients of , counted with multiplicity, is called its Schmidt rank. If can be expressed as a product then is called a separable state. Otherwise, is said to be an entangled state. From the Schmidt decomposition, we can see that is entangled if and only if has Schmidt rank strictly greater than 1. Therefore, two subsystems that partition a pure state are entangled if and only if their reduced states are mixed states. Von Neumann entropy A consequence of the above comments is that, for pure states, the von Neumann entropy of the reduced states is a well-defined measure of entanglement. For the von Neumann entropy of both reduced states of is , and this is zero if and only if is a product state (not entangled). Schmidt-rank vector The Schmidt rank is defined for bipartite systems, namely quantum states The concept of Schmidt rank can be extended to quantum systems made up of more than two subsystems. Consider the tripartite quantum system: There are three ways to reduce this to a bipartite system by performing the partial trace with respect to or Each of the systems obtained is a bipartite system and therefore can be characterized by one number (its Schmidt rank), respectively and . These numbers capture the "amount of entanglement" in the bipartite system when respectively A, B or C are discarded. For these reasons the tripartite system can be described by a vector, namely the Schmidt-rank vector Multipartite systems The concept of Schmidt-rank vector can be likewise extended to systems made up of more than three subsystems through the use of tensors. 
Example Take the tripartite quantum state This kind of system is made possible by encoding the value of a qudit into the orbital angular momentum (OAM) of a photon rather than its spin, since the latter can only take two values. The Schmidt-rank vector for this quantum state is . See also Singular value decomposition Purification of quantum state References Further reading Linear algebra Singular value decomposition Quantum information theory Articles containing proofs
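For the bipartite case, the reshape-then-SVD argument sketched in the proof above translates directly into a few lines of NumPy. The sketch below is illustrative only: the function name schmidt and the 1e-12 cutoff used to decide which singular values count as "strictly positive" are choices made here, not part of NumPy or of any quantum-information library.

```python
import numpy as np

def schmidt(psi, dim_a, dim_b):
    """Schmidt coefficients of a pure state of two subsystems of dimensions dim_a and dim_b.

    Following the proof above, the state vector is reshaped into a dim_a x dim_b
    matrix; its singular values are the Schmidt coefficients.
    """
    psi = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    coeffs = np.linalg.svd(psi, compute_uv=False)
    return coeffs[coeffs > 1e-12]   # keep only the strictly positive coefficients

# Maximally entangled two-qubit (Bell) state (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
c = schmidt(bell, 2, 2)
print("Schmidt coefficients:", c)                               # two values, each 1/sqrt(2)
print("Schmidt rank:", len(c))                                  # 2 -> entangled
print("Entanglement entropy:", -(c**2 * np.log2(c**2)).sum())   # 1.0 bit

# A product state |0> (x) |+> has Schmidt rank 1, i.e. it is separable.
prod = np.kron([1, 0], [1, 1]) / np.sqrt(2)
print("Product-state rank:", len(schmidt(prod, 2, 2)))           # 1
```

The output matches the criteria stated above: the Bell state has Schmidt rank 2 and one bit of entanglement entropy, while the product state has rank 1 and is separable.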
Schmidt decomposition
[ "Mathematics" ]
785
[ "Linear algebra", "Articles containing proofs", "Algebra" ]
5,582,856
https://en.wikipedia.org/wiki/Square%20%28unit%29
The square is an Imperial unit of area that is used in the construction industry in the United States and Canada, and was historically used in Australia. One square is equal to 100 square feet. Examples where the unit is used are roofing shingles, metal roofing, vinyl siding, and fibercement siding products. Some home builders use squares as a unit in floor plans to customers. When used in reference to material that is applied in an overlapped fashion, such as roof shingles or siding, a square refers to the amount of material needed to cover 100 square feet when installed according to a certain lap pattern. For example, for a shingle product designed to be installed so that each course has of exposure, a square would actually consist of more than 100 square feet of shingles in order to allow for overlapping of courses to yield the proper exposed surface. Construction in Australia no longer uses the square as a unit of measure, and it has been replaced by the square metre. The measurement was often used by estate agents to make the building sound larger as the measure includes the areas outside under the eaves, and so cannot be directly compared to the internal floor area. Residential buildings in the state of Victoria, Australia are sometimes still advertised in squares. Conversions 1 square equals 100 square feet See also List of unusual units of measurement References Architecture in Australia Units of area Customary units of measurement in the United States
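A minimal arithmetic sketch of the conversion, in Python. The constants follow from 1 square = 100 square feet and 1 foot = 0.3048 m; the function names and the 24-square roof in the example are simply illustrative.

```python
SQFT_PER_SQUARE = 100            # by definition
M2_PER_SQFT = 0.3048 ** 2        # 1 ft = 0.3048 m exactly, so 1 ft^2 = 0.09290304 m^2

def squares_to_sqft(squares):
    return squares * SQFT_PER_SQUARE

def squares_to_m2(squares):
    return squares_to_sqft(squares) * M2_PER_SQFT

# A hypothetical roof of 24 squares:
print(squares_to_sqft(24))              # 2400 square feet
print(round(squares_to_m2(24), 1))      # about 223.0 square metres
```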
Square (unit)
[ "Mathematics" ]
283
[ "Quantity", "Units of area", "Units of measurement" ]
5,583,008
https://en.wikipedia.org/wiki/Doctor%20of%20Architecture
Doctor of Architecture (D Arch) is a title accorded to students who have completed a degree program accredited by the National Architectural Accrediting Board. The only university currently offering a Doctor of Architecture degree in the USA is the University of Hawaii at Manoa. Background Most state registration boards in the United States require a degree from an accredited professional degree program as a prerequisite for licensure. The National Architectural Accrediting Board, the sole agency authorized to accredit U.S. professional degree programs in architecture, recognizes three types of degrees: the Bachelor of Architecture, the Master of Architecture, and the Doctor of Architecture. Doctor of Architecture and Master of Architecture degree programs may consist of a pre-professional undergraduate degree and a professional graduate degree that, when earned sequentially, constitute an accredited professional education. However, the pre-professional degree is not, by itself, recognized as an accredited degree. The only university in the United States offering a Doctor of Architecture degree is the University of Hawaii at Manoa. The Doctorate of Architecture was first accredited by the NAAB in 1999. Admission to the University of Hawaii program is open to students who have completed high school, transfer students who have completed some college-level work, and students who have completed Baccalaureate or advanced degrees. Completion of the degree requires 120 undergraduate credits and 90 graduate credits. References Architecture Architectural education
Doctor of Architecture
[ "Engineering" ]
276
[ "Architecture stubs", "Architectural education", "Architecture" ]
5,583,296
https://en.wikipedia.org/wiki/Metabolic%20control%20analysis
In biochemistry, metabolic control analysis (MCA) is a mathematical framework for describing metabolic, signaling, and genetic pathways. MCA quantifies how variables, such as fluxes and species concentrations, depend on network parameters. In particular, it is able to describe how network-dependent properties, called control coefficients, depend on local properties called elasticities or elasticity coefficients. MCA was originally developed to describe the control in metabolic pathways but was subsequently extended to describe signaling and genetic networks. MCA has sometimes also been referred to as Metabolic Control Theory, but this terminology was rather strongly opposed by Henrik Kacser, one of the founders. More recent work has shown that MCA can be mapped directly on to classical control theory and are as such equivalent. Biochemical systems theory (BST) is a similar formalism, though with rather different objectives. Both are evolutions of an earlier theoretical analysis by Joseph Higgins. Chemical reaction network theory is another theoretical framework that has overlap with both MCA and BST but is considerably more mathematically formal in its approach. Its emphasis is primarily on dynamic stability criteria and related theorems associated with mass-action networks. In more recent years the field has also developed a sensitivity analysis which is similar if not identical to MCA and BST. Control coefficients A control coefficient measures the relative steady state change in a system variable, e.g. pathway flux (J) or metabolite concentration (S), in response to a relative change in a parameter, e.g. enzyme activity or the steady-state rate () of step . The two main control coefficients are the flux and concentration control coefficients. Flux control coefficients are defined by and concentration control coefficients by . Summation theorems The flux control summation theorem was discovered independently by the Kacser/Burns group and the Heinrich/Rapoport group in the early 1970s and late 1960s. The flux control summation theorem implies that metabolic fluxes are systemic properties and that their control is shared by all reactions in the system. When a single reaction changes its control of the flux this is compensated by changes in the control of the same flux by all other reactions. Elasticity coefficients The elasticity coefficient measures the local response of an enzyme or other chemical reaction to changes in its environment. Such changes include factors such as substrates, products, or effector concentrations. For further information, please refer to the dedicated page at elasticity coefficients. . Connectivity theorems The connectivity theorems are specific relationships between elasticities and control coefficients. They are useful because they highlight the close relationship between the kinetic properties of individual reactions and the system properties of a pathway. Two basic sets of theorems exists, one for flux and another for concentrations. The concentration connectivity theorems are divided again depending on whether the system species is different from the local species . Response Coefficient Kacser and Burns introduced an additional coefficient that described how a biochemical pathway would respond the external environment. They termed this coefficient the response coefficient and designated it using the symbol R. 
The response coefficient is an important metric because it can be used to assess how much a nutrient or perhaps more important, how a drug can influence a pathway. This coefficient is therefore highly relevant to the pharmaceutical industry. The response coefficient is related to the core of metabolic control analysis via the response coefficient theorem, which is stated as follows: where is a chosen observable such as a flux or metabolite concentration, is the step that the external factor targets, is the control coefficient of the target steps, and is the elasticity of the target step with respect to the external factor . The key observation of this theorem is that an external factor such as a therapeutic drug, acts on the organism's phenotype via two influences: 1) How well the drug can affect the target itself through effective binding of the drug to the target protein and its effect on the protein activity. This effectiveness is described by the elasticity and 2) How well do modifications of the target influence the phenotype by transmission of the perturbation to the rest of the network. This is indicated by the control coefficient . A drug action, or any external factor, is most effective when both these factors are strong. For example, a drug might be very effective at changing the activity of its target protein, however if that perturbation in protein activity is unable to be transmitted to the final phenotype then the effectiveness of the drug is greatly diminished. If a drug or external factor, , targets multiple sites of action, for example sites, then the overall response in a phenotypic factor , is the sum of the individual responses: Control equations It is possible to combine the summation with the connectivity theorems to obtain closed expressions that relate the control coefficients to the elasticity coefficients. For example, consider the simplest non-trivial pathway: We assume that and are fixed boundary species so that the pathway can reach a steady state. Let the first step have a rate and the second step . Focusing on the flux control coefficients, we can write one summation and one connectivity theorem for this simple pathway: Using these two equations we can solve for the flux control coefficients to yield Using these equations we can look at some simple extreme behaviors. For example, let us assume that the first step is completely insensitive to its product (i.e. not reacting with it), S, then . In this case, the control coefficients reduce to That is all the control (or sensitivity) is on the first step. This situation represents the classic rate-limiting step that is frequently mentioned in textbooks. The flux through the pathway is completely dependent on the first step. Under these conditions, no other step in the pathway can affect the flux. The effect is however dependent on the complete insensitivity of the first step to its product. Such a situation is likely to be rare in real pathways. In fact the classic rate limiting step has almost never been observed experimentally. Instead, a range of limitingness is observed, with some steps having more limitingness (control) than others. We can also derive the concentration control coefficients for the simple two step pathway: Three step pathway Consider the simple three step pathway: where and are fixed boundary species, the control equations for this pathway can be derived in a similar manner to the simple two step pathway although it is somewhat more tedious. 
where D the denominator is given by Note that every term in the numerator appears in the denominator, this ensures that the flux control coefficient summation theorem is satisfied. Likewise the concentration control coefficients can also be derived, for And for Note that the denominators remain the same as before and behave as a normalizing factor. Derivation using perturbations Control equations can also be derived by considering the effect of perturbations on the system. Consider that reaction rates and are determined by two enzymes and respectively. Changing either enzyme will result in a change to the steady state level of and the steady state reaction rates . Consider a small change in of magnitude . This will have a number of effects, it will increase which in turn will increase which in turn will increase . Eventually the system will settle to a new steady state. We can describe these changes by focusing on the change in and . The change in , which we designate , came about as a result of the change . Because we are only considering small changes we can express the change in terms of using the relation where the derivative measures how responsive is to changes in . The derivative can be computed if we know the rate law for . For example, if we assume that the rate law is then the derivative is . We can also use a similar strategy to compute the change in as a result of the change . This time the change in is a result of two changes, the change in itself and the change in . We can express these changes by summing the two individual contributions: We have two equations, one describing the change in and the other in . Because we allowed the system to settle to a new steady state we can also state that the change in reaction rates must be the same (otherwise it wouldn't be at steady state). That is we can assert that . With this in mind we equate the two equations and write Solving for the ratio we obtain: In the limit, as we make the change smaller and smaller, the left-hand side converges to the derivative : We can go one step further and scale the derivatives to eliminate units. Multiplying both sides by and dividing both sides by yields the scaled derivatives: The scaled derivatives on the right-hand side are the elasticities, and the scaled left-hand term is the scaled sensitivity coefficient or concentration control coefficient, We can simplify this expression further. The reaction rate is usually a linear function of . For example, in the Briggs–Haldane equation, the reaction rate is given by . Differentiating this rate law with respect to and scaling yields . Using this result gives: A similar analysis can be done where is perturbed. In this case we obtain the sensitivity of with respect to : The above expressions measure how much enzymes and control the steady state concentration of intermediate . We can also consider how the steady state reaction rates and are affected by perturbations in and . This is often of importance to metabolic engineers who are interested in increasing rates of production. At steady state the reaction rates are often called the fluxes and abbreviated to and . For a linear pathway such as this example, both fluxes are equal at steady-state so that the flux through the pathway is simply referred to as . Expressing the change in flux as a result of a perturbation in and taking the limit as before we obtain The above expressions tell us how much enzymes and control the steady state flux. 
The key point here is that changes in enzyme concentration, or equivalently the enzyme activity, must be brought about by an external action. Derivation using the systems equation The control equations can also be derived in a more rigorous fashion using the systems equation: where is the stoichiometry matrix, is a vector of chemical species, and is a vector of parameters (or inputs) that can influence the system. In metabolic control analysis the key parameters are the enzyme concentrations. This approach was popularized by Heinrich, Rapoport, and Rapoport and Reder and Mazat. A detailed discussion of this approach can be found in Heinrich & Schuster and Hofmeyr. Properties of a linear pathway A linear biochemical pathway is a chain of enzyme-catalyzed reaction steps. The figure below shows a three step pathway, with intermediates, and . In order to sustain a steady-state, the boundary species and are fixed. At steady-state the rate of reaction is the same at each step. This means there is an overall flux from X_o to X_1. Linear pathways possess some well-known properties: Flux control is biased towards the first few steps of the pathway. Flux control shifts more to the first step as the equilibrium constants become large. Flux control is small at reactions close to equilibrium. Assuming reversibly, flux control at a given step is proportional to the product of the equilibrium constants. For example, flux control at the second step in a three step pathway is proportional to the product of the second and third equilibrium constants. In all cases, a rationale for these behaviors is given in terms of how elasticities transmit changes through a pathway. Metabolic control analysis software There are a number of software tools that can directly compute elasticities and control coefficients: COPASI (GUI) PySCeS (Python) SBW (GUI) libroadrunner (Python) VCell Relationship to Classical Control Theory Classical Control theory is a field of mathematics that deals with the control of dynamical systems in engineered processes and machines. In 2004 Brian Ingalls published a paper that showed that classical control theory and metabolic control analysis were identical. The only difference was that metabolic control analysis was confined to zero frequency responses when cast in the frequency domain whereas classical control theory imposes no such restriction. The other significant difference is that classical control theory has no notion of stoichiometry and conservation of mass which makes it more cumbersome to use but also means it fails to recognize the structural properties inherent in stoichiometric networks which provide useful biological insights. See also Branched pathways Biochemical systems theory Control coefficient (biochemistry) Flux (metabolism) Moiety conservation Rate-limiting step (biochemistry) References External links The Metabolic Control Analysis Web Biochemistry methods Metabolism Mathematical and theoretical biology Systems biology
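The two-step control equations discussed above reduce to simple closed-form expressions in the elasticities, which the short Python sketch below evaluates and checks against the summation and connectivity theorems. The function name, the example elasticity values, and the sign convention (the first step's elasticity with respect to S is negative because S is its product, while the second step's is positive because S is its substrate) are assumptions made for illustration; this is not part of any MCA software package.

```python
def two_step_control(eps1, eps2):
    """Control coefficients for the two-step pathway Xo -> S -> X1 discussed above.

    eps1, eps2: scaled elasticities of v1 and v2 with respect to the intermediate S.
    Returns the flux control coefficients of steps 1 and 2 and the concentration
    control coefficients of S with respect to steps 1 and 2.
    """
    d = eps2 - eps1                    # common denominator
    CJ1, CJ2 = eps2 / d, -eps1 / d     # flux control coefficients
    CS1, CS2 = 1.0 / d, -1.0 / d       # concentration control coefficients for S
    return CJ1, CJ2, CS1, CS2

# Example: product-inhibited first step (eps1 = -0.8), substrate-driven second step (eps2 = 0.5).
CJ1, CJ2, CS1, CS2 = two_step_control(-0.8, 0.5)
print(CJ1, CJ2)                    # about 0.385 and 0.615: control is shared
print(CJ1 + CJ2, CS1 + CS2)        # 1.0 and 0.0 (summation theorems)
print(CJ1 * -0.8 + CJ2 * 0.5)      # 0.0 (flux connectivity theorem)

# Limiting case from the text: a first step completely insensitive to its product (eps1 = 0)
print(two_step_control(0.0, 0.5)[:2])   # (1.0, 0.0) -> all flux control on step 1
```

The limiting case reproduces the classic "rate-limiting step" behaviour described above, while any nonzero product elasticity of the first step spreads the flux control across both steps.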
Metabolic control analysis
[ "Chemistry", "Mathematics", "Biology" ]
2,536
[ "Biochemistry methods", "Mathematical and theoretical biology", "Applied mathematics", "Cellular processes", "Biochemistry", "Metabolism", "Systems biology" ]