Schema: id (int64) · url (string) · text (string) · source (string) · categories (list) · token_count (int64) · subcategories (list)
2,465,352
https://en.wikipedia.org/wiki/Sulfur%20cycle
The sulfur cycle is a biogeochemical cycle in which sulfur moves between rocks, waterways and living systems. It is important in geology because it affects many minerals, and in life because sulfur is an essential element (one of CHNOPS), being a constituent of many proteins and cofactors; sulfur compounds can also be used as oxidants or reductants in microbial respiration. The global sulfur cycle involves the transformation of sulfur species through different oxidation states, which play an important role in both geological and biological processes. Steps of the sulfur cycle are:

Mineralization of organic sulfur into inorganic forms, such as hydrogen sulfide (H2S), elemental sulfur, and sulfide minerals.
Oxidation of hydrogen sulfide, sulfide, and elemental sulfur (S) to sulfate (SO42−).
Reduction of sulfate to sulfide.
Incorporation of sulfide into organic compounds (including metal-containing derivatives).
Disproportionation of sulfur compounds (elemental sulfur, sulfite, thiosulfate) into sulfate and hydrogen sulfide.

These are often termed as follows:

Assimilative sulfate reduction (see also sulfur assimilation), in which sulfate (SO42−) is reduced by plants, fungi and various prokaryotes. The oxidation states of sulfur are +6 in sulfate and −2 in R–SH.
Desulfurization, in which organic molecules containing sulfur are desulfurized, producing hydrogen sulfide gas (H2S, oxidation state = −2). An analogous process for organic nitrogen compounds is deamination.
Oxidation of hydrogen sulfide, which produces elemental sulfur (S8), oxidation state = 0. This reaction occurs in the photosynthetic green and purple sulfur bacteria and some chemolithotrophs. Often the elemental sulfur is stored as polysulfides.
Oxidation of elemental sulfur by sulfur oxidizers, which produces sulfate.
Dissimilative sulfur reduction, in which elemental sulfur can be reduced to hydrogen sulfide.
Dissimilative sulfate reduction, in which sulfate reducers generate hydrogen sulfide from sulfate.

Sulfur oxidation states

Sulfur can be found in several oxidation states in nature, mainly −2, −1, 0, +2 (apparent), +2.5 (apparent), +4, and +6. When two sulfur atoms are present in the same polyatomic oxyanion in an asymmetrical situation, i.e., each bound to different groups as in thiosulfate, the oxidation state calculated from the known oxidation states of the accompanying atoms (H = +1, and O = −2) can be an apparent average (+2 as in thiosulfate) and can even differ from an integer (+2.5 as in tetrathionate). This is the direct consequence of the different valence of each sulfur atom present in the oxyanion. The most common sulfur species participating in the sulfur cycle are listed hereafter, from the most reduced to the most oxidized:

S (−2): H2S, HS−, S2−; (CH3)2S
S (−1): disulfide, S22−, as in pyrite (FeS2)
S (0): native, or elemental, sulfur (S8)
S (+2): thiosulfate, S2O32− (here +2 is only an "apparent mean" oxidation state: (+5 − 1)/2 = +2, because the two sulfur atoms in thiosulfate are not at the same oxidation state; in fact they are at +5 and −1, respectively)
S (+4): SO2; sulfite (SO32−)
S (+6): sulfate, SO42− (H2SO4, CaSO4)

Sulfur sources and sinks

Sulfur is found in oxidation states ranging from +6 in SO42− to −2 in sulfides. Thus, elemental sulfur can either give or receive electrons depending on its environment. On the anoxic early Earth, most sulfur was present in minerals such as pyrite (FeS2). Over Earth's history, the amount of mobile sulfur increased through volcanic activity as well as weathering of the crust in an oxygenated atmosphere.
Earth's main sulfur sink is the oceans, where sulfate (SO42−) is available as an electron acceptor for microorganisms in anoxic waters. When sulfate is assimilated by organisms, it is reduced and converted to organic sulfur, which is an essential component of proteins. However, the biosphere does not act as a major sink for sulfur; instead, the majority of sulfur is found in seawater or sedimentary rocks, including pyrite-rich shales, evaporite rocks (anhydrite and baryte), and calcium and magnesium carbonates (i.e. carbonate-associated sulfate). The amount of sulfate in the oceans is controlled by three major processes:

input from rivers
sulfate reduction and sulfide re-oxidation on continental shelves and slopes
burial of anhydrite and pyrite in the oceanic crust.

The primary natural source of sulfur to the atmosphere is sea spray or windblown sulfur-rich dust, neither of which is long-lived in the atmosphere. In recent times, the large annual input of sulfur from the burning of coal and other fossil fuels has added a substantial amount of SO2, which acts as an air pollutant. In the geologic past, igneous intrusions into coal measures have caused large-scale burning of these measures, and consequent release of sulfur to the atmosphere. This has led to substantial disruption of the climate system, and it is one of the proposed causes of the Permian–Triassic extinction event. Dimethyl sulfide [(CH3)2S or DMS] is produced by the decomposition of dimethylsulfoniopropionate (DMSP) from dying phytoplankton cells in the ocean's photic zone and is the major biogenic gas emitted from the sea, where it is responsible for the distinctive "smell of the sea" along coastlines. DMS is the largest natural source of sulfur gas, but it still has a residence time of only about one day in the atmosphere, and a majority of it is redeposited in the oceans rather than making it to land. However, it is a significant factor in the climate system, as it is involved in the formation of clouds.

Biologically and thermochemically driven sulfate reduction

Through the dissimilatory sulfate reduction pathway, sulfate can be reduced either bacterially (bacterial sulfate reduction) or inorganically (thermochemical sulfate reduction). This pathway involves the reduction of sulfate by organic compounds to produce hydrogen sulfide, which occurs in both processes. The main products and reactants of bacterial sulfate reduction (BSR) and thermochemical sulfate reduction (TSR) are very similar. For both, various organic compounds and dissolved sulfate are the reactants, and the products or by-products are H2S, CO2, carbonates, elemental sulfur and metal sulfides. However, the reactive organic compounds differ between BSR and TSR because of their mutually exclusive temperature regimes: organic acids are the main organic reactants for BSR, and branched/n-alkanes are the main organic reactants for TSR. The inorganic reaction products in both BSR and TSR are H2S (HS−) and HCO3− (CO2). These processes occur because there are two very different thermal regimes in which sulfate is reduced, namely low-temperature and high-temperature environments. BSR usually occurs at lower temperatures of 0–80 °C, while TSR happens at much higher temperatures of around 100–140 °C. Temperatures for TSR are not as well defined; the lowest confirmed temperature is 127 °C, and the highest temperatures occur in settings around 160–180 °C.
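As a schematic illustration of the shared chemistry (a textbook generalization rather than a reaction from the source text), dissimilatory sulfate reduction with simple organic matter, written as CH2O, can be summarized as:

2 CH2O + SO42− → H2S + 2 HCO3−

In TSR the electron donors are instead thermally cracked hydrocarbons, but the overall mass balance is analogous.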
These two different regimes appear because at higher temperatures most sulfate-reducing microbes can no longer metabolize due to the denaturation of proteins or deactivation of enzymes, so TSR takes over. However, in hot sediments around hydrothermal vents BSR can happen at temperatures up to 110 °C. BSR and TSR occur at different depths. BSR takes place in low-temperature environments, which are shallower settings such as oil and gas fields. BSR can also take place in modern marine sedimentary environments such as stratified inland seas, continental shelves, organic-rich deltas, and hydrothermal sediments, which have intense microbial sulfate reduction because of the high concentration of dissolved sulfate in the seawater. Additionally, the high amounts of hydrogen sulfide found in oil and gas fields are thought to arise from the oxidation of petroleum hydrocarbons by sulfate. Such reactions are known to occur by microbial processes, but it is generally accepted that TSR is responsible for the bulk of these reactions, especially in deep or hot reservoirs. Thus, TSR occurs in deep reservoirs where the temperatures are much higher. BSR is geologically instantaneous in most geologic settings, while TSR occurs at rates on the order of hundreds of thousands of years. Although much slower than BSR, even TSR appears to be a geologically fairly fast process. BSR in shallow environments and TSR in deep reservoirs are key processes in the oceanic sulfur cycle. Approximately 10% of the total H2S gas is produced in BSR settings, whereas 90% is produced in TSR settings. If there is more than a few percent of H2S in any deep reservoir, then it is assumed that TSR has taken over, because thermal cracking of hydrocarbons does not provide more than 3% H2S. The amount of H2S is affected by several factors, such as the availability of organic reactants and sulfate and the presence/availability of base and transition metals.

Microbial sulfur oxidation

Sulfide oxidation is performed by both bacteria and archaea in a variety of environmental conditions. Aerobic sulfide oxidation is usually performed by autotrophs that use sulfide or elemental sulfur to fix carbon dioxide. The oxidation pathway includes the formation of various intermediate sulfur species, including elemental sulfur and thiosulfate. Under low oxygen concentrations, microbes oxidize sulfide to elemental sulfur. This elemental sulfur accumulates as sulfur globules, intracellularly or extracellularly, to be consumed under low sulfur concentrations. To ameliorate low oxidant concentrations (that is, to find an electron sink), sulfur oxidizers like cable bacteria form long chains that span the length between the oxic and sulfidic zones of coastal sediments. The bacteria present in the sulfide-rich zones oxidize the sulfide and transport the electrons through multiple periplasmic strings to the bacteria present in the oxygen-rich zone, where the oxygen is reduced. Anaerobic sulfide oxidation is performed by both phototrophs and chemotrophs. Green sulfur bacteria (GSB) and purple sulfur bacteria (PSB) perform anoxygenic photosynthesis fueled by sulfide oxidation. Some PSB can also perform aerobic sulfide oxidation in the presence of oxygen and can even grow chemoautotrophically under low light conditions. GSB lack this metabolic potential and have compensated by developing efficient light-harvesting systems.
PSB can be found in various environments ranging from hot sulfur springs and alkaline lakes to wastewater treatment plants. GSB populate stratified lakes with high reduced-sulfur concentrations and can even grow at hydrothermal vents by using infrared light to perform photosynthesis. Hydrothermal vents emit hydrogen sulfide, which supports the carbon fixation of chemolithotrophic bacteria that oxidize hydrogen sulfide with oxygen to produce elemental sulfur or sulfate. The chemical reactions are as follows:

CO2 + 4 H2S + O2 → CH2O + 4 S0 + 3 H2O
CO2 + H2S + O2 + H2O → CH2O + SO42− + 2 H+

In modern oceans, Thiomicrospira, Halothiobacillus, and Beggiatoa are primary sulfur-oxidizing bacteria, and they form chemosynthetic symbioses with animal hosts. The host provides metabolic substrates (e.g., CO2, O2, H2O) to the symbiont, while the symbiont generates organic carbon for sustaining the metabolic activities of the host. The produced sulfate usually combines with leached calcium ions to form gypsum, which can form widespread deposits on and near mid-ocean spreading centers. Sulfur-metabolizing microbes are often engaged in close symbiotic relationships with other microbes, and even animals. PSB and sulfate reducers form microbial aggregates called "pink berries" in the salt marshes of Massachusetts, within which sulfur cycling occurs through the direct exchange of sulfur species. The vestimentiferan tube worms that grow around hydrothermal vents lack a digestive tract but contain specialized organs called trophosomes within which autotrophic, sulfide-oxidizing bacteria are housed. The tube worms provide the bacteria with sulfide, and the bacteria share the fixed carbon with the worms.

δ34S

Although there are 25 known isotopes of sulfur, only four are stable and of geochemical importance. Of those four, two (32S, light, and 34S, heavy) comprise 99.22% of the sulfur on Earth. The vast majority (95.02%) of sulfur occurs as 32S, with only 4.21% occurring as 34S. The ratio of these two isotopes is fixed in the Solar System and has been since its formation. The bulk Earth sulfur isotopic ratio is thought to be the same as the 32S/34S ratio of 22.22 measured from the Canyon Diablo troilite (CDT), a meteorite. That ratio is accepted as the international standard and is therefore set at δ34S = 0.00‰. Deviation from 0.00 is expressed as δ34S, a ratio in per mil (‰). Positive values correlate with increased levels of 34S, whereas negative values correlate with greater 32S in a sample. Formation of sulfur minerals through non-biogenic processes does not substantially discriminate between the light and heavy isotopes; therefore sulfur isotope ratios in gypsum or barite should be the same as the overall isotope ratio in the water column at the time of their precipitation. Sulfate reduction through biologic activity strongly discriminates between the two isotopes because of the more rapid enzymatic reaction with 32S. Average present-day seawater values of δ34S are on the order of +21‰. Prior to the 2010s, it was thought that sulfate reduction could fractionate sulfur isotopes by up to 46‰, and that fractionations larger than 46‰ recorded in sediments must therefore be due to disproportionation of sulfur intermediates in the sediment. Since the 2010s, this view has changed: sulfate reduction alone can fractionate by up to 66‰. As the substrates for disproportionation are limited by the products of sulfate reduction, the isotopic effect of disproportionation should be less than 16‰ in most sedimentary settings.
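For reference, the δ34S notation described above follows the standard delta definition (the explicit formula is not spelled out in the text):

δ34S (‰) = [ (34S/32S)sample / (34S/32S)CDT − 1 ] × 1000

so, for example, modern seawater sulfate, with a 34S/32S ratio about 2.1% higher than the Canyon Diablo troilite standard, has δ34S ≈ +21‰.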
Throughout geologic history, the sulfur cycle and its isotopic ratios have coevolved with the biosphere, becoming overall more negative with increases in biologically driven sulfate reduction, but also showing substantial positive excursions. In general, positive excursions in the sulfur isotopes mean that there is an excess of pyrite deposition rather than oxidation of sulfide minerals exposed on land.

Marine sulfur cycle

The marine sulfur cycle is driven by sulfate reduction; the hydrogen sulfide produced is oxidized by microbes for energy or is oxidized abiotically. Dissimilatory sulfate reduction is driven by the degradation of buried organic matter and the anaerobic oxidation of methane (AOM), both of which produce carbon dioxide. At depths where sulfate is depleted, methanogenesis is prevalent. At the sulfate–methane transition zone (SMTZ), the upwelling methane produced by the methanogens is met by anaerobic methanotrophic archaea, which oxidize it using sulfate as an electron acceptor. More sulfate is present at the SMTZ than methane: a 4:1 sulfate:methane ratio is observed, and the excess sulfate is directed towards organic matter degradation. Syntrophic aggregates of sulfate reducers and methanotrophs have been discovered, and the underlying mechanisms observed include direct interspecies electron transfer using large multiheme complexes. Sulfide produced by sulfate reduction can be oxidized by iron minerals to form iron sulfides and pyrite, used by microbes as an electron donor, or used to sulfurize organic matter. Pyrite is formed through two pathways: the polysulfide pathway and the hydrogen sulfide pathway. The polysulfide pathway is dominant until the depletion of elemental sulfur, since elemental sulfur is necessary for the formation of polysulfides; then the hydrogen sulfide pathway takes over. Microbial sulfur oxidation utilizes multiple oxidants because the concentrations of the electron acceptors are depth-dependent. In the upper sediment layers, oxygen and nitrate are the preferred oxidants because of the high energy yield of the reaction; in the suboxic zones, iron and manganese take on this role. Sulfide oxidation yields various sulfur intermediates such as elemental sulfur, thiosulfate, sulfite, and sulfate. The sulfur intermediates formed during sulfide oxidation are unique to this process and thus are indicative of sulfide oxidation when found in environmental samples. Sulfur isotope fractionation of these intermediates and other sulfur species has been a useful tool in the study of sulfide oxidation. The sulfur cycle in marine environments has been well studied via the tool of sulfur isotope systematics expressed as δ34S. The modern global oceans hold a large sulfur reservoir, mainly occurring as sulfate with a δ34S value of +21‰. The overall input flux has a sulfur isotope composition of ~3‰. Riverine sulfate derived from the terrestrial weathering of sulfide minerals (δ34S = +6‰) is the primary input of sulfur to the oceans. Other sources are metamorphic and volcanic degassing and hydrothermal activity (δ34S = 0‰), which release reduced sulfur species (such as H2S and S0). There are two major outputs of sulfur from the oceans. The first sink is the burial of sulfate, either as marine evaporites (such as gypsum) or as carbonate-associated sulfate (CAS) (δ34S = +21‰). The second sulfur sink is pyrite burial in shelf sediments or deep seafloor sediments (δ34S = −20‰).
The total marine sulfur output flux matches the input flux, implying that the modern marine sulfur budget is at steady state. The residence time of sulfur in the modern global oceans is about 13,000,000 years. Sulfurization of organic matter is a significant sulfur pool, containing 35–80% of the reduced sulfur in marine sediments. These organo-sulfur molecules can also be desulfurized, releasing oxidized sulfur species like sulfite and sulfate. This desulfurization may allow degradation of the organic matter, and thus this process helps determine whether the organic matter is assimilated or buried. Sulfurization increases molecular weight and introduces a new moiety to the organic molecule, which may inhibit its recognition by the catabolic enzymes that degrade organic matter. Microbial capacity for desulfurization is reflected by the presence of sulfatase genes.

Evolution of the sulfur cycle

The isotopic composition of sedimentary sulfides provides primary information on the evolution of the sulfur cycle. The total inventory of sulfur compounds on the surface of the Earth represents the total outgassing of sulfur through geologic time. Rocks analyzed for sulfur content are generally organic-rich shales, meaning they are likely controlled by biogenic sulfate reduction. Average seawater curves are generated from evaporites deposited throughout geologic time because, since evaporites do not discriminate between the heavy and light sulfur isotopes, they should mimic the ocean composition at the time of deposition.

4.6 billion years ago (Ga), the Earth formed and had a theoretical δ34S value of 0. Since there was no biologic activity on the early Earth, there would have been no isotopic fractionation. All sulfur in the atmosphere would be released during volcanic eruptions. When the oceans condensed on Earth, the atmosphere was essentially swept clean of sulfur gases, owing to their high solubility in water. Throughout the majority of the Archean (4.6–2.5 Ga), most systems appear to have been sulfate-limited. Some small Archean evaporite deposits require that at least locally elevated concentrations of sulfate (possibly due to local volcanic activity) existed in order for them to become supersaturated and precipitate out of solution.

3.8–3.6 Ga marks the beginning of the exposed geologic record, because this is the age of the oldest rocks on Earth. Metasedimentary rocks from this time still have an isotopic value of 0 because the biosphere was not developed enough (possibly at all) to fractionate sulfur.

At 3.5 Ga, anoxygenic photosynthesis is established and provides a weak source of sulfate to the global ocean; with sulfate concentrations incredibly low, the δ34S is still basically 0. Shortly after, at 3.4 Ga, the first evidence for minimal fractionation in evaporitic sulfate in association with magmatically derived sulfides can be seen in the rock record. This fractionation shows possible evidence for anoxygenic phototrophic bacteria.

2.8 Ga marks the first evidence for oxygen production through photosynthesis. This is important because there cannot be sulfur oxidation without oxygen in the atmosphere. This exemplifies the coevolution of the oxygen and sulfur cycles, as well as of the biosphere.

2.7–2.5 Ga is the age of the oldest sedimentary rocks with a depleted δ34S, which provide the first compelling evidence for sulfate reduction.
At 2.3 Ga, sulfate increases to more than 1 mM; this increase in sulfate is coincident with the Great Oxygenation Event, when redox conditions on Earth's surface are thought by most workers to have shifted fundamentally from reducing to oxidizing. This shift would have led to a dramatic increase in sulfate weathering, which in turn would have increased the sulfate content of the oceans. The large isotopic fractionations that would likely be associated with bacterial sulfate reduction are produced for the first time. Although there was a distinct rise in seawater sulfate at this time, it was likely still only 5–15% of present-day levels.

At 1.8 Ga, banded iron formations (BIFs), which are common sedimentary rocks throughout the Archean and Paleoproterozoic, disappear; their disappearance marks a distinct shift in the chemistry of ocean water. BIFs have alternating layers of iron oxides and chert. BIFs only form if the water is allowed to become supersaturated in dissolved iron (Fe2+), meaning there cannot be free oxygen or sulfur in the water column, because they would form Fe3+ (rust) or pyrite and precipitate out of solution. Following this supersaturation, the water must become oxygenated in order for the ferric-rich bands to precipitate, but it must still be sulfur-poor, since otherwise pyrite would form instead of Fe3+. It has been hypothesized that BIFs formed during the initial evolution of photosynthetic organisms that had phases of population growth causing overproduction of oxygen. Due to this overproduction, they would poison themselves, causing a mass die-off, which would cut off the source of oxygen and produce a large amount of CO2 through the decomposition of their bodies, allowing another bacterial bloom. After 1.8 Ga, sulfate concentrations were sufficient to increase rates of sulfate reduction to greater than the delivery flux of iron to the oceans. Along with the disappearance of BIF, the end of the Paleoproterozoic also marks the first large-scale sedimentary exhalative deposits, showing a link between mineralization and a likely increase in the amount of sulfate in seawater. In the Paleoproterozoic, the sulfate in seawater had increased to an amount greater than in the Archean, but it was still lower than present-day values. The sulfate levels in the Proterozoic also act as proxies for atmospheric oxygen, because sulfate is produced mostly through weathering of the continents in the presence of oxygen. The low levels in the Proterozoic simply imply that levels of atmospheric oxygen fell between the abundances of the Phanerozoic and the deficiencies of the Archean.

750 million years ago (Ma), there is renewed deposition of BIF, which marks a significant change in ocean chemistry. This was likely due to snowball Earth episodes, during which the entire globe, including the oceans, was covered in a layer of ice, cutting off oxygenation. In the late Neoproterozoic, high carbon burial rates increased the atmospheric oxygen level to >10% of its present-day value. In the latest Neoproterozoic, another major oxidizing event occurred on Earth's surface that resulted in an oxic deep ocean and possibly allowed for the appearance of multicellular life. During the last 600 million years, seawater sulfate has generally varied between +10‰ and +30‰ in δ34S, with an average value close to that of today. Notably, changes in seawater δ34S occurred during extinction and climatic events during this time. Over shorter time scales (about ten million years), changes in the sulfur cycle are easier to observe and can be even better constrained with oxygen isotopes.
Oxygen is continually incorporated into the sulfur cycle when reduced sulfur is oxidized to sulfate, and it is released again when that sulfate is reduced once more. Since different sulfate sources within the ocean have distinct oxygen isotopic values, it may be possible to use oxygen to trace the sulfur cycle. Biological sulfate reduction preferentially selects the lighter oxygen isotopes, for the same reason that the lighter sulfur isotopes are preferred. By studying oxygen isotopes in ocean sediments over the last 10 million years, researchers were able to better constrain sulfur concentrations in seawater through that same interval. They found that the sea-level changes due to Pliocene and Pleistocene glacial cycles changed the area of continental shelves, which then disrupted sulfur processing and lowered the concentration of sulfate in the seawater. This was a drastic change as compared to preglacial times, before 2 million years ago.

The Great Oxidation Event and sulfur isotope mass-independent fractionation

The Great Oxygenation Event (GOE) is characterized by the disappearance of sulfur isotope mass-independent fractionation (MIF) in the sedimentary record at around 2.45 billion years ago (Ga). The MIF of sulfur isotopes (Δ33S) is defined as the deviation of the measured δ33S value from the δ33S value inferred from the measured δ34S value according to the mass-dependent fractionation law. The Great Oxidation Event represented a massive transition in the global sulfur cycle. Before the Great Oxidation Event, the sulfur cycle was heavily influenced by ultraviolet (UV) radiation and the associated photochemical reactions, which induced sulfur isotope mass-independent fractionation (Δ33S ≠ 0). Preservation of sulfur isotope mass-independent fractionation signals requires atmospheric O2 below 10⁻⁵ of the present atmospheric level (PAL). The disappearance of sulfur isotope mass-independent fractionation at ~2.45 Ga therefore indicates that atmospheric pO2 exceeded 10⁻⁵ of the present atmospheric level after the Great Oxygenation Event. Oxygen played an essential role in global sulfur cycling after the Great Oxygenation Event, for example through the oxidative weathering of sulfides. The burial of pyrite in sediments in turn contributes to the accumulation of free O2 in Earth's surface environment.

Economic importance

Sulfur is intimately involved in the production of fossil fuels and most metal deposits because it acts as an oxidizing or reducing agent. The vast majority of the major mineral deposits on Earth contain a substantial amount of sulfur, including, but not limited to, sedimentary exhalative deposits (SEDEX), carbonate-hosted lead-zinc ore deposits (Mississippi Valley-type, MVT), and porphyry copper deposits. Iron sulfides, galena, and sphalerite will form as by-products of hydrogen sulfide generation, as long as the respective transition or base metals are present or transported to a sulfate reduction site. If the system runs out of reactive hydrocarbons, economically viable elemental sulfur deposits may form. Sulfur also acts as a reducing agent in many natural gas reservoirs, and, generally, ore-forming fluids have a close relationship with ancient hydrocarbon seeps or vents. Important sources of sulfur in ore deposits are generally deep-seated, but they can also come from local country rocks, seawater, or marine evaporites. The presence or absence of sulfur is one of the limiting factors in the concentration of precious metals and their precipitation from solution.
pH, temperature and especially redox state determine whether sulfides will precipitate. Most sulfide brines will remain in solution until they reach reducing conditions, a higher pH, or lower temperatures. Ore fluids are generally linked to metal-rich waters that have been heated within a sedimentary basin under elevated thermal conditions, typically in extensional tectonic settings. The redox conditions of the basin lithologies exert an important control on the redox state of the metal-transporting fluids, and deposits can form from both oxidizing and reducing fluids. Metal-rich ore fluids tend to be, by necessity, comparatively sulfide-deficient, so a substantial portion of the sulfide must be supplied from another source at the site of mineralization. Bacterial reduction of seawater sulfate or a euxinic (anoxic and H2S-containing) water column is a necessary source of that sulfide. When present, the δ34S values of barite are generally consistent with a seawater sulfate source, suggesting barite formation by reaction between hydrothermal barium and sulfate in ambient seawater. Once fossil fuels or precious metals are discovered and either burned or milled, sulfur becomes a waste product that must be dealt with properly, or it can become a pollutant. The burning of fossil fuels has greatly increased the amount of sulfur in our present-day atmosphere. Sulfur acts as a pollutant and an economic resource at the same time.

Human impact

Human activities have a major effect on the global sulfur cycle. The burning of coal, natural gas, and other fossil fuels has greatly increased the amount of sulfur in the atmosphere and ocean and depleted the sedimentary-rock sink. Without human impact, sulfur would stay tied up in rocks for millions of years until it was uplifted through tectonic events and then released through erosion and weathering processes. Instead, it is being drilled, pumped and burned at a steadily increasing rate. Over the most polluted areas there has been a 30-fold increase in sulfate deposition. Although the sulfur curve shows shifts between net sulfur oxidation and net sulfur reduction in the geologic past, the magnitude of the current human impact is probably unprecedented in the geologic record. Human activities greatly increase the flux of sulfur to the atmosphere, some of which is transported globally. Humans are mining coal and extracting petroleum from the Earth's crust at a rate that mobilizes 150 × 10¹² g S/yr, which is more than double the rate of 100 years ago. The result of human impact on these processes is to increase the pool of oxidized sulfur (SO42−) in the global cycle, at the expense of the storage of reduced sulfur in the Earth's crust. Therefore, human activities do not cause a major change in the global pools of sulfur, but they do produce massive changes in the annual flux of sulfur through the atmosphere. When SO2 is emitted as an air pollutant, it forms sulfuric acid through reactions with water in the atmosphere. Once the acid is completely dissociated in water, the pH can drop to 4.3 or lower, causing damage to both man-made and natural systems. According to the EPA, acid rain is a broad term referring to a mixture of wet and dry deposition (deposited material) from the atmosphere containing higher-than-normal amounts of nitric and sulfuric acids. Distilled water (water without any dissolved constituents), which contains no carbon dioxide, has a neutral pH of 7.
Rain naturally has a slightly acidic pH of 5.6, because carbon dioxide and water in the air react together to form carbonic acid, a very weak acid. Around Washington, D.C., however, the average rain pH is between 4.2 and 4.4. Since pH is on a logarithmic scale, a drop of about 1 pH unit (the difference between normal rainwater and acid rain) has a dramatic effect on the strength of the acid: a drop from pH 5.6 to pH 4.3 corresponds to a roughly 20-fold increase in hydrogen-ion concentration, since 10^(5.6 − 4.3) ≈ 20. In the United States, roughly two thirds of all SO2 and one fourth of all NOx come from electric power generation that relies on burning fossil fuels, like coal. As sulfur is an important nutrient for plants, it is increasingly used as a component of fertilizers. Recently, sulfur deficiency has become widespread in many countries in Europe. Because of actions taken to limit acid rain, atmospheric inputs of sulfur continue to decrease; as a result, the deficit in sulfur input is likely to grow unless sulfur fertilizers are used.

See also

Sulfur metabolism
Microbial metabolism
Sulfide intrusion
Sulfate-reducing microorganisms
Redox
Sulfur

References

External links

Sulfur Oxidation from Soil Microbiology course at Virginia Tech University
Sulfur Cycle at Carnegie Mellon University
Lenntech

Metabolism Soil biology Soil chemistry Sulfur Biogeochemical cycle
Sulfur cycle
[ "Chemistry", "Biology" ]
6,901
[ "Biogeochemical cycle", "Soil chemistry", "Biogeochemistry", "Soil biology", "Cellular processes", "Biochemistry", "Metabolism" ]
2,465,480
https://en.wikipedia.org/wiki/Hydrogen%20cycle
The hydrogen cycle consists of hydrogen exchanges between biotic (living) and abiotic (non-living) sources and sinks of hydrogen-containing compounds. Hydrogen (H) is the most abundant element in the universe. On Earth, common H-containing inorganic molecules include water (H2O), hydrogen gas (H2), hydrogen sulfide (H2S), and ammonia (NH3). Many organic compounds also contain H atoms, such as hydrocarbons and organic matter. Given the ubiquity of hydrogen atoms in inorganic and organic chemical compounds, the hydrogen cycle is focused on molecular hydrogen, H2. Hydrogen gas can be created as a consequence of microbial metabolisms or of naturally occurring rock–water interactions. Other bacteria may then consume free H2, which may also be oxidised photochemically in the atmosphere or lost to space. Hydrogen is also thought to have been an important reactant in pre-biotic chemistry and the early evolution of life on Earth, and potentially elsewhere in the Solar System.

Abiotic cycles

Sources

Abiotic sources of hydrogen gas include water–rock and photochemical reactions. Exothermic serpentinization reactions between water and olivine minerals produce H2 in the marine or terrestrial subsurface. In the ocean, hydrothermal vents emit hot, chemically altered seawater fluids that can include abundant H2, depending on the temperature regime and host-rock composition. Molecular hydrogen can also be produced through photooxidation (via solar UV radiation) of some mineral species, such as siderite, in anoxic aqueous environments. This may have been an important process in the upper regions of early Earth's Archaean oceans.

Sinks

Because hydrogen is the lightest element, atmospheric H2 can readily be lost to space via Jeans escape, an irreversible process that drives Earth's net mass loss. Photolysis of heavier compounds not prone to escape, such as CH4 or H2O, can also liberate H2 in the upper atmosphere and contribute to this process. Another major sink of free atmospheric H2 is photochemical oxidation by hydroxyl radicals (•OH), which forms water. Anthropogenic sinks of H2 include synthetic fuel production through the Fischer–Tropsch reaction and artificial nitrogen fixation through the Haber–Bosch process to produce nitrogen fertilizers.

Biotic cycles

Many microbial metabolisms produce or consume H2.

Production

Hydrogen is produced by hydrogenase and nitrogenase enzymes in many microorganisms, some of which are being studied for their potential for biofuel production. These H2-metabolizing enzymes are found in all three domains of life, and over 30% of microbial taxa with known genomes contain hydrogenase genes. Fermentation produces H2 from organic matter as part of the anaerobic microbial food chain, via light-dependent or light-independent pathways.

Consumption

Biological soil uptake is the dominant sink of atmospheric H2. Both aerobic and anaerobic microbial metabolisms consume H2, oxidizing it in order to reduce other compounds during respiration. Aerobic H2 oxidation is known as the Knallgas reaction. Anaerobic H2 oxidation often occurs during interspecies hydrogen transfer, in which H2 produced during fermentation is transferred to another organism, which uses the H2 to reduce CO2 to CH4 or acetate, sulfate to H2S, or Fe3+ to Fe2+. Interspecies hydrogen transfer keeps H2 concentrations very low in most environments, because fermentation becomes less thermodynamically favorable as the partial pressure of H2 increases.
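The main reactions mentioned above can be written schematically; these are standard textbook forms (using the iron end-member of olivine, fayalite, for the serpentinization example) rather than equations given in the source:

3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2 (serpentinization-type oxidation of fayalite)
2 H2 + O2 → 2 H2O (aerobic "Knallgas" H2 oxidation)
4 H2 + CO2 → CH4 + 2 H2O (hydrogenotrophic methanogenesis)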
Relevance for the global climate

H2 can interfere with the removal of methane, a greenhouse gas, from the atmosphere. Typically, atmospheric CH4 is oxidized by hydroxyl radicals (•OH), but H2 can also react with •OH, reducing it to H2O:

CH4 + •OH → •CH3 + H2O
H2 + •OH → H• + H2O

Implications for astrobiology

Hydrothermal H2 may have played a major role in pre-biotic chemistry. Production of H2 by serpentinization supported the formation of the reactants proposed in the iron–sulfur world hypothesis for the origin of life. The subsequent evolution of hydrogenotrophic methanogenesis is hypothesized to have been one of the earliest metabolisms on Earth. Serpentinization can occur on any planetary body with a chondritic composition. The discovery of H2 on other ocean worlds, such as Enceladus, suggests that similar processes are ongoing elsewhere in the Solar System, and potentially in other planetary systems as well.

See also

Biogeochemical cycle
Carbon cycle
Hydrogen
Methane
Serpentinization
Interspecies hydrogen transfer
Fermentation
Hydrothermal vents
Water cycle
Ocean World Exploration Program

References

Biogeochemical cycle Metabolism Cycle Hydrogen biology
Hydrogen cycle
[ "Chemistry", "Biology" ]
1,010
[ "Biogeochemical cycle", "Biogeochemistry", "Cellular processes", "Biochemistry", "Metabolism" ]
2,466,027
https://en.wikipedia.org/wiki/Quantum%20yield
In particle physics, the quantum yield (denoted Φ) of a radiation-induced process is the number of times a specific event occurs per photon absorbed by the system.

Applications

Fluorescence spectroscopy

The fluorescence quantum yield is defined as the ratio of the number of photons emitted to the number of photons absorbed. Fluorescence quantum yield is measured on a scale from 0 to 1.0, but is often represented as a percentage. A quantum yield of 1.0 (100%) describes a process in which each photon absorbed results in a photon emitted. Substances with the largest quantum yields, such as rhodamines, display the brightest emissions; however, compounds with quantum yields of 0.10 are still considered quite fluorescent. Quantum yield is defined by the fraction of excited-state fluorophores that decay through fluorescence:

Φf = kf / (kf + knr)

where Φf is the fluorescence quantum yield, kf is the rate constant for radiative relaxation (fluorescence), and knr is the combined rate constant for all non-radiative relaxation processes. Non-radiative processes are excited-state decay mechanisms other than photon emission, which include Förster resonance energy transfer, internal conversion, external conversion, and intersystem crossing. Thus, the fluorescence quantum yield is affected if the rate of any non-radiative pathway changes. The quantum yield can be close to unity if the non-radiative decay rate is much smaller than the rate of radiative decay, that is, knr << kf.

Fluorescence quantum yields are measured by comparison to a standard of known quantum yield. The quinine salt quinine sulfate in a sulfuric acid solution was long regarded as the most common fluorescence standard; however, a recent study revealed that the fluorescence quantum yield of this solution is strongly affected by temperature, and it should no longer be used as the standard solution. Quinine in 0.1 M perchloric acid (Φ = 0.60) shows no temperature dependence up to 45 °C, and therefore it can be considered a reliable standard solution. Experimentally, relative fluorescence quantum yields can be determined by measuring the fluorescence of a fluorophore of known quantum yield with the same experimental parameters (excitation wavelength, slit widths, photomultiplier voltage etc.) as the substance in question. The quantum yield is then calculated by:

Φ = ΦR × (Int / IntR) × (AR / A) × (n² / nR²)

where Φ is the quantum yield, Int is the area under the emission peak (on a wavelength scale), A is the absorbance (also called "optical density") at the excitation wavelength, and n is the refractive index of the solvent. The subscript R denotes the respective values of the reference substance. The determination of fluorescence quantum yields in scattering media requires additional considerations and corrections.

FRET efficiency

Förster resonance energy transfer efficiency (E) is the quantum yield of the energy-transfer transition, i.e. the probability of an energy-transfer event occurring per donor excitation event:

E = kET / (kET + kf + Σ ki)

where kET is the rate of energy transfer, kf the radiative decay rate (fluorescence) of the donor, and the ki are the rates of the non-radiative relaxation pathways (e.g., internal conversion, intersystem crossing, external conversion etc.).

Solvent and environmental effects

A fluorophore's environment can impact its quantum yield, usually as a result of changes in the rates of non-radiative decay. Many fluorophores used to label macromolecules are sensitive to solvent polarity.
The class of 8-anilinonaphthalene-1-sulfonic acid (ANS) probe molecules are essentially non-fluorescent in aqueous solution, but become highly fluorescent in nonpolar solvents or when bound to proteins and membranes. The quantum yield of ANS is ~0.002 in aqueous buffer, but near 0.4 when bound to serum albumin.

Photochemical reactions

The quantum yield of a photochemical reaction describes the number of molecules undergoing a photochemical event per absorbed photon:

Φ = (number of molecules undergoing the event) / (number of photons absorbed)

In a chemical photodegradation process, when a molecule dissociates after absorbing a light quantum, the quantum yield is the number of destroyed molecules divided by the number of photons absorbed by the system. Since not all photons are absorbed productively, the typical quantum yield will be less than 1. Quantum yields greater than 1 are possible for photo-induced or radiation-induced chain reactions, in which a single photon may trigger a long chain of transformations. One example is the reaction of hydrogen with chlorine, in which as many as 10⁶ molecules of hydrogen chloride can be formed per quantum of blue light absorbed. Quantum yields of photochemical reactions can be highly dependent on the structure, proximity and concentration of the reactive chromophores, the type of solvent environment, and the wavelength of the incident light. Such effects can be studied with wavelength-tunable lasers, and the resulting quantum yield data can help predict the conversion and selectivity of photochemical reactions. In optical spectroscopy, the quantum yield is the probability that a given quantum state is formed from the system initially prepared in some other quantum state. For example, a singlet-to-triplet transition quantum yield is the fraction of molecules that, after being photoexcited into a singlet state, cross over to the triplet state.

Photosynthesis

Quantum yield is also used in modeling photosynthesis, for example as the number of CO2 molecules fixed (or O2 molecules released) per photon absorbed.

See also

Quantum dot
Quantum efficiency

References

Radiation Spectroscopy Photochemistry
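A minimal numerical sketch (Python) of the comparative quantum-yield determination described above; the function name and all measured values are hypothetical, and the formula is the standard relative (comparative) method:

def relative_quantum_yield(area, absorbance, n, area_ref, absorbance_ref, n_ref, phi_ref):
    # Phi = Phi_R * (Int / Int_R) * (A_R / A) * (n^2 / n_R^2)
    return phi_ref * (area / area_ref) * (absorbance_ref / absorbance) * (n ** 2 / n_ref ** 2)

# Hypothetical sample measured against quinine in 0.1 M perchloric acid (Phi = 0.60):
phi = relative_quantum_yield(area=1.8e6, absorbance=0.05, n=1.36,
                             area_ref=2.4e6, absorbance_ref=0.05, n_ref=1.33, phi_ref=0.60)
print(round(phi, 3))  # -> 0.47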
Quantum yield
[ "Physics", "Chemistry" ]
1,082
[ "Transport phenomena", "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Waves", "Radiation", "nan", "Spectroscopy" ]
14,217,924
https://en.wikipedia.org/wiki/UDP-N-acetylglucosamine%202-epimerase
In enzymology, a UDP-N-acetylglucosamine 2-epimerase is an enzyme that catalyzes the chemical reaction

UDP-N-acetyl-D-glucosamine ⇌ UDP-N-acetyl-D-mannosamine

Hence, this enzyme has one substrate, UDP-N-acetyl-D-glucosamine, and one product, UDP-N-acetyl-D-mannosamine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine 2-epimerase. Other names in common use include UDP-N-acetylglucosamine 2'-epimerase, uridine diphosphoacetylglucosamine 2'-epimerase, uridine diphospho-N-acetylglucosamine 2'-epimerase, and uridine diphosphate-N-acetylglucosamine-2'-epimerase. This enzyme participates in amino sugar metabolism. In microorganisms, this epimerase is involved in the synthesis of the capsule precursor UDP-ManNAcA. An inhibitor of the bacterial 2-epimerase, epimerox, has been described. Some of these enzymes are bifunctional: the UDP-N-acetylglucosamine 2-epimerase from rat liver displays both epimerase and kinase activity.

Structural studies

As of late 2007, four structures had been solved for this class of enzymes and deposited in the Protein Data Bank.

See also

UDP-N-acetylglucosamine 2-epimerase (hydrolysing)

Notes

References

Further reading

Protein families EC 5.1.3 Enzymes of known structure
UDP-N-acetylglucosamine 2-epimerase
[ "Biology" ]
427
[ "Protein families", "Protein classification" ]
14,221,581
https://en.wikipedia.org/wiki/Hosoya%20index
The Hosoya index, also known as the Z index, of a graph is the total number of matchings in it. The Hosoya index is always at least one, because the empty set of edges is counted as a matching for this purpose. Equivalently, the Hosoya index is the number of non-empty matchings plus one. The index is named after Haruo Hosoya. It is used as a topological index in chemical graph theory. Complete graphs have the largest Hosoya index for any given number of vertices; their Hosoya indices are the telephone numbers.

History

This graph invariant was introduced by Haruo Hosoya in 1971. It is often used in chemoinformatics for investigations of organic compounds. In his article "The Topological Index Z Before and After 1971," on the history of the notion and the associated inside stories, Hosoya writes that he introduced the Z index to report a good correlation between the boiling points of alkane isomers and their Z indices, based on his unpublished 1957 work carried out while he was an undergraduate student at the University of Tokyo.

Example

A linear alkane, for the purposes of the Hosoya index, may be represented as a path graph without any branching. A path with one vertex and no edges (corresponding to the methane molecule) has one (empty) matching, so its Hosoya index is one; a path with one edge (ethane) has two matchings (one with zero edges and one with one edge), so its Hosoya index is two. Propane (a length-two path) has three matchings: either of its edges, or the empty matching. n-Butane (a length-three path) has five matchings, distinguishing it from isobutane, which has four. More generally, a matching in a path with k edges either forms a matching in the first k − 1 edges, or it forms a matching in the first k − 2 edges together with the final edge of the path. This case analysis shows that the Hosoya indices of linear alkanes obey the recurrence governing the Fibonacci numbers, and because they also have the same base cases they must equal the Fibonacci numbers. The structure of the matchings in these graphs may be visualized using a Fibonacci cube.

The largest possible value of the Hosoya index on a graph with n vertices is attained by the complete graph Kn. The Hosoya indices of the complete graphs are the telephone numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, ... These numbers can be expressed by a summation formula involving factorials:

T(n) = Σ (k = 0 to ⌊n/2⌋) n! / (2^k · k! · (n − 2k)!)

Every graph that is not complete has a smaller Hosoya index than this upper bound.

Algorithms

The Hosoya index is #P-complete to compute, even for planar graphs. However, it may be calculated by evaluating the matching polynomial mG at the argument 1. Based on this evaluation, the calculation of the Hosoya index is fixed-parameter tractable for graphs of bounded treewidth and polynomial (with an exponent that depends linearly on the width) for graphs of bounded clique-width. The Hosoya index can be efficiently approximated to any desired constant approximation ratio using a fully polynomial randomized approximation scheme.

Notes

References

Roberto Todeschini, Viviana Consonni (2000) "Handbook of Molecular Descriptors", Wiley-VCH.

Graph invariants Mathematical chemistry Cheminformatics Matching (graph theory)
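A short Python sketch of computing the Hosoya index directly from the matching case analysis described in the Example section; the function name is my own, and the code uses the standard edge recurrence Z(G) = Z(G − e) + Z(G − {u, v}) (either an edge e = (u, v) is left unmatched, or it is used and both of its endpoints are removed):

def hosoya(edges):
    # Count all matchings, including the empty one.
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    # Matchings that do not use edge (u, v) ...
    without_e = hosoya(rest)
    # ... plus matchings that use it, so no remaining edge may touch u or v.
    without_uv = hosoya([e for e in rest if u not in e and v not in e])
    return without_e + without_uv

# Paths (linear alkanes) give Fibonacci numbers: n-butane = P4 -> 5
print(hosoya([(1, 2), (2, 3), (3, 4)]))  # 5

# The complete graph K4 gives the telephone number 10
print(hosoya([(i, j) for i in range(4) for j in range(i + 1, 4)]))  # 10

This brute-force recursion takes exponential time in general, consistent with the #P-completeness discussed in the Algorithms section.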
Hosoya index
[ "Chemistry", "Mathematics" ]
711
[ "Drug discovery", "Applied mathematics", "Graph theory", "Theoretical chemistry", "Mathematical chemistry", "Computational chemistry", "Molecular modelling", "Mathematical relations", "Cheminformatics", "nan", "Graph invariants", "Matching (graph theory)" ]
14,221,614
https://en.wikipedia.org/wiki/Topological%20index
In the fields of chemical graph theory, molecular topology, and mathematical chemistry, a topological index, also known as a connectivity index, is a type of molecular descriptor that is calculated based on the molecular graph of a chemical compound. Topological indices are numerical parameters of a graph which characterize its topology and are usually graph invariants. Topological indices are used, for example, in the development of quantitative structure–activity relationships (QSARs), in which the biological activity or other properties of molecules are correlated with their chemical structure.

Calculation

Topological descriptors are derived from hydrogen-suppressed molecular graphs, in which the atoms are represented by vertices and the bonds by edges. The connections between the atoms can be described by various types of topological matrices (e.g., distance or adjacency matrices), which can be mathematically manipulated so as to derive a single number, usually known as a graph invariant, graph-theoretical index, or topological index. As a result, topological indices can be described as two-dimensional descriptors that are easily calculated from molecular graphs, that do not depend on the way the graph is depicted or labeled, and that require no energy minimization of the chemical structure.

Types

The simplest topological indices do not recognize double bonds and atom types (C, N, O etc.), ignore hydrogen atoms ("hydrogen suppressed"), and are defined for connected undirected molecular graphs only. More sophisticated topological indices also take into account the hybridization state of each of the atoms contained in the molecule. The Hosoya index is the first topological index recognized in chemical graph theory, and it is often referred to as "the" topological index. Other examples include the Wiener index, Randić's molecular connectivity index, Balaban's J index, and the TAU descriptors. The extended topochemical atom (ETA) indices have been developed based on refinement of the TAU descriptors.

Global and local indices

The Hosoya index and the Wiener index are global (integral) indices describing an entire molecule; Bonchev and Polansky introduced local (differential) indices for every atom in a molecule. Other examples of local indices are modifications of the Hosoya index.

Discrimination capability and superindices

A topological index may have the same value for a subset of different molecular graphs, i.e. the index is unable to discriminate between the graphs of this subset. The discrimination capability is a very important characteristic of a topological index. To increase the discrimination capability, a few topological indices may be combined into a superindex.

Computational complexity

Computational complexity is another important characteristic of a topological index. The Wiener index, Randić's molecular connectivity index, and Balaban's J index can be calculated by fast algorithms, in contrast to the Hosoya index and its modifications, for which non-exponential algorithms are unknown.
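As a concrete illustration of deriving a single number from a topological matrix, here is a Python sketch computing the Wiener index (the sum of shortest-path distances over all vertex pairs) of a hydrogen-suppressed graph given as an adjacency map; the representation and function names are my own:

def wiener_index(adj):
    # adj: {vertex: set of neighbours} for a connected, hydrogen-suppressed graph
    def bfs_distances(src):
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        return dist
    # Each unordered pair is counted twice in the double sum, hence the division.
    return sum(d for u in adj for d in bfs_distances(u).values()) // 2

# n-Butane as a path on four carbons: W = 3*1 + 2*2 + 1*3 = 10
print(wiener_index({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))  # 10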
List of topological indices

Wiener index
Hosoya index
Hyper-Wiener index
Estrada index
Randić index
Zagreb indices
Szeged index
Padmakar–Ivan index
Gutman index
Sombor index
Harmonic index
Arithmetic index
Atom-bond connectivity index
Merrifield–Simmons index

Applications

QSAR

QSARs represent predictive models derived from the application of statistical tools correlating the biological activity (including desirable therapeutic effects and undesirable side effects) of chemicals (drugs/toxicants/environmental pollutants) with descriptors representative of molecular structure and/or properties. QSARs are being applied in many disciplines, for example risk assessment, toxicity prediction, and regulatory decisions, in addition to drug discovery and lead optimization. For example, ETA indices have been applied in the development of predictive QSAR/QSPR/QSTR models.

References

Further reading

External links

Software for calculating various topological indices: GraphTea.

Theoretical chemistry Mathematical chemistry Graph invariants Cheminformatics
Topological index
[ "Chemistry", "Mathematics" ]
763
[ "Drug discovery", "Applied mathematics", "Graph theory", "Molecular modelling", "Mathematical chemistry", "Theoretical chemistry", "Computational chemistry", "Mathematical relations", "Cheminformatics", "nan", "Graph invariants" ]
14,225,958
https://en.wikipedia.org/wiki/Q-function
In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, Q(x) is the probability that a normal (Gaussian) random variable will obtain a value larger than x standard deviations. Equivalently, Q(x) is the probability that a standard normal random variable takes a value larger than x. If X is a Gaussian random variable with mean μ and variance σ², then Z = (X − μ)/σ is standard normal and

P(X > x) = Q((x − μ)/σ).

Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally. Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.

Definition and basic properties

Formally, the Q-function is defined as

Q(x) = (1/√(2π)) ∫ₓ^∞ exp(−u²/2) du.

Thus,

Q(x) = 1 − Φ(x),

where Φ(x) is the cumulative distribution function of the standard normal Gaussian distribution. The Q-function can be expressed in terms of the error function, or the complementary error function, as

Q(x) = (1/2) erfc(x/√2) = (1/2)(1 − erf(x/√2)).

An alternative form of the Q-function, known as Craig's formula after its discoverer, is

Q(x) = (1/π) ∫₀^(π/2) exp(−x²/(2 sin²θ)) dθ.

This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. Craig's formula was later extended by Behnad (2020) to the Q-function of the sum of two non-negative variables.

Bounds and approximations

The Q-function is not an elementary function. However, for x > 0 it can be bounded above and below as

(x/(1 + x²)) φ(x) < Q(x) < φ(x)/x,

where φ(x) = (1/√(2π)) exp(−x²/2) is the density function of the standard normal distribution, and the bounds become increasingly tight for large x. Using the substitution v = u²/2, the upper bound is derived as follows:

Q(x) = ∫ₓ^∞ φ(u) du < ∫ₓ^∞ (u/x) φ(u) du = φ(x)/x.

Similarly, using φ′(u) = −u φ(u) and the quotient rule,

d/du [φ(u)/u] = −(1 + 1/u²) φ(u),

so integrating from x to ∞ gives φ(x)/x = ∫ₓ^∞ (1 + 1/u²) φ(u) du ≤ (1 + 1/x²) Q(x). Solving for Q(x) provides the lower bound. The geometric mean of the upper and lower bounds gives a suitable approximation for Q(x):

Q(x) ≈ φ(x)/√(1 + x²).

Tighter bounds and approximations of Q(x) can also be obtained by optimizing expressions of the form

Q(x) ≈ φ(x) / ((1 − a) x + a √(x² + b))

over the parameters a and b. For x ≥ 0, the best upper bound of this form has a maximum absolute relative error of 0.44%, the best approximation a maximum absolute relative error of 0.27%, and the best lower bound a maximum absolute relative error of 1.17%. The Chernoff bound of the Q-function is

Q(x) ≤ exp(−x²/2), for x > 0.

Improved exponential bounds and a pure exponential approximation are

Q(x) ≤ (1/4) exp(−x²) + (1/4) exp(−x²/2) ≤ (1/2) exp(−x²/2), for x > 0,
Q(x) ≈ (1/12) exp(−x²/2) + (1/4) exp(−2x²/3), for x > 0.

The above were generalized by Tanash & Riihonen (2020), who showed that Q(x) can be accurately approximated or bounded by a finite sum of exponentials,

Q(x) ≈ Σ (n = 1 to N) aₙ exp(−bₙ x²).

In particular, they presented a systematic methodology to solve for the numerical coefficients (aₙ, bₙ) that yield a minimax approximation or bound. With the example coefficients tabulated in the paper, the relative and absolute approximation errors can be made very small, and the coefficients for many variations of the exponential approximations and bounds have been released to open access as a comprehensive dataset.
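A quick numerical sketch (Python with SciPy; function names are my own) of the erfc expression, the inverse, the classical bounds, and the geometric-mean approximation given above:

import numpy as np
from scipy.special import erfc, erfcinv

def Q(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2))

def Q_inv(y):
    # Q^-1(y) = sqrt(2) * erfcinv(2*y)
    return np.sqrt(2) * erfcinv(2 * y)

def phi(x):
    # standard normal density
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

x = 2.0
print(x / (1 + x ** 2) * phi(x))    # lower bound:                  ~0.02160
print(Q(x))                         # exact value:                  ~0.02275
print(phi(x) / x)                   # upper bound:                  ~0.02700
print(phi(x) / np.sqrt(1 + x ** 2)) # geometric-mean approximation: ~0.02415
print(Q_inv(Q(x)))                  # recovers 2.0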
Another approximation of Q(x) for x ∈ [0, ∞) is given by Karagiannidis & Lioumpas (2007), who showed that, for an appropriate choice of its two fitted parameters, an exponential-type expression closely tracks Q(x). The absolute error between the approximation and Q(x) over the range of interest is minimized by numerical integration, and the parameter values found in this way give a good approximation for Q(x). Alternative coefficients are also available for this 'Karagiannidis–Lioumpas approximation' for tailoring accuracy to a specific application or transforming it into a tight bound. A tighter and more tractable approximation of Q(x) for positive arguments is given by López-Benítez & Casadevall (2011), based on a second-order exponential function of the form Q(x) ≈ exp(−a x² − b x − c). The fitting coefficients (a, b, c) can be optimized over any desired range of arguments in order to minimize the sum of square errors or to minimize the maximum absolute error. This approximation offers some benefits, such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of Q(x) is trivial and does not alter the algebraic form of the approximation).

Inverse Q

The inverse Q-function can be related to the inverse error functions:

Q⁻¹(y) = √2 erfc⁻¹(2y) = √2 erf⁻¹(1 − 2y)

The inverse Q-function finds application in digital communications. It is usually expressed in dB and generally called the Q-factor:

Q-factor = 20 log₁₀(Q⁻¹(y)) dB

where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal-to-noise ratio that yields a bit error rate equal to y.

Values

The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB and Mathematica. Selected values of the Q-function are tabulated in standard references.

Generalization to high dimensions

The Q-function can be generalized to higher dimensions as the tail probability of a random vector that follows the multivariate normal distribution with covariance Σ, with the threshold region defined by a positive vector and a positive constant. As in the one-dimensional case, there is no simple analytical formula for this Q-function. Nevertheless, it can be approximated arbitrarily well as the threshold becomes larger and larger.

References

Normal distribution Special functions Functions related to probability distributions Articles containing proofs
Q-function
[ "Mathematics" ]
1,101
[ "Articles containing proofs", "Special functions", "Combinatorics" ]
14,227,786
https://en.wikipedia.org/wiki/Transmissibility%20%28structural%20dynamics%29
Transmissibility, in the context of structural dynamics, can be defined as the ratio of the maximum force ($F_{T,\max}$) on the floor as a result of the vibration of a machine to the maximum machine force ($F_{0,\max}$): $$T = \frac{F_{T,\max}}{F_{0,\max}} = \sqrt{\frac{1+(2\zeta r)^2}{(1-r^2)^2+(2\zeta r)^2}},$$ where $\zeta$ is equal to the damping ratio and $r = \omega/\omega_n$ is equal to the frequency ratio (forcing frequency over undamped natural frequency). The factor $1/\sqrt{(1-r^2)^2+(2\zeta r)^2}$ appearing under the radical is the ratio of the dynamic to static amplitude. Further reading Vibration Control and Measurement Tech Tip: Spring & Dampers, Episode Four References Structural analysis
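A short numerical sketch of this formula (the function name is ours); it also illustrates the standard fact that vibration isolation (T < 1) requires a frequency ratio above √2:

def transmissibility(zeta, r):
    """Force transmissibility of a damped single-degree-of-freedom system."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return (num / den) ** 0.5

print(transmissibility(0.1, 0.5))  # ~1.33: below resonance, force is amplified
print(transmissibility(0.1, 3.0))  # ~0.15: well above resonance, isolation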
Transmissibility (structural dynamics)
[ "Engineering" ]
90
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
6,008,654
https://en.wikipedia.org/wiki/Honeycomb%20%28geometry%29
In geometry, a honeycomb is a space filling or close packing of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Its dimension can be clarified as n-honeycomb for a honeycomb of n-dimensional space. Honeycombs are usually constructed in ordinary Euclidean ("flat") space. They may also be constructed in non-Euclidean spaces, such as hyperbolic honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Classification There are infinitely many honeycombs, which have only been partially classified. The more regular ones have attracted the most interest, while a rich and varied assortment of others continue to be discovered. The simplest honeycombs to build are formed from stacked layers or slabs of prisms based on some tessellations of the plane. In particular, for every parallelepiped, copies can fill space, with the cubic honeycomb being special because it is the only regular honeycomb in ordinary (Euclidean) space. Another interesting family is the Hill tetrahedra and their generalizations, which can also tile the space. Uniform 3-honeycombs A 3-dimensional uniform honeycomb is a honeycomb in 3-space composed of uniform polyhedral cells, and having all vertices the same (i.e., the group of [isometries of 3-space that preserve the tiling] is transitive on vertices). There are 28 convex examples in Euclidean 3-space, also called the Archimedean honeycombs. A honeycomb is called regular if the group of isometries preserving the tiling acts transitively on flags, where a flag is a vertex lying on an edge lying on a face lying on a cell. Every regular honeycomb is automatically uniform. However, there is just one regular honeycomb in Euclidean 3-space, the cubic honeycomb. Two are quasiregular (made from two types of regular cells): The tetrahedral-octahedral honeycomb and gyrated tetrahedral-octahedral honeycombs are generated by 3 or 2 positions of slab layer of cells, each alternating tetrahedra and octahedra. An infinite number of unique honeycombs can be created by higher order of patterns of repeating these slab layers. Space-filling polyhedra A honeycomb having all cells identical within its symmetries is said to be cell-transitive or isochoric. In the 3-dimensional euclidean space, a cell of such a honeycomb is said to be a space-filling polyhedron. A necessary condition for a polyhedron to be a space-filling polyhedron is that its Dehn invariant must be zero, ruling out any of the Platonic solids other than the cube. Five space-filling convex polyhedra can tessellate 3-dimensional euclidean space using translations only. They are called parallelohedra: Cubic honeycomb (or variations: cuboid, rhombic hexahedron or parallelepiped) Hexagonal prismatic honeycomb Rhombic dodecahedral honeycomb Elongated dodecahedral honeycomb Bitruncated cubic honeycomb or truncated octahedra Other known examples of space-filling polyhedra include: The triangular prismatic honeycomb The gyrated triangular prismatic honeycomb The triakis truncated tetrahedral honeycomb. The Voronoi cells of the carbon atoms in diamond are this shape. The trapezo-rhombic dodecahedral honeycomb Isohedral tilings Other honeycombs with two or more polyhedra Sometimes, two or more different polyhedra may be combined to fill space. 
Besides many of the uniform honeycombs, another well known example is the Weaire–Phelan structure, adopted from the structure of clathrate hydrate crystals Non-convex 3-honeycombs Documented examples are rare. Two classes can be distinguished: Non-convex cells which pack without overlapping, analogous to tilings of concave polygons. These include a packing of the small stellated rhombic dodecahedron, as in the Yoshimoto Cube. Overlapping of cells whose positive and negative densities 'cancel out' to form a uniformly dense continuum, analogous to overlapping tilings of the plane. Hyperbolic honeycombs In 3-dimensional hyperbolic space, the dihedral angle of a polyhedron depends on its size. The regular hyperbolic honeycombs thus include two with four or five dodecahedra meeting at each edge; their dihedral angles thus are π/2 and 2π/5, both of which are less than that of a Euclidean dodecahedron. Apart from this effect, the hyperbolic honeycombs obey the same topological constraints as Euclidean honeycombs and polychora. The 4 compact and 11 paracompact regular hyperbolic honeycombs and many compact and paracompact uniform hyperbolic honeycombs have been enumerated. Duality of 3-honeycombs For every honeycomb there is a dual honeycomb, which may be obtained by exchanging: cells for vertices. faces for edges. These are just the rules for dualising four-dimensional 4-polytopes, except that the usual finite method of reciprocation about a concentric hypersphere can run into problems. The more regular honeycombs dualise neatly: The cubic honeycomb is self-dual. That of octahedra and tetrahedra is dual to that of rhombic dodecahedra. The slab honeycombs derived from uniform plane tilings are dual to each other in the same way that the tilings are. The duals of the remaining Archimedean honeycombs are all cell-transitive and have been described by Inchbald. Self-dual honeycombs Honeycombs can also be self-dual. All n-dimensional hypercubic honeycombs with Schläfli symbols {4,3n−2,4}, are self-dual. See also List of uniform tilings Regular honeycombs Infinite skew polyhedron Plesiohedron References Further reading Coxeter, H. S. M.: Regular Polytopes. Chapter 5: Polyhedra packing and space filling Critchlow, K.: Order in space. Pearce, P.: Structure in nature is a strategy for design. Goldberg, Michael Three Infinite Families of Tetrahedral Space-Fillers Journal of Combinatorial Theory A, 16, pp. 348–354, 1974. Goldberg, Michael The Space-filling Pentahedra II, Journal of Combinatorial Theory 17 (1974), 375–378. Goldberg, Michael Convex Polyhedral Space-Fillers of More than Twelve Faces. Geom. Dedicata 8, 491-500, 1979. External links Five space-filling polyhedra, Guy Inchbald, The Mathematical Gazette 80, November 1996, p.p. 466-475. Raumfueller (Space filling polyhedra) by T.E. Dorozinski Polytopes
Honeycomb (geometry)
[ "Physics", "Chemistry", "Materials_science" ]
1,483
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
6,010,542
https://en.wikipedia.org/wiki/Period%20mapping
In mathematics, in the field of algebraic geometry, the period mapping relates families of Kähler manifolds to families of Hodge structures. Ehresmann's theorem Let be a holomorphic submersive morphism. For a point b of B, we denote the fiber of f over b by Xb. Fix a point 0 in B. Ehresmann's theorem guarantees that there is a small open neighborhood U around 0 in which f becomes a fiber bundle. That is, is diffeomorphic to . In particular, the composite map is a diffeomorphism. This diffeomorphism is not unique because it depends on the choice of trivialization. The trivialization is constructed from smooth paths in U, and it can be shown that the homotopy class of the diffeomorphism depends only on the choice of a homotopy class of paths from b to 0. In particular, if U is contractible, there is a well-defined diffeomorphism up to homotopy. The diffeomorphism from Xb to X0 induces an isomorphism of cohomology groups and since homotopic maps induce identical maps on cohomology, this isomorphism depends only on the homotopy class of the path from b to 0. Local unpolarized period mappings Assume that f is proper and that X0 is a Kähler variety. The Kähler condition is open, so after possibly shrinking U, Xb is compact and Kähler for all b in U. After shrinking U further we may assume that it is contractible. Then there is a well-defined isomorphism between the cohomology groups of X0 and Xb. These isomorphisms of cohomology groups will not in general preserve the Hodge structures of X0 and Xb because they are induced by diffeomorphisms, not biholomorphisms. Let denote the pth step of the Hodge filtration. The Hodge numbers of Xb are the same as those of X0, so the number is independent of b. The period map is the map where F is the flag variety of chains of subspaces of dimensions bp,k for all p, that sends Because Xb is a Kähler manifold, the Hodge filtration satisfies the Hodge–Riemann bilinear relations. These imply that Not all flags of subspaces satisfy this condition. The subset of the flag variety satisfying this condition is called the unpolarized local period domain and is denoted . is an open subset of the flag variety F. Local polarized period mappings Assume now not just that each Xb is Kähler, but that there is a Kähler class that varies holomorphically in b. In other words, assume there is a class ω in such that for every b, the restriction ωb of ω to Xb is a Kähler class. ωb determines a bilinear form Q on Hk(Xb, C) by the rule This form varies holomorphically in b, and consequently the image of the period mapping satisfies additional constraints which again come from the Hodge–Riemann bilinear relations. These are: Orthogonality: is orthogonal to with respect to Q. Positive definiteness: For all , the restriction of to the primitive classes of type is positive definite. The polarized local period domain is the subset of the unpolarized local period domain whose flags satisfy these additional conditions. The first condition is a closed condition, and the second is an open condition, and consequently the polarized local period domain is a locally closed subset of the unpolarized local period domain and of the flag variety F. The period mapping is defined in the same way as before. The polarized local period domain and the polarized period mapping are still denoted and , respectively. Global period mappings Focusing only on local period mappings ignores the information present in the topology of the base space B. 
The global period mappings are constructed so that this information is still available. The difficulty in constructing global period mappings comes from the monodromy of B: There is no longer a unique homotopy class of diffeomorphisms relating the fibers Xb and X0. Instead, distinct homotopy classes of paths in B induce possibly distinct homotopy classes of diffeomorphisms and therefore possibly distinct isomorphisms of cohomology groups. Consequently there is no longer a well-defined flag for each fiber. Instead, the flag is defined only up to the action of the fundamental group. In the unpolarized case, define the monodromy group Γ to be the subgroup of GL(Hk(X0, Z)) consisting of all automorphisms induced by a homotopy class of curves in B as above. The flag variety is a quotient of a Lie group by a parabolic subgroup, and the monodromy group is an arithmetic subgroup of the Lie group. The global unpolarized period domain is the quotient of the local unpolarized period domain by the action of Γ (it is thus a collection of double cosets). In the polarized case, the elements of the monodromy group are required to also preserve the bilinear form Q, and the global polarized period domain is constructed as a quotient by Γ in the same way. In both cases, the period mapping takes a point of B to the class of the Hodge filtration on Xb. Properties Griffiths proved that the period map is holomorphic. His transversality theorem limits the range of the period map. Period matrices The Hodge filtration can be expressed in coordinates using period matrices. Choose a basis δ1, ..., δr for the torsion-free part of the kth integral homology group . Fix p and q with , and choose a basis ω1, ..., ωs for the harmonic forms of type . The period matrix of X0 with respect to these bases is the matrix The entries of the period matrix depend on the choice of basis and on the complex structure. The δs can be varied by a choice of a matrix Λ in , and the ωs can be varied by a choice of a matrix A in . A period matrix is equivalent to Ω if it can be written as AΩΛ for some choice of A and Λ. The case of elliptic curves Consider the family of elliptic curves where λ is any complex number not equal to zero or one. The Hodge filtration on the first cohomology group of a curve has two steps, F0 and F1. However, F0 is the entire cohomology group, so the only interesting term of the filtration is F1, which is H1,0, the space of holomorphic harmonic . H1,0 is one-dimensional because the curve is elliptic, and for all λ, it is spanned by the differential form . To find explicit representatives of the homology group of the curve, note that the curve can be represented as the graph of the multivalued function on the Riemann sphere. The branch points of this function are at zero, one, λ, and infinity. Make two branch cuts, one running from zero to one and the other running from λ to infinity. These exhaust the branch points of the function, so they cut the multi-valued function into two single-valued sheets. Fix a small . On one of these sheets, trace the curve . For ε sufficiently small, this curve surrounds the branch cut and does not meet the branch cut . Now trace another curve δ(t) that begins in one sheet as for and continues in the other sheet as for . Each half of this curve connects the points 1 and λ on the two sheets of the Riemann surface. From the Seifert–van Kampen theorem, the homology group of the curve is free of rank two. 
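Assuming the standard Legendre presentation of this family (curve $y^2 = x(x-1)(x-\lambda)$, spanning holomorphic form $\omega = dx/y$, and the cycles $\gamma$ and $\delta$ constructed above), the period matrix discussed next can be written as $$\Omega = \begin{pmatrix} \int_\gamma \omega & \int_\delta \omega \end{pmatrix} = \begin{pmatrix} A & B \end{pmatrix}, \qquad \tau = \frac{B}{A}, \quad \operatorname{Im}\tau > 0.$$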
Because the curves meet in a single point, , neither of their homology classes is a proper multiple of some other homology class, and hence they form a basis of H1. The period matrix for this family is therefore The first entry of this matrix we will abbreviate as A, and the second as B. The bilinear form Q is positive definite because locally, we can always write ω as f dz, hence By Poincaré duality, γ and δ correspond to cohomology classes γ* and δ* which together are a basis for . It follows that ω can be written as a linear combination of γ* and δ*. The coefficients are given by evaluating ω with respect to the dual basis elements γ and δ: When we rewrite the positive definiteness of Q in these terms, we have Since γ* and δ* are integral, they do not change under conjugation. Furthermore, since γ and δ intersect in a single point and a single point is a generator of H0, the cup product of γ* and δ* is the fundamental class of X0. Consequently this integral equals . The integral is strictly positive, so neither A nor B can be zero. After rescaling ω, we may assume that the period matrix equals for some complex number τ with strictly positive imaginary part. This removes the ambiguity coming from the action. The action of is then the usual action of the modular group on the upper half-plane. Consequently, the period domain is the Riemann sphere. This is the usual parameterization of an elliptic curve as a lattice. See also Hodge theory Jacobian variety Modular group References Calculations Explicit calculation of period matrices for curves of the form - includes examples Explicit calculation of period matrices for hyperelliptic curves - includes examples Algorithm for computing periods of hypersurfaces General Voisin, Hodge Theory and Complex Algebraic Geometry I, II Applications Shimura curves within the locus of hyperelliptic Jacobians in genus three External links Period mapping in the Encyclopedia of Mathematics Hodge theory Elliptic curves
Period mapping
[ "Engineering" ]
2,053
[ "Tensors", "Differential forms", "Hodge theory" ]
6,011,211
https://en.wikipedia.org/wiki/Bioreactor%20landfill
Landfills are the primary method of waste disposal in many parts of the world, including the United States and Canada. Bioreactor landfills are expected to reduce the amount of leachate and the costs associated with its management, to increase the rate of production of methane (natural gas) for commercial purposes, and to reduce the amount of land required for landfills. Bioreactor landfills are monitored, and their oxygen and moisture levels are manipulated, to increase the rate of decomposition by microbial activity. Traditional landfills and associated problems Landfills are the oldest known method of waste disposal. Waste is buried in large dug-out pits (unless naturally occurring locations are available) and covered. Bacteria and archaea decompose the waste over several decades, producing several by-products of importance, including methane gas (natural gas), leachate, and volatile organic compounds (such as hydrogen sulfide (H2S), N2O2, etc.). Methane gas, a strong greenhouse gas, can build up inside the landfill, leading to an explosion unless released from the cell. Leachate is the fluid containing metabolic products of decomposition, and it carries various types of toxins and dissolved metallic ions. If leachate escapes into the ground water it can cause health problems in both animals and plants. The volatile organic compounds (VOCs) are associated with causing smog and acid rain. With the increasing amount of waste produced, appropriate places to safely store it have become difficult to find. Working of a bioreactor landfill There are three types of bioreactor: aerobic, anaerobic and a hybrid (using both aerobic and anaerobic methods). All three mechanisms involve the reintroduction of collected leachate, supplemented with water, to maintain moisture levels in the landfill. The micro-organisms responsible for decomposition are thus stimulated to decompose at an increased rate, with an attempt to minimise harmful emissions. In aerobic bioreactors air is pumped into the landfill using either a vertical or horizontal system of pipes. In the aerobic environment decomposition is accelerated, and the amount of VOCs, the toxicity of leachate and the production of methane are minimised. In anaerobic bioreactors with circulating leachate, the landfill produces methane at a much faster rate, and earlier, than traditional landfills. The high concentration and quantity of methane allows it to be used more efficiently for commercial purposes while reducing the time that the landfill needs to be monitored for methane production. Hybrid bioreactors subject the upper portions of the landfill to aerobic-anaerobic cycles to increase the decomposition rate while methane is produced by the lower portions of the landfill. Bioreactor landfills produce lower quantities of VOCs than traditional landfills, with the exception of H2S: bioreactor landfills produce higher quantities of H2S, and the exact biochemical pathway responsible for this increase is not well studied. Advantages of bioreactor landfills Bioreactor landfills accelerate the process of decomposition. As decomposition progresses, the mass of biodegradable components in the landfill declines, creating more space for dumping garbage. Bioreactor landfills are expected to increase this rate of decomposition and save up to 30% of the space needed for landfills. With increasing amounts of solid waste produced every year and the scarcity of landfill space, bioreactor landfills can thus provide a significant way of maximising landfill space.
This is not just cost effective, but since less land is needed for the landfills, this is also better for the environment. Furthermore, most landfills are monitored for at least 3 to 4 decades to ensure that no leachate or landfill gases escape into the community surrounding the landfill site. In contrast, bioreactor landfill are expected to decompose to level that does not require monitoring in less than a decade. Hence, the landfill land can be used for other purposes such as reforestation or parks, depending on the location at an earlier date. In addition, re-using leachate to moisturise the landfill filters it. Thus, less time and energy is required to process the leachate, making the process more efficient. Disadvantages of bioreactor landfills Bioreactor landfills are a relatively new technology. For the newly developed bioreactor landfills initial monitoring costs are higher to ensure that everything important is discovered and properly controlled. This includes gases, odours and seepage of leachate into the ground surface. The increased moisture content of bioreactor landfill may reduce the structural stability of the landfill by increasing the pore water pressure within the waste mass. Since the target of bioreactor landfills is to maintain a high moisture content, gas collection systems can be affected by the increased moisture content of the waste. Implementation of bioreactor landfills Bioreactor landfills, being a novel technology, are still in development and being studied on a laboratory scale. Pilot projects for bioreactor landfills are showing promise and more are being experimented with in different parts of the world. Despite the potential benefits of bioreactor landfills there are no standardised and approved designs with guidelines and operational procedures. Following is a list of bioreactor landfill projects which are being used to collect data for forming these needed guidelines and procedures: United States California Yolo County Florida Alachua County Southeast Landfill Highlands County New River Regional Landfill, Raiford Polk County Landfill, Winter Haven Kentucky Outer Loop Landfill Michigan Saint Clair County Mississippi Plantation Oaks Bioreactor Demonstration Project, Sibley Missouri Columbia New Jersey ACUA's Haneman Environmental Park, Egg Harbor Township North Carolina Buncombe County Landfill Project Virginia Maplewood Landfill and King George County Landfills Virginia Landfill Project XL Demonstration Project Canada Sainte-Sophie Bioreactor demonstration Project, Quebec Australia New South Wales WoodLawn, Goulburn Queensland Ti Tree Bioenergy, Ipswich See also Daily cover Landfill liner Landfill mining References External links Toward a Twenty-first Century Landfill - Yolo County's Bioreactor Research Project web page. Bioreactorlandfill.org Landfill Biochemical engineering Bioreactors
Bioreactor landfill
[ "Chemistry", "Engineering", "Biology" ]
1,199
[ "Bioreactors", "Biological engineering", "Chemical reactors", "Chemical engineering", "Biochemical engineering", "Microbiology equipment", "Biochemistry" ]
6,011,769
https://en.wikipedia.org/wiki/Quantum%20mutual%20information
In quantum information theory, quantum mutual information, or von Neumann mutual information, after John von Neumann, is a measure of correlation between subsystems of a quantum state. It is the quantum mechanical analog of Shannon mutual information. Motivation For simplicity, it will be assumed that all objects in the article are finite-dimensional. The definition of quantum mutual entropy is motivated by the classical case. For a probability distribution of two variables p(x, y), the two marginal distributions are $$p(x) = \sum_y p(x,y), \qquad p(y) = \sum_x p(x,y).$$ The classical mutual information I(X:Y) is defined by $$I(X:Y) = S(p(x)) + S(p(y)) - S(p(x,y)),$$ where S(q) denotes the Shannon entropy of the probability distribution q. One can calculate directly $$S(p(x)) + S(p(y)) = -\sum_{x,y} p(x,y)\log p(x) - \sum_{x,y} p(x,y)\log p(y) = -\sum_{x,y} p(x,y)\log p(x)p(y).$$ So the mutual information is $$I(X:Y) = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)},$$ where the logarithm is taken in base 2 to obtain the mutual information in bits. But this is precisely the relative entropy between p(x, y) and p(x)p(y). In other words, if we assume the two variables x and y to be uncorrelated, mutual information is the discrepancy in uncertainty resulting from this (possibly erroneous) assumption. It follows from the property of relative entropy that I(X:Y) ≥ 0 and equality holds if and only if p(x, y) = p(x)p(y). Definition The quantum mechanical counterparts of classical probability distributions are modeled with density matrices. Consider a quantum system that can be divided into two parts, A and B, such that independent measurements can be made on either part. The state space of the entire quantum system is then the tensor product of the spaces for the two parts: $H_{AB} = H_A \otimes H_B$. Let ρAB be a density matrix acting on states in HAB. The von Neumann entropy of a density matrix, $S(\rho) = -\operatorname{Tr}(\rho\log\rho)$, is the quantum mechanical analog of the Shannon entropy. For a probability distribution p(x,y), the marginal distributions are obtained by integrating away the variables x or y. The corresponding operation for density matrices is the partial trace. So one can assign to ρ a state on the subsystem A by $$\rho_A = \operatorname{Tr}_B\,\rho_{AB},$$ where TrB is the partial trace with respect to system B. This is the reduced state of ρAB on system A. The reduced von Neumann entropy of ρAB with respect to system A is $S(\rho_A)$; S(ρB) is defined in the same way. It can now be seen that the definition of quantum mutual information, corresponding to the classical definition, should be as follows: $$I(A:B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}).$$ Quantum mutual information can be interpreted the same way as in the classical case: it can be shown that $$I(A:B) = S(\rho_{AB}\,\|\,\rho_A\otimes\rho_B),$$ where $S(\cdot\|\cdot)$ denotes quantum relative entropy. Note that there is an alternative generalization of mutual information to the quantum case. The difference between the two for a given state is called quantum discord, a measure for the quantum correlations of the state in question. Properties When the state is pure (and thus $S(\rho_{AB}) = 0$), the mutual information is twice the entanglement entropy of the state: $$I(A:B) = 2S(\rho_A) = 2S(\rho_B).$$ A positive quantum mutual information is not necessarily indicative of entanglement, however. A classical mixture of separable states will always have zero entanglement, but can have nonzero QMI, such as $$\rho_{AB} = \tfrac{1}{2}\left(|00\rangle\langle 00| + |11\rangle\langle 11|\right).$$ In this case, the state is merely a classically correlated state. References Quantum mechanical entropy Quantum information theory
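As a numerical sketch of the last example (the array layout and helper name are ours), the quantum mutual information of this classically correlated state comes out to exactly one bit even though its entanglement is zero:

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log2 lambda_i, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

# Classically correlated two-qubit state: (|00><00| + |11><11|) / 2
rho_ab = np.zeros((4, 4))
rho_ab[0, 0] = rho_ab[3, 3] = 0.5

# Reduced states (partial traces, written out by hand for this diagonal state)
rho_a = np.diag([0.5, 0.5])
rho_b = np.diag([0.5, 0.5])

qmi = von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)
print(qmi)  # 1.0 bit, despite zero entanglement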
Quantum mutual information
[ "Physics" ]
641
[ "Quantum mechanical entropy", "Entropy", "Physical quantities" ]
6,013,248
https://en.wikipedia.org/wiki/Regular%20homotopy
In the mathematical field of topology, a regular homotopy refers to a special kind of homotopy between immersions of one manifold in another. The homotopy must be a 1-parameter family of immersions. Similar to homotopy classes, one defines two immersions to be in the same regular homotopy class if there exists a regular homotopy between them. Regular homotopy for immersions is similar to isotopy of embeddings: they are both restricted types of homotopies. Stated another way, two continuous functions are homotopic if they represent points in the same path-components of the mapping space , given the compact-open topology. The space of immersions is the subspace of consisting of immersions, denoted by . Two immersions are regularly homotopic if they represent points in the same path-component of . Examples Any two knots in 3-space are equivalent by regular homotopy, though not by isotopy. The Whitney–Graustein theorem classifies the regular homotopy classes of a circle into the plane; two immersions are regularly homotopic if and only if they have the same turning number – equivalently, total curvature; equivalently, if and only if their Gauss maps have the same degree/winding number. Stephen Smale classified the regular homotopy classes of a k-sphere immersed in – they are classified by homotopy groups of Stiefel manifolds, which is a generalization of the Gauss map, with here k partial derivatives not vanishing. More precisely, the set of regular homotopy classes of embeddings of sphere in is in one-to-one correspondence with elements of group . In case we have . Since is path connected, and and due to Bott periodicity theorem we have and since then we have . Therefore all immersions of spheres and in euclidean spaces of one more dimension are regular homotopic. In particular, spheres embedded in admit eversion if . A corollary of his work is that there is only one regular homotopy class of a 2-sphere immersed in . In particular, this means that sphere eversions exist, i.e. one can turn the 2-sphere "inside-out". Both of these examples consist of reducing regular homotopy to homotopy; this has subsequently been substantially generalized in the homotopy principle (or h-principle) approach. Non-degenerate homotopy For locally convex, closed space curves, one can also define non-degenerate homotopy. Here, the 1-parameter family of immersions must be non-degenerate (i.e. the curvature may never vanish). There are 2 distinct non-degenerate homotopy classes. Further restrictions of non-vanishing torsion lead to 4 distinct equivalence classes. See also Arnold invariants Immersion (mathematics) References Differential topology Algebraic topology
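A small numerical sketch of the Whitney–Graustein invariant (the sampling scheme and function name are ours, not from the article): the turning number of a discretized immersed plane curve can be computed by accumulating the direction changes of its edges.

import numpy as np

def turning_number(closed_points):
    """Turning number of a closed polygonal plane curve (last point = first)."""
    t = np.diff(closed_points, axis=0)       # edge (tangent) vectors
    ang = np.arctan2(t[:, 1], t[:, 0])       # edge directions
    d = np.diff(np.append(ang, ang[0]))      # direction changes, incl. wrap-around
    d = (d + np.pi) % (2 * np.pi) - np.pi    # normalize each change to (-pi, pi]
    return round(d.sum() / (2 * np.pi))

s = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(s), np.sin(s)])
circle = np.vstack([circle, circle[:1]])     # close the curve
print(turning_number(circle))                # 1 (a figure-eight would give 0)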
Regular homotopy
[ "Mathematics" ]
603
[ "Fields of abstract algebra", "Topology", "Differential topology", "Algebraic topology" ]
6,013,654
https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20existence%20and%20smoothness
The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations, a system of partial differential equations that describe the motion of a fluid in space. Solutions to the Navier–Stokes equations are used in many practical applications. However, theoretical understanding of the solutions to these equations is incomplete. In particular, solutions of the Navier–Stokes equations often include turbulence, which remains one of the greatest unsolved problems in physics, despite its immense importance in science and engineering. Even more basic (and seemingly intuitive) properties of the solutions to Navier–Stokes have never been proven. For the three-dimensional system of equations, and given some initial conditions, mathematicians have neither proved that smooth solutions always exist, nor found any counter-examples. This is called the Navier–Stokes existence and smoothness problem. Since understanding the Navier–Stokes equations is considered to be the first step to understanding the elusive phenomenon of turbulence, the Clay Mathematics Institute in May 2000 made this problem one of its seven Millennium Prize problems in mathematics. It offered a US$1,000,000 prize to the first person providing a solution for a specific statement of the problem: The Navier–Stokes equations In mathematics, the Navier–Stokes equations are a system of nonlinear partial differential equations for abstract vector fields of any size. In physics and engineering, they are a system of equations that model the motion of liquids or non-rarefied gases (in which the mean free path is short enough so that it can be thought of as a continuum mean instead of a collection of particles) using continuum mechanics. The equations are a statement of Newton's second law, with the forces modeled according to those in a viscous Newtonian fluid—as the sum of contributions by pressure, viscous stress and an external body force. Since the setting of the problem proposed by the Clay Mathematics Institute is in three dimensions, for an incompressible and homogeneous fluid, only that case is considered below. Let be a 3-dimensional vector field, the velocity of the fluid, and let be the pressure of the fluid. The Navier–Stokes equations are: where is the kinematic viscosity, the external volumetric force, is the gradient operator and is the Laplacian operator, which is also denoted by or . Note that this is a vector equation, i.e. it has three scalar equations. Writing down the coordinates of the velocity and the external force then for each there is the corresponding scalar Navier–Stokes equation: The unknowns are the velocity and the pressure . Since in three dimensions, there are three equations and four unknowns (three scalar velocities and the pressure), then a supplementary equation is needed. This extra equation is the continuity equation for incompressible fluids that describes the conservation of mass of the fluid: Due to this last property, the solutions for the Navier–Stokes equations are searched in the set of solenoidal ("divergence-free") functions. For this flow of a homogeneous medium, density and viscosity are constants. Since only its gradient appears, the pressure p can be eliminated by taking the curl of both sides of the Navier–Stokes equations. In this case the Navier–Stokes equations reduce to the vorticity-transport equations. 
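For reference, with velocity $v$, pressure $p$, constant density $\rho$, kinematic viscosity $\nu$ and external force $f$ as above, the incompressible system just described is commonly written as $$\frac{\partial v}{\partial t} + (v\cdot\nabla)\,v = -\frac{1}{\rho}\nabla p + \nu\,\Delta v + f, \qquad \nabla\cdot v = 0.$$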
The Navier–Stokes equations are nonlinear, meaning that the terms in the equations do not have a simple linear relationship with each other. This means that the equations cannot be solved using traditional linear techniques, and more advanced methods must be used instead. This nonlinearity allows the equations to describe a wide range of fluid dynamics phenomena, including the formation of shock waves and other complex flow patterns. One way to understand the nonlinearity of the Navier–Stokes equations is to consider the term in the equations. This term represents the acceleration of the fluid, and it is a product of the velocity vector v and the gradient operator ∇. Because the gradient operator is a linear operator, the term (v · ∇)v is nonlinear in the velocity vector v. This means that the acceleration of the fluid depends on the magnitude and direction of the velocity, as well as the spatial distribution of the velocity within the fluid. Another source of nonlinearity in the Navier–Stokes equations is the pressure term . The pressure in a fluid depends on the density and the gradient of the pressure, and this term is therefore nonlinear in the pressure. To see this more explicitly, consider the case of a circular obstacle of radius placed in a uniform flow with velocity and density . Let be the velocity of the fluid at position and time , and let be the pressure at the same position and time. The Navier–Stokes equations in this case are: where is the kinematic viscosity of the fluid. Assuming that the flow is steady (meaning that the velocity and pressure do not vary with time), we can set the time derivative terms equal to zero: We can now consider the flow near the circular obstacle. In this region, the velocity of the fluid will be higher than the uniform flow velocity due to the presence of the obstacle. This results in a nonlinear term in the Navier–Stokes equations that is proportional to the velocity of the fluid. At the same time, the presence of the obstacle will also result in a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. This can be seen by considering the continuity equation, which states that the mass flow rate through any surface must be constant. Since the velocity is higher near the obstacle, the mass flow rate through a surface near the obstacle will be higher than the mass flow rate through a surface farther away from the obstacle. This can be compensated for by a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. As a result of these nonlinear effects, the Navier–Stokes equations in this case become difficult to solve, and approximations or numerical methods must be used to find the velocity and pressure fields in the flow. Consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field and a pressure field . We can use a finite element method to solve the Navier–Stokes equation for the velocity field: To do this, we divide the domain into a series of smaller elements, and represent the velocity field as: where is the number of elements, and are the shape functions associated with each element. Substituting this expression into the Navier–Stokes equation and applying the finite element method, we can derive a system of ordinary differential equations: where is the domain, and the integrals are over the domain. This system of ordinary differential equations can be solved using techniques such as the finite element method or spectral methods. 
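A hedged sketch of such a semi-discrete system in matrix form (the matrix names are our assumptions, not the article's notation), with mass and stiffness matrices assembled from the shape functions $N_i$ over the domain $\Omega$: $$M\,\frac{d\mathbf{u}}{dt} + C(\mathbf{u})\,\mathbf{u} + \nu K\,\mathbf{u} = \mathbf{F}, \qquad M_{ij} = \int_\Omega N_i N_j \, d\Omega, \qquad K_{ij} = \int_\Omega \nabla N_i \cdot \nabla N_j \, d\Omega,$$ where $C(\mathbf{u})$ collects the nonlinear convection terms and $\mathbf{F}$ the forcing; a time integrator is then applied to this system.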
Here, we will use the finite difference method. To do this, we can divide the time interval into a series of smaller time steps, and approximate the derivative at each time step using a finite difference formula: where is the size of the time step, and and are the values of and at time step . Using this approximation, we can iterate through the time steps and compute the value of at each time step. For example, starting at time step and using the approximation above, we can compute the value of at time step : This process can be repeated until we reach the final time step . There are many other approaches to solving ordinary differential equations, each with its own advantages and disadvantages. The choice of approach depends on the specific equation being solved, and the desired accuracy and efficiency of the solution. Two settings: unbounded and periodic space There are two different settings for the one-million-dollar-prize Navier–Stokes existence and smoothness problem. The original problem is in the whole space , which needs extra conditions on the growth behavior of the initial condition and the solutions. In order to rule out the problems at infinity, the Navier–Stokes equations can be set in a periodic framework, which implies that they are no longer working on the whole space but in the 3-dimensional torus . Each case will be treated separately. Statement of the problem in the whole space Hypotheses and growth conditions The initial condition is assumed to be a smooth and divergence-free function (see smooth function) such that, for every multi-index (see multi-index notation) and any , there exists a constant such that for all The external force is assumed to be a smooth function as well, and satisfies a very analogous inequality (now the multi-index includes time derivatives as well): for all For physically reasonable conditions, the type of solutions expected are smooth functions that do not grow large as . More precisely, the following assumptions are made: There exists a constant such that for all Condition 1 implies that the functions are smooth and globally defined and condition 2 means that the kinetic energy of the solution is globally bounded. The Millennium Prize conjectures in the whole space (A) Existence and smoothness of the Navier–Stokes solutions in Let . For any initial condition satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector and a pressure satisfying conditions 1 and 2 above. (B) Breakdown of the Navier–Stokes solutions in There exists an initial condition and an external force such that there exists no solutions and satisfying conditions 1 and 2 above. The Millennium Prize conjectures are two mathematical problems that were chosen by the Clay Mathematics Institute as the most important unsolved problems in mathematics. The first conjecture, which is known as the "smoothness" conjecture, states that there should always exist smooth and globally defined solutions to the Navier–Stokes equations in three-dimensional space. The second conjecture, known as the "breakdown" conjecture, states that there should be at least one set of initial conditions and external forces for which there are no smooth solutions to the Navier–Stokes equations. The Navier–Stokes equations are a set of partial differential equations that describe the motion of fluids. 
They are given by: where is the velocity field of the fluid, is the pressure, is the density, is the kinematic viscosity, and is an external force. The first equation is known as the momentum equation, and the second equation is known as the continuity equation. These equations are typically accompanied by boundary conditions, which describe the behavior of the fluid at the edges of the domain. For example, in the case of a fluid flowing through a pipe, the boundary conditions might specify that the velocity and pressure are fixed at the walls of the pipe. The Navier–Stokes equations are nonlinear and highly coupled, making them difficult to solve in general. In particular, the difficulty of solving these equations lies in the term , which represents the nonlinear advection of the velocity field by itself. This term makes the Navier–Stokes equations highly sensitive to initial conditions, and it is the main reason why the Millennium Prize conjectures are so challenging. In addition to the mathematical challenges of solving the Navier–Stokes equations, there are also many practical challenges in applying these equations to real-world situations. For example, the Navier–Stokes equations are often used to model fluid flows that are turbulent, which means that the fluid is highly chaotic and unpredictable. Turbulence is a difficult phenomenon to model and understand, and it adds another layer of complexity to the problem of solving the Navier–Stokes equations. To solve the Navier–Stokes equations, we need to find a velocity field and a pressure field that satisfy the equations and the given boundary conditions. This can be done using a variety of numerical techniques, such as finite element methods, spectral methods, or finite difference methods. For example, consider the case of a two-dimensional fluid flow in a rectangular domain, with velocity and pressure fields and a pressure field ,respectively. The Navier–Stokes equations can be written as: where is the density, is the kinematic viscosity, and is an external force. The boundary conditions might specify that the velocity is fixed at the walls of the domain, or that the pressure is fixed at certain points. The last identity occurs because the flow is solenoidal. To solve these equations numerically, we can divide the domain into a series of smaller elements, and solve the equations locally within each element. For example, using a finite element method, we might represent the velocity and pressure fields as: where is the number of elements, and are the shape functions associated with each element. Substituting these expressions into the Navier–Stokes equations and applying the finite element method, we can derive a system of ordinary differential equations Statement of the periodic problem Hypotheses The functions sought now are periodic in the space variables of period 1. More precisely, let be the unitary vector in the i- direction: Then is periodic in the space variables if for any , then: Notice that this is considering the coordinates mod 1. This allows working not on the whole space but on the quotient space , which turns out to be the 3-dimensional torus: Now the hypotheses can be stated properly. The initial condition is assumed to be a smooth and divergence-free function and the external force is assumed to be a smooth function as well. 
The type of solutions that are physically relevant are those who satisfy these conditions: Just as in the previous case, condition 3 implies that the functions are smooth and globally defined and condition 4 means that the kinetic energy of the solution is globally bounded. The periodic Millennium Prize theorems (C) Existence and smoothness of the Navier–Stokes solutions in Let . For any initial condition satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector and a pressure satisfying conditions 3 and 4 above. (D) Breakdown of the Navier–Stokes solutions in There exists an initial condition and an external force such that there exists no solutions and satisfying conditions 3 and 4 above. Partial results In 1934, Jean Leray proved that there are smooth and globally defined solutions to the Navier–Stokes equations under the assumption that the initial velocity is sufficiently small. He also proved the existence of so-called weak solutions to the Navier–Stokes equations, satisfying the equations in mean value, not pointwise. In the 1960s, the finite difference method was proven to be convergent for the Navier–Stokes equations and the equations were numerically solved. It was also proven that there are smooth and globally defined solutions to the Navier–Stokes equations in 2 dimensions. It is known that, given an initial velocity there exists a finite "blowup time" T, depending on such that the Navier–Stokes equations on have smooth solutions and . It is not known if the solutions exist beyond. In 2016, Terence Tao published a paper titled "Finite time blowup for an averaged three-dimensional Navier–Stokes equation", in which he formalizes the idea of a "supercriticality barrier" for the global regularity problem for the true Navier–Stokes equations, and claims that his method of proof hints at a possible route to establishing blowup for the true equations. In popular culture Unsolved problems have been used to indicate a rare mathematical talent in fiction. The Navier–Stokes problem features in The Mathematician's Shiva (2014), a book about a prestigious, deceased, fictional mathematician named Rachela Karnokovitch taking the proof to her grave in protest of academia. The movie Gifted (2017) referenced the Millennium Prize problems and dealt with the potential for a 7-year-old girl and her deceased mathematician mother for solving the Navier–Stokes problem. See also List of unsolved problems in mathematics List of unsolved problems in physics Notes References Further reading External links Contributed by: Yakov Sinai The Clay Mathematics Institute's Navier–Stokes equation prize Why global regularity for Navier–Stokes is hard — Possible routes to resolution are scrutinized by Terence Tao. Navier–Stokes existence and smoothness (Millennium Prize Problem) A lecture on the problem by Luis Caffarelli. Fluid dynamics Millennium Prize Problems Partial differential equations Unsolved problems in mathematics Unsolved problems in physics
Navier–Stokes existence and smoothness
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,301
[ "Unsolved problems in mathematics", "Chemical engineering", "Unsolved problems in physics", "Millennium Prize Problems", "Piping", "Mathematical problems", "Fluid dynamics" ]
6,014,225
https://en.wikipedia.org/wiki/Pressure%20drop
Pressure drop (often abbreviated as "dP" or "ΔP") is defined as the difference in total pressure between two points of a fluid-carrying network. A pressure drop occurs when frictional forces, caused by the resistance to flow, act on a fluid as it flows through a conduit (such as a channel, pipe, or tube). This friction converts some of the fluid's hydraulic energy to thermal energy (i.e., internal energy). Since the thermal energy cannot be converted back to hydraulic energy, the fluid experiences a drop in pressure, as is required by conservation of energy. The main determinants of resistance to fluid flow are fluid velocity through the pipe and fluid viscosity. Pressure drop increases proportionally to the frictional shear forces within the piping network. A piping network containing a high relative roughness rating as well as many pipe fittings and joints, tube convergence, divergence, turns, surface roughness, and other physical properties will affect the pressure drop. High flow velocities or high fluid viscosities result in a larger pressure drop across a pipe section, valve, or elbow joint. Low velocity will result in less (or no) pressure drop. The fluid may also be biphasic, as in pneumatic conveying with a gas and a solid; in this case, the friction of the solid must also be taken into consideration for calculating the pressure drop. Applications Fluid in a system will always flow from a region of higher pressure to a region of lower pressure, assuming it has a path to do so. All things being equal, a higher pressure drop will lead to a higher flow (except in cases of choked flow). The pressure drop of a given system will determine the amount of energy needed to convey fluid through that system. For example, a larger pump could be required to move a set amount of water through smaller-diameter pipes (with higher velocity and thus higher pressure drop) as compared to a system with larger-diameter pipes (with lower velocity and thus lower pressure drop). Calculation of pressure drop For a given flow rate, pressure drop is inversely proportional to the fifth power of the pipe diameter. For example, halving a pipe's diameter would increase the pressure drop by a factor of 32 (e.g. from 2 psi to 64 psi), assuming no change in flow. Pressure drop in piping is directly proportional to the length of the piping; for example, a pipe with twice the length will have twice the pressure drop, given the same flow rate. Piping fittings (such as elbow and tee joints) generally lead to greater pressure drop than straight pipe. As such, a number of correlations have been developed to calculate the equivalent length of fittings. Certain valves are provided with an associated flow coefficient, commonly known as Cv or Kv. The flow coefficient relates pressure drop, flow rate, and specific gravity for a given valve. Many empirical calculations exist for calculation of pressure drop, including: Darcy–Weisbach equation, to calculate pressure drop in a pipe Hagen–Poiseuille equation See also ΔP head loss References External links Mechanics Fluid dynamics
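A minimal sketch of a Darcy–Weisbach estimate (the friction factor here is an assumed input rather than computed from a correlation, and the function name is ours); the second call illustrates the fifth-power diameter scaling mentioned above:

def pressure_drop_pa(f_d, length_m, diameter_m, density_kg_m3, velocity_m_s):
    """Darcy-Weisbach: dP = f_D * (L / D) * (rho * v^2 / 2)."""
    return f_d * (length_m / diameter_m) * (density_kg_m3 * velocity_m_s ** 2 / 2)

# Water at 2 m/s through 50 m of 0.1 m pipe, assumed f_D = 0.02:
print(pressure_drop_pa(0.02, 50.0, 0.1, 1000.0, 2.0))   # 20000 Pa = 20 kPa

# Same volumetric flow through half the diameter (velocity quadruples):
print(pressure_drop_pa(0.02, 50.0, 0.05, 1000.0, 8.0))  # 640000 Pa, 32x larger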
Pressure drop
[ "Physics", "Chemistry", "Engineering" ]
634
[ "Chemical engineering", "Mechanics", "Mechanical engineering", "Piping", "Fluid dynamics" ]
15,342,820
https://en.wikipedia.org/wiki/Communications%20Access%20for%20Land%20Mobiles
Communications access for land mobiles (CALM) is an initiative by the ISO TC 204/Working Group 16 to define a set of wireless communication protocols and air interfaces for a variety of communication scenarios spanning multiple modes of communications and multiple methods of transmission in Intelligent Transportation Systems (ITS). The CALM architecture is based on an IPv6 convergence layer that decouples applications from the communication infrastructure. A standardized set of air interface protocols is provided for the best use of resources available for short, medium and long-range, safety-critical communications, using one or more of several media, with multipoint (mesh) transfer. Since 2007, CALM has stood for "Communication Access for Land Mobile"; before that year, CALM stood for "Communications, Air-interface, Long and Medium range". Communication Modes CALM enables the following communication modes: Vehicle-to-Infrastructure (V2I): communication initiated by either roadside or vehicle (e.g. petrol forecourt or toll booth) Vehicle-to-Vehicle (V2V): peer-to-peer ad hoc networking amongst fast moving objects, following the idea of MANETs/VANETs. Infrastructure-to-Infrastructure (I2I): point-to-point connection where conventional cabling is undesirable (e.g. using lamp posts or street signs to relay signals) Methods of transmission used by CALM may be based on one or more of the following communication media: Infrared GSM (2G, 3G cellular telephone communication technology) DSRC 5.8-5.9 GHz (legacy systems) Various evolutions of the IEEE 802.11 standard including WAVE (IEEE P1609.3/D23), M5 (ISO 21215) WiMAX, IEEE 802.16e MM-wave (63 GHz) Satellite Bluetooth RFID The CALM architecture provides an abstraction layer for vehicle applications, managing communication for multiple concurrent sessions spanning all communication modes and all methods of transmission. Applications Applications for CALM are likely to include in-vehicle internet access, dynamic navigation, safety warnings, collision avoidance, and ad hoc networks linking multiple vehicles. Security The CALM architecture protects critical in-vehicle communication using a firewall controlled by the vehicle. Parental controls are also being considered as a component of the architecture. Implementations The CALM standard is still a work in progress. Therefore, large-scale implementations of the standard do not yet exist. CVIS is a project funded by the EC in the sixth framework program (FP6). The aim of the project is to develop and implement a communication infrastructure based on the CALM architecture. CVIS will implement CALM M5, 2G/3G and IR as communication media. The implementation will be tested at several test locations across Europe with a wide range of ITS applications. External links http://www.safespot-eu.org/ SAFESPOT EU Integrated Project on Cooperative Vehicular Systems for Road Safety http://hal.inria.fr/inria-00419466/fr/ Architecture Pour Communication Véhicules-Infrastructure References Intelligent transportation systems
Communications Access for Land Mobiles
[ "Technology" ]
624
[ "Information systems", "Warning systems", "Intelligent transportation systems", "Transport systems" ]
15,344,177
https://en.wikipedia.org/wiki/CinemaNext
CinemaNext is a cinema exhibition services company based in Liège (in Belgium) that also has offices in 26 other countries including France, Spain and Germany. The company was known as XDC from its creation in 2004 until 2012; it was bought by Ymagis Group in 2014 and renamed CinemaNext. The principal activities of CinemaNext are the installation and maintenance of projection and sound systems and outfitting for cinemas. They also offer consulting to cinemas about financing and managing their refurbishment projects. Digital cinema The digital content lab of XDC prepares digital content for distribution. It creates different sub-titled versions and performs quality control. The company is the first entity to have VPF digital cinema deployment agreements with all six major US studios (Warner, Fox, Universal, Paramount, Sony and Disney) for a total of 8,000 digital screens in 22 European countries. Today, XDC has signed VPF deals with exhibitors for about 1,000 screens spread over 11 European countries (Austria, Portugal, Germany, Switzerland, Spain, Belgium, The Netherlands, Hungary, Czech Republic, Slovakia and Poland). XDC has also secured global financing of 100 million euros with Fortis Bank to allow the VPF roll-out of 2,000 digital screens in the first phase of its European deployment program. XDC is also responsible for installing and maintaining the equipment, and training operators. XDC is currently in charge of more than 500 screens in 10 European countries (Sweden, Belgium, The Netherlands, Luxembourg, France, Spain, Germany, Austria, Switzerland and Poland) and its aim is to reach 1,500 screens, in order to create the first European digital cinema network. Distribution of alternative content In December 2007, XDC signed an agreement with Qubo and Dynamic to create DDcinema, a company dedicated to the distribution of digital cinema alternative content (mainly lyrical operas). See also Digital cinema References Mass media companies established in 2004 Information technology companies of Belgium Digital media Companies based in Liège Province
CinemaNext
[ "Technology" ]
407
[ "Multimedia", "Digital media" ]
15,346,514
https://en.wikipedia.org/wiki/KIF5C
Kinesin heavy chain isoform 5C is a protein that in humans is encoded by the KIF5C gene. It is part of the kinesin family of motor proteins. References Further reading External links Human proteins Motor proteins
KIF5C
[ "Chemistry" ]
49
[ "Biochemistry stubs", "Motor proteins", "Protein stubs", "Molecular machines" ]
12,539,451
https://en.wikipedia.org/wiki/Moiety%20conservation
Moiety conservation is the conservation of a subgroup in a chemical species, which is cyclically transferred from one molecule to another. In biochemistry, moiety conservation can have profound effects on the system's dynamics. Moiety-conserved cycles in biochemistry A typical example of a conserved moiety in biochemistry is the Adenosine diphosphate (ADP) subgroup that remains unchanged when it is phosphorylated to create adenosine triphosphate (ATP) and then dephosphorylated back to ADP, forming a conserved cycle. Moiety-conserved cycles in nature exhibit unique network control features which can be elucidated using techniques such as metabolic control analysis. Other examples in metabolism include NAD/NADH, NADP/NADPH, CoA/Acetyl-CoA. Conserved cycles also exist in large numbers in protein signaling networks when proteins get phosphorylated and dephosphorylated. Most, if not all, of these cycles are time-scale-dependent. For example, although a protein in a phosphorylation cycle is conserved during the interconversion, over a longer time scale there will be low levels of protein synthesis and degradation, which change the level of the protein moiety. The same applies to cycles involving ATP, NAD, etc. Thus, although the concept of a moiety-conserved cycle in biochemistry is a useful approximation, over time scales that include significant net synthesis and degradation of the moiety, the approximation is no longer valid. When invoking the conserved-moiety assumption on a particular moiety, we are, in effect, assuming the system is closed to that moiety. Identifying conserved cycles Conserved cycles in a biochemical network can be identified by examination of the stoichiometry matrix, $\mathbf{N}$. The stoichiometry matrix for a simple cycle with species A and AP, interconverted by reactions $v_1$ (A to AP) and $v_2$ (AP to A), is given by: $$\mathbf{N} = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}.$$ The rates of change of A and AP can be written using the equation: $$\frac{d\mathbf{s}}{dt} = \mathbf{N}\,\mathbf{v}.$$ Expanding the expression leads to: $$\frac{dA}{dt} = -v_1 + v_2, \qquad \frac{dAP}{dt} = v_1 - v_2.$$ Note that $dA/dt + dAP/dt = 0$. This means that $A + AP = T$, where $T$ is the total mass of moiety A. Given an arbitrary system: $$\frac{d\mathbf{s}}{dt} = \mathbf{N}\,\mathbf{v},$$ elementary row operations can be applied to both sides such that the stoichiometric matrix is reduced to its echelon form, giving: $$\mathbf{E}\,\mathbf{N} = \begin{pmatrix} \mathbf{N}_R \\ \mathbf{0} \end{pmatrix}.$$ The elementary operations are captured in the matrix $\mathbf{E}$. We can partition $\mathbf{E}$ to match the echelon matrix where the zero rows begin, such that: $$\mathbf{E} = \begin{pmatrix} \mathbf{E}_R \\ \mathbf{E}_0 \end{pmatrix}.$$ By multiplying out the lower partition, we obtain: $$\mathbf{E}_0\,\mathbf{N} = \mathbf{0}.$$ The matrix $\mathbf{E}_0$ will contain entries corresponding to the conserved cycle participants. Conserved cycles and computer models The presence of conserved moieties can affect how computer simulation models are constructed. Moiety-conserved cycles will reduce the number of differential equations required to solve a system. For example, a simple cycle has only one independent variable. The other variable can be computed using the difference between the total mass and the independent variable. The set of differential equations for the two-cycle is given by: $$\frac{dA}{dt} = -v_1 + v_2, \qquad \frac{dAP}{dt} = v_1 - v_2.$$ These can be reduced to one differential equation and one linear algebraic equation: $$\frac{dAP}{dt} = v_1 - v_2, \qquad A = T - AP.$$ References Mathematical and theoretical biology Systems biology Biochemistry
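A small computational sketch of this procedure (reaction labels as in the example above; the use of sympy is ours): the conservation relations are the left null space of N, i.e. the null space of its transpose.

from sympy import Matrix

# Simple phosphorylation cycle: rows = species (A, AP), columns = reactions
# (v1: A -> AP, v2: AP -> A).
N = Matrix([[-1, 1],
            [1, -1]])

# Left null space: vectors y with y^T N = 0, i.e. null space of N transpose.
for y in N.T.nullspace():
    print(y.T)  # Matrix([[1, 1]]) -> A + AP = T is conserved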
Moiety conservation
[ "Chemistry", "Mathematics", "Biology" ]
610
[ "Mathematical and theoretical biology", "Applied mathematics", "nan", "Biochemistry", "Systems biology" ]
10,200,558
https://en.wikipedia.org/wiki/Hot%20band
In molecular vibrational spectroscopy, a hot band is a band centred on a hot transition, which is a transition between two excited vibrational states, i.e. neither is the overall ground state. In infrared or Raman spectroscopy, hot bands refer to those transitions for a particular vibrational mode which arise from a state containing thermal population of another vibrational mode. For example, for a molecule with 3 normal modes, $\nu_1$, $\nu_2$ and $\nu_3$, the transition $\nu_1 + \nu_2 \leftarrow \nu_2$ would be a hot band, since the initial state has one quantum of excitation in the $\nu_2$ mode. Hot bands are distinct from combination bands, which involve simultaneous excitation of multiple normal modes with a single photon, and overtones, which are transitions that involve changing the vibrational quantum number for a normal mode by more than 1. Vibrational hot bands In the harmonic approximation, the normal modes of a molecule are not coupled, and all vibrational quantum levels are equally spaced, so hot bands would not be distinguishable from so-called "fundamental" transitions arising from the overall vibrational ground state. However, vibrations of real molecules always have some anharmonicity, which causes coupling between different vibrational modes that in turn shifts the observed frequencies of hot bands in vibrational spectra. Because anharmonicity decreases the spacing between adjacent vibrational levels, hot bands exhibit red shifts (appear at lower frequencies than the corresponding fundamental transitions). The magnitude of the observed shift is correlated with the degree of anharmonicity in the corresponding normal modes. Both the lower and upper states involved in the transition are excited states. Therefore, the lower excited state must be populated for a hot band to be observed. The most common form of excitation is by thermal energy. The population of the lower excited state is then given by the Boltzmann distribution. In general the relative population can be expressed as $e^{-E/k_B T}$, where $k_B$ is the Boltzmann constant and $E$ is the energy difference between the two states. In simplified form this can be expressed as $e^{-1.44\,\nu/T}$, where $\nu$ is the wavenumber (cm−1) of the hot band, $T$ is the temperature (K), and 1.44 cm·K is the second radiation constant $hc/k_B$. Thus, the intensity of a hot band, which is proportional to the population of the lower excited state, increases as the temperature increases. Combination bands As mentioned above, combination bands involve changes in vibrational quantum numbers of more than one normal mode. These transitions are forbidden by harmonic oscillator selection rules, but are observed in vibrational spectra of real systems due to anharmonic couplings of normal modes. Combination bands typically have weak spectral intensities, but can become quite intense in cases where the anharmonicity of the vibrational potential is large. Broadly speaking, there are two types of combination bands. Difference transition A difference transition, or difference band, occurs between excited states of two different vibrations. Using the 3 mode example from above, $\nu_1 \leftarrow \nu_2$ is a difference transition. For difference bands involving transfer of a single quantum of excitation, as in the example, the frequency is approximately equal to the difference between the fundamental frequencies. The difference is not exact because there is anharmonicity in both vibrations. However, the term "difference band" also applies to cases where more than one quantum is transferred, such as $\nu_1 \leftarrow 2\nu_2$. Since the initial state of a difference band is always an excited state, difference bands are necessarily hot bands.
Difference bands are seldom observed in conventional vibrational spectra, because they are forbidden transitions according to harmonic selection rules, and because populations of vibrationally excited states tend to be quite low. Sum transition A sum transition (sum band) occurs when two or more fundamental vibrations are excited simultaneously. For instance, $\nu_1 + \nu_2 \leftarrow 0$ and $\nu_1 + \nu_2 + \nu_3 \leftarrow \nu_3$ are examples of sum transitions. The frequency of a sum band is slightly less than the sum of the frequencies of the fundamentals, again due to anharmonic shifts in both vibrations. Sum transitions are harmonic-forbidden, and thus typically have low intensities relative to vibrational fundamentals. Also, sum bands can be, but are not always, hot bands, and thus may also show reduced intensities from thermal population effects, as described above. Sum bands are more commonly observed than difference bands in vibrational spectra. References Vibrational spectroscopy
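The temperature dependence described above is easy to quantify; a minimal numerical sketch (Python; the helper name is ours, not from the article), using the simplified Boltzmann factor:

```python
import math

def hot_band_population(wavenumber_cm, temperature_k):
    """Relative Boltzmann population of the lower excited state,
    exp(-h c nu / k_B T) ~ exp(-1.44 nu / T), with nu in cm^-1 and T in K."""
    return math.exp(-1.44 * wavenumber_cm / temperature_k)

# A 500 cm^-1 mode: hot-band intensity grows strongly with temperature.
print(hot_band_population(500, 298))  # ~0.089 at room temperature
print(hot_band_population(500, 600))  # ~0.30 at 600 K
```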
Hot band
[ "Physics", "Chemistry" ]
859
[ "Vibrational spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
10,202,429
https://en.wikipedia.org/wiki/Relative%20change
In any quantitative science, the terms relative change and relative difference are used to compare two quantities while taking into account the "sizes" of the things being compared, i.e. dividing by a standard or reference or starting value. The comparison is expressed as a ratio and is a unitless number. By multiplying these ratios by 100 they can be expressed as percentages, so the terms percentage change, percent(age) difference, or relative percentage difference are also commonly used. The terms "change" and "difference" are used interchangeably. Relative change is often used as a quantitative indicator of quality assurance and quality control for repeated measurements where the outcomes are expected to be the same. A special case of percent change (relative change expressed as a percentage) called percent error occurs in measuring situations where the reference value is the accepted or actual value (perhaps theoretically determined) and the value being compared to it is experimentally determined (by measurement). The relative change formula is not well-behaved under many conditions. Various alternative formulas, called indicators of relative change, have been proposed in the literature. Several authors have found log change and log points to be satisfactory indicators, but these have not seen widespread use. Definition Given two numerical quantities, vref and v with vref some reference value, their actual change, actual difference, or absolute change is $\Delta = v - v_\text{ref}$. The term absolute difference is sometimes also used even though the absolute value is not taken; the sign of Δ typically is uniform, e.g. across an increasing data series. If the relationship of the value with respect to the reference value (that is, larger or smaller) does not matter in a particular application, the absolute value may be used in place of the actual change in the above formula to produce a value for the relative change which is always non-negative. The actual difference is not usually a good way to compare the numbers, in particular because it depends on the unit of measurement. For instance, 1 m is the same as 100 cm, but the absolute difference between 2 m and 1 m is 1, while the absolute difference between 200 cm and 100 cm is 100, giving the impression of a larger difference. But even with constant units, the relative change helps judge the importance of the respective change. For example, an increase in price of $100 of a valuable is considered big if changing from $100 to $200, but rather small when changing from $10,000 to $10,100. We can adjust the comparison to take into account the "size" of the quantities involved, by defining, for positive values of vref: $\text{Relative change}(v_\text{ref}, v) = \frac{v - v_\text{ref}}{v_\text{ref}} = \frac{\Delta}{v_\text{ref}}$. The relative change is independent of the unit of measurement employed; for example, the relative change from 1 m to 2 m is 1 (i.e. 100%), the same as for the change from 100 cm to 200 cm. The relative change is not defined if the reference value (vref) is zero, and gives negative values for positive increases if vref is negative, hence it is not usually defined for negative reference values either. For example, we might want to calculate the relative change of −10 to −6. The above formula gives $\frac{-6 - (-10)}{-10} = \frac{4}{-10} = -0.4$, indicating a decrease, yet in fact the reading increased. Measures of relative change are unitless numbers expressed as a fraction. Corresponding values of percent change would be obtained by multiplying these values by 100 (and appending the % sign to indicate that the value is a percentage). Domain The domain restriction of relative change to positive numbers often poses a constraint.
To avoid this problem it is common to take the absolute value, so that the relative change formula works correctly for all nonzero values of vref: $\text{Relative change}(v_\text{ref}, v) = \frac{v - v_\text{ref}}{|v_\text{ref}|}$. This still does not solve the issue when the reference is zero. It is common to instead use an indicator of relative change, and take the absolute values of both $v$ and $v_\text{ref}$. Then the only problematic case is $v = v_\text{ref} = 0$, which can usually be addressed by appropriately extending the indicator. For example, for the arithmetic mean this formula may be used: $d(x, y) = \frac{y - x}{(|x| + |y|)/2}$, with $d(0, 0)$ defined as 0. Percentage change A percentage change is a way to express a change in a variable. It represents the relative change between the old value and the new one. For example, if a house is worth $100,000 today and the year after its value goes up to $110,000, the percentage change of its value can be expressed as $\frac{110{,}000 - 100{,}000}{100{,}000} \times 100\% = 10\%$. It can then be said that the worth of the house went up by 10%. More generally, if V1 represents the old value and V2 the new one, $\text{percentage change} = \frac{V_2 - V_1}{V_1} \times 100\%$. Some calculators directly support this via a %CH or Δ% function. When the variable in question is a percentage itself, it is better to talk about its change by using percentage points, to avoid confusion between relative difference and absolute difference. Percent error The percent error is a special case of the percentage form of relative change calculated from the absolute change between the experimental (measured) and theoretical (accepted) values, and dividing by the theoretical (accepted) value: $\%\text{ error} = \left|\frac{\text{experimental} - \text{theoretical}}{\text{theoretical}}\right| \times 100$. The terms "Experimental" and "Theoretical" used in the equation above are commonly replaced with similar terms. Other terms used for experimental could be "measured," "calculated," or "actual" and another term used for theoretical could be "accepted." Experimental value is what has been derived by use of calculation and/or measurement and is having its accuracy tested against the theoretical value, a value that is accepted by the scientific community or a value that could be seen as a goal for a successful result. Although it is common practice to use the absolute value version of relative change when discussing percent error, in some situations, it can be beneficial to remove the absolute values to provide more information about the result. Thus, if an experimental value is less than the theoretical value, the percent error will be negative. This negative result provides additional information about the experimental result. For example, experimentally calculating the speed of light and coming up with a negative percent error says that the experimental value is a velocity that is less than the speed of light. This is a big difference from getting a positive percent error, which means the experimental value is a velocity that is greater than the speed of light (violating the theory of relativity) and is a newsworthy result. The percent error equation, when rewritten by removing the absolute values, becomes: $\%\text{ error} = \frac{\text{experimental} - \text{theoretical}}{\text{theoretical}} \times 100$. It is important to note that the two values in the numerator do not commute. Therefore, it is vital to preserve the order as above: subtract the theoretical value from the experimental value and not vice versa. Examples Valuable assets Suppose that car M costs $50,000 and car L costs $40,000. We wish to compare these costs. With respect to car L, the absolute difference is $\$50{,}000 - \$40{,}000 = \$10{,}000$. That is, car M costs $10,000 more than car L. The relative difference is $\frac{\$10{,}000}{\$40{,}000} = 0.25 = 25\%$, and we say that car M costs 25% more than car L. It is also common to express the comparison as a ratio, which in this example is $\frac{\$50{,}000}{\$40{,}000} = 1.25$, and we say that car M costs 125% of the cost of car L.
In this example the cost of car L was considered the reference value, but we could have made the choice the other way and considered the cost of car M as the reference value. The absolute difference is now $\$40{,}000 - \$50{,}000 = -\$10{,}000$, since car L costs $10,000 less than car M. The relative difference, $\frac{-\$10{,}000}{\$50{,}000} = -0.20 = -20\%$, is also negative, since car L costs 20% less than car M. The ratio form of the comparison, $\frac{\$40{,}000}{\$50{,}000} = 0.8 = 80\%$, says that car L costs 80% of what car M costs. It is the use of the words "of" and "less/more than" that distinguish between ratios and relative differences. Percentages of percentages If a bank were to raise the interest rate on a savings account from 3% to 4%, the statement that "the interest rate was increased by 1%" would be incorrect and misleading. The absolute change in this situation is 1 percentage point (4% − 3%), but the relative change in the interest rate is: $\frac{4\% - 3\%}{3\%} = \frac{1}{3} \approx 33\%$. In general, the term "percentage point(s)" indicates an absolute change or difference of percentages, while the percent sign or the word "percentage" refers to the relative change or difference. Indicators of relative change The (classical) relative change above is but one of the possible measures/indicators of relative change. An indicator of relative change from x (initial or reference value) to y (new value) is a binary real-valued function $R(x, y)$ defined for the domain of interest which satisfies the following properties: Appropriate sign: $R(x, y)$ is positive when $y > x$ and negative when $y < x$. $R(x, y)$ is an increasing function of $y$ when $x$ is fixed. $R$ is continuous. Independent of the unit of measurement: for all $a > 0$, $R(ax, ay) = R(x, y)$. Normalized: $R(x, y)$ agrees with the classical relative change $\frac{y - x}{x}$ to first order as $y \to x$. The normalization condition is motivated by the observation that $R$ scaled by a constant still satisfies the other conditions besides normalization. Furthermore, due to the independence condition, every $R$ can be written as a single-argument function $H$ of the ratio $y/x$. The normalization condition is then that $H'(1) = 1$. This implies all indicators behave like the classical one when $y/x$ is close to 1. Usually the indicator of relative change is presented as the actual change Δ scaled by some function of the values x and y, say $f(x, y)$: $R(x, y) = \frac{y - x}{f(x, y)}$. As with classical relative change, the general relative change is undefined if $f(x, y)$ is zero. Various choices for the function $f(x, y)$ have been proposed, including $x$ (giving the classical relative change), $y$, $\min(x, y)$, $\max(x, y)$, and the arithmetic, geometric, harmonic and logarithmic means of $x$ and $y$. All but the first two of these indicators have, as denominator, a mean. One of the properties of a mean function $f$ is $f(x, y) = f(y, x)$, which means that all such indicators have a "symmetry" property that the classical relative change lacks: $R(y, x) = -R(x, y)$. This agrees with intuition that a relative change from x to y should have the same magnitude as a relative change in the opposite direction, y to x, just like the relation $y - x = -(x - y)$ suggests. Maximum mean change has been recommended when comparing floating point values in programming languages for equality with a certain tolerance. Another application is in the computation of approximation errors when the relative error of a measurement is required. Minimum mean change has been recommended for use in econometrics. Logarithmic change has been recommended as a general-purpose replacement for relative change and is discussed more below. Tenhunen defines a general relative difference function from L (reference value) to K, whose special cases include several of the indicators above. Logarithmic change Of these indicators of relative change, the most natural arguably is the natural logarithm (ln) of the ratio of the two numbers (final and initial), called log change. Indeed, when $\left|\frac{V_1 - V_0}{V_0}\right| \ll 1$, the following approximation holds: $\ln\frac{V_1}{V_0} \approx \frac{V_1 - V_0}{V_0}$. In the same way that relative change is scaled by 100 to get percentages, $\ln\frac{V_1}{V_0}$ can be scaled by 100 to get what is commonly called log points.
Log points are equivalent to the unit centinepers (cNp) when measured for root-power quantities. This quantity has also been referred to as a log percentage and denoted L%. Since the derivative of the natural log at 1 is 1, log points are approximately equal to percent change for small differences – for example an increase of 1% equals an increase of 0.995 cNp, and a 5% increase gives a 4.88 cNp increase. This approximation property does not hold for other choices of logarithm base, which introduce a scaling factor due to the derivative not being 1. Log points can thus be used as a replacement for percent change. Additivity Using log change has the advantage of additivity compared to relative change. Specifically, when using log change, the total change after a series of changes equals the sum of the changes. With percent, summing the changes is only an approximation, with larger error for larger changes. For example, an increase of 10% followed by a decrease of 10% amounts to a net relative change of $1.1 \times 0.9 - 1 = -1\%$ (not 0%), whereas the corresponding log changes, $+9.53$ cNp and $-10.54$ cNp, sum exactly to $100\ln(0.99) \approx -1.01$ cNp. Note that although a relative change of 0 and a log change of 0 describe the same (null) variation, a relative change of 1 (100%) and a log change of 1 (100 cNp) do not correspond to the same variation. The conversion between relative and log changes may be computed as $\text{log change} = \ln(1 + \text{relative change})$. By additivity, $\ln\frac{V_1}{V_0} + \ln\frac{V_0}{V_1} = 0$, and therefore additivity implies a sort of symmetry property, namely $\ln\frac{V_1}{V_0} = -\ln\frac{V_0}{V_1}$, and thus the magnitude of a change expressed in log change is the same whether V0 or V1 is chosen as the reference. In contrast, for relative change, $\frac{V_1 - V_0}{V_0} \neq -\frac{V_0 - V_1}{V_1}$ in general, with the difference becoming larger as V1 or V0 approaches 0 while the other remains fixed: the relative change from $V_0$ to $0^+$ approaches $-100\%$, while the relative change from $0^+$ to $V_0$ grows without bound (here $0^+$ means taking the limit from above towards 0). Uniqueness and extensions The log change is the unique two-variable function that is additive, and whose linearization matches relative change. There is a family of additive difference functions $F_\lambda(V_0, V_1) = \frac{V_1^\lambda - V_0^\lambda}{\lambda}$ for any $\lambda \neq 0$, such that absolute change is $F_1$ and log change is the limit of $F_\lambda$ as $\lambda \to 0$. See also Approximation error Errors and residuals in statistics Relative standard deviation Logarithmic scale Notes References Measurement Numerical analysis Ratios Subtraction Dimensionless quantities
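A small numerical sketch of the additivity property just described (Python; the function names are illustrative, not part of the article):

```python
import math

def relative_change(v0, v1):
    """Classical relative change (v1 - v0) / v0; undefined for v0 == 0."""
    return (v1 - v0) / v0

def log_points(v0, v1):
    """Log change ln(v1/v0) scaled by 100, i.e. centinepers (cNp)."""
    return 100 * math.log(v1 / v0)

# +10% followed by -10% is a net -1% relative change (not 0%)...
print(relative_change(100, 100 * 1.1 * 0.9))        # -0.010000...
# ...and the percent changes do not sum, whereas log points sum exactly:
print(log_points(100, 110) + log_points(110, 99))   # -1.0050... cNp
print(log_points(100, 99))                          # -1.0050... cNp (equal)
```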
Relative change
[ "Physics", "Mathematics" ]
2,496
[ "Numerical analysis", "Physical quantities", "Subtraction", "Quantity", "Sign (mathematics)", "Computational mathematics", "Measurement", "Size", "Arithmetic", "Mathematical relations", "Dimensionless quantities", "Approximations", "Ratios" ]
10,203,313
https://en.wikipedia.org/wiki/Biological%20computing
Biological computers use biologically derived molecules — such as DNA and/or proteins — to perform digital or real computations. The development of biocomputers has been made possible by the expanding new science of nanobiotechnology. The term nanobiotechnology can be defined in multiple ways; in a more general sense, nanobiotechnology can be defined as any type of technology that uses both nano-scale materials (i.e. materials having characteristic dimensions of 1-100 nanometers) and biologically based materials. A more restrictive definition views nanobiotechnology as the design and engineering of proteins that can then be assembled into larger, functional structures. The implementation of nanobiotechnology, as defined in this narrower sense, provides scientists with the ability to engineer biomolecular systems specifically so that they interact in a fashion that can ultimately result in the computational functionality of a computer. Scientific background Biocomputers use biologically derived materials to perform computational functions. A biocomputer consists of a pathway or series of metabolic pathways involving biological materials that are engineered to behave in a certain manner based upon the conditions (input) of the system. The resulting pathway of reactions that takes place constitutes an output, which is based on the engineering design of the biocomputer and can be interpreted as a form of computational analysis. Three distinguishable types of biocomputers include biochemical computers, biomechanical computers, and bioelectronic computers. Biochemical computers Biochemical computers use the immense variety of feedback loops that are characteristic of biological chemical reactions in order to achieve computational functionality. Feedback loops in biological systems take many forms, and many different factors can provide both positive and negative feedback to a particular biochemical process, causing either an increase in chemical output or a decrease in chemical output, respectively. Such factors may include the quantity of catalytic enzymes present, the amount of reactants present, the amount of products present, and the presence of molecules that bind to and thus alter the chemical reactivity of any of the aforementioned factors. Given the nature of these biochemical systems to be regulated through many different mechanisms, one can engineer a chemical pathway comprising a set of molecular components that react to produce one particular product under one set of specific chemical conditions and another particular product under another set of conditions. The presence of the particular product that results from the pathway can serve as a signal, which can be interpreted—along with other chemical signals—as a computational output based upon the starting chemical conditions of the system (the input). Biomechanical computers Biomechanical computers are similar to biochemical computers in that they both perform a specific operation that can be interpreted as a functional computation based upon specific initial conditions which serve as input. They differ, however, in what exactly serves as the output signal. In biochemical computers, the presence or concentration of certain chemicals serves as the output signal. In biomechanical computers, however, the mechanical shape of a specific molecule or set of molecules under a set of initial conditions serves as the output.
Biomechanical computers rely on the nature of specific molecules to adopt certain physical configurations under certain chemical conditions. The mechanical, three-dimensional structure of the product of the biomechanical computer is detected and interpreted appropriately as a calculated output. Bioelectronic computers Biocomputers can also be constructed in order to perform electronic computing. Again, like both biomechanical and biochemical computers, computations are performed by interpreting a specific output that is based upon an initial set of conditions that serve as input. In bioelectronic computers, the measured output is the nature of the electrical conductivity that is observed in the bioelectronic computer. This output comprises specifically designed biomolecules that conduct electricity in highly specific manners based upon the initial conditions that serve as the input of the bioelectronic system. Network-based biocomputers In network-based biocomputation, self-propelled biological agents, such as molecular motor proteins or bacteria, explore a microscopic network that encodes a mathematical problem of interest. The paths of the agents through the network and/or their final positions represent potential solutions to the problem. For instance, in the system described by Nicolau et al., mobile molecular motor filaments are detected at the "exits" of a network encoding the NP-complete problem SUBSET SUM. All exits visited by filaments represent correct solutions to the problem. Exits not visited are non-solutions. The motility proteins are either actin and myosin or kinesin and microtubules. The myosin and kinesin, respectively, are attached to the bottom of the network channels. When adenosine triphosphate (ATP) is added, the actin filaments or microtubules are propelled through the channels, thus exploring the network. The energy conversion from chemical energy (ATP) to mechanical energy (motility) is highly efficient when compared with e.g. electronic computing, so the computer, in addition to being massively parallel, also uses orders of magnitude less energy per computational step. Engineering biocomputers The behavior of biologically derived computational systems such as these relies on the particular molecules that make up the system, which are primarily proteins but may also include DNA molecules. Nanobiotechnology provides the means to synthesize the multiple chemical components necessary to create such a system. The chemical nature of a protein is dictated by its sequence of amino acids—the chemical building blocks of proteins. This sequence is in turn dictated by a specific sequence of DNA nucleotides—the building blocks of DNA molecules. Proteins are manufactured in biological systems through the translation of nucleotide sequences by biological molecules called ribosomes, which assemble individual amino acids into polypeptides that form functional proteins based on the nucleotide sequence that the ribosome interprets. What this ultimately means is that one can engineer the chemical components necessary to create a biological system capable of performing computations by engineering DNA nucleotide sequences to encode for the necessary protein components. Also, the synthetically designed DNA molecules themselves may function in a particular biocomputer system.
Thus, implementing nanobiotechnology to design and produce synthetically designed proteins—as well as the design and synthesis of artificial DNA molecules—can allow the construction of functional biocomputers (e.g. Computational Genes). Biocomputers can also be designed with cells as their basic components. Chemically induced dimerization systems can be used to make logic gates from individual cells. These logic gates are activated by chemical agents that induce interactions between previously non-interacting proteins and trigger some observable change in the cell. Network-based biocomputers are engineered by nanofabrication of the hardware from wafers where the channels are etched by electron-beam lithography or nano-imprint lithography. The channels are designed to have a high aspect ratio of cross section so the protein filaments will be guided. Also, split and pass junctions are engineered so filaments will propagate in the network and explore the allowed paths. Surface silanization ensures that the motility proteins can be affixed to the surface and remain functional. The molecules that perform the logic operations are derived from biological tissue. Economics All biological organisms have the ability to self-replicate and self-assemble into functional components. The economic benefit of biocomputers lies in this potential of all biologically derived systems to self-replicate and self-assemble given appropriate conditions. For instance, all of the necessary proteins for a certain biochemical pathway, which could be modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule. This DNA molecule could then be replicated many times over. This characteristic of biological molecules could make their production highly efficient and relatively inexpensive. Whereas electronic computers require manual production, biocomputers could be produced in large quantities from cultures without any additional machinery needed to assemble them. Notable advancements in biocomputer technology Currently, biocomputers exist with various functional capabilities that include operations of binary logic and mathematical calculations. Tom Knight of the MIT Artificial Intelligence Laboratory first suggested a biochemical computing scheme in which protein concentrations are used as binary signals that ultimately serve to perform logical operations. The presence of a particular biochemical product in a biocomputer chemical pathway at or above a certain concentration indicates a signal that is either a 1 or a 0; a concentration below this level indicates the other, remaining signal. Using this method as computational analysis, biochemical computers can perform logical operations in which the appropriate binary output will occur only under specific logical constraints on the initial conditions. In other words, the appropriate binary output serves as a logically derived conclusion from a set of initial conditions that serve as premises from which the logical conclusion can be made. In addition to these types of logical operations, biocomputers have also been shown to demonstrate other functional capabilities, such as mathematical computations. One such example was provided by W.L. Ditto, who in 1999 created at Georgia Tech a biocomputer composed of leech neurons that was capable of performing simple addition.
These are just a few of the notable uses that biocomputers have already been engineered to perform, and the capabilities of biocomputers are becoming increasingly sophisticated. Because of the availability and potential economic efficiency associated with producing biomolecules and biocomputers—as noted above—the advancement of the technology of biocomputers is a popular, rapidly growing subject of research that is likely to see much progress in the future. In March 2013, a team of bioengineers from Stanford University, led by Drew Endy, announced that they had created the biological equivalent of a transistor, which they dubbed a "transcriptor". The invention was the last of the three components necessary to build a fully functional computer: data storage, information transmission, and a basic system of logic. Parallel biological computing with networks, where bio-agent movement corresponds to arithmetical addition, was demonstrated in 2016 on a SUBSET SUM instance with 8 candidate solutions. In July 2017, separate experiments with E. coli published in Nature showed the potential of using living cells for computing tasks and storing information. A team formed with collaborators of the Biodesign Institute at Arizona State University and Harvard's Wyss Institute for Biologically Inspired Engineering developed a biological computer inside E. coli that responded to a dozen inputs. The team called the computer a "ribocomputer", as it was composed of ribonucleic acid. Harvard researchers proved that it is possible to store information in bacteria after successfully archiving images and movies in the DNA of living E. coli cells. In 2021, a team led by biophysicist Sangram Bagh carried out a study with E. coli to solve 2 × 2 maze problems, probing the principles of distributed computing among cells. In 2024, FinalSpark, a Swiss biocomputing startup, launched an online platform enabling global researchers to conduct experiments remotely on biological neurons in vitro. Future potential of biocomputers Many examples of simple biocomputers have been designed, but the capabilities of these biocomputers are very limited in comparison to commercially available non-bio computers. The potential to solve complex mathematical problems using far less energy than standard electronic supercomputers, as well as to perform more reliable calculations simultaneously rather than sequentially, motivates the further development of "scalable" biological computers, and several funding agencies are supporting these efforts. See also Biotechnology Computational gene Computer DNA computing Human biocomputer Molecular electronics Nanotechnology Nanobiotechnology Peptide computing Wetware computer Unconventional computing References Nanotechnology Biotechnology Models of computation
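The network-based SUBSET SUM computation described earlier has a direct conventional-code analogue; a minimal sketch (Python; the exhaustive enumeration below is what the motile filaments perform physically, in parallel, and the instance {2, 5, 9} is a hypothetical example chosen to give 2³ = 8 candidate subsets, matching the count mentioned above):

```python
from itertools import combinations

def subset_sums(values):
    """Enumerate every achievable subset sum -- the 'exits' that a
    network-based biocomputer encodes; exits visited by filaments
    correspond exactly to these sums."""
    sums = set()
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.add(sum(combo))
    return sums

print(sorted(subset_sums([2, 5, 9])))  # [0, 2, 5, 7, 9, 11, 14, 16]
```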
Biological computing
[ "Materials_science", "Engineering", "Biology" ]
2,402
[ "Nanotechnology", "nan", "Materials science", "Biotechnology" ]
10,204,161
https://en.wikipedia.org/wiki/Syncrude%20Tailings%20Dam
The Syncrude Tailings Dam, impounding the Mildred Lake Settling Basin (MLSB), is an embankment dam that was, by volume of construction material, the largest earth structure in the world as of 2001. It is located north of Fort McMurray, Alberta, Canada, at the northern end of the Mildred Lake lease area owned by Syncrude Canada Ltd. The dam and the tailings reservoir within it are constructed and maintained as part of ongoing operations by Syncrude in extracting oil from the Athabasca oil sands. Other tailings dams constructed and operated in the same area by Syncrude include the Southwest Sand Storage (SWSS), which is the third largest dam in the world by volume of construction material, after the Syncrude Tailings Dam and the Tarbela Dam. Oil sands tailings pond water According to Canada’s Oil Sands Innovation Alliance (COSIA), an alliance of oil sands producers formed in 2012, who share research on Environmental Priority Areas (EPAs) such as tailings pond water and greenhouse gases, "Tailings are the sand, silt, clay, soil and water found naturally in oil sands that remain following the mining and bitumen extraction process." The Clark Hot Water Extraction (CHWE) process used by Suncor and Syncrude in their open-pit mining operations to extract bitumen from the Athabasca Oil Sands (AOS) produces large quantities of tailings pond sludge which remains stable for decades. By 1990 it was considered to be the "imminent environmental constraint to future use of the hot water process." Oil sands tailings pond water contains toxic chemicals such as "naphthenic acids (NAs) and process chemicals (e.g., alkyl sulphates, quaternary ammonium compounds, and alkylphenol ethoxylates)." Other Syncrude tailings dams By 2012 Syncrude Canada Ltd had oilsands mining operations on three lease areas (Mildred Lake, Aurora North and Aurora South), all about 40 km north of Fort McMurray. There are many tailings dams on those leases. The lease that has the greatest number of tailings dams, and the largest tailings dams, is the Mildred Lake lease. According to Syncrude's 2010 Baseline Report submitted to the Energy Resources Conservation Board (since replaced by the Alberta Energy Regulator (AER)), the Mildred Lake and Aurora North leases together contain: the Mildred Lake Settling Basin (MLSB), Southwest Sand Storage (SWSS), West In-Pit (WIP), East In-Pit (EIP), Southwest In-Pit (SWIP), Aurora Settling Basin (ASB) and Aurora East Pit Northeast (AEPN-E). Those referred to as "in pit" have only small containing embankments. In the Aurora South lease the main tailings dam will be the External Tailings Area (ETA). By 2016, Syncrude's four tailings areas at Aurora North consisted of the out-of-pit Aurora Settling Basin (ASB), in operation since 2000, and three in-pit basins: Aurora East Pit North East (AEPN-E) since 2010, Aurora East Pit North West (AEPN-W) since 2011, and Aurora East Pit South (AEPS) since 2014. Mildred Lake Settling Basin (MLSB) The Mildred Lake Settling Basin is located on the north side of the Mildred Lake lease area. It is a tailings pond that serves three purposes. Firstly, the embankment was planned as storage for a substantial volume of sand. Secondly, the basin acts as a storage basin for process water, which is recycled, with a planned ultimate storage capacity of 350×106 m3. Thirdly, the fines that are not captured elsewhere settle and compact in the basin, and are later pumped out for long term storage.
This means that the MLSB is a true dam in the sense that it is filled with water in the long term, rather than being quickly filled by solids as in many other tailings dams. The embankment has a circumference of about 18 km, an average height of about 40 m and a maximum height of about 88 m. Two starter dams were constructed during 1976 to 1978 and were required until sufficient sand was available for building the embankments. The north starter dam had a crest elevation of 312 m. The original ground surface varied from 294 m to 305 m, while up to 1.5 m of original ground was stripped for a trench above which was a compacted clay core. The crest width was 30 metres. The main embankment was taken to a final elevation of 352 m for more than half of its length by 1994 and completed in 1995. For construction purposes the embankment was considered to be a collection of 30 "cells", each with a crest length of about 600 metres. Acceptable side slopes were determined on a cell-by-cell basis, based on the strength of available materials and foundation movement. The slope of the outer part of the embankment is much smaller than that of the inner part, in a ratio of about 4:1. By 1997 Syncrude's large open-pit operations were producing up to 250,000 tons a day of tailings that were collected in this tailings pond, built using the upstream construction method. As this dam frequently appears in lists as the largest dam structure in the world, it is necessary to estimate the accuracy of the quoted volume of construction material. The quoted length of the embankment of 18 km is reliable. The average height of the embankment is quoted to be 40 m, and a check on this using four cross sections yields 45 m, which is of the same order of magnitude. The average base width of the embankment is variously reported as 1,800 m, 800 m (from Google Earth) and 660 m. So whereas one report gives an embankment volume of 720×106 m3, calculations based on the width of the embankment base from these three sources give embankment volumes of 660, 290 and 240×106 m3 respectively, leaving some uncertainty as to the total volume of construction material. South West Sand Storage (SWSS) The SWSS facility is located in the southwest corner of the Mildred Lake lease area. It was commissioned in 1993. The facility was designed to provide coarse tailings sand storage, returning water and thin fine tailings to other sites within the Mildred Lake Project area. The crest elevation is variously reported as 400 m or 390 m. An upgrade to increase the water storage to a maximum water surface level of 397 m was constructed in 2009 to 2010. The embankment length is 19.5 km. The maximum embankment height is at least 30 m and the average embankment height is about 18.5 m. The average embankment base width is about 800 m from Google Earth. So the total embankment volume is about 145×106 m3. Aurora Settling Basin (ASB) This is located to the south east of the Aurora North lease, adjacent to the Muskeg River. The embankment length is 11.6 km and the average embankment base width is about 260 m from Google Earth. The surface elevation is 341.4 m. See also Syncrude Canada Ltd. Athabasca Oil Sands List of largest dams in the world List of articles about Canadian oil sands References External links ERCB Directive 074 tailings plans 2010 Dams in Alberta Mining in Alberta Regional Municipality of Wood Buffalo Tailings dams Athabasca oil sands
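The volume figures above follow from a simple back-of-the-envelope model; a sketch (Python) assuming a triangular embankment cross-section — an assumption of ours that the article does not state explicitly, but one that approximately reproduces its numbers:

```python
# Embankment volume ~= 1/2 * base_width * average_height * crest_length
# (triangular cross-section).
LENGTH_M = 18_000      # MLSB circumference, ~18 km
AVG_HEIGHT_M = 40      # quoted average height

for base_width_m in (1_800, 800, 660):
    volume_m3 = 0.5 * base_width_m * AVG_HEIGHT_M * LENGTH_M
    print(f"base {base_width_m:>5} m -> {volume_m3 / 1e6:.0f} x 10^6 m^3")
# base  1800 m -> 648 x 10^6 m^3   (order of the quoted 660)
# base   800 m -> 288 x 10^6 m^3   (~290)
# base   660 m -> 238 x 10^6 m^3   (~240)
```

The same model with length 19.5 km, height 18.5 m and base 800 m gives about 144×106 m3, close to the 145×106 m3 quoted for the SWSS.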
Syncrude Tailings Dam
[ "Technology", "Engineering" ]
1,514
[ "Tailings dams", "Mining engineering", "Hazardous waste", "Mining equipment" ]
10,204,831
https://en.wikipedia.org/wiki/Department%20of%20Energy%20%28United%20Kingdom%29
The Department of Energy was a department of the United Kingdom Government. The department was established in January 1974, when the responsibility for energy production was transferred away from the Department of Trade and Industry in the wake of the 1973 oil crisis and with the importance of North Sea oil increasing. Following the privatisation of the energy industries in the United Kingdom, which had begun some ten years earlier, the department was abolished in 1992. Many of its functions were abandoned, with the remainder being absorbed into other bodies or departments. The Office of Gas Supply (Ofgas) and the Office of Electricity Regulation (OFFER) took over market regulation, the Energy Efficiency Office was transferred to the Department of the Environment, and various media-related functions were transferred to the Department of National Heritage. The core activities relating to UK energy policy were transferred back to the Department of Trade and Industry (DTI). The Department of Energy was a significant source of funding for energy research, and for investigations into the potential for renewable energy technologies in the UK. Work funded or part-funded by the department included investigations into Geothermal power and the Severn Barrage. Ministers The department was headed by the Secretary of State for Energy. Junior ministers included Peter Morrison (Minister of State in 1987) and Patrick Jenkin. Earlier and later ministries Although only formed in 1974, the Department of Energy was not the first ministry to handle energy-related matters. The Ministry of Fuel and Power was created on 11 June 1942 from functions separated from the Board of Trade. It took charge of coal production, allocation of supplies of fuels, control of energy prices and petrol rationing during World War II. The Ministry of Fuel and Power was renamed the Ministry of Power in January 1957. The Ministry of Power later became part of the Ministry of Technology on 6 October 1969, which merged into the Department of Trade and Industry on 20 October 1970. The post of Secretary of State for Energy was re-created in 2008 as the Secretary of State for Energy and Climate Change. See also Energy use and conservation in the United Kingdom Secretary of State for Energy and Climate Change References External links History of the Department of Energy Energy Department of Energy and Climate Change Ministries established in 1974 1974 establishments in the United Kingdom 1992 disestablishments in the United Kingdom Defunct environmental agencies Energy ministries Energy in the United Kingdom
Department of Energy (United Kingdom)
[ "Engineering" ]
468
[ "Energy organizations", "Energy ministries" ]
10,207,027
https://en.wikipedia.org/wiki/Autoconstructive%20evolution
Autoconstructive evolution is a process in which the entities undergoing evolutionary change are themselves responsible for the construction of their own offspring and thus for aspects of the evolutionary process itself. Because biological evolution is always autoconstructive, this term mainly occurs in evolutionary computation, to distinguish artificial life type systems from conventional genetic algorithms, where the GA performs replication artificially. The term was coined by Lee Spector. Importance of autoconstructive evolution Autoconstructive evolution is a good platform for answering theoretical questions about the evolution of evolvability. Preliminary evidence suggests that the way in which offspring are generated changes substantially over the course of evolution. By studying these patterns, we can begin to understand how evolving systems organize themselves to evolve faster. Ultimately, such an understanding could allow us to improve our ability to solve problems with evolutionary computation. This increased ability for the process of self-replication to evolve is also thought to be important for recreating the open-ended evolutionary process observed on Earth. Examples of autoconstructive evolution Tierra and Avida A relatively simple form of autoconstruction occurs in systems such as Tierra and Avida. In these systems, programs replicate themselves by allocating space in memory for their offspring and then looping over all of the instructions in their genome and copying each into the newly allocated space. This is autoconstruction in that the programs are responsible for determining what code ends up in the offspring. Programs most commonly make exact copies of themselves, with changes being introduced exclusively through mutation events. In principle, however, programs can compose a wide range of possible offspring by only copying a subset of their genomes. PushGP PushGP is a genetic programming system which evolves code written in the Push language. Push is a stack-based language designed for easy use in genetic programming, in which every variable type (e.g. strings, integers, etc.) has its own stack. All variables are stored on the stack associated with their type. One of the variable types is executable Push code. As a result, this language design allows for rich autoconstructive evolution by treating all code left on the code stack at the end of program execution as the program's offspring. Using this approach, programs have complete control over the offspring programs that they create. References External links Autoconstructive evolution with PushGP and Pushpop Evolutionary biology
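The Tierra/Avida-style copy loop described above can be sketched in a few lines; a toy illustration (Python; not code from either system — the mutation rate and instruction alphabet are arbitrary placeholders):

```python
import random

def autoconstruct(genome, mutation_rate=0.01, alphabet="abcdefgh"):
    """The parent itself loops over its genome and writes each instruction
    into newly 'allocated' space; variation enters only via copy mutations."""
    child = []
    for instruction in genome:
        if random.random() < mutation_rate:
            instruction = random.choice(alphabet)  # imperfect copy
        child.append(instruction)
    return "".join(child)

parent = "abcabcabc"
offspring = autoconstruct(parent)  # usually identical, occasionally mutated
```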
Autoconstructive evolution
[ "Biology" ]
490
[ "Evolutionary biology" ]
10,208,822
https://en.wikipedia.org/wiki/M%C3%B6bius%20aromaticity
In organic chemistry, Möbius aromaticity is a special type of aromaticity believed to exist in a number of organic molecules. In terms of molecular orbital theory these compounds have in common a monocyclic array of molecular orbitals in which there is an odd number of out-of-phase overlaps, the opposite pattern compared to the aromatic character in Hückel systems. The nodal plane of the orbitals, viewed as a ribbon, is a Möbius strip, rather than a cylinder, hence the name. The pattern of orbital energies is given by a rotated Frost circle (with the edge of the polygon on the bottom instead of a vertex), so systems with 4n electrons are aromatic, while those with 4n + 2 electrons are anti-aromatic/non-aromatic. Due to the incrementally twisted nature of the orbitals of a Möbius aromatic system, stable Möbius aromatic molecules need to contain at least 8 electrons, although 4-electron Möbius aromatic transition states are well known in the context of the Dewar-Zimmerman framework for pericyclic reactions. Möbius molecular systems were considered in 1964 by Edgar Heilbronner by application of the Hückel method, but the first such isolable compound was not synthesized until 2003 by the group of Rainer Herges. However, the fleeting trans-C9H9+ cation was proposed to be a Möbius aromatic reactive intermediate in 1998 based on computational and experimental evidence. Hückel-Möbius aromaticity The Herges compound (6) was synthesized in several photochemical cycloaddition reactions from tetradehydrodianthracene 1 and the ladderane syn-tricyclooctadiene 2 as a substitute for cyclooctatetraene. Intermediate 5 was a mixture of 2 isomers and the final product 6 a mixture of 5 isomers with different cis and trans configurations. One of them was found to have a C2 molecular symmetry corresponding to a Möbius aromatic system, and another, Hückel, isomer was found with Cs symmetry. Despite having 16 electrons in its pi system (a 4n electron count, which would be antiaromatic in a Hückel system) the Heilbronner prediction was borne out because according to Herges the Möbius compound was found to have aromatic properties. With bond lengths deduced from X-ray crystallography a HOMA value was obtained of 0.50 (for the polyene part alone) and 0.35 for the whole compound, which qualifies it as moderately aromatic. Henry Rzepa pointed out that the conversion of intermediate 5 to 6 can proceed by either a Hückel or a Möbius transition state. The difference was demonstrated in a hypothetical pericyclic ring opening reaction to cyclododecahexaene. The Hückel TS involves 6 electrons, with Cs molecular symmetry conserved throughout the reaction. The ring opening is disrotatory and suprafacial, and both bond length alternation and NICS values indicate that the 6-membered ring is aromatic. The Möbius TS with 8 electrons on the other hand has lower computed activation energy and is characterized by C2 symmetry, a conrotatory and antarafacial ring opening and 8-membered ring aromaticity. Another interesting system is the cyclononatetraenyl cation explored for over 30 years by Paul v. R. Schleyer et al. This reactive intermediate is implied in the solvolysis of the bicyclic chloride 9-deutero-9'-chlorobicyclo[6.1.0]-nonatriene 1 to the indene dihydroindenol 4. The starting chloride is deuterated in only one position but in the final product deuterium is distributed at every available position.
This observation is explained by invoking a twisted 8-electron cyclononatetraenyl cation 2, for which a NICS value of -13.4 (exceeding that of benzene in magnitude) is calculated. A more recent study, however, suggests that the stability of trans-C9H9+ is not much different in energy compared to a Hückel topology isomer. The same study suggested that for the [13]annulenyl cation, the Möbius topology penta-trans-C13H13+ is a global energy minimum and predicts that it may be directly observable. In 2005 the same P. v. R. Schleyer questioned the 2003 Herges claim: he analyzed the same crystallographic data and concluded that there was indeed a large degree of bond length alternation, resulting in a HOMA value of -0.02; a computed NICS value of -3.4 ppm also did not point towards aromaticity; and (also inferred from a computer model) steric strain would prevent effective pi-orbital overlap. A Hückel-Möbius aromaticity switch (2007) has been described based on a 28 pi-electron porphyrin system: the phenylene rings in this molecule are free to rotate, forming a set of conformers, one with a Möbius half-twist and another with a Hückel double-twist (a figure-eight configuration) of roughly equal energy. In 2014, Zhu and Xia (with the help of Schleyer) synthesized a planar Möbius system that consisted of two pentene rings connected with an osmium atom. They formed derivatives where osmium had 16 and 18 electrons and determined that Craig–Möbius aromaticity is more important for the stabilization of the molecule than the metal's electron count. Transition states In contrast to the rarity of Möbius aromatic ground state molecular systems, there are many examples of pericyclic transition states that exhibit Möbius aromaticity. The classification of a pericyclic transition state as either Möbius or Hückel topology determines whether 4N or 4N + 2 electrons are required to make the transition state aromatic or antiaromatic, and therefore, allowed or forbidden, respectively. Based on the energy level diagrams derived from Hückel MO theory, (4N + 2)-electron Hückel and (4N)-electron Möbius transition states are aromatic and allowed, while (4N + 2)-electron Möbius and (4N)-electron Hückel transition states are antiaromatic and forbidden. This is the basic premise of the Möbius-Hückel concept. Derivation of Hückel MO theory energy levels for Möbius topology In a Möbius system the interaction between two consecutive AOs is attenuated by the incremental twisting between orbitals by a factor $\cos\omega$, where $\omega = \pi/N$ is the angle of twisting between consecutive orbitals, compared to the usual Hückel system. For this reason the resonance integral $\beta'$ is given by $\beta' = \beta\cos(\pi/N)$, where $\beta$ is the standard Hückel resonance integral value (with completely parallel orbitals). Nevertheless, after going all the way around, the $N$th and 1st orbitals are almost completely out of phase. (If the twisting were to continue after the $N$th orbital, the $(N+1)$st orbital would be exactly phase-inverted compared to the 1st orbital.) For this reason, in the Hückel matrix the resonance integral between carbon 1 and carbon $N$ is $-\beta'$. For the generic $N$-carbon Möbius system, the Hamiltonian matrix has $\alpha$ on the diagonal, $\beta'$ between consecutive carbons, and $-\beta'$ in the corner elements connecting carbons 1 and $N$: $\mathbf{H} = \begin{bmatrix} \alpha & \beta' & 0 & \cdots & -\beta' \\ \beta' & \alpha & \beta' & \cdots & 0 \\ 0 & \beta' & \alpha & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \beta' \\ -\beta' & 0 & \cdots & \beta' & \alpha \end{bmatrix}$. Eigenvalues for this matrix can now be found, which correspond to the energy levels of the Möbius system. Since $\mathbf{H}$ is an $N \times N$ matrix, we will have $N$ eigenvalues and $N$ MOs. Defining the variable $x = \frac{\alpha - E}{\beta'}$, we have the secular equations $(\mathbf{H} - E\mathbf{I})\,\mathbf{c} = \mathbf{0}$. To find nontrivial solutions to this equation, we set the determinant of this matrix to zero. Hence, we find the energy levels for a cyclic system with Möbius topology, $E_k = \alpha + 2\beta'\cos\left(\frac{(2k+1)\pi}{N}\right)$ for $k = 0, 1, \ldots, N-1$.
In contrast, recall the energy levels for a cyclic system with Hückel topology, $E_k = \alpha + 2\beta\cos\left(\frac{2k\pi}{N}\right)$ for $k = 0, 1, \ldots, N-1$. See also Barrelene Baird's rule Bicycloaromaticity References Notes Physical organic chemistry
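A quick numerical check of the two level formulas (Python with NumPy; energies in units of $\beta$ relative to $\alpha$, using the twist-attenuated $\beta' = \beta\cos(\pi/N)$ derived above):

```python
import numpy as np

N = 8  # ring size; a Möbius [8]annulene would carry 4n = 8 pi electrons
k = np.arange(N)

# Hückel topology: E_k = alpha + 2*beta*cos(2*k*pi/N)
huckel = 2 * np.cos(2 * np.pi * k / N)
# Möbius topology: E_k = alpha + 2*beta'*cos((2k + 1)*pi/N)
mobius = 2 * np.cos(np.pi / N) * np.cos((2 * k + 1) * np.pi / N)

print(np.sort(huckel))  # one unique lowest level, then degenerate pairs
print(np.sort(mobius))  # all levels in degenerate pairs -> closed shell at 4n
```

The output makes the rotated-Frost-circle picture concrete: the Möbius spectrum consists entirely of degenerate pairs, so a closed-shell (aromatic) configuration needs 4n electrons rather than 4n + 2.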
Möbius aromaticity
[ "Chemistry" ]
1,653
[ "Physical organic chemistry" ]
10,209,776
https://en.wikipedia.org/wiki/Energy%20applications%20of%20nanotechnology
As the world's energy demand continues to grow, the development of more efficient and sustainable technologies for generating and storing energy is becoming increasingly important. According to Dr. Wade Adams from Rice University, energy will be the most pressing problem facing humanity in the next 50 years, and nanotechnology has the potential to solve this issue. Nanotechnology, a relatively new field of science and engineering, has shown promise to have a significant impact on the energy industry. Nanotechnology is defined as any technology that contains particles with one dimension under 100 nanometers in length. For scale, a single virus particle is about 100 nanometers wide. Scientists and engineers have already begun developing ways of utilizing nanotechnology for the development of consumer products. Benefits already observed from the design of these products are an increased efficiency of lighting and heating, increased electrical storage capacity, and a decrease in the amount of pollution from the use of energy. Benefits such as these make the investment of capital in the research and development of nanotechnology a top priority. Commonly used nanomaterials in energy An important sub-field of nanotechnology related to energy is nanofabrication, the process of designing and creating devices on the nanoscale. The ability to create devices smaller than 100 nanometers opens many doors for the development of new ways to capture, store, and transfer energy. Improvements in the precision of nanofabrication technologies are critical to solving many energy-related problems that the world is currently facing. Graphene-based materials There is enormous interest in the use of graphene-based materials for energy storage. The research on the use of graphene for energy storage began very recently, but the growth rate of related research is rapid. Graphene recently emerged as a promising material for energy storage because of several properties, such as low weight, chemical inertness and low price. Graphene is an allotrope of carbon that exists as a two-dimensional sheet of carbon atoms organized in a hexagonal lattice. A family of graphene-related materials, called "graphenes" by the research community, consists of structural or chemical derivatives of graphene. The most important chemically derived graphene is graphene oxide (defined as a single layer of graphite oxide; graphite oxide can be obtained by reacting graphite with strong oxidizers, for example a mixture of sulfuric acid, sodium nitrate, and potassium permanganate), which is usually prepared from graphite by oxidation to graphite oxide and subsequent exfoliation. The properties of graphene depend greatly on the method of fabrication. For example, reduction of graphene oxide to graphene results in a graphene structure that is also one-atom thick but contains a high concentration of defects, such as nanoholes and Stone–Wales defects. Moreover, carbon materials, which have relatively high electrical conductivity and variable structures, are extensively used in the modification of sulfur. Sulfur–carbon composites with diverse structures have been synthesized and exhibit remarkably improved electrochemical performance compared with pure sulfur, which is crucial for battery design. Graphene has great potential in the modification of a sulfur cathode for high-performance Li-S batteries, which has been broadly investigated in recent years.
Silicon-based nano semiconductors Silicon-based nano semiconductors find their most useful application in solar energy and have been extensively studied at many institutions, such as Kyoto University. They utilize silicon nanoparticles in order to absorb a greater range of wavelengths from the electromagnetic spectrum. This can be done by placing many identical, equally spaced silicon rods on the surface; the rod height and spacing have to be optimized to reach the best results. This arrangement of silicon particles allows solar energy to be reabsorbed by many different particles, exciting electrons and resulting in much of the energy being converted to heat. Then, the heat can be converted to electricity. Researchers from Kyoto University have shown that these nano-scale semiconductors can increase efficiency by at least 40%, compared to conventional solar cells. Nanocellulose-based materials Cellulose is the most abundant natural polymer on Earth. Currently, nanocellulose-based mesoporous structures, flexible thin films, fibers, and networks are developed and used in photovoltaic (PV) devices, energy storage systems, mechanical energy harvesters, and catalyst components. Inclusion of nanocellulose in those energy-related devices greatly raises the proportion of eco-friendly materials and is very promising for addressing the relevant environmental concerns. Furthermore, cellulose holds the promise of low cost and large-scale production. Nanostructures in energy One-dimensional nanomaterials One-dimensional nanostructures have shown promise to increase the energy density, safety, and cycling life of energy storage systems, an area in need of improvement for Li-ion batteries. These nanostructures are mainly used in battery electrodes because of their shorter bi-continuous ion and electron transport pathways, which results in higher battery performance. Additionally, 1D nanostructures are capable of increasing charge storage by double layering, and can also be used in supercapacitors because of their fast pseudocapacitive surface redox processes. In the future, novel designs and controllable syntheses of these materials will be developed in much more depth. 1D nanomaterials are also environmentally friendly and cost-effective. Two-dimensional nanomaterials The most important feature of two-dimensional nanomaterials is that their properties can be precisely controlled. This means that 2D nanomaterials can be easily modified and engineered into nanostructures. The interlayer space can also be manipulated for nonlayered materials, called 2D nanofluidic channels. 2D nanomaterials can also be engineered into porous structures to be used for energy storage and catalytic applications, allowing facile charge and mass transport. 2D nanomaterials also have a few challenges. There are some side effects of modifying the properties of the materials, such as activity and structural stability, which can be compromised when they are engineered. For example, creating some defects can increase the number of active sites for higher catalytic performance, but side reactions may also happen, which could possibly damage the catalyst's structure. Another example is that interlayer expansion can lower the ion diffusion barrier in the catalytic reaction, but it can also potentially lower the material's structural stability. Because of this, there is a tradeoff between performance and stability. A second issue is consistency in design methods.
A second issue is consistency in design methods. For example, heterostructures are the main structures used for interlayer-space catalysts and in energy storage devices, but the mechanisms of the catalytic reactions or of charge storage in these structures are not yet well understood. A deeper understanding of 2D nanomaterial design is required, because such fundamental knowledge will lead to consistent and efficient methods of designing these structures. A third challenge is the practical application of these technologies. There is a huge difference between lab-scale and industry-scale applications of 2D nanomaterials, owing to their intrinsic instability during storage and processing. For example, porous 2D nanomaterial structures have low packing densities, which makes them difficult to pack into dense films. New processes are still being developed for applying these materials on an industrial scale. Applications Lithium-sulfur based high-performance batteries The Li-ion battery is currently one of the most popular electrochemical energy storage systems and has been widely used in areas from portable electronics to electric vehicles. However, the gravimetric energy density of Li-ion batteries is limited and less than that of fossil fuels. The lithium-sulfur (Li-S) battery, which has a much higher energy density than the Li-ion battery, has been attracting worldwide attention in recent years. A group of researchers funded by the National Natural Science Foundation of China (Grants No. 21371176 and 21201173) and the Ningbo Science and Technology Innovation Team (Grant No. 2012B82001) has developed a nanostructure-based lithium-sulfur battery consisting of graphene/sulfur/carbon nano-composite multilayer structures. Nanomodification of sulfur can increase the electrical conductivity of the battery and improve electron transport in the sulfur cathode. A graphene/sulfur/carbon nanocomposite with a multilayer structure (G/S/C), in which nanosized sulfur is layered on both sides of chemically reduced graphene sheets and covered with amorphous carbon layers, was designed and successfully prepared. This structure simultaneously achieves high conductivity and surface protection of the sulfur, and thus gives rise to excellent charge/discharge performance. The G/S/C composite shows promising characteristics as a high-performance cathode material for Li-S batteries. Nanomaterials in solar cells Engineered nanomaterials are key building blocks of the current generation of solar cells. Today's best solar cells have layers of several different semiconductors stacked together to absorb light at different energies, but they still only manage to use approximately 40% of the Sun's energy. Commercially available solar cells have much lower efficiencies (15–20%). Nanostructuring has been used to improve the efficiencies of established photovoltaic (PV) technologies, for example by improving current collection in amorphous silicon devices, plasmonic enhancement in dye-sensitized solar cells, and improved light trapping in crystalline silicon. Furthermore, nanotechnology could help increase the efficiency of light conversion by exploiting the flexible band gaps of nanomaterials, or by controlling the directivity and photon escape probability of photovoltaic devices. Titanium dioxide (TiO2) has been one of the most widely investigated metal oxides for use in PV cells over the past few decades because of its low cost, environmental benignity, plentiful polymorphs, good stability, and excellent electronic and optical properties.
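The relationship between a semiconductor's band gap and the portion of the solar spectrum it can absorb follows from lambda_cutoff = h*c / E_g (about 1240 nm·eV divided by the gap). The sketch below uses commonly quoted, approximate band gap values; treat them as illustrative assumptions rather than authoritative data.

```python
# Absorption cutoff wavelength from the band gap: lambda = h*c / E_g.
# Band gap values below are commonly quoted approximations.

HC_EV_NM = 1239.84  # h*c in eV*nm

band_gaps_eV = {
    "crystalline Si": 1.1,
    "CdTe": 1.5,
    "TiO2 (anatase)": 3.2,  # assumed/approximate value
}

for material, eg in band_gaps_eV.items():
    cutoff_nm = HC_EV_NM / eg
    print(f"{material}: E_g = {eg} eV -> absorbs below ~{cutoff_nm:.0f} nm")

# TiO2's ~3.2 eV gap puts its cutoff near 390 nm, i.e. in the
# ultraviolet, consistent with its UV-only sensitivity.
```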
The performance of such cells, however, is greatly limited by the properties of the TiO2 material itself. One limitation is its wide band gap, which makes TiO2 sensitive only to ultraviolet (UV) light, which accounts for less than 5% of the solar spectrum. Recently, core–shell structured nanomaterials have attracted a great deal of attention because they integrate individual components into a functional system, showing improved physical and chemical properties (e.g., stability, non-toxicity, dispersibility, multi-functionality) that are unavailable from the isolated components. For TiO2 nanomaterials, this core–shell structured design provides a promising way to overcome their disadvantages and thus improve performance. Compared with plain TiO2, core–shell structured TiO2 composites show tunable optical and electrical properties, and even new functions, which originate from the unique core–shell structure. Nanoparticle fuel additives Nanomaterials can be used in a variety of ways to reduce energy consumption. Nanoparticle fuel additives can be of great use in reducing carbon emissions and increasing the efficiency of combustion fuels. Cerium oxide nanoparticles have been shown to be very good at catalyzing the decomposition of unburnt hydrocarbons and other small-particle emissions, owing to their high surface-area-to-volume ratio, as well as at lowering the pressure within the combustion chamber of engines to increase engine efficiency and curb NOx emissions. The addition of carbon nanoparticles has also successfully increased the burning rate and reduced the ignition delay of jet fuel. In one study, iron nanoparticle additives to biodiesel and diesel fuels reduced fuel consumption and lowered volumetric emissions of hydrocarbons by 3–6%, carbon monoxide by 6–12% and nitrogen oxides by 4–11%. Environmental and health impacts of fuel additives While nanomaterials can increase the energy efficiency of fuel in several ways, a drawback of their use lies in the effect of nanoparticles on the environment. With cerium oxide nanoparticle additives in fuel, trace amounts of these toxic particles can be emitted in the exhaust. Cerium oxide additives in diesel fuel have been shown to cause lung inflammation and increased bronchoalveolar lavage fluid in rats. This is concerning, especially in areas with high road traffic, where these particles are likely to accumulate and cause adverse health effects. Naturally occurring nanoparticles created by the incomplete combustion of diesel fuels are also large contributors to the toxicity of diesel fumes. More research is needed to determine whether the addition of artificial nanoparticles to fuels decreases the net amount of toxic particle emissions from combustion. Economic benefits The relatively recent shift toward using nanotechnology for the capture, transfer, and storage of energy has had, and will continue to have, many positive economic impacts on society. The control over materials that nanotechnology offers to scientists and engineers of consumer products is one of its most important aspects and allows efficiency improvements in a variety of products. More efficient capture and storage of energy through nanotechnology may lead to decreased energy costs in the future, as the preparation of nanomaterials becomes less expensive with further development. A major issue with current energy generation is the production of waste heat as a by-product of combustion.
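To put the waste-heat problem in perspective, here is a rough back-of-envelope sketch. The energy density of gasoline (~34 MJ/L) and the heat-loss fraction are typical textbook figures, used here purely for illustration:

```python
# Rough energy budget for one liter of gasoline in a typical internal
# combustion engine. Both numbers are approximate, commonly quoted
# figures, used only for illustration.

energy_per_liter_MJ = 34.0  # ~energy content of gasoline, MJ/L
heat_loss_fraction = 0.64   # fraction lost as waste heat (see text)

waste_heat_MJ = energy_per_liter_MJ * heat_loss_fraction
useful_MJ = energy_per_liter_MJ - waste_heat_MJ

print(f"Per liter: ~{waste_heat_MJ:.1f} MJ lost as heat, "
      f"~{useful_MJ:.1f} MJ delivered as useful work")
# Recovering even a modest slice of the ~22 MJ/L of waste heat would
# translate into a large fleet-wide fuel saving.
```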
A common example of this is the internal combustion engine, which loses about 64% of the energy in gasoline as heat; an improvement here alone could have a significant economic impact. However, improving the internal combustion engine in this respect has proven extremely difficult without sacrificing performance. Improving the efficiency of fuel cells through nanotechnology appears more plausible, using molecularly tailored catalysts, polymer membranes, and improved fuel storage. In order for a fuel cell to operate, particularly a hydrogen fuel cell, a noble-metal catalyst (usually platinum, which is very expensive) is needed to separate the electrons from the protons of the hydrogen atoms. However, catalysts of this type are extremely sensitive to carbon monoxide poisoning. To combat this, alcohol or hydrocarbon compounds are used to lower the carbon monoxide concentration in the system. Using nanotechnology, catalysts can be designed through nanofabrication that limit incomplete combustion and thus decrease the amount of carbon monoxide, improving the efficiency of the process. See also Nanotechnology Energy Fuel cell References Nanotechnology Energy technology
Energy applications of nanotechnology
[ "Materials_science", "Engineering" ]
2,962
[ "Nanotechnology", "Materials science" ]
10,211,510
https://en.wikipedia.org/wiki/Diederich%20Hinrichsen
Diederich Hinrichsen (born 17 February 1939) is a German mathematician who, together with Hans W. Knobloch, established the field of dynamical systems theory and control theory in Germany. Life and work Diederich Hinrichsen was born in 1939, and studied mathematics, physics, literature, philosophy, and economics from 1958 to 1965 in Hamburg. In 1966 he got his PhD at the University of Erlangen under the supervision of Heinz Bauer. His main research area at that time was abstract potential theory, with a special focus on extensions of the Cauchy-Weil theorem to the Choquet boundary. After research visits in Paris and Hamburg, he went to Havana where he helped to re-establish mathematics in Cuba. After an appointment to Bielefeld, he became professor of mathematics at the University of Bremen. Hinrichsen was the founding director of the Research Center for Dynamical Systems, concentrating on finite- and infinite-dimensional linear systems, stochastic dynamical systems, nonlinear dynamics and stability analysis. He focused on algebraic systems theory, parameterization problems in control and linear algebra, infinite-dimensional systems, and stability analysis, developing a comprehensive theory of linear systems. In a different direction, with Anthony J. Pritchard (University of Warwick), he worked on concepts of stability radii and spectral value sets, building up a robustness theory covering deterministic and stochastic aspects of dynamical systems. After retiring in Germany, he is now a professor at Carlos III in Madrid. Selected publications 1982. Feedback Control of Linear and Nonlinear Systems, with Alberto Isidori. Heidelberg : Springer. 1990. Control of Uncertain Systems. Progress in Systems & Control Theory, with Bengt Martensson. Boston : Birkhäuser. 1999. Advances in Mathematical Systems Theory. In Honor of Diederich Hinrichsen, Boston : Birkhäuser 2005. Mathematical Systems Theory, with A. J. Pritchard. Heidelberg : Springer References External links Extended list of selected publications by Diederich Hinrichsen Institute for Dynamical Systems Spectral value sets: a graphical tool for robustness analysis (Post-script file; ~8 MB) 1939 births 20th-century German mathematicians 21st-century German mathematicians Control theorists Living people German systems scientists Academic staff of the Charles III University of Madrid
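As a numerical aside (not drawn from the article itself), the stability radius concept associated with Hinrichsen and Pritchard can be sketched: for a Hurwitz-stable matrix A, the unstructured complex stability radius is r(A) = 1 / sup over real w of ||(iwI - A)^(-1)||. The sketch below estimates this by sampling the imaginary axis; the example matrix is invented.

```python
# Minimal sketch: unstructured complex stability radius of a stable
# matrix A, r(A) = 1 / sup_w ||(iwI - A)^{-1}||_2, estimated by
# sampling the imaginary axis. Grid-based, so the result is approximate.
import numpy as np

def stability_radius(A, w_max=100.0, n=4001):
    I = np.eye(A.shape[0])
    sup_norm = 0.0
    for w in np.linspace(-w_max, w_max, n):
        # Spectral norm of the resolvent at s = i*w.
        norm = np.linalg.norm(np.linalg.inv(1j * w * I - A), 2)
        sup_norm = max(sup_norm, norm)
    return 1.0 / sup_norm

A = np.array([[-1.0, 10.0],
              [0.0, -2.0]])  # stable but non-normal example matrix
print(f"complex stability radius ~ {stability_radius(A):.4f}")
# Any complex perturbation D with ||D|| below this value leaves
# A + D Hurwitz stable.
```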
Diederich Hinrichsen
[ "Engineering" ]
478
[ "Control engineering", "Control theorists" ]
10,211,876
https://en.wikipedia.org/wiki/AN/PSN-13%20Defense%20Advanced%20GPS%20Receiver
The AN/PSN-13 Defense Advanced GPS Receiver (DAGR; colloquially, "dagger") is a handheld GPS receiver used by the United States Department of Defense and select foreign military services. It is a military-grade, dual-frequency receiver, and has the security hardware necessary to decode the encrypted P(Y)-code GPS signals. Manufactured by Rockwell Collins, the DAGR entered production in March 2004, with the 40,000th unit delivered in September 2005. The news source Defense Industry Daily estimated that, by the end of 2006, the USA and various allies around the world had issued almost $300 million worth of DAGR contracts and ordered almost 125,000 units. The DAGR replaced the Precision Lightweight GPS Receiver (PLGR), which was first fielded in 1994. Rockwell Collins also manufactures a GPS receiver known as the "Polaris Guide", which looks like a DAGR but uses only the civilian C/A-code signals. These units are labelled "SPS", for "Standard Positioning Service", and may be possessed by non-military users. Features Graphical screen, with the ability to overlay map images. 12-channel continuous satellite tracking for "all-in-view" operation. Simultaneous L1/L2 dual-frequency GPS signal reception. Capable of direct Y-code acquisition. Cold-start first fix in less than 100 seconds. Extended performance in a diverse jamming environment: 41 dB J/S while maintaining state-5 tracking; 24 dB during initial C/A-code acquisition. Utilizes Receiver Autonomous Integrity Monitoring (RAIM). Selective Availability/Anti-Spoofing Module (SAASM) compatible (currently version 3.2). Wide Area GPS Enhancement (WAGE) compatible. Resistant to multipath effects. Can be used as a survey tool for weapons systems. Fielded to the U.S. Army, U.S. Marine Corps, U.S. Navy, U.S. Air Force and select foreign military forces. Designed to fit in a Battle Dress Uniform's 2-magazine ammo pouch. Approximate cost to government per unit to acquire: $1,832. Comparison to PLGR See also List of military electronics of the United States Moving map display References External links Rockwell Collins' DAGR technical specifications US Army DAGR information page Wikileaks' US Army AN/PSN-13A DAGR Operator's Manual -- Change 1 Brooke Clarke's excellent DAGR Information Page GPS Tracker Global Positioning System Military equipment of the United States Military electronics of the United States Military equipment introduced in the 2000s
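The jamming figures quoted above are decibel ratios of jammer power to signal power; a quick conversion to linear power ratios shows what they mean. This is generic dB arithmetic, not vendor data:

```python
# Convert the quoted jamming-to-signal (J/S) ratios from decibels to
# linear power ratios: ratio = 10**(dB/10). Generic dB arithmetic.

for label, js_db in [("P(Y) state-5 tracking", 41.0),
                     ("initial C/A acquisition", 24.0)]:
    ratio = 10 ** (js_db / 10.0)
    print(f"{label}: {js_db:.0f} dB J/S -> jammer may be "
          f"~{ratio:,.0f}x stronger than the GPS signal")
# 41 dB corresponds to a jammer roughly 12,600 times more powerful than
# the received GPS signal before tracking is lost.
```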
AN/PSN-13 Defense Advanced GPS Receiver
[ "Technology", "Engineering" ]
527
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
10,212,368
https://en.wikipedia.org/wiki/Precision%20Lightweight%20GPS%20Receiver
The AN/PSN-11 Precision Lightweight GPS Receiver (PLGR, colloquially "plugger") is a ruggedized, hand-held, single-frequency GPS receiver fielded by the United States Armed Forces. It incorporates the Precise Positioning Service — Security Module (PPS-SM) to access the encrypted P(Y)-code GPS signal. It was introduced in January 1990 and extensively fielded until 2004, when it was replaced by its successor, the Defense Advanced GPS Receiver (DAGR). In that time period more than 165,000 PLGRs were procured worldwide, and despite being superseded by the DAGR, large numbers remain in unit inventories and it continues to be the most widely used GPS receiver in the United States military. The PLGR measures 9.5 by 4.1 by 2.6 inches and weighs with batteries. It was originally delivered to the United States military with a six-year warranty; however, this was extended to ten years in June 2000. Versions AN/PSN-11 — NSN 5825-01-374-6643, an early version (tan case) AN/PSN-11(V)1 "Enhanced PLGR" — NSN 5825-01-395-3513, an upgraded version (green case) See also Defense Advanced GPS Receiver List of military electronics of the United States Moving map display Selective availability anti-spoofing module References https://web.archive.org/web/20110722183355/https://gps.army.mil/gps/CustomContent/gps/ue/plgr.htm http://www.rockwellcollins.com/news/gallery/gov/navigation/page2997.html http://www.ion.org/museum/files/PLGR-9~1.PDF http://www.prc68.com/I/PLGR.shtml — Brooke Clarke's PLGR page Global Positioning System Military equipment of the United States Military electronics of the United States Military equipment introduced in the 1990s
Precision Lightweight GPS Receiver
[ "Technology", "Engineering" ]
438
[ "Wireless locating", "Computer hardware stubs", "Aircraft instruments", "Aerospace engineering", "Global Positioning System", "Computing stubs" ]
7,900,132
https://en.wikipedia.org/wiki/Minor%20spliceosome
The minor spliceosome is a ribonucleoprotein complex that catalyses the removal (splicing) of an atypical class of spliceosomal introns (U12-type) from messenger RNAs in some clades of eukaryotes. This process is called noncanonical splicing, as opposed to U2-dependent canonical splicing. U12-type introns represent less than 1% of all introns in human cells; however, they are found in genes performing essential cellular functions. Early evidence A notable feature of eukaryotic nuclear pre-mRNA introns is the relatively high level of conservation of the primary sequences of 5' and 3' splice sites over a great range of organisms. Between 1989 and 1991, several groups reported four independent examples of introns with splice sites that differed from those of common introns: Cartilage matrix protein (CMP/MATN1) gene in humans and chickens Proliferating cell nucleolar protein P120 (NOL1) gene in humans Mouse Rep3 gene, presumably involved in DNA repair Drosophila prospero gene, which encodes a homeobox protein In 1991, by comparing the intron sequences of the P120 and CMP genes, IJ Jackson reported the existence of ATATCC (5') and YYCAC (3') splice sites in these introns. The finding indicated a possible novel splicing mechanism. In 1994, S.L. Hall and R.A. Padgett compared the primary sequences of all reports on the four genes mentioned above. The results suggested a new type of intron with ATATCCTT 5' splice sites, YCCAC 3' splice sites, and an almost invariant TCCTTAAC sequence near the 3' end of the intron (the so-called 3' upstream element). A search for small nuclear RNA sequences complementary to these splice sites suggested U12 snRNA (matching the 3' sequence) and U11 snRNA (matching the 5' sequence) as putative factors involved in splicing of this new type of intron. In all four of these genes, the pre-mRNA contains other introns whose sequences conform to those of major-class introns. Neither the size nor the position of the AT–AC intron within the host gene is conserved. In 1996, Woan-Yuh Tarn and Joan A. Steitz described an in vitro system that splices a pre-mRNA substrate containing an AT–AC intron derived from the human P120 gene. Psoralen cross-linking confirmed the base-pairing interaction predicted by Hall and Padgett between the branch site of the pre-mRNA substrate and U12 RNA. Native gel electrophoresis revealed that U11, U12, and U5 snRNPs assemble onto the P120 pre-mRNA to form splicing complexes. Structure of U12-type introns Although originally referred to as AT-AC introns, not all of these introns are delimited by AT-AC dinucleotides; some have GT-AG or AT-AG termini, for example. Thus, it is more correct to speak of the splicing machinery used to process them, differentiating between U2-type (canonical, or major) and U12-type (non-canonical, or minor) introns. The main determinants distinguishing U2- and U12-type introns are the 5' splice site and branch site sequences. The minor spliceosome consists of U11, U12, U4atac, and U6atac, together with U5 and an unknown number of non-snRNP proteins. The U11, U12 and U4atac/U6atac snRNPs are functional analogs of the U1, U2 and U4/U6 snRNPs in the major spliceosome. Although the minor U4atac and U6atac snRNAs are functional analogs of U4 and U6, respectively, they share only limited sequence homology (c. 40%). Furthermore, the sequences of U11 and U12 are completely unrelated to those of U1 and U2, respectively.
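As an illustrative aside, the splice-site consensus sequences described above can be searched for computationally. The sketch below matches the U12-type 5' consensus (ATATCCTT) and the nearly invariant TCCTTAAC element in a DNA string; the example sequence is invented purely for demonstration:

```python
# Toy scan for U12-type intron signals using the consensus sequences
# quoted in the text: 5' splice site ATATCCTT and the nearly invariant
# TCCTTAAC element near the 3' end. The input sequence is made up.
import re

FIVE_PRIME = re.compile(r"ATATCCTT")
BRANCH_LIKE = re.compile(r"TCCTTAAC")

seq = "GGCATATCCTTGGGACTTTCCTTAACTGCAGAC"  # hypothetical sequence

for name, pattern in [("5' splice-site consensus", FIVE_PRIME),
                      ("3' upstream element", BRANCH_LIKE)]:
    for m in pattern.finditer(seq):
        print(f"{name} found at position {m.start()}: {m.group()}")
```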
Despite this lack of sequence homology, the minor U11, U12, U4atac and U6atac snRNAs can be folded into structures similar to U1, U2, U4 and U6, respectively. Location of minor spliceosomal activity The location of minor-class spliceosomal activity is regarded by most experts as being in the nucleus. However, a single paper has claimed that the minor spliceosome is active in the cytosol. The data presented in that paper are not fully accepted within the field and directly contradict numerous other papers. Evolution Like the major spliceosome, the minor spliceosome had an early origin: several of its characteristic constituents are present in representative organisms from all eukaryotic supergroups for which there is any substantial genome sequence information. In addition, functionally important sequence elements contained within U12-type introns and snRNAs are highly conserved during evolution. See also RNA splicing Spliceosome References Organelles Gene expression RNA Spliceosome RNA splicing
Minor spliceosome
[ "Chemistry", "Biology" ]
1,125
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
7,901,877
https://en.wikipedia.org/wiki/Fault%20current%20limiter
A fault current limiter (FCL), also known as a fault current controller (FCC), is a device which limits the prospective fault current when a fault occurs (e.g. in a power transmission network) without complete disconnection. The term includes superconducting, solid-state and inductive devices. Applications Electric power distribution systems include circuit breakers to disconnect power in case of a fault, but to maximize reliability, operators wish to disconnect the smallest possible portion of the network. This means that even the smallest circuit breakers, as well as all wiring to them, must be able to disconnect large fault currents. A problem arises if the electricity supply is upgraded, by adding new generation capacity or by adding cross-connections. Because these increase the amount of power that can be supplied, all of the branch circuits must have their bus bars and circuit breakers upgraded to handle the new, higher fault current limit. This poses a particular problem when distributed generation, such as wind farms and rooftop solar power, is added to an existing electric grid. It is desirable to be able to add additional power sources without large system-wide upgrades. A simple solution is to add electrical impedance to the circuit. This limits the rate at which the current can increase, which limits the level the fault current can reach before the breaker is opened. However, it also limits the ability of the circuit to satisfy rapidly changing demand, so the addition or removal of large loads causes unstable power. A fault current limiter is a nonlinear element which has a low impedance at normal current levels but presents a higher impedance at fault current levels. Further, this change is extremely rapid, occurring before a circuit breaker can trip a few milliseconds later. (High-power circuit breakers are synchronized to the alternating current zero crossing to minimize arcing.) While the power is unstable during the fault, it is not completely disconnected. After the faulting branch is disconnected, the fault current limiter automatically returns to normal operation. Superconducting fault current limiter Superconducting fault current limiters exploit the extremely rapid loss of superconductivity (called "quenching") above a critical combination of temperature, current density, and magnetic field. In normal operation, current flows through the superconductor without resistance and with negligible impedance. If a fault develops, the superconductor quenches, its resistance rises sharply, and current is diverted to a parallel circuit with the desired higher impedance. (The structure is not usable as a circuit breaker, because the resistance of the quenched superconductive material is not high enough; it is only high enough that the resulting heating would melt the material.) Superconducting fault current limiters are described as being in one of two major categories: resistive or inductive. In a resistive FCL, the current passes directly through the superconductor. When it quenches, the sharp rise in resistance reduces the fault current from what it would otherwise be (the prospective fault current). A resistive FCL can be either DC or AC. If it is AC, then there will be a steady power dissipation from AC losses (superconducting hysteresis losses) which must be removed by the cryogenic system. An AC FCL is usually made from wire wound non-inductively; otherwise the inductance of the device would impose a constant extra impedance on the system.
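A minimal numerical sketch of the resistive FCL principle described above: the fault current is set by the total series impedance, so switching a quench resistance into the loop caps the current. All values are invented round numbers for illustration, not data for any real device:

```python
# Resistive FCL sketch: fault current = V / (Z_source + Z_limiter).
# All numbers are illustrative, not data for any real device.

V_LINE = 11_000.0    # V, assumed medium-voltage feeder
Z_SOURCE = 0.5       # ohm, assumed source + cable impedance

R_SC_NORMAL = 0.0    # ohm, superconducting state (negligible resistance)
R_SC_QUENCHED = 4.5  # ohm, assumed resistance after the quench

prospective = V_LINE / (Z_SOURCE + R_SC_NORMAL)
limited = V_LINE / (Z_SOURCE + R_SC_QUENCHED)

print(f"prospective fault current: {prospective / 1e3:.1f} kA")
print(f"limited fault current:     {limited / 1e3:.1f} kA")
# The quench drops the fault current from 22 kA to ~2.2 kA, a level
# that downstream breakers can interrupt safely.
```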
Inductive FCLs come in many variants, but the basic concept is a transformer with a resistive FCL as the secondary. In un-faulted operation, there is no resistance in the secondary and so the inductance of the device is low. A fault current quenches the superconductor, the secondary becomes resistive, and the inductance of the whole device rises. The advantage of this design is that there is no heat ingress through current leads into the superconductor, and so the cryogenic power load may be lower. However, the large amount of iron required means that inductive FCLs are much bigger and heavier than resistive FCLs. The first successful field test of an HTS FCL of this type was by SC Power Systems, a division of Zenergy Power plc, in 2009. The quench process is a two-step process. First, a small region quenches directly in response to a high current density. This section heats rapidly by Joule heating, and the increase in temperature quenches adjacent regions. GridON Ltd has developed the first commercial inductive FCL for distribution and transmission networks. Using a proprietary concept of magnetic-flux alteration (requiring no superconducting or cryogenic components), the self-triggered FCL instantaneously increases its impedance tenfold upon a fault condition. It limits the fault current for its entire duration and recovers to its normal condition immediately thereafter. This inductive FCL is scalable to extra-high-voltage ratings. Solid state fault current limiter Inductive fault current limiter Development of superconducting fault current limiters FCLs are under active development. In 2007, there were at least six national and international projects using magnesium diboride wire or YBCO tape, and two using BSCCO-2212 rods. Countries active in FCL development are Germany, the UK, the US, Korea and China. In 2007, the US Department of Energy spent $29m on three FCL development projects. High-temperature superconductors are required for practical FCLs: AC losses generate constant heat inside the superconductor, and the cost of cryogenic cooling at the liquid-helium temperatures required by low-temperature superconductors makes the whole device uneconomic. The first applications of FCLs are likely to be in controlling medium-voltage electricity distribution systems, followed by electric-drive ships: naval vessels, submarines and cruise ships. Larger FCLs may eventually be deployed in high-voltage transmission systems. See also Current limiting Power-system protection Superconductivity Magnesium diboride YBCO References External links Superconducting Fault Current Limiters UK Government 2007 Report on FCLs High-temperature superconductor fault current limiters: concepts, applications, and development status Fault Current Limiter and Their Types 2012: YBCO-tape FCL enters service in German private grid Over-current protection devices Superconductivity
Fault current limiter
[ "Physics", "Materials_science", "Engineering" ]
1,324
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
7,902,939
https://en.wikipedia.org/wiki/J-coupling
In nuclear chemistry and nuclear physics, J-couplings (also called spin-spin coupling or indirect dipole–dipole coupling) are mediated through chemical bonds connecting two spins. J-coupling is an indirect interaction between two nuclear spins that arises from hyperfine interactions between the nuclei and local electrons. In NMR spectroscopy, J-coupling contains information about relative bond distances and angles. Most importantly, J-coupling provides information on the connectivity of chemical bonds. It is responsible for the often complex splitting of resonance lines in the NMR spectra of fairly simple molecules. J-coupling is a frequency difference that is not affected by the strength of the magnetic field, so it is always stated in Hz. Vector model and manifestations for chemical structure assignments The origin of J-coupling can be visualized by a vector model for a simple molecule such as hydrogen fluoride (HF). In HF, the two nuclei have spin 1/2. Four states are possible, depending on the relative alignment of the H and F nuclear spins with the external magnetic field. The selection rules of NMR spectroscopy dictate that ΔI = 1, which means that a given photon (in the radio frequency range) can affect ("flip") only one of the two nuclear spins. J-coupling provides three parameters: the multiplicity (the "number of lines"), the magnitude of the coupling (strong, medium, weak), and the sign of the coupling. Multiplicity The multiplicity provides information on the number of centers coupled to the signal of interest, and on their nuclear spin. For simple systems, as in 1H–1H coupling in NMR spectroscopy, the multiplicity is one more than the number of adjacent protons which are magnetically nonequivalent to the protons of interest. For ethanol, each methyl proton is coupled to the two methylene protons, so the methyl signal is a triplet, while each methylene proton is coupled to the three methyl protons, so the methylene signal is a quartet. Nuclei with spins greater than 1/2, which are called quadrupolar, can give rise to greater splitting, although in many cases coupling to quadrupolar nuclei is not observed. Many elements consist of nuclei both with and without nuclear spin. In these cases, the observed spectrum is the sum of the spectra for each isotopomer. One of the great conveniences of NMR spectroscopy for organic molecules is that several important lighter spin-1/2 nuclei are either monoisotopic, e.g. 31P and 19F, or of very high natural abundance, e.g. 1H. An additional convenience is that 12C and 16O have no nuclear spin, so these nuclei, which are common in organic molecules, do not cause splitting patterns in NMR. Magnitude of J-coupling For 1H–1H coupling, the magnitude of J decreases rapidly with the number of bonds between the coupled nuclei, especially in saturated molecules. Generally speaking, two-bond coupling (i.e. 1H–C–1H) is stronger than three-bond coupling (1H–C–C–1H). The magnitude of the coupling also provides information on the dihedral angles relating the coupling partners, as described by the Karplus equation for three-bond coupling constants. For heteronuclear coupling, the magnitude of J is related to the nuclear magnetic moments of the coupling partners. 19F, with a high nuclear magnetic moment, gives rise to large couplings to protons. 103Rh, with a very small nuclear magnetic moment, gives only small couplings to 1H. To correct for the effect of the nuclear magnetic moment (or equivalently the gyromagnetic ratio γ), the "reduced coupling constant" K is often discussed, where K = 4π^2 J / (h γ1 γ2).
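The n+1 multiplicity rule above can be made concrete: for n equivalent spin-1/2 neighbors, the line intensities follow the binomial coefficients (Pascal's triangle). A small generic sketch of this standard first-order analysis, not tied to any particular spectrum:

```python
# First-order multiplet pattern for coupling to n equivalent spin-1/2
# nuclei: n+1 lines with binomial (Pascal's triangle) intensities.
from math import comb

def multiplet(n_neighbors):
    return [comb(n_neighbors, k) for k in range(n_neighbors + 1)]

# Ethanol example from the text:
print("CH3 next to CH2 (n=2):", multiplet(2))  # [1, 2, 1] -> triplet
print("CH2 next to CH3 (n=3):", multiplet(3))  # [1, 3, 3, 1] -> quartet
```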
For coupling of a 13C nucleus and a directly bonded proton, the dominant term in the coupling constant JC–H is the Fermi contact interaction, which is a measure of the s-character of the bond at the two nuclei. Where the external magnetic field is very low, e.g. in Earth's field NMR, the J-coupling signals, of the order of hertz, usually dominate the chemical shifts, which are of the order of millihertz and are not normally resolvable. Sign of J-coupling The value of each coupling constant also has a sign, and coupling constants of comparable magnitude often have opposite signs. If the coupling constant between two given spins is negative, the energy is lower when these two spins are parallel, and conversely if their coupling constant is positive. For a molecule with a single J-coupling constant, the appearance of the NMR spectrum is unchanged if the sign of the coupling constant is reversed, although spectral lines at given positions may represent different transitions. The simple NMR spectrum therefore does not indicate the sign of the coupling constant, and there is no simple way of predicting it. However, for some molecules with two distinct J-coupling constants, the relative signs of the two constants can be experimentally determined by a double resonance experiment. For example, in the diethylthallium ion (C2H5)2Tl+, this method showed that the methyl-thallium (CH3-Tl) and methylene-thallium (CH2-Tl) coupling constants have opposite signs. The first experimental method to determine the absolute sign of a J-coupling constant was proposed in 1962 by Buckingham and Lovering, who suggested the use of a strong electric field to align the molecules of a polar liquid. The field produces a direct dipolar coupling of the two spins, which adds to the observed J-coupling if their signs are parallel and subtracts from the observed J-coupling if their signs are opposed. (Buckingham A.D. and Lovering E.G., "Effects of a strong electric field on NMR spectra. The absolute sign of the spin coupling constant", Transactions of the Faraday Society, 58, 2077–2081 (1962), https://doi.org/10.1039/TF9625802077.) This method was first applied to 4-nitrotoluene, for which the J-coupling constant between two adjacent (ortho) ring protons was shown to be positive, because the splitting of the two peaks for each proton decreases with the applied electric field. Another way to align molecules for NMR spectroscopy is to dissolve them in a nematic liquid-crystal solvent. This method has also been used to determine the absolute sign of J-coupling constants. J-coupling Hamiltonian The Hamiltonian of a molecular system may be taken as H = D1 + D2 + D3, where D1 comprises the electron orbital–orbital, spin–orbital, spin–spin and electron-spin–external-field interactions, D2 the magnetic interactions between nuclear spin and electron spin, and D3 the direct interactions of the nuclei with each other. For a singlet molecular state and frequent molecular collisions, D1 and D3 are almost zero. The full form of the J-coupling interaction between spins Ij and Ik on the same molecule is H = 2π Ij · Jjk · Ik, where Jjk is the J-coupling tensor, a real 3 × 3 matrix. It depends on molecular orientation, but in an isotropic liquid it reduces to a number, the so-called scalar coupling. In 1D NMR, the scalar coupling leads to oscillations in the free induction decay as well as splittings of lines in the spectrum.
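The scalar form of the Hamiltonian above can be checked numerically for a weakly coupled two-spin-1/2 system: diagonalizing H = v1*Iz1 + v2*Iz2 + J*I1·I2 (working in Hz) and computing transition intensities reproduces a doublet of splitting J at each chemical shift. A minimal sketch with invented frequencies:

```python
# Two-spin-1/2 NMR Hamiltonian in Hz: H = v1*Iz1 + v2*Iz2 + J*(I1.I2).
# Diagonalize and compute line intensities via the total Ix operator.
# The frequencies v1, v2 and coupling J below are invented examples.
import numpy as np

Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
E = np.eye(2)

v1, v2, J = 400.0, 100.0, 7.0  # Hz, illustrative (weak coupling)

H = (v1 * np.kron(Iz, E) + v2 * np.kron(E, Iz)
     + J * (np.kron(Ix, Ix) + np.kron(Iy, Iy) + np.kron(Iz, Iz)))

vals, vecs = np.linalg.eigh(H)
Ix_tot = np.kron(Ix, E) + np.kron(E, Ix)

for i in range(4):
    for f in range(4):
        if vals[f] <= vals[i]:
            continue  # count each transition once
        intensity = abs(vecs[:, f].conj() @ Ix_tot @ vecs[:, i]) ** 2
        if intensity > 1e-6:  # only allowed single-quantum lines survive
            print(f"line at {vals[f] - vals[i]:8.3f} Hz, "
                  f"intensity {intensity:.3f}")
# Output: two lines near 100 Hz and two near 400 Hz, each pair
# separated by ~J = 7 Hz, i.e. two doublets.
```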
Decoupling By selective radio-frequency irradiation, NMR spectra can be fully or partially decoupled, eliminating or selectively reducing the coupling effect. Carbon-13 NMR spectra are often recorded with proton decoupling. History In September 1951, H. S. Gutowsky, D. W. McCall, and C. P. Slichter reported experiments on HPF6, CH3OPF2, and POCl2F, where they explained the presence of multiple resonance lines by an interaction of the form J I1·I2. Independently, in October 1951, E. L. Hahn and D. E. Maxwell reported a spin-echo experiment indicating the existence of an interaction between two protons in dichloroacetaldehyde. In the echo experiment, two short, intense pulses of radio-frequency magnetic field are applied to the spin ensemble at the nuclear resonance condition, separated by a time interval τ. The echo appears with a given amplitude at time 2τ. For each setting of τ, the maximum value of the echo signal is measured and plotted as a function of τ. If the spin ensemble consists of a single magnetic moment, a monotonic decay of the echo envelope is obtained. In the Hahn–Maxwell experiment, the decay was modulated by two frequencies: one frequency corresponded to the difference in chemical shift between the two non-equivalent spins, and the second, smaller frequency, J, was independent of the magnetic field strength (J = 0.7 Hz). Such an interaction came as a great surprise. The direct interaction between two magnetic dipoles depends on the relative position of the two nuclei in such a way that, when averaged over all possible orientations of the molecule, it equals zero. In November 1951, N. F. Ramsey and E. M. Purcell proposed a mechanism that explained the observation and gives rise to an interaction of the form I1·I2. The mechanism is the magnetic interaction between each nucleus and the electron spin of its own atom, together with the exchange coupling of the electron spins with each other. In the 1990s, direct evidence was found for the presence of J-couplings between magnetically active nuclei on both sides of the hydrogen bond. Initially, it was surprising to observe such couplings across hydrogen bonds, since J-couplings are usually associated with the presence of purely covalent bonds. However, it is now well established that H-bond J-couplings follow the same electron-mediated polarization mechanism as their covalent counterparts. Spin–spin coupling between non-bonded atoms in close proximity has sometimes been observed between fluorine, nitrogen, carbon, silicon and phosphorus atoms. See also Earth's field NMR (EFNMR) Exclusive correlation spectroscopy (ECOSY) Magnetic dipole–dipole interaction (dipolar coupling) Nuclear magnetic resonance (NMR) Nuclear magnetic resonance spectroscopy of carbohydrates Nuclear magnetic resonance spectroscopy of nucleic acids Nuclear magnetic resonance spectroscopy of proteins Proton NMR Relaxation (NMR) Residual dipolar coupling References Nuclear magnetic resonance
J-coupling
[ "Physics", "Chemistry" ]
2,151
[ "Nuclear magnetic resonance", "Nuclear physics" ]
7,903,176
https://en.wikipedia.org/wiki/Doubly%20fed%20electric%20machine
Doubly fed electric machines, also known as doubly fed induction generators (DFIGs) or slip-ring generators, are electric motors or electric generators where both the field magnet windings and armature windings are separately connected to equipment outside the machine. By feeding adjustable-frequency AC power to the field windings, the magnetic field can be made to rotate, allowing variation in motor or generator speed. This is useful, for instance, for generators used in wind turbines. Additionally, DFIG-based wind turbines offer the ability to control active and reactive power. Introduction Doubly fed electrical generators are similar to AC electrical generators, but have additional features which allow them to run at speeds slightly above or below their natural synchronous speed. This is useful for large variable-speed wind turbines, because wind speed can change suddenly. When a gust of wind hits a wind turbine, the blades try to speed up, but a synchronous generator is locked to the speed of the power grid and cannot speed up. Large forces therefore develop in the hub, gearbox, and generator as the power grid pushes back, causing wear and damage to the mechanism. If the turbine is allowed to speed up immediately when hit by a wind gust, the stresses are lower, and the power from the wind gust is still converted to useful electricity. One approach to allowing wind turbine speed to vary is to accept whatever frequency the generator produces, convert it to DC, and then convert it to AC at the desired output frequency using an inverter. This is common for small house and farm wind turbines. But the inverters required for megawatt-scale wind turbines are large and expensive. Doubly fed generators are another solution to this problem. Instead of the usual field winding fed with DC and an armature winding where the generated electricity comes out, there are two three-phase windings, one stationary and one rotating, both separately connected to equipment outside the generator; hence the term doubly fed is used for this kind of machine. One winding is directly connected to the output, and produces 3-phase AC power at the desired grid frequency. The other winding (traditionally called the field, but here both windings can be outputs) is connected to 3-phase AC power at variable frequency. This input power is adjusted in frequency and phase to compensate for changes in the speed of the turbine. Adjusting the frequency and phase requires an AC-to-DC-to-AC converter, usually constructed from very large IGBT semiconductors. The converter is bidirectional and can pass power in either direction; power can flow from this winding as well as from the output winding. History With its origins in wound-rotor induction motors with multiphase winding sets on the rotor and stator, respectively, which were invented by Nikola Tesla in 1888, the rotor winding set of the doubly fed electric machine is connected via multiphase slip rings to a selection of resistors for starting. However, the slip power was lost in the resistors, so means of increasing the efficiency in variable-speed operation by recovering the slip power were developed. In Krämer (or Kraemer) drives, the rotor was connected to an AC-and-DC machine set that fed a DC machine connected to the shaft of the slip-ring machine. Thus the slip power was returned as mechanical power, and the drive could be controlled by the excitation currents of the DC machines.
The drawback of the Krämer drive is that the machines need to be overdimensioned in order to cope with the extra circulating power. This drawback was corrected in the Scherbius drive, where the slip power is fed back to the AC grid by motor-generator sets. The rotating machinery used for the rotor supply was heavy and expensive; an improvement in this respect was the static Scherbius drive, in which the rotor is connected to a rectifier-inverter set, constructed first with mercury-arc devices and later with semiconductor diodes and thyristors. In schemes using a rectifier, power could flow only out of the rotor because of the uncontrolled rectifier; moreover, only sub-synchronous operation as a motor was possible. Another concept using a static frequency converter had a cycloconverter connected between the rotor and the AC grid. The cycloconverter can feed power in both directions, and thus the machine can run at both sub- and over-synchronous speeds. Large cycloconverter-controlled, doubly fed machines have been used to run single-phase generators feeding the 16 2/3 Hz railway grid in Europe. Cycloconverter-powered machines can also run the turbines in pumped-storage plants. Today the frequency changer used in applications up to a few tens of megawatts consists of two back-to-back connected IGBT inverters. Several brushless concepts have also been developed in order to eliminate the slip rings, which require maintenance. Doubly fed induction generator The doubly fed induction generator (DFIG) is a generating principle widely used in wind turbines. It is based on an induction generator with a multiphase wound rotor and a multiphase slip-ring assembly with brushes for access to the rotor windings. It is possible to avoid the multiphase slip-ring assembly, but there are problems with efficiency, cost and size. A better alternative is a brushless wound-rotor doubly fed electric machine. The principle of the DFIG is that the stator windings are connected to the grid while the rotor windings are connected, via slip rings, to a back-to-back voltage-source converter that controls both the rotor and the grid currents. Thus the rotor frequency can freely differ from the grid frequency (50 or 60 Hz). By using the converter to control the rotor currents, it is possible to adjust the active and reactive power fed to the grid from the stator independently of the generator's turning speed. The control principle used is either two-axis current vector control or direct torque control (DTC). DTC has turned out to have better stability than current vector control, especially when high reactive currents are required from the generator. Doubly fed generator rotors are typically wound with 2 to 3 times the number of turns of the stator. This means that the rotor voltages will be higher and the currents correspondingly lower. Thus, in the typical ±30% operational speed range around the synchronous speed, the rated current of the converter is accordingly lower, which leads to a lower cost of the converter. The drawback is that controlled operation outside the operational speed range is impossible because of the higher-than-rated rotor voltage. Furthermore, voltage transients due to grid disturbances (three- and two-phase voltage dips, especially) will also be magnified.
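The ±30% speed range and the converter-sizing argument above can be illustrated with the basic slip relations: the rotor-side electrical frequency is s times the grid frequency, and the power through the rotor circuit is roughly |s| times the stator power. A generic sketch with illustrative numbers:

```python
# DFIG slip relations (illustrative): rotor electrical frequency is
# s * f_grid, and rotor-circuit power is roughly |s| * stator power,
# which is why the converter handles only a fraction of the rating.

F_GRID = 50.0      # Hz
P_STATOR_MW = 2.0  # MW, assumed stator power of an example turbine

for slip in (-0.3, 0.0, 0.3):  # +/-30% range around synchronous speed
    f_rotor = abs(slip) * F_GRID
    p_rotor = abs(slip) * P_STATOR_MW
    print(f"slip {slip:+.0%}: rotor frequency {f_rotor:4.1f} Hz, "
          f"converter power ~{p_rotor:.1f} MW")

# At +/-30% slip the converter carries only ~0.6 MW of a 2 MW machine,
# i.e. ~30% of the rating, matching the cost argument in the text.
```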
In order to prevent high rotor voltages, and the high currents resulting from these voltages, from destroying the insulated-gate bipolar transistors and diodes of the converter, a protection circuit called a crowbar is used. The crowbar will short-circuit the rotor windings through a small resistance when excessive currents or voltages are detected. In order to be able to continue operation as quickly as possible, an active crowbar has to be used. The active crowbar can remove the rotor short in a controlled way, so that the rotor-side converter can be restarted only 20–60 ms after the start of the grid disturbance, provided the remaining voltage stays above 15% of the nominal voltage. Thus, it is possible to generate reactive current to the grid during the rest of the voltage dip, and in this way help the grid to recover from the fault. For zero-voltage ride-through, it is common to wait until the dip ends, because it is otherwise not possible to know the phase angle at which the reactive current should be injected. In summary, a doubly fed induction machine is a wound-rotor doubly fed electric machine and has several advantages over a conventional induction machine in wind power applications. First, as the rotor circuit is controlled by a power-electronics converter, the induction generator is able to both import and export reactive power. This has important consequences for power system stability and allows the machine to support the grid during severe voltage disturbances (low-voltage ride-through, LVRT). Second, the control of the rotor voltages and currents enables the induction machine to remain synchronized with the grid while the wind turbine speed varies. A variable-speed wind turbine utilizes the available wind resource more efficiently than a fixed-speed wind turbine, especially during light wind conditions. Third, the cost of the converter is low compared with other variable-speed solutions, because only a fraction of the mechanical power, typically 25–30%, is fed to the grid through the converter, the rest being fed to the grid directly from the stator. The efficiency of the DFIG is very good for the same reason. See also Variable-frequency transformer References External links Electric motors Electrical generators
Doubly fed electric machine
[ "Physics", "Technology", "Engineering" ]
1,842
[ "Electrical generators", "Machines", "Engines", "Electric motors", "Physical systems", "Electrical engineering" ]
7,904,047
https://en.wikipedia.org/wiki/Organ%20Care%20System
The Organ Care System (OCS) is a medical device designed by Transmedics to allow donor organs to be maintained for longer periods of time prior to transplant. The system mimics elements of human physiology and keeps organs in an environment, and at a temperature, similar to the human body. It allows organ preservation that lasts longer than the standard method of putting organs on ice (static cold storage), which can cause cold ischemia: when put on ice, organs begin to deteriorate about three to four hours after retrieval. By comparison, the Paragonix SherpaPak Cardiac Transport System can offer uniform cooling by suspending the donor heart in a preservation solution, and provides continuous temperature monitoring. Organ Care System Heart A portable system designed for "extracorporeal heart perfusion, assessment, and resuscitation", the OCS Heart allows the heart to remain beating through a process of retrograde perfusion, in which oxygenated donor blood moves through the coronary arteries via the aorta and returns to the circuit through the pulmonary artery. The Organ Care System has been successfully used in clinical practice and has demonstrated that it is able to prolong the ischemic time of donor hearts by perfusing the organ in a beating state at normothermia, while allowing the surgeon to assess the quality of the donor heart before surgery. Recently, it has been successfully used in high-risk heart recipients. Organ Care System Lung Referred to colloquially as "lung-in-a-box," the OCS Lung System, designed to mimic the human system, keeps the lungs "breathing" and blood pumping through the vessels, and maintains them at a body-like regulated temperature during transportation to the recipient. Organ Care System Liver The OCS Liver System was granted pre-market approval by the FDA in September 2021. The system is designed to preserve the organ by keeping it at temperatures similar to the human body, and maintains its function by perfusing the liver with blood, oxygen, and vital nutrients such as bile salts. The device also assists in monitoring and regulating the pressure and flow of the perfusion. References External links Transmedics homepage Medical devices Transplantation medicine
Organ Care System
[ "Biology" ]
445
[ "Medical devices", "Medical technology" ]
7,904,551
https://en.wikipedia.org/wiki/Magnetic%20dipole%E2%80%93dipole%20interaction
Magnetic dipole–dipole interaction, also called dipolar coupling, refers to the direct interaction between two magnetic dipoles. Roughly speaking, the magnetic field of a dipole falls off as the inverse cube of the distance, and the force of its magnetic field on another dipole goes as the first derivative of the magnetic field. It follows that the dipole–dipole force goes as the inverse fourth power of the distance. Suppose m1 and m2 are two magnetic dipole moments that are far enough apart that they can be treated as point dipoles in calculating their interaction energy. The potential energy of the interaction is then given by H = -(mu0 / (4 pi |r|^3)) [3 (m1 · r̂)(m2 · r̂) - m1 · m2] - (2 mu0 / 3) (m1 · m2) δ(r), where mu0 is the magnetic constant, r̂ is a unit vector parallel to the line joining the centers of the two dipoles, and |r| is the distance between the centers of m1 and m2. The last term, with the δ-function, vanishes everywhere but the origin, and is necessary to ensure that the divergence of the field, ∇·B, vanishes everywhere. Alternatively, suppose γ1 and γ2 are the gyromagnetic ratios of two particles with spin quanta I1 and I2. (Each such quantum is some integral multiple of 1/2.) Then H = -(mu0 γ1 γ2 ħ^2 / (4 pi |r|^3)) [3 (I1 · r̂)(I2 · r̂) - I1 · I2], where r̂ is a unit vector in the direction of the line joining the two spins, and |r| is the distance between them. Finally, the interaction energy can be expressed as the dot product of the moment of either dipole with the field from the other dipole: H = -m1 · B2(r1) = -m2 · B1(r2), where B2(r1) is the field that dipole 2 produces at dipole 1, and B1(r2) is the field that dipole 1 produces at dipole 2. It is not the sum of these terms. The force arising from the interaction between m1 and m2 is given by the gradient of this energy; for example, the force on dipole 2 is F2 = ∇(m2 · B1(r2)). A closed-form expression also exists for the Fourier transform of this interaction. Dipolar coupling and NMR spectroscopy The direct dipole–dipole coupling is very useful for molecular structural studies, since it depends only on known physical constants and the inverse cube of the internuclear distance. Estimation of this coupling provides a direct spectroscopic route to the distance between nuclei and hence to the geometrical form of the molecule, or additionally to intermolecular distances in the solid state, leading to NMR crystallography, notably in amorphous materials. For example, in water, the NMR spectra of the hydrogen atoms of water molecules are narrow lines because the dipole coupling is averaged out by chaotic molecular motion. In solids, where the water molecules are fixed in their positions and do not participate in diffusional mobility, the corresponding NMR spectra have the form of the Pake doublet. In solids with vacant positions, the dipole coupling is partially averaged due to water diffusion, which proceeds according to the symmetry of the solid and the probability distribution of molecules among the vacancies. Although internuclear magnetic dipole couplings contain a great deal of structural information, in isotropic solution they average to zero as a result of diffusion. However, their effect on nuclear spin relaxation results in measurable nuclear Overhauser effects (NOEs). The residual dipolar coupling (RDC) occurs if the molecules in solution exhibit a partial alignment, leading to an incomplete averaging of spatially anisotropic magnetic interactions, i.e. dipolar couplings. RDC measurement provides information on the global folding of a protein (long-distance structural information) as well as on "slow" dynamics in molecules. See also J-coupling Magic angle Residual dipolar coupling Nuclear Overhauser effect Magnetic moment Zero field splitting References Malcolm H. Levitt, Spin Dynamics: Basics of Nuclear Magnetic Resonance.
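To get a feel for the magnitudes behind the inverse-cube distance dependence discussed above, the sketch below evaluates the dipolar coupling constant d = (mu0/4pi) γ1 γ2 ħ / r^3 (in angular frequency units) for a pair of protons; the 2.0 Å separation is an assumed illustrative distance, not data from the article:

```python
# Order-of-magnitude dipolar coupling between two protons separated
# by an assumed distance of 2.0 angstroms (illustrative value).
import math

MU0_OVER_4PI = 1e-7  # T*m/A
GAMMA_H = 2.675e8    # rad/(s*T), proton gyromagnetic ratio
HBAR = 1.055e-34     # J*s

r = 2.0e-10          # m, assumed internuclear distance

d_rad_per_s = MU0_OVER_4PI * GAMMA_H**2 * HBAR / r**3
d_kHz = d_rad_per_s / (2 * math.pi) / 1e3

print(f"dipolar coupling constant ~ {d_kHz:.1f} kHz")
# ~15 kHz at 2.0 A; the 1/r^3 dependence drops this to ~1.9 kHz at
# 4.0 A, which is what makes dipolar coupling a sensitive distance ruler.
```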
Electromagnetism Magnetic moment Nuclear magnetic resonance
Magnetic dipole–dipole interaction
[ "Physics", "Chemistry", "Mathematics" ]
706
[ "Electromagnetism", "Physical phenomena", "Nuclear magnetic resonance", "Physical quantities", "Quantity", "Magnetic moment", "Fundamental interactions", "Nuclear physics", "Moment (physics)" ]
722,672
https://en.wikipedia.org/wiki/Fauna
Fauna (plural: faunae or faunas) is all of the animal life present in a particular region or time. The corresponding terms for plants and fungi are flora and funga, respectively. Flora, fauna, funga and other forms of life are collectively referred to as biota. Zoologists and paleontologists use fauna to refer to a typical collection of animals found in a specific time or place, e.g. the "Sonoran Desert fauna" or the "Burgess Shale fauna". Paleontologists sometimes refer to a sequence of faunal stages, which is a series of rocks all containing similar fossils. The study of the animals of a particular region is called faunistics. Etymology Fauna comes from the name Fauna, a Roman goddess of earth and fertility, the Roman god Faunus, and the related forest spirits called Fauns. All three words are cognates of the name of the Greek god Pan, and panis is the Modern Greek equivalent of fauna (πανίς, or rather πανίδα). Fauna is also the word for a book that catalogues the animals of a region in such a manner. The term was first used by Carl Linnaeus of Sweden in the title of his 1745 work Fauna Suecica. Subdivisions on the basis of region Cryofauna Cryofauna refers to the animals that live in, or very close to, cold areas. Cryptofauna Cryptofauna are the fauna that exist in protected or concealed microhabitats. Epifauna Epifauna, also called epibenthos, are aquatic animals that live on the bottom substratum rather than within it, that is, the benthic fauna that live on top of the sediment surface at the seafloor. Infauna Infauna are benthic organisms that live within the bottom substratum of a water body, especially within the bottom-most oceanic sediments, the layer of small particles at the bottom of a body of water, rather than on its surface. Bacteria and microalgae may also live in the interstices of bottom sediments. In general, infaunal animals become progressively smaller and less abundant with increasing water depth and distance from shore, whereas bacteria show more constancy in abundance, tending toward one million cells per milliliter of interstitial seawater. Such creatures are found in the fossil record and include lingulata, trilobites and worms. They made burrows in the sediment as protection and may also have fed upon detritus or the mat of microbes which tended to grow on the surface of the sediment. Today, a variety of organisms live in and disturb the sediment. The deepest burrowers are the ghost shrimps (Thalassinidea), which burrow deep into the sediment at the bottom of the ocean. Limnofauna Limnofauna refers to the animals that live in fresh water. Macrofauna Macrofauna are benthic or soil organisms which are retained on a 0.5 mm sieve. Studies in the deep sea define macrofauna as animals retained on a 0.3 mm sieve, to account for the small size of many of the taxa. Megafauna Megafauna are the large animals of any particular region or time, for example the Australian megafauna. Meiofauna Meiofauna are small benthic invertebrates that live in both marine and freshwater environments. The term meiofauna loosely defines a group of organisms by their size, larger than microfauna but smaller than macrofauna, rather than by a taxonomic grouping. One environment for meiofauna is between grains of damp sand (see Mystacocarida). In practice these are metazoan animals that can pass unharmed through a 0.5–1 mm mesh but will be retained by a 30–45 μm mesh, although the exact dimensions vary from researcher to researcher.
Whether an organism passes through a 1 mm mesh also depends upon whether it is alive or dead at the time of sorting. Mesofauna Mesofauna are macroscopic soil animals such as arthropods or nematodes. Mesofauna are extremely diverse; considering just the springtails (Collembola), as of 1998, approximately 6,500 species had been identified. Microfauna Microfauna are microscopic or very small animals (usually including protozoans and very small animals such as rotifers). To qualify as microfauna, an organism must exhibit animal-like characteristics, as opposed to microflora, which are more plant-like. Stygofauna Stygofauna is any fauna that lives in groundwater systems or aquifers, such as caves, fissures and vugs. Stygofauna and troglofauna are the two types of subterranean fauna (based on life-history). Both are associated with subterranean environments – stygofauna is associated with water, and troglofauna with caves and spaces above the water table. Stygofauna can live within freshwater aquifers and within the pore spaces of limestone, calcrete or laterite, whilst larger animals can be found in cave waters and wells. Stygofaunal animals, like troglofauna, are divided into three groups based on their life history - stygophiles, stygoxenes, and stygobites. Troglofauna Troglofauna are small cave-dwelling animals that have adapted to their dark surroundings. Troglofauna and stygofauna are the two types of subterranean fauna (based on life-history). Both are associated with subterranean environments – troglofauna is associated with caves and spaces above the water table and stygofauna with water. Troglofaunal species include spiders, insects, myriapods and others. Some troglofauna live permanently underground and cannot survive outside the cave environment. Troglofauna adaptations and characteristics include a heightened sense of hearing, touch and smell. Loss of under-used senses is apparent in the lack of pigmentation as well as eyesight in most troglofauna. Troglofauna insects may exhibit a lack of wings and longer appendages. Xenofauna Xenofauna, theoretically, are alien organisms that can be described as animal analogues. While no alien life forms, animal-like or otherwise, are known definitively, the concept of alien life remains a subject of great interest in fields like astronomy, astrobiology, biochemistry, evolutionary biology, science fiction, and philosophy. Other Other terms include avifauna, which means "bird fauna" and piscifauna (or ichthyofauna), which means "fish fauna". Treatises Classic faunas Linnaeus, Carolus. Fauna Suecica. 1746 See also Biodiversity Biome Ecology Ecosystem Environmental movement Fauna and Flora Preservation Society Gene pool Genetic erosion Genetic pollution Natural environment Soil zoology References External links "Biodiversity of Collembola and their functional role in the ecosystem" (by Josef Rusek; September 1998) Animal ecology Ecology terminology Organisms
Fauna
[ "Physics", "Biology" ]
1,476
[ "Ecology terminology", "Organisms", "Physical objects", "nan", "Matter" ]
722,849
https://en.wikipedia.org/wiki/Tuohy%20needle
A Tuohy (pronounced TOO-ee) needle is a hollow hypodermic needle, very slightly curved at the end, suitable for inserting epidural catheters. Epidural needle Literally, an epidural needle is simply a needle that is placed into the epidural space. To provide continuous epidural analgesia or anesthesia, a small hollow catheter may be threaded through the epidural needle into the epidural space, and left there while the needle is removed. There are multiple types of epidural needles as well as catheters, but in modern practice in developed nations, disposable materials are used to ensure sterility. Epidural needles are designed with a curved tip to help prevent puncture of the dural membrane. Following accidental dural puncture, however, headache occurs in up to 85% of patients, causing significant perioperative morbidity. In case of inadvertent dural perforation, the incidence of headache can be lowered by identifying the epidural space with the needle bevel oriented parallel to the longitudinal dural fibers, which limits the size of the subsequent dural tear. Types Types of epidural needles include: The Crawford Needle The Tuohy Needle The Hustead Needle The Weiss Needle The Sprotte Spezial Needle Other Epidural Needles: Other less popular types are the Wagner needle (1957), the Cheng needle (1958), the Crawley needle (1968), the Foldes needle (1973), and the Bell needle (1975), all variants of the Huber design with a blunted tip of varying sharpness, as well as variants like the Brace needle, a Crawford variant; the Lutz epidural needle (1963), with a pencil-point design for single-shot epidural use; the Scott needle (1985), a Tuohy needle with a Luer lock hub; and the Eldor needle (1993), designed for use with combined spinal epidural anesthesia. History Though Ralph L. Huber (1915–2006), a Seattle dentist, invented this needle in 1940, it is named after Edward Boyce Tuohy (1908–1959), the 20th-century U.S. anesthesiologist who first popularized it in 1945. References Further reading Medical equipment Neurology procedures
Tuohy needle
[ "Biology" ]
494
[ "Medical equipment", "Medical technology" ]
723,149
https://en.wikipedia.org/wiki/Gamete%20intrafallopian%20transfer
Gamete intrafallopian transfer (GIFT) is a tool of assisted reproductive technology against infertility. Eggs are removed from a woman's ovaries, and placed in one of the fallopian tubes, along with the man's sperm. The technique, first attempted by Steptoe and Edwards and later pioneered by endocrinologist Ricardo Asch, allows fertilization to take place inside the woman's fallopian tube. With advances in IVF, the GIFT procedure is used less often, as pregnancy rates with IVF tend to be equal or better and IVF does not require laparoscopy when the egg is put back. Method It takes, on average, four to six weeks to complete a cycle of GIFT. First, the woman must take a fertility drug to stimulate egg production in the ovaries. The doctor will monitor the growth of the ovarian follicles, and once they are mature, the woman will be injected with human chorionic gonadotropin (hCG). The eggs will be harvested approximately 36 hours later, mixed with the man's sperm, and placed into the woman's fallopian tubes using a laparoscope. Indications A woman must have at least one normal fallopian tube in order for GIFT to be suitable. It is used in instances where the fertility problem relates to sperm dysfunction, and where the couple has idiopathic (unknown cause) infertility. Some patients may prefer the procedure to IVF for ethical reasons, since the fertilization takes place inside the body. This is a semi-invasive procedure and requires laparoscopy. Success rate As with most fertility procedures, success depends on the couple's age and the woman's egg quality. It is estimated that approximately 25–30% of GIFT cycles result in pregnancy, with a third of those being twins or triplets, etc. The first GIFT baby in the UK was Todd Holden, born in October 1986. The first application of this method in Latin America took place in Argentina on 13 May 1986 and was led by Dr. Ricardo Asch; the treatment was successfully completed with the birth of Manuel Campo Lopez. In Venezuela, the first GIFT babies were Luis Hernández, Rosa Helena Hernández and Luisa Hernández, who were born on 24 June 1987 and were also the first triplets to be born using the GIFT method. Bioethical issues Gamete intrafallopian transfer is not technically in vitro fertilisation because with GIFT, fertilisation takes place inside the body, not on a petri dish. Some Catholic moral theologians are nevertheless concerned with it because they "consider this to be a replacement of the marital act, and therefore immoral." See also Zygote intrafallopian transfer References Assisted reproductive technology Fertility medicine Human reproduction
Gamete intrafallopian transfer
[ "Biology" ]
572
[ "Assisted reproductive technology", "Medical technology" ]
723,173
https://en.wikipedia.org/wiki/Obligate%20anaerobe
Obligate anaerobes are microorganisms killed by normal atmospheric concentrations of oxygen (20.95% O2). Oxygen tolerance varies between species, with some species capable of surviving in up to 8% oxygen, while others lose viability in environments with an oxygen concentration greater than 0.5%. Oxygen sensitivity The oxygen sensitivity of obligate anaerobes has been attributed to a combination of factors including oxidative stress and enzyme production. Oxygen can also damage obligate anaerobes in ways not involving oxidative stress. Because molecular oxygen contains two unpaired electrons in the highest occupied molecular orbital, it is readily reduced to superoxide (O2−) and hydrogen peroxide (H2O2) within cells. A reaction between these two products results in the formation of a free hydroxyl radical (OH•). Superoxide, hydrogen peroxide, and hydroxyl radicals are a class of compounds known as reactive oxygen species (ROS), highly reactive products that are damaging to microbes, including obligate anaerobes. Aerobic organisms produce superoxide dismutase and catalase to detoxify these products, but obligate anaerobes produce these enzymes in very small quantities, or not at all. The variability in oxygen tolerance of obligate anaerobes (<0.5 to 8% O2) is thought to reflect the quantity of superoxide dismutase and catalase being produced. In 1986, Carlioz and Touati performed experiments which support the idea that reactive oxygen species may be toxic to anaerobes. E. coli, a facultative anaerobe, was mutated by a deletion of superoxide dismutase genes. In the presence of oxygen, this mutation resulted in the inability to properly synthesize certain amino acids or use common carbon sources as substrates during metabolism. In the absence of oxygen, the mutated samples grew normally. In 2018, Lu et al. found that in Bacteroides thetaiotaomicron, an obligate anaerobe found in the mammalian digestive tract, exposure to oxygen results in increased levels of superoxide which inactivated important metabolic enzymes. Dissolved oxygen increases the redox potential of a solution, and high redox potential inhibits the growth of some obligate anaerobes. For example, methanogens grow at a redox potential lower than -0.3 V. Sulfide is an essential component of some enzymes, and molecular oxygen oxidizes this to form disulfide, thus inactivating certain enzymes (e.g. nitrogenase). Organisms may not be able to grow with these essential enzymes deactivated. Growth may also be inhibited due to a lack of reducing equivalents for biosynthesis because electrons are exhausted in reducing oxygen. Energy metabolism Obligate anaerobes convert nutrients into energy through anaerobic respiration or fermentation. In aerobic respiration, the pyruvate generated from glycolysis is converted to acetyl-CoA. This is then broken down via the TCA cycle and electron transport chain. Anaerobic respiration differs from aerobic respiration in that it uses an electron acceptor other than oxygen in the electron transport chain. Examples of alternative electron acceptors include sulfate, nitrate, iron, manganese, mercury, and carbon dioxide. Fermentation differs from anaerobic respiration in that the pyruvate generated from glycolysis is broken down without the involvement of an electron transport chain (i.e. there is no oxidative phosphorylation). Numerous fermentation pathways exist, such as lactic acid fermentation, mixed acid fermentation, and 2,3-butanediol fermentation, in which organic compounds are reduced to organic acids and alcohols. 
The energy yield of anaerobic respiration and fermentation (i.e. the number of ATP molecules generated) is less than in aerobic respiration. This is why facultative anaerobes, which can metabolise energy both aerobically and anaerobically, preferentially metabolise energy aerobically. This is observable when facultative anaerobes are cultured in thioglycolate broth. Ecology and examples Obligate anaerobes are found in oxygen-free environments such as the intestinal tracts of animals, the deep ocean, still waters, landfills, and deep sediments of soil. Examples of obligately anaerobic bacterial genera include Actinomyces, Bacteroides, Clostridium, Fusobacterium, Peptostreptococcus, Porphyromonas, Prevotella, Propionibacterium, and Veillonella. Clostridium species are endospore-forming bacteria, and can survive in atmospheric concentrations of oxygen in this dormant form. The remaining bacteria listed do not form endospores. Obligate anaerobes are also found in the digestive tracts of humans and other animals as well as in the first stomach of ruminants. Examples of obligately anaerobic fungal genera include the rumen fungi Neocallimastix, Piromonas, and Sphaeromonas. See also Aerobic respiration Fermentation Obligate aerobe Facultative anaerobe Microaerophile References Microbiology
Obligate anaerobe
[ "Chemistry", "Biology" ]
1,156
[ "Microbiology", "Microscopy" ]
724,577
https://en.wikipedia.org/wiki/Blood%20urea%20nitrogen
Blood urea nitrogen (BUN) is a medical test that measures the amount of urea nitrogen found in blood. The liver produces urea in the urea cycle as a waste product of the digestion of protein. Normal human adult blood should contain 7 to 18 mg/dL (2.5 to 6.4 mmol/L) of urea nitrogen. Individual laboratories may have different reference ranges, as they may use different assays. The test is used to detect kidney problems. It is not considered as reliable as creatinine or BUN-to-creatinine ratio blood studies. Interpretation BUN is an indication of kidney health. The normal range is 2.1–7.1 mmol/L or 6–20 mg/dL. The main causes of an increase in BUN are: high-protein diet, decrease in glomerular filtration rate (GFR) (suggestive of kidney failure), decrease in blood volume (hypovolemia), congestive heart failure, gastrointestinal hemorrhage, fever, rapid cell destruction from infections, athletic activity, excessive muscle breakdown, and increased catabolism. Hypothyroidism can cause both decreased GFR and hypovolemia, but BUN-to-creatinine ratio has been found to be lowered in hypothyroidism and raised in hyperthyroidism. The main causes of a decrease in BUN are malnutrition (low-protein diet), severe liver disease, anabolic state, and syndrome of inappropriate antidiuretic hormone. Another rare cause of a decreased BUN is ornithine transcarbamylase deficiency, which is a genetic disorder inherited in an X-linked recessive pattern. OTC deficiency is also accompanied by hyperammonemia and high orotic acid levels. Units BUN is usually reported in mg/dL in some countries (e.g. United States, Mexico, Italy, Austria, and Germany). Elsewhere, the concentration of urea is reported in SI units as mmol/L. BUN represents the mass of nitrogen within urea per volume, not the mass of whole urea. Each molecule of urea has two nitrogen atoms, each having molar mass 14 g/mol, so each millimole of urea corresponds to 28 mg of urea nitrogen. To convert from mg/dL of blood urea nitrogen to mmol/L of urea: urea [mmol/L] = BUN [mg/dL] × 10 / 28 = BUN [mg/dL] / 2.8. Note that molar concentrations of urea and urea nitrogen are equal, because both nitrogen gas and urea have two nitrogen atoms. Convert BUN to urea in mg/dL by using the following formula: urea [mg/dL] = BUN [mg/dL] × 60 / 28 ≈ BUN [mg/dL] × 2.14, where 60 is the molar mass of urea and 28 (14 × 2) the molar mass of urea nitrogen. See also Kt/V Standardized Kt/V Urea reduction ratio Urine urea nitrogen References Chemical pathology Diagnostic nephrology Nitrogen cycle
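The unit conversions above are simple enough to capture in a few lines of code. The following Python sketch is illustrative only (the function names are ours, not from any clinical library); it implements exactly the 10/28 and 60/28 factors derived above:

UREA_MOLAR_MASS = 60.0  # g/mol for urea, CH4N2O
UREA_N_MASS = 28.0      # g/mol of nitrogen per urea molecule (2 x 14)

def bun_mgdl_to_urea_mmoll(bun_mgdl: float) -> float:
    """Convert blood urea nitrogen in mg/dL to urea concentration in mmol/L."""
    # mg/dL -> mg/L is a factor of 10; dividing by 28 mg/mmol gives mmol/L
    return bun_mgdl * 10.0 / UREA_N_MASS

def bun_mgdl_to_urea_mgdl(bun_mgdl: float) -> float:
    """Convert blood urea nitrogen in mg/dL to whole-urea mass in mg/dL."""
    return bun_mgdl * UREA_MOLAR_MASS / UREA_N_MASS

print(bun_mgdl_to_urea_mmoll(18.0))  # ~6.4 mmol/L, the upper end of the normal range
print(bun_mgdl_to_urea_mgdl(18.0))   # ~38.6 mg/dL of whole urea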
Blood urea nitrogen
[ "Chemistry", "Biology" ]
577
[ "Biochemistry", "Chemical pathology", "Nitrogen cycle", "Metabolism" ]
724,723
https://en.wikipedia.org/wiki/Diltiazem
Diltiazem, sold under the brand name Cardizem among others, is a nondihydropyridine calcium channel blocker medication used to treat high blood pressure, angina, and certain heart arrhythmias. It may also be used in hyperthyroidism if beta blockers cannot be used. It is taken by mouth or given by injection into a vein. When given by injection, effects typically begin within a few minutes and last a few hours. Common side effects include swelling, dizziness, headaches, and low blood pressure. More severe side effects include an overly slow heartbeat, heart failure, liver problems, and allergic reactions. Use is not recommended during pregnancy. It is unclear if use when breastfeeding is safe. Diltiazem works by relaxing the smooth muscle in the walls of arteries, causing them to open and allowing blood to flow more easily. Additionally, it acts on the heart to prolong the period until it can beat again. It does this by blocking the entry of calcium into the cells of the heart and blood vessels. It is a class IV antiarrhythmic. Diltiazem was approved for medical use in the United States in 1982. It is available as a generic medication. In 2022, it was the 100th most commonly prescribed medication in the United States, with more than 6 million prescriptions. An extended release formulation is also available. Medical uses Diltiazem is indicated for: Stable angina (exercise-induced) – diltiazem increases coronary blood flow and decreases myocardial oxygen consumption, secondary to decreased peripheral resistance, heart rate, and contractility. Variant angina – it is effective owing to its direct effects on coronary dilation. Unstable angina (preinfarction, crescendo) – diltiazem may be particularly effective if the underlying mechanism is vasospasm. Myocardial bridge For supraventricular tachycardias (PSVT), diltiazem appears to be as effective as verapamil in treating re-entrant supraventricular tachycardia. Atrial fibrillation or atrial flutter is another indication. The initial bolus should be 0.25 mg/kg, intravenous (IV). Because of its vasodilatory effects, diltiazem is useful for treating hypertension. Calcium channel blockers are well tolerated, and especially effective in treating low-renin hypertension. It is also used as topical application for anal fissures because it promotes healing due to its vasodilatory property. Contraindications and precautions In congestive heart failure, patients with reduced ventricular function may not be able to counteract the negative inotropic and chronotropic effects of diltiazem, the result being an even higher compromise of function. With SA node or AV conduction disturbances, the use of diltiazem should be avoided in patients with SA or AV nodal abnormalities, because of its negative chronotropic and dromotropic effects. Low blood pressure patients, with systolic blood pressures below 90 mm Hg, should not be treated with diltiazem. Diltiazem may paradoxically increase ventricular rate in patients with Wolff-Parkinson-White syndrome because of accessory conduction pathways. Diltiazem is relatively contraindicated in the presence of sick sinus syndrome, atrioventricular node conduction disturbances, bradycardia, impaired left ventricle function, peripheral artery occlusive disease, and chronic obstructive pulmonary disease. Side effects A reflex sympathetic response, caused by the peripheral dilation of vessels and the resulting drop in blood pressure, works to counteract the negative inotropic, chronotropic and dromotropic effects of diltiazem. 
Undesirable effects include hypotension, bradycardia, dizziness, flushing, fatigue, headaches and edema. Rare side effects are congestive heart failure, myocardial infarction, and hepatotoxicity. Drug interactions Because of its inhibition of hepatic cytochromes CYP3A4, CYP2C9 and CYP2D6, there are a number of drug interactions. Some of the more important interactions are listed below. Beta-blockers Intravenous diltiazem should be used with caution with beta-blockers because, while the combination is most potent at reducing heart rate, there are rare instances of dysrhythmia and AV node block. Quinidine Quinidine should not be used concurrently with calcium channel blockers because of reduced clearance of both drugs and potential pharmacodynamic effects at the SA and AV nodes. Fentanyl Caution is required when fentanyl is used concurrently with diltiazem, or any other CYP3A4 inhibitor, as these medications decrease the breakdown of fentanyl and thus increase its effects. Mechanism of action Diltiazem, also known as (2S,3S)-3-acetoxy-5-[2-(dimethylamino)ethyl]-2,3-dihydro-2-(4-methoxyphenyl)-1,5-benzothiazepin-4(5H)-one hydrochloride, has a vasodilating activity attributed to the (2S,3S)-isomer. Diltiazem is a potent vasodilator, increasing blood flow and variably decreasing the heart rate via strong depression of A-V node conduction. It binds to the alpha-1 subunit of L-type calcium channels in a fashion somewhat similar to verapamil, another nondihydropyridine (non-DHP) calcium channel blocker. Chemically, it is based upon a 1,4-thiazepine ring, making it a benzothiazepine-type calcium channel blocker. It is a potent vasodilator of coronary vessels and a mild vasodilator of peripheral vessels, which reduces peripheral resistance and afterload, though it is not as potent as the dihydropyridine (DHP) calcium channel blockers. This results in minimal reflexive sympathetic changes. Diltiazem has negative inotropic, chronotropic, and dromotropic effects. This means diltiazem causes a decrease in heart muscle contractility (how strong the beat is), a lowering of heart rate (due to slowing of the sinoatrial node), and a slowing of conduction through the atrioventricular node (increasing the time needed for each beat). Each of these effects results in reduced oxygen consumption by the heart, reducing angina symptoms, typically in unstable angina. These effects also reduce blood pressure by causing less blood to be pumped out. Research Diltiazem is prescribed off-label by doctors in the US for prophylaxis of cluster headaches. Some research on diltiazem and other calcium channel antagonists in the treatment and prophylaxis of migraine is ongoing. Recent research has shown diltiazem may reduce cocaine cravings in drug-addicted rats. This is believed to be due to the effects of calcium blockers on dopaminergic and glutamatergic signaling in the brain. Diltiazem also enhances the analgesic effect of morphine in animal tests, without increasing respiratory depression, and reduces the development of tolerance. Diltiazem is also being used in the treatment of anal fissures. It can be taken orally or applied topically with increased effectiveness. When applied topically, it is made into a cream form using either petrolatum or Phlojel. Phlojel absorbs the diltiazem into the problem area better than the petrolatum base. It has good short-term success rates. 
References Calcium channel blockers CYP2D6 inhibitors CYP3A4 inhibitors Benzothiazepines 4-Methoxyphenyl compounds Drugs developed by AbbVie Drugs developed by Merck Lactams Acetate esters Chemical substances for emergency medicine
Diltiazem
[ "Chemistry" ]
1,716
[ "Chemicals in medicine", "Chemical substances for emergency medicine" ]
724,926
https://en.wikipedia.org/wiki/Tiller
A tiller or till is a lever used to steer a vehicle. The mechanism is primarily used in watercraft, where it is attached to an outboard motor, rudder post or stock to provide leverage in the form of torque for the helmsman to turn the rudder. A tiller may also be used in vehicles outside of water, and was seen in early automobiles. On vessels, a tiller can be used by the helmsman directly pulling or pushing it, but it may also be moved remotely using tiller lines or a ship's wheel. Rapid or excessive movement of the tiller results in an increase in drag and will result in braking or slowing the boat. Description A tiller is a lever used to steer a vehicle. It provides leverage in the form of torque to turn the device that changes the direction of the vehicle, such as a rudder on a watercraft or the surface wheels on a wheeled vehicle. A tiller can be used by directly pulling or pushing it, but it may also be moved remotely using a whipstaff, tiller lines, or a ship's wheel, among other variations. (For example, some kayaks have foot pedals that turn a tiller.) Tillers on outboard motors often employ an additional control mechanism where twisting of the shaft is used to vary speed. Watercraft Rudder control In watercraft, the tiller may be attached to a rudder post (American terminology) or rudder stock (English terminology) that provides leverage in the form of torque to turn the rudder. In steering a boat, the tiller is always moved in the direction opposite to that in which the bow of the boat is to move. If the tiller is moved to port side (left), the bow will turn to starboard (right). If the tiller is moved to starboard (right), the bow will turn port (left). Sailing students often learn the alliterative phrase "Tiller Towards Trouble" to remind them of how to steer. Rapid or excessive movement of the tiller results in an increase in drag and will result in braking or slowing the boat. In the early 1500s the tiller was also referred to as the steering stick. Engine control Some outboard motors may instead have the tiller directly attached and offer controls for engine throttle and prop rotation for forward and reverse. Tiller orders Until the current international standards for giving steering orders were defined by the SOLAS Convention of 1929, it was common for steering orders on ships to be given as "Tiller Orders", which dictated to which side of the vessel the tiller was to be moved. Since the tiller is forward of the rudder's pivot point, and the rudder aft of it, the tiller's movement is reversed at the rudder, giving the impression that orders were given "the wrong way round". For example, to turn a ship to port (its left side), the helmsman would be given the order "starboard helm" or "x degrees starboard". The ship's tiller was then moved to starboard, turning the rudder to the vessel's port side, producing a turn to port. The opposite convention applied in France (where tribord—starboard—meant turn to starboard), but Austria and Italy kept to the English system. There was no standardisation in vessels from Scandinavian countries, where the practice varied from ship to ship. Most French vessels with steering wheels had their steering chains reversed and when under the command of a British pilot this could result in confusion. When large steamships appeared in the late 19th century with telemotors hydraulically connecting the wheel on the bridge to the steering gear at the stern, the practice continued. 
However, the helmsman was now no longer directly controlling the tiller, and the ship's wheel was simply turned in the desired direction (turn the wheel to port and the ship will go to port). Tiller Orders remained however; although many maritime nations had abandoned the convention by the end of the 19th century, Britain retained it until 1933 and the U.S. merchant marine until 1935. One of the reasons for this system continuing, apart from it being a long-established maritime tradition, was that it provided consistency—regardless of whether a vessel was steered directly by the tiller or remotely by a wheel, every vessel had a tiller of some sort and so a tiller order remained true for any vessel. During the transition period the wording of the order was changed, to specify "Wheel to starboard" or "Wheel to port". A well-known and often-depicted example occurred on the RMS Titanic in 1912 just before she collided with an iceberg. When the iceberg appeared directly in front of the ship, her officer-of-the-watch, First Officer William Murdoch, decided to attempt to clear the iceberg by swinging the ship to its port side. He ordered "Hard-a-Starboard", which was a Tiller Order directing the helmsman to turn the wheel to port (anti-clockwise) as far as it would go. The Titanic's steering gear then pushed the tiller toward the starboard side of the ship, swinging the rudder over to port and causing the vessel to turn to port. These actions are faithfully portrayed in the 1997 film of the disaster. Although frequently described as an error, the order was given and executed correctly; the vessel struck the iceberg anyway. However, according to the granddaughter of the highest-ranking officer to survive the sinking, Second Officer Charles Lightoller, the order was not correctly executed. Quartermaster Hichens, who had been trained under Rudder Orders, mistakenly turned the wheel to starboard. It took two minutes to recognise and correct the error, by which time it was too late to avoid collision with the iceberg. Louise Patten makes the statement in an endnote to her fictional story, Good as Gold. Although this system seems confusing and contradictory today, to generations of sailors trained on sailing vessels with tiller steering it seemed perfectly logical and was understood by all seafarers. Only when new generations of sailors trained on ships with wheel-and-tiller steering came into the industry was the system replaced. Other vehicles Landcraft The first automobiles were steered with a tiller, which angled the wheels to steer the vehicle. A steering wheel was first used in Europe in 1894 and became standard on French Panhard cars in 1898. Arthur Constantin Krebs replaced the tiller with an inclined steering wheel for the Panhard & Levassor car he designed for the Paris-Amsterdam race which ran from 7–13 July 1898. In the US, Packard introduced a steering wheel on the second car they built, in 1899. By early in the next century, the steering wheel had nearly replaced the tiller in automobiles. However, some automobiles still used tillers into the teens, such as Rauch & Lang Carriage Co., a manufacturer of electric automobiles in Cleveland, Ohio. Lanchester in England also offered tiller steering later than many car manufacturers. Today, tractor-drawn semi-trailers for ladder trucks are called tiller trucks and use a "tiller" (rear steering axle) driver to control the trailer where the aerial ladder is located. Some recumbent bicycles employ tiller steering. 
The tiller of the electric three-wheeler TWIKE – called Joystick – includes buttons for acceleration and electric braking. Mobility scooters are usually fitted with a tiller. Aircraft Most large, transport category airplanes use a device known as a tiller to steer the airplane while taxiing. This usually takes the form of a small steering wheel or lever in the cockpit, often one for the pilot and one for the co-pilot. However, they differ from the tiller on a ship. Rather than move the rudder, the tiller on an airplane steers by turning the nose wheel, and the tiller is moved in the direction of the turn, rather than opposite the turn as on a ship. See also Ship's wheel Steering engine References Control devices Sailboat components Sailing ship components Watercraft components
Tiller
[ "Engineering" ]
1,646
[ "Control devices", "Control engineering" ]
724,958
https://en.wikipedia.org/wiki/Thorium%20dioxide
Thorium dioxide (ThO2), also called thorium(IV) oxide, is a crystalline solid, often white or yellow in colour. Also known as thoria, it is mainly a by-product of lanthanide and uranium production. Thorianite is the name of the mineralogical form of thorium dioxide. It is moderately rare and crystallizes in an isometric system. The melting point of thorium oxide is 3300 °C – the highest of all known oxides. Only a few elements (including tungsten and carbon) and a few compounds (including tantalum carbide) have higher melting points. All thorium compounds, including the dioxide, are radioactive because there are no stable isotopes of thorium. Structure and reactions Thoria exists as two polymorphs. One has a fluorite crystal structure. This is uncommon among binary dioxides. (Other binary oxides with fluorite structure include cerium dioxide, uranium dioxide and plutonium dioxide.) The band gap of thoria is about 6 eV. A tetragonal form of thoria is also known. Thorium dioxide is more stable than thorium monoxide (ThO). Only with careful control of reaction conditions can oxidation of thorium metal give the monoxide rather than the dioxide. At extremely high temperatures, the dioxide can convert to the monoxide either by a disproportionation reaction (equilibrium with liquid thorium metal) or, at still higher temperatures, by simple dissociation (evolution of oxygen). Applications Nuclear fuels Thorium dioxide (thoria) can be used in nuclear reactors as ceramic fuel pellets, typically contained in nuclear fuel rods clad with zirconium alloys. Thorium is not fissile (but is "fertile", breeding fissile uranium-233 under neutron bombardment); hence, it must be used as a nuclear reactor fuel in conjunction with fissile isotopes of either uranium or plutonium. This can be achieved by blending thorium with uranium or plutonium, or using it in its pure form in conjunction with separate fuel rods containing uranium or plutonium. Thorium dioxide offers advantages over conventional uranium dioxide fuel pellets, because of its higher thermal conductivity (lower operating temperature), considerably higher melting point, and chemical stability (does not oxidize in the presence of water/oxygen, unlike uranium dioxide). Thorium dioxide can be turned into a nuclear fuel by breeding it into uranium-233 (see below and refer to the article on thorium for more information on this). The high thermal stability of thorium dioxide allows applications in flame spraying and high-temperature ceramics. Alloys Thorium dioxide is used as a stabilizer in tungsten electrodes in TIG welding, electron tubes, and aircraft gas turbine engines. As an alloy, thoriated tungsten metal is not easily deformed because the high-fusion material thoria augments the high-temperature mechanical properties, and thorium helps stimulate the emission of electrons (thermions). It is the most popular oxide additive because of its low cost, but is being phased out in favor of non-radioactive elements such as cerium, lanthanum and zirconium. Thoria-dispersed nickel finds its applications in various high-temperature operations like combustion engines because it is a good creep-resistant material. It can also be used for hydrogen trapping. Catalysis Thorium dioxide has almost no value as a commercial catalyst, but such applications have been well investigated. It is a catalyst in the Ruzicka large ring synthesis. Other applications that have been explored include petroleum cracking, conversion of ammonia to nitric acid and preparation of sulfuric acid. 
Radiocontrast agents Thorium dioxide was the primary ingredient in Thorotrast, a once-common radiocontrast agent used for cerebral angiography; however, it causes a rare form of cancer (hepatic angiosarcoma) many years after administration. This use was replaced with injectable iodine or ingestible barium sulfate suspension as standard X-ray contrast agents. Lamp mantles Another major use in the past was in the gas mantles of lanterns developed by Carl Auer von Welsbach in 1890, which are composed of 99% ThO2 and 1% cerium(IV) oxide. Even as late as the 1980s it was estimated that about half of all ThO2 produced (several hundred tonnes per year) was used for this purpose. Some mantles still use thorium, but yttrium oxide (or sometimes zirconium oxide) is used increasingly as a replacement. Glass manufacture When added to glass, thorium dioxide helps increase its refractive index and decrease dispersion. Such glass finds application in high-quality lenses for cameras and scientific instruments. The radiation from these lenses can darken them and turn them yellow over a period of years and degrade film, but the health risks are minimal. Yellowed lenses may be restored to their original colourless state by lengthy exposure to intense ultraviolet radiation. Thorium dioxide has since been replaced by rare-earth oxides such as lanthanum oxide in almost all modern high-index glasses, as they provide similar effects and are not radioactive. References Cited sources Hepatotoxins Oxides Thorium(IV) compounds Refractory materials Fluorite crystal structure
Thorium dioxide
[ "Physics", "Chemistry" ]
1,090
[ "Refractory materials", "Oxides", "Salts", "Materials", "Matter" ]
725,441
https://en.wikipedia.org/wiki/Atwood%20machine
The Atwood machine (or Atwood's machine) was invented in 1784 by the English mathematician George Atwood as a laboratory experiment to verify the mechanical laws of motion with constant acceleration. Atwood's machine is a common classroom demonstration used to illustrate principles of classical mechanics. The ideal Atwood machine consists of two objects of mass m1 and m2, connected by an inextensible massless string over an ideal massless pulley. Both masses experience uniform acceleration. When m1 = m2, the machine is in neutral equilibrium regardless of the position of the weights. Equation for constant acceleration An equation for the acceleration can be derived by analyzing forces. Assuming a massless, inextensible string and an ideal massless pulley, the only forces to consider are: tension force (T), and the weight of the two masses (W1 and W2). To find an acceleration, consider the forces affecting each individual mass. Using Newton's second law (with a sign convention of m1 > m2), derive a system of equations for the acceleration (a). As a sign convention, assume that a is positive when downward for m1 and upward for m2. Weight of m1 and m2 is simply W1 = m1 g and W2 = m2 g respectively. Forces affecting m1: m1 g − T = m1 a. Forces affecting m2: T − m2 g = m2 a. Adding the two previous equations yields m1 g − m2 g = m1 a + m2 a, and the concluding formula for acceleration is a = g (m1 − m2) / (m1 + m2). The Atwood machine is sometimes used to illustrate the Lagrangian method of deriving equations of motion. See also Notes External links A treatise on the rectilinear motion and rotation of bodies; with a description of original experiments relative to the subject by George Atwood, 1784. Drawings appear on page 450. Professor Greenslade's account on the Atwood Machine Atwood's Machine by Enrique Zeleny, The Wolfram Demonstrations Project Mechanics Physics experiments
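As a check on the derivation above, the closed-form results are easy to evaluate numerically. The Python sketch below is illustrative only (the function names are ours); the tension formula T = 2 m1 m2 g / (m1 + m2) follows by substituting the acceleration back into either force equation:

G = 9.81  # m/s^2, standard gravity

def atwood_acceleration(m1: float, m2: float) -> float:
    """Common acceleration of the masses; positive when m1 descends."""
    return G * (m1 - m2) / (m1 + m2)

def atwood_tension(m1: float, m2: float) -> float:
    """String tension, from substituting a back into T - m2*g = m2*a."""
    return 2.0 * G * m1 * m2 / (m1 + m2)

# Example: masses of 1.1 kg and 1.0 kg give a ~0.47 m/s^2 and T ~ 10.3 N,
# and equal masses give a = 0 (neutral equilibrium), as stated above.
print(atwood_acceleration(1.1, 1.0), atwood_tension(1.1, 1.0))
print(atwood_acceleration(1.0, 1.0))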
Atwood machine
[ "Physics", "Engineering" ]
345
[ "Mechanics", "Physics experiments", "Mechanical engineering", "Experimental physics" ]
725,585
https://en.wikipedia.org/wiki/10%20Hygiea
10 Hygiea is a major asteroid located in the main asteroid belt. With a mean diameter of roughly 430 km and a mass estimated to be 3% of the total mass of the belt, it is the fourth-largest asteroid in the Solar System by both volume and mass, and is the largest of the C-type asteroids (dark asteroids with a carbonaceous surface) in classifications that use G type for 1 Ceres. It is very close to spherical, apparently because it had re-accreted after the disruptive impact that produced the large Hygiean family of asteroids. Observation Despite its size, Hygiea appears very dim when observed from Earth. This is due to its dark surface and its position in the outer main belt. For this reason, six smaller asteroids were observed before Annibale de Gasparis discovered Hygiea on 12 April 1849. At most oppositions, Hygiea has a magnitude that is four magnitudes dimmer than Vesta's, and observing it typically requires at least a small telescope. However, at a perihelic opposition, it can be observed with just 10×50 binoculars, as Hygiea would then have a magnitude of +9.1. Discovery and name On 12 April 1849, in Naples, Italy, astronomer Annibale de Gasparis (age 29) discovered Hygiea. It was the first of his nine asteroid discoveries. The director of the Naples observatory, Ernesto Capocci, named the asteroid. He chose to call it Igea Borbonica ("Bourbon Hygieia"), after the Greek goddess of health, daughter of Asclepius, and in honor of the ruling family of the Kingdom of the Two Sicilies where Naples was located. In 1852, John Russell Hind wrote that "it is universally termed Hygiea, the unnecessary appendage 'Borbonica' being dropped." The English form is an irregular spelling of Greek Hygieia or Hygeia (Latin Hygea or Hygia). Symbol The intended astronomical symbol for Hygiea was a zeta-shaped serpent crowned with a star, in the pipeline for Unicode 17.0 as U+1F779 🝹; the serpent and a serpent drinking from a bowl are traditional symbols of the goddess Hygieia (cf. U+1F54F 🕏). In later years it was substituted with a rod of Asclepius (a serpent twined around a staff, U+2695 ⚕), confusing Hygieia with her masculine counterpart. These symbols are now both largely obsolete. In this century, 10 Hygiea has seen some minor astrological use, and its symbol was confused once again, with Asclepius's rod replaced by Mercury's caduceus, though in a more elaborate form (U+2BDA ⯚) than the symbol of the planet Mercury. The caduceus has long been mistaken for the rod of Asclepius. Physical characteristics Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018 revealed that Hygiea is nearly spherical and is close to a hydrostatic equilibrium shape. Based on spectral evidence, Hygiea's surface is thought to consist of primitive carbonaceous materials similar to those found in carbonaceous chondrite meteorites. Aqueous alteration products have been detected on its surface, which could indicate the presence of water ice in the past which was heated sufficiently to melt. The primitive present surface composition indicates that Hygiea had not melted during the early period of Solar System formation. However, observations suggest Hygiea suffered a major collision early in its history that completely disrupted it, with its present spherical shape due to re-accretion of the disrupted material. No deep basins are visible in VLT images, indicating that any large craters that formed after re-accretion must have flat floors, consistent with an icy C-type composition. 
In images taken with the VLT in 2017, a bright surface feature is visible, as well as at least two dark craters, which have been informally named Serpens and Calix after the Latin words for 'snake' and 'cup', respectively. Serpens has a diameter of 180 km, Calix of 90 km. Hygiea is the largest of the class of dark C-type asteroids that are dominant in the outer asteroid belt, beyond the Kirkwood gap at 2.82 AU. Its mean diameter is about 430 km. Hygiea is close to spherical, with an axis ratio consistent with a Maclaurin ellipsoid. Aside from being the smallest of the "big four", Hygiea has a relatively low density of about 1.94 g/cm³, comparable to Ceres (2.16) and the larger icy satellites of the Solar System (Ganymede 1.94, Callisto 1.83, Titan 1.88, Triton 2.06) rather than to Pallas or Vesta (3.45). Although it is the largest body in its region, due to its dark surface and farther-than-average distance from the Sun, Hygiea appears very dim when observed from Earth. In fact, it is the third dimmest of the first twenty-three asteroids discovered, with only 13 Egeria and 17 Thetis having lower mean opposition magnitudes. At most oppositions, Hygiea has a magnitude of around +10.2, which is as much as four magnitudes fainter than Vesta, and observation calls for at least a small telescope. However, at a perihelic opposition, Hygiea can reach +9.1 magnitude and may just be visible with 10 × 50 binoculars, unlike the next two largest asteroids in the asteroid belt, 704 Interamnia and 511 Davida, which are always beyond binocular visibility. A total of 17 stellar occultations by Hygiea have been tracked by Earth-based astronomers, including two (in 2002 and 2014) that were seen by a large number of observers. The observations have been used to constrain Hygiea's size, shape and rotation axis. The Hubble Space Telescope has resolved the asteroid and ruled out the presence of any large orbiting companions. Orbit and rotation Orbiting at an average of 3.14 AU from the Sun, Hygiea is the most distant of the "big four" asteroids. It lies closer to the ecliptic as well, with an orbital inclination of 4°. Its orbit is less circular than those of Ceres or Vesta, with an eccentricity of around 0.12. Its perihelion is at a quite similar longitude to those of Vesta and Ceres, though its ascending and descending nodes are opposite to the corresponding ones for those objects. Although its perihelion is extremely close to the mean distance of Ceres and Pallas, a collision between Hygiea and its larger companions is impossible because at that distance they are always on opposite sides of the ecliptic. In 2056, Hygiea will pass 0.025 AU from Ceres, and then in 2063, Hygiea will pass 0.020 AU from Pallas. At aphelion Hygiea reaches out to the extreme edge of the asteroid belt at the perihelia of the Hilda family, which is in a 3:2 orbital resonance with Jupiter. As one of the most massive asteroids, Hygiea is used by the Minor Planet Center to calculate perturbations. Hygiea is in an unstable three-body mean motion resonance with Jupiter and Saturn. The computed Lyapunov time for this asteroid is 30,000 years, indicating that it occupies a chaotic orbit that will change randomly over time because of gravitational perturbations by the planets. It is the lowest numbered asteroid in such a resonance (the next lowest numbered being 70 Panopaea). Hygiea has a rotation period of 13.83 hours. Its single-peaked light curve has an amplitude of 0.27 mag, which is largely attributed to albedo variations. 
Hygiea's north pole orientation corresponds to an axial tilt of 119° with respect to the ecliptic. Hygiea family Hygiea is the main member of the Hygiean asteroid family that constitutes about 1% of asteroids in the main belt. The family was formed when an object with a diameter of about 100 km collided with proto-Hygiea about 2 billion years ago. Because the impact craters on Hygiea today are too small to contain the volume of ejected material, it is thought that Hygiea was completely disrupted by the impact and that the majority of the debris recoalesced after the pieces that formed the rest of the family had escaped. Hygiea contains almost all the mass (over 98%) of the family. See also List of former planets Notes References External links A simulation of the orbit of Hygiea Stellar occultation of 11 August 2013 (video) (displays Elong from Sun and V mag for 2011) Hygiea asteroids Hygiea Possible dwarf planets C-type asteroids (Tholen) C-type asteroids (SMASS)
10 Hygiea
[ "Physics", "Astronomy" ]
1,950
[ "Concepts in astronomy", "Unsolved problems in astronomy", "Possible dwarf planets" ]
725,995
https://en.wikipedia.org/wiki/Glovebox
A glovebox (or glove box) is a sealed container that is designed to allow one to manipulate objects where a separate atmosphere is desired. Built into the sides of the glovebox are gloves arranged in such a way that the user can place their hands into the gloves and perform tasks inside the box without breaking containment. Part or all of the box is usually transparent to allow the user to see what is being manipulated. A smaller antechamber compartment is used to transport items into or out of the main chamber without compromising the internal environment. Antechambers are much smaller than the main chambers so they can be exposed to ambient conditions more often and achieve inert conditions quickly. Two types of gloveboxes exist. The first allows a person to work with hazardous substances, such as radioactive materials or infectious disease agents, and the second allows manipulation of substances that must be contained within a very high purity inert atmosphere, such as argon or nitrogen. It is also possible to use a glovebox for manipulation of items in a vacuum chamber. Inert atmosphere work The gas in a glovebox is pumped through a series of treatment devices which remove solvents, water and oxygen from the gas. Copper metal (or some other finely divided metal) is commonly used to remove oxygen; this oxygen-removing column is normally regenerated by passing a hydrogen/nitrogen mixture through it while it is heated: the water formed is passed out of the box with the excess hydrogen and nitrogen. It is common to use molecular sieves to remove water by adsorbing it in the molecular sieves' pores. Such a box is often used by organometallic chemists to transfer dry solids from one container to another. An alternative to using a glovebox for air-sensitive work is to employ Schlenk methods using a Schlenk line. One disadvantage of working in a glovebox is that organic solvents will attack the plastic seals. As a result, the box will start to leak and water and oxygen can then enter the box. Another disadvantage of a glovebox is that oxygen and water can diffuse through the plastic gloves. Also, coordinating solvents, such as tetrahydrofuran and dichloromethane, can bind irreversibly to the copper catalyst, reducing its effectiveness. One way to prolong the lifespan of the glovebox and catalyst is to turn off circulation when using solvents, followed by purging when work involving solvents is finished. Inert atmosphere gloveboxes are typically kept at a higher pressure than the surrounding air, so that any microscopic leaks are mostly leaking inert gas out of the box instead of letting air in. Hazardous materials work At the now-deactivated Rocky Flats Plant, which manufactured plutonium triggers, also called "pits", production facilities consisted of linked stainless steel gloveboxes up to 64 feet, or 20 meters, in length, which contained the equipment which forged and machined the trigger parts. The gloves were lead-lined. Other materials used in the gloveboxes included acrylic viewing windows and Benelex shielding composed of wood fiber and plastic which shielded against neutron radiation. Manipulation of the lead-lined gloves was onerous work. Some gloveboxes for radioactive work are under inert conditions, for instance, one nitrogen-filled box contains an argon-filled box. The argon box is fitted with a gas treatment system to keep the gas very pure to enable electrochemical experiments in molten salts. 
Gloveboxes are also used in the biological sciences when dealing with anaerobes or high-biosafety level pathogens. Gloveboxes used for hazardous materials are generally maintained at a lower pressure than the surrounding atmosphere, so that microscopic leaks result in air intake rather than hazard outflow. Gloveboxes used for hazardous materials generally incorporate HEPA filters into the exhaust, to keep the hazard contained. See also Desiccators are used for storing chemicals which are moisture-sensitive, but do not react quickly or violently with water. Fume hoods are used for hazardous material handling where less operator protection and the same atmosphere can be used. Hot cells often use remote manipulators to provide radiological containment where more operator protection is required. Sandblasting cabinets are a type of glovebox which shield the user from the high-velocity abrasive particles inside. Schlenk lines are used for manipulating oxygen- and moisture-sensitive chemicals. References External links American Glovebox Society Hans-Jürgen Bässler and Frank Lehmann: Containment Technology: Progress in the Pharmaceutical and Food Processing Industry. Springer, Berlin 2013 Laboratory equipment Gas technologies Air-free techniques Radiation protection
Glovebox
[ "Chemistry", "Engineering" ]
946
[ "Vacuum systems", "Air-free techniques", "Cleanroom technology" ]
726,049
https://en.wikipedia.org/wiki/Pharmacodynamics
Pharmacodynamics (PD) is the study of the biochemical and physiologic effects of drugs (especially pharmaceutical drugs). The effects can include those manifested within animals (including humans), microorganisms, or combinations of organisms (for example, infection). Pharmacodynamics and pharmacokinetics are the main branches of pharmacology, itself a topic of biology concerned with the study of the interactions of both endogenous and exogenous chemical substances with living organisms. In particular, pharmacodynamics is the study of how a drug affects an organism, whereas pharmacokinetics is the study of how the organism affects the drug. Both together influence dosing, benefit, and adverse effects. Pharmacodynamics is sometimes abbreviated as PD and pharmacokinetics as PK, especially in combined reference (for example, when speaking of PK/PD models). Pharmacodynamics places particular emphasis on dose–response relationships, that is, the relationships between drug concentration and effect. One dominant example is drug-receptor interactions as modeled by L + R <=> LR where L, R, and LR represent ligand (drug), receptor, and ligand-receptor complex concentrations, respectively. This equation represents a simplified model of reaction dynamics that can be studied mathematically through tools such as free energy maps. Basics There are four principal protein targets with which drugs can interact: Enzymes (e.g. neostigmine and acetyl cholinesterase) Inhibitors Inducers Activators Membrane carriers [Reuptake vs Efflux] (e.g. tricyclic antidepressants and catecholamine uptake-1) Enhancer (RE) Inhibitor (RI) Releaser (RA) Ion channels (e.g. nimodipine and voltage-gated Ca2+ channels) Blocker Opener Receptors Agonists can be full, partial or inverse. Antagonists can be competitive, non-competitive, or uncompetitive. An allosteric modulator can have three kinds of effect within a receptor. One is its capability or incapability to activate the receptor (two possibilities). The other two are its effects on agonist affinity and on agonist efficacy, each of which may be increased, decreased or unaffected (three and three possibilities). 
Effects on the body There are 7 main drug actions: stimulating action through direct receptor agonism and downstream effects depressing action through direct receptor agonism and downstream effects (ex.: inverse agonist) blocking/antagonizing action (as with silent antagonists), the drug binds the receptor but does not activate it stabilizing action, the drug seems to act neither as a stimulant nor as a depressant (ex.: some drugs possess receptor activity that allows them to stabilize general receptor activation, like buprenorphine in opioid dependent individuals or aripiprazole in schizophrenia, all depending on the dose and the recipient) exchanging/replacing substances or accumulating them to form a reserve (ex.: glycogen storage) direct beneficial chemical reaction as in free radical scavenging direct harmful chemical reaction which might result in damage or destruction of the cells, through induced toxic or lethal damage (cytotoxicity or irritation) Desired activity The desired activity of a drug is mainly due to successful targeting of one of the following: Cellular membrane disruption Chemical reaction with downstream effects Interaction with enzyme proteins Interaction with structural proteins Interaction with carrier proteins Interaction with ion channels Ligand binding to receptors: Hormone receptors Neuromodulator receptors Neurotransmitter receptors General anesthetics were once thought to work by disordering the neural membranes, thereby altering the Na+ influx. Antacids and chelating agents combine chemically in the body. Enzyme-substrate binding is a way to alter the production or metabolism of key endogenous chemicals, for example aspirin irreversibly inhibits the enzyme prostaglandin synthetase (cyclooxygenase), thereby preventing the inflammatory response. Colchicine, a drug for gout, interferes with the function of the structural protein tubulin, while digitalis, a drug still used in heart failure, inhibits the activity of the carrier molecule, the Na-K-ATPase pump. The widest class of drugs act as ligands that bind to receptors that determine cellular effects. Upon drug binding, receptors can elicit their normal action (agonist), blocked action (antagonist), or even action opposite to normal (inverse agonist). In principle, a pharmacologist would aim for a target plasma concentration of the drug for a desired level of response. In reality, there are many factors affecting this goal. Pharmacokinetic factors determine peak concentrations, and concentrations cannot be maintained with absolute consistency because of metabolic breakdown and excretory clearance. Genetic factors may exist which would alter metabolism or drug action itself, and a patient's immediate status may also affect indicated dosage. Undesirable effects Undesirable effects of a drug include: Increased probability of cell mutation (carcinogenic activity) A multitude of simultaneous assorted actions which may be deleterious Interaction (additive, multiplicative, or metabolic) Induced physiological damage, or abnormal chronic conditions Therapeutic window The therapeutic window is the amount of a medication between the amount that gives an effect (effective dose) and the amount that gives more adverse effects than desired effects. For instance, medication with a small therapeutic window must be administered with care and control, e.g. by frequently measuring blood concentration of the drug, since it easily loses effects or gives adverse effects. 
Duration of action The duration of action of a drug is the length of time that particular drug is effective. Duration of action is a function of several parameters including plasma half-life, the time to equilibrate between plasma and target compartments, and the off rate of the drug from its biological target. Recreational drug use In recreational psychoactive drug spaces, duration refers to the length of time over which the subjective effects of a psychoactive substance manifest themselves. Duration can be broken down into 6 parts: (1) total duration (2) onset (3) come up (4) peak (5) offset and (6) after effects. Depending upon the substance consumed, each of these occurs in a separate and continuous fashion. Total The total duration of a substance can be defined as the amount of time it takes for the effects of a substance to completely wear off into sobriety, starting from the moment the substance is first administered. Onset The onset phase can be defined as the period until the very first changes in perception (i.e. "first alerts") are able to be detected. Come up The "come up" phase can be defined as the period between the first noticeable changes in perception and the point of highest subjective intensity. This is colloquially known as "coming up." Peak The peak phase can be defined as the period of time in which the intensity of the substance's effects is at its height. Offset The offset phase can be defined as the amount of time in between the conclusion of the peak and shifting into a sober state. This is colloquially referred to as "coming down." After effects The after effects can be defined as any residual effects which may remain after the experience has reached its conclusion. After effects depend on the substance and usage. This is colloquially known as a "hangover" for negative after effects of substances, such as alcohol, cocaine, and MDMA, or an "afterglow" for describing a typically positive, pleasant effect, typically found in substances such as cannabis, LSD in low to high doses, and ketamine. Receptor binding and effect The binding of ligands (drug) to receptors is governed by the law of mass action, which relates the large-scale status to the rate of numerous molecular processes. The rates of formation and un-formation can be used to determine the equilibrium concentration of bound receptors. The equilibrium dissociation constant is defined by: L + R <=> LR, K_d = [L][R] / [LR], where L = ligand, R = receptor, and square brackets [] denote concentration. The fraction of bound receptors is occupancy = [LR] / ([R] + [LR]) = [L] / ([L] + K_d), where occupancy is the fraction of receptor bound by the ligand. This expression is one way to consider the effect of a drug, in which the response is related to the fraction of bound receptors (see: Hill equation). The fraction of bound receptors is known as occupancy. The relationship between occupancy and pharmacological response is usually non-linear. This explains the so-called receptor reserve phenomenon, i.e. the concentration producing 50% occupancy is typically higher than the concentration producing 50% of maximum response. More precisely, receptor reserve refers to a phenomenon whereby stimulation of only a fraction of the whole receptor population apparently elicits the maximal effect achievable in a particular tissue. The simplest interpretation of receptor reserve is that it is a model that states there are more receptors on the cell surface than are necessary for full effect. 
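A minimal numerical sketch of the occupancy relation just defined (illustrative only; the names are ours, not from any standard pharmacology package):

def occupancy(ligand_conc: float, kd: float) -> float:
    """Fraction of receptors bound at a given free ligand concentration.

    Implements occupancy = [L] / ([L] + Kd); returns 0.5 when [L] == Kd.
    """
    return ligand_conc / (ligand_conc + kd)

# With Kd = 10 (in the same concentration units as [L]):
# half the receptors are bound at [L] = 10, and ~91% at ten times Kd.
for conc in (1.0, 10.0, 100.0):
    print(conc, occupancy(conc, kd=10.0))

Note how slowly occupancy saturates: going from 50% to roughly 91% bound requires a tenfold increase in concentration, which is one reason responses are usually plotted against log[L], as discussed below.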
Taking a more sophisticated approach, receptor reserve is an integrative measure of the response-inducing capacity of an agonist (in some receptor models it is termed intrinsic efficacy or intrinsic activity) and of the signal amplification capacity of the corresponding receptor (and its downstream signaling pathways). Thus, the existence (and magnitude) of receptor reserve depends on the agonist (efficacy), the tissue (signal amplification ability) and the measured effect (pathways activated to cause signal amplification). As receptor reserve is very sensitive to the agonist's intrinsic efficacy, it is usually defined only for full (high-efficacy) agonists. Often the response is determined as a function of log[L] to consider many orders of magnitude of concentration. However, there is no biological or physical theory that relates effects to the log of concentration; it is simply convenient for graphing purposes. It is useful to note that 50% of the receptors are bound when [L] = Kd. The graph shown represents the concentration–response curves for two hypothetical receptor agonists, plotted in a semi-log fashion. The curve toward the left represents the higher potency, since lower concentrations are needed for a given response. The effect increases as a function of concentration. Multicellular pharmacodynamics The concept of pharmacodynamics has been expanded to include Multicellular Pharmacodynamics (MCPD). MCPD is the study of the static and dynamic properties and relationships between a set of drugs and a dynamic and diverse multicellular four-dimensional organization. It is the study of the workings of a drug on a minimal multicellular system (mMCS), both in vivo and in silico. Networked Multicellular Pharmacodynamics (Net-MCPD) further extends the concept of MCPD to model regulatory genomic networks together with signal transduction pathways, as part of a complex of interacting components in the cell. Toxicodynamics Toxicodynamics (TD) and pharmacodynamics (PD) link the dosage of a therapeutic agent, toxicant, or toxin (a xenobiotic) to the features, amount, and time course of its biological action. The mechanism of action is a crucial factor in determining the effect and toxicity of the drug, taking into consideration the pharmacokinetic (PK) factors. The sort and extent of altered cellular physiology will depend on the combination of the drug's presence (as established by pharmacokinetic (PK) studies) and/or its mechanism and duration of action (PD). Types of xenobiotic–target interaction can be described as reversible, irreversible, noncompetitive, or allosteric, or as agonist, partial agonist, antagonist, or inverse agonist interactions. In vitro, ex vivo, or in vivo studies can be used to assess PD and TD from the molecular level to the level of the entire organism. The mechanism of drug action and adverse drug reaction is either physicochemical or biochemical in nature. Adverse drug reactions can be classified as either idiosyncratic (type B) or intrinsic (type A). Idiosyncratic toxicity is not dosage dependent and defies the mass-action relationship. Immune-mediated processes are frequently cited as the source of type B reactions. These cannot be accurately described in preclinical research or clinical trials due to their low incidence frequency. Type A reactions are dosage (concentration) dependent. Usually, this kind of side effect is an extension of an ongoing treatment.
Pharmacokinetics and pharmacodynamics are termed toxicokinetics and toxicodynamics in the field of ecotoxicology. Here, the focus is on toxic effects on a wide range of organisms. The corresponding models are called toxicokinetic-toxicodynamic models. See also Mechanism of action Dose-response relationship Pharmacokinetics ADME Antimicrobial pharmacodynamics Pharmaceutical company Schild regression References External links Vijay. (2003) Predictive software for drug design and development. Pharmaceutical Development and Regulation 1(3), 159–168. Werner, E., In silico multicellular systems biology and minimal genomes, DDT vol 8, no 24, pp 1121–1127, Dec 2003. (Introduces the concepts MCPD and Net-MCPD) Dr. David W. A. Bourne, OU College of Pharmacy Pharmacokinetic and Pharmacodynamic Resources. Introduction to Pharmacokinetics and Pharmacodynamics (PDF) Pharmacy Medicinal chemistry Life sciences industry
Pharmacodynamics
[ "Chemistry", "Biology" ]
2,880
[ "Pharmacology", "Life sciences industry", "Pharmacy", "Pharmacodynamics", "nan", "Medicinal chemistry", "Biochemistry" ]
726,413
https://en.wikipedia.org/wiki/Zero-emissions%20vehicle
A zero-emission vehicle (ZEV) is a vehicle that does not emit exhaust gas or other pollutants from the onboard source of power. The California definition also adds that this applies under any and all possible operational modes and conditions, because under cold-start conditions, for example, internal combustion engines tend to produce the maximum amount of pollutants. In a number of countries and states, transport is cited as the main source of greenhouse gases (GHG) and other pollutants. The desire to reduce these emissions is thus politically strong. Terminology Pollutants harmful to health and the environment include particulates (soot), hydrocarbons, carbon monoxide, ozone, lead, and various oxides of nitrogen. Although not considered emission pollutants by the original California Air Resources Board (CARB) or U.S. Environmental Protection Agency (EPA) definitions, the most recent common use of the term also includes volatile organic compounds, several toxic airborne compounds (such as 1,3-butadiene), and pollutants of global significance such as carbon dioxide and other greenhouse gases. Examples of zero-emission vehicles with different power sources include muscle-powered vehicles such as bicycles, electric bicycles, and gravity racers. Motor vehicles The term also covers battery electric vehicles, which may shift emissions to the location where the electricity is generated (if the electricity comes from coal or natural gas power plants, as opposed to hydro-electric, wind power, solar power or nuclear power plants), and fuel cell vehicles powered by hydrogen, which may shift emissions to the location where the hydrogen is generated. It does not include hydrogen internal combustion engine vehicles, because these do generate some emissions (although they are near-emissionless). It also does not include vehicles running on 100% biofuel, as these also emit exhaust gases, despite being carbon neutral overall. Emissions from the manufacturing process are likewise not included in this definition, and it has been argued that the emissions created during manufacture are currently of an order of magnitude comparable to the emissions created during a vehicle's operating lifetime. However, these vehicles are in the early stages of their development; manufacturing emissions may decrease through the development of technology and industry, the shift toward mass production, and the ever-increasing use of renewable energy throughout supply chains. Well-to-wheel emissions The term zero-emissions or ZEV, as originally coined by the California Air Resources Board (CARB), refers only to motor vehicle emissions from the onboard source of power. Therefore, CARB's definition accounts only for pollutants emitted at the point of vehicle operation, and the clean air benefits are usually local, because, depending on the source of the electricity used to recharge the batteries, air pollutant emissions are shifted to the location of the electricity generation plants. In a broader perspective, the electricity used to recharge the batteries must be generated from renewable or clean sources such as wind, solar, hydroelectric, or nuclear power for ZEVs to have almost no well-to-wheel emissions. In other words, if ZEVs are recharged from electricity generated by fossil fuel plants, they cannot be considered zero-emission in a well-to-wheel sense.
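A rough well-to-wheel comparison makes the point concrete. In this minimal sketch the vehicle consumption and grid carbon intensities are illustrative assumptions, not official figures:

# Rough well-to-wheel CO2 per mile for a battery electric vehicle under
# different grids. All numbers are illustrative assumptions.
kwh_per_mile = 0.30                    # assumed EV energy consumption
grid_intensity_g_per_kwh = {
    "coal-heavy grid": 900.0,
    "natural-gas grid": 450.0,
    "mostly renewable/nuclear grid": 50.0,
}
gasoline_tailpipe_g_per_mile = 400.0   # assumed conventional-car figure

for grid, intensity in grid_intensity_g_per_kwh.items():
    ev_g_per_mile = kwh_per_mile * intensity
    print(f"{grid}: {ev_g_per_mile:.0f} g CO2/mile "
          f"(gasoline car: {gasoline_tailpipe_g_per_mile:.0f} g/mile)")

Under the assumed coal-heavy grid the EV still causes most of a gasoline car's per-mile CO2 upstream, while under the cleaner grids the well-to-wheel figure approaches zero, which is the point made above.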
However, the spread of electrically powered vehicles can help the development of systems for charging EV batteries from excess electricity which cannot be used otherwise. For instance, electricity demand is lowest at night, and the excess electricity generated at that time can be used for recharging EV batteries. Renewable sources such as wind turbines or solar panels are less controllable in terms of the amount of generated electricity compared to fossil fuel power plants; most renewable energy sources are intermittent. Therefore, development of these resources will lead to excess energy which can be better used through the development of EVs. Moreover, most EVs can benefit from regenerative braking and other optimization systems which increase energy efficiency. Fuel cell vehicles (FCVs) can help even more in terms of the development of sustainable energy sources because these cars use hydrogen as their fuel. Compressed hydrogen can be used as an energy storage medium, while electricity must be stored in batteries. The hydrogen can be produced from electricity through electrolysis, and this electricity can come from green sources. Hydrogen can be produced in situ, e.g. at a wind farm when the generated electricity is not needed, or electrolysers can be connected to the grid to absorb excess grid electricity and produce hydrogen, e.g. at hydrogen fueling stations. As a result, development of FCVs can be a big step toward sustainable development and reducing GHG emissions in a long-term perspective. Other countries have different definitions of ZEV, notably with the more recent inclusion of greenhouse gases, as many European rules now regulate CO2 emissions. CARB's role in regulating greenhouse gases began in 2004 based on the 2002 Pavley Act (AB 1493), but was blocked by lawsuits and by the EPA in 2007, which rejected the required waiver. Additional responsibilities were granted to CARB by California's Global Warming Solutions Act of 2006 (AB 32), which includes the mandate to set low-carbon fuel standards. As a result of an investigation into false advertising regarding "zero-emissions" claims, the Advertising Standards Authority (ASA) in the UK ruled in March 2010 to ban an advertisement from Renault UK regarding its "zero-emission vehicles" because the ad breached CAP (Broadcast) TV Code rules 5.1.1 and 5.1.2 (Misleading advertising), 5.2.1 (Misleading advertising – Evidence) and 5.2.6 (Misleading advertising – Environmental claims). Greenhouse gases and other pollutant emissions are also generated by vehicle manufacturing processes, and can be many times larger than tailpipe emissions, even for gasoline engine vehicles. Most reports on ZEVs' climate impact do not take these manufacturing emissions into account, though over the lifetime of the car the emissions from manufacturing are relatively small. Considering the current U.S. energy mix, a ZEV would produce an average 58% reduction in carbon dioxide emissions per mile driven. Given the current energy mixes in other countries, it has been predicted that such emissions would decrease by 40% in the U.K. and 19% in China. Types of zero-emission vehicles Apart from animal-powered and human-powered vehicles, battery electric vehicles (which include cars, aircraft and boats) also do not emit any of the above pollutants, nor any CO2 gases, during use.
This is a particularly important quality in densely populated areas, where the health of residents can be severely affected. However, the production of the fuels that power ZEVs, such as the production of hydrogen from fossil fuels, may produce more emissions per mile than the emissions produced by a conventional fossil-fueled vehicle. A well-to-wheel life cycle assessment is necessary to understand the emissions implications of operating a ZEV. Bicycles In the late 19th century, bicycle ownership became common (during the bike boom), predating mass car ownership. In the 1960s, the Flying Pigeon bicycle became the single most popular mechanized vehicle on the planet. Some 210 million electric bikes are on the road in China. Motor vehicles Segway Personal Transporters are two-wheeled, self-balancing, battery-powered machines that are eleven times more energy-efficient than the average American car. Operating on two lithium-ion batteries, the Segway PT produces zero emissions during operation, and uses a negligible amount of electricity while charging via a standard wall outlet. Wind-powered land vehicles also exist, using wind turbines and kites. Marine For boats and other watercraft, regular and special sails (such as rotor sails, wing sails, turbosails and SkySails) exist that can propel them without emissions. Air An electric aircraft is an aircraft powered by electric motors. Electricity may be supplied by a variety of methods including batteries, ground power cables, solar cells, ultracapacitors, fuel cells and power beaming. Between 2015 and 2016, Solar Impulse 2 completed a circumnavigation of the Earth using solar power. Incentives Subsidies for public transport Japanese public transport is being driven in the direction of zero emissions due to growing environmental concern. Honda has launched a concept bus which features exercise machines in the rear of the vehicle to generate kinetic energy used for propulsion. Due to the stop-start nature of public transport, regenerative braking may be a possibility for public transport systems of the future. Subsidies for development of electric cars In an attempt to curb carbon emissions as well as noise pollution in South African cities, the South African Department of Science & Technology (DST), along with other private investors, made US$5 million available through the Innovation Fund for the development of the Joule, a five-seater car planned for release in 2014; however, the company ceased trading in 2012. Low and zero emission zones Several cities have implemented low-emission zones. Launched in 2019 and set to expand in 2023, London's Ultra Low Emission Zone (ULEZ) incentivizes and accelerates the widespread adoption of cleaner vehicles by setting daily charges for driving vehicles that do not comply with ULEZ emission standards. See also Low-carbon fuel standard (LCFS) Super ultra-low emission vehicle (SULEV) Ultra-low-emission vehicle (ULEV) ZENN (Zero Emission, No Noise) References External links Official California site on ZEVs and PZEVs Appropriate technology Sustainable transport
Zero-emissions vehicle
[ "Physics" ]
1,941
[ "Physical systems", "Transport", "Sustainable transport" ]
17,049,966
https://en.wikipedia.org/wiki/Ozone%20cracking
Cracks can be formed in many different elastomers by ozone attack, and the characteristic form of attack of vulnerable rubbers is known as ozone cracking. The problem was formerly very common, especially in tires, but is now rarely seen in those products owing to preventive measures. However, it does occur in many other safety-critical items such as fuel lines and rubber seals, such as gaskets and O-rings, where ozone attack is considered unlikely. Only a trace amount of the gas is needed to initiate cracking, and so these items can also succumb to the problem. Susceptible elastomers Tiny traces of ozone in the air will attack double bonds in rubber chains, with natural rubber, polybutadiene, styrene-butadiene rubber and nitrile rubber being most sensitive to degradation. Every repeat unit in the first three materials has a double bond, so every unit can be degraded by ozone. Nitrile rubber is a copolymer of butadiene and acrylonitrile units, but the proportion of acrylonitrile is usually lower than that of butadiene, so attack still occurs. Butyl rubber is more resistant but still has a small number of double bonds in its chains, so attack is possible. Exposed surfaces are attacked first, the density of cracks varying with ozone gas concentration: the higher the concentration, the greater the number of cracks formed. Ozone-resistant elastomers include EPDM, fluoroelastomers like Viton and polychloroprene rubbers like Neoprene. Attack is less likely because double bonds form a very small proportion of the chains, and in the latter, chlorination reduces the electron density in the double bonds, lowering their propensity to react with ozone. Silicone rubber, Hypalon and polyurethanes are also ozone-resistant. Form of cracking Ozone cracks form in products under tension, but the critical strain is very small. The cracks are always oriented at right angles to the strain axis, so will form around the circumference in a bent rubber tube. Such cracks are very dangerous when they occur in fuel pipes, because the cracks will grow from the outside exposed surfaces into the bore of the pipe, so fuel leakage and fire may follow. Seals are also susceptible to attack, such as diaphragm seals in air lines. Such seals are often critical for the operation of pneumatic controls, and if a crack penetrates the seal, all functions of the system can be lost. Nitrile rubber seals are commonly used in pneumatic systems because of their oil resistance. However, if ozone gas is present, cracking will occur in the seals unless preventative measures are taken. Ozone attack will occur at the most sensitive zones in a seal, especially sharp corners where the strain is greatest when the seal is flexing in use. The corners represent stress concentrations, so the tension is at a maximum when the diaphragm of the seal is bent under air pressure. The seal shown at left failed from traces of ozone at circa 1 ppm, and once cracking had started, it continued as long as the gas was present. This particular failure led to loss of production on a semiconductor fabrication line. The problem was solved by adding effective filters in the air line and by modifying the design to eliminate the very sharp corners. An ozone-resistant elastomer such as Viton was also considered as a replacement for the nitrile rubber. The pictures were taken using ESEM for maximum resolution. Ozonolysis The reaction occurring between double bonds and ozone is known as ozonolysis, in which one molecule of the gas reacts with the double bond. The immediate result is formation of an ozonide, which then decomposes rapidly so that the double bond is cleaved. This is the critical step in chain breakage when polymers are attacked. The strength of polymers depends on the chain molecular weight, or degree of polymerization: the higher the chain length, the greater the mechanical strength (such as tensile strength). By cleaving the chain, the molecular weight drops rapidly and there comes a point at which the material has little strength whatsoever, and a crack forms. Further attack occurs on the freshly exposed crack surfaces, and the crack grows steadily until it completes a circuit and the product separates or fails. In the case of a seal or a tube, failure occurs when the wall of the device is penetrated. The carbonyl end groups which are formed are usually aldehydes or ketones, which can oxidise further to carboxylic acids. The net result is a high concentration of elemental oxygen on the crack surfaces, which can be detected using energy-dispersive X-ray spectroscopy in the environmental SEM, or ESEM. The spectrum at left shows the high oxygen peak compared with a constant sulfur peak. The spectrum at right shows the unaffected elastomer surface spectrum, with a relatively low oxygen peak compared with the sulfur peak.
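A toy simulation (purely illustrative; the chain lengths and scission counts are assumptions, not measured kinetics) shows how random chain scission of the kind just described collapses the number-average chain length, and with it the strength of the rubber:

import random

random.seed(1)
chains = [1000] * 1000  # assumed: 1000 chains of 1000 repeat units each

# Each ozonolysis event cleaves one randomly chosen chain at a random
# internal bond, producing two shorter chains.
for cut in range(1, 5001):
    i = random.randrange(len(chains))
    if chains[i] > 1:
        pos = random.randrange(1, chains[i])
        chains.append(chains[i] - pos)
        chains[i] = pos
    if cut % 1000 == 0:
        mean_len = sum(chains) / len(chains)
        print(f"{cut} scissions: mean chain length ~ {mean_len:.0f}")

After 5,000 scissions (only about 0.5% of the bonds in this assumed sample) the mean chain length has fallen roughly sixfold, illustrating why strength is lost long before the bulk of the material has reacted.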
Prevention The problem can be prevented by adding antiozonants to the rubber before vulcanization. Ozone cracks were once commonly seen in automobile tire sidewalls, but are now seen rarely thanks to the use of these additives. A common and low-cost antiozonant is a wax which bleeds to the surface and forms a protective layer, but other specialist chemicals are also widely used. On the other hand, the problem does recur in unprotected products such as rubber tubing and seals, where ozone attack is thought to be impossible; unfortunately, traces of ozone can turn up in the most unexpected situations. Using ozone-resistant rubbers is another way of inhibiting cracking; EPDM rubber and butyl rubber are ozone resistant, for example. For high-value equipment where loss of function can cause serious problems, low-cost seals may be replaced at frequent intervals so as to preclude failure. Ozone gas is produced during electric discharge, by sparking or corona discharge for example. Static electricity can build up within machines like compressors with moving parts constructed from insulating materials. If those compressors feed pressurised air into a closed pneumatic system, then all seals in the system may be at risk from ozone cracking. Ozone is also produced by the action of sunlight on volatile organic compounds (VOCs), such as gasoline vapour present in the air of towns and cities, in a problem known as photochemical smog. The ozone formed can drift many miles before it is destroyed by further reactions. See also Applied spectroscopy Brittleness Corona discharge Corrosion Electrostatic discharge Forensic chemistry Forensic engineering Forensic materials engineering Forensic polymer engineering Ozonolysis Polymer degradation Stress corrosion cracking References Lewis, Peter Rhys, Reynolds, K., and Gagg, C., Forensic Materials Engineering: Case Studies, CRC Press (2004). Lewis, Peter Rhys, Forensic Polymer Engineering: Why Polymer Products Fail in Service, 2nd edition, Woodhead/Elsevier (2016). Polymers Corrosion Elastomers Polymer chemistry Tires Forensic phenomena Materials degradation Rubber properties
Ozone cracking
[ "Chemistry", "Materials_science", "Engineering" ]
1,425
[ "Metallurgy", "Synthetic materials", "Materials science", "Corrosion", "Elastomers", "Electrochemistry", "Polymer chemistry", "Polymers", "Materials degradation" ]
17,051,408
https://en.wikipedia.org/wiki/Rudolph%20Schild
Rudolph E. Schild (born 10 January 1940) is an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, who has been active since the mid-1960s. He has authored or contributed to over 250 papers, of which 150 are in refereed journals. Career Schild's research in the 1980s and 1990s focused on using gravitational lensing to determine the age of the universe and the Hubble constant. In 1994, his investigation of quasar images also suggested the existence of a binary pair of stars within a few light years of Earth. In 1996 he published his findings on rogue planets identified through analysis of Hubble Space Telescope images. In the 2000s, Schild began focusing on the double galaxy CSL-1 and superstring theory, which was noted as a possible step toward uncovering the theory of everything. Schild is a member of a group of researchers who have published frequently on the claim that photos from various NASA Mars rover missions show evidence of fossilized life. He is a proponent of "magnetospheric eternally collapsing objects" (MECOs), an alternative to black holes. These results are most often published in the Journal of Cosmology, a fringe astronomy journal edited by Schild himself, while his other research is published in mainstream astronomy journals such as MNRAS and the Astronomical Journal. Personal life Schild is married to mezzo-soprano Jane Struss, who teaches voice at the Longy School of Music. References Living people Harvard University faculty American astrophysicists Panspermia 1940 births
Rudolph Schild
[ "Biology" ]
333
[ "Biological hypotheses", "Origin of life", "Panspermia" ]
17,058,007
https://en.wikipedia.org/wiki/Component%20Object%20Model
Component Object Model (COM) is a binary-interface technology for software components from Microsoft that enables using objects in a language-neutral way between different programming languages, programming contexts, processes and machines. COM is the basis for other Microsoft domain-specific component technologies including OLE, OLE Automation, ActiveX, COM+, and DCOM, as well as implementations such as DirectX, Windows shell, UMDF, Windows Runtime, and Browser Helper Object. COM enables the use of an object with knowledge only of its interface, not its internal implementation. The component implementer defines interfaces that are separate from the implementation. Support for multiple programming contexts is handled by relying on the object itself for aspects that would be challenging to implement as a central facility. Supporting multiple uses of an object is handled by requiring each object to destroy itself via reference counting. Access to an object's interfaces (similar to type conversion) is provided by each object as well. COM is available only in Microsoft Windows and in Apple's Core Foundation 1.3 and later plug-in application programming interface (API); the latter implements only a subset of the whole COM interface. Over time, COM has been gradually replaced by other technologies such as Microsoft .NET and web services (i.e. via WCF). However, COM objects can still be used in a .NET language via COM Interop. COM is similar to other component technologies such as SOM, CORBA and Enterprise JavaBeans, although each has its strengths and weaknesses. Unlike C++, COM provides a stable application binary interface (ABI) that is unaffected by compiler differences. This makes COM advantageous for object-oriented C++ libraries that are to be used by clients compiled with different compilers. History Introduced in 1987, Dynamic Data Exchange (DDE) was one of the first interprocess communication technologies in Windows. It allowed sending and receiving messages in so-called conversations between applications. Antony Williams, involved in architecting COM, distributed two papers within Microsoft that embraced the concept of software components: Object Architecture: Dealing With the Unknown – or – Type Safety in a Dynamically Extensible Class Library in 1988, and On Inheritance: What It Means and How To Use It in 1990. These provided the foundation of many of the ideas behind COM. Object Linking and Embedding (OLE), Microsoft's first object-based framework, was built on DDE and designed specifically for compound documents. It was introduced with Word and Excel in 1991, and was later included with Windows, starting with version 3.1 in 1992. An example of a compound document is a spreadsheet embedded in a Word document. As changes are made to the spreadsheet in Excel, they appear automatically in the Word document. In 1991, Microsoft introduced the Visual Basic Extension (VBX) technology with Visual Basic 1.0. A VBX is a packaged extension in the form of a dynamic-link library (DLL) that allows objects to be graphically placed in a form and manipulated by properties and methods. These were later adapted for use by other languages such as Visual C++. In 1992, with Windows 3.1, Microsoft released OLE 2 with its new underlying object model, COM. The COM application binary interface (ABI) was the same as the MAPI ABI (released in 1992), and like it was based on MSRPC and ultimately on the Open Group's DCE/RPC.
COM was created to replace DDE, since DDE's text-based conversation and Windows messaging design was not flexible enough to allow sharing application features in a robust and extensible way. In 1994, the OLE custom control (OCX) technology, based on COM, was introduced as the successor to VBX. At the same time, Microsoft stated that OLE 2 would be known simply as "OLE". In early 1996, Microsoft found a new use for OCX: extending their web browser's capability. Microsoft renamed some parts of OLE relating to the Internet as ActiveX, and gradually renamed all OLE technologies to ActiveX, except the compound document technology that was used in Microsoft Office. Later in 1996, Microsoft extended COM to work across the network with DCOM. Related technologies MSRPC The COM IDL is based on the feature-rich DCE/RPC IDL, with object-oriented extensions. Microsoft's implementation of DCE/RPC, MSRPC, is used as the primary inter-process communication mechanism for Windows NT services and internal components, making it an obvious choice of foundation. DCOM DCOM extends COM from merely supporting a single user with separate applications communicating on the Windows desktop, to activating objects running under different security contexts, and on different machines across the network. With this were added necessary features for configuring which users have authority to create, activate and call objects, for identifying the calling user, as well as for specifying the required encryption for security of calls. COM+ Microsoft introduced Microsoft Transaction Server (MTS) in Windows NT 4 in order to provide developers with support for distributed transactions, resource pooling, disconnected applications, event publication and subscription, better memory and processor (thread) management, as well as to position Windows as an alternative to other enterprise-level operating systems. Renamed COM+ in Windows 2000, the feature set was incorporated into the operating system, as opposed to the series of external tools provided by MTS. At the same time, Microsoft de-emphasized DCOM as a separate entity. Components that used COM+ were handled more directly by the added layer of COM+; in particular, by operating system support for interception. In the first release of MTS, interception was tacked on: installing an MTS component would modify the Windows Registry so that calls went to the MTS software rather than directly to the component. Windows 2000 included Component Services control panel updates for configuring COM+ components. An advantage of COM+ was that it could be run in "component farms". Instances of a component, if coded properly, could be pooled and reused by new calls to its initializing routine without being unloaded from memory. Components could also be distributed (called from another machine). COM+ and Microsoft Visual Studio provided tools to make it easy to generate client-side proxies, so although DCOM was used to make the remote call, it was easy to do for developers. COM+ also introduced a subscriber/publisher event mechanism called COM+ Events, and provided a new way of leveraging MSMQ (a technology that provides inter-application asynchronous messaging) with components called Queued Components. COM+ events extend the COM+ programming model to support late-bound (see Late binding) events or method calls between the publisher or subscriber and the event system. .NET .NET is Microsoft's component technology that supersedes COM. .NET hides many details of component creation and therefore eases development.
.NET provides wrappers to commonly used COM controls. .NET can leverage COM+ via the System.EnterpriseServices namespace, and several of the services that COM+ provides have been duplicated in .NET. For example, the System.Transactions namespace provides the TransactionScope class, which provides transaction management without resorting to COM+. Similarly, queued components can be replaced by Windows Communication Foundation (WCF) with an MSMQ transport. There is limited support for backward compatibility. A COM object may be used in .NET through a Runtime Callable Wrapper (RCW), and .NET objects that conform to certain interface restrictions may be used from COM through a COM Callable Wrapper (CCW). From both the COM and .NET sides, objects using the other technology appear as native objects. See COM Interop. WCF eases a number of COM's remote execution challenges. For instance, it allows objects to be transparently marshalled by value across process or machine boundaries more easily. Windows Runtime Windows Runtime (WinRT) is a COM-based API, albeit an enhanced COM variant. Because of its COM-like basis, WinRT supports interfacing from multiple programming contexts, but it is an unmanaged, native API. The API definitions are stored in ".winmd" files, which are encoded in the ECMA 335 metadata format; the same CLI metadata format that .NET uses, with a few modifications. This metadata format allows for significantly lower overhead than P/Invoke when WinRT is invoked from .NET applications. Nano-COM Nano-COM is a subset of COM focused on the application binary interface (ABI) aspects of COM that enable function and method calls across independently compiled modules/components. Nano-COM can be expressed in a portable C++ header file. Nano-COM extends the native ABI of the underlying instruction architecture and OS to support typed object references, whereas a typical ABI focuses on atomic types, structures, arrays and function calling conventions. A Nano-COM header file defines or names at least three types: GUID, which identifies an interface type; HRESULT, for method result codes such as S_OK, E_FAIL, E_OUTOFMEMORY; and IUnknown, the base type for object references, with abstract virtual functions to support dynamic_cast<T>-style acquisition of new interface types and reference counting a la shared_ptr<T>. Many uses of Nano-COM define two functions to address callee-allocated memory buffers as results: <NanoCom>Alloc, called by method implementations to allocate raw buffers (not objects) that are returned to the caller, and <NanoCom>Free, called by method callers to free callee-allocated buffers when no longer in use. Some implementations of Nano-COM, such as Direct3D, eschew the allocator functions and restrict themselves to caller-allocated buffers only. Nano-COM has no notion of classes, apartments, marshaling, registration, etc. Rather, object references are simply passed across function boundaries and allocated via standard language constructs (e.g., the C++ new operator). The basis of Nano-COM was used by Mozilla to bootstrap Firefox (called XPCOM), and is currently in use as the base ABI technology for DirectX/Direct3D/DirectML. Security In Internet Explorer Since an ActiveX control (any COM component) runs as native code, with no sandboxing protection, there are few restrictions on what it can do. The use of ActiveX components in web pages, as supported by Internet Explorer, led to problems with malware infections.
Microsoft recognized the problem as far back as 1996, when Charles Fitzgerald said, "We never made the claim up front that ActiveX is intrinsically secure". Later versions of Internet Explorer prompt the user before installing an ActiveX control, allowing the user to block installation. As a level of protection, an ActiveX control is signed with a digital signature to guarantee authenticity. It is also possible to disable ActiveX controls altogether, or to allow only a selected few. Process corruption The transparent support for out-of-process COM servers promotes software safety in terms of process isolation. This can be useful for decoupling subsystems of a large application into separate processes. Process isolation limits state corruption in one process from negatively affecting the integrity of other processes, since they only communicate through strictly defined interfaces. Thus, only the affected subsystem needs to be restarted in order to regain valid state. This is not the case for subsystems within the same process, where a rogue pointer in one subsystem can randomly corrupt other subsystems. Binding COM is supported via bindings in several languages, such as C, C++, Visual Basic, Delphi, Python and several of the Windows scripting contexts. Component access is via interface methods. This allows for direct calling in-process, and via the COM/DCOM sub-system, access between processes and computers. Type system Coclass A coclass, a COM class, implements one or more interfaces. It is identified by a class ID, called a CLSID, which is a GUID, and by a human-readable programmatic identifier, called a ProgID. A coclass is created via one of these identifiers. Interface Each COM interface extends the IUnknown interface, which exposes methods for reference counting and for accessing the other interfaces of the object, similar to type conversion, a.k.a. type casting. An interface is identified by an interface ID (IID), a GUID. A custom interface, anything derived from IUnknown, provides early-bound access via a pointer to a virtual method table that contains a list of pointers to the functions that implement the functions declared in the interface, in the order they are declared. An in-process invocation overhead is, therefore, comparable to a C++ virtual method call. Dispatching, a.k.a. late-bound access, is provided by implementing IDispatch. Dispatching allows access from a wider range of programming contexts than a custom interface. Like many object-oriented languages, COM provides a separation of interface from implementation. This distinction is especially strong in COM, where an object has no default interface: a client must request an interface to have any access. COM supports multiple implementations of the same interface, so that clients can choose which implementation of an interface to use. Type library A COM type library defines COM metadata, such as coclasses and interfaces. A library can be defined using Interface Definition Language (IDL), a programming-language-independent syntax. IDL is similar to C++, with additional syntax for defining interfaces and coclasses. IDL also supports bracketed attributes before declarations to define metadata such as identifiers and relationships between parameters. An IDL file is compiled with the MIDL compiler. For use with C/C++, the MIDL compiler generates a header file with struct definitions to match the vtbls of the declared interfaces and a C file containing declarations of the interface GUIDs.
C++ source code for a proxy module can also be generated by the MIDL compiler. This proxy contains method stubs for converting COM calls into remote procedure calls, to enable DCOM for out-of-process communication. MIDL can also generate a binary type library (TLB) that can be used by other tools to support access from other contexts. Examples The following IDL code declares a coclass named SomeClass which implements an interface named ISomeInterface.

coclass SomeClass {
    [default] interface ISomeInterface;
};

This is conceptually equivalent to the following C++ code, where ISomeInterface is a pure virtual class, a.k.a. an abstract base class.

class ISomeInterface {};

class SomeClass : public ISomeInterface {};

In C++, COM objects are instantiated via the COM subsystem's CoCreateInstance function, which takes the CLSID and IID. SomeClass can be created as follows:

ISomeInterface* interface_ptr = NULL;
HRESULT hr = CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_ALL,
                              IID_ISomeInterface, (void**)&interface_ptr);

Reference counting A COM object uses reference counting to manage object lifetime. An object's reference count is controlled by the clients through the IUnknown AddRef and Release methods. COM objects are responsible for freeing their own memory when the reference count drops to zero. Some programming contexts (e.g. Visual Basic) provide automatic reference counting to simplify object use. In C++, a smart pointer can be used to automate reference count management. The following are guidelines for when AddRef and Release should be called: A function that returns an interface reference (via return value or via an "out" parameter) increments the count of the returned object. Release is called before the interface pointer is overwritten or goes out of scope. If a copy is made of an interface reference pointer, AddRef is called. AddRef and Release are called on the specific interface being referenced (not on a different interface of the same object), since an object may implement per-interface reference counts in order to allocate internal resources only for the interfaces which are being referenced. For remote objects, not all reference count calls are sent over the wire; a proxy keeps only one reference on the remote object and maintains its own local reference count. To simplify COM development for C++ developers, Microsoft introduced ATL (Active Template Library). ATL provides a relatively high-level COM development paradigm. It also shields COM client application developers from the need to directly maintain reference counting, by providing smart pointer types. Other libraries and languages that are COM-aware include the Microsoft Foundation Classes, the VC Compiler COM Support, VBScript, Visual Basic, ECMAScript (JavaScript) and Borland Delphi. Programming context COM is a language-agnostic binary standard that allows objects to be used in any programming context able to access its binary interfaces. COM client software is responsible for enabling the COM sub-system, instantiating and reference-counting COM objects, and querying objects for supported interfaces. The Microsoft Visual C++ compiler supports extensions to the C++ language, referred to as C++ Attributes, that are designed to simplify COM development and minimize the boilerplate code required to implement COM servers in C++.
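As a sketch of a client in another programming context, the following Python fragment drives a COM automation server through IDispatch using the pywin32 package. It assumes pywin32 is installed and that a server registered under the ProgID "Excel.Application" is present on the machine:

import win32com.client  # pywin32's late-bound COM support

# ProgID-to-CLSID lookup, CoCreateInstance, and IDispatch negotiation
# all happen behind this single call.
excel = win32com.client.Dispatch("Excel.Application")

excel.Visible = True              # property access goes through IDispatch::Invoke
book = excel.Workbooks.Add()      # so does this method call
book.Close(SaveChanges=False)
excel.Quit()
# Reference counts are released automatically as the Python wrapper
# objects are garbage-collected.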
Type metadata storage Originally, type library metadata was required to be stored in the system registry, and a COM client would use the registry information for object creation. Registration-free (RegFree) COM was introduced with Windows XP to allow storing type library metadata as an assembly manifest, either as a resource in the executable file or in a separate file installed with the component. This allows multiple versions of the same component to be installed on the same computer, in different directories, and it allows for XCOPY deployment. This technology has limited support for EXE COM servers and cannot be used for system-wide components such as MDAC, MSXML, DirectX or Internet Explorer. During application loading, the Windows loader searches for the manifest. If it is present, the loader adds information from it to the activation context. When the COM class factory tries to instantiate a class, the activation context is first checked to see if an implementation for the CLSID can be found. Only if the lookup fails is the registry scanned. A COM object can also be created without type library information, with only a path to the DLL file and a CLSID. A client can use the COM DLL function DllGetClassObject with the CLSID and IID_IClassFactory to create an instance of a factory object, and can then use the factory object's CreateInstance to create an instance. This is the same process the COM sub-system uses. If an object created this way creates another object, it will do so in the usual way (using the registry or manifest). But it can create internal objects (which may not be registered at all) and hand out references to interfaces to them, using its own private knowledge. Marshalling A COM object can be transparently created and used from within the same process (in-process), across process boundaries (out-of-process), or remotely over the network (DCOM). Out-of-process and remote objects use marshalling to serialize method calls and return values over process or network boundaries. This marshalling is invisible to the client, which accesses the object as if it were a local in-process object. Threading In COM, threading is addressed through a concept known as apartments. An individual COM object lives in exactly one apartment, which might be either single-threaded or multi-threaded. There are three types of apartments in COM: Single-Threaded Apartment (STA), Multi-Threaded Apartment (MTA), and Thread Neutral Apartment (NA). Each apartment represents one mechanism whereby an object's internal state may be synchronized across multiple threads. A process can consist of multiple COM objects, some of which may use STA and others of which may use MTA. All threads accessing COM objects similarly live in one apartment. The choice of apartment for COM objects and threads is determined at run time, and cannot be changed. Threads and objects which belong to the same apartment follow the same thread access rules. Method calls which are made inside the same apartment are therefore performed directly without any assistance from COM. Method calls made across apartments are achieved via marshalling. This requires the use of proxies and stubs.
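These apartment rules apply to any thread that uses COM. As a minimal sketch (again assuming the pywin32 package, and the commonly registered "WScript.Shell" ProgID), a worker thread must enter an apartment before touching COM objects and must leave it when finished:

import threading
import pythoncom
import win32com.client

def worker():
    # Enter a single-threaded apartment (STA) on this thread; to join the
    # MTA instead, call pythoncom.CoInitializeEx(pythoncom.COINIT_MULTITHREADED).
    pythoncom.CoInitialize()
    try:
        shell = win32com.client.Dispatch("WScript.Shell")
        print(shell.ExpandEnvironmentStrings("%TEMP%"))
    finally:
        pythoncom.CoUninitialize()  # leave the apartment when done

t = threading.Thread(target=worker)
t.start()
t.join()

Calls on an object created in one apartment from a thread in another apartment would be marshalled through the proxies and stubs described above.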
Criticisms Complexity COM is relatively complex, especially compared to more modern component technologies such as .NET. Message pumping When an STA is initialized, it creates a hidden window that is used for inter-apartment and inter-process message routing. This window must have its message queue regularly "pumped". This construct is known as a "message pump". On earlier versions of Windows, failure to do so could cause system-wide deadlocks. This problem is complicated by some Windows APIs that initialize COM as part of their implementation, which causes a "leak" of implementation details. Reference counting Reference counting within COM may cause problems if two or more objects are circularly referenced. The design of an application must take this into account so that objects are not left orphaned. Objects may also be left with active reference counts if the COM "event sink" model is used: since the object that fires the event needs a reference to the object reacting to the event, the latter's reference count will never reach zero. Reference cycles are typically broken using either out-of-band termination or split identities. In the out-of-band termination technique, an object exposes a method which, when called, forces it to drop its references to other objects, thereby breaking the cycle. In the split identity technique, a single implementation exposes two separate COM objects (also known as identities). This creates a weak reference between the COM objects, preventing a reference cycle. DLL Hell Because in-process COM components are implemented in DLL files and registration only allows for a single version per CLSID, they might in some situations be subject to the "DLL Hell" effect. Registration-free COM capability eliminates this problem for in-process components; registration-free COM is not available for out-of-process servers. See also Notes References External links Component Object Model on MSDN Interview with Tony Williams, Co-Inventor of COM (Video Webcast, August 2006) Info: Difference Between OLE Controls and ActiveX Controls from Microsoft TypeLib Data Format Specification (unofficial) with open source dumper utility. The COM / DCOM Glossary (archive.org) Component-based software engineering Inter-process communication Microsoft application programming interfaces Object models Object request broker Object-oriented programming
Component Object Model
[ "Technology" ]
4,713
[ "Component-based software engineering", "Components" ]
17,058,216
https://en.wikipedia.org/wiki/Stochastic%20control
Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, in some defined sense, despite the presence of this noise. The context may be either discrete time or continuous time. Certainty equivalence An extremely well-studied formulation in stochastic control is that of linear quadratic Gaussian control. Here the model is linear, the objective function is the expected value of a quadratic form, and the disturbances are purely additive. A basic result for discrete-time centralized systems with only additive uncertainty is the certainty equivalence property: the optimal control solution in this case is the same as would be obtained in the absence of the additive disturbances. This property is applicable to all centralized systems with linear equations of evolution, a quadratic cost function, and noise entering the model only additively; the quadratic assumption allows the optimal control laws, which follow the certainty-equivalence property, to be linear functions of the observations of the controllers. Any deviation from the above assumptions (a nonlinear state equation, a non-quadratic objective function, noise in the multiplicative parameters of the model, or decentralization of control) causes the certainty equivalence property not to hold. For example, its failure to hold for decentralized control was demonstrated in Witsenhausen's counterexample. Discrete time In a discrete-time context, the decision-maker observes the state variable, possibly with observational noise, in each time period. The objective may be to optimize the sum of expected values of a nonlinear (possibly quadratic) objective function over all the time periods from the present to the final period of concern, or to optimize the value of the objective function as of the final period only. At each time period new observations are made, and the control variables are to be adjusted optimally. Finding the optimal solution for the present time may involve iterating a matrix Riccati equation backwards in time from the last period to the present period. In the discrete-time case with uncertainty about the parameter values in the transition matrix (giving the effect of current values of the state variables on their own evolution) and/or the control response matrix of the state equation, but still with a linear state equation and quadratic objective function, a Riccati equation can still be obtained for iterating backward to each period's solution even though certainty equivalence does not apply. The discrete-time case of a non-quadratic loss function but only additive disturbances can also be handled, albeit with more complications.
Example A typical specification of the discrete-time stochastic linear quadratic control problem is to minimize E_1[ Σ_{t=1}^{S} ( y_t^T Q y_t + u_t^T R u_t ) ], where E_1 is the expected value operator conditional on y_0, superscript T indicates a matrix transpose, and S is the time horizon, subject to the state equation y_t = A_t y_{t-1} + B_t u_{t-1}, where y is an n × 1 vector of observable state variables, u is a k × 1 vector of control variables, A_t is the time t realization of the stochastic n × n state transition matrix, B_t is the time t realization of the stochastic n × k matrix of control multipliers, and Q (n × n) and R (k × k) are known symmetric positive definite cost matrices. We assume that each element of A and B is jointly independently and identically distributed through time, so the expected value operations need not be time-conditional. Induction backwards in time can be used to obtain the optimal control solution at each time, with the symmetric positive definite cost-to-go matrix X evolving backwards in time from X_S = Q according to X_{t-1} = Q + E[A^T X_t A] - E[A^T X_t B] (R + E[B^T X_t B])^{-1} E[B^T X_t A], which is known as the discrete-time dynamic Riccati equation of this problem. The only information needed regarding the unknown parameters in the A and B matrices is the expected value and variance of each element of each matrix and the covariances among elements of the same matrix and among elements across matrices. The optimal control solution is unaffected if zero-mean, i.i.d. additive shocks also appear in the state equation, so long as they are uncorrelated with the parameters in the A and B matrices. But if they are so correlated, then the optimal control solution for each period contains an additional additive constant vector. If an additive constant vector appears in the state equation, then again the optimal control solution for each period contains an additional additive constant vector. The steady-state characterization of X (if it exists), relevant for the infinite-horizon problem in which S goes to infinity, can be found by iterating the dynamic equation for X repeatedly until it converges; then X is characterized by removing the time subscripts from its dynamic equation.
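This backward recursion can be sketched numerically. In the fragment below the dimensions, cost matrices, and distributions of A and B are illustrative assumptions, and the expectations E[A^T X A], E[A^T X B] and E[B^T X B] are approximated by averaging over random draws:

import numpy as np

rng = np.random.default_rng(0)
n, k, S = 2, 1, 50                    # assumed state dim, control dim, horizon
Q, R = np.eye(n), np.eye(k)           # assumed cost matrices

def draw_AB():
    # Assumed distributions of the stochastic transition and control matrices.
    A = np.array([[0.9, 0.1], [0.0, 0.8]]) + 0.05 * rng.standard_normal((n, n))
    B = np.array([[0.0], [1.0]]) + 0.05 * rng.standard_normal((n, k))
    return A, B

draws = [draw_AB() for _ in range(5000)]  # Monte Carlo moment approximation

X = Q.copy()                              # X_S = Q
for t in range(S, 0, -1):
    EAXA = np.mean([A.T @ X @ A for A, B in draws], axis=0)
    EAXB = np.mean([A.T @ X @ B for A, B in draws], axis=0)
    EBXB = np.mean([B.T @ X @ B for A, B in draws], axis=0)
    gain = np.linalg.solve(R + EBXB, EAXB.T)  # period-t feedback gain
    X = Q + EAXA - EAXB @ gain                # stochastic Riccati step

print("cost-to-go matrix X after backward iteration:\n", X)
print("feedback gain (optimal control is u = -gain @ y):\n", gain)

Because the same draws are reused at every step, only the moments of A and B matter, mirroring the statement above that only the means, variances, and covariances of the parameters are needed.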
Continuous time If the model is in continuous time, the controller knows the state of the system at each instant of time. The objective is to maximize either an integral of, for example, a concave function of a state variable over a horizon from time zero (the present) to a terminal time T, or a concave function of a state variable at some future date T. As time evolves, new observations are continuously made and the control variables are continuously adjusted in optimal fashion. Stochastic model predictive control In the literature, there are two types of MPCs for stochastic systems: robust model predictive control and stochastic model predictive control (SMPC). Robust model predictive control is a more conservative method which considers the worst scenario in the optimization procedure. However, this method, like other robust controls, degrades the overall controller's performance and is applicable only to systems with bounded uncertainties. The alternative method, SMPC, considers soft constraints which limit the risk of violation by a probabilistic inequality. In finance In a continuous-time approach in a finance context, the state variable in the stochastic differential equation is usually wealth or net worth, and the controls are the shares placed at each time in the various assets. Given the asset allocation chosen at any time, the determinants of the change in wealth are usually the stochastic returns to assets and the interest rate on the risk-free asset. The field of stochastic control has developed greatly since the 1970s, particularly in its applications to finance. Robert Merton used stochastic control to study optimal portfolios of safe and risky assets. His work and that of Black–Scholes changed the nature of the finance literature. Influential mathematical textbook treatments were by Fleming and Rishel, and by Fleming and Soner. These techniques were applied by Stein to the financial crisis of 2007–08. The maximization, say of the expected logarithm of net worth at a terminal date T, is subject to stochastic processes on the components of wealth. In this case, in continuous time, Itô's equation is the main tool of analysis. In the case where the maximization is an integral of a concave function of utility over a horizon (0, T), dynamic programming is used. There is no certainty equivalence as in the older literature, because the coefficients of the control variables (that is, the returns received by the chosen shares of assets) are stochastic. See also Backward stochastic differential equation Stochastic process Control theory Multiplier uncertainty Stochastic scheduling References Further reading Control theory Stochastic processes
Stochastic control
[ "Mathematics" ]
1,552
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
17,060,393
https://en.wikipedia.org/wiki/Post-translational%20regulation
Post-translational regulation refers to the control of the levels of active protein. It takes several forms, and is accomplished either by means of reversible events (post-translational modifications such as phosphorylation, or sequestration) or by means of irreversible events (proteolysis). See also Post-translational modification References Gene expression Post-translational modification
Post-translational regulation
[ "Chemistry", "Biology" ]
85
[ "Gene expression", "Biochemical reactions", "Post-translational modification", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
17,060,990
https://en.wikipedia.org/wiki/Nucleic%20acid%20methods
Nucleic acid methods are the techniques used to study nucleic acids: DNA and RNA. Purification DNA extraction Phenol–chloroform extraction Minicolumn purification RNA extraction Boom method Synchronous coefficient of drag alteration (SCODA) DNA purification Quantification Abundance in weight: spectroscopic nucleic acid quantitation Absolute abundance in number: real-time polymerase chain reaction (quantitative PCR) High-throughput relative abundance: DNA microarray High-throughput absolute abundance: serial analysis of gene expression (SAGE) Size: gel electrophoresis Synthesis De novo: oligonucleotide synthesis Amplification: polymerase chain reaction (PCR), loop-mediated isothermal amplification (LAMP), transcription-mediated amplification (TMA) Kinetics Multi-parametric surface plasmon resonance Dual-polarization interferometry Quartz crystal microbalance with dissipation monitoring (QCM-D) Gene function RNA interference Other Bisulfite sequencing DNA sequencing Expression cloning Fluorescence in situ hybridization Lab-on-a-chip Comparison of nucleic acid simulation software Northern blot Nuclear run-on assay Radioactivity in the life sciences Southern blot Differential centrifugation (sucrose gradient) Toeprinting assay Several bioinformatics methods, as seen in list of RNA structure prediction software See also CSH Protocols Current Protocols References External links Protocols for Recombinant DNA Isolation, Cloning, and Sequencing Genetics techniques Molecular biology Nucleic acids
Nucleic acid methods
[ "Chemistry", "Engineering", "Biology" ]
318
[ "Genetics techniques", "Biomolecules by chemical classification", "Genetic engineering", "Molecular biology", "Biochemistry", "Nucleic acids" ]
19,278,451
https://en.wikipedia.org/wiki/Bolus%20%28radiation%20therapy%29
In radiation therapy, bolus is a material with properties equivalent to tissue when irradiated. It is widely used in practice to reduce or alter the dose delivered in targeted radiation therapy. Compensating for missing tissue or irregular tissue shape It must be possible to mould the bolus to fill the tissue space. Lincolnshire and Spier's bolus, loosely packed in polyethylene bags, is suitable because the bags take the shape of the skin surface and are easily smoothed to achieve a flat surface. Modifying dose at the skin surface and at depth A specific thickness of bolus can be applied to the skin to alter the dose received at depth in the tissue and on the skin surface. A typical example is the application of a defined thickness of bolus to the chest wall for post-mastectomy chest wall treatment, to increase the skin dose. The thickness of bolus applied depends on the skin dose required and the angle of incidence of the treatment beams. For example, if oblique 6 MV beams are used for a tangential pair, 1 cm of bolus effectively becomes 1.5 cm, i.e., "full bolus". When a full bolus is applied, a bolus thickness equal to the depth of the build-up region removes the skin-sparing effect of a megavoltage x-ray beam. On the other hand, there are boluses that do not require the selection of specific thicknesses to treat a certain depth. These boluses have densities higher than water, and their effect can be calculated from CT images by the treatment planning system (TPS). One such product is commonly known as a high-density, high-adaptation bolus (e.g., eXaSkin and eXaSkin Plus). Pliable bolus Suitable material must be pliable and easily moulded to the skin surface, but retain a constant thickness. One example is paraffin gauze. Rigid bolus For smaller areas which do not require the bolus to be moulded over the skin, Perspex can be used. Perspex bolus is advantageous for electron set-ups because it is transparent. Since the f.s.d. for most electron fields is 95 cm, couch movements are not isocentric, and inaccuracies may arise when aligning angled fields through an opaque bolus. Positioning bolus in the treatment beam To ensure that the patient receives the required dose, bolus of the right thickness must be placed correctly. Bolus requirements must therefore be clearly documented in the setup sheets of the treatment card. When bolus is used to compensate for missing tissue, the whole of the bolussed area must be level with the point on the patient where the f.s.d. is set, to ensure dose homogeneity. When bolus is used to reduce the skin-sparing effect, it does not necessarily need to touch the skin over the whole bolussed area, as the scatter is of sufficiently high energy to be unaffected by an air gap. However, it is important that the bolus is of uniform thickness. Some bolus materials are easily squashed and must be carefully measured at regular intervals. References Perez and Brady's Principles and Practice of Radiation Oncology Oncology Radiation Radiation therapy
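The "full bolus" example above (1 cm behaving as 1.5 cm for oblique beams) follows from simple slab geometry: the path length through a flat bolus grows as the secant of the angle of incidence. The sketch below illustrates only this geometric relationship; the function name is a hypothetical helper, and clinical dose at depth is of course computed by the treatment planning system, not by this shortcut.

```python
import math

def effective_bolus_thickness(t_cm: float, incidence_deg: float) -> float:
    """Path length of a beam through a flat bolus slab (illustrative only).

    Simple slab geometry: physical thickness divided by the cosine of the
    angle of incidence measured from the surface normal.
    """
    return t_cm / math.cos(math.radians(incidence_deg))

# An incidence of roughly 48 degrees makes 1 cm of bolus act like ~1.5 cm,
# the "full bolus" case described above for oblique tangential beams.
print(f"{effective_bolus_thickness(1.0, 48.2):.2f} cm")
```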
Bolus (radiation therapy)
[ "Physics", "Chemistry" ]
694
[ "Transport phenomena", "Waves", "Physical phenomena", "Radiation" ]
19,283,730
https://en.wikipedia.org/wiki/Pharmacotoxicology
Pharmacotoxicology entails the study of the consequences of toxic exposure to pharmaceutical drugs and agents in the health care field. The field of pharmacotoxicology also involves the treatment and prevention of pharmaceutically induced side effects. Pharmacotoxicology can be separated into two different categories: pharmacodynamics (the effects of a drug on an organism) and pharmacokinetics (the effects of the organism on the drug). Mechanisms of Pharmaceutical Drug Toxicity There are many mechanisms by which pharmaceutical drugs can have toxic implications. A very common mechanism is covalent binding of either the drug or its metabolites to specific enzymes or receptors in tissue-specific pathways, which then elicits toxic responses. Covalent binding can occur in both on-target and off-target situations and after biotransformation. On-target toxicity On-target toxicity is also referred to as mechanism-based toxicity. This type of adverse effect is commonly due to interactions of the drug with its intended target; in this case, both the therapeutic and toxic targets are the same. To avoid toxicity during treatment, the drug often needs to be changed to target a different aspect of the illness or symptoms. Statins are an example of a drug class that can have toxic effects at the therapeutic target (HMG-CoA reductase). Immune Responses Some pharmaceuticals can initiate allergic reactions, as in the case of penicillins. In some people, administration of penicillin can induce production of specific antibodies and initiate an immune response. Activation of this response when unwarranted can cause severe health concerns and prevent proper immune system functioning. Immune responses to pharmaceutical exposure can be very common in accidental contamination events. Tamoxifen, a selective estrogen receptor modulator, has been shown to alter the humoral adaptive immune response in gilthead seabream; pharmaceuticals can thus produce adverse effects not only in humans, but also in organisms that are unintentionally exposed. Off-target toxicity Adverse effects at targets other than those desired for pharmaceutical treatments often occur with drugs that are nonspecific. If a drug can bind to unexpected proteins, receptors, or enzymes that alter pathways other than those desired for treatment, severe downstream effects can develop. An example is the drug eplerenone (an aldosterone receptor antagonist), which should increase aldosterone levels but has been shown to produce atrophy of the prostate. Bioactivation Bioactivation is a crucial step in the activity of certain pharmaceuticals. Often, the parent form of the drug is not the active form and it needs to be metabolized in order to produce its therapeutic effects. In other cases, bioactivation is not necessarily needed for drugs to be active and can instead produce reactive intermediates that initiate stronger adverse effects than the original form of the drug. Bioactivation can occur through the action of Phase I metabolic enzymes, such as cytochrome P450 or peroxidases. Reactive intermediates can cause a loss of function in some enzymatic pathways or can promote the production of reactive oxygen species, both of which can increase stress levels and alter homeostasis. Drug-drug interactions Drug-drug interactions can occur when certain drugs are administered at the same time. 
Effects of such interactions can be additive (the combined outcome is greater than that of either individual drug), less than additive (the therapeutic effect is less than that of one individual drug), or functional alterations (one drug changes how another is absorbed, distributed, and metabolized). Drug-drug interactions can be of serious concern for patients who are undergoing multi-drug therapies. Coadministration of chloroquine, an anti-malaria drug, with statins for treatment of cardiovascular diseases has been shown to inhibit organic anion-transporting polypeptides (OATPs) and lead to increased systemic statin exposure. Pharmacotoxicity Examples There are many different pharmaceutical drugs that can produce adverse effects after biotransformation, interaction with alternate targets, or through drug-drug interactions. All pharmaceuticals can be toxic, depending on the dose. Acetaminophen Acetaminophen (APAP) is a very common drug used to treat pain. High doses of acetaminophen have been shown to produce severe hepatotoxicity after biotransformation to reactive intermediates. Acetaminophen is metabolized by CYP2E1 to produce NAPQI, which then causes significant oxidative stress through increased reactive oxygen species (ROS). ROS can cause cellular damage in a multitude of ways, including DNA and mitochondrial damage and depletion of antioxidants such as glutathione. In terms of drug-drug interactions, acetaminophen activates CAR, a nuclear receptor involved in the production of metabolic enzymes, which increases the metabolism of other drugs. This can either cause reactive intermediates or drug activity to persist for longer than necessary, or cause a drug to be cleared more quickly than normal, preventing any therapeutic action. Ethanol induces CYP2E1 enzymes in the liver, which can lead to increased NAPQI formation in addition to that formed from acetaminophen alone. Aspirin Aspirin is an NSAID used to treat inflammation and pain. Overdoses or treatment in conjunction with other NSAIDs can produce additive effects, which can lead to increased oxidative stress and ROS activity. Chronic exposure to aspirin can lead to CNS toxicity and eventually affect respiratory function. Anti-depressants Anti-depressants have been prescribed since the 1950s, and their prevalence has significantly increased since then. There are many classes of anti-depressant pharmaceuticals, such as selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and tricyclic anti-depressants. Many of these drugs, especially the SSRIs, function by blocking the metabolism or reuptake of neurotransmitters to treat depression and anxiety. Chronic exposure to or overdose of these pharmaceuticals can lead to serotonin and CNS hyperexcitation, weight changes, and, in severe cases, suicide. Anti-cancer drugs Doxorubicin is a very effective anti-cancer drug that can cause congestive heart failure while treating tumors. Doxorubicin acts as an uncoupling agent in that it inhibits proper functioning of complex I of the electron transport chain in mitochondria, leading to the production of ROS and the inhibition of ATP production. Doxorubicin has been shown to be selectively toxic to cardiac tissue, although some toxicity has been seen in other tissues as well. Other anti-cancer drugs, such as fluoropyrimidines and taxanes, are extremely effective at treating and reducing tumor proliferation, but have high incidences of cardiac arrhythmias and myocardial infarctions. 
References External links PsychRights.org - 'Psychiatric Polypharmacy: A Word of Caution', Leslie Morrison, MS, RN, Esq, Paul B. Duryea, Charis Moore, Alexandra Nathanson-Shinn, Stephen E. Hall, MD, James Meeker, PhD, DABFT, Charles A. Reynolds, PharmD, BCPP, Protection & Advocacy, Inc. Pharmacology Pharmacy Toxicology
Pharmacotoxicology
[ "Chemistry", "Environmental_science" ]
1,548
[ "Pharmacology", "Toxicology", "Medicinal chemistry", "Pharmacy" ]
1,784,050
https://en.wikipedia.org/wiki/Timekeeping%20on%20Mars
Though no standard exists, numerous calendars and other timekeeping approaches have been proposed for the planet Mars. The most commonly seen in the scientific literature denotes the time of year as the number of degrees on its orbit from the northward equinox, and increasingly the Martian years are numbered beginning at the equinox that occurred on April 11, 1955. Mars has an axial tilt and a rotation period similar to those of Earth. Thus, it experiences seasons of spring, summer, autumn and winter much like Earth. Mars' orbital eccentricity is considerably larger, which causes its seasons to vary significantly in length. A sol, or Martian day, is not that different from an Earth day: less than an hour longer. However, a Mars year is almost twice as long as an Earth year. Sols The average length of a Martian sidereal day is 24 h 37 min 22.663 s (88,642.663 seconds based on SI units), and the length of its solar day is 24 h 39 min 35.244 s (88,775.244 seconds). The corresponding values for Earth are currently 23 h 56 min 4.0905 s and 24 h 00 min 00.002 s, respectively, which yields a conversion factor of 1.0274912517 Earth days/sol: thus, Mars's solar day is only about 2.75% longer than Earth's; approximately 73 sols pass for every 75 Earth days. The term "sol" is used by planetary scientists to refer to the duration of a solar day on Mars. The term was adopted during NASA's Viking project (1976) in order to avoid confusion with an Earth "day". By inference, Mars' "solar hour" is 1/24 of a sol (1 hr 1 min 39 sec), a "solar minute" 1/60 of a solar hour (61.65 sec), and a "solar second" 1/60 of a solar minute (1.0275 sec). Mars Sol Date When counting solar days on Earth, astronomers often use Julian dates—a simple sequential count of days—for timekeeping purposes. An analogous system for Mars has been proposed "[f]or historical utility with respect to the Earth-based atmospheric, visual mapping, and polar-cap observations of Mars,... a sequential count of sol-numbers". This Mars Sol Date (MSD) starts "prior to the 1877 perihelic opposition." Thus, the MSD is a running count of sols since 29 December 1873 (coincidentally the birth date of astronomer Carl Otto Lampland). Numerically, the Mars Sol Date is defined as MSD = (JD − 2451549.5)/1.0274912517 + 44796.0 − 0.0009626, where JD is the Julian Date using Terrestrial Time. Time of day A convention used by spacecraft lander projects to date has been to enumerate local solar time using a 24-hour "Mars clock" on which the hours, minutes and seconds are 2.75% longer than their standard (Earth) durations. This has the advantage that no handling of times greater than 23:59 is needed, so standard tools can be used. Mars noon is 12:00 on this clock, which in Earth time falls 12 hours and 20 minutes after midnight. For the Mars Pathfinder, Mars Exploration Rover (MER), Phoenix, and Mars Science Laboratory missions, the operations teams have worked on "Mars time", with a work schedule synchronized to the local time at the landing site on Mars rather than the Earth day. This results in the crew's schedule sliding approximately 40 minutes later in Earth time each day. Wristwatches calibrated in Martian time, rather than Earth time, were used by many of the MER team members. Local solar time has a significant impact on planning the daily activities of Mars landers. Daylight is needed for the solar panels of landed spacecraft. Temperatures rise and fall rapidly at sunrise and sunset because Mars does not have Earth's thick atmosphere and oceans, which soften such fluctuations. 
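As a minimal illustration of the conversions just described, the following sketch derives the Earth-day/sol factor and the stretched "Mars clock" units directly from the sol length; the constants are taken from the figures in this section, and the function names are arbitrary.

```python
# Constants taken from the figures above.
SOL_SECONDS = 88775.244                    # mean Martian solar day, SI seconds
DAY_SECONDS = 86400.0                      # Earth mean solar day, SI seconds
DAYS_PER_SOL = SOL_SECONDS / DAY_SECONDS   # ~1.0274912517 Earth days per sol

def sols_to_earth_days(sols: float) -> float:
    """Convert a duration in sols to Earth days."""
    return sols * DAYS_PER_SOL

# Stretched "Mars clock" units: 1/24 sol, then 1/60 of that, then 1/60 again.
MARS_HOUR = SOL_SECONDS / 24               # ~3698.97 s (1 h 1 min 39 s)
MARS_MINUTE = MARS_HOUR / 60               # ~61.65 s
MARS_SECOND = MARS_MINUTE / 60             # ~1.0275 s

print(f"{DAYS_PER_SOL:.10f} Earth days per sol")
print(f"73 sols = {sols_to_earth_days(73):.2f} Earth days")  # ~75.0
print(f"Mars hour/minute/second: {MARS_HOUR:.2f} / {MARS_MINUTE:.2f} / {MARS_SECOND:.4f} s")
```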
Consensus has recently been gained in the scientific community studying Mars to similarly define Martian local hours as 1/24th of a Mars day. As on Earth, on Mars there is also an equation of time that represents the difference between sundial time and uniform (clock) time. The equation of time is illustrated by an analemma. Because of orbital eccentricity, the length of the solar day is not quite constant. Because Mars's orbital eccentricity is greater than that of Earth, the length of day varies from the average by a greater amount than that of Earth, and hence its equation of time shows greater variation than that of Earth: on Mars, the Sun can run 50 minutes slower or 40 minutes faster than a Martian clock (on Earth, the corresponding figures are about 14 minutes slower and about 16 minutes faster). Mars has a prime meridian, defined as passing through the small crater Airy-0. The prime meridian was first proposed by German astronomers Wilhelm Beer and Johann Heinrich Mädler in 1830, as marked by the fork in the albedo feature later named Sinus Meridiani by Italian astronomer Giovanni Schiaparelli. This convention was readily adopted by the astronomical community, the result being that Mars had a universally accepted prime meridian half a century before the International Meridian Conference of 1884 established one for Earth. The definition of the Martian prime meridian has since been refined, on the basis of spacecraft imagery, as the center of the crater Airy-0 in Terra Meridiani. However, Mars does not have time zones defined at regular intervals from the prime meridian, as on Earth. Each lander so far has used an approximation of local solar time as its frame of reference, as cities did on Earth before the introduction of standard time in the 19th century. (The two Mars Exploration Rovers happen to be approximately 12 hours and one minute apart.) Since the late 1990s and the arrival of Mars Global Surveyor at Mars, the most widely used system for specifying locations on Mars has been planetocentric coordinates, which measure longitude 0°–360° East and latitude angles from the center of Mars. An alternative system that was used before then is planetographic coordinates, which measure longitudes as 0°–360° West and determine latitudes as mapped onto the surface. However, planetographic coordinates remain in use, as on the MAVEN orbiter project. Coordinated Mars Time Coordinated Mars Time (MTC) or Martian Coordinated Time is a proposed Mars analog to Universal Time (UT1) on Earth. It is defined as the mean solar time at Mars's prime meridian. The name "MTC" is intended to parallel the Terran Coordinated Universal Time (UTC), but this is somewhat misleading: what distinguishes UTC from other forms of UT is its leap seconds, but MTC does not use any such scheme. MTC is more closely analogous to UT1. Use of the term "Martian Coordinated Time" as a planetary standard time first appeared in a journal article in 2000. The abbreviation "MTC" was used in some versions of the related Mars24 Sunclock coded by the NASA Goddard Institute for Space Studies. That application has also denoted the standard time as "Airy Mean Time" (AMT), in analogy to Greenwich Mean Time (GMT). In an astronomical context, "GMT" is a deprecated name for Universal Time, or sometimes more specifically for UT1. Neither AMT nor MTC has yet been employed in mission timekeeping. 
This is partially attributable to uncertainty regarding the position of Airy-0 (relative to other longitudes), which meant that AMT could not be realized as accurately as local time at the points being studied. At the start of the Mars Exploration Rover missions, the positional uncertainty of Airy-0 corresponded to roughly a 20-second uncertainty in realizing AMT. In order to refine the location of the prime meridian, it has been proposed that it be based on a specification that the Viking Lander 1 is located at 47.95137°W. Lander mission clocks When a NASA spacecraft lander begins operations on Mars, the passing Martian days (sols) are tracked using a simple numerical count. The two Viking mission landers, Mars Phoenix, the Mars Science Laboratory rover Curiosity, InSight, and Mars 2020 Perseverance missions all count the sol on which the lander touched down as "Sol 0". Mars Pathfinder and the two Mars Exploration Rovers instead defined touchdown as "Sol 1". Each successful lander mission so far has used its own "time zone", corresponding to some defined version of local solar time at the landing site location. Of the nine successful NASA Mars landers to date, eight employed offsets from local mean solar time (LMST) for the lander site, while the ninth (Mars Pathfinder) used local true solar time (LTST). Whether China's Zhurong rover project has used a similar timekeeping system of recording the sol number and LMST (or an offset) has not been disclosed. Viking Landers The "local lander times" for the two Viking mission landers were offsets from LMST at the respective lander sites. In both cases, the initial clock midnight was set to match local true midnight immediately preceding touchdown. Pathfinder Mars Pathfinder used the local apparent solar time at its landing location. Its time zone was AAT-02:13:01, where "AAT" is Airy Apparent Time, meaning apparent (true) solar time at Airy-0. The difference between the true and mean solar time (AMT and AAT) is the Martian equation of time. Pathfinder kept track of the days with a sol count starting on Sol 1 (corresponding to MSD 43905), on which it landed at night at 02:56:55 (mission clock; 4:41 AMT). Spirit and Opportunity The two Mars Exploration Rovers did not use mission clocks matched to the LMST of their landing points. For mission planning purposes, they instead defined a time scale that would approximately match the clock to the apparent solar time about halfway through the nominal 90-sol primary mission. This was referred to in mission planning as "Hybrid Local Solar Time" (HLST) or as the "MER Continuous Time Algorithm". These time scales were uniform in the sense of mean solar time (i.e., they approximate the mean time of some longitude) and were not adjusted as the rovers traveled. (The rovers traveled distances that could make a few seconds of difference to local solar time.) The HLST of Spirit is AMT+11:00:04 whereas the LMST at its landing site is AMT+11:41:55. The HLST of Opportunity is AMT-01:01:06 whereas the LMST at its landing site is AMT-00:22:06. Neither rover was likely ever to reach the longitude at which its mission time scale matches local mean time. However, for atmospheric measurements and other science purposes, Local True Solar Time is recorded. Spirit and Opportunity both started their sol counts with Sol 1 on the day of landing, corresponding to MSD 46216 and MSD 46236, respectively. 
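For the LMST-based missions, a site's mission-clock offset from Airy Mean Time follows from its longitude at 15° per mean solar hour, east positive. The sketch below is an illustrative helper (not mission software) that reproduces, for example, the planned-site offsets quoted in the subsections that follow.

```python
def amt_offset(lon_east_deg: float) -> str:
    """Mission-clock offset from Airy Mean Time for a site at the given
    east longitude, at 15 degrees per mean solar hour (illustrative only)."""
    seconds = lon_east_deg / 15.0 * 3600.0   # positive east of Airy-0
    sign = "+" if seconds >= 0 else "-"
    h, rem = divmod(abs(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"AMT{sign}{int(h):02d}:{int(m):02d}:{s:04.1f}"

print(amt_offset(137.42))   # AMT+09:09:40.8 -- Curiosity's planned site, below
print(amt_offset(135.97))   # AMT+09:03:52.8 -- InSight's planned site, quoted
                            # below rounded to AMT+09:03:53
```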
Phoenix The Phoenix lander project specified a mission clock that matched Local Mean Solar Time at the planned landing longitude of 126.65°W (233.35°E). This corresponds to a mission clock of AMT-08:26:36. The actual landing site was 0.900778° (19.8 km) east of that, corresponding to 3 minutes and 36 seconds later in local solar time. The date is kept using a mission clock sol count with the landing occurring on Sol 0, corresponding to MSD 47776 (mission time zone); the landing occurred around 16:35 LMST, which is MSD 47777 01:02 AMT. Curiosity The Curiosity rover project specified a mission clock that matched Local Mean Solar Time at its originally planned landing longitude of 137.42°E. This corresponds to a mission clock of AMT+09:09:40.8. The actual landing site was about 0.02° (1.3 km) east of that, a difference of about 5 seconds in solar time. The local mean solar time is also affected by the rover's motion; at 4.6°S, this is about 1 second of time difference for every 246 meters of displacement along the east–west direction. The date is kept using a mission clock sol count with the landing occurring on Sol 0, corresponding to MSD 49269 (mission time zone); the landing occurred around 14:53 LMST (05:53 AMT). InSight The InSight lander project specified a mission clock that matched Local Mean Solar Time at its planned landing site of 135.97°E. This corresponds to a mission clock of AMT+09:03:53. The actual landing site was at 135.623447°E, or 0.346553° (20.5 km) west of the reference longitude, so the lander mission clock is 1 minute and 23 seconds ahead of the actual mean local solar time at the lander location. The date is kept using a mission clock sol count with the landing occurring on Sol 0, corresponding to MSD 51511 (mission time zone); landing occurred around 14:23 LMST (05:14 AMT). Perseverance The Perseverance rover project specified a mission clock that matched Local Mean Solar Time at a planned landing longitude of 77.43°E. This corresponds to a mission clock of AMT+05:09:43. The actual landing site was about 0.02° (1.2 km) east of that, a difference of about 5 seconds in solar time. The local mean solar time is also affected by the rover's motion; at 18.4°N, this is about 1 second of time difference for every 234 meters of displacement in the east–west direction. The date is kept using a mission clock sol count with the landing occurring on Sol 0, corresponding to MSD 52304 (mission time zone); landing occurred around 15:54 LMST (10:44 AMT). Summary Years Definition of year and seasons The length of time for Mars to complete one orbit around the Sun with respect to the stars, its sidereal year, is about 686.98 Earth solar days (≈ 1.88 Earth years), or 668.5991 sols. Because of the eccentricity of Mars' orbit, the seasons are not of equal length. Assuming that seasons run from equinox to solstice or vice versa, the season Ls 0 to Ls 90 (northern-hemisphere spring / southern-hemisphere autumn) is the longest season, lasting 194 Martian sols, and Ls 180 to Ls 270 (northern-hemisphere autumn / southern-hemisphere spring) is the shortest season, lasting only 142 Martian sols. As on Earth, the sidereal year is not the quantity needed for calendar purposes. Rather, the tropical year would likely be used, because it gives the best match to the progression of the seasons. It is slightly shorter than the sidereal year due to the precession of Mars' rotational axis. The precession cycle is 93,000 Martian years (175,000 Earth years), much longer than on Earth. 
The length of the precession cycle can be computed by dividing the length of the tropical year by the difference between the sidereal year and the tropical year. Tropical year length depends on the starting point of measurement, due to the effects of Kepler's second law of planetary motion and precession. There are various possible years, including the March (northward) equinox year, the June (northern) solstice year, the September (southward) equinox year, the December (southern) solstice year, and the tropical year based on the mean sun. (See March equinox year.) On Earth, the variation in the lengths of the tropical years is small, with the mean time from June solstice to June solstice being about a thousandth of a day shorter than that between two December solstices, but on Mars it is much larger because of the greater eccentricity of its orbit. The northward equinox year is 668.5907 sols, the northern solstice year is 668.5880 sols, the southward equinox year is 668.5940 sols, and the southern solstice year is 668.5958 sols (0.0078 sols more than the northern solstice year). (Since, like Earth, the northern and southern hemispheres of Mars have opposite seasons, equinoxes and solstices must be labelled by hemisphere to remove ambiguity.) Seasons begin at 90-degree intervals of solar longitude (Ls) at equinoxes and solstices. Year numbering For purposes of enumerating Mars years and facilitating data comparisons, a system increasingly used in the scientific literature, particularly in studies of Martian climate, enumerates years relative to the northern spring equinox (Ls 0) that occurred on April 11, 1955, labeling that date the start of Mars Year 1 (MY1). The system was first described in a paper focused on seasonal temperature variation by R. Todd Clancy of the Space Science Institute. Although Clancy and co-authors described the choice as "arbitrary", the great dust storm of 1956 falls in MY1. The system has been extended by defining Mars Year 0 (MY0) as beginning on May 24, 1953, thus allowing for negative year numbers. Martian calendars Long before mission control teams on Earth began scheduling work shifts according to the Martian sol while operating spacecraft on the surface of Mars, it was recognized that humans probably could adapt to this slightly longer diurnal period. This suggested that a calendar based on the sol and the Martian year might be a useful timekeeping system for astronomers in the short term and for explorers in the future. For most day-to-day activities on Earth, people do not use Julian days, as astronomers do, but the Gregorian calendar, which despite its various complications is quite useful. It allows for easy determination of whether one date is an anniversary of another, whether a date is in winter or spring, and what is the number of years between two dates. This is much less practical with a Julian day count. For similar reasons, if it is ever necessary to schedule and co-ordinate activities on a large scale across the surface of Mars, it would be necessary to agree on a calendar. American astronomer Percival Lowell expressed the time of year on Mars in terms of Mars dates that were analogous to Gregorian dates, with 20 March, 21 June, 22 September, and 21 December marking the southward equinox, southern solstice, northward equinox, and northern solstice, respectively; Lowell's focus was on the southern hemisphere of Mars because it is the hemisphere that is more easily observed from Earth during favorable oppositions. 
Lowell's system was not a true calendar, since a Mars date could span nearly two entire sols; rather, it was a convenient device for expressing the time of year in the southern hemisphere in lieu of heliocentric longitude, which would have been less comprehensible to a general readership. Italian astronomer Mentore Maggini's 1939 book describes a calendar developed years earlier by American astronomers Andrew Ellicott Douglass and William H. Pickering, in which the first nine months contain 56 sols and the last three months contain 55 sols. Their calendar year begins with the northward equinox on 1 March, thus imitating the original Roman calendar. Other dates of astronomical significance are: northern solstice, 27 June; southward equinox, 36 September; southern solstice, 12 December; perihelion, 31 November; and aphelion, 31 May. Pickering's inclusion of Mars dates in a 1916 report of his observations may have been the first use of a Martian calendar in an astronomical publication. Maggini states: "These dates of the Martian calendar are frequently used by observatories...." Despite his claim, this system eventually fell into disuse, and in its place new systems were proposed periodically which likewise did not gain sufficient acceptance to take permanent hold. In 1936, when the calendar reform movement was at its height, American astronomer Robert G. Aitken published an article outlining a Martian calendar. In each quarter there are three months of 42 sols and a fourth month of 41 sols. The pattern of seven-day weeks repeats over a two-year cycle, i.e., the calendar year always begins on a Sunday in odd-numbered years, thus effecting a perpetual calendar for Mars. Whereas previous proposals for a Martian calendar had not included an epoch, American astronomer I. M. Levitt developed a more complete system in 1954. In fact, Ralph Mentzer, an acquaintance of Levitt's who was a watchmaker for the Hamilton Watch Company, built several clocks designed by Levitt to keep time on both Earth and Mars. They could also be set to display the date on both planets according to Levitt's calendar and epoch (the Julian day epoch of 4713 BCE). Charles F. Capen included references to Mars dates in a 1966 Jet Propulsion Laboratory technical report associated with the Mariner 4 flyby of Mars. This system stretches the Gregorian calendar to fit the longer Martian year, much as Lowell had done in 1895, the difference being that 20 March, 21 June, 22 September, and 21 December mark the northward equinox, northern solstice, southward equinox, and southern solstice, respectively. Similarly, Conway B. Leovy et al. also expressed time in terms of Mars dates in a 1973 paper describing results from the Mariner 9 Mars orbiter. British astronomer Sir Patrick Moore described a Martian calendar of his own design in 1977. His idea was to divide the Martian year into 18 months. Months 6, 12 and 18 have 38 sols, while the rest of the months contain 37 sols. American aerospace engineer and political scientist Thomas Gangale first published regarding the Darian calendar in 1986, with additional details published in 1998 and 2006. It has 24 months to accommodate the longer Martian year while keeping the notion of a "month" that is reasonably similar to the length of an Earth month. On Mars, a "month" would have no relation to the orbital period of any moon of Mars, since Phobos and Deimos orbit in about 7 hours and 30 hours, respectively. 
However, Earth and the Moon would generally be visible to the naked eye when they were above the horizon at night, and the time it takes for the Moon to move from maximum separation in one direction to the other and back, as seen from Mars, is close to a lunar month. Czech astronomer Josef Šurán offered a Martian calendar design in 1997, in which a common year has 672 Martian days distributed into 24 months of 28 days (or 4 weeks of 7 days each); in skip years, the week at the end of the twelfth month is omitted. Moore's 37-sol period 37 sols is the smallest integer number of sols after which the Mars Sol Date and the Julian date become offset by a full day. Alternatively, it can be viewed as the smallest integer number of sols needed for any Martian time zone to complete a full lap around Earth time zones. Specifically, 37 sols are equal to 38 Earth days plus 24 minutes and 44 seconds. Remarkably, the 37-sol period also happens nearly to divide several other time quantities of interest. In particular: One Martian year is approximately equal to 18 × (37 sols) + 2.59897 sols; Two Earth–Mars synodic periods are approximately equal to 41 × (37 sols) + 1.176 sols; One Earth decade is approximately equal to 96 × (37 sols) + 2.7018 sols. This makes the 37-sol period useful both for time synchronization between Earth and Mars time zones, and for Martian calendars, since a small number of leap sols can be straightforwardly added to eliminate calendar drift with respect to the Martian year, Earth–Mars launch windows, or Earth calendars. List of notable events in Martian history Martian time in fiction The first known reference to time on Mars appears in Percy Greg's novel Across the Zodiac (1880). The primary, secondary, tertiary, and quaternary divisions of the sol are based on the number 12. Sols are numbered 0 through the end of the year, with no additional structure to the calendar. The epoch is "the union of all races and nations in a single State, a union which was formally established 13,218 years ago". 20th century Edgar Rice Burroughs described, in The Gods of Mars (1913), the divisions of the sol into zodes, xats, and tals. Although possibly the first to make the mistake of describing the Martian year as lasting 687 Martian days, he was far from the last. In the Robert A. Heinlein novel Red Planet (1949), humans living on Mars use a 24-month calendar, alternating between familiar Earth months and newly created months such as Ceres and Zeus. For example, Ceres comes after March and before April, while Zeus comes after October and before November. The Arthur C. Clarke novel The Sands of Mars (1951) mentions in passing that "Monday followed Sunday in the usual way" and "the months also had the same names, but were fifty to sixty days in length". In H. Beam Piper's short story "Omnilingual" (1957), the Martian calendar and the periodic table are the keys to archaeologists' deciphering of the records left by the long-dead Martian civilization. Kurt Vonnegut's novel The Sirens of Titan (1959) describes a Martian calendar divided into twenty-one months: "twelve with thirty days, and nine with thirty-one", for a total of only 639 sols. D. G. Compton states in his novel Farewell, Earth's Bliss (1966), during the prison ship's journey to Mars: "Nobody on board had any real idea how the people in the settlement would have organised their six-hundred-and-eighty-seven-day year." 
In Ian McDonald's Desolation Road (1988), set on a terraformed Mars (referred to by the book's characters as "Ares"), characters follow an implied 24-month calendar whose months are portmanteaus of Gregorian months, such as "Julaugust", "Augtember", and "Novodecember". In both Philip K. Dick's novel Martian Time-Slip (1964) and Kim Stanley Robinson's Mars Trilogy (1992–1996), clocks retain Earth-standard seconds, minutes, and hours, but freeze at midnight for 39.5 minutes. As the fictional colonization of Mars progresses, this "timeslip" becomes a sort of witching hour, a time when inhibitions can be shed, and the emerging identity of Mars as a separate entity from Earth is celebrated. (It is not said explicitly whether this occurs simultaneously all over Mars, or at local midnight in each longitude.) Also in the Mars Trilogy, the calendar year is divided into twenty-four months. The names of the months are the same as in the Gregorian calendar, except for a "1" or "2" in front to indicate the first or second occurrence of that month (for example, 1 January, 2 January, 1 February, 2 February). 21st century In the manga and anime series Aria (2001–2002), by Kozue Amano, set on a terraformed Mars, the calendar year is also divided into twenty-four months. Following the modern Japanese calendar, the months are not named but numbered sequentially, running from 1st Month to 24th Month. The Darian calendar is mentioned in a couple of works of fiction set on Mars: Star Trek: Department of Temporal Investigations: Watching the Clock by Christopher L. Bennett, Pocket Books/Star Trek (April 26, 2011); The Quantum Thief by Hannu Rajaniemi, Tor Books; Reprint edition (May 10, 2011). In Andy Weir's novel The Martian (2011) and its 2015 feature film adaptation, sols are counted and referenced frequently with onscreen title cards, in order to emphasize the amount of time the main character spends on Mars. In Season 4 of For All Mankind, which is set in large part on a Mars base, wristwatches set to "Mars time" are used much the same way as they currently are among the staff of robotic Mars missions. Formulas to compute MSD and MTC The Mars Sol Date (MSD) can be computed from the Julian date referred to Terrestrial Time (TT), as MSD = (JDTT − 2405522.0028779) / 1.0274912517. Terrestrial Time, however, is not as readily available as Coordinated Universal Time (UTC). TT can be computed from UTC by first adding the difference TAI − UTC, a positive integer number of seconds occasionally updated by the introduction of leap seconds (see current number of leap seconds), then adding the constant difference TT − TAI = 32.184 s. This leads to the following formula giving MSD from the UTC-referred Julian date: MSD = [JDUTC + (TAI − UTC)/86400 − 2405522.0025054] / 1.0274912517, where the difference TAI − UTC is in seconds. JDUTC can in turn be computed from any epoch-based time stamp by adding the Julian date of the epoch to the time stamp in days. For example, if t is a Unix timestamp in seconds, then JDUTC = t / 86400 + 2440587.5. It follows, by simple substitution, that MSD = [t + (TAI − UTC)] / 88775.244147 + 34127.2954262. MTC is the fractional part of MSD, expressed in hours, minutes and seconds: MTC = (MSD mod 1) × 24 h. See also Astronomy on Mars Universal Time Coordinated Universal Time Notes References External links Martian Time MARS24 Application Earth Date to Mars Date Converter NASA Mars Clock (Curiosity Rover) mclock - Command Line Mars Clock TED Talk - What Time Is It On Mars Mars Mars
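A minimal Python transcription of the MSD/MTC formulas above, going from a Unix timestamp to MSD and MTC; the leap-second count TAI − UTC is supplied by the caller (37 s as of 2017) and must be updated whenever new leap seconds are introduced.

```python
import time

def mars_time(unix_seconds: float, tai_minus_utc: int = 37) -> tuple:
    """Mars Sol Date and Coordinated Mars Time from a Unix timestamp.

    Direct transcription of the formulas above; tai_minus_utc is the
    leap-second count TAI - UTC (37 s since 2017).
    """
    msd = (unix_seconds + tai_minus_utc) / 88775.244147 + 34127.2954262
    frac = msd % 1.0                        # MTC is the fractional part of MSD
    h, rem = divmod(frac * 86400.0, 3600)   # expressed on the 24-hour Mars clock
    m, s = divmod(rem, 60)
    return msd, f"{int(h):02d}:{int(m):02d}:{s:05.2f}"

msd, mtc = mars_time(time.time())
print(f"MSD = {msd:.5f}  MTC = {mtc}")
```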
Timekeeping on Mars
[ "Physics" ]
6,263
[ "Spacetime", "Timekeeping", "Physical quantities", "Time" ]
1,784,072
https://en.wikipedia.org/wiki/Atmospheric%20refraction
Atmospheric refraction is the deviation of light or other electromagnetic waves from a straight line as they pass through the atmosphere, due to the variation in air density as a function of height. This refraction occurs because the velocity of light through air decreases (the refractive index increases) with increased density. Atmospheric refraction near the ground produces mirages. Such refraction can also raise or lower, or stretch or shorten, the images of distant objects without involving mirages. Turbulent air can make distant objects appear to twinkle or shimmer. The term also applies to the refraction of sound. Atmospheric refraction is considered in measuring the position of both celestial and terrestrial objects. Astronomical or celestial refraction causes astronomical objects to appear higher above the horizon than they actually are. Terrestrial refraction usually causes terrestrial objects to appear higher than they actually are, although in the afternoon, when the air near the ground is heated, the rays can curve upward, making objects appear lower than they actually are. Refraction affects not only visible light rays but all electromagnetic radiation, although in varying degrees. For example, in the visible spectrum, blue is more affected than red. This may cause astronomical objects to appear dispersed into a spectrum in high-resolution images. Whenever possible, astronomers will schedule their observations around the times of culmination, when celestial objects are highest in the sky. Likewise, sailors will not shoot a star below 20° above the horizon. If observations of objects near the horizon cannot be avoided, it is possible to equip an optical telescope with control systems to compensate for the shift caused by the refraction. If the dispersion is also a problem (in the case of broadband high-resolution observations), atmospheric refraction correctors (made from pairs of rotating glass prisms) can be employed as well. Since the amount of atmospheric refraction is a function of the temperature gradient, temperature, pressure, and humidity (the amount of water vapor, which is especially important at mid-infrared wavelengths), the amount of effort needed for a successful compensation can be prohibitive. Surveyors, on the other hand, will often schedule their observations in the afternoon, when the magnitude of refraction is at its minimum. Atmospheric refraction becomes more severe when temperature gradients are strong, and refraction is not uniform when the atmosphere is heterogeneous, as when turbulence occurs in the air. This causes suboptimal seeing conditions, such as the twinkling of stars and various deformations of the Sun's apparent shape soon before sunset or after sunrise. Astronomical refraction Astronomical refraction deals with the angular position of celestial bodies, their appearance as point sources, and, through differential refraction, the shape of extended bodies such as the Sun and Moon. Atmospheric refraction of the light from a star is zero in the zenith, less than 1′ (one arc-minute) at 45° apparent altitude, and still only 5.3′ at 10° altitude; it quickly increases as altitude decreases, reaching 9.9′ at 5° altitude, 18.4′ at 2° altitude, and 35.4′ at the horizon; all values are for 10 °C and 1013.25 hPa in the visible part of the spectrum. On the horizon, refraction is slightly greater than the apparent diameter of the Sun, so when the bottom of the Sun's disc appears to touch the horizon, the Sun's true altitude is negative. 
If the atmosphere suddenly vanished at that moment, the Sun could not be seen, as it would be entirely below the horizon. By convention, sunrise and sunset refer to the times at which the Sun's upper limb appears on or disappears from the horizon, and the standard value for the Sun's true altitude is −50′: −34′ for the refraction and −16′ for the Sun's semi-diameter. The altitude of a celestial body is normally given for the center of the body's disc. In the case of the Moon, additional corrections are needed for the Moon's horizontal parallax and its apparent semi-diameter; both vary with the Earth–Moon distance. Refraction near the horizon is highly variable, principally because of the variability of the temperature gradient near the Earth's surface and the geometric sensitivity of the nearly horizontal rays to this variability. As early as 1830, Friedrich Bessel had found that even after applying all corrections for temperature and pressure (but not for the temperature gradient) at the observer, highly precise measurements of refraction varied by ±0.19′ at two degrees above the horizon and by ±0.50′ at a half degree above the horizon. At and below the horizon, values of refraction significantly higher than the nominal value of 35.4′ have been observed in a wide range of climates. Georg Constantin Bouris measured refraction of as much as 4° for stars on the horizon at the Athens Observatory, and, during his ill-fated Endurance expedition, Sir Ernest Shackleton recorded refraction of 2°37′: “The sun which had made ‘positively his last appearance’ seven days earlier surprised us by lifting more than half its disk above the horizon on May 8. A glow on the northern horizon resolved itself into the sun at 11 am that day. A quarter of an hour later the unreasonable visitor disappeared again, only to rise again at 11:40 am, set at 1 pm, rise at 1:10 pm and set lingeringly at 1:20 pm. These curious phenomena were due to refraction which amounted to 2° 37′ at 1:20 pm. The temperature was 15° below 0° Fahr., and we calculated that the refraction was 2° above normal.” Day-to-day variations in the weather will affect the exact times of sunrise and sunset, as well as moonrise and moonset, and for that reason it generally is not meaningful to give rise and set times to greater precision than the nearest minute. More precise calculations can be useful for determining day-to-day changes in rise and set times that would occur with the standard value for refraction, if it is understood that actual changes may differ because of unpredictable variations in refraction. Because atmospheric refraction is nominally 34′ on the horizon but only 29′ at 0.5° above it, the setting or rising sun seems to be flattened by about 5′ (about 1/6 of its apparent diameter). Calculating refraction Young distinguished several regions where different methods for calculating astronomical refraction are applicable. In the upper portion of the sky, with a zenith distance of less than 70° (or an altitude over 20°), various simple refraction formulas based on the index of refraction (and hence on the temperature, pressure, and humidity) at the observer are adequate. Between 20° and 5° of the horizon the temperature gradient becomes the dominant factor, and numerical integration, using a method such as that of Auer and Standish and employing the temperature gradient of the standard atmosphere and the measured conditions at the observer, is required. 
Closer to the horizon, actual measurements of the changes with height of the local temperature gradient need to be employed in the numerical integration. Below the astronomical horizon, refraction is so variable that only crude estimates of astronomical refraction can be made; for example, the observed time of sunrise or sunset can vary by several minutes from day to day. As The Nautical Almanac notes, "the actual values of …the refraction at low altitudes may, in extreme atmospheric conditions, differ considerably from the mean values used in the tables." Many different formulas have been developed for calculating astronomical refraction; they are reasonably consistent, differing among themselves by a few minutes of arc at the horizon and becoming increasingly consistent as they approach the zenith. The simpler formulations involved nothing more than the temperature and pressure at the observer, powers of the cotangent of the apparent altitude of the astronomical body and, in the higher-order terms, the height of a fictional homogeneous atmosphere. The simplest version of this formula, which Smart held to be accurate only within 45° of the zenith, is R = (n0 − 1) cot ha, where R is the refraction in radians, n0 is the index of refraction at the observer (which depends on the temperature, pressure, and humidity), and ha is the apparent altitude angle of the astronomical body. An early simple approximation of this form, which directly incorporated the temperature and pressure at the observer, was developed by George Comstock: R = (21.5 b / (273 + t)) cot ha, where R is the refraction in seconds of arc, b is the atmospheric pressure in millimeters of mercury, and t is the temperature in Celsius. Comstock considered that this formula gave results within one arcsecond of Bessel's values for refraction from 15° above the horizon to the zenith. A further expansion in terms of the third power of the cotangent of the apparent altitude incorporates H0, the height of the homogeneous atmosphere (in units of the Earth's radius), in addition to the usual conditions at the observer: R = (n0 − 1)(1 − H0) cot ha − (n0 − 1)(H0 − (n0 − 1)/2) cot³ ha. A version of this formula is used in the International Astronomical Union's Standards of Fundamental Astronomy; a comparison of the IAU's algorithm with more rigorous ray-tracing procedures indicated an agreement within 60 milliarcseconds at altitudes above 15°. Bennett developed another simple empirical formula for calculating refraction from the apparent altitude, which gives the refraction R in arcminutes: R = cot(ha + 7.31/(ha + 4.4)), with ha in degrees. This formula is used in the U. S. Naval Observatory's Vector Astrometry Software, and is reported to be consistent with Garfinkel's more complex algorithm within 0.07′ over the entire range from the zenith to the horizon. Sæmundsson developed an inverse formula for determining refraction from true altitude; if h is the true altitude in degrees, the refraction R in arcminutes is given by R = 1.02 cot(h + 10.3/(h + 5.11)); the formula is consistent with Bennett's to within 0.1′. The formulas of Bennett and Sæmundsson assume an atmospheric pressure of 101.0 kPa and a temperature of 10 °C; for different pressure P and temperature T, refraction calculated from these formulas is multiplied by (P/101.0) × (283/(273 + T)). Refraction increases approximately 1% for every 0.9 kPa increase in pressure, and decreases approximately 1% for every 0.9 kPa decrease in pressure. Similarly, refraction increases approximately 1% for every 3 °C decrease in temperature, and decreases approximately 1% for every 3 °C increase in temperature. 
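As a worked illustration, the following sketch implements Bennett's and Sæmundsson's formulas with the pressure/temperature scaling just described; function names are arbitrary, and the outputs reproduce the magnitudes quoted earlier (about 5.3′ at 10° altitude and roughly half a degree at the horizon).

```python
import math

def bennett(ha_deg: float, p_kpa: float = 101.0, t_c: float = 10.0) -> float:
    """Refraction in arcminutes from APPARENT altitude (Bennett's formula),
    scaled for pressure and temperature as described above."""
    r = 1.0 / math.tan(math.radians(ha_deg + 7.31 / (ha_deg + 4.4)))
    return r * (p_kpa / 101.0) * (283.0 / (273.0 + t_c))

def saemundsson(h_deg: float, p_kpa: float = 101.0, t_c: float = 10.0) -> float:
    """Refraction in arcminutes from TRUE altitude (Saemundsson's formula)."""
    r = 1.02 / math.tan(math.radians(h_deg + 10.3 / (h_deg + 5.11)))
    return r * (p_kpa / 101.0) * (283.0 / (273.0 + t_c))

# ~34.5' at the horizon, ~5.4' at 10 degrees, ~1.0' at 45 degrees
for alt in (0.0, 2.0, 5.0, 10.0, 45.0):
    print(f"{alt:4.1f} deg: {bennett(alt):5.2f}'")
```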
Random refraction effects Turbulence in Earth's atmosphere scatters the light from stars, making them appear brighter and fainter on a time-scale of milliseconds. The slowest components of these fluctuations are visible as twinkling (also called scintillation). Turbulence also causes small, sporadic motions of the star image, and produces rapid distortions in its structure. These effects are not visible to the naked eye, but can easily be seen even in small telescopes. They perturb astronomical seeing conditions. Some telescopes employ adaptive optics to reduce this effect. Terrestrial refraction Terrestrial refraction, sometimes called geodetic refraction, deals with the apparent angular position and measured distance of terrestrial bodies. It is of special concern for the production of precise maps and surveys. Since the line of sight in terrestrial refraction passes near the earth's surface, the magnitude of refraction depends chiefly on the temperature gradient near the ground, which varies widely at different times of day, seasons of the year, the nature of the terrain, the state of the weather, and other factors. As a common approximation, terrestrial refraction is considered as a constant bending of the ray of light or line of sight, in which the ray can be considered as describing a circular path. A common measure of refraction is the coefficient of refraction. Unfortunately there are two different definitions of this coefficient. One is the ratio of the radius of the Earth to the radius of curvature of the line of sight, the other is the ratio of the angle that the line of sight subtends at the center of the Earth to the angle of refraction measured at the observer. Since the latter definition only measures the bending of the ray at one end of the line of sight, it is one half the value of the former definition. The coefficient of refraction is directly related to the local vertical temperature gradient and the atmospheric temperature and pressure. The larger version of the coefficient k, measuring the ratio of the radius of the Earth to the radius of curvature of the line of sight, is given by k = 503 (P/T²)(0.0342 + dT/dh), where temperature T is given in kelvins, pressure P in millibars, height h in meters, and dT/dh is the vertical temperature gradient. The angle of refraction increases with the coefficient of refraction and with the length of the line of sight. Although the straight line from your eye to a distant mountain might be blocked by a closer hill, the ray may curve enough to make the distant peak visible. A convenient method to analyze the effect of refraction on visibility is to consider an increased effective radius of the Earth Reff, given by Reff = R / (1 − k), where R is the radius of the Earth and k is the coefficient of refraction. Under this model the ray can be considered a straight line on an Earth of increased radius. The curvature of the refracted ray in arc seconds per meter can be computed using the relationship 1/σ = 16.3 (P/T²)(0.0342 + dT/dh) cos β, where 1/σ is the curvature of the ray in arcsec per meter, P is the pressure in millibars, T is the temperature in kelvins, and β is the angle of the ray to the horizontal. Multiplying half the curvature by the length of the ray path gives the angle of refraction at the observer. For a line of sight near the horizon, cos β differs little from unity and can be ignored. This yields Ω = 8.15 (L P/T²)(0.0342 + dT/dh), where L is the length of the line of sight in meters and Ω is the refraction at the observer measured in arc seconds. 
A simple approximation is to consider that a mountain's apparent altitude at your eye (in degrees) will exceed its true altitude by its distance in kilometers divided by 1500. This assumes a fairly horizontal line of sight and ordinary air density; if the mountain is very high (so much of the sightline is in thinner air) divide by 1600 instead. See also Notes References Further reading External links Observational astronomy Atmospheric optical phenomena
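The rule of thumb above, together with the effective-radius model, can be captured in a few lines; the value k = 0.13 used in the example is a typical optical refraction coefficient, assumed here purely for illustration.

```python
def apparent_lift_deg(distance_km: float, high_sightline: bool = False) -> float:
    """Refractive lift of a distant peak, in degrees (the rule of thumb above):
    distance in km divided by 1500, or by 1600 for very high sightlines."""
    return distance_km / (1600.0 if high_sightline else 1500.0)

def effective_earth_radius_m(k: float, r_earth_m: float = 6.371e6) -> float:
    """Effective radius R / (1 - k) under the circular-ray model above."""
    return r_earth_m / (1.0 - k)

print(f"{apparent_lift_deg(100.0):.3f} deg of lift for a peak 100 km away")
print(f"k = 0.13 -> effective radius {effective_earth_radius_m(0.13)/1000.0:.0f} km")
```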
Atmospheric refraction
[ "Physics", "Astronomy" ]
2,869
[ "Physical phenomena", "Earth phenomena", "Observational astronomy", "Optical phenomena", "Atmospheric optical phenomena", "Astronomical sub-disciplines" ]
1,785,475
https://en.wikipedia.org/wiki/Chain-growth%20polymerization
Chain-growth polymerization (AE) or chain-growth polymerisation (BE) is a polymerization technique in which monomer molecules add onto the active site of a growing polymer chain one at a time. There are a limited number of these active sites at any moment during the polymerization, which gives this method its key characteristics. Chain-growth polymerization involves three types of reactions: Initiation: an active species I* is formed by some decomposition of an initiator molecule I. Propagation: the initiator fragment reacts with a monomer M to begin the conversion to the polymer; the center of activity is retained in the adduct. Monomers continue to add in the same way until polymers Pi* are formed with degree of polymerization i. Termination: by some reaction generally involving two polymers containing active centers, the growth center is deactivated, resulting in dead polymer. Introduction In 1953, Paul Flory first classified polymerization as "step-growth polymerization" and "chain-growth polymerization". IUPAC recommends further simplifying "chain-growth polymerization" to "chain polymerization". It is a kind of polymerization in which an active center (free radical or ion) is formed and a large number of monomers polymerize together in a short period of time to form a macromolecule of high molecular weight. Since the active site is regenerated with each monomer unit added, polymer growth occurs only at one (or possibly more) endpoints. Many common polymers can be obtained by chain polymerization, such as polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), poly(methyl methacrylate) (PMMA), polyacrylonitrile (PAN), and polyvinyl acetate (PVA). Typically, chain-growth polymerization can be summarized by the chemical equation Px* + M → Px+1* (+ L). In this equation, P is the polymer, x represents the degree of polymerization, * means the active center of chain-growth polymerization, M is the monomer which reacts with the active center, and L is a possible low-molar-mass by-product obtained during chain propagation. For most chain-growth polymerizations, no by-product L is formed; an exception is the polymerization of amino acid N-carboxyanhydrides (oxazolidine-2,5-diones), which releases carbon dioxide. This type of polymerization is described as "chain" or "chain-growth" because the reaction mechanism is a chemical chain reaction, with an initiation step in which an active center is formed, followed by a rapid sequence of chain propagation steps in which the polymer molecule grows by addition of one monomer molecule to the active center in each step. The word "chain" here does not refer to the fact that polymer molecules form long chains. Some polymers are formed instead by a second type of mechanism known as step-growth polymerization, without rapid chain propagation steps. Reaction steps All chain-growth polymerization reactions must include chain initiation and chain propagation. Chain transfer and chain termination steps also occur in many, but not all, chain-growth polymerizations. Chain initiation Chain initiation is the initial generation of a chain carrier, an intermediate such as a radical or an ion which can continue the reaction by chain propagation. Initiation steps are classified according to the way that energy is provided: thermal initiation, high-energy initiation, chemical initiation, etc. Thermal initiation uses molecular thermal motion to dissociate a molecule and form active centers. High-energy initiation refers to the generation of chain carriers by radiation. 
Chemical initiation is due to a chemical initiator. Taking radical polymerization as an example, chain initiation involves the dissociation of a radical initiator molecule (I), which is easily split by heat or light into two free radicals (2 R•). Each radical R• then adds a first monomer molecule (M) to start a chain that ends in a monomer unit activated by the presence of an unpaired electron (RM1•). I → 2 R• R• + M → RM1• Chain propagation IUPAC defines chain propagation as a reaction of an active center on the growing polymer molecule, which adds one monomer molecule to form a new polymer molecule one repeat unit longer. For radical polymerization, the active center remains an atom with an unpaired electron. The addition of the second monomer and a typical later addition step are RM1• + M → RM2• ............... RMn• + M → RMn+1• For some polymers, chains of over 1000 monomer units can be formed in milliseconds. Chain termination In a chain termination step, the active center disappears, resulting in the termination of chain propagation. This is different from chain transfer, in which the active center only shifts to another molecule but does not disappear. For radical polymerization, termination involves a reaction of two growing polymer chains to eliminate the unpaired electrons of both chains. There are two possibilities. 1. Recombination is the reaction of the unpaired electrons of two chains to form a covalent bond between them. The product is a single polymer molecule with the combined length of the two reactant chains: RMn• + RMm• → Pn+m 2. Disproportionation is the transfer of a hydrogen atom from one chain to the other, so that the two product chain molecules are unchanged in length but are no longer free radicals: RMn• + RMm• → Pn + Pm Initiation, propagation and termination steps also occur in chain reactions of smaller molecules. This is not true of the chain transfer and branching steps considered next. Chain transfer In some chain-growth polymerizations there is also a chain transfer step, in which the growing polymer chain RMn• takes an atom X from an inactive molecule XY, terminating the growth of the polymer chain: RMn• + XY → RMnX + Y•. The Y fragment is a new active center which adds more monomer M to form a new growing chain YMn•. This can happen in free radical polymerization for chains RMn•, in ionic polymerization for chains RMn+ or RMn−, or in coordination polymerization. In most cases chain transfer will generate a by-product and decrease the molar mass of the final polymer. Chain transfer to polymer: Branching Another possibility is chain transfer to a second polymer molecule, resulting in the formation of a product macromolecule with a branched structure. In this case the growing chain takes an atom X from a second polymer chain whose growth had been completed. The growth of the first polymer chain is completed by the transfer of atom X. However, the second molecule loses an atom X from the interior of its polymer chain, forming a reactive radical (or ion) which can add more monomer molecules. This results in the addition of a branch or side chain at that interior site. Classes of chain-growth polymerization The International Union of Pure and Applied Chemistry (IUPAC) recommends definitions for several classes of chain-growth polymerization. Radical polymerization Based on the IUPAC definition, radical polymerization is a chain polymerization in which the kinetic-chain carriers are radicals. 
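The interplay of initiation, propagation and termination can be illustrated with a toy Monte Carlo simulation. The sketch below (in Python) is not a kinetic model of any real system: the propagation probability and the chain count are arbitrary assumptions, chosen only to show how a broad distribution of chain lengths arises and how recombination versus disproportionation affects the length of the dead polymer.

```python
import random

# Toy Monte Carlo sketch of chain-growth polymerization (illustrative
# probabilities, not rate constants for any real monomer).
P_PROP = 0.999     # probability a growing chain propagates at each step
N_CHAINS = 10_000  # number of chains started by initiation (R* + M -> RM1*)

def grow_chain() -> int:
    """Grow one chain until a (randomly timed) termination event."""
    length = 1                       # RM1*, first monomer added
    while random.random() < P_PROP:  # RMn* + M -> RM(n+1)*
        length += 1
    return length

lengths = [grow_chain() for _ in range(N_CHAINS)]

# Disproportionation leaves both chains at their current lengths;
# recombination joins pairs of chains into single longer molecules.
recombined = [a + b for a, b in zip(lengths[0::2], lengths[1::2])]

print("mean dead-chain length, disproportionation:", sum(lengths) / len(lengths))
print("mean dead-chain length, recombination:     ", sum(recombined) / len(recombined))
```

With these assumed numbers the mean chain length is about 1000 units for disproportionation and about 2000 for recombination, reflecting the fact that recombination couples two growing chains into one molecule.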
Usually, the growing chain end bears an unpaired electron. Free radicals can be initiated by many methods, such as heating, redox reactions, ultraviolet radiation, high-energy irradiation, electrolysis, sonication, and plasma. Free radical polymerization is very important in polymer chemistry. It is one of the most developed methods in chain-growth polymerization. Currently, most polymers in our daily life are synthesized by free radical polymerization, including polyethylene, polystyrene, polyvinyl chloride, polymethyl methacrylate, polyacrylonitrile, polyvinyl acetate, styrene butadiene rubber, nitrile rubber, neoprene, etc. Ionic polymerization Ionic polymerization is a chain polymerization in which the kinetic-chain carriers are ions or ion pairs. It can be further divided into anionic polymerization and cationic polymerization. Ionic polymerization generates many polymers used in daily life, such as butyl rubber, polyisobutylene, polyphenylene, polyoxymethylene, polysiloxane, polyethylene oxide, high density polyethylene, isotactic polypropylene, butadiene rubber, etc. Living anionic polymerization was developed in the 1950s. The chain will remain active indefinitely unless chain transfer or termination is induced deliberately, which allows control of the molar mass and dispersity (or polydispersity index, PDI). Coordination polymerization Coordination polymerization is a chain polymerization that involves the preliminary coordination of a monomer molecule with a chain carrier. The monomer is first coordinated with the transition metal active center, and then the activated monomer is inserted into the transition metal-carbon bond for chain growth. In some cases, coordination polymerization is also called insertion polymerization or complexing polymerization. Advanced coordination polymerizations can effectively control the tacticity, molecular weight and PDI of the polymer. In addition, the racemic mixture of a chiral metallocene catalyst can be separated into its enantiomers; oligomerization with such an optically active catalyst produces an optically active branched olefin. Living polymerization Living polymerization was first described by Michael Szwarc in 1956. It is defined as a chain polymerization from which chain transfer and chain termination are absent. In the absence of chain transfer and chain termination, the polymerization stops once the monomer in the system is consumed, but the polymer chains remain active. If new monomer is added, the polymerization can proceed. Due to the low PDI and predictable molecular weight, living polymerization is at the forefront of polymer research. It can be further divided into living free radical polymerization, living ionic polymerization and living ring-opening metathesis polymerization, etc. Ring-opening polymerization Ring-opening polymerization is defined as a polymerization in which a cyclic monomer yields a monomeric unit which is acyclic or contains fewer cycles than the monomer. Generally, ring-opening polymerization is carried out under mild conditions, and fewer by-products are formed than in polycondensation reactions. A high molecular weight polymer is easily obtained. Common ring-opening polymerization products include polypropylene oxide, polytetrahydrofuran, polyoxymethylene, polycaprolactam and polysiloxane. 
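The degree of molar-mass control offered by living polymerization, described above, can be illustrated numerically. The following sketch assumes an idealized living system in which every initiator starts one chain and all chains grow uniformly; the monomer, the [M]0/[I]0 ratio, and the Poisson estimate of the PDI are illustrative assumptions, not data for any particular recipe.

```python
# Idealized living polymerization: chains all start at once and grow
# uniformly, so the degree of polymerization (DP) is set by the
# monomer-to-initiator ratio and the conversion. Illustrative numbers.
M_MONOMER = 104.15            # g/mol, e.g. styrene
MONOMER_TO_INITIATOR = 500.0  # assumed [M]0/[I]0 mole ratio

for conversion in (0.25, 0.50, 0.75, 1.00):
    dp = MONOMER_TO_INITIATOR * conversion  # number-average DP
    mn = dp * M_MONOMER                     # number-average molar mass
    pdi = 1 + 1 / dp                        # Poisson-distribution estimate
    print(f"conversion {conversion:4.0%}: DP = {dp:5.0f}, "
          f"Mn = {mn:8.0f} g/mol, PDI = {pdi:.3f}")
```

The linear growth of Mn with conversion and the PDI close to 1 are the experimental signatures usually cited for living systems.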
Reversible-deactivation polymerization Reversible-deactivation polymerization is defined as a chain polymerization propagated by chain carriers that are deactivated reversibly, bringing them into one or more active-dormant equilibria. An example of a reversible-deactivation polymerization is group-transfer polymerization. Comparison with step-growth polymerization Polymers were first classified according to polymerization method by Wallace Carothers in 1929, who introduced the terms addition polymer and condensation polymer to describe polymers made by addition reactions and condensation reactions respectively. However, this classification is inadequate to describe a polymer which can be made by either type of reaction, for example nylon 6, which can be made either by addition of a cyclic monomer or by condensation of a linear monomer. Flory revised the classification to chain-growth polymerization and step-growth polymerization, based on polymerization mechanisms rather than polymer structures. IUPAC now recommends that step-growth polymerization be called polycondensation (or polyaddition if no low-molar-mass by-product is formed when a monomer is added) and that chain-growth polymerization be called chain polymerization. Most polymerizations are either chain-growth or step-growth reactions. Chain-growth includes both initiation and propagation steps (at least), and the propagation of chain-growth polymers proceeds by the addition of monomers to a growing polymer with an active centre. In contrast, step-growth polymerization involves only one type of step, and macromolecules can grow by reaction steps between any two molecular species: two monomers, a monomer and a growing chain, or two growing chains. In step growth, the monomers initially form dimers, trimers, etc., which later react to form long chain polymers. In chain-growth polymerization, a growing macromolecule increases in size rapidly once its growth is initiated. When a macromolecule stops growing it generally will add no more monomers. In step-growth polymerization, on the other hand, a single polymer molecule can grow over the course of the whole reaction. In chain-growth polymerization, long macromolecules with high molecular weight are formed when only a small fraction of monomer has reacted. Monomers are consumed steadily over the course of the whole reaction, but the degree of polymerization can increase very quickly after chain initiation. In step-growth polymerization, however, the monomer is consumed very quickly to form dimers, trimers and oligomers, and the degree of polymerization increases steadily during the whole polymerization process. The type of polymerization of a given monomer usually depends on the functional groups present, and sometimes also on whether the monomer is linear or cyclic. Chain-growth polymers are usually addition polymers by Carothers' definition. They are typically formed by addition reactions across C=C double bonds in the monomer, giving a polymer backbone that contains only carbon-carbon bonds. Another possibility is ring-opening polymerization, as for the chain-growth polymerization of tetrahydrofuran or of ε-caprolactone (see Ring-opening polymerization above). Step-growth polymers are typically condensation polymers, in which an elimination product such as H2O is formed. Examples are polyamides, polycarbonates, polyesters, polyimides, polysiloxanes and polysulfones. If no elimination product is formed, then the polymer is an addition polymer, such as a polyurethane or a poly(phenylene oxide). 
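The contrast drawn in this section can be made concrete with a small numerical comparison. In the sketch below, the kinetic chain length assumed for the chain-growth case is an arbitrary illustrative value; the step-growth values follow the Carothers equation, Xn = 1/(1 - p).

```python
# Degree of polymerization (DP) of the polymer formed so far, as a
# function of monomer conversion p, for the two mechanisms. The
# chain-growth kinetic chain length (1000) is an arbitrary assumption.
CHAIN_GROWTH_DP = 1000

for p in (0.50, 0.90, 0.99, 0.999):
    dp_step = 1 / (1 - p)   # Carothers equation for step growth
    print(f"conversion p = {p:5.3f}: step-growth Xn = {dp_step:7.1f}, "
          f"chain-growth DP of already-formed polymer = {CHAIN_GROWTH_DP}")
```

The output shows the point made in the text: at 50% conversion a step-growth system contains essentially only dimers, while a chain-growth system already contains full-length polymer alongside unreacted monomer.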
Chain-growth polymerization with a low-molar-mass by-product during chain growth is described by IUPAC as "condensative chain polymerization". Compared to step-growth polymerization, living chain-growth polymerization shows low molar-mass dispersity (or PDI), predictable molar mass distribution and controllable conformation. Generally, polycondensation proceeds in a step-growth polymerization mode. Application Chain polymerization products are widely used in many aspects of life, including electronic devices, food packaging, catalyst carriers, medical materials, etc. At present, the polymers produced in the largest quantities worldwide, such as polyethylene (PE), polyvinyl chloride (PVC) and polypropylene (PP), can be obtained by chain polymerization. In addition, some carbon-nanotube-containing polymers are used in electronic devices. Controlled living chain-growth polymerization of conjugated monomers also enables the synthesis of well-defined advanced structures, including block copolymers. Their industrial applications extend to water purification, biomedical devices and sensors. References External links Internet Encyclopedia of Science Polymer chemistry Polymerization reactions
Chain-growth polymerization
[ "Chemistry", "Materials_science", "Engineering" ]
3,209
[ "Polymerization reactions", "Polymer chemistry", "Materials science" ]
1,786,306
https://en.wikipedia.org/wiki/Eb/N0
In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account. As the description implies, Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (not the channel symbol rate). If signal power is in watts and bit rate is in bits per second, Eb is in units of joules (watt-seconds). N0 is the noise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz or joules. These are the same units as Eb, so the ratio Eb/N0 is dimensionless; it is frequently expressed in decibels. Eb/N0 directly indicates the power efficiency of the system without regard to modulation type, error correction coding or signal bandwidth (including any use of spread spectrum). This also avoids any confusion as to which of several definitions of "bandwidth" to apply to the signal. But when the signal bandwidth is well defined, Eb/N0 is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context again refer to user data bits, irrespective of error correction information and modulation type. Eb/N0 must be used with care on interference-limited channels, since additive white noise (with constant noise density N0) is assumed, and interference is not always noise-like. In spread spectrum systems (e.g., CDMA), the interference is sufficiently noise-like that it can be represented as an interference density I0 and added to the thermal noise N0 to produce the overall ratio Eb/(N0 + I0). Relation to carrier-to-noise ratio Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection: C/N = (Eb/N0)(fb/B), where fb is the channel data rate (net bit rate) and B is the channel bandwidth. The equivalent expression in logarithmic form (dB) is CNR(dB) = 10 log10(Eb/N0) + 10 log10(fb/B). Caution: sometimes the noise power is denoted by N0/2 when negative frequencies and complex-valued equivalent baseband signals are considered rather than passband signals, and in that case there will be a 3 dB difference. Relation to Es/N0 Eb/N0 can be seen as a normalized measure of the energy per symbol to noise power spectral density (Es/N0): Eb/N0 = Es/(ρN0), where Es is the energy per symbol in joules and ρ is the nominal spectral efficiency in (bits/s)/Hz. Es/N0 is also commonly used in the analysis of digital modulation schemes. The two quotients are related to each other according to the following: Es/N0 = (Eb/N0) log2(M), where M is the number of alternative modulation symbols, e.g. M = 4 for QPSK and M = 8 for 8PSK. This is the energy per bit, not the energy per information bit. Es/N0 can further be expressed as: Es/N0 = (C/N)(B/fs), where C/N is the carrier-to-noise ratio or signal-to-noise ratio, B is the channel bandwidth in hertz, and fs is the symbol rate in baud or symbols per second. Shannon limit The Shannon–Hartley theorem says that the limit of reliable information rate (data rate exclusive of error-correcting codes) of a channel depends on bandwidth and signal-to-noise ratio according to: R < B log2(1 + S/N), where R is the information rate in bits per second excluding error-correcting codes, B is the bandwidth of the channel in hertz, S is the total signal power (equivalent to the carrier power C), and N is the total noise power in the bandwidth. 
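As a worked example of the relations above, consider a hypothetical link whose parameters (C/N, bandwidth, bit rate, QPSK modulation) are assumed purely for illustration:

```python
import math

# Hypothetical link: C/N = 10 dB in B = 1 MHz, carrying fb = 2 Mbit/s
# of user data with QPSK (M = 4, so fs = 1 Mbaud). All values assumed.
cn_db = 10.0
B, fb, fs, M = 1e6, 2e6, 1e6, 4

cn = 10 ** (cn_db / 10)
ebn0 = cn * B / fb          # Eb/N0 = (C/N) * (B / fb)
esn0 = cn * B / fs          # Es/N0 = (C/N) * (B / fs)

print(f"Eb/N0 = {10 * math.log10(ebn0):5.2f} dB")   # about 6.99 dB
print(f"Es/N0 = {10 * math.log10(esn0):5.2f} dB")   # 10.00 dB
print(f"Es/N0 over Eb/N0 = {esn0 / ebn0:.1f} = log2(M) = {math.log2(M):.0f}")
```

The final line checks the consistency of the two relations: with 2 bits per QPSK symbol, Es/N0 is exactly twice Eb/N0.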
The Shannon–Hartley equation can be used to establish a bound on Eb/N0 for any system that achieves reliable communication, by considering a gross bit rate R equal to the net bit rate and therefore an average energy per bit of Eb = S/R, with noise spectral density of N0 = N/B. For this calculation, it is conventional to define a normalized rate Rl = R/(2B), a bandwidth utilization parameter in bits per second per half hertz, or bits per dimension (a signal of bandwidth B can be encoded with 2B dimensions, according to the Nyquist–Shannon sampling theorem). Making appropriate substitutions, the Shannon limit is: 2Rl < log2(1 + 2Rl(Eb/N0)), which can be solved to get the Shannon-limit bound on Eb/N0: Eb/N0 > (2^(2Rl) - 1)/(2Rl). When the data rate is small compared to the bandwidth, so that Rl is near zero, the bound, sometimes called the ultimate Shannon limit, is: Eb/N0 > ln 2 ≈ 0.693, which corresponds to -1.59 dB. This often-quoted limit of -1.59 dB applies only to the theoretical case of infinite bandwidth. The Shannon limit for finite-bandwidth signals is always higher. Cutoff rate For any given system of coding and decoding, there exists what is known as a cutoff rate R0, typically corresponding to an Eb/N0 about 2 dB above the Shannon capacity limit. The cutoff rate used to be thought of as the limit on practical error correction codes without an unbounded increase in processing complexity, but it has been rendered largely obsolete by the more recent discovery of turbo codes, low-density parity-check (LDPC) codes and polar codes. References External links Eb/N0 Explained. An introductory article on Eb/N0 Noise (electronics) Signal processing Engineering ratios
Eb/N0
[ "Mathematics", "Technology", "Engineering" ]
1,066
[ "Telecommunications engineering", "Computer engineering", "Metrics", "Signal processing", "Engineering ratios", "Quantity" ]
1,786,418
https://en.wikipedia.org/wiki/Technetium-99m%20generator
A technetium-99m generator, or colloquially a technetium cow or moly cow, is a device used to extract the metastable isotope 99mTc of technetium from a decaying sample of molybdenum-99. 99Mo has a half-life of 66 hours and can be easily transported over long distances to hospitals, where its decay product technetium-99m (with a half-life of only 6 hours, inconvenient for transport) is extracted and used for a variety of nuclear medicine diagnostic procedures, where its short half-life is very useful. Parent isotope source 99Mo can be obtained by the neutron activation (n,γ reaction) of 98Mo in a high-neutron-flux reactor. However, the most frequently used method is through fission of uranium-235 in a nuclear reactor. While most reactors currently engaged in 99Mo production use highly enriched uranium-235 targets, proliferation concerns have prompted some producers to transition to low-enriched uranium targets. The target is irradiated with neutrons to form 99Mo as a fission product (with 6.1% yield). Molybdenum-99 is then separated from unreacted uranium and other fission products in a hot cell. Generator invention and history 99mTc remained a scientific curiosity until the 1950s, when Powell Richards realized the potential of technetium-99m as a medical radiotracer and promoted its use among the medical community. While Richards was in charge of the radioisotope production at the Hot Lab Division of the Brookhaven National Laboratory, Walter Tucker and Margaret Greene were working on how to improve the separation purity of the short-lived eluted daughter product iodine-132 from tellurium-132, its 3.2-day parent, produced in the Brookhaven Graphite Research Reactor. They detected a trace contaminant which proved to be 99mTc, which originated from 99Mo and followed tellurium through the separation chemistry for the other fission products. Drawing on the similarities between the chemistry of the tellurium-iodine and the molybdenum-technetium parent-daughter pairs, Tucker and Greene developed the first technetium-99m generator in 1958. It was not until 1960 that Richards became the first to suggest the idea of using technetium as a medical tracer. Generator function and mechanism Technetium-99m's short half-life of 6 hours makes long-term storage impossible. Transport of 99mTc from the limited number of production sites to radiopharmacies (for manufacture of specific radiopharmaceuticals) and other end users would be complicated by the need to significantly overproduce in order to have sufficient remaining activity after long journeys. Instead, the longer-lived parent nuclide 99Mo can be supplied to radiopharmacies in a generator, after its extraction from the neutron-irradiated uranium targets and its purification in dedicated processing facilities. Radiopharmacies may be hospital-based or stand-alone facilities, and in many cases will subsequently distribute 99mTc radiopharmaceuticals to regional nuclear medicine departments. Developments in the direct production of 99mTc, without first producing the parent 99Mo, would preclude the use of generators; however, this is uncommon and relies on suitable production facilities close to radiopharmacies. Production Generators provide radiation shielding for transport and to minimize the extraction work done at the medical facility. A typical dose rate at 1 metre from a 99mTc generator is 20–50 μSv/h during transport. These generators' output declines with time and must be replaced weekly, since the half-life of 99Mo is still only 66 hours. 
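A short calculation from the half-lives quoted above illustrates both the weekly replacement schedule and the elution timing discussed in the next paragraph. The sketch below uses only the two half-lives; the precise daughter half-life of 6.01 h is a commonly quoted literature figure and should be treated as an assumption.

```python
import math

T_HALF_MO = 66.0   # h, parent 99Mo (from the text above)
T_HALF_TC = 6.01   # h, daughter 99mTc (assumed literature value)
LAM_MO = math.log(2) / T_HALF_MO
LAM_TC = math.log(2) / T_HALF_TC

# Decline of 99Mo activity explains the weekly replacement schedule.
for day in (1, 3, 7):
    remaining = math.exp(-LAM_MO * 24 * day)
    print(f"day {day}: {remaining:5.1%} of the initial 99Mo activity remains")

# Regrowth of 99mTc after an elution (Bateman build-up factor),
# relative to the transient-equilibrium activity.
for hours in (6, 12, 24):
    regrown = 1 - math.exp(-(LAM_TC - LAM_MO) * hours)
    print(f"{hours:2d} h after elution: {regrown:5.1%} of equilibrium 99mTc")
```

After one week only about 17% of the original 99Mo activity remains, while roughly half of the equilibrium 99mTc activity has regrown 6 hours after an elution, consistent with the rule of thumb stated next.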
Since the half-life of the parent nuclide (99Mo) is much longer than that of the daughter nuclide (99mTc), 50% of the equilibrium activity is reached within one daughter half-life and 75% within two daughter half-lives. Hence, removing the daughter nuclide (the elution process) from the generator ("milking" the cow) is reasonably done as often as every 6 hours in a 99Mo/99mTc generator. Separation Most commercial 99Mo/99mTc generators use column chromatography, in which 99Mo in the form of molybdate, MoO4^2-, is adsorbed onto acid alumina (Al2O3). When the 99Mo decays, it forms pertechnetate, TcO4^-, which, because of its single charge, is less tightly bound to the alumina. Pouring normal saline solution through the column of immobilized 99Mo elutes the soluble 99mTc, resulting in a saline solution containing the 99mTc as pertechnetate, with sodium as the counterion. The solution of sodium pertechnetate may then be added in an appropriate concentration to the pharmaceutical kit to be used, or sodium pertechnetate can be used directly without pharmaceutical tagging for specific procedures requiring only the 99mTcO4^- as the primary radiopharmaceutical. A large percentage of the 99mTc generated by a 99Mo/99mTc generator is produced in the first 3 parent half-lives, or approximately one week. Hence, clinical nuclear medicine units purchase at least one such generator per week or order several in a staggered fashion. Isomeric ratio When the generator is left unused, 99Mo decays to 99mTc, which in turn decays to 99Tc. The half-life of 99Tc is far longer than that of its metastable isomer, so the ratio of 99Tc to 99mTc increases over time. Both isomers are carried off during elution and react equally well with the ligand, but 99Tc is an impurity that is useless for imaging (and cannot be separated). The generator is washed free of 99Tc and 99mTc at the end of the manufacturing process, but the ratio of 99Tc to 99mTc then builds up again during transport or any other period when the generator is left unused. The first few elutions will have reduced effectiveness because of this high ratio. References Radiopharmaceuticals Radioactivity Technetium-99m Medical physics
Technetium-99m generator
[ "Physics", "Chemistry" ]
1,278
[ "Applied and interdisciplinary physics", "Medicinal radiochemistry", "Radiopharmaceuticals", "Medical physics", "Nuclear physics", "Chemicals in medicine", "Radioactivity" ]
1,786,719
https://en.wikipedia.org/wiki/Step-growth%20polymerization
In polymer chemistry, step-growth polymerization refers to a type of polymerization mechanism in which bi-functional or multifunctional monomers react to form first dimers, then trimers, longer oligomers and eventually long chain polymers. Many naturally occurring and some synthetic polymers are produced by step-growth polymerization, e.g. polyesters, polyamides, polyurethanes, etc. Due to the nature of the polymerization mechanism, a high extent of reaction is required to achieve high molecular weight. The easiest way to visualize the mechanism of a step-growth polymerization is a group of people reaching out to hold hands to form a human chain: each person has two hands (= reactive sites). A monomer may also have more than two reactive sites, in which case branched polymers are produced. IUPAC has deprecated the term step-growth polymerization, and recommends use of the terms polyaddition (when the propagation steps are addition reactions and no molecules are evolved during these steps) and polycondensation (when the propagation steps are condensation reactions and molecules are evolved during these steps). Historical aspects Most of the natural polymers employed at an early stage of human society are of the condensation type. The synthesis of the first truly synthetic polymeric material, bakelite, was announced by Leo Baekeland in 1907, through a typical step-growth polymerization of phenol and formaldehyde. The pioneer of synthetic polymer science, Wallace Carothers, developed a new means of making polyesters through step-growth polymerization in the 1930s as a research group leader at DuPont. It was the first reaction designed and carried out with the specific purpose of creating high molecular weight polymer molecules, as well as the first polymerization reaction whose results had been predicted by scientific theory. Carothers developed a series of mathematical equations to describe the behavior of step-growth polymerization systems, which are still known as the Carothers equations today. Carothers also collaborated with Paul Flory, a physical chemist, to develop theories that describe more mathematical aspects of step-growth polymerization, including kinetics, stoichiometry, and molecular weight distribution. Carothers is also well known for his invention of nylon. Condensation polymerization "Step-growth polymerization" and condensation polymerization are two different concepts, not always identical. In fact, polyurethane polymerizes by addition polymerization (because its polymerization produces no small molecules), but its reaction mechanism corresponds to a step-growth polymerization. The distinction between "addition polymerization" and "condensation polymerization" was introduced by Wallace Carothers in 1929, and refers to the type of products, respectively: a polymer only (addition); a polymer and a molecule with a low molecular weight (condensation). The distinction between "step-growth polymerization" and "chain-growth polymerization" was introduced by Paul Flory in 1953, and refers to the reaction mechanisms, respectively: by functional groups (step-growth polymerization); by free radical or ion (chain-growth polymerization). Differences from chain-growth polymerization This technique is usually compared with chain-growth polymerization to show its characteristics. 
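Two quantitative hallmarks of this mechanism, derived in detail in the sections below, can be previewed numerically: monomer disappears early, while long chains appear only at a very high extent of reaction p. The sketch uses the Carothers equation Xn = 1/(1 - p) and the statistical result that a fraction (1 - p)^2 of the initial monomer molecules remain as free monomer (both of a monomer's functional groups must be untouched).

```python
# Step-growth hallmarks as a function of the extent of reaction p:
# the average chain length Xn = 1/(1 - p) stays small until p is very
# high, while the fraction of monomer still unreacted, (1 - p)**2,
# falls off quickly.
for p in (0.50, 0.90, 0.99, 0.999):
    xn = 1 / (1 - p)
    free_monomer = (1 - p) ** 2
    print(f"p = {p:5.3f}: Xn = {xn:7.1f}, "
          f"fraction of initial monomer still free = {free_monomer:.6f}")
```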
Classes of step-growth polymers Classes of step-growth polymers are: Polyester has a high glass transition temperature Tg and high melting point Tm, good mechanical properties to about 175 °C, and good resistance to solvents and chemicals. It can exist as fibers and films. The former are used in garments, felts, tire cords, etc. The latter appear in magnetic recording tape and high grade films. Polyamide (nylon) has a good balance of properties: high strength, good elasticity and abrasion resistance, good toughness, and favorable solvent resistance. The applications of polyamide include rope, belting, fiber cloths, thread, substitutes for metal in bearings, and jackets on electrical wire. Polyurethane can exist as elastomers with good abrasion resistance, hardness, good resistance to grease and good elasticity, as fibers with excellent rebound, as coatings with good resistance to solvent attack and abrasion, and as foams with good strength, good rebound and high impact strength. Polyurea shows a high Tg and fair resistance to greases, oils, and solvents. It can be used in truck bed liners, bridge coatings, caulk and decorative designs. Polysiloxanes are siloxane-based polymers available in a wide range of physical states, from liquids to greases, waxes, resins, and rubbers. Owing to their excellent thermal stability (conferred by the silicon-oxygen backbone), uses of these materials include antifoam and release agents, gaskets, seals, cable and wire insulation, conduits for hot liquids and gases, etc. Polycarbonates are transparent, self-extinguishing materials. They possess properties like crystalline thermoplasticity, high impact strength, and good thermal and oxidative stability. They can be used in machinery, the automotive industry, and medical applications. For example, the cockpit canopy of the F-22 Raptor is made of high optical quality polycarbonate. Polysulfides have outstanding oil and solvent resistance, good gas impermeability, and good resistance to aging and ozone. However, they have an unpleasant odor and show low tensile strength as well as poor heat resistance. They can be used in gasoline hoses, gaskets and places that require solvent resistance and gas resistance. Polyether shows good thermoplastic behavior, water solubility, generally good mechanical properties, and moderate strength and stiffness. It is applied in sizing for cotton and synthetic fibers, stabilizers for adhesives, binders, and film formers in pharmaceuticals. Phenol formaldehyde resins (bakelite) have good heat resistance and dimensional stability as well as good resistance to most solvents. They also show good dielectric properties. These materials are typically used in molding applications, and in electrical, radio, television and automotive parts where their good dielectric properties are of use. Some other uses include impregnating paper, varnishes, and decorative laminates for wall coverings. Polytriazole polymers are produced from monomers which bear both an alkyne and an azide functional group. The monomer units are linked to each other by a 1,2,3-triazole group, which is produced by the 1,3-dipolar cycloaddition, also called the azide-alkyne Huisgen cycloaddition. These polymers can take on the form of a strong resin or a gel. With oligopeptide monomers containing a terminal alkyne and a terminal azide, the resulting clicked peptide polymer will be biodegradable due to the action of endopeptidases on the oligopeptide unit. Branched polymers A monomer with functionality of 3 or more will introduce branching in a polymer and will ultimately form a cross-linked macrostructure or network even at low fractional conversion. 
The point at which a tree-like topology transitions to a network is known as the gel point, because it is signalled by an abrupt change in viscosity. One of the earliest so-called thermosets is known as bakelite. It is not always water that is released in step-growth polymerization: in acyclic diene metathesis (ADMET), dienes polymerize with loss of ethene. Kinetics The kinetics and rates of step-growth polymerization can be described using a polyesterification mechanism. The simple esterification is an acid-catalyzed process in which protonation of the acid is followed by interaction with the alcohol to produce an ester and water. However, a few assumptions are needed with this kinetic model. The first assumption is that water (or any other condensation product) is efficiently removed. Secondly, the functional group reactivities are independent of chain length. Finally, it is assumed that each step involves only one alcohol and one acid. The general rate law for polyesterification is -d[COOH]/dt = k[COOH][OH][catalyst], and the overall reaction order n is the sum of the orders in the individual species. Self-catalyzed polyesterification If no acid catalyst is added, the reaction will still proceed, because the acid can act as its own catalyst. The rate of condensation at any time t can then be derived from the rate of disappearance of -COOH groups: -d[COOH]/dt = k[COOH]^2[OH]. The second-order term in [COOH] arises from its use as a catalyst, and k is the rate constant. For a system with equivalent quantities of acid and glycol, the functional group concentration can be written simply as c = [COOH] = [OH], so that -dc/dt = kc^3. After integration, which gives 1/c^2 - 1/c0^2 = 2kt, and substitution from the Carothers equation, c = c0(1 - p), the final form is the following: 1/(1 - p)^2 = 1 + 2c0^2kt. For a self-catalyzed system, the number-average degree of polymerization Xn = 1/(1 - p) therefore grows proportionally with t^(1/2). External catalyzed polyesterification The uncatalyzed reaction is rather slow, and a high Xn is not readily attained. In the presence of a catalyst, there is an acceleration of the rate, and the kinetic expression is altered to -dc/dt = k'c^2, which is kinetically first order in each functional group (the constant catalyst concentration being absorbed into k'). Integration gives 1/c - 1/c0 = k't, and finally Xn = 1/(1 - p) = 1 + c0k't. For an externally catalyzed system, the number-average degree of polymerization grows proportionally with t. Molecular weight distribution in linear polymerization The product of a polymerization is a mixture of polymer molecules of different molecular weights. For theoretical and practical reasons it is of interest to discuss the distribution of molecular weights in a polymerization. The molecular weight distribution (MWD) was derived by Flory by a statistical approach based on the concept of equal reactivity of functional groups. Probability Step-growth polymerization is a random process, so we can use statistics to calculate the probability of finding a chain with x structural units (an "x-mer") as a function of time or conversion: x AA + x BB → AA-(BB-AA)x-1-BB and x AB → A-(B-A)x-1-B. The probability that an 'A' functional group has reacted is p, and the probability of finding an 'A' unreacted is 1 - p. Combining the two leads to Px = p^(x-1)(1 - p), where Px is the probability of finding a chain that is x units long and has an unreacted 'A'. As x increases, the probability decreases. Number fraction distribution The number fraction distribution is the fraction of x-mers in any system and equals the probability of finding them in solution: nx = Nx/N = p^(x-1)(1 - p), where N is the total number of polymer molecules present in the reaction. Weight fraction distribution The weight fraction distribution is the fraction of x-mers in a system expressed as the probability of finding them in terms of mass fraction. 
In terms of the quantities above, the weight fraction of x-mers is wx = xNx/N0 = x p^(x-1)(1 - p)^2, where M0 is the molar mass of the repeat unit, N0 is the initial number of monomer molecules, and Nx is the number of x-mers; since the mass of an x-mer is xM0, the factors of M0 cancel. Substituting from the Carothers equation, we can now obtain the number- and weight-average degrees of polymerization: Xn = 1/(1 - p) and Xw = (1 + p)/(1 - p). PDI The polydispersity index (PDI) is a measure of the distribution of molecular mass in a given polymer sample, defined as PDI = Mw/Mn. For step-growth polymerization, the Carothers equation can be used to substitute and rearrange this formula into the following: PDI = Xw/Xn = 1 + p. Therefore, in step growth, when p = 1, the PDI = 2. Molecular weight control in linear polymerization Need for stoichiometric control There are two important aspects with regard to the control of molecular weight in polymerization. In the synthesis of polymers, one is usually interested in obtaining a product of very specific molecular weight, since the properties of the polymer will usually be highly dependent on molecular weight. Molecular weights higher or lower than the desired weight are equally undesirable. Since the degree of polymerization is a function of reaction time, the desired molecular weight can be obtained by quenching the reaction at the appropriate time. However, the polymer obtained in this manner is unstable in that its molecular weight continues to change, because the ends of the polymer molecules contain functional groups that can react further with each other. This situation is avoided by adjusting the concentrations of the two monomers so that they are slightly nonstoichiometric. One of the reactants is present in slight excess. The polymerization then proceeds to a point at which one reactant is completely used up and all the chain ends possess the same functional group, namely the group that is in excess. Further polymerization is not possible, and the polymer is stable to subsequent molecular weight changes. Another method of achieving the desired molecular weight is by addition of a small amount of monofunctional monomer, a monomer with only one functional group. The monofunctional monomer, often referred to as a chain stopper, controls and limits the polymerization of bifunctional monomers, because the growing polymer yields chain ends devoid of functional groups and therefore incapable of further reaction. Quantitative aspects To properly control the polymer molecular weight, the stoichiometric imbalance of the bifunctional monomer or the monofunctional monomer must be precisely adjusted. If the nonstoichiometric imbalance is too large, the polymer molecular weight will be too low. It is important to understand the quantitative effect of the stoichiometric imbalance of reactants on the molecular weight. Also, this is necessary in order to know the quantitative effect of any reactive impurities that may be present in the reaction mixture, either initially or formed by undesirable side reactions. Impurities with A or B functional groups may drastically lower the polymer molecular weight unless their presence is quantitatively taken into account. More usefully, a precisely controlled stoichiometric imbalance of the reactants in the mixture can provide the desired result. For example, an excess of diamine over an acid chloride would eventually produce a polyamide with two amine end groups incapable of further growth when the acid chloride was totally consumed. This can be expressed in an extension of the Carothers equation as Xn = (1 + r)/(1 + r - 2rp), where r = NAA/NBB ≤ 1 is the ratio of the numbers of molecules of the two reactants, B-B being the monomer in excess, and p is the extent of reaction of the A groups. 
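The distribution formulas and the extended Carothers equation above can be checked numerically. The sketch below simply evaluates the expressions just derived; the chosen values of p and r are arbitrary examples.

```python
# Evaluate the Flory (most probable) distribution, the averages and
# the extended Carothers equation derived above, for example values.
def flory(p: float, x: int) -> tuple[float, float]:
    """Number and weight fractions of x-mers at extent of reaction p."""
    nx = p ** (x - 1) * (1 - p)
    wx = x * p ** (x - 1) * (1 - p) ** 2
    return nx, wx

p = 0.99
xn = 1 / (1 - p)              # Carothers equation
xw = (1 + p) / (1 - p)        # weight-average degree of polymerization
print(f"p = {p}: Xn = {xn:.0f}, Xw = {xw:.0f}, PDI = {xw / xn:.2f}")

for x in (1, 10, 100, 500):
    nx, wx = flory(p, x)
    print(f"x = {x:3d}: number fraction = {nx:.5f}, weight fraction = {wx:.5f}")

# A stoichiometric imbalance r = N_AA/N_BB caps Xn even at p = 1:
for r in (0.999, 0.99, 0.95, 0.90):
    xn_limit = (1 + r) / (1 + r - 2 * r)   # extended Carothers, p = 1
    print(f"r = {r:5.3f}: limiting Xn = {xn_limit:7.1f}")
```

At p = 0.99 the PDI evaluates to 1.99, approaching the limiting value of 2, and even a 1% excess of one monomer (r = 0.99) caps the attainable Xn at about 199.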
This expression can also be adapted for a monofunctional additive, in which case r = NA/(NB + 2NB′), where NB′ is the number of monofunctional B molecules added. The coefficient of 2 in front of NB′ is required, since one B molecule has the same quantitative effect as one excess B-B molecule. Multi-chain polymerization A monomer with functionality 3 has 3 functional groups which participate in the polymerization. This will introduce branching in a polymer and may ultimately form a cross-linked macrostructure. The point at which this three-dimensional network is formed is known as the gel point, signaled by an abrupt change in viscosity. A more general functionality factor fav is defined for multi-chain polymerization as the average number of functional groups present per monomer unit. For a system containing N0 molecules initially and equivalent numbers of the two functional groups A and B, the total number of functional groups is N0fav, and the modified Carothers equation is Xn = 2/(2 - p·fav), where p, the fraction of functional groups that has reacted, equals 2(N0 - N)/(N0fav), with N the number of molecules remaining. Advances in step-growth polymers The driving force in designing new polymers is the prospect of replacing other materials of construction, especially metals, by using lightweight and heat-resistant polymers. The advantages of lightweight polymers include high strength and solvent and chemical resistance, contributing to a variety of potential uses, such as electrical and engine parts on automotive and aircraft components, coatings on cookware, and coatings and circuit boards for electronic and microelectronic devices. Polymer chains based on aromatic rings are desirable due to high bond strengths and rigid polymer chains. High molecular weight and crosslinking are desirable for the same reason. Strong dipole-dipole interactions, hydrogen bonding and crystallinity also improve heat resistance. To obtain the desired mechanical strength, sufficiently high molecular weights are necessary; however, decreased solubility is a problem. One approach to solving this problem is to introduce some flexibilizing linkages, such as isopropylidene, C=O, and SO2, into the rigid polymer chain by using an appropriate monomer or comonomer. Another approach involves the synthesis of reactive telechelic oligomers containing functional end groups capable of reacting with each other; polymerization of the oligomer gives higher molecular weight, a process referred to as chain extension. Aromatic polyether The oxidative coupling polymerization of many 2,6-disubstituted phenols, using a catalytic complex of a cuprous salt and an amine, forms aromatic polyethers, commercially referred to as poly(p-phenylene oxide) or PPO. Neat PPO has few commercial uses due to its high melt viscosity. Its available products are blends of PPO with high-impact polystyrene (HIPS). Polyethersulfone Polyethersulfone (PES), also referred to as polysulfone, is synthesized by nucleophilic aromatic substitution between aromatic dihalides and bisphenolate salts. Polyethersulfones are partially crystalline and highly resistant to a wide range of aqueous and organic environments. They are rated for continuous service at temperatures of 240–280 °C. These polymers are finding applications in areas like automotive and aerospace components and electrical-electronic cable insulation. Aromatic polysulfides Poly(p-phenylene sulfide) (PPS) is synthesized by the reaction of sodium sulfide with p-dichlorobenzene in a polar solvent such as 1-methyl-2-pyrrolidinone (NMP). It is inherently flame-resistant and stable toward organic and aqueous conditions; however, it is somewhat susceptible to oxidants. 
Applications of PPS include automotive parts, microwave oven components, coatings for cookware when blended with fluorocarbon polymers, and protective coatings for valves, pipes, electromotive cells, etc. Aromatic polyimide Aromatic polyimides are synthesized by the reaction of dianhydrides with diamines, for example pyromellitic dianhydride with p-phenylenediamine. The reaction can also be accomplished using diisocyanates in place of diamines. Solubility considerations sometimes suggest the use of the half acid-half ester of the dianhydride, instead of the dianhydride itself. Polymerization is accomplished by a two-stage process due to the insolubility of polyimides. The first stage forms a soluble and fusible high-molecular-weight poly(amic acid) in a polar aprotic solvent such as NMP or N,N-dimethylacetamide. The poly(amic acid) can then be processed into the desired physical form of the final polymer product (e.g., film, fiber, laminate, coating), which is insoluble and infusible. Telechelic oligomer approach The telechelic oligomer approach applies the usual polymerization methods, except that a monofunctional reactant is included to stop the reaction at the oligomer stage, generally in the 50–3000 molecular weight range. The monofunctional reactant not only limits polymerization but end-caps the oligomer with functional groups capable of subsequent reaction to achieve curing of the oligomer. Functional groups like alkyne, norbornene, maleimide, nitrile, and cyanate have been used for this purpose. Maleimide and norbornene end-capped oligomers can be cured by heating. Alkyne, nitrile, and cyanate end-capped oligomers can undergo cyclotrimerization yielding aromatic structures. See also Conducting polymer Fire-safe polymers Liquid crystal polymer Random graph theory of gelation Thermosetting plastic References External links Crosslinking Polymerization reactions
Step-growth polymerization
[ "Chemistry", "Materials_science" ]
4,121
[ "Polymerization reactions", "Polymer chemistry" ]
1,787,013
https://en.wikipedia.org/wiki/Photoacoustic%20spectroscopy
Photoacoustic spectroscopy is the measurement of the effect of absorbed electromagnetic energy (particularly of light) on matter by means of acoustic detection. The discovery of the photoacoustic effect dates to 1880, when Alexander Graham Bell showed that thin discs emitted sound when exposed to a beam of sunlight that was rapidly interrupted with a rotating slotted disk. The absorbed energy from the light causes local heating, generating a thermal expansion which creates a pressure wave or sound. Later Bell showed that materials exposed to the non-visible portions of the solar spectrum (i.e., the infrared and the ultraviolet) can also produce sounds. A photoacoustic spectrum of a sample can be recorded by measuring the sound at different wavelengths of the light. This spectrum can be used to identify the absorbing components of the sample. The photoacoustic effect can be used to study solids, liquids and gases. Uses and techniques Photoacoustic spectroscopy has become a powerful technique to study concentrations of gases at the part per billion or even part per trillion levels. Modern photoacoustic detectors still rely on the same principles as Bell's apparatus; however, to increase the sensitivity, several modifications have been made. Instead of sunlight, intense lasers are used to illuminate the sample, since the intensity of the generated sound is proportional to the light intensity; this technique is referred to as laser photoacoustic spectroscopy (LPAS). The ear has been replaced by sensitive microphones. The microphone signals are further amplified and detected using lock-in amplifiers. By enclosing the gaseous sample in a cylindrical chamber, the sound signal is amplified by tuning the modulation frequency to an acoustic resonance of the sample cell. By using cantilever-enhanced photoacoustic spectroscopy, the sensitivity can be improved still further, enabling reliable monitoring of gases at the ppb level. Example The following example illustrates the potential of the photoacoustic technique: in the early 1970s, Patel and co-workers measured the temporal variation of the concentration of nitric oxide in the stratosphere at an altitude of 28 km with a balloon-borne photoacoustic detector. These measurements provided crucial data bearing on the problem of ozone depletion by man-made nitric oxide emission. Some of the early work relied on the development of the Rosencwaig-Gersho (RG) theory. Applications One important capability of FTIR photoacoustic spectroscopy is the ability to evaluate samples in situ by infrared spectroscopy, which can be used to detect and quantify chemical functional groups and thus chemical substances. This is particularly useful for biological samples that can be evaluated without crushing to powder or subjecting to chemical treatments. Samples such as seashells and bone have been investigated. Photoacoustic spectroscopy has helped evaluate molecular interactions in bone affected by osteogenesis imperfecta. While most academic research has concentrated on high resolution instruments, some work has gone in the opposite direction. In the last twenty years, very low cost instruments for applications such as leakage detection and for the control of carbon dioxide concentration have been developed and commercialized. Typically, low cost thermal sources are used, which are modulated electronically. 
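The modulation-plus-lock-in scheme described in this section can be sketched in a few lines of Python. The signal amplitude, modulation frequency and noise level below are arbitrary illustrative values; the point is only that averaging against in-phase and quadrature references recovers a modulated photoacoustic signal that is invisible in the raw microphone trace.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 50_000.0                    # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)    # 1 s of simulated microphone data
f_mod = 1_300.0                  # light modulation frequency, Hz (assumed)

true_amp = 1e-2                                      # acoustic signal amplitude
microphone = (true_amp * np.sin(2 * np.pi * f_mod * t)
              + 0.1 * rng.standard_normal(t.size))   # noise 10x larger

# Lock-in detection: correlate with in-phase and quadrature references.
x = np.mean(microphone * np.sin(2 * np.pi * f_mod * t))
y = np.mean(microphone * np.cos(2 * np.pi * f_mod * t))
recovered = 2 * np.hypot(x, y)

print(f"true amplitude {true_amp:.3e}, recovered {recovered:.3e}")
```

The recovered amplitude agrees with the true value to within a few percent even though the raw trace is dominated by noise, which is why lock-in amplifiers are standard in photoacoustic detection.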
Diffusion through semi-permeable disks instead of valves for gas exchange, low-cost microphones, and proprietary signal processing with digital signal processors have brought down the costs of these systems. The future of low-cost applications of photoacoustic spectroscopy may be the realization of fully integrated micromachined photoacoustic instruments. The photoacoustic approach has been utilized to quantitatively measure macromolecules, such as proteins. The photoacoustic immunoassay labels and detects target proteins using nanoparticles that can generate strong acoustic signals. Photoacoustics-based protein analysis has also been applied to point-of-care testing. Photoacoustic spectroscopy also has many military applications. One such application is the detection of toxic chemical agents. The sensitivity of photoacoustic spectroscopy makes it an ideal analysis technique for detecting trace chemicals associated with chemical attacks. LPAS sensors may be applied in industry, security (nerve agent and explosives detection), and medicine (breath analysis). References Further reading Sigrist, M. W. (1994), "Air Monitoring by Laser Photoacoustic Spectroscopy," in: Sigrist, M. W. (editor), "Air Monitoring by Spectroscopic Techniques," Wiley, New York, pp. 163–238. External links General introduction to photoacoustic spectroscopy: Photoacoustic spectroscopy in trace gas monitoring Photoacoustic spectrometer for trace gas detection based on a Helmholtz Resonant Cell (www.aerovia.fr) Photoacoustic multi-gas monitor for trace gas detection based on cantilever enhanced photoacoustic spectroscopy (www.gasera.fi) Spectroscopy
Photoacoustic spectroscopy
[ "Physics", "Chemistry" ]
995
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
1,787,246
https://en.wikipedia.org/wiki/Ion%20chromatography
Ion chromatography (or ion-exchange chromatography) is a form of chromatography that separates ions and ionizable polar molecules based on their affinity to the ion exchanger. It works on almost any kind of charged molecule, including small inorganic anions, large proteins, small nucleotides, and amino acids. However, ion chromatography of a protein must be done at conditions at least one pH unit away from its isoelectric point. The two types of ion chromatography are anion-exchange and cation-exchange. Cation-exchange chromatography is used when the molecule of interest is positively charged. The molecule is positively charged because the pH for chromatography is less than the pI (also known as pH(I)). In this type of chromatography, the stationary phase is negatively charged, and positively charged molecules are loaded to be attracted to it. Anion-exchange chromatography is when the stationary phase is positively charged and negatively charged molecules (meaning that the pH for chromatography is greater than the pI) are loaded to be attracted to it. It is often used in protein purification, water analysis, and quality control. Water-soluble, charged molecules such as proteins, amino acids, and peptides bind to oppositely charged moieties by forming ionic bonds with the insoluble stationary phase. The equilibrated stationary phase consists of an ionizable functional group where the targeted molecules of a mixture to be separated and quantified can bind while passing through the column: a cationic stationary phase is used to separate anions, and an anionic stationary phase is used to separate cations. Cation exchange chromatography is used when the desired molecules to separate are cations, and anion exchange chromatography is used to separate anions. The bound molecules can then be eluted and collected, using an eluent which contains anions and cations, by running a higher concentration of ions through the column or by changing the pH of the column. One of the primary advantages of the use of ion chromatography is that only one interaction is involved in the separation, as opposed to other separation techniques; therefore, ion chromatography may have higher matrix tolerance. Another advantage of ion exchange is the predictability of elution patterns (based on the presence of the ionizable group). For example, when cation exchange chromatography is used, certain cations will elute out first and others later. A local charge balance is always maintained. However, there are also disadvantages involved when performing ion-exchange chromatography, such as the constant evolution of the technique, which leads to inconsistency from column to column. A major limitation of this purification technique is that it is limited to ionizable groups. History Ion chromatography has advanced through the accumulation of knowledge over the course of many years. Starting in 1947, Spedding and Powell used displacement ion-exchange chromatography for the separation of the rare earths. Additionally, they showed the ion-exchange separation of the 14N and 15N isotopes in ammonia. At the start of the 1950s, Kraus and Nelson demonstrated the use of many analytical methods for metal ions based on the separation of their chloride, fluoride, nitrate or sulfate complexes by anion chromatography. Automatic in-line detection was progressively introduced from 1960 to 1980, as were novel chromatographic methods for metal ion separations. A groundbreaking method developed by Small, Stevens and Bauman at Dow Chemical Co. 
led to the creation of modern ion chromatography. Anions and cations could now be separated efficiently by a system of suppressed conductivity detection. In 1979, a method for anion chromatography with non-suppressed conductivity detection was introduced by Gjerde et al. Following it in 1980 was a similar method for cation chromatography. As a result, a period of intense competition began within the IC market, with proponents of both suppressed and non-suppressed conductivity detection. This competition led to the rapid growth of new formats and the fast evolution of IC. A challenge that needs to be overcome in the future development of IC is the preparation of highly efficient monolithic ion-exchange columns; overcoming this challenge would be of great importance to the development of IC. The boom in ion-exchange chromatography primarily began between 1935 and 1950, during World War II, and it was through the Manhattan Project that applications of IC were significantly extended. The principle of ion exchange was originally introduced by two English researchers, the agriculturalist H. S. Thompson and the chemist J. T. Way. Their work involved the action of the water-soluble fertilizer salts ammonium sulfate and potassium chloride, which could not easily be extracted from the ground by rain. They treated clays with these salts, resulting in the uptake of ammonia and the release of calcium. It was in the fifties and sixties that theoretical models were developed for IC for further understanding, and it was not until the seventies that continuous detectors were utilized, paving the way for the development from low-pressure to high-performance chromatography. Not until 1975 was "ion chromatography" established as a name in reference to these techniques; it was thereafter used as a name for marketing purposes. Today IC is important for investigating aqueous systems, such as drinking water. It is a popular method for analyzing anionic elements or complexes that help solve environmentally relevant problems. Likewise, it also has great uses in the semiconductor industry. Because of the abundance of separating columns, elution systems, and detectors available, ion chromatography has developed into the main method for ion analysis. Since 1935, ion-exchange chromatography has rapidly developed into one of the most heavily used techniques, with its principles often applied in the majority of fields of chemistry, including distillation, adsorption, and filtration. Principle Ion-exchange chromatography separates molecules based on their respective charged groups. Ion-exchange chromatography retains analyte molecules on the column based on coulombic (ionic) interactions. The ion-exchange chromatography matrix consists of positively and negatively charged ions. Essentially, molecules undergo electrostatic interactions with opposite charges on the stationary phase matrix. The stationary phase consists of an immobile matrix that contains charged ionizable functional groups or ligands. The stationary phase surface displays ionic functional groups (R-X) that interact with analyte ions of opposite charge. To achieve electroneutrality, these immobilized charges couple with exchangeable counterions in the solution. Ionizable molecules that are to be purified compete with these exchangeable counterions for binding to the immobilized charges on the stationary phase. 
These ionizable molecules are retained or eluted based on their charge. Initially, molecules that do not bind or that bind weakly to the stationary phase are the first to be washed away. Altered conditions are needed for the elution of the molecules that do bind to the stationary phase. The concentration of the exchangeable counterions, which compete with the molecules for binding, can be increased, or the pH can be changed to affect the ionic charge of the eluent or the solute. A change in pH affects the charge on the particular molecules and therefore alters their binding. When the net charge of the solute molecules is reduced, they begin to elute. In this way, such adjustments can be used to release the proteins of interest. Additionally, the concentration of counterions can be gradually varied to affect the retention of the ionized molecules, thus separating them. This type of elution is called gradient elution. On the other hand, step elution can be used, in which the concentration of counterions is varied in steps. This type of chromatography is further subdivided into cation exchange chromatography and anion-exchange chromatography. Positively charged molecules bind to cation exchange resins, while negatively charged molecules bind to anion exchange resins. An ionic compound consisting of the cationic species M+ and the anionic species B- can be retained by the stationary phase. Cation exchange chromatography retains positively charged cations because the stationary phase displays a negatively charged functional group: R-X-C+ + M+B- ⇌ R-X-M+ + C+ + B-. Anion exchange chromatography retains anions using a positively charged functional group: R-X+A- + M+B- ⇌ R-X+B- + M+ + A-. Note that the ion strength of either C+ or A- in the mobile phase can be adjusted to shift the equilibrium position and thus the retention time. The ion chromatogram shows a typical chromatogram obtained with an anion exchange column. Procedure Before ion-exchange chromatography can be initiated, the column must be equilibrated. The stationary phase must be equilibrated to certain requirements that depend on the experiment at hand. Once equilibrated, the charged ions in the stationary phase will be associated with oppositely charged exchangeable ions, such as Cl- or Na+. Next, a buffer should be chosen in which the desired protein can bind. After equilibration, the column needs to be washed. The washing phase will help elute all impurities that do not bind to the matrix, while the protein of interest remains bound. This sample buffer needs to have the same pH as the buffer used for equilibration to help the desired proteins bind. Uncharged proteins will be eluted from the column at a speed similar to that of the buffer flowing through the column, with no retention. Once the sample has been loaded onto the column, and the column has been washed with the buffer to elute all non-desired proteins, elution is carried out under specific conditions to elute the desired proteins that are bound to the matrix. Bound proteins are eluted by utilizing a gradient of linearly increasing salt concentration. With increasing ionic strength of the buffer, the salt ions compete with the desired proteins in order to bind to charged groups on the surface of the medium. This causes the desired proteins to be eluted from the column. Proteins that have a low net charge will be eluted first as the salt concentration increases, raising the ionic strength. Proteins with a high net charge will need a higher ionic strength to be eluted from the column. 
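The pH-versus-pI rule used throughout this section can be captured in a small helper. In the sketch below, the protein names and pI values are rough literature figures included only as examples, and the decision logic is the simplified rule stated above (real proteins can also bind through local charge patches even near their pI).

```python
# Simplified exchanger choice from working pH versus protein pI.
def choose_exchanger(pi: float, ph: float) -> str:
    if ph > pi:
        return "anion exchanger (protein carries net negative charge)"
    if ph < pi:
        return "cation exchanger (protein carries net positive charge)"
    return "neither: no net charge at the pI"

proteins = {"lysozyme": 11.0, "serum albumin": 4.7, "myoglobin": 7.2}
working_ph = 7.0   # assumed equilibration-buffer pH

for name, pi in proteins.items():
    print(f"{name:13s} (pI {pi:4.1f}) at pH {working_ph}: "
          f"{choose_exchanger(pi, working_ph)}")
```

In a salt-gradient elution, the same reasoning predicts the elution order: the protein whose net charge is smallest at the working pH elutes at the lowest salt concentration.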
It is possible to perform ion-exchange chromatography in bulk, on thin layers of medium such as glass or plastic plates coated with a layer of the desired stationary phase, or in chromatography columns. Thin-layer chromatography and column chromatography are similar in that they act within the same governing principles: there is constant and frequent exchange of molecules as the mobile phase travels along the stationary phase. It is not imperative to add the sample in minute volumes, as the predetermined conditions for the exchange column are chosen so that there will be strong interaction between the mobile and stationary phases. Furthermore, the mechanism of the elution process causes a compartmentalization of the differing molecules based on their respective chemical characteristics: the salt concentration increases first at or near the top of the column, displacing the molecules bound there, while molecules bound lower down are released later, when the higher salt concentration reaches that area. These principles make ion-exchange chromatography an excellent candidate for the initial steps of a complex purification procedure, as it can quickly yield small volumes of target molecules even from a large starting volume. Comparatively simple devices are often used to apply counterions of increasing gradient to a chromatography column. Counterions such as copper(II) are most often chosen for effectively separating peptides and amino acids through complex formation. A simple two-chamber device can be used to create a salt gradient: elution buffer is continuously drawn from the limit chamber into the stirred mixing chamber, thereby altering the concentration of the buffer delivered to the column. Generally, the buffer placed into the limit chamber is of high initial concentration, whereas the buffer placed into the stirred chamber is of low concentration. As the concentrated buffer from the limit chamber is mixed in and drawn into the column, the concentration in the stirred chamber gradually increases. Altering the shapes of the chambers, as well as the concentration of the limit buffer, allows for the production of concave, linear, or convex gradients of counterion.
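A minimal sketch of one simple idealization of such a device: a constant-volume stirred mixing chamber fed from a limit chamber approaches the limit concentration exponentially. The volumes, flow rate, and concentrations below are assumed purely for illustration.

    import numpy as np

    c_limit, c0 = 1.0, 0.0   # M NaCl in the limit and mixing chambers (assumed)
    v = 50.0                 # mixing-chamber volume, mL (assumed)
    flow = 1.0               # flow rate to the column, mL/min (assumed)

    t = np.linspace(0.0, 200.0, 9)
    # Mass balance for a constant-volume stirred chamber:
    # dC/dt = (flow / v) * (c_limit - C), whose closed form is:
    c = c_limit - (c_limit - c0) * np.exp(-flow * t / v)
    for ti, ci in zip(t, c):
        print(f"t = {ti:6.1f} min   eluent concentration = {ci:.3f} M")

Changing the relative geometry of the two chambers changes the shape of this curve, which is how concave, linear, or convex gradients are obtained in practice.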
A multitude of different media are used for the stationary phase. Among the most common immobilized charged groups are trimethylaminoethyl (TAM), triethylaminoethyl (TEAE), diethyl-2-hydroxypropylaminoethyl (QAE), aminoethyl (AE), diethylaminoethyl (DEAE), sulpho (S), sulphomethyl (SM), sulphopropyl (SP), carboxy (C), and carboxymethyl (CM). Successful packing of the column is an important aspect of ion chromatography: the stability and efficiency of the final column depend on the packing method, the solvent used, and factors that affect the mechanical properties of the column. In contrast to the inefficient early dry-packing methods, wet slurry packing, in which particles suspended in an appropriate solvent are delivered into a column under pressure, shows significant improvement. Three approaches can be employed in wet slurry packing: the balanced-density method (the solvent's density is about that of the porous silica particles), the high-viscosity method (a solvent of high viscosity is used), and the low-viscosity slurry method (performed with low-viscosity solvents). Polystyrene is used as a medium for ion exchange; it is made by polymerization of styrene with the use of divinylbenzene and benzoyl peroxide. Such exchangers form hydrophobic interactions with proteins which can be irreversible; because of this property, polystyrene ion exchangers are not suitable for protein separation. They are used instead for the separation of small molecules, as in amino acid separation and the removal of salt from water. Polystyrene ion exchangers with large pores can be used for the separation of proteins, but they must be coated with a hydrophilic substance. Cellulose-based media can be used for the separation of large molecules, as they contain large pores; protein binding in these media is high, and their hydrophobic character is low. DEAE is an anion-exchange matrix produced from a positive side group of diethylaminoethyl bound to cellulose or Sephadex. Agarose-gel-based media also contain large pores, but their substitution ability is lower in comparison with dextrans. The ability of a medium to swell in liquid depends on the cross-linking of these substances and on the pH and ion concentrations of the buffers used. The use of high temperature and pressure allows a significant increase in the efficiency of ion chromatography, along with a decrease in analysis time. Temperature influences selectivity through its effect on retention properties: the retention factor, k = (t_Rg − t_Mg)/(t_Mg − t_ext), where t_Rg is the retention time, t_Mg the hold-up (dead) time, and t_ext the extra-column time, increases with temperature for small ions, while the opposite trend is observed for larger ions. Despite the ion selectivity achievable in different media, further research is being done on performing ion-exchange chromatography over the range of 40–175 °C. An appropriate solvent can be chosen based on observations of how the column particles behave in it: using an optical microscope, one can easily distinguish a desirable dispersed state of the slurry from aggregated particles.
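A small helper for the retention factor defined above; the times (in minutes) are invented for illustration.

    def retention_factor(t_r, t_m, t_ext=0.0):
        """k = (t_r - t_m) / (t_m - t_ext), with t_ext the extra-column time."""
        return (t_r - t_m) / (t_m - t_ext)

    # Example: retention time 8.4 min, hold-up time 1.2 min, negligible t_ext
    print(retention_factor(8.4, 1.2))  # 6.0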
Weak and strong ion exchangers
A "strong" ion exchanger will not lose the charge on its matrix once the column is equilibrated, so a wide range of pH buffers can be used. "Weak" ion exchangers have a range of pH values in which they maintain their charge; if the pH of the buffer used with a weak ion-exchange column goes outside that range, the column loses its charge distribution and the molecule of interest may be lost. Despite their smaller pH range, weak ion exchangers are often used in preference to strong ones because of their greater specificity; in some experiments, the retention times on weak ion exchangers are just long enough to obtain the desired data at high specificity. The resins (often termed "beads") of ion-exchange columns may carry functional groups such as weak or strong acids and weak or strong bases. There are also special columns whose resins carry amphoteric functional groups that can exchange both cations and anions. Examples of functional groups of strong ion-exchange resins are the quaternary ammonium cation (Q), an anion exchanger, and sulfonic acid (S, -SO2OH), a cation exchanger; these exchangers can maintain their charge density over a pH range of 0–14. Examples of functional groups of weak ion-exchange resins include diethylaminoethyl (DEAE, -C2H4N(C2H5)2), an anion exchanger, and carboxymethyl (CM, -CH2-COOH), a cation exchanger; these maintain the charge density of their columns over a pH range of 5–9. In ion chromatography, the interaction of the solute ions and the stationary phase, based on their charges, determines which ions will bind and to what degree. When the stationary phase features positive groups that attract anions, it is called an anion exchanger; when there are negative groups on the stationary phase, cations are attracted and it is a cation exchanger. The attraction between ions and the stationary phase also depends on the resin, the organic particles used as ion exchangers. Each resin has a relative selectivity that varies with the solute ions present, which compete to bind to the resin groups on the stationary phase. The selectivity coefficient, the equivalent of an equilibrium constant, is determined as a ratio of the concentrations of each ion between the resin and the solution; for the exchange of two ions A and B of equal charge it can be written K(A,B) = ([B]_resin [A]_solution) / ([A]_resin [B]_solution). The general trend is that ion exchangers prefer to bind the ion with the higher charge, the smaller hydrated radius, and the higher polarizability, i.e., the ease with which the electron cloud of an ion is distorted by other charges. Despite this selectivity, an excess of an ion with lower selectivity introduced to the column will cause that ion to bind more to the stationary phase, since the selectivity coefficient describes an equilibrium that allows fluctuations in the binding reaction taking place during ion-exchange chromatography. In summary, the commonly used exchangers are the quaternary ammonium group (Q), a strong anion exchanger (working pH 0–14); sulfonic acid (S), a strong cation exchanger (working pH 0–14); diethylaminoethyl (DEAE), a weak anion exchanger (working pH 5–9); and carboxymethyl (CM), a weak cation exchanger (working pH 5–9).
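A one-line numeric illustration of the selectivity coefficient as written above, with made-up equilibrium concentrations (mol/L):

    def selectivity_coefficient(b_resin, a_solution, a_resin, b_solution):
        # K(A,B) = ([B]resin * [A]solution) / ([A]resin * [B]solution)
        return (b_resin * a_solution) / (a_resin * b_solution)

    # Hypothetical values in which the resin prefers ion B over ion A
    print(selectivity_coefficient(b_resin=0.08, a_solution=0.05,
                                  a_resin=0.02, b_solution=0.05))  # 4.0

A coefficient greater than 1 indicates that the resin takes up B preferentially, in line with the selectivity trends described above.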
Typical technique
A sample is introduced, either manually or with an autosampler, into a sample loop of known volume. A buffered aqueous solution known as the mobile phase carries the sample from the loop onto a column that contains some form of stationary phase material, typically a resin or gel matrix consisting of agarose or cellulose beads with covalently bonded charged functional groups. Equilibration of the stationary phase is needed in order to obtain the desired charge on the column; if the column is not properly equilibrated, the desired molecule may not bind strongly to it. The target analytes (anions or cations) are retained on the stationary phase but can be eluted by increasing the concentration of a similarly charged species that displaces the analyte ions from the stationary phase. For example, in cation-exchange chromatography, a positively charged analyte can be displaced by adding positively charged sodium ions. The analytes of interest must then be detected by some means, typically by conductivity or UV/visible light absorbance. Controlling an IC system usually requires a chromatography data system (CDS); in addition to IC systems, some of these CDSs can also control gas chromatography (GC) and HPLC.
Membrane exchange chromatography
A type of ion-exchange chromatography, membrane exchange is a relatively new method of purification designed to overcome the limitations of columns packed with beads. Membrane chromatographic devices are cheap to mass-produce and disposable, unlike other chromatography devices that require maintenance and time to revalidate. Three types of membrane absorbers are typically used when separating substances: flat sheet, hollow fibre, and radial flow. The most common absorber, and the one best suited for membrane chromatography, is a stack of multiple flat sheets, because it provides more absorbent volume. It can be used to overcome mass-transfer limitations and pressure drop, making it especially advantageous for isolating and purifying viruses, plasmid DNA, and other large macromolecules. The column is packed with microporous membranes whose internal pores contain adsorptive moieties that can bind the target protein. Adsorptive membranes are available in a variety of geometries and chemistries, which allows them to be used for purification as well as for fractionation, concentration, and clarification, with an efficiency tenfold that of beads. Membranes can be prepared through isolation of the membrane itself, where membranes are cut into squares and immobilized. A more recent method involves the use of live cells attached to a support membrane, used for the identification and clarification of signaling molecules.
Separating proteins
Ion-exchange chromatography can be used to separate proteins because they contain charged functional groups. The ions of interest (in this case charged proteins) are exchanged for other ions (usually H+) on a charged solid support. The solutes are most commonly in a liquid phase, which tends to be water. Take, for example, proteins in water: this liquid phase is passed through a column, commonly known as the solid phase, which is filled with porous synthetic particles carrying a particular charge. These porous particles, also referred to as beads, may be aminated (containing amino groups) or may carry metal ions in order to have a charge. The column can be prepared using porous polymers; for macromolecules with a mass of over 100,000 Da, the optimum size of the porous particle is about 1 μm, because then the slow diffusion of the solutes within the pores does not restrict the separation quality. Beads containing positively charged groups, which attract negatively charged proteins, are commonly referred to as anion-exchange resins; the amino acids that have negatively charged side chains at pH 7 (the pH of water) are glutamate and aspartate. Negatively charged beads are called cation-exchange resins, as they attract positively charged proteins; the amino acids that have positively charged side chains at pH 7 are lysine, histidine and arginine. The isoelectric point is the pH at which a compound, in this case a protein, has no net charge. A protein's isoelectric point, or pI, can be determined from the pKa values of its side chains: when the positive (amino) groups exactly cancel the negative (carboxyl) groups, the protein is at its pI. For proteins that carry no charge at pH 7, it is advisable to use buffers rather than water, as this enables manipulation of the pH to alter the ionic interactions between the proteins and the beads. Weakly acidic or basic side chains can acquire a charge if the pH is, respectively, high or low enough. Separation can be achieved based on the natural isoelectric point of the protein. Alternatively, a peptide tag can be genetically added to the protein to give it an isoelectric point away from those of most natural proteins (e.g., six arginines for binding to a cation-exchange resin, or six glutamates for binding to an anion-exchange resin such as DEAE-Sepharose). Elution by increasing the ionic strength of the mobile phase is more subtle: ions from the mobile phase interact with the immobilized ions on the stationary phase, "shielding" the stationary phase from the protein and letting the protein elute. Elution from ion-exchange columns can also be made sensitive to changes of a single charge, as exploited in chromatofocusing. Ion-exchange chromatography is also useful in the isolation of specific multimeric protein assemblies, allowing purification of specific complexes according to both the number and the position of charged peptide tags.
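The dependence of a protein's net charge on pH, and hence the choice of exchanger, can be sketched with the Henderson–Hasselbalch relation. The Python sketch below uses approximate textbook pKa values, and the protein composition is invented for illustration.

    # Approximate side-chain and terminal pKa values (textbook figures)
    PKA = {"Asp": 3.9, "Glu": 4.1, "His": 6.0, "Lys": 10.5, "Arg": 12.5,
           "N-term": 9.0, "C-term": 2.0}
    SIGN = {"Asp": -1, "Glu": -1, "His": +1, "Lys": +1, "Arg": +1,
            "N-term": +1, "C-term": -1}

    def net_charge(composition, ph):
        """Sum the fractional charges of all ionizable groups at a given pH."""
        q = 0.0
        for group, count in composition.items():
            if SIGN[group] > 0:    # basic group: charged while protonated
                q += count / (1.0 + 10 ** (ph - PKA[group]))
            else:                  # acidic group: charged once deprotonated
                q -= count / (1.0 + 10 ** (PKA[group] - ph))
        return q

    protein = {"Asp": 4, "Glu": 6, "His": 2, "Lys": 5, "Arg": 3,
               "N-term": 1, "C-term": 1}   # invented composition
    for ph in (5.0, 7.0, 9.0):
        print(f"pH {ph}: net charge = {net_charge(protein, ph):+.2f}")

Scanning the pH for the zero crossing estimates the pI; a negative net charge at the working pH suggests an anion exchanger, a positive one a cation exchanger.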
Gibbs–Donnan effect
In ion-exchange chromatography, the Gibbs–Donnan effect is observed when the pH of the applied buffer and that within the ion exchanger differ, even by up to one pH unit. For example, in anion-exchange columns, the ion exchangers repel protons, so the pH of the buffer near the column is higher than that of the rest of the solvent. As a result, an experimenter has to be careful that the protein(s) of interest is stable and properly charged at the "actual" pH. This effect results from two similarly charged particles, one from the resin and one from the solution, failing to distribute properly between the two phases; there is a selective uptake of one ion over another. For example, in a sulphonated polystyrene resin, a cation-exchange resin, the chloride ion of a hydrochloric acid buffer should equilibrate into the resin. However, since the concentration of sulphonic acid in the resin is high, the hydrogen ion of HCl has little tendency to enter the column. This, combined with the requirement of electroneutrality, leads to a minimal amount of hydrogen and chloride entering the resin.
Uses
Clinical utility
One use of ion chromatography can be seen in argentation chromatography. Usually, silver and compounds containing acetylenic and ethylenic bonds interact only weakly; this phenomenon has been widely tested on olefin compounds. The complexes that olefins form with silver ions are weak, based on the overlap of pi, sigma, and d orbitals with the available electrons, and therefore cause no real changes to the double bond. This behavior has been exploited to separate lipids, mainly fatty acids, from mixtures into fractions with differing numbers of double bonds using silver ions. The ion resins were impregnated with silver ions and then exposed to various acids (such as silicic acid) to elute fatty acids of different characteristics. Detection limits as low as 1 μM can be obtained for alkali metal ions. Ion chromatography may be used for the measurement of HbA1c and porphyrins, and for water purification. Ion-exchange resins (IERs) have been widely used, especially in medicine, owing to their high capacity and the uncomplicated separation process. One such use is kidney dialysis, in which blood elements are separated using a cellulose-membrane artificial kidney. Another clinical application of ion chromatography is the rapid anion-exchange chromatography technique used to separate creatine kinase (CK) isoenzymes from human serum and from tissue sourced in autopsy material (mostly CK-rich tissues such as cardiac muscle and brain). These isoenzymes, MM, MB, and BB, all carry out the same function given different amino acid sequences: they convert creatine, using ATP, into phosphocreatine, releasing ADP. Mini-columns were filled with DEAE-Sephadex A-50 and eluted with Tris buffer containing sodium chloride at various concentrations (each concentration chosen advantageously to manipulate elution). Human tissue extract was applied to the columns for separation. All fractions were analyzed for total CK activity, and each source of CK was found to contain its characteristic isoenzymes. CK-MM eluted first, then CK-MB, followed by CK-BB. Since the isoenzymes found in each sample are tissue-specific, they could therefore be used to identify the source of the sample.
Using these results, correlations could be drawn between patients' diagnoses and the CK isoenzymes found in greatest abundance. Of the 71 patients studied, about 35 had suffered a heart attack (myocardial infarction) and showed abundant amounts of the CK-MM and CK-MB isoenzymes. The findings further show that many other diagnoses, including renal failure, cerebrovascular disease, and pulmonary disease, were associated with the CK-MM isoenzyme only. The results of this study indicate correlations between various diseases and the CK isoenzymes found, confirming previous results obtained with other techniques. Studies of the CK-MB found in heart attack victims have expanded since this study and this application of ion chromatography.
Industrial applications
Since 1975 ion chromatography has been widely used in many branches of industry. Its main advantages are reliability, very good accuracy and precision, high selectivity, high speed, high separation efficiency, and low cost of consumables. The most significant developments related to ion chromatography are new sample preparation methods; improvements in the speed and selectivity of analyte separation; lower limits of detection and quantification; an extended scope of applications; the development of new standard methods; miniaturization; and the extension of the analysis to new groups of substances. Ion chromatography allows quantitative testing of the electrolyte and of proprietary additives in electroplating baths; it is an advance over qualitative hull-cell testing and over less accurate UV testing. Ions, catalysts, brighteners and accelerators can be measured. Ion-exchange chromatography has gradually become a widely known, universal technique for the detection of both anionic and cationic species. Applications for such purposes have been developed, or are under development, for a variety of fields of interest, in particular the pharmaceutical industry. The use of ion-exchange chromatography in pharmaceuticals has increased in recent years: in 2006, a chapter on ion-exchange chromatography was officially added to the United States Pharmacopeia–National Formulary (USP–NF), and in the 2009 release of the USP–NF, the United States Pharmacopeia made several ion chromatography analyses available using two techniques, conductivity detection and pulsed amperometric detection. The majority of these applications are used for measuring and analyzing residual limits in pharmaceuticals, including limits of oxalate, iodide, sulfate, sulfamate and phosphate, as well as various electrolytes including potassium and sodium. In total, the 2009 edition of the USP–NF officially released twenty-eight methods of detection for the analysis of active compounds, or components of active compounds, using either conductivity detection or pulsed amperometric detection.
Drug development
There has been growing interest in the application of IC in the analysis of pharmaceutical drugs. IC is used in different aspects of product development and quality-control testing. For example, IC is used to improve the stability and solubility properties of pharmaceutically active drug molecules, and to identify systems with a higher tolerance for organic solvents. IC has also been used for the determination of analytes as part of a dissolution test.
For instance, calcium dissolution tests have shown that the other ions present in the medium can be well resolved among themselves and from the calcium ion. Therefore, IC has been employed with drugs in the form of tablets and capsules to determine the amount of drug dissolved over time. IC is also widely used for the detection and quantification of excipients, or inactive ingredients, in pharmaceutical formulations: sugars and sugar alcohols have been detected in such formulations by IC because these polar groups resolve well on an ion column. IC methodology is also established for the analysis of impurities in drug substances and products. Impurities, i.e., any components that are not part of the drug chemical entity, are evaluated, and they give insight into the maximum and minimum amounts of drug that should be administered to a patient per day. See also Anion-exchange chromatography Chromatofocusing High performance liquid chromatography Isoelectric point References Bibliography External links Chromatography
Ion chromatography
[ "Chemistry" ]
6,713
[ "Chromatography", "Separation processes" ]
4,577,402
https://en.wikipedia.org/wiki/Elementary%20divisors
In algebra, the elementary divisors of a module over a principal ideal domain (PID) occur in one form of the structure theorem for finitely generated modules over a principal ideal domain. If R is a PID and M a finitely generated R-module, then M is isomorphic to a finite direct sum of the form

    M ≅ R^r ⊕ R/(q_1) ⊕ R/(q_2) ⊕ ⋯ ⊕ R/(q_l),

where the (q_i) are nonzero primary ideals. The list of primary ideals is unique up to order (but a given ideal may be present more than once, so the list represents a multiset of primary ideals); the elements q_i are unique only up to associatedness, and are called the elementary divisors. Note that in a PID, the nonzero primary ideals are powers of prime ideals, so the elementary divisors can be written as powers of irreducible elements. The nonnegative integer r is called the free rank or Betti number of the module M. The module is determined up to isomorphism by specifying its free rank r and, for each class of associated irreducible elements p and each positive integer k, the number of times that p^k occurs among the elementary divisors. The elementary divisors can be obtained from the list of invariant factors of the module by decomposing each of them as far as possible into pairwise relatively prime (non-unit) factors, which will be powers of irreducible elements. This decomposition corresponds to maximally decomposing each submodule corresponding to an invariant factor by using the Chinese remainder theorem for R. Conversely, knowing the multiset of elementary divisors, the invariant factors can be found, starting from the final one (which is a multiple of all others), as follows: for each irreducible element p such that some power of p occurs in the multiset, take the highest such power, removing it from the multiset, and multiply these powers together for all (classes of associated) p to give the final invariant factor; as long as the multiset is non-empty, repeat to find the invariant factors before it. See also Invariant factors Smith normal form References Chap.11, p.182. Chap. III.7, p.153 of Module theory
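For the special case R = ℤ, both conversions described above can be sketched in a few lines of Python; this is a minimal illustration using sympy's factorint for integer factorization, and the function names are ours.

    from collections import defaultdict
    from sympy import factorint

    def elementary_divisors(invariant_factors):
        """Split each invariant factor into prime-power elementary divisors."""
        return sorted(p ** k
                      for d in invariant_factors
                      for p, k in factorint(d).items())

    def invariant_factors(elementary):
        """Rebuild the invariant factors from a multiset of prime powers."""
        pools = defaultdict(list)
        for q in elementary:
            (p, k), = factorint(q).items()   # each q is a single prime power
            pools[p].append(p ** k)
        for powers in pools.values():
            powers.sort()
        factors = []
        while any(pools.values()):
            f = 1
            for powers in pools.values():
                if powers:
                    f *= powers.pop()        # highest remaining power of p
            factors.append(f)
        return factors[::-1]                 # ascending divisibility chain

    print(elementary_divisors([6, 36]))      # [2, 3, 4, 9]
    print(invariant_factors([2, 3, 4, 9]))   # [6, 36]

In line with the theorem, Z/6 ⊕ Z/36 and Z/2 ⊕ Z/3 ⊕ Z/4 ⊕ Z/9 are isomorphic as abelian groups.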
Elementary divisors
[ "Mathematics" ]
430
[ "Fields of abstract algebra", "Module theory" ]
4,577,462
https://en.wikipedia.org/wiki/Geometric%20modeling
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes. The shapes studied in geometric modeling are mostly two- or three-dimensional (solid figures), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing. Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering, architecture, geology and medical image processing. Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth. Notable awards of the area are the John A. Gregory Memorial Award and the Bézier award. See also 2D geometric modeling Architectural geometry Computational conformal geometry Computational topology Computer-aided engineering Computer-aided manufacturing Digital geometry Geometric modeling kernel List of interactive geometry software Parametric equation Parametric surface Solid modeling Space partitioning References Further reading General textbooks: This book is out of print and freely available from the author. For multi-resolution (multiple level of detail) geometric modeling : Subdivision methods (such as subdivision surfaces): External links Geometry and Algorithms for CAD (Lecture Note, TU Darmstadt) Geometric algorithms Computer-aided design Applied geometry
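As a toy illustration of the implicit versus parametric descriptions of a shape mentioned above, the following Python sketch checks that points generated by the parametric form of a circle satisfy its implicit equation; the radius and sample count are arbitrary.

    import math

    r = 2.0                        # arbitrary radius
    for i in range(8):             # sample the parametric form (r cos t, r sin t)
        t = 2.0 * math.pi * i / 8
        x, y = r * math.cos(t), r * math.sin(t)
        residual = x * x + y * y - r * r   # implicit form: x^2 + y^2 - r^2 = 0
        assert abs(residual) < 1e-9
    print("all sampled parametric points satisfy the implicit equation")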
Geometric modeling
[ "Mathematics", "Engineering" ]
393
[ "Computer-aided design", "Design engineering", "Applied mathematics", "Applied mathematics stubs", "Geometry", "Applied geometry" ]
4,579,933
https://en.wikipedia.org/wiki/Linear%20energy%20transfer
In dosimetry, linear energy transfer (LET) is the amount of energy that an ionizing particle transfers to the material traversed per unit distance. It describes the action of radiation into matter. It is identical to the retarding force acting on a charged ionizing particle travelling through the matter. By definition, LET is a positive quantity. LET depends on the nature of the radiation as well as on the material traversed. A high LET will slow down the radiation more quickly, generally making shielding more effective and preventing deep penetration. On the other hand, the higher concentration of deposited energy can cause more severe damage to any microscopic structures near the particle track. If a microscopic defect can cause larger-scale failure, as is the case in biological cells and microelectronics, the LET helps explain why radiation damage is sometimes disproportionate to the absorbed dose. Dosimetry attempts to factor in this effect with radiation weighting factors. Linear energy transfer is closely related to stopping power, since both equal the retarding force. The unrestricted linear energy transfer is identical to linear electronic stopping power, as discussed below. But the stopping power and LET concepts are different in the respect that total stopping power has the nuclear stopping power component, and this component does not cause electronic excitations. Hence nuclear stopping power is not contained in LET. The appropriate SI unit for LET is the newton, but it is most typically expressed in units of kiloelectronvolts per micrometre (keV/μm) or megaelectronvolts per centimetre (MeV/cm). While medical physicists and radiobiologists usually speak of linear energy transfer, most non-medical physicists talk about stopping power.
Restricted and unrestricted LET
The secondary electrons produced during the process of ionization by the primary charged particle are conventionally called delta rays, if their energy is large enough so that they themselves can ionize. Many studies focus upon the energy transferred in the vicinity of the primary particle track and therefore exclude interactions that produce delta rays with energies larger than a certain value Δ. This energy limit is meant to exclude secondary electrons that carry energy far from the primary particle track, since a larger energy implies a larger range. This approximation neglects the directional distribution of secondary radiation and the non-linear path of delta rays, but simplifies analytic evaluation. In mathematical terms, the restricted linear energy transfer is defined by

    L_Δ = dE_Δ / dx,

where dE_Δ is the energy loss of the charged particle due to electronic collisions while traversing a distance dx, excluding all secondary electrons with kinetic energies larger than Δ. If Δ tends toward infinity, then there are no electrons with larger energy, and the restricted linear energy transfer becomes the unrestricted linear energy transfer, which is identical to the linear electronic stopping power. Here, the use of the term "infinity" is not to be taken literally; it simply means that no energy transfers, however large, are excluded.
Application to radiation types
During his investigations of radioactivity, Ernest Rutherford coined the terms alpha rays, beta rays and gamma rays for the three types of emissions that occur during radioactive decay.
Alpha particles and other positive ions
Linear energy transfer is best defined for monoenergetic ions, i.e.
protons, alpha particles, and the heavier nuclei called HZE ions found in cosmic rays or produced by particle accelerators. These particles cause frequent direct ionizations within a narrow diameter around a relatively straight track, thus approximating continuous deceleration. As they slow down, the changing particle cross section modifies their LET, generally increasing it to a Bragg peak just before the particle achieves thermal equilibrium with the absorber, i.e., before the end of range. At equilibrium, the incident particle essentially comes to rest or is absorbed, at which point LET is undefined. Since the LET varies over the particle track, an average value is often used to represent the spread. Averages weighted by track length or weighted by absorbed dose are present in the literature, with the latter being more common in dosimetry. These averages are not widely separated for heavy particles with high LET, but the difference becomes more important in the other types of radiation discussed below. Often overlooked for alpha particles is the recoil nucleus of the alpha emitter, which carries significant energy, roughly 5% of that of the alpha particle, but which, because of its high electric charge and large mass, has an ultra-short range of only a few ångströms. Ignoring the contribution of the recoil nucleus can skew results significantly when examining the relative biological effectiveness of the alpha particle in the cytoplasm, since the alpha parent, being one of numerous heavy metals, is typically adhered to chromatin material such as chromosomes.
Beta particles
Electrons produced in nuclear decay are called beta particles. Because of their low mass relative to atoms, they are strongly scattered by nuclei (Coulomb or Rutherford scattering), much more so than heavier particles, and beta particle tracks are therefore crooked. In addition to producing secondary electrons (delta rays) while ionizing atoms, beta particles also produce bremsstrahlung photons. A maximum range of beta radiation can be defined experimentally; it is smaller than the range that would be measured along the particle path.
Gamma rays
Gamma rays are photons, whose absorption cannot be described by LET. When a gamma quantum passes through matter, it may be absorbed in a single process (photoelectric effect, Compton effect or pair production), or it continues unchanged on its path (only in the case of the Compton effect does another gamma quantum of lower energy proceed). Gamma ray absorption therefore obeys an exponential law (see Gamma rays); the absorption is described by the absorption coefficient or by the half-value thickness. LET therefore has no meaning when applied to photons. However, many authors speak of "gamma LET" anyway, where they are actually referring to the LET of the secondary electrons, i.e., mainly Compton electrons, produced by the gamma radiation. The secondary electrons will ionize far more atoms than the primary photon. This gamma LET has little relation to the attenuation rate of the beam, but it may have some correlation to the microscopic defects produced in the absorber. Even a monoenergetic gamma beam will produce a spectrum of electrons, and each secondary electron will have a variable LET as it slows down, as discussed above. The "gamma LET" is therefore an average. The transfer of energy from an uncharged primary particle to charged secondary particles can also be described by using the mass energy-transfer coefficient.
Biological effects
Many studies have attempted to relate linear energy transfer to the relative biological effectiveness (RBE) of radiation, with inconsistent results. The relationship varies widely depending on the nature of the biological material and the choice of endpoint used to define effectiveness. Even when these are held constant, different radiation spectra that share the same LET have significantly different RBE. Despite these variations, some overall trends are commonly seen. The RBE is generally independent of LET for any LET less than 10 keV/μm, so a low LET is normally chosen as the reference condition where RBE is set to unity. Above 10 keV/μm, some systems show a decline in RBE with increasing LET, while others show an initial increase to a peak before declining. Mammalian cells usually experience a peak RBE for LETs around 100 keV/μm. These are very rough numbers; for example, one set of experiments found a peak at 30 keV/μm. The International Commission on Radiological Protection (ICRP) proposed a simplified model of RBE–LET relationships for use in dosimetry: it defined a quality factor of radiation as a function of the dose-averaged unrestricted LET in water, intended as a highly uncertain, but generally conservative, approximation of RBE. Different iterations of their model are shown in the graph to the right. The 1966 model was integrated into their 1977 recommendations for radiation protection in ICRP 26. This model was largely replaced in the 1991 recommendations of ICRP 60 by radiation weighting factors that were tied to the particle type and independent of LET. ICRP 60 revised the quality factor function and reserved it for use with unusual radiation types that did not have radiation weighting factors assigned to them.
Application fields
When used to describe the dosimetry of ionizing radiation in the biological or biomedical setting, the LET (like linear stopping power) is usually expressed in units of keV/μm. In space applications, electronic devices can be disturbed by the passage of energetic electrons, protons or heavier ions that may alter the state of a circuit, producing "single event effects". Here the effect of the radiation is described by the LET (taken as synonymous with stopping power), typically expressed in units of MeV·cm2/mg of material, the units used for mass stopping power (the material in question is usually Si for MOS devices). These units arise from the energy lost by the particle per unit path length (MeV/cm) divided by the density of the material (mg/cm3). "Soft errors" of electronic devices due to cosmic rays on earth are, however, mostly due to neutrons, which, being uncharged, do not directly ionize the material, so their passage cannot be described by LET; rather, their effect is measured in neutrons per cm2 per hour, see Soft error. References Nuclear physics Radiation effects Radiobiology
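A small helper converting between the two unit conventions mentioned above, here using silicon (density 2.33 g/cm3) as the example target; the function name is ours.

    def let_kev_um_to_mev_cm2_mg(let_kev_per_um, density_g_per_cm3):
        """Convert a LET value from keV/μm to mass units of MeV·cm²/mg."""
        let_mev_per_cm = let_kev_per_um * 10.0        # 1 keV/μm = 10 MeV/cm
        density_mg_per_cm3 = density_g_per_cm3 * 1000.0
        return let_mev_per_cm / density_mg_per_cm3

    # Example: 1 keV/μm in silicon (2.33 g/cm³)
    print(let_kev_um_to_mev_cm2_mg(1.0, 2.33))   # about 4.3e-3 MeV·cm²/mg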
Linear energy transfer
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,927
[ "Physical phenomena", "Radiobiology", "Materials science", "Radiation", "Condensed matter physics", "Nuclear physics", "Radiation effects", "Radioactivity" ]
4,580,454
https://en.wikipedia.org/wiki/Virtual%20Physiological%20Human
The Virtual Physiological Human (VPH) is a European initiative that focuses on a methodological and technological framework that, once established, will enable collaborative investigation of the human body as a single complex system. The collective framework will make it possible to share resources and observations formed by institutions and organizations, creating disparate but integrated computer models of the mechanical, physical and biochemical functions of a living human body. VPH is a framework which aims to be descriptive, integrative and predictive. Clapworthy et al. state that the framework should be descriptive by allowing laboratory and healthcare observations around the world "to be collected, catalogued, organized, shared and combined in any possible way." It should be integrative by enabling those observations to be collaboratively analyzed by related professionals in order to create "systemic hypotheses." Finally, it should be predictive by encouraging interconnections between extensible and scalable predictive models and "systemic networks that solidify those systemic hypotheses" while allowing observational comparison. The framework is formed by large collections of anatomical, physiological, and pathological data stored in digital format, together with predictive simulations developed from these collections and services intended to support researchers in the creation and maintenance of these models, as well as in the creation of end-user technologies to be used in clinical practice. VPH models aim to integrate physiological processes across different length and time scales (multi-scale modelling). These models make possible the combination of patient-specific data with population-based representations. The objective is to develop a systemic approach, one that avoids reductionism and seeks not to subdivide biological systems in any particular way, whether by dimensional scale (body, organ, tissue, cells, molecules), by scientific discipline (biology, physiology, biophysics, biochemistry, molecular biology, bioengineering) or by anatomical sub-system (cardiovascular, musculoskeletal, gastrointestinal, etc.).
History
The initial concepts that led to the Virtual Physiological Human initiative came from the IUPS Physiome Project. That project was started in 1997 and represented the first worldwide effort to define the physiome through the development of databases and models which facilitated the understanding of the integrative function of cells, organs, and organisms. The project focused on compiling and providing a central repository of databases that would link experimental information and computational models from many laboratories into a single, self-consistent framework. Following the launch of the Physiome Project, there were many other worldwide initiatives of loosely coupled actions, all focusing on the development of methods for modelling and simulation of human pathophysiology. In 2005, an expert workshop on the Physiome was held as part of the Functional Imaging and Modelling of the Heart Conference in Barcelona, where a white paper entitled Towards Virtual Physiological Human: Multilevel modelling and simulation of the human anatomy and physiology was presented. The goal of this paper was to shape a clear overview of ongoing relevant VPH activities, to build a consensus on how they could be complemented by new initiatives for researchers in the EU, and to identify possible mid-term and long-term research challenges.
In 2006, the European Commission funded a coordination and support action entitled STEP: Structuring The EuroPhysiome. The STEP consortium promoted a significant consensus process that involved more than 300 stakeholders, including researchers, industry experts, policy makers, clinicians, and others. The prime result of this process was a booklet entitled Seeding the EuroPhysiome: A Roadmap to the Virtual Physiological Human. The STEP action and the resulting research roadmap were instrumental in the development of the VPH concept and in the initiation of a much larger process involving significant research funding, large collaborative projects, and a number of connected initiatives, not only in Europe but also in the United States, Japan, and China. VPH now forms a core target of the 7th Framework Programme of the European Commission, and aims to support the development of patient-specific computer models and their application in personalised and predictive healthcare. The Virtual Physiological Human Network of Excellence (VPH NoE) aims to connect the various VPH projects within the 7th Framework Programme.
Goals of the initiative
VPH-related projects have received substantial funding from the European Commission in order to further scientific progress in this area. The European Commission insists that VPH-related projects demonstrate strong industrial participation and clearly indicate a route from basic science into clinical practice. In the future, it is hoped that the VPH will eventually lead to a better healthcare system producing the following benefits: personalized care solutions; a reduced need for experiments on animals; more holistic approaches to medicine; and preventative approaches to the treatment of disease. Personalized care solutions are a key aim of the VPH, with new modelling environments for predictive, individualized healthcare expected to result in better patient safety and drug efficacy. It is anticipated that the VPH could also improve healthcare through a greater understanding of pathophysiological processes. The use of biomedical data from a patient to simulate potential treatments and outcomes could spare the patient unnecessary or ineffective treatments. The use of in silico (computer-simulated) modelling and testing of drugs could also reduce the need for experiments on animals. A further goal is a more holistic approach to medicine, with the body treated as a single multi-organ system rather than as a collection of individual organs. Advanced integrative tools should further help to improve the European healthcare system on a number of different levels, including diagnosis, treatment and care of patients, and in particular quality of life.
Projects
ImmunoGrid
ImmunoGrid is a project funded by the EU under Framework 6 to model and simulate the human immune system using grid computing at different physiological levels.
Osteoporotic Virtual Physiological Human
VPHOP (Osteoporotic Virtual Physiological Human) is a European osteoporosis research project within the framework of the Virtual Physiological Human initiative. With current technology, osteoporotic fractures can be predicted with an accuracy of less than 70%, so better ways to prevent and diagnose osteoporotic fractures are needed. Current fracture predictions are based on history and examination, on the basis of which key factors contributing to an increased probability of an osteoporotic fracture are identified.
This approach oversimplifies the mechanisms leading to an osteoporotic fracture and fails to take into account numerous hierarchical factors which are unique to the individual. These factors range from cell-level to body-level functions. Musculoskeletal anatomy and neuromotor control define the daily loading spectrum, including paraphysiological overloading events. Fracture events occur at the organ level and are influenced by the elasticity and geometry of bone; elasticity and geometry are in turn determined by tissue morphology. Cell activity changes tissue morphology and composition over time, and the constituents of the extracellular matrix are the prime determinants of tissue strength. Accuracy could be dramatically improved if a more deterministic approach were used that accounts for these factors and their variation between individuals. The goal of the Osteoporotic Virtual Physiological Human is to improve the accuracy of these osteoporotic fracture prediction algorithms. See also Cytome EuroPhysiome Human anatomy Living Human Project Physiology Physiome Virtual Physiological Rat VPHOP (Osteoporotic Virtual Physiological Human) References Bibliography Clapworthy, G., Kohl, P., Gregerson, H., Thomas, S., Viceconti, M., Hose, D., Pinney, D., Fenner, J., McCormack, K., Lawford, P., Van Sint Jan, S., Waters, S., & Coveney, P. 2007, "Digital Human Modelling: A Global Vision and a European Perspective," In Digital Human Modelling: A Global Vision and a European Perspective, Berlin: Springer, pp. 549–558. Hunter, P.J. 2006. Modeling living systems: the IUPS/EMBS Physiome project. Proceedings IEEE, 94, 678-991 Viceconti, M., Testi, D., Taddei, F., Martelli, S., Clapworthy, G. J., Van Sint Jan, S., 2006. Biomechanics Modeling of the Musculoskeletal Apparatus: Status and Key Issues. Proceedings of the IEEE 94(4), 725-739. External links VPH Institute Anatomical simulation Health informatics Pathology Physiology
Virtual Physiological Human
[ "Biology" ]
1,753
[ "Physiology", "Pathology", "Health informatics", "Medical technology" ]
4,580,462
https://en.wikipedia.org/wiki/Physiome
The physiome of an individual's or species' physiological state is the description of its functional behavior. The physiome describes the physiological dynamics of the normal intact organism and is built upon information and structure (genome, proteome, and morphome). The term comes from "physio-" (nature) and "-ome" (as a whole). The study of physiome is called physiomics. The concept of a physiome project was presented to the International Union of Physiological Sciences (IUPS) by its Commission on Bioengineering in Physiology in 1993. A workshop on designing the Physiome Project was held in 1997. At its world congress in 2001, the IUPS designated the project as a major focus for the next decade. The project is led by the Physiome Commission of the IUPS. Other research initiatives related to the physiome include: The EuroPhysiome Initiative The NSR Physiome Project of the National Simulation Resource (NSR) at the University of Washington, supporting the IUPS Physiome Project The Wellcome Trust Heart Physiome Project, a collaboration between the University of Auckland and the University of Oxford, part of the wider IUPS Physiome Project See also Cardiophysics Cytomics Human Genome Project List of omics topics in biology Living Human Project Virtual Physiological Human Virtual Physiological Rat References External links National Resource for Cell Analysis and Modeling (NRCAM) Biophysics Physiology
Physiome
[ "Physics", "Biology" ]
316
[ "Applied and interdisciplinary physics", "Biophysics", "Physiology" ]
12,551,029
https://en.wikipedia.org/wiki/ACS%20Chemical%20Biology
ACS Chemical Biology is a monthly peer-reviewed scientific journal published since 2006 by the American Chemical Society. It covers research at the interface between chemistry and biology spanning all aspects of chemical biology. The founding editor-in-chief was Laura L. Kiessling (Massachusetts Institute of Technology). Chuan He (University of Chicago) began the role of editor-in-chief in January 2022. According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.0. Types of content The journal publishes the following types of articles: research letters, articles, reviews, and perspectives, as well as specially commissioned articles that describe emerging directions in the field of chemical biology. Letters presenting findings of broad interest are typically five printed pages or fewer, while articles are twelve printed pages or fewer. Finally, reviews cover key concepts of interest to a broad readership. The journal has published the first three-dimensional interactive chemical structures replicating printed journal figures. Awards 2006 Award for Innovation in Journal Publishing from the Professional and Scholarly Publishing Division of the Association of American Publishers. Runner-up R.R. Hawkins Award for the Outstanding Professional, Reference or Scholarly Work of 2006. References External links The first freely-available 3d chemical structure corresponding to a journal figure Chemical Biology Biochemistry journals Monthly journals Academic journals established in 2006 English-language journals Delayed open access journals
ACS Chemical Biology
[ "Chemistry" ]
275
[ "Biochemistry journals", "Biochemistry literature" ]
12,552,062
https://en.wikipedia.org/wiki/Lemoine%27s%20conjecture
In number theory, Lemoine's conjecture, named after Émile Lemoine, also known as Levy's conjecture, after Hyman Levy, states that all odd integers greater than 5 can be represented as the sum of an odd prime number and an even semiprime.
History
The conjecture was posed by Émile Lemoine in 1895, but was erroneously attributed by MathWorld to Hyman Levy, who pondered it in the 1960s. A similar conjecture by Sun in 2008 states that all odd integers greater than 3 can be represented as the sum of a prime number and the product of two consecutive positive integers (p + x(x+1)).
Formal definition
To put it algebraically, 2n + 1 = p + 2q always has a solution in primes p and q (not necessarily distinct) for n > 2. The Lemoine conjecture is similar to, but stronger than, Goldbach's weak conjecture.
Example
For example, the odd integer 47 can be expressed as the sum of a prime and a semiprime in four different ways: 47 = 13 + 2×17 = 37 + 2×5 = 41 + 2×3 = 43 + 2×2. The number of ways this can be done is given by the corresponding sequence in the OEIS. Lemoine's conjecture is that this sequence contains no zeros after the first three terms.
Evidence
According to MathWorld, the conjecture has been verified by Corbitt up to 10^9. A blog post in June 2019 additionally claimed to have verified the conjecture up to 10^10. A proof was claimed in 2017 by Agama and Gensel, but this was later found to be flawed.
See also Lemoine's conjecture and extensions Notes References Emile Lemoine, L'intermédiare des mathématiciens, 1 (1894), 179; ibid 3 (1896), 151. H. Levy, "On Goldbach's Conjecture", Math. Gaz. 47 (1963): 274 L. Hodges, "A lesser-known Goldbach conjecture", Math. Mag., 66 (1993): 45–47. John O. Kiltinen and Peter B. Young, "Goldbach, Lemoine, and a Know/Don't Know Problem", Mathematics Magazine, 58(4) (Sep., 1985), pp. 195–203. Richard K. Guy, Unsolved Problems in Number Theory, New York: Springer-Verlag 2004: C1 External links Levy's Conjecture by Jay Warendorff, Wolfram Demonstrations Project. Additive number theory Conjectures about prime numbers Unsolved problems in number theory
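A brute-force check of the representation 2n + 1 = p + 2q is straightforward; the Python sketch below (using sympy's isprime) reproduces the four representations of 47 given above and tests all odd numbers up to a small bound.

    from sympy import isprime

    def lemoine_count(m):
        """Number of ways to write odd m > 5 as p + 2q with p, q prime."""
        assert m > 5 and m % 2 == 1
        return sum(1 for q in range(2, (m - 2) // 2 + 1)
                   if isprime(q) and isprime(m - 2 * q))

    print(lemoine_count(47))   # 4
    # Lemoine's conjecture asserts no odd m > 5 has zero representations:
    print(all(lemoine_count(m) > 0 for m in range(7, 10_000, 2)))   # True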
Lemoine's conjecture
[ "Mathematics" ]
536
[ "Unsolved problems in mathematics", "Mathematical problems", "Unsolved problems in number theory", "Number theory" ]
427,118
https://en.wikipedia.org/wiki/Principle%20of%20locality
In physics, the principle of locality states that an object is influenced directly only by its immediate surroundings. A theory that includes the principle of locality is said to be a "local theory". This is an alternative to the concept of instantaneous, or "non-local", action at a distance. Locality evolved out of the field theories of classical physics. The idea is that for a cause at one point to have an effect at another point, something in the space between those points must mediate the action: to exert an influence, something, such as a wave or particle, must travel through the space between the two points, carrying the influence. The special theory of relativity limits the maximum speed at which causal influence can travel to the speed of light, c. Therefore, the principle of locality implies that an event at one point cannot cause a truly simultaneous result at another point: an event at point A cannot cause a result at point B in a time less than T = D/c, where D is the distance between the points and c is the speed of light in vacuum. The principle of locality plays a critical role in one of the central results of quantum mechanics. In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen, with their EPR paradox thought experiment, raised the possibility that quantum mechanics might not be a complete theory. They described two systems physically separated after interacting; this pair would be called entangled in modern terminology. They reasoned that without additions, now called hidden variables, quantum mechanics would predict illogical relationships between the physically separated measurements. In 1964, John Stewart Bell formulated Bell's theorem, an inequality which, if violated in actual experiments, implies that quantum mechanics violates local causality (referred to as local realism in later work), a result now considered equivalent to precluding local hidden variables. Progressive variations on those Bell test experiments have since shown that quantum mechanics broadly violates Bell's inequalities. According to some interpretations of quantum mechanics, this result implies that some quantum effects violate the principle of locality.
Pre-quantum mechanics
During the 17th century, Newton's principle of universal gravitation was formulated in terms of "action at a distance", thereby violating the principle of locality. Newton himself considered this violation to be absurd. Coulomb's law of electric forces was initially also formulated as instantaneous action at a distance, but James Clerk Maxwell later showed that field equations, which obey locality, predict all of the phenomena of electromagnetism. These equations show that electromagnetic forces propagate at the speed of light. In 1905, Albert Einstein's special theory of relativity postulated that no matter or energy can travel faster than the speed of light, and Einstein thereby sought to reformulate physics in a way that obeyed the principle of locality. He later succeeded in producing an alternative theory of gravitation, general relativity, which obeys the principle of locality. However, a different challenge to the principle of locality developed subsequently from the theory of quantum mechanics, which Einstein himself had helped to create.
Models for locality
Simple spacetime diagrams can help clarify the issues related to locality. A way to describe the issues of locality suitable for discussion of quantum mechanics is illustrated in the diagram.
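The light-travel-time bound just stated is easy to evaluate numerically; the snippet below uses the mean Earth-Moon distance (about 384,400 km) as an example.

    C = 299_792_458.0    # speed of light in vacuum, m/s
    D = 384_400_000.0    # approximate Earth-Moon distance, m

    t_min = D / C        # minimum time for any causal influence to cross
    print(f"minimum causal delay: {t_min:.2f} s")   # about 1.28 s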
A particle is created in one location, then split and measured in two other, spatially separated, locations. The two measurement sites are named for Alice and Bob. Alice performs measurements (A) and gets a result (a); Bob performs measurements (B) and gets a result (b). The experiment is repeated many times and the results are compared.
Alice and Bob in spacetime
A spacetime diagram has a time coordinate running vertically and a space coordinate running horizontally. Alice, in a local region on the left, can affect events only in a cone extending into the future as shown; the finite speed of light prevents her from affecting other areas, including Bob's location in this case. Similarly, we can use the diagram to reason that Bob's local circumstances cannot be altered by Alice at the same time: all events that cause an effect on Bob lie in the cone below his location on the diagram. Dashed lines around Alice show her valid future locations; dashed lines around Bob show events that could have caused his present circumstance. When Alice measures quantum states at her location she gets the results labeled a; similarly Bob gets b. Models of locality attempt to explain the statistical relationship between these measured values.
Action at a distance
The simplest locality model is no locality: instantaneous action at a distance with no relativistic limits. The locality model for action at a distance is called continuous action. The gray area (a circle here) is a mathematical concept called a "screen": any path from a location through the screen becomes part of the physical model at that location. The gray ring indicates that events from all parts of space and time can affect the probability measured by Alice or Bob. So in the case of continuous action, events at all times and places affect Alice's and Bob's model. This simple model is highly successful for solar planetary dynamics with Newtonian gravity and in electrostatics, cases where relativistic effects are insignificant.
No future-input dependence
Many locality models explicitly or implicitly ignore the possible effect of future events. The spacetime diagram at the right shows the effect of such a restriction when combined with continuous action: inputs from the future (above the dashed line) are no longer considered part of Alice's or Bob's model. Comparing this diagram with the one for continuous action makes it clear that these are not the same locality model. Common-sense arguments about the future not affecting the present are reasonable criteria, but such assumptions alter the mathematical character of the models.
Bell's local causality
John Stewart Bell, when discussing his Bell's theorem, uses the screening model shown at the right. Events in the common past of Alice and Bob are part of the model used in calculating probabilities for Alice and for Bob, as indicated by the way the screen absorbs those events. However, events at Bob's location during Alice's measurement, and events in the future, are excluded. Bell called this assumption local causality, but with the diagram we can reason about the meaning of the assumption without getting tripped up by other meanings of local combined with other meanings of causal. Dashed lines show relativistically valid regions in the past of Alice or Bob. The gray arc is the assumed Bell "screen".
Quantum mechanics
The relative positions of our few, easily distinguishable planets (for example) can be seen directly: understanding and measuring their relative location poses only technical issues.
The submicroscopic world, on the other hand, is known only by measurements that average over many seemingly random ("statistical" or "probabilistic") events, and measurements can show either particle-like or wave-like results depending on their design. This world is governed by quantum mechanics. The concepts of locality are accordingly more complex and are described in the language of probability and correlation. In the 1935 Einstein–Podolsky–Rosen paradox paper (EPR paper), Albert Einstein, Boris Podolsky and Nathan Rosen imagined such an experiment. They observed that quantum mechanics predicts what is now known as quantum entanglement and examined its consequences. In their view, the classical principle of locality implied that "no real change can take place" at Bob's site as a result of whatever measurements Alice was doing. Since quantum mechanics does predict a wavefunction collapse that depends on Alice's choice of measurement, they concluded that this was a form of action-at-a-distance and that the wavefunction could not be a complete description of reality. Other physicists did not agree: they accepted the quantum wavefunction as complete and questioned the nature of locality and reality assumed in the EPR paper. In 1964 John Stewart Bell investigated whether it might be possible to fulfill Einstein's goal—to "complete" quantum theory—with local hidden variables to explain the correlations between spatially separated particles as predicted by quantum theory. Bell established a criterion to distinguish between local hidden-variables theory and quantum theory by measuring specific values of correlations between entangled particles. Subsequent experimental tests have shown that some quantum effects do violate Bell's inequalities and cannot be reproduced by a local hidden-variables theory. Bell's theorem depends on carefully defined models of locality. Locality and hidden variables Bell described local causality in terms of the probability concepts needed for analysis of quantum mechanics. Using the notation $P(a \mid A, \lambda)$ for the probability of a result $a$ for measurement $A$ with given state $\lambda$, Bell investigated the probability distribution $P(a, b \mid A, B, \lambda)$, where $\lambda$ represents hidden state variables set (locally) when the two particles are initially co-located. If local causality holds, then the probabilities observed by Alice and by Bob should be coupled only through the hidden variables, and we can show that the distribution factorizes: $P(a, b \mid A, B, \lambda) = P(a \mid A, \lambda)\,P(b \mid B, \lambda)$. Bell proved that a consequence of this factorization is a set of limits on the correlations observed by Alice and Bob, known as Bell inequalities. Since quantum mechanics predicts correlations stronger than this limit, locally set hidden variables cannot be added to "complete" quantum theory as desired by the EPR paper. Numerous experiments specifically designed to probe the issues of locality confirm the predictions of quantum mechanics; these include experiments where the two measurement locations are more than a kilometer apart. The 2022 Nobel Prize in Physics was awarded to Alain Aspect, John Clauser and Anton Zeilinger, in part "for experiments with entangled photons, establishing the violation of Bell inequalities". The specific aspect of quantum theory that leads to these correlations is termed quantum entanglement, and versions of Bell's scenario are now used to verify entanglement experimentally. Terminology Bell's mathematical results, when compared to experimental data, eliminate local hidden-variable mathematical quantum theories. But the interpretation of the math with respect to the physical world remains under debate.
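The size of the gap between locally causal models and the quantum prediction can be made concrete with a short sketch. Assuming the standard singlet-state correlation E(α, β) = −cos(α − β) from elementary quantum mechanics (supplied here for illustration, not a formula from the text), the CHSH combination of four correlations must stay within ±2 for any model satisfying Bell's factorization, yet reaches 2√2 in quantum mechanics at the textbook angle choices used below:

```python
import math

def E(alpha, beta):
    # Quantum correlation of spin measurements on a singlet pair
    # at analyzer angles alpha and beta: E = -cos(alpha - beta).
    return -math.cos(alpha - beta)

# Standard CHSH angle choices (radians) that maximize the violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"CHSH S = {S:.4f}")   # magnitude 2*sqrt(2) ~ 2.828
print(f"|S| = {abs(S):.4f} > 2, violating the local-causality bound")
```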
Bell described the assumptions behind his work as "local causality", shortened to "locality"; later authors referred to the assumptions as local realism. These different names do not alter the mathematical assumptions. A review of papers using this phrase suggests that a common (classical) physics definition of realism is the assumption that measurement results are well defined prior to, and independent of, the measurements. This definition includes classical concepts like "well-defined", which conflicts with quantum superposition, and "prior to ... measurements", which implies (metaphysical) preexistence of properties. Specifically, the term local realism in the context of Bell's theorem cannot be viewed as a kind of "realism" involving locality other than the kind implied by the Bell screening assumption. This conflict between common ideas of realism and quantum mechanics requires careful analysis whenever local realism is discussed. Adding a "locality" modifier, that the results of two spatially well-separated measurements cannot causally affect each other, does not make the combination relate to Bell's proof; the only interpretation of locality that Bell assumed was the one he called local causality. Consequently, Bell's theorem does not rule out the possibility of nonlocal hidden variables, nor of theories based on retrocausality or superdeterminism. Because of the probabilistic nature of wave function collapse, this apparent violation of locality in quantum mechanics cannot be used to transmit information faster than light, in accordance with the no-communication theorem. Asher Peres distinguishes between weak and strong nonlocality, the latter referring to theories that allow faster-than-light communication. Under these terms, quantum mechanics would allow weakly nonlocal correlations but not strong nonlocality. Relativistic quantum mechanics One of the main principles of quantum field theory is the principle of locality. The field operators and the Lagrangian density describing the dynamics of the fields are local, in the sense that interactions are not described by action-at-a-distance. This condition can be achieved by avoiding terms in the Lagrangian that are products of two fields that depend on distant coordinates. Specifically, in relativistic quantum field theory, to enforce the principles of locality and causality the following condition is required: if there are two observables, each localized within two distinct spacetime regions which happen to be at a spacelike separation from each other, the observables must commute. This condition is sometimes imposed as one of the axioms of relativistic quantum field theory. See also Einstein's thought experiments Local hidden-variable theory Non-locality (disambiguation) Quantum nonlocality Cluster decomposition Counterfactual definiteness References External links Quantum nonlocality vs. Einstein locality by H. Dieter Zeh Quantum measurement
Principle of locality
[ "Physics" ]
2,530
[ "Quantum measurement", "Quantum mechanics" ]
427,282
https://en.wikipedia.org/wiki/Mutual%20information
In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable. Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair $(X, Y)$ is from the product of the marginal distributions of $X$ and $Y$. MI is the expected value of the pointwise mutual information (PMI). The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano. Mutual information is also known as information gain. Definition Let $(X, Y)$ be a pair of random variables with values over the space $\mathcal{X} \times \mathcal{Y}$. If their joint distribution is $P_{(X,Y)}$ and the marginal distributions are $P_X$ and $P_Y$, the mutual information is defined as $I(X; Y) = D_{\mathrm{KL}}\big(P_{(X,Y)} \,\|\, P_X \otimes P_Y\big)$, where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence, and $P_X \otimes P_Y$ is the outer product distribution which assigns probability $P_X(x) \cdot P_Y(y)$ to each $(x, y)$. Notice, as per a property of the Kullback–Leibler divergence, that $I(X; Y)$ is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when $X$ and $Y$ are independent (and hence observing $Y$ tells you nothing about $X$). $I(X; Y)$ is non-negative; it is a measure of the price for encoding $(X, Y)$ as a pair of independent random variables when in reality they are not. If the natural logarithm is used, the unit of mutual information is the nat. If the log base 2 is used, the unit of mutual information is the shannon, also known as the bit. If the log base 10 is used, the unit of mutual information is the hartley, also known as the ban or the dit. In terms of PMFs for discrete distributions The mutual information of two jointly discrete random variables $X$ and $Y$ is calculated as a double sum: $I(X; Y) = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} P_{(X,Y)}(x, y) \log \frac{P_{(X,Y)}(x, y)}{P_X(x)\,P_Y(y)}$, where $P_{(X,Y)}$ is the joint probability mass function of $X$ and $Y$, and $P_X$ and $P_Y$ are the marginal probability mass functions of $X$ and $Y$ respectively. In terms of PDFs for continuous distributions In the case of jointly continuous random variables, the double sum is replaced by a double integral: $I(X; Y) = \int_{\mathcal{Y}} \int_{\mathcal{X}} p_{(X,Y)}(x, y) \log \frac{p_{(X,Y)}(x, y)}{p_X(x)\,p_Y(y)} \, dx \, dy$, where $p_{(X,Y)}$ is now the joint probability density function of $X$ and $Y$, and $p_X$ and $p_Y$ are the marginal probability density functions of $X$ and $Y$ respectively. Motivation Intuitively, mutual information measures the information that $X$ and $Y$ share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if $X$ and $Y$ are independent, then knowing $X$ does not give any information about $Y$ and vice versa, so their mutual information is zero. At the other extreme, if $X$ is a deterministic function of $Y$ and $Y$ is a deterministic function of $X$ then all information conveyed by $X$ is shared with $Y$: knowing $X$ determines the value of $Y$ and vice versa. As a result, the mutual information is the same as the uncertainty contained in $Y$ (or $X$) alone, namely the entropy of $Y$ (or $X$). A very special case of this is when $X$ and $Y$ are the same random variable. Mutual information is a measure of the inherent dependence expressed in the joint distribution of $X$ and $Y$ relative to the marginal distribution of $X$ and $Y$ under the assumption of independence.
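To make the discrete definition concrete, the following minimal Python sketch (the 2×2 joint distribution is an arbitrary illustrative choice, not one taken from the text) evaluates the double sum directly and cross-checks it against the entropy identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$ discussed below:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a PMF given as a list of probabilities."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def mutual_information(p_xy):
    """I(X;Y) in bits for a joint PMF given as a 2-D list (double sum)."""
    px = [sum(row) for row in p_xy]            # marginal of X
    py = [sum(col) for col in zip(*p_xy)]      # marginal of Y
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(p_xy)
               for j, p in enumerate(row) if p > 0)

p_xy = [[0.4, 0.1],
        [0.1, 0.4]]          # X and Y tend to agree

mi = mutual_information(p_xy)
hx = entropy([sum(row) for row in p_xy])
hy = entropy([sum(col) for col in zip(*p_xy)])
hxy = entropy([p for row in p_xy for p in row])

print(round(mi, 4))                 # ~0.2781 bits via the double sum
print(round(hx + hy - hxy, 4))      # same value via I = H(X)+H(Y)-H(X,Y)
```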
Mutual information therefore measures dependence in the following sense: $I(X; Y) = 0$ if and only if $X$ and $Y$ are independent random variables. This is easy to see in one direction: if $X$ and $Y$ are independent, then $P_{(X,Y)}(x, y) = P_X(x)\,P_Y(y)$, and therefore every term of the sum contains $\log 1 = 0$. Moreover, mutual information is nonnegative (i.e. $I(X; Y) \ge 0$; see below) and symmetric (i.e. $I(X; Y) = I(Y; X)$; see below). Properties Nonnegativity Using Jensen's inequality on the definition of mutual information we can show that $I(X; Y)$ is non-negative, i.e. $I(X; Y) \ge 0$. Symmetry $I(X; Y) = I(Y; X)$. The proof is given considering the relationship with entropy, as shown below. Supermodularity under independence If is independent of , then . Relation to conditional and joint entropy Mutual information can be equivalently expressed as $I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) = H(X) + H(Y) - H(X, Y) = H(X, Y) - H(X \mid Y) - H(Y \mid X)$, where $H(X)$ and $H(Y)$ are the marginal entropies, $H(X \mid Y)$ and $H(Y \mid X)$ are the conditional entropies, and $H(X, Y)$ is the joint entropy of $X$ and $Y$. Notice the analogy to the union, difference, and intersection of two sets: in this respect, all the formulas given above are apparent from the Venn diagram reported at the beginning of the article. In terms of a communication channel in which the output $Y$ is a noisy version of the input $X$, these relations are summarised in the figure. Because $I(X; Y)$ is non-negative, consequently $H(X) \ge H(X \mid Y)$. Here we give the detailed deduction of $I(X; Y) = H(Y) - H(Y \mid X)$ for the case of jointly discrete random variables: $I(X; Y) = \sum_{x,y} P_{(X,Y)}(x, y) \log \frac{P_{(X,Y)}(x, y)}{P_X(x)\,P_Y(y)} = \sum_{x,y} P_{(X,Y)}(x, y) \log \frac{P_{Y \mid X}(y \mid x)}{P_Y(y)} = -\sum_{x,y} P_{(X,Y)}(x, y) \log P_Y(y) + \sum_{x,y} P_{(X,Y)}(x, y) \log P_{Y \mid X}(y \mid x) = H(Y) - H(Y \mid X)$. The proofs of the other identities above are similar. The proof of the general case (not just discrete) is similar, with integrals replacing sums. Intuitively, if entropy $H(Y)$ is regarded as a measure of uncertainty about a random variable, then $H(Y \mid X)$ is a measure of what $X$ does not say about $Y$. This is "the amount of uncertainty remaining about $Y$ after $X$ is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty in $Y$, minus the amount of uncertainty in $Y$ which remains after $X$ is known", which is equivalent to "the amount of uncertainty in $Y$ which is removed by knowing $X$". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other. Note that in the discrete case $H(X \mid X) = 0$ and therefore $I(X; X) = H(X)$. Thus $I(X; X) \ge I(X; Y)$, and one can formulate the basic principle that a variable contains at least as much information about itself as any other variable can provide. Relation to Kullback–Leibler divergence For jointly discrete or jointly continuous pairs $(X, Y)$, mutual information is the Kullback–Leibler divergence from the product of the marginal distributions, $P_X \otimes P_Y$, of the joint distribution $P_{(X,Y)}$, that is, $I(X; Y) = D_{\mathrm{KL}}\big(P_{(X,Y)} \,\|\, P_X \otimes P_Y\big)$. Furthermore, let $P_{X \mid Y}(x \mid y)$ be the conditional mass or density function. Then we have the identity $I(X; Y) = \mathbb{E}_Y\big[D_{\mathrm{KL}}\big(P_{X \mid Y} \,\|\, P_X\big)\big]$. The proof for jointly discrete random variables is as follows: $I(X; Y) = \sum_y P_Y(y) \sum_x P_{X \mid Y}(x \mid y) \log \frac{P_{X \mid Y}(x \mid y)}{P_X(x)} = \sum_y P_Y(y)\, D_{\mathrm{KL}}\big(P_{X \mid Y}(\cdot \mid y) \,\|\, P_X\big) = \mathbb{E}_Y\big[D_{\mathrm{KL}}\big(P_{X \mid Y} \,\|\, P_X\big)\big]$. Similarly this identity can be established for jointly continuous random variables. Note that here the Kullback–Leibler divergence involves integration over the values of the random variable $X$ only, and the expression $D_{\mathrm{KL}}\big(P_{X \mid Y} \,\|\, P_X\big)$ still denotes a random variable because $Y$ is random. Thus mutual information can also be understood as the expectation of the Kullback–Leibler divergence of the univariate distribution $P_X$ of $X$ from the conditional distribution $P_{X \mid Y}$ of $X$ given $Y$: the more different the distributions $P_{X \mid Y}$ and $P_X$ are on average, the greater the information gain. Bayesian estimation of mutual information If samples from a joint distribution are available, a Bayesian approach can be used to estimate the mutual information of that distribution. The first work to do this, which also showed how to do Bayesian estimation of many other information-theoretic properties besides mutual information, was.
Subsequent researchers have rederived and extended this analysis, including more recent work based on a prior specifically tailored to estimation of mutual information per se. In addition, an estimation method accounting for continuous and multivariate outputs was recently proposed. Independence assumptions The Kullback–Leibler divergence formulation of the mutual information is predicated on the assumption that one is interested in comparing $P_{(X,Y)}$ to the fully factorized outer product $P_X \otimes P_Y$. In many problems, such as non-negative matrix factorization, one is interested in less extreme factorizations; specifically, one wishes to compare $P_{(X,Y)}$ to a low-rank matrix approximation in some unknown variable $w$; that is, to what degree one might have $P_{(X,Y)}(x, y) \approx \sum_w P'(x, w)\,P''(w, y)$. Alternately, one might be interested in knowing how much more information $P_{(X,Y)}$ carries over its factorization. In such a case, the excess information that the full distribution carries over the matrix factorization is given by the Kullback–Leibler divergence $D_{\mathrm{KL}}\big(P_{(X,Y)} \,\|\, \sum_w P'(x, w)\,P''(w, y)\big)$. The conventional definition of the mutual information is recovered in the extreme case that the process has only one value for $w$. Variations Several variations on mutual information have been proposed to suit various needs. Among these are normalized variants and generalizations to more than two variables. Metric Many applications require a metric, that is, a distance measure between pairs of points. The quantity $d(X, Y) = H(X, Y) - I(X; Y) = H(X \mid Y) + H(Y \mid X)$ satisfies the properties of a metric (triangle inequality, non-negativity, indiscernibility and symmetry), where equality $d(X, Y) = 0$ is understood to mean that $X$ can be completely determined from $Y$ and vice versa. This distance metric is also known as the variation of information. If $X, Y$ are discrete random variables then all the entropy terms are non-negative, so $0 \le d(X, Y) \le H(X, Y)$ and one can define a normalized distance $D(X, Y) = d(X, Y)/H(X, Y) \le 1$. Plugging in the definitions shows that $D(X, Y) = 1 - I(X; Y)/H(X, Y)$. This is known as the Rajski distance. In a set-theoretic interpretation of information (see the figure for Conditional entropy), this is effectively the Jaccard distance between $X$ and $Y$. Finally, $D'(X, Y) = 1 - I(X; Y)/\max\{H(X), H(Y)\}$ is also a metric. Conditional mutual information Sometimes it is useful to express the mutual information of two random variables conditioned on a third. For jointly discrete random variables this takes the form $I(X; Y \mid Z) = \sum_z P_Z(z) \sum_y \sum_x P_{(X,Y) \mid Z}(x, y \mid z) \log \frac{P_{(X,Y) \mid Z}(x, y \mid z)}{P_{X \mid Z}(x \mid z)\,P_{Y \mid Z}(y \mid z)}$, which can be simplified as $I(X; Y \mid Z) = \sum_{x,y,z} P_{(X,Y,Z)}(x, y, z) \log \frac{P_Z(z)\,P_{(X,Y,Z)}(x, y, z)}{P_{(X,Z)}(x, z)\,P_{(Y,Z)}(y, z)}$. For jointly continuous random variables the sums are replaced by the corresponding integrals. Conditioning on a third random variable may either increase or decrease the mutual information, but it is always true that $I(X; Y \mid Z) \ge 0$ for discrete, jointly distributed random variables $X$, $Y$, $Z$. This result has been used as a basic building block for proving other inequalities in information theory. Interaction information Several generalizations of mutual information to more than two random variables have been proposed, such as total correlation (or multi-information) and dual total correlation. The expression and study of multivariate higher-degree mutual information was achieved in two seemingly independent works: McGill (1954), who called these functions "interaction information", and Hu Kuo Ting (1962). Interaction information is defined for one variable as $I(X_1) = H(X_1)$, and for $n \ge 1$ variables recursively by $I(X_1; \ldots; X_{n+1}) = I(X_1; \ldots; X_n) - I(X_1; \ldots; X_n \mid X_{n+1})$. Some authors reverse the order of the terms on the right-hand side of the preceding equation, which changes the sign when the number of random variables is odd. (And in this case, the single-variable expression becomes the negative of the entropy.) Multivariate statistical independence The multivariate mutual information functions generalize the pairwise independence case, which states that $I(X_1; X_2) = 0$ if and only if $X_1$ and $X_2$ are independent, to arbitrarily many variables.
$n$ variables are mutually independent if and only if the multivariate mutual information functions vanish: $I(X_1; \ldots; X_k) = 0$ for all $2 \le k \le n$ (theorem 2). In this sense, the vanishing of these functions can be used as a refined statistical independence criterion. Applications For 3 variables, Brenner et al. applied multivariate mutual information to neural coding and called its negativity "synergy", and Watkinson et al. applied it to genetic expression. For arbitrary k variables, Tapia et al. applied multivariate mutual information to gene expression. It can be zero, positive, or negative. The positivity corresponds to relations generalizing the pairwise correlations, nullity corresponds to a refined notion of independence, and negativity detects high-dimensional "emergent" relations and clusterized datapoints. One high-dimensional generalization scheme which maximizes the mutual information between the joint distribution and other target variables is found to be useful in feature selection. Mutual information is also used in the area of signal processing as a measure of similarity between two signals. For example, the FMI metric is an image fusion performance measure that makes use of mutual information in order to measure the amount of information that the fused image contains about the source images. Matlab code for this metric is available. A Python package for computing all multivariate mutual informations, conditional mutual information, joint entropies, total correlations, and information distance in a dataset of n variables is available. Directed information Directed information, $I(X^n \to Y^n)$, measures the amount of information that flows from the process $X^n$ to $Y^n$, where $X^n$ denotes the vector $X_1, X_2, \ldots, X_n$ and $Y^n$ denotes $Y_1, Y_2, \ldots, Y_n$. The term directed information was coined by James Massey and is defined as $I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1})$. Note that if $n = 1$, the directed information becomes the ordinary mutual information. Directed information has many applications in problems where causality plays an important role, such as the capacity of a channel with feedback. Normalized variants Normalized variants of the mutual information are provided by the coefficients of constraint, uncertainty coefficient or proficiency: $C_{XY} = I(X; Y)/H(Y)$ and $C_{YX} = I(X; Y)/H(X)$. The two coefficients have a value ranging in [0, 1], but are not necessarily equal; this measure is not symmetric. If one desires a symmetric measure, one can consider the following redundancy measure: $R = \frac{I(X; Y)}{H(X) + H(Y)}$, which attains a minimum of zero when the variables are independent and a maximum value of $\frac{\min\{H(X), H(Y)\}}{H(X) + H(Y)}$ when one variable becomes completely redundant with the knowledge of the other. See also Redundancy (information theory). Another symmetrical measure is the symmetric uncertainty, given by $U(X, Y) = 2R = \frac{2\,I(X; Y)}{H(X) + H(Y)}$, which represents the harmonic mean of the two uncertainty coefficients. If we consider mutual information as a special case of the total correlation or dual total correlation, the normalized versions are, respectively, $\frac{I(X; Y)}{\min\{H(X), H(Y)\}}$ and $\frac{I(X; Y)}{H(X, Y)}$. The latter normalized version, also known as the Information Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty: $\mathrm{IQR}(X, Y) = \frac{I(X; Y)}{H(X, Y)}$. There is a normalization which derives from first thinking of mutual information as an analogue to covariance (thus Shannon entropy is analogous to variance). Then the normalized mutual information is calculated akin to the Pearson correlation coefficient: $\frac{I(X; Y)}{\sqrt{H(X)\,H(Y)}}$. Weighted variants In the traditional formulation of the mutual information, each event or object specified by $(x, y)$ is weighted by the corresponding probability $P_{(X,Y)}(x, y)$. This assumes that all objects or events are equivalent apart from their probability of occurrence.
However, in some applications it may be the case that certain objects or events are more significant than others, or that certain patterns of association are more semantically important than others. For example, the deterministic mapping $\{(1,1), (2,2), (3,3)\}$ may be viewed as stronger than the deterministic mapping $\{(1,3), (2,1), (3,2)\}$, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values, and is therefore not sensitive at all to the form of the relational mapping between the associated variables. If it is desired that the former relation—showing agreement on all variable values—be judged stronger than the latter relation, then it is possible to use the following weighted mutual information: $I_w(X; Y) = \sum_{x,y} w(x, y)\,P_{(X,Y)}(x, y) \log \frac{P_{(X,Y)}(x, y)}{P_X(x)\,P_Y(y)}$, which places a weight $w(x, y)$ on the probability of each variable value co-occurrence, $P_{(X,Y)}(x, y)$. This allows that certain probabilities may carry more or less significance than others, thereby allowing the quantification of relevant holistic or Prägnanz factors. In the above example, using larger relative weights for $w(1,1)$, $w(2,2)$, and $w(3,3)$ would have the effect of assessing greater informativeness for the relation $\{(1,1), (2,2), (3,3)\}$ than for the relation $\{(1,3), (2,1), (3,2)\}$, which may be desirable in some cases of pattern recognition, and the like. This weighted mutual information is a form of weighted KL-divergence, which is known to take negative values for some inputs, and there are examples where the weighted mutual information also takes negative values. Adjusted mutual information A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be? The adjusted mutual information or AMI subtracts the expectation value of the MI, so that the AMI is zero when two different distributions are random, and one when two distributions are identical. The AMI is defined in analogy to the adjusted Rand index of two different partitions of a set. Absolute mutual information Using the ideas of Kolmogorov complexity, one can consider the mutual information of two sequences independent of any probability distribution: $I_K(x; y) = K(x) - K(x \mid y)$. To establish that this quantity is symmetric up to a logarithmic factor, one requires the chain rule for Kolmogorov complexity. Approximations of this quantity via compression can be used to define a distance measure to perform a hierarchical clustering of sequences without having any domain knowledge of the sequences. Linear correlation Unlike correlation coefficients, such as the product moment correlation coefficient, mutual information contains information about all dependence—linear and nonlinear—and not just linear dependence as the correlation coefficient measures. However, in the narrow case that the joint distribution for $X$ and $Y$ is a bivariate normal distribution (implying in particular that both marginal distributions are normally distributed), there is an exact relationship between $I$ and the correlation coefficient $\rho$: $I = -\frac{1}{2} \log\big(1 - \rho^2\big)$. This can be derived as follows for a bivariate Gaussian: the marginal and joint entropies are $H(X_i) = \frac{1}{2} \log\big(2\pi e\,\sigma_i^2\big)$ and $H(X_1, X_2) = \frac{1}{2} \log\big[(2\pi e)^2\,\sigma_1^2 \sigma_2^2 (1 - \rho^2)\big]$. Therefore, $I(X_1; X_2) = H(X_1) + H(X_2) - H(X_1, X_2) = -\frac{1}{2} \log\big(1 - \rho^2\big)$. For discrete data When $X$ and $Y$ are limited to be in a discrete number of states, observation data is summarized in a contingency table, with row variable $X$ and column variable $Y$. Mutual information is one of the measures of association or correlation between the row and column variables. Other measures of association include Pearson's chi-squared test statistics, G-test statistics, etc.
In fact, with the same log base, mutual information will be equal to the G-test log-likelihood statistic divided by $2N$, where $N$ is the sample size. Applications In many applications, one wants to maximize mutual information (thus increasing dependencies), which is often equivalent to minimizing conditional entropy. Examples include: In search engine technology, mutual information between phrases and contexts is used as a feature for k-means clustering to discover semantic clusters (concepts). For example, the mutual information of a bigram might be calculated as $MI(x, y) = \log \frac{f(xy)/B}{\big(f(x)/U\big)\big(f(y)/U\big)}$, where $f(xy)$ is the number of times the bigram $xy$ appears in the corpus, $f(x)$ is the number of times the unigram $x$ appears in the corpus, $B$ is the total number of bigrams, and $U$ is the total number of unigrams. In telecommunications, the channel capacity is equal to the mutual information, maximized over all input distributions. Discriminative training procedures for hidden Markov models have been proposed based on the maximum mutual information (MMI) criterion. RNA secondary structure prediction from a multiple sequence alignment. Phylogenetic profiling prediction from the pairwise presence and absence of functionally linked genes. Mutual information has been used as a criterion for feature selection and feature transformations in machine learning. It can be used to characterize both the relevance and redundancy of variables, such as the minimum redundancy feature selection. Mutual information is used in determining the similarity of two different clusterings of a dataset. As such, it provides some advantages over the traditional Rand index. Mutual information of words is often used as a significance function for the computation of collocations in corpus linguistics. This has the added complexity that no word-instance is an instance of two different words; rather, one counts instances where 2 words occur adjacent or in close proximity; this slightly complicates the calculation, since the expected probability of one word occurring within some number of words of another goes up with that distance. Mutual information is used in medical imaging for image registration. Given a reference image (for example, a brain scan), and a second image which needs to be put into the same coordinate system as the reference image, this image is deformed until the mutual information between it and the reference image is maximized. Detection of phase synchronization in time series analysis. In the infomax method for neural networks and other machine learning, including the infomax-based independent component analysis algorithm. Average mutual information in the delay embedding theorem is used for determining the embedding delay parameter. Mutual information between genes in expression microarray data is used by the ARACNE algorithm for reconstruction of gene networks. In statistical mechanics, Loschmidt's paradox may be expressed in terms of mutual information. Loschmidt noted that it must be impossible to determine a physical law which lacks time reversal symmetry (e.g. the second law of thermodynamics) only from physical laws which have this symmetry. He pointed out that the H-theorem of Boltzmann made the assumption that the velocities of particles in a gas were permanently uncorrelated, which removed the time symmetry inherent in the H-theorem. It can be shown that if a system is described by a probability density in phase space, then Liouville's theorem implies that the joint information (negative of the joint entropy) of the distribution remains constant in time.
The joint information is equal to the mutual information plus the sum of all the marginal information (negative of the marginal entropies) for each particle coordinate. Boltzmann's assumption amounts to ignoring the mutual information in the calculation of entropy, which yields the thermodynamic entropy (divided by the Boltzmann constant). In stochastic processes coupled to changing environments, mutual information can be used to disentangle internal and effective environmental dependencies. This is particularly useful when a physical system undergoes changes in the parameters describing its dynamics, e.g., changes in temperature. The mutual information is used to learn the structure of Bayesian networks/dynamic Bayesian networks, which is thought to explain the causal relationship between random variables, as exemplified by the GlobalMIT toolkit: learning the globally optimal dynamic Bayesian network with the Mutual Information Test criterion. The mutual information is used to quantify information transmitted during the updating procedure in the Gibbs sampling algorithm. Popular cost function in decision tree learning. The mutual information is used in cosmology to test the influence of large-scale environments on galaxy properties in the Galaxy Zoo. The mutual information was used in Solar Physics to derive the solar differential rotation profile, a travel-time deviation map for sunspots, and a time–distance diagram from quiet-Sun measurements Used in Invariant Information Clustering to automatically train neural network classifiers and image segmenters given no labelled data. In stochastic dynamical systems with multiple timescales, mutual information has been shown to capture the functional couplings between different temporal scales. Importantly, it was shown that physical interactions may or may not give rise to mutual information, depending on the typical timescale of their dynamics. See also Data differencing Pointwise mutual information Quantum mutual information Specific-information Notes References English translation of original in Uspekhi Matematicheskikh Nauk 12 (1): 3-52. David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. (available free online) Athanasios Papoulis. Probability, Random Variables, and Stochastic Processes, second edition. New York: McGraw-Hill, 1984. (See Chapter 15.) Information theory Entropy and information
Mutual information
[ "Physics", "Mathematics", "Technology", "Engineering" ]
4,468
[ "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Entropy and information", "Computer science", "Entropy", "Information theory", "Dynamical systems" ]
427,499
https://en.wikipedia.org/wiki/Proteinogenic%20amino%20acid
Proteinogenic amino acids are amino acids that are incorporated biosynthetically into proteins during translation from RNA. The word "proteinogenic" means "protein creating". Throughout known life, there are 22 genetically encoded (proteinogenic) amino acids, 20 in the standard genetic code and an additional 2 (selenocysteine and pyrrolysine) that can be incorporated by special translation mechanisms. In contrast, non-proteinogenic amino acids are amino acids that are either not incorporated into proteins (like GABA, L-DOPA, or triiodothyronine), misincorporated in place of a genetically encoded amino acid, or not produced directly and in isolation by standard cellular machinery (like hydroxyproline). The latter often results from post-translational modification of proteins. Some non-proteinogenic amino acids are incorporated into nonribosomal peptides which are synthesized by non-ribosomal peptide synthetases. Both eukaryotes and prokaryotes can incorporate selenocysteine into their proteins via a nucleotide sequence known as a SECIS element, which directs the cell to translate a nearby UGA codon as selenocysteine (UGA is normally a stop codon). In some methanogenic prokaryotes, the UAG codon (normally a stop codon) can also be translated to pyrrolysine. In eukaryotes, there are only 21 proteinogenic amino acids, the 20 of the standard genetic code, plus selenocysteine. Humans can synthesize 12 of these from each other or from other molecules of intermediary metabolism. The other nine must be consumed (usually as their protein derivatives), and so they are called essential amino acids. The essential amino acids are histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine (i.e. H, I, L, K, M, F, T, W, V). The proteinogenic amino acids have been found to be related to the set of amino acids that can be recognized by ribozyme autoaminoacylation systems. Thus, non-proteinogenic amino acids would have been excluded by the contingent evolutionary success of nucleotide-based life forms. Other reasons have been offered to explain why certain specific non-proteinogenic amino acids are not generally incorporated into proteins; for example, ornithine and homoserine cyclize against the peptide backbone and fragment the protein with relatively short half-lives, while others are toxic because they can be mistakenly incorporated into proteins, such as the arginine analog canavanine. The evolutionary selection of certain proteinogenic amino acids from the primordial soup has been suggested to be because of their better incorporation into a polypeptide chain as opposed to non-proteinogenic amino acids. Structures The following illustrates the structures and abbreviations of the 21 amino acids that are directly encoded for protein synthesis by the genetic code of eukaryotes. The structures given below are standard chemical structures, not the typical zwitterion forms that exist in aqueous solutions. IUPAC/IUBMB now also recommends standard abbreviations for the following two amino acids: Chemical properties Following is a table listing the one-letter symbols, the three-letter symbols, and the chemical properties of the side chains of the standard amino acids. The masses listed are based on weighted averages of the elemental isotopes at their natural abundances. Forming a peptide bond results in elimination of a molecule of water. Therefore, the protein's mass is equal to the mass of amino acids the protein is composed of minus 18.01524 Da per peptide bond. 
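As a rough illustration of this mass bookkeeping, the following Python sketch computes a peptide's neutral monoisotopic mass as the sum of residue masses plus one water (equivalent to summing free amino acid masses and subtracting one water per peptide bond), and the m/z of a protonated ion. The residue values are standard monoisotopic masses; the peptide sequence itself is an arbitrary example, not data from the article:

```python
# Monoisotopic residue masses (Da) for a few amino acids; a fuller table
# would cover all of the proteinogenic residues.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203,
    "P": 97.05276, "V": 99.06841, "L": 113.08406,
}
WATER = 18.010565    # monoisotopic mass of H2O, Da
PROTON = 1.007276    # monoisotopic mass of a proton, Da

def peptide_mass(sequence):
    """Neutral monoisotopic mass: sum of residue masses plus one water."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

def mz(sequence, charge):
    """m/z of the [M + zH]^z+ ion."""
    return (peptide_mass(sequence) + charge * PROTON) / charge

seq = "GASPVL"                        # arbitrary example peptide
print(round(peptide_mass(seq), 4))    # ~542.3064 Da
print(round(mz(seq, 2), 4))           # doubly protonated ion, ~272.16
```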
General chemical properties Side-chain properties §: Values for Asp, Cys, Glu, His, Lys & Tyr were determined using the amino acid residue placed centrally in an alanine pentapeptide. The value for Arg is from Pace et al. (2009). The value for Sec is from Byun & Kang (2011). N.D.: The pKa value of pyrrolysine has not been reported. Note: The pKa value of an amino-acid residue in a small peptide is typically slightly different from its value when the residue is inside a protein. Protein pKa calculations are sometimes used to calculate the change in the pKa value of an amino-acid residue in this situation. Gene expression and biochemistry * UAG is normally the amber stop codon, but in organisms containing the biological machinery encoded by the pylTSBCD cluster of genes the amino acid pyrrolysine will be incorporated. ** UGA is normally the opal (or umber) stop codon, but encodes selenocysteine if a SECIS element is present. † The stop codon is not an amino acid, but is included for completeness. †† UAG and UGA do not always act as stop codons (see above). ‡ An essential amino acid cannot be synthesized in humans and must, therefore, be supplied in the diet. Conditionally essential amino acids are not normally required in the diet, but must be supplied exogenously to specific populations that do not synthesize them in adequate amounts. & Occurrence of amino acids is based on 135 Archaea, 3775 Bacteria, 614 Eukaryota proteomes and the human proteome (21,006 proteins) respectively. Mass spectrometry In mass spectrometry of peptides and proteins, knowledge of the masses of the residues is useful. The mass of the peptide or protein is the sum of the residue masses plus the mass of water (monoisotopic mass = 18.01056 Da; average mass = 18.0153 Da). The residue masses are calculated from the tabulated chemical formulas and atomic weights. In mass spectrometry, ions may also include one or more protons (monoisotopic mass = 1.00728 Da; average mass* = 1.0074 Da). * Strictly, a proton cannot have an average mass: averaging over "isotopes" would confusingly treat the deuteron as an isotope of the proton, whereas it is better regarded as a different species (see Hydron (chemistry)). § Monoisotopic mass Stoichiometry and metabolic cost in cell The table below lists the abundance of amino acids in E. coli cells and the metabolic cost (ATP) for synthesis of the amino acids. Negative numbers indicate the metabolic processes are energetically favorable and do not cost net ATP of the cell. The abundance of amino acids includes amino acids in free form and in polymerized form (proteins). Remarks Catabolism Amino acids can be classified according to the properties of their main products: Glucogenic, with the products having the ability to form glucose by gluconeogenesis Ketogenic, with the products not having the ability to form glucose: these products may still be used for ketogenesis or lipid synthesis. Amino acids catabolized into both glucogenic and ketogenic products See also Glucogenic amino acid Ketogenic amino acid References General references External links The origin of the single-letter code for the amino acids Alpha-Amino acids Nitrogen cycle Nutrition
Proteinogenic amino acid
[ "Chemistry" ]
1,520
[ "Nitrogen cycle", "Metabolism" ]
427,826
https://en.wikipedia.org/wiki/List%20of%20transforms
This is a list of transforms in mathematics. Integral transforms Abel transform Aboodh transform Bateman transform Fourier transform Short-time Fourier transform Gabor transform Hankel transform Hartley transform Hermite transform Hilbert transform Hilbert–Schmidt integral operator Jacobi transform Laguerre transform Laplace transform Inverse Laplace transform Two-sided Laplace transform Inverse two-sided Laplace transform Laplace–Carson transform Laplace–Stieltjes transform Legendre transform Linear canonical transform Mellin transform Inverse Mellin transform Poisson–Mellin–Newton cycle N-transform Radon transform Stieltjes transformation Sumudu transform Wavelet transform (integral) Weierstrass transform Hussein Jassim Transform Discrete transforms Binomial transform Discrete Fourier transform, DFT Fast Fourier transform, a popular implementation of the DFT Discrete cosine transform Modified discrete cosine transform Discrete Hartley transform Discrete sine transform Discrete wavelet transform Hadamard transform (or, Walsh–Hadamard transform) Fast wavelet transform Hankel transform, the determinant of the Hankel matrix Discrete Chebyshev transform Equivalent, up to a diagonal scaling, to a discrete cosine transform Finite Legendre transform Spherical Harmonic transform Irrational base discrete weighted transform Number-theoretic transform Stirling transform Discrete-time transforms These transforms have a continuous frequency domain: Discrete-time Fourier transform Z-transform Data-dependent transforms Karhunen–Loève transform Other transforms Affine transformation (Euclidean geometry) Bäcklund transform Bilinear transform Box–Muller transform Burrows–Wheeler transform (data compression) Chirplet transform Distance transform Fractal transform Gelfand transform Hadamard transform Hough transform (digital image processing) Inverse scattering transform Legendre transformation Möbius transformation Perspective transform (computer graphics) Sequence transform Watershed transform (digital image processing) Wavelet transform (orthonormal) Y-Δ transform (electrical circuits) See also Linear transform List of Fourier-related transforms Sequence transformation Transform coding External links Tables of Integral Transforms at EqWorld: The World of Mathematical Equations. Mathematics-related lists
List of transforms
[ "Mathematics" ]
421
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Transforms" ]
427,862
https://en.wikipedia.org/wiki/Conchoidal%20fracture
A conchoidal fracture is a break or fracture of a brittle material that does not follow any natural planes of separation. Mindat.org defines conchoidal fracture as follows: "a fracture with smooth, curved surfaces, typically slightly concave, showing concentric undulations resembling the lines of growth of a shell". Materials that break in this way include quartz, chert, flint, quartzite, jasper, and other fine-grained or amorphous materials with a composition of pure silica, such as obsidian and window glass, as well as a few metals, such as solid gallium. Crystalline materials such as quartz also exhibit conchoidal fractures when they lack a cleavage plane and do not break along a plane parallel to their crystalline faces. So a conchoidal, or uneven, fracture is not a specific indication of the amorphous character of a mineral or a material. Amorphous, cryptocrystalline, and crystalline materials can all present conchoidal fracture when they lack a preferential cleavage plane. Conchoidal fractures can occur in various materials if they are properly percussed (struck). Cryptocrystalline silica, such as chert or flint, with this material property was widely sought after, traded, and fashioned into sharp tools in the Stone Age. Conchoidal fractures often result in a curved breakage surface that resembles the rippling, gradual curves of a mussel shell; the word "conchoid" is derived from the Greek word for this animal (konchoeidēs, from konchē). A swelling appears at the point of impact called the bulb of percussion. Shock waves emanating outwards from this point leave their mark on the stone as ripples. Other conchoidal features include small fissures emanating from the bulb of percussion. Conchoidal fractures are defined in contrast to the faceted fractures often seen in single crystals such as semiconductor wafers and gemstones, and to the high-energy ductile fracture surfaces desirable in most structural applications. Subsets Several subdefinitions exist, for instance on the Webmineral website: Brittle—conchoidal: very brittle fracture producing small, conchoidal fragments Brittle—subconchoidal: brittle fracture with subconchoidal fragments Conchoidal—irregular: irregular fracture producing small, conchoidal fragments Conchoidal—uneven: uneven fracture producing small, conchoidal fragments Subconchoidal: fractures developed in brittle materials characterized by semi-curving surfaces Lithics In lithic stone tools, conchoidal fractures form the basis of flint knapping, since the shape of the broken surface is controlled only by the stresses applied, and not by some preferred orientation of the material. This property also makes such fractures useful in engineering, since they provide a permanent record of the stress state at the time of failure. As conchoidal fractures can be produced only by mechanical impact, rather than frost cracking for example, they can be a useful method of differentiating prehistoric stone tools from natural stones. See also Fracture (mineralogy) References External links Lithics Materials degradation Mineralogy concepts Petrology Stone Age
Conchoidal fracture
[ "Materials_science", "Engineering" ]
641
[ "Materials degradation", "Materials science" ]
427,992
https://en.wikipedia.org/wiki/Water%20hammer
Hydraulic shock (colloquial: water hammer; fluid hammer) is a pressure surge or wave caused when a fluid in motion is forced to stop or change direction suddenly: a momentum change. It is usually observed in a liquid but gases can also be affected. This phenomenon commonly occurs when a valve closes suddenly at an end of a pipeline system and a pressure wave propagates in the pipe. This pressure wave can cause major problems, from noise and vibration to pipe rupture or collapse. It is possible to reduce the effects of the water hammer pulses with accumulators, expansion tanks, surge tanks, blowoff valves, and other features. The effects can be avoided by ensuring that no valves will close too quickly with significant flow, but there are many situations that can cause the effect. Rough calculations can be made using the Zhukovsky (Joukowsky) equation, or more accurate ones using the method of characteristics. History In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supply. Water hammer was exploited before there was even a word for it. In 1772, Englishman John Whitehurst built a hydraulic ram for a home in Cheshire, England. In 1796, French inventor Joseph Michel Montgolfier (1740–1810) built a hydraulic ram for his paper mill in Voiron. In French and Italian, the terms for "water hammer" come from the hydraulic ram: coup de bélier (French) and colpo d'ariete (Italian) both mean "blow of the ram". As the 19th century witnessed the installation of municipal water supplies, water hammer became a concern to civil engineers. Water hammer also interested physiologists who were studying the circulatory system. Although it was prefigured in work by Thomas Young, the theory of water hammer is generally considered to have begun in 1883 with the work of German physiologist Johannes von Kries (1853–1928), who was investigating the pulse in blood vessels. However, his findings went unnoticed by civil engineers. Kries's findings were subsequently derived independently in 1898 by the Russian fluid dynamicist Nikolay Yegorovich Zhukovsky (1847–1921), in 1898 by the American civil engineer Joseph Palmer Frizell (1832–1910), and in 1902 by the Italian engineer Lorenzo Allievi (1856–1941). Cause and effect Water flowing through a pipe has momentum. If the moving water is suddenly stopped, such as by closing a valve downstream of the flowing water, the pressure can rise suddenly with a resulting shock wave. In domestic plumbing this shock wave is experienced as a loud banging resembling a hammering noise. Water hammer can cause pipelines to break if the pressure is sufficiently high. Air traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water. For example, the water traveling along a tunnel or pipeline to a turbine in a hydroelectric generating station may be slowed suddenly if a valve in the path is closed too quickly. A long, large-diameter tunnel full of water travelling at a few metres per second carries a very large amount of kinetic energy, which must go somewhere when the flow is stopped. This energy can be dissipated by a vertical surge shaft, open at the top, into which the water flows. As the water rises up the shaft its kinetic energy is converted into potential energy, avoiding sudden high pressure. At some hydroelectric power stations, such as the Saxon Falls Hydro Power Plant in Michigan, what looks like a water tower is in fact a surge drum.
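As a rough order-of-magnitude sketch (all numbers below are illustrative assumptions, not values from the text), the kinetic energy of a moving water column is KE = ½ρALv²:

```python
import math

# Illustrative values (assumptions): a hydroelectric headrace tunnel.
rho = 1000.0      # water density, kg/m^3
L   = 10_000.0    # tunnel length, m
D   = 7.0         # tunnel diameter, m
v   = 4.0         # flow velocity, m/s

A  = math.pi * D**2 / 4          # cross-sectional area, m^2
KE = 0.5 * rho * A * L * v**2    # kinetic energy of the moving column, J
print(f"KE = {KE / 1e6:.0f} MJ") # on the order of thousands of MJ here
```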
In residential plumbing systems, water hammer may occur when a dishwasher, washing machine, or toilet suddenly shuts off water flow. The result may be heard as a loud bang, repetitive banging (as the shock wave travels back and forth in the plumbing system), or as some shuddering. Other potential causes of water hammer: A pump stopping A check valve which closes quickly (i.e., "check valve slam") due to the flow in a pipe reversing direction on loss of motive power, such as a pump stopping. "Non-slam" check valves can be used to reduce the pressure surge. Filling an empty pipe that has a restriction such as a partially open valve or an orifice that allows air to pass easily as the pipe rapidly fills, but with the pressure increasing once the filling water encounters the restriction. Related phenomena Steam hammer can occur in steam systems when some of the steam condenses into water in a horizontal section of the piping. The steam forcing the liquid water along the pipe forms a "slug" which impacts a valve or pipe fitting, creating a loud hammering noise and high pressure. Vacuum caused by condensation from thermal shock can also cause a steam hammer. Steam hammer, or steam condensation induced water hammer (CIWH), was exhaustively investigated both experimentally and theoretically more than a decade ago because it can have serious negative effects in nuclear power plants. Overpressure peaks of around 130 bar lasting about 2 milliseconds can be explained theoretically with a special six-equation multiphase thermohydraulic model, similar to RELAP. Steam hammer can be minimized by using sloped pipes and installing steam traps. On turbocharged internal combustion engines, a "gas hammer" can take place when the throttle is closed while the turbocharger is forcing air into the engine. There is no shockwave but the pressure can still rapidly increase to damaging levels or cause compressor surge. A pressure relief valve placed before the throttle prevents the air from surging against the throttle body by diverting it elsewhere, thus protecting the turbocharger from pressure damage. This valve can either recirculate the air into the turbocharger's intake (recirculation valve), or it can blow the air into the atmosphere and produce the distinctive hiss-flutter of an aftermarket turbocharger (blowoff valve). Mitigation measures Water hammer has caused accidents and fatalities, but usually damage is limited to breakage of pipes or appendages. An engineer should always assess the risk of a pipeline burst. Pipelines transporting hazardous liquids or gases warrant special care in design, construction, and operation. Hydroelectric power plants especially must be carefully designed and maintained because the water hammer can cause water pipes to fail catastrophically. The following characteristics may reduce or eliminate water hammer: Reduce the pressure of the water supply to the building by fitting a regulator. Lower fluid velocities. To keep water hammer low, pipe-sizing charts for some applications recommend keeping flow velocity at or below a stated maximum. Fit slowly closing valves. Toilet fill valves are available in a quiet fill type that closes quietly. Non-slam check valves do not rely on fluid flow to close and will do so before the water flow reaches significant velocity. High pipeline pressure rating (does not reduce the effect but protects against damage). Good pipeline control (start-up and shut-down procedures).
Water towers (used in many drinking water systems) or surge tanks help maintain steady flow rates and trap large pressure fluctuations. Air vessels such as expansion tanks and some types of hydraulic accumulators work in much the same way as water towers, but are pressurized. They typically have an air cushion above the fluid level in the vessel, which may be regulated or separated by a bladder. Sizes of air vessels may be up to hundreds of cubic meters on large pipelines. They come in many shapes, sizes and configurations. Such vessels often are called accumulators or expansion tanks. A hydropneumatic device similar in principle to a shock absorber, called a water hammer arrestor, can be installed between the water pipe and the machine to absorb the shock and stop the banging. Air valves often remediate low pressures at high points in the pipeline. Though effective, sometimes large numbers of air valves need to be installed. These valves also allow air into the system, which is often unwanted. Blowoff valves may be used as an alternative. Shorter branch pipe lengths. Shorter lengths of straight pipe, i.e. add elbows, expansion loops. Water hammer is related to the speed of sound in the fluid, and elbows reduce the influences of pressure waves. Arranging the larger piping in loops that supply shorter smaller run-out pipe branches. With looped piping, lower velocity flows from both sides of a loop can serve a branch. Flywheel on a pump. Pumping station bypass. Magnitude of the pulse One of the first to successfully investigate the water hammer problem was the Italian engineer Lorenzo Allievi. Water hammer can be analyzed by two different approaches—rigid column theory, which ignores compressibility of the fluid and elasticity of the walls of the pipe, or by a full analysis that includes elasticity. When the time it takes a valve to close is long compared to the propagation time for a pressure wave to travel the length of the pipe, then rigid column theory is appropriate; otherwise considering elasticity may be necessary. Below are two approximations for the peak pressure, one that considers elasticity but assumes the valve closes instantaneously, and a second that neglects elasticity but includes a finite time for the valve to close. Instant valve closure; compressible fluid The pressure profile of the water hammer pulse can be calculated from the Joukowsky equation. For a valve closing instantaneously, the maximal magnitude of the water hammer pulse is $\Delta P = \rho\, a_0\, \Delta v$, where $\Delta P$ is the magnitude of the pressure wave (Pa), $\rho$ is the density of the fluid (kg/m3), $a_0$ is the speed of sound in the fluid (m/s), and $\Delta v$ is the change in the fluid's velocity (m/s). The pulse comes about due to Newton's laws of motion and the continuity equation applied to the deceleration of a fluid element. Equation for wave speed As the speed of sound in a fluid is $a = \sqrt{B/\rho}$, the peak pressure depends on the fluid compressibility if the valve is closed abruptly. Accounting for the elasticity of the pipe walls, the wave speed is $a = \sqrt{B/\rho}$ with $\frac{1}{B} = \frac{1}{K} + \frac{c\,D}{E\,t}$, where a = wave speed, B = equivalent bulk modulus of elasticity of the system fluid–pipe, ρ = density of the fluid, K = bulk modulus of elasticity of the fluid, E = elastic modulus of the pipe, D = internal pipe diameter, t = pipe wall thickness, c = dimensionless parameter accounting for the pipe's constraint condition and its effect on wave speed.
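A minimal sketch of these formulas, using illustrative values for water in a steel pipe (all numbers are assumptions, not from the text): it computes the effective wave speed, the Joukowsky surge for an instantaneous stoppage, and the wave transit time 2L/a against which valve-closure time is compared in the next section:

```python
import math

# Illustrative values (assumptions): water in a steel pipeline.
rho = 1000.0     # fluid density, kg/m^3
K   = 2.1e9      # bulk modulus of water, Pa
E   = 200e9      # Young's modulus of steel, Pa
D   = 0.3        # internal pipe diameter, m
t   = 0.01       # wall thickness, m
c   = 1.0        # pipe-constraint parameter (dimensionless)
L   = 500.0      # pipe length, m
dv  = 2.0        # velocity change when the valve closes, m/s

# Equivalent bulk modulus of the fluid-pipe system: 1/B = 1/K + c*D/(E*t)
B = 1.0 / (1.0 / K + c * D / (E * t))
a = math.sqrt(B / rho)             # pressure-wave speed, m/s

dP = rho * a * dv                  # Joukowsky surge for instant closure, Pa
print(f"wave speed a  = {a:.0f} m/s")
print(f"surge      dP = {dP / 1e5:.1f} bar")
print(f"closure counts as fast if t < 2L/a = {2 * L / a:.2f} s")
```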
Slow valve closure; incompressible fluid When the valve is closed slowly compared to the transit time for a pressure wave to travel the length of the pipe, the elasticity can be neglected, and the phenomenon can be described in terms of inertance or rigid column theory. Assuming constant deceleration of the water column (dv/dt = v/t), Newton's second law $F = ma$ gives $P\,A = \rho\,A\,L\,\frac{v}{t}$, that is, $P = \frac{\rho\,L\,v}{t}$, where: F = force [N], m = mass of the fluid column [kg], a = acceleration [m/s2], P = pressure [Pa], A = pipe cross-section [m2], ρ = fluid density [kg/m3], L = pipe length [m], v = flow velocity [m/s], t = valve closure time [s]. For water and with imperial units, the above formula becomes $P = 0.0135\,\frac{V\,L}{t}$. For practical application, a safety factor of about 5 is recommended: $P = 0.07\,\frac{V\,L}{t} + P_1$, where $P_1$ is the inlet pressure in psi, V is the flow velocity in ft/s, t is the valve closing time in seconds, and L is the upstream pipe length in feet. Hence, we can say that the magnitude of the water hammer largely depends upon the time of closure, the elastic components of the pipe, and the fluid properties. Expression for the excess pressure due to water hammer When a valve with a volumetric flow rate Q is closed, an excess pressure ΔP is created upstream of the valve, whose value is given by the Joukowsky equation $\Delta P = Z\,Q$. In this expression: ΔP is the overpressurization in Pa; Q is the volumetric flow in m3/s; Z is the hydraulic impedance, expressed in kg/m4/s. The hydraulic impedance Z of the pipeline determines the magnitude of the water hammer pulse. It is itself defined by $Z = \frac{\sqrt{\rho\,B}}{A}$, where ρ is the density of the liquid, expressed in kg/m3; A the cross sectional area of the pipe, m2; B the equivalent modulus of compressibility of the liquid in the pipe, expressed in Pa. The latter follows from a series of hydraulic concepts: compressibility of the liquid, defined by its adiabatic compressibility modulus Bl, resulting from the equation of state of the liquid generally available from thermodynamic tables; the elasticity of the walls of the pipe, which defines an equivalent bulk modulus of compressibility for the solid Bs. In the case of a pipe of circular cross-section whose thickness t is small compared to the diameter D, the equivalent modulus of compressibility is given by the formula $B_s = \frac{t}{D}\,E$, in which E is the Young's modulus (in Pa) of the material of the pipe; possibly the compressibility Bg of gas dissolved in the liquid, defined by $B_g = \frac{\gamma\,P}{\alpha}$, γ being the specific heat ratio of the gas, α the rate of ventilation (the volume fraction of undissolved gas), and P the pressure (in Pa). Thus, the equivalent elasticity is the sum of the original compressibilities: $\frac{1}{B} = \frac{1}{B_l} + \frac{1}{B_s} + \frac{1}{B_g}$. As a result, we see that we can reduce the water hammer by: increasing the pipe diameter at constant flow, which reduces the flow velocity and hence the deceleration of the liquid column; choosing a pipe material whose Young's modulus is low with respect to the fluid bulk modulus, which makes the system more compliant; introducing a device that increases the flexibility of the entire hydraulic system, such as a hydraulic accumulator; where possible, increasing the fraction of undissolved gases in the liquid. Dynamic equations The water hammer effect can be simulated by solving the following partial differential equations: $\frac{\partial V}{\partial t} + \frac{1}{\rho}\frac{\partial P}{\partial x} + \frac{f}{2D}\,V\,|V| = 0$ and $\frac{\partial P}{\partial t} + \rho\,a^2\,\frac{\partial V}{\partial x} = 0$ (with $a^2 = B/\rho$), where V is the fluid velocity inside the pipe, $\rho$ is the fluid density, B is the equivalent bulk modulus, f is the Darcy–Weisbach friction factor, and D is the pipe diameter. Column separation Column separation is a phenomenon that can occur during a water-hammer event.
If the pressure in a pipeline drops below the vapor pressure of the liquid, cavitation will occur (some of the liquid vaporizes, forming a bubble in the pipeline, keeping the pressure close to the vapor pressure). This is most likely to occur at specific locations such as closed ends, high points or knees (changes in pipe slope). When subcooled liquid flows into the space previously occupied by vapor, the area of contact between the vapor and the liquid increases. This causes the vapor to condense into the liquid, reducing the pressure in the vapor space. The liquid on either side of the vapor space is then accelerated into this space by the pressure difference. The collision of the two columns of liquid (or of one liquid column if at a closed end) causes a large and nearly instantaneous rise in pressure. This pressure rise can damage hydraulic machinery, individual pipes and supporting structures. Many repetitions of cavity formation and collapse may occur in a single water-hammer event. Simulation software Most water hammer software packages use the method of characteristics to solve the differential equations involved. This method works well if the wave speed does not vary in time due to either air or gas entrainment in a pipeline. The wave method (WM) is also used in various software packages. WM lets operators analyze large networks efficiently. Many commercial and non-commercial packages are available. Software packages vary in complexity, dependent on the processes modeled. The more sophisticated packages may have any of the following features: Multiphase flow capabilities. An algorithm for cavitation growth and collapse. Unsteady friction: the pressure waves dampen as turbulence is generated and due to variations in the flow velocity distribution. Varying bulk modulus for higher pressures (water becomes less compressible). Fluid structure interaction: the pipeline reacts to the varying pressures and causes pressure waves itself. Applications The water hammer principle can be used to create a simple water pump called a hydraulic ram. Leaks can sometimes be detected using water hammer. Enclosed air pockets can be detected in pipelines. The water hammer from a liquid jet created by a collapsing microcavity is studied for potential applications in noninvasive transdermal drug delivery. See also Blood hammer Cavitation Fluid dynamics Hydraulophone – musical instruments employing water and other fluids Impact force Recoil (fluid behavior) Transient (civil engineering) Watson's water hammer pulse References External links What Is Water Hammer/Steam Hammer? "Water hammer"—YouTube (animation) "Water Hammer Theory Explained"—YouTube; with examples Hydraulics Irrigation Plumbing Physical phenomena
Water hammer
[ "Physics", "Chemistry", "Engineering" ]
3,405
[ "Physical phenomena", "Plumbing", "Physical systems", "Construction", "Hydraulics", "Fluid dynamics" ]
428,085
https://en.wikipedia.org/wiki/Electrospray%20ionization
Electrospray ionization (ESI) is a technique used in mass spectrometry to produce ions using an electrospray in which a high voltage is applied to a liquid to create an aerosol. It is especially useful in producing ions from macromolecules because it overcomes the propensity of these molecules to fragment when ionized. ESI is different from other ionization processes (e.g. matrix-assisted laser desorption/ionization, MALDI) since it may produce multiply charged ions, effectively extending the mass range of the analyser to accommodate the kDa–MDa orders of magnitude observed in proteins and their associated polypeptide fragments. Mass spectrometry using ESI is called electrospray ionization mass spectrometry (ESI-MS) or, less commonly, electrospray mass spectrometry (ES-MS). ESI is a so-called 'soft ionization' technique, since there is very little fragmentation. This can be advantageous in the sense that the molecular ion (or more accurately a pseudo-molecular ion) is almost always observed; however, very little structural information can be gained from the simple mass spectrum obtained. This disadvantage can be overcome by coupling ESI with tandem mass spectrometry (ESI-MS/MS). Another important advantage of ESI is that solution-phase information can be retained into the gas phase. The electrospray ionization technique was first reported by Masamichi Yamashita and John Fenn in 1984, and independently by Lidia Gall and co-workers in the Soviet Union, also in 1984. Gall's work was not recognised or translated in the western scientific literature until a translation was published in 2008. The development of electrospray ionization for the analysis of biological macromolecules was rewarded with the attribution of the Nobel Prize in Chemistry to John Bennett Fenn and Koichi Tanaka in 2002. One of the original instruments used by Fenn is on display at the Science History Institute in Philadelphia, Pennsylvania. History In 1882, Lord Rayleigh theoretically estimated the maximum amount of charge a liquid droplet could carry before throwing out fine jets of liquid. This is now known as the Rayleigh limit. In 1914, John Zeleny published work on the behaviour of fluid droplets at the end of glass capillaries and presented evidence for different electrospray modes. Wilson and Taylor and Nolan investigated electrospray in the 1920s and Macky in 1931. The electrospray cone (now known as the Taylor cone) was described by Sir Geoffrey Ingram Taylor. The first use of electrospray ionization with mass spectrometry was reported by Malcolm Dole in 1968. John Bennett Fenn was awarded the 2002 Nobel Prize in Chemistry for the development of electrospray ionization mass spectrometry in the late 1980s. Ionization mechanism The liquid containing the analytes of interest (typically at concentrations of 10−6 to 10−4 M) is dispersed by electrospray into a fine aerosol. Because the ion formation involves extensive solvent evaporation (also termed desolvation), the typical solvents for electrospray ionization are prepared by mixing water with volatile organic compounds (e.g. methanol, acetonitrile). To decrease the initial droplet size, compounds that increase the conductivity (e.g. acetic acid) are customarily added to the solution. These species also act to provide a source of protons to facilitate the ionization process. Large-flow electrosprays can benefit from nebulization of a heated inert gas such as nitrogen or carbon dioxide in addition to the high temperature of the ESI source.
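The Rayleigh limit introduced in the History section can be evaluated with a few lines of Python. This is an illustrative sketch only: the droplet radius and the surface tension of water are assumed example values, not figures from the text. The formula used, q = 8π√(ε0 γ r³), is the standard form of Rayleigh's 1882 criterion for the maximum charge a droplet of radius r and surface tension γ can hold.

import math

EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]
E_CHARGE = 1.602176634e-19   # elementary charge [C]

def rayleigh_limit(radius_m, surface_tension):
    """Maximum charge [C] a spherical droplet can hold before Coulomb fission.

    Rayleigh (1882): q = 8 * pi * sqrt(eps0 * gamma * r**3).
    """
    return 8.0 * math.pi * math.sqrt(EPS0 * surface_tension * radius_m ** 3)

# Assumed example: a 1 micrometre water droplet (gamma ~ 0.072 N/m at room temp)
q = rayleigh_limit(1.0e-6, 0.072)
print(f"Rayleigh limit: {q:.2e} C  (~{q / E_CHARGE:,.0f} elementary charges)")
# During each Coulomb fission the droplet sheds 10-18% of this charge but only
# 1.0-2.3% of its mass (figures from the mechanism described below).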
The aerosol is sampled into the first vacuum stage of a mass spectrometer through a capillary held at a potential difference of approximately 3000 V, which can be heated to aid further solvent evaporation from the charged droplets. The solvent evaporates from a charged droplet until it becomes unstable upon reaching its Rayleigh limit. At this point, the droplet deforms as the electrostatic repulsion of like charges, in an ever-decreasing droplet size, becomes more powerful than the surface tension holding the droplet together. At this point the droplet undergoes Coulomb fission, whereby the original droplet 'explodes', creating many smaller, more stable droplets. The new droplets undergo desolvation and subsequently further Coulomb fissions. During the fission, the droplet loses a small percentage of its mass (1.0–2.3%) along with a relatively large percentage of its charge (10–18%). There are two major theories that explain the final production of gas-phase ions: the ion evaporation model (IEM) and the charge residue model (CRM). The IEM suggests that as the droplet reaches a certain radius the field strength at the surface of the droplet becomes large enough to assist the field desorption of solvated ions. The CRM suggests that electrospray droplets undergo evaporation and fission cycles, eventually leading to progeny droplets that contain on average one analyte ion or fewer. The gas-phase ions form after the remaining solvent molecules evaporate, leaving the analyte with the charges that the droplet carried. A large body of evidence shows either directly or indirectly that small ions (from small molecules) are liberated into the gas phase through the ion evaporation mechanism, while larger ions (from folded proteins, for instance) form by the charged residue mechanism. A third model invoking combined charged residue–field emission has been proposed. Another model, called the chain ejection model (CEM), has been proposed for disordered polymers (unfolded proteins). The ions observed by mass spectrometry may be quasimolecular ions created by the addition of a hydrogen cation and denoted [M + H]+, or of another cation such as the sodium ion, [M + Na]+, or by the removal of a hydrogen nucleus, [M − H]−. Multiply charged ions such as [M + nH]n+ are often observed. For large macromolecules, there can be many charge states, resulting in a characteristic charge state envelope. All these are even-electron ion species: electrons (alone) are not added or removed, unlike in some other ionization sources. The analytes are sometimes involved in electrochemical processes, leading to shifts of the corresponding peaks in the mass spectrum. This effect is demonstrated in the direct ionization of noble metals such as copper, silver and gold using electrospray. The efficiency of generating gas-phase ions for small molecules in ESI varies depending on the compound structure, the solvent used and instrumental parameters. The differences in ionization efficiency can exceed a factor of one million.
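The charge state envelope just described is what makes ESI useful for mass determination: any two adjacent peaks fix both the charge and the neutral mass. The Python sketch below illustrates that arithmetic with invented peak values (they are not from the text). For neighbouring [M + nH]n+ peaks at m/z values m1 > m2 whose charges differ by one proton, the charge of the m1 peak is n = (m2 − mH)/(m1 − m2) and the neutral mass is M = n(m1 − mH).

M_PROTON = 1.00728  # proton mass in Da

def charge_and_mass(mz_high, mz_low):
    """Deconvolute two adjacent peaks of an ESI charge-state envelope.

    mz_high and mz_low are neighbouring [M + nH]n+ peaks whose charges
    differ by exactly one (mz_high carries the lower charge n).
    Returns (n, M): the charge of the mz_high peak and the neutral mass in Da.
    """
    n = round((mz_low - M_PROTON) / (mz_high - mz_low))  # charge is an integer
    M = n * (mz_high - M_PROTON)
    return n, M

# Invented example: two adjacent peaks from a protein spectrum
n, M = charge_and_mass(1131.0, 1060.4)
print(f"charge = {n}+, neutral mass ~ {M:,.0f} Da")   # 15+, ~16,950 Da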
Variants The electrosprays operated at low flow rates generate much smaller initial droplets, which ensure improved ionization efficiency. In 1993, Gale and Richard D. Smith reported that significant sensitivity increases could be achieved using lower flow rates, down to 200 nL/min. In 1994, two research groups coined the name micro-electrospray (microspray) for electrosprays working at low flow rates. Emmett and Caprioli demonstrated improved performance for HPLC-MS analyses when the electrospray was operated at 300–800 nL/min. Wilm and Mann demonstrated that a capillary flow of ~25 nL/min can sustain an electrospray at the tip of emitters fabricated by pulling glass capillaries to a few micrometers. The latter was renamed nano-electrospray (nanospray) in 1996. Currently the name nanospray is also in use for electrosprays fed by pumps at low flow rates, not only for self-fed electrosprays. Although there may not be a well-defined flow rate range for electrospray, microspray, and nano-electrospray, one study examined "changes in analyte partition during droplet fission prior to ion release", comparing results obtained by three other groups and then measuring the signal intensity ratio at different flow rates. Cold spray ionization is a form of electrospray in which the solution containing the sample is forced through a small cold capillary (−80 to 10 °C) into an electric field to create a fine mist of cold charged droplets. Applications of this method include the analysis of fragile molecules and guest–host interactions that cannot be studied using regular electrospray ionization. Electrospray ionization has also been achieved at pressures as low as 25 torr and termed subambient pressure ionization with nanoelectrospray (SPIN), based upon a two-stage ion funnel interface developed by Richard D. Smith and coworkers. The SPIN implementation provided increased sensitivity due to the use of ion funnels that helped confine and transfer ions to the lower pressure region of the mass spectrometer. A nanoelectrospray emitter is made from a fine capillary with a small aperture of about 1–3 micrometers. For sufficient conductivity this capillary is usually sputter-coated with a conductive material, e.g. gold. Nanoelectrospray ionization consumes only a few microliters of a sample and forms smaller droplets. Operation at low pressure was particularly effective for low flow rates, where the smaller electrospray droplet size allowed effective desolvation and ion formation to be achieved. As a result, the researchers were later able to demonstrate achieving an excess of 50% overall ionization utilization efficiency for transfer of ions from the liquid phase, into the gas phase as ions, and through the dual ion funnel interface to the mass spectrometer. Ambient ionization In ambient ionization, the formation of ions occurs outside the mass spectrometer without sample preparation. Electrospray is used for ion formation in a number of ambient ion sources. Desorption electrospray ionization (DESI) is an ambient ionization technique in which a solvent electrospray is directed at a sample. The electrospray is attracted to the surface by applying a voltage to the sample. Sample compounds are extracted into the solvent, which is again aerosolized as highly charged droplets that evaporate to form highly charged ions. After ionization, the ions enter the atmospheric pressure interface of the mass spectrometer. DESI allows for ambient ionization of samples at atmospheric pressure, with little sample preparation. Extractive electrospray ionization is a spray-type, ambient ionization method that uses two merged sprays, one of which is generated by electrospray. Laser-based electrospray ambient ionization is a two-step process in which a pulsed laser is used to desorb or ablate material from a sample and the plume of material interacts with an electrospray to create ions. For ambient ionization, the sample material is deposited on a target near the electrospray.
The laser desorbs or ablates material from the sample, which is ejected from the surface into the electrospray, which produces highly charged ions. Examples are electrospray laser desorption ionization, matrix-assisted laser desorption electrospray ionization, and laser ablation electrospray ionization. Electrostatic spray ionization (ESTASI) involves the analysis of samples located on a flat or porous surface, or inside a microchannel. A droplet containing analytes is deposited on a sample area, to which a pulsed high voltage is applied. When the electrostatic pressure is larger than the surface tension, droplets and ions are sprayed. Secondary electrospray ionization (SESI) is a spray-type, ambient ionization method where charging ions are produced by means of an electrospray. These ions then charge vapor molecules in the gas phase when colliding with them. In paper spray ionization, the sample is applied to a piece of paper, solvent is added, and a high voltage is applied to the paper, creating ions. Applications Electrospray is used to study protein folding. Liquid chromatography–mass spectrometry Electrospray ionization is the ion source of choice to couple liquid chromatography with mass spectrometry (LC-MS). The analysis can be performed online, by feeding the liquid eluting from the LC column directly to an electrospray, or offline, by collecting fractions to be later analyzed in a classical nanoelectrospray-mass spectrometry setup. Among the numerous operating parameters in ESI-MS, for proteins the electrospray voltage has been identified as an important parameter to consider in ESI LC/MS gradient elution. The effects of various solvent compositions (such as TFA or ammonium acetate, supercharging reagents, or derivatizing groups) and spraying conditions on electrospray-LC-MS and/or nanoESI-MS spectra have been studied. Capillary electrophoresis-mass spectrometry (CE-MS) Capillary electrophoresis-mass spectrometry was enabled by an ESI interface that was developed and patented by Richard D. Smith and coworkers at Pacific Northwest National Laboratory, and shown to have broad utility for the analysis of very small biological and chemical compound mixtures, even extending to a single biological cell. Noncovalent gas phase interactions Electrospray ionization is also utilized in studying noncovalent gas-phase interactions. The electrospray process is thought to be capable of transferring liquid-phase noncovalent complexes into the gas phase without disrupting the noncovalent interaction. Problems such as nonspecific interactions have been identified when studying ligand-substrate complexes by ESI-MS or nanoESI-MS. An interesting example of this is studying the interactions between enzymes and drugs which are inhibitors of the enzyme. Competition studies between STAT6 and inhibitors have used ESI as a way to screen for potential new drug candidates. Electrospray ionization can even be used for studying protein complexes >1 MDa. See also Laser ablation electrospray ionization Probe electrospray ionization Sonic spray ionization References Further reading External links Electrospray Ionization Primer National High Magnetic Field Laboratory Ion source
Electrospray ionization
[ "Physics" ]
2,943
[ "Ion source", "Mass spectrometry", "Spectrum (physical sciences)" ]
428,111
https://en.wikipedia.org/wiki/Parabolic%20coordinates
Parabolic coordinates are a two-dimensional orthogonal coordinate system in which the coordinate lines are confocal parabolas. A three-dimensional version of parabolic coordinates is obtained by rotating the two-dimensional system about the symmetry axis of the parabolas. Parabolic coordinates have found many applications, e.g., the treatment of the Stark effect and the potential theory of the edges. Two-dimensional parabolic coordinates Two-dimensional parabolic coordinates (σ, τ) are defined by the equations, in terms of Cartesian coordinates:

x = σ τ
y = (τ² − σ²)/2

The curves of constant σ form confocal parabolae that open upwards (i.e., towards +y), whereas the curves of constant τ form confocal parabolae that open downwards (i.e., towards −y). The foci of all these parabolae are located at the origin. The Cartesian coordinates x and y can be converted to parabolic coordinates by:

τ = √(√(x² + y²) + y)
σ = x / τ

Two-dimensional scale factors The scale factors for the parabolic coordinates (σ, τ) are equal:

h_σ = h_τ = √(σ² + τ²)

Hence, the infinitesimal element of area is

dA = (σ² + τ²) dσ dτ

and the Laplacian equals

∇²Φ = 1/(σ² + τ²) (∂²Φ/∂σ² + ∂²Φ/∂τ²)

Other differential operators such as ∇ · F and ∇ × F can be expressed in the coordinates (σ, τ) by substituting the scale factors into the general formulae found in orthogonal coordinates. Three-dimensional parabolic coordinates The two-dimensional parabolic coordinates form the basis for two sets of three-dimensional orthogonal coordinates. The parabolic cylindrical coordinates are produced by projecting in the z-direction. Rotation about the symmetry axis of the parabolae produces a set of confocal paraboloids, the coordinate system of tridimensional parabolic coordinates. Expressed in terms of Cartesian coordinates:

x = σ τ cos φ
y = σ τ sin φ
z = (τ² − σ²)/2

where the parabolae are now aligned with the z-axis, about which the rotation was carried out. Hence, the azimuthal angle φ is defined by

tan φ = y / x

The surfaces of constant σ form confocal paraboloids that open upwards (i.e., towards +z) whereas the surfaces of constant τ form confocal paraboloids that open downwards (i.e., towards −z). The foci of all these paraboloids are located at the origin. The Riemannian metric tensor associated with this coordinate system is

ds² = (σ² + τ²) dσ² + (σ² + τ²) dτ² + σ²τ² dφ²

Three-dimensional scale factors The three-dimensional scale factors are:

h_σ = h_τ = √(σ² + τ²)
h_φ = σ τ

It is seen that the scale factors h_σ and h_τ are the same as in the two-dimensional case. The infinitesimal volume element is then

dV = h_σ h_τ h_φ dσ dτ dφ = σ τ (σ² + τ²) dσ dτ dφ

and the Laplacian is given by

∇²Φ = 1/(σ² + τ²) [ (1/σ) ∂/∂σ (σ ∂Φ/∂σ) + (1/τ) ∂/∂τ (τ ∂Φ/∂τ) ] + 1/(σ²τ²) ∂²Φ/∂φ²

Other differential operators such as ∇ · F and ∇ × F can be expressed in the coordinates (σ, τ, φ) by substituting the scale factors into the general formulae found in orthogonal coordinates. See also Parabolic cylindrical coordinates Orthogonal coordinate system Curvilinear coordinates External links MathWorld description of parabolic coordinates Orthogonal coordinate systems
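As a quick numerical sanity check of the two-dimensional relations above, the following Python sketch converts an arbitrary test point to parabolic coordinates, maps it back, and verifies by finite differences that both scale factors equal √(σ² + τ²). The test point is arbitrary; nothing here goes beyond the formulas given in the article.

import math

def to_parabolic(x, y):
    """Cartesian (x, y) -> parabolic (sigma, tau), using tau = sqrt(r + y)."""
    r = math.hypot(x, y)
    tau = math.sqrt(r + y)
    sigma = x / tau            # sign of sigma follows the sign of x
    return sigma, tau

def to_cartesian(sigma, tau):
    """Parabolic (sigma, tau) -> Cartesian (x, y)."""
    return sigma * tau, (tau ** 2 - sigma ** 2) / 2.0

# Round-trip an arbitrary test point
x, y = 1.7, -0.4
s, t = to_parabolic(x, y)
xc, yc = to_cartesian(s, t)
assert math.isclose(xc, x, abs_tol=1e-12) and math.isclose(yc, y, abs_tol=1e-12)

# Scale factors by finite differences: h_sigma = |d(x,y)/d sigma|, etc.
eps = 1e-7
x1, y1 = to_cartesian(s + eps, t)
x2, y2 = to_cartesian(s, t + eps)
h_sigma = math.hypot(x1 - x, y1 - y) / eps
h_tau   = math.hypot(x2 - x, y2 - y) / eps
expected = math.sqrt(s ** 2 + t ** 2)
print(h_sigma, h_tau, expected)   # all three agree to within ~1e-7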
Parabolic coordinates
[ "Mathematics" ]
549
[ "Orthogonal coordinate systems", "Coordinate systems" ]
428,424
https://en.wikipedia.org/wiki/Catallactics
Catallactics is a theory of the way the free market system reaches exchange ratios and prices. It aims to analyse all actions based on monetary calculation and trace the formation of prices back to the point where an agent makes his or her choices. It explains prices as they are, rather than as they "should" be. The laws of catallactics are not value judgments, but aim to be exact, empirical, and of universal validity. It was used extensively by the Austrian School economist Ludwig von Mises. Etymology The term catallactics or catallaxy, respectively, comes from the Greek verb καταλλάσσω (katallassō), which means to exchange, to reconcile. Definition Catallactics is a praxeological theory. The term catallaxy was used by Friedrich Hayek to describe "the order brought about by the mutual adjustment of many individual economies in a market." Hayek was dissatisfied with the usage of the word "economy" because its Greek root, which translates as "household management", implies that economic agents in a market economy possess shared goals. He derived the word "catallaxy" (Hayek's suggested Greek construction would be rendered καταλλαξία) from the Greek verb katallasso (καταλλάσσω) which meant not only "to exchange" but also "to admit in the community" and "to change from enemy into friend." According to Mises and Hayek it was Richard Whately who coined the term "catallactics", in his Introductory Lectures on Political Economy (1831). See also Price signal Catallaxy Henry Dunning Macleod Notes Bibliography External links Austrian School Friedrich Hayek Self-organization
Catallactics
[ "Mathematics" ]
352
[ "Self-organization", "Dynamical systems" ]
428,508
https://en.wikipedia.org/wiki/American%20Chemical%20Society
The American Chemical Society (ACS) is a scientific society based in the United States that supports scientific inquiry in the field of chemistry. Founded in 1876 at New York University, the ACS currently has more than 155,000 members at all degree levels and in all fields of chemistry, chemical engineering, and related fields. It is one of the world's largest scientific societies by membership. The ACS is a 501(c)(3) non-profit organization and holds a congressional charter under Title 36 of the United States Code. Its headquarters are located in Washington, D.C., and it has a large concentration of staff in Columbus, Ohio. The ACS is a leading source of scientific information through its peer-reviewed scientific journals, national conferences, and the Chemical Abstracts Service. Its publications division produces over 80 scholarly journals including the prestigious Journal of the American Chemical Society, as well as the weekly trade magazine Chemical & Engineering News. The ACS holds national meetings twice a year covering the complete field of chemistry and also holds smaller conferences concentrating on specific chemical fields or geographic regions. The primary source of income of the ACS is the Chemical Abstracts Service, a provider of chemical databases worldwide. The ACS has student chapters in virtually every major university in the United States and outside the United States as well. These student chapters mainly focus on volunteering opportunities, career development, and the discussion of student and faculty research. The organization also publishes textbooks, administers several national chemistry awards, provides grants for scientific research, and supports various educational and outreach activities. The ACS has been criticized for predatory pricing of its products (SciFinder, journals and other publications), for opposing open access publishing, as well as for initiating numerous copyright enforcement litigations despite its non-profit status and its chartered commitment to dissemination of chemical information. History Creation In 1874, a group of American chemists gathered at the Joseph Priestley House to mark the 100th anniversary of Priestley's discovery of oxygen. Although there was an American scientific society at that time (the American Association for the Advancement of Science, founded in 1848), the growth of chemistry in the U.S. prompted those assembled to consider founding a new society that would focus more directly on theoretical and applied chemistry. Two years later, on April 6, 1876, during a meeting of chemists at the University of the City of New York (now New York University) the American Chemical Society was founded. The society received its charter of incorporation from the State of New York in 1877. Charles F. Chandler, a professor of chemistry at Columbia University who was instrumental in organizing the society said that such a body would "prove a powerful and healthy stimulus to original research, ... would awaken and develop much talent now wasting in isolation, ... [bring] members of the association into closer union, and ensure a better appreciation of our science and its students on the part of the general public." Although Chandler was a likely choice to become the society's first president because of his role in organizing the society, New York University chemistry professor John William Draper was elected as the first president of the society because of his national reputation. 
Draper was a photochemist and pioneering photographer who had produced one of the first photographic portraits in 1840. Chandler would later serve as president in 1881 and 1889. In the ACS logo, originally designed in the early 20th century by Tiffany's Jewelers and used since 1909, a stylized symbol of a kaliapparat is used. Growth The Journal of the American Chemical Society was founded in 1879 to publish original chemical research. It was the first journal published by ACS and is still the society's flagship peer-reviewed publication. In 1907, Chemical Abstracts was established as a separate journal (it previously appeared within JACS), which later became the Chemical Abstracts Service, a division of ACS that provides chemical information to researchers and others worldwide. Chemical & Engineering News is a weekly trade magazine that has been published by ACS since 1923. The society adopted a new constitution aimed at nationalizing the organization in 1890. In 1905, the American Chemical Society moved from New York City to Washington, D.C. ACS was reincorporated under a congressional charter in 1937. It was granted by the U.S. Congress and signed by President Franklin D. Roosevelt. ACS's headquarters moved to its current location in downtown Washington in 1941. Organization Divisions ACS first established technical divisions in 1908 to foster the exchange of information among scientists who work in particular fields of chemistry or professional interests. Divisional activities include organizing technical sessions at ACS meetings, publishing books and resources, administering awards and lectureships, and conducting other events. The original five divisions were 1) organic chemistry, 2) industrial chemists and chemical engineers, 3) agricultural and food chemistry, 4) fertilizer chemistry, and 5) physical and inorganic chemistry. There are currently 32 technical divisions of ACS. Agricultural and food chemistry Agrochemicals Analytical chemistry Biochemical technology Biological chemistry Business development & management Carbohydrate chemistry Catalysis science & technology Cellulose and renewable materials Chemical education Chemical health & safety Chemical information Chemical toxicology Chemistry & the law Colloid & surface chemistry Computers in chemistry Energy & fuels Environmental chemistry Fluorine chemistry Geochemistry History of chemistry Industrial & engineering chemistry Inorganic chemistry Medicinal chemistry Nuclear chemistry and technology Organic chemistry Physical chemistry Polymer chemistry Polymeric materials: science and engineering Professional relations Rubber Small chemical businesses Division of Organic Chemistry This is the largest division of the Society. It marked its 100th anniversary in 2008. The first Chair of the Division was Edward Curtis Franklin. The Organic Division played a part in establishing Organic Syntheses, Inc. and Organic Reactions, Inc., and it maintains close ties to both organizations. The Division's best-known activities include organizing symposia (talks and poster sessions) at the biannual ACS National Meetings to recognize promising assistant professors, talented young researchers, and outstanding technical contributions from junior-level chemists in the field of organic chemistry. The symposia also honor winners of national awards, including the Arthur C. Cope Award, the Cope Scholar Award, the James Flack Norris Award in Physical Organic Chemistry, and the Herbert C. Brown Award for Creative Research in Synthetic Methods.
The Division helps to organize symposia at the international meeting called Pacifichem and it organizes the biennial National Organic Chemistry Symposium (NOS) which highlights recent advances in organic chemistry and hosts the Roger Adams Award address. The Division also organizes corporate sponsorships to provide fellowships for PhD students and undergraduates. It also organizes the Graduate Research Symposium and manages award and travel grant programs for undergraduates. Local sections Local sections were authorized in 1890 and are autonomous units of the American Chemical Society. They elect their own officers and select representatives to the national ACS organization. Local sections also provide professional development opportunities for members, organize community outreach events, offer awards, and conduct other business. The Rhode Island Section was the first local section of ACS, organized in 1891. There are currently 186 local sections of the American Chemical Society in all 50 states, the District of Columbia, and Puerto Rico. International Chemical Sciences Chapters International Chemical Sciences Chapters allow ACS members outside of the U.S. to organize locally for professional and scientific exchange. There are currently 24 International Chemical Sciences Chapters. Australia Brazil China Colombia Georgia Hong Kong Hungary India Iraq Jordan Malaysia Nigeria Pakistan Peru Qatar Republic of China (Taiwan) Romania Saudi Arabia Shanghai South Africa South Korea Southwestern China Thailand United Arab Emirates Educational activities and programs Chemical education and outreach ACS states that it offers teacher training to support the professional development of science teachers so they can better present chemistry in the classroom, foster the scientific curiosity of our nation's youth and encourage future generations to pursue scientific careers. As of 2009, Clifford and Kathryn Hach donated $33 million to ACS, to continue the work of the Hach Scientific Foundation in supporting high school chemistry teaching. The Society sponsors the United States National Chemistry Olympiad (USNCO), a contest used to select the four-member team that represents the United States at the International Chemistry Olympiad (IChO). The ACS Division of Chemical Education provides standardized tests for various subfields of chemistry. The two most commonly used tests are the undergraduate-level tests for general and organic chemistry. Each of these tests consists of 70 multiple-choice questions, and gives students 110 minutes to complete the exam. The ACS also approves certified undergraduate programs in chemistry. A student who completes the required laboratory and course work—sometimes in excess of what a particular college may require for its Bachelor's degree—is considered by the Society to be well trained for professional work. The ACS coordinates two annual public awareness campaigns, National Chemistry Week and Chemists Celebrate Earth Week, as part of its educational outreach. Since 1978 and 2003 respectively, the campaigns have been celebrated with a yearly theme, such as "Chemistry Colors Our World" (2015) and "Energy: Now and Forever!" (2013). Green Chemistry Institute The Green Chemistry Institute (GCI) supports the "implementation of green chemistry and engineering throughout the global chemistry enterprise." 
The GCI organizes an annual conference, the Green Chemistry and Engineering Conference, provides research grants, administers awards, and provides information and support for green chemistry practices to educators, researchers, and industry. The GCI was founded in 1997 as an independent non-profit organization, by chemists Joe Breen and Dennis Hjeresen in cooperation with the Environmental Protection Agency. In 2001, the GCI became a part of the American Chemical Society. Petroleum Research Fund The Petroleum Research Fund (PRF) is an endowment fund administered by the ACS that supports advanced education and fundamental research in the petroleum and fossil fuel fields at non-profit institutions. Several categories of grants are offered for various career levels and institutions. The fund awarded more than $25 million in grants in 2007. The PRF traces its origins to the acquisition of the Universal Oil Products laboratory by a consortium of oil companies in 1931. The companies established a trust fund, The Petroleum Research Fund, in 1944 to prevent antitrust litigation tied to their UOP assets. The ACS was named the beneficiary of the trust. The first grants from the PRF were awarded in 1954. In 2000, the trust was transferred to the ACS. The ACS established The American Chemical Society Petroleum Research Fund and the previous trust was dissolved. The PRF trust was valued at $144.7 million in December 2014. Other programs The ACS International Activities is the birthplace of the ACS International Center, an online resource for scientists and engineers looking to study abroad or explore an international career or internship. The site houses information on hundreds of scholarships and grants related to all levels of experience to promote scientific mobility of researchers and practitioners in STEM fields. The Society grants membership to undergraduates as student members provided they can pay the $25 yearly dues. Any university may start its own ACS Student Chapter and receive benefits of undergraduate participation in regional conferences and discounts on ACS publications. Awards National awards The American Chemical Society administers 64 national awards, medals and prizes based on scientific contributions at various career levels that promote achievement across the chemical sciences. The ACS national awards program began in 1922 with the establishment of the Priestley Medal, the highest award offered by the ACS, which is given for distinguished services to chemistry. The 2019 recipient of the Priestley Medal is K. Barry Sharpless. Other awards Additional awards are offered by divisions, local sections and other bodies of ACS. The William H. Nichols Medal Award was the first ACS award to honor outstanding researchers in the field of chemistry. It was established in 1903 by the ACS New York Section and is named for William H. Nichols, an American chemist and businessman and one of the original founders of ACS. Of the over 100 Nichols Medalists, 16 have subsequently been awarded the Nobel Prize in Chemistry. The Willard Gibbs Award, granted by the ACS Chicago Section, was established in 1910 in honor of Josiah Willard Gibbs, the Yale University professor who formulated the phase rule. The Georgia Local Section of ACS has awarded the Herty Medal since 1933 recognizing outstanding chemists who have significantly contributed to their chosen fields. 
All chemists in academic, government, or industrial laboratories who have been residing in the southeastern United States for at least 10 years are eligible. The New York Section of ACS also gives Leadership Awards. The Leadership Awards are the highest honors given by the Chemical Marketing and Economic Group of ACS NY since December 6, 2012. They are presented to leaders of industry, investments, and other sectors, for their contributions to science, technology, engineering and mathematics (STEM) initiatives. Honorees include Andrew N. Liveris (Dow Chemical), P. Roy Vagelos (Regeneron, Merck), Thomas M. Connelly (DuPont) and Juan Pablo del Valle (Mexichem). The ACS also administers regional awards presented annually at regional meetings. This includes the E. Ann Nalley Regional Award for Volunteer Service to the American Chemical Society, Regional Awards for Excellence in High School Teaching, and the Stanley C. Israel Regional Award for Advancing Diversity in the Chemical Sciences. Journals and magazines ACS Publications is the publishing division of the ACS. It is a nonprofit academic publisher of scientific journals covering various fields of chemistry and related sciences. As of 2021, ACS Publications published the following peer-reviewed journals: Accounts of Chemical Research Accounts of Materials Research ACS Agricultural Science & Technology ACS Applied Bio Materials ACS Applied Electronic Materials ACS Applied Energy Materials ACS Applied Materials & Interfaces ACS Applied Nano Materials ACS Applied Polymer Materials ACS Bio & Med Chem Au ACS Biomaterials Science & Engineering ACS Catalysis ACS Central Science ACS Chemical Biology ACS Chemical Health & Safety ACS Chemical Neuroscience ACS Combinatorial Science ACS Earth and Space Chemistry ACS Energy Letters ACS Engineering Au ACS Environmental Au ACS ES&T Engineering ACS ES&T Water ACS Food Science & Technology ACS Infectious Diseases ACS Macro Letters ACS Materials Au ACS Materials Letters ACS Measurement Science Au ACS Medicinal Chemistry Letters ACS Nano ACS Nanoscience Au ACS Omega ACS Organic & Inorganic Au ACS Pharmacology & Translational Science ACS Photonics ACS Physical Chem Au ACS Polymers Au ACS Sensors ACS Sustainable Chemistry & Engineering ACS Synthetic Biology Analytical Chemistry Biochemistry Bioconjugate Chemistry Biomacromolecules Bulletin for the History of Chemistry Biotechnology Progress Chemical Research in Toxicology Chemical Reviews Chemistry of Materials Crystal Growth & Design Energy & Fuels Environmental Science & Technology Environmental Science & Technology Letters Industrial & Engineering Chemistry Research Inorganic Chemistry JACS Au Journal of Agricultural and Food Chemistry Journal of Chemical & Engineering Data Journal of Chemical Education Journal of Chemical Information and Modeling Journal of Chemical Theory and Computation Journal of Medicinal Chemistry Journal of Natural Products Journal of Proteome Research Journal of the American Chemical Society Journal of the American Society for Mass Spectrometry Langmuir Macromolecules Molecular Pharmaceutics Nano Letters Organic Letters Organic Process Research & Development Organometallics The Journal of Organic Chemistry The Journal of Physical Chemistry A The Journal of Physical Chemistry B The Journal of Physical Chemistry C The Journal of Physical Chemistry Letters In addition to academic journals, ACS Publications also publishes Chemical & Engineering News, a weekly trade magazine covering news in the chemical
profession, inChemistry, a magazine for undergraduate students, and ChemMatters, a magazine for high school students and teachers. ACS also created ChemRxiv, which is an open access preprint repository for the chemical sciences, co-owned and collaboratively managed by the American Chemical Society (ACS), German Chemical Society (GDCh), Royal Society of Chemistry (RSC), the chemistry community, other societies, funders, and non-profits; it is open for submissions and available to all readers. Controversies Open access In debates about free access to scientific information, the ACS has been described as "in an interesting dilemma, with some of its representatives pushing for open access and others hating the very thought." The ACS has generally opposed legislation that would mandate free access to scientific journal articles and chemical information. However, it has recently launched new open access journals and provided authors with open access publishing options. Nevertheless, the actual percentage of open-access publications in ACS journals is the lowest among the eight major scientific journal publishers. Journals The mid-2000s saw a debate between some research funders (including the federal government), which argued that research they funded should be presented freely to the public, and some publishers (including the ACS), which argued that the costs of peer review and publishing justified their subscription prices. In 2006, Congress debated legislation that would have instructed the National Institutes of Health (NIH) to require all investigators it funded to submit copies of final, peer-reviewed journal articles to PubMed Central, a free-access digital repository it operates, within 12 months of publication. At the time the American Association of Publishers (of which ACS is a member) hired a public relations firm to counter the open access movement. In spite of publishers' opposition, the PubMed Central legislation was passed in December 2007 and became effective in 2008. As the open access issue has continued to evolve, so too has the ACS's position. In response to a 2013 White House Office of Science and Technology Policy directive that instructed federal agencies to provide greater access to federally funded research, the ACS joined other scholarly publishers in establishing the Clearinghouse for the Open Research of the United States (Chorus) to allow free access to published articles. The ACS has also introduced several open access publishing options for its journals, including providing authors the option to pay an upfront fee to enable free online access to their articles. In 2015, the ACS launched the first fully open access journal in the society's history, ACS Central Science. The ACS states that the journal offers the same peer-review standards as its subscription journals, but without publishing charges to either authors or readers. A second open access title, ACS Omega, an interdisciplinary mega journal, launched in 2016. In December 2020, the ACS launched a series of 9 open access journals under the name ACS Au (chemical symbol for gold), which include ACS Bio & Med Chem Au, ACS Engineering Au, ACS Environmental Au, ACS Materials Au, ACS Measurement Science Au, ACS Nanoscience Au, ACS Organic & Inorganic Au, ACS Physical Chem Au and ACS Polymers Au. Databases In 2005, the ACS was criticized for opposing the creation of PubChem, which is an open access chemical database developed by the NIH's National Center for Biotechnology Information.
The ACS raised concerns that the publicly supported PubChem database would duplicate and unfairly compete with their existing fee-based Chemical Abstracts Service and argued that the database should only present data created by the Molecular Libraries Screening Center initiative of the NIH. The ACS lobbied members of the United States Congress to rein in PubChem and hired outside lobbying firms to try to persuade congressional members, the NIH, and the Office of Management and Budget (OMB) against establishing a publicly funded database. The ACS was unsuccessful, and as of 2012 PubChem is the world's largest free chemical database. The ACS is also the only provider of a major scientific publication database (SciFinder) that imposes a restriction on the number of records that can be exported. None of the competing products, such as Web of Science (owned by Clarivate), Scopus (owned by Elsevier) and The Lens (owned by Cambia), has similar restrictions. Litigations The ACS has been involved in numerous lawsuits regarding access to its databases, trademark rights, and copyrighted material. In many of these cases, the ACS lost or ended up with an unenforceable judgement. These include: Dialog v. American Chemical Society, a suit claiming antitrust violations in access to ACS databases, settled out of court in 1993; American Chemical Society v. Google, a suit claiming trademark violation, settled out of court in 2006; American Chemical Society v. Leadscope, a suit alleging stolen trade secrets, concluded in 2012 with ACS losing its trade secrets claim and Leadscope losing its counterclaim of defamation; against ResearchGate, where a German court refused to award monetary compensation to the ACS and Elsevier; against Sci-Hub, which resulted in a non-enforceable judgement. The ACS has also lost several lawsuits brought against the Society by its employees. Executive compensation In 2004, a group of ACS members criticized the compensation of former executive director and chief executive officer John Crum, whose total salary, expenses, and bonuses for 2002 was reported to be $767,834. The ACS defended the figure, saying that it was in line with that of comparable organizations, including for-profit publishers. Two employees were reported to have a total compensation exceeding $900,000, while 694 had a compensation exceeding $100,000. See also Reagent Chemicals (Reagent ACS), standards of chemical purity ACS style, the ACS's citation standard Association for Learned and Professional Society Publishers Chemical Abstracts Service List of learned societies List of international professional associations National Chemistry Week National Historic Chemical Landmarks Footnotes References Further reading J. J. Bohning 2001. American Chemical Society Founded 1876. ACS, Washington, D.C. External links ACS website ACS Publications website ACS Chemical & Engineering News ACS Chemical Abstracts Service (CAS) International Year of Chemistry 2011 (archived) A Cauldron Bubbles: PubChem and the American Chemical Society — Information Today, June 2005 ACS Chemical Biology WIKI (archived) ACS Chemical Biology Community (archived) ACS Green Chemistry Institute ACS Organic Division Leete Award Gassman Award Archives American Chemical Society Puget Sound Section Records. 1909–1989. 11.9 cubic feet plus 10 vertical files and 7 items. At the University of Washington Libraries, Special Collections. Green chemistry Organizations based in Washington, D.C.
Scientific organizations established in 1876 Learned societies of the United States Patriotic and national organizations chartered by the United States Congress 1876 establishments in New York (state) Academic publishing companies
American Chemical Society
[ "Chemistry", "Engineering", "Environmental_science" ]
4,572
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan", "American Chemical Society" ]
428,513
https://en.wikipedia.org/wiki/Many-body%20problem
The many-body problem is a general name for a vast category of physical problems pertaining to the properties of microscopic systems made of many interacting particles. Terminology Microscopic here implies that quantum mechanics has to be used to provide an accurate description of the system. Many can be anywhere from three to infinity (in the case of a practically infinite, homogeneous or periodic system, such as a crystal), although three- and four-body systems can be treated by specific means (respectively the Faddeev and Faddeev–Yakubovsky equations) and are thus sometimes separately classified as few-body systems. Explanation of the problem In general terms, while the underlying physical laws that govern the motion of each individual particle may (or may not) be simple, the study of the collection of particles can be extremely complex. In such a quantum system, the repeated interactions between particles create quantum correlations, or entanglement. As a consequence, the wave function of the system is a complicated object holding a large amount of information, which usually makes exact or analytical calculations impractical or even impossible. This becomes especially clear by a comparison to classical mechanics. Imagine a single particle that can be described with k numbers (take for example a free particle described by its position and velocity vector, resulting in k = 6). In classical mechanics, N such particles can simply be described by k·N numbers. The dimension of the classical many-body system scales linearly with the number of particles N. In quantum mechanics, however, the many-body system is in general in a superposition of combinations of single-particle states – all the different combinations have to be accounted for. The dimension of the quantum many-body system therefore scales exponentially with N, much faster than in classical mechanics. Because the required numerical expense grows so quickly, simulating the dynamics of more than three quantum-mechanical particles is already infeasible for many physical systems. Thus, many-body theoretical physics most often relies on a set of approximations specific to the problem at hand, and ranks among the most computationally intensive fields of science. In many cases, emergent phenomena may arise which bear little resemblance to the underlying elementary laws. Many-body problems play a central role in condensed matter physics. Examples Condensed matter physics (solid-state physics, nanoscience, superconductivity) Bose–Einstein condensation and Superfluids Quantum chemistry (computational chemistry, molecular physics) Atomic physics Molecular physics Nuclear physics (Nuclear structure, nuclear reactions, nuclear matter) Quantum chromodynamics (Lattice QCD, hadron spectroscopy, QCD matter, quark–gluon plasma) Approaches Mean-field theory and extensions (e.g. Hartree–Fock, Random phase approximation) Dynamical mean field theory Many-body perturbation theory and Green's function-based methods Configuration interaction Coupled cluster Various Monte-Carlo approaches Density functional theory Lattice gauge theory Matrix product state Neural network quantum states Numerical renormalization group Further reading References Quantum mechanics Computational physics
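The contrast between linear and exponential scaling described above is easy to make concrete. The short Python sketch below uses spin-1/2 particles (d = 2 single-particle states each) as an assumed example; the classical count uses the k = 6 numbers per free particle mentioned in the text.

# Classical: N particles, each described by k numbers -> k*N numbers total.
# Quantum:   N particles, each with d single-particle states -> the state
#            vector lives in a d**N-dimensional Hilbert space (one complex
#            amplitude per combination of single-particle states).

k = 6   # e.g. a free classical particle: 3 position + 3 velocity components
d = 2   # assumed example: spin-1/2, two states per particle

for N in (1, 10, 20, 40):
    classical = k * N        # grows linearly with N
    quantum = d ** N         # grows exponentially with N
    print(f"N = {N:2d}:  classical numbers = {classical:4d},  "
          f"quantum amplitudes = {quantum:.3e}")

# Already at N = 40 spins the state vector has ~10^12 amplitudes, which is
# why exact simulation of more than a few tens of particles is infeasible.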
Many-body problem
[ "Physics" ]
608
[ "Theoretical physics", "Quantum mechanics", "Computational physics" ]
428,795
https://en.wikipedia.org/wiki/Check%20valve
A check valve, non-return valve, reflux valve, retention valve, foot valve, or one-way valve is a valve that normally allows fluid (liquid or gas) to flow through it in only one direction. Check valves are two-port valves, meaning they have two openings in the body, one for fluid to enter and the other for fluid to leave. There are various types of check valves used in a wide variety of applications. Check valves are often part of common household items. Although they are available in a wide range of sizes and costs, check valves generally are very small, simple, and inexpensive. Check valves work automatically and most are not controlled by a person or any external control; accordingly, most do not have any valve handle or stem. The bodies (external shells) of most check valves are made of plastic or metal. An important concept in check valves is the cracking pressure, which is the minimum differential upstream pressure between inlet and outlet at which the valve will operate. Typically the check valve is designed for, and can therefore be specified for, a specific cracking pressure. Technical terminology Cracking pressure Refers to the minimum pressure differential needed between the inlet and outlet of the valve at which the first indication of flow occurs (steady stream of bubbles). Cracking pressure is also known as unseating head (pressure) or opening pressure. Reseal pressure Refers to the pressure differential between the inlet and outlet of the valve during the closing process of the check valve, at which there is no visible leak rate. Reseal pressure is also known as sealing pressure, seating head (pressure) or closing pressure. Back pressure Refers to a pressure higher at the outlet of a fitting than that at the inlet or at a point upstream. Types Ball check valve A ball check valve is a check valve in which the closing member, the movable part to block the flow, is a ball. In some ball check valves, the ball is spring-loaded to help keep it shut. For those designs without a spring, reverse flow is required to move the ball toward the seat and create a seal. The interior surfaces of the main seats of ball check valves are more or less conically tapered to guide the ball into the seat and form a positive seal when stopping reverse flow. Ball check valves are often very small, simple, and cheap. They are commonly used in liquid or gel minipump dispenser spigots, spray devices, some rubber bulbs for pumping air, etc., manual air pumps and some other pumps, and refillable dispensing syringes. Although the balls are most often made of metal, they can be made of other materials; in some specialized cases out of highly durable or inert materials, such as sapphire. High-performance liquid chromatography pumps and similar high pressure applications commonly use small inlet and outlet ball check valves with balls of (artificial) ruby and seats made of sapphire, or both ball and seat of ruby, for both hardness and chemical resistance. After prolonged use, such check valves can eventually wear out or the seat can develop a crack, requiring replacement. Therefore, such valves are made to be replaceable, sometimes placed in a small plastic body tightly fitted inside a metal fitting which can withstand high pressure and which is screwed into the pump head. There are similar check valves where the disc is not a ball, but some other shape, such as a poppet energized by a spring.
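As a rough illustration of the cracking pressure concept defined above, consider the spring-loaded ball design: the valve begins to open when the inlet-outlet pressure difference, acting on the seat area, exceeds the spring preload. The Python sketch below is a simplified force balance with invented numbers, not a sizing rule taken from the text; real valves also involve seal friction and geometry effects.

import math

def cracking_pressure(preload_n, seat_diameter_m):
    """Differential pressure [Pa] needed to start opening a spring-loaded
    ball check valve: the upstream pressure must push on the seat area
    with more force than the spring preload (seal friction ignored)."""
    area = math.pi * seat_diameter_m ** 2 / 4.0
    return preload_n / area

# Invented example: a 2 N spring preload acting on a 10 mm seat
dp = cracking_pressure(2.0, 0.010)
print(f"cracking pressure ~ {dp / 1000:.1f} kPa ({dp / 6894.76:.2f} psi)")
# The valve passes flow only while P_upstream - P_downstream exceeds this
# value; gravity-seated designs behave similarly with the ball weight
# (minus buoyancy) playing the role of the spring preload.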
Ball check valves should not be confused with ball valves, which are a different type of valve in which a ball rotating on a pin acts as a controllable rotor to stop or direct flow. Diaphragm check valve A diaphragm check valve uses a flexing rubber diaphragm positioned to create a normally-closed valve. Pressure on the upstream side must be greater than the pressure on the downstream side by a certain amount, known as the pressure differential, for the check valve to open allowing flow. Once positive pressure stops, the diaphragm automatically flexes back to its original closed position. This type is used in respirators (face masks) with an exhalation valve. Swing check valve A swing check valve (or tilting disc check valve) is a check valve in which the disc, the movable part to block the flow, swings on a hinge or trunnion, either onto the seat to block reverse flow or off the seat to allow forward flow. The seat opening cross-section may be perpendicular to the centerline between the two ports or at an angle. Although swing check valves can come in various sizes, large check valves are often swing check valves. A common issue caused by swing check valves is known as water hammer. This can occur when the swing check closes and the flow abruptly stops, causing a surge of pressure resulting in high velocity shock waves that act against the piping and valves, placing large stress on the metal and causing vibrations in the system. Undetected, water hammer can rupture pumps, valves, and pipes within the system. The flapper valve in a flush-toilet mechanism is an example of this type of valve. Tank pressure holding it closed is overcome by manual lift of the flapper. It then remains open until the tank drains and the flapper falls due to gravity. Another variation of this mechanism is the clapper valve, used in applications such as firefighting and fire life safety systems. A hinged gate only remains open in the inflowing direction. The clapper valve often also has a spring that keeps the gate shut when there is no forward pressure. Another example is the backwater valve (for sanitary drainage systems) that protects against flooding caused by return flow of sewage waters. Such risk occurs most often in sanitary drainage systems connected to combined sewerage systems and in rainwater drainage systems. It may be caused by intense rainfall, thaw or flood. Butterfly check valve A butterfly check valve is a variant on the swing check valve, having two hinged flaps which act as check valves to prevent backwards flow. It should not be confused with the similarly named butterfly valve, which is used for flow regulation and does not have a one-way flow function. Stop-check valve A stop-check valve is a check valve with override control to stop flow regardless of flow direction or pressure. In addition to closing in response to backflow or insufficient forward pressure (normal check-valve behavior), it can also be deliberately shut by an external mechanism, thereby preventing any flow regardless of forward pressure. Lift-check valve A lift-check valve is a check valve in which the disc, sometimes called a lift, can be lifted up off its seat by higher pressure of inlet or upstream fluid to allow flow to the outlet or downstream side. A guide keeps motion of the disc on a vertical line, so the valve can later reseat properly. When the pressure is no longer higher, gravity or higher downstream pressure will cause the disc to lower onto its seat, shutting the valve to stop reverse flow.
In-line check valve An in-line check valve is a check valve similar to the lift check valve. However, this valve generally has a spring that will 'lift' when there is pressure on the upstream side of the valve. The pressure needed on the upstream side of the valve to overcome the spring tension is called the 'cracking pressure'. When the pressure going through the valve goes below the cracking pressure, the spring will close the valve to prevent back-flow in the process. Duckbill valve A duckbill valve is a check valve in which flow proceeds through a soft tube that protrudes into the downstream side. Back-pressure collapses this tube, cutting off flow. Pneumatic non-return valve Pneumatic non-return valves provide the ability to lock the valve, hence preventing flow in either direction. This may be used if, for example, a site with hazardous materials should be protected from flood water; however, it is also important that the materials cannot leak, for example during transfer between vessels. Reed valve A reed valve is a check valve formed by a flexible flat sheet that seals an orifice plate. The cracking pressure is very low, the moving part has low mass allowing rapid operation, the flow resistance is moderate, and the seal improves with back pressure. These are commonly found in two-stroke internal combustion engines as the air intake valve for the crankcase volume and in air compressors as both intake and exhaust valves for the cylinder(s). Although reed valves are typically used for gases rather than liquids, the Autotrol brand of water treatment control valves are designed as a set of reed valves taking advantage of the sealing characteristic, selectively forcing open some of the reeds to establish a flow path. Flow check A flow check is a check valve used in hydronic heating and cooling systems to prevent unwanted passive gravity flow. A flow check is a simple flow-lifted, gravity-closed heavy metal stopper designed for low flow resistance, many decades of continuous service, and to self-clean the fine particulates commonly found in hydronic systems from the sealing surfaces. To accomplish self-cleaning, the stopper is typically not conical. A circular recess in a weight that fits over a matching narrow ridge at the rim of an orifice is a common design. The application inherently tolerates a modest reverse leakage rate; a perfect seal is not required. A flow check has an operating screw to allow the valve to be held open, the opposite of the control on a stop-check valve, as an aid for filling the system and for purging air from the system. Multiple valves Multiple check valves can be connected in series. For example, a double check valve is often used as a backflow prevention device to keep potentially contaminated water from siphoning back into municipal water supply lines. There are also double ball check valves in which there are two ball/seat combinations sequentially in the same body to ensure positive leak-tight shutoff when blocking reverse flow; and piston check valves, wafer check valves, and ball-and-cone check valves. Applications Pumps Check valves are often used with some types of pumps. Piston-driven and diaphragm pumps such as metering pumps and pumps for chromatography commonly use inlet and outlet ball check valves. These valves often look like small cylinders attached to the pump head on the inlet and outlet lines. Many similar pump-like mechanisms for moving volumes of fluids around use check valves such as ball check valves.
The feed pumps or injectors which supply water to steam boilers are fitted with check valves to prevent back-flow. Check valves are also used in the pumps that supply water to water slides. The water to the slide flows through a pipe which doubles as the tower holding the steps to the slide. When the facility with the slide closes for the night, the check valve stops the flow of water through the pipe; when the facility reopens for the next day, the valve is opened and the flow restarts, making the slide ready for use again. Industrial processes Check valves are used in many fluid systems such as those in chemical and power plants, and in many other industrial processes. Typical applications in the nuclear industry are feed water control systems, dump lines, make-up water, miscellaneous process systems, N2 systems, and monitoring and sampling systems. In aircraft and aerospace, check valves are used where high vibration, large temperature extremes and corrosive fluids are present; examples include propellant control in spacecraft and launch vehicle reaction control systems (RCS) and attitude control systems (ACS), and aircraft hydraulic systems. Check valves are also often used when multiple gases are mixed into one gas stream. A check valve is installed on each of the individual gas streams to prevent mixing of the gases in the original source. For example, if a fuel and an oxidizer are to be mixed, then check valves will normally be used on both the fuel and oxidizer sources to ensure that the original gas cylinders remain pure and therefore nonflammable. In 2010, NASA's Jet Propulsion Laboratory slightly modified a simple check valve design with the intention of storing liquid samples indicative of life on Mars in separate reservoirs of the device without fear of cross-contamination. Domestic use When a sanitary potable water supply is plumbed to an unsanitary system, for example lawn sprinklers, a dishwasher or a washing machine, a check valve called a backflow preventer is used to prevent contaminated water from re-entering the domestic water supply. Some types of irrigation sprinklers and drip irrigation emitters have small check valves built into them to keep the lines from draining when the system is shut off. Check valves used in domestic heating systems to prevent vertical convection, especially in combination with solar thermal installations, are also called gravity brakes. Rainwater harvesting systems that are plumbed into the main water supply of a utility provider may be required to have one or more check valves fitted to prevent contamination of the primary supply by rainwater. Hydraulic jacks use ball check valves to build pressure on the lifting side of the jack. Check valves are commonly used in inflatables, such as toys, mattresses and boats. This allows the object to be inflated without requiring a continuous, uninterrupted supply of air pressure. History Frank P. Cotter developed a "simple self sealing check valve, adapted to be connected in the pipe connections without requiring special fittings and which may be readily opened for inspection or repair" in 1907 (U.S. patent 865,631). Nikola Tesla invented a deceptively simple one-way valve for fluids in 1916, called a Tesla valve. It was patented in 1920 (U.S. patent 1,329,559).
See also Diode, the electrical analog of a check valve Top feed Vacuum breaker Reed valve Ball valve Butterfly valve Control valve Gate valve Globe valve Diaphragm valve Needle valve Tesla valve References External links Working Principle of Spring Check Valves Check Valves Tutorial The operation, benefits, applications and selection of different designs, including lift, disc, swing and wafer check valves are explained in this tutorial A picture of a microscopic check valve, a scaled-down version of Tesla's original fluidic diode Tesla's original fluidic diode (a test of a design showing very poor performance – n.b. the test protocol did not match the conditions described in the patent) Check Valve Installation and Benefits Plumbing valves Steam boiler components Firefighting equipment Valves
Check valve
[ "Physics", "Chemistry" ]
2,936
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
428,874
https://en.wikipedia.org/wiki/Crassulacean%20acid%20metabolism
Crassulacean acid metabolism, also known as CAM photosynthesis, is a carbon fixation pathway that evolved in some plants as an adaptation to arid conditions that allows a plant to photosynthesize during the day, but only exchange gases at night. In a plant using full CAM, the stomata in the leaves remain shut during the day to reduce evapotranspiration, but they open at night to collect carbon dioxide (CO2) and allow it to diffuse into the mesophyll cells. The CO2 is stored as four-carbon malic acid in vacuoles at night, and then in the daytime, the malate is transported to chloroplasts where it is converted back to CO2, which is then used during photosynthesis. The pre-collected CO2 is concentrated around the enzyme RuBisCO, increasing photosynthetic efficiency. This mechanism of acid metabolism was first discovered in plants of the family Crassulaceae. Historical background Observations relating to CAM were first made by de Saussure in 1804 in his Recherches Chimiques sur la Végétation. Benjamin Heyne in 1812 noted that Bryophyllum leaves in India were acidic in the morning and tasteless by afternoon. These observations were studied further and refined by Aubert, E. in 1892 in his Recherches physiologiques sur les plantes grasses and expounded upon by Richards, H. M. 1915 in Acidity and Gas Interchange in Cacti, Carnegie Institution. The term CAM may have been coined by Ranson and Thomas in 1940, though they were not the first to discover this cycle; they observed it in the succulent family Crassulaceae (which includes jade plants and Sedum). The name "Crassulacean acid metabolism" refers to acid metabolism in Crassulaceae, and not the metabolism of "crassulacean acid"; there is no chemical by that name. Overview: a two-part cycle CAM is an adaptation for increased efficiency in the use of water, and so is typically found in plants growing in arid conditions. (CAM is found in over 99% of the known 1700 species of Cactaceae and in nearly all of the cacti producing edible fruits.) During the night During the night, a plant employing CAM has its stomata open, allowing CO2 to enter and be fixed as organic acids by a PEP reaction similar to the C4 pathway. The resulting organic acids are stored in vacuoles for later use, as the Calvin cycle cannot operate without ATP and NADPH, products of light-dependent reactions that do not take place at night. During the day During the day, the stomata close to conserve water, and the CO2-storing organic acids are released from the vacuoles of the mesophyll cells. An enzyme in the stroma of chloroplasts releases the CO2, which enters into the Calvin cycle so that photosynthesis may take place. Benefits The most important benefit of CAM to the plant is the ability to leave most leaf stomata closed during the day. Plants employing CAM are most common in arid environments, where water is scarce. Being able to keep stomata closed during the hottest and driest part of the day reduces the loss of water through evapotranspiration, allowing such plants to grow in environments that would otherwise be far too dry. Plants using only C3 carbon fixation, for example, lose 97% of the water they take up through the roots to transpiration – a high cost avoided by plants able to employ CAM. Comparison with C4 metabolism The C4 pathway bears resemblance to CAM; both act to concentrate CO2 around RuBisCO, thereby increasing its efficiency. CAM concentrates it temporally, providing CO2 during the day, and not at night, when respiration is the dominant reaction.
C4 plants, in contrast, concentrate CO2 spatially, with a RuBisCO reaction centre in a "bundle sheath cell" being inundated with CO2. Due to the inactivity required by the CAM mechanism, C4 carbon fixation has a greater efficiency in terms of PGA synthesis. There are some C4/CAM intermediate species, such as Peperomia camptotricha, Portulaca oleracea, and Portulaca grandiflora. It was previously thought that the two pathways of photosynthesis in such plants could occur in the same leaves but not in the same cells, and that the two pathways could not couple but only occur side by side. It is now known, however, that in at least some species such as Portulaca oleracea, C4 and CAM photosynthesis are fully integrated within the same cells, and that CAM-generated metabolites are incorporated directly into the C4 cycle. Biochemistry Plants with CAM must control storage of CO2 and its reduction to branched carbohydrates in space and time. At low temperatures (frequently at night), plants using CAM open their stomata, CO2 molecules diffuse into the spongy mesophyll's intracellular spaces and then into the cytoplasm. Here, they can meet phosphoenolpyruvate (PEP), which is a phosphorylated triose. During this time, the plants are synthesizing a protein called PEP carboxylase kinase (PEP-C kinase), whose expression can be inhibited by high temperatures (frequently in daylight) and the presence of malate. PEP-C kinase phosphorylates its target enzyme PEP carboxylase (PEP-C). Phosphorylation dramatically enhances the enzyme's capability to catalyze the formation of oxaloacetate, which can be subsequently transformed into malate by NAD+ malate dehydrogenase. Malate is then transported via malate shuttles into the vacuole, where it is converted into the storage form malic acid. In contrast to PEP-C kinase, PEP-C is synthesized all the time but almost completely inhibited in daylight, either by dephosphorylation via PEP-C phosphatase or directly by binding malate. The latter is not possible at low temperatures, since malate is efficiently transported into the vacuole, whereas PEP-C kinase readily inverts dephosphorylation. In daylight, plants using CAM close their guard cells and discharge malate that is subsequently transported into chloroplasts. There, depending on plant species, it is cleaved into pyruvate and CO2 either by malic enzyme or by PEP carboxykinase. The CO2 is then introduced into the Calvin cycle, a coupled and self-recovering enzyme system, which is used to build branched carbohydrates. The by-product pyruvate can be further degraded in the mitochondrial citric acid cycle, thereby providing additional CO2 molecules for the Calvin cycle. Pyruvate can also be used to recover PEP via pyruvate phosphate dikinase, a high-energy step, which requires ATP and an additional phosphate. During the following cool night, PEP is finally exported into the cytoplasm, where it is involved in fixing carbon dioxide via malate. Use by plants Plants use CAM to different degrees. Some are "obligate CAM plants", i.e. they use only CAM in photosynthesis, although they vary in the amount of CO2 they are able to store as organic acids; they are sometimes divided into "strong CAM" and "weak CAM" plants on this basis. Other plants show "inducible CAM", in which they are able to switch between using either the C3 or C4 mechanism and CAM depending on environmental conditions. Another group of plants employ "CAM-cycling", in which their stomata do not open at night; the plants instead recycle CO2 produced by respiration as well as storing some CO2 during the day.
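The night/day division of labour described in this section can be sketched as a small bookkeeping loop: CO2 is fixed into a malate pool while stomata are open at night, and the pool is drawn down to feed the Calvin cycle by day. This is only a toy illustration; the rate constants and the 12-hour night are invented, not measured values.

```python
# Minimal day/night bookkeeping for the CAM cycle (illustrative rates only).

def simulate_cam(hours=48, uptake_per_hour=1.0, release_per_hour=1.0):
    malate = 0.0        # vacuolar malate pool (arbitrary units)
    fixed_carbon = 0.0  # carbon delivered to the Calvin cycle
    for hour in range(hours):
        hour_of_day = hour % 24
        is_night = hour_of_day >= 18 or hour_of_day < 6
        if is_night:
            # Stomata open: CO2 fixed by PEP carboxylase, stored as malate.
            malate += uptake_per_hour
        else:
            # Stomata closed: malate is decarboxylated, feeding CO2 to RuBisCO.
            released = min(release_per_hour, malate)
            malate -= released
            fixed_carbon += released
    return malate, fixed_carbon

malate, fixed = simulate_cam()
print(f"leftover malate: {malate:.1f}, carbon fixed in daylight: {fixed:.1f}")
```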
Plants showing inducible CAM and CAM-cycling are typically found in conditions where periods of water shortage alternate with periods when water is freely available. Periodic drought – a feature of semi-arid regions – is one cause of water shortage. Plants which grow on trees or rocks (as epiphytes or lithophytes) also experience variations in water availability. Salinity, high light levels and nutrient availability are other factors which have been shown to induce CAM. Since CAM is an adaptation to arid conditions, plants using CAM often display other xerophytic characters, such as thick, reduced leaves with a low surface-area-to-volume ratio; thick cuticle; and stomata sunken into pits. Some shed their leaves during the dry season; others (the succulents) store water in vacuoles. CAM also causes taste differences: plants may have an increasingly sour taste during the night yet become sweeter-tasting during the day. This is due to malic acid being stored in the vacuoles of the plants' cells during the night and then being used up during the day. Aquatic CAM CAM photosynthesis is also found in aquatic species in at least 4 genera, including Isoetes, Crassula, Littorella, Sagittaria, and possibly Vallisneria, being found in a variety of species e.g. Isoetes howellii, Crassula aquatica. These plants follow the same nocturnal acid accumulation and daytime deacidification as terrestrial CAM species. However, the reason for CAM in aquatic plants is not a lack of available water, but a limited supply of CO2. CO2 is limited due to its slow diffusion in water, roughly 10,000 times slower than in air (a rough timescale comparison follows below). The problem is especially acute under acid pH, where the only inorganic carbon species present is dissolved CO2, with no available bicarbonate or carbonate supply. Aquatic CAM plants capture carbon at night when it is abundant due to a lack of competition from other photosynthetic organisms. This also results in lowered photorespiration due to less photosynthetically generated oxygen. Aquatic CAM is most marked in the summer months when there is increased competition for CO2, compared to the winter months. However, in the winter months CAM still has a significant role. Ecological and taxonomic distribution of CAM-using plants The majority of plants possessing CAM are either epiphytes (e.g., orchids, bromeliads) or succulent xerophytes (e.g., cacti, cactoid Euphorbias), but CAM is also found in hemiepiphytes (e.g., Clusia); lithophytes (e.g., Sedum, Sempervivum); terrestrial bromeliads; wetland plants (e.g., Isoetes, Crassula (Tillaea), Lobelia); and in one halophyte, Mesembryanthemum crystallinum; one non-succulent terrestrial plant (Dodonaea viscosa) and one mangrove associate (Sesuvium portulacastrum). The only trees that can do CAM are in the genus Clusia, species of which are found across Central America, South America and the Caribbean. In Clusia, CAM is found in species that inhabit hotter, drier ecological niches, whereas species living in cooler montane forests tend to be C3. In addition, some species of Clusia can temporarily switch their photosynthetic physiology from C3 to CAM, a process known as facultative CAM. This allows these trees to benefit from the elevated growth rates of C3 photosynthesis, when water is plentiful, and the drought tolerant nature of CAM, when the dry season occurs.
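The claim that CO2 diffusion is roughly 10,000 times slower in water than in air can be made concrete with the standard scaling t ≈ L²/(2D) for the time to diffuse a distance L. A short sketch, assuming rough textbook diffusion coefficients for CO2 (the two D values are assumptions, not figures from this article):

```python
# Characteristic diffusion time t ~ L**2 / (2 * D) over a leaf-scale distance.
D_AIR = 1.6e-5    # m^2/s, CO2 in air (assumed order of magnitude)
D_WATER = 1.6e-9  # m^2/s, CO2 in water (assumed; about 10,000x smaller)

def diffusion_time_seconds(distance_m: float, diffusivity: float) -> float:
    return distance_m ** 2 / (2.0 * diffusivity)

L = 1e-3  # 1 mm
print(f"air:   {diffusion_time_seconds(L, D_AIR):.3f} s")    # ~0.03 s
print(f"water: {diffusion_time_seconds(L, D_WATER):.0f} s")  # ~300 s
```

The four-orders-of-magnitude gap in D translates directly into the CO2 starvation that makes nocturnal carbon capture worthwhile for aquatic CAM plants.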
Plants which are able to switch between different methods of carbon fixation include Portulacaria afra, better known as Dwarf Jade Plant, which normally uses C3 fixation but can use CAM if it is drought-stressed, and Portulaca oleracea, better known as Purslane, which normally uses C4 fixation but is also able to switch to CAM when drought-stressed. CAM has evolved convergently many times. It occurs in 16,000 species (about 7% of plants), belonging to over 300 genera and around 40 families, but this is thought to be a considerable underestimate. The great majority of plants using CAM are angiosperms (flowering plants), but it is also found in ferns, Gnetopsida and quillworts (relatives of club mosses). Interpretation of the first quillwort genome in 2021 (I. taiwanensis) suggested that its use of CAM was another example of convergent evolution. In Tillandsia, CAM evolution has been associated with gene family expansion. The following list summarizes the taxonomic distribution of plants with CAM: See also C2 photosynthesis C3 carbon fixation C4 carbon fixation RuBisCO References External links Khan Academy, video lecture Photosynthesis Plant metabolism
Crassulacean acid metabolism
[ "Chemistry", "Biology" ]
2,635
[ "Biochemistry", "Plant metabolism", "Photosynthesis", "Metabolism" ]
429,167
https://en.wikipedia.org/wiki/Shaku%20%28unit%29
The shaku or Japanese foot is a Japanese unit of length derived (but varying) from the Chinese chi, originally based upon the distance measured by a human hand from the tip of the thumb to the tip of the forefinger (compare span). Traditionally, the length varied by location or use, but it is now standardized as 10/33 m, or approximately 30.3 cm. Etymology Shaku entered English in the early 18th century, a romanization of the Japanese Go-on reading of the character 尺 for the unit. Use in Japan The shaku has been standardized as 10/33 m since 1891. This means that there are about 3.3 shaku to one meter. This definition was established by Meiji government law; until then, even though the unit was given the same name, its length varied depending on the era. At the same time, other units were established based on the shaku. English: 1 shaku = 10 sun = 100 bu Japanese: 1尺 = 10寸 = 100分 The use of the unit for official purposes in Japan was banned on March 31, 1966, although it is still used in traditional Japanese carpentry and some other fields, such as kimono construction. The traditional Japanese bamboo flute known as the shakuhachi derives its name from its length of one shaku and eight sun. Similarly, a shaku-based unit remains in use in the Japanese lumber trade. In the Japanese construction industry, the standard sizes of drywall, plywood, and other sheet goods are based on the shaku, with the most common width being three shaku (rounded up to 910 mm). In Japanese media parlance, shaku refers to screen time: the amount of time someone or something is shown on screen (similar to the English "footage"). History Traditionally, the actual length of the shaku varied over time, location, and use. By the early 19th century, the shaku was largely within a narrow range around its present value, but a longer variant of the shaku was also known, 1.17 times longer than the present value. Carpenter's unit and tailor's unit Another variant of the shaku was used for measuring cloth, measuring about 0.379 meters, and was known as the kujira-jaku ("whale shaku"), as baleen (whale whiskers) were used as cloth rulers. To distinguish the two variants, the general unit was known as the kane-jaku (carpenter's shaku). The Shōsōin treasure house in Nara preserves some antique ivory one-shaku rulers. Derived units Length Just as with the Chinese unit, the shaku is divided into ten smaller units, known as sun in Japanese, and ten shaku together form a larger unit known in Japanese as a jō. The Japanese also had a third derived unit, the ken, equal to six shaku; this was used extensively in traditional Japanese architecture as the distance between supporting pillars in Buddhist temples and Shinto shrines. Volume Ten cubic shaku comprised a koku, reckoned as the amount of rice necessary to sustain a peasant for a year. Outside Japan The Japanese shaku also forms the basis of the modern Taiwanese foot. In 1909, the Korean Empire adopted the Japanese definition of the shaku as that of the corresponding Korean unit, the ja (cheok). See also Japanese units of measurement Notes References Bibliography Japanese words and phrases Units of length
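The standardized definition above (1 shaku = 10/33 m exactly) makes conversions straightforward. A small sketch using exact rational arithmetic; the function names are ad hoc:

```python
# Conversions from the 1891 definition: 1 shaku = 10/33 m exactly.
from fractions import Fraction

SHAKU_IN_METERS = Fraction(10, 33)

def shaku_to_meters(shaku):
    return float(shaku * SHAKU_IN_METERS)

def meters_to_shaku(meters):
    return float(meters / SHAKU_IN_METERS)

print(shaku_to_meters(1))    # 0.30303... m, about 30.3 cm
print(meters_to_shaku(1))    # 3.3 shaku to the meter
print(shaku_to_meters(1.8))  # ~0.545 m, the shakuhachi's one shaku eight sun
print(shaku_to_meters(3))    # ~0.909 m, the common three-shaku sheet width
```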
Shaku (unit)
[ "Mathematics" ]
617
[ "Quantity", "Units of measurement", "Units of length" ]
429,296
https://en.wikipedia.org/wiki/Hausdorff%20distance
In mathematics, the Hausdorff distance, or Hausdorff metric, also called Pompeiu–Hausdorff distance, measures how far two subsets of a metric space are from each other. It turns the set of non-empty compact subsets of a metric space into a metric space in its own right. It is named after Felix Hausdorff and Dimitrie Pompeiu. Informally, two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set. The Hausdorff distance is the longest distance someone can be forced to travel by an adversary who chooses a point in one of the two sets, from where they then must travel to the other set. In other words, it is the greatest of all the distances from a point in one set to the closest point in the other set. This distance was first introduced by Hausdorff in his book Grundzüge der Mengenlehre, first published in 1914, although a very close relative appeared in the doctoral thesis of Maurice Fréchet in 1906, in his study of the space of all continuous curves. Definition Let (M, d) be a metric space. For each pair of non-empty subsets X ⊆ M and Y ⊆ M, the Hausdorff distance between X and Y is defined as dH(X, Y) = max{ sup_{x ∈ X} d(x, Y), sup_{y ∈ Y} d(y, X) }, where sup represents the supremum operator, inf the infimum operator, and where d(a, B) = inf_{b ∈ B} d(a, b) quantifies the distance from a point a ∈ M to the subset B ⊆ M. An equivalent definition is as follows. For each set X ⊆ M let X_ε = ∪_{x ∈ X} { z ∈ M : d(z, x) ≤ ε }, which is the set of all points within ε of the set X (sometimes called the ε-fattening of X or a generalized ball of radius ε around X). Then, the Hausdorff distance between X and Y is defined as dH(X, Y) = inf{ ε ≥ 0 : X ⊆ Y_ε and Y ⊆ X_ε }. Equivalently, dH(X, Y) = sup_{w ∈ M} |d(w, X) − d(w, Y)|, where d(w, X) is the smallest distance from the point w to the set X. Remark It is not true for arbitrary subsets X, Y ⊆ M that dH(X, Y) = ε implies X ⊆ Y_ε and Y ⊆ X_ε. For instance, consider the metric space of the real numbers with the usual metric induced by the absolute value, d(x, y) = |y − x|. Take X = (0, 1] and Y = [−1, 0). Then dH(X, Y) = 1. However X ⊄ Y_1 because Y_1 = [−2, 1), but 1 ∈ X. But it is true that X is contained in the closure of Y_1 and Y in the closure of X_1; in particular it is true if X, Y are closed. Properties In general, dH(X, Y) may be infinite. If both X and Y are bounded, then dH(X, Y) is guaranteed to be finite. dH(X, Y) = 0 if and only if X and Y have the same closure. For every point x of M and any non-empty sets Y, Z of M: d(x,Y) ≤ d(x,Z) + dH(Y,Z), where d(x,Y) is the distance between the point x and the closest point in the set Y. |diameter(Y)-diameter(X)| ≤ 2 dH(X,Y). If the intersection X ∩ Y has a non-empty interior, then there exists a constant r > 0, such that every set X′ whose Hausdorff distance from X is less than r also intersects Y. On the set of all subsets of M, dH yields an extended pseudometric. On the set F(M) of all non-empty compact subsets of M, dH is a metric. If M is complete, then so is F(M). If M is compact, then so is F(M). The topology of F(M) depends only on the topology of M, not on the metric d. Motivation The definition of the Hausdorff distance can be derived by a series of natural extensions of the distance function in the underlying metric space M, as follows: Define a distance function between any point x of M and any non-empty set Y of M by: d(x, Y) = inf{ d(x, y) : y ∈ Y }. For example, d(1, {3,6}) = 2 and d(7, {3,6}) = 1. Define a (not-necessarily-symmetric) "distance" function between any two non-empty sets X and Y of M by: d(X, Y) = sup{ d(x, Y) : x ∈ X }. For example, d({1, 3, 6, 7}, {3, 6}) = 2. If X and Y are compact then d(X,Y) will be finite; d(X,X)=0; and d inherits the triangle inequality property from the distance function in M. As it stands, d(X,Y) is not a metric because d(X,Y) is not always symmetric, and d(X, Y) = 0 does not imply that X = Y (it does imply that X is contained in the closure of Y). For example, d({3, 6}, {1, 3, 6, 7}) = 0, but d({1, 3, 6, 7}, {3, 6}) = 2 ≠ 0.
However, we can create a metric by defining the Hausdorff distance to be: dH(X, Y) = max{ d(X, Y), d(Y, X) }. Applications In computer vision, the Hausdorff distance can be used to find a given template in an arbitrary target image. The template and image are often pre-processed via an edge detector giving a binary image. Next, each 1 (activated) point in the binary image of the template is treated as a point in a set, the "shape" of the template. Similarly, an area of the binary target image is treated as a set of points. The algorithm then tries to minimize the Hausdorff distance between the template and some area of the target image. The area in the target image with the minimal Hausdorff distance to the template can be considered the best candidate for locating the template in the target. In computer graphics the Hausdorff distance is used to measure the difference between two different representations of the same 3D object, particularly when generating level of detail for efficient display of complex 3D models. If X is the surface of Earth and Y is the land-surface of Earth, then by finding the point Nemo, we see dH(X, Y) is around 2,704.8 km. Related concepts A measure for the dissimilarity of two shapes is given by Hausdorff distance up to isometry, denoted DH. Namely, let X and Y be two compact figures in a metric space M (usually a Euclidean space); then DH(X,Y) is the infimum of dH(I(X),Y) among all isometries I of the metric space M to itself. This distance measures how far the shapes X and Y are from being isometric. The Gromov–Hausdorff convergence is a related idea: measuring the distance of two metric spaces M and N by taking the infimum of dH(I(M), J(N)) among all isometric embeddings I : M → L and J : N → L into some common metric space L. See also Wijsman convergence Kuratowski convergence Hemicontinuity Fréchet distance Hypertopology References External links Hausdorff distance between convex polygons. Using MeshLab to measure difference between two surfaces A short tutorial on how to compute and visualize the Hausdorff distance between two triangulated 3D surfaces using the open source tool MeshLab. MATLAB code for Hausdorff distance: Distance Metric geometry
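For finite point sets the definition can be implemented directly: compute the directed distance d(X, Y) in both directions and take the larger. A brute-force O(|X|·|Y|) sketch in Python (for production work, optimized implementations such as SciPy's directed_hausdorff are preferable):

```python
# Hausdorff distance between finite point sets, straight from the definition
# dH(X, Y) = max( max_x min_y d(x, y), max_y min_x d(y, x) ).
import math

def dist_point_to_set(p, S):
    # d(p, S): for a finite set the infimum is a minimum.
    return min(math.dist(p, s) for s in S)

def directed_hausdorff(X, Y):
    # d(X, Y): not symmetric in general.
    return max(dist_point_to_set(x, Y) for x in X)

def hausdorff(X, Y):
    return max(directed_hausdorff(X, Y), directed_hausdorff(Y, X))

# The one-dimensional example from the Motivation section:
X = [(1,), (3,), (6,), (7,)]
Y = [(3,), (6,)]
print(directed_hausdorff(X, Y))  # 2.0
print(directed_hausdorff(Y, X))  # 0.0
print(hausdorff(X, Y))           # 2.0
```

math.dist (Python 3.8+) takes points as coordinate sequences, which is why the one-dimensional points are written as single-element tuples.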
Hausdorff distance
[ "Physics", "Mathematics" ]
1,329
[ "Distance", "Physical quantities", "Quantity", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities" ]
15,352,314
https://en.wikipedia.org/wiki/Supermodule
In mathematics, a supermodule is a Z2-graded module over a superring or superalgebra. Supermodules arise in super linear algebra, which is a mathematical framework for studying the concept of supersymmetry in theoretical physics. Supermodules over a commutative superalgebra can be viewed as generalizations of super vector spaces over a (purely even) field K. Supermodules often play a more prominent role in super linear algebra than do super vector spaces. The reason is that it is often necessary or useful to extend the field of scalars to include odd variables. In doing so one moves from fields to commutative superalgebras and from vector spaces to modules. In this article, all superalgebras are assumed to be associative and unital unless stated otherwise. Formal definition Let A be a fixed superalgebra. A right supermodule over A is a right module E over A with a direct sum decomposition (as an abelian group) E = E0 ⊕ E1 such that multiplication by elements of A satisfies Ei·Aj ⊆ Ei+j for all i and j in Z2. The subgroups Ei are then right A0-modules. The elements of Ei are said to be homogeneous. The parity of a homogeneous element x, denoted by |x|, is 0 or 1 according to whether it is in E0 or E1. Elements of parity 0 are said to be even and those of parity 1 to be odd. If a is a homogeneous scalar and x is a homogeneous element of E then x·a is homogeneous and |x·a| = |x| + |a|. Likewise, left supermodules and superbimodules are defined as left modules or bimodules over A whose scalar multiplications respect the gradings in the obvious manner. If A is supercommutative, then every left or right supermodule over A may be regarded as a superbimodule by setting a·x = (−1)^{|a||x|} x·a for homogeneous elements a ∈ A and x ∈ E, and extending by linearity. If A is purely even this reduces to the ordinary definition. Homomorphisms A homomorphism between supermodules is a module homomorphism that preserves the grading. Let E and F be right supermodules over A. A map φ : E → F is a supermodule homomorphism if φ(x + y) = φ(x) + φ(y), φ(x·a) = φ(x)·a, and φ(Ei) ⊆ Fi for all a ∈ A and all x, y ∈ E. The set of all module homomorphisms from E to F is denoted by Hom(E, F). In many cases, it is necessary or convenient to consider a larger class of morphisms between supermodules. Let A be a supercommutative algebra. Then all supermodules over A may be regarded as superbimodules in a natural fashion. For supermodules E and F, let Hom(E, F) denote the space of all right A-linear maps (i.e. all module homomorphisms from E to F considered as ungraded right A-modules). There is a natural grading on Hom(E, F) where the even homomorphisms are those that preserve the grading, φ(Ei) ⊆ Fi, and the odd homomorphisms are those that reverse the grading, φ(Ei) ⊆ F1+i. If φ ∈ Hom(E, F) and a ∈ A are homogeneous then φ(x·a) = φ(x)·a and φ(a·x) = (−1)^{|φ||a|} a·φ(x). That is, the even homomorphisms are both right and left linear whereas the odd homomorphisms are right linear but left antilinear (with respect to the grading automorphism). The set Hom(E, F) can be given the structure of a bimodule over A by setting (a·φ)(x) = a·φ(x) and (φ·a)(x) = φ(a·x). With the above grading Hom(E, F) becomes a supermodule over A whose even part is the set of all ordinary supermodule homomorphisms. In the language of category theory, the class of all supermodules over A forms a category with supermodule homomorphisms as the morphisms. This category is a symmetric monoidal closed category under the super tensor product whose internal Hom functor is given by Hom.
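The two grading rules that do all the work here, parity additivity |x·a| = |x| + |a| (mod 2) and the supercommutative sign (−1)^{|a||x|}, can be checked mechanically. A toy sketch that tracks only parities and signs, not actual module elements (the function names are ad hoc):

```python
# Z2-grading bookkeeping: parities add mod 2 under multiplication, and
# moving a scalar past an element picks up the sign (-1)**(|a|*|x|).

def product_parity(px: int, pa: int) -> int:
    # |x.a| = |x| + |a| (mod 2)
    return (px + pa) % 2

def swap_sign(pa: int, px: int) -> int:
    # a.x = (-1)**(|a||x|) x.a over a supercommutative base algebra
    return -1 if (pa * px) % 2 else 1

# Odd times odd is even; mixed parities are odd:
assert product_parity(1, 1) == 0
assert product_parity(1, 0) == 1
# Only two odd elements anticommute:
assert swap_sign(1, 1) == -1
assert swap_sign(0, 1) == 1
assert swap_sign(0, 0) == 1
```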
Supermodule
[ "Physics", "Mathematics" ]
845
[ "Super linear algebra", "Fields of abstract algebra", "Module theory", "Supersymmetry", "Symmetry" ]
15,353,539
https://en.wikipedia.org/wiki/PTK7
Tyrosine-protein kinase-like 7, also known as colon carcinoma kinase 4 (CCK4), is a receptor tyrosine kinase that in humans is encoded by the PTK7 gene. Function Receptor protein tyrosine kinases transduce extracellular signals across the cell membrane. A subgroup of these kinases lack detectable catalytic tyrosine kinase activity but retain roles in signal transduction. The protein encoded by this gene contains an intracellular domain with tyrosine kinase homology and may function as a cell adhesion molecule. This gene is thought to be expressed in colon carcinomas but not in normal colon, and therefore may be a marker for, or may be involved in, tumor progression. Four transcript variants encoding four different isoforms have been found for this gene. PTK7 serves as a context-dependent signalling switch for the Wnt pathways (particularly in planar cell polarity related functions such as convergent extension and neural crest cell migration) and appears to have similar functions for plexin and Flt-1 pathways. PTK7 was identified as highly expressed in colon cancer by Saha et al. using serial analysis of gene expression (LongSAGE). Pfizer is targeting PTK7 for cancer by generating an antibody-drug conjugate against the PTK7 receptor. References Further reading Tyrosine kinase receptors
PTK7
[ "Chemistry" ]
279
[ "Tyrosine kinase receptors", "Signal transduction" ]
15,354,008
https://en.wikipedia.org/wiki/Fred%20S.%20Keller
Fred Simmons Keller (January 2, 1899 – February 2, 1996) was an American psychologist and a pioneer in experimental psychology. He taught at Columbia University for 26 years and gave his name to the Keller Plan, also known as the Personalized System of Instruction, an individually paced, mastery-oriented teaching method that has had a significant impact on college-level science education. He died at home, age 97, on February 2, 1996, in Chapel Hill, North Carolina. Life Keller was born on January 2, 1899, on a farm near Rural Grove, New York, in Montgomery County. He was the only child of Vrooming Barney Keller and Minnie Vanderveer Simmons. Due to instability in his childhood, he left school at an early age to pursue employment as a Western Union telegrapher in Saranac Lake, New York. He enlisted in the U.S. Army in 1918 during World War I and served at Camp Jackson, South Carolina, in addition to active duty overseas in France and Germany. While in France, he served in battle five times, and in Germany he participated in the Army of Occupation. Keller left the military with the rank of sergeant in August 1919. Keller attended Goddard Seminary on an athletic scholarship in 1919. However, he was accepted to Tufts College in 1920 through a scholarship and decided to attend there, majoring in English literature. He had originally left school because of problems with his attendance; during his time away from school, he read John Watson's Psychology from the Standpoint of a Behaviorist, and he returned to Tufts with a focus in psychology a few years later. He earned a B.S. from Tufts College in 1926 and was awarded an academic position that he held for two years. Keller attended Harvard University for graduate school following his bachelor's degree, in addition to teaching at Tufts. He also held a position at Harvard College as a laboratory assistant and a tutor. Keller received his master's degree in 1928 and left Harvard College in 1931. During his time at Harvard, Keller took classes under E. G. Boring, and he met fellow graduate student and famous psychologist B. F. Skinner; the two roomed together and became lifelong friends. He found a job at Colgate University during the Great Depression and remained there for seven years until 1938. Following Colgate University he was offered a position at Columbia University; he was named assistant professor in 1942, associate professor in 1946, and professor of psychology in 1950. He also served as chairman of the department from 1959 to 1962 and became professor emeritus of psychology in 1964, the same year he retired from the university. In 1963 Keller was invited to São Paulo, Brazil, to incorporate his new psychology into the existing academic curriculum. Keller returned to the U.S. and was later invited back to Brazil, this time to the University of Brasília, where he was asked to continue the development of his new findings. Upon returning for good, Keller found employment at the Institute for Behavioral Research, as well as Western Michigan University, Texas Christian University, and Georgetown University. Keller spent his time as a visiting professor, adjunct professor, and department chair at the listed universities. He retired in May 1976 from Georgetown University. Keller's students appreciated him as their instructor and appreciated his frequent reinforcement of their behavior.
His "theoretical commitment to the power of positive reinforcement... he delivered were never routine" but "mechanical repetitions of 'uhhunh' or 'good. It was seen as a possible plan to increase academic performance and often singled out his students to denote praise. He was recognized throughout his life and was liked by many. After his death, a memorial was held in San Francisco by the Association for Behavior Analysis, in which over two hundred people took part in. Keller was also recognized for his "effects of his lectures on one student's behavior ... efficacy of his teaching" and "graceful flow of sentence into paragraph". Contributions to psychology Inspired by B. F. Skinner, Keller proposed a reinforcement theory that applies to his teaching in addition to research. He and William N. Schoenfeld produced a textbook as well as an introductory laboratory course in psychology that related animal psychology with white rats as their subjects. The textbook was called "Principles of Psychology," published in 1950. The book emphasized scientific methods in the study of psychology such as escape, avoidance, conflict, cooperation, imitation, verbal behavior, thinking, and concept formation. In addition to providing a course that initiated the first use of experimental analysis of behavior during his time at Columbia University, Keller and Schoenfeld provided a more concise understanding for concepts that Skinner proposed in his The Behavior of Organisms. Concepts were written in a clearer way for students to understand and was more appropriate for an undergraduate beginners course. In the lab, students were able to test the methods they were learning of in their lecture classes for the first time. Among their experiments, the students observed the responses of white rats to stimuli and rewards and measured human learning by testing people's ability to remember the pathways of mazes and other sensory processes. Keller and Skinner interacted frequently, and even collectively held the 1947 conference on the Experimental Analysis of Behavior. The audience consisted of students from both colleges of Skinner and Keller and the precursors to their new movement. With the aid of students, Skinner and Keller were able to formulate, at the conferences held up until 1950, the Journal of the Experimental Analysis of Behavior, the Journal of Applied Behavior Analysis, Division 25 of the APA, the Association for Behavior Analysis, The Behavior Analyst, Analysis of Verbal Behavior, the Cambridge Center for Behavioral Studies, Behavior and Social Issues, and Behavior and Philosophy. Keller subsequently studied learning in addition to behavior. He was the first to administer Skinner's previous findings into real-world applications by the process of transcribing auditory signals of Morse code into English. The method he used was called code-voice, which resembled Skinner's programmed instruction, which was fancied by the US Army. Code-voice was used mostly in the Signal Corps, although it was also used in other divisions, and became one of the most used methods in radio-operator training. The new method "represented an early application of the laws of learning to practical human affairs and served as a model for the study of several other skills" and was awarded by President Truman a certificate of Merit in 1948. He was a fellow of the American Psychological Association and a past president of the Eastern Psychological Association. 
He received the Distinguished Teaching Award from the American Psychological Foundation in 1970. Brazil Concluding his years at Columbia University, he was invited to the University of São Paulo as a visiting professor under the Fulbright–Hays program. He spent one year in Brazil during 1961, establishing his reinforcement theory within Brazilian psychology. Keller constructed multiple experimental instruments used to test reinforcement, such as living cages built from ordinary resources: hard-wire cloth, wooden frames, bent wire that acted as levers, pencils, watches, and cocktail stirrers. Keller initiated a new psychology in Brazil and was widely recognized for it. His influence at the university led multiple students to travel to the US to continue his work. Several years after returning from Brazil he received an invitation to come back, in 1963. He was asked to create a novel department, designed completely by him (carta branca), built around his "personalized system of instruction". He spent his time at the University of Brasília developing his new psychological program, which he was forced to leave in 1964. He continued his work in the US on the PSI, also known as the Keller Plan, with several other theorists, and his work was continued in Brazil by his students. His work was also translated into Portuguese. Personalized System of Instruction (PSI) Keller's paper "Good-bye, teacher...", published in the Journal of Applied Behavior Analysis in 1968, introduced the concept of the Personalized System of Instruction (PSI). This later led to the "mastery learning" plan. When Keller returned to the US, he continued the work he had started in Brazil dealing with the personalized system of instruction. His work was developed in collaboration with J. G. Sherman at Arizona State University, where he remained for three years. During this time, J. G. Sherman and Keller advanced principles associated with the PSI. These included "(1) The semester's work was divided into, let's say, twenty units, in a logical progression such that the necessary background for each unit was contained in the units that went before. (2) Repeated tests were given, administered by student "proctors" selected from those who had been most successful during the previous semester, and each student would continue testing on a given unit of the material until he had demonstrated satisfactory proficiency. Erroneous answers could be discussed, or even debated with the student proctor until a clear understanding was achieved. (3) The criterion for moving on to the next unit was not a particular understanding, but 'mastery' of a given unit of material. Eventually, all students, if they persisted, were expected to turn in a performance. (4) Each student worked at his own pace". Students were encouraged to master each concept before moving on to a subsequent concept. This allowed each student to work at their own pace without fear of falling behind (a small simulation of this mastery rule appears below). Following Arizona State, Keller joined the Institute for Behavioral Research in Silver Spring, Maryland, in 1967. During his time there, he developed several additional concepts in regard to the PSI and published the well-known paper "Good-bye, teacher...", in which the concept of PSI was first presented. In 1970 he received the Distinguished Teaching Award from the APA, as well as honorary Doctor of Science degrees from Long Island University and, in 1976, from Colgate University.
Additionally, he was awarded the Behavioral Scientist Medal by the Institute for Behavioral Research in 1973 and the Doctor of Humane Letters degree in 1976. References See also Behavioral activation 1899 births 1996 deaths 20th-century American psychologists Colgate University faculty Columbia University faculty Harvard Graduate School of Arts and Sciences alumni Tufts University alumni People from Chapel Hill, North Carolina Fellows of the American Psychological Association Behaviourist psychologists James McKeen Cattell Fellow Award recipients
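The four PSI principles quoted above amount to a simple control loop: a student retakes each unit's test until mastery, then advances, at their own pace. A toy simulation under an invented constant pass probability (real mastery data would not be Bernoulli, so this only illustrates the self-pacing logic):

```python
# Toy simulation of PSI's mastery rule: advance to the next unit only after
# passing its test, retesting as often as needed. Pass probability is invented.
import random

def run_student(n_units=20, mastery_prob=0.6, seed=1):
    rng = random.Random(seed)
    attempts_per_unit = []
    for _ in range(n_units):
        attempts = 1
        while rng.random() > mastery_prob:  # retest until mastery
            attempts += 1
        attempts_per_unit.append(attempts)
    return attempts_per_unit

attempts = run_student()
print(f"tests taken: {sum(attempts)} across {len(attempts)} units")
```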
Fred S. Keller
[ "Biology" ]
2,122
[ "Behaviourist psychologists", "Behavior", "Behaviorism" ]
15,354,559
https://en.wikipedia.org/wiki/SOX12
SOX12 is a protein that in humans is encoded by the SOX12 gene. Sox12 belongs to the SoxC group of the Sox family of transcription factors, together with Sox4 and Sox11. Sox12-null knockout mice appear normal, unlike Sox4 or Sox11 knockout mice. This probably results from functional redundancy with Sox4 and Sox11. Sox12 is a weaker activator than both Sox4 and Sox11 in mouse. Members of the SOX family of transcription factors are characterized by the presence of a DNA-binding high mobility group (HMG) domain, homologous to the HMG box of sex-determining region Y (SRY). Forming a subgroup of the HMG domain superfamily, SOX proteins have been implicated in cell fate decisions in a diverse range of developmental processes. SOX transcription factors have diverse tissue-specific expression patterns during early development and have been proposed to act as target-specific transcription factors and/or as chromatin structure regulatory elements. The protein encoded by this gene was identified as a SOX family member based on conserved domains, and its expression in various tissues suggests a role in both differentiation and maintenance of several cell types. References Further reading Transcription factors
SOX12
[ "Chemistry", "Biology" ]
242
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,358,528
https://en.wikipedia.org/wiki/Calcium%20silicate%20hydrate
Calcium silicate hydrates (CSH or C-S-H) are the main products of the hydration of Portland cement and are primarily responsible for the strength of cement-based materials. They are the main binding phase (the "glue") in most concrete. Only well-defined and rare natural crystalline minerals can be abbreviated as CSH, while the extremely variable and poorly ordered phases without well-defined stoichiometry commonly observed in hardened cement paste (HCP) are denoted C-S-H. Preparation When water is added to cement, each of the compounds undergoes hydration and contributes to the final state of the concrete. Only the calcium silicates contribute to the strength. Tricalcium silicate is responsible for most of the early strength (first 7 days). Dicalcium silicate, which reacts more slowly, only contributes to late strength. Calcium silicate hydrate (also shown as C-S-H) is a result of the reaction between the silicate phases of Portland cement and water. This reaction typically is expressed as: 2 Ca3SiO5 + 7 H2O → 3 CaO·2SiO2·4H2O + 3 Ca(OH)2 + heat also written in cement chemist notation (CCN) as: 2 C3S + 7 H → C3S2H4 + 3 CH + heat or, tricalcium silicate + water → calcium silicate hydrate + calcium hydroxide + heat The stoichiometry of C-S-H in cement paste is variable and the state of chemically and physically bound water in its structure is not well established, which is why "-" is used between C, S, and H. Synthetic C-S-H can be prepared from the reaction of CaO and SiO2 in water or through the double precipitation method using various salts. These methods provide the flexibility of producing C-S-H at specific C/S (Ca/Si, or CaO/SiO2) ratios (see the sketch below). The C-S-H from cement phases can also be treated with an ammonium nitrate solution in order to induce calcium leaching, and so to achieve a given C/S ratio. Properties C-S-H is a nano-sized material with some degree of crystallinity, as observed by X-ray diffraction techniques. The underlying atomic structure of C-S-H is similar to that of the naturally occurring mineral tobermorite. It has a layered geometry with a calcium silicate sheet structure separated by an interlayer space. The silicates in C-S-H exist as dimers, pentamers and 3n−1 chain units (where n is an integer greater than 0), and calcium ions are found to connect these chains, making up the three-dimensional nanostructure observed by dynamic nuclear polarisation surface-enhanced nuclear magnetic resonance. The exact nature of the interlayer remains unknown. One of the greatest difficulties in characterising C-S-H is due to its variable stoichiometry. Scanning electron microscope micrographs of C-S-H do not show any specific crystalline form. They usually manifest as foils or oriented needle-like foils. Synthetic C-S-H can be divided into two categories separated at the Ca/Si ratio of about 1.1. There are several indications that the chemical, physical and mechanical characteristics of C-S-H vary noticeably between these two categories. See also Other C-S-H minerals: (a rare mineral from hydrothermal alteration, or an ageing product of alkali-silica reaction) Other calcium aluminium silicate hydrate (C-A-S-H) minerals Mechanisms of formation of C-S-H phases: References Calcium compounds Cement Concrete Hydrates Inorganic compounds Silicates Phyllosilicates
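Since synthetic C-S-H is prepared to hit a target C/S (CaO/SiO2) molar ratio, the batch arithmetic is worth making explicit. A minimal sketch using standard molar masses (the function names are ad hoc, not from any lab protocol):

```python
# C/S (Ca/Si) molar ratio of a CaO + SiO2 batch, and the inverse problem of
# how much CaO a batch needs for a target ratio. Molar masses in g/mol.
M_CAO = 56.08
M_SIO2 = 60.08

def cs_ratio(mass_cao_g: float, mass_sio2_g: float) -> float:
    return (mass_cao_g / M_CAO) / (mass_sio2_g / M_SIO2)

def cao_mass_for_target(target_cs: float, mass_sio2_g: float) -> float:
    return target_cs * (mass_sio2_g / M_SIO2) * M_CAO

print(round(cs_ratio(10.0, 10.0), 2))            # equal masses give C/S ~ 1.07
print(round(cao_mass_for_target(1.5, 10.0), 2))  # ~14.0 g CaO for C/S = 1.5
```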
Calcium silicate hydrate
[ "Chemistry", "Engineering" ]
759
[ "Structural engineering", "Inorganic compounds", "Concrete", "Hydrates" ]
15,360,151
https://en.wikipedia.org/wiki/ChIP%20sequencing
ChIP-sequencing, also known as ChIP-seq, is a method used to analyze protein interactions with DNA. ChIP-seq combines chromatin immunoprecipitation (ChIP) with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. It can be used to map global binding sites precisely for any protein of interest. Previously, ChIP-on-chip was the most common technique utilized to study these protein–DNA relations. Uses ChIP-seq is primarily used to determine how transcription factors and other chromatin-associated proteins influence phenotype-affecting mechanisms. Determining how proteins interact with DNA to regulate gene expression is essential for fully understanding many biological processes and disease states. This epigenetic information is complementary to genotype and expression analysis. ChIP-seq technology is currently seen primarily as an alternative to ChIP-chip, which requires a hybridization array. This introduces some bias, as an array is restricted to a fixed number of probes. Sequencing, by contrast, is thought to have less bias, although the sequencing bias of different sequencing technologies is not yet fully understood. Specific DNA sites in direct physical interaction with transcription factors and other proteins can be isolated by chromatin immunoprecipitation. ChIP produces a library of target DNA sites bound to a protein of interest. Massively parallel sequence analyses are used in conjunction with whole-genome sequence databases to analyze the interaction pattern of any protein with DNA, or the pattern of any epigenetic chromatin modifications. This can be applied to the set of ChIP-able proteins and modifications, such as transcription factors, polymerases and transcriptional machinery, structural proteins, protein modifications, and DNA modifications. As an alternative to the dependence on specific antibodies, different methods have been developed to find the superset of all nucleosome-depleted or nucleosome-disrupted active regulatory regions in the genome, like DNase-Seq and FAIRE-Seq. Workflow of ChIP-sequencing ChIP ChIP is a powerful method to selectively enrich for DNA sequences bound by a particular protein in living cells. However, the widespread use of this method has been limited by the lack of a sufficiently robust method to identify all of the enriched DNA sequences. The ChIP wet lab protocol contains ChIP and hybridization. There are essentially five parts to the ChIP protocol that aid in better understanding of the overall process of ChIP. To carry out the ChIP, the first step is cross-linking using formaldehyde, with large batches of DNA used in order to obtain a useful amount. The cross-links are made between the protein and DNA, but also between RNA and other proteins. The second step is the process of chromatin fragmentation, which breaks up the chromatin in order to get high-quality DNA pieces for ChIP analysis in the end. These fragments should be cut to under 500 base pairs each for the best outcome in genome mapping. The third step is called chromatin immunoprecipitation, which is what ChIP is short for. The ChIP process enriches specific crosslinked DNA–protein complexes using an antibody against the protein of interest, followed by incubation and centrifugation to obtain the immunoprecipitate. The immunoprecipitation step also allows for the removal of non-specific binding sites.
The fourth step is DNA recovery and purification, which takes place by reversing the cross-links between DNA and protein to separate them, and cleaning the DNA by extraction. The fifth and final step is the analysis step of the ChIP protocol, by the process of qPCR, ChIP-on-chip (hybrid array) or ChIP sequencing. Oligonucleotide adaptors are then added to the small stretches of DNA that were bound to the protein of interest to enable massively parallel sequencing. Through the analysis, the sequences can then be identified and interpreted by the gene or region to which the protein was bound. Sequencing After size selection, all the resulting ChIP-DNA fragments are sequenced simultaneously using a genome sequencer. A single sequencing run can scan for genome-wide associations with high resolution, meaning that features can be located precisely on the chromosomes. ChIP-chip, by contrast, requires large sets of tiling arrays for lower resolution. There are many new sequencing methods used in this sequencing step. Some technologies that analyze the sequences can use cluster amplification of adapter-ligated ChIP DNA fragments on a solid flow cell substrate to create clusters of approximately 1000 clonal copies each. The resulting high-density array of template clusters on the flow cell surface is then sequenced. Each template cluster undergoes sequencing-by-synthesis in parallel using novel fluorescently labelled reversible terminator nucleotides. Templates are sequenced base-by-base during each read. Then, the data collection and analysis software aligns sample sequences to a known genomic sequence to identify the ChIP-DNA fragments. Quality control ChIP-seq offers a fast analysis; however, quality control must be performed to make sure that the results obtained are reliable: Non-redundant fraction: low-complexity regions should be removed as they are not informative and may interfere with mapping in the reference genome. Fragments in peaks: ratio of reads that are located in peaks over reads located outside peaks. Sensitivity Sensitivity of this technology depends on the depth of the sequencing run (i.e. the number of mapped sequence tags), the size of the genome and the distribution of the target factor. The sequencing depth is directly correlated with cost. If abundant binders in large genomes have to be mapped with high sensitivity, costs are high, as an enormously high number of sequence tags will be required. This is in contrast to ChIP-chip, in which the costs are not correlated with sensitivity. Unlike microarray-based ChIP methods, the precision of the ChIP-seq assay is not limited by the spacing of predetermined probes. By integrating a large number of short reads, highly precise binding site localization is obtained. Compared to ChIP-chip, ChIP-seq data can be used to locate the binding site within a few tens of base pairs of the actual protein binding site. Tag densities at the binding sites are a good indicator of protein–DNA binding affinity, which makes it easier to quantify and compare binding affinities of a protein to different DNA sites. Current research STAT1 DNA association: ChIP-seq was used to study STAT1 targets in HeLa S3 cells, which are clones of the HeLa line that are used for analysis of cell populations. The performance of ChIP-seq was then compared to the alternative protein–DNA interaction methods of ChIP-PCR and ChIP-chip.
Nucleosome Architecture of Promoters: Using ChIP-seq, it was determined that yeast genes seem to have a minimal nucleosome-free promoter region of 150 bp in which RNA polymerase can initiate transcription. Transcription factor conservation: ChIP-seq was used to compare conservation of TFs in the forebrain and heart tissue in embryonic mice. The authors identified and validated the heart functionality of transcription enhancers, and determined that transcription enhancers for the heart are less conserved than those for the forebrain during the same developmental stage. Genome-wide ChIP-seq: ChIP-sequencing was completed on the worm C. elegans to explore genome-wide binding sites of 22 transcription factors. Up to 20% of the annotated candidate genes were assigned to transcription factors. Several transcription factors were assigned to non-coding RNA regions and may be subject to developmental or environmental variables. The functions of some of the transcription factors were also identified. Some of the transcription factors regulate genes that control other transcription factors. These genes are not regulated by other factors. Most transcription factors serve as both targets and regulators of other factors, demonstrating a network of regulation. Inferring regulatory network: ChIP-seq signals of histone modification were shown to be more correlated with transcription factor motifs at promoters than RNA levels. Hence the authors proposed that using histone modification ChIP-seq would provide more reliable inference of gene-regulatory networks in comparison to other methods based on expression. ChIP-seq offers an alternative to ChIP-chip. STAT1 experimental ChIP-seq data have a high degree of similarity to results obtained by ChIP-chip for the same type of experiment, with greater than 64% of peaks in shared genomic regions. Because the data are sequence reads, ChIP-seq offers a rapid analysis pipeline as long as a high-quality genome sequence is available for read mapping and the genome doesn't have repetitive content that confuses the mapping process. ChIP-seq also has the potential to detect mutations in binding-site sequences, which may directly support any observed changes in protein binding and gene regulation. Computational analysis As with many high-throughput sequencing approaches, ChIP-seq generates extremely large data sets, for which appropriate computational analysis methods are required. To predict DNA-binding sites from ChIP-seq read count data, peak calling methods have been developed (a toy illustration appears at the end of this article). One of the most popular methods is MACS, which empirically models the shift size of ChIP-Seq tags and uses it to improve the spatial resolution of predicted binding sites. MACS is optimized for higher resolution peaks, while another popular algorithm, SICER, is programmed to call broader peaks, spanning kilobases to megabases, in order to search for broader chromatin domains. SICER is more useful for histone marks spanning gene bodies. A mathematically more rigorous method, BCP (Bayesian Change Point), can be used for both sharp and broad peaks with faster computational speed; see the benchmark comparison of ChIP-seq peak-calling tools by Thomas et al. (2017). Another relevant computational problem is differential peak calling, which identifies significant differences in two ChIP-seq signals from distinct biological conditions. Differential peak callers segment two ChIP-seq signals and identify differential peaks using Hidden Markov Models.
Examples of two-stage differential peak callers are ChIPDiff and ODIN. To reduce spurious sites from ChIP-seq, multiple experimental controls can be used to detect binding sites from an IP experiment. Bay2Ctrls adopts a Bayesian model to integrate the DNA input control for the IP, the mock IP and its corresponding DNA input control to predict binding sites from the IP. This approach is particularly effective for complex samples such as whole model organisms. In addition, the analysis indicates that for complex samples mock IP controls substantially outperform DNA input controls, probably due to the active genomes of the samples. See also ChIP-on-chip ChIP-PCR ChIP-PET Mammalian promoter database Similar methods CUT&RUN sequencing, antibody-targeted controlled cleavage by micrococcal nuclease instead of ChIP, allowing for enhanced signal-to-noise ratio during sequencing. CUT&Tag sequencing, antibody-targeted controlled cleavage by transposase Tn5 instead of ChIP, allowing for enhanced signal-to-noise ratio during sequencing. Sono-Seq, identical to ChIP-Seq but skipping the immunoprecipitation step. HITS-CLIP (also called CLIP-Seq), for finding interactions with RNA rather than DNA. PAR-CLIP, another method for identifying the binding sites of cellular RNA-binding proteins (RBPs). RIP-Chip, same goal and first steps, but does not use cross-linking methods and uses microarray instead of sequencing. SELEX, a method for finding a consensus binding sequence. Competition-ChIP, to measure relative replacement dynamics on DNA. ChiRP-Seq, to measure RNA-bound DNA and proteins. ChIP-exo, uses exonuclease treatment to achieve up to single base-pair resolution. ChIP-nexus, improved version of ChIP-exo to achieve up to single base-pair resolution. DRIP-seq, uses the S9.6 antibody to precipitate three-stranded DNA:RNA hybrids called R-loops. TCP-seq, principally similar method to measure mRNA translation dynamics. Calling Cards, uses a transposase to mark the sequence where a transcription factor binds. References External links ReMap catalogue: An integrative and uniform ChIP-Seq analysis of regulatory elements from +2800 ChIP-seq datasets, giving a catalogue of 80 million peaks from 485 transcription regulators. ChIPBase database: a database for exploring transcription factor binding maps from ChIP-Seq data. It provides the most comprehensive ChIP-Seq data set for various cell/tissue types and conditions. GeneProf database and analysis tool: GeneProf is a freely accessible, easy-to-use analysis environment for ChIP-seq and RNA-seq data and comes with a large database of ready-analysed public experiments, e.g. for transcription factor binding and histone modifications. Differential Peak Calling: Tutorial for differential peak calling with ODIN. Bioinformatic analysis of ChIP-seq data: Comprehensive analysis of ChIP-seq data. KLTepigenome: Uncovering correlated variability in epigenomic datasets using the Karhunen–Loève transform. SignalSpider: a tool for probabilistic pattern discovery on multiple normalized ChIP-Seq signal profiles. FullSignalRanker: a tool for regression and peak prediction on multiple normalized ChIP-Seq signal profiles. Biotechnology DNA Genomics techniques Proteomic sequencing
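The peak-calling idea discussed under Computational analysis can be illustrated with a deliberately naive caller: flag genomic bins whose read count exceeds a fold-over-background threshold and merge adjacent flagged bins into peaks. Real tools (MACS, SICER, BCP) use far more careful statistics; this sketch only shows the shape of the problem, and all numbers are invented:

```python
# Naive peak caller over binned read counts: threshold at fold * background,
# then merge consecutive above-threshold bins into half-open peak intervals.

def call_peaks(counts, fold=2.0, min_background=1.0):
    background = max(min_background, sum(counts) / len(counts))
    threshold = fold * background
    peaks, start = [], None
    for i, c in enumerate(counts):
        if c >= threshold and start is None:
            start = i                    # open a candidate peak
        elif c < threshold and start is not None:
            peaks.append((start, i))     # close it as [start, i)
            start = None
    if start is not None:
        peaks.append((start, len(counts)))
    return peaks

# Binned coverage with one enriched region around bins 4-6:
coverage = [1, 2, 1, 0, 9, 14, 12, 2, 1, 0, 1]
print(call_peaks(coverage))  # [(4, 7)]
```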
ChIP sequencing
[ "Chemistry", "Biology" ]
2,774
[ "Genetics techniques", "Genomics techniques", "Proteomic sequencing", "Biotechnology", "Molecular biology techniques", "nan" ]
15,363,710
https://en.wikipedia.org/wiki/Modelling%20and%20Simulation%20in%20Materials%20Science%20and%20Engineering
Modelling and Simulation in Materials Science and Engineering is a peer-reviewed scientific journal published by IOP Publishing eight times per year. The journal covers computational materials science, including the properties, structure, and behavior of all classes of materials at scales from the atomic to the macroscopic. This includes electronic structure/properties of materials determined by ab initio and/or semi-empirical methods, atomic-level properties of materials, microstructural-level phenomena, continuum-level modelling pertaining to material behaviour, and modelling of behaviour in service. Mechanical, microstructural, electronic, chemical, biological, and optical properties of materials are also of interest. The editor-in-chief is Javier Llorca (Polytechnic University of Madrid & IMDEA Materials Institute, Spain). Abstracting and indexing The journal is abstracted and indexed by: See also Journal of Physics: Condensed Matter External links IOP Publishing academic journals Materials science journals Academic journals established in 1992 English-language journals Computational modeling journals
Modelling and Simulation in Materials Science and Engineering
[ "Materials_science", "Engineering" ]
201
[ "Materials science journals", "Materials science" ]
10,212,831
https://en.wikipedia.org/wiki/Activity-based%20proteomics
Activity-based proteomics, or activity-based protein profiling (ABPP), is a functional proteomic technology that uses chemical probes that react with mechanistically related classes of enzymes. Description The basic unit of ABPP is the probe, which typically consists of two elements: a reactive group (RG, sometimes called a "warhead") and a tag. Additionally, some probes may contain a binding group which enhances selectivity. The reactive group usually contains a specially designed electrophile that becomes covalently linked to a nucleophilic residue in the active site of an active enzyme. An enzyme that is inhibited or post-translationally modified will not react with an activity-based probe. The tag may be either a reporter, such as a fluorophore, or an affinity label, such as biotin, or an alkyne or azide for use with the Huisgen 1,3-dipolar cycloaddition (also known as click chemistry). Advantages A major advantage of ABPP is the ability to monitor the availability of the enzyme active site directly, rather than being limited to protein or mRNA abundance. With classes of enzymes such as the serine hydrolases and metalloproteases, which often interact with endogenous inhibitors or exist as inactive zymogens, this technique offers a valuable advantage over traditional techniques that rely on abundance rather than activity. Furthermore, ABPP can be used to target specific proteins that were previously viewed as undruggable. Multidimensional protein identification technology In recent years, ABPP has been combined with tandem mass spectrometry, enabling the identification of hundreds of active enzymes from a single sample. This technique, known as ABPP-MudPIT (multidimensional protein identification technology), is especially useful for profiling inhibitor selectivity, as the potency of an inhibitor can be tested against hundreds of targets simultaneously. ABPP was first reported in the 1990s in the study of proteases. See also Mass spectrometry Proteomics Related inhibitors MAFP and DIFP Chemoproteomics References Proteomics Genomics Protein methods Mass spectrometry
Activity-based proteomics
[ "Physics", "Chemistry", "Biology" ]
445
[ "Biochemistry methods", "Spectrum (physical sciences)", "Instrumental analysis", "Protein methods", "Mass", "Protein biochemistry", "Mass spectrometry", "Matter" ]
10,215,842
https://en.wikipedia.org/wiki/Superdeterminism
In quantum mechanics, superdeterminism is a loophole in Bell's theorem. If all systems being measured are correlated with the choices of which measurements to make on them, the assumptions of the theorem are no longer fulfilled. A hidden variables theory which is superdeterministic can thus fulfill Bell's notion of local causality and still violate the inequalities derived from Bell's theorem. This makes it possible to construct a local hidden-variable theory that reproduces the predictions of quantum mechanics, for which a few toy models have been proposed. In addition to being deterministic, superdeterministic models also postulate correlations between the state that is measured and the measurement setting. Overview Bell's theorem assumes that the measurements performed at each detector can be chosen independently of each other and of the hidden variables that determine the measurement outcome. This relation is often referred to as measurement independence or statistical independence. In a superdeterministic theory this relation is not fulfilled; the hidden variables are necessarily correlated with the measurement setting. Since the choice of measurements and the hidden variable are predetermined, the results at one detector can depend on which measurement is done at the other without any need for information to travel faster than the speed of light. The assumption of statistical independence is sometimes referred to as the free choice or free will assumption, since its negation implies that human experimentalists are not free to choose which measurement to perform. It is possible to test restricted versions of superdeterminism that posit that the correlations between the hidden variables and the choice of measurement were established in the recent past. In general, though, superdeterminism is fundamentally untestable, as the correlations can be postulated to exist since the Big Bang, making the loophole impossible to eliminate. In the 1980s, John Stewart Bell discussed superdeterminism in a BBC interview: There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the "decision" by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster than light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already "knows" what that measurement, and its outcome, will be. Although he acknowledged the loophole, he also argued that it was implausible. Even if the measurements performed are chosen by deterministic random number generators, the choices can be assumed to be "effectively free for the purpose at hand," because the machine's choice is altered by a large number of very small effects. It is unlikely that the hidden variable would be sensitive to all of the same small influences that affect the random number generator.
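The role of the statistical-independence assumption can be made concrete with a toy calculation. The sketch below is purely illustrative and corresponds to no published superdeterministic model: the hidden variable shared by each particle pair is drawn conditioned on both detector settings, i.e. measurement independence is violated, and the locally computed outcomes then reproduce the singlet correlation E(a,b) = -cos(a - b) and violate the CHSH bound |S| <= 2.

import numpy as np

rng = np.random.default_rng(0)

def pair_correlation(a, b, n=100_000):
    # The "source" already knows the settings a and b, so it can draw the
    # hidden variable (here simply: whether the two outcomes agree) with
    # exactly the statistics needed to mimic quantum mechanics.
    p_agree = (1 - np.cos(a - b)) / 2   # yields E[A*B] = -cos(a - b)
    agree = rng.random(n) < p_agree
    A = rng.choice([-1, 1], size=n)     # Alice's outcome, computed locally
    B = np.where(agree, A, -A)          # Bob's outcome, computed locally
    return np.mean(A * B)

a0, a1 = 0.0, np.pi / 2                 # Alice's two measurement settings
b0, b1 = np.pi / 4, 3 * np.pi / 4       # Bob's two measurement settings
S = (pair_correlation(a0, b0) - pair_correlation(a0, b1)
     + pair_correlation(a1, b0) + pair_correlation(a1, b1))
print(abs(S))                           # ~ 2*sqrt(2) = 2.83 > 2

The point is not that nature works this way, but that once statistical independence is dropped, Bell's inequalities place no constraint on local models.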
Nobel Prize in Physics winner Gerard 't Hooft also discussed this loophole with John Bell in the early 1980s. According to the physicist Anton Zeilinger, if superdeterminism is true, some of its implications would bring into question the value of science itself by destroying falsifiability: [W]e always implicitly assume the freedom of the experimentalist... This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature. Physicists Sabine Hossenfelder and Tim Palmer have argued that superdeterminism "is a promising approach not only to solve the measurement problem, but also to understand the apparent non-locality of quantum physics". Howard M. Wiseman and Eric Cavalcanti argue that any hypothetical superdeterministic theory "would be about as plausible, and appealing, as belief in ubiquitous alien mind-control". Examples The first superdeterministic hidden-variables model was put forward by Carl H. Brans in 1988. Other models were proposed in 2010 by Michael Hall, and in 2022 by Donadi and Hossenfelder. Gerard 't Hooft has referred to his cellular automaton model of quantum mechanics as superdeterministic, though it has remained unclear whether it fulfills the definition. Some authors consider retrocausality in quantum mechanics to be an example of superdeterminism, whereas other authors treat the two cases as distinct. No agreed-upon definition for distinguishing them exists. See also Hard determinism Necessitarianism Laplace's demon De Broglie–Bohm theory Many-worlds interpretation Quantum entanglement Free will theorem References External links Quantum mechanics Philosophy of physics
Superdeterminism
[ "Physics" ]
1,040
[ "Philosophy of physics", "Applied and interdisciplinary physics" ]
10,215,849
https://en.wikipedia.org/wiki/Atomistix%20ToolKit
QuantumATK (formerly Atomistix ToolKit or ATK) is a commercial software package for atomic-scale modeling and simulation of nanosystems. The software was originally developed by Atomistix A/S, and was later acquired by QuantumWise following the Atomistix bankruptcy. QuantumWise was then acquired by Synopsys in 2017. Atomistix ToolKit is a further development of TranSIESTA-C, which in turn is based on the technology, models, and algorithms developed in the academic codes TranSIESTA and McDCal, employing localized basis sets as developed in SIESTA. Features Atomistix ToolKit combines density functional theory with non-equilibrium Green's functions for first-principles electronic structure and transport calculations of electrode—nanostructure—electrode systems (two-probe systems) molecules periodic systems (bulk crystals and nanotubes) The key features are Calculation of transport properties of two-probe systems under an applied bias voltage Calculation of energy spectra, wave functions, electron densities, atomic forces, effective potentials, etc. Calculation of spin-polarized physical properties Geometry optimization A Python-based NanoLanguage scripting environment See also Atomistix Virtual NanoLab — a graphical user interface NanoLanguage Atomistix Quantum chemistry computer programs Molecular mechanics programs References External links QuantumWise web site Nanotechnology companies Computational science Computational chemistry software Physics software Density functional theory software Computational physics
Atomistix ToolKit
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
287
[ "Quantum chemistry stubs", "Materials science stubs", "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Applied mathematics", "Nanotechnology companies", "Computational physics", "Computational science", "Computational chemistry", "C...
10,215,914
https://en.wikipedia.org/wiki/NanoLanguage
NanoLanguage is a scripting interface built on top of the interpreted programming language Python, and is primarily intended for simulation of physical and chemical properties of nanoscale systems. Introduction Over the years, several electronic-structure codes based on density functional theory have been developed by different groups of academic researchers; VASP, Abinit, SIESTA, and Gaussian are just a few examples. The input to these programs is usually a simple text file written in a code-specific format with a set of code-specific keywords. NanoLanguage was introduced by Atomistix A/S as an interface to Atomistix ToolKit (version 2.1) in order to provide a more flexible input format. A NanoLanguage script (or input file) is just a Python program and can be anything from a few lines to a script performing complex numerical simulations, communicating with other scripts and files, and with other software (e.g. plotting programs). NanoLanguage is not a proprietary product of Atomistix and can be used as an interface to other density functional theory codes as well as to codes utilizing e.g. tight-binding, k.p, or quantum-chemical methods. Features Built on top of Python, NanoLanguage includes the same functionality as Python and with the same syntax. Hence, NanoLanguage contains, among other features, common programming elements (for loops, if statements, etc.), mathematical functions, and data arrays. In addition, a number of concepts and objects relevant to quantum chemistry and physics are built into NanoLanguage, e.g. a periodic table, a unit system (including both SI units and atomic units like Ångström), constructors of atomic geometries, and different functions for density-functional theory and transport calculations. Example This NanoLanguage script uses the Kohn–Sham method to calculate the total energy of a water molecule as a function of the bending angle. # Define function for molecule setup def waterConfiguration(angle, bondLength): from math import sin, cos theta = angle.inUnitsOf(radians) positions = [ (0.0, 0.0, 0.0) * Angstrom, (1.0, 0.0, 0.0) * bondLength, (cos(theta), sin(theta), 0.0) * bondLength, ] elements = [Oxygen] + [Hydrogen] * 2 return MoleculeConfiguration(elements, positions) # Choose DFT method with default arguments method = KohnShamMethod() # Scan different bending angles and calculate the total energy for i in range(30, 181, 10): theta = i * degrees h2o = waterConfiguration(theta, 0.958 * Angstrom) scf = method.apply(h2o) print "Angle = ", theta, " Total Energy = ", calculateTotalEnergy(scf) See also List of software for nanostructures modeling References Nanotechnology Computational science Computational chemistry software Physics software
NanoLanguage
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
654
[ "Computational chemistry software", "Chemistry software", "Applied mathematics", "Materials science", "Computational physics", "Computational science", "Computational chemistry", "Nanotechnology", "Physics software" ]
10,216,271
https://en.wikipedia.org/wiki/Multiphase%20flow%20meter
A multiphase flow meter is a device used to measure the individual phase flow rates of constituent phases in a given flow (for example in the oil and gas industry) where oil, water and gas mixtures are initially co-mingled together during the oil production processes. Background Knowledge of the individual fluid flow rates of a producing oil well is required to facilitate reservoir management, field development, operational control, flow assurance and production allocation. Conventional Solutions Conventional solutions concerning two- and three-phase metering systems require expensive and cumbersome test separators, with associated high maintenance and field personnel intervention. These conventional solutions do not lend themselves to continuous automated monitoring or metering. Moreover, with diminishing oil resources, oil companies are now frequently confronted with the need to recover hydrocarbons from marginally economical reservoirs. In order to ensure the economic viability of these accumulations, the wells may have to be completed subsea, or crude oil from several wells sent to a common production facility with excess processing capacity. The economic constraints on such developments do not lend themselves to the continued deployment of three-phase separators as the primary measurement devices. Consequently, viable alternatives to three-phase separators are essential. Industry’s response is the multiphase flow meter (MPFM). Historical Development The oil and gas industry began to take an interest in developing MPFMs in the early 1980s, as measurement technology improved and wellhead separators remained costly. Depleting oil reserves (more water and gas in the produced oil), along with smaller, deeper wells with higher water contents, led to increasingly frequent occurrences of multiphase flow that single-phase meters were unable to measure accurately. After a lengthy gestation period, MPFMs capable of performing the required measurements became commercially available. Much of the early research was done at the Christian Michelsen research center in Bergen, Norway, and this work spawned a number of spin-off companies in Norway, leading to the Roxar / Emerson, Schlumberger, Framo, and MPM meters. ENI and Shell supported the development in Italy of the Pietro Fiorentini meter. Haimo introduced a meter with partial separation, making accurate measurement simpler, but at the expense of a physically larger device. Norway has remained a technology center for MPFMs, with the Norwegian Society for Oil and Gas Measurement (NFOGM) providing an academic and educational role. Since 1994, MPFM installation numbers have steadily increased as technology in the field has advanced, with substantial growth witnessed from 1999 onwards. A recent study estimated that there were approximately 2,700 MPFM applications, including field allocation, production optimisation and mobile well testing, in 2006. A number of factors have instigated the recent rapid uptake of multiphase measurement technology: improved meter performance, decreases in meter costs, more compact meters enabling deployment of mobile systems, the need for subsea metering, increases in oil prices and a wider assortment of operators. As the initial interest in multiphase flow metering came from the offshore industry, most of the multiphase metering activity was concentrated in the North Sea. However, the present distribution of multiphase flow meters is much more diverse. 
Most modern meters combine a venturi flow rate meter with a gamma densitometer, and some meters add measurements of water salinity. The meter measures flow rates at line pressure, which is typically orders of magnitude greater than atmospheric pressure, but it must report the oil and gas volumes at standard (atmospheric) pressure and temperature. The meter must therefore know the pressure/volume/temperature (PVT) properties of the oil, in order to add to the measured gas rate at line pressure the additional gas that would be liberated from the oil at atmospheric pressure, and to account for the loss in oil volume caused by the release of that gas in the conversion to standard conditions. With co-mingled flow from oil zones with differing PVT responses, and different water salinities and hence densities, this PVT uncertainty may be the largest source of error in the measurement.
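The black-oil bookkeeping behind this conversion can be illustrated with a short sketch. The formation volume factors Bo and Bg and the solution gas-oil ratio Rs are assumed PVT inputs at the metered line conditions; all numerical values here are purely hypothetical.

def to_standard_conditions(q_oil_line, q_gas_line, Bo=1.25, Rs=90.0, Bg=0.012):
    # q_oil_line, q_gas_line: oil and free-gas rates at line conditions [m3/d]
    # Bo: oil formation volume factor [m3 line oil per Sm3 stock-tank oil]
    # Rs: solution gas-oil ratio at line conditions [Sm3 gas per Sm3 oil]
    # Bg: gas formation volume factor [m3 line gas per Sm3 gas]
    q_oil_std = q_oil_line / Bo                    # oil shrinks as solution gas escapes
    q_gas_std = q_gas_line / Bg + Rs * q_oil_std   # free gas plus liberated solution gas
    return q_oil_std, q_gas_std

oil_std, gas_std = to_standard_conditions(150.0, 20.0)
print("%.1f Sm3/d oil, %.0f Sm3/d gas" % (oil_std, gas_std))

If the commingled streams come from zones with different Bo, Rs and Bg, no single set of inputs is exactly right, which is why the PVT description can dominate the error budget.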
The introduction of the multiport selector valve (MSV) also facilitated the automation of the use of MPFMs, but this can also be achieved with conventional valving designs for well tests. MSVs are particularly suitable for onshore pad drilling and for cases where many nearby wells have similar pressures, and they allow MPFMs to be shared between groups of wells. Subsea meters typically use conventional subsea valve designs, to ensure maintainability. Unconventional Solutions - SONAR Multiphase Measurement Measurement and interpretation of two- and three-phase multiphase flow can also be achieved by using alternative flow measurement technologies such as SONAR. SONAR meters apply the principles of underwater acoustics to measure flow regimes and can be clamped on to wellheads and flow lines to measure the bulk (mean) fluid velocity of the total mixture, which is then post-processed and analyzed along with wellbore compositional information and process conditions to infer the flow rates of each individual phase. This approach can be used in various applications such as black oil, gas condensate and wet gas. Market Industry experts have forecast that MPFMs will become feasible on an installation-per-well basis when their capital cost falls to around US$40,000 – US$60,000. The cost of MPFMs today remains in the range of US$100,000 – US$500,000 (varying with onshore/offshore, topside/subsea, the physical dimensions of the meter and the number of units ordered). Installation of these MPFMs can cost up to 25% of the hardware cost, and associated operating costs are estimated at between US$20,000 and US$40,000 per year. A number of novel multiphase metering techniques, employing a variety of technologies, have been developed which eliminate the need for three-phase separator deployment. These MPFMs offer substantial economic and operating advantages over their phase-separating predecessor. Nevertheless, it is still widely recognised that no single MPFM on the market can meet all multiphase metering requirements. References External links NFOGM 2005 Handbook on Multiphase Flow Metering Overview of MPFM Technology by Lex Scheers - 2008 but still valid Petroleum technology Multiphase flow
Multiphase flow meter
[ "Chemistry", "Engineering" ]
1,271
[ "Petroleum engineering", "Petroleum technology" ]
10,217,523
https://en.wikipedia.org/wiki/Ad%20hoc%20wireless%20distribution%20service
Ad hoc Wireless Distribution Service (AWDS) is a layer 2 routing protocol to connect mobile ad hoc networks, sometimes called wireless mesh networks. It is based on a link-state routing protocol, similar to OLSR. Principle of operation AWDS uses a link-state routing protocol for organizing the network. In contrast to other implementations such as OLSR, it operates at layer 2. This means that no IP addresses need to be assigned, because the unique MAC addresses of the WLAN hardware are used instead. Furthermore, all kinds of layer 3 protocols can be used, such as IP, DHCP, IPv6, IPX, etc. The protocol daemon creates a virtual network interface, which can be used by the kernel like a typical LAN interface. Alternatives The list of ad hoc routing protocols contains a large set of alternatives. However, most of them are academic and do not exist as practical implementations. References External links An implementation for Linux is available at https://web.archive.org/web/20070504155750/http://awds.berlios.de/ Why all mesh technologies are not created equal. What is Third Generation Mesh? Review of three generations of mesh networking architectures. Cost and performance considerations of multi-radio mesh. Wireless networking Ad hoc routing protocols
Ad hoc wireless distribution service
[ "Technology", "Engineering" ]
262
[ "Wireless networking", "Computer networks engineering" ]