id: int64 (39 to 79M)
url: string (lengths 32 to 168)
text: string (lengths 7 to 145k)
source: string (lengths 2 to 105)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (lengths 0 to 27)
1,527,578
https://en.wikipedia.org/wiki/Critical%20variable
Critical variables are defined, for example in thermodynamics, in terms of the values of variables at the critical point. On a PV diagram, the critical point is an inflection point. Thus:

$$\left(\frac{\partial P}{\partial V}\right)_T = 0, \qquad \left(\frac{\partial^2 P}{\partial V^2}\right)_T = 0.$$

For the van der Waals equation, $P = \frac{RT}{V - b} - \frac{a}{V^2}$, the above yields:

$$V_c = 3b, \qquad T_c = \frac{8a}{27Rb}, \qquad P_c = \frac{a}{27b^2}.$$

References Thermodynamic properties Conformal field theory
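A quick symbolic check of these critical constants (a minimal SymPy sketch, not part of the article; a and b are the van der Waals parameters and R the gas constant):

```python
import sympy as sp

V, T, a, b, R = sp.symbols('V T a b R', positive=True)

# van der Waals equation of state, solved for pressure
P = R*T/(V - b) - a/V**2

# At the critical point the isotherm has an inflection:
# both the first and second derivatives with respect to V vanish.
sol = sp.solve([sp.diff(P, V), sp.diff(P, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
Pc = P.subs({V: Vc, T: Tc})

print(sp.simplify(Vc))  # 3*b
print(sp.simplify(Tc))  # 8*a/(27*R*b)
print(sp.simplify(Pc))  # a/(27*b**2)
```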
Critical variable
[ "Physics", "Chemistry", "Mathematics" ]
68
[ "Thermodynamic properties", "Quantity", "Thermodynamics", "Physical quantities" ]
1,528,346
https://en.wikipedia.org/wiki/Totally%20bounded%20space
In topology and related branches of mathematics, total-boundedness is a generalization of compactness for circumstances in which a set is not necessarily closed. A totally bounded set can be covered by finitely many subsets of every fixed “size” (where the meaning of “size” depends on the structure of the ambient space). The term precompact (or pre-compact) is sometimes used with the same meaning, but precompact is also used to mean relatively compact. These definitions coincide for subsets of a complete metric space, but not in general. In metric spaces A metric space $(M, d)$ is totally bounded if and only if for every real number $\varepsilon > 0$, there exists a finite collection of open balls of radius $\varepsilon$ whose centers lie in $M$ and whose union contains $M$. Equivalently, the metric space $M$ is totally bounded if and only if for every $\varepsilon > 0$, there exists a finite cover such that the radius of each element of the cover is at most $\varepsilon$. This is equivalent to the existence of a finite $\varepsilon$-net. A metric space is totally bounded if and only if every sequence admits a Cauchy subsequence; in complete metric spaces, a set is compact if and only if it is closed and totally bounded. Each totally bounded space is bounded (as the union of finitely many bounded sets is bounded). The converse is true for subsets of Euclidean space (with the subspace topology), but not in general. For example, an infinite set equipped with the discrete metric is bounded but not totally bounded: every ball of radius $1/2$ or less is a singleton, and no finite union of singletons can cover an infinite set. Uniform (topological) spaces A metric appears in the definition of total boundedness only to ensure that each element of the finite cover is of comparable size, and can be weakened to that of a uniform structure. A subset $S$ of a uniform space $X$ is totally bounded if and only if, for any entourage $E$, there exists a finite cover of $S$ by subsets of $X$ each of whose Cartesian squares is a subset of $E$. (In other words, $E$ replaces the "size" $\varepsilon$, and a subset is of size $E$ if its Cartesian square is a subset of $E$.) The definition can be extended still further, to any category of spaces with a notion of compactness and Cauchy completion: a space is totally bounded if and only if its (Cauchy) completion is compact. Examples and elementary properties Every compact set is totally bounded, whenever the concept is defined. Every totally bounded set is bounded. A subset of the real line, or more generally of finite-dimensional Euclidean space, is totally bounded if and only if it is bounded. The unit ball in a Hilbert space, or more generally in a Banach space, is totally bounded (in the norm topology) if and only if the space has finite dimension. Equicontinuous bounded functions on a compact set are precompact in the uniform topology; this is the Arzelà–Ascoli theorem. A metric space is separable if and only if it is homeomorphic to a totally bounded metric space. The closure of a totally bounded subset is again totally bounded. Comparison with compact sets In metric spaces, a set is compact if and only if it is complete and totally bounded; without the axiom of choice only the forward direction holds. Precompact sets share a number of properties with compact sets. Like compact sets, a finite union of totally bounded sets is totally bounded. Unlike compact sets, every subset of a totally bounded set is again totally bounded. The continuous image of a compact set is compact. The uniformly continuous image of a precompact set is precompact.
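The finite ε-net characterization is easy to make concrete (a minimal illustrative sketch in Python, not from the article; the unit square and the grid construction are our choices):

```python
import numpy as np

def epsilon_net_unit_square(eps):
    """Return a finite eps-net for [0,1]^2: grid points spaced at most eps
    apart, so every point of the square lies within eps/sqrt(2) < eps of
    some net point. Balls of radius eps around these points cover the square."""
    n = int(np.ceil(1.0 / eps))           # grid cells per axis
    ticks = np.linspace(0.0, 1.0, n + 1)  # spacing <= eps
    return [(float(x), float(y)) for x in ticks for y in ticks]

net = epsilon_net_unit_square(0.1)
print(len(net))  # 121 points: finitely many for every eps, so totally bounded

# Contrast: an infinite set with the discrete metric (d(x, y) = 1 for x != y)
# is bounded, but any ball of radius 1/2 contains only its own center, so no
# finite family of such balls covers the set: bounded yet not totally bounded.
```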
In topological groups Although the notion of total boundedness is closely tied to metric spaces, the greater algebraic structure of topological groups allows one to trade away some separation properties. For example, in metric spaces, a set is compact if and only if it is complete and totally bounded. Under the definition below, the same holds for any topological vector space (not necessarily Hausdorff nor complete). The general logical form of the definition is: a subset $S$ of a space $X$ is totally bounded if and only if, given any size $E$, there exists a finite cover of $S$ such that each element of the cover has size at most $E$; $X$ is then totally bounded if and only if it is totally bounded when considered as a subset of itself. We adopt the convention that, for any neighborhood $U$ of the identity, a subset $S \subseteq X$ is called $U$-small if and only if $(-S) + S \subseteq U$. A subset $S$ of a topological group $X$ is (left) totally bounded if it satisfies any of the following equivalent conditions: (1) Definition: For any neighborhood $U$ of the identity there exist finitely many $x_1, \ldots, x_n \in X$ such that $S \subseteq \bigcup_{i=1}^{n} (x_i + U)$. (2) For any neighborhood $U$ of the identity there exists a finite subset $F \subseteq X$ such that $S \subseteq F + U$ (where the right hand side is the Minkowski sum $F + U := \{f + u : f \in F, u \in U\}$). (3) For any neighborhood $U$ of the identity there exist finitely many subsets $B_1, \ldots, B_n$ of $X$ such that $S \subseteq B_1 \cup \cdots \cup B_n$ and each $B_i$ is $U$-small. (4) For any given filter subbase $\mathcal{B}$ of the identity element's neighborhood filter $\mathcal{N}$ (which consists of all neighborhoods of the identity in $X$) and for every $B \in \mathcal{B}$, there exists a cover of $S$ by finitely many $B$-small subsets of $X$. (5) $S$ is Cauchy bounded: for every neighborhood $U$ of the identity and every countably infinite subset $I$ of $S$, there exist distinct $x, y \in I$ such that $x - y \in U$ (if $S$ is finite then this condition is satisfied vacuously). (6) Any of the following three sets satisfies (any of the above definitions of) being (left) totally bounded: (a) The closure $\operatorname{cl}_X S$ of $S$ in $X$; this set being in the list means that the following characterization holds: $S$ is (left) totally bounded if and only if $\operatorname{cl}_X S$ is (left) totally bounded (according to any of the defining conditions mentioned above); the same characterization holds for the other sets listed below. (b) The image of $S$ under the canonical quotient $X \to X / \operatorname{cl}_X \{0\}$, which is defined by $x \mapsto x + \operatorname{cl}_X \{0\}$ (where $0$ is the identity element). (c) The sum $S + \operatorname{cl}_X \{0\}$. The term pre-compact usually appears in the context of Hausdorff topological vector spaces. In that case, the following conditions are also all equivalent to $S$ being (left) totally bounded: (7) In the completion $\widehat{X}$ of $X$, the closure of $S$ is compact. (8) Every ultrafilter on $S$ is a Cauchy filter. The definition of right totally bounded is analogous: simply swap the order of the products. Condition 4 implies any subset of $\operatorname{cl}_X \{0\}$ is totally bounded (in fact, compact; see above). If $X$ is not Hausdorff then, for example, $\{0\}$ is a compact complete set that is not closed. Topological vector spaces Any topological vector space is an abelian topological group under addition, so the above conditions apply. Historically, statement 6(a) was the first reformulation of total boundedness for topological vector spaces; it dates to a 1935 paper of John von Neumann. This definition has the appealing property that, in a locally convex space endowed with the weak topology, the precompact sets are exactly the bounded sets. For separable Banach spaces, there is a nice characterization of the precompact sets (in the norm topology) in terms of weakly convergent sequences of functionals: if $X$ is a separable Banach space, then $S \subseteq X$ is precompact if and only if every weakly convergent sequence of functionals converges uniformly on $S$. Interaction with convexity The balanced hull of a totally bounded subset of a topological vector space is again totally bounded. The Minkowski sum of two compact (totally bounded) sets is compact (resp. totally bounded).
In a locally convex (Hausdorff) space, the convex hull and the disked hull of a totally bounded set $S$ are totally bounded if and only if $X$ is complete. See also Compact space Locally compact space Measure of non-compactness Orthocompact space Paracompact space Relatively compact subspace References Bibliography Uniform spaces Metric geometry Topology Functional analysis Compactness (mathematics)
Totally bounded space
[ "Physics", "Mathematics" ]
1,543
[ "Functions and mappings", "Functional analysis", "Uniform spaces", "Mathematical objects", "Space (mathematics)", "Topological spaces", "Topology", "Mathematical relations", "Space", "Geometry", "Spacetime" ]
1,528,467
https://en.wikipedia.org/wiki/Copper%20coulometer
The copper coulometer is one application of the copper-copper(II) sulfate electrode. Such a coulometer consists of two identical copper electrodes immersed in a slightly acidic, pH-buffered solution of copper(II) sulfate. Passing current through the element leads to the anodic dissolution of the metal at the anode and simultaneous deposition of copper ions on the cathode. These reactions have 100% efficiency over a wide range of current density. Calculation The amount of electric charge (quantity of electricity) passed through the cell can easily be determined by measuring the change in mass of either electrode and calculating:

$$Q = \frac{m z F}{M},$$

where: $Q$ is the quantity of electricity (coulombs), $m$ is the mass transported (grams), $z$ is the charge of the copper ions, equal to +2, $F$ is the Faraday constant (96485.3383 coulombs per mole), and $M$ is the atomic weight of copper, equal to 63.546 grams per mole. Although this apparatus is interesting from a theoretical and historical point of view, present-day electronic measurement of time and electric current yields, as their product, the number of coulombs passed far more easily, with greater precision, and in a shorter time than is possible by weighing the electrodes. See also Mercury coulometer Coulometry References Physical chemistry Electroanalytical chemistry devices Coulometer Coulometers
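A quick numeric sketch of this calculation (illustrative only; the 0.5 g mass change is a hypothetical value, not from the article):

```python
# Charge passed through a copper coulometer, from the electrode mass change.
FARADAY = 96485.3383   # C/mol, Faraday constant
Z_CU = 2               # charge of the copper ion
M_CU = 63.546          # g/mol, atomic weight of copper

def charge_from_mass(delta_mass_g):
    """Q = m * z * F / M, with m the mass gained by the cathode in grams."""
    return delta_mass_g * Z_CU * FARADAY / M_CU

q = charge_from_mass(0.5)      # hypothetical 0.5 g gain on the cathode
print(f"{q:.1f} C")            # ~1518.4 C
print(f"{q / 3600:.3f} Ah")    # ~0.422 Ah
```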
Copper coulometer
[ "Physics", "Chemistry" ]
276
[ "Applied and interdisciplinary physics", "Electroanalytical chemistry", "Electroanalytical chemistry devices", "nan", "Physical chemistry", "Physical chemistry stubs" ]
1,530,353
https://en.wikipedia.org/wiki/Mutation%20rate
In genetics, the mutation rate is the frequency of new mutations in a single gene, nucleotide sequence, or organism over time. Mutation rates are not constant and are not limited to a single type of mutation; there are many different types of mutations. Mutation rates are given for specific classes of mutations. Point mutations are a class of mutations which are changes to a single base. Missense, nonsense, and synonymous mutations are three subtypes of point mutations. The rate of these types of substitutions can be further subdivided into a mutation spectrum, which describes the influence of the genetic context on the mutation rate. There are several natural units of time for each of these rates, with rates being characterized either as mutations per base pair per cell division, per gene per generation, or per genome per generation. The mutation rate of an organism is an evolved characteristic and is strongly influenced by the genetics of each organism, in addition to strong influence from the environment. The upper and lower limits to which mutation rates can evolve are the subject of ongoing investigation. However, the mutation rate does vary over the genome. When the mutation rate in humans increases, certain health risks can arise, for example cancer and other hereditary diseases. Knowledge of mutation rates is vital to understanding the progression of cancers and many hereditary diseases. Background Different genetic variants within a species are referred to as alleles, therefore a new mutation can create a new allele. In population genetics, each allele is characterized by a selection coefficient, which measures the expected change in an allele's frequency over time. The selection coefficient can either be negative, corresponding to an expected decrease, positive, corresponding to an expected increase, or zero, corresponding to no expected change. The distribution of fitness effects of new mutations is an important parameter in population genetics and has been the subject of extensive investigation. Although measurements of this distribution have been inconsistent in the past, it is now generally thought that the majority of mutations are mildly deleterious, that many have little effect on an organism's fitness, and that a few can be favorable. Because of natural selection, unfavorable mutations will typically be eliminated from a population while favorable changes are generally kept for the next generation, and neutral changes accumulate at the rate they are created by mutations. This process happens by reproduction: in a particular generation, the 'best fit' survive with higher probability, passing their genes to their offspring. The sign of the change in this probability defines mutations to be beneficial, neutral, or harmful to organisms. Measurement An organism's mutation rates can be measured by a number of techniques. One way to measure the mutation rate is by the fluctuation test, also known as the Luria–Delbrück experiment. This experiment demonstrated that mutations in bacteria arise in the absence of selection rather than in response to selection. This is very important to mutation rates because it proves experimentally that mutations can occur without selection being a component; in fact, mutation and selection are completely distinct evolutionary forces. Different DNA sequences can have different propensities to mutation (see below), so mutations may not occur randomly across the genome.
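The logic of the fluctuation test can be made concrete with a short simulation (an illustrative sketch, not the original protocol; the per-division mutation rate, number of generations, and number of cultures are made-up values): if mutants arise spontaneously during growth, rare early "jackpot" mutations make mutant counts across parallel cultures far more variable than the Poisson statistics expected if mutations were induced only upon exposure to selection.

```python
import numpy as np

rng = np.random.default_rng(1)

def fluctuation_culture(generations=20, mu=1e-6):
    """Grow a culture from one cell by doubling each round. Mutations arise
    at rate mu per cell division; a mutant born in round g is copied in every
    later round, leaving 2**(generations - g - 1) descendants (back-mutation
    and mutant/wild-type fitness differences are ignored)."""
    mutants = 0
    for g in range(generations):
        new_mutations = rng.binomial(2 ** g, mu)  # divisions this round
        mutants += new_mutations * 2 ** (generations - g - 1)
    return mutants

counts = np.array([fluctuation_culture() for _ in range(500)])
# Spontaneous mutation predicts jackpot cultures: variance >> mean,
# unlike a Poisson distribution, where variance ~ mean.
print(counts.mean(), counts.var())
```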
The most commonly measured class of mutations are substitutions, because they are relatively easy to measure with standard analyses of DNA sequence data. However, substitutions have a substantially different rate of mutation (10−8 to 10−9 per generation for most cellular organisms) than other classes of mutation, which are frequently much higher (~10−3 per generation for satellite DNA expansion/contraction). Substitution rates Many sites in an organism's genome may admit mutations with small fitness effects. These sites are typically called neutral sites. Theoretically, mutations under no selection become fixed between organisms at precisely the mutation rate: in a diploid population of size N, roughly 2Nμ new neutral mutations arise per generation and each eventually fixes with probability 1/(2N), so the substitution rate equals the mutation rate μ. Fixed synonymous mutations, i.e. synonymous substitutions, are changes to the sequence of a gene that do not change the protein produced by that gene. They are often used as estimates of the mutation rate, despite the fact that some synonymous mutations have fitness effects. As an example, mutation rates have been directly inferred from the whole genome sequences of experimentally evolved replicate lines of Escherichia coli B. Mutation accumulation lines A particularly labor-intensive way of characterizing the mutation rate is the mutation accumulation line. Mutation accumulation lines have been used to characterize mutation rates with the Bateman–Mukai method and direct sequencing of well-studied experimental organisms, ranging from intestinal bacteria (E. coli) and roundworms (C. elegans) to yeast (S. cerevisiae), fruit flies (D. melanogaster), and small ephemeral plants (A. thaliana). Variation in mutation rates Mutation rates differ between species and even between different regions of the genome of a single species. Mutation rates can also differ even between genotypes of the same species; for example, bacteria have been observed to evolve hypermutability as they adapt to new selective conditions. These different rates of nucleotide substitution are measured in substitutions (fixed mutations) per base pair per generation. For example, mutations in intergenic, or non-coding, DNA tend to accumulate at a faster rate than mutations in DNA that is actively in use in the organism (gene expression). That is not necessarily due to a higher mutation rate, but to lower levels of purifying selection. A region which mutates at a predictable rate is a candidate for use as a molecular clock. If the rate of neutral mutations in a sequence is assumed to be constant (clock-like), and if most differences between species are neutral rather than adaptive, then the number of differences between two different species can be used to estimate how long ago two species diverged (see molecular clock). In fact, the mutation rate of an organism may change in response to environmental stress. For example, UV light damages DNA, which may result in error-prone attempts by the cell to perform DNA repair. The human mutation rate is higher in the male germ line (sperm) than the female (egg cells), but estimates of the exact rate have varied by an order of magnitude or more. This means that a human genome accumulates around 64 new mutations per generation, because each full generation involves a number of cell divisions to generate gametes. Human mitochondrial DNA has been estimated to have mutation rates of ~3×10−5 or ~2.7×10−5 per base per 20-year generation (depending on the method of estimation); these rates are considered to be significantly higher than rates of human genomic mutation at ~2.5×10−8 per base per generation.
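To connect per-base rates to the per-generation counts above, here is a rough consistency check (illustrative arithmetic; the whole-genome estimate of ~1.1×10−8 per site per generation is quoted in the next paragraph, and the diploid genome size is an approximation):

```python
# Rough arithmetic linking the per-site human mutation rate to the
# per-generation de novo mutation count (genome size is approximate).
mu_per_site = 1.1e-8           # mutations per site per generation
diploid_sites = 2 * 3.1e9      # two copies of a ~3.1 Gb haploid genome

print(round(mu_per_site * diploid_sites))  # ~68, consistent with "around 64"

# Mitochondrial DNA, at ~3e-5 per base per generation, mutates about a
# thousand-fold faster than the genomic ~2.5e-8 figure:
print(3e-5 / 2.5e-8)  # 1200.0
```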
Using data available from whole genome sequencing, the human genome mutation rate is similarly estimated to be ~1.1×10−8 per site per generation. The rate for other forms of mutation also differs greatly from point mutations. An individual microsatellite locus often has a mutation rate on the order of 10−4, though this can differ greatly with length. Some sequences of DNA may be more susceptible to mutation. For example, stretches of DNA in human sperm which lack methylation are more prone to mutation. In general, the mutation rate in unicellular eukaryotes (and bacteria) is roughly 0.003 mutations per genome per cell generation. However, some species, especially ciliates of the genus Paramecium, have an unusually low mutation rate. For instance, Paramecium tetraurelia has a base-substitution mutation rate of ~2×10−11 per site per cell division. This is the lowest mutation rate observed in nature so far, being about 75× lower than in other eukaryotes with a similar genome size, and even 10× lower than in most prokaryotes. The low mutation rate in Paramecium has been explained by its transcriptionally silent germ-line nucleus, consistent with the hypothesis that replication fidelity is higher at lower gene expression levels. The highest per base pair per generation mutation rates are found in viruses, which can have either RNA or DNA genomes. DNA viruses have mutation rates between 10−6 and 10−8 mutations per base per generation, and RNA viruses have mutation rates between 10−3 and 10−5 per base per generation. Mutation spectrum A mutation spectrum is a distribution of rates or frequencies for the mutations relevant in some context, based on the recognition that rates of occurrence are not all the same. In any context, the mutation spectrum reflects the details of mutagenesis and is affected by conditions such as the presence of chemical mutagens or genetic backgrounds with mutator alleles or damaged DNA repair systems. The most fundamental and expansive concept of a mutation spectrum is the distribution of rates for all individual mutations that might happen in a genome. From this full de novo spectrum, for instance, one may calculate the relative rate of mutation in coding vs non-coding regions. Typically the concept of a spectrum of mutation rates is simplified to cover broad classes such as transitions and transversions, i.e., different mutational conversions across the genome are aggregated into classes, and there is an aggregate rate for each class. In many contexts, a mutation spectrum is defined as the observed frequencies of mutations identified by some selection criterion, e.g., the distribution of mutations clinically associated with a particular type of cancer, or the distribution of adaptive changes in a particular context such as antibiotic resistance. Whereas the spectrum of de novo mutation rates reflects mutagenesis alone, this kind of spectrum may also reflect effects of selection and ascertainment biases. Evolution The theory on the evolution of mutation rates identifies three principal forces involved: the generation of more deleterious mutations with higher mutation rates, the generation of more advantageous mutations with higher mutation rates, and the metabolic costs and reduced replication rates that are required to prevent mutations. Different conclusions are reached based on the relative importance attributed to each force.
The optimal mutation rate of organisms may be determined by a trade-off between the costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate (such as increasing the expression of DNA repair enzymes or, as reviewed by Bernstein et al., having increased energy use for repair, coding for additional gene products, and/or having slower replication). Secondly, higher mutation rates increase the rate of beneficial mutations, and evolution may prevent a lowering of the mutation rate in order to maintain optimal rates of adaptation. As such, hypermutation enables some cells to rapidly adapt to changing conditions and so keep the entire population from becoming extinct. Finally, natural selection may fail to optimize the mutation rate because of the relatively minor benefits of lowering the mutation rate, and thus the observed mutation rate is the product of neutral processes. Studies have shown that treating RNA viruses such as poliovirus with ribavirin produces results consistent with the idea that the viruses mutated too frequently to maintain the integrity of the information in their genomes. This is termed error catastrophe. The characteristically high mutation rate of HIV (human immunodeficiency virus) of 3×10−5 per base per generation, coupled with its short replication cycle, leads to high antigenic variability, allowing it to evade the immune system. See also Mutation Critical mutation rate Mutation frequency Dysgenics Allele frequency Rate of evolution Genetics Cancer References External links Mutation Evolutionary biology Temporal rates
Mutation rate
[ "Physics", "Biology" ]
2,322
[ "Temporal quantities", "Evolutionary biology", "Temporal rates", "Physical quantities" ]
1,530,478
https://en.wikipedia.org/wiki/Bioturbation
Bioturbation is defined as the reworking of soils and sediments by animals or plants. It includes burrowing, ingestion, and defecation of sediment grains. Bioturbating activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. The formal study of bioturbation began in the 1800s with Charles Darwin's experiments in his garden. The disruption of aquatic sediments and terrestrial soils through bioturbating activities provides significant ecosystem services. These include the alteration of nutrients in aquatic sediment and overlying water, shelter to other species in the form of burrows in terrestrial and water ecosystems, and soil production on land. Bioturbators are deemed ecosystem engineers because they alter resource availability to other species through the physical changes they make to their environments. This type of ecosystem change affects the evolution of cohabitating species and the environment, which is evident in trace fossils left in marine and terrestrial sediments. Other bioturbation effects include altering the texture of sediments (diagenesis), bioirrigation, and displacement of microorganisms and non-living particles. Bioturbation is sometimes confused with the process of bioirrigation; however, these processes differ in what they mix: bioirrigation refers to the mixing of water and solutes in sediments and is an effect of bioturbation. Walruses, salmon, and pocket gophers are examples of large bioturbators. Although the activities of these large macrofaunal bioturbators are more conspicuous, the dominant bioturbators are small invertebrates, such as earthworms, polychaetes, ghost shrimp, mud shrimp, and midge larvae. The activities of these small invertebrates, which include burrowing and the ingestion and defecation of sediment grains, contribute to mixing and the alteration of sediment structure. Functional groups Bioturbators have been organized by a variety of functional groupings based on either ecological characteristics or biogeochemical effects. While the prevailing categorization is based on the way bioturbators transport and interact with sediments, the various groupings likely stem from the relevance of a categorization mode to a field of study (such as ecology or sediment biogeochemistry) and an attempt to concisely organize the wide variety of bioturbating organisms into classes that describe their function. Examples of categorizations include those based on feeding and motility, feeding and biological interactions, and mobility modes. The most common set of groupings is based on sediment transport and is as follows: Gallery-diffusers create complex tube networks within the upper sediment layers and transport sediment through feeding, burrow construction, and general movement throughout their galleries. Gallery-diffusers are heavily associated with burrowing polychaetes, such as Nereis diversicolor and Marenzelleria spp. Biodiffusers transport sediment particles randomly over short distances as they move through sediments. Animals mostly attributed to this category include bivalves such as clams, and amphipod species, but can also include larger vertebrates, such as bottom-dwelling fish and rays that feed along the sea floor. Biodiffusers can be further divided into two subgroups, which include epifaunal (organisms that live on the surface sediments) biodiffusers and surface biodiffusers. This subgrouping may also include gallery-diffusers, reducing the number of functional groups.
Upward-conveyors are oriented head-down in sediments, where they feed at depth and transport sediment through their guts to the sediment surface. Major upward-conveyor groups include burrowing polychaetes like the lugworm, Arenicola marina, and thalassinid shrimps. Downward-conveyor species are oriented with their heads towards the sediment-water interface and defecation occurs at depth. Their activities transport sediment from the surface to deeper sediment layers as they feed. Notable downward-conveyors include those in the peanut worm family, Sipunculidae. Regenerators are categorized by their ability to release sediment to the overlying water column, which is then dispersed as they burrow. After regenerators abandon their burrows, water flow at the sediment surface can push in and collapse the burrow. Examples of regenerator species include fiddler and ghost crabs. Ecological roles The evaluation of the ecological role of bioturbators has largely been species-specific. However, their ability to transport solutes, such as dissolved oxygen, enhance organic matter decomposition and diagenesis, and alter sediment structure has made them important for the survival and colonization by other macrofaunal and microbial communities. Microbial communities are greatly influenced by bioturbator activities, as increased transport of more energetically favorable oxidants, such as oxygen, to typically highly reduced sediments at depth alters the microbial metabolic processes occurring around burrows. As bioturbators burrow, they also increase the surface area of sediments across which oxidized and reduced solutes can be exchanged, thereby increasing the overall sediment metabolism. This increase in sediment metabolism and microbial activity further results in enhanced organic matter decomposition and sediment oxygen uptake. In addition to the effects of burrowing activity on microbial communities, studies suggest that bioturbator fecal matter provides a highly nutritious food source for microbes and other macrofauna, thus enhancing benthic microbial activity. This increased microbial activity by bioturbators can contribute to increased nutrient release to the overlying water column. Nutrients released from enhanced microbial decomposition of organic matter, notably limiting nutrients, such as ammonium, can have bottom-up effects on ecosystems and result in increased growth of phytoplankton and bacterioplankton. Burrows offer protection from predation and harsh environmental conditions. For example, termites (Macrotermes bellicosus) burrow and create mounds that have a complex system of air ducts and evaporation devices that create a suitable microclimate in an unfavorable physical environment. Many species are attracted to bioturbator burrows because of their protective capabilities. The shared use of burrows has enabled the evolution of symbiotic relationships between bioturbators and the many species that utilize their burrows. For example, gobies, scale-worms, and crabs live in the burrows made by innkeeper worms. Social interactions provide evidence of co-evolution between hosts and their burrow symbionts. This is exemplified by shrimp-goby associations. Shrimp burrows provide shelter for gobies and gobies serve as a scout at the mouth of the burrow, signaling the presence of potential danger. In contrast, the blind goby Typhlogobius californiensis lives within the deep portion of Callianassa shrimp burrows where there is not much light. 
The blind goby is an example of a species that is an obligate commensalist, meaning its existence depends on the host bioturbator and its burrow. Although newly hatched blind gobies have fully developed eyes, their eyes become withdrawn and covered by skin as they develop. They show evidence of commensal morphological evolution, as it is hypothesized that the lack of light in the burrows where the blind gobies reside is responsible for the evolutionary loss of functional eyes. Bioturbators can also inhibit the presence of other benthic organisms by smothering, exposing other organisms to predators, or resource competition. While thalassinidean shrimps can provide shelter for some organisms and cultivate interspecies relationships within burrows, they have also been shown to have strong negative effects on other species, especially those of bivalves and surface-grazing gastropods, because thalassinidean shrimps can smother bivalves when they resuspend sediment. They have also been shown to exclude or inhibit polychaetes, cumaceans, and amphipods. This has become a serious issue in the northwestern United States, as ghost and mud shrimp (thalassinidean shrimp) are considered pests to bivalve aquaculture operations. The presence of bioturbators can have both negative and positive effects on the recruitment of larvae of conspecifics (those of the same species) and those of other species, as the resuspension of sediments and alteration of flow at the sediment-water interface can affect the ability of larvae to burrow and remain in sediments. This effect is largely species-specific, as species differences in resuspension and burrowing modes have variable effects on fluid dynamics at the sediment-water interface. Deposit-feeding bioturbators may also hamper recruitment by consuming recently settled larvae. Biogeochemical effects Since its onset around 539 million years ago, bioturbation has been responsible for changes in ocean chemistry, primarily through nutrient cycling. Bioturbators played, and continue to play, an important role in nutrient transport across sediments. For example, bioturbating animals are hypothesized to have affected the cycling of sulfur in the early oceans. According to this hypothesis, bioturbating activities had a large effect on the sulfate concentration in the ocean. Around the Precambrian–Cambrian boundary (539 million years ago), animals began to mix reduced sulfur from ocean sediments into the overlying water, causing sulfide to oxidize, which increased the sulfate concentration in the ocean. During large extinction events, the sulfate concentration in the ocean was reduced. Although this is difficult to measure directly, seawater sulfur isotope compositions during these times indicate that bioturbators influenced sulfur cycling in the early Earth. Bioturbators have also altered phosphorus cycling on geologic scales. Bioturbators mix readily available particulate organic phosphorus (P) deeper into ocean sediment layers, which prevents the precipitation of phosphorus (mineralization) by increasing the sequestration of phosphorus above normal chemical rates. The sequestration of phosphorus limits oxygen concentrations by decreasing production on a geologic time scale. This decrease in production results in an overall decrease in oxygen levels, and it has been proposed that the rise of bioturbation corresponds to a decrease in oxygen levels of that time.
The negative feedback of animals sequestering phosphorus in the sediments and subsequently reducing oxygen concentrations in the environment limits the intensity of bioturbation in this early environment. Organic contaminants Bioturbation can either enhance or reduce the flux of contaminants from the sediment to the water column, depending on the mechanism of sediment transport. In polluted sediments, bioturbating animals can mix the surface layer and cause the release of sequestered contaminants into the water column. Upward-conveyor species, like polychaete worms, are efficient at moving contaminated particles to the surface. Invasive animals can remobilize contaminants previously considered to be buried at a safe depth. In the Baltic Sea, the invasive Marenzelleria species of polychaete worms can burrow to 35-50 centimeters, which is deeper than native animals, thereby releasing previously sequestered contaminants. However, bioturbating animals that live in the sediment (infauna) can also reduce the flux of contaminants to the water column by burying hydrophobic organic contaminants in the sediment. Burial of uncontaminated particles by bioturbating organisms provides more absorptive surfaces to sequester chemical pollutants in the sediments. Ecosystem impacts Nutrient cycling is still affected by bioturbation in the modern Earth. Some examples in terrestrial and aquatic ecosystems are below. Terrestrial Plants and animals utilize soil for food and shelter, disturbing the upper soil layers and transporting chemically weathered rock called saprolite from the lower soil depths to the surface. Terrestrial bioturbation is important in soil production, burial, organic matter content, and downslope transport. Tree roots are sources of soil organic matter, with root growth and stump decay also contributing to soil transport and mixing. Death and decay of tree roots first delivers organic matter to the soil and then creates voids, decreasing soil density. Tree uprooting causes considerable soil displacement by producing mounds, mixing the soil, or inverting vertical sections of soil. Burrowing animals, such as earthworms and small mammals, form passageways for air and water transport, which changes soil properties such as the vertical particle-size distribution, soil porosity, and nutrient content. Invertebrates that burrow and consume plant detritus help produce an organic-rich topsoil known as the soil biomantle, and thus contribute to the formation of soil horizons. Small mammals such as pocket gophers also play an important role in the production of soil, possibly with a magnitude equal to that of abiotic processes. Pocket gophers form above-ground mounds, which move soil from the lower soil horizons to the surface, exposing minimally weathered rock to surface erosion processes and speeding soil formation. Pocket gophers are thought to play an important role in the downslope transport of soil, as the soil that forms their mounds is more susceptible to erosion and subsequent transport. Similar to tree root effects, the construction of burrows, even when backfilled, decreases soil density. The formation of surface mounds also buries surface vegetation, creating nutrient hotspots when the vegetation decomposes and increasing soil organic matter. Due to the high metabolic demands of their burrow-excavating subterranean lifestyle, pocket gophers must consume large amounts of plant material.
Though this has a detrimental effect on individual plants, the net effect of pocket gophers is increased plant growth from their positive effects on soil nutrient content and physical soil properties. Freshwater Important sources of bioturbation in freshwater ecosystems include benthivorous (bottom-dwelling) fish, macroinvertebrates such as worms, insect larvae, crustaceans and molluscs, and seasonal influences from anadromous (migrating) fish such as salmon. Anadromous fish migrate from the sea into fresh-water rivers and streams to spawn. Macroinvertebrates act as biological pumps for moving material between the sediments and water column, feeding on sediment organic matter and transporting mineralized nutrients into the water column. Both benthivorous and anadromous fish can affect ecosystems by decreasing primary production through sediment re-suspension, the subsequent displacement of benthic primary producers, and recycling of nutrients from the sediment back into the water column. Lakes and ponds The sediments of lake and pond ecosystems are rich in organic matter, with higher organic matter and nutrient contents in the sediments than in the overlying water. Nutrient regeneration through sediment bioturbation moves nutrients into the water column, thereby enhancing the growth of aquatic plants and phytoplankton (primary producers). The major nutrients of interest in this flux are nitrogen and phosphorus, which often limit the levels of primary production in an ecosystem. Bioturbation increases the flux of mineralized (inorganic) forms of these elements, which can be directly used by primary producers. In addition, bioturbation increases the water-column concentrations of nitrogen- and phosphorus-containing organic matter, which can then be consumed by fauna and mineralized. Lake and pond sediments often transition from the aerobic (oxygen-containing) character of the overlying water to the anaerobic (without oxygen) conditions of the lower sediment over sediment depths of only a few millimeters; therefore, even bioturbators of modest size can affect this transition of the chemical characteristics of sediments. By mixing anaerobic sediments into the water column, bioturbators allow aerobic processes to interact with the re-suspended sediments and the newly exposed bottom sediment surfaces. Macroinvertebrates including chironomid (non-biting midge) larvae and tubificid worms (detritus worms) are important agents of bioturbation in these ecosystems and have different effects based on their respective feeding habits. Tubificid worms do not form burrows; they are upward conveyors. Chironomids, on the other hand, form burrows in the sediment, acting as bioirrigators, aerating the sediments, and functioning as downward conveyors. This activity, combined with chironomids' respiration within their burrows, decreases available oxygen in the sediment and increases the loss of nitrates through enhanced rates of denitrification. The increased oxygen input to sediments by macroinvertebrate bioirrigation, coupled with bioturbation at the sediment-water interface, complicates the total flux of phosphorus. While bioturbation results in a net flux of phosphorus into the water column, the bioirrigation of the sediments with oxygenated water enhances the adsorption of phosphorus onto iron-oxide compounds, thereby reducing the total flux of phosphorus into the water column.
The presence of macroinvertebrates in sediment can initiate bioturbation due to their status as an important food source for benthivorous fish such as carp. Of the bioturbating, benthivorous fish species, carp in particular are important ecosystem engineers, and their foraging and burrowing activities can alter the water quality characteristics of ponds and lakes. Carp increase water turbidity by the re-suspension of benthic sediments. This increased turbidity limits light penetration and, coupled with increased nutrient flux from the sediment into the water column, inhibits the growth of macrophytes (aquatic plants), favoring the growth of phytoplankton in the surface waters. Surface phytoplankton colonies benefit from both increased suspended nutrients and the recruitment of buried phytoplankton cells released from the sediments by the fish bioturbation. Macrophyte growth has also been shown to be inhibited by displacement from the bottom sediments due to fish burrowing. Rivers and streams River and stream ecosystems show similar responses to bioturbation activities, with chironomid larvae and tubificid worm macroinvertebrates remaining important benthic agents of bioturbation. These environments can also be subject to strong seasonal bioturbation effects from anadromous fish. Salmon function as bioturbators at both the sediment scale (gravel- to sand-sized grains) and the nutrient scale, by moving and re-working sediments in the construction of redds (gravel depressions or "nests" containing eggs buried under a thin layer of sediment) in rivers and streams and by mobilizing nutrients. The construction of salmon redds functions to increase the ease of fluid movement (hydraulic conductivity) and porosity of the stream bed. In select rivers, if salmon congregate in large enough concentrations in a given area of the river, the total sediment transport from redd construction can equal or exceed the sediment transport from flood events. The net effect on sediment movement is the downstream transfer of gravel, sand, and finer materials and the enhancement of water mixing within the river substrate. The construction of salmon redds increases sediment and nutrient fluxes through the hyporheic zone (the area between surface water and groundwater) of rivers and affects the dispersion and retention of marine-derived nutrients (MDN) within the river ecosystem. MDN are delivered to river and stream ecosystems by the fecal matter of spawning salmon and the decaying carcasses of salmon that have completed spawning and died. Numerical modeling suggests that the residence time of MDN within a salmon spawning reach is inversely proportional to the amount of redd construction within the river. Measurements of respiration within a salmon-bearing river in Alaska further suggest that salmon bioturbation of the river bed plays a significant role in mobilizing MDN and limiting primary productivity while salmon spawning is active. The river ecosystem was found to switch from a net autotrophic to a heterotrophic system in response to decreased primary production and increased respiration. The decreased primary production in this study was attributed to the loss of benthic primary producers who were dislodged due to bioturbation, while increased respiration was thought to be due to increased respiration of organic carbon, also attributed to sediment mobilization from salmon redd construction.
While marine-derived nutrients are generally thought to increase productivity in riparian and freshwater ecosystems, several studies have suggested that temporal effects of bioturbation should be considered when characterizing salmon influences on nutrient cycles. Marine Major marine bioturbators range from small infaunal invertebrates to fish and marine mammals. In most marine sediments, however, they are dominated by small invertebrates, including polychaetes, bivalves, burrowing shrimp, and amphipods. Shallow and coastal Coastal ecosystems, such as estuaries, are generally highly productive, which results in the accumulation of large quantities of detritus (organic waste). These large quantities, in addition to the typically small sediment grain size and dense populations, make bioturbators important in estuarine respiration. Bioturbators enhance the transport of oxygen into sediments through irrigation and increase the surface area of oxygenated sediments through burrow construction. Bioturbators also transport organic matter deeper into sediments through general reworking activities and the production of fecal matter. This ability to replenish oxygen and other solutes at sediment depth allows for enhanced respiration by both bioturbators and the microbial community, thus altering estuarine elemental cycling. The effects of bioturbation on the nitrogen cycle are well documented. Coupled denitrification and nitrification are enhanced due to increased oxygen and nitrate delivery to deep sediments and the increased surface area across which oxygen and nitrate can be exchanged. The enhanced nitrification-denitrification coupling contributes to greater removal of biologically available nitrogen in shallow and coastal environments, which can be further enhanced by the excretion of ammonium by bioturbators and other organisms residing in bioturbator burrows. While both nitrification and denitrification are enhanced by bioturbation, the effects of bioturbators on denitrification rates have been found to be greater than those on rates of nitrification, further promoting the removal of biologically available nitrogen. This increased removal of biologically available nitrogen has been suggested to be linked to increased rates of nitrogen fixation in microenvironments within burrows, as indicated by evidence of nitrogen fixation by sulfate-reducing bacteria via the presence of nifH (nitrogenase) genes. Bioturbation by walrus feeding is a significant source of sediment mixing and a driver of biological community structure and nutrient flux in the Bering Sea. Walruses feed by digging their muzzles into the sediment and extracting clams through powerful suction. By digging through the sediment, walruses rapidly release large amounts of organic material and nutrients, especially ammonium, from the sediment to the water column. Additionally, walrus feeding behavior mixes and oxygenates the sediment and creates pits in the sediment which serve as new habitat structures for invertebrate larvae. Deep sea Bioturbation is important in the deep sea because deep-sea ecosystem functioning depends on the use and recycling of nutrients and organic inputs from the photic zone. In low-energy regions (areas with relatively still water), bioturbation is the only force creating heterogeneity in solute concentration and mineral distribution in the sediment.
It has been suggested that higher benthic diversity in the deep sea could lead to more bioturbation, which, in turn, would increase the transport of organic matter and nutrients to benthic sediments. Through the consumption of surface-derived organic matter, animals living on the sediment surface facilitate the incorporation of particulate organic carbon (POC) into the sediment, where it is consumed by sediment-dwelling animals and bacteria. Incorporation of POC into the food webs of sediment-dwelling animals promotes carbon sequestration by removing carbon from the water column and burying it in the sediment. In some deep-sea sediments, intense bioturbation enhances manganese and nitrogen cycling. Mathematical modelling The role of bioturbators in sediment biogeochemistry makes bioturbation a common parameter in sediment biogeochemical models, which are often numerical models built using ordinary and partial differential equations. Bioturbation is typically represented as DB, the biodiffusion coefficient, and is described by a diffusion and, sometimes, an advective term. This representation and subsequent variations account for the different modes of mixing by functional groups and the bioirrigation that results from them. The biodiffusion coefficient is usually measured using radioactive tracers such as 210Pb, radioisotopes from nuclear fallout, introduced particles including glass beads tagged with radioisotopes or inert fluorescent particles, and chlorophyll a. Biodiffusion models are then fit to vertical distributions (profiles) of tracers in sediments to provide values for DB; a schematic numerical sketch of this approach appears below. Parameterization of bioturbation, however, can vary, as newer and more complex models can be used to fit tracer profiles. Unlike the standard biodiffusion model, these more complex models, such as expanded versions of the biodiffusion model, random walk, and particle-tracking models, can provide more accuracy, incorporate different modes of sediment transport, and account for more spatial heterogeneity. Evolution The onset of bioturbation had a profound effect on the environment and the evolution of other organisms. Bioturbation is thought to have been an important co-factor of the Cambrian Explosion, during which most major animal phyla appeared in the fossil record over a short time. Predation arose during this time and promoted the development of hard skeletons, for example bristles, spines, and shells, as a form of armored protection. It is hypothesized that bioturbation resulted from this skeleton formation. These new hard parts enabled animals to dig into the sediment to seek shelter from predators, which created an incentive for predators to search for prey in the sediment (see Evolutionary Arms Race). Burrowing species fed on buried organic matter in the sediment, which resulted in the evolution of deposit feeding (consumption of organic matter within sediment). Prior to the development of bioturbation, laminated microbial mats were the dominant biological structures of the ocean floor and drove much of the ecosystem functions. As bioturbation increased, burrowing animals disturbed the microbial mat system and created a mixed sediment layer with greater biological and chemical diversity. This greater biological and chemical diversity is thought to have led to the evolution and diversification of seafloor-dwelling species. An alternate, less widely accepted hypothesis for the origin of bioturbation exists.
The trace fossil Nenoxites is thought to be the earliest record of bioturbation, predating the Cambrian Period. The fossil is dated to 555 million years ago, which places it in the Ediacaran Period. The fossil indicates a 5-centimeter depth of bioturbation in muddy sediments by a burrowing worm. This is consistent with food-seeking behavior, as there tended to be more food resources in the mud than in the water column. However, this hypothesis requires more precise geological dating to rule out an early Cambrian origin for this specimen. The evolution of trees during the Devonian Period enhanced soil weathering and increased the spread of soil due to bioturbation by tree roots. Root penetration and uprooting also enhanced soil carbon storage by enabling mineral weathering and the burial of organic matter. Fossil record Patterns or traces of bioturbation are preserved in lithified rock. The study of such patterns is called ichnology, or the study of "trace fossils", which, in the case of bioturbators, are fossils left behind by digging or burrowing animals. This can be compared to the footprint left behind by these animals. In some cases bioturbation is so pervasive that it completely obliterates sedimentary structures, such as laminated layers or cross-bedding. Thus, it affects the disciplines of sedimentology and stratigraphy within geology. The study of bioturbator ichnofabrics uses the depth of the fossils, the cross-cutting of fossils, and the sharpness (or how well defined) of the fossil to assess the activity that occurred in old sediments. Typically the deeper the fossil, the better preserved and well defined the specimen. Important trace fossils from bioturbation have been found in marine sediments from tidal, coastal, and deep-sea sediments. In addition, sand dune, or eolian, sediments are important for preserving a wide variety of fossils. Evidence of bioturbation has been found in deep-sea sediment cores, including in long records, although the act of extracting the core can disturb the signs of bioturbation, especially at shallower depths. Arthropods in particular are important to the geologic record of bioturbation of eolian sediments. Dune records show traces of burrowing animals as far back as the lower Mesozoic (250 million years ago), although bioturbation in other sediments has been seen as far back as 550 Ma. Research history Bioturbation's importance for soil processes and geomorphology was first realized by Charles Darwin, who devoted his last scientific book to the subject (The Formation of Vegetable Mould through the Action of Worms). Darwin spread chalk dust over a field to observe changes in the depth of the chalk layer over time. Excavations 30 years after the initial deposit of chalk revealed that the chalk was buried 18 centimeters under the sediment, which indicated a burial rate of 6 millimeters per year. Darwin attributed this burial to the activity of earthworms in the sediment and determined that these disruptions were important in soil formation. In 1891, geologist Nathaniel Shaler expanded Darwin's concept to include soil disruption by ants and trees. The term "bioturbation" was later coined by Rudolf Richter in 1952 to describe structures in sediment caused by living organisms. Since the 1980s, the term has been widely used in soil and geomorphology literature to describe the reworking of soil and sediment by plants and animals.
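As referenced in the modelling discussion above, biodiffusion models fit DB to measured tracer profiles. Below is a schematic numerical sketch of the forward model only (illustrative: the DB value, grid, constant-source boundary, and the omission of advection, radioactive decay, and burial are our simplifying assumptions, not a published parameterization):

```python
import numpy as np

def biodiffusion_profile(db_cm2_yr=10.0, depth_cm=10.0, years=5.0,
                         nz=100, dt_yr=1e-4):
    """Explicit finite-difference solution of dC/dt = DB * d2C/dz2 for a
    tracer (e.g. 210Pb) supplied at the sediment surface; illustrative
    values, with no advection, decay, or burial terms."""
    dz = depth_cm / nz
    alpha = db_cm2_yr * dt_yr / dz**2
    assert alpha < 0.5, "explicit scheme stability condition"
    c = np.zeros(nz)
    c[0] = 1.0  # constant tracer source at the sediment-water interface
    for _ in range(int(years / dt_yr)):
        c[1:-1] += alpha * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0] = 1.0
    return c  # in practice, DB is tuned so this profile matches tracer data

profile = biodiffusion_profile()
print(profile[:5])  # tracer concentration in the uppermost sediment layers
```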
See also Argillipedoturbation Bioirrigation Zoophycos References External links Nereis Park (the World of Bioturbation) Worm Cam Biological oceanography Aquatic ecology Limnology Pedology Physical oceanography Sedimentology
Bioturbation
[ "Physics", "Biology" ]
6,150
[ "Aquatic ecology", "Ecosystems", "Applied and interdisciplinary physics", "Physical oceanography" ]
1,530,689
https://en.wikipedia.org/wiki/Complementarity%20%28physics%29
In physics, complementarity is a conceptual aspect of quantum mechanics that Niels Bohr regarded as an essential feature of the theory. The complementarity principle holds that certain pairs of complementary properties cannot all be observed or measured simultaneously, for example position and momentum, or wave and particle properties. In contemporary terms, complementarity encompasses both the uncertainty principle and wave-particle duality. Bohr considered one of the foundational truths of quantum mechanics to be the fact that setting up an experiment to measure one quantity of a pair, for instance the position of an electron, excludes the possibility of measuring the other, yet understanding both experiments is necessary to characterize the object under study. In Bohr's view, the behavior of atomic and subatomic objects cannot be separated from the measuring instruments that create the context in which the measured objects behave. Consequently, there is no "single picture" that unifies the results obtained in these different experimental contexts, and only the "totality of the phenomena" together can provide a completely informative description. History Background Complementarity as a physical model derives from Niels Bohr's 1927 presentation in Como, Italy, at a scientific celebration of the work of Alessandro Volta 100 years earlier. Bohr's subject was complementarity, the idea that measurements of quantum events provide complementary information through seemingly contradictory results. While Bohr's presentation was not well received, it did crystallize the issues ultimately leading to the modern wave-particle duality concept. The contradictory results that triggered Bohr's ideas had been building up over the previous 20 years. This contradictory evidence came both from light and from electrons. The wave theory of light, broadly successful for over a hundred years, had been challenged by Planck's 1901 model of blackbody radiation and Einstein's 1905 interpretation of the photoelectric effect. These theoretical models use discrete energy, a quantum, to describe the interaction of light with matter. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum seemingly contradicted other experiments demonstrating the wave-like interference of light. The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson, among others, had shown that free electrons had particle properties. However, in 1924, Louis de Broglie proposed that electrons had an associated wave, and Schrödinger demonstrated that wave equations accurately account for electron properties in atoms. Again some experiments showed particle properties and others wave properties. Bohr's resolution of these contradictions is to accept them. In his Como lecture he says: "our interpretation of the experimental material rests essentially upon the classical concepts." Direct observation being impossible, observations of quantum effects are necessarily classical. Whatever the nature of quantum events, our only information will arrive via classical results. If experiments sometimes produce wave results and sometimes particle results, that is the nature of light and of the ultimate constituents of matter.
Bohr's lectures Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding an as-yet-unpublished result, a thought experiment about a microscope using gamma rays. This thought experiment implied a tradeoff between uncertainties that would later be formalized as the uncertainty principle. To Bohr, Heisenberg's paper did not make clear the distinction between a position measurement merely disturbing the momentum value that a particle carried and the more radical idea that momentum was meaningless or undefinable in a context where position was measured instead. Upon returning from his vacation, by which time Heisenberg had already submitted his paper for publication, Bohr convinced Heisenberg that the uncertainty tradeoff was a manifestation of the deeper concept of complementarity. Heisenberg duly appended a note to this effect to his paper, before its publication, stating: Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand. Bohr publicly introduced the principle of complementarity in a lecture he delivered on 16 September 1927 at the International Physics Congress held in Como, Italy, attended by most of the leading physicists of the era, with the notable exceptions of Einstein, Schrödinger, and Dirac. However, these three were in attendance one month later when Bohr again presented the principle at the Fifth Solvay Congress in Brussels, Belgium. The lecture was published in the proceedings of both of these conferences, and was republished the following year in Naturwissenschaften (in German) and in Nature (in English). In his original lecture on the topic, Bohr pointed out that just as the finitude of the speed of light implies the impossibility of a sharp separation between space and time (relativity), the finitude of the quantum of action implies the impossibility of a sharp separation between the behavior of a system and its interaction with the measuring instruments and leads to the well-known difficulties with the concept of 'state' in quantum theory; the notion of complementarity is intended to capture this new situation in epistemology created by quantum theory. Physicists F.A.M. Frescura and Basil Hiley have summarized the reasons for the introduction of the principle of complementarity in physics as follows: Debate following the lectures Complementarity was a central feature of Bohr's reply to the EPR paradox, an attempt by Albert Einstein, Boris Podolsky and Nathan Rosen to argue that quantum particles must have position and momentum even without being measured and so quantum mechanics must be an incomplete theory. The thought experiment proposed by Einstein, Podolsky and Rosen involved producing two particles and sending them far apart. The experimenter could choose to measure either the position or the momentum of one particle. Given that result, they could in principle make a precise prediction of what the corresponding measurement on the other, faraway particle would find. To Einstein, Podolsky and Rosen, this implied that the faraway particle must have precise values of both quantities whether or not that particle is measured in any way. 
Bohr argued in response that the deduction of a position value could not be transferred over to the situation where a momentum value is measured, and vice versa. Later expositions of complementarity by Bohr include a 1938 lecture in Warsaw and a 1949 article written for a festschrift honoring Albert Einstein. It was also covered in a 1953 essay by Bohr's collaborator Léon Rosenfeld. Mathematical formalism For Bohr, complementarity was the "ultimate reason" behind the uncertainty principle. All attempts to grapple with atomic phenomena using classical physics were eventually frustrated, he wrote, leading to the recognition that those phenomena have "complementary aspects". But classical physics can be generalized to address this, and with "astounding simplicity", by describing physical quantities using non-commutative algebra. This mathematical expression of complementarity builds on the work of Hermann Weyl and Julian Schwinger, starting with Hilbert spaces and unitary transformation, leading to the theory of mutually unbiased bases. In the mathematical formulation of quantum mechanics, physical quantities that classical mechanics had treated as real-valued variables become self-adjoint operators on a Hilbert space. These operators, called "observables", can fail to commute, in which case they are called "incompatible": $AB \neq BA$. Incompatible observables cannot have a complete set of common eigenstates; there can be some simultaneous eigenstates of $A$ and $B$, but not enough in number to constitute a complete basis. The canonical commutation relation $[\hat{x}, \hat{p}] = i\hbar$ implies that this applies to position and momentum. In a Bohrian view, this is a mathematical statement that position and momentum are complementary aspects. Likewise, an analogous relationship holds for any two of the spin observables defined by the Pauli matrices; measurements of spin along perpendicular axes are complementary. The Pauli spin observables are defined for a quantum system described by a two-dimensional Hilbert space; mutually unbiased bases generalize these observables to Hilbert spaces of arbitrary finite dimension. Two bases $\{|a_j\rangle\}$ and $\{|b_k\rangle\}$ for an $N$-dimensional Hilbert space are mutually unbiased when $|\langle a_j | b_k \rangle| = 1/\sqrt{N}$ for all $j$ and $k$. Here the basis vector $|a_j\rangle$, for example, has the same overlap with every $|b_k\rangle$; there is equal transition probability between a state in one basis and any state in the other basis. Each basis corresponds to an observable, and the observables for two mutually unbiased bases are complementary to each other. This leads to the description of complementarity as a statement about quantum kinematics: The concept of complementarity has also been applied to quantum measurements described by positive-operator-valued measures (POVMs). Continuous complementarity While the concept of complementarity can be discussed via two experimental extremes, a continuous tradeoff is also possible. In 1979 Wootters and Zurek introduced an information-theoretic treatment of the double-slit experiment, providing one approach to a quantitative model of complementarity. The wave-particle relation, introduced by Daniel Greenberger and Allaine Yasin in 1988, and since then refined by others, quantifies the trade-off between particle path distinguishability, $D$, and wave interference fringe visibility, $V$: $D^2 + V^2 \leq 1$. The values of $D$ and $V$ can vary between 0 and 1 individually, but any experiment that combines particle and wave detection will limit one or the other, or both. 
The detailed definitions of the two terms vary among applications, but the relation expresses the verified constraint that efforts to detect particle paths will result in less visible wave interference. Modern role While many of the early discussions of complementarity discussed hypothetical experiments, advances in technology have allowed precise tests of this concept. Experiments like the quantum eraser verify the key ideas in complementarity; modern exploration of quantum entanglement builds directly on complementarity: In his Nobel lecture, physicist Julian Schwinger linked complementarity to quantum field theory: The consistent histories interpretation of quantum mechanics takes a generalized form of complementarity as a key defining postulate. See also Copenhagen interpretation Canonical coordinates Conjugate variables Interpretations of quantum mechanics Wave–particle duality References Further reading Berthold-Georg Englert, Marlan O. Scully & Herbert Walther, Quantum Optical Tests of Complementarity, Nature, Vol 351, pp. 111–116 (9 May 1991) and (same authors) The Duality in Matter and Light, Scientific American, pp. 56–61 (December 1994). Niels Bohr, Causality and Complementarity: supplementary papers edited by Jan Faye and Henry J. Folse. The Philosophical Writings of Niels Bohr, Volume IV. Ox Bow Press. 1998. External links Discussions with Einstein on Epistemological Problems in Atomic Physics Einstein's Reply to Criticisms Quantum mechanics Niels Bohr Dichotomies Scientific laws
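To make the mutually unbiased bases condition above concrete, here is a minimal numerical sketch (not from the article; the dimension N and the choice of the discrete-Fourier basis are illustrative assumptions): the computational basis and the Fourier basis of an N-dimensional Hilbert space are mutually unbiased, so every pairwise overlap has magnitude 1/√N.

```python
import numpy as np

# Check |<a_j|b_k>| = 1/sqrt(N) for the computational and Fourier bases.
N = 5
comp = np.eye(N)  # columns are the computational basis vectors
fourier = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

overlaps = np.abs(comp.conj().T @ fourier)   # matrix of |<a_j|b_k>|
assert np.allclose(overlaps, 1 / np.sqrt(N)) # equal transition probability 1/N
print("all overlaps equal 1/sqrt(N) =", 1 / np.sqrt(N))
```

The observables attached to these two bases play the role of a complementary pair in the sense described above: certainty in one basis implies a uniform distribution over the other.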
Complementarity (physics)
[ "Physics", "Mathematics" ]
2,302
[ "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Equations", "Scientific laws" ]
1,531,369
https://en.wikipedia.org/wiki/Phase%20problem
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography, where the phase problem has to be solved for the determination of a structure from diffraction data. The phase problem is also met in the fields of imaging and signal processing. Various approaches to phase retrieval have been developed over the years. Overview Light detectors, such as photographic plates or CCDs, measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction) and a polarization, which are systematically lost in a measurement. In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics. In X-ray crystallography, the diffraction data, when properly assembled, gives the amplitude of the 3D Fourier transform of the molecule's electron density in the unit cell. If the phases are known, the electron density can be simply obtained by Fourier synthesis. This Fourier transform relation also holds for two-dimensional far-field diffraction patterns (also called Fraunhofer diffraction), giving rise to a similar type of phase problem. Phase retrieval There are several ways to retrieve the lost phases. The phase problem must be solved in X-ray crystallography, neutron crystallography, and electron crystallography. Not all of the methods of phase retrieval work with every wavelength (X-ray, neutron, and electron) used in crystallography. Direct (ab initio) methods If the crystal diffracts to high resolution (<1.2 Å), the initial phases can be estimated using direct methods. Direct methods can be used in X-ray crystallography, neutron crystallography, and electron crystallography. A number of initial phases are tested and selected by this method. Another approach is the Patterson method, which directly determines the positions of heavy atoms. The Patterson function gives a large value at a position which corresponds to an interatomic vector. This method can be applied only when the crystal contains heavy atoms or when a significant fraction of the structure is already known. For molecules whose crystals provide reflections in the sub-Ångström range, it is possible to determine phases by brute-force methods, testing a series of phase values until spherical structures are observed in the resultant electron density map. This works because atoms have a characteristic structure when viewed in the sub-Ångström range. The technique is limited by processing power and data quality. For practical purposes, it is limited to "small molecules" and peptides because they consistently provide high-quality diffraction with very few reflections. Molecular replacement (MR) Phases can also be inferred by using a process called molecular replacement, where a similar molecule's already-known phases are grafted onto the intensities of the molecule at hand, which are observationally determined. 
These phases can be obtained experimentally from a homologous molecule or, if the phases are known for the same molecule but in a different crystal, by simulating the molecule's packing in the crystal and obtaining theoretical phases. Generally, these techniques are less desirable since they can severely bias the solution of the structure. They are useful, however, for ligand binding studies, or between molecules with small differences and relatively rigid structures (for example derivatizing a small molecule). Isomorphous replacement Multiple isomorphous replacement (MIR) Multiple isomorphous replacement (MIR), in which heavy atoms are inserted into the structure (usually by synthesizing proteins with analogs or by soaking). Anomalous scattering Single-wavelength anomalous dispersion (SAD). Multi-wavelength anomalous dispersion (MAD) A powerful solution is the multi-wavelength anomalous dispersion (MAD) method. In this technique, atoms' inner electrons absorb X-rays of particular wavelengths and reemit the X-rays after a delay, inducing a phase shift in all of the reflections, known as the anomalous dispersion effect. Analysis of this phase shift (which may be different for individual reflections) results in a solution for the phases. Since X-ray fluorescence techniques (like this one) require excitation at very specific wavelengths, it is necessary to use synchrotron radiation when using the MAD method. Phase improvement Refining initial phases In many cases, an initial set of phases is determined, and the electron density map for the diffraction pattern is calculated. Then the map is used to determine portions of the structure, and those portions are used to simulate a new set of phases. This new set of phases is known as a refinement. These phases are reapplied to the original amplitudes, and an improved electron density map is derived, from which the structure is corrected. This process is repeated until an error term (usually the crystallographic R-factor) has stabilized to a satisfactory value. Because of the phenomenon of phase bias, it is possible for an incorrect initial assignment to propagate through successive refinements, so satisfactory conditions for a structure assignment are still a matter of debate. Indeed, some spectacular incorrect assignments have been reported, including a protein where the entire sequence was threaded backwards. Density modification (phase improvement) Solvent flattening Histogram matching Non-crystallographic symmetry averaging Partial structure Phase extension See also Coherent diffraction imaging Ptychography Phase retrieval External links An example of phase bias An appropriate use of 'molecular replacement' Learning crystallography References Crystallography Inverse problems
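The iterative phase-improvement loop described above can be illustrated with a toy error-reduction scheme in the style of Gerchberg–Saxton and Fienup. This is a hedged sketch, not production crystallography code: the support mask, non-negativity constraint, grid size, and iteration count are assumptions chosen for illustration.

```python
import numpy as np

# Toy phase retrieval: we know only Fourier magnitudes |F(rho)| of a density
# rho plus a real-space support constraint, and alternate between the two
# domains, keeping measured amplitudes but updating phases.
rng = np.random.default_rng(0)
n = 64
support = np.zeros((n, n), bool)
support[24:40, 24:40] = True
rho_true = rng.random((n, n)) * support      # toy non-negative density
measured = np.abs(np.fft.fft2(rho_true))     # intensities give amplitudes only

rho = rng.random((n, n))                     # initial guess (random phases)
for _ in range(500):
    G = np.fft.fft2(rho)
    G = measured * np.exp(1j * np.angle(G))  # impose measured Fourier amplitudes
    rho = np.fft.ifft2(G).real
    rho[~support] = 0.0                      # impose support constraint
    rho[rho < 0] = 0.0                       # impose non-negativity

err = np.linalg.norm(np.abs(np.fft.fft2(rho)) - measured) / np.linalg.norm(measured)
print(f"relative amplitude error after iteration: {err:.3e}")
```

Real crystallographic programs refine against measured structure-factor amplitudes and monitor an R-factor rather than the toy error used here, but the alternating-projection structure is the same.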
Phase problem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,189
[ "Applied mathematics", "Materials science", "Crystallography", "Condensed matter physics", "Inverse problems" ]
1,531,404
https://en.wikipedia.org/wiki/Split%20exact%20sequence
In mathematics, a split exact sequence is a short exact sequence in which the middle term is built out of the two outer terms in the simplest possible way. Equivalent characterizations A short exact sequence of abelian groups or of modules over a fixed ring, or more generally of objects in an abelian category $0 \to A \xrightarrow{a} B \xrightarrow{b} C \to 0$ is called split exact if it is isomorphic to the exact sequence where the middle term is the direct sum of the outer ones: $0 \to A \to A \oplus C \to C \to 0$. The requirement that the sequence is isomorphic means that there is an isomorphism $f : B \to A \oplus C$ such that the composite $f \circ a$ is the natural inclusion $A \to A \oplus C$ and such that the composite $p \circ f$ equals $b$, where $p : A \oplus C \to C$ is the natural projection. This can be summarized by a commutative diagram as: $\begin{array}{ccccccccc} 0 & \to & A & \xrightarrow{a} & B & \xrightarrow{b} & C & \to & 0 \\ & & \parallel & & \downarrow f & & \parallel & & \\ 0 & \to & A & \to & A \oplus C & \to & C & \to & 0 \end{array}$ The splitting lemma provides further equivalent characterizations of split exact sequences. Examples A trivial example of a split short exact sequence is $0 \to M_1 \xrightarrow{q} M_1 \oplus M_2 \xrightarrow{p} M_2 \to 0$ where $M_1, M_2$ are R-modules, $q$ is the canonical injection and $p$ is the canonical projection. Any short exact sequence of vector spaces is split exact. This is a rephrasing of the fact that any set of linearly independent vectors in a vector space can be extended to a basis. The exact sequence $0 \to \mathbb{Z} \xrightarrow{2} \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$ (where the first map is multiplication by 2) is not split exact. Related notions Pure exact sequences can be characterized as the filtered colimits of split exact sequences. References Sources Abstract algebra
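For reference, the equivalent characterizations provided by the splitting lemma mentioned above can be written out explicitly; this is the standard statement, not quoted from this article.

```latex
% Splitting lemma: for a short exact sequence
%   0 -> A --a--> B --b--> C -> 0
% of modules (or objects of an abelian category), the following are equivalent:
\begin{align*}
&\text{(1) left split: there exists } r : B \to A \text{ with } r \circ a = \mathrm{id}_A,\\
&\text{(2) right split: there exists } s : C \to B \text{ with } b \circ s = \mathrm{id}_C,\\
&\text{(3) } B \cong A \oplus C, \text{ compatibly with } a \text{ and } b \text{ as above.}
\end{align*}
```

In the non-split example $0 \to \mathbb{Z} \xrightarrow{2} \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$, no section $s$ can exist because $\mathbb{Z}$ has no element of order 2.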
Split exact sequence
[ "Mathematics" ]
252
[ "Abstract algebra", "Algebra" ]
1,531,409
https://en.wikipedia.org/wiki/Homeomorphism%20group
In mathematics, particularly topology, the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation. They are important to the theory of topological spaces, generally exemplary of automorphism groups and topologically invariant in the group isomorphism sense. Properties and examples There is a natural group action of the homeomorphism group of a space on that space. Let $X$ be a topological space and denote the homeomorphism group of $X$ by $\operatorname{Homeo}(X)$. The action is defined as follows: $\operatorname{Homeo}(X) \times X \to X, \ (\varphi, x) \mapsto \varphi(x)$. This is a group action since for all $\varphi, \psi \in \operatorname{Homeo}(X)$, $\varphi \cdot (\psi \cdot x) = \varphi(\psi(x)) = (\varphi \circ \psi) \cdot x$, where $\cdot$ denotes the group action, and the identity element of $\operatorname{Homeo}(X)$ (which is the identity function on $X$) sends points to themselves. If this action is transitive, then the space is said to be homogeneous. Topology As with other sets of maps between topological spaces, the homeomorphism group can be given a topology, such as the compact-open topology. In the case of regular, locally compact spaces the group multiplication is then continuous. If the space is compact and Hausdorff, the inversion is continuous as well and $\operatorname{Homeo}(X)$ becomes a topological group. If $X$ is Hausdorff, locally compact and locally connected this holds as well. Some locally compact separable metric spaces exhibit an inversion map that is not continuous, resulting in $\operatorname{Homeo}(X)$ not forming a topological group. Mapping class group In geometric topology especially, one considers the quotient group obtained by quotienting out by isotopy, called the mapping class group: $\operatorname{MCG}(X) = \operatorname{Homeo}(X) / \operatorname{Homeo}_0(X)$, where $\operatorname{Homeo}_0(X)$ is the subgroup of homeomorphisms isotopic to the identity. The MCG can also be interpreted as the 0th homotopy group, $\operatorname{MCG}(X) = \pi_0(\operatorname{Homeo}(X))$. This yields the short exact sequence: $1 \to \operatorname{Homeo}_0(X) \to \operatorname{Homeo}(X) \to \operatorname{MCG}(X) \to 1$. In some applications, particularly surfaces, the homeomorphism group is studied via this short exact sequence, and by first studying the mapping class group and group of isotopically trivial homeomorphisms, and then (at times) the extension. See also Mapping class group References Group theory Topology Topological groups
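As a toy illustration of the group action described above (an assumption-laden sketch, not from the article): for a finite discrete space every bijection is a homeomorphism, so Homeo(X) is just the symmetric group on the points, and the action axioms can be checked exhaustively.

```python
from itertools import permutations

# For a finite discrete space, Homeo(X) = symmetric group on the points of X.
X = (0, 1, 2)
homeo = [dict(zip(X, p)) for p in permutations(X)]  # all self-homeomorphisms

def compose(f, g):
    return {x: f[g[x]] for x in X}  # group operation: f after g

# Group-action axioms: (f o g).x == f.(g.x) and id.x == x.
identity = {x: x for x in X}
assert all(compose(f, g)[x] == f[g[x]] for f in homeo for g in homeo for x in X)
assert all(identity[x] == x for x in X)

# The action is transitive, so a finite discrete space is homogeneous.
assert all(any(f[a] == b for f in homeo) for a in X for b in X)
print("group action axioms verified;", len(homeo), "homeomorphisms")
```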
Homeomorphism group
[ "Physics", "Mathematics" ]
392
[ "Space (mathematics)", "Group theory", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Geometry", "Topological groups", "Spacetime" ]
1,531,739
https://en.wikipedia.org/wiki/Electric%20form%20factor
The electric form factor is the Fourier transform of the electric charge distribution in a nucleon. Nucleons (protons and neutrons) are made of up and down quarks, which have charges associated with them (+2/3 and −1/3, respectively). The study of form factors falls within the regime of perturbative QCD. The idea originated with William Thomson. See also Form factor (disambiguation) References Electrodynamics
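As an illustrative sketch of the defining Fourier-transform relationship (the exponential charge density and its scale Λ are assumptions for illustration, not the nucleon's actual charge distribution): for a spherically symmetric density the form factor reduces to a one-dimensional integral, and an exponential density reproduces the familiar "dipole" form, which the code below checks by direct quadrature.

```python
import numpy as np

# G(q) = integral of rho(r) * sin(qr)/(qr) * 4*pi*r^2 dr for spherical rho.
# rho(r) ~ exp(-Lam*r) gives G(q) = (1 + q^2/Lam^2)^(-2), the dipole form.
Lam = 1.0                                      # assumed scale parameter
r = np.linspace(1e-6, 50.0, 200_000)
dr = r[1] - r[0]
rho = np.exp(-Lam * r)
rho /= (4 * np.pi * r**2 * rho).sum() * dr     # normalize total charge to 1

q = np.linspace(0.01, 5.0, 50)
G_num = np.array([(4 * np.pi * r**2 * rho * np.sin(qi * r) / (qi * r)).sum() * dr
                  for qi in q])
G_dipole = (1 + q**2 / Lam**2) ** -2

assert np.allclose(G_num, G_dipole, atol=1e-3)
print("numerical form factor matches the dipole formula")
```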
Electric form factor
[ "Physics", "Mathematics" ]
99
[ "Electrodynamics", "Particle physics stubs", "Particle physics", "Dynamical systems" ]
1,531,742
https://en.wikipedia.org/wiki/Magnetic%20form%20factor
In electromagnetism, a magnetic form factor is the Fourier transform of a magnetization (spin density) distribution in space. See also Atomic form factor, for the form factor relevant to magnetic diffraction of free neutrons by unpaired outer electrons of an atom. Electric form factor Form factor (quantum field theory) External links Magnetic form factors, Andrey Zheludev, HFIR Center for Neutron Scattering, Oak Ridge National Laboratory "The magnetic form factor of the neutron", E.E.W. Bruins, November 1996 Electromagnetism
Magnetic form factor
[ "Physics", "Materials_science" ]
112
[ "Electromagnetism", "Materials science stubs", "Physical phenomena", "Fundamental interactions", "Electromagnetism stubs" ]
1,531,781
https://en.wikipedia.org/wiki/Rarita%E2%80%93Schwinger%20equation
In theoretical physics, the Rarita–Schwinger equation is the relativistic field equation of spin-3/2 fermions in a four-dimensional flat spacetime. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941. In modern notation it can be written as: $\left(\epsilon^{\mu\kappa\rho\nu}\gamma_5\gamma_\kappa\partial_\rho - i m \sigma^{\mu\nu}\right)\psi_\nu = 0$, where $\epsilon^{\mu\kappa\rho\nu}$ is the Levi-Civita symbol, $\gamma_\kappa$ are Dirac matrices (with $\kappa = 0, 1, 2, 3$) and $\gamma_5 = i\gamma_0\gamma_1\gamma_2\gamma_3$, $m$ is the mass, $\sigma^{\mu\nu} = \tfrac{i}{2}[\gamma^\mu, \gamma^\nu]$, and $\psi_\nu$ is a vector-valued spinor with additional components compared to the four-component spinor in the Dirac equation. It corresponds to the $\left(\tfrac12, \tfrac12\right) \otimes \left[\left(\tfrac12, 0\right) \oplus \left(0, \tfrac12\right)\right]$ representation of the Lorentz group, or rather, its $\left(1, \tfrac12\right) \oplus \left(\tfrac12, 1\right)$ part. This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian: $\mathcal{L} = -\tfrac12\,\bar\psi_\mu\left(\epsilon^{\mu\kappa\rho\nu}\gamma_5\gamma_\kappa\partial_\rho - i m\sigma^{\mu\nu}\right)\psi_\nu$, where the bar above $\psi_\mu$ denotes the Dirac adjoint. This equation controls the propagation of the wave function of composite objects such as the delta baryons ($\Delta$) or the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally. The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation $\psi_\mu \to \psi_\mu + \partial_\mu \epsilon$, where $\epsilon$ is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino. "Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist. Equations of motion in the massless case Consider a massless Rarita–Schwinger field described by the Lagrangian density $\mathcal{L}_{\mathrm{RS}} = -\tfrac12\,\bar\psi_\mu\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho$, where the sum over spin indices is implicit, $\psi_\mu$ are Majorana spinors, and $\gamma^{\mu\nu\rho} \equiv \gamma^{[\mu}\gamma^\nu\gamma^{\rho]}$. To obtain the equations of motion we vary the Lagrangian with respect to the fields $\psi_\mu$, obtaining: $\delta\mathcal{L}_{\mathrm{RS}} = -\tfrac12\left(\delta\bar\psi_\mu\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho + \bar\psi_\mu\gamma^{\mu\nu\rho}\partial_\nu\delta\psi_\rho\right)$; using the Majorana flip properties we see that the second and first terms on the RHS are equal, concluding that $\delta\mathcal{L}_{\mathrm{RS}} = -\,\delta\bar\psi_\mu\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho$, plus unimportant boundary terms. Imposing $\delta\mathcal{L}_{\mathrm{RS}} = 0$ we thus see that the equation of motion for a massless Majorana Rarita–Schwinger spinor reads: $\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho = 0$. The gauge symmetry of the massless Rarita-Schwinger equation allows the choice of the gauge $\gamma^\mu\psi_\mu = 0$, reducing the equations to: $\gamma^\nu\partial_\nu\psi_\mu = 0$ together with the constraint $\partial^\mu\psi_\mu = 0$. A solution with spins 1/2 and 3/2 is obtained by decomposing $\psi_\mu$ into a doubly transverse part, which carries spin 3/2, and a remainder that satisfies the massless Dirac equation and therefore carries spin 1/2; the projectors in this decomposition involve the spatial Laplacian $\nabla^2$. Drawbacks of the equation The current description of massive, higher spin fields through either Rarita–Schwinger or Fierz–Pauli formalisms is afflicted with several maladies. Superluminal propagation As in the case of the Dirac equation, electromagnetic interaction can be added by promoting the partial derivative to a gauge covariant derivative: $\partial_\mu \to D_\mu = \partial_\mu + i e A_\mu$. In 1969, Velo and Zwanziger showed that the Rarita–Schwinger Lagrangian coupled to electromagnetism leads to equations with solutions representing wavefronts, some of which propagate faster than light. In other words, the field then suffers from acausal, superluminal propagation; consequently, the quantization in interaction with electromagnetism is essentially flawed. In extended supergravity, though, Das and Freedman have shown that local supersymmetry solves this problem. References Sources Collins P.D.B., Martin A.D., Squires E.J., Particle physics and cosmology (1989) Wiley, Section 1.6. Eponymous equations of physics Quantum field theory Spinors Partial differential equations Fermions Mathematical physics
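A small numerical sketch of the algebraic objects entering the equation (a convention-dependent illustration, not from the article; it assumes the Dirac representation and metric signature (+, −, −, −)): the code builds the Dirac matrices and verifies the Clifford relation and the definitions of σ^{μν} and γ₅ used above.

```python
import numpy as np

# Dirac representation of the gamma matrices.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

gamma0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gammas = [gamma0] + [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # assumed signature (+,-,-,-)

# Clifford relation {gamma^mu, gamma^nu} = 2 eta^{mu nu} I.
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# sigma^{mu nu} = (i/2)[gamma^mu, gamma^nu] is antisymmetric in its indices.
sigma = [[0.5j * (gammas[m] @ gammas[n] - gammas[n] @ gammas[m])
          for n in range(4)] for m in range(4)]
assert np.allclose(sigma[0][1], -sigma[1][0])

# gamma5 = i gamma0 gamma1 gamma2 gamma3 squares to the identity.
gamma5 = 1j * gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]
assert np.allclose(gamma5 @ gamma5, np.eye(4))
print("Clifford algebra, antisymmetry of sigma, and gamma5^2 = 1 verified")
```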
Rarita–Schwinger equation
[ "Physics", "Materials_science", "Mathematics" ]
763
[ "Quantum field theory", "Matter", "Equations of physics", "Fermions", "Applied mathematics", "Theoretical physics", "Eponymous equations of physics", "Quantum mechanics", "Condensed matter physics", "Mathematical physics", "Subatomic particles" ]
7,011,270
https://en.wikipedia.org/wiki/Warm%20dense%20matter
Warm dense matter, abbreviated WDM, can refer to either equilibrium or non-equilibrium states of matter in a (loosely defined) regime of temperature and density between condensed matter and hot plasma. It can be defined as the state that is too dense to be described by weakly coupled plasma physics yet too hot to be described by condensed matter physics. In this state, the potential energy of the Coulomb interaction between electrons and ions is on the same order of magnitude as (or even significantly exceeds) their thermal energy, while the latter is comparable to the Fermi energy. Typically, WDM has a density comparable to that of condensed matter and a temperature on the order of several thousand kelvins (a fraction of an electronvolt, in the units favored by practitioners). WDM is expected in the interiors of giant planets, brown dwarfs, and small stars. WDM is routinely formed in the course of intense-laser–target interactions (including inertial confinement fusion research), particle-beam–target interactions, and in other setups where condensed matter is quickly heated to become a strongly interacting plasma. As such, WDM physics is also relevant to the ablation of metals (atmospheric entry from space, laser-machining of materials, etc.). WDM created using ultrafast laser pulses may, for a short time, exist in a two-temperature non-equilibrium form where a small fraction of electrons are very hot, with a temperature well above that of the bulk matter. See also Liquid metals References Plasma theory and modeling
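A back-of-the-envelope sketch of the regime described above (the density and temperature below are assumed example values, not from the article): one compares the Coulomb energy at the mean inter-electron spacing with the thermal energy (coupling parameter Γ), and the thermal energy with the Fermi energy (degeneracy parameter Θ). WDM roughly corresponds to Γ ≳ 1 together with Θ not far from unity.

```python
import numpy as np

e = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m
kB = 1.381e-23    # Boltzmann constant, J/K
hbar = 1.055e-34  # reduced Planck constant, J s
me = 9.109e-31    # electron mass, kg

n_e = 1e29        # electron density, m^-3 (assumed, roughly solid density)
T = 1e4           # temperature, K (several thousand kelvin, "warm")

a = (3 / (4 * np.pi * n_e)) ** (1 / 3)            # Wigner-Seitz radius
Gamma = e**2 / (4 * np.pi * eps0 * a) / (kB * T)  # Coulomb / thermal energy
E_F = hbar**2 / (2 * me) * (3 * np.pi**2 * n_e) ** (2 / 3)
Theta = kB * T / E_F                              # thermal / Fermi energy

# For these values Gamma >> 1 and Theta < 1: strongly coupled and partially
# degenerate, i.e., outside both weak-coupling plasma theory and ordinary
# condensed matter descriptions.
print(f"Gamma = {Gamma:.1f}, Theta = {Theta:.2f}")
```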
Warm dense matter
[ "Physics" ]
303
[ "Plasma theory and modeling", "Plasma physics" ]
7,011,453
https://en.wikipedia.org/wiki/Autochem
AutoChem is NASA-released software that constitutes an automatic computer code generator and documenter for chemically reactive systems, written and maintained by David Lary since 1993. It was designed primarily for modeling atmospheric chemistry, and in particular, for chemical data assimilation. The user selects a set of chemical species. AutoChem then searches chemical reaction databases for these species and automatically constructs the ordinary differential equations (ODEs) that describe the chemical system. AutoChem symbolically differentiates the time derivatives to give the Jacobian matrix, and symbolically differentiates the Jacobian matrix to give the Hessian matrix and the adjoint. The Jacobian matrix is required by many algorithms that solve the ordinary differential equations numerically, particularly when the ODEs are stiff. The Hessian matrix and the adjoint are required for four-dimensional variational data assimilation (4D-Var). AutoChem documents the whole process in a set of LaTeX and PDF files. The reactions involving the user-specified constituents are extracted by the first AutoChem preprocessor program, called Pick. This subset of reactions is then used by the second AutoChem preprocessor program, RoC (rate of change), to generate the time derivatives, Jacobian, and Hessian. Once the two preprocessor programs have run to completion, all the Fortran 90 code necessary for modeling and assimilating the kinetic processes has been generated. A huge observational database of many different atmospheric constituents from a host of platforms is available from the AutoChem site. AutoChem has been used to perform long-term chemical data assimilation of atmospheric chemistry. This assimilation was automatically documented by the AutoChem software and is available on line at CDACentral. Data quality is always an issue for chemical data assimilation, in particular the presence of biases. To identify and understand the biases it is useful to compare observations using probability distribution functions. Such an analysis is available on line at PDFCentral, which was designed for the validation of observations from the NASA Aura satellite. See also Chemical kinetics CHEMKIN Cantera Chemical WorkBench Kinetic PreProcessor (KPP) SpeedCHEM References Computational chemistry software Chemical kinetics Environmental chemistry
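The generation step that AutoChem automates can be sketched in miniature with a symbolic-algebra library. This is a hypothetical illustration using SymPy, not AutoChem's actual code (AutoChem emits Fortran 90); the toy two-reaction mechanism and rate constants are assumptions.

```python
import sympy as sp

# Toy mechanism:  A + B -> C  (rate k1),  C -> A + B  (rate k2).
# Build the ODE right-hand side symbolically, then differentiate it to get
# the Jacobian needed by stiff ODE solvers and adjoint/4D-Var methods.
A, B, C, k1, k2 = sp.symbols("A B C k1 k2", positive=True)
species = [A, B, C]

r1 = k1 * A * B          # forward reaction rate
r2 = k2 * C              # reverse reaction rate
rhs = sp.Matrix([-r1 + r2,   # dA/dt
                 -r1 + r2,   # dB/dt
                  r1 - r2])  # dC/dt

jac = rhs.jacobian(species)  # symbolic Jacobian d(rhs)/d(species)
print(sp.pretty(jac))

# Emit a callable, analogous to the code a preprocessor would generate.
f = sp.lambdify((species, k1, k2), rhs)
print(f([1.0, 2.0, 0.5], 0.3, 0.1))
```

Differentiating `jac` once more with respect to the species would give the Hessian, mirroring the Pick/RoC pipeline described above.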
Autochem
[ "Chemistry", "Environmental_science" ]
451
[ "Chemical reaction engineering", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Environmental chemistry", "Computational chemistry", "Computational chemistry stubs", "nan", "Chemical kinetics", "Physical chemistry stubs" ]
7,011,824
https://en.wikipedia.org/wiki/Biotechnology%20in%20pharmaceutical%20manufacturing
Biotechnology is the use of living organisms to develop useful products. Biotechnology is often used in pharmaceutical manufacturing. Notable examples include the use of bacteria to produce things such as insulin or human growth hormone. Other examples include the use of transgenic pigs for the creation of hemoglobin for use in humans. Human insulin Amongst the earliest uses of biotechnology in pharmaceutical manufacturing is the use of recombinant DNA technology to modify Escherichia coli bacteria to produce human insulin, which was performed at Genentech in 1978. Prior to the development of this technique, insulin was extracted from the pancreas glands of cattle, pigs, and other farm animals. While generally efficacious in the treatment of diabetes, animal-derived insulin is not identical to human insulin, and may therefore produce allergic reactions. Genentech researchers produced artificial genes for each of the two protein chains that comprise the insulin molecule. The artificial genes were "then inserted... into plasmids... among a group of genes that" are activated by lactose. Thus, the insulin-producing genes were also activated by lactose. The recombinant plasmids were inserted into Escherichia coli bacteria, which were "induced to produce 100,000 molecules of either chain A or chain B human insulin." The two protein chains were then combined to produce insulin molecules. Human growth hormone Prior to the use of recombinant DNA technology to modify bacteria to produce human growth hormone, the hormone was manufactured by extraction from the pituitary glands of cadavers, as animal growth hormones have no therapeutic value in humans. Production of a single year's supply of human growth hormone required up to fifty pituitary glands, creating significant shortages of the hormone. In 1979, scientists at Genentech produced human growth hormone by inserting DNA coding for human growth hormone into a plasmid that was implanted in Escherichia coli bacteria. The gene that was inserted into the plasmid was created by reverse transcription of the mRNA found in pituitary glands to complementary DNA. HaeIII, a type of restriction enzyme which acts at restriction sites "in the 3' noncoding region" and at the 23rd codon in complementary DNA for human growth hormone, was used to produce "a DNA fragment of 551 base pairs which includes coding sequences for amino acids 24–191 of HGH." Then "a chemically synthesized DNA 'adaptor' fragment containing an ATG initiation codon..." was produced with the codons for the first through 23rd amino acids in human growth hormone. The "two DNA fragments... [were] combined to form a synthetic-natural 'hybrid' gene." The use of entirely synthetic methods of DNA production to produce a gene that would be translated to human growth hormone in Escherichia coli would have been exceedingly laborious due to the significant length of the amino acid sequence in human growth hormone. However, if the cDNA reverse transcribed from the mRNA for human growth hormone were inserted directly into the plasmid inserted into the Escherichia coli, the bacteria would translate regions of the gene that are not translated in humans, thereby producing a "pre-hormone containing an extra 26 amino acids" which might be difficult to remove. 
Human blood clotting factors Prior to the development and FDA approval of a means to produce human blood clotting factors using recombinant DNA technologies, human blood clotting factors were produced from donated blood that was inadequately screened for HIV. Thus, HIV infection posed a significant danger to patients with hemophilia who received human blood clotting factors: Most reports indicate that 60 to 80 percent of patients with hemophilia who were exposed to factor VIII concentrates between 1979 and 1984 are seropositive for HIV by [the] Western blot assay. As of May 1988, more than 659 patients with hemophilia had AIDS... The first human blood clotting factor to be produced in significant quantities using recombinant DNA technology was Factor IX, which was produced using transgenic Chinese hamster ovary cells in 1986. Lacking a map of the human genome, researchers obtained a known sequence of the RNA for Factor IX by examining the amino acids in Factor IX: Microsequencing of highly purified... [Factor IX] yielded sufficient amino acid sequence to construct oligonucleotide probes. The known sequence of Factor IX RNA was then used to search for the gene coding for Factor IX in a library of the DNA found in the human liver, since it was known that blood clotting factors are produced by the human liver: A unique oligonucleotide... homologous to Factor IX mRNA... was synthesized and labeled... The resultant probe was used to screen a human liver double-stranded cDNA library... Complete two-stranded DNA sequences of the... [relevant] cDNA... contained all of the coding sequence COOH-terminal of the eleventh codon (11) and the entire 3'-untranslated sequence. This sequence of cDNA was used to find the remaining DNA sequences comprising the Factor IX gene by searching the DNA in the X chromosome: A genomic library from a human XXXX chromosome was prepared... and screen[ed] with a Factor IX cDNA probe. Hybridizing recombinant phage were isolated, plaque-purified, and the DNA isolated. Restriction mapping, Southern analysis, and DNA sequencing permitted identification of five recombinant phage containing inserts which, when overlapped at common sequences, coded the entire 35 kb Factor IX gene. Plasmids containing the Factor IX gene, along with plasmids with a gene that codes for resistance to methotrexate, were inserted into Chinese hamster ovary cells via transfection. Transfection involves the insertion of DNA into a eukaryotic cell. Unlike the analogous process of transformation in bacteria, transfected DNA is not ordinarily integrated into the cell's genome, and is therefore not usually passed on to subsequent generations via cell division. Thus, in order to obtain a "stable" transfection, a gene which confers a significant survival advantage must also be transfected, causing the few cells that did integrate the transfected DNA into their genomes to increase their population as cells that did not integrate the DNA are eliminated. In the case of this study, "grow[th] in increasing concentrations of methotrexate" promoted the survival of stably transfected cells, and diminished the survival of other cells. The Chinese hamster ovary cells that were stably transfected produced significant quantities of Factor IX, which was shown to have substantial coagulant properties, though of a lesser degree than Factor IX produced from human blood: The specific activity of the recombinant Factor IX was measured on the basis of direct measurement of the coagulant activity... 
The specific activity of recombinant Factor IX was 75 units/mg... compared to 150 units/mg measured for plasma-derived Factor IX... In 1992, the FDA approved Factor VIII produced using transgenic Chinese hamster ovary cells, the first such blood clotting factor produced using recombinant DNA technology to be approved. Transgenic farm animals Recombinant DNA techniques have also been employed to create transgenic farm animals that can produce pharmaceutical products for use in humans. For instance, pigs that produce human hemoglobin have been created. While blood from such pigs could not be employed directly for transfusion to humans, the hemoglobin could be refined and employed to manufacture a blood substitute. Paclitaxel (Taxol) Bristol-Myers Squibb manufactures paclitaxel using Penicillium raistrickii and plant cell fermentation (PCF). Artemisinin Transgenic yeast are used to produce artemisinin, as well as a number of insulin analogs. See also Molecular Biotechnology (journal) Bacillus isolates Fungal isolates Medicinal molds Sponge isolates Streptomyces isolates References Drug manufacturing Biotechnology
Biotechnology in pharmaceutical manufacturing
[ "Biology" ]
1,700
[ "nan", "Biotechnology" ]
7,012,714
https://en.wikipedia.org/wiki/Cav1.2
{{DISPLAYTITLE:Cav1.2}} Calcium channel, voltage-dependent, L type, alpha 1C subunit (also known as Cav1.2) is a protein that in humans is encoded by the CACNA1C gene. Cav1.2 is a subunit of the L-type voltage-dependent calcium channel. Structure and function This gene encodes an alpha-1 subunit of a voltage-dependent calcium channel. Calcium channels mediate the influx of calcium ions (Ca2+) into the cell upon membrane depolarization (see membrane potential and calcium in biology). The alpha-1 subunit consists of 24 transmembrane segments and forms the pore through which ions pass into the cell. The calcium channel consists of a complex of alpha-1, alpha-2/delta and beta subunits in a 1:1:1 ratio. The S3-S4 linkers of Cav1.2 determine the gating phenotype and modulate the gating kinetics of the channel. Cav1.2 is widely expressed in smooth muscle, pancreatic cells, fibroblasts, and neurons. However, it is particularly important and well known for its expression in the heart, where it mediates L-type currents, which cause calcium-induced calcium release from ER stores via ryanodine receptors. It activates at about −30 mV and helps define the shape of the action potential in cardiac and smooth muscle. The protein encoded by this gene binds to and is inhibited by dihydropyridine. In the arteries of the brain, high levels of calcium in mitochondria elevate the activity of nuclear factor kappa B (NF-κB) and the transcription of CACNA1C, and functional Cav1.2 expression increases. Cav1.2 also regulates levels of osteoprotegerin. CaV1.2 is inhibited by the action of STIM1. Regulation The activity of CaV1.2 channels is tightly regulated by the Ca2+ signals they produce. An increase in intracellular Ca2+ concentration is implicated in Cav1.2 facilitation, a form of positive feedback, called Ca2+-dependent facilitation, that amplifies Ca2+ influx. Increases in intracellular Ca2+ concentration are also implicated in the opposite effect, Ca2+-dependent inactivation. These activation and inactivation mechanisms both involve Ca2+ binding to calmodulin (CaM) at the IQ domain in the C-terminal tail of these channels. Cav1.2 channels are arranged in clusters of eight, on average, in the cell membrane. When calcium ions bind to calmodulin, which in turn binds to a Cav1.2 channel, the Cav1.2 channels within a cluster can interact with each other. This results in channels working cooperatively, opening at the same time to allow more calcium ions to enter and then closing together to allow the cell to relax. Clinical significance Mutations in the CACNA1C gene, including a single-nucleotide polymorphism located in the third intron of the Cav1.2 gene, are associated with a variant of long QT syndrome called Timothy syndrome and more broadly with other CACNA1C-related disorders, and also with Brugada syndrome. Large-scale genetic analyses have shown the possibility that CACNA1C is associated with bipolar disorder and subsequently also with schizophrenia. Also, a CACNA1C risk allele has been associated with a disruption in brain connectivity in patients with bipolar disorder, while not, or only to a minor degree, in their unaffected relatives or healthy controls. In a first study in an Indian population, the schizophrenia-associated genome-wide association study (GWAS) SNP was found not to be associated with the disease. Furthermore, the main effect of rs1006737 was found to be associated with spatial ability efficiency scores. 
Subjects with genotypes carrying the risk allele of rs1006737 (G/A and A/A) were found to have higher spatial ability efficiency scores than those with the G/G genotype. While among healthy controls those with G/A and A/A genotypes had higher spatial memory processing speed scores than those with G/G genotypes, the former had lower scores than the latter among schizophrenia subjects. In the same study, the genotype carrying the risk allele of rs1006737, namely A/A, was associated with significantly lower aligned rank transformed Abnormal Involuntary Movement Scale (AIMS) scores for tardive dyskinesia (TD). Interactive pathway map See also Calcium channel Calcium channel associated transcriptional regulator References Further reading External links GeneReviews/NIH/NCBI/UW entry on Brugada syndrome GeneReviews/NIH/NCBI/UW entry on Timothy Syndrome Ion channels Biology of bipolar disorder
Cav1.2
[ "Chemistry" ]
1,036
[ "Neurochemistry", "Ion channels" ]
7,013,570
https://en.wikipedia.org/wiki/Glycosynthase
The term glycosynthase refers to a class of proteins that have been engineered to catalyze the formation of a glycosidic bond. Glycosynthases are derived from glycosidase enzymes, which catalyze the hydrolysis of glycosidic bonds. They were traditionally formed from retaining glycosidases by mutating the active-site nucleophilic amino acid (usually an aspartate or glutamate) to a small non-nucleophilic amino acid (usually alanine or glycine). More modern approaches use directed evolution to screen for amino acid substitutions that enhance glycosynthase activity. The first glycosynthase Two discoveries led to the development of glycosynthase enzymes. The first was that a change of the active-site nucleophile of a glycosidase from a carboxylate to another amino acid resulted in a properly folded protein that had no hydrolase activity. The second discovery was that some glycosidase enzymes were able to catalyze the hydrolysis of glycosyl fluorides that had the incorrect anomeric configuration. The enzymes underwent a transglycosidation reaction to form a disaccharide, which was then a substrate for hydrolase activity. The first reported glycosynthase was a mutant of the Agrobacterium sp. β-glucosidase / galactosidase in which the nucleophile glutamate 358 was mutated to an alanine by site-directed mutagenesis. When incubated with α-glycosyl fluorides and an acceptor sugar, it was found to catalyze the transglycosidation reaction without any hydrolysis. This glycosynthase was used to synthesize a series of di- and trisaccharide products with yields between 64% and 92%. Reaction mechanism The mechanism of a glycosynthase is similar to the hydrolysis reaction of retaining glycosidases except that no covalent enzyme intermediate is formed. Mutation of the active-site nucleophile to a non-nucleophilic amino acid prevents the formation of a covalent intermediate. An activated glycosyl donor with a good anomeric leaving group (often a fluoride) is required. The leaving group is displaced by an alcohol of the acceptor sugar, aided by the active-site general base amino acid of the enzyme. Modern extensions The first glycosynthase was a retaining exoglycosidase that catalyzed the formation of β 1-4 linked glycosides of glucose and galactose. Glycosynthase enzymes have since been expanded to include mutants of endoglycosidases, as well as mutants of inverting glycosidases. Substrates of glycosynthases include glucose, galactose, mannose, xylose, and glucuronic acid. Modern methods to prepare glycosynthases use directed evolution to introduce modifications which improve the enzyme's function. This process was made possible by the development of high-throughput screens for glycosynthase activity. Limitations Glycosynthases have been useful for the preparation of oligosaccharides; however, their use suffers from certain limitations. First, glycosynthases can only be used to synthesize glycosidic linkages for which there is a known glycosidase. That glycosidase must also first be converted into a glycosynthase, which is not always possible. Second, the product of the glycosynthase reaction is often a better substrate for the glycosynthase than the starting material, resulting in the formation of multiple products of varying lengths. Finally, glycosynthases are specific for the donor sugar but often have loose specificity for the acceptor sugar. This can result in different regioselectivity depending on the acceptor, yielding products with different glycosidic linkages. One example is the Agrobacterium sp. 
β-glucosynthase, which forms a β-1,4-glycoside with glucose as the acceptor, but forms a β-1,3-glycoside with xylose as the acceptor. See also Glucosidase Glycoside hydrolase family 1 References Carbohydrate chemistry Carbohydrates Glycobiology
Glycosynthase
[ "Chemistry", "Biology" ]
970
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Biochemistry", "Glycobiology" ]
7,013,607
https://en.wikipedia.org/wiki/Glycoside%20hydrolase
In biochemistry, glycoside hydrolases (also called glycosidases or glycosyl hydrolases) are a class of enzymes which catalyze the hydrolysis of glycosidic bonds in complex sugars. They are extremely common enzymes, with roles in nature including degradation of biomass such as cellulose (cellulase), hemicellulose, and starch (amylase), in anti-bacterial defense strategies (e.g., lysozyme), in pathogenesis mechanisms (e.g., viral neuraminidases) and in normal cellular function (e.g., trimming mannosidases involved in N-linked glycoprotein biosynthesis). Together with glycosyltransferases, glycosidases form the major catalytic machinery for the synthesis and breakage of glycosidic bonds. Occurrence and importance Glycoside hydrolases are found in essentially all domains of life. In prokaryotes, they are found both as intracellular and extracellular enzymes that are largely involved in nutrient acquisition. One of the important occurrences of glycoside hydrolases in bacteria is the enzyme beta-galactosidase (LacZ), which is involved in regulation of expression of the lac operon in E. coli. In higher organisms glycoside hydrolases are found within the endoplasmic reticulum and Golgi apparatus where they are involved in processing of N-linked glycoproteins, and in the lysosome as enzymes involved in the degradation of carbohydrate structures. Deficiency in specific lysosomal glycoside hydrolases can lead to a range of lysosomal storage disorders that result in developmental problems or death. Glycoside hydrolases are found in the intestinal tract and in saliva where they degrade complex carbohydrates such as lactose, starch, sucrose and trehalose. In the gut they are found as glycosylphosphatidylinositol (GPI)-anchored enzymes on epithelial cells. The enzyme lactase is required for degradation of the milk sugar lactose and is present at high levels in infants, but in most populations will decrease after weaning or during infancy, potentially leading to lactose intolerance in adulthood. The enzyme O-GlcNAcase is involved in removal of N-acetylglucosamine groups from serine and threonine residues in the cytoplasm and nucleus of the cell. The glycoside hydrolases are involved in the biosynthesis and degradation of glycogen in the body. Classification Glycoside hydrolases are classified into EC 3.2.1 as enzymes catalyzing the hydrolysis of O- or S-glycosides. Glycoside hydrolases can also be classified according to the stereochemical outcome of the hydrolysis reaction: thus they can be classified as either retaining or inverting enzymes. Glycoside hydrolases can also be classified as exo or endo acting, dependent upon whether they act at the (usually non-reducing) end or in the middle, respectively, of an oligo/polysaccharide chain. Glycoside hydrolases may also be classified by sequence or structure-based methods. Sequence-based classification Sequence-based classifications are one of the most powerful predictive methods for suggesting function for newly sequenced enzymes for which function has not been biochemically demonstrated. A classification system for glycosyl hydrolases, based on sequence similarity, has led to the definition of more than 100 different families. This classification is available on the CAZy (CArbohydrate-Active EnZymes) web site. The database provides a series of regularly updated sequence based classification that allow reliable prediction of mechanism (retaining/inverting), active site residues and possible substrates. 
The online database is supported by CAZypedia, an online encyclopedia of carbohydrate active enzymes. Based on three-dimensional structural similarities, the sequence-based families have been classified into 'clans' of related structure. Recent progress in glycosidase sequence analysis and 3D structure comparison has allowed the proposal of an extended hierarchical classification of the glycoside hydrolases. Mechanisms Inverting glycoside hydrolases Inverting enzymes utilize two enzymic residues, typically carboxylate residues, that act as acid and base respectively, as shown below for a β-glucosidase. The product of the reaction has an axial position on C1, but some spontaneous changes of conformation can appear. Retaining glycoside hydrolases Retaining glycosidases operate through a two-step mechanism, with each step resulting in inversion, for a net retention of stereochemistry. Again, two residues are involved, which are usually enzyme-borne carboxylates. One acts as a nucleophile and the other as an acid/base. In the first step, the nucleophile attacks the anomeric centre, resulting in the formation of a glycosyl enzyme intermediate, with acidic assistance provided by the acidic carboxylate. In the second step, the now deprotonated acidic carboxylate acts as a base and assists a nucleophilic water to hydrolyze the glycosyl enzyme intermediate, giving the hydrolyzed product. The mechanism is illustrated below for hen egg white lysozyme. An alternative mechanism for hydrolysis with retention of stereochemistry can occur that proceeds through a nucleophilic residue that is bound to the substrate, rather than being attached to the enzyme. Such mechanisms are common for certain N-acetylhexosaminidases, which have an acetamido group capable of neighboring group participation to form an intermediate oxazoline or oxazolinium ion. This mechanism proceeds in two steps through individual inversions to lead to a net retention of configuration. A variant neighboring group participation mechanism has been described for endo-α-mannanases that involves 2-hydroxyl group participation to form an intermediate epoxide. Hydrolysis of the epoxide leads to a net retention of configuration. Nomenclature and examples Glycoside hydrolases are typically named after the substrate that they act upon. Thus glucosidases catalyze the hydrolysis of glucosides and xylanases catalyze the cleavage of the xylose based homopolymer xylan. Other examples include lactase, amylase, chitinase, sucrase, maltase, neuraminidase, invertase, hyaluronidase and lysozyme. Uses Glycoside hydrolases are predicted to gain increasing roles as catalysts in biorefining applications in the future bioeconomy. These enzymes have a variety of uses including degradation of plant materials (e.g., cellulases for degrading cellulose to glucose, which can be used for ethanol production), in the food industry (invertase for manufacture of invert sugar, amylase for production of maltodextrins), and in the paper and pulp industry (xylanases for removing hemicelluloses from paper pulp). Cellulases are added to detergents for the washing of cotton fabrics and assist in the maintenance of colours through removing microfibres that are raised from the surface of threads during wear. 
In organic chemistry, glycoside hydrolases can be used as synthetic catalysts to form glycosidic bonds through either reverse hydrolysis (thermodynamic approach), where the equilibrium position is reversed, or by transglycosylation (kinetic approach), whereby retaining glycoside hydrolases can catalyze the transfer of a glycosyl moiety from an activated glycoside to an acceptor alcohol to afford a new glycoside. Mutant glycoside hydrolases termed glycosynthases have been developed that can achieve the synthesis of glycosides in high yield from activated glycosyl donors such as glycosyl fluorides. Glycosynthases are typically formed from retaining glycoside hydrolases by site-directed mutagenesis of the enzymic nucleophile to some other less nucleophilic group, such as alanine or glycine. Another group of mutant glycoside hydrolases termed thioglycoligases can be formed by site-directed mutagenesis of the acid-base residue of a retaining glycoside hydrolase. Thioglycoligases catalyze the condensation of activated glycosides and various thiol-containing acceptors. Various glycoside hydrolases have shown efficacy in degrading matrix polysaccharides within the extracellular polymeric substance (EPS) of microbial biofilms. Medically, biofilms afford infectious microorganisms a variety of advantages over their planktonic, free-floating counterparts, including greatly increased tolerances to antimicrobial agents and the host immune system. Thus, degrading the biofilm may increase antibiotic efficacy, and potentiate host immune function and healing ability. For example, a combination of alpha-amylase and cellulase was shown to degrade polymicrobial bacterial biofilms from both in vitro and in vivo sources, and increase antibiotic effectiveness against them. Inhibitors Many compounds are known that can act to inhibit the action of a glycoside hydrolase. Nitrogen-containing, 'sugar-shaped' heterocycles have been found in nature, including deoxynojirimycin, swainsonine, australine and castanospermine. From these natural templates many other inhibitors have been developed, including isofagomine and deoxygalactonojirimycin, and various unsaturated compounds such as PUGNAc. Inhibitors that are in clinical use include the anti-diabetic drugs acarbose and miglitol, and the antiviral drugs oseltamivir and zanamivir. Some proteins have been found to act as glycoside hydrolase inhibitors. See also Mucopolysaccharidoses Glucosidase Lysozyme Glycosyltransferase List of glycoside hydrolase families Clans of glycoside hydrolases Hierarchical classification of the TIM-barrel type glycoside hydrolases References External links Cazypedia, an online encyclopedia of the "CAZymes," the carbohydrate-active enzymes and binding proteins involved in the synthesis and degradation of complex carbohydrates Carbohydrate-Active enZYmes Database ExPASy classification Carbohydrates Carbohydrate chemistry EC 3.2.1 Glycobiology
Glycoside hydrolase
[ "Chemistry", "Biology" ]
2,299
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Biochemistry", "Glycobiology" ]
7,014,404
https://en.wikipedia.org/wiki/Transport%20Accident%20Investigation%20Commission
The Transport Accident Investigation Commission (TAIC) is a transport safety body of New Zealand. It has its headquarters on the 7th floor of 10 Brandon Street in Wellington. The agency investigates aviation, marine, and rail accidents and incidents occurring in New Zealand, with a view to avoiding similar occurrences in the future rather than ascribing blame to any person. It does not investigate road accidents except where they affect the safety of aviation, marine, or rail transport (e.g. level crossing or car ferry accidents). It was established by an act of the Parliament of New Zealand (the Transport Accident Investigation Commission Act 1990) on 1 September 1990. TAIC's legislation, functions and powers were modelled on and share some similarities with those of the National Transportation Safety Board (USA) and the Transportation Safety Board (Canada). It is a standing Commission of Inquiry and an independent Crown entity, and reports to the minister of transport. TAIC initially investigated aviation accidents only; its jurisdiction was extended in 1992 to cover railway accidents and in 1995 to cover marine accidents. In May 2006, the Aviation Industry Association claimed that too often the organisation did not find the true cause of accidents, after TAIC released the results of a second investigation into a fatal helicopter crash at Taumarunui in 2001. The commission rejected the criticism, CEO Lois Hutchinson citing the results of a March 2003 audit by the International Civil Aviation Organization. Ron Chippindale, who investigated the Mount Erebus Disaster, was Chief Inspector of Accidents from 1990 to 31 October 1998. He was succeeded as chief investigator of accidents by Capt. Tim Burfoot, John Mockett in 2002, Tim Burfoot again in 2007, Aaron Holman in 2019, Harald Hendel in 2020, and Naveen Kozhuppakalam in 2022. Peer agencies in other countries Australian Transport Safety Bureau Aviation and Railway Accident Investigation Board – South Korea Dutch Safety Board – Netherlands Taiwan Transportation Safety Board – Taiwan Japan Transport Safety Board National Transportation Safety Board – United States National Transportation Safety Committee – Indonesia Safety Investigation Authority – Finland Swedish Accident Investigation Authority – Sweden Swiss Transportation Safety Investigation Board – Switzerland Transportation Safety Board of Canada Transport Safety Investigation Bureau – Singapore References External links New Zealand Rail accident investigators New Zealand independent crown entities 1990 establishments in New Zealand Transport organisations based in New Zealand
Transport Accident Investigation Commission
[ "Technology" ]
466
[ "Railway accidents and incidents", "Rail accident investigators" ]
8,672,984
https://en.wikipedia.org/wiki/Password%20fatigue
Password fatigue is the feeling experienced by many people who are required to remember an excessive number of passwords as part of their daily routine, such as to log in to a computer at work, undo a bicycle lock or conduct banking from an automated teller machine. The concept is also known as password chaos, or more broadly as identity chaos. Causes The increasing prominence of information technology and the Internet in employment, finance, recreation and other aspects of people's lives, and the ensuing introduction of secure transaction technology, has led to people accumulating a proliferation of accounts and passwords. According to a survey conducted in February 2020 by the password manager NordPass, a typical user has 100 passwords. Some factors causing password fatigue are: unexpected demands that a user create a new password unexpected demands that a user create a new password that uses a particular pattern of letters, digits, and special characters demands that the user type the new password twice frequent and unexpected demands for the user to re-enter their password throughout the day as they surf to different parts of an intranet blind typing, both when responding to a password prompt and when setting a new password. Responses Some companies are well organized in this respect and have implemented alternative authentication methods, or have adopted technologies so that a user's credentials are entered automatically. However, others may not focus on ease of use, or may even worsen the situation, by constantly implementing new applications with their own authentication systems. Single sign-on software (SSO) can help mitigate this problem by requiring users to remember only one password to an application that in turn will automatically grant access to several other accounts, with or without the need for agent software on the user's computer. A potential disadvantage is that loss of a single password will prevent access to all services using the SSO system, and moreover theft or misuse of such a password presents a criminal or attacker with many targets. Integrated password management software - Many operating systems provide a mechanism to store and retrieve passwords by using the user's login password to unlock an encrypted password database. Microsoft Windows provides Credential Manager to store usernames and passwords used to log on to websites or other computers on a network; iOS, iPadOS, and macOS share a Keychain feature that provides this functionality; and similar functionality is present in the GNOME and KDE open source desktops. In addition, web browser developers have added similar functionality to all the major browsers. However, if the user's system is corrupted, stolen or compromised, the user can also lose access to sites where they rely on the password store or recovery features to remember their login data. Third-party (add-on) password management software such as KeePass and Password Safe can help mitigate the problem of password fatigue by storing passwords in a database encrypted with a single password. However, this presents problems similar to those of single sign-on in that losing the single password prevents access to all the other passwords, while someone else gaining it will have access to them. Password recovery - The majority of password-protected web services provide a password recovery feature that will allow users to recover their passwords via the email address (or other information) tied to that account. 
However, this system has itself become a target of social engineering attacks by criminals. These criminals obtain enough information about the target to impersonate them and request a reset email, which is then redirected through other means to an account under the attacker's control, enabling the attacker to hijack the account. Passwordless authentication - One solution to eliminate password fatigue is to get rid of passwords entirely. Passwordless authentication services such as Okta, Transmit Security and Secret Double Octopus replace passwords with alternative verification methods such as biometric authentication or security tokens. Unlike SSO or password management software, passwordless authentication does not require a user to create or remember a password at any point. Innovative approaches As password fatigue continues to challenge users, notable advances in password management techniques have emerged to alleviate this burden. These innovative approaches provide alternatives to traditional password-based authentication systems. Here are some notable strategies: Biometric Authentication Biometric authentication methods offer a seamless and secure alternative to traditional passwords, including fingerprint recognition, facial recognition, and iris scanning. Users can authenticate their identities without remembering complex passwords by leveraging unique biological characteristics. Companies like Okta and Transmit Security have developed robust biometric authentication solutions, reducing reliance on traditional passwords. Security Tokens Security tokens, also referred to as hardware tokens or authentication tokens, add an extra layer of security beyond passwords. These physical devices generate a one-time passcode or cryptographic key that users input alongside their passwords for authentication. This two-factor authentication (2FA) method enhances security while reducing the cognitive load of managing multiple passwords. Secret Double Octopus is a notable provider of security token solutions. Passwordless Authentication Passwordless authentication services represent a significant shift in authentication methods by eliminating the need for passwords. Instead, these services utilize alternative verification methods, such as biometric authentication, security keys, or magic email links. By removing passwords from the equation, passwordless authentication significantly simplifies the user experience and reduces the risk of password-related security breaches. Okta, Transmit Security, and Secret Double Octopus are pioneering providers of passwordless authentication solutions. Behavioral Biometrics Emerging technologies in behavioral biometrics analyze unique behavioral patterns, such as typing speed, mouse movements, and touchscreen interactions, for user authentication. By continuously monitoring these behavioral signals, the system can accurately verify a user's identity without requiring an explicit authentication action. Behavioral biometrics provide a seamless authentication experience while minimizing the cognitive load associated with traditional password-based systems. These innovative approaches offer promising alternatives to traditional password management techniques, delivering enhancements in security, usability, and user convenience. As technology advances, further progress in authentication methods will effectively address the ongoing challenge of password fatigue. 
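As a rough, self-contained illustration of the security-token approach described above — the shared secret below is a made-up example and the parameters are simply the RFC 6238 defaults, not any vendor's implementation — the following Python sketch generates the kind of time-based one-time passcode (TOTP) that hardware tokens and authenticator apps produce:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time passcode per RFC 6238 (HOTP per RFC 4226)."""
    key = base64.b32decode(secret_base32)
    counter = int(time.time()) // period            # current 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative secret only — never hard-code a real credential.
print(totp("JBSWY3DPEHPK3PXP"))  # e.g. '492039'; changes every 30 seconds
```

Because the passcode depends only on the shared secret and the current 30-second time window, the verifying server can compute the same value independently and compare it, without the user having to remember anything beyond possession of the token.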
See also BugMeNot Decision fatigue Identity management Password manager Password strength Security question Usability of web authentication systems Notes External links Noguchi, Yuki. Access Denied, Washington Post, 23 September 2006. Catone, Josh. Bad Form: 61% Use Same Password for Everything, 17 January 2008. Data security Password authentication
Password fatigue
[ "Engineering" ]
1,284
[ "Cybersecurity engineering", "Data security" ]
16,001,928
https://en.wikipedia.org/wiki/Concrete%20step%20barrier
A concrete step barrier is a safety barrier used on the central reservation of motorways and dual carriageways as an alternative to the standard steel crash barrier. United Kingdom With effect from January 2005 and based primarily on safety grounds, the UK National Highways policy is that all new motorway schemes are to use high-containment concrete barriers in the central reserve. All existing motorways will introduce concrete barriers into the central reserve as part of ongoing upgrades and through replacement when these systems have reached the end of their useful life. This change of policy applies only to barriers in the central reserve of high-speed roads and not to verge-side barriers. Other routes will continue to use steel barriers. Government policy ensures that all future crash barriers in the UK will be made of concrete unless there are overriding circumstances. Ireland The usage of the concrete step barrier has become widespread in Ireland. As of 2017, of motorways use this barrier. Some motorways such as parts of the M8 and M6 have had the crash barrier since their original construction. Other motorways had it installed as part of their upgrade (M50). Hong Kong Steel guard rails (since the 2000s in the form of thrie-beam barriers) and concrete profile barriers are the barrier systems used on expressways in the territory. The beam barriers follow American and Australian designs, while the concrete barriers follow European standards. Degradation processes Various types of aggregate may undergo chemical reactions in concrete, leading to damaging expansive phenomena. The most common are those containing reactive silica, which can react with the alkalis in concrete. Amorphous silica is one of the most reactive mineral components in some aggregates, occurring in e.g. opal, chalcedony and flint. Following the alkali-silica reaction (ASR), an expansive gel can form that creates extensive cracks and damage in structural members. See also Jersey barrier Constant-slope barrier F-shape barrier Road-traffic safety Traffic barrier References Concrete Road safety Protective barriers
Concrete step barrier
[ "Engineering" ]
400
[ "Structural engineering", "Concrete" ]
16,006,394
https://en.wikipedia.org/wiki/Food%20vs.%20fuel
Food versus fuel is the dilemma regarding the risk of diverting farmland or crops for biofuels production to the detriment of the food supply. The biofuel and food price debate involves wide-ranging views and is a long-standing, controversial one in the literature. There is disagreement about the significance of the issue, what is causing it, and what can or should be done to remedy the situation. This complexity and uncertainty are due to the large number of impacts and feedback loops that can positively or negatively affect the price system. Moreover, the relative strengths of these positive and negative impacts vary in the short and long terms, and involve delayed effects. The academic side of the debate is also blurred by the use of different economic models and competing forms of statistical analysis. Biofuel production has increased in recent years. Some commodities, like maize (corn), sugar cane or vegetable oil, can be used either as food, feed, or to make biofuels. For example, since 2006, a portion of land that was formerly used to grow food crops in the United States is now used to grow corn for biofuels, and a larger share of corn is destined for ethanol production, reaching 25% in 2007. Oil price increases since 2003, the desire to reduce oil dependency, and the need to reduce greenhouse gas emissions from transportation have together increased global demand for biofuels. Increased demand tends to improve financial returns on production, making biofuel more profitable and attractive than food production. This, in turn, leads to greater resource inputs to biofuel production, with correspondingly reduced resources put towards the production of food. Global food security issues may result from such economic disincentives to large-scale agricultural food production. There is, in addition, potential for the destruction of habitats with increasing pressure to convert land use to agriculture, for the production of biofuel. Environmental groups have raised concerns about these potential harms for some years, but the issues drew widespread attention worldwide due to the 2007–2008 world food price crisis. Second-generation biofuels could potentially provide solutions to these negative effects. For example, they may allow for combined farming for food and fuel, and electricity could be generated simultaneously. This could be especially beneficial for developing countries and rural areas in developed countries. Some research suggests that biofuel production can be significantly increased without the need for increased acreage. Biofuels are not a new phenomenon. Before industrialisation, horses were the primary (and probably the secondary) source of power for transportation and physical work, requiring food. The growing of crops for horses (typically oats) to carry out physical work is comparable to the growing of crops for biofuels used in engines. However, the earlier, pre-industrial "biofuel" crops were grown at a smaller scale. Brazil has been considered to have the world's first sustainable biofuels economy, and its government claims Brazil's sugar cane-based ethanol industry did not contribute to the 2008 food crisis. A World Bank policy research working paper released in July 2008 concluded that "large increases in biofuel production in the United States and Europe are the main reason behind the steep rise in global food prices" and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher". 
However, a 2010 study also by the World Bank concluded that their previous study may have overestimated the contribution of biofuel production, as "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called "financialization of commodities") may have been partly responsible for the 2007/08 spike." A 2008 independent study by the OECD also found that the impact of biofuels on food prices was much smaller. Food price inflation From 1974 to 2005, real food prices (adjusted for inflation) dropped by 75%. Food commodity prices were relatively stable after reaching lows in 2000 and 2001. Therefore, recent rapid food price increases are considered extraordinary. A World Bank policy research working paper published in July 2008 found that the increase in food commodity prices was led by grains, with sharp price increases in 2005 despite record crops worldwide. From January 2005 until June 2008, maize prices almost tripled, wheat increased 127 percent, and rice rose 170 percent. The increase in grain prices was followed by increases in fat and oil prices in mid-2006. On the other hand, the study found that sugar cane production had increased rapidly, and that the increase was large enough to keep sugar price rises small except in 2005 and early 2006. The paper concluded that biofuels produced from grains, in combination with other related factors, raised food prices by between 70 and 75 percent, but that ethanol produced from sugar cane did not contribute significantly to the recent increase in food commodity prices. An economic assessment report published by the OECD in July 2008 found that "the impact of current biofuel policies on world crop prices, largely through increased demand for cereals and vegetable oils, is significant but should not be overestimated. Current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years." Corn is used to make ethanol, and corn prices went up by a factor of three in less than 3 years (measured in US dollars). Reports in 2007 linked stories as diverse as food riots in Mexico due to rising prices of corn for tortillas and reduced profits at Heineken, the large international brewer, to the increasing use of corn (maize) grown in the US Midwest for ethanol production. In the case of beer, the barley area was cut in order to increase corn production. (Barley is not currently used to produce ethanol.) Wheat is up by almost a factor of 3 in three years, while soybeans are up by a factor of 2 in two years (both measured in US dollars). As corn is commonly used as feed for livestock, higher corn prices lead to higher prices for animal source foods. Vegetable oil is used to make biodiesel and has about doubled in price in the last couple of years. The prices are roughly tracking crude oil prices. The 2007–2008 world food price crisis is blamed partly on the increased demand for biofuels. During the same period, rice prices went up by a factor of 3 even though rice is not directly used in biofuels. The USDA expects the 2008/2009 wheat season to be a record crop and 8% higher than the previous year. They also expect rice to have a record crop. Wheat prices dropped from a high of over $12 per bushel in early 2008 to under $8 per bushel by May. Rice has also dropped from its highs. 
According to a 2008 report from the World Bank, the production of biofuel pushed food prices up. These conclusions were supported by the Union of Concerned Scientists in their September 2008 newsletter, in which they remarked that the World Bank analysis "contradicts U.S. Secretary of Agriculture Ed Schafer's assertion that biofuels account for only a small percentage of rising food prices". According to the October Consumer Price Index released on November 19, 2008, food prices continued to rise in October 2008 and were 6.3 percent higher than in October 2007. Since July 2008, fuel costs have dropped by nearly 60 percent. Proposed causes Ethanol fuel as an oxygenate additive The demand for ethanol fuel produced from field corn was spurred in the U.S. by the discovery that methyl tertiary butyl ether (MTBE) was contaminating groundwater. MTBE use as an oxygenate additive was widespread due to the mandates of the Clean Air Act amendments of 1992 to reduce carbon monoxide emissions. As a result, by 2006, MTBE use in gasoline was banned in almost 20 states. There was also concern that widespread and costly litigation might be taken against the U.S. gasoline suppliers, and a 2005 decision refusing legal protection for MTBE opened a new market for ethanol fuel, the primary substitute for MTBE. At a time when corn prices were around US$2 a bushel, corn growers recognized the potential of this new market and delivered accordingly. This demand shift took place at a time when oil prices were already rising significantly. Other factors The fact that food prices went up at the same time fuel prices went up is not surprising and should not be entirely blamed on biofuels. Energy costs are a significant cost for fertilizer, farming, and food distribution. Also, China and other countries have had significant increases in their imports as their economies have grown. Sugar is one of the main feedstocks for ethanol, and its prices are down from two years ago. Part of the food price increase for international food commodities measured in US dollars is due to the dollar being devalued. Protectionism is also an important contributor to price increases. 36% of world grain goes as fodder to feed animals, rather than people. Over long periods of time, population growth and climate change could cause food prices to go up. However, these factors have been around for many years and food prices have jumped up in the last three years, so their contribution to the current problem is minimal. Government regulations of food and fuel markets France, Germany, the United Kingdom, and the United States governments have supported biofuels with tax breaks, mandated use, and subsidies. These policies have the unintended consequence of diverting resources from food production and leading to surging food prices and the potential destruction of natural habitats. Fuel for agricultural use often does not attract fuel taxes (farmers get duty-free petrol or diesel fuel). Biofuels may have subsidies and low or no retail fuel taxes, while competing with retail gasoline and diesel prices which have substantial taxes included. The net result is that it is possible for a farmer to use more than a gallon of fuel to make a gallon of biofuel and still make a profit. There have been thousands of scholarly papers analyzing how much energy goes into making ethanol from corn and how that compares to the energy in the ethanol. 
A World Bank policy research working paper concluded that food prices rose by 35 to 40 percent between 2002 and 2008, of which 70 to 75 percent is attributable to biofuels. The "month-by-month" five-year analysis disputes that increases in global grain consumption and droughts were responsible for significant price increases, reporting that these had only a marginal impact. Instead, the report argues that the EU and US drive for biofuels has had by far the biggest impact on food supply and prices, as increased production of biofuels in the US and EU was supported by subsidies and tariffs on imports, and considers that without these policies, price increases would have been smaller. This research also concluded that Brazil's sugar cane-based ethanol has not raised sugar prices significantly, and recommends removing tariffs on ethanol imports by both the US and EU to allow more efficient producers such as Brazil and other developing countries, including many African countries, to produce ethanol profitably for export to meet the mandates in the EU and the US. An economic assessment published by the OECD in July 2008 agrees with the World Bank report's recommendations regarding the negative effects of subsidies and import tariffs, but finds that the estimated impact of biofuels on food prices is much smaller. The OECD study found that trade restrictions, mainly through import tariffs, protect the domestic industry from foreign competitors but impose a cost burden on domestic biofuel users and limit alternative suppliers. The report is also critical of the limited reduction of greenhouse gas emissions achieved from biofuels based on feedstocks used in Europe and North America, finding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8% by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80% compared to fossil fuels. The assessment calls for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs. Oil price increases Oil price increases since 2003 resulted in increased demand for biofuels. Transforming vegetable oil into biodiesel is not very hard or costly, so there is a profitable arbitrage situation if vegetable oil is much cheaper than diesel. Diesel is also made from crude oil, so vegetable oil prices are partially linked to crude oil prices. Farmers can switch to growing vegetable oil crops if those are more profitable than food crops, so all food prices are linked to vegetable oil prices and, in turn, to crude oil prices. A World Bank study concluded that oil prices and a weak dollar explain 25–30% of the total price rise between January 2002 and June 2008. Demand for oil is outstripping the supply of oil, and oil depletion is expected to cause crude oil prices to rise over the next 50 years. Record oil prices are inflating food prices worldwide, including those crops that have no relation to biofuels, such as rice and fish. In Germany and Canada, it is now much cheaper to heat a house by burning grain than by using fuel derived from crude oil. With oil at $120 per barrel, a saving of a factor of 3 on heating costs is possible. When crude oil was at $25 per barrel there was no economic incentive to switch to a grain-fed heater. From 1971 to 1973, around the time of the 1973 oil crisis, corn and wheat prices went up by a factor of 3. There was no significant biofuel usage at that time. 
US government policy Some argue that the US government's policy of encouraging ethanol from corn is the main cause of food price increases. US federal government ethanol subsidies total $7 billion per year, or $1.90 per gallon. Ethanol provides only 55% as much energy as gasoline per gallon, so the $1.90 subsidy works out to roughly $3.45 per gasoline-equivalent gallon. Corn is used to feed chickens, cows, and pigs, so higher corn prices lead to higher prices for chicken, beef, pork, milk, cheese, etc. U.S. Senators introduced the BioFuels Security Act in 2006. "It's time for Congress to realize what farmers in America's heartland have known all along: that we have the capacity and ingenuity to decrease our dependence on foreign oil by growing our own fuel", said U.S. Senator for Illinois Barack Obama. Two-thirds of U.S. oil consumption is due to the transportation sector. The Energy Independence and Security Act of 2007 has a significant impact on U.S. energy policy. With the high profitability of growing corn, more and more farmers switch to growing corn until the profitability of other crops matches that of corn, so the ethanol/corn subsidies drive up the prices of other farm crops as well. The US, an important exporter of food stocks, will convert 18% of its grain output to ethanol in 2008. Across the US, 25% of the whole corn crop went to ethanol in 2007. The percentage of corn going to biofuel is expected to go up. Since 2004, a US subsidy has been paid to companies that blend biofuel and regular fuel. The European biofuel subsidy is paid at the point of sale. Companies import biofuel to the US, blend in 1% or even 0.1% regular fuel, and then ship the blended fuel to Europe, where it can get a second subsidy. These blends are called B99 or B99.9 fuel. The practice is called "splash and dash". The imported fuel may even come from Europe to the US, get 0.1% regular fuel, and then go back to Europe. For B99.9 fuel the US blender gets a subsidy of $0.999 per gallon. The European biodiesel producers have urged the EU to impose punitive duties on these subsidized imports. In 2007, US lawmakers were also looking at closing this loophole. Freeze on first generation biofuel production The prospects for the use of biofuels could change in a relatively dramatic way in 2014. Petroleum trade groups petitioned the EPA in August 2013 to take into consideration a reduction of renewable biofuel content in transportation fuels. On November 15, 2013, the United States EPA announced a review of the proportion of ethanol that should be required by regulation. The standards established by the Energy Independence and Security Act of 2007 could be modified significantly. The announcement allowed sixty days for the submission of commentary about the proposal. Journalist George Monbiot has argued for a 5-year freeze on biofuels while their impact on poor communities and the environment is assessed. A 2007 UN report on biofuel also raises issues regarding food security and biofuel production. Jean Ziegler, then UN Special Rapporteur on food, concluded that while the arguments for biofuels in terms of energy efficiency and climate change are legitimate, the effects for the world's hungry of transforming wheat and maize crops into biofuel are "absolutely catastrophic", and terms such use of arable land a "crime against humanity". Ziegler also called for a five-year moratorium on biofuel production. Ziegler's proposal for a five-year ban was rejected by the U.N. 
Secretary-General Ban Ki-moon, who called for a comprehensive review of the policies on biofuels and said that "just criticising biofuel may not be a good solution". Food surpluses exist in many developed countries. For example, the UK wheat surplus was around 2 million tonnes in 2005. This surplus alone could produce sufficient bioethanol to replace around 2.5% of the UK's petroleum consumption, without requiring any increase in wheat cultivation or reduction in food supply or exports. However, above a few percent, there would be direct competition between first generation biofuel production and food production. This is one reason why many view second-generation biofuels as increasingly important. Non-food crops for biofuel There are different types of biofuels and different feedstocks for them, and it has been proposed that only non-food crops be used for biofuel. This avoids direct competition for commodities like corn and edible vegetable oil. However, as long as farmers are able to derive a greater profit by switching to biofuels, they will. The law of supply and demand predicts that if fewer farmers are producing food, the price of food will rise. Second-generation biofuels use lignocellulosic raw material such as forest residues (sometimes referred to as brown waste, and black liquor from the Kraft process or sulfite process pulp mills). Third-generation biofuels (biofuel from algae) use non-edible raw material sources that can be used for biodiesel and bioethanol. It has long been recognized that the huge supply of agricultural cellulose, the lignocellulosic material commonly referred to as "Nature's polymer", would be an ideal source of material for biofuels and many other products. Composed of lignin and monomer sugars such as glucose, fructose, arabinose, galactose, and xylose, these constituents are very valuable in their own right. To date, several methods have been commonly used to coax "recalcitrant" cellulose to separate or hydrolyse into its lignin and sugar parts: treatment with steam explosion, supercritical water, enzymes, acids, and alkalis. All these methods involve heat or chemicals, are expensive, have lower conversion rates, and produce waste materials. In recent years the rise of "mechanochemistry" has resulted in the use of ball mills and other mill designs to reduce cellulose to a fine powder in the presence of a catalyst (a common bentonite or kaolinite clay) that will hydrolyse the cellulose quickly and with low energy input into pure sugar and lignin. Although still only at the pilot stage, this promising technology offers the possibility that any agricultural economy might be able to get rid of its requirement to refine oil for transportation fuels. This would be a major improvement in carbon-neutral energy sources and allow the continued use of internal combustion engines on a large scale. Biodiesel Soybean oil, which represents only half of the domestic raw materials available for biodiesel production in the United States, is one of many raw materials that can be used to produce biodiesel. Non-food crops like Camelina, Jatropha, seashore mallow and mustard, used for biodiesel, can thrive on marginal agricultural land where many trees and crops will not grow, or would produce only slow growth and low yields. Camelina can be used with virtually 100 percent efficiency: it can be harvested and crushed for oil, and the remaining parts can be used to produce high-quality omega-3-rich animal feed, fiberboard, and glycerin. 
Camelina does not take away from land currently being utilized for food production. Most camelina acres are grown in areas that were previously not utilized for farming. For example, areas that receive limited rainfall and cannot sustain corn or soybeans without the addition of irrigation can grow camelina and add to their profitability. Jatropha cultivation provides benefits for local communities: cultivation and fruit picking by hand is labour-intensive and needs around one person per hectare. In parts of rural India and Africa this provides much-needed jobs - about 200,000 people worldwide now find employment through jatropha. Moreover, villagers often find that they can grow other crops in the shade of the trees. Their communities will avoid importing expensive diesel and there will be some for export too. The National Biodiesel Board's (NBB's) Feedstock Development program is addressing production of arid-variety crops, algae, waste greases, and other feedstocks on the horizon to expand available material for biodiesel in a sustainable manner. Bioalcohols Cellulosic ethanol is a type of biofuel produced from lignocellulose, a material that comprises much of the mass of plants. Corn stover, switchgrass, miscanthus and wood chips are some of the more popular non-edible cellulosic materials for ethanol production. Commercial investment in such second-generation biofuels began in 2006/2007, and much of this investment went beyond pilot-scale plants. Cellulosic ethanol commercialization is moving forward rapidly. The world's first commercial wood-to-ethanol plant began operation in Japan in 2007, with a capacity of 1.4 million liters/year. The first wood-to-ethanol plant in the United States was planned for 2008 with an initial output of 75 million liters/year. Other second-generation biofuels may be commercialized in the future and compete less with food. Synthetic fuel can be made from coal or biomass and may be commercialized soon. Bioprotein Protein-rich feed for cattle, fish and poultry can be produced from biogas or natural gas, which is presently used as a fuel source. Cultivation of Methylococcus capsulatus bacteria culture consuming natural gas produces protein-rich feed with a tiny land and water footprint. The carbon dioxide gas produced as a by-product from these plants can also be put to use in cheaper production of algae oil or spirulina from algaculture, which could displace crude oil's prime position in the near future. With these proven technologies, abundant natural gas and biogas availability could contribute substantially to global food security by producing highly nutritious food products without water pollution or greenhouse gas (GHG) emissions. Biofuel from food byproducts and coproducts Biofuels can also be produced from the waste byproducts of food-based agriculture (such as citrus peels or used vegetable oil) to manufacture an environmentally sustainable fuel supply and reduce waste disposal costs. A growing percentage of U.S. biodiesel production is made from waste vegetable oil (recycled restaurant oils) and greases. Collocation of a waste generator with a waste-to-ethanol plant can reduce the waste producer's operating cost, while creating a more profitable ethanol production business. This innovative collocation concept is sometimes called holistic systems engineering. Collocation disposal elimination may be one of the few cost-effective, environmentally sound biofuel strategies, but its scalability is limited by the availability of appropriate waste generation sources. 
For example, millions of tons of wet Florida and California citrus peels cannot supply billions of gallons of biofuels. Due to the higher cost of transporting ethanol, it is a local partial solution, at best. Biofuel subsidies and tariffs Some people have claimed that ending subsidies and tariffs would enable sustainable development of a global biofuels market. Taxing biofuel imports while letting petroleum in duty-free does not fit with the goal of encouraging biofuels. Ending mandates, subsidies, and tariffs would end the distortions that current policy is causing. The US ethanol tariff and some US ethanol subsidies are currently set to expire over the next couple of years. The EU is rethinking its biofuels directive due to environmental and social concerns. On 18 January 2008 the UK House of Commons Environmental Audit Committee raised similar concerns, and called for a moratorium on biofuel targets. Germany ended its subsidy of biodiesel on 1 January 2008 and started taxing it. Reduce farmland reserves and set-asides To avoid overproduction and to prop up farmgate prices for agricultural commodities, the EU has long had farm subsidy programs that encourage farmers not to produce and to leave productive acres fallow. The 2008 crisis prompted proposals to bring some of the reserve farmland back into use, and the area in use actually increased by 0.5%, but today these areas are once again out of use. According to Eurostat, 18 million hectares have been abandoned since 1990, 7.4 million hectares are currently set aside, and the EU has recently decided to set aside another 5–7% in so-called Ecological Focus Areas, corresponding to 10–12 million hectares. In spite of this reduction of used land, the EU is a net exporter of, for example, wheat. The American Bakers Association has proposed reducing the amount of farmland held in the US Conservation Reserve Program. Currently the US has in the program. In Europe about 8% of the farmland is in set-aside programs. Farmers have proposed freeing up all of this land for farming. Two-thirds of the farmers who were on these programs in the UK are not renewing when their term expires. Sustainable production of biofuels Second-generation biofuels are now being produced from the cellulose in dedicated energy crops (such as perennial grasses), forestry materials, the co-products from food production, and domestic vegetable waste. Advances in the conversion processes will almost certainly improve the sustainability of biofuels, through better efficiencies and reduced environmental impact of producing biofuels, from both existing food crops and from cellulosic sources. Lord Ron Oxburgh suggests that responsible production of biofuels has several advantages: Produced responsibly, they are a sustainable energy source that need not divert any land from growing food nor damage the environment; they can also help solve the problems of the waste generated by Western society; and they can create jobs for the poor where previously there were none. Produced irresponsibly, they at best offer no climate benefit and, at worst, have detrimental social and environmental consequences. In other words, biofuels are pretty much like any other product. Far from creating food shortages, responsible production and distribution of biofuels represents the best opportunity for sustainable economic prospects in Africa, Latin America and impoverished Asia. Biofuels offer the prospect of real market competition and oil price moderation. 
Crude oil would be trading 15 per cent higher and gasoline would be as much as 25 per cent more expensive, if it were not for biofuels. A healthy supply of alternative energy sources will help to combat gasoline price spikes. Continuation of the status quo An additional policy option is to continue the current trends of government incentives for these types of crops, to further evaluate the effects on food prices over a longer period of time, given the relatively recent onset of the biofuel production industry. Additionally, given the newness of the industry, we can assume that, as in other startup industries, techniques and alternatives will be developed quickly if there is sufficient demand for the alternative fuels and biofuels. A shock to food prices could thus result in a very quick move toward some of the non-food biofuels listed above among the other policy alternatives. Impact on developing countries Demand for fuel in rich countries is now competing against demand for food in poor countries. The increase in world grain consumption in 2006 was due to the increase in consumption for fuel, not human consumption. The grain required to fill a fuel tank with ethanol would feed one person for a year. Several factors combine to make recent grain and oilseed price increases impact poor countries more: Poor people buy more grains (e.g. wheat), and are more exposed to grain price changes. Poor people spend a higher portion of their income on food, so increasing food prices influence them more. Aid organizations which buy food and send it to poor countries see more need when prices go up but are able to buy less food on the same budget. The impact is not all negative. The Food and Agriculture Organization (FAO) recognizes the potential opportunities that the growing biofuel market offers to small farmers and aquaculturers around the world and has recommended small-scale financing to help farmers in poor countries produce local biofuel. On the other hand, poor countries that do substantial farming have increased profits due to biofuels. If vegetable oil prices double, the profit margin could more than double. In the past, rich countries have been dumping subsidized grains at below-cost prices into poor countries and hurting the local farming industries. With biofuels using grains, the rich countries no longer have grain surpluses to get rid of. Farming in poor countries is seeing healthier profit margins and expanding. Interviews with local farmers in southern Ecuador provide strong anecdotal evidence that the high price of corn is encouraging the burning of tropical forests in order to grow more. The destruction of tropical forests now accounts for 20% of all greenhouse gas emissions. National Corn Growers Association US government subsidies for making ethanol from corn have been attacked as the main cause of the food vs fuel problem. To defend themselves, the National Corn Growers Association has published their views on this issue. They consider the "food vs fuel" argument to be a fallacy that is "fraught with misguided logic, hyperbole and scare tactics." Claims made by the NCGA include: Corn growers have been and will continue to produce enough corn so that supply and demand meet and there is no shortage. Farmers make their planting decisions based on signals from the marketplace. If demand for corn is high and projected revenue-per-acre is strong relative to other crops, farmers will plant more corn. 
In 2007 US farmers planted 19% more acres with corn than they did in 2006. The U.S. has doubled corn yields over the last 40 years and expects to double them again in the next 20 years. With twice as much corn from each acre, corn can be put to new uses without taking food from the hungry or causing deforestation. US consumers buy things like corn flakes where the cost of the corn per box is around 5 cents. Most of the cost is packaging, advertising, shipping, etc. Only about 19% of US retail food prices can be attributed to the actual cost of food inputs like grains and oilseeds. So if the price of a bushel of corn goes up, there may be no noticeable impact on US retail food prices. The US retail food price index has gone up only a few percent per year and is expected to continue to show very small increases. Most of the corn produced in the US is field corn, not sweet corn, and not digestible by humans in its raw form. Most corn is used for livestock feed and not human food, even the portion that is exported. Only the starch portion of corn kernels is converted to ethanol. The rest (protein, fat, vitamins and minerals) is passed through to the feed co-products or human food ingredients. One of the most significant and immediate benefits of higher grain prices is a dramatic reduction in federal farm support payments. According to the U.S. Department of Agriculture, corn farmers received $8.8 billion in government support in 2006. Because of higher corn prices, payments were expected to drop to $2.1 billion in 2007, a 76 percent reduction. While the EROEI and economics of corn-based ethanol are a bit weak, it paves the way for cellulosic ethanol, which should have much better EROEI and economics. While basic nourishment is clearly important, fundamental societal needs of energy, mobility, and energy security are too. If farmers' crops can help their country in these areas as well, it seems right to do so. Since reaching record high prices in June 2008, corn prices fell 50% by October 2008, declining sharply together with other commodities, including oil. According to a Reuters article, "Analysts, including some in the ethanol sector, say ethanol demand adds about 75 cents to $1.00 per bushel to the price of corn, as a rule of thumb. Other analysts say it adds around 20 percent, or just under 80 cents per bushel at current prices. Those estimates hint that $4 per bushel corn might be priced at only $3 without demand for ethanol fuel." These industry sources consider that a speculative bubble in the commodity markets holding positions in corn futures was the main driver behind the observed hike in corn prices affecting food supply. Controversy within the international system The United States and Brazil lead the industrial world in global ethanol production, with Brazil as the world's largest exporter and biofuel industry leader. In 2006 the U.S. produced 18.4 billion liters (4.86 billion gallons), closely followed by Brazil with 16.3 billion liters (4.3 billion gallons), together producing 70% of the world's ethanol market and nearly 90% of ethanol used as fuel. These countries are followed by China with 7.5% and India with 3.7% of the global market share. 
Since 2007, the concerns, criticisms and controversy surrounding the food vs biofuels issue have reached the international system, mainly heads of state and inter-governmental organizations (IGOs), such as the United Nations and several of its agencies, particularly the Food and Agriculture Organization (FAO) and the World Food Programme (WFP); the International Monetary Fund; the World Bank; and agencies within the European Union. The 2007 controversy: Ethanol diplomacy in the Americas In March 2007, "ethanol diplomacy" was the focus of President George W. Bush's Latin American tour, in which he and Brazil's president, Luiz Inácio Lula da Silva, were seeking to promote the production and use of sugar cane based ethanol throughout Latin America and the Caribbean. The two countries also agreed to share technology and set international standards for biofuels. The Brazilian sugar cane technology transfer will permit various Central American countries, such as Honduras, Nicaragua, Costa Rica and Panama, several Caribbean countries, and various Andean countries tariff-free trade with the U.S. thanks to existing concessionary trade agreements. Even though the U.S. imposes a US$0.54 tariff on every gallon of imported ethanol, the Caribbean nations and countries in the Central American Free Trade Agreement are exempt from such duties if they produce ethanol from crops grown in their own countries. The expectation is that, using Brazilian technology for refining sugar cane based ethanol, such countries could become exporters to the United States in the short term. In August 2007, Brazil's President toured Mexico and several countries in Central America and the Caribbean to promote Brazilian ethanol technology. This alliance between the U.S. and Brazil generated some negative reactions. While Bush was in São Paulo as part of the 2007 Latin American tour, Venezuela's President Hugo Chavez, from Buenos Aires, dismissed the ethanol plan as "a crazy thing" and accused the U.S. of trying "to substitute the production of foodstuffs for animals and human beings with the production of foodstuffs for vehicles, to sustain the American way of life." Chavez's complaints were quickly followed by then Cuban President Fidel Castro, who wrote that "you will see how many people among the hungry masses of our planet will no longer consume corn." "Or even worse", he continued, "by offering financing to poor countries to produce ethanol from corn or any other kind of food, no tree will be left to defend humanity from climate change." Daniel Ortega, Nicaragua's President, and one of the preferential recipients of Brazil's technical aid, said that "we reject the gibberish of those who applaud Bush's totally absurd proposal, which attacks the food security rights of Latin Americans and Africans, who are major corn consumers"; however, he voiced support for sugar cane based ethanol during Lula's visit to Nicaragua. The 2008 controversy: Global food prices As a result of the international community's concerns regarding the steep increase in food prices, on 14 April 2008, Jean Ziegler, the United Nations Special Rapporteur on the Right to Food, at the Thirtieth Regional Conference of the Food and Agriculture Organization (FAO) in Brasília, called biofuels a "crime against humanity", a claim he had previously made in October 2007, when he called for a 5-year ban on the conversion of land for the production of biofuels. 
The previous day, at the Annual International Monetary Fund and World Bank Group meeting in Washington, D.C., the World Bank's President, Robert Zoellick, stated that "While many worry about filling their gas tanks, many others around the world are struggling to fill their stomachs. And it's getting more and more difficult every day." Luiz Inácio Lula da Silva gave a strong rebuttal, calling both claims "fallacies resulting from commercial interests", and putting the blame instead on U.S. and European agricultural subsidies, and a problem restricted to U.S. ethanol produced from maize. He also said that "biofuels aren't the villain that threatens food security". In the middle of this new wave of criticism, Hugo Chavez reaffirmed his opposition and said that he was concerned that "so much U.S.-produced corn could be used to make biofuel, instead of feeding the world's poor", calling the U.S. initiative to boost ethanol production during a world food crisis a "crime". German Chancellor Angela Merkel said the rise in food prices was due to poor agricultural policies and changing eating habits in developing nations, not biofuels as some critics claim. On the other hand, British Prime Minister Gordon Brown called for international action and said Britain had to be "selective" in supporting biofuels, and depending on the UK's assessment of biofuels' impact on world food prices, "we will also push for change in EU biofuels targets". Stavros Dimas, European Commissioner for the Environment, said through a spokeswoman that "there is no question for now of suspending the target fixed for biofuels", though he acknowledged that the EU had underestimated problems caused by biofuels. On 29 April 2008, U.S. President George W. Bush declared during a press conference that "85 percent of the world's food prices are caused by weather, increased demand and energy prices", and recognized that "15 percent has been caused by ethanol". He added that "the high price of gasoline is going to spur more investment in ethanol as an alternative to gasoline. And the truth of the matter is it's in our national interests that our farmers grow energy, as opposed to us purchasing energy from parts of the world that are unstable or may not like us." Regarding the effect of agricultural subsidies on rising food prices, Bush said that "Congress is considering a massive, bloated farm bill that would do little to solve the problem. The bill Congress is now considering would fail to eliminate subsidy payments to multi-millionaire farmers"; he continued, "this is the right time to reform our nation's farm policies by reducing unnecessary subsidies". Just a week before this new wave of international controversy began, U.N. Secretary-General Ban Ki-moon had commented that several U.N. agencies were conducting a comprehensive review of the policy on biofuels, as the world food price crisis might trigger global instability. He said "We need to be concerned about the possibility of taking land or replacing arable land because of these biofuels", then added "While I am very much conscious and aware of these problems, at the same time you need to constantly look at having creative sources of energy, including biofuels. Therefore, at this time, just criticising biofuel may not be a good solution. I would urge we need to address these issues in a comprehensive manner." The Secretary-General likewise rejected Jean Ziegler's proposal for a five-year ban. 
A report released by Oxfam in June 2008 criticized biofuel policies of high-income countries as neither a solution to the climate crisis nor the oil crisis, while contributing to the food price crisis. The report concluded that, of all the biofuels available on the market, Brazilian sugarcane ethanol, while not perfect, is the most favorable biofuel in the world in terms of cost and greenhouse gas balance. The report discusses some existing problems and potential risks, and urges the Brazilian government to exercise caution to avoid jeopardizing its environmental and social sustainability. The report also says that: "Rich countries spent up to $15 billion last year supporting biofuels while blocking cheaper Brazilian ethanol, which is far less damaging for global food security." A World Bank research report published in July 2008 found that from June 2002 to June 2008 "biofuels and the related consequences of low grain stocks, large land use shifts, speculative activity and export bans" pushed prices up by 70 percent to 75 percent. The study found that higher oil prices and a weak dollar explain 25–30% of the total price rise. The study said that "large increases in biofuels production in the United States and Europe are the main reason behind the steep rise in global food prices" and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher". The Renewable Fuels Association (RFA) published a rebuttal based on the version leaked before its formal release. The RFA critique considers that the analysis is highly subjective and that the author "estimates the impact of global food prices from the weak dollar and the direct and indirect effect of high petroleum prices and attributes everything else to biofuels". An economic assessment by the OECD, also published in July 2008, agrees with the World Bank report regarding the negative effects of subsidies and trade restrictions, but found that the impact of biofuels on food prices is much smaller. The OECD study is also critical of the limited reduction of greenhouse gas emissions achieved from biofuels produced in Europe and North America, concluding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8 percent by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80 percent compared to fossil fuels. The assessment calls on governments for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs. The OECD study concluded that "current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years." Another World Bank research report published in July 2010 found their previous study may have overestimated the contribution of biofuel production, as the paper concluded that "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called "financialization of commodities") may have been partly responsible for the 2007/08 spike". See also Biodiesel Biofuel Biofuel advocacy groups Bioplastics: impact on food Commodity price shocks Corn stoves Deforestation Distillers grains Ethanol economy Ethanol fuel in Australia Ethanol fuel in Brazil Ethanol fuel in Sweden Ethanol fuel in the Philippines Ethanol fuel in the United States Food security Food vs. 
feed Methanol economy Methanol fuel Malthusian catastrophe Oil depletion Vegetable oil economy World Agricultural Supply and Demand Estimates (monthly report) 2007–2008 world food price crisis References Bibliography See Chapter 7, Food, Farming, and Land Use. External links Avoiding Bioenergy Competition for Food Crops and Land FAO World Food Situation World Food Security: the Challenges of Climate Change and Bioenergy Global Trade and Environmental Impact Study of the EU Biofuels Mandate by the International Food Policy Institute (IFPRI) March 2010 Policy Research Working Paper WPS 5371: Placing the 2006/08 Commodity Price Boom into Perspective, July 2010 Reconciling food security and bioenergy: priorities for action, Global Change Biology Bioenergy Journal, June 2016. Towards Sustainable Production and Use of Resources: Assessing Biofuels, United Nations Environment Programme, October 2009 Biofuels Peak oil Energy and the environment Energy economics Dilemmas Climate change and agriculture Environmental ethics
Food vs. fuel
[ "Environmental_science" ]
9,378
[ "Energy economics", "Environmental social science", "Environmental ethics" ]
16,011,006
https://en.wikipedia.org/wiki/Worst-case%20circuit%20analysis
Worst-case circuit analysis (WCCA or WCA) is a cost-effective means of screening a design to ensure with a high degree of confidence that potential defects and deficiencies are identified and eliminated prior to and during test, production, and delivery. It is a quantitative assessment of equipment performance, accounting for manufacturing, environmental and aging effects. In addition to a circuit analysis, a WCCA often includes stress and derating analysis, failure modes, effects and criticality analysis (FMECA) and reliability prediction (MTBF). The specific objective is to verify that the design is robust enough to provide operation which meets the system performance specification over design life under worst-case conditions and tolerances (initial, aging, radiation, temperature, etc.). Stress and derating analysis is intended to increase reliability by providing sufficient margin compared to the allowable stress limits. This reduces overstress conditions that may induce failure, and reduces the rate of stress-induced parameter change over life. It determines the maximum applied stress to each component in the system. General information A worst-case circuit analysis should be performed on all circuitry that is safety- or financially critical. Worst-case circuit analysis is an analysis technique which, by accounting for component variability, determines circuit performance under a worst-case scenario (under extreme environmental or operating conditions). Environmental conditions are defined as external stresses applied to each circuit component, such as temperature, humidity or radiation. Operating conditions include external electrical inputs, component quality level, interaction between parts, and drift due to component aging. WCCA helps in the process of building design reliability into hardware for long-term field operation. Electronic piece-parts fail in two distinct modes: catastrophically, or by drifting out of tolerance, in which case the circuit continues to operate, though with degraded performance, and ultimately exceeds the circuit's required operating limits. Catastrophic failures may be minimized through MTBF, stress and derating, and FMECA analyses, which help to ensure that all components are properly derated and that degradation occurs gracefully. A WCCA permits the prediction and assessment of circuit performance limits under all combinations of part tolerances. There are many reasons to perform a WCCA, several of which bear directly on schedule and cost. Methodology Worst-case analysis is the analysis of a device (or system) that assures that the device meets its performance specifications. It typically accounts for tolerances due to initial component tolerance, temperature tolerance, age tolerance and environmental exposures (such as radiation for a space device). The beginning-of-life analysis comprises the initial tolerances and provides the data sheet limits for the manufacturing test cycle. The end-of-life analysis adds the degradation resulting from aging and temperature effects on the elements within the device or system. This analysis is usually performed using SPICE, but mathematical models of individual circuits within the device (or system) are needed to determine the sensitivities or the worst-case performance. A computer program is frequently used to total and summarize the results. 
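As a minimal sketch of what such a program does — the circuit, component values, and tolerances below are hypothetical, not drawn from any particular standard or tool — the following Python example numerically estimates the sensitivity of a resistive voltage divider's output to each parameter, forms per-parameter contributions as sensitivity times tolerance, and combines them both as an extreme value analysis (EVA) sum and as a root-sum-square (RSS), mirroring the steps listed just below:

```python
import math

# Hypothetical worst-case sketch: output of a resistive voltage divider,
# Vout = Vin * R2 / (R1 + R2), with made-up nominal values and tolerances.
def vout(vin, r1, r2):
    return vin * r2 / (r1 + r2)

NOMINAL = {"vin": 5.0, "r1": 10e3, "r2": 10e3}      # volts, ohms
TOLERANCE = {"vin": 0.005, "r1": 0.01, "r2": 0.01}  # fractional: 0.5%, 1%, 1%

nominal_out = vout(**NOMINAL)

def contribution(name, delta=1e-6):
    """Sensitivity of Vout to a fractional change in one parameter,
    times that parameter's tolerance: the linearized worst-case term."""
    bumped = dict(NOMINAL)
    bumped[name] *= 1 + delta
    sensitivity = (vout(**bumped) - nominal_out) / delta
    return abs(sensitivity) * TOLERANCE[name]

terms = [contribution(p) for p in NOMINAL]

eva = sum(terms)                            # all parameters at their worst simultaneously
rss = math.sqrt(sum(t * t for t in terms))  # statistical (root-sum-square) combination

print(f"nominal: {nominal_out:.4f} V")
print(f"EVA:     {nominal_out - eva:.4f} V .. {nominal_out + eva:.4f} V")
print(f"RSS:     {nominal_out - rss:.4f} V .. {nominal_out + rss:.4f} V")
```

EVA gives bounds that hold even if every parameter is simultaneously at its limit; RSS gives a tighter interval that is statistically justified when the parameter variations are independent. Which bound is appropriate is a project-level choice, which is one reason the step list below calls for using at least two methods of analysis.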
A WCCA follows these steps: Generate/obtain circuit model Obtain correlation to validate model Determine sensitivity to each component parameter Determine component tolerances Calculate the variance of each component parameter as sensitivity times absolute tolerance Use at least two methods of analysis (e.g. hand analysis and SPICE or Saber, SPICE and measured data) to assure the result Generate a formal report to convey the information produced The design is broken down into the appropriate functional sections. A mathematical model of the circuit is developed and the effects of various part/system tolerances are applied. The circuit's extreme value analysis (EVA) and root-sum-square (RSS) results are determined for beginning-of-life and end-of-life states. These results are used to calculate part stresses and are applied to other analyses. In order for the WCCA to be useful throughout the product's life cycle, it is extremely important that the analysis be documented in a clear and concise format. This will allow for future updates and review by engineers other than the original designer. A compliance matrix is generated that clearly identifies the results and all issues. References External links WCCA: Simple Comparison of Different Methods, DOI: 10.13140/RG.2.2.13287.75689 Mil-Std 785B has a short section on WCCA Why Perform a Worst Case Analysis Aerospace Corporation - Aerospace Corp. Mission Assurance Improvement Workshop: Electrical Design Worst-Case Circuit Analysis: Guidelines and Draft Standard (REV A) (MAIW), TOR-2013-00297 European Cooperation for Space Standardization, See Worst case circuit performance analysis - ECSS-Q-30-01A and ECSS-Q-HB-30-01A and Dependability ECSS-Q-ST-30C Reliability analysis
Worst-case circuit analysis
[ "Engineering" ]
976
[ "Reliability analysis", "Reliability engineering" ]
2,177,071
https://en.wikipedia.org/wiki/B-tagging
b-tagging is a method of jet flavor tagging used in modern particle physics experiments. It is the identification (or "tagging") of jets originating from bottom quarks (or b quarks, hence the name). Importance b-tagging is important because: The physics of bottom quarks is quite interesting; in particular, it sheds light on CP violation. Some important high-mass particles (both recently discovered and hypothetical) decay into bottom quarks. Top quarks very nearly always do so, and given its observed mass of about 125 GeV, the Higgs boson is expected to decay into bottom quarks more often than into any other particle. Identifying bottom quarks helps to identify the decays of these particles. Methods The methods for b-tagging are based on the unique features of b-jets. These include: Hadrons containing bottom quarks have sufficient lifetime that they travel some distance before decaying. On the other hand, their lifetimes are not as long as those of hadrons built from light quarks, so they decay inside the detector rather than escape. The advent of precision silicon detectors within particle detectors has made it possible to identify particles that originate from a place different from where the bottom quark was formed (e.g. the beam–beam collision point in a particle accelerator), and thus to infer the likely presence of a b-jet. The bottom quark is much more massive than anything it decays into. Thus its decay products tend to have higher transverse momentum (momentum perpendicular to the original direction of the bottom quark, and therefore of the b-jet). This causes b-jets to be wider, have higher multiplicities (numbers of constituent particles) and invariant masses, and also to contain low-energy leptons with momentum perpendicular to the jet. These two features can be measured, and jets that have them are more likely to be b-jets. Opposite-side algorithms have been used at LHCb to tag the flavor of a signal B meson by using the decay products of the other b hadron produced in the event. None of the methods of identifying b-jets are foolproof, and modern particle physics experiments must devote significant time to studying how often they successfully identify b-jets and how often they misidentify other jets. Monte Carlo simulations are used to develop and evaluate the performance of tagging algorithms. Experiments making precise measurements of B mesons (mesons containing b-quarks) also try to identify the particular initial B meson within the jet. This is done in order to observe the oscillation of one meson into another (B–B̄ oscillation), which allows the measurement of CP violation. See also B-Factory B–B̄ oscillation References Experimental particle physics Hadrons B physics
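As a toy illustration of the displaced-track idea in the Methods section above, one can simply count tracks whose transverse impact parameter is significantly nonzero. The track representation, thresholds, and cut values below are invented for illustration; production taggers at real experiments are multivariate and far more elaborate.

```python
# Toy track-counting b-tagger: flag a jet if it has at least `min_tracks`
# tracks with impact-parameter significance |d0|/sigma(d0) above `cut`.
# Thresholds and the track format are illustrative only.
from dataclasses import dataclass

@dataclass
class Track:
    d0: float        # transverse impact parameter [mm]
    sigma_d0: float  # its measurement uncertainty [mm]

def ip_significance(track: Track) -> float:
    return abs(track.d0) / track.sigma_d0

def is_b_tagged(jet_tracks, cut=3.0, min_tracks=2):
    """Jet passes if >= min_tracks tracks are significantly displaced."""
    displaced = sum(1 for t in jet_tracks if ip_significance(t) > cut)
    return displaced >= min_tracks

# Example: a jet with two clearly displaced tracks is tagged.
jet = [Track(0.45, 0.05), Track(0.30, 0.04), Track(0.01, 0.03)]
print(is_b_tagged(jet))  # True
```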
B-tagging
[ "Physics" ]
585
[ "Matter", "Hadrons", "Experimental physics", "Particle physics", "Experimental particle physics", "Subatomic particles" ]
2,178,278
https://en.wikipedia.org/wiki/Jet%20%28particle%20physics%29
A jet is a narrow cone of hadrons and other particles produced by the hadronization of quarks and gluons in a particle physics or heavy ion experiment. Particles carrying a color charge, i.e. quarks and gluons, cannot exist in free form because of quantum chromodynamics (QCD) confinement which only allows for colorless states. When protons collide at high energies, their color charged components each carry away some of the color charge. In accordance with confinement, these fragments create other colored objects around them to form colorless hadrons. The ensemble of these objects is called a jet, since the fragments all tend to travel in the same direction, forming a narrow "jet" of particles. Jets are measured in particle detectors and studied in order to determine the properties of the original quarks. A jet definition includes a jet algorithm and a recombination scheme. The former defines how some inputs, e.g. particles or detector objects, are grouped into jets, while the latter specifies how a momentum is assigned to a jet. For example, jets can be characterized by the thrust. The jet direction (jet axis) can be defined as the thrust axis. In particle physics experiments, jets are usually built from clusters of energy depositions in the detector calorimeter. When studying simulated processes, the calorimeter jets can be reconstructed based on a simulated detector response. However, in simulated samples, jets can also be reconstructed directly from stable particles emerging from fragmentation processes. Particle-level jets are often referred to as truth-jets. A good jet algorithm usually allows for obtaining similar sets of jets at different levels in the event evolution. Typical jet reconstruction algorithms are, e.g., the anti-kT algorithm, kT algorithm, cone algorithm. A typical recombination scheme is the E-scheme, or 4-vector scheme, in which the 4-vector of a jet is defined as the sum of 4-vectors of all its constituents. In relativistic heavy ion physics, jets are important because the originating hard scattering is a natural probe for the QCD matter created in the collision, and indicate its phase. When the QCD matter undergoes a phase crossover into quark gluon plasma, the energy loss in the medium grows significantly, effectively quenching (reducing the intensity of) the outgoing jet. Examples of jet analysis techniques are: jet correlation flavor tagging (e.g., b-tagging) jet substructure. The Lund string model is an example of a jet fragmentation model. Jet production Jets are produced in QCD hard scattering processes, creating high transverse momentum quarks or gluons, collectively called partons in the partonic picture. The probability of creating a certain set of jets is described by the jet production cross section, which is an average of elementary perturbative QCD quark, antiquark, and gluon processes, weighted by the parton distribution functions. For the most frequent jet pair production process, the two-particle scattering, the jet production cross section in a hadronic collision is given by $\sigma = \sum_{i,j} \int dx_1\, dx_2\, f_i^{(a)}(x_1, Q^2)\, f_j^{(b)}(x_2, Q^2)\, \hat{\sigma}_{ij \to k}$ with x, Q2: longitudinal momentum fraction and momentum transfer $\hat{\sigma}_{ij \to k}$: perturbative QCD cross section for the reaction ij → k $f_i^{(a)}(x, Q^2)$: parton distribution function for finding particle species i in beam a. Elementary cross sections are e.g. calculated to the leading order of perturbation theory in Peskin & Schroeder (1995), section 17.4.
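To make the jet-definition discussion above concrete, here is a toy sequential-recombination clusterer using the anti-kT distance measure as commonly defined in the literature, together with the E-scheme (4-vector sum) recombination named earlier. This is a sketch for intuition only; real analyses use optimized implementations such as FastJet. Particles are assumed to satisfy E > |pz| so the rapidity is well defined.

```python
# Toy sequential-recombination jet clustering with the anti-kT distance
# measure and E-scheme (4-vector sum) recombination. Illustrative only;
# production code uses FastJet. Particles are (E, px, py, pz) tuples.
import math

def pt2(p):  # squared transverse momentum
    return p[1] ** 2 + p[2] ** 2

def rap(p):  # rapidity (requires E > |pz|)
    return 0.5 * math.log((p[0] + p[3]) / (p[0] - p[3]))

def phi(p):
    return math.atan2(p[2], p[1])

def dR2(a, b):  # squared distance in the rapidity-azimuth plane
    dphi = abs(phi(a) - phi(b))
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return (rap(a) - rap(b)) ** 2 + dphi ** 2

def antikt(particles, R=0.4):
    ps, jets = list(particles), []
    while ps:
        # anti-kT: d_ij = min(1/pt_i^2, 1/pt_j^2) * dR^2 / R^2; d_iB = 1/pt_i^2
        diB = [(1.0 / pt2(p), i) for i, p in enumerate(ps)]
        dij = [(min(1 / pt2(ps[i]), 1 / pt2(ps[j])) * dR2(ps[i], ps[j]) / R ** 2, i, j)
               for i in range(len(ps)) for j in range(i + 1, len(ps))]
        best_beam = min(diB)
        best_pair = min(dij) if dij else (float("inf"), -1, -1)
        if best_pair[0] < best_beam[0]:
            _, i, j = best_pair
            merged = tuple(ps[i][k] + ps[j][k] for k in range(4))  # E-scheme merge
            ps = [p for k, p in enumerate(ps) if k not in (i, j)] + [merged]
        else:
            jets.append(ps.pop(best_beam[1]))   # promote to a final jet
    return jets
```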
A review of various parameterizations of parton distribution functions and the calculation in the context of Monte Carlo event generators are discussed in T. Sjöstrand et al. (2003), section 7.4.1. Jet fragmentation Perturbative QCD calculations may have colored partons in the final state, but only the colorless hadrons that are ultimately produced are observed experimentally. Thus, to describe what is observed in a detector as a result of a given process, all outgoing colored partons must first undergo parton showering and then combination of the produced partons into hadrons. The terms fragmentation and hadronization are often used interchangeably in the literature to describe soft QCD radiation, formation of hadrons, or both processes together. As the parton which was produced in a hard scatter exits the interaction, the strong coupling constant will increase with its separation. This increases the probability for QCD radiation, which is predominantly shallow-angled with respect to the progenitor parton. Thus, one parton will radiate gluons, which will in turn radiate quark–antiquark pairs and so on, with each new parton nearly collinear with its parent. This can be described by evolving the fragmentation functions $D_i(z, \mu^2)$, in a similar manner to the evolution of parton density functions. This is described by a Dokshitzer–Gribov–Lipatov–Altarelli–Parisi (DGLAP) type equation $\frac{\partial D_i(z, \mu^2)}{\partial \ln \mu^2} = \sum_j \int_z^1 \frac{dz'}{z'}\, \frac{\alpha_s(\mu^2)}{2\pi}\, P_{ji}(z')\, D_j(z/z', \mu^2)$ Parton showering produces partons of successively lower energy, and must therefore exit the region of validity for perturbative QCD. Phenomenological models must then be applied to describe the stage at which showering ends, and then the combination of colored partons into bound states of colorless hadrons, which is inherently non-perturbative. One example is the Lund String Model, which is implemented in many modern event generators. Infrared and collinear safety A jet algorithm is infrared safe if it yields the same set of jets after modifying an event to add soft radiation. Similarly, a jet algorithm is collinear safe if the final set of jets is not changed after introducing a collinear splitting of one of the inputs. There are several reasons why a jet algorithm must fulfill these two requirements. Experimentally, jets are useful if they carry information about the seed parton. When produced, the seed parton is expected to undergo a parton shower, which may include a series of nearly-collinear splittings before the hadronization starts. Furthermore, the jet algorithm must be robust when it comes to fluctuations in the detector response. Theoretically, if a jet algorithm is not infrared and collinear safe, it cannot be guaranteed that a finite cross-section can be obtained at any order of perturbation theory. See also Dijet event References M. Gyulassy et al., "Jet Quenching and Radiative Energy Loss in Dense Nuclear Matter", in R.C. Hwa & X.-N. Wang (eds.), Quark Gluon Plasma 3 (World Scientific, Singapore, 2003). J. E. Huth et al., in E. L. Berger (ed.), Proceedings of Research Directions For The Decade: Snowmass 1990, (World Scientific, Singapore, 1992), 134. (Preprint at Fermilab Library Server) M. E. Peskin, D. V. Schroeder, "An Introduction to Quantum Field Theory" (Westview, Boulder, CO, 1995). T. Sjöstrand et al., "Pythia 6.3 Physics and Manual", Report LU TP 03-38 (2003). G. Sterman, "QCD and Jets", Report YITP-SB-04-59 (2004). External links The Pythia/Jetset Monte Carlo event generator The FastJet jet clustering program Experimental particle physics
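The two safety requirements described above can be probed numerically: rerun a clustering function on an event after adding a very soft particle, or after splitting one input collinearly, and check that the hard jets are unchanged. This sketch reuses the hypothetical antikt and pt2 helpers from the previous example; the tolerances and the soft test momentum are arbitrary choices.

```python
# Numerical probe of infrared and collinear safety for a clustering function
# `cluster` (e.g. the toy antikt above; `pt2` comes from the same sketch).
def hard_jets(jets, min_pt=1.0):
    # Compare only jets above a pT threshold, so a stray soft "jet" is ignored.
    return sorted(j for j in jets if pt2(j) > min_pt ** 2)

def same_jets(a, b, tol=1e-4):
    a, b = hard_jets(a), hard_jets(b)
    return len(a) == len(b) and all(
        abs(x - y) < tol for pa, pb in zip(a, b) for x, y in zip(pa, pb))

def infrared_safe(cluster, event):
    soft = (1e-6, 5e-7, 5e-7, 0.0)           # a very soft extra emission
    return same_jets(cluster(event), cluster(event + [soft]))

def collinear_safe(cluster, event):
    half = tuple(x / 2 for x in event[0])     # split one input into two halves
    return same_jets(cluster(event), cluster(event[1:] + [half, half]))
```

A cone algorithm with a naive hardest-particle seed would typically fail the collinear check here, which is exactly the pathology the section above describes.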
Jet (particle physics)
[ "Physics" ]
1,528
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
2,178,380
https://en.wikipedia.org/wiki/Induced%20gamma%20emission
In physics, induced gamma emission (IGE) refers to the process of fluorescent emission of gamma rays from excited nuclei, usually involving a specific nuclear isomer. It is analogous to conventional fluorescence, which is defined as the emission of a photon (unit of light) by an excited electron in an atom or molecule. In the case of IGE, nuclear isomers can store significant amounts of excitation energy for times long enough for them to serve as nuclear fluorescent materials. There are over 800 known nuclear isomers but almost all are too intrinsically radioactive to be considered for applications. Two proposed nuclear isomers, tantalum-180m and hafnium-178m2, appeared to be physically capable of IGE fluorescence in safe arrangements. History Induced gamma emission is an example of interdisciplinary research bordering on both nuclear physics and quantum electronics. Viewed as a nuclear reaction it would belong to a class in which only photons were involved in creating and destroying states of nuclear excitation. It is a class usually overlooked in traditional discussions. In 1939 Pontecorvo and Lazard reported the first example of this type of reaction. Indium was the target and in modern terminology describing nuclear reactions it would be written 115In(γ,γ')115mIn. The product nuclide carries an "m" to denote that it has a long enough half life (4.5 h in this case) to qualify as being a nuclear isomer. That is what made the experiment possible in 1939 because the researchers had hours to remove the products from the irradiating environment and then to study them in a more appropriate location. With projectile photons, momentum and energy can be conserved only if the incident photon, X-ray or gamma, has precisely the energy corresponding to the difference in energy between the initial state of the target nucleus and some excited state that is not too different in terms of quantum properties such as spin. There is no threshold behavior and the incident projectile disappears and its energy is transferred into internal excitation of the target nucleus. It is a resonant process that is uncommon in nuclear reactions but normal in the excitation of fluorescence at the atomic level. Only as recently as 1988 was the resonant nature of this type of reaction finally proven. Such resonant reactions are more readily described by the formalities of atomic fluorescence and further development was facilitated by an interdisciplinary approach to IGE. There is little conceptual difference in an IGE experiment when the target is a nuclear isomer. A reaction such as mX(γ,γ')X, where mX is one of the candidate isomers mentioned above, is different only because there are lower energy states for the product nuclide to enter after the reaction than there were at the start. Practical difficulties arise from the need to ensure safety from the spontaneous radioactive decay of nuclear isomers in quantities sufficient for experimentation. Lifetimes must be long enough that doses from the spontaneous decay of the targets always remain within safe limits. In 1988 Collins and coworkers reported the first excitation of IGE from a nuclear isomer. They excited fluorescence from the nuclear isomer tantalum-180m with x-rays produced by an external beam radiotherapy linac. Results were surprising and considered to be controversial until the resonant states excited in the target were identified.
Distinctive features If an incident photon is absorbed by an initial state of a target nucleus, that nucleus will be raised to a more energetic state of excitation. If that state can radiate its energy only during a transition back to the initial state, the result is a scattering process as seen in the schematic figure. That is not an example of IGE. If an incident photon is absorbed by an initial state of a target nucleus, that nucleus will be raised to a more energetic state of excitation. If there is a nonzero probability that sometimes that state will start a cascade of transitions as shown in the schematic, that state has been called a "gateway state" or "trigger level" or "intermediate state". One or more fluorescent photons are emitted, often with different delays after the initial absorption and the process is an example of IGE. If the initial state of the target nucleus is its ground (lowest energy) state, then the fluorescent photons will have less energy than that of the incident photon (as seen in the schematic figure). Since the scattering channel is usually the strongest, it can "blind" the instruments being used to detect the fluorescence and early experiments preferred to study IGE by pulsing the source of incident photons while detectors were gated off and then concentrating upon any delayed photons of fluorescence when the instruments could be safely turned back on. If the initial state of the target nucleus is a nuclear isomer (starting with more energy than the ground state) it can also support IGE. However, in that case the schematic diagram is not simply the example seen for 115In but is read from right to left with the arrows turned the other way. Such a "reversal" would require simultaneous (to within <0.25 ns) absorption of two incident photons of different energies to get from the 4 h isomer back up to the "gateway state". Usually the study of IGE from a ground state to an isomer of the same nucleus teaches little about how the same isomer would perform if used as the initial state for IGE. In order to support IGE an energy for an incident photon would have to be found that would "match" the energy needed to reach some other gateway state not shown in the schematic that could launch its own cascade down to the ground state. If the target is a nuclear isomer storing a considerable amount of energy then IGE might produce a cascade that contains a transition that emits a photon with more energy than that of the incident photon. This would be the nuclear analog of upconversion in laser physics. If the target is a nuclear isomer storing a considerable amount of energy then IGE might produce a cascade through a pair of excited states whose lifetimes are "inverted" so that in a collection of such nuclei, population would build up in the longer lived upper level while emptying rapidly from the shorter lived lower member of the pair. The resulting inversion of population might support some form of coherent emission analogous to amplified spontaneous emission (ASE) in laser physics. If the physical dimensions of the collection of target isomer nuclei were long and thin, then a form of gamma-ray laser might result. Potential applications Energy-specific dosimeters Since IGE from ground-state nuclei requires the absorption of very specific photon energies to produce delayed fluorescent photons that are easily counted, there is the possibility to construct energy-specific dosimeters by combining several different nuclides.
This was demonstrated for the calibration of the radiation spectrum from the DNA-PITHON pulsed nuclear simulator. Such a dosimeter could be useful in radiation therapy where X-ray beams may contain many energies. Since photons of different energies deposit their effects at different depths in the tissue being treated, it could help calibrate how much of the total dose would be deposited in the actual target volume. Aircraft power In February 2003, the non-peer reviewed New Scientist wrote about the possibility of an IGE-powered airplane, a variant on nuclear propulsion. The idea was to utilize 178m2Hf (presumably due to its high energy to weight ratio) which would be triggered to release gamma rays that would heat air in a chamber for jet propulsion. This power source is described as a "quantum nucleonic reactor", although it is not clear if this name exists only in reference to the New Scientist article. Nuclear weaponry It is partly this theoretical energy density of isomer stores that has made the entire IGE field so controversial. It has been suggested that the materials might be constructed to allow all of the stored energy to be released very quickly in a "burst". The possible energy release of the gammas alone would make IGE a potential high power "explosive" on its own, or a potential radiological weapon. Fusion bomb ignition The density of gammas produced in this reaction would be high enough that it might allow them to be used to compress the fusion fuel of a fusion bomb. If this turns out to be the case, it might allow a fusion bomb to be constructed with no fissile material inside (i.e. a pure fusion weapon); it is the control of the fissile material and the means for making it that underlies most attempts to stop nuclear proliferation. See also Particle-induced gamma emission References External links "Scary Things Come in Small Packages", Washington Post article of 2004 by Sharon Weinberger Hf-isomer Summary Page of Results, C.B. Collins, University of Texas, Dallas "Atomic Powered Global Hawk Jet Reving For Take-Off?", a SciScoop weblog entry Conflicting Results on a Long-Lived Nuclear Isomer of Hafnium Have Wider Implications: this Physics Today article provides a balanced view from 2004. Reprints of articles about nuclear isomers in peer reviewed journals - The Center for Quantum Electronics, The University of Texas at Dallas. Nuclear interdisciplinary topics
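To put the "high energy to weight ratio" and energy-density claims above in rough numbers: taking the commonly quoted literature value of about 2.45 MeV of stored excitation per 178m2Hf nucleus (a figure not stated in this article, so treat it as an outside assumption), the stored energy per gram works out to roughly a gigajoule.

```python
# Back-of-envelope stored energy density of Hf-178m2, assuming ~2.45 MeV
# of excitation per nucleus (a literature value; not given in this article).
AVOGADRO = 6.022e23        # nuclei per mole
MOLAR_MASS = 178.0         # g/mol for Hf-178
EV_TO_J = 1.602e-19

e_per_nucleus = 2.45e6 * EV_TO_J                 # joules per nucleus
per_gram = e_per_nucleus * AVOGADRO / MOLAR_MASS
print(f"~{per_gram / 1e9:.1f} GJ per gram")       # ~1.3 GJ/g, several orders
                                                  # of magnitude above chemical
                                                  # explosives (~4 kJ/g for TNT)
```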
Induced gamma emission
[ "Physics" ]
1,881
[ "Nuclear interdisciplinary topics", "Nuclear physics" ]
2,178,421
https://en.wikipedia.org/wiki/Straw%20chamber
A straw chamber is a type of gaseous ionization detector. It is a long tube with a wire down the center and a gas which becomes ionized when a particle passes through. A potential difference is maintained between the wire and the walls of the tube, so that once the gas is ionized, electrons move in one direction and ions in the other. This produces a current which indicates that a particle has passed through the chamber. Many straws together can be used to track particles in a straw tracker. A straw tracker is a type of particle detector which uses many straw chambers to track the path of a particle. The path of a particle is determined by the best fit to all the straws with hits. Since the time for a particular straw to produce a signal is proportional to the distance of the particle's closest approach to that chamber's wire, if a particle on a predictable path (e.g. a helix in a magnetic field) passes through many straws, the path of the particle can be determined more precisely than the size of any particular straw. Specific uses There are about 298,000 drift tubes (straws) in the Transition Radiation Tracker (TRT) of the ATLAS experiment at the Large Hadron Collider. References Particle detectors
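A toy version of the closest-approach fit described above: each hit straw constrains the track to be tangent to a circle of drift radius around its wire, and a straight 2-D track is recovered by least squares. The wire layout, tube radius, smearing, and starting guess below are all invented; a real fit would also resolve the left–right ambiguity explicitly and work with a helix in 3-D.

```python
# Toy straw-tracker fit: a straight track y = m*x + c is tangent to a drift
# circle of radius r_i around each hit wire. All numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
true_m, true_c = 0.3, 1.0
wires = np.array([(x, y) for x in range(8) for y in range(6)], float)

def line_wire_dist(m, c, pts):
    # Perpendicular distance from the line y = m*x + c to each wire position.
    return np.abs(m * pts[:, 0] - pts[:, 1] + c) / np.hypot(m, 1.0)

d_true = line_wire_dist(true_m, true_c, wires)
hit_mask = d_true < 0.5                      # straws crossed (tube radius 0.5)
hits = wires[hit_mask]
radii = d_true[hit_mask] + rng.normal(0.0, 0.01, hit_mask.sum())  # smeared r_i

# Least-squares fit: make the track's distance to each wire equal the
# measured drift radius.
fit = least_squares(lambda p: line_wire_dist(p[0], p[1], hits) - radii,
                    x0=[0.0, 0.5])
print("fitted m, c:", fit.x)                 # close to (0.3, 1.0)
```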
Straw chamber
[ "Physics", "Technology", "Engineering" ]
257
[ "Particle physics stubs", "Particle detectors", "Particle physics", "Measuring instruments" ]
2,178,474
https://en.wikipedia.org/wiki/Particle%20identification
Particle identification is the process of using information left by a particle passing through a particle detector to identify the type of particle. Particle identification reduces backgrounds and improves measurement resolutions, and is essential to many analyses at particle detectors. Charged particles Charged particles have been identified using a variety of techniques. All methods rely on a measurement of the momentum in a tracking chamber combined with a measurement of the velocity to determine the charged particle's mass, and therefore its identity. Specific ionization A charged particle loses energy in matter by ionization at a rate determined in part by its velocity. The energy loss per unit distance is typically called dE/dx. The energy loss is measured either in dedicated detectors, or in tracking chambers designed to also measure energy loss. The energy lost in a thin layer of material is subject to large fluctuations, and therefore accurate determination requires a large number of measurements. Individual measurements in the low- and high-energy tails are excluded. Time of flight Time-of-flight detectors determine charged particle velocity by measuring the time required to travel from the interaction point to the time-of-flight detector, or between two detectors. The ability to distinguish particle types diminishes as the particle velocity approaches its maximum allowed value, the speed of light, and thus is efficient only for particles with a small Lorentz factor. Cherenkov detectors Cherenkov radiation is emitted by a charged particle when it passes through a material with a speed greater than c/n, where n is the index of refraction of the material. The angle of the photons with respect to the charged particle's direction depends on velocity. A number of Cherenkov detector geometries have been used. Photons Photons are identified because they leave all their energy in a detector's electromagnetic calorimeter, but do not appear in the tracking chamber (see, for example, ATLAS Inner Detector) because they are neutral. A neutral pion which decays inside the EM calorimeter can replicate this effect. Electrons Electrons appear as tracks in the inner detector and deposit all their energy in the electromagnetic calorimeter. The energy deposited in the calorimeter must match the momentum measured in the tracking chamber. Muons Muons penetrate more material than other charged particles, and can therefore be identified by their presence in the outermost detectors. Tau particles Tau identification requires differentiating the narrow "jet" produced by the hadronic decay of the tau from ordinary quark jets. Neutrinos Neutrinos do not interact in particle detectors, and therefore escape undetected. Their presence can be inferred by the momentum imbalance of the visible particles in an event. In electron-positron colliders, both the neutrino momentum in all three dimensions and the neutrino energy can be reconstructed. Neutrino energy reconstruction requires accurate charged particle identification. In colliders using hadrons, only the momentum transverse to the beam direction can be determined. Neutral hadrons Neutral hadrons can sometimes be identified in calorimeters. In particular, antineutrons and Ks can be identified. Neutral hadrons can also be identified at electron-positron colliders in the same way as neutrinos. Heavy quarks Quark flavor tagging identifies the flavor of quark that a jet comes from. B-tagging, the identification of bottom quarks, is the most important example.
B-tagging relies on the bottom quark being the heaviest quark involved in a hadronic decay (top quarks are heavier, but a top can appear in a decay only if some even heavier particle is produced first and subsequently decays into it). This implies that the bottom quark has a short but resolvable lifetime, so it is possible to look for its displaced decay vertex in the inner tracker. Additionally, its decay products tend to have large momentum transverse to the jet axis, resulting in a high jet multiplicity. Charm tagging using similar techniques is also possible, but extremely difficult due to the lower mass. Tagging jets from lighter quarks is effectively impossible: because of the QCD background, there are too many indistinguishable jets. See also Spark chamber Wire chamber References Experimental particle physics
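The momentum-plus-velocity logic of the time-of-flight section above reduces to a single formula: with path length L, flight time t and momentum p, the inferred mass is m = (p/c)·sqrt((ct/L)² − 1). The numbers below are illustrative only; real detectors fold in timing resolution and combine this with dE/dx and Cherenkov information.

```python
# Particle ID by time of flight: m = (p/c) * sqrt((c*t/L)^2 - 1).
# Numbers are illustrative.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_mass(p_gev, length_m, time_s):
    """Mass in GeV/c^2 from momentum [GeV/c], path [m], flight time [s]."""
    beta_inv = C * time_s / length_m          # 1/beta = c*t/L
    return p_gev * math.sqrt(beta_inv ** 2 - 1.0)

# A 1 GeV/c particle over 2 m arriving 0.77 ns later than light would:
t = 2.0 / C + 0.77e-9
print(f"{tof_mass(1.0, 2.0, t):.3f} GeV/c^2")  # ~0.494, the kaon mass
```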
Particle identification
[ "Physics" ]
848
[ "Experimental physics", "Particle physics", "Experimental particle physics" ]
2,178,511
https://en.wikipedia.org/wiki/Ephebos
Ephebos (plural: epheboi), latinized as ephebus (pl. ephebi) and anglicised as ephebe (pl. ephebes), is a term for a male adolescent in Ancient Greece. The term was particularly used to denote one who was doing military training and preparing to become an adult. From about 335 BC, ephebes from Athens (aged 18–20) underwent two years of military training under supervision, during which time they were exempt from civic duties and deprived of most civic rights. During the 3rd century BC, ephebic service ceased to be compulsory and its time was reduced to one year. By the 1st century BC, the ephebia became an institution reserved for wealthy individuals and, besides military training, it also included philosophic and literary studies. History Though the word ephebos (from epi "upon" + hebe "youth", "early manhood") can simply refer to the adolescent age of young men of training age, its main use is for the members, exclusively from that age group, of an official institution (ephebia) that saw to building them into citizens, but especially to training them as soldiers, sometimes already sent into the field; the Greek city states (poleis) mainly depended (like the Roman Republic) on their militia of citizens for defense. In the time of Aristotle (384–322 BC), Athens engraved the names of the enrolled ephebi on a bronze pillar (formerly on wooden tablets) in front of the council-chamber. After admission to the college, the ephebus took the oath of allegiance (as recorded in histories by Pollux and Stobaeus—but not in Aristotle) in the temple of Aglaurus and was sent to Munichia or Acte as a member of the garrison. At the end of the first year of training the ephebi were reviewed; if their performance was satisfactory, the state provided each with a spear and a shield, which, together with the chlamys (cloak) and petasos (broad-brimmed hat), made up their equipment. In their second year they were transferred to other garrisons in Attica, patrolled the frontiers, and on occasion took an active part in war. During these two years they remained free from taxation, and were generally not allowed to appear in the law courts as plaintiffs or defendants. The ephebi took part in some of the most important Athenian festivals. Thus during the Eleusinian Mysteries they were sent to fetch the sacred objects from Eleusis and to escort the image of Iacchus on the sacred way. They also performed police duty at the meetings of the ecclesia. After the end of the 4th century BC, the institution underwent a radical change. Enrolment ceased to be obligatory, lasted only for a year, and the limit of age was dispensed with. Inscriptions attest a continually decreasing number of ephebi, and with the admission of foreigners the college lost its representative national character. This was mainly due to the weakening of the military spirit and to the progress of intellectual culture. The military element was no longer all-important, and the ephebia became a sort of university for well-to-do young men of good family, whose social position has been compared with that of the Athenian "knights" of earlier times. The institution lasted till the end of the 3rd century AD. In the Hellenistic and Roman periods, foreigners, including Romans, began to be admitted as ephebes. At this period the college of ephebi was a miniature city, which possessed an archon, strategos, herald and other officials, after the model of the city of Athens.
Sculpture In Ancient Greek sculpture, an ephebe is a sculptural type depicting a nude ephebos (Archaic examples of the type are also often known as the kouros type, or kouroi in the plural). This typological name often occurs in the form "the X Ephebe", where X is the collection to which the object belongs or belonged, or the site on which it was found (e.g. the Agrigento Ephebe). See also Bishōnen Ephebe, a fictional nation in Terry Pratchett's Discworld Ephebic oath Ephebophilia Kóryos Kouros Pauly-Wissowa References H. Jeanmaire, Couroi et Courètes: Essai sur l'éducation spartiate et sur les rites d'adolescence dans l'Antiquité hellénique, Bibliothèque universitaire, Lille, 1939 C. Pélékidis, Éphébie: Histoire de l'éphébie attique, des origines à 31 av. J.-C., éd. de Boccard, Paris, 1962 O. W. Reinmuth, The Ephebic Inscriptions of the Fourth Century B.C., Leiden Brill, Leyde, 1971 P. Vidal-Naquet, Le Chasseur noir et l'origine de l'éphébie athénienne, Maspéro, 1981 P. Vidal-Naquet, Le Chasseur noir. Formes de pensée et formes de société dans le monde grec, Maspéro, 1981 U. von Wilamowitz-Moellendorf, Aristoteles und Athen, 2 vol., Berlin, 1916 External links Ephebarchic Law of Amphipolis Ancient Greek sculptures Human development Social classes in ancient Greece Society of ancient Greece Society of ancient Rome Pederasty in ancient Greece
Ephebos
[ "Biology" ]
1,175
[ "Behavioural sciences", "Behavior", "Human development" ]
2,178,720
https://en.wikipedia.org/wiki/Koopmans%27%20theorem
Koopmans' theorem states that in closed-shell Hartree–Fock theory (HF), the first ionization energy of a molecular system is equal to the negative of the orbital energy of the highest occupied molecular orbital (HOMO). This theorem is named after Tjalling Koopmans, who published this result in 1934. Koopmans' theorem is exact in the context of restricted Hartree–Fock theory if it is assumed that the orbitals of the ion are identical to those of the neutral molecule (the frozen orbital approximation). Ionization energies calculated this way are in qualitative agreement with experiment – the first ionization energy of small molecules is often calculated with an error of less than two electron volts. Therefore, the validity of Koopmans' theorem is intimately tied to the accuracy of the underlying Hartree–Fock wavefunction. The two main sources of error are orbital relaxation, which refers to the changes in the Fock operator and Hartree–Fock orbitals when changing the number of electrons in the system, and electron correlation, referring to the validity of representing the entire many-body wavefunction using the Hartree–Fock wavefunction, i.e. a single Slater determinant composed of orbitals that are the eigenfunctions of the corresponding self-consistent Fock operator. Empirical comparisons with experimental values and higher-quality ab initio calculations suggest that in many cases, but not all, the energetic corrections due to relaxation effects nearly cancel the corrections due to electron correlation. A similar theorem (Janak's theorem) exists in density functional theory (DFT) for relating the exact first vertical ionization energy and electron affinity to the HOMO and LUMO energies, although both the derivation and the precise statement differ from that of Koopmans' theorem. Ionization energies calculated from DFT orbital energies are usually poorer than those of Koopmans' theorem, with errors much larger than two electron volts possible depending on the exchange-correlation approximation employed. The LUMO energy shows little correlation with the electron affinity with typical approximations. The error in the DFT counterpart of Koopmans' theorem is a result of the approximation employed for the exchange correlation energy functional so that, unlike in HF theory, there is the possibility of improved results with the development of better approximations. Generalizations While Koopmans' theorem was originally stated for calculating ionization energies from restricted (closed-shell) Hartree–Fock wavefunctions, the term has since taken on a more generalized meaning as a way of using orbital energies to calculate energy changes due to changes in the number of electrons in a system. Ground-state and excited-state ions Koopmans’ theorem applies to the removal of an electron from any occupied molecular orbital to form a positive ion. Removal of the electron from different occupied molecular orbitals leads to the ion in different electronic states. The lowest of these states is the ground state and this often, but not always, arises from removal of the electron from the HOMO. The other states are excited electronic states. For example, the electronic configuration of the H2O molecule is (1a1)2 (2a1)2 (1b2)2 (3a1)2 (1b1)2, where the symbols a1, b2 and b1 are orbital labels based on molecular symmetry. From Koopmans’ theorem the energy of the 1b1 HOMO corresponds to the ionization energy to form the H2O+ ion in its ground state (1a1)2 (2a1)2 (1b2)2 (3a1)2 (1b1)1. 
The energy of the second-highest MO 3a1 refers to the ion in the excited state (1a1)2 (2a1)2 (1b2)2 (3a1)1 (1b1)2, and so on. In this case the order of the ion electronic states corresponds to the order of the orbital energies. Excited-state ionization energies can be measured by photoelectron spectroscopy. For H2O, the near-Hartree–Fock orbital energies (with sign changed) of these orbitals are 1a1 559.5, 2a1 36.7, 1b2 19.5, 3a1 15.9 and 1b1 13.8 eV. The corresponding ionization energies are 539.7, 32.2, 18.5, 14.7 and 12.6 eV. As explained above, the deviations are due to the effects of orbital relaxation as well as differences in electron correlation energy between the molecular and the various ionized states. For N2 in contrast, the order of orbital energies is not identical to the order of ionization energies. Near-Hartree–Fock calculations with a large basis set indicate that the 1πu bonding orbital is the HOMO. However, the lowest ionization energy corresponds to removal of an electron from the 3σg bonding orbital. In this case the deviation is attributed primarily to the difference in correlation energy between the two orbitals. For electron affinities It is sometimes claimed that Koopmans' theorem also allows the calculation of electron affinities as the energy of the lowest unoccupied molecular orbitals (LUMO) of the respective systems. However, Koopmans' original paper makes no claim with regard to the significance of eigenvalues of the Fock operator other than that corresponding to the HOMO. Nevertheless, it is straightforward to generalize the original statement of Koopmans' to calculate the electron affinity in this sense. Calculations of electron affinities using this statement of Koopmans' theorem have been criticized on the grounds that virtual (unoccupied) orbitals do not have well-founded physical interpretations, and that their orbital energies are very sensitive to the choice of basis set used in the calculation. As the basis set becomes more complete, more and more "molecular" orbitals that are not really on the molecule of interest will appear, and care must be taken not to use these orbitals for estimating electron affinities. Comparisons with experiment and higher-quality calculations show that electron affinities predicted in this manner are generally quite poor. For open-shell systems Koopmans' theorem is also applicable to open-shell systems; however, orbital energies (eigenvalues of Roothaan equations) should be corrected, as was shown in the 1970s. Despite this early work, application of Koopmans' theorem to open-shell systems continued to cause confusion, e.g., it was stated that Koopmans' theorem can only be applied for removing the unpaired electron. Later, the validity of Koopmans' theorem for ROHF was revisited and several procedures for obtaining meaningful orbital energies were reported. The spin up (alpha) and spin down (beta) orbital energies do not necessarily have to be the same. Counterpart in density functional theory Kohn–Sham (KS) density functional theory (KS-DFT) admits its own version of Koopmans' theorem (sometimes called the DFT-Koopmans' theorem) very similar in spirit to that of Hartree-Fock theory. The theorem equates the first (vertical) ionization energy I of a system of N electrons to the negative of the corresponding KS HOMO energy: I = −εHOMO. More generally, this relation is true even when the KS system describes a zero-temperature ensemble with a non-integer number of electrons N − δ for integer N and 0 < δ < 1.
When considering N + δ electrons, the infinitesimal excess charge enters the KS LUMO of the N electron system but then the exact KS potential jumps by a constant known as the "derivative discontinuity". It can be argued that the vertical electron affinity is equal exactly to the negative of the sum of the LUMO energy and the derivative discontinuity. Unlike the approximate status of Koopmans' theorem in Hartree Fock theory (because of the neglect of orbital relaxation), in the exact KS mapping the theorem is exact, including the effect of orbital relaxation. A sketchy proof of this exact relation goes in three stages. First, for any finite system the ionization energy I determines the asymptotic form of the density, which decays as $e^{-2\sqrt{2I}\,r}$. Next, as a corollary (since the physically interacting system has the same density as the KS system), both must have the same ionization energy. Finally, since the KS potential is zero at infinity, the ionization energy of the KS system is, by definition, the negative of its HOMO energy, i.e., I = −εHOMO. While these are exact statements in the formalism of DFT, the use of approximate exchange-correlation potentials makes the calculated energies approximate and often the orbital energies are very different from the corresponding ionization energies (even by several eV!). A tuning procedure is able to "impose" Koopmans' theorem on DFT approximations, thereby improving many of its related predictions in actual applications. In approximate DFTs one can estimate to a high degree of accuracy the deviation from Koopmans' theorem using the concept of energy curvature, which provides excitation energies to zeroth order. Orbital picture within many-body formalisms The concept of molecular orbitals and a Koopmans-like picture of ionization or electron attachment processes can be extended to correlated many-body wavefunctions by introducing Dyson orbitals. Dyson orbitals are defined as the generalized overlap between an N-electron molecular wavefunction and the (N − 1)-electron wavefunction of the ionized system (or the (N + 1)-electron wavefunction of an electron-attached system): $\phi^{d}(x_1) = \sqrt{N} \int \Psi^{N}(x_1, x_2, \ldots, x_N)\, \Psi^{N-1}(x_2, \ldots, x_N)^{*}\, dx_2 \cdots dx_N.$ Hartree-Fock canonical orbitals are Dyson orbitals computed for the Hartree-Fock wavefunction of the N-electron system and the Koopmans approximation of the (N − 1)-electron system. When correlated wavefunctions are used, Dyson orbitals include correlation and orbital relaxation effects. Dyson orbitals contain all information about the initial and final states of the system needed to compute experimentally observable quantities, such as total and differential photoionization/photodetachment cross sections. References External links Quantum chemistry Computational chemistry Theoretical chemistry
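Returning to the closed-shell Hartree–Fock statement at the top of this article, the theorem is easy to check numerically. The sketch below assumes the PySCF package is available; the water geometry and basis set are arbitrary illustrative choices, and the frozen-orbital and correlation caveats discussed above apply to the result.

```python
# Koopmans' estimate of the first ionization energy: -epsilon(HOMO) from a
# restricted Hartree-Fock calculation. Assumes PySCF; basis is arbitrary.
from pyscf import gto, scf

HARTREE_TO_EV = 27.2114

mol = gto.M(atom="O 0 0 0.117; H 0 0.757 -0.467; H 0 -0.757 -0.467",
            basis="cc-pvdz", unit="Angstrom")
mf = scf.RHF(mol).run()

homo = mol.nelectron // 2 - 1          # index of the doubly occupied HOMO in RHF
ie_koopmans = -mf.mo_energy[homo] * HARTREE_TO_EV
print(f"Koopmans IE(H2O) ~ {ie_koopmans:.2f} eV")
# Compare the near-Hartree-Fock 1b1 value of 13.8 eV quoted above and the
# experimental 12.6 eV; the gap reflects relaxation and correlation effects.
```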
Koopmans' theorem
[ "Physics", "Chemistry" ]
2,063
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
2,178,880
https://en.wikipedia.org/wiki/Diabatic%20representation
The diabatic representation is a mathematical tool for theoretical calculations of atomic collisions and of molecular interactions. One of the guiding principles in modern chemical dynamics and spectroscopy is that the motion of the nuclei in a molecule is slow compared to that of its electrons. This is justified by the large disparity between the mass of an electron and the typical mass of a nucleus, and leads to the Born–Oppenheimer approximation and the idea that the structure and dynamics of a chemical species are largely determined by nuclear motion on potential energy surfaces. The potential energy surfaces are obtained within the adiabatic or Born–Oppenheimer approximation. This corresponds to a representation of the molecular wave function where the variables corresponding to the molecular geometry and the electronic degrees of freedom are separated. The non-separable terms are due to the nuclear kinetic energy terms in the molecular Hamiltonian and are said to couple the potential energy surfaces. Near an avoided crossing or conical intersection, these terms become significant. Therefore, a unitary transformation is performed from the adiabatic representation to the so-called diabatic representation in which the nuclear kinetic energy operator is diagonal. In this representation, the coupling is due to the electronic energy and is a scalar quantity that is significantly easier to estimate numerically. In the diabatic representation, the potential energy surfaces are smoother, so that low order Taylor series expansions of the surface capture much of the complexity of the original system. However strictly diabatic states do not exist in the general case. Hence, diabatic potentials generated from transforming multiple electronic energy surfaces together are generally not exact. These can be called pseudo-diabatic potentials, but generally the term is not used unless it is necessary to highlight this subtlety. Hence, pseudo-diabatic potentials are synonymous with diabatic potentials. Applicability The motivation to calculate diabatic potentials often occurs when the Born–Oppenheimer approximation breaks down, or is not justified for the molecular system under study. For these systems, it is necessary to go beyond the Born–Oppenheimer approximation. This is often the terminology used to refer to the study of nonadiabatic systems. A well-known approach involves recasting the molecular Schrödinger equation into a set of coupled eigenvalue equations. This is achieved by expanding the exact wave function in terms of products of electronic and nuclear wave functions (adiabatic states) followed by integration over the electronic coordinates. The coupled operator equations thus obtained depend on nuclear coordinates only. Off-diagonal elements in these equations are nuclear kinetic energy terms. A diabatic transformation of the adiabatic states replaces these off-diagonal kinetic energy terms by potential energy terms. Sometimes, this is called the "adiabatic-to-diabatic transformation", abbreviated ADT. Diabatic transformation of two electronic surfaces In order to introduce the diabatic transformation, assume that only two Potential Energy Surfaces (PES), 1 and 2, approach each other and that all other surfaces are well separated; the argument can be generalized to more surfaces. Let the collection of electronic coordinates be indicated by $\mathbf{r}$, while $\mathbf{R}$ indicates dependence on nuclear coordinates. Thus, we assume the electronic eigenvalue equations $H_{\rm e}(\mathbf{r};\mathbf{R})\,\chi_k(\mathbf{r};\mathbf{R}) = E_k(\mathbf{R})\,\chi_k(\mathbf{r};\mathbf{R})$ for k = 1, 2, with corresponding orthonormal electronic eigenstates $\chi_1(\mathbf{r};\mathbf{R})$ and $\chi_2(\mathbf{r};\mathbf{R})$.
In the absence of magnetic interactions these electronic states, which depend parametrically on the nuclear coordinates, may be taken to be real-valued functions. The nuclear kinetic energy is a sum over nuclei A with mass MA, $T_n = -\sum_{A} \frac{\nabla_A^2}{2M_A}$ (atomic units are used here). By applying the Leibniz rule for differentiation, the matrix elements of $T_n$ are (where coordinates are suppressed for clarity): $T_n(\mathbf{R})_{kp} \equiv \langle \chi_k | T_n \chi_p \rangle_{(\mathbf{r})} = \delta_{kp} T_n - \sum_A \frac{1}{M_A} \langle \chi_k | \nabla_A \chi_p \rangle_{(\mathbf{r})} \cdot \nabla_A - \sum_A \frac{1}{2M_A} \langle \chi_k | \nabla_A^2 \chi_p \rangle_{(\mathbf{r})}.$ The subscript $(\mathbf{r})$ indicates that the integration inside the bracket is over electronic coordinates only. Let us further assume that all off-diagonal matrix elements $T_n(\mathbf{R})_{kp}$ may be neglected except for k = 1 and p = 2. Upon making the expansion $\Psi(\mathbf{r},\mathbf{R}) = \chi_1(\mathbf{r};\mathbf{R})\,\Phi_1(\mathbf{R}) + \chi_2(\mathbf{r};\mathbf{R})\,\Phi_2(\mathbf{R}),$ the coupled Schrödinger equations for the nuclear part take the form (see the article Born–Oppenheimer approximation) $\begin{pmatrix} E_1(\mathbf{R}) + T_n + T_n(\mathbf{R})_{11} & T_n(\mathbf{R})_{12} \\ T_n(\mathbf{R})_{21} & E_2(\mathbf{R}) + T_n + T_n(\mathbf{R})_{22} \end{pmatrix} \begin{pmatrix} \Phi_1 \\ \Phi_2 \end{pmatrix} = E \begin{pmatrix} \Phi_1 \\ \Phi_2 \end{pmatrix}.$ In order to remove the problematic off-diagonal kinetic energy terms, define two new orthonormal states by a diabatic transformation of the adiabatic states $\chi_1$ and $\chi_2$, $\begin{pmatrix} \tilde{\chi}_1 \\ \tilde{\chi}_2 \end{pmatrix} = \begin{pmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{pmatrix} \begin{pmatrix} \chi_1 \\ \chi_2 \end{pmatrix},$ where $\gamma(\mathbf{R})$ is the diabatic angle. Transformation of the matrix of nuclear momentum $\langle \chi_k | \mathbf{P}_A \chi_p \rangle_{(\mathbf{r})}$ (with $\mathbf{P}_A = -i\nabla_A$) for k, p = 1, 2 gives for diagonal matrix elements $\langle \tilde{\chi}_k | \mathbf{P}_A \tilde{\chi}_k \rangle_{(\mathbf{r})} = 0, \quad k = 1, 2.$ These elements are zero because $\tilde{\chi}_k$ is real and $\mathbf{P}_A$ is Hermitian and pure-imaginary. The off-diagonal elements of the momentum operator satisfy $\langle \tilde{\chi}_2 | \mathbf{P}_A \tilde{\chi}_1 \rangle_{(\mathbf{r})} = -i\big(\nabla_A \gamma + \langle \chi_2 | \nabla_A \chi_1 \rangle_{(\mathbf{r})}\big).$ Assume that a diabatic angle $\gamma(\mathbf{R})$ exists, such that to a good approximation $\nabla_A \gamma + \langle \chi_2 | \nabla_A \chi_1 \rangle_{(\mathbf{r})} = 0,$ i.e., $\tilde{\chi}_1$ and $\tilde{\chi}_2$ diagonalize the 2 x 2 matrix of the nuclear momentum. By the definition of Smith $\tilde{\chi}_1$ and $\tilde{\chi}_2$ are diabatic states. (Smith was the first to define this concept; earlier the term diabatic was used somewhat loosely by Lichten). By a small change of notation these differential equations for $\gamma(\mathbf{R})$ can be rewritten in the following more familiar form: $\nabla_A \gamma(\mathbf{R}) = -\mathbf{F}_A(\mathbf{R}), \quad \mathbf{F}_A(\mathbf{R}) \equiv \langle \chi_2 | \nabla_A \chi_1 \rangle_{(\mathbf{r})}.$ It is well known that the differential equations have a solution (i.e., the "potential" V exists) if and only if the vector field ("force") $\mathbf{F}$ is irrotational, $\nabla \times \mathbf{F} = 0.$ It can be shown that these conditions are rarely ever satisfied, so that a strictly diabatic transformation rarely ever exists. It is common to use approximate functions $\gamma(\mathbf{R})$ leading to pseudo-diabatic states. Under the assumption that the momentum operators are represented exactly by 2 x 2 matrices, which is consistent with neglect of off-diagonal elements other than the (1,2) element and the assumption of "strict" diabaticity, it can be shown that $\langle \tilde{\chi}_k | T_n \tilde{\chi}_p \rangle_{(\mathbf{r})} = \delta_{kp} T_n.$ On the basis of the diabatic states the nuclear motion problem takes the following generalized Born–Oppenheimer form: $\begin{pmatrix} T_n + \bar{E} - \Delta\cos 2\gamma & \Delta\sin 2\gamma \\ \Delta\sin 2\gamma & T_n + \bar{E} + \Delta\cos 2\gamma \end{pmatrix} \begin{pmatrix} \tilde{\Phi}_1 \\ \tilde{\Phi}_2 \end{pmatrix} = E \begin{pmatrix} \tilde{\Phi}_1 \\ \tilde{\Phi}_2 \end{pmatrix}, \quad \bar{E} \equiv \tfrac{1}{2}(E_1 + E_2), \quad \Delta \equiv \tfrac{1}{2}(E_2 - E_1).$ It is important to note that the off-diagonal elements depend on the diabatic angle and electronic energies only. The surfaces $E_1(\mathbf{R})$ and $E_2(\mathbf{R})$ are adiabatic PESs obtained from clamped-nuclei electronic structure calculations and $T_n$ is the usual nuclear kinetic energy operator defined above. Finding approximations for $\gamma(\mathbf{R})$ is the remaining problem before a solution of the Schrödinger equations can be attempted. Much of the current research in quantum chemistry is devoted to this determination. Once $\gamma(\mathbf{R})$ has been found and the coupled equations have been solved, the final vibronic wave function in the diabatic approximation is $\Psi(\mathbf{r},\mathbf{R}) = \tilde{\chi}_1(\mathbf{r};\mathbf{R})\,\tilde{\Phi}_1(\mathbf{R}) + \tilde{\chi}_2(\mathbf{r};\mathbf{R})\,\tilde{\Phi}_2(\mathbf{R}).$ Adiabatic-to-diabatic transformation Here, in contrast to previous treatments, the non-Abelian case is considered. Felix Smith in his article considers the adiabatic-to-diabatic transformation (ADT) for a multi-state system but a single coordinate. In the two-state treatment above, the ADT is defined for a system of two nuclear coordinates, but it is restricted to two states. Such a system is defined as Abelian and the ADT matrix is expressed in terms of an angle $\gamma$ (see the comment below), known also as the ADT angle. In the present treatment a system is assumed that is made up of M (> 2) states defined for an N-dimensional configuration space, where N = 2 or N > 2.
Such a system is defined as non-Abelian. To discuss the non-Abelian case the equation for the just mentioned ADT angle $\gamma$ (see the two-state treatment above) is replaced by an equation for the M x M ADT matrix $\mathbf{A}$: $\nabla \mathbf{A} + \mathbf{F}\,\mathbf{A} = 0 \qquad (1)$ where $\mathbf{F}$ is the force-matrix operator, introduced above, also known as the Non-Adiabatic Coupling Transformation (NACT) matrix: $\mathbf{F}_{jk} = \langle \zeta_j | \nabla \zeta_k \rangle, \quad j, k = 1, \ldots, M.$ Here $\nabla$ is the N-dimensional (nuclear) grad-operator and $\zeta_j(\mathbf{r}|\mathbf{R})$, j = 1, …, M, are the electronic adiabatic eigenfunctions which depend explicitly on the electronic coordinates $\mathbf{r}$ and parametrically on the nuclear coordinates $\mathbf{R}$. To derive the matrix $\mathbf{A}$ one has to solve the above given first order differential equation along a specified contour $\Gamma$. This solution is then applied to form the diabatic potential matrix $\mathbf{W}$: $\mathbf{W} = \mathbf{A}^{\dagger}\,\mathbf{u}\,\mathbf{A},$ where the elements $u_j$; j = 1, …, M of the diagonal matrix $\mathbf{u}$ are the Born–Oppenheimer adiabatic potentials. In order for $\mathbf{W}$ to be single-valued in configuration space, $\mathbf{A}$ has to be analytic and in order for $\mathbf{A}$ to be analytic (excluding the pathological points), the components of the vector matrix $\mathbf{F}$ have to satisfy the following equation: $G_{qp} = \frac{\partial \mathbf{F}_q}{\partial R_p} - \frac{\partial \mathbf{F}_p}{\partial R_q} - \left[\mathbf{F}_q, \mathbf{F}_p\right] = 0,$ where $\mathbf{G}$ is a tensor field. This equation is known as the non-Abelian form of the Curl Equation. A solution of the ADT matrix $\mathbf{A}$ along the contour $\Gamma$ can be shown to be of the form: $\mathbf{A}(\mathbf{R}|\Gamma) = \mathcal{P} \exp\left(-\int_{\mathbf{R}_0}^{\mathbf{R}} \mathbf{F}(\mathbf{R}'|\Gamma) \cdot d\mathbf{R}'\right)$ (see also Geometric phase). Here $\mathcal{P}$ is an ordering operator, the dot stands for a scalar product and $\mathbf{R}_0$ and $\mathbf{R}$ are two points on $\Gamma$. A different type of solution is based on quasi-Euler angles according to which any M x M orthogonal matrix can be expressed as a product of Euler matrices. For instance in case of a tri-state system this matrix can be presented as a product of three such matrices, $\mathbf{Q}_{ij}(\gamma_{ij})$ (i < j = 2, 3), where e.g. $\mathbf{Q}_{13}(\gamma_{13})$ is of the form: $\mathbf{Q}_{13} = \begin{pmatrix} \cos\gamma_{13} & 0 & \sin\gamma_{13} \\ 0 & 1 & 0 \\ -\sin\gamma_{13} & 0 & \cos\gamma_{13} \end{pmatrix}.$ The product $\mathbf{A} = \mathbf{Q}_{12}\mathbf{Q}_{13}\mathbf{Q}_{23}$, which can be written in any order, is substituted in Eq. (1) to yield three first order differential equations for the three $\gamma_{ij}$-angles, where two of these equations (for $\gamma_{12}$ and $\gamma_{23}$) are coupled and the third (for $\gamma_{13}$) stands on its own: it becomes an ordinary (line) integral expressed solely in terms of $\gamma_{12}$ and $\gamma_{23}$. Similarly, in case of a four-state system $\mathbf{A}$ is presented as a product of six 4 x 4 Euler matrices (for the six quasi-Euler angles) and the relevant six differential equations form one set of three coupled equations, whereas the other three become, as before, ordinary line integrals. A comment concerning the two-state (Abelian) case Since the treatment of the two-state case as presented above raised numerous doubts we consider it here as a special case of the non-Abelian case just discussed. For this purpose we assume the 2 × 2 ADT matrix $\mathbf{A}$ to be of the form: $\mathbf{A} = \begin{pmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{pmatrix}.$ Substituting this matrix in the above given first order differential equation (Eq. (1)), we get, following a few algebraic rearrangements, that the angle $\gamma$ fulfills the corresponding first order differential equation $\nabla\gamma = -\mathbf{F}_{12}$ as well as the subsequent line integral: $\gamma(\mathbf{R}|\Gamma) = -\int_{\mathbf{R}_0}^{\mathbf{R}} \mathbf{F}_{12}(\mathbf{R}'|\Gamma) \cdot d\mathbf{R}',$ where $\mathbf{F}_{12}$ is the relevant NACT matrix element, the dot stands for a scalar product and $\Gamma$ is a chosen contour in configuration space (usually a planar one) along which the integration is performed. The line integral yields meaningful results if and only if the corresponding (previously derived) Curl-equation is zero for every point in the region of interest (ignoring the pathological points). References Quantum chemistry
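The two-state comment above has a compact numerical form: rotating the diagonal adiabatic matrix by the ADT angle gives the (pseudo)diabatic potential matrix, and diagonalizing that matrix recovers the adiabatic energies. The energies and angle below are arbitrary model inputs, not values from any real system; in practice the angle would come from the line integral over the (1,2) coupling.

```python
# Two-state adiabatic-to-diabatic transformation: rotate the diagonal
# adiabatic potential matrix by the ADT angle gamma (a model input here).
import numpy as np

def diabatic_matrix(e1, e2, gamma):
    # A is the 2x2 ADT (rotation) matrix; W = A^T diag(E1, E2) A.
    a = np.array([[np.cos(gamma),  np.sin(gamma)],
                  [-np.sin(gamma), np.cos(gamma)]])
    return a.T @ np.diag([e1, e2]) @ a

w = diabatic_matrix(e1=-0.5, e2=0.3, gamma=np.pi / 6)
print(np.round(w, 4))             # nonzero off-diagonal: the diabatic coupling
print(np.linalg.eigvalsh(w))      # recovers the adiabatic energies [-0.5, 0.3]
```

Note that the off-diagonal element of W depends only on the energy gap and the angle, which is the "coupling due to the electronic energy" emphasized in the introduction.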
Diabatic representation
[ "Physics", "Chemistry" ]
2,110
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
2,178,942
https://en.wikipedia.org/wiki/Conical%20intersection
In quantum chemistry, a conical intersection of two or more potential energy surfaces is the set of molecular geometry points where the potential energy surfaces are degenerate (intersect) and the non-adiabatic couplings between these states are non-vanishing. In the vicinity of conical intersections, the Born–Oppenheimer approximation breaks down and the coupling between electronic and nuclear motion becomes important, allowing non-adiabatic processes to take place. The location and characterization of conical intersections are therefore essential to the understanding of a wide range of important phenomena governed by non-adiabatic events, such as photoisomerization, photosynthesis, vision and the photostability of DNA. Conical intersections are also called molecular funnels or diabolic points as they have become an established paradigm for understanding reaction mechanisms in photochemistry as important as transition states in thermal chemistry. This comes from the very important role they play in non-radiative de-excitation transitions from excited electronic states to the ground electronic state of molecules. For example, the stability of DNA with respect to UV irradiation is due to such a conical intersection. The molecular wave packet excited to some electronic excited state by the UV photon follows the slope of the potential energy surface and reaches the conical intersection from above. At this point the very large vibronic coupling induces a non-radiative transition (surface-hopping) which leads the molecule back to its electronic ground state. The singularity of vibronic coupling at conical intersections is responsible for the existence of the geometric phase, which was discovered by Longuet-Higgins in this context. Degenerate points between potential energy surfaces lie in what is called the intersection or seam space with a dimensionality of 3N-8 (where N is the number of atoms). Any critical points in this space of degeneracy are characterised as minima, transition states or higher-order saddle points and can be connected to each other through the analogue of an intrinsic reaction coordinate in the seam. In benzene, for example, there is a recurrent connectivity pattern where permutationally isomeric seam segments are connected by intersections of a higher symmetry point group. The remaining two dimensions that lift the energetic degeneracy of the system are known as the branching space. Experimental observation In order to observe a passage through a conical intersection directly, the process would need to be slowed down from femtoseconds to milliseconds. A novel 2023 quantum experiment, involving a trapped-ion quantum computer, slowed down the interference pattern of a single atom (caused by a conical intersection) by a factor of 100 billion, making a direct observation possible. Local characterization Conical intersections are ubiquitous in both trivial and non-trivial chemical systems. In an ideal two-dimensional system, this can occur at one molecular geometry. If the potential energy surfaces are plotted as functions of the two coordinates, they form a cone centered at the degeneracy point. This is shown in the adjacent picture, where the upper and lower potential energy surfaces are plotted in different colors. The name conical intersection comes from this observation. In diatomic molecules, the number of vibrational degrees of freedom is 1. Without the necessary two dimensions required to form the cone shape, conical intersections cannot exist in these molecules.
Instead, the potential energy curves experience avoided crossings if they have the same point group symmetry, otherwise they can cross. In molecules with three or more atoms, the number of degrees of freedom for molecular vibrations is at least 3. In these systems, when spin–orbit interaction is ignored, the degeneracy of a conical intersection is lifted through first order by displacements in a two dimensional subspace of the nuclear coordinate space. The two-dimensional degeneracy lifting subspace is referred to as the branching space or branching plane. This space is spanned by two vectors, the difference of energy gradient vectors of the two intersecting electronic states (the g vector), and the non-adiabatic coupling vector between these two states (the h vector). Because the electronic states are degenerate, the wave functions of the two electronic states are subject to an arbitrary rotation. Therefore, the g and h vectors are also subject to a related arbitrary rotation, despite the fact that the space spanned by the two vectors is invariant. To enable a consistent representation of the branching space, the set of wave functions that makes the g and h vectors orthogonal is usually chosen. This choice is unique up to the signs and switchings of the two vectors, and allows these two vectors to have proper symmetry when the molecular geometry is symmetric. The degeneracy is preserved through first order by differential displacements that are perpendicular to the branching space. The space of non-degeneracy-lifting displacements, which is the orthogonal complement of the branching space, is termed the seam space. Movement within the seam space will take the molecule from one point of conical intersection to an adjacent point of conical intersection. The degeneracy space connecting different conical intersections can be explored and characterised using band and molecular dynamics methods. For an open shell molecule, when spin-orbit interaction is added to the Hamiltonian, the dimensionality of seam space is reduced. The presence of conical intersections can be detected experimentally. It has been proposed that two-dimensional spectroscopy can be used to detect their presence through the modulation of the frequency of the vibrational coupling mode. A more direct spectroscopy of conical intersections, based on ultrafast X-ray transient absorption spectroscopy, was proposed, offering new approaches to their study. Categorization by symmetry of intersecting electronic states Conical intersections can occur between electronic states with the same or different point group symmetry, with the same or different spin symmetry. When restricted to a non-relativistic Coulomb Hamiltonian, conical intersections can be classified as symmetry-required, accidental symmetry-allowed, or accidental same-symmetry, according to the symmetry of the intersecting states. A symmetry-required conical intersection is an intersection between two electronic states carrying the same multidimensional irreducible representation. For example, intersections between a pair of E states at a geometry that has a non-abelian group symmetry (e.g. C3h, C3v or D3h). It is named symmetry-required because these electronic states will always be degenerate as long as the symmetry is present. Symmetry-required intersections are often associated with the Jahn–Teller effect. An accidental symmetry-allowed conical intersection is an intersection between two electronic states that carry different point group symmetry.
It is called accidental because the states may or may not be degenerate when the symmetry is present. Movement along one of the degeneracy-lifting dimensions, the direction of the difference of the energy gradients of the two electronic states, preserves the symmetry, while displacement along the other degeneracy-lifting dimension, the direction of the non-adiabatic couplings, breaks the symmetry of the molecule. Thus, by enforcing the symmetry of the molecule, the degeneracy-lifting effect caused by inter-state couplings is prevented. Therefore, the search for a symmetry-allowed intersection becomes a one-dimensional problem and does not require knowledge of the non-adiabatic couplings, significantly simplifying the effort. As a result, all the conical intersections found through quantum mechanical calculations during the early years of quantum chemistry were symmetry-allowed intersections. An accidental same-symmetry conical intersection is an intersection between two electronic states that carry the same point group symmetry. While this type of intersection was traditionally more difficult to locate, a number of efficient searching algorithms and methods to compute non-adiabatic couplings have emerged in the past decade. It is now understood that same-symmetry intersections play as important a role in non-adiabatic processes as symmetry-allowed intersections. See also Born–Oppenheimer approximation Potential energy surface Geometric phase Christopher Longuet-Higgins Diabatic representation Jahn–Teller effect Avoided crossing Bond softening Bond hardening Vibronic coupling Surface hopping Ab initio multiple spawning References External links Computational Organic Photochemistry Potential Energy Surfaces and Conical Intersections Quantum chemistry
Conical intersection
[ "Physics", "Chemistry" ]
1,653
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
2,179,433
https://en.wikipedia.org/wiki/Supersecondary%20structure
A supersecondary structure is a compact three-dimensional protein structure of several adjacent elements of a secondary structure that is smaller than a protein domain or a subunit. Supersecondary structures can act as nucleation sites in the process of protein folding. Examples Helix supersecondary structures Helix hairpin A helix hairpin, also known as an alpha-alpha hairpin, is composed of two antiparallel alpha helices connected by a loop of two or more residues. True to its name, it resembles a hairpin. A longer loop has a greater number of possible conformations. If short loops connect the helices, then the individual helices will pack together through their hydrophobic residues. The function of a helix hairpin is unknown; however, a four-helix bundle is composed of two helix hairpins, and such bundles have important ligand binding sites. Helix corner A helix corner, also called an alpha-alpha corner, has two alpha helices almost at right angles to each other connected by a short 'loop'. This loop is formed from a hydrophobic residue. The function of a helix corner is unknown. Helix-loop-helix The helix-loop-helix structure has two helices connected by a 'loop'. These are fairly common and usually bind ligands. For example, calcium binds with the carboxyl groups of the side chains within the loop region between the helices. Helix-turn-helix The helix-turn-helix motif is important for DNA binding and is therefore found in many DNA-binding proteins. Beta sheet supersecondary structures Beta hairpin A beta hairpin is a common supersecondary motif composed of two anti-parallel beta strands connected by a loop. The structure resembles a hairpin and is often found in globular proteins. The loop between the beta strands can range anywhere from 2 to 16 residues. However, most loops contain fewer than seven residues. Residues in beta hairpins with loops of 2, 3, or 4 residues have distinct conformations. However, a wide range of conformations can be seen in longer loops, which are sometimes referred to as 'random coils'. A beta-meander consists of consecutive antiparallel beta strands linked by hairpins. Two-residue loops are called beta turns or reverse turns. Type I' and Type II' reverse turns occur most frequently because they have less steric hindrance than Type I and Type II turns. The function of beta hairpins is unknown. Beta corner A beta corner has two antiparallel beta strands that are at about a 90 degree angle to each other. It is formed by a beta hairpin changing direction, with one strand having a glycine residue and the other strand having a beta bulge. Beta corners have no known function. Greek key motif A Greek key motif has four features: Four sequentially connected beta strands are adjacent to, but not necessarily geometrically aligned with, each other. The beta sheet is anti-parallel, and alternate strands run in the same direction. The first strand and last strand are next to each other and bonded by hydrogen bonds. Connecting loops can be long and include other secondary structures. The Greek key motif has its name because the structure looks like the pattern seen on Greek urns. This motif has no known function. Other β-sheets (composed of multiple hydrogen-bonded individual β-strands) are sometimes considered a secondary or supersecondary structure. Mixed supersecondary structures Beta-alpha-beta motifs A beta-alpha-beta motif is composed of two beta strands joined by an alpha helix through connecting loops.
The beta strands are parallel, and the helix is also almost parallel to the strands. This structure can be seen in almost all proteins with parallel strands. The loops connecting the beta strands and the alpha helix can vary in length and often bind ligands. Beta-alpha-beta motifs can be either left-handed or right-handed. When viewed from the N-terminal side of the beta strands, so that one strand is on top of the other, a left-handed beta-alpha-beta motif has the alpha helix on the left side of the beta strands. The more common right-handed motif would have the alpha helix on the right side of the plane containing the beta strands. Rossmann fold Rossmann folds, named after Michael Rossmann, consist of 3 beta strands and 2 helices in an alternating fashion: beta strand, helix, beta strand, helix, beta strand. This motif tends to reverse the direction of the chain within a protein. Rossmann folds have an important biological function in binding nucleotides such as NAD within most dehydrogenases. See also Protein folding Secondary structure Structural motif References Further reading Protein structural motifs
Supersecondary structure
[ "Biology" ]
967
[ "Protein structural motifs", "Protein classification" ]
2,179,652
https://en.wikipedia.org/wiki/Cluster%20impact%20fusion
Cluster impact fusion is a suggested method of producing practical fusion power using small clusters of heavy water molecules accelerated directly into a titanium-deuteride target. Calculations suggested that such a system enhanced the cross section by many orders of magnitude. It is a particular implementation of the larger beam-target fusion concept. The idea was first reported by researchers at Brookhaven in 1989. Intrigued by recent reports of cold fusion, they attempted to study potential causes for the effect by accelerating tiny droplets of heavy water, about 25 to 1300 D2O molecules each, into a target at about 220 eV. To their surprise they immediately saw fusion effects, at a rate that was many times what any of them could explain via conventional theory. The experiment was fairly simple in concept but required an appropriate accelerator, so it was some time before other labs were able to repeat the experiments. One of the first was the University of Washington, which reported a null result in 1991. Further experiments and a review from MIT in 1992 solved the mystery: the fusion products were the result of contamination, which could be eliminated by filtering with a magnet. The Brookhaven experimenters tried this and the effect disappeared. Published references to cluster impact fusion end abruptly at that point. See also Impact fusion, which fires macrons (see Macron (physics)) or other projectiles into fuel to compress and heat it References Nuclear technology Cold fusion
Cluster impact fusion
[ "Physics", "Chemistry" ]
273
[ "Nuclear fusion", "Nuclear technology", "Cold fusion", "Nuclear physics" ]
2,180,085
https://en.wikipedia.org/wiki/Otic%20ganglion
The otic ganglion is a small parasympathetic ganglion located immediately below the foramen ovale in the infratemporal fossa and on the medial surface of the mandibular nerve. It is functionally associated with the glossopharyngeal nerve and innervates the parotid gland for salivation. It is one of four parasympathetic ganglia of the head and neck. The others are the ciliary ganglion, the submandibular ganglion and the pterygopalatine ganglion. Structure and relations The otic ganglion is a small (2–3 mm), oval shaped, flattened parasympathetic ganglion of a reddish-grey color, located immediately below the foramen ovale in the infratemporal fossa and on the medial surface of the mandibular nerve. It is in relation, laterally, with the trunk of the mandibular nerve at the point where the motor and sensory roots join; medially, with the cartilaginous part of the auditory tube, and the origin of the tensor veli palatini; posteriorly, with the middle meningeal artery. It surrounds the origin of the nerve to the medial pterygoid. Connections The preganglionic parasympathetic fibres originate in the inferior salivatory nucleus of the glossopharyngeal nerve. They leave the glossopharyngeal nerve by its tympanic branch and then pass via the tympanic plexus and the lesser petrosal nerve to the otic ganglion. Here, the fibers synapse and the postganglionic fibers pass by communicating branches to the auriculotemporal nerve, which conveys them to the parotid gland. They produce vasodilator and secretomotor effects. Its sympathetic root is derived from the plexus on the middle meningeal artery. It contains post-ganglionic fibers arising in the superior cervical ganglion. The fibers pass through the ganglion without relay and reach the parotid gland via the auriculotemporal nerve. They are vasomotor in function. The sensory root comes from the auriculotemporal nerve and is sensory to the parotid gland. The motor fibers supplying the medial pterygoid and the tensor veli palatini and the tensor tympani pass through the ganglion without relay. Clinical significance Frey's syndrome is caused by re-routing of parasympathetic and sympathetic fibres of the auriculotemporal nerve (V3) within the otic ganglion. It is a complication of surgery involving the parotid gland whereby injury to these branches, which innervate the parotid gland and sweat glands of the face respectively, form abnormal connections. Salivation leads to perspiration and flushing of the pre-auricular region and is called 'gustatory sweating'. References External links Autonomic ganglia of the head and neck Parasympathetic ganglia Glossopharyngeal nerve Otorhinolaryngology Nerves of the head and neck Neurology Nervous system
Otic ganglion
[ "Biology" ]
671
[ "Organ systems", "Nervous system" ]
9,330,399
https://en.wikipedia.org/wiki/Condensate%20polisher
A condensate polisher is a device used to filter water condensed from steam as part of the steam cycle, for example in a conventional or nuclear power plant (using a powdered-resin or deep-bed system). It is frequently filled with tiny polymer resin beads which are used to remove or exchange ions so that the purity of the condensate is maintained at or near that of distilled water. Description Condensate polishers are important in systems using the boiling and condensing of water to transport or transform thermal energy. Using technology similar to a water softener, trace amounts of minerals or other contamination are removed from the system before such contamination becomes concentrated enough to cause problems by depositing minerals inside pipes, or within precision-engineered devices such as boilers, steam generators, heat exchangers, steam turbines, cooling towers, and condensers. The removal of minerals has the secondary effect of maintaining the pH balance of the water at or near neutral (a pH of 7.0) by removing ions that would tend to make the water more acidic. This reduces the rate of corrosion of metal by water. Condensate polishing typically involves ion exchange technology for the removal of trace dissolved minerals and suspended matter. Commonly used as part of a power plant's condensate system, it prevents premature chemical failure and deposition within the power cycle, which would result in loss of unit efficiency and possible mechanical damage to key generating equipment. During the process of steam generation in power plants, the steam cools and condensate forms. The condensate is collected and then recycled as boiler feedwater. Prior to re-use, the condensate must be purified or "polished" to remove impurities (predominantly silicon oxides and sodium hydroxide) which have the potential to cause damage to the boilers, steam generators, reactors and turbines. Both dissolved matter (e.g. silica) and suspended matter (e.g. iron oxide particles from corrosion, also called 'crud'), as well as other contaminants which can cause corrosion and maintenance issues, are effectively removed by condensate polishing treatment. References Tools Industrial water treatment
Condensate polisher
[ "Chemistry" ]
436
[ "Water treatment", "Industrial water treatment" ]
9,331,066
https://en.wikipedia.org/wiki/Induction%20generator
An induction generator or asynchronous generator is a type of alternating current (AC) electrical generator that uses the principles of induction motors to produce electric power. Induction generators operate by mechanically turning their rotors faster than synchronous speed. A regular AC induction motor usually can be used as a generator, without any internal modifications. Because they can recover energy with relatively simple controls, induction generators are useful in applications such as mini hydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure. An induction generator draws reactive excitation current from an external source. Induction generators have an AC rotor and cannot bootstrap using residual magnetization to black start a de-energized distribution system as synchronous machines do. Power factor correcting capacitors can be added externally to neutralize a constant amount of the variable reactive excitation current. After starting, an induction generator can use a capacitor bank to produce reactive excitation current, but the isolated power system's voltage and frequency are not self-regulating and destabilize readily. Principle of operation An induction generator produces electrical power when its rotor is turned faster than the synchronous speed. For a four-pole motor (two pairs of poles on the stator), the synchronous speed is 1800 rotations per minute (rpm) when powered at 60 Hz and 1500 rpm when powered at 50 Hz. In motor operation, the machine always turns slightly slower than the synchronous speed. The difference between synchronous and operating speed is called "slip" and is often expressed as a percentage of the synchronous speed. For example, a motor operating at 1450 rpm that has a synchronous speed of 1500 rpm is running at a slip of +3.3%. In operation as a motor, the stator flux rotation is at the synchronous speed, which is faster than the rotor speed. This causes the stator flux to cycle at the slip frequency relative to the rotor, inducing rotor current through the mutual inductance between the stator and rotor. The induced currents create a rotor flux with magnetic polarity opposite to the stator. In this way, the rotor is dragged along behind the stator flux, with the currents in the rotor induced at the slip frequency. The motor runs at the speed where the induced rotor current gives rise to a torque equal to the shaft load. In generator operation, a prime mover (turbine or engine) drives the rotor above the synchronous speed (negative slip). The stator flux still induces current in the rotor, but the opposing rotor flux is now cutting the stator coils, and a current is induced in the stator coils 270° behind the magnetizing current, in phase with the magnetizing voltage. The machine then delivers real (in-phase) power to the power system. Excitation An induction motor requires an externally supplied current to the stator windings in order to induce a current in the rotor. Because the current in an inductor is the integral of the voltage with respect to time, for a sinusoidal voltage waveform the current lags the voltage by 90°, and the induction machine always consumes reactive power, regardless of whether it is consuming electrical power and delivering mechanical power as a motor or consuming mechanical power and delivering electrical power to the system. A source of excitation current for magnetizing flux (reactive power) for the stator is still required, to induce rotor current. This can be supplied from the electrical grid or, once it starts producing power, from a capacitive reactance.
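As a quick numerical check of the speeds and slip figures quoted above, the following minimal sketch (plain Python, with values taken from the examples in the text) computes synchronous speed and slip:

```python
# Synchronous speed and slip arithmetic for an induction machine.
def synchronous_speed_rpm(frequency_hz, pole_pairs):
    """Synchronous speed in rpm: 60*f divided by the number of pole pairs."""
    return 60.0 * frequency_hz / pole_pairs

def slip(n_sync, n_rotor):
    """Slip as a fraction of synchronous speed (negative when generating)."""
    return (n_sync - n_rotor) / n_sync

print(synchronous_speed_rpm(60.0, pole_pairs=2))  # four-pole, 60 Hz -> 1800.0
print(synchronous_speed_rpm(50.0, pole_pairs=2))  # four-pole, 50 Hz -> 1500.0
print(f"{slip(1500, 1450):+.1%}")  # motoring example  -> +3.3%
print(f"{slip(1800, 1860):+.1%}")  # generating example -> -3.3%
```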
The generating mode for induction machines is complicated by the need to excite the rotor: because the rotor field is induced by an alternating current, the machine is demagnetized at shutdown and has no residual magnetization with which to bootstrap a cold start. It is therefore necessary to connect an external source of magnetizing current to initialize production. The power frequency and voltage are not self-regulating, and the generator must be able to supply current out of phase with the voltage, requiring more external equipment to build a functional isolated power system. Operation is similar to that of an induction motor in parallel with a synchronous motor serving as a power factor compensator. A feature of generator mode in parallel with the grid is that the rotor speed is higher than in motoring mode; active energy is then delivered to the grid. Another disadvantage of the induction generator is that it consumes a significant magnetizing current, I0 = 20–35% of rated current. Active power Active power delivered to the line is proportional to slip above the synchronous speed. Full rated power of the generator is reached at very small slip values (motor dependent, typically 3%). At the synchronous speed of 1800 rpm, the generator will produce no power. When the driving speed is increased to 1860 rpm (a typical example), full output power is produced. If the prime mover is unable to produce enough power to fully drive the generator, the speed will remain somewhere in the 1800–1860 rpm range. Required capacitance A capacitor bank must supply reactive power to the machine when used in stand-alone mode. The reactive power supplied should be equal to or greater than the reactive power that the machine normally draws when operating as a motor. Torque vs. slip The fundamental operation of an induction generator is the conversion of mechanical energy to electrical energy. This requires an external torque applied to the rotor to turn it faster than the synchronous speed. However, indefinitely increasing torque does not lead to an indefinite increase in power generation. The torque from the rotating magnetic field excited by the armature counters the motion of the rotor and prevents overspeed, because motion is induced in the opposite direction. As the speed of the machine increases, the counter-torque reaches a maximum value (the breakdown torque) beyond which the operating conditions become unstable. Ideally, induction generators work best in the stable region between the no-load condition and the maximum torque region. Rated current The maximum power that can be produced by an induction motor operated as a generator is limited by the rated current of the generator's windings. Grid and stand-alone connections In induction generators, the reactive power required to establish the air gap magnetic flux is provided by a capacitor bank connected to the machine in the case of a stand-alone system; in the case of a grid connection, the machine draws reactive power from the grid to maintain its air gap flux. For a grid-connected system, frequency and voltage at the machine will be dictated by the electric grid, since the machine is very small compared to the whole system. For stand-alone systems, frequency and voltage are complex functions of machine parameters, the capacitance used for excitation, and the load value and type. Uses Induction generators are often used in wind turbines and some micro hydro installations due to their ability to produce useful power at varying rotor speeds. Induction generators are mechanically and electrically simpler than other generator types.
They are also more rugged, requiring no brushes or commutators. Limitations An induction generator connected to a capacitor system can generate sufficient reactive power to operate independently. When the load current exceeds the capability of the generator to supply both magnetization reactive power and load power, the generator will immediately cease to produce power. The load must be removed and the induction generator restarted with either an external DC motor or, if present, residual magnetism in the core. Induction generators are particularly suitable for wind generating stations, as in this case speed is always a variable factor. Unlike synchronous motors, induction generators are load-dependent and cannot be used alone for grid frequency control. Example application As an example, consider the use of a 10 hp, 1760 r/min, 440 V, three-phase induction motor (i.e. an induction electrical machine in an asynchronous generator regime) as an asynchronous generator. The full-load current of the motor is 10 A and the full-load power factor is 0.8. Required capacitance per phase if the capacitors are connected in delta: Apparent power S = √3 × E × I = 1.73 × 440 × 10 = 7612 VA Active power P = S × cos φ = 7612 × 0.8 = 6090 W Reactive power Q = √(S² − P²) = 4567 VAR For the machine to run as an asynchronous generator, the capacitor bank must supply a minimum of 4567 / 3 phases = 1523 VAR per phase. The voltage per capacitor is 440 V because the capacitors are connected in delta. Capacitive current Ic = Q/E = 1523/440 = 3.46 A Capacitive reactance per phase Xc = E/Ic = 127 Ω Minimum capacitance per phase: C = 1 / (2·π·f·Xc) = 1 / (2 × 3.141 × 60 × 127) = 21 μF If the load also absorbs reactive power, the capacitor bank must be increased in size to compensate. The prime mover speed should be chosen so as to generate a frequency of 60 Hz. Typically, the slip should be similar in magnitude to the full-load value when the machine is running as a motor, but negative (generator operation): if Ns = 1800 rpm, one can choose N = Ns + 40 rpm. Required prime mover speed N = 1800 + 40 = 1840 rpm. See also Electric generator Induction motor Notes References Electrical Machines, Drives, and Power Systems, 4th edition, Theodore Wildi, Prentice Hall, pages 311–314. External links Testing of stand-alone and grid connected asynchronous generator Electrical generators Induction motors
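The worked example lends itself to a short script. The sketch below reproduces the calculation with the example's values (440 V, 10 A, power factor 0.8, delta-connected capacitors); it uses √3 rather than the rounded 1.73, so the intermediate figures differ slightly from those quoted above.

```python
# Reproduce the induction-generator capacitance example.
import math

E_line = 440.0   # line voltage, V
I_line = 10.0    # full-load line current, A
pf = 0.8         # full-load power factor
f = 60.0         # line frequency, Hz

S = math.sqrt(3) * E_line * I_line   # apparent power, VA
P = S * pf                           # active power, W
Q = math.sqrt(S**2 - P**2)           # reactive power, var

Q_phase = Q / 3.0                    # per phase (delta: full 440 V per capacitor)
I_c = Q_phase / E_line               # capacitive current per phase, A
X_c = E_line / I_c                   # capacitive reactance per phase, ohm
C = 1.0 / (2.0 * math.pi * f * X_c)  # minimum capacitance per phase, F

print(f"Q = {Q:.0f} var, Ic = {I_c:.2f} A, Xc = {X_c:.0f} ohm, C = {C * 1e6:.0f} uF")
# -> Q = 4573 var, Ic = 3.46 A, Xc = 127 ohm, C = 21 uF
```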
Induction generator
[ "Physics", "Technology" ]
1,911
[ "Physical systems", "Electrical generators", "Machines" ]
9,332,179
https://en.wikipedia.org/wiki/A/B%20testing
A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research method. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can also be extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective. Multivariate testing or multinomial testing is similar to A/B testing, but may test more than two versions at the same time or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations—commonplace with survey data, offline data, and other, more complex phenomena. Definition "A/B testing" is shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, adding more variants to the test increases its complexity. The following example illustrates an A/B test with a single variable: Suppose a company has a customer database of 2,000 people and decides to create an email campaign with a discount code in order to generate sales through its website. The company creates two versions of the email with a different call to action (the part of the copy which encourages customers to do something — in the case of a sales campaign, make a purchase) and identifying promotional code. To 1,000 people it sends the email with the call to action stating, "Offer ends this Saturday! Use code A1". To the remaining 1,000 people, it sends the email with the call to action stating, "Offer ends soon! Use code B1". All other elements of the emails' copy and layout are identical. The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine whether the differences in response rates between A1 and B1 were statistically significant (that is, whether it is highly likely that the differences are real, repeatable, and not due to random chance). In the example above, the purpose of the test is to determine which is the more effective way to encourage customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click rate (that is, the number of people who actually click through to the website after receiving the email), then the results might have been different. For example, even though more of the customers receiving the code B1 accessed the website, because the call to action did not state the end date of the promotion, many of them may have felt no urgency to make an immediate purchase.
Consequently, if the purpose of the test had been simply to see which email would bring more traffic to the website, then the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome, such as the number of sales made, click-rate conversion, or the number of people signing up/registering. Common test statistics Two-sample hypothesis tests are appropriate for comparing the two samples where the samples are divided by the two control cases in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions, when less is assumed. Welch's t-test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used. For a comparison of two binomial distributions, such as a click-through rate, one would use Fisher's exact test. Segmentation and targeting A/B tests most commonly apply the same variant (e.g., user interface element) with equal probability to all users. However, in some circumstances, responses to variants may be heterogeneous. That is, while a variant A might have a higher response rate overall, variant B may have an even higher response rate within a specific segment of the customer base. For instance, in the above example, the breakdown of the response rates by gender might have shown that, while variant A had a higher response rate overall, variant B actually had a higher response rate with men. As a result, the company might select a segmented strategy as a result of the A/B test, sending variant B to men and variant A to women in the future. In this example, a segmented strategy would yield an increase in expected response rates, constituting a 30% increase. If segmented results are expected from the A/B test, the test should be properly designed at the outset to be evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men vs. women, and (b) assign men and women randomly to each “variant” (variant A vs. variant B). Failure to do so could lead to experiment bias and inaccurate conclusions being drawn from the test. This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single customer attribute (for example, customers' age and gender) to identify more nuanced patterns that may exist in the test results. Tradeoffs Positives The results of A/B tests are simple to interpret and use to get a clear idea of what users prefer, since they directly test one option over another. A/B testing is based on real user behavior, so the data can be very helpful, especially when determining what works better between two options. A/B tests can also provide answers to highly specific design questions. One example of this is Google's A/B testing with hyperlink colors. In order to optimize revenue, they tested dozens of different hyperlink hues to see which color the users tend to click on more. Negatives A/B tests are sensitive to variance; they require a large sample size in order to reduce standard error and produce a statistically significant result.
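Returning to the email example above, a minimal significance check might look like the following sketch. It uses only the Python standard library; the counts 50/1,000 and 30/1,000 come from the example, and the pooled two-proportion z-test is one standard choice (Fisher's exact test would also be reasonable for binomial outcomes, as noted above).

```python
# Two-proportion z-test for variant A (50/1000) vs variant B (30/1000).
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2, with pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2.0))  # two-sided standard-normal tail
    return z, p_value

z, p = two_proportion_ztest(50, 1000, 30, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.28, p = 0.022: significant at 5%
```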
In applications where active users are abundant, such as popular online social media platforms, obtaining a large sample size is trivial. In other cases, large sample sizes are obtained by increasing the experiment enrollment period. However, using a technique coined by Microsoft as Controlled-experiment Using Pre-Experiment Data (CUPED), variance from before the experiment start can be taken into account so that fewer samples are required to produce a statistically significant result. Due to its nature as an experiment, running an A/B test introduces the risk of wasted time and resources if the test produces unwanted results, such as negative or no impact on business metrics. In December 2018, representatives with experience in large-scale A/B testing from thirteen different organizations (Airbnb, Amazon, Booking.com, Facebook, Google, LinkedIn, Lyft, Microsoft, Netflix, Twitter, Uber, and Stanford University) summarized the top challenges in a SIGKDD Explorations paper. The challenges can be grouped into four areas: Analysis, Engineering and Culture, Deviations from Traditional A/B tests, and Data quality. History It is difficult to definitively establish when A/B testing was first used. The first randomized double-blind trial, to assess the effectiveness of a homeopathic drug, occurred in 1835. Experimentation with advertising campaigns, which has been compared to modern A/B testing, began in the early twentieth century. The advertising pioneer Claude Hopkins used promotional coupons to test the effectiveness of his campaigns. However, this process, which Hopkins described in his Scientific Advertising, did not incorporate concepts such as statistical significance and the null hypothesis, which are used in statistical hypothesis testing. Modern statistical methods for assessing the significance of sample data were developed separately in the same period. This work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test. With the growth of the internet, new ways to sample populations have become available. Google engineers ran their first A/B test in the year 2000 in an attempt to determine the optimum number of results to display on its search engine results page. The first test was unsuccessful due to glitches that resulted from slow loading times. Later A/B testing research would be more advanced, but the foundation and underlying principles generally remain the same, and in 2011, 11 years after Google's first test, Google ran over 7,000 different A/B tests. In 2012, a Microsoft employee working on the search engine Microsoft Bing created an experiment to test different ways of displaying advertising headlines. Within hours, the alternative format produced a revenue increase of 12% with no impact on user-experience metrics. Today, major software companies such as Microsoft and Google each conduct over 10,000 A/B tests annually. A/B testing has been claimed by some to be a change in philosophy and business strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice. Many companies now use the "designed experiment" approach to making marketing decisions, with the expectation that relevant sample results can improve positive conversion results.
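The CUPED idea mentioned above is simple enough to sketch. In the usual formulation, each unit's in-experiment metric Y is adjusted by its pre-experiment metric X using the coefficient theta = cov(X, Y) / var(X); the adjustment keeps the mean unchanged while shrinking the variance. The data below are synthetic and purely illustrative.

```python
# CUPED-style variance reduction on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(100.0, 20.0, n)           # pre-experiment metric per user
y = 0.8 * x + rng.normal(0.0, 10.0, n)   # in-experiment metric, correlated with x

theta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
y_cuped = y - theta * (x - x.mean())     # adjusted metric; same mean as y

print(f"var(y)       = {y.var():.1f}")
print(f"var(y_cuped) = {y_cuped.var():.1f}")  # far smaller -> fewer samples needed
```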
This designed-experiment approach is an increasingly common practice as the tools and expertise grow in this area. Applications A/B testing in online social media A/B tests have been used by large social media sites like LinkedIn, Facebook, and Instagram to understand user engagement and satisfaction with online features, such as a new feature or product. A/B tests have also been used to conduct complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another. A/B testing for e-commerce On an e-commerce website, the purchase funnel is typically a good candidate for A/B testing, since even marginal decreases in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts, images and colors, but not always. In these tests, users only see one of two versions, since the goal is to discover which of the two versions is preferable. A/B testing for product pricing A/B testing can be used to determine the right price for a product, as this is perhaps one of the most difficult tasks when a new product or service is launched. A/B testing (especially valid for digital goods) is an excellent way to find out which price point and offering maximize the total revenue. Political A/B testing A/B tests have also been used by political campaigns. In 2007, Barack Obama's presidential campaign used A/B testing as a way to garner online attention and understand what voters wanted to see from the presidential candidate. For example, Obama's team tested four distinct buttons on their website that led users to sign up for newsletters. Additionally, the team used six different accompanying images to draw in users. Through A/B testing, staffers were able to determine how to effectively draw in voters and garner additional interest. HTTP routing and API feature testing A/B testing is very common when deploying a newer version of an API. For real-time user experience testing, an HTTP Layer-7 reverse proxy is configured in such a way that N% of the HTTP traffic goes to the newer version of the backend instance, while the remaining (100−N)% of HTTP traffic hits the (stable) older version of the backend HTTP application service. This is usually done to limit the exposure of customers to a newer backend instance, so that if there is a bug in the newer version, only N% of the total user agents or clients are affected while the others are routed to the stable backend; this is a common ingress control mechanism. See also Adaptive control Between-group design experiment Choice modelling Multi-armed bandit Multivariate testing Randomized controlled trial Scientific control Stochastic dominance Test statistic Two-proportion Z-test References Market research Experiments Software testing
A/B testing
[ "Engineering" ]
2,670
[ "Software engineering", "Software testing" ]
11,813,890
https://en.wikipedia.org/wiki/Supersymmetry%20algebras%20in%201%20%2B%201%20dimensions
A two-dimensional Minkowski space, i.e. a flat space with one time and one spatial dimension, has the two-dimensional Poincaré group IO(1,1) as its symmetry group. The respective Lie algebra is called the Poincaré algebra. It is possible to extend this algebra to a supersymmetry algebra, which is a ℤ2-graded Lie superalgebra. The most common ways to do this are discussed below. N = (2,2) algebra Let the Lie algebra of IO(1,1) be generated by the following generators: H is the generator of the time translation, P is the generator of the space translation, and M is the generator of Lorentz boosts. For the commutators between these generators, see Poincaré algebra. The N = (2,2) supersymmetry algebra over this space is a supersymmetric extension of this Lie algebra with four additional generators (supercharges) Q+, Q−, Q̄+ and Q̄−, which are odd elements of the Lie superalgebra. Under Lorentz transformations the generators Q+ and Q̄+ transform as left-handed Weyl spinors, while Q− and Q̄− transform as right-handed Weyl spinors. The algebra is given by the Poincaré algebra together with anticommutation relations among the supercharges, in which all remaining (anti)commutators vanish and Z and Z̃ are complex central charges. The supercharges are related via Q̄± = (Q±)†. H, P and M are Hermitian. Subalgebras of the N = (2,2) algebra The N = (2,0) and N = (0,2) subalgebras The N = (2,0) subalgebra is obtained from the N = (2,2) algebra by removing the generators Q− and Q̄−. Thus its anticommutation relations are those of the remaining supercharges, plus the commutation relations above that do not involve Q− or Q̄−. Both of its supercharge generators are left-handed Weyl spinors. Similarly, the N = (0,2) subalgebra is obtained by removing Q+ and Q̄+, and both of its supercharge generators are right-handed. The N = (1,1) subalgebra The N = (1,1) subalgebra is generated by two supercharges Q1 and Q2, built as phase-rotated combinations of the supercharges above for two real phase angles. By definition, both supercharges are real, i.e. (Qi)† = Qi. They transform as Majorana–Weyl spinors under Lorentz transformations. Their anticommutation relations involve a real central charge Z. The N = (1,0) and N = (0,1) subalgebras These algebras can be obtained from the N = (1,1) subalgebra by removing Q2 resp. Q1 from the generators. See also Supersymmetry Super-Poincaré algebra (in 1+3 dimensions) References K. Schoutens, Supersymmetry and factorized scattering, Nucl. Phys. B344, 665–695, 1990 T.J. Hollowood, E. Mavrikis, The N = 1 supersymmetric bootstrap and Lie algebras, Nucl. Phys. B484, 631–652, 1997, arXiv:hep-th/9606116 Supersymmetry Mathematical physics Lie algebras
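For concreteness, one common convention for the anticommutators of the N = (2,2) supercharges is given below. This is an illustrative choice rather than the article's own: factors of 2, overall signs, and which pairs of supercharges carry the central charges Z and Z̃ all vary between authors.

```latex
Q_+^2 = Q_-^2 = \overline{Q}_+^2 = \overline{Q}_-^2 = 0, \qquad
\{Q_\pm,\ \overline{Q}_\pm\} = H \pm P,
\{Q_+,\ Q_-\} = Z, \qquad \{\overline{Q}_+,\ \overline{Q}_-\} = Z^{*},
\{Q_+,\ \overline{Q}_-\} = \tilde{Z}, \qquad \{\overline{Q}_+,\ Q_-\} = \tilde{Z}^{*},
```

with all other anticommutators vanishing and Z, Z̃ commuting with every generator.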
Supersymmetry algebras in 1 + 1 dimensions
[ "Physics", "Mathematics" ]
581
[ "Applied mathematics", "Theoretical physics", "Unsolved problems in physics", "Physics beyond the Standard Model", "Mathematical physics", "Supersymmetry", "Symmetry" ]
11,817,317
https://en.wikipedia.org/wiki/Mechanical%20explanations%20of%20gravitation
Mechanical explanations of gravitation (or kinetic theories of gravitation) are attempts to explain the action of gravity by aid of basic mechanical processes, such as pressure forces caused by pushes, without the use of any action at a distance. These theories were developed from the 16th until the 19th century in connection with the aether. However, such models are no longer regarded as viable theories within the mainstream scientific community because general relativity is now the standard model to describe gravitation without the use of actions at a distance. Modern "quantum gravity" hypotheses also attempt to describe gravity by more fundamental processes such as particle fields, but they are not based on classical mechanics. Screening This theory is probably the best-known mechanical explanation, and was developed for the first time by Nicolas Fatio de Duillier in 1690, and re-invented, among others, by Georges-Louis Le Sage (1748), Lord Kelvin (1872), and Hendrik Lorentz (1900), and criticized by James Clerk Maxwell (1875) and Henri Poincaré (1908). The theory posits that the force of gravity is the result of tiny particles or waves moving at high speed in all directions, throughout the universe. The intensity of the flux of particles is assumed to be the same in all directions, so an isolated object A is struck equally from all sides, resulting in only an inward-directed pressure but no net directional force. With a second object B present, however, a fraction of the particles that would otherwise have struck A from the direction of B is intercepted, so B works as a shield, so to speak—that is, from the direction of B, A will be struck by fewer particles than from the opposite direction. Likewise, B will be struck by fewer particles from the direction of A than from the opposite direction. One can say that A and B are "shadowing" each other, and the two bodies are pushed toward each other by the resulting imbalance of forces. This shadow obeys the inverse square law, because the imbalance of momentum flow over an entire spherical surface enclosing the object is independent of the size of the enclosing sphere, whereas the surface area of the sphere increases in proportion to the square of the radius. To satisfy the need for mass proportionality, the theory posits that a) the basic elements of matter are very small, so that gross matter consists mostly of empty space, and b) that the particles are so small that only a small fraction of them would be intercepted by gross matter. The result is that the "shadow" of each body is proportional to the surface of every single element of matter. Criticism: This theory was rejected primarily for thermodynamic reasons: a shadow only appears in this model if the particles or waves are at least partly absorbed, which should lead to an enormous heating of the bodies. Drag, i.e. the resistance of the particle streams in the direction of motion, is also a great problem. The drag problem can be solved by assuming superluminal speeds, but this solution greatly increases the thermal problems and contradicts special relativity. Vortex Because of his philosophical beliefs, René Descartes proposed in 1644 that no empty space can exist and that space must consequently be filled with matter. The parts of this matter tend to move in straight paths, but because they lie close together, they cannot move freely, which according to Descartes implies that every motion is circular, so the aether is filled with vortices.
Descartes also distinguished between different forms and sizes of matter, in which rough matter resists the circular movement more strongly than fine matter. Due to centrifugal force, matter tends towards the outer edges of the vortex, which causes a condensation of this matter there. The rough matter cannot follow this movement due to its greater inertia—so due to the pressure of the condensed outer matter those parts will be pushed into the center of the vortex. According to Descartes, this inward pressure is nothing other than gravity. He compared this mechanism with the fact that if a rotating, liquid-filled vessel is stopped, the liquid goes on rotating. Now, if one drops small pieces of light matter (e.g. wood) into the vessel, the pieces move to the middle of the vessel. This idea on the formation of the cosmos by vortices of matter was preceded by the ancient pre-Socratic atomists Leucippus and Democritus. Following the basic premises of Descartes, Christiaan Huygens between 1669 and 1690 designed a much more exact vortex model. This model was the first theory of gravitation which was worked out mathematically. He assumed that the aether particles are moving in every direction, but were thrown back at the outer borders of the vortex, and this causes (as in the case of Descartes) a greater concentration of fine matter at the outer borders. So in his model as well, the fine matter presses the rough matter into the center of the vortex. Huygens also found that the centrifugal force is equal to the force that acts in the direction of the center of the vortex (centripetal force). He also posited that bodies must consist mostly of empty space so that the aether can penetrate the bodies easily, which is necessary for mass proportionality. He further concluded that the aether moves much faster than the falling bodies. At this time, Newton developed his theory of gravitation which is based on attraction, and although Huygens agreed with the mathematical formalism, he said the model was insufficient due to the lack of a mechanical explanation of the force law. Newton's discovery that gravity obeys the inverse square law surprised Huygens, and he tried to take this into account by assuming that the speed of the aether is smaller at greater distances. Criticism: Newton objected to the theory because drag must lead to noticeable deviations of the orbits, which were not observed. Another problem was that moons often move in different directions, against the direction of the vortex motion. Also, Huygens' explanation of the inverse square law is circular, because it means that the aether obeys Kepler's third law; but a theory of gravitation has to explain those laws and must not presuppose them. Several British physicists developed a vortex theory of the atom in the late nineteenth century. However, the physicist William Thomson, 1st Baron Kelvin, developed a quite distinct approach. Whereas Descartes had outlined three species of matter – each linked respectively to the emission, transmission, and reflection of light – Thomson developed a theory based on a unitary continuum. Streams In a 1675 letter to Henry Oldenburg, and later to Robert Boyle, Newton wrote the following: [Gravity is the result of] “a condensation causing a flow of ether with a corresponding thinning of the ether density associated with the increased velocity of flow.” He also asserted that such a process was consistent with all his other work and Kepler's Laws of Motion.
Newton's idea of a pressure drop associated with increased velocity of flow was mathematically formalised as Bernoulli's principle, published in Daniel Bernoulli's book Hydrodynamica in 1738. However, although he later proposed a second explanation (see section below), Newton's comments on the question remained ambiguous. In the third letter to Bentley in 1692 he wrote: It is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter, without mutual contact, as it must do if gravitation in the sense of Epicurus be essential and inherent in it. And this is one reason why I desired you would not ascribe 'innate gravity' to me. That gravity should be innate, inherent, and essential to matter, so that one body may act upon another at a distance, through a vacuum, without the mediation of anything else, by and through which their action and force may be conveyed from one to another, is to me so great an absurdity, that I believe no man who has in philosophical matters a competent faculty of thinking can ever fall into it. Gravity must be caused by an agent acting constantly according to certain laws; but whether this agent be material or immaterial, I have left to the consideration of my readers. On the other hand, Newton is also well known for the phrase Hypotheses non fingo, written in 1713: I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction. And according to the testimony of some of his friends, such as Nicolas Fatio de Duillier or David Gregory, Newton thought that gravitation is based directly on divine influence. Similar to Newton, but mathematically in greater detail, Bernhard Riemann assumed in 1853 that the gravitational aether is an incompressible fluid and normal matter represents sinks in this aether. So if the aether is destroyed or absorbed proportionally to the masses within the bodies, a stream arises and carries all surrounding bodies into the direction of the central mass. Riemann speculated that the absorbed aether is transferred into another world or dimension. Another attempt to solve the energy problem was made by Ivan Osipovich Yarkovsky in 1888. Based on his aether stream model, which was similar to that of Riemann, he argued that the absorbed aether might be converted into new matter, leading to a mass increase of the celestial bodies. Criticism: As in the case of Le Sage's theory, the disappearance of energy without explanation violates the energy conservation law. Also, some drag must arise, and no process which leads to a creation of matter is known. Static pressure Newton updated the second edition of Optics (1717) with another mechanical-ether theory of gravity. Unlike his first explanation (1675 – see Streams), he proposed a stationary aether which gets thinner and thinner near the celestial bodies. By analogy with lift, a force arises which pushes all bodies towards the central mass. He minimized drag by stating an extremely low density of the gravitational aether.
Like Newton, Leonhard Euler presupposed in 1760 that the gravitational aether loses density in accordance with the inverse square law. Similarly to others, Euler also assumed that to maintain mass proportionality, matter consists mostly of empty space. Criticism: Neither Newton nor Euler gave a reason why the density of this static aether should change. Furthermore, James Clerk Maxwell pointed out that in this "hydrostatic" model "the state of stress... which we must suppose to exist in the invisible medium, is 3000 times greater than that which the strongest steel could support". Waves Robert Hooke speculated in 1671 that gravitation is the result of all bodies emitting waves in all directions through the aether. Other bodies, which interact with these waves, move in the direction of the source of the waves. Hooke saw an analogy to the fact that small objects on a disturbed surface of water move to the center of the disturbance. A similar theory was worked out mathematically by James Challis from 1859 to 1876. He calculated that the case of attraction occurs if the wavelength is large in comparison with the distance between the gravitating bodies. If the wavelength is small, the bodies repel each other. By a combination of these effects, he also tried to explain all other forces. Criticism: Maxwell objected that this theory requires a steady production of waves, which must be accompanied by an infinite consumption of energy. Challis himself admitted that he had not reached a definite result due to the complexity of the processes. Pulsation Lord Kelvin (1871) and Carl Anton Bjerknes (1871) assumed that all bodies pulsate in the aether. This was in analogy to the fact that, if the pulsation of two spheres in a fluid is in phase, they will attract each other; and if the pulsation of two spheres is not in phase, they will repel each other. This mechanism was also used to explain the nature of electric charges. Among others, this hypothesis has also been examined by George Gabriel Stokes and Woldemar Voigt. Criticism: To explain universal gravitation, one is forced to assume that all pulsations in the universe are in phase—which appears very implausible. In addition, the aether should be incompressible to ensure that attraction also arises at greater distances. And Maxwell argued that this process must be accompanied by a permanent new production and destruction of aether. Other historical speculations In 1690, Pierre Varignon assumed that all bodies are exposed to pushes by aether particles from all directions, and that there is some sort of limitation at a certain distance from the Earth's surface which cannot be passed by the particles. He assumed that if a body is closer to the Earth than to the limitation boundary, then the body would experience a greater push from above than from below, causing it to fall toward the Earth. In 1748, Mikhail Lomonosov assumed that the effect of the aether is proportional to the complete surface of the elementary components of which matter consists (similar to Huygens and Fatio before him). He also assumed an enormous penetrability of the bodies. However, no clear description was given by him as to how exactly the aether interacts with matter so that the law of gravitation arises. In 1821, John Herapath tried to apply his co-developed model of the kinetic theory of gases to gravitation. He assumed that the aether is heated by the bodies and loses density, so that other bodies are pushed to these regions of lower density.
However, it was shown by Taylor that the decreased density due to thermal expansion is compensated for by the increased speed of the heated particles; therefore, no attraction arises. Recent theorizing These mechanical explanations for gravity never gained widespread acceptance, although such ideas continued to be studied occasionally by physicists until the beginning of the twentieth century, by which time it was generally considered to be conclusively discredited. However, some researchers outside the scientific mainstream still try to work out some consequences of those theories. Le Sage's theory was studied by Radzievskii and Kagalnikova (1960), Shneiderov (1961), Buonomano and Engels (1976), Adamut (1982), and Edwards (2014). Gravity due to static pressure was recently studied by Arminjon. See also History of gravitational theory Le Sage's theory of gravitation References Sources Theories of gravity Aether theories Natural philosophy History of physics Obsolete scientific theories
Mechanical explanations of gravitation
[ "Physics" ]
3,081
[ "Theoretical physics", "Theories of gravity" ]
11,817,965
https://en.wikipedia.org/wiki/Air%20permeability%20specific%20surface
The air permeability specific surface of a powder material is a single-parameter measurement of the fineness of the powder. The specific surface is derived from the resistance to flow of air (or some other gas) through a porous bed of the powder. The SI units are m2·kg−1 ("mass specific surface") or m2·m−3 ("volume specific surface"). Significance The particle size, or fineness, of powder materials is very often critical to their performance. Measurement of air permeability can be performed very rapidly, and does not require the powder to be exposed to vacuum or to gases or vapours, as is necessary for the BET method for determination of specific surface area. This makes it both very cost-effective, and also allows it to be used for materials which may be unstable under vacuum. When a powder reacts chemically with a liquid or gas at the surface of its particles, the specific surface is directly related to its rate of reaction. The measurement is therefore important in the manufacture of many processed materials. In particular, air permeability is almost universally used in the cement industry as a gauge of product fineness, which is directly related to such properties as speed of setting and rate of strength development. Other fields where air permeability has been used to determine specific surface area include: Paint and pigments Pharmaceuticals Metallurgical powders, including sintered metal filters. In some fields, particularly powder metallurgy, the related Fisher number is the parameter of interest. This is the equivalent average particle diameter, assuming that the particles are spherical and have uniform size. Historically, the Fisher number was obtained by measurement using the Fisher Sub-sieve Sizer, a commercial instrument containing an air pump and pressure regulator to establish a constant air flow, which is measured using a flowmeter. A number of manufacturers make equivalent instruments, and the Fisher number can be calculated from air permeability specific surface area values. Methods Measurement consists of packing the powder into a cylindrical "bed" having a known porosity (i.e. volume of air-space between particles divided by total bed volume). A pressure drop is set up along the length of the bed cylinder. The resulting flow-rate of air through the bed yields the specific surface by the Kozeny–Carman equation: S = (1/ρ)·√(ε³·A·δP / (k·(1−ε)²·η·l·Q)) where: S is the specific surface, m2·kg−1 A = π·d²/4 is the bed cross-sectional area, m2 d is the cylinder diameter, m ρ is the sample particle density, kg·m−3 ε is the volume porosity of the bed (dimensionless) k is the Kozeny constant, approximately 5 (dimensionless) δP is the pressure drop across the bed, Pa l is the cylinder length, m η is the air dynamic viscosity, Pa·s Q is the flowrate, m3·s−1 It can be seen that the specific surface is proportional to the square root of the ratio of pressure to flow. Various standard methods have been proposed: Maintain a constant flowrate, and measure the pressure drop Maintain a constant pressure drop, and measure the flowrate Allow both to vary, deriving the ratio from the characteristics of the apparatus. Lea and Nurse method The second of these was developed by Lea and Nurse. The bed is 25 mm in diameter and 10 mm thick. The desired porosity (which may vary in the range 0.4 to 0.6) is obtained by using a calculated weight of sample, pressed to precisely these dimensions. The required weight is given by m = ρ·(1−ε)·(π·d²/4)·l. A flowmeter consisting of a long capillary is connected in series with the powder bed.
The pressure drop across the flowmeter (measured by a manometer) is proportional to the flowrate, and the proportionality constant can be measured by direct calibration. The pressure drop across the bed is measured by a similar manometer. Thus the required pressure/flow ratio can be obtained from the ratio of the two manometer readings, and when fed into the Carman equation, yields an "absolute" value of the air permeability surface area. The apparatus is maintained at a constant temperature, and dry air is used so that the air viscosity can be obtained from tables. Rigden method This was developed out of a desire for a simpler method. The bed is connected to a wide-diameter u-tube containing a liquid such as kerosene. On pressurizing the space between the u-tube and the bed, the liquid is forced down. The level of liquid then acts as a measure of both pressure and volume flow. The liquid level rises as air leaks out through the bed. The time taken for the liquid level to pass between two pre-set marks on the tube is measured by stop-watch. The mean pressure and mean flowrate can be derived from the dimensions of the tube and the density of the liquid. A later development used mercury in the u-tube: because of mercury's greater density, the apparatus could be more compact, and electrical contacts in the tube touching the conductive mercury could automatically start and stop a timer. Blaine method This was developed independently by R L Blaine of the American National Bureau of Standards, and uses a small glass kerosene manometer to apply suction to the powder bed. It differs from the other methods in that, because of uncertainty of the dimensions of the manometer tube, absolute results cannot be calculated from the Carman equation. Instead, the apparatus must be calibrated using a known standard material. The original standards, supplied by NBS, were certified using the Lea and Nurse method. Despite this shortcoming, the Blaine method has become by far the most commonly used for cement materials, mainly because of the ease of maintenance of the apparatus and simplicity of the procedure. See also Klinkenberg correction References Chemical engineering Particle technology
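A minimal Python sketch may help make the Carman form quoted above concrete; all numbers here are illustrative assumptions (not taken from any standard), and the function name is hypothetical:

```python
import math

def carman_specific_surface(d, rho, eps, dP, l, eta, Q, k=5.0):
    """Mass specific surface S (m^2/kg) from a permeability measurement,
    using the Kozeny-Carman form quoted above; k ~ 5 is the Kozeny constant."""
    A = math.pi * d**2 / 4          # bed cross-sectional area, m^2
    return (1.0 / (rho * (1.0 - eps))) * math.sqrt(eps**3 * A * dP / (k * eta * l * Q))

# Illustrative (made-up) numbers loosely in the range of a cement test:
S = carman_specific_surface(
    d=0.025,      # bed diameter, m (25 mm, as in the Lea and Nurse cell)
    rho=3150.0,   # particle density, kg/m^3
    eps=0.5,      # bed porosity
    dP=10_000.0,  # pressure drop, Pa
    l=0.010,      # bed thickness, m (10 mm)
    eta=1.8e-5,   # air dynamic viscosity, Pa.s
    Q=1.0e-6,     # volumetric flowrate, m^3/s
)
print(f"specific surface ~ {S:.0f} m^2/kg")
```

With these made-up inputs the result lands in the few-hundred m²·kg⁻¹ range broadly typical of portland cement, and the square-root dependence on the pressure/flow ratio noted above is visible directly in the formula.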
Air permeability specific surface
[ "Chemistry", "Engineering" ]
1,159
[ "Particle technology", "Chemical engineering", "nan", "Environmental engineering" ]
7,172,991
https://en.wikipedia.org/wiki/Topological%20censorship
The topological censorship theorem (if valid) states that general relativity does not allow an observer to probe the topology of spacetime: any topological structure collapses too quickly to allow light to traverse it. More precisely, in a globally hyperbolic, asymptotically flat spacetime satisfying the null energy condition, every causal curve from past null infinity to future null infinity is fixed-endpoint homotopic to a curve in a topologically trivial neighbourhood of infinity. A 2013 paper by Sergey Krasnikov claims that the topological censorship theorem was not proven in the original article because of a gap in the proof. References Lorentzian manifolds
Topological censorship
[ "Physics" ]
129
[ "Relativity stubs", "Theory of relativity" ]
7,174,639
https://en.wikipedia.org/wiki/Insertion%20time
In nuclear weaponry, insertion time is the interval required to rearrange a subcritical mass of fissile material into a critical mass. Appropriate insertion time is one of the three main requirements for creating a working fission atomic bomb. The need for a short insertion time with plutonium-239 is the reason the implosion method was chosen for the first plutonium bomb, while with uranium-235 it is possible to use a gun design. The basic requirements are: Start with a subcritical system Create a super-prompt-critical system Switch between these two states in a length of time (the insertion time) shorter than the typical time between random appearances of neutrons in the fissile material through spontaneous fission or other random processes. At the right moment, neutrons must be injected into the fissile material to start the fission process. This can be done by several methods. Alpha emitters such as polonium or plutonium-238 can be rapidly combined with beryllium to create a neutron source. Neutrons can also be generated using an electrostatic discharge tube that exploits the D–T fusion reaction. References Nuclear weapons Nuclear physics Nuclear technology
Insertion time
[ "Physics" ]
233
[ "Nuclear technology", "Nuclear physics" ]
7,175,772
https://en.wikipedia.org/wiki/Ferroelectret
A ferroelectret, also known as a piezoelectret, is a thin film of polymer foams, exhibiting piezoelectric and pyroelectric properties after electric charging. Ferroelectret foams usually consist of a cellular polymer structure filled with air. Polymer-air composites are elastically soft due to their high air content as well as due to the size and shape of the polymer walls. Their elastically soft composite structure is one essential key for the working principle of ferroelectrets, besides the permanent trapping of electric charges inside the polymer voids. The elastic properties allow large deformations of the electrically charged voids. However, the composite structure can also possibly limit the stability and consequently the range of applications. How it works The most common effect related to ferroelectrets is the direct and inverse piezoelectricity, but in these materials, the effect occurs in a way different from the respective effect in ferroelectric polymers. In ferroelectric polymers, a stress in the 3-direction mainly decreases the distance between the molecular chains, due to the relatively weak van der Waals and electrostatic interactions between chains in comparison to the strong covalent bonds within the chain. The thickness decrease thus results in an increase of the dipole density and thus in an increase of the charges on the electrodes, yielding a negative d33 coefficient from dipole-density (or secondary) piezoelectricity. In cellular polymers (ferroelectrets), stress in the 3-direction also decreases the thickness of the sample. The thickness decrease occurs dominantly across the voids, the macroscopic dipole moments decrease, and so do the electrode charges, yielding a positive d33 (intrinsic or direct (quasi-)piezoelectricity). New features In recent years, alternatives to the cellular-foam ferroelectrets were developed. In the new polymer systems, the required cavities are formed by means of e.g. stamps, templates, laser cutting, etc. Thermo-forming of layer systems from electret films led to thermally more stable ferroelectrets. Notes References Condensed matter physics Electrical phenomena
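As a rough illustration of the direct piezoelectric response described above, the sketch below estimates the charge released when a ferroelectret film is compressed along the 3-axis; the d33 value is an assumption chosen only for illustration (cellular polypropylene ferroelectrets are often quoted at a few hundred pC/N, but the article text gives no figure):

```python
# Direct piezoelectric effect in the 3-direction: charge Q = d33 * F.
# d33 is an illustrative assumption, not a value from the text above.
d33 = 250e-12      # piezoelectric coefficient, C/N (assumed)
force = 5.0        # compressive force applied along the 3-axis, N
charge = d33 * force
print(f"generated charge ~ {charge * 1e9:.2f} nC")   # ~1.25 nC
```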
Ferroelectret
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
459
[ "Physical phenomena", "Phases of matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Matter" ]
7,177,687
https://en.wikipedia.org/wiki/Gross%E2%80%93Pitaevskii%20equation
The Gross–Pitaevskii equation (GPE, named after Eugene P. Gross and Lev Petrovich Pitaevskii) describes the ground state of a quantum system of identical bosons using the Hartree–Fock approximation and the pseudopotential interaction model. A Bose–Einstein condensate (BEC) is a gas of bosons that are in the same quantum state, and thus can be described by the same wavefunction. A free quantum particle is described by a single-particle Schrödinger equation. Interaction between particles in a real gas is taken into account by a pertinent many-body Schrödinger equation. In the Hartree–Fock approximation, the total wave-function \Psi of the system of N bosons is taken as a product of single-particle functions \psi: \Psi(\mathbf{r}_1,\mathbf{r}_2,\dots,\mathbf{r}_N)=\psi(\mathbf{r}_1)\psi(\mathbf{r}_2)\cdots\psi(\mathbf{r}_N), where \mathbf{r}_i is the coordinate of the i-th boson. If the average spacing between the particles in a gas is greater than the scattering length (that is, in the so-called dilute limit), then one can approximate the true interaction potential that features in this equation by a pseudopotential. At sufficiently low temperature, where the de Broglie wavelength is much longer than the range of boson–boson interaction, the scattering process can be well approximated by the s-wave scattering (i.e. in the partial-wave analysis, a.k.a. the hard-sphere potential) term alone. In that case, the pseudopotential model Hamiltonian of the system can be written as H=\sum_{i=1}^{N}\left(-\frac{\hbar^{2}}{2m}\nabla_i^{2}+V(\mathbf{r}_i)\right)+\sum_{i<j}\frac{4\pi\hbar^{2}a_s}{m}\,\delta(\mathbf{r}_i-\mathbf{r}_j), where m is the mass of the boson, V is the external potential, a_s is the boson–boson s-wave scattering length, and \delta(\mathbf{r}) is the Dirac delta-function. The variational method shows that if the single-particle wavefunction satisfies the following Gross–Pitaevskii equation \left(-\frac{\hbar^{2}}{2m}\nabla^{2}+V(\mathbf{r})+\frac{4\pi\hbar^{2}a_s}{m}(N-1)\,|\psi(\mathbf{r})|^{2}\right)\psi(\mathbf{r})=\mu\,\psi(\mathbf{r}), the total wave-function minimizes the expectation value of the model Hamiltonian under the normalization condition \int|\psi(\mathbf{r})|^{2}\,\mathrm{d}^{3}r=1. Therefore, such single-particle wavefunction describes the ground state of the system. GPE is a model equation for the ground-state single-particle wavefunction in a Bose–Einstein condensate. It is similar in form to the Ginzburg–Landau equation and is sometimes referred to as the nonlinear Schrödinger equation. The non-linearity of the Gross–Pitaevskii equation has its origin in the interaction between the particles: setting the coupling constant of interaction in the Gross–Pitaevskii equation to zero (see the following section) recovers the single-particle Schrödinger equation describing a particle inside a trapping potential. The Gross–Pitaevskii equation is said to be limited to the weakly interacting regime. Nevertheless, it may also fail to reproduce interesting phenomena even within this regime. In order to study the BEC beyond that limit of weak interactions, one needs to implement the Lee-Huang-Yang (LHY) correction. Alternatively, in 1D systems one can use either an exact approach, namely the Lieb-Liniger model, or an extended equation, e.g. the Lieb-Liniger Gross–Pitaevskii equation (sometimes called the modified or generalized nonlinear Schrödinger equation). Form of equation The equation has the form of the Schrödinger equation with the addition of an interaction term. The coupling constant g is proportional to the s-wave scattering length a_s of two interacting bosons: g=\frac{4\pi\hbar^{2}a_s}{m}, where \hbar is the reduced Planck constant and m is the mass of the boson. The energy density is \mathcal{E}=\frac{\hbar^{2}}{2m}|\nabla\Psi|^{2}+V(\mathbf{r})\,|\Psi|^{2}+\frac{1}{2}g\,|\Psi|^{4}, where \Psi is the wavefunction, or order parameter, and V is the external potential (e.g. a harmonic trap).
The time-independent Gross–Pitaevskii equation, for a conserved number of particles, is \mu\,\Psi=\left(-\frac{\hbar^{2}}{2m}\nabla^{2}+V(\mathbf{r})+g\,|\Psi|^{2}\right)\Psi, where \mu is the chemical potential, which is found from the condition that the number of particles is related to the wavefunction by N=\int|\Psi(\mathbf{r})|^{2}\,\mathrm{d}^{3}r. From the time-independent Gross–Pitaevskii equation, we can find the structure of a Bose–Einstein condensate in various external potentials (e.g. a harmonic trap). The time-dependent Gross–Pitaevskii equation is i\hbar\,\frac{\partial\Psi}{\partial t}=\left(-\frac{\hbar^{2}}{2m}\nabla^{2}+V(\mathbf{r})+g\,|\Psi|^{2}\right)\Psi. From this equation we can look at the dynamics of the Bose–Einstein condensate. It is used to find the collective modes of a trapped gas. Solutions Since the Gross–Pitaevskii equation is a nonlinear partial differential equation, exact solutions are hard to come by. As a result, solutions have to be approximated via a myriad of techniques. Exact solutions Free particle The simplest exact solution is the free-particle solution, with V(\mathbf{r})=0: \Psi(\mathbf{r})=\sqrt{\frac{N}{V}}\,e^{i\mathbf{k}\cdot\mathbf{r}}. This solution is often called the Hartree solution. Although it does satisfy the Gross–Pitaevskii equation, it leaves a gap in the energy spectrum due to the interaction: the energy of a particle of momentum \hbar\mathbf{k} added on top of the condensate, E(\mathbf{k})=\frac{\hbar^{2}k^{2}}{2m}+g\,n, does not vanish as k\to 0. According to the Hugenholtz–Pines theorem, an interacting Bose gas does not exhibit an energy gap (in the case of repulsive interactions). Soliton A one-dimensional soliton can form in a Bose–Einstein condensate, and depending upon whether the interaction is attractive or repulsive, there is either a bright or dark soliton. Both solitons are local disturbances in a condensate with a uniform background density. If the BEC is repulsive, so that g>0, then a possible solution of the Gross–Pitaevskii equation is \Psi(x)=\Psi_0\tanh\!\left(\frac{x}{\sqrt{2}\,\xi}\right), where \Psi_0 is the value of the condensate wavefunction at x\to\infty, and \xi is the coherence length (a.k.a. the healing length, see below). This solution represents the dark soliton, since there is a deficit of condensate in a space of nonzero density. The dark soliton is also a type of topological defect, since \Psi flips between positive and negative values across the origin, corresponding to a \pi phase shift. For g<0 the solution is \Psi(x)=\Psi(0)\,\operatorname{sech}\!\left(\frac{x}{\sqrt{2}\,\xi}\right), where the chemical potential is \mu=-\tfrac{1}{2}\,|g|\,|\Psi(0)|^{2}. This solution represents the bright soliton, since there is a concentration of condensate in a space of zero density. Healing length The healing length gives the minimum distance over which the order parameter can heal, which describes how quickly the wave function of the BEC can adjust to changes in the potential. If the condensate density grows from 0 to n within a distance ξ, the healing length can be calculated by equating the quantum pressure and the interaction energy: \frac{\hbar^{2}}{2m\xi^{2}}=g\,n, \quad\text{so}\quad \xi=\frac{\hbar}{\sqrt{2mng}}=\frac{1}{\sqrt{8\pi n a_s}}. The healing length must be much smaller than any length scale in the solution of the single-particle wavefunction. The healing length also determines the size of vortices that can form in a superfluid. It is the distance over which the wavefunction recovers from zero in the center of the vortex to the value in the bulk of the superfluid (hence the name "healing" length). Variational solutions In systems where an exact analytical solution may not be feasible, one can make a variational approximation. The basic idea is to make a variational ansatz for the wavefunction with free parameters, plug it into the free energy, and minimize the energy with respect to the free parameters. Numerical solutions Several numerical methods, such as the split-step Crank–Nicolson and Fourier spectral methods, have been used for solving GPE. There are also different Fortran and C programs for its solution for the contact interaction and long-range dipolar interaction.
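Since the paragraph above mentions split-step Fourier spectral methods among the common numerical approaches, here is a minimal, self-contained sketch of one such scheme: imaginary-time split-step evolution of the 1D GPE toward the ground state in a harmonic trap. The units, grid, and coupling g are illustrative assumptions (ħ = m = 1), not values from the text:

```python
import numpy as np

# 1D GPE ground state by imaginary-time split-step Fourier evolution,
# in dimensionless units (hbar = m = 1); g and the grid are assumed choices.
N_grid, L = 512, 20.0
dx = L / N_grid
x = np.linspace(-L / 2, L / 2, N_grid, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N_grid, d=dx)   # angular wavenumbers
V = 0.5 * x**2                                 # harmonic trap
g = 10.0                                       # effective 1D coupling (assumed)
dt = 1e-3                                      # imaginary-time step

psi = np.exp(-x**2).astype(complex)            # arbitrary initial guess
for _ in range(20000):
    # half-step in the potential + interaction term
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi) ** 2))
    # full kinetic step in Fourier space
    psi = np.fft.ifft(np.exp(-dt * 0.5 * k**2) * np.fft.fft(psi))
    # second half-step, then renormalize (imaginary time is non-unitary)
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi) ** 2))
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# chemical potential for the normalized state: integral of
# 0.5|psi'|^2 + V|psi|^2 + g|psi|^4
mu = np.sum(0.5 * np.abs(np.gradient(psi, dx)) ** 2
            + V * np.abs(psi) ** 2 + g * np.abs(psi) ** 4) * dx
print("approximate chemical potential:", float(mu.real))
```

Replacing the real decay factors with complex phases exp(-1j * dt * (...)) turns the same loop into real-time propagation, which is how soliton dynamics like those above are typically explored numerically.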
Thomas–Fermi approximation If the number of particles in a gas is very large, the interatomic interaction becomes large so that the kinetic energy term can be neglected in the Gross–Pitaevskii equation. This is called the Thomas–Fermi approximation and leads to the single-particle wavefunction And the density profile is In a harmonic trap (where the potential energy is quadratic with respect to displacement from the center), this gives a density profile commonly referred to as the "inverted parabola" density profile. Bogoliubov approximation Bogoliubov treatment of the Gross–Pitaevskii equation is a method that finds the elementary excitations of a Bose–Einstein condensate. To that purpose, the condensate wavefunction is approximated by a sum of the equilibrium wavefunction and a small perturbation : Then this form is inserted in the time-dependent Gross–Pitaevskii equation and its complex conjugate, and linearized to first order in : Assuming that one finds the following coupled differential equations for and by taking the parts as independent components: For a homogeneous system, i.e. for , one can get from the zeroth-order equation. Then we assume and to be plane waves of momentum , which leads to the energy spectrum For large , the dispersion relation is quadratic in , as one would expect for usual non-interacting single-particle excitations. For small , the dispersion relation is linear: with being the speed of sound in the condensate, also known as second sound. The fact that shows, according to Landau's criterion, that the condensate is a superfluid, meaning that if an object is moved in the condensate at a velocity inferior to s, it will not be energetically favorable to produce excitations, and the object will move without dissipation, which is a characteristic of a superfluid. Experiments have been done to prove this superfluidity of the condensate, using a tightly focused blue-detuned laser. The same dispersion relation is found when the condensate is described from a microscopical approach using the formalism of second quantization. Superfluid in rotating helical potential The optical potential well might be formed by two counterpropagating optical vortices with wavelengths , effective width and topological charge : where . In cylindrical coordinate system the potential well have a remarkable double-helix geometry: In a reference frame rotating with angular velocity , time-dependent Gross–Pitaevskii equation with helical potential is where is the angular-momentum operator. The solution for condensate wavefunction is a superposition of two phase-conjugated matter–wave vortices: The macroscopically observable momentum of condensate is where is number of atoms in condensate. This means that atomic ensemble moves coherently along axis with group velocity whose direction is defined by signs of topological charge and angular velocity : The angular momentum of helically trapped condensate is exactly zero: Numerical modeling of cold atomic ensemble in spiral potential have shown the confinement of individual atomic trajectories within helical potential well. Derivations and Generalisations The Gross–Pitaevskii equation can also be derived as the semi-classical limit of the many body theory of s-wave interacting identical bosons represented in terms of coherent states. The semi-classical limit is reached for a large number of quanta, expressing the field theory either in the positive-P representation (generalised Glauber-Sudarshan P representation) or Wigner representation. 
Finite-temperature effects can be treated within a generalised Gross–Pitaevskii equation by including scattering between condensate and noncondensate atoms, from which the Gross–Pitaevskii equation may be recovered in the low-temperature limit. References Further reading External links Trotter-Suzuki-MPI Trotter-Suzuki-MPI is a library for large-scale simulations based on the Trotter-Suzuki decomposition that can also address the Gross–Pitaevskii equation. XMDS XMDS is a spectral partial differential equation library that can be used to solve the Gross–Pitaevskii equation. Bose–Einstein condensates Eponymous equations of physics Superfluidity
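As a small numerical check of the Bogoliubov spectrum discussed earlier in the article, the sketch below evaluates the dispersion relation ε(k) = √((ħ²k²/2m)(ħ²k²/2m + 2gn)) in units where ħ = m = 1 (an assumption made only for brevity) and verifies its two limits: linear phonon behaviour with slope c = √(gn/m) at small k, and the free-particle quadratic form at large k:

```python
import numpy as np

# Bogoliubov dispersion for a homogeneous condensate, hbar = m = 1 (assumed):
#   eps(k) = sqrt( (k^2/2) * (k^2/2 + 2*g*n) ), sound speed c = sqrt(g*n).
g, n = 1.0, 1.0
k = np.linspace(1e-3, 5.0, 200)
eps = np.sqrt((k**2 / 2) * (k**2 / 2 + 2 * g * n))

c = np.sqrt(g * n)
print("low-k slope  :", eps[0] / k[0])              # ~ c, the phonon regime
print("sound speed c:", c)
print("high-k ratio :", eps[-1] / (k[-1] ** 2 / 2))  # -> 1, free-particle regime
```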
Gross–Pitaevskii equation
[ "Physics", "Chemistry", "Materials_science" ]
2,394
[ "Bose–Einstein condensates", "Physical phenomena", "Phase transitions", "Equations of physics", "Phases of matter", "Eponymous equations of physics", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
7,180,512
https://en.wikipedia.org/wiki/Nonequilibrium%20Gas%20and%20Plasma%20Dynamics%20Laboratory
The Nonequilibrium Gas and Plasma Dynamics Laboratory (NGPDL) at the Aerospace Engineering Department of the University of Colorado Boulder is headed by Professor Iain D. Boyd and performs research on nonequilibrium gases and plasmas, involving the development of physical models for various gas systems of interest, numerical algorithms on the latest supercomputers, and the application of challenging flows for several exciting projects. The lab places a great deal of emphasis on comparison of simulation with external experimental and theoretical results, having ongoing collaborative studies with colleagues at the University of Michigan such as the Plasmadynamics and Electric Propulsion Laboratory, other universities, and government laboratories such as NASA, the United States Air Force Research Laboratory, and the United States Department of Defense. Current research areas of the NGPDL include electric propulsion, hypersonic aerothermodynamics, flows involving very small length scales (MEMS devices), and materials processing (jets used in the deposition of thin films for advanced materials). Due to nonequilibrium effects, these flows cannot always be computed accurately with the macroscopic equations of gas dynamics and plasma physics. Instead, the lab has adopted a microscopic approach in which the atoms/molecules in a gas and the ions/electrons in a plasma are simulated computationally using a large number of model particles within sophisticated Monte Carlo methods. The lab has developed a general 2D/axi-symmetric/3D code, MONACO, for simulating nonequilibrium neutral flows that can run either on scalar workstations or in a parallel computing environment. The lab also has developed a general 2D/axi-symmetric/3D code, LeMANS, to numerically solve the Navier-Stokes equations using computational fluid dynamics when the Knudsen number is sufficiently small. This allows lab members to explore flows that would otherwise be too computationally expensive with a particle method. Work is currently being done to combine the two codes into a hybrid that uses MONACO when the flow is in the collisional nonequilibrium regime and LeMANS when the flow can be treated as a continuum. Current and past plasma and nonequilibrium flow projects include simulation of electric propulsion devices (ion thrusters, Hall effect thrusters, and pulsed plasma thrusters) as well as numerous NASA contracts to study reentry aerothermodynamics for space vehicles, including the Crew Exploration Vehicle. Other plasma research includes modeling wall ablation from directed energy weapons and the plasma-propellant interaction in electrothermal chemical guns. Official website https://www.colorado.edu/lab/ngpdl/ University of Michigan Aerospace engineering organizations Plasma physics facilities Plasma processing Computational fluid dynamics
Nonequilibrium Gas and Plasma Dynamics Laboratory
[ "Physics", "Chemistry", "Engineering" ]
530
[ "Aerospace engineering organizations", "Plasma physics", "Computational fluid dynamics", "Aeronautics organizations", "Computational physics", "Plasma physics facilities", "Aerospace engineering", "Fluid dynamics" ]
7,180,591
https://en.wikipedia.org/wiki/Pressure%20head
In fluid mechanics, pressure head is the height of a liquid column that corresponds to a particular pressure exerted by the liquid column on the base of its container. It may also be called static pressure head or simply static head (but not static head pressure). Mathematically this is expressed as \psi=\frac{p}{\gamma}=\frac{p}{\rho\,g}, where ψ is pressure head (which is actually a length, typically in units of meters or centimetres of water) p is fluid pressure (i.e. force per unit area, typically expressed in pascals) γ is the specific weight (i.e. force per unit volume, typically expressed in N/m³ units) ρ is the density of the fluid (i.e. mass per unit volume, typically expressed in kg/m³) g is acceleration due to gravity (i.e. rate of change of velocity, expressed in m/s²). Note that in this equation, the pressure term may be gauge pressure or absolute pressure, depending on the design of the container and whether it is open to the ambient air or sealed without air. Head equation Pressure head is a component of hydraulic head, in which it is combined with elevation head. When considering dynamic (flowing) systems, there is a third term needed: velocity head. Thus, the three terms of velocity head, elevation head, and pressure head appear in the head equation derived from the Bernoulli equation for incompressible fluids: \frac{v^{2}}{2g}+z+\frac{p}{\rho g}=C, where \frac{v^{2}}{2g} is velocity head, z is elevation head, \frac{p}{\rho g} is pressure head, and C is a constant for the system. Practical uses for pressure head Fluid flow is measured with a wide variety of instruments. The venturi meter in the diagram on the left shows two columns of a measurement fluid at different heights. The height of each column of fluid is proportional to the pressure of the fluid. To demonstrate a classical measurement of pressure head, we could hypothetically replace the working fluid with another fluid having different physical properties. For example, if the original fluid was water and we replaced it with mercury at the same pressure, we would expect to see a rather different value for pressure head. In fact the specific weight of water is 9.8 kN/m³ and the specific weight of mercury is 133 kN/m³. So, for any particular measurement of pressure head, the height of a column of water will be about 13.6 times (133/9.8 ≈ 13.6) taller than a column of mercury would be. So if a water column meter reads "13.6 cm H2O", then an equivalent measurement is "1.00 cm Hg". This example demonstrates why there is some confusion surrounding pressure head and its relationship to pressure. Scientists frequently use columns of water (or mercury) to measure pressure (manometric pressure measurement), since for a given fluid, pressure head is proportional to pressure. Measuring pressure in units of "mm of mercury" or "inches of water" makes sense for instrumentation, but these raw measurements of head must frequently be converted to more convenient pressure units using the equations above to solve for pressure. In summary pressure head is a measurement of length, which can be converted to the units of pressure (force per unit area), as long as strict attention is paid to the density of the measurement fluid and the local value of g. Implications for gravitational anomalies on ψ We would normally use pressure head calculations in areas in which g is constant. However, if the gravitational field fluctuates, we can prove that pressure head fluctuates with it. If we consider what would happen if gravity decreases, we would expect the fluid in the venturi meter shown above to withdraw from the pipe up into the vertical columns.
Pressure head is increased. In the case of weightlessness, the pressure head approaches infinity. Fluid in the pipe may "leak out" of the top of the vertical columns (assuming p > 0). To simulate negative gravity, we could turn the venturi meter shown above upside down. In this case gravity is negative, and we would expect the fluid in the pipe to "pour out" the vertical columns. Pressure head is negative (assuming p > 0). If p < 0 and γ > 0, we observe that the pressure head is also negative, and the ambient air is sucked into the columns shown in the venturi meter above. This is called a siphon, and is caused by a partial vacuum inside the vertical columns. In many venturis, the column on the left has fluid in it (p > 0), while only the column on the right is a siphon (p < 0). If p < 0 and γ < 0, we observe that the pressure head is again positive, predicting that the venturi meter shown above would look the same, only upside down. In this situation, gravity causes the working fluid to plug the siphon holes, but the fluid does not leak out because the ambient pressure is greater than the pressure in the pipe. The above situations imply that the Bernoulli equation, from which we obtain static pressure head, is extremely versatile. Applications Static A mercury barometer is one of the classic uses of static pressure head. Such barometers are an enclosed column of mercury standing vertically with gradations on the tube. The lower end of the tube is bathed in a pool of mercury open to the ambient to measure the local atmospheric pressure. The reading of a mercury barometer (in mm of Hg, for example) can be converted into an absolute pressure using the above equations. If we had a column of mercury 767 mm high, we could calculate the atmospheric pressure as (0.767 m)·(133 kN/m³) ≈ 102 kPa. See the torr, millimeter of mercury, and pascal (unit) articles for barometric pressure measurements at standard conditions. Differential A venturi meter fitted with a manometer is a common type of flow meter which can be used in many fluid applications to convert differential pressure heads into volumetric flow rate, linear fluid speed, or mass flow rate using Bernoulli's principle. The reading of these meters (in inches of water, for example) can be converted into a differential, or gauge, pressure using the above equations. Velocity head The pressure of a fluid is different when it flows than when it is not flowing. This is why static pressure and dynamic pressure are never the same in a system in which the fluid is in motion. This pressure difference arises from a change in fluid velocity that produces velocity head, which is a term of the Bernoulli equation that is zero when there is no bulk motion of the fluid. In the picture on the right, the pressure differential is entirely due to the change in velocity head of the fluid, but it can be measured as a pressure head because of the Bernoulli principle. If, on the other hand, we could measure the velocity of the fluid, the pressure head could be calculated from the velocity head. See the Derivations of Bernoulli equation. See also Centimetre of water Pressure measurement Hydraulic head or velocity head, which includes a component of pressure head Venturi effect External links Engineering Toolbox article on Specific Weight Engineering Toolbox article on Static Pressure Head Fluid dynamics Pressure
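The conversions described above are simple enough to script; the following sketch (the function names are hypothetical) converts between pressure and head for water and mercury using ψ = p/(ρg):

```python
RHO_WATER = 1000.0      # kg/m^3
RHO_MERCURY = 13_560.0  # kg/m^3
G = 9.81                # m/s^2

def pressure_to_head(p_pa, rho, g=G):
    """Pressure head psi = p / (rho * g), in metres of the given fluid."""
    return p_pa / (rho * g)

def head_to_pressure(h_m, rho, g=G):
    """Inverse conversion: p = h * rho * g, in pascals."""
    return h_m * rho * g

# One standard atmosphere expressed as two different column heights:
p_atm = 101_325.0
print(f"{pressure_to_head(p_atm, RHO_WATER):.2f} m of water")            # ~10.33 m
print(f"{pressure_to_head(p_atm, RHO_MERCURY) * 1000:.0f} mm of mercury")  # ~762 mm
```

The two printed heights differ by the same factor of about 13.6 discussed above, since head scales inversely with the density of the measurement fluid.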
Pressure head
[ "Physics", "Chemistry", "Engineering" ]
1,424
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Chemical engineering", "Pressure", "Piping", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
7,181,677
https://en.wikipedia.org/wiki/AACE%20International
AACE International (Association for the Advancement of Cost Engineering) was founded in 1956 by 59 cost estimators and cost engineers during the organizational meeting of the American Association of Cost Engineering at the University of New Hampshire in Durham, New Hampshire. AACE International Headquarters is located in Morgantown, West Virginia, USA. AACE is a 501(c)(3) non-profit professional association. AACE International is a member of the Board of the Council of Engineering and Scientific Specialty Boards (CESB). Activities AACE is a non-profit organization with about 15 employees at its headquarters in Morgantown, WV. A variety of other organizations in the United States provide similar certifications, often specialized for particular industries, such as power, manufacturing, gas and oil. AACE is the publisher of Cost Engineering, a bi-monthly technical journal; Skills and Knowledge of Cost Engineering (currently in its 6th edition); Source, a bi-monthly magazine; 20 different AACE International Professional Practice Guides; approximately 120 Recommended Practices; and its most comprehensive publication, the Total Cost Management Framework: An Integrated Approach to Portfolio, Program and Project Management. Certification programs AACE currently manages eight certification programs, as listed below. All require agreeing to adhere to canons of ethics, and passing an examination. Most require prior industry experience, and also involve recertification by continuing education or reexamination. Certified Cost Technician (formerly known as Interim Cost Consultant), an entry-level certification that is not eligible for renewal Certified Scheduling Technician, an entry-level certification Certified Cost Professional (formerly Certified Cost Consultant / Certified Cost Engineer), which additionally requires a technical paper submission Certified Estimating Professional Certified Forensic Claims Consultant, which has additional requirements, including submission of a publication Decision & Risk Management Professional Earned Value Professional Planning & Scheduling Professional Since becoming a charter member of the Council of Engineering and Scientific Specialty Boards in 1990, six of its certification programs (CCP, CCT, CEP, CST, EVP and PSP) have been accredited by the CESB. Membership As of 2012, AACE reported over 8,000 members. To network in local areas, there are over 80 local sections located in 80 countries. There are also 11 technical subcommittees and 17 special interest groups. References Further reading "Total Cost Management Framework: An Integrated Approach to Portfolio, Program and Project Management," 2nd Edition, AACE International, Morgantown, West Virginia, 2016 "Skills and Knowledge of Cost Engineering," 6th Edition, AACE International, Morgantown, West Virginia, 2016. External links AACE International What is cost engineering? - a white paper The Total Cost Management Framework: An Integrated Approach to Portfolio, Program and Project Management Professional associations based in the United States Cost engineering Engineering societies based in the United States
AACE International
[ "Engineering" ]
567
[ "Cost engineering" ]
7,181,855
https://en.wikipedia.org/wiki/Borel%20conjecture
In geometric topology, the Borel conjecture (named for Armand Borel) asserts that an aspherical closed manifold is determined by its fundamental group, up to homeomorphism. It is a rigidity conjecture, asserting that a weak, algebraic notion of equivalence (namely, homotopy equivalence) should imply a stronger, topological notion (namely, homeomorphism). Precise formulation of the conjecture Let M and N be closed and aspherical topological manifolds, and let f : M → N be a homotopy equivalence. The Borel conjecture states that the map f is homotopic to a homeomorphism. Since aspherical manifolds with isomorphic fundamental groups are homotopy equivalent, the Borel conjecture implies that aspherical closed manifolds are determined, up to homeomorphism, by their fundamental groups. This conjecture is false if topological manifolds and homeomorphisms are replaced by smooth manifolds and diffeomorphisms; counterexamples can be constructed by taking a connected sum with an exotic sphere. The origin of the conjecture In a May 1953 letter to Jean-Pierre Serre, Armand Borel raised the question whether two aspherical manifolds with isomorphic fundamental groups are homeomorphic. A positive answer to the question "Is every homotopy equivalence between closed aspherical manifolds homotopic to a homeomorphism?" is referred to as the "so-called Borel Conjecture" in a 1986 paper of Jonathan Rosenberg. Motivation for the conjecture A basic question is the following: if two closed manifolds are homotopy equivalent, are they homeomorphic? This is not true in general: there are homotopy equivalent lens spaces which are not homeomorphic. Nevertheless, there are classes of manifolds for which homotopy equivalences between them can be homotoped to homeomorphisms. For instance, the Mostow rigidity theorem states that a homotopy equivalence between closed hyperbolic manifolds is homotopic to an isometry—in particular, to a homeomorphism. The Borel conjecture is a topological reformulation of Mostow rigidity, weakening the hypothesis from hyperbolic manifolds to aspherical manifolds, and similarly weakening the conclusion from an isometry to a homeomorphism. Relationship to other conjectures The Borel conjecture implies the Novikov conjecture for the special case in which the reference map is a homotopy equivalence. The Poincaré conjecture asserts that a closed manifold homotopy equivalent to the 3-sphere S³ is homeomorphic to S³. This is not a special case of the Borel conjecture, because S³ is not aspherical. Nevertheless, the Borel conjecture for the 3-torus T³ implies the Poincaré conjecture for S³. References Further reading Matthias Kreck, and Wolfgang Lück, The Novikov conjecture. Geometry and algebra. Oberwolfach Seminars, 33. Birkhäuser Verlag, Basel, 2005. Geometric topology Homeomorphisms Conjectures Unsolved problems in geometry Surgery theory
Borel conjecture
[ "Mathematics" ]
620
[ "Geometry problems", "Unsolved problems in mathematics", "Homeomorphisms", "Unsolved problems in geometry", "Geometric topology", "Conjectures", "Topology", "Mathematical problems" ]
7,181,923
https://en.wikipedia.org/wiki/Space-filling%20model
In chemistry, a space-filling model, also known as a calotte model, is a type of three-dimensional (3D) molecular model where the atoms are represented by spheres whose radii are proportional to the radii of the atoms and whose center-to-center distances are proportional to the distances between the atomic nuclei, all in the same scale. Atoms of different chemical elements are usually represented by spheres of different colors. Space-filling calotte models are also referred to as CPK models after the chemists Robert Corey, Linus Pauling, and Walter Koltun, who over a span of time developed the modeling concept into a useful form. They are distinguished from other 3D representations, such as the ball-and-stick and skeletal models, by the use of the "full size" space-filling spheres for the atoms. The models are tactile and manually rotatable. They are useful for visualizing the effective shape and relative dimensions of a molecule, and (because of the rotatability) the shapes of the surface of the various conformers. On the other hand, these models mask the chemical bonds between the atoms, and make it difficult to see the structure of the molecule that is obscured by the atoms nearest to the viewer in a particular pose. For this reason, such models are of greater utility if they can be used dynamically, especially when used with complex molecules (e.g., a rotatable model of THC gives a greater understanding of the molecule's shape). History Space-filling models arise out of a desire to represent molecules in ways that reflect the electronic surfaces that molecules present, which dictate how they interact, one with another (or with surfaces, or macromolecules such as enzymes, etc.). Crystallographic data are the starting point for understanding static molecular structure, and these data contain the information rigorously required to generate space-filling representations; most often, however, crystallographers present the locations of atoms derived from crystallography via "thermal ellipsoids" whose cut-off parameters are set for convenience both to show the atom locations (with anisotropies), and to allow representation of the covalent bonds or other interactions between atoms as lines. In short, for reasons of utility, crystallographic data historically have appeared in presentations closer to ball-and-stick models. Hence, while crystallographic data contain the information to create space-filling models, it remained for individuals interested in modeling an effective static shape of a molecule, and the space it occupied, and the ways in which it might present a surface to another molecule, to develop the formalism shown above. In 1952, Robert Corey and Linus Pauling described accurate scale models of molecules which they had built at Caltech. In their models, they envisioned the surface of the molecule as being determined by the van der Waals radius of each atom of the molecule, and crafted atoms as hardwood spheres of diameter proportional to each atom's van der Waals radius, in the scale 1 inch = 1 Å. To allow bonds between atoms, a portion of each sphere was cut away to create a pair of matching flat faces, with the cuts dimensioned so that the distance between sphere centers was proportional to the lengths of standard types of chemical bonds. A connector was designed—a metal bushing that threaded into each sphere at the center of each flat face.
The two spheres were then firmly held together by a metal rod inserted into the pair of opposing bushings (with fastening by screws). The models also had special features to allow representation of hydrogen bonds. In 1965, Walter L. Koltun designed and patented a simplified system with molded plastic atoms of various colours, which were joined by specially designed snap connectors; this simpler system accomplished essentially the same ends as the Corey-Pauling system, and allowed for the development of the models as a popular way of working with molecules in training and research environments. Such colour-coded, bond length-defined, van der Waals-type space-filling models are now commonly known as CPK models, after these three developers of the specific concept. In modern research efforts, attention returned to use of data-rich crystallographic models in combination with traditional and new computational methods to provide space-filling models of molecules, both simple and complex, where added information such as which portions of the surface of the molecule were readily accessible to solvent, or how the electrostatic characteristics of a space-filling representation—which in the CPK case is almost fully left to the imagination—could be added to the visual models created. The two closing images give examples of the latter type of calculation and representation, and its utility. See also Ball-and-stick model Van der Waals surface CPK coloring Molecular graphics Software for molecular modeling Molecular design software References External links More on molecular models and a couple of examples from chemistry and biology (article is in German) Gallery Molecular modelling Surfaces
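A space-filling model is, in data terms, just a list of sphere centres, van der Waals radii, and CPK colours. The sketch below builds that list for a water molecule; the Bondi radii and the 0.96 Å / 104.5° geometry are standard textbook values, while the output layout is an arbitrary illustrative choice:

```python
import math

# Sphere list for a space-filling (CPK-style) model of water.
# Van der Waals radii (Bondi) and conventional CPK colours.
VDW = {"O": 1.52, "H": 1.20}            # angstroms
CPK_COLOUR = {"O": "red", "H": "white"}

bond = 0.96                              # O-H bond length, angstroms
half_angle = math.radians(104.5 / 2)     # half the H-O-H angle
atoms = [
    ("O", (0.0, 0.0, 0.0)),
    ("H", ( bond * math.sin(half_angle), bond * math.cos(half_angle), 0.0)),
    ("H", (-bond * math.sin(half_angle), bond * math.cos(half_angle), 0.0)),
]

for element, (x, y, z) in atoms:
    print(f"{element}: centre=({x:+.3f}, {y:+.3f}, {z:+.3f}) A, "
          f"radius={VDW[element]:.2f} A, colour={CPK_COLOUR[element]}")
```

Feeding such sphere lists to any 3D renderer reproduces the "full size" overlapping-sphere look that distinguishes these models from ball-and-stick representations.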
Space-filling model
[ "Chemistry" ]
1,027
[ "Theoretical chemistry", "Molecular modelling", "Molecular physics" ]
581,859
https://en.wikipedia.org/wiki/Invertible%20sheaf
In mathematics, an invertible sheaf is a sheaf on a ringed space that has an inverse with respect to tensor product of sheaves of modules. It is the equivalent in algebraic geometry of the topological notion of a line bundle. Due to their interactions with Cartier divisors, they play a central role in the study of algebraic varieties. Definition Let (X, OX) be a ringed space. Isomorphism classes of sheaves of OX-modules form a monoid under the operation of tensor product of OX-modules. The identity element for this operation is OX itself. Invertible sheaves are the invertible elements of this monoid. Specifically, if L is a sheaf of OX-modules, then L is called invertible if it satisfies any of the following equivalent conditions: There exists a sheaf M such that L ⊗ M ≅ OX. The natural homomorphism L ⊗ L∨ → OX is an isomorphism, where L∨ denotes the dual sheaf Hom(L, OX). The functor from OX-modules to OX-modules defined by M ↦ M ⊗ L is an equivalence of categories. Every locally free sheaf of rank one is invertible. If X is a locally ringed space, then L is invertible if and only if it is locally free of rank one. Because of this fact, invertible sheaves are closely related to line bundles, to the point where the two are sometimes conflated. Examples Let X be an affine scheme Spec R. Then an invertible sheaf on X is the sheaf associated to a rank one projective module over R. For example, this includes fractional ideals of algebraic number fields, since these are rank one projective modules over the rings of integers of the number field. The Picard group Quite generally, the isomorphism classes of invertible sheaves on X themselves form an abelian group under tensor product. This group generalises the ideal class group. In general it is written Pic(X), with Pic the Picard functor. Since it also includes the theory of the Jacobian variety of an algebraic curve, the study of this functor is a major issue in algebraic geometry. The direct construction of invertible sheaves by means of data on X leads to the concept of Cartier divisor. See also Vector bundles in algebraic geometry Line bundle First Chern class Picard group Birkhoff-Grothendieck theorem References Geometry of divisors Sheaf theory
Invertible sheaf
[ "Mathematics" ]
487
[ "Topology", "Sheaf theory", "Mathematical structures", "Category theory" ]
581,888
https://en.wikipedia.org/wiki/Luminous%20flux
In photometry, luminous flux or luminous power is the measure of the perceived power of light. It differs from radiant flux, the measure of the total power of electromagnetic radiation (including infrared, ultraviolet, and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light. Units The SI unit of luminous flux is the lumen (lm). One lumen is defined as the luminous flux of light produced by a light source that emits one candela of luminous intensity over a solid angle of one steradian. In other systems of units, luminous flux may have units of power. Weighting The luminous flux accounts for the sensitivity of the eye by weighting the power at each wavelength with the luminosity function, which represents the eye's response to different wavelengths. The luminous flux is a weighted sum of the power at all wavelengths in the visible band. Light outside the visible band does not contribute. The ratio of the total luminous flux to the radiant flux is called the luminous efficacy. This model of human visual brightness perception is standardized by the CIE and ISO. Context Luminous flux is often used as an objective measure of the useful light emitted by a light source, and is typically reported on the packaging for light bulbs, although it is not always prominent. Consumers commonly compare the luminous flux of different light bulbs since it provides an estimate of the apparent amount of light the bulb will produce, and a lightbulb with a higher ratio of luminous flux to consumed power is more efficient. Luminous flux is not used to compare brightness, as this is a subjective perception which varies according to the distance from the light source and the angular spread of the light from the source. Measurement Luminous flux of artificial light sources is typically measured using an integrating sphere, or a goniophotometer outfitted with a photometer or a spectroradiometer. Relationship to luminous intensity Luminous flux (in lumens) is a measure of the total amount of light a lamp puts out. The luminous intensity (in candelas) is a measure of how bright the beam in a particular direction is. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian, then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, but the luminous flux remains the same. Examples References Physical quantities Photometry Temporal rates
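The weighting described above is easy to express numerically: luminous flux is 683 lm/W times the integral of the spectral power distribution against the luminosity function V(λ). In the sketch below, the Gaussian stand-in for V(λ) and the flat 10 W test spectrum are both illustrative assumptions; a real calculation would use the tabulated CIE data:

```python
import numpy as np

# Luminous flux from a spectral power distribution:
#   Phi_v = 683 lm/W * integral( V(lambda) * Phi_e(lambda) d lambda )
# The true photopic luminosity function V is tabulated by the CIE; the
# Gaussian below is only a rough stand-in for illustration.
wavelength = np.linspace(380, 780, 401)                 # nm
V = np.exp(-0.5 * ((wavelength - 555) / 45.0) ** 2)     # crude V(lambda) model

# Made-up flat 10 W source spread evenly over the visible band:
phi_e = np.full_like(wavelength, 10.0 / 400.0)          # W per nm

phi_v = 683.0 * np.trapz(V * phi_e, wavelength)         # lumens
print(f"luminous flux ~ {phi_v:.0f} lm")
```

Note how a 10 W radiant source yields far fewer lumens than 10 W concentrated at 555 nm would (6830 lm), because most of the flat spectrum falls where the eye is less sensitive.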
Luminous flux
[ "Physics", "Mathematics" ]
537
[ "Temporal quantities", "Physical phenomena", "Physical quantities", "Quantity", "Temporal rates", "Physical properties" ]
582,024
https://en.wikipedia.org/wiki/Pushout%20%28category%20theory%29
In category theory, a branch of mathematics, a pushout (also called a fibered coproduct or fibered sum or cocartesian square or amalgamated sum) is the colimit of a diagram consisting of two morphisms f : Z → X and g : Z → Y with a common domain. The pushout consists of an object P along with two morphisms X → P and Y → P that complete a commutative square with the two given morphisms f and g. In fact, the defining universal property of the pushout (given below) essentially says that the pushout is the "most general" way to complete this commutative square. Common notations for the pushout are X ⊔Z Y and X +Z Y. The pushout is the categorical dual of the pullback. Universal property Explicitly, the pushout of the morphisms f and g consists of an object P and two morphisms i1 : X → P and i2 : Y → P such that the diagram commutes and such that (P, i1, i2) is universal with respect to this diagram. That is, for any other such triple (Q, j1, j2) for which the following diagram commutes, there must exist a unique u : P → Q also making the diagram commute: As with all universal constructions, the pushout, if it exists, is unique up to a unique isomorphism. Examples of pushouts Here are some examples of pushouts in familiar categories. Note that in each case, we are only providing a construction of an object in the isomorphism class of pushouts; as mentioned above, though there may be other ways to construct it, they are all equivalent. Suppose that X, Y, and Z as above are sets, and that f : Z → X and g : Z → Y are set functions. The pushout of f and g is the disjoint union of X and Y, where elements sharing a common preimage (in Z) are identified, together with the morphisms i1, i2 from X and Y, i.e. P = (X ⊔ Y) / ~, where ~ is the finest equivalence relation such that f(z) ~ g(z) for all z in Z. In particular, if X and Y are subsets of some larger set W and Z is their intersection, with f and g the inclusion maps of Z into X and Y, then the pushout can be canonically identified with the union X ∪ Y. A specific case of this is the cograph of a function. If f : X → Y is a function, then the cograph of f is the pushout of f along the identity function of X. In elementary terms, the cograph is the quotient of X ⊔ Y by the equivalence relation generated by identifying x ∈ X with f(x) ∈ Y. A function may be recovered by its cograph because each equivalence class in X ⊔ Y contains precisely one element of Y. Cographs are dual to graphs of functions since the graph may be defined as the pullback of f along the identity of Y. The construction of adjunction spaces is an example of pushouts in the category of topological spaces. More precisely, if Z is a subspace of Y and g : Z → Y is the inclusion map we can "glue" Y to another space X along Z using an "attaching map" f : Z → X. The result is the adjunction space X ∪f Y, which is just the pushout of f and g. More generally, all identification spaces may be regarded as pushouts in this way. A special case of the above is the wedge sum or one-point union; here we take X and Y to be pointed spaces and Z the one-point space. Then the pushout is X ∨ Y, the space obtained by gluing the basepoint of X to the basepoint of Y. In the category of abelian groups, pushouts can be thought of as "direct sum with gluing" in the same way we think of adjunction spaces as "disjoint union with gluing". The zero group is a subgroup of every group, so for any abelian groups A and B, we have homomorphisms 0 → A and 0 → B. The pushout of these maps is the direct sum of A and B.
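Before generalizing further, note that for the category of sets the construction above is directly computable: form the disjoint union, then merge the classes forced by f(z) ~ g(z). A minimal Python sketch (the tagging scheme and function name are ad hoc choices):

```python
# Pushout of two set maps f: Z -> X and g: Z -> Y, computed as the
# disjoint union of X and Y modulo f(z) ~ g(z), via a tiny union-find.
def pushout(X, Y, Z, f, g):
    parent = {("X", x): ("X", x) for x in X}
    parent.update({("Y", y): ("Y", y) for y in Y})

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    for z in Z:
        ra, rb = find(("X", f(z))), find(("Y", g(z)))
        if ra != rb:
            parent[ra] = rb                 # merge the two classes

    classes = {}
    for a in parent:
        classes.setdefault(find(a), set()).add(a)
    return list(classes.values())

# Example: Z = {0}, f(0) = "a", g(0) = 1 glues "a" in X to 1 in Y.
print(pushout(X={"a", "b"}, Y={1, 2}, Z={0}, f=lambda z: "a", g=lambda z: 1))
# -> classes {('X','a'), ('Y',1)}, {('X','b')}, {('Y',2)} (order may vary)
```

The same merge-and-quotient idea is what the algebraic generalization below performs, with a subgroup playing the role of the equivalence relation.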
Generalizing to the case where f and g are arbitrary homomorphisms from a common domain Z, one obtains for the pushout a quotient group of the direct sum; namely, we mod out by the subgroup consisting of pairs (f(z), −g(z)). Thus we have "glued" along the images of Z under f and g. A similar approach yields the pushout in the category of R-modules for any ring R. In the category of groups, the pushout is called the free product with amalgamation. It shows up in the Seifert–van Kampen theorem of algebraic topology (see below). In CRing, the category of commutative rings (a full subcategory of the category of rings), the pushout is given by the tensor product of rings A ⊗C B with the morphisms g′ : A → A ⊗C B and f′ : B → A ⊗C B that satisfy g′ ∘ f = f′ ∘ g. In fact, since the pushout is the colimit of a span and the pullback is the limit of a cospan, we can think of the tensor product of rings and the fibered product of rings (see the examples section) as dual notions to each other. In particular, let A, B, and C be objects (commutative rings with identity) in CRing and let f : C → A and g : C → B be morphisms (ring homomorphisms) in CRing. Then the tensor product is A ⊗C B, the tensor product of A and B over C, in which the two structure maps are identified along C via f(c) ⊗ 1 = 1 ⊗ g(c). See Free product of associative algebras for the case of non-commutative rings. In the multiplicative monoid of positive integers, considered as a category with one object, the pushout of two positive integers m and n is just the pair (lcm(m, n)/m, lcm(m, n)/n), where the numerators are both the least common multiple of m and n. Note that the same pair is also the pullback. Properties Whenever the pushout A ⊔C B exists, then B ⊔C A exists as well and there is a natural isomorphism A ⊔C B ≅ B ⊔C A. In an abelian category all pushouts exist, and they preserve cokernels in the following sense: if (P, i1, i2) is the pushout of f : Z → X and g : Z → Y, then the natural map coker(f) → coker(i2) is an isomorphism, and so is the natural map coker(g) → coker(i1). There is a natural isomorphism (A ⊔C B) ⊔B D ≅ A ⊔C D. Explicitly, this means: if maps f : C → A, g : C → B and h : B → D are given and the pushout of f and g is given by i : A → P and j : B → P, and the pushout of j and h is given by k : P → Q and l : D → Q, then the pushout of f and hg is given by ki : A → Q and l : D → Q. Graphically this means that two pushout squares, placed side by side and sharing one morphism, form a larger pushout square when ignoring the inner shared morphism. Construction via coproducts and coequalizers Pushouts are equivalent to coproducts and coequalizers (if there is an initial object) in the sense that: Coproducts are a pushout from the initial object, and the coequalizer of f, g : X → Y is the pushout of [f, g] and [1X, 1X], so if there are pushouts (and an initial object), then there are coequalizers and coproducts; Pushouts can be constructed from coproducts and coequalizers, as described below (the pushout is the coequalizer of the maps to the coproduct). All of the above examples may be regarded as special cases of the following very general construction, which works in any category C satisfying: For any objects A and B of C, their coproduct exists in C; For any morphisms j and k of C with the same domain and the same target, the coequalizer of j and k exists in C. In this setup, we obtain the pushout of morphisms f : Z → X and g : Z → Y by first forming the coproduct of the targets X and Y. We then have two morphisms from Z to this coproduct. We can either go from Z to X via f, then include into the coproduct, or we can go from Z to Y via g, then include into the coproduct.
The pushout of f and g is the coequalizer of these new maps. Application: the Seifert–van Kampen theorem The Seifert–van Kampen theorem answers the following question. Suppose we have a path-connected space X, covered by path-connected open subspaces A and B whose intersection A ∩ B is also path-connected. (Assume also that the basepoint lies in the intersection of A and B.) If we know the fundamental groups of A, B, and A ∩ B, can we recover the fundamental group of X? The answer is yes, provided we also know the induced homomorphisms π1(A ∩ B) → π1(A) and π1(A ∩ B) → π1(B). The theorem then says that the fundamental group of X is the pushout of these two induced maps. Of course, X is the pushout of the two inclusion maps of A ∩ B into A and B. Thus we may interpret the theorem as confirming that the fundamental group functor preserves pushouts of inclusions. We might expect this to be simplest when A ∩ B is simply connected, since then both homomorphisms above have trivial domain. Indeed, this is the case, since then the pushout (of groups) reduces to the free product, which is the coproduct in the category of groups. In a most general case we will be speaking of a free product with amalgamation. There is a detailed exposition of this, in a slightly more general setting (covering groupoids) in the book by J. P. May listed in the references. References May, J. P. A concise course in algebraic topology. University of Chicago Press, 1999. An introduction to categorical approaches to algebraic topology: the focus is on the algebra, and assumes a topological background. Ronald Brown, "Topology and Groupoids" (pdf available). Gives an account of some categorical methods in topology, using the fundamental groupoid on a set of base points to give a generalisation of the Seifert–van Kampen theorem. Philip J. Higgins, "Categories and Groupoids" (free download). Explains some uses of groupoids in group theory and topology. External links pushout in nLab Limits (category theory)
Pushout (category theory)
[ "Mathematics" ]
2,237
[ "Mathematical structures", "Category theory", "Limits (category theory)" ]
582,228
https://en.wikipedia.org/wiki/Rabi%20cycle
In physics, the Rabi cycle (or Rabi flop) is the cyclic behaviour of a two-level quantum system in the presence of an oscillatory driving field. A great variety of physical processes belonging to the areas of quantum computing, condensed matter, atomic and molecular physics, and nuclear and particle physics can be conveniently studied in terms of two-level quantum mechanical systems, and exhibit Rabi flopping when coupled to an optical driving field. The effect is important in quantum optics, magnetic resonance, and quantum computing, and is named after Isidor Isaac Rabi. A two-level system is one that has two possible energy levels. One level is a ground state with lower energy, and the other is an excited state with higher energy. If the energy levels are not degenerate (i.e. don't have equal energies), the system can absorb or emit a quantum of energy and transition from the ground state to the excited state or vice versa. When an atom (or some other two-level system) is illuminated by a coherent beam of photons, it will cyclically absorb photons and emit them by stimulated emission. One such cycle is called a Rabi cycle, and the inverse of its duration is the Rabi frequency of the system. The effect can be modeled using the Jaynes–Cummings model and the Bloch vector formalism. Mathematical description of spin flopping One example of Rabi flopping is the spin flipping within a quantum system containing a spin-1/2 particle and an oscillating magnetic field. We split the magnetic field into a constant 'environment' field and the oscillating part, so that our field looks like \mathbf{B}=B_0\,\hat{z}+B_1\left(\cos(\omega t)\,\hat{x}+\sin(\omega t)\,\hat{y}\right), where B_0 and B_1 are the strengths of the environment and the oscillating fields respectively, and \omega is the frequency at which the oscillating field oscillates. We can then write a Hamiltonian describing this field, yielding H=-\gamma\,\mathbf{B}\cdot\mathbf{S}=\omega_0 S_z+\omega_1\left(\cos(\omega t)\,S_x+\sin(\omega t)\,S_y\right), with \omega_0=-\gamma B_0 and \omega_1=-\gamma B_1 for a particle of gyromagnetic ratio \gamma, where S_x, S_y, and S_z are the spin operators. The frequency \omega_1 is known as the Rabi frequency. We can substitute in their matrix forms to find the matrix representing the Hamiltonian: H=\frac{\hbar}{2}\begin{pmatrix}\omega_0 & \omega_1 e^{-i\omega t}\\ \omega_1 e^{i\omega t} & -\omega_0\end{pmatrix}, where we have used e^{\pm i\omega t}=\cos(\omega t)\pm i\sin(\omega t). This Hamiltonian is a function of time, meaning we cannot use the standard prescription of Schrödinger time evolution in quantum mechanics, where the time evolution operator is U(t)=e^{-iHt/\hbar}, because this formula assumes that the Hamiltonian is constant with respect to time. The main strategy in solving this problem is to transform the Hamiltonian so that the time dependence is gone, solve the problem in this transformed frame, and then transform the results back to normal. This can be done by shifting the reference frame that we work in to match the rotating magnetic field. If we rotate along with the magnetic field, then from our point of view, the magnetic field is not rotating and appears constant. Therefore, in the rotating reference frame, both the magnetic field and the Hamiltonian are constant with respect to time. We denote our spin-1/2 particle state as |\psi(t)\rangle=a(t)\,|{\uparrow}\rangle+b(t)\,|{\downarrow}\rangle in the stationary reference frame, where |{\uparrow}\rangle and |{\downarrow}\rangle are the spin up and spin down states respectively, and |a(t)|^{2}+|b(t)|^{2}=1. We can transform this state to the rotating reference frame by using a rotation operator R(\alpha)=e^{-i\alpha S_z/\hbar}, which rotates the state counterclockwise by an angle \alpha around the positive z-axis in state space, which may be visualized as a Bloch sphere. At a time t and a frequency \omega, the magnetic field will have precessed around by an angle \omega t. To transform into the rotating reference frame, note that the stationary x and y-axes rotate clockwise from the point of view of the rotating reference frame.
Because the operator $R_z$ rotates counterclockwise, we must negate the angle to produce the correct state in the rotating reference frame. Thus, the state becomes

$|\psi(t)\rangle_{\mathrm{rot}} = R_z(-\omega t)\,|\psi(t)\rangle = e^{i\omega t S_z/\hbar}\,|\psi(t)\rangle.$

We may rewrite the amplitudes so that

$a_{\mathrm{rot}}(t) = e^{i\omega t/2}\,a(t), \qquad b_{\mathrm{rot}}(t) = e^{-i\omega t/2}\,b(t).$

The time dependent Schrödinger equation in the stationary reference frame is

$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = H\,|\psi(t)\rangle.$

Expanding this using the matrix forms of the Hamiltonian and the state yields

$i\hbar\begin{pmatrix}\dot a\\ \dot b\end{pmatrix} = \frac{\hbar}{2}\begin{pmatrix}\omega_0 & \omega_1 e^{-i\omega t}\\ \omega_1 e^{i\omega t} & -\omega_0\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix}.$

Applying the matrix and separating the components of the vector allows us to write two coupled differential equations as follows:

$i\dot a = \frac{\omega_0}{2}\,a + \frac{\omega_1}{2}\,e^{-i\omega t}\,b, \qquad i\dot b = \frac{\omega_1}{2}\,e^{i\omega t}\,a - \frac{\omega_0}{2}\,b.$

To transform this into the rotating reference frame, we may use the fact that $a = e^{-i\omega t/2}\,a_{\mathrm{rot}}$ and $b = e^{i\omega t/2}\,b_{\mathrm{rot}}$ to write the following:

$i\dot a_{\mathrm{rot}} = \frac{\Delta\omega}{2}\,a_{\mathrm{rot}} + \frac{\omega_1}{2}\,b_{\mathrm{rot}}, \qquad i\dot b_{\mathrm{rot}} = \frac{\omega_1}{2}\,a_{\mathrm{rot}} - \frac{\Delta\omega}{2}\,b_{\mathrm{rot}},$

where $\Delta\omega = \omega_0 - \omega$. Now define

$|\psi(t)\rangle_{\mathrm{rot}} = a_{\mathrm{rot}}(t)\,|{\uparrow}\rangle + b_{\mathrm{rot}}(t)\,|{\downarrow}\rangle.$

We now write these two new coupled differential equations back into the form of the Schrödinger equation:

$i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle_{\mathrm{rot}} = H_{\mathrm{rot}}\,|\psi(t)\rangle_{\mathrm{rot}}, \qquad H_{\mathrm{rot}} = \frac{\hbar}{2}\begin{pmatrix}\Delta\omega & \omega_1\\ \omega_1 & -\Delta\omega\end{pmatrix}.$

In some sense, this is a transformed Schrödinger equation in the rotating reference frame. Crucially, the Hamiltonian does not vary with respect to time, meaning in this reference frame, we can use the familiar solution to Schrödinger time evolution:

$|\psi(t)\rangle_{\mathrm{rot}} = e^{-iH_{\mathrm{rot}}t/\hbar}\,|\psi(0)\rangle_{\mathrm{rot}}.$

This transformed problem is equivalent to that of Larmor precession of a spin state, so we have solved the essence of Rabi flopping. The probability that a particle starting in the spin up state flips to the spin down state can be stated as

$P_{\uparrow\to\downarrow}(t) = \frac{\omega_1^2}{\Omega^2}\,\sin^2\!\left(\frac{\Omega t}{2}\right),$

where $\Omega = \sqrt{\omega_1^2 + (\omega_0 - \omega)^2}$ is the generalized Rabi frequency. Something important to notice is that $P_{\uparrow\to\downarrow}$ will not reach 1 unless $\omega = \omega_0$. In other words, the frequency of the rotating magnetic field must match the environmental field's Larmor frequency in order for the spin to fully flip; they must achieve resonance. When resonance (i.e. $\omega = \omega_0$) is achieved, $P_{\uparrow\to\downarrow}(t) = \sin^2\!\left(\frac{\omega_1 t}{2}\right)$. Within the rotating reference frame, when resonance is achieved, it is as if there is no environmental magnetic field, and the oscillating magnetic field looks constant. Thus both mathematically (as we have derived) and physically, the problem reduces to the precession of a spin state under a constant magnetic field (Larmor precession). To transform the solved state back to the stationary reference frame, we reuse the rotation operator with the opposite angle, thus yielding a full solution to the problem. Applications The Rabi effect is important in quantum optics, magnetic resonance and quantum computing. Quantum optics Rabi flopping may be used to describe a two-level atom with an excited state and a ground state in an electromagnetic field with frequency tuned to the excitation energy. Using the spin-flipping formula but applying it to this system yields

$P_{g\to e}(t) = \sin^2\!\left(\frac{\Omega t}{2}\right),$

where $\Omega$ is the Rabi frequency. Quantum computing Any two-state quantum system can be used to model a qubit. Rabi flopping provides a physical way to allow for spin flips in a qubit system. At resonance, the transition probability is given by $P(t) = \sin^2\!\left(\frac{\Omega t}{2}\right)$. To go from state $|0\rangle$ to state $|1\rangle$ it is sufficient to adjust the time $t$ during which the rotating field acts such that $\frac{\Omega t}{2} = \frac{\pi}{2}$, or $t = \frac{\pi}{\Omega}$. This is called a $\pi$ pulse. If a time intermediate between 0 and $\frac{\pi}{\Omega}$ is chosen, we obtain a superposition of $|0\rangle$ and $|1\rangle$. In particular for $t = \frac{\pi}{2\Omega}$, we have a $\frac{\pi}{2}$ pulse, which acts as: $|0\rangle \to \frac{|0\rangle + i\,|1\rangle}{\sqrt{2}}$. The equations are essentially identical in the case of a two level atom in the field of a laser when the generally well satisfied rotating wave approximation is made, where $\hbar\omega_0$ is the energy difference between the two atomic levels, $\omega$ is the frequency of the laser wave, and the Rabi frequency $\Omega$ is proportional to the product of the transition electric dipole moment of the atom $\vec{d}$ and the electric field $\vec{E}$ of the laser wave, that is $\Omega \propto \hbar^{-1}\,\vec{d}\cdot\vec{E}$. On a quantum computer, these oscillations are obtained by exposing qubits to periodic electric or magnetic fields during suitably adjusted time intervals.
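To make the resonance condition concrete, here is a minimal numerical sketch in Python of the transition-probability formula derived above; the function name and the example parameter values are illustrative assumptions, not part of the original treatment.

    import numpy as np

    def flip_probability(t, omega1, detuning):
        """Spin-flip probability from the formula above: omega1 is the Rabi
        frequency and detuning = omega0 - omega (both in rad/s)."""
        big_omega = np.sqrt(omega1**2 + detuning**2)  # generalized Rabi frequency
        return (omega1 / big_omega) ** 2 * np.sin(big_omega * t / 2) ** 2

    omega1 = 2 * np.pi * 1e6      # an assumed 1 MHz Rabi frequency
    t_pi = np.pi / omega1         # duration of a pi pulse
    print(flip_probability(t_pi, omega1, 0.0))     # 1.0: full flip on resonance
    print(flip_probability(t_pi, omega1, omega1))  # < 1: incomplete flip off resonance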
See also Atomic coherence Bloch sphere Laser pumping Optical pumping Rabi problem Vacuum Rabi oscillation Neutral particle oscillation References Quantum Mechanics Volume 1 by C. Cohen-Tannoudji, Bernard Diu, Frank Laloe, A Short Introduction to Quantum Information and Quantum Computation by Michel Le Bellac, The Feynman Lectures on Physics, Volume III Modern Approach To Quantum Mechanics by John S Townsend, Quantum optics Atomic physics
Rabi cycle
[ "Physics", "Chemistry" ]
1,517
[ "Quantum optics", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
582,273
https://en.wikipedia.org/wiki/Hydraulic%20diameter
The hydraulic diameter, $D_H$, is a commonly used term when handling flow in non-circular tubes and channels. Using this term, one can calculate many things in the same way as for a round tube. When the cross-section is uniform along the tube or channel length, it is defined as

$D_H = \frac{4A}{P},$

where $A$ is the cross-sectional area of the flow and $P$ is the wetted perimeter of the cross-section. More intuitively, the hydraulic diameter can be understood as a function of the hydraulic radius $R_H$, which is defined as the cross-sectional area of the channel divided by the wetted perimeter, $R_H = \frac{A}{P}$, so that $D_H = 4R_H$. Here, the wetted perimeter includes all surfaces acted upon by shear stress from the fluid. Note that for the case of a circular pipe,

$D_H = \frac{4\,\frac{\pi D^2}{4}}{\pi D} = D.$

The need for the hydraulic diameter arises due to the use of a single dimension in the case of a dimensionless quantity such as the Reynolds number, which prefers a single variable for flow analysis rather than the set of variables as listed in the table below. The Manning formula contains a quantity called the hydraulic radius. Despite what the name may suggest, the hydraulic diameter is not twice the hydraulic radius, but four times larger. Hydraulic diameter is mainly used for calculations involving turbulent flow. Secondary flows can be observed in non-circular ducts as a result of turbulent shear stress in the turbulent flow. Hydraulic diameter is also used in calculation of heat transfer in internal-flow problems. Non-uniform and non-circular cross-section channels In the more general case, channels with non-uniform non-circular cross-sectional area, such as the Tesla valve, the hydraulic diameter is defined as:

$D_H = \frac{4V}{S},$

where $V$ is the total wetted volume of the channel and $S$ is the total wetted surface area. This definition reduces to $D_H = \frac{4A}{P}$ for uniform non-circular cross-section channels, and to $D_H = D$ for circular pipes. List of hydraulic diameters For a fully filled duct or pipe whose cross-section is a convex regular polygon, the hydraulic diameter is equivalent to the diameter of a circle inscribed within the wetted perimeter. This can be seen as follows: The $N$-sided regular polygon is a union of $N$ triangles, each of height $\frac{D}{2}$ and base $B$. Each such triangle contributes $\frac{BD}{4}$ to the total area and $B$ to the total perimeter, giving

$D_H = \frac{4\,\frac{NBD}{4}}{NB} = D$

for the hydraulic diameter. References See also Equivalent spherical diameter Hydraulic radius Darcy friction factor Fluid dynamics Heat transfer Hydrology Hydraulics Radii
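As an illustration of the definition above, a small Python sketch (function and variable names are illustrative) that recovers the listed special cases:

    import math

    def hydraulic_diameter(area, wetted_perimeter):
        # D_H = 4A / P for a uniform cross-section.
        return 4.0 * area / wetted_perimeter

    # A fully filled circular pipe of diameter d recovers D_H = d:
    d = 0.1
    print(hydraulic_diameter(math.pi * d**2 / 4, math.pi * d))  # 0.1

    # A fully filled rectangular duct of sides a and b gives 2ab/(a+b):
    a, b = 0.2, 0.1
    print(hydraulic_diameter(a * b, 2 * (a + b)))               # ~0.1333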
Hydraulic diameter
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
470
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Thermodynamics", "Environmental engineering", "Piping", "Fluid dynamics" ]
582,410
https://en.wikipedia.org/wiki/Beam%20splitter
A beam splitter or beamsplitter is an optical device that splits a beam of light into a transmitted and a reflected beam. It is a crucial part of many optical experimental and measurement systems, such as interferometers, also finding widespread application in fibre optic telecommunications. Designs In its most common form, a cube, a beam splitter is made from two triangular glass prisms which are glued together at their base using polyester, epoxy, or urethane-based adhesives. (Before these synthetic resins, natural ones were used, e.g. Canada balsam.) The thickness of the resin layer is adjusted such that (for a certain wavelength) half of the light incident through one "port" (i.e., face of the cube) is reflected and the other half is transmitted due to FTIR (frustrated total internal reflection). Polarizing beam splitters, such as the Wollaston prism, use birefringent materials to split light into two beams of orthogonal polarization states. Another design is the use of a half-silvered mirror. This is composed of an optical substrate, which is often a sheet of glass or plastic, with a partially transparent thin coating of metal. The thin coating can be aluminium deposited from aluminium vapor using a physical vapor deposition method. The thickness of the deposit is controlled so that part (typically half) of the light, which is incident at a 45-degree angle and not absorbed by the coating or substrate material, is transmitted and the remainder is reflected. A very thin half-silvered mirror used in photography is often called a pellicle mirror. To reduce loss of light due to absorption by the reflective coating, so-called "Swiss-cheese" beam-splitter mirrors have been used. Originally, these were sheets of highly polished metal perforated with holes to obtain the desired ratio of reflection to transmission. Later, metal was sputtered onto glass so as to form a discontinuous coating, or small areas of a continuous coating were removed by chemical or mechanical action to produce a very literally "half-silvered" surface. Instead of a metallic coating, a dichroic optical coating may be used. Depending on its characteristics (thin-film interference), the ratio of reflection to transmission will vary as a function of the wavelength of the incident light. Dichroic mirrors are used in some ellipsoidal reflector spotlights to split off unwanted infrared (heat) radiation, and as output couplers in laser construction. A third version of the beam splitter is a dichroic mirrored prism assembly which uses dichroic optical coatings to divide an incoming light beam into a number of spectrally distinct output beams. Such a device was used in three-pickup-tube color television cameras and the three-strip Technicolor movie camera. It is currently used in modern three-CCD cameras. An optically similar system is used in reverse as a beam-combiner in three-LCD projectors, in which light from three separate monochrome LCD displays is combined into a single full-color image for projection. Beam splitters with single-mode fiber for PON networks use the single-mode behavior to split the beam. The split is achieved by physically splicing two fibers together in an X shape. Arrangements of mirrors or prisms used as camera attachments to photograph stereoscopic image pairs with one lens and one exposure are sometimes called "beam splitters", but that is a misnomer, as they are effectively a pair of periscopes redirecting rays of light which are already non-coincident.
In some very uncommon attachments for stereoscopic photography, mirrors or prism blocks similar to beam splitters perform the opposite function, superimposing views of the subject from two different perspectives through color filters to allow the direct production of an anaglyph 3D image, or through rapidly alternating shutters to record sequential field 3D video. Phase shift Beam splitters are sometimes used to recombine beams of light, as in a Mach–Zehnder interferometer. In this case there are two incoming beams, and potentially two outgoing beams. But the amplitudes of the two outgoing beams are the sums of the (complex) amplitudes calculated from each of the incoming beams, and it may result that one of the two outgoing beams has amplitude zero. In order for energy to be conserved (see next section), there must be a phase shift in at least one of the outgoing beams. For example, if a polarized light wave in air hits a dielectric surface such as glass, and the electric field of the light wave is in the plane of the surface, then the reflected wave will have a phase shift of π, while the transmitted wave will not have a phase shift; a wave reflected from a medium with a lower refractive index does not pick up a phase shift. The behavior is dictated by the Fresnel equations. This does not apply to partial reflection by conductive (metallic) coatings, where other phase shifts occur in all paths (reflected and transmitted). In any case, the details of the phase shifts depend on the type and geometry of the beam splitter. Classical lossless beam splitter For beam splitters with two incoming beams, using a classical, lossless beam splitter with electric fields Ea and Eb each incident at one of the inputs, the two output fields Ec and Ed are linearly related to the inputs through

$\begin{pmatrix} E_c \\ E_d \end{pmatrix} = \begin{pmatrix} r_{ac} & t_{bc} \\ t_{ad} & r_{bd} \end{pmatrix} \begin{pmatrix} E_a \\ E_b \end{pmatrix},$

where the 2×2 element

$\tau = \begin{pmatrix} r_{ac} & t_{bc} \\ t_{ad} & r_{bd} \end{pmatrix}$

is the beam-splitter transfer matrix and r and t are the reflectance and transmittance along a particular path through the beam splitter, that path being indicated by the subscripts. (The values depend on the polarization of the light.) If the beam splitter removes no energy from the light beams, the total output energy can be equated with the total input energy, reading

$|E_c|^2 + |E_d|^2 = |E_a|^2 + |E_b|^2.$

Inserting the results from the transfer equation above with $E_b = 0$ produces $|r_{ac}|^2 + |t_{ad}|^2 = 1$, and similarly for $E_a = 0$ then $|t_{bc}|^2 + |r_{bd}|^2 = 1$. When both $E_a$ and $E_b$ are non-zero, and using these two results, we obtain

$r_{ac}\,t_{bc}^{*} + t_{ad}\,r_{bd}^{*} = 0,$

where "$*$" indicates the complex conjugate. It is now easy to show that $\tau^{\dagger}\tau = I$, where $I$ is the identity, i.e. the beam-splitter transfer matrix is a unitary matrix. Each r and t can be written as a complex number having an amplitude and phase factor; for instance, $r_{ac} = |r_{ac}|\,e^{i\phi_{ac}}$. The phase factor accounts for possible shifts in phase of a beam as it reflects or transmits at that surface. Then we obtain

$|r_{ac}|\,|t_{bc}|\,e^{i(\phi_{ac} - \phi_{bc})} + |t_{ad}|\,|r_{bd}|\,e^{i(\phi_{ad} - \phi_{bd})} = 0.$

Further simplifying, the relationship becomes

$\frac{|r_{ac}|}{|t_{ad}|} = -\frac{|r_{bd}|}{|t_{bc}|}\,e^{i(\phi_{ad} - \phi_{bd} - \phi_{ac} + \phi_{bc})},$

which is true when $\phi_{ac} - \phi_{bc} - \phi_{ad} + \phi_{bd} = \pm\pi$ and the exponential term reduces to −1. Applying this new condition and squaring both sides, it becomes

$\frac{|r_{ac}|^2}{1 - |r_{ac}|^2} = \frac{|r_{bd}|^2}{1 - |r_{bd}|^2},$

where substitutions of the form $|t_{ad}|^2 = 1 - |r_{ac}|^2$ were made. This leads to the result $|r_{ac}| = |r_{bd}| \equiv R$, and similarly, $|t_{ad}| = |t_{bc}| \equiv T$. It follows that $R^2 + T^2 = 1$. Having determined the constraints describing a lossless beam splitter, the initial expression can be rewritten as

$\begin{pmatrix} E_c \\ E_d \end{pmatrix} = \begin{pmatrix} R\,e^{i\phi_{ac}} & T\,e^{i\phi_{bc}} \\ T\,e^{i\phi_{ad}} & R\,e^{i\phi_{bd}} \end{pmatrix} \begin{pmatrix} E_a \\ E_b \end{pmatrix}.$

Applying different values for the amplitudes and phases can account for many different forms of the beam splitter that can be seen widely used. The transfer matrix appears to have 6 amplitude and phase parameters, but it also has 2 constraints: $R^2 + T^2 = 1$ and $\phi_{ac} - \phi_{bc} - \phi_{ad} + \phi_{bd} = \pm\pi$.
To include the constraints and simplify to 4 independent parameters, we may write $\phi_{ac} = \phi_0 + \phi_R$, $\phi_{bc} = \phi_0 + \phi_T$, and $\phi_{ad} = \phi_0 - \phi_T$ (and from the constraint $\phi_{bd} = \phi_0 - \phi_R + \pi$), so that

$\tau = e^{i\phi_0}\begin{pmatrix} R\,e^{i\phi_R} & T\,e^{i\phi_T} \\ T\,e^{-i\phi_T} & -R\,e^{-i\phi_R} \end{pmatrix},$

where $2\phi_T$ is the phase difference between the transmitted beams and similarly $2\phi_R$ for the reflected beams, and $\phi_0$ is a global phase. Lastly using the other constraint that $R^2 + T^2 = 1$ we define $\theta$ so that $R = \sin\theta$ and $T = \cos\theta$, hence

$\tau = e^{i\phi_0}\begin{pmatrix} \sin\theta\,e^{i\phi_R} & \cos\theta\,e^{i\phi_T} \\ \cos\theta\,e^{-i\phi_T} & -\sin\theta\,e^{-i\phi_R} \end{pmatrix}.$

A 50:50 beam splitter is produced when $\theta = \pi/4$. The dielectric beam splitter above, for example, has $\phi_R = \phi_T = \phi_0 = 0$, i.e. $\tau = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, while the "symmetric" beam splitter of Loudon has $\phi_R = \pi/2$, $\phi_T = \phi_0 = 0$, i.e. $\tau = \frac{1}{\sqrt{2}}\begin{pmatrix} i & 1 \\ 1 & i \end{pmatrix}$. Use in experiments Beam splitters have been used in both thought experiments and real-world experiments in the area of quantum theory and relativity theory and other fields of physics. These include: The Fizeau experiment of 1851 to measure the speeds of light in water The Michelson–Morley experiment of 1887 to measure the effect of the (hypothetical) luminiferous aether on the speed of light The Hammar experiment of 1935 to refute Dayton Miller's claim of a positive result from repetitions of the Michelson-Morley experiment The Kennedy–Thorndike experiment of 1932 to test the independence of the speed of light and the velocity of the measuring apparatus Bell test experiments (from ca. 1972) to demonstrate consequences of quantum entanglement and exclude local hidden-variable theories Wheeler's delayed choice experiment of 1978, 1984 etc., to test what makes a photon behave as a wave or a particle and when it happens The FELIX experiment (proposed in 2000) to test the Penrose interpretation that quantum superposition depends on spacetime curvature The Mach–Zehnder interferometer, used in various experiments, including the Elitzur–Vaidman bomb tester involving interaction-free measurement; and in others in the area of quantum computation Quantum mechanical description In quantum mechanics, the electric fields are operators as explained by second quantization and Fock states. Each electrical field operator can further be expressed in terms of modes representing the wave behavior and amplitude operators, which are typically represented by the dimensionless creation and annihilation operators. In this theory, the four ports of the beam splitter are represented by a photon number state $|n\rangle$ and the action of a creation operation is $\hat a^{\dagger}\,|n\rangle = \sqrt{n+1}\,|n+1\rangle$. The following is a simplified version of Ref. The relation between the classical field amplitudes $E_a$, $E_b$, $E_c$ and $E_d$ produced by the beam splitter is translated into the same relation of the corresponding quantum creation (or annihilation) operators $\hat a_a^{\dagger}$, $\hat a_b^{\dagger}$, $\hat a_c^{\dagger}$ and $\hat a_d^{\dagger}$, so that

$\begin{pmatrix} \hat a_c^{\dagger} \\ \hat a_d^{\dagger} \end{pmatrix} = \tau \begin{pmatrix} \hat a_a^{\dagger} \\ \hat a_b^{\dagger} \end{pmatrix},$

where the transfer matrix $\tau$ is given in the classical lossless beam splitter section above:

$\tau = e^{i\phi_0}\begin{pmatrix} R\,e^{i\phi_R} & T\,e^{i\phi_T} \\ T\,e^{-i\phi_T} & -R\,e^{-i\phi_R} \end{pmatrix}.$

Since $\tau$ is unitary, $\tau^{-1} = \tau^{\dagger}$, i.e.

$\begin{pmatrix} \hat a_a^{\dagger} \\ \hat a_b^{\dagger} \end{pmatrix} = \tau^{\dagger} \begin{pmatrix} \hat a_c^{\dagger} \\ \hat a_d^{\dagger} \end{pmatrix}.$

This is equivalent to saying that if we start from the vacuum state $|0_a, 0_b\rangle$ and add a photon in port a to produce

$\hat a_a^{\dagger}\,|0_a, 0_b\rangle = |1_a, 0_b\rangle,$

then the beam splitter creates a superposition on the outputs of

$|1_a, 0_b\rangle \to r_{ac}\,|1_c, 0_d\rangle + t_{ad}\,|0_c, 1_d\rangle.$

The probabilities for the photon to exit at ports c and d are therefore $|r_{ac}|^2$ and $|t_{ad}|^2$, as might be expected. Likewise, for any input state $|n_a, m_b\rangle$ the output is

$|n_a, m_b\rangle \to \frac{1}{\sqrt{n!\,m!}}\left(r_{ac}\,\hat a_c^{\dagger} + t_{ad}\,\hat a_d^{\dagger}\right)^{n}\left(t_{bc}\,\hat a_c^{\dagger} + r_{bd}\,\hat a_d^{\dagger}\right)^{m}|0_c, 0_d\rangle.$

Using the multi-binomial theorem, this can be written

$\frac{1}{\sqrt{n!\,m!}}\sum_{j=0}^{n}\sum_{k=0}^{m}\binom{n}{j}\binom{m}{k}\, r_{ac}^{\,j}\, t_{ad}^{\,n-j}\, t_{bc}^{\,k}\, r_{bd}^{\,m-k}\,\sqrt{N!\,(n+m-N)!}\;\,|N_c,\,(n+m-N)_d\rangle,$

where $N = j + k$ and the $\binom{n}{j}$ is a binomial coefficient and it is to be understood that the coefficient is zero if $j > n$ etc. The transmission/reflection coefficient factor in the last equation may be written in terms of the reduced parameters that ensure unitarity:

$r_{ac}^{\,j}\, t_{ad}^{\,n-j}\, t_{bc}^{\,k}\, r_{bd}^{\,m-k} = (-1)^{m-k}\, e^{i(n+m)\phi_0}\, R^{\,j+m-k}\, T^{\,n-j+k}\, e^{i\phi_R(j-m+k)}\, e^{i\phi_T(j+k-n)},$

where it can be seen that if the beam splitter is 50:50 then $R^{\,j+m-k}\,T^{\,n-j+k} = \left(\tfrac{1}{\sqrt{2}}\right)^{n+m}$ and the only factor that depends on $j$ is the $e^{ij(\phi_R + \phi_T)}$ term. This factor causes interesting interference cancellations. For example, if $n = m = 1$ and the beam splitter is 50:50, then

$|1_a, 1_b\rangle \to \frac{e^{2i\phi_0}}{\sqrt{2}}\left(e^{i(\phi_R + \phi_T)}\,|2_c, 0_d\rangle - e^{-i(\phi_R + \phi_T)}\,|0_c, 2_d\rangle\right),$

where the $|1_c, 1_d\rangle$ term has cancelled. Therefore the output states always have even numbers of photons in each arm.
A famous example of this is the Hong–Ou–Mandel effect, in which the input has $n = m = 1$, the output is always $|2_c, 0_d\rangle$ or $|0_c, 2_d\rangle$, i.e. the probability of output with a photon in each mode (a coincidence event) is zero. Note that this is true for all types of 50:50 beam splitter irrespective of the details of the phases, and the photons need only be indistinguishable. This contrasts with the classical result, in which equal output in both arms for equal inputs on a 50:50 beam splitter does appear for specific beam splitter phases (e.g. a symmetric beam splitter with $\phi_R = \pi/2$), and for other phases where the output goes to one arm (e.g. the dielectric beam splitter with $\phi_R = 0$) the output is always in the same arm, not random in either arm as is the case here. From the correspondence principle we might expect the quantum results to tend to the classical one in the limits of large n, but the appearance of large numbers of indistinguishable photons at the input is a non-classical state that does not correspond to a classical field pattern, which instead produces a statistical mixture of different $|n, m\rangle$ known as Poissonian light. Rigorous derivation is given in the Fearn–Loudon 1987 paper and extended in Ref to include statistical mixtures with the density matrix. Non-symmetric beam-splitter In general, for a non-symmetric beam-splitter, namely a beam-splitter for which the transmission and reflection coefficients are not equal, one can define an angle $\theta$ such that $|r| = \sin\theta$ and $|t| = \cos\theta$, where $r$ and $t$ are the reflection and transmission coefficients. The unitary operation associated with the beam-splitter is then

$\hat U = e^{\theta\left(\hat a^{\dagger}\hat b - \hat a\,\hat b^{\dagger}\right)}.$

Application for quantum computing In 2000 Knill, Laflamme and Milburn (KLM protocol) proved that it is possible to create a universal quantum computer solely with beam splitters, phase shifters, photodetectors and single photon sources. The states that form a qubit in this protocol are the one-photon states of two modes, i.e. the states |01⟩ and |10⟩ in the occupation number representation (Fock state) of two modes. Using these resources it is possible to implement any single qubit gate and 2-qubit probabilistic gates. The beam splitter is an essential component in this scheme since it is the only one that creates entanglement between the Fock states. Similar settings exist for continuous-variable quantum information processing. In fact, it is possible to simulate arbitrary Gaussian (Bogoliubov) transformations of a quantum state of light by means of beam splitters, phase shifters and photodetectors, given two-mode squeezed vacuum states are available as a prior resource only (this setting hence shares certain similarities with a Gaussian counterpart of the KLM protocol). The building block of this simulation procedure is the fact that a beam splitter is equivalent to a squeezing transformation under partial time reversal. Diffractive beam splitter Reflection beam splitters Reflection beam splitters reflect parts of the incident radiation in different directions. These partial beams show exactly the same intensity. Typically, reflection beam splitters are made of metal and have a broadband spectral characteristic. Due to their compact design, beam splitters of this type are particularly easy to install in infrared detectors. In this application, the radiation enters through the aperture opening of the detector and is split into several beams of equal intensity but different directions by internal highly reflective microstructures. Each beam hits a sensor element with an upstream optical filter.
Particularly in NDIR gas analysis, this design enables measurement with only one beam with a minimal beam cross-section, which significantly increases the interference immunity of the measurement. See also Power dividers and directional couplers References Mirrors Optical components Microscopy
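As a numerical illustration of the transfer-matrix formalism and the Hong–Ou–Mandel cancellation described above, here is a short Python sketch; the matrices follow the dielectric and symmetric 50:50 conventions quoted in the text, and the variable names are illustrative:

    import numpy as np

    # 50:50 transfer matrices in the two conventions quoted above.
    dielectric = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    symmetric = np.array([[1j, 1], [1, 1j]]) / np.sqrt(2)

    for tau in (dielectric, symmetric):
        # Lossless: the transfer matrix must be unitary.
        assert np.allclose(tau.conj().T @ tau, np.eye(2))
        r_ac, t_bc = tau[0, 0], tau[0, 1]
        t_ad, r_bd = tau[1, 0], tau[1, 1]
        # Coincidence amplitude for a |1,1> input: r_ac*r_bd + t_ad*t_bc.
        print(abs(r_ac * r_bd + t_ad * t_bc))  # prints 0.0 for both conventions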
Beam splitter
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,951
[ "Glass engineering and science", "Optical components", "Components", "Microscopy" ]
582,440
https://en.wikipedia.org/wiki/Luhn%20algorithm
The Luhn algorithm or Luhn formula, also known as the "modulus 10" or "mod 10" algorithm, named after its creator, IBM scientist Hans Peter Luhn, is a simple check digit formula used to validate a variety of identification numbers. It is described in US patent 2950048A, granted on . The algorithm is in the public domain and is in wide use today. It is specified in ISO/IEC 7812-1. It is not intended to be a cryptographically secure hash function; it was designed to protect against accidental errors, not malicious attacks. Most credit card numbers and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers. Description The check digit is computed as follows: Drop the check digit from the number (if it's already present). This leaves the payload. Start with the payload digits. Moving from right to left, double every second digit, starting from the last digit. If doubling a digit results in a value > 9, subtract 9 from it (or sum its digits). Sum all the resulting digits (including the ones that were not doubled). The check digit is calculated by $(10 - (s \bmod 10)) \bmod 10$, where s is the sum from step 3. This is the smallest number (possibly zero) that must be added to $s$ to make a multiple of 10. Other valid formulas giving the same value are $9 - ((s + 9) \bmod 10)$, $(10 - s) \bmod 10$, and $10\lceil s/10 \rceil - s$. Note that the formula $(10 - s) \bmod 10$ will not work in all environments due to differences in how negative numbers are handled by the modulo operation. Example for computing check digit Assume an example of an account number 1789372997 (just the "payload", check digit not yet included): moving from the rightmost payload digit and doubling every second digit gives 7→14→5, 9, 9→18→9, 2, 7→14→5, 3, 9→18→9, 8, 7→14→5, 1. The sum of the resulting digits is 5 + 9 + 9 + 2 + 5 + 3 + 9 + 8 + 5 + 1 = 56. The check digit is equal to $(10 - (56 \bmod 10)) \bmod 10 = 4$. This makes the full account number read 17893729974. Example for validating check digit Drop the check digit (last digit) of the number to validate. (e.g. 17893729974 → 1789372997) Calculate the check digit (see above) Compare your result with the original check digit. If both numbers match, the result is valid. Strengths and weaknesses The Luhn algorithm will detect all single-digit errors, as well as almost all transpositions of adjacent digits. It will not, however, detect transposition of the two-digit sequence 09 to 90 (or vice versa). It will detect most of the possible twin errors (it will not detect 22 ↔ 55, 33 ↔ 66 or 44 ↔ 77). Other, more complex check-digit algorithms (such as the Verhoeff algorithm and the Damm algorithm) can detect more transcription errors. The Luhn mod N algorithm is an extension that supports non-numerical strings. Because the algorithm operates on the digits in a right-to-left manner and zero digits affect the result only if they cause a shift in position, zero-padding the beginning of a string of numbers does not affect the calculation. Therefore, systems that pad to a specific number of digits (by converting 1234 to 0001234 for instance) can perform Luhn validation before or after the padding and achieve the same result. The algorithm appeared in a United States Patent for a simple, hand-held, mechanical device for computing the checksum. The device took the mod 10 sum by mechanical means. The substitution digits, that is, the results of the double and reduce procedure, were not produced mechanically. Rather, the digits were marked in their permuted order on the body of the machine. Pseudocode implementation The following function takes a card number, including the check digit, as an array of integers and outputs true if the check digit is correct, false otherwise.
function isValid(cardNumber[1..length])
    sum := 0
    parity := length mod 2
    for i from 1 to (length - 1) do
        if i mod 2 = parity then
            sum := sum + cardNumber[i]
        elseif cardNumber[i] > 4 then
            sum := sum + 2 * cardNumber[i] - 9
        else
            sum := sum + 2 * cardNumber[i]
        end if
    end for
    return cardNumber[length] == ((10 - (sum mod 10)) mod 10)
end function
Uses The Luhn algorithm is used in a variety of systems, including: Credit card numbers IMEI numbers CUSIP numbers for North American financial instruments National Provider Identifier numbers in the United States Canadian social insurance numbers Israeli ID numbers South African ID numbers South African Tax reference numbers Swedish national identification numbers Swedish Corporate Identity Numbers (OrgNr) Greek Social Security Numbers (ΑΜΚΑ) ICCID of SIM cards European patent application numbers Survey codes appearing on McDonald's, Taco Bell, and Tractor Supply Co. receipts United States Postal Service package tracking numbers use a modified Luhn algorithm Italian VAT numbers (Partita Iva) References External links Luhn test of credit card numbers on Rosetta Code: Luhn algorithm/formula implementation in 160 programming languages Modular arithmetic Checksum algorithms Error detection and correction 1954 introductions Articles with example pseudocode Management cybernetics
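For comparison with the pseudocode above, a brief runnable Python sketch of the same validation (the function name and the digit-string interface are illustrative choices):

    def luhn_valid(number: str) -> bool:
        digits = [int(c) for c in number]
        payload, check = digits[:-1], digits[-1]
        total = 0
        # Double every second digit, starting from the rightmost payload digit.
        for i, d in enumerate(reversed(payload)):
            if i % 2 == 0:
                d *= 2
                if d > 9:
                    d -= 9  # same as summing the two digits of the product
            total += d
        return check == (10 - total % 10) % 10

    print(luhn_valid("17893729974"))  # True, matching the worked example
    print(luhn_valid("17893729947"))  # False: adjacent transposition detected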
Luhn algorithm
[ "Mathematics", "Engineering" ]
1,055
[ "Reliability engineering", "Error detection and correction", "Arithmetic", "Modular arithmetic", "Number theory" ]
582,702
https://en.wikipedia.org/wiki/Quasistatic%20process
In thermodynamics, a quasi-static process, also known as a quasi-equilibrium process (from Latin quasi, meaning ‘as if’), is a thermodynamic process that happens slowly enough for the system to remain in internal physical (but not necessarily chemical) thermodynamic equilibrium. An example of this is quasi-static expansion of a mixture of hydrogen and oxygen gas, where the volume of the system changes so slowly that the pressure remains uniform throughout the system at each instant of time during the process. Such an idealized process is a succession of physical equilibrium states, characterized by infinite slowness. Only in a quasi-static thermodynamic process can we exactly define intensive quantities (such as pressure, temperature, specific volume, specific entropy) of the system at any instant during the whole process; otherwise, since no internal equilibrium is established, different parts of the system would have different values of these quantities, so a single value per quantity may not be sufficient to represent the whole system. In other words, when an equation for a change in a state function contains P or T, it implies a quasi-static process. Relation to reversible process While all reversible processes are quasi-static, most authors do not require a general quasi-static process to maintain equilibrium between system and surroundings and avoid dissipation, which are defining characteristics of a reversible process. For example, quasi-static compression of a system by a piston subject to friction is irreversible; although the system is always in internal thermal equilibrium, the friction ensures the generation of dissipative entropy, which goes against the definition of reversibility. Any engineer would remember to include friction when calculating the dissipative entropy generation. An example of a quasi-static process that is not idealizable as reversible is slow heat transfer between two bodies at two finitely different temperatures, where the heat transfer rate is controlled by a poorly conductive partition between the two bodies. In this case, no matter how slowly the process takes place, the state of the composite system consisting of the two bodies is far from equilibrium, since thermal equilibrium for this composite system requires that the two bodies be at the same temperature. Nevertheless, the entropy change for each body can be calculated using the Clausius equality for reversible heat transfer. PV-work in various quasi-static processes Constant pressure: Isobaric processes, $W_{1\to 2} = \int P\,dV = P(V_2 - V_1)$ Constant volume: Isochoric processes, $W_{1\to 2} = 0$ Constant temperature: Isothermal processes, $W_{1\to 2} = \int P\,dV = nRT\ln\frac{V_2}{V_1}$, where $P$ (pressure) varies with $V$ (volume) via $PV = nRT$, so $W_{1\to 2} = P_1 V_1 \ln\frac{V_2}{V_1}$ Polytropic processes, $W_{1\to 2} = \frac{P_1 V_1 - P_2 V_2}{n - 1}$ See also Entropy Reversible process (thermodynamics) References Thermodynamic processes Statistical mechanics
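A small Python sketch of the quasi-static work expressions listed above, assuming an ideal gas for the isothermal case (function names are illustrative):

    import math

    R = 8.314  # J/(mol*K), molar gas constant

    def work_isobaric(p, v1, v2):
        return p * (v2 - v1)

    def work_isochoric():
        return 0.0

    def work_isothermal(n, T, v1, v2):
        # P varies with V via PV = nRT, so W = nRT ln(V2/V1).
        return n * R * T * math.log(v2 / v1)

    def work_polytropic(p1, v1, v2, index):
        # PV^n = constant, with polytropic index n != 1.
        p2 = p1 * (v1 / v2) ** index
        return (p1 * v1 - p2 * v2) / (index - 1)

    # One mole of ideal gas expanding quasi-statically to twice its volume at 300 K:
    print(work_isothermal(1.0, 300.0, 1.0e-3, 2.0e-3))  # ~1729 J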
Quasistatic process
[ "Physics", "Chemistry" ]
554
[ "Thermodynamic processes", "Statistical mechanics", "Thermodynamics" ]
582,770
https://en.wikipedia.org/wiki/Particle%20number
In thermodynamics, the particle number (symbol $N$) of a thermodynamic system is the number of constituent particles in that system. The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems. A constituent particle is one that cannot be broken into smaller pieces at the scale of energy $kT$ involved in the process (where $k$ is the Boltzmann constant and $T$ is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent. Determining the particle number The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamical system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known amount of substance n expressed in moles, the particle number N can be found by the relation $N = n N_A$, where $N_A$ is the Avogadro constant. Particle number density A related intensive system parameter is the particle number density (or particle number concentration PNC), a quantity of kind volumetric number density obtained by dividing the particle number of a system by its volume, $n = N/V$. This parameter is often denoted by the lower-case letter n. In quantum mechanics In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles. In quantum field theory, the particle number operator (see Fock state) is conjugate to the phase of the classical wave (see coherent state). In air quality One measure of air pollution used in air quality standards is the atmospheric concentration of particulate matter. This measure is usually expressed in μg/m3 (micrograms per cubic metre). In the current EU emission norms for cars, vans, and trucks and in the upcoming EU emission norm for non-road mobile machinery, particle number measurements and limits are defined, commonly referred to as PN, with units [#/km] or [#/kWh]. In this case, PN expresses a quantity of particles per unit distance (or work). References Thermodynamics Dimensionless numbers of thermodynamics Countable quantities State functions
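A minimal Python sketch of the mole-to-particle-number relation and the number density defined above (function names are illustrative; the Avogadro constant is the exact 2019 SI value):

    AVOGADRO = 6.02214076e23  # mol^-1, exact in the 2019 SI

    def particle_number(n_moles):
        # N = n * N_A for a homogeneous material of known amount of substance.
        return n_moles * AVOGADRO

    def number_density(N, volume_m3):
        # Particle number density n = N / V.
        return N / volume_m3

    print(particle_number(2.0))  # ~1.204e24 particles in 2 mol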
Particle number
[ "Physics", "Chemistry", "Mathematics" ]
553
[ "State functions", "Scalar physical quantities", "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Thermodynamics", "Dimensionless quantities", "Countable quantities", "Dynamical systems" ]
583,104
https://en.wikipedia.org/wiki/Orthosie%20%28moon%29
Orthosie, also known as Jupiter XXXV, is a natural satellite of Jupiter. It was discovered by a team of astronomers from the University of Hawaii led by Scott S. Sheppard in 2001, and given the temporary designation S/2001 J 9. Orthosie is about 2 kilometres in diameter, and orbits Jupiter at an average distance of 21,075,662 km in 625.07 days, at an inclination of 146.46° to the ecliptic (143° to Jupiter's equator), in a retrograde direction and with an eccentricity of 0.3376. It was named in August 2003 after Orthosie, the Greek goddess of prosperity and one of the Horae. The Horae (Hours) were daughters of Zeus and Themis. Orthosie belongs to the Ananke group. References Ananke group Moons of Jupiter Irregular satellites Discoveries by Scott S. Sheppard Discoveries by David C. Jewitt Discoveries by Yanga R. Fernandez 20011211 Moons with a retrograde orbit
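As a plausibility check of the quoted orbital elements, a short Python sketch applying Kepler's third law, treating the quoted mean distance as the semi-major axis and using a standard value for Jupiter's gravitational parameter (both assumptions for illustration):

    import math

    GM_JUPITER = 1.26687e17   # m^3/s^2, Jupiter's gravitational parameter (assumed value)
    a = 21_075_662e3          # the quoted mean orbital distance, in metres

    period_days = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER) / 86_400
    print(period_days)        # ~625 days, consistent with the quoted 625.07 d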
Orthosie (moon)
[ "Astronomy" ]
207
[ "Astronomy stubs", "Planetary science stubs" ]
583,600
https://en.wikipedia.org/wiki/Theory%20of%20equations
In algebra, the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial. The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution. This problem was completely solved in 1830 by Évariste Galois, by introducing what is now called Galois theory. Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics, to avoid confusion between old and new meanings of "algebra". History Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown. The fact that a complex solution always exists is the fundamental theorem of algebra, which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve in terms of radicals, that is, to express the solutions by a formula which is built with the four operations of arithmetic and with nth roots. This was done up to degree four during the 16th century. Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra, in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that some fifth degree equations cannot be solved in radicals, followed by Niels Henrik Abel's complete proof in 1824 (now known as the Abel–Ruffini theorem). Évariste Galois later introduced a theory (presently called Galois theory) to decide which equations are solvable by radicals. Further problems Other classical problems of the theory of equations are the following: Linear equations: this problem was solved during antiquity. Simultaneous linear equations: The general theoretical solution was provided by Gabriel Cramer in 1750. However, devising efficient methods (algorithms) to solve these systems remains an active subject of research, now called linear algebra. Finding the integer solutions of an equation or of a system of equations. These problems are now called Diophantine equations, which are considered a part of number theory (see also integer programming). Systems of polynomial equations: Because of their difficulty, these systems, with few exceptions, have been studied only since the second part of the 19th century. They have led to the development of algebraic geometry. See also Root-finding algorithm Properties of polynomial roots Quintic function References https://www.britannica.com/science/mathematics/Theory-of-equations Further reading Uspensky, James Victor, Theory of Equations (McGraw-Hill), 1963 Dickson, Leonard E., Elementary Theory of Equations (Internet Archive), originally 1914 History of algebra Polynomials Equations
Theory of equations
[ "Mathematics" ]
688
[ "History of algebra", "Polynomials", "Mathematical objects", "Equations", "Algebra" ]
584,136
https://en.wikipedia.org/wiki/Topological%20abelian%20group
In mathematics, a topological abelian group, or TAG, is a topological group that is also an abelian group. That is, a TAG is both a group and a topological space, the group operations are continuous, and the group's binary operation is commutative. The theory of topological groups applies also to TAGs, but more can be done with TAGs. Locally compact TAGs, in particular, are used heavily in harmonic analysis. See also Protorus, a topological abelian group that is compact and connected References Fourier analysis on Groups, by Walter Rudin. Abelian group theory Topology Topological groups
Topological abelian group
[ "Physics", "Mathematics" ]
121
[ "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology", "Space", "Geometry", "Topological groups", "Spacetime" ]
584,240
https://en.wikipedia.org/wiki/Image%20intensifier
An image intensifier or image intensifier tube is a vacuum tube device for increasing the intensity of available light in an optical system to allow use under low-light conditions, such as at night, to facilitate visual imaging of low-light processes, such as fluorescence of materials in X-rays or gamma rays (X-ray image intensifier), or for conversion of non-visible light sources, such as near-infrared or short wave infrared, to visible. They operate by converting photons of light into electrons, amplifying the electrons (usually with a microchannel plate), and then converting the amplified electrons back into photons for viewing. They are used in devices such as night-vision goggles. Introduction Image intensifier tubes (IITs) are optoelectronic devices that allow many devices, such as night vision devices and medical imaging devices, to function. They convert low levels of light from various wavelengths into visible quantities of light at a single wavelength. Operation Image intensifiers convert low levels of light photons into electrons, amplify those electrons, and then convert the electrons back into photons of light. Photons from a low-light source enter an objective lens which focuses an image into a photocathode. The photocathode releases electrons via the photoelectric effect as the incoming photons hit it. The electrons are accelerated through a high-voltage potential into a microchannel plate (MCP). Each high-energy electron that strikes the MCP causes the release of many electrons from the MCP in a process called secondary cascaded emission. The MCP is made up of thousands of tiny conductive channels, tilted at an angle away from normal to encourage more electron collisions and thus enhance the emission of secondary electrons in a controlled electron avalanche. All the electrons move in a straight line due to the high-voltage difference across the plates, which preserves collimation, and where one or two electrons entered, thousands may emerge. A separate (lower) charge differential accelerates the secondary electrons from the MCP until they hit a phosphor screen at the other end of the intensifier, which releases a photon for every electron. The image on the phosphor screen is focused by an eyepiece lens. The amplification occurs at the microchannel plate stage via its secondary cascaded emission. The phosphor is usually green because the human eye is more sensitive to green than other colors and because historically the original material used to produce phosphor screens produced green light (hence the soldiers' nickname 'green TV' for image intensification devices). History The development of image intensifier tubes began during the 20th century, with continuous development since inception. Pioneering work The idea of an image tube was first proposed by G. Holst and H. De Boer in 1928, in the Netherlands, but early attempts to create one were not successful. It was not until 1934 that Holst, working for Philips, created the first successful infrared converter tube. This tube consisted of a photocathode in proximity to a fluorescent screen. Using a simple lens, an image was focused on the photocathode and a potential difference of several thousand volts was maintained across the tube, causing electrons dislodged from the photocathode by photons to strike the fluorescent screen. This caused the screen to light up with the image of the object focused onto the screen; however, the image was non-inverting.
With this image converter type tube, it was possible to view infrared light in real time, for the first time. Generation 0: early infrared electro-optical image converters Development continued in the US as well during the 1930s, and in the mid-1930s the first inverting image intensifier was developed at RCA. This tube used an electrostatic inverter to focus an image from a spherical cathode onto a spherical screen. (The choice of spheres was to reduce off-axial aberrations.) Subsequent development of this technology led directly to the first Generation 0 image intensifiers, which were used by the military during World War II to allow vision at night with infrared lighting for both shooting and personal night vision. The first military night vision device was introduced by the German army as early as 1939, having been in development since 1935. Early night vision devices based on these technologies were used by both sides in World War II. Unlike later technologies, early Generation 0 night vision devices were unable to significantly amplify the available ambient light and so, to be useful, required an infrared source. These devices used an S1 photocathode or "silver-oxygen-caesium" photocathode, discovered in 1930, which had a sensitivity of around 60 μA/lm (microamperes per lumen) and a quantum efficiency of around 1% in the ultraviolet region and around 0.5% in the infrared region. Of note, the S1 photocathode had sensitivity peaks in both the infrared and ultraviolet spectrum and, with sensitivity over 950 nm, was the only photocathode material that could be used to view infrared light above 950 nm. Solar blind converters Solar blind converters, also known as solar blind photocathodes, are specialized devices that detect ultraviolet (UV) light below 280 nanometers (nm) in wavelength. This UV range is termed "solar blind" because it is shorter than the wavelengths of sunlight that typically penetrate the Earth's atmosphere. Discovered in 1953 by Taft and Apker, solar blind photocathodes were initially developed using cesium telluride. Unlike night-vision technologies that are classified into "generations" based on their military applications, solar blind photocathodes do not fit into this categorization because their utility is not primarily military. Their ability to detect UV light in the solar blind range makes them useful for applications that require sensitivity to UV radiation without interference from visible sunlight. Generation 1: significant amplification With the discovery of more effective photocathode materials, which increased in both sensitivity and quantum efficiency, it became possible to achieve significant levels of gain over Generation 0 devices. In 1936, the S-11 cathode (cesium-antimony) was discovered by Gorlich, which provided sensitivity of approximately 80 μA/lm with a quantum efficiency of around 20%; this only included sensitivity in the visible region, with a threshold wavelength of approximately 650 nm. It was not until the development of the bialkali antimonide photocathodes (potassium-cesium-antimony and sodium-potassium-antimony) discovered by A.H. Sommer, and his later multialkali S20 photocathode (sodium-potassium-antimony-cesium), discovered in 1956 by accident, that the tubes had both suitable infrared sensitivity and visible spectrum amplification to be useful militarily. The S20 photocathode has a sensitivity of around 150 to 200 μA/lm.
The additional sensitivity made these tubes usable with limited light, such as moonlight, while still being suitable for use with low-level infrared illumination. Cascade (passive) image intensifier tubes Although originally experimented with by the Germans in World War Two, it was not until the 1950s that the U.S. began conducting early experiments using multiple tubes in a "cascade", by coupling the output of an inverting tube to the input of another tube, which allowed for increased amplification of the object light being viewed. These experiments worked far better than expected and night vision devices based on these tubes were able to pick up faint starlight and produce a usable image. However, the size of these tubes, at 17 in (43 cm) long and 3.5 in (8.9 cm) in diameter, was too large to be suitable for military use. Known as "cascade" tubes, they provided the capability to produce the first truly passive night vision scopes. With the advent of fiber optic bundles in the 1960s, it was possible to connect smaller tubes together, which allowed for the first true Starlight scopes to be developed in 1964. Many of these tubes were used in the AN/PVS-2 rifle scope, which saw use in Vietnam. An alternative to the cascade tube explored in the mid 20th century involves optical feedback, with the output of the tube fed back into the input. This scheme has not been used in rifle scopes, but it has been used successfully in lab applications where larger image intensifier assemblies are acceptable. Generation 2: micro-channel plate Second generation image intensifiers use the same multialkali photocathode that the first generation tubes used; however, by using thicker layers of the same materials, the S25 photocathode was developed, which provides extended red response and reduced blue response, making it more suitable for military applications. It has a typical sensitivity of around 230 μA/lm and a higher quantum efficiency than S20 photocathode material. Oxidation of the cesium to cesium oxide in later versions improved the sensitivity in a similar way to third generation photocathodes. The same technology that produced the fiber optic bundles that allowed the creation of cascade tubes allowed, with a slight change in manufacturing, the production of micro-channel plates, or MCPs. The micro-channel plate is a thin glass wafer with a Nichrome electrode on either side, across which a large potential difference of up to 1,000 volts is applied. The wafer is manufactured from many thousands of individual hollow glass fibers, aligned at a "bias" angle to the axis of the tube. The micro-channel plate fits between the photocathode and screen. Electrons that strike the side of the "micro-channel" as they pass through it elicit secondary electrons, which in turn elicit additional electrons as they too strike the walls, amplifying the signal. By using the MCP with a proximity focused tube, amplifications of up to 30,000 times with a single MCP layer were possible. By increasing the number of layers of MCP, additional amplification to well over 1,000,000 times could be achieved. Inversion of Generation 2 devices was achieved in one of two different ways. The Inverter tube uses electrostatic inversion, in the same manner as the first generation tubes did, with an MCP included. Proximity focused second generation tubes could also be inverted by using a fiber bundle with a 180 degree twist in it.
Generation 3: high sensitivity and improved frequency response While the third generation of tubes were fundamentally the same as the second generation, they possessed two significant differences. Firstly, they used a GaAs-CsO-AlGaAs photocathode, which is more sensitive in the 800 nm-900 nm range than second-generation photocathodes. Secondly, the photocathode exhibits negative electron affinity (NEA), which provides photoelectrons that are excited to the conduction band a free ride to the vacuum band, as the cesium oxide layer at the edge of the photocathode causes sufficient band-bending. This makes the photocathode very efficient at creating photoelectrons from photons. The Achilles heel of third generation photocathodes, however, is that they are seriously degraded by positive ion poisoning. Due to the high electrostatic field stresses in the tube and the operation of the microchannel plate, this led to the failure of the photocathode within a short period, as little as 100 hours, before photocathode sensitivity dropped below Gen2 levels. To protect the photocathode from positive ions and gases produced by the MCP, manufacturers introduced a thin film of sintered aluminium oxide attached to the MCP. The high sensitivity of this photocathode, greater than 900 μA/lm, allows more effective low light response, though this was offset by the thin film, which typically blocked up to 50% of electrons. Super second generation Although not formally recognized under the U.S. generation categories, Super Second Generation or SuperGen was developed in 1989 by Jacques Dupuy and Gerald Wolzak. This technology improved the tri-alkali photocathodes to more than double their sensitivity while also improving the microchannel plate by increasing the open-area ratio to 70% while reducing the noise level. This allowed second generation tubes, which are more economical to manufacture, to achieve comparable results to third generation image intensifier tubes. With sensitivities of the photocathodes approaching 700 μA/lm and extended frequency response to 950 nm, this technology continued to be developed outside of the U.S., notably by Photonis, and now forms the basis for most non-US manufactured high-end night vision equipment. Generation 4 In 1998, the US company Litton developed the filmless image tube. These tubes were originally made for the Omni V contract and resulted in significant interest by the US military. However, the tubes suffered greatly from fragility during testing and, by 2002, the NVESD revoked the fourth generation designation for filmless tubes, at which time they simply became known as Gen III Filmless. These tubes are still produced for specialist uses, such as aviation and special operations; however, they are not used for weapon-mounted purposes. To overcome the ion-poisoning problems, manufacturers improved scrubbing techniques during manufacture of the MCP (the primary source of positive ions in a wafer tube) and implemented autogating, discovering that a sufficient period of autogating would cause positive ions to be ejected from the photocathode before they could cause photocathode poisoning. Generation III Filmless technology is still in production and use today, but officially, there is no Generation 4 of image intensifiers. Generation 3 thin film Also known as Generation 3 Omni VII and Generation 3+, following the issues experienced with generation IV technology, Thin Film technology became the standard for current image intensifier technology.
In Thin Film image intensifiers, the thickness of the film is reduced from around 30 angstroms (standard) to around 10 angstroms, and the photocathode voltage is lowered. This causes fewer electrons to be stopped than with third generation tubes, while providing the benefits of a filmed tube. Generation 3 Thin Film technology is presently the standard for most image intensifiers used by the US military. 4G In 2014, French image tube manufacturer PHOTONIS released the first global, open, performance specification: "4G". The specification had four main requirements that an image intensifier tube would have to meet: Spectral sensitivity from below 400 nm to above 1000 nm A minimum figure-of-merit of FOM1800 High light resolution higher than 57 lp/mm Halo size of less than 0.7 mm Terminology There are several common terms used for image intensifier tubes. Gating Electronic gating (or 'gating') is a means by which an image intensifier tube may be switched ON and OFF in a controlled manner. An electronically gated image intensifier tube functions like a camera shutter, allowing images to pass through when the electronic "gate" is enabled. The gating durations can be very short (nanoseconds or even picoseconds). This makes gated image intensifier tubes ideal candidates for use in research environments where very short duration events must be photographed. As an example, in order to assist engineers in designing more efficient combustion chambers, gated imaging tubes have been used to record very fast events such as the wavefront of burning fuel in an internal combustion engine. Often gating is used to synchronize imaging tubes to events whose start cannot be controlled or predicted. In such an instance, the gating operation may be synchronized to the start of an event using 'gating electronics', e.g. high-speed digital delay generators. The gating electronics allow a user to specify when the tube will turn on and off relative to the start of an event. There are many examples of the uses of gated imaging tubes. Because of the combination of the very high speeds at which a gated tube may operate and their light amplification capability, gated tubes can record specific portions of a beam of light. By controlling the gating parameters, it is possible to capture only the portion of light reflected from a target when a pulsed beam of light is fired at it. Gated-Pulsed-Active Night Vision (GPANV) devices are another example of an application that uses this technique. GPANV devices can allow a user to see objects of interest that are obscured behind vegetation, foliage, and/or mist. These devices are also useful for locating objects in deep water, where reflections of light off of nearby particles from a continuous light source, such as a high brightness underwater floodlight, would otherwise obscure the image. ATG (auto-gating) Auto-gating is a feature found in many image intensifier tubes manufactured for military purposes after 2006, though it has been around for some time. Autogated tubes gate the image intensifier internally so as to control the amount of light that gets through to the microchannel plate. The gating occurs at high frequency, and by varying the duty cycle to maintain a constant current draw from the microchannel plate, it is possible to operate the tube during brighter conditions, such as daylight, without damaging the tube or leading to premature failure.
Auto-gating of image intensifiers is militarily valuable as it allows extended operational hours, giving enhanced vision during twilight hours while providing better support for soldiers who encounter rapidly changing lighting conditions, such as those assaulting a building. Sensitivity The sensitivity of an image intensifier tube is measured in microamperes per lumen (μA/lm). It defines how many electrons are produced per quantity of light that falls on the photocathode. This measurement should be made at a specific color temperature, such as "at a colour temperature of 2854 K". The color temperature at which this test is made tends to vary slightly between manufacturers. Additional measurements at specific wavelengths are usually also specified, especially for Gen2 devices, such as at 800 nm and 850 nm (infrared). Typically, the higher the value, the more sensitive the tube is to light. Resolution More accurately known as limiting resolution, tube resolution is measured in line pairs per millimeter, or lp/mm. This is a measure of how many lines of varying intensity (light to dark) can be resolved within a millimeter of screen area. The limiting resolution is itself derived from the modulation transfer function. For most tubes, the limiting resolution is defined as the point at which the modulation transfer function becomes three percent or less. The higher the value, the higher the resolution of the tube. An important consideration, however, is that this is based on the physical screen size in millimeters and is not proportional to the screen size. As such, an 18 mm tube with a resolution of around 64 lp/mm has a higher overall resolution than an 8 mm tube with 72 lp/mm resolution. Resolution is usually measured at the centre and at the edge of the screen, and tubes often come with figures for both. Military Specification or milspec tubes only come with a criterion such as "> 64 lp/mm" or "Greater than 64 line pairs/millimeter". Gain The gain of a tube is typically measured using one of two units. The most common (SI) unit is cd·m−2·lx−1, i.e. candelas per meter squared per lux. The older convention is Fl/Fc (foot-lamberts per foot-candle). This creates issues with comparative gain measurements since neither is a pure ratio, although both are measured as a value of output intensity over input intensity. This creates ambiguity in the marketing of night vision devices, as the difference between the two measurements is effectively pi, or approximately 3.142x. This means that a gain of 10,000 cd/m2/lx is the same as 31,416 Fl/Fc. MTBF (mean time between failure) This value, expressed in hours, gives an idea of how long a tube typically should last. It is a reasonably common comparison point; however, it takes many factors into account. The first is that tubes are constantly degrading. This means that over time, the tube will slowly produce less gain than it did when it was new. When the tube gain reaches 50% of its "new" gain level, the tube is considered to have failed, so primarily this figure reflects the time to reach that point in a tube's life. Additional considerations for the tube lifespan are the environment that the tube is being used in and the general level of illumination present in that environment, including bright moonlight and exposure to both artificial lighting and use during dusk/dawn periods, as exposure to brighter light reduces a tube's life significantly. Also, an MTBF only includes operational hours.
It is considered that turning a tube on or off does not contribute to reducing overall lifespan, so many civilians tend to turn their night vision equipment on only when they need it, to make the most of the tube's life. Military users tend to keep equipment on for longer periods of time, typically for the entire time it is in use, with batteries, not tube life, being the primary concern. Typical examples of tube life are: First Generation: 1,000 hrs; Second Generation: 2,000 to 2,500 hrs; Third Generation: 10,000 to 15,000 hrs. Many recent high-end second-generation tubes now have MTBFs approaching 15,000 operational hours. MTF (modulation transfer function) The modulation transfer function of an image intensifier is a measure of the output amplitude of dark and light lines on the display for a given level of input from lines presented to the photocathode at different resolutions. It is usually given as a percentage at a given frequency (spacing) of light and dark lines. For example, if you look at white and black lines with an MTF of 99% at 2 lp/mm, then the output of the dark and light lines is going to be 99% as dark or light as looking at a black image or a white image. This value also decreases as resolution increases. On the same tube, if the MTF at 16 and 32 lp/mm were 50% and 3%, then at 16 lp/mm the signal would be only half as bright/dark as the lines were at 2 lp/mm, and at 32 lp/mm the image of the lines would be only three percent as bright/dark as the lines were at 2 lp/mm. Additionally, since the limiting resolution is usually defined as the point at which the MTF is three percent or less, this would also be the maximum resolution of the tube. The MTF is affected by every part of an image intensifier tube's operation, and on a complete system it is also affected by the quality of the optics involved. Factors that affect the MTF include the transitions through any fiber plate or glass at the screen and the photocathode, and also through the tube and the microchannel plate itself. The higher the MTF at a given resolution, the better. See also References External links Historical information on IIT development and inception Discovery of other photocathode materials Several references are made to historical data noted in "Image Tubes" by Illes P Csorba Selected Papers on Image tubes Make Time for the Stars by Antony Cooke Optical devices Vacuum tubes
Image intensifier
[ "Physics", "Materials_science", "Engineering" ]
4,832
[ "Glass engineering and science", "Optical devices", "Vacuum tubes", "Vacuum", "Matter" ]
584,911
https://en.wikipedia.org/wiki/External%20ballistics
External ballistics or exterior ballistics is the part of ballistics that deals with the behavior of a projectile in flight. The projectile may be powered or un-powered, guided or unguided, spin or fin stabilized, flying through an atmosphere or in the vacuum of space, but most certainly flying under the influence of a gravitational field. Gun-launched projectiles may be unpowered, deriving all their velocity from the propellant's ignition until the projectile exits the gun barrel. However, exterior ballistics analysis also deals with the trajectories of rocket-assisted gun-launched projectiles and gun-launched rockets; and rockets that acquire all their trajectory velocity from the interior ballistics of their on-board propulsion system, either a rocket motor or air-breathing engine, both during their boost phase and after motor burnout. External ballistics is also concerned with the free-flight of other projectiles, such as balls, arrows etc. Forces acting on the projectile When in flight, the main or major forces acting on the projectile are gravity, drag, and if present, wind; if in powered flight, thrust; and if guided, the forces imparted by the control surfaces. In small arms external ballistics applications, gravity imparts a downward acceleration on the projectile, causing it to drop from the line-of-sight. Drag, or the air resistance, decelerates the projectile with a force proportional to the square of the velocity. Wind makes the projectile deviate from its trajectory. During flight, gravity, drag, and wind have a major impact on the path of the projectile, and must be accounted for when predicting how the projectile will travel. For medium to longer ranges and flight times, besides gravity, air resistance and wind, several intermediate or meso variables described in the external factors paragraph have to be taken into account for small arms. Meso variables can become significant for firearms users that have to deal with angled shot scenarios or extended ranges, but are seldom relevant at common hunting and target shooting distances. For long to very long small arms target ranges and flight times, minor effects and forces such as the ones described in the long range factors paragraph become important and have to be taken into account. The practical effects of these minor variables are generally irrelevant for most firearms users, since normal group scatter at short and medium ranges prevails over the influence these effects exert on projectile trajectories. At extremely long ranges, artillery must fire projectiles along trajectories that are not even approximately straight; they are closer to parabolic, although air resistance affects this. Extreme long range projectiles are subject to significant deflections, depending on circumstances, from the line toward the target; and all external factors and long range factors must be taken into account when aiming. In very large-calibre artillery cases, like the Paris Gun, very subtle effects that are not covered in this article can further refine aiming solutions. In the case of ballistic missiles, the altitudes involved have a significant effect as well, with part of the flight taking place in a near-vacuum well above a rotating Earth, steadily moving the target from where it was at launch time. 
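The interplay of gravity and velocity-squared drag described above can be made concrete with a small numerical sketch. The following Python snippet is illustrative only and is not taken from any ballistics package; the lumped drag constant k and the muzzle conditions are invented example values, and real solvers use measured, Mach-dependent drag functions rather than a fixed k.

```python
import math

g = 9.81     # gravitational acceleration, m/s^2
k = 0.0009   # assumed lumped drag constant per metre (stands in for rho*Cd*A/(2*m))
dt = 0.001   # integration time step, s

def trajectory(v0, elev_deg, t_max=2.0):
    """Integrate a point-mass trajectory with gravity and quadratic drag."""
    vx = v0 * math.cos(math.radians(elev_deg))
    vy = v0 * math.sin(math.radians(elev_deg))
    x = y = t = 0.0
    while t < t_max:
        v = math.hypot(vx, vy)          # current speed
        vx -= k * v * vx * dt           # drag acts against the velocity vector
        vy -= (k * v * vy + g) * dt     # gravity pulls straight down
        x += vx * dt
        y += vy * dt
        t += dt
    return x, y

x, y = trajectory(v0=830.0, elev_deg=0.1)
print(f"after 2.0 s: x = {x:.0f} m, y = {y:.2f} m")
```

A production solver would replace the constant k with a drag function of Mach number, as discussed in the drag resistance sections below.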
Stabilizing non-spherical projectiles during flight Two methods can be employed to stabilize non-spherical projectiles during flight: Projectiles like arrows or arrow-like sabots such as the M829 Armor-Piercing, Fin-Stabilized, Discarding Sabot (APFSDS) achieve stability by forcing their center of pressure (CP) behind their center of mass (CM) with tail surfaces. The CP behind the CM condition yields stable projectile flight, meaning the projectile will not overturn during flight through the atmosphere due to aerodynamic forces. Projectiles like small arms bullets and artillery shells must deal with their CP being in front of their CM, which destabilizes these projectiles during flight. To stabilize such projectiles the projectile is spun around its longitudinal (leading-to-trailing) axis. The spinning mass creates gyroscopic forces that keep the bullet's length axis resistant to the destabilizing overturning torque of the CP being in front of the CM. Main effects in external ballistics Projectile/bullet drop and projectile path The effect of gravity on a projectile in flight is often referred to as projectile drop or bullet drop. It is important to understand the effect of gravity when zeroing the sighting components of a gun. To plan for projectile drop and compensate properly, one must understand parabolically shaped trajectories. Projectile/bullet drop In order for a projectile to impact any distant target, the barrel must be inclined to a positive elevation angle relative to the target. This is due to the fact that the projectile will begin to respond to the effects of gravity the instant it is free from the mechanical constraints of the bore. The imaginary line down the center axis of the bore and out to infinity is called the line of departure and is the line on which the projectile leaves the barrel. Due to the effects of gravity a projectile can never impact a target higher than the line of departure. When a positively inclined projectile travels downrange, it arcs below the line of departure as it is being deflected off its initial path by gravity. Projectile/bullet drop is defined as the vertical distance of the projectile below the line of departure from the bore. Even when the line of departure is tilted upward or downward, projectile drop is still defined as the distance between the bullet and the line of departure at any point along the trajectory. Projectile drop does not describe the actual trajectory of the projectile. Knowledge of projectile drop, however, is useful when conducting a direct comparison of two different projectiles regarding the shape of their trajectories, comparing the effects of variables such as velocity and drag behavior. Projectile/bullet path To hit a distant target, an appropriate positive elevation angle is required, which is achieved by angling the line of sight from the shooter's eye through the centerline of the sighting system downward toward the line of departure. This can be accomplished by simply adjusting the sights down mechanically, or by securing the entire sighting system to a sloped mounting having a known downward slope, or by a combination of both. This procedure has the effect of elevating the muzzle when the barrel must be subsequently raised to align the sights with the target. A projectile leaving a muzzle at a given elevation angle follows a ballistic trajectory whose characteristics are dependent upon various factors such as muzzle velocity, gravity, and aerodynamic drag.
This ballistic trajectory is referred to as the bullet path. If the projectile is spin stabilized, aerodynamic forces will also predictably arc the trajectory slightly to the right, if the rifling employs "right-hand twist." Some barrels are cut with left-hand twist, and the bullet will arc to the left, as a result. Therefore, to compensate for this path deviation, the sights also have to be adjusted left or right, respectively. A constant wind also predictably affects the bullet path, pushing it slightly left or right, and a little bit more up and down, depending on the wind direction. The magnitude of these deviations is also affected by whether the bullet is on the upward or downward slope of the trajectory, due to a phenomenon called "yaw of repose," where a spinning bullet tends to steadily and predictably align slightly off center from its point mass trajectory. Nevertheless, each of these trajectory perturbations is predictable once the projectile aerodynamic coefficients are established, through a combination of detailed analytical modeling and test range measurements. Projectile/bullet path analysis is of great use to shooters because it allows them to establish ballistic tables that will predict how much vertical elevation and horizontal deflection corrections must be applied to the sight line for shots at various known distances. The most detailed ballistic tables are developed for long range artillery and are based on six-degree-of-freedom trajectory analysis, which accounts for aerodynamic behavior along the three axial directions (elevation, range, and deflection) and the three rotational directions (pitch, yaw, and spin). For small arms applications, trajectory modeling can often be simplified to calculations involving only four of these degrees of freedom, lumping the effects of pitch, yaw and spin into the effect of a yaw-of-repose to account for trajectory deflection. Once detailed range tables are established, shooters can relatively quickly adjust sights based on the range to target, wind, air temperature and humidity, and other geometric considerations, such as terrain elevation differences. Projectile path values are determined by both the sight height, or the distance of the line of sight above the bore centerline, and the range at which the sights are zeroed, which in turn determines the elevation angle. A projectile following a ballistic trajectory has both forward and vertical motion. Forward motion is slowed due to air resistance, and in point mass modeling the vertical motion is dependent on a combination of the elevation angle and gravity. Initially, the projectile is rising with respect to the line of sight or the horizontal sighting plane. The projectile eventually reaches its apex (highest point in the trajectory parabola), where the vertical speed component decays to zero under the effect of gravity, and then begins to descend, eventually impacting the earth. The farther the distance to the intended target, the greater the elevation angle and the higher the apex. The projectile path crosses the horizontal sighting plane two times. The point closest to the gun occurs while the bullet is climbing through the line of sight and is called the near zero. The second point occurs as the projectile is descending through the line of sight. It is called the far zero and defines the current sight-in distance for the gun. Projectile path is described numerically as distances above or below the horizontal sighting plane at various points along the trajectory.
This is in contrast to projectile drop, which is referenced to the plane containing the line of departure regardless of the elevation angle. Since each of these two parameters uses a different reference datum, significant confusion can result, because even though a projectile is tracking well below the line of departure it can still be gaining actual and significant height with respect to the line of sight, as well as the surface of the Earth in the case of a horizontal or near horizontal shot taken over flat terrain. Maximum point-blank range and battle zero Knowledge of the projectile drop and path has some practical uses to shooters even if it does not describe the actual trajectory of the projectile. For example, if the vertical projectile position over a certain range reach is within the vertical height of the target area the shooter wants to hit, the point of aim does not necessarily need to be adjusted over that range; the projectile is considered to have a sufficiently flat point-blank range trajectory for that particular target. Also known as "battle zero", maximum point-blank range is also of importance to the military. Soldiers are instructed to fire at any target within this range by simply placing their weapon's sights on the center of mass of the enemy target. Any errors in range estimation are tactically irrelevant, as a well-aimed shot will hit the torso of the enemy soldier. The current trend for elevated sights and higher-velocity cartridges in assault rifles is in part due to a desire to extend the maximum point-blank range, which makes the rifle easier to use. Drag resistance Mathematical models, such as computational fluid dynamics, are used for calculating the effects of drag or air resistance; they are quite complex and not yet completely reliable, but research is ongoing. The most reliable method, therefore, of establishing the necessary projectile aerodynamic properties to properly describe flight trajectories is by empirical measurement. Fixed drag curve models generated for standard-shaped projectiles Use of ballistics tables or ballistics software based on the Mayevski/Siacci method and G1 drag model, introduced in 1881, is the most common method used to work with external ballistics. Projectiles are described by a ballistic coefficient, or BC, which combines the air resistance of the bullet shape (the drag coefficient) and its sectional density (a function of mass and bullet diameter). The deceleration due to drag that a projectile with mass m, velocity v, and diameter d will experience is proportional to 1/BC, 1/m, v² and d². The BC gives the ratio of ballistic efficiency compared to the standard G1 projectile, which is a fictitious projectile with a flat base, a length of 3.28 calibers/diameters, and a 2 calibers/diameters radius tangential curve for the point. The G1 standard projectile originates from the "C" standard reference projectile defined by the German steel, ammunition and armaments manufacturer Krupp in 1881. The G1 model standard projectile has a BC of 1. The French Gâvre Commission decided to use this projectile as their first reference projectile, giving the G1 name. Sporting bullets, with a calibre d ranging from 0.177 to 0.50 inches (4.50 to 12.7 mm), have G1 BC's in the range 0.12 to slightly over 1.00, with 1.00 being the most aerodynamic, and 0.12 being the least.
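As a small illustration of how the ballistic coefficient combines shape and sectional density, the conventional relation BC = SD / i can be sketched in code, where SD is the sectional density in lb/in² and i is the form factor relative to the reference shape (introduced formally further below). This is a hedged sketch; the bullet figures are example values, not data from this article.

```python
GRAINS_PER_POUND = 7000.0

def sectional_density(mass_grains, diameter_in):
    """Sectional density in lb/in^2: bullet mass over the square of the diameter."""
    return (mass_grains / GRAINS_PER_POUND) / diameter_in ** 2

def g1_bc(mass_grains, diameter_in, form_factor_i):
    """G1 ballistic coefficient as SD / i; i < 1 means less drag than the G1 shape."""
    return sectional_density(mass_grains, diameter_in) / form_factor_i

# Example: a 168 gr, .308 in bullet with an assumed G1 form factor of 0.55.
print(round(g1_bc(168, 0.308, 0.55), 3))   # ~0.46
```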
Very-low-drag bullets with BC's ≥ 1.10 can be designed and produced on CNC precision lathes out of mono-metal rods, but they often have to be fired from custom made full bore rifles with special barrels. Sectional density is a very important aspect of a projectile or bullet, and for a round projectile like a bullet it is the ratio of bullet mass to frontal surface area (half the bullet diameter squared, times pi). Since, for a given bullet shape, frontal surface increases as the square of the calibre, and mass increases as the cube of the diameter, sectional density grows linearly with bore diameter. Since BC combines shape and sectional density, a half scale model of the G1 projectile will have a BC of 0.5, and a quarter scale model will have a BC of 0.25. Since different projectile shapes will respond differently to changes in velocity (particularly between supersonic and subsonic velocities), a BC provided by a bullet manufacturer will be an average BC that represents the common range of velocities for that bullet. For rifle bullets, this will probably be a supersonic velocity; for pistol bullets it will probably be subsonic. For projectiles that travel through the supersonic, transonic and subsonic flight regimes BC is not well approximated by a single constant, but is considered to be a function BC(M) of the Mach number M; here M equals the projectile velocity divided by the speed of sound. During the flight of the projectile M will decrease, and therefore (in most cases) the BC will also decrease. Most ballistic tables or software takes for granted that one specific drag function correctly describes the drag and hence the flight characteristics of a bullet related to its ballistic coefficient. Those models do not differentiate between wadcutter, flat-based, spitzer, boat-tail, very-low-drag, etc. bullet types or shapes. They assume one invariable drag function as indicated by the published BC. Several drag curve models optimized for several standard projectile shapes are however available. The resulting fixed drag curve models for several standard projectile shapes or types are referred to as the: G1 or Ingalls (flatbase with 2 caliber (blunt) nose ogive - by far the most popular), G2 (Aberdeen J projectile), G5 (short 7.5° boat-tail, 6.19 calibers long tangent ogive), G6 (flatbase, 6 calibers long secant ogive), G7 (long 7.5° boat-tail, 10 calibers tangent ogive, preferred by some manufacturers for very-low-drag bullets), G8 (flatbase, 10 calibers long secant ogive) and GL (blunt lead nose). How different speed regimes affect .338 calibre rifle bullets can be seen in the .338 Lapua Magnum product brochure, which states Doppler radar-established G1 BC data. The reason for publishing data like in this brochure is that the Siacci/Mayevski G1 model cannot be tuned for the drag behavior of a specific projectile whose shape significantly deviates from the used reference projectile shape. Some ballistic software designers, who based their programs on the Siacci/Mayevski G1 model, give the user the possibility to enter several different G1 BC constants for different speed regimes to calculate ballistic predictions that more closely match a bullet's flight behavior at longer ranges compared to calculations that use only one BC constant. The above example illustrates the central problem fixed drag curve models have.
These models will only yield satisfactorily accurate predictions as long as the projectile of interest has the same shape as the reference projectile or a shape that closely resembles the reference projectile. Any deviation from the reference projectile shape will result in less accurate predictions. How much a projectile deviates from the applied reference projectile is mathematically expressed by the form factor (i). The form factor can be used to compare the drag experienced by a projectile of interest to the drag experienced by the employed reference projectile at a given velocity (range). The problem that the actual drag curve of a projectile can significantly deviate from the fixed drag curve of any employed reference projectile systematically limits the traditional drag resistance modeling approach. Its relative simplicity, however, means that it can be explained to and understood by the general shooting public, and hence it is also popular amongst ballistic software prediction developers and bullet manufacturers that want to market their products. More advanced drag models Pejsa model Another attempt at building a ballistic calculator is the model presented in 1980 by Dr. Arthur J. Pejsa. Dr. Pejsa claims on his website that his method was consistently capable of predicting (supersonic) rifle bullet trajectories within 2.5 mm (0.1 in) and bullet velocities within 0.3 m/s (1 ft/s) out to 914 m (1,000 yd) in theory. The Pejsa model is a closed-form solution. The Pejsa model can predict a projectile within a given flight regime (for example the supersonic flight regime) with only two velocity measurements, a distance between said velocity measurements, and a slope or deceleration constant factor. The model allows the drag curve to change slopes (true/calibrate) or curvature at three different points. Down range velocity measurement data can be provided around key inflection points, allowing for more accurate calculations of the projectile retardation rate, very similar to a Mach vs CD table. The Pejsa model allows the slope factor to be tuned to account for subtle differences in the retardation rate of different bullet shapes and sizes. It ranges from 0.1 (flat-nose bullets) to 0.9 (very-low-drag bullets). If this slope or deceleration constant factor is unknown, a default value of 0.5 is used. With the help of test firing measurements the slope constant for a particular bullet/rifle system/shooter combination can be determined. These test firings should preferably be executed at 60% and, for extreme long range ballistic predictions, also at 80% to 90% of the supersonic range of the projectiles of interest, staying away from erratic transonic effects. With this the Pejsa model can easily be tuned. A practical downside of the Pejsa model is that accurate projectile-specific down range velocity measurements to provide these better predictions cannot be easily performed by the vast majority of shooting enthusiasts. An average retardation coefficient can be calculated for any given slope constant factor if velocity data points are known and the distance between said velocity measurements is known. Obviously this is true only within the same flight regime. By velocity, actual speed is meant, as velocity is a vector quantity and speed is the magnitude of the velocity vector. Because the power function does not have constant curvature, a simple chord average cannot be used. The Pejsa model uses a weighted average retardation coefficient weighted at 0.25 range. The closer velocity is more heavily weighted.
The retardation coefficient is measured in feet whereas range is measured in yards, hence 0.25 × 3.0 = 0.75; in some places 0.8 rather than 0.75 is used. The 0.8 comes from rounding in order to allow easy entry on hand calculators. Since the Pejsa model does not use a simple chord weighted average, two velocity measurements are used to find the chord average retardation coefficient at midrange between the two velocity measurement points, limiting it to short range accuracy. In order to find the starting retardation coefficient Dr. Pejsa provides two separate equations in his two books. The first involves the power function. The second equation is identical to the one used to find the weighted average at R/4: add N × (R/2), where R is the range in feet, to the chord average retardation coefficient at midrange, where N is the slope constant factor. After the starting retardation coefficient is found, the opposite procedure is used in order to find the weighted average at R/4: the starting retardation coefficient minus N × (R/4). In other words, N is used as the slope of the chord line. Dr. Pejsa states that he expanded his drop formula in a power series in order to prove that the weighted average retardation coefficient at R/4 was a good approximation. For this Dr. Pejsa compared the power series expansion of his drop formula to some other unnamed drop formula's power expansion to reach his conclusions. The fourth term in both power series matched when the retardation coefficient at 0.25 range was used in Pejsa's drop formula. The fourth term was also the first term to use N. The higher terms involving N were insignificant and disappeared at N = 0.36, which according to Dr. Pejsa was a lucky coincidence making for an exceedingly accurate linear approximation, especially for N's around 0.36. If a retardation coefficient function is used, exact average values for any N can be obtained, because from calculus it is trivial to find the average of any integrable function. Dr. Pejsa states that the retardation coefficient can be modeled by C × V^N, where C is a fitting coefficient which disappears during the derivation of the drop formula and N the slope constant factor. The retardation coefficient equals the velocity squared divided by the retardation rate A. Using an average retardation coefficient allows the Pejsa model to be a closed-form expression within a given flight regime. In order to allow the use of a G1 ballistic coefficient rather than velocity data, Dr. Pejsa provided two reference drag curves. The first reference drag curve is based purely on the Siacci/Mayevski retardation rate function. The second reference drag curve is adjusted to equal the Siacci/Mayevski retardation rate function at a projectile velocity of 2600 fps (792.5 m/s) using a .30-06 Springfield Cartridge, Ball, Caliber .30 M2 rifle spitzer bullet with a slope or deceleration constant factor of 0.5 in the supersonic flight regime. In other flight regimes the second Pejsa reference drag curve model uses slope constant factors of 0.0 or −4.0. These deceleration constant factors can be verified by backing out Pejsa's formulas (the drag curve segments fit the form V^(2−N)/C and the retardation coefficient curve segments fit the form V² / (V^(2−N)/C) = C × V^N, where C is a fitting coefficient).
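A minimal sketch of the two-velocity procedure described above, assuming Pejsa's retardation coefficient F (in feet) is defined through dV/dx = −V/F: treating F as constant over the measured interval gives V1 = V0·e^(−R/F), so an average F can be recovered as F = R / ln(V0/V1), after which the C × V^N power law can scale it to other velocities. All numbers below are invented examples, not Pejsa's data.

```python
import math

def average_retardation_coefficient(v0_fps, v1_fps, range_ft):
    """Average F (feet) between two measured velocities a known distance apart."""
    return range_ft / math.log(v0_fps / v1_fps)

def scaled_retardation(f_ref, v_ref_fps, v_fps, n):
    """Scale F with velocity using the power-law form F = C * V**N."""
    return f_ref * (v_fps / v_ref_fps) ** n

f_avg = average_retardation_coefficient(2700.0, 2300.0, 600 * 3)  # 600 yd in feet
print(round(f_avg))                                    # ~11200 ft over the interval
print(round(scaled_retardation(f_avg, 2500.0, 2000.0, 0.5)))   # assumed N = 0.5
```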
The empirical test data Pejsa used to determine the exact shape of his chosen reference drag curve and pre-defined mathematical function that returns the retardation coefficient at a given Mach number was provided by the US military for the Cartridge, Ball, Caliber .30 M2 bullet. The calculation of the retardation coefficient function also involves air density, which Pejsa did not mention explicitly. The Siacci/Mayevski G1 model uses the following deceleration parametrization (60 °F, 30 inHg and 67% humidity, air density ρ = 1.2209 kg/m³). Dr. Pejsa suggests using the second drag curve because the Siacci/Mayevski G1 drag curve does not provide a good fit for modern spitzer bullets. To obtain relevant retardation coefficients for optimal long range modeling, Dr. Pejsa suggested using accurate projectile-specific down range velocity measurement data for a particular projectile to empirically derive the average retardation coefficient rather than using a reference drag curve derived average retardation coefficient. Further he suggested using ammunition with reduced propellant loads to empirically test actual projectile flight behavior at lower velocities. When working with reduced propellant loads, utmost care must be taken to avoid dangerous or catastrophic conditions (detonations) which can occur when firing experimental loads in firearms. Manges model Although not as well known as the Pejsa model, an additional alternative ballistic model was presented in 1989 by Colonel Duff Manges (U.S. Army, Retired) at the American Defense Preparedness Association (ADPA) 11th International Ballistic Symposium held at the Brussels Congress Center, Brussels, Belgium, May 9–11, 1989. A paper titled "Closed Form Trajectory Solutions for Direct Fire Weapons Systems" appears in the proceedings, Volume 1, Propulsion Dynamics, Launch Dynamics, Flight Dynamics, pages 665–674. Originally conceived to model projectile drag for 120 mm tank gun ammunition, the novel drag coefficient formula has been applied subsequently to ballistic trajectories of center-fired rifle ammunition with results comparable to those claimed for the Pejsa model. The Manges model uses a first principles theoretical approach that eschews "G" curves and "ballistic coefficients" based on the standard G1 and other similarity curves. The theoretical description has three main parts. The first is to develop and solve a formulation of the two dimensional differential equations of motion governing flat trajectories of point mass projectiles by defining mathematically a set of quadratures that permit closed form solutions for the trajectory differential equations of motion. The second is a sequence of successive approximation drag coefficient functions that converges rapidly to actual observed drag data. The vacuum trajectory, simplified aerodynamic, d'Antonio, and Euler drag law models are special cases. The Manges drag law thereby provides a unifying influence with respect to earlier models used to obtain two dimensional closed form solutions to the point-mass equations of motion. The third is a least squares fitting procedure for obtaining the new drag functions from observed experimental data. The author claims that results show excellent agreement with six degree of freedom numerical calculations for modern tank ammunition and available published firing tables for center-fired rifle ammunition having a wide variety of shapes and sizes.
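The least squares fitting step can be illustrated generically. The sketch below fits a polynomial to invented drag-coefficient-versus-Mach data with NumPy; it stands in for, and is not, the actual Manges formulation.

```python
import numpy as np

# Invented (Mach, Cd) samples shaped like a typical transonic drag rise.
mach = np.array([0.8, 0.95, 1.0, 1.1, 1.3, 1.6, 2.0])
cd_meas = np.array([0.20, 0.28, 0.40, 0.42, 0.38, 0.33, 0.29])

coeffs = np.polyfit(mach, cd_meas, deg=3)   # least-squares cubic fit
cd_fit = np.poly1d(coeffs)
print(f"fitted Cd at Mach 1.2 ~= {cd_fit(1.2):.3f}")
```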
A Microsoft Excel application has been authored that uses least squares fits of wind tunnel acquired tabular drag coefficients. Alternatively, manufacturer-supplied ballistic trajectory data, or Doppler acquired velocity data, can be fitted as well to calibrate the model. The Excel application then employs custom macroinstructions to calculate the trajectory variables of interest. A modified 4th order Runge–Kutta integration algorithm is used. Like Pejsa, Colonel Manges claims center-fired rifle accuracies to the nearest one tenth of an inch for bullet position, and nearest foot per second for the projectile velocity. The Proceedings of the 11th International Ballistic Symposium are available through the National Defense Industrial Association (NDIA) at the website http://www.ndia.org/Resources/Pages/Publication_Catalog.aspx . Six degrees of freedom model There are also advanced professional ballistic models like PRODAS available. These are based on six degrees of freedom (6 DoF) calculations. 6 DoF modeling accounts for x, y, and z position in space along with the projectile's pitch, yaw, and roll rates. 6 DoF modeling needs such elaborate data input, knowledge of the employed projectiles, and expensive data collection and verification methods that it is impractical for non-professional ballisticians, but not impossible for the curious, computer literate, and mathematically inclined. Semi-empirical aeroprediction models have been developed that reduced extensive test range data on a wide variety of projectile shapes, normalizing dimensional input geometries to calibers, accounting for nose length and radius, body length, and boattail size, and allowing the full set of 6-dof aerodynamic coefficients to be estimated. Early research on spin-stabilized aeroprediction software resulted in the SPINNER computer program. The FINNER aeroprediction code calculates 6-dof inputs for fin stabilized projectiles. Solids modeling software that determines the projectile parameters of mass, center of gravity, and axial and transverse moments of inertia necessary for stability analysis is also readily available and simple to program. Finally, algorithms for 6-dof numerical integration suitable to a 4th order Runge-Kutta are readily available. All that is required for the amateur ballistician to investigate the finer analytical details of projectile trajectories, along with bullet nutation and precession behavior, is determination and computer programming skill. Nevertheless, for the small arms enthusiast, aside from academic curiosity, one will discover that being able to predict trajectories to 6-dof accuracy is probably not of practical significance compared to more simplified point mass trajectories based on published bullet ballistic coefficients. 6 DoF is generally used by the aerospace and defense industry and military organizations that study the ballistic behavior of a limited number of (intended) military issue projectiles. Calculated 6 DoF trends can be incorporated as correction tables in more conventional ballistic software applications. Though 6 DoF modeling and software applications have been used by professional, well-equipped organizations for decades, the computing power restrictions of mobile computing devices like (ruggedized) personal digital assistants, tablet computers or smartphones have impaired field use, as calculations generally have to be done on the fly.
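A classical 4th-order Runge-Kutta step, the family of integrator mentioned above, can be written in a few lines. This is a generic sketch, not code from the Excel application or PRODAS; `deriv` must return the time derivative of the state vector (positions and velocities).

```python
def rk4_step(deriv, state, dt):
    """Advance a state tuple by one classical RK4 step of size dt."""
    def add(a, b, s):
        return tuple(x + s * y for x, y in zip(a, b))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Example: vertical free fall, state = (height, vertical speed).
fall = lambda s: (s[1], -9.81)
state = (100.0, 0.0)
for _ in range(100):              # 1 s of flight at dt = 0.01 s
    state = rk4_step(fall, state, 0.01)
print(state)                      # ~ (95.095, -9.81)
```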
In 2016 the Scandinavian ammunition manufacturer Nammo Lapua Oy released Lapua Ballistics, free ballistic software based on a 6 DoF calculation model. The software is distributed as a mobile app only and is available for Android and iOS devices. The employed 6 DoF model is, however, limited to Lapua bullets, as a 6 DoF solver needs bullet-specific drag coefficient (Cd)/Doppler radar data and geometric dimensions of the projectile(s) of interest. For other bullets the Lapua Ballistics solver is limited to and based on G1 or G7 ballistic coefficients and the Mayevski/Siacci method. Artillery software suites Military organizations have developed ballistic models like the NATO Armament Ballistic Kernel (NABK) for fire-control systems for artillery like the SG2 Shareable (Fire Control) Software Suite (S4) from the NATO Army Armaments Group (NAAG). The NATO Armament Ballistic Kernel is a 4-DoF modified point mass model. This is a compromise between a simple point mass model and a computationally intensive 6-DoF model. A six- and seven-degree-of-freedom standard called BALCO has also been developed within NATO working groups. BALCO is a trajectory simulation program based on the mathematical model defined by the NATO Standardization Recommendation 4618. The primary goal of BALCO is to compute high-fidelity trajectories for both conventional axisymmetric and precision-guided projectiles featuring control surfaces. The BALCO trajectory model is a FORTRAN 2003 program that implements 6/7-DoF equations of motion, 7th-order Runge-Kutta-Fehlberg integration, Earth models, atmosphere models, aerodynamic models, thrust and base burn models, and actuator models. The predictions these models yield are subject to comparison study. Doppler radar measurements For the precise establishment of drag or air resistance effects on projectiles, Doppler radar measurements are required. Weibel 1000e or Infinition BR-1001 Doppler radars are used by governments, professional ballisticians, defence forces and a few ammunition manufacturers to obtain real-world data of the flight behavior of projectiles of their interest. Correctly established state-of-the-art Doppler radar measurements can determine the flight behavior of projectiles as small as airgun pellets in three-dimensional space to within a few millimetres' accuracy. The gathered data regarding the projectile deceleration can be derived and expressed in several ways, such as ballistic coefficients (BC) or drag coefficients (Cd). Because a spinning projectile experiences both precession and nutation about its center of gravity as it flies, further data reduction of Doppler radar measurements is required to separate yaw induced drag and lift coefficients from the zero yaw drag coefficient, in order to make measurements fully applicable to 6-dof trajectory analysis. Doppler radar measurement results for a lathe-turned monolithic solid .50 BMG very-low-drag bullet (Lost River J40 .510-773 grain monolithic solid bullet / twist rate 1:15 in) look like this: The initial rise in the BC value is attributed to a projectile's always present yaw and precession out of the bore. The test results were obtained from many shots, not just a single shot. The bullet was assigned 1.062 for its BC number by the bullet's manufacturer, Lost River Ballistic Technologies.
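The reduction from a radar velocity history to drag coefficients can be sketched with the standard point-mass drag relation Cd = 2·m·a / (ρ·v²·S). The velocity samples below are invented, and a real reduction must additionally separate out yaw-induced drag, as noted above.

```python
import math

rho = 1.225                   # air density, kg/m^3 (ISA sea level)
d = 0.00859                   # bullet diameter, m (8.59 mm example)
m = 0.01944                   # bullet mass, kg (19.44 g example)
S = math.pi * (d / 2) ** 2    # reference (frontal) area, m^2

times = [0.00, 0.05, 0.10, 0.15]        # s (invented samples)
vels = [830.0, 810.5, 791.6, 773.3]     # m/s (invented samples)

for i in range(len(times) - 1):
    v_mid = (vels[i] + vels[i + 1]) / 2
    a = (vels[i] - vels[i + 1]) / (times[i + 1] - times[i])  # deceleration, m/s^2
    cd = 2 * m * a / (rho * v_mid ** 2 * S)
    print(f"v ~ {v_mid:6.1f} m/s  Cd ~ {cd:.3f}")
```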
Doppler radar measurement results for a Lapua GB528 Scenar 19.44 g (300 gr) 8.59 mm (0.338 in) calibre very-low-drag bullet look like this: This tested bullet experiences its maximum drag coefficient when entering the transonic flight regime around Mach 1.200. With the help of Doppler radar measurements, projectile-specific drag models can be established that are most useful when shooting at extended ranges where the bullet speed slows to the transonic speed region near the speed of sound. This is where the projectile drag predicted by mathematical modeling can significantly depart from the actual drag experienced by the projectile. Further Doppler radar measurements are used to study subtle in-flight effects of various bullet constructions. Governments, professional ballisticians, defence forces and ammunition manufacturers can supplement Doppler radar measurements with measurements gathered by telemetry probes fitted to larger projectiles. General trends in drag or ballistic coefficient In general, a pointed projectile will have a better drag coefficient (Cd) or ballistic coefficient (BC) than a round-nosed bullet, and a round-nosed bullet will have a better Cd or BC than a flat point bullet. Large radius curves, resulting in a shallower point angle, will produce lower drags, particularly at supersonic velocities. Hollow point bullets behave much like a flat point of the same point diameter. Projectiles designed for supersonic use often have a slightly tapered base at the rear, called a boat tail, which reduces air resistance in flight. The usefulness of a "tapered rear" for long-range firing was well established by the early 1870s, but technological difficulties prevented its wide adoption until well into the 20th century. Cannelures, which are recessed rings around the projectile used to crimp the projectile securely into the case, will cause an increase in drag. Analytical software was developed by the Ballistics Research Laboratory – later called the Army Research Laboratory – which reduced actual test range data to parametric relationships for projectile drag coefficient prediction. Large caliber artillery also employ drag reduction mechanisms in addition to streamlining geometry. Rocket-assisted projectiles employ a small rocket motor that ignites upon muzzle exit, providing additional thrust to overcome aerodynamic drag. Rocket assist is most effective with subsonic artillery projectiles. For supersonic long range artillery, where base drag dominates, base bleed is employed. Base bleed is a form of a gas generator that does not provide significant thrust, but rather fills the low-pressure area behind the projectile with gas, effectively reducing the base drag and the overall projectile drag coefficient. Transonic problem A projectile fired at supersonic muzzle velocity will at some point slow to approach the speed of sound. At the transonic region (about Mach 1.2–0.8) the centre of pressure (CP) of most non-spherical projectiles shifts forward as the projectile decelerates. That CP shift affects the (dynamic) stability of the projectile. If the projectile is not well stabilized, it cannot remain pointing forward through the transonic region (the projectile starts to exhibit an unwanted precession or coning motion called limit cycle yaw that, if not damped out, can eventually end in uncontrollable tumbling along the length axis).
However, even if the projectile has sufficient stability (static and dynamic) to be able to fly through the transonic region and stays pointing forward, it is still affected. The erratic and sudden CP shift and (temporary) decrease of dynamic stability can cause significant dispersion (and hence significant accuracy decay), even if the projectile's flight becomes well behaved again when it enters the subsonic region. This makes accurately predicting the ballistic behavior of projectiles in the transonic region very difficult. Because of this, marksmen normally restrict themselves to engaging targets close enough that the projectile is still supersonic. In 2015, the American ballistician Bryan Litz introduced the "Extended Long Range" concept to define rifle shooting at ranges where supersonic fired (rifle) bullets enter the transonic region. According to Litz, "Extended Long Range starts whenever the bullet slows to its transonic range. As the bullet slows down to approach Mach 1, it starts to encounter transonic effects, which are more complex and difficult to account for, compared to the supersonic range where the bullet is relatively well-behaved." The ambient air density has a significant effect on dynamic stability during transonic transition. Though the ambient air density is a variable environmental factor, adverse transonic transition effects can be negated better by a projectile traveling through less dense air than when traveling through denser air. Projectile or bullet length also affects limit cycle yaw. Longer projectiles experience more limit cycle yaw than shorter projectiles of the same diameter. Another feature of projectile design that has been identified as having an effect on the unwanted limit cycle yaw motion is the chamfer at the base of the projectile. At the very base, or heel, of a projectile or bullet, there is a chamfer, or radius. The presence of this radius causes the projectile to fly with greater limit cycle yaw angles. Rifling can also have a subtle effect on limit cycle yaw. In general, faster-spinning projectiles experience less limit cycle yaw. Research into guided projectiles To circumvent the transonic problems encountered by spin-stabilized projectiles, projectiles can theoretically be guided during flight. The Sandia National Laboratories announced in January 2012 that it has researched and test-fired 4-inch (102 mm) long prototype dart-like, self-guided bullets for small-caliber, smooth-bore firearms that could hit laser-designated targets at distances of more than a mile (about 1,610 meters or 1760 yards). These projectiles are not spin stabilized, and the flight path can be steered within limits with an electromagnetic actuator 30 times per second. The researchers also claim they have video of the bullet radically pitching as it exits the barrel and pitching less as it flies down range, a disputed phenomenon known to long-range firearms experts as “going to sleep”. Because the bullet's motions settle the longer it is in flight, accuracy improves at longer ranges, Sandia researcher Red Jones said. “Nobody had ever seen that, but we’ve got high-speed video photography that shows that it’s true,” he said. Recent testing indicates it may be approaching, or may already have achieved, initial operational capability.
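Gyroscopic stability of the kind discussed in this section is often estimated with the Miller twist rule, a rule of thumb that is not taken from this article but is convenient for a quick check; Sg values above roughly 1.4 are commonly regarded as adequately stable at the muzzle. The bullet figures in the example are assumptions for illustration.

```python
def miller_sg(mass_gr, diameter_in, length_in, twist_in_per_turn, v_fps=2800.0):
    """Estimate the gyroscopic stability factor Sg via the Miller twist rule."""
    t = twist_in_per_turn / diameter_in      # twist in calibers per turn
    l = length_in / diameter_in              # bullet length in calibers
    sg = 30.0 * mass_gr / (t ** 2 * diameter_in ** 3 * l * (1 + l ** 2))
    return sg * (v_fps / 2800.0) ** (1.0 / 3.0)   # standard velocity correction

# Example: a 175 gr, .308 in, 1.24 in long bullet from a 1:11.25" twist barrel.
print(round(miller_sg(175, 0.308, 1.24, 11.25, 2600), 2))   # ~1.9
```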
Testing the predictive qualities of software Due to the practical inability to know in advance and compensate for all the variables of flight, no software simulation, however advanced, will yield predictions that will always perfectly match real world trajectories. It is however possible to obtain predictions that are very close to actual flight behavior. Empirical measurement method Ballistic prediction computer programs intended for (extreme) long ranges can be evaluated by conducting field tests at the supersonic to subsonic transition range (the last 10 to 20% of the supersonic range of the rifle/cartridge/bullet combination). For a typical .338 Lapua Magnum rifle, for example, shooting standard 16.2 gram (250 gr) Lapua Scenar GB488 bullets at 905 m/s (2969 ft/s) muzzle velocity, field testing of the software should be done at ≈ 1200-1300 meters (1312-1422 yd) under International Standard Atmosphere sea level conditions (air density ρ = 1.225 kg/m³). To check how well the software predicts the trajectory at shorter to medium range, field tests at 20, 40 and 60% of the supersonic range have to be conducted. At those shorter to medium ranges, transonic problems and hence unbehaved bullet flight should not occur, and the BC is less likely to be transient. Testing the predictive qualities of software at (extreme) long ranges is expensive because it consumes ammunition; the actual muzzle velocity of all shots fired must be measured to be able to make statistically dependable statements. Sample groups of fewer than 24 shots may not obtain the desired statistically significant confidence interval. Doppler radar measurement method Governments, professional ballisticians, defence forces and a few ammunition manufacturers use Doppler radars and/or telemetry probes fitted to larger projectiles to obtain precise real world data regarding the flight behavior of the specific projectiles of their interest and thereupon compare the gathered real world data against the predictions calculated by ballistic computer programs. The normal shooting or aerodynamics enthusiast, however, has no access to such expensive professional measurement devices. Authorities and projectile manufacturers are generally reluctant to share the results of Doppler radar tests and the test derived drag coefficients (Cd) of projectiles with the general public. Around 2020 more affordable but less capable (amateur) Doppler radar equipment to determine free flight drag coefficients became available for the general public. In January 2009, the Scandinavian ammunition manufacturer Nammo/Lapua published Doppler radar test-derived drag coefficient data for most of their rifle projectiles. In 2015 the US ammunition manufacturer Berger Bullets announced the use of Doppler radar in unison with PRODAS 6 DoF software to generate trajectory solutions. In 2016 US ammunition manufacturer Hornady announced the use of Doppler radar derived drag data in software utilizing a modified point mass model to generate trajectory solutions. With the measurement-derived Cd data, engineers can create algorithms that utilize both known mathematical ballistic models as well as test specific, tabular data in unison. When used by predictive software like QuickTARGET Unlimited, Lapua Edition, Lapua Ballistics or Hornady 4DOF, the Doppler radar test-derived drag coefficient data can be used for more accurate external ballistic predictions.
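The field-test plan described above, with checkpoints at fractions of the supersonic range, is easy to tabulate. The 1,400 m supersonic range used in this sketch is an assumed example figure, not a measured one.

```python
# Assumed distance at which the example load slows to ~Mach 1.2 (illustrative).
supersonic_range_m = 1400.0

# 20/40/60% checkpoints for short-to-medium range checks, plus the 80-90%
# window used for extreme-long-range validation.
for frac in (0.2, 0.4, 0.6, 0.8, 0.9):
    print(f"{int(frac * 100):>2}% checkpoint: {supersonic_range_m * frac:.0f} m")
```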
Some of the Lapua-provided drag coefficient data shows drastic increases in the measured drag around or below the Mach 1 flight velocity region. This behavior was observed for most of the measured small calibre bullets, and not so much for the larger calibre bullets. This implies some (mostly smaller calibre) rifle bullets exhibited more limit cycle yaw (coning and/or tumbling) in the transonic/subsonic flight velocity regime. The information regarding unfavourable transonic/subsonic flight behavior for some of the tested projectiles is important. This is a limiting factor for extended range shooting use, because the effects of limit cycle yaw are not easily predictable and potentially catastrophic for the best ballistic prediction models and software. Presented Cd data can not be simply used for every gun-ammunition combination, since it was measured for the barrels, rotational (spin) velocities and ammunition lots the Lapua testers used during their test firings. Variables like differences in rifling (number of grooves, depth, width and other dimensional properties), twist rates and/or muzzle velocities impart different rotational (spin) velocities and rifling marks on projectiles. Changes in such variables and projectile production lot variations can yield different downrange interaction with the air the projectile passes through that can result in (minor) changes in flight behavior. This particular field of external ballistics is currently (2009) not elaborately studied nor well understood. Predictions of several drag resistance modelling and measuring methods The method employed to model and predict external ballistic behavior can yield differing results with increasing range and time of flight. To illustrate this several external ballistic behavior prediction methods for the Lapua Scenar GB528 19.44 g (300 gr) 8.59 mm (0.338 in) calibre very-low-drag rifle bullet with a manufacturer stated G1 ballistic coefficient (BC) of 0.785 fired at 830 m/s (2723 ft/s) muzzle velocity under International Standard Atmosphere sea level conditions (air density ρ = 1.225 kg/m³), Mach 1 = 340.3 m/s, Mach 1.2 = 408.4 m/s), predicted this for the projectile velocity and time of flight from 0 to 3,000 m (0 to 3,281 yd): The table shows the Doppler radar test derived drag coefficients (Cd) prediction method and the 2017 Lapua Ballistics 6 DoF App predictions produce similar results. The 6 DoF modeling estimates bullet stability ((Sd) and (Sg)) that gravitates to over-stabilization for ranges over for this bullet. At the total drop predictions deviate 47.5 cm (19.7 in) or 0.20 mil (0.68 moa) at 50° latitude and up to the total drop predictions are within 0.30 mil (1 moa) at 50° latitude. The 2016 Lapua Ballistics 6 DoF App version predictions were even closer to the Doppler radar test predictions. The traditional Siacci/Mayevski G1 drag curve model prediction method generally yields more optimistic results compared to the modern Doppler radar test derived drag coefficients (Cd) prediction method. At range the differences will be hardly noticeable, but at and beyond the differences grow over 10 m/s (32.8 ft/s) projectile velocity and gradually become significant. At range the projectile velocity predictions deviate 25 m/s (82.0 ft/s), which equates to a predicted total drop difference of 125.6 cm (49.4 in) or 0.83 mil (2.87 moa) at 50° latitude. 
The Pejsa drag model closed-form solution prediction method, without slope constant factor fine tuning, yields very similar results in the supersonic flight regime compared to the Doppler radar test derived drag coefficients (Cd) prediction method. At range the projectile velocity predictions deviate 10 m/s (32.8 ft/s), which equates to a predicted total drop difference of 23.6 cm (9.3 in) or 0.16 mil (0.54 moa) at 50° latitude. The G7 drag curve model prediction method (recommended by some manufacturers for very-low-drag shaped rifle bullets), when using a G7 ballistic coefficient (BC) of 0.377, yields very similar results in the supersonic flight regime compared to the Doppler radar test derived drag coefficients (Cd) prediction method. At range the projectile velocity predictions have their maximum deviation of 10 m/s (32.8 ft/s). The predicted total drop difference at is 0.4 cm (0.16 in) at 50° latitude. The predicted total drop difference at is 45.0 cm (17.7 in), which equates to 0.25 mil (0.86 moa). Decent prediction models are expected to yield similar results in the supersonic flight regime. The five example models down to all predict supersonic Mach 1.2+ projectile velocities and total drop differences within a 51 cm (20.1 in) bandwidth. In the transonic flight regime at the models predict projectile velocities around Mach 1.0 to Mach 1.1 and total drop differences within a much larger 150 cm (59 in) bandwidth. External factors Wind Wind has a range of effects, the first being the effect of making the projectile deviate to the side (horizontal deflection). From a scientific perspective, the "wind pushing on the side of the projectile" is not what causes horizontal wind drift. What causes wind drift is drag. Drag makes the projectile turn into the wind, much like a weather vane, keeping the centre of air pressure on its nose. From the shooter's perspective, this causes the nose of the projectile to turn into the wind and the tail to turn away from the wind. The result of this turning effect is that the drag pushes the projectile downwind in a nose-to-tail direction. Wind also causes aerodynamic jump, which is the vertical component of cross wind deflection caused by lateral (wind) impulses activated during free flight of a projectile or at or very near the muzzle, leading to dynamic imbalance. The amount of aerodynamic jump is dependent on cross wind speed, the gyroscopic stability of the bullet at the muzzle, and whether the barrel twist is clockwise or anti-clockwise. Like reversing the wind direction, reversing the twist direction will reverse the aerodynamic jump direction. A somewhat less obvious effect is caused by head or tailwinds. A headwind will slightly increase the relative velocity of the projectile, and increase drag and the corresponding drop. A tailwind will reduce the drag and the projectile/bullet drop. In the real world, pure head or tailwinds are rare, since wind is seldom constant in force and direction and normally interacts with the terrain it is blowing over. This often makes ultra long range shooting in head or tailwind conditions difficult. Vertical angles The vertical angle (or elevation) of a shot will also affect the trajectory of the shot. Ballistic tables for small calibre projectiles (fired from pistols or rifles) assume a horizontal line of sight between the shooter and target with gravity acting perpendicular to the earth.
Therefore, if the shooter-to-target angle is up or down (the direction of the gravity component does not change with slope direction), then the trajectory-curving acceleration due to gravity will actually be less, in proportion to the cosine of the slant angle. As a result, a projectile fired upward or downward, on a so-called "slant range," will overshoot the same target distance on flat ground. The effect is of sufficient magnitude that hunters must adjust their target hold-off accordingly in mountainous terrain. A well-known formula for slant range adjustment to horizontal range hold-off is known as the Rifleman's rule. The Rifleman's rule and the slightly more complex and less well known Improved Rifleman's rule models produce sufficiently accurate predictions for many small arms applications. Simple prediction models, however, ignore minor gravity effects when shooting uphill or downhill. The only practical way to compensate for this is to use a ballistic computer program. Besides gravity, at very steep angles over long distances the effect of the air density changes the projectile encounters during flight becomes problematic. The mathematical prediction models available for inclined fire scenarios, depending on the amount and direction (uphill or downhill) of the inclination angle and range, yield varying accuracy expectation levels. Less advanced ballistic computer programs predict the same trajectory for uphill and downhill shots at the same vertical angle and range. The more advanced programs factor in the small effect of gravity on uphill and on downhill shots, resulting in slightly differing trajectories at the same vertical angle and range. No publicly available ballistic computer program currently (2017) accounts for the complicated phenomena of differing air densities the projectile encounters during flight. Ambient air density Air pressure, temperature, and humidity variations make up the ambient air density. Humidity has a counter-intuitive impact. Since water vapor has a density of 0.8 grams per litre, while dry air averages about 1.225 grams per litre, higher humidity actually decreases the air density, and therefore decreases the drag. Precipitation Precipitation can cause significant yaw and accompanying deflection when a bullet collides with a raindrop. The further downrange such a coincidental collision occurs, the less the deflection on target will be. The weight of the raindrop and bullet also influences how much yaw is induced during such a collision. A big heavy raindrop and a light bullet will yield the maximal yaw effect. A heavy bullet colliding with an equal raindrop will experience significantly less yaw effect. Long range factors Gyroscopic drift (spin drift) Gyroscopic drift is an interaction of the bullet's mass and aerodynamics with the atmosphere that it is flying in. Even in completely calm air, with no sideways air movement at all, a spin-stabilized projectile will experience a spin-induced sideways component, due to a gyroscopic phenomenon known as "yaw of repose." For a right hand (clockwise) direction of rotation this component will always be to the right. For a left hand (counterclockwise) direction of rotation this component will always be to the left. This is because the projectile's longitudinal axis (its axis of rotation) and the direction of the velocity vector of the center of gravity (CG) deviate by a small angle, which is said to be the equilibrium yaw or the yaw of repose. The magnitude of the yaw of repose angle is typically less than 0.5 degree.
Since rotating objects react with an angular velocity vector 90 degrees from the applied torque vector, the bullet's axis of symmetry moves with a component in the vertical plane and a component in the horizontal plane; for right-handed (clockwise) spinning bullets, the bullet's axis of symmetry deflects to the right and a little bit upward with respect to the direction of the velocity vector, as the projectile moves along its ballistic arc. As a result of this small inclination, there is a continuous air stream, which tends to deflect the bullet to the right. Thus the occurrence of the yaw of repose is the reason for the bullet drifting to the right (for right-handed spin) or to the left (for left-handed spin). This means that the bullet is "skidding" sideways at any given moment, and thus experiencing a sideways force component. The following variables affect the magnitude of gyroscopic drift: Projectile or bullet length: longer projectiles experience more gyroscopic drift because they produce more lateral "lift" for a given yaw angle. Spin rate: faster spin rates will produce more gyroscopic drift because the nose ends up pointing farther to the side. Range, time of flight and trajectory height: gyroscopic drift increases with all of these variables. Density of the atmosphere: denser air will increase gyroscopic drift. Doppler radar measurement results for the gyroscopic drift of several US military and other very-low-drag bullets at 1000 yards (914.4 m) look like this: The table shows that the gyroscopic drift cannot be predicted from weight and diameter alone. In order to make accurate predictions of gyroscopic drift, several details about both the external and internal ballistics must be considered. Factors such as the twist rate of the barrel, the velocity of the projectile as it exits the muzzle, barrel harmonics, and atmospheric conditions all contribute to the path of a projectile. Magnus effect Spin-stabilized projectiles are affected by the Magnus effect, whereby the spin of the bullet creates a force acting either up or down, perpendicular to the sideways vector of the wind. In the simple case of horizontal wind, and a right hand (clockwise) direction of rotation, the Magnus effect induced pressure differences around the bullet cause a downward (wind from the right) or upward (wind from the left) force viewed from the point of firing to act on the projectile, affecting its point of impact. The vertical deflection value tends to be small in comparison with the horizontal wind induced deflection component, but it may nevertheless be significant in winds that exceed 4 m/s (14.4 km/h or 9 mph). Magnus effect and bullet stability The Magnus effect has a significant role in bullet stability because the Magnus force does not act upon the bullet's center of gravity, but upon the center of pressure, affecting the yaw of the bullet. The Magnus effect will act as a destabilizing force on any bullet with a center of pressure located ahead of the center of gravity, while conversely acting as a stabilizing force on any bullet with the center of pressure located behind the center of gravity. The location of the center of pressure depends on the flow field structure; in other words, on whether the bullet is in supersonic, transonic or subsonic flight. What this means in practice depends on the shape and other attributes of the bullet; in any case the Magnus force greatly affects stability because it tries to "twist" the bullet along its flight path.
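Both spin drift and the Magnus stability question above hinge on how strongly the bullet is gyroscopically stabilized at the muzzle. A common first estimate of the gyroscopic stability factor SG is the Miller twist rule; the sketch below implements that published approximation (the example bullet numbers are illustrative, and the rule itself is only an approximation, not a six-degrees-of-freedom treatment):

```python
def miller_stability(mass_gr, diameter_in, length_in, twist_in, velocity_fps=2800.0):
    """Miller twist-rule estimate of the gyroscopic stability factor SG:
    SG = 30 m / (t^2 d^3 l (1 + l^2)), with mass m in grains, diameter d in
    inches, twist t and length l expressed in calibers, plus the usual
    (V / 2800 fps)^(1/3) muzzle-velocity correction. SG above roughly 1.4 is
    conventionally regarded as adequately stable."""
    t = twist_in / diameter_in           # twist in calibers per turn
    l = length_in / diameter_in          # bullet length in calibers
    sg = 30.0 * mass_gr / (t ** 2 * diameter_in ** 3 * l * (1.0 + l ** 2))
    return sg * (velocity_fps / 2800.0) ** (1.0 / 3.0)

# Example: a 175 gr, 0.308 in diameter, 1.24 in long bullet from a
# 1-in-12-inch twist barrel at 2600 ft/s
print(round(miller_stability(175.0, 0.308, 1.24, 12.0, 2600.0), 2))  # ~1.67
```

The strong length dependence in the denominator is one reason long very-low-drag bullets demand fast twist rates.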
Paradoxically, very-low-drag bullets, owing to their length, have a tendency to exhibit greater Magnus destabilizing errors, because they present a greater surface area to the oncoming air they are travelling through, which reduces their aerodynamic efficiency. This subtle effect is one of the reasons why a calculated Cd or BC based on shape and sectional density is of limited use. Poisson effect Another minor cause of drift, which depends on the nose of the projectile being above the trajectory, is the Poisson effect. This, if it occurs at all, acts in the same direction as the gyroscopic drift and is even less important than the Magnus effect. It supposes that the uptilted nose of the projectile causes an air cushion to build up underneath it. It further supposes that there is an increase of friction between this cushion and the projectile so that the latter, with its spin, will tend to roll off the cushion and move sideways. This simple explanation is quite popular. There is, however, no evidence to show that increased pressure means increased friction, and unless this is so, there can be no effect. Even if it does exist, it must be quite insignificant compared with the gyroscopic and Coriolis drifts. Both the Poisson and Magnus effects will reverse their directions of drift if the nose falls below the trajectory. When the nose is off to one side, as in equilibrium yaw, these effects will make minute alterations in range. Coriolis drift The Coriolis effect causes Coriolis drift in a direction perpendicular to the Earth's axis; for most locations on Earth and firing directions, this deflection includes horizontal and vertical components. The deflection is to the right of the trajectory in the northern hemisphere, to the left in the southern hemisphere, upward for eastward shots, and downward for westward shots. The vertical Coriolis deflection is also known as the Eötvös effect. Coriolis drift is not an aerodynamic effect; it is a consequence of the rotation of the Earth. The magnitude of the Coriolis effect is small; for small arms it is generally insignificant (for high-powered rifles in the order of about at ), but for ballistic projectiles with long flight times, such as extreme-long-range rifle projectiles, artillery, and rockets like intercontinental ballistic missiles, it is a significant factor in calculating the trajectory. The magnitude of the drift depends on the firing and target location, the azimuth of firing, the projectile velocity and the time of flight. Horizontal effect Viewed from a non-rotating reference frame (i.e. not one rotating with the Earth) and ignoring the forces of gravity and air resistance, a projectile moves in a straight line. When viewed from a reference frame fixed with respect to the Earth, that straight trajectory appears to curve sideways. The direction of this horizontal curvature is to the right in the northern hemisphere and to the left in the southern hemisphere, and does not depend on the azimuth of the shot. The horizontal curvature is largest at the poles and decreases to zero at the equator. Vertical (Eötvös) effect The Eötvös effect changes the perceived gravitational pull on a moving object based on the relationship between the direction and velocity of movement and the direction of the Earth's rotation. The Eötvös effect is largest at the equator and decreases to zero at the poles. It causes eastward-traveling projectiles to deflect upward, and westward-traveling projectiles to deflect downward.
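Both components admit simple first-order, flat-fire approximations: the horizontal drift is roughly Omega * X * t * sin(latitude) and the vertical (Eötvös) deflection roughly Omega * X * t * cos(latitude) * sin(azimuth), assuming a constant average velocity over the flight. The sketch below is a back-of-the-envelope estimate under those assumptions, not a full trajectory integration:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def coriolis_deflections(range_m, tof_s, latitude_deg, azimuth_deg):
    """First-order flat-fire approximations (constant average velocity assumed).
    Returns (horizontal, vertical) deflections in metres.
    Horizontal: positive = right in the northern hemisphere, left in the southern.
    Vertical (Eotvos): positive = strikes high (eastward fire), negative = low."""
    lat = math.radians(latitude_deg)
    az = math.radians(azimuth_deg)   # measured clockwise from north
    horizontal = OMEGA * range_m * tof_s * math.sin(lat)
    vertical = OMEGA * range_m * tof_s * math.cos(lat) * math.sin(az)
    return horizontal, vertical

# Example: 1000 m shot, 1.8 s flight time, 45 deg N, firing due east (az = 90)
h, v = coriolis_deflections(1000.0, 1.8, 45.0, 90.0)
print(round(h, 3), round(v, 3))  # ~0.093 m right, ~0.093 m high
```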
The effect is less pronounced for trajectories in other directions, and is zero for trajectories aimed due north or south. In the case of large changes of momentum, such as a spacecraft being launched into Earth orbit, the effect becomes significant. It contributes to the fastest and most fuel-efficient path to orbit: a launch from the equator that curves to a directly eastward heading. Equipment factors Though they are not forces acting on projectile trajectories, some equipment-related factors influence trajectories. Since these factors can cause otherwise unexplainable external ballistic flight behavior, they have to be briefly mentioned. Lateral jump Lateral jump is caused by a slight lateral and rotational movement of a gun barrel at the instant of firing. It has the effect of a small error in bearing. The effect is ignored, since it is small and varies from round to round. Lateral throw-off Lateral throw-off is caused by mass imbalance in spin-stabilized projectiles, or by pressure imbalances during the transitional flight phase when a projectile leaves a gun barrel off axis, leading to static imbalance. If present, it causes dispersion. The effect is unpredictable, since it is generally small and varies from projectile to projectile, round to round and/or gun barrel to gun barrel. Maximum effective small arms range The maximum practical range of all small arms, and especially of high-powered sniper rifles, depends mainly on the aerodynamic or ballistic efficiency of the spin-stabilised projectiles used. Long-range shooters must also collect relevant information to calculate elevation and windage corrections to be able to achieve first shot strikes at point targets. The data to calculate these fire control corrections includes a long list of variables: the ballistic coefficient or test-derived drag coefficients (Cd)/behavior of the bullets used; the height of the sighting components above the rifle bore axis; the zero range at which the sighting components and rifle combination were sighted in; bullet mass; actual muzzle velocity (powder temperature affects muzzle velocity, primer ignition is also temperature dependent); range to target; supersonic range of the employed gun, cartridge and bullet combination; inclination angle in case of uphill/downhill firing; target speed and direction; wind speed and direction (the main cause of horizontal projectile deflection and generally the hardest ballistic variable to measure and judge correctly; wind effects can also cause vertical deflection);
air pressure, temperature, altitude and humidity variations (these make up the ambient air density); Earth's gravity (changes slightly with latitude and altitude); gyroscopic drift (horizontal and vertical plane gyroscopic effect, often known as spin drift, induced by the barrel's twist direction and twist rate); Coriolis effect drift (latitude, direction of fire and northern or southern hemisphere data dictate this effect); Eötvös effect (interrelated with the Coriolis effect; latitude and direction of fire dictate this effect); aerodynamic jump (the vertical component of crosswind deflection caused by lateral (wind) impulses activated during free flight or at or very near the muzzle, leading to dynamic imbalance); lateral throw-off (dispersion that is caused by mass imbalance in the applied projectile or by it leaving the barrel off axis, leading to static imbalance); the inherent potential accuracy and adjustment range of the sighting components; the inherent potential accuracy of the rifle; the inherent potential accuracy of the ammunition; and the inherent potential accuracy of the computer program and other firing control components used to calculate the trajectory. The ambient air density is at its maximum at Arctic sea level conditions. Cold gunpowder also produces lower pressures and hence lower muzzle velocities than warm powder. This means that the maximum practical range of rifles will be at its shortest at Arctic sea level conditions. The ability to hit a point target at great range has a lot to do with the ability to tackle environmental and meteorological factors and a good understanding of exterior ballistics and the limitations of equipment. Without (computer) support and highly accurate laser rangefinders and meteorological measuring equipment as aids to determine ballistic solutions, long-range shooting beyond 1000 m (1100 yd) at unknown ranges becomes guesswork for even the most expert long-range marksmen. Interesting further reading: Marksmanship Wikibook Using ballistics data Here is an example of a ballistic table for a .30 calibre Speer 169 grain (11 g) pointed boat tail match bullet, with a BC of 0.480. It assumes sights 1.5 inches (38 mm) above the bore line, and sights adjusted to result in point of aim and point of impact matching 200 yards (183 m) and 300 yards (274 m) respectively. This table demonstrates that, even with a fairly aerodynamic bullet fired at high velocity, the "bullet drop" or change in the point of impact is significant. This change in point of impact has two important implications. Firstly, estimating the distance to the target is critical at longer ranges, because the difference in the point of impact between 400 and is 25–32 in (depending on zero); in other words, if the shooter estimates that the target is 400 yd away when it is in fact 500 yd away, the shot will impact 25–32 in (635–813 mm) below where it was aimed, possibly missing the target completely. Secondly, the rifle should be zeroed to a distance appropriate to the typical range of targets, because the shooter might have to aim so far above the target to compensate for a large bullet drop that he may lose sight of the target completely (for instance, being outside the field of view of a telescopic sight). In the example of the rifle zeroed at , the shooter would have to aim 49 in or more than 4 ft (1.2 m) above the point of impact for a target at 500 yd. See also Internal ballistics - The behavior of the projectile and propellant before it leaves the barrel.
Transitional ballistics - The behavior of the projectile from the time it leaves the muzzle until the pressure behind the projectile is equalized. Terminal ballistics - The behavior of the projectile upon impact with the target. Trajectory of a projectile - Basic external ballistics mathematic formulas. Rifleman's rule - Procedures or "rules" for a rifleman for aiming at targets at a distance either uphill or downhill. Franklin Ware Mann - Early scientific study of external ballistics. Table of handgun and rifle cartridges Sighting in - Calibrating the sights on a ranged weapon so that the point of aim intersects with the trajectory at a given distance, allowing the user to consistently hit the target being aimed at. Notes References External links General external ballistics (Simplified calculation of the motion of a projectile under a drag force proportional to the square of the velocity) - basketball ballistics. Small arms external ballistics Software for calculating ball ballistics How do bullets fly? by Ruprecht Nennstiel, Wiesbaden, Germany Exterior Ballistics.com articles A Short Course in External Ballistics Articles on long range shooting by Bryan Litz Probabilistic Weapon Employment Zone (WEZ) Analysis A Conceptual Overview by Bryan Litz Weite Schüsse - part 4, Basic explanation of the Pejsa model by Lutz Möller Patagonia Ballistics ballistics mathematical software engine JBM Small Arms Ballistics with online ballistics calculators Bison Ballistics Point Mass Online Ballistics Calculator Virtual Wind Tunnel Experiments for Small Caliber Ammunition Aerodynamic Characterization - Paul Weinacht US Army Research Laboratory Aberdeen Proving Ground, MD Artillery external ballistics British Artillery Fire Control - Ballistics & Data Field Artillery, Volume 6, Ballistics and Ammunition The Production of Firing Tables for Cannon Artillery, BRL report no. 1371 by Elizabeth R. Dickinson, U.S. Army Materiel Command Ballistic Research Laboratories, November 1967 NABK (NATO Armament Ballistic Kernel) Based Next Generation Ballistic Table Toolkit, 23rd International Symposium on Ballistics, Tarragona, Spain 16-20 April 2007 Trajectory Calculator in C++ that can deduce drag function from firing tables Freeware small arms external ballistics software Hawke X-ACT Pro FREE ballistics app. iOS, Android, OSX & Windows. ChairGun Pro free ballistics for rim fire and pellet guns. Ballistic_XLR (MS Excel spreadsheet) - A substantial enhancement & modification of the Pejsa spreadsheet (below). GNU Exterior Ballistics Computer (GEBC) - An open source 3DOF ballistics computer for Windows, Linux, and Mac - Supports the G1, G2, G5, G6, G7, and G8 drag models. Created and maintained by Derek Yates. 6mmbr.com ballistics section links to / hosts 4 freeware external ballistics computer programs. 2DOF & 3DOF R.L. McCoy - Gavre exterior ballistics (zip file) - Supports the G1, G2, G5, G6, G7, G8, GS, GL, GI, GB and RA4 drag models PointBlank Ballistics (zip file) - Siacci/Mayevski G1 drag model. Remington Shoot! A ballistic calculator for Remington factory ammunition (based on Pinsoft's Shoot! software). - Siacci/Mayevski G1 drag model. JBM's small-arms ballistics calculators Online trajectory calculators - Supports the G1, G2, G5, G6, G7 (for some projectiles experimentally measured G7 ballistic coefficients), G8, GI, GL and for some projectiles doppler radar-test derived (Cd) drag models. Pejsa Ballistics (MS Excel spreadsheet) - Pejsa model. Sharpshooter Friend (Palm PDA software) - Pejsa model.
Quick Target Unlimited, Lapua Edition - A version of QuickTARGET Unlimited ballistic software (requires free registration to download) - Supports the G1, G2, G5, G6, G7, G8, GL, GS Spherical 9/16"SAAMI, GS Spherical Don Miller, RA4, Soviet 1943, British 1909 Hatches Notebook and for some Lapua projectiles doppler radar-test derived (Cd) drag models. Lapua Ballistics Exterior ballistic software for Java or Android mobile phones. Based on doppler radar-test derived (Cd) drag models for Lapua projectiles and cartridges. Lapua Ballistics App 6 DoF model limited to Lapua bullets for Android and iOS. BfX - Ballistics for Excel Set of MS Excel add-in functions - Supports the G1, G2, G5, G6, G7, G8, RA4 and Pejsa drag models as well as one for air rifle pellets. Able to handle user supplied models, e.g. Lapua projectiles doppler radar-test derived (Cd) ones. GunSim "GunSim" free browser-based ballistics simulator program for Windows and Mac. BallisticSimulator "Ballistic Simulator" free ballistics simulator program for Windows. 5H0T Free online web-based ballistics calculator, with data export capability and charting. SAKO Ballistics Free online ballistic calculator by SAKO. The calculator is also available as an Android app (possibly also on iOS) under the "SAKO Ballistics" name. py-ballisticcalc LGPL Python library for point-mass ballistic calculations. Ballistics Projectiles Aerodynamics Articles containing video clips
External ballistics
[ "Physics", "Chemistry", "Engineering" ]
14,776
[ "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Ballistics", "Fluid dynamics" ]
585,102
https://en.wikipedia.org/wiki/Julius%20von%20Mayer
Julius Robert von Mayer (25 November 1814 – 20 March 1878) was a German physician, chemist, and physicist and one of the founders of thermodynamics. He is best known for enunciating in 1841 one of the original statements of the conservation of energy or what is now known as one of the first versions of the first law of thermodynamics, namely that "energy can be neither created nor destroyed". In 1842, Mayer described the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. He also proposed that plants convert light into chemical energy. His achievements were overlooked and priority for the discovery in 1842 of the mechanical equivalent of heat was attributed to James Joule in the following year. Early life Mayer was born on 25 November 1814 in Heilbronn, Württemberg (Baden-Württemberg, modern day Germany), the son of a pharmacist. He grew up in Heilbronn. After completing his Abitur, he studied medicine at the University of Tübingen, where he was a member of the Corps Guestphalia, a German Student Corps. In 1838 he attained his doctorate and passed the Staatsexamen. After a stay in Paris (1839/40) he left as a ship's physician on a Dutch three-mast sailing ship for a journey to Jakarta. Although he had hardly been interested in physical phenomena before this journey, his observation that storm-whipped waves are warmer than the calm sea started him thinking about the physical laws, in particular about the physical phenomenon of warmth and the question of whether the directly developed heat alone (the heat of combustion), or the sum of the quantities of heat developed in direct and indirect ways, is to be accounted for in the burning process. After his return in February 1841 Mayer dedicated his efforts to solving this problem. In 1841 he settled in Heilbronn and married. Development of ideas Even as a young child, Mayer showed an intense interest in various mechanical devices. As a young man he performed various physical and chemical experiments. In fact, one of his favorite hobbies was creating various types of electrical devices and air pumps. It was obvious that he was intelligent. Mayer attended Eberhard-Karls University from May 1832, where he studied medicine. In 1837, he and some of his friends were arrested for wearing the couleurs of a forbidden organization. The consequences of this arrest included a one-year expulsion from the college and a brief period of incarceration. This diversion sent Mayer traveling to Switzerland, France, and the Dutch East Indies. Mayer gained some additional interest in mathematics and engineering through private tutoring from his friend Carl Baur. In 1841, Mayer returned to Heilbronn to practice medicine, but physics became his new passion. In June 1841 he completed his first scientific paper, entitled "On the Quantitative and Qualitative Determination of Forces". It was largely ignored by other professionals in the area. Then Mayer became interested in the area of heat and its motion. He presented a value in numerical terms for the mechanical equivalent of heat. He also was the first person to describe the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. In 1848 he calculated that in the absence of a source of energy the Sun would cool down in only 5000 years, and he suggested that the impact of meteorites kept it hot.
Since he was not taken seriously at the time, his achievements were overlooked and credit was given to James Joule. Mayer almost committed suicide after he discovered this fact. He spent some time in mental institutions to recover from this and the loss of some of his children. Several of his papers were published due to the advanced nature of the physics and chemistry. He was awarded an honorary doctorate in 1859 by the philosophical faculty at the University of Tübingen. His overlooked work was revived in 1862 by fellow physicist John Tyndall in a lecture at the London Royal Institution. In July 1867 Mayer published "Die Mechanik der Wärme." This publication dealt with the mechanics of heat and its motion. On 5 November 1867 Mayer was awarded personal nobility by the Kingdom of Württemberg (von Mayer), which is the German equivalent of a British knighthood. Von Mayer died in Germany in 1878. After Sadi Carnot stated it for caloric, Mayer was the first person to state the law of the conservation of energy, one of the most fundamental tenets of modern-day physics. The law of the conservation of energy states that the total mechanical energy of a system remains constant in any isolated system of objects that interact with each other only by way of forces that are conservative. Mayer's first attempt at stating the conservation of energy was a paper he sent to Johann Christian Poggendorff's Annalen der Physik, in which he postulated a conservation of force (Erhaltungssatz der Kraft). However, owing to Mayer's lack of advanced training in physics, it contained some fundamental mistakes and was not published. Mayer continued to pursue the idea steadfastly and argued with the Tübingen physics professor Johann Gottlieb Nörremberg, who rejected his hypothesis. Nörremberg did, however, give Mayer a number of valuable suggestions on how the idea could be examined experimentally; for example, if kinetic energy transforms into heat energy, water should be warmed by vibration. Mayer not only performed this demonstration, but also determined the quantitative factor of the transformation, calculating the mechanical equivalent of heat. The result of his investigations was published in 1842 in the May edition of Justus von Liebig's Annalen der Chemie und Pharmacie; it was translated as "Remarks on the Forces of Inorganic Nature". In his booklet Die organische Bewegung im Zusammenhang mit dem Stoffwechsel (The Organic Movement in Connection with the Metabolism, 1845) he specified the numerical value of the mechanical equivalent of heat: at first as 365 kgf·m/kcal, later as 425 kgf·m/kcal; the modern values are 4.184 kJ/kcal (426.6 kgf·m/kcal) for the thermochemical calorie and 4.1868 kJ/kcal (426.9 kgf·m/kcal) for the international steam table calorie. This relation implies that, although work and heat are different forms of energy, they can be transformed into one another. This law is now called the first law of thermodynamics, and led to the formulation of the general principle of conservation of energy, definitively stated by Hermann von Helmholtz in 1847. Mayer's relation Mayer derived a relation between the specific heat at constant pressure and the specific heat at constant volume for an ideal gas. The relation is CP,m − CV,m = R, where CP,m is the molar specific heat at constant pressure, CV,m is the molar specific heat at constant volume and R is the gas constant.
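Mayer's route from this relation to the mechanical equivalent of heat can be retraced numerically: measure the molar heats of a gas in thermal units (calories), take R in mechanical units (joules), and the ratio gives joules per calorie. A minimal sketch using modern values (the numbers are illustrative, not Mayer's own 1842 data):

```python
# Mayer's relation: CP,m - CV,m = R. Measuring the heat capacities in thermal
# units (calories) while R is in mechanical units (joules) yields the
# mechanical equivalent of heat.
R_MECH = 8.314   # gas constant, J/(mol*K)
CP_CAL = 6.96    # molar heat of air at constant pressure, cal/(mol*K)
CV_CAL = 4.97    # molar heat of air at constant volume, cal/(mol*K)

j_per_cal = R_MECH / (CP_CAL - CV_CAL)
print(round(j_per_cal, 3))  # ~4.18 J/cal, the mechanical equivalent of heat
```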
Later life Mayer was aware of the importance of his discovery, but his inability to express himself scientifically led to degrading speculation and resistance from the scientific establishment. Contemporary physicists rejected his principle of conservation of energy, and even acclaimed physicists Hermann von Helmholtz and James Prescott Joule viewed his ideas with hostility. The former doubted Mayer's qualifications in physical questions, and a bitter dispute over priority developed with the latter. In 1848 two of his children died in rapid succession, and Mayer's mental health deteriorated. He attempted suicide on 18 May 1850 and was committed to a mental institution. After he was released, he was a broken man and only timidly re-entered public life in 1860. However, in the meantime, his scientific fame had grown and he received a late appreciation of his achievement, although perhaps at a stage where he was no longer able to enjoy it. He continued to work vigorously as a physician until his death. Honors In 1840 Mayer received the Knight Cross of the Order of the Crown (Württemberg). In 1869 Mayer received the Prix Poncelet. The Robert-Mayer-Gymnasium and the Robert-Mayer-Volks- und Schulsternwarte in Heilbronn bear his name. In chemistry, he invented Mayer's reagent, which is used in detecting alkaloids. Works Ueber das Santonin : eine Inaugural-Dissertation, welche zur Erlangung der Doctorwürde in der Medicin & Chirurgie unter dem Praesidium von Wilhelm Rapp im July 1838 der öffentlichen Prüfung vorlegt Julius Robert Mayer. M. Müller, Heilbronn 1838. Digital edition by the University and State Library Düsseldorf References Further reading External links 1814 births 1878 deaths 19th-century German physicists Recipients of the Copley Medal People from Heilbronn Thermodynamicists
Julius von Mayer
[ "Physics", "Chemistry" ]
1,815
[ "Thermodynamics", "Thermodynamicists" ]
585,226
https://en.wikipedia.org/wiki/Cosmography
The term cosmography has two distinct meanings: traditionally it has been the protoscience of mapping the general features of the cosmos, heaven and Earth; more recently, it has been used to describe the ongoing effort to determine the large-scale features of the observable universe. Premodern views of cosmography largely followed the tradition of ancient Near Eastern cosmology, dominant in the Ancient Near East and in early Greece. Traditional usage The 14th-century work 'Aja'ib al-makhluqat wa-ghara'ib al-mawjudat by the Persian physician Zakariya al-Qazwini is considered to be an early work of cosmography. Traditional Hindu, Buddhist and Jain cosmography schematize a universe centered on Mount Meru surrounded by rivers, continents and seas. These cosmographies posit a universe being repeatedly created and destroyed over time cycles of immense lengths. In 1551, Martín Cortés de Albacar, from Zaragoza, Spain, published Breve compendio de la esfera y del arte de navegar. Translated into English and reprinted several times, the work was of great influence in Britain for many years. He proposed spherical charts and mentioned magnetic deviation and the existence of magnetic poles. Peter Heylin's 1652 book Cosmographie (enlarged from his Microcosmos of 1621) was one of the earliest attempts to describe the entire world in English, and is the first known description of Australia, and among the first of California. The book has four sections, examining the geography, politics, and cultures of Europe, Asia, Africa, and America, with an addendum on Terra Incognita, including Australia, and extending to Utopia, Fairyland, and the "Land of Chivalrie". In 1659, Thomas Porter published a smaller, but extensive Compendious Description of the Whole World, which also included a chronology of world events from Creation forward. These were all part of a major trend in the European Renaissance to explore (and perhaps comprehend) the known world. Modern usage In astrophysics, the term "cosmography" is beginning to be used to describe attempts to determine the large-scale matter distribution and kinematics of the observable universe, dependent on the Friedmann–Lemaître–Robertson–Walker metric but independent of the temporal dependence of the scale factor on the matter/energy composition of the Universe. The word was also commonly used by Buckminster Fuller in his lectures. Applying the Tully–Fisher relation to a catalog of 10,000 galaxies has allowed the construction of 3D images of the local structure of the cosmos. This led to the identification of a local supercluster named the Laniakea Supercluster. See also Johann Bayer Andreas Cellarius Cosmographia Julius Schiller Star cartography Chronology of the Universe Cosmogony Cosmology Timeline of cosmological theories Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure Large-scale structure of the cosmos Timeline of astronomical maps, catalogs, and surveys References Physical cosmology
Cosmography
[ "Physics", "Astronomy" ]
643
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
585,388
https://en.wikipedia.org/wiki/Homology%20sphere
In algebraic topology, a homology sphere is an n-manifold X having the homology groups of an n-sphere, for some integer n ≥ 1. That is, H0(X) = Hn(X) = Z and Hi(X) = 0 for all other i. Therefore X is a connected space, with one non-zero higher Betti number, namely, bn. It does not follow that X is simply connected, only that its fundamental group is perfect (see Hurewicz theorem). A rational homology sphere is defined similarly but using homology with rational coefficients. Poincaré homology sphere The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere, first constructed by Henri Poincaré. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. Since the fundamental group of the 3-sphere is trivial, this shows that there exist 3-manifolds with the same homology groups as the 3-sphere that are not homeomorphic to it. Construction A simple construction of this space begins with a dodecahedron. Each face of the dodecahedron is identified with its opposite face, using the minimal clockwise twist to line up the faces. Gluing each pair of opposite faces together using this identification yields a closed 3-manifold. (See Seifert–Weber space for a similar construction, using more "twist", that results in a hyperbolic 3-manifold.) Alternatively, the Poincaré homology sphere can be constructed as the quotient space SO(3)/I where I is the icosahedral group (i.e., the rotational symmetry group of the regular icosahedron and dodecahedron, isomorphic to the alternating group A5). More intuitively, this means that the Poincaré homology sphere is the space of all geometrically distinguishable positions of an icosahedron (with fixed center and diameter) in Euclidean 3-space. One can also pass instead to the universal cover of SO(3) which can be realized as the group of unit quaternions and is homeomorphic to the 3-sphere. In this case, the Poincaré homology sphere is isomorphic to S3/2I where 2I is the binary icosahedral group, the perfect double cover of I embedded in S3. Another approach is by Dehn surgery. The Poincaré homology sphere results from +1 surgery on the right-handed trefoil knot. Cosmology In 2003, lack of structure on the largest scales (above 60 degrees) in the cosmic microwave background as observed for one year by the WMAP spacecraft led to the suggestion, by Jean-Pierre Luminet of the Observatoire de Paris and colleagues, that the shape of the universe is a Poincaré sphere. In 2008, astronomers found the best orientation on the sky for the model and confirmed some of the predictions of the model, using three years of observations by the WMAP spacecraft. Data analysis from the Planck spacecraft suggests that there is no observable non-trivial topology to the universe. Constructions and examples Surgery on a knot in the 3-sphere S3 with framing +1 or −1 gives a homology sphere. More generally, surgery on a link gives a homology sphere whenever the matrix given by intersection numbers (off the diagonal) and framings (on the diagonal) has determinant +1 or −1. If p, q, and r are pairwise relatively prime positive integers then the link of the singularity x^p + y^q + z^r = 0 (in other words, the intersection of a small 3-sphere around 0 with this complex surface) is a Brieskorn manifold that is a homology 3-sphere, called a Brieskorn 3-sphere Σ(p, q, r).
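The determinant criterion in the surgery construction just described can be checked mechanically: place the framings on the diagonal and the pairwise linking numbers off it, and test whether the determinant is ±1. A minimal sketch (the example framings and linking numbers are invented for illustration):

```python
def det(m):
    """Determinant by cofactor expansion (adequate for small linking matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def surgery_gives_homology_sphere(framings, linking):
    """framings[i]: integer framing of the i-th link component;
    linking[i][j]: linking number of components i and j (ignored when i == j).
    Surgery yields a homology 3-sphere iff the combined matrix has determinant +1 or -1."""
    n = len(framings)
    m = [[framings[i] if i == j else linking[i][j] for j in range(n)]
         for i in range(n)]
    return abs(det(m)) == 1

# A single +1-framed knot: 1x1 matrix (1), so surgery gives a homology sphere.
print(surgery_gives_homology_sphere([1], [[0]]))                 # True
# Two components with linking number 2, framings 1 and 3: det = 1*3 - 2*2 = -1.
print(surgery_gives_homology_sphere([1, 3], [[0, 2], [2, 0]]))   # True
```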
The Brieskorn sphere Σ(p, q, r) is homeomorphic to the standard 3-sphere if one of p, q, and r is 1, and Σ(2, 3, 5) is the Poincaré sphere. The connected sum of two oriented homology 3-spheres is a homology 3-sphere. A homology 3-sphere that cannot be written as a connected sum of two homology 3-spheres is called irreducible or prime, and every homology 3-sphere can be written as a connected sum of prime homology 3-spheres in an essentially unique way. (See Prime decomposition (3-manifold).) Suppose that a1, ..., ar are integers all at least 2 such that any two are coprime. Then the Seifert fiber space over the sphere with exceptional fibers of degrees a1, ..., ar is a homology sphere, where the b's (the integers b, b1, ..., br in the Seifert invariants) are chosen so that a1a2···ar(b + b1/a1 + ··· + br/ar) = ±1. (There is always a way to choose the b's, and the homology sphere does not depend (up to isomorphism) on the choice of b's.) If r is at most 2 this is just the usual 3-sphere; otherwise they are distinct non-trivial homology spheres. If the a's are 2, 3, and 5 this gives the Poincaré sphere. If there are at least 3 a's, not 2, 3, 5, then this is an aspherical homology 3-sphere with infinite fundamental group that has a Thurston geometry modeled on the universal cover of SL2(R). Invariants The Rokhlin invariant is a Z/2Z-valued invariant of homology 3-spheres. The Casson invariant is an integer-valued invariant of homology 3-spheres, whose reduction mod 2 is the Rokhlin invariant. Applications If A is a homology 3-sphere not homeomorphic to the standard 3-sphere, then the suspension of A is an example of a 4-dimensional homology manifold that is not a topological manifold. The double suspension of A is homeomorphic to the standard 5-sphere, but its triangulation (induced by some triangulation of A) is not a PL manifold. In other words, this gives an example of a finite simplicial complex that is a topological manifold but not a PL manifold. (It is not a PL manifold because the link of a point is not always a 4-sphere.) Galewski and Stern showed that all compact topological manifolds (without boundary) of dimension at least 5 are homeomorphic to simplicial complexes if and only if there is a homology 3-sphere Σ with Rokhlin invariant 1 such that the connected sum Σ#Σ of Σ with itself bounds a smooth acyclic 4-manifold. Ciprian Manolescu showed that there is no such homology sphere with the given property, and therefore, there are 5-manifolds not homeomorphic to simplicial complexes. In particular, the example originally given by Galewski and Stern is not triangulable. See also Eilenberg–MacLane space Moore space (algebraic topology) References Selected reading Robion Kirby, Martin Scharlemann, Eight faces of the Poincaré homology 3-sphere. Geometric topology (Proc. Georgia Topology Conf., Athens, Ga., 1977), pp. 113–146, Academic Press, New York-London, 1979. Nikolai Saveliev, Invariants of Homology 3-Spheres, Encyclopaedia of Mathematical Sciences, vol 140. Low-Dimensional Topology, I. Springer-Verlag, Berlin, 2002. External links A 16-Vertex Triangulation of the Poincaré Homology 3-Sphere and Non-PL Spheres with Few Vertices by Anders Björner and Frank H. Lutz Lecture by David Gillman on The best picture of Poincare's homology sphere Topological spaces Homology theory 3-manifolds Spheres
Homology sphere
[ "Mathematics" ]
1,607
[ "Topological spaces", "Topology", "Mathematical structures", "Space (mathematics)" ]
328,252
https://en.wikipedia.org/wiki/Pick%27s%20theorem
In geometry, Pick's theorem provides a formula for the area of a simple polygon with integer vertex coordinates, in terms of the number of integer points within it and on its boundary. The result was first described by Georg Alexander Pick in 1899. It was popularized in English by Hugo Steinhaus in the 1950 edition of his book Mathematical Snapshots. It has multiple proofs, and can be generalized to formulas for certain kinds of non-simple polygons. Formula Suppose that a polygon has integer coordinates for all of its vertices. Let i be the number of integer points interior to the polygon, and let b be the number of integer points on its boundary (including both vertices and points along the sides). Then the area A of this polygon is: A = i + b/2 − 1. The example shown has interior points and boundary points, so its area is square units. Proofs Via Euler's formula One proof of this theorem involves subdividing the polygon into triangles with three integer vertices and no other integer points. One can then prove that each subdivided triangle has area exactly 1/2. Therefore, the area of the whole polygon equals half the number of triangles in the subdivision. After relating area to the number of triangles in this way, the proof concludes by using Euler's polyhedral formula to relate the number of triangles to the number of grid points in the polygon. The first part of this proof shows that a triangle with three integer vertices and no other integer points has area exactly 1/2, as Pick's formula states. The proof uses the fact that all triangles tile the plane, with adjacent triangles rotated by 180° from each other around their shared edge. For tilings by a triangle with three integer vertices and no other integer points, each point of the integer grid is a vertex of six tiles. Because the number of triangles per grid point (six) is twice the number of grid points per triangle (three), the triangles are twice as dense in the plane as the grid points. Any scaled region of the plane contains twice as many triangles (in the limit as the scale factor goes to infinity) as the number of grid points it contains. Therefore, each triangle has area 1/2, as needed for the proof. A different proof that these triangles have area 1/2 is based on the use of Minkowski's theorem on lattice points in symmetric convex sets. This already proves Pick's formula for a polygon that is one of these special triangles. Any other polygon can be subdivided into special triangles: add non-crossing line segments within the polygon between pairs of grid points until no more line segments can be added. The only polygons that cannot be subdivided in this way are the special triangles considered above; therefore, only special triangles can appear in the resulting subdivision. Because each special triangle has area 1/2, a polygon of area A will be subdivided into 2A special triangles. The subdivision of the polygon into triangles forms a planar graph, and Euler's formula V − E + F = 2 gives an equation that applies to the number of vertices, edges, and faces of any planar graph. The vertices are just the grid points of the polygon; there are V = i + b of them. The faces are the triangles of the subdivision, and the single region of the plane outside of the polygon. The number of triangles is 2A, so altogether there are F = 2A + 1 faces. To count the edges, observe that there are 6A sides of triangles in the subdivision. Each edge interior to the polygon is the side of two triangles. However, there are b edges of triangles that lie along the polygon's boundary and form part of only one triangle.
Therefore, the number of sides of triangles obeys the equation 6A = 2(E − b) + b = 2E − b, from which one can solve for the number of edges, E = 3A + b/2. Plugging these values for V, E, and F into Euler's formula gives (i + b) − (3A + b/2) + (2A + 1) = 2. Pick's formula is obtained by solving this linear equation for A. An alternative but similar calculation involves proving that the number of edges of the same subdivision is E = 3i + 2b − 3, leading to the same result. It is also possible to go the other direction, using Pick's theorem (proved in a different way) as the basis for a proof of Euler's formula. Other proofs Alternative proofs of Pick's theorem that do not use Euler's formula include the following. One can recursively decompose the given polygon into triangles, allowing some triangles of the subdivision to have area larger than 1/2. Both the area and the counts of points used in Pick's formula add together in the same way as each other, so the truth of Pick's formula for general polygons follows from its truth for triangles. Any triangle subdivides its bounding box into the triangle itself and additional right triangles, and the areas of both the bounding box and the right triangles are easy to compute. Combining these area computations gives Pick's formula for triangles, and combining triangles gives Pick's formula for arbitrary polygons. Alternatively, instead of using grid squares centered on the grid points, it is possible to use grid squares having their vertices at the grid points. These grid squares cut the given polygon into pieces, which can be rearranged (by matching up pairs of squares along each edge of the polygon) into a polyomino with the same area. Pick's theorem may also be proved based on complex integration of a doubly periodic function related to Weierstrass elliptic functions. Applying the Poisson summation formula to the characteristic function of the polygon leads to another proof. Pick's theorem was included in a 1999 web listing of the "top 100 mathematical theorems", which was later used by Freek Wiedijk as a benchmark set to test the power of different proof assistants. Pick's theorem had been formalized and proven in only two of the ten proof assistants recorded by Wiedijk. Generalizations Generalizations of Pick's theorem to non-simple polygons are more complicated and require more information than just the number of interior and boundary vertices. For instance, a polygon with h holes bounded by simple integer polygons, disjoint from each other and from the boundary, has area A = i + b/2 + h − 1. It is also possible to generalize Pick's theorem to regions bounded by more complex planar straight-line graphs with integer vertex coordinates, using additional terms defined using the Euler characteristic of the region and its boundary, or to polygons with a single boundary polygon that can cross itself, using a formula involving the winding number of the polygon around each integer point as well as its total winding number. The Reeve tetrahedra in three dimensions have four integer points as vertices and contain no other integer points, but do not all have the same volume. Therefore, there does not exist an analogue of Pick's theorem in three dimensions that expresses the volume of a polyhedron as a function only of its numbers of interior and boundary points. However, these volumes can instead be expressed using Ehrhart polynomials. Related topics Several other mathematical topics relate the areas of regions to the numbers of grid points. Blichfeldt's theorem states that every shape can be translated to contain at least its area in grid points.
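Returning to the formula itself: Pick's theorem is easy to exercise computationally. In the sketch below (the helper names are mine; the shoelace area and the gcd count of boundary lattice points are standard techniques), the rearranged formula i = A − b/2 + 1 recovers the interior count:

```python
from math import gcd

def shoelace_area(vertices):
    """Area of a simple polygon from its vertex coordinates (shoelace formula)."""
    pairs = zip(vertices, vertices[1:] + vertices[:1])
    return abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in pairs)) / 2

def boundary_points(vertices):
    """Lattice points on the boundary: each edge contributes gcd(|dx|, |dy|)."""
    pairs = zip(vertices, vertices[1:] + vertices[:1])
    return sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in pairs)

# Rearranging Pick's theorem gives the interior count: i = A - b/2 + 1.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
A, b = shoelace_area(square), boundary_points(square)
print(A, b, A - b / 2 + 1)  # 16.0 16 9.0 -> i = 9 interior points, as expected
```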
The Gauss circle problem concerns bounding the error between the areas and numbers of grid points in circles. The problem of counting integer points in convex polyhedra arises in several areas of mathematics and computer science. In application areas, the dot planimeter is a transparency-based device for estimating the area of a shape by counting the grid points that it contains. The Farey sequence is an ordered sequence of rational numbers with bounded denominators whose analysis involves Pick's theorem. Another simple method for calculating the area of a polygon is the shoelace formula. It gives the area of any simple polygon as a sum of terms computed from the coordinates of consecutive pairs of its vertices. Unlike Pick's theorem, the shoelace formula does not require the vertices to have integer coordinates. References External links Pick's Theorem by Ed Pegg, Jr., the Wolfram Demonstrations Project. Pi using Pick's Theorem by Mark Dabbs, GeoGebra Digital geometry Lattice points Euclidean plane geometry Area Theorems about polygons Articles containing proofs Analytic geometry
Pick's theorem
[ "Physics", "Mathematics" ]
1,672
[ "Scalar physical quantities", "Planes (geometry)", "Physical quantities", "Quantity", "Lattice points", "Euclidean plane geometry", "Size", "Number theory", "Articles containing proofs", "Wikipedia categories named after physical quantities", "Area" ]
328,274
https://en.wikipedia.org/wiki/Palynology
Palynology is the study of microorganisms and microscopic fragments of mega-organisms that are composed of acid-resistant organic material and occur in sediments, sedimentary rocks, and even some metasedimentary rocks. Palynomorphs are the microscopic, acid-resistant organic remains and debris produced by a wide variety of plants, animals, and Protista that have existed since the late Proterozoic. It is the science that studies contemporary and fossil palynomorphs (paleopalynology), including pollen, spores, orbicules, dinocysts, acritarchs, chitinozoans and scolecodonts, together with particulate organic matter (POM) and kerogen found in sedimentary rocks and sediments. Palynology does not include diatoms, foraminiferans or other organisms with siliceous or calcareous tests. The name of the science and organisms is derived from the Greek , "strew, sprinkle", and -logy, referring to "particles that are strewn". Palynology is an interdisciplinary science that stands at the intersection of earth science (geology or geological science) and biological science (biology), particularly plant science (botany). Biostratigraphy, a branch of paleontology and paleobotany, involves fossil palynomorphs from the Precambrian to the Holocene for their usefulness in the relative dating and correlation of sedimentary strata. Palynology is also used to date and understand the evolution of many kinds of plants and animals. In paleoclimatology, fossil palynomorphs are studied for their usefulness in understanding ancient Earth history in terms of reconstructing paleoenvironments and paleoclimates. Palynology is quite useful in disciplines such as archeology, in honey production, and in criminal and civil law. In archaeology, palynology is widely used to reconstruct ancient paleoenvironments and environmental shifts that significantly influenced past human societies, and to reconstruct the diet of prehistoric and historic humans. Melissopalynology, the study of pollen and other palynomorphs in honey, identifies the sources of pollen in terms of geographical location(s) and genera of plants. This not only provides important information on the ecology of honey bees, it is also an important tool for discovering and policing the criminal adulteration and mislabeling of honey and its products. Forensic palynology uses palynomorphs as evidence in criminal and civil law to prove or disprove a physical link between objects, people, and places. Palynomorphs Palynomorphs are broadly defined as organic remains, including microfossils and microscopic fragments of mega-organisms, that are composed of acid-resistant organic material and range in size between 5 and 500 micrometres. They are extracted from soils, sedimentary rocks and sediment cores, and other materials by a combination of physical (ultrasonic treatment and wet sieving) and chemical (acid digestion) procedures to remove the non-organic fraction. Palynomorphs may be composed of organic material such as chitin, pseudochitin and sporopollenin. Palynomorphs form a geological record of importance in determining the type of prehistoric life that existed at the time the sedimentary strata was laid down. As a result, these microfossils give important clues to the prevailing climatic conditions of the time. Their paleontological utility derives from an abundance numbering in millions of palynomorphs per gram in organic marine deposits, even when such deposits are generally not fossiliferous.
Palynomorphs, however, generally have been destroyed in metamorphic or recrystallized rocks. Typical palynomorphs include dinoflagellate cysts, acritarchs, spores, pollen, plant tissue, fungi, scolecodonts (scleroprotein teeth, jaws, and associated features of polychaete annelid worms), arthropod organs (such as insect mouthparts), and chitinozoans. Palynomorph microscopic structures that are abundant in most sediments are resistant to routine pollen extraction. Palynofacies A palynofacies is the complete assemblage of organic matter and palynomorphs in a fossil deposit. The term was introduced by the French geologist in 1964. Palynofacies studies are often linked to investigations of the organic geochemistry of sedimentary rocks. The study of the palynofacies of a sedimentary depositional environment can be used to learn about the depositional palaeoenvironments of sedimentary rocks in exploration geology, often in conjunction with palynological analysis and vitrinite reflectance. Palynofacies can be used in two ways: Organic palynofacies considers all the acid-insoluble particulate organic matter (POM), including kerogen and palynomorphs, in sediments and in palynological preparations of sedimentary rocks. The sieved or unsieved preparations may be made into strew mounts on microscope slides and examined using a transmitted-light biological microscope or an ultraviolet (UV) fluorescence microscope. The abundance, composition and preservation of the various components, together with the thermal alteration of the organic matter, are considered. Palynomorph palynofacies considers the abundance, composition and diversity of palynomorphs in a sieved palynological preparation of sediments or a palynological preparation of sedimentary rocks. The ratio of marine fossil phytoplankton (acritarchs and dinoflagellate cysts), together with chitinozoans, to terrestrial palynomorphs (pollen and spores) can be used to derive a terrestrial input index in marine sediments. History Early history The earliest reported observations of pollen under a microscope are likely to have been in the 1640s by the English botanist Nehemiah Grew, who described pollen and the stamen, and concluded that pollen is required for sexual reproduction in flowering plants. By the late 1870s, as optical microscopes improved and the principles of stratigraphy were worked out, Robert Kidston and P. Reinsch were able to examine the presence of fossil spores in the Devonian and Carboniferous coal seams and make comparisons between the living spores and the ancient fossil spores. Early investigators include Christian Gottfried Ehrenberg (radiolarians, diatoms and dinoflagellate cysts), Gideon Mantell (desmids) and Henry Hopley White (dinoflagellate cysts). 1890s to 1940s Quantitative analysis of pollen began with Lennart von Post's published work. Although he published in the Swedish language, his methodology gained a wide audience through his lectures. In particular, his Kristiania lecture of 1916 was important in gaining a wider audience. Because the early investigations were published in the Nordic languages (Scandinavian languages), the field of pollen analysis was confined to those countries. The isolation ended with the German publication of Gunnar Erdtman's 1921 thesis. The methodology of pollen analysis became widespread throughout Europe and North America and revolutionized Quaternary vegetation and climate change research.
Earlier pollen researchers include Früh (1885), who enumerated many common tree pollen types, and a considerable number of spores and herb pollen grains. Trybom (1888) studied pollen samples taken from sediments of Swedish lakes; pine and spruce pollen was found in such profusion that he considered them to be serviceable as "index fossils". Georg F. L. Sarauw studied fossil pollen of middle Pleistocene age (Cromerian) from the harbour of Copenhagen. Lagerheim (in Witte 1905) and C. A. Weber (in H. A. Weber 1918) appear to be among the first to undertake 'percentage frequency' calculations. 1940s to 1989 The term palynology was introduced by Hyde and Williams in 1944, following correspondence with the Swedish geologist Ernst Antevs, in the pages of the Pollen Analysis Circular (one of the first journals devoted to pollen analysis, produced by Paul Sears in North America). Hyde and Williams chose palynology on the basis of the Greek words paluno meaning 'to sprinkle' and pale meaning 'dust' (and thus similar to the Latin word pollen). The archive-based background to the adoption of the term palynology and to alternative names (e.g. paepalology, pollenology) has been exhaustively explored. It has been argued there that the word gained general acceptance once used by the influential Swedish palynologist Gunnar Erdtman. Pollen analysis in North America stemmed from Phyllis Draper, an MS student under Sears at the University of Oklahoma. During her time as a student, she developed the first pollen diagram from a sample that depicted the percentage of several species at different depths at Curtis Bog. This was the introduction of pollen analysis in North America; pollen diagrams today still often remain in the same format, with depth on the y-axis and abundances of species on the x-axis.
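The arithmetic behind such a diagram is straightforward: raw grain counts per depth are converted to percentages of the pollen sum before plotting. A minimal sketch (the taxa and counts are invented for illustration):

```python
# Convert raw pollen counts per depth into the percentages plotted in a
# pollen diagram (depth on the y-axis, percent abundance on the x-axis).
counts_by_depth = {  # depth in cm -> grains counted per taxon (invented data)
    50:  {"Pinus": 120, "Picea": 60, "Poaceae": 20},
    100: {"Pinus": 90,  "Picea": 80, "Poaceae": 30},
}

for depth, counts in sorted(counts_by_depth.items()):
    pollen_sum = sum(counts.values())
    percentages = {taxon: 100 * n / pollen_sum for taxon, n in counts.items()}
    print(depth, {t: round(p, 1) for t, p in percentages.items()})
```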
Analysis Once samples have been prepared chemically, they are mounted on microscope slides using silicone oil, glycerol or glycerol jelly and examined using light microscopy, or mounted on a stub for scanning electron microscopy. Researchers will often study either modern samples from a number of unique sites within a given area, or samples from a single site with a record through time, such as samples obtained from peat or lake sediments. More recent studies have used the modern analog technique, in which paleo-samples are compared to modern samples for which the parent vegetation is known. When the slides are observed under a microscope, the researcher counts the number of grains of each pollen taxon. This record is next used to produce a pollen diagram. These data can be used to detect anthropogenic effects, such as logging, traditional patterns of land use, or long-term changes in regional climate. Applications Palynology can be applied to problems in many scientific disciplines including geology, botany, paleontology, archaeology, pedology (soil study), and physical geography: Biostratigraphy and geochronology. Geologists use palynological studies in biostratigraphy to correlate strata and determine the relative age of a given bed, horizon, formation or stratigraphical sequence. Because the distribution of acritarchs, chitinozoans, dinoflagellate cysts, pollen and spores provides evidence of stratigraphical correlation through biostratigraphy and palaeoenvironmental reconstruction, one common and lucrative application of palynology is in oil and gas exploration. Paleoecology and climate change. Palynology can be used to reconstruct past vegetation (land plants) and marine and freshwater phytoplankton communities, and so infer past environmental (palaeoenvironmental) and palaeoclimatic conditions in an area thousands or millions of years ago, a fundamental part of research into climate change. Organic palynofacies studies, which examine the preservation of the particulate organic matter and palynomorphs, provide information on the depositional environment of sediments and the depositional palaeoenvironments of sedimentary rocks. Geothermal alteration studies examine the colour of palynomorphs extracted from rocks to give the thermal alteration and maturation of sedimentary sequences, which provides estimates of maximum palaeotemperatures. Limnology studies. Freshwater palynomorphs and animal and plant fragments, including the prasinophytes and desmids (green algae), can be used to study past lake levels and long-term climate change. Taxonomy and evolutionary studies, involving the use of pollen morphological characters as a source of taxonomic data to delimit plant species within the same family or genus. Pollen apertural status is frequently used for differential sorting or for finding similarities between species of the same taxa; this is also called palynotaxonomy. Forensic palynology: the study of pollen and other palynomorphs for evidence at a crime scene. Allergy studies and pollen counting. Studies of the geographic distribution and seasonal production of pollen can be used to forecast pollen conditions, helping sufferers of allergies such as hay fever. Melissopalynology: the study of pollen and spores found in honey. Archaeological palynology examines human uses of plants in the past. This can help determine seasonality of site occupation, presence or absence of agricultural practices or products, and 'plant-related activity areas' within an archaeological context.
See also References Sources Moore, P.D., et al. (1991), Pollen Analysis (Second Edition). Blackwell Scientific Publications. Traverse, A. (1988), Paleopalynology. Unwin Hyman. Roberts, N. (1998), The Holocene: An Environmental History. Blackwell Publishing. External links The AASP - The Palynological Society International Federation of Palynological Societies Palynology Laboratory, French Institute of Pondicherry, India The Palynology Unit, Kew Gardens, UK PalDat, palynological database hosted by the University of Vienna, Austria The Micropalaeontological Society Commission Internationale de Microflore du Paléozoique (CIMP), International Commission for Palaeozoic Palynology Centre for Palynology, University of Sheffield, UK Linnean Society Palynology Specialist Group (LSPSG) Canadian Association of Palynologists Pollen and Spore Identification Literature Palynologische Kring, The Netherlands and Belgium Palynofacies, an annotated link directory. Acosta et al., 2018. Climate change and peopling of the Neotropics during the Pleistocene-Holocene transition. Boletín de la Sociedad Geológica Mexicana. http://boletinsgm.igeolcu.unam.mx/bsgm/index.php/component/content/article/368-sitio/articulos/cuarta-epoca/7001/1857-7001-1-Acosta Earth sciences Archaeological science Subfields of paleontology Microfossils Branches of botany Sedimentology
Palynology
[ "Chemistry", "Biology" ]
3,268
[ "Branches of botany", "Microfossils", "Microscopy" ]
328,684
https://en.wikipedia.org/wiki/Annihilator%20%28ring%20theory%29
In mathematics, the annihilator of a subset S of a module over a ring is the ideal formed by the elements of the ring that always give zero when multiplied by each element of S. Over an integral domain, a module that has a nonzero annihilator is a torsion module, and a finitely generated torsion module has a nonzero annihilator. The above definition applies also in the case of noncommutative rings, where the left annihilator of a left module is a left ideal, and the right annihilator of a right module is a right ideal. Definitions Let R be a ring, and let M be a left R-module. Choose a non-empty subset S of M. The annihilator of S, denoted AnnR(S), is the set of all elements r in R such that, for all s in S, rs = 0. In set notation, AnnR(S) = {r ∈ R : rs = 0 for all s ∈ S}. It is the set of all elements of R that "annihilate" S (the elements for which S is a torsion set). Subsets of right modules may be used as well, after the modification of "sr = 0" in the definition. The annihilator of a single element x is usually written AnnR(x) instead of AnnR({x}). If the ring R can be understood from the context, the subscript R can be omitted. Since R is a module over itself, S may be taken to be a subset of R itself, and since R is both a right and a left R-module, the notation must be modified slightly to indicate the left or right side. Usually ℓ.AnnR(S) and r.AnnR(S) or some similar subscript scheme are used to distinguish the left and right annihilators, if necessary. If M is an R-module and AnnR(M) = 0, then M is called a faithful module. Properties If S is a subset of a left R-module M, then Ann(S) is a left ideal of R. If S is a submodule of M, then AnnR(S) is even a two-sided ideal: (ac)s = a(cs) = 0, since cs is another element of S. If S is a subset of M and N is the submodule of M generated by S, then in general AnnR(N) is a subset of AnnR(S), but they are not necessarily equal. If R is commutative, then the equality holds. M may also be viewed as an R/AnnR(M)-module using the action (r + AnnR(M))·m := rm. Incidentally, it is not always possible to make an R-module into an R/I-module this way, but if the ideal I is a subset of the annihilator of M, then this action is well-defined. Considered as an R/AnnR(M)-module, M is automatically a faithful module. For commutative rings Throughout this section, let R be a commutative ring and M a finitely generated R-module. Relation to support The support of a module is defined as Supp(M) = {p ∈ Spec(R) : Mp ≠ 0}. Then, when the module is finitely generated, there is the relation Supp(M) = V(AnnR(M)), where V(·) is the set of prime ideals containing the subset. Short exact sequences Given a short exact sequence of modules 0 → M′ → M → M″ → 0, the support property Supp(M) = Supp(M′) ∪ Supp(M″), together with the relation with the annihilator, implies the relations AnnR(M′) ∩ AnnR(M″) ⊇ AnnR(M) ⊇ AnnR(M′)AnnR(M″). If the sequence splits then the inequality on the left is always an equality. This holds for arbitrary direct sums of modules, as AnnR(⊕i Mi) = ∩i AnnR(Mi). Quotient modules and annihilators Given an ideal I ⊆ R and a finitely generated module M, there is the relation Supp(M/IM) = Supp(M) ∩ V(I) on the support. Using the relation to support, this gives the relation with the annihilator AnnR(M/IM) ⊇ AnnR(M) + I. Examples Over the integers Over ℤ, any finitely generated module is completely classified as the direct sum of its free part with its torsion part by the fundamental theorem of finitely generated abelian groups. The annihilator of a finitely generated module is then non-trivial only if the module is entirely torsion, because the only integer killing a free summand ℤ is 0. For example, the annihilator of ℤ/2 ⊕ ℤ/3 is the ideal generated by 6. In fact the annihilator of a finitely generated torsion module ⊕i ℤ/ai is the ideal generated by the least common multiple of the ai, (lcm(a1, …, an)).
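A quick computational illustration of this least-common-multiple description, using only Python's standard library; the module orders are arbitrary examples:

from math import lcm

# The annihilator of Z/a1 + ... + Z/an (direct sum) in Z is the ideal
# generated by lcm(a1, ..., an), since an integer kills every summand
# exactly when every ai divides it.
def annihilator_generator(orders):
    return lcm(*orders)

print(annihilator_generator([2, 3]))      # 6  -> Ann = (6)
print(annihilator_generator([4, 6, 10]))  # 60 -> Ann = (60)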
This shows that annihilators can easily be classified over the integers. Over a commutative ring R There is a similar computation that can be done for any finitely presented module over a commutative ring R. The definition of finite presentedness of M implies there exists an exact sequence, called a presentation, given by R⊕l → R⊕k → M → 0, where the map R⊕l → R⊕k is given by a matrix φ in Matk,l(R). Writing φ explicitly as a matrix of entries fij gives M a direct sum decomposition M = ⊕i=1,…,k R/Ii. If each of these ideals is written as Ii = (fi1, …, fil) then the ideal I = I1 ∩ ⋯ ∩ Ik presents the annihilator. Over k[x,y] Over the commutative ring k[x,y] for a field k, the annihilator of the module is given by the ideal Chain conditions on annihilator ideals The lattice of ideals of the form ℓ.Ann(S), where S is a subset of R, is a complete lattice when partially ordered by inclusion. There is interest in studying rings for which this lattice (or its right counterpart) satisfies the ascending chain condition or descending chain condition. Denote the lattice of left annihilator ideals of R as 𝓛 and the lattice of right annihilator ideals of R as 𝓡. It is known that 𝓛 satisfies the ascending chain condition if and only if 𝓡 satisfies the descending chain condition, and symmetrically 𝓡 satisfies the ascending chain condition if and only if 𝓛 satisfies the descending chain condition. If either lattice has either of these chain conditions, then R has no infinite pairwise orthogonal sets of idempotents. If R is a ring for which 𝓛 satisfies the A.C.C. and RR has finite uniform dimension, then R is called a left Goldie ring. Category-theoretic description for commutative rings When R is commutative and M is an R-module, we may describe AnnR(M) as the kernel of the action map R → EndR(M) determined by the adjunct map of the identity M → M along the Hom-tensor adjunction. More generally, given a bilinear map of modules F : M × N → P, the annihilator of a subset S ⊆ M is the set of all elements in N that annihilate S: Ann(S) = {n ∈ N : F(s, n) = 0 for all s ∈ S}. Conversely, given T ⊆ N, one can define an annihilator as a subset of M. The annihilator gives a Galois connection between subsets of M and N, and the associated closure operator is stronger than the span. In particular: annihilators are submodules. An important special case is in the presence of a nondegenerate form on a vector space, particularly an inner product: then the annihilator associated to the map V × V → K is called the orthogonal complement. Relations to other properties of rings Given a module M over a Noetherian commutative ring R, a prime ideal of R that is an annihilator of a nonzero element of M is called an associated prime of M. Annihilators are used to define left Rickart rings and Baer rings. The set of (left) zero divisors DS of S can be written as DS = ∪s∈S, s≠0 Ann(s). (Here we allow zero to be a zero divisor.) In particular DR is the set of (left) zero divisors of R, taking S = R and R acting on itself as a left R-module. When R is commutative and Noetherian, the set DR is precisely equal to the union of the associated primes of the R-module R. See also Faltings' annihilator theorem Socle Support of a module Notes References Israel Nathan Herstein (1968) Noncommutative Rings, Carus Mathematical Monographs #15, Mathematical Association of America, page 3. Richard S. Pierce. Associative algebras. Graduate Texts in Mathematics, Vol. 88, Springer-Verlag, 1982, Ideals (ring theory) Module theory Ring theory
Annihilator (ring theory)
[ "Mathematics" ]
1,632
[ "Fields of abstract algebra", "Ring theory", "Module theory" ]
13,328,505
https://en.wikipedia.org/wiki/Combustion%20light-gas%20gun
A combustion light-gas gun (CLGG) is a projectile weapon that utilizes the explosive force of low molecular-weight combustible gases, such as hydrogen mixed with oxygen, as propellant. When the gases are ignited, they burn, expand and propel the projectile out of the barrel with higher efficiency than solid propellants, and have achieved higher muzzle velocities in experiments. Combustion light-gas gun technology is one of the areas being explored in an attempt to achieve higher velocities from artillery and thereby gain greater range. Conventional guns use solid propellants, usually nitrocellulose-based compounds, to develop the chamber pressures needed to accelerate the projectiles. The gaseous propellants used in CLGGs provide a higher specific impulse; hydrogen is therefore typically the first choice, although other propellants such as methane can be used. While this technology does appear to provide higher velocities, the main drawback of gaseous or liquid propellants for gun systems is the difficulty in achieving uniform and predictable ignition and muzzle velocities. Variance in muzzle velocity affects precision in range, and the further a weapon shoots, the more significant these variances become. If an artillery system cannot maintain uniform and predictable muzzle velocities, it will be of no use at longer ranges. Another issue is the survival of projectile payloads at higher accelerations. Fuzes, explosive fill, and guidance systems all must be "hardened" against the significant acceleration loads of conventional artillery to survive and function properly. Higher-velocity weapons like the CLGG face these engineering challenges as they push firing accelerations higher. The research and development firm UTRON, Inc. is experimenting with a combustion light-gas gun design for field use. The corporation claims to have a system ready for testing as a potential long-range naval fire support weapon for emerging ships, such as the Zumwalt-class destroyer. The CLGG, like the railgun, is a possible candidate technology for achieving greater ranges in naval systems, among others. UTRON has built and tested 45mm and 155mm combustion light-gas guns. See also Light-gas gun Scram cannon Electrothermal-chemical technology Potato cannon References https://apps.dtic.mil/dtic/tr/fulltext/u2/a462130.pdf UTRON 2006 Test Report Artillery by type Ballistics
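The advantage of a low molecular-weight propellant gas can be illustrated with the ideal-gas speed of sound, c = sqrt(gamma*R*T/M), which limits how quickly the expanding gas can follow the projectile. The temperature, heat-capacity ratios, and molar masses below are rough illustrative assumptions, not measurements from any actual gun:

from math import sqrt

R = 8.314  # J/(mol*K), universal gas constant

# Ideal-gas speed of sound c = sqrt(gamma * R * T / M). A lighter gas
# (smaller molar mass M) gives a higher c at the same temperature,
# which is why hydrogen-rich propellant gas favors high muzzle velocity.
def speed_of_sound(gamma, temp_kelvin, molar_mass_kg_per_mol):
    return sqrt(gamma * R * temp_kelvin / molar_mass_kg_per_mol)

T = 2500.0  # K, assumed combustion temperature (illustrative)
print(speed_of_sound(1.4, T, 0.002))   # excess hydrogen, M ~ 2 g/mol
print(speed_of_sound(1.25, T, 0.025))  # typical solid-propellant gas, M ~ 25 g/mol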
Combustion light-gas gun
[ "Physics" ]
498
[ "Applied and interdisciplinary physics", "Ballistics" ]
13,330,831
https://en.wikipedia.org/wiki/Ostwald%27s%20rule
In materials science, Ostwald's rule or Ostwald's step rule, conceived by Wilhelm Ostwald, describes the formation of polymorphs. The rule states that usually the less stable polymorph crystallizes first. Ostwald's rule is not a universal law but a common tendency observed in nature. It can be explained on the basis of irreversible thermodynamics, structural relationships, or a combined consideration of statistical thermodynamics and structural variation with temperature. Unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged. For example, out of hot water, metastable, fibrous crystals of benzamide appear first, only later to spontaneously convert to the more stable rhombic polymorph. A dramatic example is phosphorus, which upon sublimation first forms the less stable white phosphorus, which only slowly polymerizes to the red allotrope. The rule is also notably illustrated by the anatase polymorph of titanium dioxide, which, having a lower surface energy, is commonly the first phase to form by crystallisation from amorphous precursors or solutions despite being metastable, with rutile being the equilibrium phase at all temperatures and pressures. References Mineralogy Gemology Crystallography
Ostwald's rule
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
273
[ "Crystallography", "Polymorphism (materials science)", "Condensed matter physics", "Materials science" ]
13,337,084
https://en.wikipedia.org/wiki/Nanomesh
The nanomesh is an inorganic nanostructured two-dimensional material, similar to graphene. It was discovered in 2003 at the University of Zurich, Switzerland. It consists of a single layer of boron (B) and nitrogen (N) atoms, which forms by self-assembly into a highly regular mesh after high-temperature exposure of a clean rhodium or ruthenium surface to borazine under ultra-high vacuum. The nanomesh looks like an assembly of hexagonal pores (see right image) at the nanometer (nm) scale. The distance between two pore centers is only 3.2 nm, whereas each pore has a diameter of about 2 nm and is 0.05 nm deep. The lowest regions bind strongly to the underlying metal, while the wires (highest regions) are bound to the surface only through strong cohesive forces within the layer itself. The boron nitride nanomesh is not only stable under vacuum, air and some liquids, but also up to temperatures of 796 °C (1070 K). In addition it shows the extraordinary ability to trap molecules and metallic clusters, which have sizes similar to the nanomesh pores, forming a well-ordered array. These characteristics may provide applications of the material in areas like surface functionalisation, spintronics, quantum computing and data storage media like hard drives. Structure h-BN nanomesh is a single sheet of hexagonal boron nitride, which forms on substrates like rhodium Rh(111) or ruthenium Ru(0001) crystals by a self-assembly process. The unit cell of the h-BN nanomesh consists of 13x13 BN or 12x12 Rh atoms with a lattice constant of 3.2 nm. In cross-section this means that 13 boron or nitrogen atoms sit on 12 rhodium atoms. This implies a modification of the relative positions of each BN unit towards the substrate atoms within a unit cell, where some bonds are more attractive or repulsive than others (site-selective bonding), which induces the corrugation of the nanomesh (see right image with pores and wires). The nanomesh corrugation amplitude of 0.05 nm causes a strong effect on the electronic structure, where two distinct BN regions are observed. They are easily recognized in the lower right image, which is a scanning tunneling microscopy (STM) measurement, as well as in the lower left image representing a theoretical calculation of the same area. A strongly bound region assigned to the pores is visible in blue in the left image below (center of bright rings in the right image) and a weakly bound region assigned to the wires appears yellow-red in the left image below (area in-between rings in the right image). See the references for more details. Properties The nanomesh is stable under a wide range of environments such as air, water and electrolytes, among others. It is also temperature resistant, since it does not decompose at temperatures up to 1275 K under vacuum. In addition to these exceptional stabilities, the nanomesh shows the extraordinary ability to act as a scaffold for metallic nanoclusters and to trap molecules, forming a well-ordered array. In the case of gold (Au), its evaporation on the nanomesh leads to the formation of well-defined round Au nanoparticles, which are centered at the nanomesh pores. The STM figure on the right shows Naphthalocyanine (Nc) molecules, which were vapor-deposited onto the nanomesh. These planar molecules have a diameter of about 2 nm, comparable to that of the nanomesh pores (see upper inset). It is clearly visible how the molecules form a well-ordered array with the periodicity of the nanomesh (3.22 nm).
The lower inset shows a region of this substrate at higher resolution, where individual molecules are trapped inside the pores. In addition, the molecules seem to keep their native conformation, which means that their functionality is retained; preserving functionality in this way is currently a challenge in nanoscience. Such systems with wide spacing between individual molecules/clusters and negligible intermolecular interactions might be interesting for applications such as molecular electronics and memory elements, in photochemistry or in optical devices. See the references for more detailed information. Preparation and analysis Well-ordered nanomeshes are grown by thermal decomposition of borazine (HBNH)3, a colorless substance that is liquid at room temperature. The nanomesh results after exposing the atomically clean Rh(111) or Ru(0001) surface to borazine by chemical vapor deposition (CVD). The substrate is kept at a temperature of 796 °C (1070 K) while borazine is introduced into the vacuum chamber at a dose of about 40 L (1 langmuir = 10−6 torr·s). A typical borazine vapor pressure inside the ultrahigh vacuum chamber during the exposure is 3×10−7 mbar. After cooling down to room temperature, the regular mesh structure is observed using different experimental techniques. Scanning tunneling microscopy (STM) gives a direct look at the local real-space structure of the nanomesh, while low-energy electron diffraction (LEED) gives information about the surface structures ordered over the whole sample. Ultraviolet photoelectron spectroscopy (UPS) gives information about the electronic states in the outermost atomic layers of a sample, i.e. electronic information about the top substrate layers and the nanomesh. See also Other forms CVD of borazine on other substrates has not so far led to the formation of a corrugated nanomesh. A flat BN layer is observed on nickel and palladium, whereas striped structures appear on molybdenum instead. References and notes Other links http://www.nanomesh.ch http://www.nanomesh.org Two-dimensional nanomaterials Self-organization Thin films Nitrides Boron compounds III-V compounds Transition metals NASA spin-off technologies
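Two pieces of arithmetic implicit in the figures above can be checked directly: the 13-on-12 coincidence between BN and the Rh(111) surface, and the exposure time implied by the quoted borazine dose. The rhodium lattice constant used below is a standard textbook value, assumed here rather than taken from the nanomesh papers:

from math import sqrt

# 13-on-12 coincidence lattice: 12 Rh(111) nearest-neighbour spacings
# should reproduce the 3.2 nm nanomesh unit cell carrying 13 BN units.
a_rh_bulk = 0.380               # nm, fcc lattice constant of rhodium (textbook value)
d_rh_111 = a_rh_bulk / sqrt(2)  # nearest-neighbour spacing on Rh(111), ~0.269 nm
cell = 12 * d_rh_111
print(f"12 x {d_rh_111:.3f} nm = {cell:.2f} nm")   # ~3.22 nm
print(f"implied BN-BN spacing: {cell / 13:.3f} nm")

# Exposure time for a 40 L borazine dose at 3e-7 mbar,
# using 1 L = 1e-6 torr*s and 1 mbar ~ 0.75 torr.
dose_langmuir = 40.0
pressure_torr = 3e-7 * 0.750062
time_s = dose_langmuir * 1e-6 / pressure_torr
print(f"exposure time: {time_s:.0f} s (~{time_s / 60:.1f} min)")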
Nanomesh
[ "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,274
[ "Self-organization", "Inorganic compounds", "Materials science", "Nanotechnology", "Planes (geometry)", "III-V compounds", "Thin films", "Dynamical systems" ]
13,337,318
https://en.wikipedia.org/wiki/Model-based%20design
Model-based design (MBD) is a mathematical and visual method of addressing problems associated with designing complex control, signal processing and communication systems. It is used in many motion control, industrial equipment, aerospace, and automotive applications. Model-based design is a methodology applied in designing embedded software. Overview Model-based design provides an efficient approach for establishing a common framework for communication throughout the design process while supporting the development cycle (V-model). In model-based design of control systems, development is manifested in these four steps: modeling a plant, analyzing and synthesizing a controller for the plant, simulating the plant and controller, and integrating all these phases by deploying the controller. Model-based design is significantly different from traditional design methodology. Rather than using complex structures and extensive software code, designers can use model-based design to define plant models with advanced functional characteristics using continuous-time and discrete-time building blocks. These models, used with simulation tools, can lead to rapid prototyping, software testing, and verification. Not only is the testing and verification process enhanced, but also, in some cases, hardware-in-the-loop simulation can be used with the new design paradigm to perform testing of dynamic effects on the system more quickly and much more efficiently than with traditional design methodology. History As early as the 1920s two aspects of engineering, control theory and control systems, converged to make large-scale integrated systems possible. In those early days control systems were commonly used in the industrial environment. Large process facilities started using process controllers for regulating continuous variables such as temperature, pressure, and flow rate. Electrical relays built into ladder-like networks were one of the first discrete control devices to automate an entire manufacturing process. Control systems gained momentum, primarily in the automotive and aerospace sectors. In the 1950s and 1960s, the push to space generated interest in embedded control systems. Engineers constructed control systems, such as engine control units and flight simulators, that could be part of the end product. By the end of the twentieth century, embedded control systems were ubiquitous, as even major household consumer appliances such as washing machines and air conditioners contained complex and advanced control algorithms, making them much more "intelligent". In 1969, the first computer-based controllers were introduced. These early programmable logic controllers (PLCs) mimicked the operations of already available discrete control technologies that used the outdated relay ladders. The advent of PC technology brought a drastic shift in the process and discrete control market. An off-the-shelf desktop loaded with adequate hardware and software can run an entire process unit and execute complex and established PID algorithms or work as a Distributed Control System (DCS). Steps The main steps in the model-based design approach are: Plant modeling. Plant modeling can be data-driven or based on first principles. Data-driven plant modeling uses techniques such as system identification. With system identification, the plant model is identified by acquiring and processing raw data from a real-world system and choosing a mathematical algorithm with which to identify a mathematical model, as in the sketch below.
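A minimal sketch of this identification step, assuming a first-order discrete-time model structure y[k+1] = a*y[k] + b*u[k] fitted by least squares; the model form, "true" parameter values, and noise level are illustrative assumptions, not part of any particular tool:

import numpy as np

# Synthetic "measured" data from an unknown first-order plant.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)            # input signal applied to the plant
y = np.zeros(201)
for k in range(200):                   # hidden true plant: a=0.9, b=0.5
    y[k + 1] = 0.9 * y[k] + 0.5 * u[k] + 0.01 * rng.standard_normal()

# System identification: solve [y[k] u[k]] @ [a b]^T ~ y[k+1] in the
# least-squares sense to estimate the plant parameters from data.
phi = np.column_stack([y[:-1], u])     # regressor matrix
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
print(f"estimated a = {theta[0]:.3f}, b = {theta[1]:.3f}")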
Various kinds of analysis and simulations can be performed using the identified model before it is used to design a model-based controller. First-principles based modeling is based on creating a block diagram model that implements known differential-algebraic equations governing plant dynamics. A type of first-principles based modeling is physical modeling, where a model consists of connected blocks that represent the physical elements of the actual plant. Controller analysis and synthesis. The mathematical model conceived in step 1 is used to identify dynamic characteristics of the plant model. A controller can then be synthesized based on these characteristics. Offline simulation and real-time simulation. The time response of the dynamic system to complex, time-varying inputs is investigated. This is done by simulating a simple LTI (Linear Time-Invariant) model or by simulating a non-linear model of the plant with the controller. Simulation allows specification, requirements, and modeling errors to be found immediately, rather than later in the design effort. Real-time simulation can be done by automatically generating code for the controller developed in step 2. This code can be deployed to a special real-time prototyping computer that can run the code and control the operation of the plant. If a plant prototype is not available, or testing on the prototype is dangerous or expensive, code can be automatically generated from the plant model. This code can be deployed to the special real-time computer that can be connected to the target processor running the controller code. Thus a controller can be tested in real time against a real-time plant model. Deployment. Ideally this is done via code generation from the controller developed in step 2. It is unlikely that the controller will work on the actual system as well as it did in simulation, so an iterative debugging process is carried out by analyzing results on the actual target and updating the controller model. Model-based design tools allow all these iterative steps to be performed in a unified visual environment. Disadvantages The disadvantages of model-based design are fairly well understood this late in the development lifecycle of the product. One major disadvantage is that the approach is a blanket, catch-all approach to standard embedded and systems development. Often the time it takes to port between processors and ecosystems can outweigh the time savings the approach offers in simpler lab-based implementations. Much of the compilation tool chain is closed source and prone to fencepost errors and other common compilation errors that are easily corrected in traditional systems engineering. Design and reuse patterns can lead to implementations of models that are not well suited to the task, such as implementing a controller for a conveyor-belt production facility that uses a thermal sensor, a speed sensor, and a current sensor. Such a model is generally not well suited for re-implementation in, say, a motor controller, though it is very easy to port the model over and introduce all the software faults therein. Version control issues: Model-based design can encounter significant challenges due to the lack of high-quality tools for managing version control, particularly for handling diff and merge operations. This can lead to difficulties in managing concurrent changes and maintaining robust revision control practices.
Although newer tools, such as 3-way merge, have been introduced to address these issues, effectively integrating these solutions into existing workflows remains a complex task. While model-based design can simulate test scenarios and interpret simulations well, it is often not suitable for real-world production environments. Overreliance on a given toolchain can lead to significant rework and possibly compromise entire engineering approaches. While it is suitable for bench work, the choice to use it for a production system should be made very carefully. Advantages Some of the advantages model-based design offers in comparison to the traditional approach are: Model-based design provides a common design environment, which facilitates general communication, data analysis, and system verification between various (development) groups. Engineers can locate and correct errors early in system design, when the time and financial impact of system modification are minimized. Design reuse, for upgrades and for derivative systems with expanded capabilities, is facilitated. Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was time-consuming and highly prone to error. In addition, debugging text-based programs is a tedious process, requiring much trial and error before a final fault-free model can be created, especially since mathematical models undergo unseen changes during the translation through the various design stages. Graphical modeling tools aim to improve these aspects of design. These tools provide a very generic and unified graphical modeling environment, and they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks. Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another. Graphical models also help engineers to conceptualize the entire system and simplify the process of transporting the model from one stage to another in the design process. Boeing's simulator EASY5 was among the first modeling tools to be provided with a graphical user interface, together with AMESim, a multi-domain, multi-level platform based on Bond Graph theory. These were soon followed by tools like 20-sim and Dymola, which allowed models to be composed of physical components like masses, springs, and resistors. These were later followed by many other modern tools such as Simulink and LabVIEW. See also Control theory Functional specification Model-driven engineering Scientific modelling Specification (technical standard) Systems engineering References Control engineering
Model-based design
[ "Engineering" ]
1,744
[ "Control engineering" ]
990,197
https://en.wikipedia.org/wiki/Concerted%20reaction
In chemistry, a concerted reaction is a chemical reaction in which all bond breaking and bond making occurs in a single step. Reactive intermediates or other unstable high-energy intermediates are not involved. Concerted reaction rates tend not to depend on solvent polarity, ruling out a large buildup of charge in the transition state. The reaction is said to progress through a concerted mechanism as all bonds are formed and broken in concert. Pericyclic reactions, the SN2 reaction, and some rearrangements, such as the Claisen rearrangement, are concerted reactions. The rate of the SN2 reaction is second order overall because the reaction is bimolecular (i.e. there are two molecular species involved in the rate-determining step). The reaction does not have any intermediate steps, only a transition state. This means that all the bond making and bond breaking takes place in a single step. For the reaction to occur, both molecules must be oriented correctly. References Organic reactions
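The second-order rate law implied above, rate = k[Nu][RX], can be made concrete with a short numerical integration; the rate constant and starting concentrations are arbitrary illustrative values:

# Euler integration of the bimolecular SN2 rate law d[RX]/dt = -k[Nu][RX],
# with the nucleophile consumed one-for-one. All values are illustrative.
k = 0.05             # L/(mol*s), assumed rate constant
nu, rx = 0.10, 0.08  # mol/L, initial concentrations
dt, t = 0.1, 0.0

while t < 60.0:
    rate = k * nu * rx  # second order overall: first order in each reactant
    nu -= rate * dt
    rx -= rate * dt
    t += dt

print(f"[RX] after {t:.0f} s: {rx:.4f} mol/L")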
Concerted reaction
[ "Chemistry" ]
208
[ "Organic reactions" ]
990,454
https://en.wikipedia.org/wiki/Interior%20algebra
In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic. Interior algebras form a variety of modal algebras. Definition An interior algebra is an algebraic structure with the signature ⟨S, ·, +, ′, 0, 1, I⟩ where ⟨S, ·, +, ′, 0, 1⟩ is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities: xI ≤ x xII = xI (xy)I = xIyI 1I = 1 xI is called the interior of x. The dual of the interior operator is the closure operator C defined by xC = ((x′)I)′. xC is called the closure of x. By the principle of duality, the closure operator satisfies the identities: xC ≥ x xCC = xC (x + y)C = xC + yC 0C = 0 If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm following the work of Wim Blok. Open and closed elements Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements are called closed and are characterized by the condition xC = x. An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed. Elements that are both open and closed are called clopen. 0 and 1 are clopen. An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras, which are the single element interior algebras characterized by the identity 0 = 1. Morphisms of interior algebras Homomorphisms Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B, a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B, that also preserves interiors and closures. Hence: f(xI) = f(x)I; f(xC) = f(x)C. Topomorphisms Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that also preserves the open and closed elements of A. Hence: If x is open in A, then f(x) is open in B; If x is closed in A, then f(x) is closed in B. (Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms.) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism. 
Boolean homomorphisms Early research often considered mappings between interior algebras that were homomorphisms of the underlying Boolean algebras but that did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms. (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete) typically made use of countably complete Boolean homomorphisms, also called Boolean σ-homomorphisms; these preserve countable meets and joins. Continuous morphisms The earliest generalization of continuity to interior algebras was Sikorski's, based on the inverse image map of a continuous map. This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f(x)C ≤ f(xC). This definition had several difficulties: The construction acts contravariantly, producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required); on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras.) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f(xC) ≤ f(x)C. This generalizes the forward image map of a continuous map: the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category-theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.) Relationships to other areas of mathematics Topology Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X, ⟨P(X), ∩, ∪, ′, ∅, X⟩, and extend it to an interior algebra A(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩, where I is the usual topological interior operator. For all S ⊆ X it is defined by SI = ∪ {O : O ⊆ S and O is open in X}. For all S ⊆ X the corresponding closure operator is given by SC = ∩ {C : S ⊆ C and C is closed in X}. SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open, regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense. Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological space X. Moreover, every interior algebra can be embedded in such an interior algebra, giving a representation of an interior algebra as a topological field of sets. The properties of the structure A(X) are the very motivation for the definition of interior algebras.
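A minimal computational check of this power-set construction, assuming an arbitrary three-point topological space; the script verifies the interior-algebra identities for A(X) directly:

from itertools import chain, combinations

X = frozenset({1, 2, 3})
# An arbitrary topology on X: contains {} and X and is closed
# under unions and intersections.
T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

def interior(S):
    # Union of all open sets contained in S.
    return frozenset(chain.from_iterable(O for O in T if O <= S))

def closure(S):
    # Complement of the interior of the complement: S^C = ((S')^I)'.
    return X - interior(X - S)

subsets = [frozenset(c) for r in range(4) for c in combinations(X, r)]
# Check the interior-algebra identities on every element of the
# power set Boolean algebra.
assert all(interior(S) <= S for S in subsets)                       # x^I <= x
assert all(interior(interior(S)) == interior(S) for S in subsets)   # x^II = x^I
assert all(interior(S & Q) == interior(S) & interior(Q)
           for S in subsets for Q in subsets)                       # (xy)^I = x^I y^I
assert interior(X) == X                                             # 1^I = 1
assert all(S <= closure(S) for S in subsets)                        # x <= x^C
print("all interior algebra axioms hold for A(X)")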
Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras. Given a continuous map f : X → Y between two topological spaces, we can define a complete topomorphism A(f) : A(Y) → A(X) by A(f)(S) = f−1[S] for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms, then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a continuous open map. Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties: X is empty if and only if A(X) is trivial X is indiscrete if and only if A(X) is simple X is discrete if and only if A(X) is Boolean X is almost discrete if and only if A(X) is semisimple X is finitely generated (Alexandrov) if and only if A(X) is operator complete i.e. its interior and closure operators distribute over arbitrary meets and joins respectively X is connected if and only if A(X) is directly indecomposable X is ultraconnected if and only if A(X) is finitely subdirectly irreducible X is compact ultra-connected if and only if A(X) is subdirectly irreducible Generalized topology The modern formulation of topological spaces in terms of topologies of open subsets, motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form ⟨B, ·, +, ′, 0, 1, T⟩ where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that: T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T) T is closed under finite meets For every element b of B, the join ∨{a ∈ T : a ≤ b} exists T is said to be a generalized topology in the Boolean algebra. Given an interior algebra, its open elements form a generalized topology. Conversely given a generalized topological space ⟨B, ·, +, ′, 0, 1, T⟩ we can define an interior operator on B by bI = ∨{a ∈ T : a ≤ b}, thereby producing an interior algebra whose open elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras. Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply. Neighbourhood functions and neighbourhood lattices The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and forms a filter. This leads to another formulation of interior algebras: A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that: For all x ∈ B, the join ∨{y ∈ B : x ∈ N(y)} exists For all x, y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z). The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B, we can define an interior operator by xI = ∨{y ∈ B : x ∈ N(y)}, thereby obtaining an interior algebra.
N(x) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions. In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms of open elements, y ∈ N(x) if and only if there is an open element z such that x ≤ z ≤ y. Neighbourhood functions may be defined more generally on (meet)-semilattices, producing the structures known as neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices, i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra. Modal logic Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra: L(M) = ⟨M / ~, ∧, ∨, ¬, F, T, □⟩ where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to ◊ (possibly). This construction is a special case of a more general result for modal algebras and modal logic. The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false. Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the logician C. I. Lewis, who first proposed the modal logics S4 and S5. Preorders Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets on a set with a single binary relation, called a Kripke frame. The Kripke frames corresponding to interior algebras are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal logic. Given a preordered set X = ⟨X, «⟩ we can construct an interior algebra B(X) from the power set Boolean algebra of X, where the interior operator I is given by SI = {x ∈ X : for all y ∈ X, y « x implies y ∈ S} for all S ⊆ X. The corresponding closure operator is given by SC = {x ∈ X : there exists a y ∈ S with y « x} for all S ⊆ X. SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set X giving the above-mentioned representation as a field of sets (a preorder field). This construction and representation theorem is a special case of the more general result for modal algebras and Kripke frames. In this regard, interior algebras are particularly interesting because of their connection to topology. The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological space T(X) whose open sets are: {O ⊆ X : for all x ∈ O and all y ∈ X, y « x implies y ∈ O}. The corresponding closed sets are: {C ⊆ X : for all x ∈ C and all y ∈ X, x « y implies y ∈ C}. In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)).
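The preorder construction can be checked computationally in the same way; the accessibility relation below is an arbitrary reflexive and transitive example, with a pair (y, x) read as "x is accessible from y":

X = {1, 2, 3}
# An arbitrary preorder on X; (y, x) means "x is accessible from y".
acc = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}

def interior(S):
    # Worlds inaccessible from worlds outside S: keep x only if every
    # world that can reach x already lies in S.
    return {x for x in S if all(y in S for y in X if (y, x) in acc)}

def closure(S):
    # Worlds accessible from some world in S.
    return {x for x in X if any((y, x) in acc for y in S)}

print(interior({2, 3}))  # set(): world 1 lies outside but reaches both
print(interior(X))       # {1, 2, 3}: the whole frame is open
print(closure({1}))      # {1, 2, 3}: every world is accessible from 1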
Monadic Boolean algebras Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5, and so have also been called S5 algebras. In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description) and S5 where the modal operators □ (necessarily) and ◊ (possibly) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation. Heyting algebras The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra and the latter may be chosen to be an interior algebra generated by its open elements; such interior algebras correspond one-to-one with Heyting algebras (up to isomorphism), being the free Boolean extensions of the latter. Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity. The one-to-one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz. Derivative algebras Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D. Hence we can form a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator. Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal logic wK4. Hence derivative algebras stand to topological derived sets and wK4 as interior/closure algebras stand to topological interiors/closures and S4. Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying Boolean algebra as V, with interior and closure operators defined by xI = x·((x′)D)′ and xC = x + xD, respectively.
Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V. Stone duality and representation for interior algebras Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces. Building on nascent ideas of relational semantics (later formalized by Kripke) and a result of R. S. Pierce, Jónsson, Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction. In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras. Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem, which represents a Boolean algebra as a field of sets. The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis. Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets, a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps, C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras). The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt had shown that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and Kripke frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets.
The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey–Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained, where the topology is the Alexandrov bico-reflection of the McKinsey–Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R. Mines, and P.J. Morandi. The McKinsey–Tarski topology of an interior algebra is the intersection of the former two topologies. Metamathematics Grzegorczyk proved the first-order theory of closure algebras undecidable. Naturman showed that the theory is hereditarily undecidable (all its subtheories are undecidable) and exhibited an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories. Notes References Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam. Esakia, L., 2004, "Intuitionistic logic and modality via topology," Annals of Pure and Applied Logic 127: 155-70. McKinsey, J.C.C. and Alfred Tarski, 1944, "The Algebra of Topology," Annals of Mathematics 45: 141-91. Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics. Bezhanishvili, G., Mines, R. and Morandi, P.J., 2008, Topo-canonical completions of closure algebras and Heyting algebras, Algebra Universalis 58: 1-34. Schmid, J., 1973, On the compactification of closure algebras, Fundamenta Mathematicae 79: 33-48. Sikorski, R., 1955, Closure homomorphisms and interior mappings, Fundamenta Mathematicae 41: 12-20. Algebraic structures Mathematical logic Boolean algebra Closure operators Modal logic
Interior algebra
[ "Mathematics" ]
5,079
[ "Boolean algebra", "Mathematical structures", "Closure operators", "Mathematical logic", "Mathematical objects", "Modal logic", "Fields of abstract algebra", "Algebraic structures", "Order theory" ]
990,491
https://en.wikipedia.org/wiki/Anti-fouling%20paint
Anti-fouling paint is a specialized category of coatings applied as the outer (outboard) layer to the hull of a ship or boat, to slow the growth of and facilitate detachment of subaquatic organisms that attach to the hull and can affect a vessel's performance and durability. It falls into a category of commercially available underwater hull paints, also known as bottom paints. Anti-fouling paints are often applied as one component of multi-layer coating systems which may have other functions in addition to their antifouling properties, such as acting as a barrier against corrosion on metal hulls, which would otherwise degrade and weaken the metal, or improving the flow of water past the hull of a fishing vessel or high-performance racing yacht. Although commonly discussed as being applied to ships, antifouling paints are also of benefit in many other sectors, such as off-shore structures and fish farms. History In the Age of Sail, sailing vessels suffered severely from the growth of barnacles and weeds on the hull, called "fouling". Starting in the mid-1700s, thin sheets of copper (and, approximately 100 years later, Muntz metal) were nailed onto the hull in an attempt to prevent marine growth. One famous example of the traditional use of metal sheathing is the clipper Cutty Sark, which is preserved as a museum ship in dry-dock at Greenwich in England. Marine growth affected performance (and profitability) in many ways: The maximum speed of a ship decreases as its hull becomes fouled with marine growth, and its displacement increases. Fouling hampers a ship's ability to sail upwind. Some marine growth, such as shipworms, would bore into the hull, causing severe damage over time. The ship may transport harmful marine organisms to other areas. While anti-fouling coatings began to be developed from 1840 onwards, the first practical commercial anti-fouling coatings were established around 1860. One of the first successful commercial patents was for 'McIness', a metallic soap compound with copper sulphate that was applied heated over a quick-drying rosin varnish primer with an iron oxide pigment. The Bonnington Chemical Works began marketing copper sulphide anti-fouling paint around 1850. Other widely used anti-fouling paints were developed in the late 19th century, with some 213 anti-fouling patents being recorded by 1872. Among the most widely used in the 1880s and 1890s was a hot plastic composition known as Italian Morovian. In an official 1900 letter from the U.S. Navy to the U.S. Senate Committee on Naval Affairs, it was noted that the (British) Admiralty had considered a proposal in 1847 to limit the number of iron ships (only recently introduced into naval service) and even to consider the sale of all iron ships in its possession, due to significant problems with biofouling. However, once an antifouling paint "with very fair results" was found, the iron ships were instead retained and continued to be built. During World War II, which included a substantial naval component, the U.S. Navy provided significant funding to the Woods Hole Oceanographic Institution to gather information and conduct research on marine biofouling and technologies for its prevention. This work was published as a book in 1952, the contents of which are available online as individual chapters. The third and final part of this book includes a number of chapters covering the state of the art at that time in the formulation of anti-fouling paints. Lunn (1974) provides further history.
Modern antifouling paints In modern times, antifouling paints are formulated with cuprous oxide (or other copper compounds) and/or other biocides: special chemicals that impede the growth of barnacles, algae, and marine organisms. Historically, copper paints were red, leading to ship bottoms still being painted red today. "Soft", or ablative, bottom paints slowly slough off in the water, releasing a copper- or zinc-based biocide into the water column. The movement of water increases the rate of this action. Ablative paints are widely used on the hulls of recreational vessels and typically are reapplied every 1–3 years. "Contact leaching" paints "create a porous film on the surface. Biocides are held in the pores, and released slowly." Another type of hard bottom paint includes Teflon and silicone coatings, which are too slippery for growth to stick to. SealCoat systems, which must be professionally applied, dry with small fibers sticking out from the coating surface. These small fibers move in the water, preventing bottom growth from adhering. Environmental concerns In the 1960s and 1970s, commercial vessels commonly used bottom paints containing tributyltin, which has been banned in the International Convention on the Control of Harmful Anti-fouling Systems on Ships of the International Maritime Organization due to its serious toxic effects on marine life (such as the collapse of a French shellfish fishery). Now that tributyltin has been banned, the most commonly used anti-fouling bottom paints are copper-based. Copper-based antifouling paints can also have adverse effects on marine organisms. Copper occurs naturally in aquatic systems but can build up in ports or marinas where there are many boats. Copper can leach out of anti-fouling paint on the hulls of the boats or fall off the hulls in paint particles of various sizes. This can lead to higher-than-normal concentrations of copper in ports or bays. This excess of copper in the marine ecosystem can have adverse effects on the marine environment and its organisms. In marinas, the river nerite, a brackish-water snail, was found to have higher mortality, negative growth, and a large decrease in reproduction compared to areas with no boating. The snails in marinas also had more tissue (histopathological) problems and alterations in areas such as their gills and gonads. Increased exposure to copper from antifouling paint has also been found to decrease enzyme activity in brine shrimp. Antifouling paint particles can be eaten by zooplankton or other marine species and move up the food chain, bioaccumulating in fish. This accumulation of copper through the food web can cause damage not only to the species eating the particle but also to those accumulating it in their tissues through their diet. Antifouling paint particles can also end up in the sediment of harbors or bays and damage the benthic environment or the organisms that live in it. These are the known effects of copper-based antifouling paint; however, it has not been a large focus of study, so the extent of the effects is not fully known. More research is needed to fully understand how these paints and the metals in them affect their environments. The Port of San Diego is investigating how to reduce copper input from copper-based antifouling coatings, and Washington State has passed a law which may phase in a ban on copper antifouling coatings on recreational vessels beginning in January 2018.
However, despite the toxic chemistry of bottom paint and its accumulation in waterways across the globe, a similar ban was rescinded in the Netherlands after the European Union's Scientific Committee on Health and Environmental Risks concluded The Hague had insufficiently justified the law. In an expert opinion, the committee concluded the Netherlands government's explanation "does not provide sufficient sound scientific evidence to show that the use of copper-based antifouling paints in leisure boats presents significant environmental risk." "Sloughing bottom paints", or "ablative" paints, are an older type of paint designed to create a hull coating which ablates (wears off) slowly, exposing a fresh layer of biocides. Scrubbing a hull with sloughing bottom paint while it is in the water releases its biocides into the environment. One way to reduce the environmental impact from hulls with sloughing bottom paint is to have them hauled out and cleaned at boatyards with a "closed loop" system. Some innovative bottom paints that do not rely on copper or tin have been developed in response to the increasing scrutiny that copper-based ablative bottom paints have received as environmental pollutants. A possible future replacement for antifouling paint may be slime. A mesh would cover a ship's hull, beneath which a series of pores would supply the slime compound. The compound would turn into a viscous slime on contact with water and coat the mesh. The slime would constantly slough off, carrying away micro-organisms and barnacle larvae. See also Biofouling Biomimetic antifouling coating Environmental impact of paint References External links Selecting an anti-fouling paint, West Marine Clean Boating Tip Sheet, Selecting a Bottom Paint, .pdf chart, Maryland Dept. of Natural Resources Bottom Paint for Racing Boats, Sailing World, 2007 Are foul-release paints for you? Coating calculator, National Fisherman Using Antifouling paint against the Gribble Menace, Teamac Marine Coatings Paints Shipbuilding Fouling
Anti-fouling paint
[ "Chemistry", "Materials_science", "Engineering" ]
1,873
[ "Paints", "Coatings", "Shipbuilding", "Marine engineering", "Materials degradation", "Fouling" ]
990,534
https://en.wikipedia.org/wiki/Norm%20%28mathematics%29
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself. A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin. A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space. The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm". It can also refer to a norm that can take infinite values, or to certain functions parametrised by a directed set. Definition Given a vector space $X$ over a subfield $F$ of the complex numbers $\mathbb{C},$ a norm on $X$ is a real-valued function $p : X \to \mathbb{R}$ with the following properties, where $|s|$ denotes the usual absolute value of a scalar $s$: Subadditivity/Triangle inequality: $p(x + y) \leq p(x) + p(y)$ for all $x, y \in X.$ Absolute homogeneity: $p(s x) = |s|\, p(x)$ for all $x \in X$ and all scalars $s.$ Positive definiteness/positiveness/point-separating: for all $x \in X,$ if $p(x) = 0$ then $x = 0.$ Because property (2.) implies $p(0) = 0,$ some authors replace property (3.) with the equivalent condition: for every $x \in X,$ $p(x) = 0$ if and only if $x = 0.$ A seminorm on $X$ is a function $p : X \to \mathbb{R}$ that has properties (1.) and (2.), so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if $p$ is a norm (or more generally, a seminorm) then $p(0) = 0$ and that $p$ also has the following property: Non-negativity: $p(x) \geq 0$ for all $x \in X.$ Some authors include non-negativity as part of the definition of "norm", although this is not necessary. Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative"; these definitions are not equivalent. Equivalent norms Suppose that $p$ and $q$ are two norms (or seminorms) on a vector space $X.$ Then $p$ and $q$ are called equivalent, if there exist two positive real constants $c$ and $C$ such that for every vector $x \in X,$ $c\, q(x) \leq p(x) \leq C\, q(x).$ The relation "$p$ is equivalent to $q$" is reflexive, symmetric ($c q \leq p \leq C q$ implies $\tfrac{1}{C} p \leq q \leq \tfrac{1}{c} p$), and transitive and thus defines an equivalence relation on the set of all norms on $X.$ The norms $p$ and $q$ are equivalent if and only if they induce the same topology on $X.$ Any two norms on a finite-dimensional space are equivalent but this does not extend to infinite-dimensional spaces. Notation If a norm $p : X \to \mathbb{R}$ is given on a vector space $X,$ then the norm of a vector $z \in X$ is usually denoted by enclosing it within double vertical lines: $\|z\| = p(z).$ Such notation is also sometimes used if $p$ is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation $|x|$ with single vertical lines is also widespread. Examples Every (real or complex) vector space admits a norm: If $x_{\bullet} = (x_i)_{i \in I}$ is a Hamel basis for a vector space $X,$ then the real-valued map that sends $x = \sum_{i \in I} s_i x_i \in X$ (where all but finitely many of the scalars $s_i$ are $0$) to $\sum_{i \in I} |s_i|$ is a norm on $X.$ There are also a large number of norms that exhibit additional properties that make them useful for specific problems. Absolute-value norm The absolute value $|x|$ is a norm on the vector space formed by the real or complex numbers. 
The complex numbers form a one-dimensional vector space over themselves and a two-dimensional vector space over the reals; the absolute value is a norm for these two structures. Any norm $p$ on a one-dimensional vector space $X$ is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces $f : F \to X,$ where $F$ is either $\mathbb{R}$ or $\mathbb{C},$ and norm-preserving means that $|x| = p(f(x)).$ This isomorphism is given by sending $1 \in F$ to a vector of norm $1,$ which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm. Euclidean norm On the $n$-dimensional Euclidean space $\mathbb{R}^n,$ the intuitive notion of length of the vector $x = (x_1, x_2, \ldots, x_n)$ is captured by the formula $\|x\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}.$ This is the Euclidean norm, which gives the ordinary distance from the origin to the point X, a consequence of the Pythagorean theorem. This operation may also be referred to as "SRSS", which is an acronym for the square root of the sum of squares. The Euclidean norm is by far the most commonly used norm on $\mathbb{R}^n,$ but there are other norms on this vector space as will be shown below. However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces. The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis. Hence, the Euclidean norm can be written in a coordinate-free way as $\|x\| := \sqrt{x \cdot x}.$ The Euclidean norm is also called the quadratic norm, $L^2$ norm, $\ell^2$ norm, 2-norm, or square norm; see $L^p$ space. It defines a distance function called the Euclidean length, $L^2$ distance, or $\ell^2$ distance. The set of vectors whose Euclidean norm is a given positive constant forms an $n$-sphere. Euclidean norm of complex numbers The Euclidean norm of a complex number is the absolute value (also called the modulus) of it, if the complex plane is identified with the Euclidean plane $\mathbb{R}^2.$ This identification of the complex number $x + iy$ as a vector in the Euclidean plane makes the quantity $\sqrt{x^2 + y^2}$ (as first suggested by Euler) the Euclidean norm associated with the complex number. For $z = x + iy$, the norm can also be written as $\sqrt{\bar{z} z},$ where $\bar{z}$ is the complex conjugate of $z.$ Quaternions and octonions There are exactly four Euclidean Hurwitz algebras over the real numbers. These are the real numbers $\mathbb{R},$ the complex numbers $\mathbb{C},$ the quaternions $\mathbb{H},$ and lastly the octonions $\mathbb{O},$ where the dimensions of these spaces over the real numbers are $1, 2, 4,$ and $8,$ respectively. The canonical norms on $\mathbb{R}$ and $\mathbb{C}$ are their absolute value functions, as discussed previously. The canonical norm on $\mathbb{H}$ of quaternions is defined by $\|q\| = \sqrt{q q^*} = \sqrt{q^* q} = \sqrt{a^2 + b^2 + c^2 + d^2}$ for every quaternion $q = a + b\,\mathbf{i} + c\,\mathbf{j} + d\,\mathbf{k}$ in $\mathbb{H}.$ This is the same as the Euclidean norm on $\mathbb{H}$ considered as the vector space $\mathbb{R}^4.$ Similarly, the canonical norm on the octonions is just the Euclidean norm on $\mathbb{R}^8.$ Finite-dimensional complex normed spaces On an $n$-dimensional complex space $\mathbb{C}^n,$ the most common norm is $\|z\| := \sqrt{|z_1|^2 + \cdots + |z_n|^2}.$ In this case, the norm can be expressed as the square root of the inner product of the vector and itself: $\|x\| := \sqrt{x^H x},$ where $x$ is represented as a column vector and $x^H$ denotes its conjugate transpose. This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence the formula in this case can also be written using the following notation: $\|x\| := \sqrt{x \cdot x}.$ 
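To make the formulas above concrete, here is a minimal Python sketch (the function names are illustrative, not from any library) computing the Euclidean norm of a real vector, the modulus of a complex number, and the norm of a complex vector via the conjugate; each agrees with the square-root-of-inner-product definition.

```python
import math

def euclidean_norm(x):
    # ||x||_2 = sqrt(x_1^2 + ... + x_n^2), the square root of x . x
    return math.sqrt(sum(t * t for t in x))

def complex_norm(z):
    # For a complex vector, ||z|| = sqrt(z^H z) = sqrt(sum |z_i|^2);
    # conj(z_i) * z_i is real and equals |z_i|^2 for each component.
    return math.sqrt(sum((t.conjugate() * t).real for t in z))

x = [3.0, 4.0]
print(euclidean_norm(x))   # 5.0, by the Pythagorean theorem
z = [3 + 4j, 1 - 2j]
print(complex_norm(z))     # sqrt(25 + 5) = sqrt(30)
print(abs(3 + 4j))         # 5.0: the modulus is the Euclidean norm on R^2
```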
Taxicab norm or Manhattan norm $\|x\|_1 := \sum_{i=1}^n |x_i|.$ The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York borough of Manhattan) to get from the origin to the point $x.$ The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1. The Taxicab norm is also called the $\ell^1$ norm. The distance derived from this norm is called the Manhattan distance or $\ell^1$ distance. The 1-norm is simply the sum of the absolute values of the columns. In contrast, $\sum_{i=1}^n x_i$ is not a norm because it may yield negative results. p-norm Let $p \geq 1$ be a real number. The $p$-norm (also called $\ell^p$-norm) of vector $x = (x_1, \ldots, x_n)$ is $\|x\|_p := \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}.$ For $p = 1,$ we get the taxicab norm, for $p = 2$ we get the Euclidean norm, and as $p$ approaches $\infty$ the $p$-norm approaches the infinity norm or maximum norm: $\|x\|_\infty := \max_i |x_i|.$ The $p$-norm is related to the generalized mean or power mean. For $p = 2,$ the $2$-norm is even induced by a canonical inner product $\langle \cdot, \cdot \rangle,$ meaning that $\|x\|_2 = \sqrt{\langle x, x \rangle}$ for all vectors $x.$ This inner product can be expressed in terms of the norm by using the polarization identity. On $\ell^2,$ this inner product is defined by $\langle (x_n)_n, (y_n)_n \rangle_{\ell^2} := \sum_n \overline{x_n} y_n,$ while for the space $L^2(X, \mu)$ associated with a measure space $(X, \Sigma, \mu),$ which consists of all square-integrable functions, this inner product is $\langle f, g \rangle_{L^2} := \int_X \overline{f(x)} g(x)\, \mathrm{d}x.$ This definition is still of some interest for $0 < p < 1,$ but the resulting function does not define a norm, because it violates the triangle inequality. What is true for this case of $0 < p < 1,$ even in the measurable analog, is that the corresponding $L^p$ class is a vector space, and it is also true that the function $\int_X |f(x) - g(x)|^p\, \mathrm{d}\mu$ (without $p$th root) defines a distance that makes $L^p(X)$ into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory and harmonic analysis. However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional. The partial derivative of the $p$-norm is given by $\frac{\partial}{\partial x_k} \|x\|_p = \frac{x_k |x_k|^{p-2}}{\|x\|_p^{p-1}}.$ The derivative with respect to $x,$ therefore, is $\frac{\partial \|x\|_p}{\partial x} = \frac{x \circ |x|^{p-2}}{\|x\|_p^{p-1}},$ where $\circ$ denotes Hadamard product and $|\cdot|$ is used for absolute value of each component of the vector. For the special case of $p = 2,$ this becomes $\frac{\partial}{\partial x_k} \|x\|_2 = \frac{x_k}{\|x\|_2},$ or $\frac{\partial \|x\|_2}{\partial x} = \frac{x}{\|x\|_2}.$ Maximum norm (special case of: infinity norm, uniform norm, or supremum norm) If $x$ is some vector such that $x = (x_1, x_2, \ldots, x_n),$ then: $\|x\|_\infty := \max(|x_1|, \ldots, |x_n|).$ The set of vectors whose infinity norm is a given constant, $c,$ forms the surface of a hypercube with edge length $2c.$ Energy norm The energy norm of a vector $x \in \mathbb{R}^n$ is defined in terms of a symmetric positive definite matrix $A \in \mathbb{R}^{n \times n}$ as $\|x\|_A := \sqrt{x^{\mathsf T} A x}.$ It is clear that if $A$ is the identity matrix, this norm corresponds to the Euclidean norm. If $A$ is diagonal, this norm is also called a weighted norm. The energy norm is induced by the inner product given by $\langle x, y \rangle_A := x^{\mathsf T} A y$ for $x, y \in \mathbb{R}^n.$ In general, the value of the norm is dependent on the spectrum of $A$: For a vector $x$ with a Euclidean norm of one, the value of $\|x\|_A$ is bounded from below and above by the smallest and largest absolute eigenvalues of $A$ respectively, where the bounds are achieved if $x$ coincides with the corresponding (normalized) eigenvectors. Based on the symmetric matrix square root $A^{1/2},$ the energy norm of a vector can be written in terms of the standard Euclidean norm as $\|x\|_A = \|A^{1/2} x\|_2.$ 
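A short sketch in plain Python (illustrative names, no external dependencies) shows the $p$-norm for several values of $p$, its convergence to the maximum norm as $p$ grows, and an energy norm for a small diagonal symmetric positive definite matrix.

```python
import math

def p_norm(x, p):
    # ||x||_p = (sum |x_i|^p)^(1/p), defined for p >= 1
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def max_norm(x):
    # ||x||_inf = max |x_i|
    return max(abs(t) for t in x)

def energy_norm(x, A):
    # ||x||_A = sqrt(x^T A x) for a symmetric positive definite A
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return math.sqrt(sum(x[i] * Ax[i] for i in range(len(x))))

x = [1.0, -2.0, 3.0]
for p in (1, 2, 4, 16, 64):
    print(p, p_norm(x, p))    # decreases toward the maximum norm
print("inf", max_norm(x))     # 3.0, the limit as p -> infinity

A = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 3.0]]  # diagonal SPD: a weighted norm
print(energy_norm(x, A))      # sqrt(2*1 + 1*4 + 3*9) = sqrt(33)
```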
Zero norm In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F-norm $(x_n) \mapsto \sum_n 2^{-n} \frac{|x_n|}{1 + |x_n|}.$ Here we mean by F-norm some real-valued function $\|\cdot\|$ on an F-space with distance $d,$ such that $\|x\| = d(x, 0).$ The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property. Hamming distance of a vector from zero In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory. In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero. However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness. When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous. In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks. Following Donoho's notation, the zero "norm" of $x$ is simply the number of non-zero coordinates of $x,$ or the Hamming distance of the vector from zero. When this "norm" is localized to a bounded set, it is the limit of $p$-norms as $p$ approaches 0. Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous. Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument. Abusing terminology, some engineers omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the $L^0$ norm, echoing the notation for the Lebesgue space of measurable functions. Infinite dimensions The generalization of the above norms to an infinite number of components leads to $\ell^p$ and $L^p$ spaces for $p \geq 1,$ with norms $\|x\|_p = \left( \sum_{i \in \mathbb{N}} |x_i|^p \right)^{1/p}$ and $\|f\|_{p,X} = \left( \int_X |f(x)|^p\, \mathrm{d}x \right)^{1/p}$ for complex-valued sequences and functions on $X \subseteq \mathbb{R}^n$ respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as $p \to +\infty$, giving a supremum norm, and are called $\ell^\infty$ and $L^\infty.$ Any inner product induces in a natural way the norm $\|x\| := \sqrt{\langle x, x \rangle}.$ Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article. Generally, these norms do not give the same topologies. For example, an infinite-dimensional $\ell^p$ space gives a strictly finer topology than an infinite-dimensional $\ell^q$ space when $p < q.$ Composite norms Other norms on $\mathbb{R}^n$ can be constructed by combining the above; for example $\|x\| := 2 |x_1| + \sqrt{3 |x_2|^2 + \max(|x_3|, 2 |x_4|)^2}$ is a norm on $\mathbb{R}^4.$ For any norm and any injective linear transformation $A$ we can define a new norm of $x,$ equal to $\|A x\|.$ In 2D, with $A$ a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each $A$ applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation. In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base). 
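The claim that a 45° rotation with suitable scaling turns the taxicab norm into the maximum norm can be checked numerically. The sketch below (illustrative Python, not library code) defines a new norm as $\|Ax\|_1$ with $A = \tfrac{1}{2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$ and compares it with the maximum norm; the identity $\max(|a|, |b|) = \tfrac{1}{2}(|a + b| + |a - b|)$ is what makes the two agree.

```python
def taxicab(v):
    return sum(abs(t) for t in v)

def new_norm(v):
    # ||v||_new = ||A v||_1 with A = 0.5 * [[1, 1], [1, -1]]
    # (a 45-degree rotation combined with scaling, up to reflection)
    x, y = v
    return taxicab((0.5 * (x + y), 0.5 * (x - y)))

for v in [(1.0, 0.0), (0.0, -2.0), (3.0, 4.0), (-1.5, 1.5)]:
    assert abs(new_norm(v) - max(abs(v[0]), abs(v[1]))) < 1e-12
    print(v, new_norm(v))  # equals the maximum norm of v
```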
There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally-symmetric convex body in $\mathbb{R}^n$ (centered at zero) defines a norm on $\mathbb{R}^n$ (see below). All the above formulas also yield norms on $\mathbb{C}^n$ without modification. There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms. In abstract algebra Let $E$ be a finite extension of a field $k$ of inseparable degree $p^\mu,$ and let $k$ have algebraic closure $K.$ If the distinct embeddings of $E$ are $\{\sigma_j\}_j,$ then the Galois-theoretic norm of an element $\alpha \in E$ is the value $\left( \prod_j \sigma_j(\alpha) \right)^{p^\mu}.$ As that function is homogeneous of degree $[E : k],$ the Galois-theoretic norm is not a norm in the sense of this article. However, the $[E : k]$-th root of the norm (assuming that concept makes sense) is a norm. Composition algebras The concept of norm $N(z)$ in composition algebras does not share the usual properties of a norm, since null vectors are allowed. A composition algebra $(A, {}^*, N)$ consists of an algebra over a field $A,$ an involution ${}^*,$ and a quadratic form $N,$ called the "norm". The characteristic feature of composition algebras is the homomorphism property of $N$: for the product $wz$ of two elements $w$ and $z$ of the composition algebra, its norm satisfies $N(wz) = N(w) N(z).$ In the case of division algebras $\mathbb{R},$ $\mathbb{C},$ $\mathbb{H},$ and $\mathbb{O},$ the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form. Properties For any norm $p : X \to \mathbb{R}$ on a vector space $X,$ the reverse triangle inequality holds: $p(x \pm y) \geq |p(x) - p(y)|$ for all $x, y \in X.$ If $u : X \to Y$ is a continuous linear map between normed spaces, then the norm of $u$ and the norm of the transpose of $u$ are equal. For the $L^p$ norms, we have Hölder's inequality $|\langle x, y \rangle| \leq \|x\|_p \|y\|_q,$ where $\tfrac{1}{p} + \tfrac{1}{q} = 1.$ A special case of this is the Cauchy–Schwarz inequality: $|\langle x, y \rangle| \leq \|x\|_2 \|y\|_2.$ Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function. Equivalence The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any $p$-norm, it is a superellipse with congruent axes (see the accompanying illustration). Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and $p \geq 1$ for a $p$-norm). In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors $\{v_n\}$ is said to converge in norm to $v$ if $\|v_n - v\| \to 0$ as $n \to \infty.$ Equivalently, the topology consists of all sets that can be represented as a union of open balls. If $(X, \|\cdot\|)$ is a normed space then the norm induces a metric $d(x, y) := \|x - y\|$ and hence a topology on $X.$ Two norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ on a vector space $X$ are called equivalent if they induce the same topology, which happens if and only if there exist positive real numbers $c$ and $C$ such that $c \|x\|_\alpha \leq \|x\|_\beta \leq C \|x\|_\alpha$ for all $x \in X.$ For instance, if $p > r \geq 1,$ then on $\mathbb{C}^n$ we have $\|x\|_p \leq \|x\|_r \leq n^{(1/r) - (1/p)} \|x\|_p.$ In particular, $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\, \|x\|_2.$ That is, the $1$-norm and the $2$-norm are equivalent on $\mathbb{C}^n.$ If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent. 
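On $\mathbb{R}^n$ the equivalence of norms can be observed directly. The following sketch (illustrative Python) checks the standard bounds $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\, \|x\|_2$ and $\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\, \|x\|_\infty$ on random vectors.

```python
import math
import random

def norm1(x):    return sum(abs(t) for t in x)
def norm2(x):    return math.sqrt(sum(t * t for t in x))
def norm_inf(x): return max(abs(t) for t in x)

n = 8
root_n = math.sqrt(n)
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    # c q(x) <= p(x) <= C q(x) with explicit constants for these pairs:
    assert norm2(x) <= norm1(x) <= root_n * norm2(x) + 1e-9
    assert norm_inf(x) <= norm2(x) <= root_n * norm_inf(x) + 1e-9
print("equivalence bounds hold on all samples")
```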
Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise the uniform structure defined by equivalent norms on the vector space is uniformly isomorphic. Classification of seminorms: absolutely convex absorbing sets All seminorms on a vector space $X$ can be classified in terms of absolutely convex absorbing subsets $A$ of $X.$ To each such subset corresponds a seminorm $p_A$ called the gauge of $A,$ defined as $p_A(x) := \inf \{ r > 0 : x \in r A \},$ where $\inf$ is the infimum, with the property that $\{ x \in X : p_A(x) < 1 \} \subseteq A \subseteq \{ x \in X : p_A(x) \leq 1 \}.$ Conversely: Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family $(p)$ of seminorms $p$ that separates points: the collection of all finite intersections of sets $\{ p < 1/n \}$ turns the space into a locally convex topological vector space so that every p is continuous. Such a method is used to design weak and weak* topologies. norm case: Suppose now that the family contains a single seminorm $p$: since the family is separating, $p$ is a norm, and $A = \{ x : p(x) < 1 \}$ is its open unit ball. Then $A$ is an absolutely convex bounded neighbourhood of 0, and $p = p_A$ is continuous. The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely: If $V$ is an absolutely convex bounded neighbourhood of 0, the gauge $g_V$ (so that $V = \{ x : g_V(x) < 1 \}$) is a norm. See also References Bibliography Functional analysis Linear algebra
Norm (mathematics)
[ "Mathematics" ]
3,821
[ "Functions and mappings", "Mathematical analysis", "Functional analysis", "Mathematical objects", "Mathematical relations", "Norms (mathematics)", "Linear algebra", "Algebra" ]
990,632
https://en.wikipedia.org/wiki/Dynamical%20systems%20theory
Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential equations or difference equations. When differential equations are employed, the theory is called continuous dynamical systems. From a physical point of view, continuous dynamical systems is a generalization of classical mechanics, a generalization where the equations of motion are postulated directly and are not constrained to be Euler–Lagrange equations of a least action principle. When difference equations are employed, the theory is called discrete dynamical systems. When the time variable runs over a set that is discrete over some intervals and continuous over other intervals or is any arbitrary time-set such as a Cantor set, one gets dynamic equations on time scales. Some situations may also be modeled by mixed operators, such as differential-difference equations. This theory deals with the long-term qualitative behavior of dynamical systems, and studies the nature of, and when possible the solutions of, the equations of motion of systems that are often primarily mechanical or otherwise physical in nature, such as planetary orbits and the behaviour of electronic circuits, as well as systems that arise in biology, economics, and elsewhere. Much of modern research is focused on the study of chaotic systems and bizarre systems. This field of study is also called just dynamical systems, mathematical dynamical systems theory or the mathematical theory of dynamical systems. Overview Dynamical systems theory and chaos theory deal with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather on answering questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible steady states?", or "Does the long-term behavior of the system depend on its initial condition?" An important goal is to describe the fixed points, or steady states of a given dynamical system; these are values of the variable that do not change over time. Some of these fixed points are attractive, meaning that if the system starts out in a nearby state, it converges towards the fixed point. Similarly, one is interested in periodic points, states of the system that repeat after several timesteps. Periodic points can also be attractive. Sharkovskii's theorem is an interesting statement about the number of periodic points of a one-dimensional discrete dynamical system. Even simple nonlinear dynamical systems often exhibit seemingly random behavior that has been called chaos. The branch of dynamical systems that deals with the clean definition and investigation of chaos is called chaos theory. History The concept of dynamical systems theory has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is given implicitly by a relation that gives the state of the system only a short time into the future. Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could only be accomplished for a small class of dynamical systems. 
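A standard toy illustration of these ideas is the logistic map, a one-dimensional discrete dynamical system. The Python sketch below (illustrative, no external libraries) iterates x_{n+1} = r x_n (1 - x_n): for r = 2.8 the orbit settles onto an attractive fixed point, while for r = 4 two nearby initial conditions separate rapidly, the sensitive dependence characteristic of chaos.

```python
def logistic_orbit(r, x0, steps):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# r = 2.8: orbits converge to the attractive fixed point x* = 1 - 1/r
print(logistic_orbit(2.8, 0.2, 200))   # ~0.642857
print(1 - 1 / 2.8)                     # the fixed point itself

# r = 4.0: two orbits starting 1e-10 apart diverge within a few dozen steps
a, b = 0.2, 0.2 + 1e-10
for _ in range(60):
    a = 4.0 * a * (1.0 - a)
    b = 4.0 * b * (1.0 - b)
print(abs(a - b))   # order 0.1-1: the tiny initial difference has been amplified
```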
Concepts Dynamical systems The dynamical system concept is a mathematical formalization for any fixed "rule" that describes the time dependence of a point's position in its ambient space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each spring in a lake. A dynamical system has a state determined by a collection of real numbers, or more generally by a set of points in an appropriate state space. Small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space—a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule may be deterministic (for a given time interval one future state can be precisely predicted given the current state) or stochastic (the evolution of the state can only be predicted with a certain probability). Dynamicism Dynamicism, also termed the dynamic hypothesis in cognitive science or dynamic cognition, is an approach in cognitive science exemplified by the work of philosopher Tim van Gelder. It argues that differential equations are better suited to modelling cognition than more traditional computer models. Nonlinear system In mathematics, a nonlinear system is a system that is not linear—i.e., a system that does not satisfy the superposition principle. Less technically, a nonlinear system is any problem where the variable(s) to solve for cannot be written as a linear sum of independent components. A nonhomogeneous system, which is linear apart from the presence of a function of the independent variables, is nonlinear according to a strict definition, but such systems are usually studied alongside linear systems, because they can be transformed to a linear system as long as a particular solution is known. Related fields Arithmetic dynamics Arithmetic dynamics is a field that emerged in the 1990s that amalgamates two areas of mathematics, dynamical systems and number theory. Classically, discrete dynamics refers to the study of the iteration of self-maps of the complex plane or real line. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, p-adic, and/or algebraic points under repeated application of a polynomial or rational function. Chaos theory Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions (popularly referred to as the butterfly effect). As a result of this sensitivity, which manifests itself as an exponential growth of perturbations in the initial conditions, the behavior of chaotic systems appears random. This happens even though these systems are deterministic, meaning that their future dynamics are fully defined by their initial conditions, with no random elements involved. This behavior is known as deterministic chaos, or simply chaos. Complex systems Complex systems is a scientific field that studies the common properties of systems considered complex in nature, society, and science. It is also called complex systems theory, complexity science, study of complex systems and/or sciences of complexity. The key problems of such systems are difficulties with their formal modeling and simulation. 
From such a perspective, in different research contexts complex systems are defined on the basis of their different attributes. The study of complex systems is bringing new vitality to many areas of science where a more typical reductionist strategy has fallen short. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines including neurosciences, social sciences, meteorology, chemistry, physics, computer science, psychology, artificial life, evolutionary computation, economics, earthquake prediction, molecular biology and inquiries into the nature of living cells themselves. Control theory Control theory is an interdisciplinary branch of engineering and mathematics; in part, it deals with influencing the behavior of dynamical systems. Ergodic theory Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics. Functional analysis Functional analysis is the branch of mathematics, and specifically of analysis, concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of functional spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. This usage of the word functional goes back to the calculus of variations, implying a function whose argument is a function. Its use in general has been attributed to mathematician and physicist Vito Volterra and its founding is largely attributed to mathematician Stefan Banach. Graph dynamical systems The concept of graph dynamical systems (GDS) can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of graph dynamical systems is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result. Projected dynamical systems Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation. Symbolic dynamics Symbolic dynamics is the practice of modelling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator. System dynamics System dynamics is an approach to understanding the behaviour of systems over time. It deals with internal feedback loops and time delays that affect the behaviour and state of the entire system. What makes using system dynamics different from other approaches to studying systems is the language used to describe feedback loops with stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity. Topological dynamics Topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology. 
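As a concrete illustration of the stock-and-flow modelling described under System dynamics above, the following sketch (illustrative Python, not tied to any particular system-dynamics package) integrates a single stock: a population whose inflow is proportional to the stock and whose outflow grows with crowding. The negative feedback loop produces the familiar logistic approach to a carrying capacity.

```python
def simulate_population(p0, birth_rate, capacity, dt, steps):
    """Euler integration of dP/dt = birth_rate * P * (1 - P / capacity)."""
    p = p0
    history = [p]
    for _ in range(steps):
        inflow = birth_rate * p                      # births: positive feedback
        outflow = birth_rate * p * (p / capacity)    # crowding: negative feedback
        p += (inflow - outflow) * dt
        history.append(p)
    return history

traj = simulate_population(p0=10.0, birth_rate=0.5, capacity=1000.0, dt=0.1, steps=200)
print(traj[0], traj[100], traj[-1])   # rises from 10 and levels off near 1000
```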
Applications In biomechanics In sports biomechanics, dynamical systems theory has emerged in the movement sciences as a viable framework for modeling athletic performance and efficiency. This is unsurprising, since dynamical systems theory has its roots in analytical mechanics. From a psychophysiological perspective, the human movement system is a highly intricate network of co-dependent sub-systems (e.g. respiratory, circulatory, nervous, skeletomuscular, perceptual) that are composed of a large number of interacting components (e.g. blood cells, oxygen molecules, muscle tissue, metabolic enzymes, connective tissue and bone). In dynamical systems theory, movement patterns emerge through generic processes of self-organization found in physical and biological systems. There is no research validation of any of the claims associated with the conceptual application of this framework. In cognitive science Dynamical system theory has been applied in the field of neuroscience and cognitive development, especially in the neo-Piagetian theories of cognitive development. It holds that cognitive development is best represented by physical theories rather than theories based on syntax and AI, and that differential equations are the most appropriate tool for modeling human behavior. These equations are interpreted to represent an agent's cognitive trajectory through state space. In other words, dynamicists argue that psychology should be (or is) the description (via differential equations) of the cognitions and behaviors of an agent under certain environmental and internal pressures. The language of chaos theory is also frequently adopted. In it, the learner's mind reaches a state of disequilibrium where old patterns have broken down. This is the phase transition of cognitive development. Self-organization (the spontaneous creation of coherent forms) sets in as activity levels link to each other. Newly formed macroscopic and microscopic structures support each other, speeding up the process. These links form the structure of a new state of order in the mind through a process called scalloping (the repeated building up and collapsing of complex performance.) This new, novel state is progressive, discrete, idiosyncratic and unpredictable. Dynamic systems theory has recently been used to explain a long-unanswered problem in child development referred to as the A-not-B error. Further, since the middle of the 1990s cognitive science, oriented towards a system theoretical connectionism, has increasingly adopted the methods from (nonlinear) "Dynamic Systems Theory (DST)". A variety of neurosymbolic cognitive neuroarchitectures in modern connectionism, considering their mathematical structural core, can be categorized as (nonlinear) dynamical systems. These attempts in neurocognition to merge connectionist cognitive neuroarchitectures with DST come not only from neuroinformatics and connectionism, but also recently from developmental psychology ("Dynamic Field Theory (DFT)") and from "evolutionary robotics" and "developmental robotics" in connection with the mathematical method of "evolutionary computation (EC)". For an overview see Maurer. In second language development The application of Dynamic Systems Theory to study second language acquisition is attributed to Diane Larsen-Freeman, who published an article in 1997 in which she claimed that second language acquisition should be viewed as a developmental process which includes language attrition as well as language acquisition. 
In her article she claimed that language should be viewed as a dynamic system which is dynamic, complex, nonlinear, chaotic, unpredictable, sensitive to initial conditions, open, self-organizing, feedback sensitive, and adaptive. See also Related subjects List of dynamical system topics Baker's map Biological applications of bifurcation theory Dynamical system (definition) Embodied Embedded Cognition Fibonacci numbers Fractals Gingerbreadman map Halo orbit List of types of systems theory Oscillation Postcognitivism Recurrent neural network Combinatorics and dynamical systems Synergetics Systemography Related scientists People in systems and control Dmitri Anosov Vladimir Arnold Nikolay Bogolyubov Andrey Kolmogorov Nikolay Krylov Jürgen Moser Yakov G. Sinai Stephen Smale Hillel Furstenberg Grigory Margulis Elon Lindenstrauss Notes Further reading External links Dynamic Systems Encyclopedia of Cognitive Science entry. Definition of dynamical system in MathWorld. DSWeb Dynamical Systems Magazine Dynamical systems Complex systems theory Computational fields of study
Dynamical systems theory
[ "Physics", "Mathematics", "Technology" ]
2,763
[ "Computational fields of study", "Mechanics", "Computing and society", "Dynamical systems" ]
991,459
https://en.wikipedia.org/wiki/Hopper%20crystal
A hopper crystal is a form of crystal, the shape of which resembles that of a pyramidal hopper container. The edges of hopper crystals are fully developed, but the interior spaces are not filled in. This results in what appears to be a hollowed out step lattice formation, as if someone had removed interior sections of the individual crystals. In fact, the "removed" sections never filled in, because the crystal was growing so rapidly that there was not enough time (or material) to fill in the gaps. The interior edges of a hopper crystal still show the crystal form characteristic to the specific mineral, and so appear to be a series of smaller and smaller stepped down miniature versions of the original crystal. Hoppering occurs when electrical attraction is higher along the edges of the crystal; this causes faster growth at the edges than near the face centers. This attraction draws the mineral molecules more strongly than the interior sections of the crystal, thus the edges develop more quickly. However, the basic physics of this type of growth is the same as that of dendrites but, because the anisotropy in the solid–liquid inter-facial energy is so large, the dendrite so produced exhibits a faceted morphology. Hoppering is common in many minerals, including lab-grown bismuth, galena, quartz (called skeletal or fenster crystals), gold, calcite, halite (salt), and water (ice). In 2017, Frito-Lay filed for (and later received) a patent for a salt cube hopper crystal. Because the shape increases surface area to volume, it allows people to taste more salt compared to the amount actually consumed. References "Hopper crystals" in A New Kind of Science by Stephen Wolfram, p. 993. External links Images of hopper crystals, Glendale Community College Earth Science Image Archive Crystals
Hopper crystal
[ "Chemistry", "Materials_science" ]
373
[ "Crystallography", "Crystals" ]
991,712
https://en.wikipedia.org/wiki/Eddington%20Medal
The Eddington Medal is awarded by the Royal Astronomical Society for investigations of outstanding merit in theoretical astrophysics. It is named after Sir Arthur Eddington. The medal was first awarded in 1953, and the frequency of the prize has varied over the years, at times being every one, two or three years. Since 2013 it has been awarded annually. Recipients See also List of astronomy awards List of physics awards List of prizes named after people References External links Winners Physics awards Awards established in 1953 Awards of the Royal Astronomical Society 1953 establishments in the United Kingdom Astrophysics
Eddington Medal
[ "Physics", "Astronomy", "Technology" ]
114
[ "Astronomy prizes", "Astrophysics", "Awards of the Royal Astronomical Society", "Science and technology awards", "Astronomical sub-disciplines", "Physics awards" ]
991,849
https://en.wikipedia.org/wiki/Hose
A hose is a flexible hollow tube or pipe designed to carry fluids from one location to another, often from a faucet or hydrant. Early hoses were made of leather, although modern hoses are typically made of rubber, canvas, and helically wound wire. Hoses may also be made from plastics such as polyvinyl chloride, polytetrafluoroethylene, and polyethylene terephthalate, or from metals such as stainless steel. See also Heated hose References Further reading Hydraulics
Hose
[ "Physics", "Chemistry" ]
110
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
992,412
https://en.wikipedia.org/wiki/Wireless%20distribution%20system
A wireless distribution system (WDS) is a system enabling the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the traditional requirement for a wired backbone to link them. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client frames across links between access points. An access point can be either a main, relay, or remote base station. A main base station is typically connected to the (wired) Ethernet. A relay base station relays data between remote base stations, wireless clients, or other relay stations; to either a main, or another relay base station. A remote base station accepts connections from wireless clients and passes them on to relay stations or to main stations. Connections between "clients" are made using MAC addresses. All base stations in a wireless distribution system must be configured to use the same radio channel, method of encryption (none, WEP, WPA or WPA2) and the same encryption keys. They may be configured to different service set identifiers (SSIDs). WDS also requires every base station to be configured to forward to others in the system. WDS may also be considered a repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). However, with the repeater method, throughput is halved for all clients connected wirelessly. This is because Wi-Fi is an inherently half duplex medium and therefore any Wi-Fi device functioning as a repeater must use the Store and forward method of communication. WDS may be incompatible between different products (even occasionally from the same vendor) since the IEEE 802.11-1999 standard does not define how to construct any such implementations or how stations interact to arrange for exchanging frames of this format. The IEEE 802.11-1999 standard merely defines the 4-address frame format that makes it possible. Technical WDS may provide two modes of access point-to-access point (AP-to-AP) connectivity: Wireless bridging, in which WDS APs (AP-to-AP on local routers AP) communicate only with each other and don't allow wireless stations (STA, also known as wireless clients) to access them Wireless repeating, in which APs (WDS on local routers) communicate with each other and with wireless STAs Two disadvantages to using WDS are: The maximum wireless effective throughput may be halved after the first retransmission (hop) being made. For example, in the case of two APs connected via WDS, and communication is made between a computer which is plugged into the Ethernet port of AP A and a laptop which is connected wirelessly to AP B. The throughput is halved, because AP B has to retransmit the information during the communication of the two sides. However, in the case of communications between a computer which is plugged into the Ethernet port of AP A and a computer which is plugged into the Ethernet port of AP B, the throughput is not halved since there is no need to retransmit the information. Dual band/radio APs may avoid this problem, by connecting to clients on one band/radio, and making a WDS network link with the other. Dynamically assigned and rotated encryption keys are usually not supported in a WDS connection. This means that dynamic Wi-Fi Protected Access (WPA) and other dynamic key assignment technology in most cases cannot be used, though WPA using pre-shared keys is possible. 
This is due to the lack of standardization in this field, which may be resolved with the upcoming 802.11s standard. As a result, only static WEP or WPA keys may be used in a WDS connection, including any STAs that associate to a WDS repeating AP. OpenWRT, a universal third party router firmware, supports WDS with WPA-PSK, WPA2-PSK, WPA-PSK/WPA2-PSK Mixed-Mode encryption modes. Recent Apple base stations allow WDS with WPA, though in some cases firmware updates are required. Firmware for the Renasis SAP36g super access point and most third party firmware for the Linksys WRT54G(S)/GL support AES encryption using WPA2-PSK mixed-mode security, and TKIP encryption using WPA-PSK, while operating in WDS mode. However, this mode may not be compatible with other units running stock or alternate firmware. Example Suppose one has a Wi-Fi-capable game console. This device needs to send one packet to a WAN host, and receive one packet in reply. Network 1: A wireless base station acting as a simple (non-WDS) wireless router. The packet leaves the game console, goes over-the-air to the router, which then transmits it across the WAN. One packet comes back, through the router, which transmits it wirelessly to the game console. Total packets sent over-the-air: 2. Network 2: Two wireless base stations employing WDS: WAN connects to the master base station. The master base station connects over-the-air to the remote base station. The Remote base station connects over-the-air to the game console. The game console sends one packet over-the-air to the remote base station, which forwards it over-the-air to the master base station, which forwards it to the WAN. The reply packet comes from the WAN to the master base station, over-the-air to the remote, and then over-the-air again to the game console. Total packets sent over-the-air: 4. Network 3: Two wireless base stations employing WDS, but this time the game console connects by Ethernet cable to the remote base station. One packet is sent from the game console over the Ethernet cable to the remote, from there by air to the master, and on to the WAN. Reply comes from WAN to master, over-the-air to remote, over cable to game console. Total packets sent over-the-air: 2. Notice that network 1 (non-WDS) and network 3 (WDS) send the same number of packets over-the-air. The only slowdown is the potential halving due to the half-duplex nature of Wi-Fi. Network 2 gets an additional halving because the remote base station uses double the air time because it is re-transmitting over-the-air packets that it has just received over-the-air. This is the halving that is usually attributed to WDS, but that halving only happens when the route through a base station uses over-the-air links on both sides of it. That does not always happen in a WDS, and can happen in non-WDS. Important Note: This "double hop" (one wireless hop from the main station to the remote station, and a second hop from the remote station to the wireless client [game console]) is not necessarily twice as slow. End to end latency introduced here is in the "store and forward" delay associated with the remote station forwarding packets. In order to accurately identify the true latency contribution of relaying through a wireless remote station vs. simply increasing the broadcast power of the main station, more comprehensive tests specific to the environment would be required. 
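The packet accounting in the example above can be expressed as a small calculation. The sketch below (illustrative Python, not part of any networking library) counts over-the-air transmissions for one request/reply pair by walking each network's path and counting the wireless hops in both directions.

```python
def over_the_air_packets(path):
    """Count over-the-air transmissions for one request plus one reply.

    `path` lists the links from the client to the WAN, each marked
    'air' or 'wire'; every wireless link carries the packet once in
    each direction.
    """
    wireless_hops = sum(1 for link in path if link == "air")
    return 2 * wireless_hops

# Network 1: game console -(air)- router -(wire)- WAN
print(over_the_air_packets(["air", "wire"]))           # 2

# Network 2: console -(air)- remote AP -(air)- main AP -(wire)- WAN
print(over_the_air_packets(["air", "air", "wire"]))    # 4

# Network 3: console -(wire)- remote AP -(air)- main AP -(wire)- WAN
print(over_the_air_packets(["wire", "air", "wire"]))   # 2
```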
See also Ad hoc wireless network Network bridge Wireless mesh network References External links Swallow-Wifi Wiki (WDS Network dashboard for DD-WRT devices) Alternative Wireless Signal-repeating Scheme with DD-WRT and AutoAP What is Third Generation Mesh? Review of three generations of mesh networking architectures. Wi-Fi Range Extender Vs Mesh Network System Explanation of how wifi extenders and mesh networks work. How to Extend Your Wireless Network with Tomato-Powered Routers Polarcloud.com (How Do I Use WDS) IEEE 802.11
Wireless distribution system
[ "Technology", "Engineering" ]
1,664
[ "Wireless networking", "Computer networks engineering" ]
992,421
https://en.wikipedia.org/wiki/Cisco%20PIX
Cisco PIX (Private Internet eXchange) was a popular IP firewall and network address translation (NAT) appliance. It was one of the first products in this market segment. In 2005, Cisco introduced the newer Cisco Adaptive Security Appliance (Cisco ASA), which inherited many of the PIX features, and in 2008 announced PIX end-of-sale. The PIX technology was sold in a blade, the FireWall Services Module (FWSM), for the Cisco Catalyst 6500 switch series and the 7600 Router series, but has reached end of support status as of September 26, 2007. PIX History PIX was originally conceived in early 1994 by John Mayes of Redwood City, California and designed and coded by Brantley Coile of Athens, Georgia. The PIX name is derived from its creators' aim of creating the functional equivalent of an IP PBX to solve the then-emerging registered IP address shortage. At a time when NAT was just being investigated as a viable approach, they wanted to conceal a block or blocks of IP addresses behind a single or multiple registered IP addresses, much as PBXs do for internal phone extensions. When they began, RFC 1597 and RFC 1631 were being discussed, but the now-familiar RFC 1918 had not yet been submitted. The design and testing were carried out in 1994 by John Mayes, Brantley Coile and Johnson Wu of Network Translation, Inc., with Brantley Coile being the sole software developer. Beta testing of PIX serial number 000000 was completed and first customer acceptance was on December 21, 1994 at KLA Instruments in San Jose, California. The PIX quickly became one of the leading enterprise firewall products and was awarded the Data Communications Magazine "Hot Product of the Year" award in January 1995. Shortly before Cisco acquired Network Translation in November 1995, Mayes and Coile hired two longtime associates, Richard (Chip) Howes and Pete Tenereillo, and shortly after the acquisition two more longtime associates, Jim Jordan and Tom Bohannon. Together they continued development on Finesse OS and the original version of the Cisco PIX Firewall, now known as the PIX "Classic". During this time, the PIX shared most of its code with another Cisco product, the LocalDirector. On January 28, 2008, Cisco announced the end-of-sale and end-of-life dates for all Cisco PIX Security Appliances, software, accessories, and licenses. The last day for purchasing Cisco PIX Security Appliance platforms and bundles was July 28, 2008. The last day to purchase accessories and licenses was January 27, 2009. Cisco ended support for Cisco PIX Security Appliance customers on July 29, 2013. In May 2005, Cisco introduced the ASA, which combines functionality from the PIX, VPN 3000 series and IPS product lines. The ASA series of devices run PIX code 7.0 and later. Through PIX OS release 7.x the PIX and the ASA use the same software images. Beginning with PIX OS version 8.x, the operating system code diverges, with the ASA using a Linux kernel and PIX continuing to use the traditional Finesse/PIX OS combination. Software The PIX runs a custom-written proprietary operating system originally called Finesse (Fast Internet Service Executive), but the software is known simply as PIX OS. Though classified as a network-layer firewall with stateful inspection, technically the PIX would more precisely be called a Layer 4, or Transport Layer, firewall, as its access is not restricted to Network Layer routing, but socket-based connections (a port and an IP address: port communications occur at Layer 4). 
By default it allows internal connections out (outbound traffic), and only allows inbound traffic that is a response to a valid request or is allowed by an Access Control List (ACL) or by a conduit. Administrators can configure the PIX to perform many functions including network address translation (NAT) and port address translation (PAT), as well as serving as a virtual private network (VPN) endpoint appliance. The PIX became the first commercially available firewall product to introduce protocol specific filtering with the introduction of the "fixup" command. The PIX "fixup" capability allows the firewall to apply additional security policies to connections identified as using specific protocols. Protocols for which specific fixup behaviors were developed include DNS and SMTP. The DNS fixup originally implemented a very simple but effective security policy; it allowed just one DNS response from a DNS server on the Internet (known as outside interface) for each DNS request from a client on the protected (known as inside) interface. "Inspect" has superseded "fixup" in later versions of PIX OS. The Cisco PIX was also one of the first commercially available security appliances to incorporate IPSec VPN gateway functionality. Administrators can manage the PIX via a command line interface (CLI) or via a graphical user interface (GUI). They can access the CLI from the serial console, telnet and SSH. GUI administration originated with version 4.1, and it has been through several incarnations: PIX Firewall Manager (PFM) for PIX OS versions 4.x and 5.x, which runs locally on a Windows NT client PIX Device Manager (PDM) for PIX OS version 6.x, which runs over https and requires Java Adaptive Security Device Manager (ASDM) for PIX OS version 7 and greater, which can run locally on a client or in reduced-functionality mode over HTTPS. Because Cisco acquired the PIX from Network Translation, the CLI originally did not align with the Cisco IOS syntax. Starting with version 7.0, the configuration became much more IOS-like. Hardware The original NTI PIX and the PIX Classic had cases that were sourced from OEM provider Appro. All flash cards and the early encryption acceleration cards, the PIX-PL and PIX-PL2, were sourced from Productivity Enhancement Products (PEP). Later models had cases from Cisco OEM manufacturers. The PIX was constructed using Intel-based/Intel-compatible motherboards; the PIX 501 used an Am5x86 processor, and all other standalone models used Intel 80486 through Pentium III processors. The PIX boots off a proprietary ISA flash memory daughtercard in the case of the NTI PIX, PIX Classic, 10000, 510, 520, and 535, and it boots off integrated flash memory in the case of the PIX 501, 506/506e, 515/515e, 525, and WS-SVC-FWM-1-K9. The latter is the part code for the PIX technology implemented in the Fire Wall Services Module, for the Catalyst 6500 and the 7600 Router. Adaptive Security Appliance (ASA) The Adaptive Security Appliance is a network firewall made by Cisco. It was introduced in 2005 to replace the Cisco PIX line. Along with stateful firewall functionality another focus of the ASA is Virtual Private Network (VPN) functionality. It also features intrusion prevention and Voice over IP. The ASA 5500 series was followed up by the 5500-X series. The 5500-X series focuses more on virtualization than it does on hardware acceleration security modules. History In 2005 Cisco released the 5510, 5520, and 5540 models. 
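Returning to the default security policy described above for the PIX (outbound connections allowed, inbound traffic admitted only as a reply to an existing connection), the behaviour can be sketched as a connection table keyed on the socket 4-tuple. This is a toy model of stateful inspection in Python, not PIX source code or configuration syntax.

```python
class StatefulFirewall:
    """Toy model: allow outbound flows; allow inbound only if it matches
    the reverse of a connection already initiated from inside."""

    def __init__(self):
        self.connections = set()   # (src_ip, src_port, dst_ip, dst_port)

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        # Inside hosts may open connections out; record the flow.
        self.connections.add((src_ip, src_port, dst_ip, dst_port))
        return True

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        # Permit only replies: the reverse 4-tuple must already exist.
        return (dst_ip, dst_port, src_ip, src_port) in self.connections

fw = StatefulFirewall()
fw.outbound("10.0.0.5", 50000, "198.51.100.7", 80)
print(fw.inbound("198.51.100.7", 80, "10.0.0.5", 50000))   # True: reply to a known flow
print(fw.inbound("203.0.113.9", 80, "10.0.0.5", 50000))    # False: unsolicited traffic
```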
Software The ASA continues using the PIX codebase, but when the ASA OS software transitioned from major version 7.X to 8.X, it moved from the Finesse/PIX OS operating system platform to the Linux operating system platform. It also integrates features of the Cisco IPS 4200 intrusion prevention system and the Cisco VPN 3000 Concentrator. Hardware The ASA continues the PIX lineage of Intel 80x86 hardware. Security vulnerabilities The Cisco PIX VPN product was hacked by the NSA-tied group Equation Group sometime before 2016. Equation Group developed a tool code-named BENIGNCERTAIN that reveals the pre-shared password(s) to the attacker. Equation Group was later hacked by another group called The Shadow Brokers, which published their exploit publicly, among others. According to Ars Technica, the NSA likely used this vulnerability to wiretap VPN connections for more than a decade, citing the Snowden leaks. The Cisco ASA brand was also hacked by Equation Group. The vulnerability requires that both SSH and SNMP are accessible to the attacker. The codename given to this exploit by the NSA was EXTRABACON. The bug and exploit were also leaked by The Shadow Brokers, in the same batch of exploits and backdoors. According to Ars Technica, the exploit can easily be made to work against more modern versions of Cisco ASA than what the leaked exploit can handle. On January 29, 2018, a security vulnerability in the Cisco ASA was disclosed by Cedric Halbronn of the NCC Group. A use-after-free bug in the Secure Sockets Layer (SSL) VPN functionality of the Cisco Adaptive Security Appliance (ASA) Software could allow an unauthenticated remote attacker to cause a reload of the affected system or to remotely execute code. See also Cisco LocalDirector References Pix Computer network security Server appliance
Cisco PIX
[ "Engineering" ]
1,955
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
992,611
https://en.wikipedia.org/wiki/Polonium-210
Polonium-210 (210Po, Po-210, historically radium F) is an isotope of polonium. It undergoes alpha decay to stable 206Pb with a half-life of 138.376 days (about 4.5 months), the longest half-life of all naturally occurring polonium isotopes (210–218Po). First identified in 1898, and also marking the discovery of the element polonium, 210Po is generated in the decay chain of uranium-238 and radium-226. 210Po is a prominent contaminant in the environment, mostly affecting seafood and tobacco. Its extreme toxicity is attributed to intense radioactivity, mostly due to alpha particles, which easily cause radiation damage, including cancer in surrounding tissue. The specific activity of 210Po is 166 TBq/g, i.e., 1.66 × 10^14 Bq/g. At the same time, 210Po is not readily detected by common radiation detectors, because its gamma-ray emission is very weak. Therefore, 210Po can be considered a quasi-pure alpha emitter. History In 1898, Marie and Pierre Curie discovered a strongly radioactive substance in pitchblende and determined that it was a new element; it was one of the first radioactive elements discovered. Having identified it as such, they named the element polonium after Marie's home country, Poland. Willy Marckwald discovered a similar radioactive activity in 1902 and named it radio-tellurium, and at roughly the same time, Ernest Rutherford identified the same activity in his analysis of the uranium decay chain and named it radium F (originally radium E). By 1905, Rutherford concluded that all these observations were due to the same substance, 210Po. Further discoveries and the concept of isotopes, first proposed in 1913 by Frederick Soddy, firmly placed 210Po as the penultimate step in the uranium series. In 1943, 210Po was studied as a possible neutron initiator in nuclear weapons, as part of the Dayton Project. In subsequent decades, concerns for the safety of workers handling 210Po led to extensive studies on its health effects. In the 1950s, scientists of the United States Atomic Energy Commission at Mound Laboratories, Ohio explored the possibility of using 210Po in radioisotope thermoelectric generators (RTGs) as a heat source to power satellites. A 2.5-watt atomic battery using 210Po was developed by 1958. However, the isotope plutonium-238 was chosen instead, as it has a longer half-life of 87.7 years. Polonium-210 was used to kill Russian dissident and ex-FSB officer Alexander V. Litvinenko in 2006, and was suspected as a possible cause of Yasser Arafat's death, following exhumation and analysis of his corpse in 2012–2013. The radioisotope may also have been used to kill Yuri Shchekochikhin, Lecha Islamov and Roman Tsepov. Decay properties 210Po is an alpha emitter that has a half-life of 138.376 days; it decays directly to stable 206Pb. The majority of the time, 210Po decays by emission of an alpha particle only, not by emission of an alpha particle and a gamma ray; about one in 100,000 decays results in the emission of a gamma ray. This low gamma ray production rate makes it more difficult to find and identify this isotope. Rather than gamma ray spectroscopy, alpha spectroscopy is the best method of measuring this isotope. Owing to its much shorter half-life, a milligram of 210Po emits as many alpha particles per second as 5 grams of 226Ra. A few curies of 210Po emit a blue glow caused by excitation of surrounding air. 210Po occurs in minute amounts in nature, where it is the penultimate isotope in the uranium series decay chain. It is generated via beta decay from 210Pb and 210Bi. 
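The 166 TBq/g figure quoted above can be reproduced from the half-life alone: the specific activity is A = λN = (ln 2 / t½)(N_A / M). The sketch below (plain Python with standard physical constants) carries out the arithmetic.

```python
import math

HALF_LIFE_S = 138.376 * 86400      # 138.376 days, in seconds
AVOGADRO = 6.02214076e23           # atoms per mole
MOLAR_MASS = 209.98                # g/mol for 210Po

decay_constant = math.log(2) / HALF_LIFE_S      # lambda, per second
atoms_per_gram = AVOGADRO / MOLAR_MASS
activity = decay_constant * atoms_per_gram      # becquerels per gram

print(activity)   # ~1.66e14 Bq/g, i.e. about 166 TBq/g
```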
The astrophysical s-process is terminated by the decay of 210Po, as the neutron flux is insufficient to lead to further neutron captures in the short lifetime of 210Po. Instead, 210Po alpha decays to 206Pb, which then captures more neutrons, eventually becoming 210Po again and repeating the cycle, thus consuming the remaining neutrons. This results in a buildup of lead and bismuth, and ensures that heavier elements such as thorium and uranium are produced only in the much faster r-process. Production Deliberate Although 210Po occurs in trace amounts in nature, it is not abundant enough (0.1 ppb) for extraction from uranium ore to be feasible. Instead, most 210Po is produced synthetically, through neutron bombardment of 209Bi in a nuclear reactor. This process converts 209Bi to 210Bi, which beta decays to 210Po with a five-day half-life. Through this method, 210Po is produced in Russia and shipped to the United States every month for commercial applications. By irradiating certain bismuth salts containing light element nuclei such as beryllium, a cascading (α,n) reaction can also be induced to produce 210Po in large quantities. Byproduct The production of polonium-210 is a downside to reactors cooled with lead-bismuth eutectic rather than pure lead. However, given the advantageous properties of the eutectic alloy, notably its much lower melting point, some proposed Generation IV reactor designs still rely on lead-bismuth. Applications A single gram of 210Po generates 140 watts of power. Because it emits many alpha particles, which are stopped within a very short distance in dense media and release their energy there, 210Po has been used as a lightweight heat source to power thermoelectric cells in artificial satellites. A 210Po heat source was also used in each of the Lunokhod rovers deployed on the surface of the Moon, to keep their internal components warm during the lunar nights. Some anti-static brushes, used for neutralizing static electricity on materials like photographic film, contain a few microcuries of 210Po as a source of charged particles. 210Po was also used in initiators for atomic bombs through the (α,n) reaction with beryllium. Small neutron sources reliant on the (α,n) reaction also usually use polonium as a convenient source of alpha particles, owing to its comparatively low gamma emissions (allowing easy shielding) and high specific activity. Hazards 210Po is extremely toxic; it and other polonium isotopes are some of the most radiotoxic substances to humans. One microgram of 210Po is more than enough to kill the average adult, making it roughly 250,000 times more toxic than hydrogen cyanide by weight. One gram of 210Po would hypothetically be enough to kill 50 million people and sicken another 50 million. This is a consequence of its ionizing alpha radiation, as alpha particles are especially damaging to organic tissues inside the body. However, 210Po does not pose a radiation hazard when contained outside the body, as the alpha particles it produces cannot penetrate the outer layer of dead skin cells. The toxicity of 210Po stems entirely from its radioactivity. It is not chemically toxic in itself, but the solubility of the element and of its salts in aqueous solution poses a hazard, because its spread throughout the body is facilitated in solution. Intake of 210Po occurs primarily through contaminated air, food, or water, as well as through open wounds. Once inside the body, 210Po concentrates in soft tissues (especially in the reticuloendothelial system) and the bloodstream. Its biological half-life is approximately 50 days.
In the environment, 210Po can accumulate in seafood. It has been detected in various organisms in the Baltic Sea, where it can propagate through, and thus contaminate, the food chain. 210Po is also known to contaminate vegetation, primarily originating from the decay of atmospheric radon-222 and absorption from soil. In particular, 210Po attaches to, and concentrates in, tobacco leaves. Elevated concentrations of 210Po in tobacco were documented as early as 1964, and cigarette smokers were thus found to be exposed to considerably greater doses of radiation from 210Po and its parent 210Pb. Heavy smokers may be exposed to the same amount of radiation (estimates vary from 100 µSv to 160 mSv per year) as individuals in Poland were from Chernobyl fallout traveling from Ukraine. As a result, 210Po is most dangerous when inhaled in cigarette smoke. References Isotopes of polonium Carcinogens Tobacco Radioisotope fuels
Polonium-210
[ "Chemistry", "Environmental_science" ]
1,755
[ "Carcinogens", "Toxicology", "Isotopes", "Isotopes of polonium" ]
7,018,709
https://en.wikipedia.org/wiki/File%20transfer
File transfer is the transmission of a computer file through a communication channel from one computer system to another. Typically, file transfer is mediated by a communications protocol. In the history of computing, numerous file transfer protocols have been designed for different contexts. Protocols A file transfer protocol is a convention that describes how to transfer files between two computing endpoints. In addition to the stream of bits from a file stored as a single unit in a file system, some protocols also send relevant metadata such as the filename, file size and timestamp – and even file-system permissions and file attributes. Some examples: FTP is an older cross-platform file transfer protocol SSH File Transfer Protocol (SFTP) is a file transfer protocol secured by the Secure Shell (SSH) protocol Secure copy (scp) is based on the Secure Shell (SSH) protocol HTTP can support file transfer BitTorrent, Gnutella and other distributed file transfer systems use peer-to-peer In Systems Network Architecture environments, Connect:Direct and XCOM Data Transport, traditionally built on LU 6.2, are used to transfer files Many instant messaging or LAN messenger systems support the ability to transfer files Computers may transfer files to peripheral devices such as USB flash drives Dial-up modem and null modem links used XMODEM, YMODEM, ZMODEM and similar protocols See also File sharing Managed file transfer Peer-to-peer file sharing Pull technology Push technology Sideloading References Internet terminology Network file transfer protocols
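As an illustration of how a client drives one of these protocols, the sketch below uses Python's standard-library ftplib to upload a file over FTP. It is a generic example: the host, credentials, and file names are placeholders, not values from the text.

```python
from ftplib import FTP

def upload(host, user, password, local_path, remote_name):
    """Upload a local file to an FTP server as a binary transfer."""
    with FTP(host) as ftp:            # connect on the default port (21)
        ftp.login(user, password)     # authenticate
        with open(local_path, "rb") as fp:
            # STOR is the FTP command that stores a file on the server.
            ftp.storbinary(f"STOR {remote_name}", fp)

# Hypothetical usage; substitute a real server and credentials:
# upload("ftp.example.com", "user", "secret", "report.pdf", "report.pdf")
```

Note that plain FTP sends credentials and data unencrypted; SFTP or scp (listed above) are the secured alternatives.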
File transfer
[ "Technology" ]
295
[ "Computing stubs", "Computing terminology", "Internet terminology", "Computer network stubs" ]
7,019,701
https://en.wikipedia.org/wiki/Air%20data%20inertial%20reference%20unit
An Air Data Inertial Reference Unit (ADIRU) is a key component of the integrated Air Data Inertial Reference System (ADIRS), which supplies air data (airspeed, angle of attack and altitude) and inertial reference (position and attitude) information to the pilots' electronic flight instrument system displays as well as other systems on the aircraft such as the engines, autopilot, aircraft flight control system and landing gear systems. An ADIRU acts as a single, fault-tolerant source of navigational data for both pilots of an aircraft. It may be complemented by a secondary attitude air data reference unit (SAARU), as in the Boeing 777 design. This device is used on various military aircraft as well as civilian airliners, starting with the Airbus A320 and Boeing 777. Description An ADIRS consists of up to three fault-tolerant ADIRUs located in the aircraft electronic rack, an associated control and display unit (CDU) in the cockpit and remotely mounted air data modules (ADMs). The No 3 ADIRU is a redundant unit that may be selected to supply data to either the commander's or the co-pilot's displays in the event of a partial or complete failure of either the No 1 or No 2 ADIRU. There is no cross-channel redundancy between the Nos 1 and 2 ADIRUs, as No 3 ADIRU is the only alternate source of air and inertial reference data. An inertial reference (IR) fault in ADIRU No 1 or 2 will cause a loss of attitude and navigation information on its associated primary flight display (PFD) and navigation display (ND) screens. An air data reference (ADR) fault will cause the loss of airspeed and altitude information on the affected display. In either case the information can only be restored by selecting the No 3 ADIRU. Each ADIRU comprises an ADR and an inertial reference (IR) component. Air data reference The air data reference (ADR) component of an ADIRU provides airspeed, Mach number, angle of attack, temperature and barometric altitude data. Ram air pressure and static pressures used in calculating airspeed are measured by small ADMs located as close as possible to the respective pitot and static pressure sensors. ADMs transmit their pressures to the ADIRUs through ARINC 429 data buses. Inertial reference The IR component of an ADIRU gives attitude, flight path vector, ground speed and positional data. The ring laser gyroscope is a core enabling technology in the system, and is used together with accelerometers, GPS and other sensors to provide raw data. The primary benefits of a ring laser over older mechanical gyroscopes are that it has no moving parts, it is rugged and lightweight, it is frictionless and it does not resist a change in precession. Complexity in redundancy Analysis of complex systems is itself so difficult as to be subject to errors in the certification process. Complex interactions between flight computers and ADIRUs can lead to counter-intuitive behaviour for the crew in the event of a failure. In the case of Qantas Flight 72, the captain switched the source of IR data from ADIRU1 to ADIRU3 following a failure of ADIRU1; however ADIRU1 continued to supply ADR data to the captain's primary flight display. In addition, the master flight control computer role was switched from PRIM1 to PRIM2, and then back to PRIM1, thereby creating a situation of uncertainty for the crew, who did not know which redundant systems they were relying upon.
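The fault tolerance described above ultimately rests on comparing redundant sources. The sketch below shows a generic mid-value-select voter of the kind widely used with triplicated avionics sensors; it illustrates the principle only and is not the actual ADIRU or flight-control-computer logic.

```python
def mid_value_select(a, b, c):
    """Return the median of three redundant sensor readings.

    With three sources, a single wildly wrong value can never be
    selected, because the median always lies between the two readings
    that agree more closely.
    """
    return sorted([a, b, c])[1]

# Example: one unit outputs an erroneous pitch angle; the voter ignores it.
print(mid_value_select(2.1, 47.0, 2.3))   # -> 2.3
```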
Reliance on redundancy of aircraft systems can also lead to delays in executing needed repairs, as airline operators rely on the redundancy to keep the aircraft system working without having to repair faults immediately. Failures and directives FAA Airworthiness directive 2000-07-27 On May 3, 2000, the FAA issued airworthiness directive 2000-07-27, addressing dual critical failures during flight, attributed to power supply issues affecting early Honeywell HG2030 and HG2050 ADIRU ring laser gyros used on several Boeing 737, 757, Airbus A319, A320, A321, A330, and A340 models. Airworthiness directive 2003-26-03 On 27 January 2004 the FAA issued airworthiness directive 2003-26-03 (later superseded by AD 2008-17-12), which called for modification to the mounting of ADIRU3 in Airbus A320 family aircraft to prevent failure and loss of critical attitude and airspeed data. Alitalia A320 On 25 June 2005, an Alitalia Airbus A320-200 registered as I-BIKE departed Milan with a defective ADIRU, as permitted by the Minimum Equipment List. While approaching London Heathrow Airport during deteriorating weather, another ADIRU failed, leaving only one operable. In the subsequent confusion the sole remaining unit was inadvertently reset, losing its reference heading and disabling several automatic functions. The crew was able to effect a safe landing after declaring a Pan-pan. Malaysia Airlines Flight 124 On 1 August 2005, a serious incident involving Malaysia Airlines Flight 124 occurred when an ADIRU fault in a Boeing 777-2H6ER (9M-MRG) flying from Perth to Kuala Lumpur International caused the aircraft to act on false indications, resulting in uncommanded manoeuvres. In that incident the incorrect data affected all planes of movement while the aircraft was climbing. The aircraft pitched up and climbed steeply, with the stall warning activated. The pilots recovered the aircraft with the autopilot disengaged and requested a return to Perth. During the return to Perth, both the left and right autopilots were briefly activated by the crew, but in both instances the aircraft pitched down and banked to the right. The aircraft was flown manually for the remainder of the flight and landed safely in Perth. There were no injuries and no damage to the aircraft. The ATSB found that the main probable cause of this incident was a latent software error which allowed the ADIRU to use data from a failed accelerometer. The US Federal Aviation Administration issued Emergency Airworthiness Directive (AD) 2005-18-51 requiring all 777 operators to install upgraded software to resolve the error. Qantas Flight 68 On 12 September 2006, Qantas Flight 68, Airbus A330 registration VH-QPA, from Singapore to Perth exhibited ADIRU problems but without causing any disruption to the flight. At an estimated position north of Learmonth, Western Australia, a NAV IR1 FAULT and then, 30 minutes later, a NAV ADR 1 FAULT notification were received on the ECAM, identifying navigation system faults in Inertial Reference Unit 1 and then in ADR 1 respectively. The crew reported to the later Qantas Flight 72 investigation, which involved the same airframe and ADIRU, that they had received numerous warning and caution messages which changed too quickly to be dealt with. While investigating the problem, the crew noticed a weak and intermittent ADR 1 FAULT light and elected to switch off ADR 1, after which they experienced no further problems. There was no impact on the flight controls throughout the event.
The ADIRU manufacturer's recommended maintenance procedures were carried out after the flight and system testing found no further fault. Jetstar Flight 7 On 7 February 2008, a similar aircraft (VH-EBC) operated by Qantas subsidiary Jetstar Airways was involved in a similar occurrence while conducting the JQ7 service from Sydney to Ho Chi Minh City, Vietnam. In this event, which occurred east of Learmonth, many of the same errors occurred in the ADIRU unit. The crew followed the relevant procedure applicable at the time and the flight continued without problems. The ATSB has yet to confirm if this event is related to the other Airbus A330 ADIRU occurrences. Airworthiness directive 2008-17-12 On 6 August 2008, the FAA issued airworthiness directive 2008-17-12, expanding on the requirements of the earlier AD 2003-26-03, which had been determined to be an insufficient remedy. In some cases it called for replacement of ADIRUs with newer models, but allowed 46 months from October 2008 to implement the directive. Qantas Flight 72 On 7 October 2008, Qantas Flight 72, using the same aircraft involved in the Flight 68 incident, departed Singapore for Perth. Some time into the flight, while cruising at 37,000 ft, a failure in the No. 1 ADIRU led to the autopilot automatically disengaging, followed by two sudden uncommanded pitch-down manoeuvres, according to the Australian Transport Safety Bureau (ATSB). The accident injured up to 74 passengers and crew, with injuries ranging from minor to serious. The aircraft was able to make an emergency landing without further injuries. The aircraft was equipped with a Northrop Grumman-made ADIRS, which investigators sent to the manufacturer for further testing. Qantas Flight 71 On 27 December 2008, Qantas Flight 71 from Perth to Singapore, a different Qantas A330-300 with registration VH-QPG, was involved in an incident at 36,000 feet north-west of Perth and south of Learmonth Airport at 1729 WST. The autopilot disconnected and the crew received an alert indicating a problem with ADIRU Number 1. Emergency Airworthiness Directive No 2009-0012-E On 15 January 2009, the European Aviation Safety Agency issued Emergency Airworthiness Directive No 2009-0012-E to address the above A330 and A340 Northrop Grumman ADIRU problem of incorrectly responding to a defective inertial reference. In the event of a NAV IR fault the directed crew response is now to "select OFF the relevant IR, select OFF the relevant ADR, and then turn the IR rotary mode selector to the OFF position." The effect is to ensure that the faulted IR is powered off so that it can no longer send erroneous data to other systems. Air France Flight 447 On 1 June 2009, Air France Flight 447, an Airbus A330 en route from Rio de Janeiro to Paris, crashed in the Atlantic Ocean after transmitting automated messages indicating faults with various equipment, including the ADIRU. While examining possibly related events of weather-related loss of ADIRS, the NTSB decided to investigate two similar cases on cruising A330s: TAM Flight 8091 (Miami–São Paulo, 21 May 2009, registered as PT-MVB) and Northwest Airlines Flight 8 (Hong Kong–Tokyo, 23 June 2009, registered as N805NW), each of which saw a sudden loss of airspeed data at cruise altitude and consequent loss of ADIRS control. Ryanair Flight 6606 On 9 October 2018, the Boeing 737-800 operating the flight from Porto Airport to Edinburgh Airport suffered a left ADIRU failure that resulted in the aircraft pitching up and climbing 600 feet.
The left ADIRU was put in ATT (attitude-only) mode in accordance with the Quick Reference Handbook, but it continued to display erroneous attitude information to the captain. The remainder of the flight was flown manually with an uneventful landing. The UK's AAIB released the final report on 31 October 2019, with the following recommendation: It is recommended that Boeing Commercial Aircraft amend the Boeing 737 Quick Reference Handbook to include a non-normal checklist for situations when pitch and roll comparator annunciations appear on the attitude display. See also Acronyms and abbreviations in avionics References Further reading Aerospace engineering Aircraft instruments Avionics Flight control systems Global Positioning System Navigational equipment Technology systems
Air data inertial reference unit
[ "Technology", "Engineering" ]
2,398
[ "Systems engineering", "Wireless locating", "Technology systems", "Avionics", "Measuring instruments", "Aircraft instruments", "nan", "Aerospace engineering", "Global Positioning System" ]
7,020,766
https://en.wikipedia.org/wiki/Spectral%20Genomics
Spectral Genomics, Inc. was a technology spin-off company from Baylor College of Medicine, selling aCGH (array comparative genomic hybridization) microarrays and related software. History The company was founded in February 2000 by BCM Technologies. Spectral licensed technology invented by its founders Alan Bradley, Ph.D., and Wei-wen Cai, Ph.D. The company raised $3.0 million in its first financing round in August 2001. In March 2004 the company raised an additional $9.4 million in its second financing round. In March 2005, GE Healthcare became the exclusive distributor for Spectral Genomics's products outside of North America. Spectral Genomics was acquired by PerkinElmer in May 2006, ending GE's distribution agreement. External links Corporate website Defunct biotechnology companies of the United States Microarrays
Spectral Genomics
[ "Chemistry", "Materials_science", "Biology" ]
163
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
7,021,041
https://en.wikipedia.org/wiki/Swan%20band
Swan bands are a characteristic of the spectra of carbon stars, comets and of burning hydrocarbon fuels. They are named for the Scottish physicist William Swan, who first studied the spectral analysis of radical diatomic carbon (C2) in 1856. Swan bands consist of several sequences of vibrational bands scattered throughout the visible spectrum. See also Spectroscopy References Emission spectroscopy Fire Astronomical spectroscopy Astrochemistry Carbon
Swan band
[ "Physics", "Chemistry", "Astronomy" ]
79
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Fire", "Emission spectroscopy", "Astronomy stubs", "Astrophysics", "Astrochemistry", "Astrophysics stubs", "Combustion", "Astronomical spectroscopy", "nan", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs", "Astr...
7,022,543
https://en.wikipedia.org/wiki/Bray%E2%80%93Liebhafsky%20reaction
The Bray–Liebhafsky reaction is a chemical clock first described by William C. Bray in 1921 and the first oscillating reaction found in a stirred homogeneous solution. He investigated the role of iodate (IO3−), the anion of iodic acid, in the catalytic conversion of hydrogen peroxide to oxygen and water. He observed that the concentration of iodine molecules oscillated periodically and that hydrogen peroxide was consumed during the reaction. An increase in temperature shortens the oscillation period, which is on the order of hours. This oscillating reaction, consisting of free radical and non-radical steps, was investigated further by his student Herman A. Liebhafsky, hence the name Bray–Liebhafsky reaction. During this period, most chemists rejected the phenomenon and tried to explain the oscillation by invoking heterogeneous impurities. A fundamental property of this system is that hydrogen peroxide has a redox potential which enables the simultaneous oxidation of iodine to iodate: 5 H2O2 + I2 → 2 IO3− + 2 H+ + 4 H2O and the reduction of iodate back to iodine: 5 H2O2 + 2 IO3− + 2 H+ → I2 + 5 O2 + 6 H2O The system oscillates between these two reactions, causing periodic jumps in the iodine concentration and in oxygen production. The net reaction is the decomposition of hydrogen peroxide: 2 H2O2 → 2 H2O + O2 which proceeds only in the presence of the iodine species (IO3− and I2) acting as catalyst. References Further reading Name reactions
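Oscillations of this kind can be reproduced numerically from simple autocatalytic rate laws. The sketch below integrates a generic two-species autocatalytic oscillator (a Lotka-type scheme) often used to illustrate chemical clocks; it is not the actual Bray–Liebhafsky mechanism, and the rate constants and initial concentrations are arbitrary.

```python
from scipy.integrate import solve_ivp

def oscillator(t, y, k1=1.0, k2=1.0, k3=1.0):
    """Generic autocatalytic scheme: A + X -> 2X, X + Y -> 2Y, Y -> P,
    with [A] held constant (absorbed into k1); y = [X, Y]."""
    x, y2 = y
    return [k1 * x - k2 * x * y2,
            k2 * x * y2 - k3 * y2]

sol = solve_ivp(oscillator, (0.0, 50.0), [1.5, 0.5], max_step=0.05)
# The two concentrations rise and fall periodically instead of settling
# to a steady state, mimicking the periodic behaviour Bray observed.
print(sol.y[0][-1], sol.y[1][-1])
```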
Bray–Liebhafsky reaction
[ "Chemistry" ]
314
[ "Name reactions" ]
7,022,979
https://en.wikipedia.org/wiki/Bayesian%20inference%20in%20phylogeny
Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, the prior and the likelihood model. Bayesian inference was introduced into molecular phylogenetics in the 1990s by three independent groups: Bruce Rannala and Ziheng Yang in Berkeley, Bob Mau in Madison, and Shuying Li at the University of Iowa, the last two being PhD students at the time. The approach has become very popular since the release of the MrBayes software in 2001, and is now one of the most popular methods in molecular phylogenetics. Bayesian inference of phylogeny background and bases Bayesian inference refers to a probabilistic method developed by Reverend Thomas Bayes based on Bayes' theorem. Published posthumously in 1763, it was the first expression of inverse probability and the basis of Bayesian inference. Independently, unaware of Bayes' work, Pierre-Simon Laplace developed Bayes' theorem in 1774. Bayesian inference, or the inverse probability method, was the standard approach in statistical thinking until the early 1900s, before R. A. Fisher developed what is now known as classical/frequentist/Fisherian inference. Computational difficulties and philosophical objections had prevented the widespread adoption of the Bayesian approach until the 1990s, when Markov chain Monte Carlo (MCMC) algorithms revolutionized Bayesian computation. The Bayesian approach to phylogenetic reconstruction combines the prior probability of a tree P(A) with the likelihood of the data P(B|A) to produce a posterior probability distribution on trees P(A|B). The posterior probability of a tree will be the probability that the tree is correct, given the prior, the data, and the correctness of the likelihood model. MCMC methods can be described in three steps: first, using a stochastic mechanism, a new state for the Markov chain is proposed. Secondly, the probability of accepting this new state is calculated. Thirdly, a random number uniformly distributed on (0,1) is drawn; if this value is less than the acceptance probability, the new state is accepted and the state of the chain is updated. This process is run thousands or millions of times. The number of times a single tree is visited during the course of the chain is an approximation of its posterior probability. Some of the most common algorithms used in MCMC methods include the Metropolis–Hastings algorithm, the Metropolis-coupled MCMC (MC³) and the LOCAL algorithm of Larget and Simon. Metropolis–Hastings algorithm One of the most common MCMC methods used is the Metropolis–Hastings algorithm, a modified version of the original Metropolis algorithm. It is a widely used method to sample randomly from complicated and multi-dimensional probability distributions. The Metropolis algorithm is described in the following steps: An initial tree, Ti, is randomly selected. A neighbour tree, Tj, is selected from the collection of trees. The ratio, R, of the probabilities (or probability density functions) of Tj and Ti is computed as follows: R = f(Tj)/f(Ti) If R ≥ 1, Tj is accepted as the current tree. If R < 1, Tj is accepted as the current tree with probability R, otherwise Ti is kept. At this point the process is repeated from Step 2 N times. The algorithm keeps running until it reaches an equilibrium distribution.
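A minimal Python sketch of this sampler, targeting the unnormalized two-taxon JC69 branch-length posterior h(t) described under "Assessing convergence" below. The site counts, prior rate, window width, and starting point are arbitrary illustrative values.

```python
import math
import random

def log_h(t, n1=70, n2=30, lam=10.0):
    """Unnormalized log-posterior for a two-taxon JC69 branch length t:
    an exponential(lam) prior times the likelihood for n1 constant and
    n2 variable sites (constant factors dropped)."""
    if t <= 0.0:
        return float("-inf")
    e = math.exp(-4.0 * t / 3.0)
    return (-lam * t
            + n1 * math.log(0.25 + 0.75 * e)
            + n2 * math.log(0.25 - 0.25 * e))

def metropolis(t0, steps, w):
    """Random-walk Metropolis with a reflected uniform proposal window."""
    t, chain = t0, []
    for _ in range(steps):
        t_star = abs(t + random.uniform(-w, w))   # propose a new length
        delta = log_h(t_star) - log_h(t)          # log acceptance ratio
        if random.random() < math.exp(min(0.0, delta)):
            t = t_star                            # accept; otherwise keep t
        chain.append(t)
    return chain

chain = metropolis(t0=0.5, steps=20000, w=0.1)
burned = chain[5000:]                             # discard burn-in
print(sum(burned) / len(burned))                  # crude posterior-mean estimate
```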
The algorithm also assumes that the probability of proposing a new tree Tj when we are at the old tree state Ti is the same as the probability of proposing Ti when we are at Tj. When this is not the case, Hastings corrections are applied. The aim of the Metropolis–Hastings algorithm is to produce a collection of states with a determined distribution until the Markov process reaches a stationary distribution. The algorithm has two components: A potential transition from one state to another (i → j) using a transition probability function qi,j Movement of the chain to state j with probability αi,j, remaining in i with probability 1 – αi,j. Metropolis-coupled MCMC The Metropolis-coupled MCMC algorithm (MC³) has been proposed to address a practical concern: when the target distribution has multiple local peaks separated by low valleys, as is known to occur in tree space, the Markov chain has difficulty moving between peaks. This is the case during heuristic tree search under maximum parsimony (MP), maximum likelihood (ML), and minimum evolution (ME) criteria, and the same can be expected for stochastic tree search using MCMC. This problem results in samples failing to approximate the posterior density correctly. MC³ improves the mixing of Markov chains in the presence of multiple local peaks in the posterior density. It runs multiple (m) chains in parallel, each for n iterations and with different stationary distributions πj(θ), j = 1, 2, ..., m, where the first one, π1(θ) = π(θ), is the target density, while πj(θ), j = 2, 3, ..., m, are chosen to improve mixing. For example, one can choose incremental heating of the form: πj(θ) = π(θ)^(1/[1 + λ(j − 1)]), λ > 0, so that the first chain is the cold chain with the correct target density, while chains 2, 3, ..., m are heated chains. Note that raising the density π(θ) to a power 1/T with T > 1 has the effect of flattening out the distribution, similar to heating a metal. In such a distribution, it is easier to traverse between peaks (separated by valleys) than in the original distribution. After each iteration, a swap of states between two randomly chosen chains is proposed through a Metropolis-type step. Let θ(j) be the current state in chain j, j = 1, 2, ..., m. A swap between the states of chains i and j is accepted with probability: α = min(1, [πi(θ(j)) πj(θ(i))] / [πi(θ(i)) πj(θ(j))]) At the end of the run, output from only the cold chain is used, while that from the hot chains is discarded. Heuristically, the hot chains will visit the local peaks rather easily, and swapping states between chains will let the cold chain occasionally jump valleys, leading to better mixing. However, if the stationary distributions of the chains being swapped are too dissimilar, proposed swaps will seldom be accepted. This is the reason for using several chains which differ only incrementally. An obvious disadvantage of the algorithm is that m chains are run while only one chain is used for inference. For this reason, MC³ is ideally suited for implementation on parallel machines, since each chain will in general require the same amount of computation per iteration. LOCAL algorithm of Larget and Simon The LOCAL algorithm offers a computational advantage over previous methods and demonstrates that a Bayesian approach is able to assess uncertainty in a computationally practical way for larger trees. The LOCAL algorithm is an improvement of the GLOBAL algorithm presented in Mau, Newton and Larget (1999), in which all branch lengths are changed in every cycle. The LOCAL algorithm modifies the tree by selecting an internal branch of the tree at random. The nodes at the ends of this branch are each connected to two other branches. One of each pair is chosen at random.
Imagine taking these three selected edges and stringing them like a clothesline from left to right, where the direction (left/right) is also selected at random. The two endpoints of the first branch selected will have a sub-tree hanging like a piece of clothing strung to the line. The algorithm proceeds by multiplying the three selected branches by a common random amount, akin to stretching or shrinking the clothesline. Finally the leftmost of the two hanging sub-trees is disconnected and reattached to the clothesline at a location selected uniformly at random. This would be the candidate tree. Suppose we began by selecting an internal branch and, at random, one branch from each side of it, and that we oriented these branches. Let m be the current length of the clothesline (the sum of the three selected branch lengths). We select the new length to be m* = m·exp(λ(U − 0.5)), where U is a uniform random variable on (0, 1). Then for the LOCAL algorithm, the acceptance probability can be computed to be: h(T*)/h(T) × (m*/m)³ where h is the unnormalized posterior density, T* the candidate tree and T the current tree. Assessing convergence To estimate a branch length of a 2-taxon tree under JC, in which n1 sites are unvaried and n2 are variable, assume an exponential prior distribution with rate λ. The density is p(t) = λe^(−λt). The probabilities of the possible site patterns are: (1/4)(1/4 + (3/4)e^(−4t/3)) for unvaried sites, and (1/4)(1/4 − (1/4)e^(−4t/3)) for variable sites. Thus the unnormalized posterior distribution is: h(t) = (1/4)^(n1+n2) (1/4 + (3/4)e^(−4t/3))^n1 (1/4 − (1/4)e^(−4t/3))^n2 λe^(−λt) or, dropping constant factors, h(t) ∝ (1/4 + (3/4)e^(−4t/3))^n1 (1/4 − (1/4)e^(−4t/3))^n2 e^(−λt) Update the branch length by choosing a new value uniformly at random from a window of half-width w centered at the current value: t* = |t + U|, where U is uniformly distributed between −w and w. The acceptance probability is: h(t*)/h(t) As a numerical experiment, one fixes n1 and n2, compares results for a small and a large value of w, and in each case begins from some initial length and updates it many times. Maximum parsimony and maximum likelihood There are many approaches to reconstructing phylogenetic trees, each with advantages and disadvantages, and there is no straightforward answer to "what is the best method?". Maximum parsimony (MP) and maximum likelihood (ML) are traditional methods widely used for the estimation of phylogenies and both use character information directly, as Bayesian methods do. Maximum parsimony recovers one or more optimal trees based on a matrix of discrete characters for a certain group of taxa and it does not require a model of evolutionary change. MP gives the simplest explanation for a given set of data, reconstructing a phylogenetic tree that includes as few changes across the sequences as possible. The support of the tree branches is represented by bootstrap percentages. For the same reason that it has been widely used, its simplicity, MP has also received criticism and has been pushed into the background by ML and Bayesian methods. MP presents several problems and limitations. As shown by Felsenstein (1978), MP might be statistically inconsistent, meaning that as more and more data (e.g. sequence length) are accumulated, results can converge on an incorrect tree and lead to long branch attraction, a phylogenetic phenomenon where taxa with long branches (numerous character state changes) tend to appear more closely related in the phylogeny than they really are. For morphological data, recent simulation studies suggest that parsimony may be less accurate than trees built using Bayesian approaches, potentially due to overprecision, although this has been disputed. Studies using novel simulation methods have demonstrated that differences between inference methods result from the search strategy and consensus method employed, rather than the optimization used.
As in maximum parsimony, maximum likelihood will evaluate alternative trees. However, it considers the probability of each tree explaining the given data based on a model of evolution. In this case, the tree with the highest probability of explaining the data is chosen over the other ones. In other words, it compares how well different trees predict the observed data. The introduction of a model of evolution in ML analyses presents an advantage over MP, as the probability of nucleotide substitutions and the rates of these substitutions are taken into account, explaining the phylogenetic relationships of taxa in a more realistic way. An important consideration in this method is branch length, which parsimony ignores, with changes being more likely to happen along long branches than short ones. This approach might eliminate long branch attraction and explains the greater consistency of ML over MP. Although considered by many to be the best approach to inferring phylogenies from a theoretical point of view, ML is computationally intensive and it is almost impossible to explore all trees as there are too many. Bayesian inference also incorporates a model of evolution, and its main advantages over MP and ML are that it is computationally more efficient than traditional methods, it quantifies and addresses the source of uncertainty, and it is able to incorporate complex models of evolution. Pitfalls and controversies Bootstrap values vs posterior probabilities. It has been observed that bootstrap support values, calculated under parsimony or maximum likelihood, tend to be lower than the posterior probabilities obtained by Bayesian inference. This leads to a number of questions such as: Do posterior probabilities lead to overconfidence in the results? Are bootstrap values more robust than posterior probabilities? One fact underlying this controversy is that all data are used during Bayesian analysis and the calculation of posterior probabilities, while the nature of bootstrapping means that most bootstrap replicates will be missing some of the original data. As a result, bipartitions (branches) supported by relatively few characters in the dataset may receive very high posterior probabilities but moderate or even low bootstrap support, as many of the bootstrap replicates don't contain enough of the critical characters to retrieve the bipartition. Controversy of using prior probabilities. Using prior probabilities for Bayesian analysis has been seen by many as an advantage, as it provides a way of incorporating information from sources other than the data being analyzed. However, when such external information is lacking, one is forced to use a prior even if it is impossible to use a statistical distribution to represent total ignorance. It is also a concern that Bayesian posterior probabilities may reflect subjective opinions when the prior is arbitrary and subjective. Model choice. The results of a Bayesian analysis of a phylogeny are directly correlated to the model of evolution chosen, so it is important to choose a model that fits the observed data; otherwise inferences in the phylogeny will be erroneous. Many scientists have raised questions about the interpretation of Bayesian inference when the model is unknown or incorrect. For example, an oversimplified model might give higher posterior probabilities. MrBayes software MrBayes is a free software tool that performs Bayesian inference of phylogeny. It was originally written by John P. Huelsenbeck and Frederik Ronquist in 2001.
As Bayesian methods increased in popularity, MrBayes became one of the programs of choice for many molecular phylogeneticists. It is offered for Macintosh, Windows, and UNIX operating systems and it has a command-line interface. The program uses the standard MCMC algorithm as well as the Metropolis-coupled MCMC variant. MrBayes reads aligned matrices of sequences (DNA or amino acids) in the standard NEXUS format. MrBayes uses MCMC to approximate the posterior probabilities of trees. The user can change assumptions of the substitution model, priors and the details of the MC³ analysis. It also allows the user to remove and add taxa and characters to the analysis. The program includes, among several nucleotide models, the most standard model of DNA substitution, the 4×4 model, also called JC69, which assumes that changes across nucleotides occur with equal probability. It also implements a number of 20×20 models of amino acid substitution, and codon models of DNA substitution. It offers different methods for relaxing the assumption of equal substitution rates across nucleotide sites. MrBayes is also able to infer ancestral states, accommodating uncertainty in the phylogenetic tree and model parameters. MrBayes 3 was a completely reorganized and restructured version of the original MrBayes. The main novelty was the ability of the software to accommodate heterogeneity of data sets. This new framework allows the user to mix models and take advantage of the efficiency of Bayesian MCMC analysis when dealing with different types of data (e.g. protein, nucleotide, and morphological). It uses Metropolis-coupled MCMC by default. MrBayes 3.2 was released in 2012. This version allows the users to run multiple analyses in parallel. It also provides faster likelihood calculations and allows these calculations to be delegated to graphics processing units (GPUs). Version 3.2 provides wider output options compatible with FigTree and other tree viewers. List of phylogenetics software Several phylogenetics packages infer phylogenies under a Bayesian framework; some of them do not use exclusively Bayesian methods. Applications Bayesian inference has been used extensively by molecular phylogeneticists for a wide number of applications. Some of these include: Inference of phylogenies. Inference and evaluation of uncertainty of phylogenies. Inference of ancestral character state evolution. Inference of ancestral areas. Molecular dating analysis. Modelling the dynamics of species diversification and extinction. Elucidating patterns in pathogen dispersal. Inference of phenotypic trait evolution. References External links MrBayes official website BEAST official website Computational phylogenetics Phylogeny
Bayesian inference in phylogeny
[ "Biology" ]
3,307
[ "Bioinformatics", "Phylogenetics", "Computational phylogenetics", "Genetics techniques" ]
7,023,098
https://en.wikipedia.org/wiki/ER%20oxidoreductin
ER oxidoreductin 1 (Ero1) is an oxidoreductase enzyme that catalyses the formation and isomerization of protein disulfide bonds in the endoplasmic reticulum (ER) of eukaryotes. Ero1 is a conserved luminal glycoprotein that is tightly associated with the ER membrane, and is essential for the oxidation of protein dithiols. Since disulfide bond formation is an oxidative process, the major pathway of its catalysis has evolved to utilise oxidoreductases, which become reduced during the thiol-disulfide exchange reactions that oxidise the cysteine thiol groups of nascent polypeptides. Ero1 is required for the introduction of oxidising equivalents into the ER and their direct transfer to protein disulfide isomerase (PDI), thereby ensuring the correct folding and assembly of proteins that contain disulfide bonds in their native state. Ero1 exists in two isoforms: Ero1-α and Ero1-β. Ero1-α is mainly induced by hypoxia (via HIF-1), whereas Ero1-β is mainly induced by the unfolded protein response (UPR). During endoplasmic reticulum stress (such as occurs in beta cells of the pancreas or in macrophages during atherosclerosis), CHOP can induce activation of Ero1, causing calcium release from the endoplasmic reticulum into the cytoplasm and resulting in apoptosis. Homologues of the Saccharomyces cerevisiae Ero1 proteins have been found in all eukaryotic organisms examined, and contain seven cysteine residues that are absolutely conserved, including three that form the sequence Cys–X–X–Cys–X–X–Cys (where X can be any residue). The mechanism of thiol–disulfide exchange between oxidoreductases The mechanism of thiol–disulfide exchange between oxidoreductases is understood to begin with a nucleophilic attack on the sulfur atoms of a disulfide bond in the oxidised partner by a thiolate anion derived from a reactive cysteine in a reduced partner. This generates a mixed disulfide intermediate, and is followed by a second, this time intramolecular, nucleophilic attack by the remaining thiolate anion in the formerly reduced partner, to liberate both oxidoreductases. The balance of evidence discussed thus far supports a model in which oxidising equivalents are sequentially transferred from Ero1 via a thiol–disulfide exchange reaction to PDI, with PDI then undergoing a thiol–disulfide exchange with the nascent polypeptide, thereby enabling the formation of disulfide bonds within the nascent polypeptide. References Biomolecules Enzymes
ER oxidoreductin
[ "Chemistry", "Biology" ]
646
[ "Natural products", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Molecular biology" ]
7,024,176
https://en.wikipedia.org/wiki/Casson%20handle
In 4-dimensional topology, a branch of mathematics, a Casson handle is a 4-dimensional topological 2-handle constructed by an infinite procedure. They are named for Andrew Casson, who introduced them in about 1973. They were originally called "flexible handles" by Casson himself, and Michael Freedman introduced the name "Casson handle" by which they are known today. In his work, Freedman showed that Casson handles are topological 2-handles, and used this to classify simply connected compact topological 4-manifolds. Motivation In the proof of the h-cobordism theorem, the following construction is used. Given a circle in the boundary of a manifold, we would often like to find a disk embedded in the manifold whose boundary is the given circle. If the manifold is simply connected then we can find a map from a disc to the manifold with boundary the given circle, and if the manifold is of dimension at least 5 then by putting this disc in "general position" it becomes an embedding. The number 5 appears for the following reason: submanifolds of dimension m and n in general position do not intersect provided the dimension of the manifold containing them is greater than m + n. In particular, a disc (of dimension 2) in general position will have no self-intersections inside a manifold of dimension greater than 2 + 2. If the manifold is 4-dimensional, this does not work: the problem is that a disc in general position may have double points where two points of the disc have the same image. This is the main reason why the usual proof of the h-cobordism theorem only works for cobordisms whose boundary has dimension at least 5. We can try to get rid of these double points as follows. Draw a line on the disc joining two points with the same image. If the image of this line is the boundary of an embedded disc (called a Whitney disc), then it is easy to remove the double point. However this argument seems to be going round in circles: in order to eliminate a double point of the first disc, we need to construct a second embedded disc, whose construction involves exactly the same problem of eliminating double points. Casson's idea was to iterate this construction an infinite number of times, in the hope that the problems about double points will somehow disappear in the infinite limit. Construction A Casson handle has a 2-dimensional skeleton, which can be constructed as follows. Start with a 2-disc. Identify a finite number of pairs of points in the disc. For each pair of identified points, choose a path in the disc joining these points, and construct a new disc with boundary this path. (So we add a disc for each pair of identified points.) Repeat steps 2–3 on each new disc. We can represent these skeletons by rooted trees such that each point is joined to only a finite number of other points: the tree has a point for each disc, and a line joining points if the corresponding discs intersect in the skeleton. A Casson handle is constructed by "thickening" the 2-dimensional construction above to give a 4-dimensional object: we replace each disc by a copy of D2 × R2. Informally we can think of this as taking a small neighborhood of the skeleton (thought of as embedded in some 4-manifold). There are some minor extra subtleties in doing this: we need to keep track of some framings, and intersection points now have an orientation. Casson handles correspond to rooted trees as above, except that now each vertex has a sign attached to it to indicate the orientation of the double point.
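The correspondence with signed rooted trees is easy to make concrete in code. The sketch below is purely an illustrative data structure (the class and function names are invented for this example), representing each disc of the skeleton as a vertex carrying the sign of its double point.

```python
from dataclasses import dataclass, field

@dataclass
class Disc:
    """A vertex of the signed rooted tree encoding a Casson handle.

    sign is +1 or -1, the orientation of the double point this disc
    was added to fix; children are the discs added at the next stage."""
    sign: int
    children: list = field(default_factory=list)

def half_infinite_line(depth, sign=+1):
    """Finite truncation of the tree for the simplest exotic Casson
    handle: a half-infinite line of vertices, all of the same sign."""
    root = Disc(sign)
    node = root
    for _ in range(depth):
        node.children.append(Disc(sign))
        node = node.children[0]
    return root

tree = half_infinite_line(5)   # five stages of the infinite construction
```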
In this correspondence, we may as well assume that the tree has no finite branches, as finite branches can be "unravelled" and so make no difference. The simplest exotic Casson handle corresponds to the tree which is just a half-infinite line of points (with all signs the same). It is diffeomorphic to D2 × R2 with a cone over the Whitehead continuum removed. There is a similar description of more complicated Casson handles, with the Whitehead continuum replaced by a similar but more complicated set. Structure Freedman's main theorem about Casson handles states that they are all homeomorphic to D2 × R2; or in other words they are topological 2-handles. In general they are not diffeomorphic to D2 × R2, as follows from Donaldson's theorem, and there are uncountably many different diffeomorphism types of Casson handles. However the interior of a Casson handle is diffeomorphic to R4; Casson handles differ from standard 2-handles only in the way the boundary is attached to the interior. Freedman's structure theorem can be used to prove the h-cobordism theorem for 5-dimensional topological cobordisms, which in turn implies the 4-dimensional topological Poincaré conjecture. References 4-manifolds Geometric topology
Casson handle
[ "Mathematics" ]
979
[ "Topology", "Geometric topology" ]
7,024,760
https://en.wikipedia.org/wiki/Czech%20Hydrometeorological%20Institute
The Czech Hydrometeorological Institute (CHMI; Český hydrometeorologický ústav) is the central state office of the Czech Republic in the fields of air quality, meteorology, climatology and hydrology. It is an organization established by the Ministry of the Environment of the Czech Republic. The head office and centralized workplaces of the CHMI, including the data processing, telecommunication and technical services, are located at the Institute's own campus in Prague. History The National Meteorological Institute was founded in 1919, shortly after Czechoslovakia was established at the end of World War I. On 1 January 1954, the National Meteorological Institute was united with the hydrology service and the Czech Hydrometeorological Institute was established. Its charter was amended in 1994 and in 1995 by the Ministry of the Environment of the Czech Republic. Structure The CHMI is made up of three specialized sections (the meteorology and climatology section, the hydrology section, and the air quality section), two support sections (the finance and administration section and the information technology (IT) section), and, finally, the director section. In addition to the central office in Prague-Komořany, the CHMI has regional offices (branches) in six other Czech cities; not all sections are represented in each branch. Those other offices are in Brno, Ostrava, Plzeň, Ústí nad Labem, Hradec Králové, and České Budějovice. Air pollution dispersion modelling activities The Air Quality division has seven departments: Air Quality Information System Emission and Sources Modelling and Expertise Pool National Inventorization System Air Quality Monitoring Central Air Quality Laboratory Calibration Laboratory The work of the Modelling and Expertise Pool department is focused upon: the development of air pollution dispersion models; the application of such models in the preparation of expert reports and opinions; forecasts for air quality control; the processing of operating information on pollutant concentrations obtained by the Airborne Monitoring section. The SYMOS97 air pollution dispersion model was developed at the CHMI. It models the dispersion of continuous, neutral or buoyant plumes from single or multiple point, area or line sources. It can handle complex terrain and it can also be used to simulate the dispersion of cooling tower plumes. See also List of atmospheric dispersion models FMI, the Finnish Meteorological Institute KNMI, the Royal Dutch Meteorological Institute NILU, the Norwegian Institute for Air Research Swedish Meteorological and Hydrological Institute Royal Meteorological Society References External links CMHI website (English version) Governmental meteorological agencies in Europe Science and technology in the Czech Republic Environment of the Czech Republic Atmospheric dispersion modeling Air pollution Science and technology in Czechoslovakia 1954 establishments in Czechoslovakia Organizations established in 1954
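Dispersion models such as SYMOS97 are built around plume formulas. The sketch below evaluates the textbook Gaussian plume equation for a continuous point source with ground reflection; it is a generic illustration, not the SYMOS97 model itself, and the emission rate, wind speed, stack height, and dispersion coefficients are arbitrary.

```python
import math

def gaussian_plume(q, u, sigma_y, sigma_z, y, z, h):
    """Concentration (g/m^3) from a continuous point source.

    q: emission rate (g/s); u: wind speed (m/s); h: effective stack
    height (m); sigma_y, sigma_z: lateral and vertical dispersion
    coefficients (m) at the downwind distance of interest."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative values: 10 g/s source, 4 m/s wind, 50 m stack, and
# sigma_y = 80 m, sigma_z = 40 m at the chosen downwind distance.
print(gaussian_plume(10.0, 4.0, 80.0, 40.0, y=0.0, z=0.0, h=50.0))
```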
Czech Hydrometeorological Institute
[ "Chemistry", "Engineering", "Environmental_science" ]
546
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
10,849,414
https://en.wikipedia.org/wiki/Lever%20rule
In chemistry, the lever rule is a formula used to determine the mole fraction (xi) or the mass fraction (wi) of each phase of a binary equilibrium phase diagram. It can be used to determine the fraction of liquid and solid phases for a given binary composition and a temperature between the liquidus and solidus lines. In an alloy or a mixture with two phases, α and β, which themselves contain two elements, A and B, the lever rule states that the mass fraction of the α phase is wα = (wBβ − wB) / (wBβ − wBα) where wBα is the mass fraction of element B in the α phase, wBβ is the mass fraction of element B in the β phase, and wB is the mass fraction of element B in the entire alloy or mixture, all at some fixed temperature or pressure. Derivation Suppose an alloy at an equilibrium temperature T consists of mass fraction wB of element B. Suppose also that at temperature T the alloy consists of two phases, α and β, for which the α phase consists of mass fraction wBα of element B, and the β phase consists of mass fraction wBβ. Let the mass of the α phase in the alloy be mα, so that the mass of the β phase is m − mα, where m is the total mass of the alloy. By definition, then, the mass of element B in the α phase is wBα·mα, while the mass of element B in the β phase is wBβ·(m − mα). Together these two quantities sum to the total mass of element B in the alloy, which is given by wB·m. Therefore, wBα·mα + wBβ·(m − mα) = wB·m By rearranging, one finds that wα = mα/m = (wBβ − wB) / (wBβ − wBα) This final fraction is the mass fraction of the α phase in the alloy. Calculations Binary phase diagrams Before any calculations can be made, a tie line is drawn on the phase diagram to determine the mass fraction of each element; on the phase diagram to the right it is line segment LS. This tie line is drawn horizontally at the composition's temperature from one phase to another (here the liquid to the solid). The mass fraction of element B at the liquidus is given by wBl (represented as wl in this diagram) and the mass fraction of element B at the solidus is given by wBs (represented as ws in this diagram). The mass fraction of solid and liquid can then be calculated using the following lever rule equations: mass fraction of solid = (wo − wl) / (ws − wl) mass fraction of liquid = (ws − wo) / (ws − wl) where wB is the mass fraction of element B for the given composition (represented as wo in this diagram). The numerator of each equation is the length of the opposite lever arm: if you want the mass fraction of solid, take the difference between the overall composition and the liquid composition. The denominator is the overall length of the tie line, that is, the difference between the solid and liquid compositions. If you are having difficulty seeing why this is so, try visualising what happens when wo approaches wl: the fraction of liquid then approaches one. Eutectic phase diagrams There is now more than one two-phase region. The tie line drawn is from the solid alpha to the liquid, and by dropping a vertical line down at these points the composition of each phase is read directly off the graph as the mass fraction of element B on the x-axis. The same equations can be used to find the mass fraction of the alloy in each of the phases, i.e. wl is used to find the mass fraction of the whole sample that is in the liquid phase. References Metallurgy Phase transitions Materials science Charts Diagrams
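The arithmetic above is easy to check numerically. A short Python sketch (the compositions are arbitrary illustrative values, not data from a real phase diagram):

```python
def lever_rule(w_o, w_l, w_s):
    """Mass fractions of solid and liquid phases from the lever rule.

    w_o: overall mass fraction of B; w_l, w_s: mass fractions of B at
    the liquidus and solidus ends of the tie line."""
    f_solid = (w_o - w_l) / (w_s - w_l)    # opposite (liquid) lever arm
    f_liquid = (w_s - w_o) / (w_s - w_l)   # opposite (solid) lever arm
    return f_solid, f_liquid

# Example: overall 40 wt% B, liquidus at 30 wt% B, solidus at 55 wt% B.
f_s, f_l = lever_rule(0.40, 0.30, 0.55)
print(f_s, f_l, f_s + f_l)   # 0.4 solid, 0.6 liquid; fractions sum to 1
```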
Lever rule
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
670
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Metallurgy", "Phases of matter", "Critical phenomena", "Materials science", "nan", "Statistical mechanics", "Matter" ]
3,010,589
https://en.wikipedia.org/wiki/Flory%E2%80%93Huggins%20solution%20theory
Flory–Huggins solution theory is a lattice model of the thermodynamics of polymer solutions which takes account of the great dissimilarity in molecular sizes in adapting the usual expression for the entropy of mixing. The result is an equation for the Gibbs free energy change for mixing a polymer with a solvent. Although it makes simplifying assumptions, it generates useful results for interpreting experiments. Theory The thermodynamic equation for the Gibbs energy change accompanying mixing at constant temperature and (external) pressure is ΔGmix = ΔHmix − TΔSmix A change, denoted by Δ, is the value of a variable for a solution or mixture minus the values for the pure components considered separately. The objective is to find explicit formulas for ΔHmix and ΔSmix, the enthalpy and entropy increments associated with the mixing process. The result obtained by Flory and Huggins is ΔGmix = RT[n1 ln φ1 + n2 ln φ2 + n1 φ2 χ12] The right-hand side is a function of the number of moles n1 and volume fraction φ1 of solvent (component 1), the number of moles n2 and volume fraction φ2 of polymer (component 2), with the introduction of a parameter χ12 to take account of the energy of interdispersing polymer and solvent molecules. R is the gas constant and T is the absolute temperature. The volume fraction is analogous to the mole fraction, but is weighted to take account of the relative sizes of the molecules. For a small solute, the mole fractions would appear instead, and this modification is the innovation due to Flory and Huggins. In the most general case the mixing parameter, χ12, is a free energy parameter, thus including an entropic component. Derivation We first calculate the entropy of mixing, the increase in the uncertainty about the locations of the molecules when they are interspersed. In the pure condensed phases – solvent and polymer – everywhere we look we find a molecule. Of course, any notion of "finding" a molecule in a given location is a thought experiment since we can't actually examine spatial locations the size of molecules. The expression for the entropy of mixing of small molecules in terms of mole fractions is no longer reasonable when the solute is a macromolecular chain. We take account of this dissymmetry in molecular sizes by assuming that individual polymer segments and individual solvent molecules occupy sites on a lattice. Each site is occupied by exactly one molecule of the solvent or by one monomer of the polymer chain, so the total number of sites is N = n1 + x·n2 where n1 is the number of solvent molecules and n2 is the number of polymer molecules, each of which has x segments. For a random walk on a lattice we can calculate the entropy change (the increase in spatial uncertainty) as a result of mixing solute and solvent: ΔSmix = −k[n1 ln(n1/N) + n2 ln(x·n2/N)] where k is the Boltzmann constant. Define the lattice volume fractions φ1 and φ2: φ1 = n1/N and φ2 = x·n2/N These are also the probabilities that a given lattice site, chosen at random, is occupied by a solvent molecule or a polymer segment, respectively. Thus ΔSmix = −k[n1 ln φ1 + n2 ln φ2] For a small solute whose molecules occupy just one lattice site, x equals one, the volume fractions reduce to molecular or mole fractions, and we recover the usual entropy of mixing. In addition to the entropic effect, we can expect an enthalpy change. There are three molecular interactions to consider: solvent-solvent (w11), monomer-monomer (w22, not the covalent bonding, but between different chain sections), and monomer-solvent (w12).
Each of the last occurs at the expense of the average of the other two, so the energy increment per monomer-solvent contact is Δw = w12 − (w11 + w22)/2 The total number of such contacts is z·x·n2·φ1 = z·n1·φ2 where z is the coordination number, the number of nearest neighbors for a lattice site, each one occupied either by one chain segment or a solvent molecule. That is, x·n2 is the total number of polymer segments (monomers) in the solution, so z·x·n2 is the number of nearest-neighbor sites to all the polymer segments. Multiplying by φ1, the probability that any such site is occupied by a solvent molecule, we obtain the total number of polymer-solvent molecular interactions. An approximation following mean field theory is made by following this procedure, thereby reducing the complex problem of many interactions to a simpler problem of one interaction. The enthalpy change is equal to the energy change per polymer monomer-solvent interaction multiplied by the number of such interactions: ΔHmix = z·x·n2·φ1·Δw The polymer-solvent interaction parameter chi is defined as χ12 = z·Δw/(kT) It depends on the nature of both the solvent and the solute, and is the only material-specific parameter in the model. The enthalpy change becomes ΔHmix = kT·χ12·n1·φ2 Assembling terms, the total free energy change is ΔGmix = RT[n1 ln φ1 + n2 ln φ2 + χ12·n1·φ2] where we have converted the expression from numbers of molecules to numbers of moles by transferring the Avogadro constant NA to the gas constant R. The value of the interaction parameter can be estimated from the Hildebrand solubility parameters δa and δb: χ12 = Vseg·(δa − δb)²/(RT) where Vseg is the actual volume of a polymer segment. In the most general case the interaction and the ensuing mixing parameter, χ12, is a free energy parameter, thus including an entropic component. This means that aside from the regular mixing entropy there is another entropic contribution from the interaction between solvent and monomer. This contribution is sometimes very important in order to make quantitative predictions of thermodynamic properties. More advanced solution theories exist, such as the Flory–Krigbaum theory. Liquid-liquid phase separation Polymers can separate out from the solvent, and do so in a characteristic way. The Flory–Huggins free energy per unit volume, for a polymer with N monomers, can be written in a simple dimensionless form f(φ) = (φ/N) ln φ + (1 − φ) ln(1 − φ) + χφ(1 − φ) for φ the volume fraction of monomers, with 1 − φ the volume fraction of solvent. The osmotic pressure (in reduced units) is Π = −[ln(1 − φ) + (1 − 1/N)φ + χφ²]. The polymer solution is stable with respect to small fluctuations when the second derivative of this free energy is positive. This second derivative is f''(φ) = 1/(Nφ) + 1/(1 − φ) − 2χ and the solution first becomes unstable when this and the third derivative are both equal to zero. A little algebra then shows that the polymer solution first becomes unstable at a critical point at χc = (1/2)(1 + 1/√N)² ≈ 1/2 + 1/√N, at volume fraction φc = 1/(1 + √N) ≈ 1/√N This means that for all values of χ between zero and the critical value χc the monomer-solvent effective interaction is weakly repulsive, but this is too weak to cause liquid/liquid separation. However, when χ > χc, there is separation into two coexisting phases, one richer in polymer but poorer in solvent than the other. The unusual feature of the liquid/liquid phase separation is that it is highly asymmetric: the volume fraction of monomers at the critical point is approximately 1/√N, which is very small for large polymers. The amount of polymer in the solvent-rich/polymer-poor coexisting phase is extremely small for long polymers. The solvent-rich phase is close to pure solvent. This is peculiar to polymers: a mixture of small molecules can be approximated using the Flory–Huggins expression with N = 1, and then φc = 1/2 and χc = 2, and both coexisting phases are far from pure. Polymer blends Synthetic polymers rarely consist of chains of uniform length in solvent.
Polymer blends Synthetic polymers rarely consist of chains of uniform length in solvent. The Flory–Huggins free energy density can be generalized to an N-component mixture of polymers with lengths r_i by f = \sum_i (\phi_i/r_i) \ln \phi_i + \sum_{i<j} \chi_{ij} \phi_i \phi_j. For a binary polymer blend, where one species consists of N_A monomers and the other of N_B monomers, this simplifies to f = (\phi/N_A) \ln \phi + ((1 - \phi)/N_B) \ln(1 - \phi) + \chi \phi (1 - \phi). As in the case of dilute polymer solutions, the first two terms on the right-hand side represent the entropy of mixing. For large polymers, N_A \gg 1 and N_B \gg 1, these terms are negligibly small. This implies that for a stable mixture to exist \chi must be negative, so for polymers A and B to blend their segments must attract one another. Limitations Flory–Huggins theory tends to agree well with experiments in the semi-dilute concentration regime and can be used to fit data for even more complicated blends with higher concentrations. The theory qualitatively predicts phase separation, the tendency for high molecular weight species to be immiscible, the interaction-temperature dependence, and other features commonly observed in polymer mixtures. However, unmodified Flory–Huggins theory fails to predict the lower critical solution temperature observed in some polymer blends, and the lack of dependence of the critical temperature on chain length. Additionally, it can be shown that for a binary blend of polymer species with equal chain lengths the critical concentration should be \phi_c = 1/2; however, polymer blends have been observed where this parameter is highly asymmetric. In certain blends, mixing entropy can dominate over monomer interaction. By adopting the mean-field approximation, the complex dependence of the \chi parameter on temperature, blend composition, and chain length was discarded. Specifically, interactions beyond the nearest neighbor may be highly relevant to the behavior of the blend, and the distribution of polymer segments is not necessarily uniform, so certain lattice sites may experience interaction energies disparate from that approximated by the mean-field theory. One well-studied effect on interaction energies neglected by unmodified Flory–Huggins theory is chain correlation. In dilute polymer mixtures, where chains are well separated, intramolecular forces between monomers of the polymer chain dominate and drive demixing, leading to regions where polymer concentration is high. As the polymer concentration increases, chains tend to overlap and the effect becomes less important. In fact, the demarcation between dilute and semi-dilute solutions is commonly defined by the overlap concentration at which polymers begin to overlap, which can be estimated as c^* = m / ((4/3) \pi R_g^3). Here, m is the mass of a single polymer chain, and R_g is the chain's radius of gyration. Footnotes Flory, P.J., "Thermodynamics of High Polymer Solutions", Journal of Chemical Physics, August 1941, Volume 9, Issue 8, p. 660. Flory suggested that Huggins' name ought to be first since he had published several months earlier: Flory, P.J., "Thermodynamics of high polymer solutions", J. Chem. Phys. 10:51–61 (1942). Citation Classic No. 18, May 6, 1985. Huggins, M.L., "Solutions of Long Chain Compounds", Journal of Chemical Physics, May 1941, Volume 9, Issue 5, p. 440. We are ignoring the free volume due to molecular disorder in liquids and amorphous solids as compared to crystals. This, and the assumption that monomers and solute molecules are really the same size, are the main geometric approximations in this model. For a real synthetic polymer, there is a statistical distribution of chain lengths, so x would be an average. The enthalpy is the internal energy corrected for any pressure-volume work at constant (external) pressure. We are not making any distinction here.
This allows the Helmholtz free energy, which is the natural form of free energy arising from the Flory–Huggins lattice theory, to be approximated by the Gibbs free energy. In fact, two of the sites adjacent to a polymer segment are occupied by other polymer segments, since it is part of a chain; and one more, making three, for branching sites, but only one for terminals. References External links "Conformations, Solutions and Molecular Weight", Chapter 3 of Polymer Science and Technology, by Joel R. Fried, 2nd Edition, 2003 Polymer chemistry Solutions Thermodynamic free energy Statistical mechanics
Flory–Huggins solution theory
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,220
[ "Thermodynamic properties", "Physical quantities", "Statistical mechanics", "Materials science", "Homogeneous chemical mixtures", "Energy (physics)", "Thermodynamic free energy", "Polymer chemistry", "Solutions", "Wikipedia categories named after physical quantities" ]
3,011,353
https://en.wikipedia.org/wiki/Preferential%20entailment
Preferential entailment is a non-monotonic logic based on selecting only the models that are considered the most plausible. The plausibility of models is expressed by an ordering among models called a preference relation, hence the name preferential entailment. Formally, given a propositional formula A and an ordering \leq over propositional models, preferential entailment selects only the models of A that are minimal according to \leq. This selection leads to a non-monotonic inference relation: A preferentially entails B if and only if all minimal models of A according to \leq are also models of B. Circumscription can be seen as the particular case of preferential entailment in which the ordering is based on containment of the sets of variables assigned to true (in the propositional case) or containment of the extensions of predicates (in the first-order logic case). References Logic in computer science Knowledge representation Non-classical logic
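A small sketch can make the definition above concrete. The following Python snippet is illustrative only (all names are ours, not standard notation): it enumerates the models of a formula over a finite set of atoms and applies the circumscription-style ordering, comparing models by containment of their sets of true atoms, to decide a preferential entailment query.

from itertools import product

def models(atoms, formula):
    """All truth assignments over `atoms` satisfying `formula`,
    where `formula` is a predicate on a dict atom -> bool."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def at_least_as_preferred(m1, m2):
    """Circumscription ordering: m1 is at least as preferred as m2
    when its set of true atoms is contained in m2's."""
    return ({a for a, v in m1.items() if v}
            <= {a for a, v in m2.items() if v})

def preferentially_entails(atoms, premise, conclusion):
    """Premise preferentially entails conclusion: every minimal model
    of the premise also satisfies the conclusion."""
    ms = models(atoms, premise)
    minimal = [m for m in ms
               if not any(at_least_as_preferred(o, m) and o != m
                          for o in ms)]
    return all(conclusion(m) for m in minimal)

# Example: "p or q" has minimal models {p} and {q} under this ordering,
# so it preferentially (but not classically) entails "not (p and q)".
print(preferentially_entails(["p", "q"],
                             lambda m: m["p"] or m["q"],
                             lambda m: not (m["p"] and m["q"])))  # True

The example shows the non-monotonic character: the model making both p and q true is a classical model of the premise but is discarded as non-minimal.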
Preferential entailment
[ "Mathematics" ]
183
[ "Mathematical logic", "Logic in computer science" ]
3,011,538
https://en.wikipedia.org/wiki/Entropy%20of%20vaporization
In thermodynamics, the entropy of vaporization is the increase in entropy upon vaporization of a liquid. This is always positive, since the degree of disorder increases in the transition from a liquid in a relatively small volume to a vapor or gas occupying a much larger space. At standard pressure, the value is denoted as \Delta S^\circ_{vap} and normally expressed in joules per mole-kelvin, J/(mol·K). For a phase transition such as vaporization or fusion (melting), both phases may coexist in equilibrium at constant temperature and pressure, in which case the difference in Gibbs free energy is equal to zero: \Delta G_{vap} = \Delta H_{vap} - T \Delta S_{vap} = 0, where \Delta H_{vap} is the heat or enthalpy of vaporization. Since this is a thermodynamic equation, the symbol T refers to the absolute thermodynamic temperature, measured in kelvins (K). The entropy of vaporization is then equal to the heat of vaporization divided by the boiling point: \Delta S_{vap} = \Delta H_{vap} / T_b. According to Trouton's rule, the entropy of vaporization (at standard pressure) of most liquids has similar values. The typical value is variously given as 85 J/(mol·K), 88 J/(mol·K) and 90 J/(mol·K). Hydrogen-bonded liquids have somewhat higher values of the entropy of vaporization. See also Entropy of fusion References Thermodynamic entropy Thermodynamic properties
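A quick numerical check of Trouton's rule, as a worked example of the formula above (our own illustration, using approximate literature values for the enthalpies of vaporization at the normal boiling points):

# Entropy of vaporization dS = dH_vap / T_b, in J/(mol K).
# Enthalpies and boiling points are approximate literature values.
liquids = {
    #          dH_vap (J/mol)  T_b (K)
    "benzene": (30_800,        353.2),
    "water":   (40_700,        373.2),
}

for name, (dH, Tb) in liquids.items():
    print(f"{name}: {dH / Tb:.0f} J/(mol K)")

# benzene: ~87 J/(mol K), close to Trouton's typical 85-90 J/(mol K);
# water:   ~109 J/(mol K), higher because hydrogen bonding makes the
# liquid unusually ordered, so more entropy is gained on vaporization.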
Entropy of vaporization
[ "Physics", "Chemistry", "Mathematics" ]
283
[ "Thermodynamics stubs", "Statistical mechanics stubs", "Thermodynamic properties", "Physical quantities", "Quantity", "Thermodynamic entropy", "Entropy", "Thermodynamics", "Statistical mechanics", "Physical chemistry stubs" ]
3,011,773
https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem%20for%20smooth%20manifolds
In mathematics, a Riemann–Roch theorem for smooth manifolds is a version of results such as the Hirzebruch–Riemann–Roch theorem or Grothendieck–Riemann–Roch theorem (GRR) without a hypothesis making the smooth manifolds involved carry a complex structure. Results of this kind were obtained by Michael Atiyah and Friedrich Hirzebruch in 1959, reducing the requirements to something like a spin structure. Formulation Let X and Y be oriented smooth closed manifolds, and f: X → Y a continuous map. Let v_f = f*(TY) − TX in the K-group K(X). If dim(X) ≡ dim(Y) mod 2, then the theorem asserts a Riemann–Roch-type identity relating the two Gysin maps, where ch is the Chern character, d(v_f) is an element of the integral cohomology group H²(X, Z) satisfying d(v_f) ≡ f*w₂(TY) − w₂(TX) mod 2, f_{K*} is the Gysin homomorphism for K-theory, and f_{H*} is the Gysin homomorphism for cohomology. This theorem was first proven by Atiyah and Hirzebruch. The theorem is proven by considering several special cases. If Y is the Thom space of a vector bundle V over X, then the Gysin maps are just the Thom isomorphism. Then, using the splitting principle, it suffices to check the theorem via explicit computation for line bundles. If f: X → Y is an embedding, then the Thom space of the normal bundle of X in Y can be viewed as a tubular neighborhood of X in Y, and excision gives maps from the K-theory and cohomology of the Thom space to those of Y. The Gysin map for K-theory/cohomology is defined to be the composition of the Thom isomorphism with these maps. Since the theorem holds for the map from X to the Thom space of the normal bundle, and since the Chern character is compatible with these constructions, the theorem is also true for embeddings f: X → Y. Finally, we can factor a general map f: X → Y into an embedding followed by a projection. The theorem is true for the embedding. The Gysin map for the projection is the Bott-periodicity isomorphism, which commutes with the Chern character, so the theorem holds in this general case also. Corollaries Atiyah and Hirzebruch then specialised and refined the result in the case where X is a point, in which the condition becomes the existence of a spin structure on Y. Corollaries concern Pontryagin classes and the J-homomorphism. Notes Theorems in differential geometry Algebraic surfaces Bernhard Riemann
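For the special case where X is a point, the resulting integrality statement can be written out explicitly. The following LaTeX formulation is supplied here for concreteness; it is the standard statement of the Atiyah–Hirzebruch integrality theorem rather than a quotation from this article:

% Integrality corollary for X = a point:
% the A-hat genus of a closed spin manifold is an integer.
\hat{A}(Y) \;=\; \int_Y \hat{\mathcal{A}}(TY) \;\in\; \mathbb{Z},
\qquad Y \text{ a closed spin manifold,}

where \hat{\mathcal{A}}(TY) denotes the total \hat{A}-class of the tangent bundle. Without the spin hypothesis the \hat{A}-genus is in general only a rational number, which is why the condition on w₂ enters the formulation above.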
Riemann–Roch theorem for smooth manifolds
[ "Mathematics" ]
562
[ "Theorems in differential geometry", "Theorems in geometry" ]
3,012,047
https://en.wikipedia.org/wiki/Combined%20sewer
A combined sewer is a type of gravity sewer with a system of pipes, tunnels, pump stations etc. to transport sewage and urban runoff together to a sewage treatment plant or disposal site. This means that during rain events, the sewage gets diluted, resulting in higher flow rates at the treatment site. Uncontaminated stormwater simply dilutes sewage, but runoff may dissolve or suspend virtually anything it contacts on roofs, streets, and storage yards. As rainfall travels over roofs and the ground, it may pick up various contaminants including soil particles and other sediment, heavy metals, organic compounds, animal waste, and oil and grease. Combined sewers may also receive dry weather drainage from landscape irrigation, construction dewatering, and washing buildings and sidewalks. Combined sewers can cause serious water pollution problems during combined sewer overflow (CSO) events, when combined sewage and surface runoff flows exceed the capacity of the sewage treatment plant or the maximum flow rate of the system that conveys the combined flow. In instances where exceptionally high surface runoff occurs (such as large rainstorms), the load on individual tributary branches of the sewer system may cause a back-up to a point where raw sewage flows out of input sources such as toilets, causing inhabited buildings to be flooded with a toxic sewage-runoff mixture and incurring massive financial burdens for cleanup and repair. When combined sewer systems experience these higher than normal throughputs, relief systems cause discharges containing human and industrial waste to flow into rivers, streams, or other bodies of water. Such events frequently cause both negative environmental and lifestyle consequences, including beach closures, contaminated shellfish unsafe for consumption, and contamination of drinking water sources, rendering them temporarily unsafe for drinking and requiring boiling before uses such as bathing or washing dishes. Mitigation of combined sewer overflows includes sewer separation, CSO storage, expanding sewage treatment capacity, retention basins, screening and disinfection facilities, reducing stormwater flows, green infrastructure, and real-time decision support systems. This type of gravity sewer design is less often used nowadays when constructing new sewer systems. Modern-day sewer designs exclude surface runoff by building sanitary sewers instead, but many older cities and towns continue to operate previously constructed combined sewer systems. Development The earliest sewers were designed to carry street runoff away from inhabited areas and into surface waterways without treatment. Before the 19th century, it was commonplace to empty human waste receptacles, e.g., chamber pots, into town and city streets and to slaughter animals in open street "shambles". The use of draft animals such as horses and the herding of livestock through city streets meant that most streets contained large amounts of excrement. Before the development of macadam as a paving material in the 19th century, paving systems were mostly porous, so that precipitation could soak away and not run off, and urban rooftop rainwater was often saved in rainwater tanks. Open sewers, consisting of gutters and urban streambeds, were common worldwide before the 20th century.
In the majority of developed countries, large efforts were made during the late 19th and early 20th centuries to cover the formerly open sewers, converting them to closed systems with cast iron, steel, or concrete pipes, masonry, and concrete arches, while streets and footpaths were increasingly covered with impermeable paving systems. Most sewage collection systems of the 19th and early to mid-20th century used single-pipe systems that collect both sewage and urban runoff from streets and roofs (to the extent that relatively clean rooftop rainwater was not saved in butts and cisterns for drinking and washing). This type of collection system is referred to as a "combined sewer system". The rationale for combining the two was that it would be cheaper to build just a single system. Most cities at that time did not have sewage treatment plants, so there was no perceived public health advantage in constructing a separate "surface water sewerage" (UK terminology) or "storm sewer" (US terminology) system. Moreover, before the automobile era, runoff was typically highly contaminated with animal waste, and until the mid-to-late 19th century the frequent use of shambles contributed further waste. The widespread replacement of horses with automotive propulsion, the paving of city streets and surfaces, the construction of municipal slaughterhouses, and the provision of mains water in the 20th century changed urban runoff: it became cleaner initially, and it came to include water that formerly soaked away or was saved as rooftop rainwater, but by then combined sewers were already widely adopted. When constructed, combined sewer systems were typically sized to carry three to 160 times the average dry weather sewage flows. It is generally infeasible to treat the volume of mixed sewage and surface runoff flowing in a combined sewer during peak runoff events caused by snowmelt or convective precipitation. As cities built sewage treatment plants, those plants were typically built to treat only the volume of sewage flowing during dry weather. Relief structures were installed in the collection system to bypass untreated sewage mixed with surface runoff during wet weather, protecting sewage treatment plants from damage caused if peak flows reached the headworks. Combined sewer overflows (CSOs) These relief structures, called "storm-water regulators" (in American English) or "combined sewer overflows" (in British English), are constructed in combined sewer systems to divert flows in excess of the peak design flow of the sewage treatment plant. Combined sewers are built with control sections establishing stage-discharge or pressure differential-discharge relationships, which may be either predicted or calibrated to divert flows in excess of sewage treatment plant capacity. A leaping weir may be used as a regulating device, allowing typical dry-weather sewage flow rates to fall into an interceptor sewer leading to the sewage treatment plant, but causing a major portion of higher flow rates to leap over the interceptor into the diversion outfall. Alternatively, an orifice may be sized to accept the sewage treatment plant design capacity and cause excess flow to accumulate above the orifice until it overtops a side-overflow weir to the diversion outfall. CSO statistics may be confusing because the term may describe either the number of events or the number of relief structure locations at which such events may occur.
A CSO event, as the term is used in American English, occurs when mixed sewage and stormwater are bypassed from a combined sewer system control section into a river, stream, lake, or ocean through a designed diversion outfall, but without treatment. Overflow frequency and duration vary both from system to system and from outfall to outfall within a single combined sewer system. Some CSO outfalls discharge infrequently, while others activate every time it rains. The stormwater component contributes pollutants to CSO, but a major fraction of the pollution is the first foul flush of accumulated biofilm and sanitary solids scoured from the dry-weather wetted perimeter of combined sewers during peak flow turbulence. Each storm is different in the quantity and type of pollutants it contributes. For example, storms that occur in late summer, when it has not rained for a while, carry the most pollutants. Pollutants like oil, grease, fecal coliform from pet and wildlife waste, and pesticides get flushed into the sewer system. In cold weather areas, pollutants from cars, people and animals also accumulate on hard surfaces and grass during the winter and then are flushed into the sewer systems during heavy spring rains. Health impacts CSO discharges during heavy storms can cause serious water pollution problems. The discharges contain human and industrial waste, and can cause beach closings, restrictions on shellfish consumption, and contamination of drinking water sources. Comparison to sanitary sewer overflows CSOs differ from sanitary sewer overflows in that the latter are caused by sewer system obstructions, damage, or flows in excess of sewer capacity (rather than treatment plant capacity). Sanitary sewer overflows may occur at any low spot in the sewer system rather than at the CSO relief structures. The absence of a diversion outfall often causes sanitary sewer overflows to flood residential structures and/or flow over traveled road surfaces before reaching natural drainage channels. Sanitary sewer overflows may cause greater health risks and environmental damage than CSOs if they occur during dry weather, when there is no precipitation runoff to dilute and flush away sewage pollutants. CSOs in the United States About 860 communities in the US have combined sewer systems, serving about 40 million people. Pollutants from CSO discharges can include bacteria and other pathogens, toxic chemicals, and debris. These pollutants have also been linked with antimicrobial resistance, posing serious public health concerns. The U.S. Environmental Protection Agency (EPA) issued a policy in 1994 requiring municipalities to make improvements to reduce or eliminate CSO-related pollution problems. The policy is implemented through the National Pollutant Discharge Elimination System (NPDES) permit program. The policy defined water quality parameters for the safety of an ecosystem; it allowed for site-specific actions to control CSOs in the most practical way for each community; it ensured that CSO control would not be beyond a community's budget; and it allowed water quality parameters to be flexible, based upon site-specific conditions. The CSO Control Policy required all publicly owned treatment works to have "nine minimum controls" in place by January 1, 1997, in order to decrease the effects of sewage overflow by making small improvements in existing processes. In 2000 Congress amended the Clean Water Act to require municipalities to comply with the EPA policy.
Mitigation of CSOs Mitigation of combined sewer overflows includes sewer separation, CSO storage, expanding sewage treatment capacity, retention basins, screening and disinfection facilities, reducing stormwater flows, green infrastructure, and real-time decision support systems. For example, cities with combined sewer overflows employ one or more engineering approaches to reduce discharges of untreated sewage, including: utilizing a green infrastructure approach to improve stormwater management capacity throughout the system and reduce the hydraulic overloading of the treatment plant; repair and replacement of leaking and malfunctioning equipment; and increasing the overall hydraulic capacity of the sewage collection system (often a very expensive option). The United Kingdom Environment Agency identified unsatisfactory intermittent discharges and issued an Urban Wastewater Treatment Directive requiring action to limit pollution from combined sewer overflows. In 2009, the Canadian Council of Ministers of the Environment adopted a Canada-wide Strategy for the Management of Municipal Wastewater Effluent, including national standards to (1) remove floating material from combined sewer overflows, (2) prevent combined sewer overflows during dry weather, and (3) prevent development or redevelopment from increasing the frequency of combined sewer overflows. Rehabilitation of combined sewer systems to mitigate CSOs requires extensive monitoring networks, which are becoming more prevalent with decreasing sensor and communication costs. These monitoring networks can identify the bottlenecks causing the main CSO problem, or aid in the calibration of hydrodynamic or hydrological models to enable cost-effective CSO mitigation. Municipalities in the US have been undertaking projects to mitigate CSO since the 1990s. For example, in southeast Michigan the quantity of untreated combined sewage discharged annually to lakes, rivers, and streams before 1990 was substantial. By 2005, with nearly $1 billion of a planned $2.4 billion CSO investment put into operation, untreated discharges had been reduced sharply; this investment, which has yielded an 85 percent reduction in CSO, has included numerous sewer separation projects, CSO storage and treatment facilities, and wastewater treatment plant improvements constructed by local and regional governments. Many other areas in the US are undertaking similar projects (see, for example, the Puget Sound of Washington). Cities like Pittsburgh, Seattle, Philadelphia, and New York are focusing on these projects partly because they are under federal consent decrees to solve their CSO issues. Both up-front penalties and stipulated penalties are utilized by EPA and state agencies to enforce CSO-mitigating initiatives and to keep their schedules efficient. Municipalities' sewage departments, engineering and design firms, and environmental organizations offer different approaches to potential solutions. Sewer separation Some US cities have undertaken sewer separation projects, building a second piping system for all or part of the community. In many of these projects, cities have been able to separate only portions of their combined systems. High costs or physical limitations may preclude building a completely separate system. In 2011, Washington, D.C., separated its sewers in four small neighborhoods at a cost of $11 million. (The project cost also included improvements to the drinking water piping system.)
CSO storage Another solution is to build a CSO storage facility, such as a tunnel that can store flow from many sewer connections. Because a tunnel can share capacity among several outfalls, it can reduce the total volume of storage that must be provided for a specific number of outfalls. Storage tunnels store combined sewage but do not treat it. When the storm is over, the flows are pumped out of the tunnel and sent to a wastewater treatment plant. One of the main concerns with CSO storage is the length of time the sewage is stored before it is released. Without careful management of this storage period, the water in the CSO storage facility runs the risk of going septic. Washington, D.C., is building underground storage capacity as its primary strategy to address CSOs. In 2011, the city began construction on a system of four deep storage tunnels, adjacent to the Anacostia River, that will reduce overflows to the river by 98 percent, and by 96 percent system-wide. The first segment of the tunnel system went online in 2018, and the remaining segments of the storage system are scheduled for completion in 2023. (The city's overall "Clean Rivers" project, projected to cost $2.6 billion, includes other components, such as reducing stormwater flows.) The South Boston CSO Storage Tunnel is a similar project, completed in 2011. Indianapolis, Indiana, is building underground storage capacity in the form of a deep rock tunnel system which will connect the two existing wastewater treatment plants and provide collection of discharge water from the various CSO sites located along the White River, Eagle Creek, Fall Creek, Pogue's Run, and Pleasant Run. Citizens Energy Group is managing the efforts to construct the first phases of the work, which include the Deep Rock Tunnel Connector between the Belmont Wastewater Treatment Plant and the Southport Wastewater Treatment Plant. Additional tunnels will branch under the existing watercourses located in Indianapolis. The planned cost for the project totals $1.9 billion. Fort Wayne, Indiana, is constructing a $180 million tunnel, the Three Rivers Protection and Overflow Reduction Tunnel (3RPORT), to address the myriad CSOs which outfall into the St. Mary's, St. Joseph, and Maumee Rivers. The 3RPORT runs deep below grade and is anticipated to enter service in 2023. Expanding sewage treatment capacity Some cities have expanded their basic sewage treatment capacity to handle some or all of the CSO volume. In 2002 litigation forced the city of Toledo, Ohio, to double its treatment capacity and build a storage basin in order to eliminate most overflows. The city also agreed to study ways to reduce stormwater flows into the sewer system. (See Reducing stormwater flows.) Retention basins Retention treatment basins, large concrete tanks that store and treat combined sewage, are another solution. These underground structures vary widely in storage and treatment capacity. While each facility is unique, a typical facility operates as follows. Flows from the overloaded sewers are pumped into a basin that is divided into compartments. The first flush compartment captures and stores flows with the highest level of pollutants from the first part of a storm. These pollutants include motor oil, sediment, road salt, and lawn chemicals (pesticides and fertilizers) that are picked up by the stormwater as it runs off roads and lawns.
The flows from this compartment are stored, and sent to the wastewater treatment plant when there is capacity in the interceptor sewer after the storm. The second compartment is a treatment or flow-through compartment. The flows are disinfected by injecting sodium hypochlorite, or bleach, as they enter this compartment. It then takes about 20 to 30 minutes for the flows to move to the end of the compartment. During this time, bacteria are killed and large solid materials settle out. At the end of the compartment, any remaining sanitary trash is skimmed off the top and the treated flows are discharged into the river or lake. The City of Detroit, Michigan, utilizes a system of nine CSO retention basins and screening/disinfection facilities that are owned and operated by the Great Lakes Water Authority. These basins are located at original combined sewer outfalls along the Detroit River and Rouge River within metropolitan Detroit. These facilities are generally designed to contain two inches of stormwater runoff, with the ability to disinfect overflows during extreme wet-weather rainfall events. Screening and disinfection facilities Screening and disinfection facilities treat CSO without ever storing it. Called "flow-through" facilities, they use fine screens to remove solids and sanitary trash from the combined sewage. Flows are injected with sodium hypochlorite for disinfection and mixed as they travel through a series of fine screens that remove debris. The fine screens have openings that range in size from 4 to 6 mm, or a little less than a quarter inch. The flow is sent through the facility at a rate that provides enough time for the sodium hypochlorite to kill bacteria. All of the materials removed by the screens are then sent to the sewage treatment plant through the interceptor sewer. Reducing stormwater flows Communities may implement low impact development techniques to reduce flows of stormwater into the collection system. This includes: constructing new and renovated streets, parking lots and sidewalks with interlocking stones, permeable paving and pervious concrete; installing green roofs on buildings; installing bioretention systems, also called rain gardens, in landscaped areas; installing rainwater harvesting equipment to collect runoff from building roofs during wet weather for irrigating landscapes and gardens during dry weather; and implementing graywater collection and use on site to reduce sewage discharges at all times. Green infrastructure CSO-mitigating initiatives that are composed solely of sewer system reconstruction are referred to as gray infrastructure, while techniques like permeable pavement and rainwater harvesting are referred to as green infrastructure. Conflict often occurs between a municipality's sewage authority and its environmentally active organizations over gray versus green infrastructural plans. The 2004 EPA Report to Congress on CSOs provides a review of available technologies to mitigate CSO impacts. Real-time decision support systems Recent technological advances in sensing and control have enabled the implementation of real-time decision support systems (RT-DSS) for CSO mitigation. Through the use of internet of things technology and cloud computing, CSO events can now be mitigated by dynamically adjusting setpoints for movable gates, pump stations, and other actuated assets in sewers and stormwater management systems. Similar technology, called adaptive traffic control, is used to control the flow of vehicles through traffic lights.
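To illustrate the kind of heuristic logic such a system might run, here is a simplified, hypothetical Python sketch of our own; it is not any utility's actual control software, and all thresholds, names, and readings are invented for illustration.

# Hypothetical rule-based real-time control loop for one storage gate.
# Sensor readings and actuator commands are stubbed out; a real RT-DSS
# would obtain levels from telemetry and issue setpoints to hardware.

STORAGE_FULL_M = 8.0     # assumed tunnel level at which storage is exhausted
INTERCEPTOR_MAX = 0.9    # assumed usable fraction of interceptor capacity

def gate_setpoint(tunnel_level_m, interceptor_utilization, rain_forecast_mm):
    """Return a gate opening in [0, 1] (0 = closed, 1 = fully open
    toward the storage tunnel), using simple threshold rules."""
    if interceptor_utilization < INTERCEPTOR_MAX:
        # Treatment plant can still accept flow: keep sending it there.
        return 0.0
    if tunnel_level_m < STORAGE_FULL_M:
        # Interceptor near capacity: divert to storage, opening wider
        # when heavy rain is forecast, to capture the first foul flush.
        return 1.0 if rain_forecast_mm > 10.0 else 0.5
    # Storage exhausted: the gate can no longer prevent an overflow;
    # a real system would alert operators and log the CSO event.
    return 1.0

print(gate_setpoint(2.0, 0.95, 25.0))  # 1.0: store aggressively
print(gate_setpoint(2.0, 0.50, 25.0))  # 0.0: plant still has capacity

In practice such threshold rules would be tuned per site, and model-based controllers replace them where the added complexity is justified, as discussed next.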
RT-DSS systems take advantage of storm temporal and spatial variability, as well as varying concentration times due to diverse land uses across the sewershed, to coordinate and optimize control assets. By maximizing storage and conveyance, RT-DSS are able to minimize overflows using existing infrastructure. Successful implementations of RT-DSS have been carried out throughout the United States and Europe. Real-time control (RTC) can be either heuristic or model-based. Model-based control is theoretically more optimal, but due to the ease of implementation, heuristic control is more commonly applied. Generating sufficient evidence that RTC is a suitable option for CSO mitigation remains problematic, although new performance methods might make this possible. Regulations United Kingdom There is in the UK a legal difference between a storm sewer and a surface water sewer. There is no right of connection to a storm-water overflow sewer under section 106 of the Water Industry Act. This is normally the pipeline that discharges to a watercourse downstream of a combined sewer overflow; it takes the excess flow from a combined sewer. A surface water sewer conveys rainwater; legally there is a right of connection for rainwater to this public sewer. A public storm water sewer can discharge to a public surface water sewer, but not the other way around, without a legal change in sewer status by the water company. History Combined sewer systems were common when urban sewerage systems were first developed, in the late 19th and early 20th centuries. Society and culture The image of the sewer recurs in European culture, as sewers were often used as hiding places or routes of escape by the scorned or the hunted, including partisans and resistance fighters in World War II. Fighting erupted in the sewers during the Battle of Stalingrad. The only survivors from the Warsaw Uprising and Warsaw Ghetto made their final escape through city sewers. Some have commented that the engravings of imaginary prisons by Piranesi were inspired by the Cloaca Maxima, one of the world's earliest sewers. In fiction The theme of traveling through, hiding, or even residing in combined sewers is a common plot device in media. Famous examples of sewer dwelling are the Teenage Mutant Ninja Turtles, Stephen King's It, Les Misérables, The Third Man, Ladyhawke, Mimic, The Phantom of the Opera, Beauty and the Beast, and Jet Set Radio Future. The Todd Strasser novel Y2K-9: The Dog Who Saved the World is centered on a dog thwarting terroristic threats to electronically sabotage American sewage treatment plants. Sewer alligators A well-known urban legend, the sewer alligator, is that of giant alligators or crocodiles residing in combined sewers, especially those of major metropolitan areas. Two public sculptures in New York depict an alligator dragging a hapless victim into a manhole. Alligators have been known to get into combined storm sewers in the southeastern United States; a sewer repair company's closed-circuit television once captured an alligator in such a sewer on tape. See also Fatberg (sewer obstruction) Sanitary sewer overflow Thames Tideway Scheme Storm drain References External links U.S. EPA – Combined Sewer Overflows Environmental engineering Hydraulic engineering Sewerage infrastructure Water pollution
Combined sewer
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
4,635
[ "Hydrology", "Water treatment", "Chemical engineering", "Sewerage infrastructure", "Water pollution", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]