id: int64 (39 to 79M)
url: stringlengths (32 to 168)
text: stringlengths (7 to 145k)
source: stringlengths (2 to 105)
categories: listlengths (1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: listlengths (0 to 27)
4,101,904
https://en.wikipedia.org/wiki/Flux%20balance%20analysis
In biochemistry, flux balance analysis (FBA) is a mathematical method for simulating the metabolism of cells or entire unicellular organisms, such as E. coli or yeast, using genome-scale reconstructions of metabolic networks. Genome-scale reconstructions describe all the biochemical reactions in an organism based on its entire genome. These reconstructions model metabolism by focusing on the interactions between metabolites, identifying which metabolites are involved in the various reactions taking place in a cell or organism, and determining the genes that encode the enzymes which catalyze these reactions (if any). In comparison to traditional methods of modeling, FBA is less intensive in terms of the input data required for constructing the model. Simulations performed using FBA are computationally inexpensive and can calculate steady-state metabolic fluxes for large models (over 10,000 reactions) in a few seconds on modern personal computers. The related method of metabolic pathway analysis seeks to find and list all possible pathways between metabolites. FBA finds applications in bioprocess engineering to systematically identify modifications to the metabolic networks of microbes used in fermentation processes that improve product yields of industrially important chemicals such as ethanol and succinic acid. It has also been used for the identification of putative drug targets in cancer and pathogens, rational design of culture media, and host–pathogen interactions. The results of FBA can be visualized for smaller networks using flux maps similar to the image on the right, which illustrates the steady-state fluxes carried by reactions in glycolysis. The thickness of the arrows is proportional to the flux through the reaction. FBA formalizes the system of equations describing the concentration changes in a metabolic network as the dot product of a matrix of the stoichiometric coefficients (the stoichiometric matrix S) and the vector v of the unsolved fluxes. The right-hand side of the dot product is a vector of zeros representing the system at steady state. At steady state, metabolite concentrations remain constant as the rates of production and consumption are balanced, resulting in no net change over time. Since the system of equations is often underdetermined, there can be multiple possible solutions. To obtain a single solution, the flux that maximizes a reaction of interest, such as biomass or ATP production, is selected. Linear programming is then used to calculate one of the possible solutions of fluxes corresponding to the steady state. History Some of the earliest work in FBA dates back to the early 1980s. Papoutsakis demonstrated that it was possible to construct flux balance equations using a metabolic map. It was Watson, however, who first introduced the idea of using linear programming and an objective function to solve for the fluxes in a pathway. The first significant study was subsequently published by Fell and Small, who used flux balance analysis together with more elaborate objective functions to study the constraints in fat synthesis. Simulations FBA is not computationally intensive, taking on the order of seconds to calculate optimal fluxes for biomass production for a typical network (around 10,000 reactions). This means that the effect of deleting reactions from the network and/or changing flux constraints can be sensibly modelled on a single computer. 
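To make the linear-programming formulation described above concrete, here is a minimal sketch of FBA on a made-up two-metabolite toy network using SciPy's `linprog`; the stoichiometry, bounds, and objective below are illustrative assumptions and are not taken from any published model.

```python
# A minimal FBA sketch on a hypothetical toy network (not a real reconstruction).
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions R1..R4)
# R1: -> A (uptake), R2: A -> B, R3: A -> (waste), R4: B -> biomass
S = np.array([
    [1, -1, -1,  0],   # metabolite A
    [0,  1,  0, -1],   # metabolite B
])

# Flux bounds: uptake R1 is capped at 10 units; all reactions are irreversible here
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]

# Objective: maximize flux through the biomass reaction R4.
# linprog minimizes, so the objective coefficient is negated.
c = np.array([0, 0, 0, -1])

# Steady state: S v = 0 for every metabolite
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")

print("optimal fluxes v =", res.x)     # expect roughly [10, 10, 0, 10]
print("max biomass flux =", -res.fun)  # expect 10.0
```

The same structure scales to genome-scale matrices; dedicated packages simply assemble S, the bounds, and the objective column from a curated reconstruction.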
Gene/reaction deletion and perturbation studies Single reaction deletion A frequently used technique to search a metabolic network for reactions that are particularly critical to the production of biomass. By removing each reaction in a network in turn and measuring the predicted flux through the biomass function or any other objective such as ATP production, each reaction can be classified as either essential (if the flux through the biomass function is substantially reduced) or non-essential (if the flux through the biomass function is unchanged or only slightly reduced). Pairwise reaction deletion Pairwise reaction deletion of all possible pairs of reactions is useful when looking for drug targets, as it allows the simulation of multi-target treatments, either by a single drug with multiple targets or by drug combinations. Double deletion studies can also quantify the synthetic lethal interactions between different pathways providing a measure of the contribution of the pathway to overall network robustness. Single and multiple gene deletions Genes are connected to enzyme-catalyzed reactions by Boolean expressions known as Gene-Protein-Reaction expressions (GPR). Typically a GPR takes the form (Gene A AND Gene B) to indicate that the products of genes A and B are protein sub-units that assemble to form the complete protein and therefore the absence of either would result in deletion of the reaction. On the other hand, if the GPR is (Gene A OR Gene B) it implies that the products of genes A and B are isozymes, meaning that the expression of either one is sufficient to maintain an active reaction. A reaction can also be regulated by a single gene, or in the case of diffusion, it may not be associated with any gene at all. Therefore, it is possible to evaluate the effect of single or multiple gene deletions by evaluation of the GPR as a Boolean expression. If the GPR evaluates to false, the reaction is constrained to zero in the model prior to performing FBA. Thus gene knockouts can be simulated using FBA. Logically, reactions that are not associated with any genes cannot be deleted. Interpretation of gene and reaction deletion results The utility of reaction inhibition and deletion analyses becomes most apparent if a gene-protein-reaction matrix has been assembled for the network being studied with FBA. The gene-protein-reaction matrix is a binary matrix connecting genes with the proteins made from them. Using this matrix, reaction essentiality can be converted into gene essentiality indicating the gene defects which may cause a certain disease phenotype or the proteins/enzymes which are essential (and thus what enzymes are the most promising drug targets in pathogens). However, the gene-protein-reaction matrix does not specify the Boolean relationship between genes with respect to the enzyme, instead it merely indicates an association between them. Therefore, it should be used only if the Boolean GPR expression is unavailable. Reaction inhibition The effect of inhibiting a reaction, rather than removing it entirely, can be simulated in FBA by restricting the allowed flux through it. The effect of an inhibition can be classified as lethal or non-lethal by applying the same criteria as in the case of a deletion where a suitable threshold is used to distinguish “substantially reduced” from “slightly reduced”. 
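As a rough illustration of the GPR evaluation described above, the following sketch evaluates Boolean gene–protein–reaction rules against a set of deleted genes and reports which reactions would be constrained to zero flux before running FBA; the gene names, rule encoding, and helper function are hypothetical.

```python
# Hypothetical sketch of simulating gene knockouts via GPR evaluation.
def evaluate_gpr(gpr, deleted):
    """Return True if the reaction can still be catalyzed after deleting `deleted` genes."""
    if gpr is None:                      # e.g. spontaneous or diffusion reactions
        return True
    kind, parts = gpr                    # ("AND", [...]), ("OR", [...]) or ("GENE", name)
    if kind == "GENE":
        return parts not in deleted
    results = [evaluate_gpr(p, deleted) for p in parts]
    return all(results) if kind == "AND" else any(results)

# (Gene A AND Gene B): subunits of one complex; (Gene C OR Gene D): isozymes
gprs = {
    "R2": ("AND", [("GENE", "geneA"), ("GENE", "geneB")]),
    "R3": ("OR",  [("GENE", "geneC"), ("GENE", "geneD")]),
    "R4": None,
}

deleted_genes = {"geneB"}
for rxn, gpr in gprs.items():
    if not evaluate_gpr(gpr, deleted_genes):
        print(f"{rxn}: GPR evaluates to False -> set flux bounds to (0, 0) before FBA")
    else:
        print(f"{rxn}: still active")
```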
Generally the choice of threshold is arbitrary but a reasonable estimate can be obtained from growth experiments where the simulated inhibitions/deletions are actually performed and growth rate is measured. Growth media optimization To design optimal growth media with respect to enhanced growth rates or useful by-product secretion, it is possible to use a method known as Phenotypic Phase Plane analysis. PhPP involves applying FBA repeatedly on the model while co-varying the nutrient uptake constraints and observing the value of the objective function (or by-product fluxes). PhPP makes it possible to find the optimal combination of nutrients that favors a particular phenotype or a mode of metabolism resulting in higher growth rates or secretion of industrially useful by-products. The predicted growth rates of bacteria in varying media have been shown to correlate well with experimental results, as well as to define precise minimal media for the culture of Salmonella typhimurium. Host-pathogen interactions The human microbiota is a complex system with as many as 400 trillion microbes and bacteria interacting with each other and the host. To understand the key factors in this system, a multi-scale, dynamic flux-balance analysis has been proposed, since FBA is comparatively inexpensive to compute. Mathematical description In contrast to the traditionally followed approach of metabolic modeling using coupled ordinary differential equations, flux balance analysis requires very little information in terms of the enzyme kinetic parameters and concentration of metabolites in the system. It achieves this by making two assumptions, steady state and optimality. The first assumption is that the modeled system has entered a steady state, where the metabolite concentrations no longer change, i.e. in each metabolite node the producing and consuming fluxes cancel each other out. The second assumption is that the organism has been optimized through evolution for some biological goal, such as optimal growth or conservation of resources. The steady-state assumption reduces the system to a set of linear equations, which is then solved to find a flux distribution that satisfies the steady-state condition subject to the stoichiometry constraints while maximizing the value of a pseudo-reaction (the objective function) representing the conversion of biomass precursors into biomass. The steady-state assumption dates to the ideas of material balance developed to model the growth of microbial cells in fermenters in bioprocess engineering. During microbial growth, a substrate consisting of a complex mixture of carbon, hydrogen, oxygen and nitrogen sources along with trace elements is consumed to generate biomass. The material balance model for this process becomes: If we consider the system of microbial cells to be at steady state then we may set the accumulation term to zero and reduce the material balance equations to simple algebraic equations. In such a system, substrate becomes the input to the system which is consumed, and biomass is produced, becoming the output from the system. The material balance may then be represented as: Mathematically, the algebraic equations can be represented as a dot product of a matrix of coefficients and a vector of the unknowns. Since the steady-state assumption sets the accumulation term to zero, the system can be written as: Extending this idea to metabolic networks, it is possible to represent a metabolic network as a stoichiometry-balanced set of equations.
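The balances referred to above can be written out explicitly; the following is a hedged reconstruction in standard notation (the symbols are assumptions, since the original equations are not reproduced in this text).

```latex
\underbrace{\frac{dX_i}{dt}}_{\text{accumulation}}
  \;=\; \text{input} \;-\; \text{output} \;+\; \text{production} \;-\; \text{consumption}
\quad\xrightarrow{\ \text{steady state}\ }\quad
0 \;=\; \text{input} \;-\; \text{output} \;+\; \text{production} \;-\; \text{consumption}
```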
Moving to the matrix formalism, we can represent the equations as the dot product of a matrix of stoichiometry coefficients (stoichiometric matrix ) and the vector of fluxes as the unknowns and set the right hand side to 0 implying the steady state. Metabolic networks typically have more reactions than metabolites and this gives an under-determined system of linear equations containing more variables than equations. The standard approach to solve such under-determined systems is to apply linear programming. Linear programs are problems that can be expressed in canonical form: where x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and is the matrix transpose. The expression to be maximized or minimized is called the objective function (cTx in this case). The inequalities Ax ≤ b are the constraints which specify a convex polytope over which the objective function is to be optimized. Linear Programming requires the definition of an objective function. The optimal solution to the LP problem is considered to be the solution which maximizes or minimizes the value of the objective function depending on the case in point. In the case of flux balance analysis, the objective function Z for the LP is often defined as biomass production. Biomass production is simulated by an equation representing a lumped reaction that converts various biomass precursors into one unit of biomass. Therefore, the canonical form of a Flux Balance Analysis problem would be: where represents the vector of fluxes (to be determined), is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function ( in this case). The inequalities and define, respectively, the minimal and the maximal rates of flux for every reaction corresponding to the columns of the matrix. These rates can be experimentally determined to constrain and improve the predictive accuracy of the model even further or they can be specified to an arbitrarily high value indicating no constraint on the flux through the reaction. The main advantage of the flux balance approach is that it does not require any knowledge of the metabolite concentrations, or more importantly, the enzyme kinetics of the system; the homeostasis assumption precludes the need for knowledge of metabolite concentrations at any time as long as that quantity remains constant, and additionally it removes the need for specific rate laws since it assumes that at steady state, there is no change in the size of the metabolite pool in the system. The stoichiometric coefficients alone are sufficient for the mathematical maximization of a specific objective function. The objective function is essentially a measure of how each component in the system contributes to the production of the desired product. The product itself depends on the purpose of the model, but one of the most common examples is the study of total biomass. A notable example of the success of FBA is the ability to accurately predict the growth rate of the prokaryote E. coli when cultured in different conditions. In this case, the metabolic system was optimized to maximize the biomass objective function. However this model can be used to optimize the production of any product, and is often used to determine the output level of some biotechnologically relevant product. 
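The formulas that the surrounding text describes in words take the following standard form (a reconstruction in conventional notation, since the equations themselves are not shown here):

```latex
% general linear program in canonical form
\max_{\mathbf{x}} \; \mathbf{c}^{T}\mathbf{x}
\quad \text{subject to} \quad A\mathbf{x} \le \mathbf{b}, \;\; \mathbf{x} \ge \mathbf{0}

% flux balance analysis problem
\max_{\mathbf{v}} \; Z = \mathbf{c}^{T}\mathbf{v}
\quad \text{subject to} \quad \mathbf{S}\mathbf{v} = \mathbf{0}, \;\;
\mathbf{v}_{\min} \le \mathbf{v} \le \mathbf{v}_{\max}
```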
The model itself can be experimentally verified by cultivating organisms using a chemostat or similar tools to ensure that nutrient concentrations are held constant. Measurements of the production of the desired objective can then be used to correct the model. A good description of the basic concepts of FBA can be found in the freely available supplementary material to Edwards et al. 2001 which can be found at the Nature website. Further sources include the book "Systems Biology" by B. Palsson dedicated to the subject and a useful tutorial and paper by J. Orth. Many other sources of information on the technique exist in published scientific literature including Lee et al. 2006, Feist et al. 2008, and Lewis et al. 2012. Model preparation and refinement The key parts of model preparation are: creating a metabolic network without gaps, adding constraints to the model, and finally adding an objective function (often called the Biomass function), usually to simulate the growth of the organism being modelled. Metabolic network and software tools Metabolic networks can vary in scope from those describing a single pathway, up to the cell, tissue or organism. The main requirement of a metabolic network that forms the basis of an FBA-ready network is that it contains no gaps. This typically means that extensive manual curation is required, making the preparation of a metabolic network for flux-balance analysis a process that can take months or years. However, recent advances such as so-called gap-filling methods can reduce the required time to weeks or months. Software packages for creation of FBA models include: Pathway Tools/MetaFlux, Simpheny, MetNetMaker, COBRApy, CarveMe, MIOM, or COBREXA.jl. Generally models are created in BioPAX or SBML format so that further analysis or visualization can take place in other software although this is not a requirement. Constraints A key part of FBA is the ability to add constraints to the flux rates of reactions within networks, forcing them to stay within a range of selected values. This lets the model more accurately simulate real metabolism. The constraints belong to two subsets from a biological perspective; boundary constraints that limit nutrient uptake/excretion and internal constraints that limit the flux through reactions within the organism. In mathematical terms, the application of constraints can be considered to reduce the solution space of the FBA model. In addition to constraints applied at the edges of a metabolic network, constraints can be applied to reactions deep within the network. These constraints are usually simple; they may constrain the direction of a reaction due to energy considerations or constrain the maximum speed of a reaction due to the finite speed of all reactions in nature. Growth media constraints Organisms, and all other metabolic systems, require some input of nutrients. Typically the rate of uptake of nutrients is dictated by their availability (a nutrient that is not present cannot be absorbed), their concentration and diffusion constants (higher concentrations of quickly-diffusing metabolites are absorbed more quickly) and the method of absorption (such as active transport or facilitated diffusion versus simple diffusion). If the rate of absorption (and/or excretion) of certain nutrients can be experimentally measured then this information can be added as a constraint on the flux rate at the edges of a metabolic model. 
This ensures that nutrients that are not present or not absorbed by the organism do not enter its metabolism (the flux rate is constrained to zero) and also means that known nutrient uptake rates are adhered to by the simulation. This provides a secondary method of making sure that the simulated metabolism has experimentally verified properties rather than just mathematically acceptable ones. Thermodynamical reaction constraints In principle, all reactions are reversible; in practice, however, reactions often effectively occur in only one direction. This may be due to a significantly higher concentration of reactants compared to the concentration of the products of the reaction. But more often it happens because the products of a reaction have a much lower free energy than the reactants and therefore the forward direction of the reaction is favored. For ideal reactions the flux is unbounded in either direction; for certain reactions a thermodynamic constraint can be applied implying direction (in this case forward), restricting the flux to non-negative values. Realistically, the flux through a reaction cannot be infinite (given that enzymes in the real system are finite), which implies a finite upper bound on every flux. Experimentally measured flux constraints Certain flux rates can be measured experimentally, and the fluxes within a metabolic model can be constrained, within some experimental error, to ensure these known flux rates are accurately reproduced in the simulation. Flux rates are most easily measured for nutrient uptake at the edge of the network. Measurement of internal fluxes is possible using radioactively labelled or NMR-visible metabolites. Constrained FBA-ready metabolic models can be analyzed using software such as the COBRA toolbox (available implementations in MATLAB and Python), SurreyFBA, or the web-based FAME. Additional software packages have been listed elsewhere, and such software and its functionality have recently been comprehensively reviewed. An open-source alternative is available for the R programming language in packages such as sybil for performing FBA and other constraint-based modeling techniques. Objective function FBA can give a large number of mathematically acceptable solutions to the steady-state problem. However, solutions of biological interest are the ones which produce the desired metabolites in the correct proportion. The objective function defines the proportion of these metabolites. For instance, when modelling the growth of an organism the objective function is generally defined as biomass. Mathematically, it is a column in the stoichiometry matrix, the entries of which place a "demand" or act as a "sink" for biosynthetic precursors such as fatty acids, amino acids and cell wall components, which are present on the corresponding rows of the S matrix. These entries represent experimentally measured, dry-weight proportions of cellular components. Therefore, this column becomes a lumped reaction that simulates growth and reproduction. The accuracy of experimental measurements therefore plays an essential role in the correct definition of the biomass function and makes the results of FBA biologically applicable by ensuring that the correct proportions of metabolites are produced by metabolism. When modeling smaller networks the objective function can be changed accordingly. An example of this would be in the study of the carbohydrate metabolism pathways, where the objective function would probably be defined as a certain proportion of ATP and NADH, and would thus simulate the production of high-energy metabolites by this pathway.
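As a sketch of how such constraints and an objective are applied in practice, the snippet below uses COBRApy (one of the packages mentioned above); the model file name and reaction identifiers are placeholders that depend on the particular reconstruction and are assumptions here, not values from the text.

```python
# Hedged COBRApy sketch: the SBML file name and reaction IDs are placeholders.
import cobra

model = cobra.io.read_sbml_model("my_reconstruction.xml")  # an FBA-ready, gap-free model

# Boundary constraint: cap the measured uptake of a nutrient
# (uptake is conventionally a negative exchange flux).
model.reactions.get_by_id("EX_glc_e").lower_bound = -10.0

# Internal / thermodynamic constraint: make a reaction irreversible with a finite maximum.
model.reactions.get_by_id("R_example").bounds = (0.0, 1000.0)

# Objective: maximize the biomass pseudo-reaction.
model.objective = "Biomass_reaction"

solution = model.optimize()
print(solution.objective_value)  # predicted growth rate
print(solution.fluxes.head())    # part of the steady-state flux distribution
```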
Optimization of the objective/biomass function Linear programming can be used to find a single optimal solution. The most common biological optimization goal for a whole-organism metabolic network would be to choose the flux vector that maximises the flux through a biomass function composed of the constituent metabolites of the organism placed into the stoichiometric matrix and denoted or simply In the more general case any reaction can be defined and added to the biomass function with either the condition that it be maximised or minimised if a single “optimal” solution is desired. Alternatively, and in the most general case, a vector can be introduced, which defines the weighted set of reactions that the linear programming model should aim to maximise or minimise, In the case of there being only a single separate biomass function/reaction within the stoichiometric matrix would simplify to all zeroes with a value of 1 (or any non-zero value) in the position corresponding to that biomass function. Where there were multiple separate objective functions would simplify to all zeroes with weighted values in the positions corresponding to all objective functions. Reducing the solution space – biological considerations for the system The analysis of the null space of matrices is implemented in software packages specialized for matrix operations such as Matlab and Octave. Determination of the null space of tells us all the possible collections of flux vectors (or linear combinations thereof) that balance fluxes within the biological network. The advantage of this approach becomes evident in biological systems which are described by differential equation systems with many unknowns. The velocities in the differential equations above - and - are dependent on the reaction rates of the underlying equations. The velocities are generally taken from the Michaelis–Menten kinetic theory, which involves the kinetic parameters of the enzymes catalyzing the reactions and the concentration of the metabolites themselves. Isolating enzymes from living organisms and measuring their kinetic parameters is a difficult task, as is measuring the internal concentrations and diffusion constants of metabolites within an organism. Therefore, the differential equation approach to metabolic modeling is beyond the current scope of science for all but the most studied organisms. FBA avoids this impediment by applying the homeostatic assumption, which is a reasonably approximate description of biological systems. Although FBA avoids that biological obstacle, the mathematical issue of a large solution space remains. FBA has a two-fold purpose. Accurately representing the biological limits of the system and returning the flux distribution closest to the natural fluxes within the target system/organism. Certain biological principles can help overcome the mathematical difficulties. While the stoichiometric matrix is almost always under-determined initially (meaning that the solution space to is very large), the size of the solution space can be reduced and be made more reflective of the biology of the problem through the application of certain constraints on the solutions. Extensions The success of FBA and the realization of its limitations has led to extensions that attempt to mediate the limitations of the technique. Flux variability analysis The optimal solution to the flux-balance problem is rarely unique with many possible, and equally optimal, solutions existing. 
Flux variability analysis (FVA), built into some analysis software, returns the boundaries for the fluxes through each reaction that can, paired with the right combination of other fluxes, estimate the optimal solution. Reactions which can support a low variability of fluxes through them are likely to be of a higher importance to an organism and FVA is a promising technique for the identification of reactions that are important. Minimization of metabolic adjustment (MOMA) When simulating knockouts or growth on media, FBA gives the final steady-state flux distribution. This final steady state is reached in varying time-scales. For example, the predicted growth rate of E. coli on glycerol as the primary carbon source did not match the FBA predictions; however, on sub-culturing for 40 days or 700 generations, the growth rate adaptively evolved to match the FBA prediction. Sometimes it is of interest to find out what is the immediate effect of a perturbation or knockout, since it takes time for regulatory changes to occur and for the organism to re-organize fluxes to optimally utilize a different carbon source or circumvent the effect of the knockout. MOMA predicts the immediate sub-optimal flux distribution following the perturbation by minimizing the distance (Euclidean) between the wild-type FBA flux distribution and the mutant flux distribution using quadratic programming. This yields an optimization problem of the form. where represents the wild-type (or unperturbed state) flux distribution and represents the flux distribution on gene deletion that is to be solved for. This simplifies to: This is the MOMA solution which represents the flux distribution immediately post-perturbation. Regulatory on-off minimization (ROOM) ROOM attempts to improve the prediction of the metabolic state of an organism after a gene knockout. It follows the same premise as MOMA that an organism would try to restore a flux distribution as close as possible to the wild-type after a knockout. However it further hypothesizes that this steady state would be reached through a series of transient metabolic changes by the regulatory network and that the organism would try to minimize the number of regulatory changes required to reach the wild-type state. Instead of using a distance metric minimization however it uses a mixed integer linear programming method. Dynamic FBA Dynamic FBA attempts to add the ability for models to change over time, thus in some ways avoiding the strict steady state condition of pure FBA. Typically the technique involves running an FBA simulation, changing the model based on the outputs of that simulation, and rerunning the simulation. By repeating this process an element of feedback is achieved over time. Comparison with other techniques FBA provides a less simplistic analysis than Choke Point Analysis while requiring far less information on reaction rates and a much less complete network reconstruction than a full dynamic simulation would require. In filling this niche, FBA has been shown to be a very useful technique for analysis of the metabolic capabilities of cellular systems. Choke point analysis Unlike choke point analysis which only considers points in the network where metabolites are produced but not consumed or vice versa, FBA is a true form of metabolic network modelling because it considers the metabolic network as a single complete entity (the stoichiometric matrix) at all stages of analysis. 
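The FVA procedure described above can be sketched in a few lines on the same toy network used earlier (again an illustrative construction, not a published model): first solve the ordinary FBA problem, then fix the biomass flux at its optimum and minimize and maximize each flux in turn.

```python
# Hedged FVA sketch on the toy network from the earlier example.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1, -1, 0],
              [0,  1,  0, -1]], dtype=float)
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]
biomass = 3  # index of the biomass reaction

# Step 1: ordinary FBA to find the optimal biomass flux.
c = np.zeros(4); c[biomass] = -1
fba = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v_opt = -fba.fun

# Step 2: pin biomass at the optimum (real studies often use a fraction, e.g. 90%),
# then scan the feasible range of every reaction.
A_eq = np.vstack([S, np.eye(4)[biomass]])
b_eq = np.append(np.zeros(2), v_opt)
for j in range(4):
    obj = np.zeros(4); obj[j] = 1
    lo = linprog(obj,  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    hi = -linprog(-obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    print(f"reaction {j}: flux range [{lo:.2f}, {hi:.2f}]")
```

In this tiny network every flux is fully determined once biomass is pinned, so all ranges collapse to a point; in genome-scale models the widths of these ranges are what distinguish tightly constrained (likely important) reactions from flexible ones.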
This means that network effects, such as chemical reactions in distant pathways affecting each other, can be reproduced in the model. The upside of choke point analysis's inability to simulate network effects is that it considers each reaction within a network in isolation and thus can suggest important reactions even if the network is highly fragmented and contains many gaps. Dynamic metabolic simulation Unlike dynamic metabolic simulation, FBA assumes that the internal concentration of metabolites within a system stays constant over time and thus is unable to provide anything other than steady-state solutions. It is unlikely that FBA could, for example, simulate the functioning of a nerve cell. Since the internal concentration of metabolites is not considered within a model, it is possible that an FBA solution could contain metabolites at a concentration too high to be biologically acceptable. This is a problem that dynamic metabolic simulations would probably avoid. One advantage of the simplicity of FBA over dynamic simulations is that it is far less computationally expensive, allowing the simulation of large numbers of perturbations to the network. A second advantage is that the reconstructed model can be substantially simpler by avoiding the need to consider enzyme rates and the effect of complex interactions on enzyme kinetics. See also Isotopic labeling Metabolomics Metabolic engineering Metabolic network modelling References Bioinformatics Systems biology Computational biology
Flux balance analysis
[ "Engineering", "Biology" ]
5,559
[ "Bioinformatics", "Biological engineering", "Computational biology", "Systems biology" ]
4,102,366
https://en.wikipedia.org/wiki/Verneuil%20method
The Verneuil method (or Verneuil process or Verneuil technique), also called flame fusion, was the first commercially successful method of manufacturing synthetic gemstones, developed in the late 19th century by the French chemist Auguste Verneuil. It is primarily used to produce the ruby, sapphire and padparadscha varieties of corundum, as well as the diamond simulants rutile, strontium titanate and spinel. The principle of the process involves melting a finely powdered substance using an oxyhydrogen flame, and crystallising the melted droplets into a boule. The process is considered to be the founding step of modern industrial crystal growth technology, and remains in wide use to this day. History Since the study of alchemy began, there have been attempts to synthetically produce precious stones, and ruby, being one of the prized cardinal gems, has long been a prime candidate. In the 19th century, significant advances were achieved, with the first ruby formed by melting two smaller rubies together in 1817, and the first microscopic crystals created from alumina (aluminium oxide) in a laboratory in 1837. By 1877, chemist Edmond Frémy had devised an effective method for commercial ruby manufacture by using molten baths of alumina, yielding the first gemstone-quality synthetic stones. The Parisian chemist Auguste Verneuil collaborated with Frémy on developing the method, but soon went on to independently develop the flame fusion process, which would eventually come to bear his name. One of Verneuil's sources of inspiration for developing his own method was the appearance of synthetic rubies sold by an unknown Genevan merchant in 1880. These "Geneva rubies" were dismissed as artificial at the time, but are now believed to be the first rubies produced by flame fusion, predating Verneuil's work on the process by 20 years. After examining the "Geneva rubies", Verneuil came to the conclusion that it was possible to recrystallise finely ground aluminium oxide into a large gemstone. This realisation, along with the availability of the recently developed oxyhydrogen torch and growing demand for synthetic rubies, led him to design the Verneuil furnace, where finely ground purified alumina and chromium oxide were melted by a flame of at least , and recrystallised on a support below the flame, creating a large crystal. He announced his work in 1902, publishing details outlining the process in 1904. By 1910, Verneuil's laboratory had expanded into a 30-furnace production facility, with annual gemstone production by the Verneuil process having reached in 1907. By 1912, production reached , and would go on to reach in 1980 and in 2000, led by Hrand Djevahirdjian's factory in Monthey, Switzerland, founded in 1914. The most notable improvements in the process were made in 1932 by S. K. Popov, who helped establish the capability for producing high-quality sapphires in the Soviet Union through the next 20 years. A large production capability was also established in the United States during World War II, when European sources were not available and jewels were in high demand for their military applications, such as for timepieces. The process was designed primarily for the synthesis of rubies, which became the first gemstone to be produced on an industrial scale.
However, the Verneuil process could also be used for the production of other stones, including blue sapphire, which required oxides of iron and titanium to be used in place of chromium oxide, as well as more elaborate ones, such as star sapphires, where titania (titanium dioxide) was added and the boule was kept in the heat longer, allowing needles of rutile to crystallise within it. In 1947, the Linde Air Products division of Union Carbide pioneered the use of the Verneuil process for creating such star sapphires, until production was discontinued in 1974 owing to overseas competition. Despite some improvements in the method, the Verneuil process remains virtually unchanged to this day, while maintaining a leading position in the manufacture of synthetic corundum and spinel gemstones. Its most significant setback came in 1917, when Jan Czochralski introduced the Czochralski process, which has found numerous applications in the semiconductor industry, where a much higher quality of crystals is required than the Verneuil process can produce. Other alternatives to the process emerged in 1957, when Bell Labs introduced the hydrothermal process, and in 1958, when Carroll Chatham introduced the flux method. In 1989 Larry P Kelley of ICT, Inc. also developed a variant of the Czochralski process where natural ruby is used as the 'feed' material. Process One of the most crucial factors in successfully crystallising an artificial gemstone is obtaining highly pure starting material, with at least 99.9995% purity. In the case of manufacturing rubies, sapphires or padparadscha, this material is alumina. The presence of sodium impurities is especially undesirable, as it makes the crystal opaque. But because the bauxite from which alumina is obtained is most likely by way of the Bayer process (the first stage of which introduces caustic soda in order to separate the Al2O3) particular attention must be paid to the feedstock. Depending on the desired colouration of the crystal, small quantities of various oxides are added, such as chromium oxide for a red ruby, or ferric oxide and titania for a blue sapphire. Other starting materials include titania for producing rutile, or titanyl double oxalate for producing strontium titanate. Alternatively, small, valueless crystals of the desired product can be used. This starting material is finely powdered, and placed in a container within a Verneuil furnace, with an opening at the bottom through which the powder can escape when the container is vibrated. While the powder is being released, oxygen is supplied into the furnace, and travels with the powder down a narrow tube. This tube is located within a larger tube, into which hydrogen is supplied. At the point where the narrow tube opens into the larger one, combustion occurs, with a flame of at least at its core. As the powder passes through the flame, it melts into small droplets, which fall onto an earthen support rod placed below. The droplets gradually form a sinter cone on the rod, the tip of which is close enough to the core to remain liquid. It is at that tip that the seed crystal eventually forms. As more droplets fall onto the tip, a single crystal, called a boule, starts to form, and the support is slowly moved downward, allowing the base of the boule to crystallise, while its cap always remains liquid. The boule is formed in the shape of a tapered cylinder, with a diameter broadening away from the base and eventually remaining more or less constant. 
With a constant supply of powder and withdrawal of the support, very long cylindrical boules can be obtained. Once removed from the furnace and allowed to cool, the boule is split along its vertical axis to relieve internal pressure; otherwise, the crystal will be prone to fracture along a vertical parting plane when the stalk is broken. When initially outlining the process, Verneuil specified a number of conditions crucial for good results. These include: a flame temperature that is not higher than necessary for fusion; always keeping the melted product in the same part of the oxyhydrogen flame; and reducing the point of contact between the melted product and support to as small an area as possible. The average commercially produced boule using the process is in diameter and long, weighing about . The process can also be performed with a custom-oriented seed crystal to achieve a specific desired crystallographic orientation. Crystals produced by the Verneuil process are chemically and physically equivalent to their naturally occurring counterparts, and strong magnification is usually required to distinguish between the two. A telltale characteristic of a Verneuil crystal is that curved growth lines (curved striae) form as the cylindrical boule grows upwards in an environment with a high thermal gradient, while the equivalent lines in natural crystals are straight. Another distinguishing feature is the common presence of microscopic gas bubbles formed due to an excess of oxygen in the furnace; imperfections in natural crystals are usually solid impurities. See also Bridgman–Stockbarger method Czochralski method Float-zone silicon Kyropoulos method Laser-heated pedestal growth Micro-pulling-down Shelby Gem Factory References R. T. Liddicoat Jr., Gem, McGraw-Hill AccessScience, January 2002, Page 2. Chemical processes Mineralogy Gemology French inventions Industrial processes Crystals Science and technology in France Methods of crystal growth
Verneuil method
[ "Chemistry", "Materials_science" ]
1,813
[ "Methods of crystal growth", "Chemical processes", "Crystallography", "Crystals", "nan", "Chemical process engineering" ]
4,102,521
https://en.wikipedia.org/wiki/Glycol%20cleavage
Glycol cleavage is a specific type of organic chemistry oxidation. The carbon–carbon bond in a vicinal diol (glycol) is cleaved, and the two oxygen atoms instead become double-bonded to their respective carbon atoms. Depending on the substitution pattern in the diol, the resulting carbonyl compounds will be ketones and/or aldehydes. Glycol cleavage is an important tool for determining the structures of sugars. After cleavage of the glycol, the ketone and aldehyde fragments can be inspected and the location of the former hydroxyl groups ascertained. Reagents Iodine-based reagents such as periodic acid (HIO4) and (diacetoxyiodo)benzene (PhI(OAc)2) are commonly used. Another reagent is lead tetraacetate (Pb(OAc)4). These I- and Pb-based methods are called the Malaprade reaction and Criegee oxidation, respectively. The former is favored for aqueous solutions, the latter for nonaqueous solutions. Cyclic intermediates are invariably invoked. The ring then fragments, with cleavage of the carbon–carbon bond and formation of carbonyl groups. Warm concentrated potassium permanganate (KMnO4) will react with an alkene to form a glycol. Following this dihydroxylation, the KMnO4 can then cleave the glycol to give aldehydes or ketones. The aldehydes will react further with KMnO4, being oxidized to carboxylic acids. Controlling the temperature, the concentration of the reagent, and the pH of the solution can keep the reaction from continuing past the formation of the glycol. References External links www.cem.msu.edu Periodate oxidation of polysaccharides Organic redox reactions
Glycol cleavage
[ "Chemistry" ]
397
[ "Organic redox reactions", "Organic reactions" ]
1,512,013
https://en.wikipedia.org/wiki/Wald%27s%20equation
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands. The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation. Basic version Let be a sequence of real-valued, independent and identically distributed random variables and let be an integer-valued random variable that is independent of the sequence . Suppose that and the have finite expectations. Then Example Roll a six-sided dice. Take the number on the die (call it ) and roll that number of six-sided dice to get the numbers , and add up their values. By Wald's equation, the resulting value on average is General version Let be an infinite sequence of real-valued random variables and let be a nonnegative integer-valued random variable. Assume that: . are all integrable (finite-mean) random variables, . for every natural number , and . the infinite series satisfies Then the random sums are integrable and If, in addition, . all have the same expectation, and . has finite expectation, then Remark: Usually, the name Wald's equation refers to this last equality. Discussion of assumptions Clearly, assumption () is needed to formulate assumption () and Wald's equation. Assumption () controls the amount of dependence allowed between the sequence and the number of terms; see the counterexample below for the necessity. Note that assumption () is satisfied when is a stopping time for a sequence of independent random variables . Assumption () is of more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof. If assumption () is satisfied, then assumption () can be strengthened to the simpler condition . there exists a real constant such that for all natural numbers . Indeed, using assumption (), and the last series equals the expectation of  [Proof], which is finite by assumption (). Therefore, () and () imply assumption (). Assume in addition to () and () that . is independent of the sequence and . there exists a constant such that for all natural numbers . Then all the assumptions (), (), () and (), hence also () are satisfied. In particular, the conditions () and () are satisfied if . the random variables all have the same distribution. Note that the random variables of the sequence don't need to be independent. The interesting point is to admit some dependence between the random number of terms and the sequence . A standard version is to assume (), (), () and the existence of a filtration such that . is a stopping time with respect to the filtration, and . and are independent for every . Then () implies that the event is in , hence by () independent of . This implies (), and together with () it implies (). For convenience (see the proof below using the optional stopping theorem) and to specify the relation of the sequence and the filtration , the following additional assumption is often imposed: . the sequence is adapted to the filtration , meaning the is -measurable for every . 
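A quick Monte Carlo check of the dice example given above (assuming fair six-sided dice, so the mean of a single roll is 3.5): Wald's equation predicts an average total of 3.5 × 3.5 = 12.25.

```python
# Monte Carlo sanity check of Wald's equation for the dice example:
# roll one die to get N, then sum N further rolls; the mean should be E[N]*E[X] = 12.25.
import random

random.seed(0)
trials = 200_000
total = 0.0
for _ in range(trials):
    n = random.randint(1, 6)                              # the first roll determines N
    total += sum(random.randint(1, 6) for _ in range(n))  # sum of the N subsequent rolls

print(total / trials)   # close to 12.25
```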
Note that () and () together imply that the random variables are independent. Application An application is in actuarial science when considering the total claim amount follows a compound Poisson process within a certain time period, say one year, arising from a random number of individual insurance claims, whose sizes are described by the random variables . Under the above assumptions, Wald's equation can be used to calculate the expected total claim amount when information about the average claim number per year and the average claim size is available. Under stronger assumptions and with more information about the underlying distributions, Panjer's recursion can be used to calculate the distribution of . Examples Example with dependent terms Let be an integrable, -valued random variable, which is independent of the integrable, real-valued random variable with . Define for all . Then assumptions (), (), (), and () with are satisfied, hence also () and (), and Wald's equation applies. If the distribution of is not symmetric, then () does not hold. Note that, when is not almost surely equal to the zero random variable, then () and () cannot hold simultaneously for any filtration , because cannot be independent of itself as is impossible. Example where the number of terms depends on the sequence Let be a sequence of independent, symmetric, and }-valued random variables. For every let be the σ-algebra generated by and define when is the first random variable taking the value . Note that , hence by the ratio test. The assumptions (), () and (), hence () and () with , (), (), and () hold, hence also (), and () and Wald's equation applies. However, () does not hold, because is defined in terms of the sequence . Intuitively, one might expect to have in this example, because the summation stops right after a one, thereby apparently creating a positive bias. However, Wald's equation shows that this intuition is misleading. Counterexamples A counterexample illustrating the necessity of assumption () Consider a sequence of i.i.d. (Independent and identically distributed random variables) random variables, taking each of the two values 0 and 1 with probability  (actually, only is needed in the following). Define . Then is identically equal to zero, hence , but and and therefore Wald's equation does not hold. Indeed, the assumptions (), (), () and () are satisfied, however, the equation in assumption () holds for all except for . A counterexample illustrating the necessity of assumption () Very similar to the second example above, let be a sequence of independent, symmetric random variables, where takes each of the values and with probability . Let be the first such that . Then, as above, has finite expectation, hence assumption () holds. Since for all , assumptions () and () hold. However, since almost surely, Wald's equation cannot hold. Since is a stopping time with respect to the filtration generated by , assumption () holds, see above. Therefore, only assumption () can fail, and indeed, since and therefore for every , it follows that A proof using the optional stopping theorem Assume (), (), (), (), () and (). Using assumption (), define the sequence of random variables Assumption () implies that the conditional expectation of given equals almost surely for every , hence is a martingale with respect to the filtration by assumption (). 
Assumptions (), () and () make sure that we can apply the optional stopping theorem, hence is integrable and Due to assumption (), and due to assumption () this upper bound is integrable. Hence we can add the expectation of to both sides of Equation () and obtain by linearity Remark: Note that this proof does not cover the above example with dependent terms. General proof This proof uses only Lebesgue's monotone and dominated convergence theorems. We prove the statement as given above in three steps. Step 1: Integrability of the random sum We first show that the random sum is integrable. Define the partial sums Since takes its values in and since , it follows that The Lebesgue monotone convergence theorem implies that By the triangle inequality, Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain where the second inequality follows using the monotone convergence theorem. By assumption (), the infinite sequence on the right-hand side of () converges, hence is integrable. Step 2: Integrability of the random sum We now show that the random sum is integrable. Define the partial sums of real numbers. Since takes its values in and since , it follows that As in step 1, the Lebesgue monotone convergence theorem implies that By the triangle inequality, Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain By assumption (), Substituting this into () yields which is finite by assumption (), hence is integrable. Step 3: Proof of the identity To prove Wald's equation, we essentially go through the same steps again without the absolute value, making use of the integrability of the random sums and in order to show that they have the same expectation. Using the dominated convergence theorem with dominating random variable and the definition of the partial sum given in (), it follows that Due to the absolute convergence proved in () above using assumption (), we may rearrange the summation and obtain that where we used assumption () and the dominated convergence theorem with dominating random variable for the second equality. Due to assumption () and the σ-additivity of the probability measure, Substituting this result into the previous equation, rearranging the summation (which is permitted due to absolute convergence, see () above), using linearity of expectation and the definition of the partial sum of expectations given in (), By using dominated convergence again with dominating random variable , If assumptions () and () are satisfied, then by linearity of expectation, This completes the proof. Further generalizations Wald's equation can be transferred to -valued random variables by applying the one-dimensional version to every component. If are Bochner-integrable random variables taking values in a Banach space, then the general proof above can be adjusted accordingly. See also Lorden's inequality Wald's martingale Spitzer's formula Notes References External links Probability theory Articles containing proofs Actuarial science
Wald's equation
[ "Mathematics" ]
2,136
[ "Articles containing proofs", "Actuarial science", "Applied mathematics" ]
1,512,189
https://en.wikipedia.org/wiki/Hot%20springs%20in%20Taiwan
Taiwan is part of the collision zone between the Yangtze Plate and Philippine Sea Plate. Eastern and southern Taiwan are the northern end of the Philippine Mobile Belt. Located next to an oceanic trench and volcanic system in a tectonic collision zone, Taiwan has evolved a unique environment that produces high-temperature springs with crystal-clear water, usually both clean and safe to drink. These hot springs are commonly used for spas and resorts. Soaking in hot springs became popular in Taiwan around 1895 during the 50-year long colonial rule by Japan. History The first mention of Taiwan's hot springs came from a 1697 manuscript, , but they were not developed until 1893, when a German businessman discovered Beitou and later established a small local spa. Under Japanese rule, the government constantly promoted and further enhanced the natural hot springs. The Japanese rule brought with them their rich onsen culture of spring soaking, which had a great influence on Taiwan. In March 1896, from Osaka, Japan opened Taiwan's first hot spring hotel, called . He not only heralded a new era of hot spring bathing in Beitou, but also paved the road for a whole new hot spring culture for Taiwan. In the Japanese onsen culture, hot springs are claimed to offer many health benefits. As well as raising energy levels, the minerals in the water are commonly suggested to help treat chronic fatigue, eczema or arthritis. During Japanese rule, the four major hot springs in Taiwan were in modern-day Beitou, Yangmingshan, Guanziling and Sichongxi. However, under Republic of China administration starting from 1945, the hot spring culture in Taiwan gradually lost momentum. It was not until 1999 that the authorities again started large-scale promotion of Taiwan's hot springs, setting off a renewed hot spring fever. In recent years, hot spring spas and resorts on Taiwan have gained more popularity. With the support of the government, the hot spring has become not only another industry but also again part of Taiwanese culture. Taiwan has one of the highest concentrations (more than 100 hot springs) and greatest variety of thermal springs in the world varying from hot springs to cold springs, mud springs, and seabed hot springs. Geology Taiwan is located on a faultline where several continental plates meet; the Philippine Sea Plate and the Eurasian Plate intersect in the Circum-Pacific seismic zone. Types of springs Sodium carbonate springs Sulfur springs Ferrous springs Sodium hydrogen carbonate springs Mud springs (spring water contains alkaline and iodine, is salty and has a light sulfuric smell) Salt or hydrogen sulfide springs Partial list of hot springs in Taiwan Jiaoxi Dakeng Beitou - is considered the "hot spring capital of Taiwan". Zhiben Tai-an - is an odorless and colorless alkaline carbonate hot spring. Yangmingshan Guguan Guanziling - is known for its mud baths. Sichongxi Wulai Ruisui, Hualien - this hot spring has a high iron content, consequently the water has a brownish tint. Zhaori See also Onsen Culture of Taiwan List of hot springs References External links Taiwanzen, website about Taiwan and hot springs Taiwanese Hot Springs Taiwan Journal Hot spring tour, Tourism Bureau, R.O.C. Tourism in Taiwan Geothermal areas Culture of Taiwan Balneotherapy Geology of Taiwan Hydrology Spa towns Thermal treatment
Hot springs in Taiwan
[ "Chemistry", "Engineering", "Environmental_science" ]
684
[ "Hydrology", "Environmental engineering" ]
1,515,898
https://en.wikipedia.org/wiki/Thermodynamic%20equations
Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates, that became the laws of thermodynamics. Introduction One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power: During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial, for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics. A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium state. The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters. 
If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state. Notation Some of the most common thermodynamic quantities are: The conjugate variable pairs are the fundamental state variables used to formulate the thermodynamic functions. The most important thermodynamic potentials are the following functions: Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems. Common material properties determined from the thermodynamic functions are the following: The following constants are constants that occur in many relationships due to the application of a standard system of units. Laws of thermodynamics The behavior of a thermodynamic system is summarized in the laws of thermodynamics, which concisely are: Zeroth law of thermodynamics If A, B, C are thermodynamic systems such that A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, C is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated. First law of thermodynamics dU = δQ − δW, where dU is the infinitesimal increase in internal energy of the system, δQ is the infinitesimal heat flow into the system, and δW is the infinitesimal work done by the system. The first law is the law of conservation of energy. The symbol δ, instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that Q and W are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as dU = δQ + δW. Second law of thermodynamics The entropy of an isolated system never decreases: dS ≥ 0 for an isolated system. A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged). Third law of thermodynamics S → 0 as T → 0. The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure. Onsager reciprocal relations – sometimes called the Fourth law of thermodynamics The fourth law of thermodynamics is not yet an agreed-upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law. The fundamental equation The first and second law of thermodynamics are the most fundamental equations of thermodynamics.
They may be combined into what is known as the fundamental thermodynamic relation, which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of k different types of particles which has the volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as: dU = T dS − P dV + Σ_i μ_i dN_i. Some important aspects of this equation should be noted: The thermodynamic space has k+2 dimensions. The differential quantities (dU, dS, dV, dN_i) are all extensive quantities. The coefficients of the differential quantities are intensive quantities (temperature, pressure, chemical potential). Each pair in the equation is known as a conjugate pair with respect to the internal energy. The intensive variables may be viewed as a generalized "force". An imbalance in the intensive variable will cause a "flow" of the extensive variable in a direction to counter the imbalance. The equation may be seen as a particular case of the chain rule. In other words: dU = (∂U/∂S)_{V,N} dS + (∂U/∂V)_{S,N} dV + Σ_i (∂U/∂N_i)_{S,V,N_j} dN_i, from which the following identifications can be made: T = (∂U/∂S)_{V,N}, −P = (∂U/∂V)_{S,N}, μ_i = (∂U/∂N_i)_{S,V,N_j}. These equations are known as "equations of state" with respect to the internal energy. (Note - the relation between pressure, volume, temperature, and particle number which is commonly called "the equation of state" is just one of many possible equations of state.) If we know all k+2 of the above equations of state, we may reconstitute the fundamental equation and recover all thermodynamic properties of the system. The fundamental equation can be solved for any other differential and similar expressions can be found. For example, we may solve for dS and find that dS = (1/T) dU + (P/T) dV − Σ_i (μ_i/T) dN_i. Thermodynamic potentials By the principle of minimum energy, the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant. By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials. For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system. The four most common thermodynamic potentials are the internal energy U (natural variables S, V, N_i), the Helmholtz free energy F = U − TS (natural variables T, V, N_i), the enthalpy H = U + PV (natural variables S, P, N_i), and the Gibbs free energy G = U + PV − TS (natural variables T, P, N_i). After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as: dU = T dS − P dV + Σ_i μ_i dN_i, dF = −S dT − P dV + Σ_i μ_i dN_i, dH = T dS + V dP + Σ_i μ_i dN_i, dG = −S dT + V dP + Σ_i μ_i dN_i. The thermodynamic square can be used as a tool to recall and derive these potentials. First order equations Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find k+2 equations of state with respect to the particular potential.
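A short worked example may help here; it is not part of the original article, just the standard monatomic ideal gas treated under the usual textbook assumptions. Writing its fundamental equation in the entropy representation (up to constants that do not affect the derivatives) and differentiating reproduces the familiar equations of state:

% Fundamental equation of a monatomic ideal gas (entropy representation),
% with s_0 an unimportant constant per particle:
\begin{align}
  S(U,V,N) &= N k_B \left[ \tfrac{3}{2}\ln\frac{U}{N} + \ln\frac{V}{N} \right] + N s_0 \\
  \frac{1}{T} &= \left(\frac{\partial S}{\partial U}\right)_{V,N} = \frac{3 N k_B}{2U}
    \quad\Longrightarrow\quad U = \tfrac{3}{2} N k_B T \\
  \frac{P}{T} &= \left(\frac{\partial S}{\partial V}\right)_{U,N} = \frac{N k_B}{V}
    \quad\Longrightarrow\quad P V = N k_B T
\end{align}

Together with the chemical-potential relation obtained from the derivative with respect to N, these equations of state carry the same information as the fundamental equation itself, which is exactly the reconstitution property described above.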
If Φ is a thermodynamic potential, then the fundamental equation may be expressed as: dΦ = Σ_i X_i dx_i, where the x_i are the natural variables of the potential. If X_i is conjugate to x_i, then we have the equations of state for that potential, one for each set of conjugate variables: X_i = ∂Φ/∂x_i. Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume: P = −(∂F/∂V)_{T,N}. For an ideal gas, this becomes the familiar PV = Nk_BT. Euler integrals Because all of the natural variables of the internal energy U are extensive quantities, it follows from Euler's homogeneous function theorem that U = TS − PV + Σ_i μ_i N_i. Substituting into the expressions for the other main potentials, we have the following expressions for the thermodynamic potentials: F = −PV + Σ_i μ_i N_i, H = TS + Σ_i μ_i N_i, G = Σ_i μ_i N_i. Note that the Euler integrals are sometimes also referred to as fundamental equations. Gibbs–Duhem relationship Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that: S dT − V dP + Σ_i N_i dμ_i = 0, which is known as the Gibbs-Duhem relationship. The Gibbs-Duhem relationship is a relationship among the intensive parameters of the system. It follows that for a simple system with r components, there will be r+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Willard Gibbs and Pierre Duhem. Second order equations There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations). Maxwell relations Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are: (∂T/∂V)_S = −(∂P/∂S)_V, (∂T/∂P)_S = (∂V/∂S)_P, (∂S/∂V)_T = (∂P/∂T)_V, and (∂S/∂P)_T = −(∂V/∂T)_P. The thermodynamic square can be used as a tool to recall and derive these relations. Material properties Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived: compressibility at constant temperature or constant entropy; specific heat (per particle) at constant pressure or constant volume; and the coefficient of thermal expansion. These properties are seen to be the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Thermodynamic property relations Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations.
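As a brief illustration, again not from the original text, the Gibbs-energy Maxwell relation can be combined with the ideal-gas equation of state to obtain how entropy varies with pressure at fixed temperature, a quantity that is otherwise hard to measure directly:

% Maxwell relation from G, applied to an ideal gas (PV = N k_B T):
\begin{align}
  \left(\frac{\partial S}{\partial P}\right)_{T}
    = -\left(\frac{\partial V}{\partial T}\right)_{P}
    = -\frac{N k_B}{P}
  \quad\Longrightarrow\quad
  S(T,P) = S(T,P_0) - N k_B \ln\frac{P}{P_0}
\end{align}

This is the kind of manipulation meant by deriving thermodynamic relations from the Maxwell relations: a derivative of entropy is traded for a derivative of directly measurable quantities.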
Thus, we use more complex relations such as the Maxwell relations, the Clapeyron equation, and the Mayer relation. Maxwell relations in thermodynamics are critical because they provide a means of determining a change in entropy, which cannot be measured directly, from measurable changes in pressure, temperature, and specific volume. For a simple compressible system, the change in entropy with respect to pressure at constant temperature is the same as the negative change in specific volume with respect to temperature at constant pressure. Maxwell relations in thermodynamics are often used to derive thermodynamic relations. The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the quantities it yields is the enthalpy of vaporization at a given temperature, obtained from the slope of the saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that temperature. In the equation below, L represents the specific latent heat, T represents temperature, and Δv represents the change in specific volume: dP/dT = L/(T Δv). The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work as its volume changes. According to this relation, for an ideal gas the difference between the molar heat capacities equals the universal gas constant. This relation is represented by the difference between Cp and Cv: Cp – Cv = R. See also Thermodynamics Timeline of thermodynamics Notes References Thermodynamics Chemical engineering
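As a rough numerical illustration of the Clapeyron equation discussed above (this example is not from the article), the integrated Clausius–Clapeyron approximation — which additionally assumes an ideal vapor and negligible liquid volume — lets one estimate the enthalpy of vaporization of water from two points on its saturation curve. The saturation pressures used below are approximate steam-table values, not figures quoted in the text.

# Hedged sketch: latent heat from the slope of the saturation curve via the
# integrated Clausius-Clapeyron relation  ln(P2/P1) = -(L/R) * (1/T2 - 1/T1).
# Assumes ideal vapor and negligible liquid volume; the input data are
# approximate steam-table values, not figures taken from the article.
import math

R = 8.314  # J/(mol K), universal gas constant

T1, P1 = 363.15, 70.1e3     # ~90 C, ~70.1 kPa saturation pressure
T2, P2 = 373.15, 101.325e3  # 100 C, 101.325 kPa

L = -R * math.log(P2 / P1) / (1.0 / T2 - 1.0 / T1)  # molar latent heat, J/mol

print(f"Estimated enthalpy of vaporization: {L / 1000:.1f} kJ/mol")
# Prints roughly 41 kJ/mol; the tabulated value near 100 C is about 40.7 kJ/mol.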
Thermodynamic equations
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,198
[ "Thermodynamic equations", "Equations of physics", "Chemical engineering", "Thermodynamics", "nan", "Dynamical systems" ]
1,516,624
https://en.wikipedia.org/wiki/Chemical%20table%20file
Chemical table file (CT file) is a family of text-based chemical file formats that describe molecules and chemical reactions. One format, for example, lists each atom in a molecule, the x-y-z coordinates of that atom, and the bonds among the atoms. File formats There are several file formats in the family. The formats were created by MDL Information Systems (MDL), which was acquired by Symyx Technologies then merged with Accelrys Corp., and now called BIOVIA, a subsidiary of Dassault Systemes of Dassault Group. The CT file is an open format. BIOVIA publishes its specification. BIOVIA requires users to register to download the CT file format specifications. Molfile An MDL Molfile is a file format for holding information about the atoms, bonds, connectivity and coordinates of a molecule. The molfile consists of some header information, the Connection Table (CT) containing atom info, then bond connections and types, followed by sections for more complex information. The molfile is sufficiently common that most, if not all, cheminformatics software systems/applications are able to read the format, though not always to the same degree. It is also supported by some computational software such as Mathematica. The current de facto standard version is molfile V2000, although, more recently, the V3000 format has been circulating widely enough to present a potential compatibility issue for those applications that are not yet V3000-capable. Counts line block specification Bond block specification The Bond Block is made up of bond lines, one line per bond, with the following format: 111 222 ttt sss xxx rrr ccc where the values are described in the following table: Extended Connection Table (V3000) The extended (V3000) molfile consists of a regular molfile “no structure” followed by a single molfile appendix that contains the body of the connection table (Ctab). The following figure shows both an alanine structure and the extended molfile corresponding to it. Note that the “no structure” is flagged with the “V3000” instead of the “V2000” version stamp. There are two other changes to the header in addition to the version: The number of appendix lines is always written as 999, regardless of how many there actually are. (All current readers will disregard the count and stop at M END.) The “dimensional code” is maintained more explicitly. Thus “3D” really means 3D, although “2D” will be interpreted as 3D if any non-zero Z-coordinates are found. Unlike the V2000 molfile, the V3000 extended Rgroup molfile has the same header format as a non-Rgroup molfile. Counts line A counts line is required, and must be first. It specifies the number of atoms, bonds, 3D objects, and Sgroups. It also specifies whether or not the CHIRAL flag is set. Optionally, the counts line can specify molregno. This is only used when the regno exceeds 999999 (the limit of the format in the molfile header line). The format of the counts line is: SDF SDF is one of a family of chemical-data file formats developed by MDL; it is intended especially for structural information. "SDF" stands for structure-data format, and SDF files actually wrap the molfile (MDL Molfile) format. Multiple records are delimited by lines consisting of four dollar signs ($$$$). A key feature of this format is its ability to include associated data. Associated data items are denoted as follows: > <Unique_ID> XCA3464366 > <ClogP> 5.825 > <Vendor> Sigma > <Molecular Weight> 499.611 Multiple-line data items are also supported. 
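To make the layout concrete, the sketch below reads the atom and bond blocks of a tiny hand-written V2000 molfile for water. It is an illustrative toy, not a conforming CTfile parser: it splits fields on whitespace rather than using the fixed-width columns of the specification, and the example record itself is an assumption written for this sketch rather than text taken from the format documentation.

# Minimal, hedged sketch of reading a V2000 molfile: three header lines,
# a counts line, then the atom block and the bond block, ending with "M  END".
molfile = """\
water
  hand-written illustrative example

  3  2  0  0  0  0  0  0  0  0999 V2000
    0.0000    0.0000    0.0000 O   0  0  0  0  0  0  0  0  0  0  0  0
    0.9572    0.0000    0.0000 H   0  0  0  0  0  0  0  0  0  0  0  0
   -0.2400    0.9270    0.0000 H   0  0  0  0  0  0  0  0  0  0  0  0
  1  2  1  0
  1  3  1  0
M  END
"""

lines = molfile.splitlines()
n_atoms, n_bonds = int(lines[3][:3]), int(lines[3][3:6])  # counts-line fields

atoms = []  # (element symbol, x, y, z)
for line in lines[4:4 + n_atoms]:
    x, y, z, symbol = line.split()[:4]
    atoms.append((symbol, float(x), float(y), float(z)))

bonds = []  # (first atom, second atom, bond type), 1-based atom indices
for line in lines[4 + n_atoms:4 + n_atoms + n_bonds]:
    a, b, bond_type = (int(v) for v in line.split()[:3])
    bonds.append((a, b, bond_type))

print(atoms)
print(bonds)

In practice most cheminformatics toolkits already ship molfile readers, so a hand-rolled parser like this is mainly useful for inspecting the format itself.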
The MDL SDF-format specification requires that a hard-carriage-return character be inserted if a single line of any text field exceeds 200 characters. This requirement is frequently violated in practice, as many SMILES and InChI strings exceed that length. Other formats of the family There are other, less commonly used formats of the family: RXNFile - for representing a single chemical reaction; RDFile - for representing a list of records with associated data. Each record can contain chemical structures, reactions, textual and tabular data; RGFile - for representing the Markush structures (deprecated, Molfile V3000 can represent Markush structures); XDFile - for representing chemical information in XML format. See also Chemical file format#Converting between formats References External links Adroit Repository paid software to process SD files (SDF) from Adroit DI. SDF Toolkit free software to process SD files (SDF). NCI/CADD Chemical Identifier Resolver generates SD files (SDF) from chemical names, CAS Registry Numbers, SMILES, InChI, InChIKey, .... KNIME free software to manipulate data and do datamining, can also read and write SD files (SDF). Comparative Toxicology Dashboard service provided by the Environmental Protection Agency (EPA) which generates SD files (SDF) from chemical names, CAS Registry Numbers, SMILES, InChI, InChIKey, ... Computational chemistry Chemical file formats
Chemical table file
[ "Chemistry" ]
1,120
[ "Theoretical chemistry", "Computational chemistry", "Chemistry software", "Chemical file formats" ]
1,516,694
https://en.wikipedia.org/wiki/Long-range%20dependence
Long-range dependence (LRD), also called long memory or long-range persistence, is a phenomenon that may arise in the analysis of spatial or time series data. It relates to the rate of decay of statistical dependence of two points with increasing time interval or spatial distance between the points. A phenomenon is usually considered to have long-range dependence if the dependence decays more slowly than an exponential decay, typically a power-like decay. LRD is often related to self-similar processes or fields. LRD has been used in various fields such as internet traffic modelling, econometrics, hydrology, linguistics and the earth sciences. Different mathematical definitions of LRD are used for different contexts and purposes. Short-range dependence versus long-range dependence One way of characterising long-range and short-range dependent stationary processes is in terms of their autocovariance functions. For a short-range dependent process, the coupling between values at different times decreases rapidly as the time difference increases. Either the autocovariance drops to zero after a certain time lag, or it eventually has an exponential decay. In the case of LRD, there is much stronger coupling. The decay of the autocovariance function is power-like and so is slower than exponential. A second way of characterizing long- and short-range dependence is in terms of the variance of partial sums of consecutive values. For short-range dependence, the variance typically grows proportionally to the number of terms. For LRD, the variance of the partial sum increases more rapidly, often as a power function with exponent greater than 1. A way of examining this behavior uses the rescaled range. This aspect of long-range dependence is important in the design of dams on rivers for water resources, where the summations correspond to the total inflow to the dam over an extended period. The above two ways are mathematically related to each other, but they are not the only ways to define LRD. In the case where the autocovariance of the process does not exist (heavy tails), one has to find other ways to define what LRD means, and this is often done with the help of self-similar processes. The Hurst parameter H is a measure of the extent of long-range dependence in a time series (while it has another meaning in the context of self-similar processes). H takes on values from 0 to 1. A value of 0.5 indicates the absence of long-range dependence. The closer H is to 1, the greater the degree of persistence or long-range dependence. H less than 0.5 corresponds to anti-persistency, which, as the opposite of LRD, indicates strong negative correlation, so that the process fluctuates violently. Estimation of the Hurst parameter Slowly decaying variances, LRD, and a spectral density obeying a power law are different manifestations of the property of the underlying covariance of a stationary process. Therefore, it is possible to approach the problem of estimating the Hurst parameter from three different angles: Variance-time plot: based on the analysis of the variances of the aggregate processes R/S statistics: based on the time-domain analysis of the rescaled adjusted range Periodogram: based on a frequency-domain analysis Relation to self-similar processes Given a stationary LRD sequence, the partial sum, if viewed as a process indexed by the number of terms after a proper scaling, is asymptotically a self-similar process with stationary increments, the most typical one being fractional Brownian motion.
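A small sketch can show how the first of the three estimation approaches works in practice. It is illustrative only and not taken from the article; serious analyses would use more careful estimators (R/S, detrended fluctuation analysis, Whittle/periodogram methods) together with confidence intervals.

# Hedged sketch: estimating the Hurst parameter H with the aggregated-variance
# ("variance-time plot") method. For an LRD series the variance of the block
# means scales like m**(2H - 2), so H is read off the slope of a log-log fit.
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        block_means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(block_means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)  # slope is roughly 2H - 2
    return 1.0 + slope / 2.0

# White noise has no long-range dependence, so the estimate should be near 0.5.
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
print(hurst_aggregated_variance(noise, block_sizes=[10, 20, 50, 100, 200, 500]))

For the white-noise input the fitted slope is close to −1, giving an estimate near H = 0.5 (no long-range dependence); an LRD series such as fractional Gaussian noise with H > 0.5 would give a shallower slope and hence a larger estimate.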
Conversely, given a self-similar process with stationary increments and Hurst index H > 0.5, its increments (consecutive differences of the process) form a stationary LRD sequence. This also holds true if the sequence is short-range dependent, but in this case the self-similar process resulting from the partial sum can only be Brownian motion (H = 0.5). Models Among stochastic models that are used for long-range dependence, some popular ones are autoregressive fractionally integrated moving average models, which are defined for discrete-time processes, while continuous-time models might start from fractional Brownian motion. See also Long-tail traffic Traffic generation model Detrended fluctuation analysis Tweedie distributions Fractal dimension Hurst exponent Notes Further reading Autocorrelation Teletraffic Time series Spatial analysis
Long-range dependence
[ "Physics" ]
917
[ "Spacetime", "Space", "Spatial analysis" ]
1,516,915
https://en.wikipedia.org/wiki/Swine%20influenza
Swine influenza is an infection caused by any of several types of swine influenza viruses. Swine influenza virus (SIV) or swine-origin influenza virus (S-OIV) refers to any strain of the influenza family of viruses that is endemic in pigs. As of 2009, identified SIV strains include influenza C and the subtypes of influenza A known as H1N1, H1N2, H2N1, H3N1, H3N2, and H2N3. The swine influenza virus is common throughout pig populations worldwide. Transmission of the virus from pigs to humans is rare and does not always lead to human illness, often resulting only in the production of antibodies in the blood. If transmission causes human illness, it is called zoonotic swine flu. People with regular exposure to pigs are at increased risk of swine flu infections. Around the mid-20th century, the identification of influenza subtypes became possible, allowing accurate diagnosis of transmission to humans. Since then, only 50 such transmissions have been confirmed. These strains of swine flu rarely pass from human to human. Symptoms of zoonotic swine flu in humans are similar to those of influenza and influenza-like illness and include chills, fever, sore throat, muscle pains, severe headache, coughing, weakness, shortness of breath, and general discomfort. It is estimated that, in the 2009 flu pandemic, 11–21% of the then global population (of about 6.8 billion), equivalent to around 700 million to 1.4 billion people, contracted the illness—more, in absolute terms, than the Spanish flu pandemic. There were 18,449 confirmed fatalities. However, in a 2012 study, the CDC estimated more than 284,000 possible fatalities worldwide, with numbers ranging from 150,000 to 575,000. In August 2010, the World Health Organization declared the swine flu pandemic officially over. Subsequent cases of swine flu were reported in India in 2015, with over 31,156 positive test cases and 1,841 deaths. Signs and symptoms In pigs, a swine influenza infection produces fever, lethargy, discharge from the nose or eyes, sneezing, coughing, difficulty breathing, eye redness or inflammation, and decreased appetite. In some cases, the infection can cause miscarriage. However, infected pigs may not exhibit any symptoms. Although mortality is usually low (around 1–4%), the virus can cause weight loss and poor growth, in turn causing economic loss to farmers. Infected pigs can lose up to 12 pounds of body weight over a three- to four-week period. Influenza A is responsible for infecting swine and was first identified in 1918. Because both avian and mammalian influenza viruses can bind to receptors in pigs, pigs have often been seen as "mixing vessels", facilitating the evolution of strains that can be passed on to other mammals, such as humans. Humans Direct transmission of a swine flu virus from pigs to humans is possible (zoonotic swine flu). Fifty cases are known to have occurred since the first report in medical literature in 1958, which have resulted in a total of six deaths. Of these six people, one was pregnant, one had leukemia, one had Hodgkin's lymphoma, and two were known to be previously healthy. No medical history was reported for the remaining case. The true rate of infection may be higher, as most cases only cause a very mild disease and may never be reported or diagnosed. According to the United States Centers for Disease Control and Prevention (CDC), in humans the symptoms of the 2009 "swine flu" H1N1 virus are similar to those of influenza and influenza-like illness.
Symptoms include fever, cough, sore throat, watery eyes, body aches, shortness of breath, headache, weight loss, chills, sneezing, runny nose, dizziness, abdominal pain, lack of appetite, and fatigue. During the 2009 outbreak, an elevated percentage of patients reported diarrhea and vomiting. Because these symptoms are not specific to swine flu, a differential diagnosis of probable swine flu requires not only symptoms, but also a high likelihood of swine flu due to the person's recent and past medical history. For example, during the 2009 swine flu outbreak in the United States, the CDC advised physicians to "consider swine influenza infection in the differential diagnosis of patients with acute febrile respiratory illness who have either been in contact with persons with confirmed swine flu, or who were in one of the five U.S. states that have reported swine flu cases or in Mexico during the seven days preceding their illness onset." A diagnosis of confirmed swine flu requires laboratory testing of a respiratory sample (a simple nose and throat swab). The most common cause of death is respiratory failure. Other causes of death are pneumonia (leading to sepsis), high fever (leading to neurological problems), dehydration (from excessive vomiting and diarrhea), electrolyte imbalance, and kidney failure. Fatalities are more likely in young children and the elderly. Virology Transmission Between pigs Influenza is common in pigs. About half of breeding pigs in the USA have been exposed to the virus. Antibodies to the virus are also common in pigs in other countries. The main route of transmission is through direct contact between infected and uninfected animals. These close contacts are particularly common during animal transport. Intensive farming may also increase the risk of transmission, as the pigs are raised in very close proximity to each other. Direct transfer of the virus probably occurs through pigs touching noses or through dried mucus. Airborne transmission through the aerosols produced by pigs coughing or sneezing is also an important means of infection. The virus usually spreads quickly through a herd, infecting all the pigs within just a few days. Transmission may also occur through wild animals, such as wild boar, which can spread the disease between farms. To humans People who work with poultry and swine, especially those with intense exposures, are at increased risk of zoonotic infection with influenza virus endemic in these animals, and constitute a population of human hosts in which zoonosis and reassortment can co-occur. Vaccination of these workers against influenza and surveillance for new influenza strains among this population may therefore be an important public health measure. Transmission of influenza from swine to humans who work with swine was documented in a small surveillance study performed in 2004 at the University of Iowa. This study, among others, forms the basis of a recommendation that people whose jobs involve handling poultry and swine be the focus of increased public health surveillance. Other professions at particular risk of infection are veterinarians and meat processing workers, although the risk of infection for both of these groups is lower than that of farm workers. Interaction with avian H5N1 in pigs Pigs are unusual because they can be infected with influenza strains that usually infect three different species: pigs, birds, and humans. Within pigs, influenza viruses may exchange genes and produce novel strains.
Avian influenza virus H3N2 is endemic in pigs in China and has been detected in pigs in Vietnam, increasing fears of the emergence of new variant strains. H3N2 evolved from H2N2 by antigenic shift. In August 2004, researchers in China found H5N1 in pigs. These H5N1 infections may be common. In a survey of 10 apparently healthy pigs housed near poultry farms in West Java, where avian flu had broken out, five of the pig samples contained the H5N1 virus. The Indonesian government found similar results in the same region, though additional tests of 150 pigs outside the area were negative. Structure The influenza virion is roughly spherical. It is an enveloped virus; the outer layer is a lipid membrane which is taken from the host cell in which the virus multiplies. Inserted into the lipid membrane are glycoprotein "spikes" of hemagglutinin (HA) and neuraminidase (NA). The combination of HA and NA proteins determines the subtype of influenza virus (A/H1N1, for example). HA and NA are important in the immune response against the virus, and antibodies against these spikes may protect against infection. The antiviral drugs Relenza and Tamiflu target NA by inhibiting neuraminidase and preventing the release of viruses from host cells. Also embedded in the lipid membrane is the M2 protein, which is the target of the antiviral adamantanes amantadine and rimantadine. Classification Of the three genera of influenza viruses that cause human flu, two also cause influenza in pigs, with influenza A being common in pigs and influenza C being rare. Influenza B has not been reported in pigs. Within influenza A and influenza C, the strains found in pigs and humans are largely distinct, although because of reassortment there have been transfers of genes among strains crossing swine, avian, and human species boundaries. Influenza C Influenza C viruses infect both humans and pigs, but do not infect birds. Transmission between pigs and humans has occurred in the past. For example, influenza C caused small outbreaks of a mild form of influenza amongst children in Japan and California. As a result of the limited host range and lack of genetic diversity in influenza C, this form of influenza does not cause pandemics in humans. Influenza A Swine influenza is caused by influenza A subtypes H1N1, H1N2, H2N3, H3N1, and H3N2. In pigs, four influenza A virus subtypes (H1N1, H1N2, H3N2 and H7N9) are the most common strains worldwide. In the United States, the H1N1 subtype was exclusively prevalent among swine populations before 1998. Since late August 1998, H3N2 subtypes have been isolated from pigs. As of 2004, H3N2 virus isolates in US swine and turkey stocks were triple reassortants, containing genes from human (HA, NA, and PB1), swine (NS, NP, and M), and avian (PB2 and PA) lineages. In August 2012, the Centers for Disease Control and Prevention confirmed 145 human cases (113 in Indiana, 30 in Ohio, one in Hawaii and one in Illinois) of H3N2v since July 2012. The death of a 61-year-old Madison County, Ohio woman was the first in the USA associated with a new swine flu strain. She contracted the illness after having contact with hogs at the Ross County Fair. Diagnosis The CDC recommends real-time PCR as the method of choice for diagnosing H1N1. The oral or nasal fluid collection and RNA virus-preserving filter-paper card is commercially available. This method allows a specific diagnosis of novel influenza (H1N1) as opposed to seasonal influenza. Near-patient point-of-care tests are in development.
Prevention Prevention of swine influenza has three components: prevention in pigs, prevention of transmission to humans, and prevention of its spread among humans. Proper handwashing techniques can prevent the virus from spreading. Individuals can prevent infection by not touching the eyes, nose, or mouth, distancing from others who display symptoms of the cold or flu, and avoiding contact with others when displaying symptoms. Swine Methods of preventing the spread of influenza among swine include facility management, herd management, and vaccination (ATCvet code: ). Because much of the illness and death associated with swine flu involves secondary infection by other pathogens, control strategies that rely on vaccination may be insufficient. Control of swine influenza by vaccination has become more difficult in recent decades, as the evolution of the virus has resulted in inconsistent responses to traditional vaccines. Standard commercial swine flu vaccines are effective in controlling the infection when the virus strains match enough to have significant cross-protection, and custom (autogenous) vaccines made from the specific viruses isolated are created and used in the more difficult cases. Present vaccination strategies for SIV control and prevention in swine farms typically include the use of one of several bivalent SIV vaccines commercially available in the United States. Of the 97 recent H3N2 isolates examined, only 41 isolates had strong serologic cross-reactions with antiserum to three commercial SIV vaccines. Since the protective ability of influenza vaccines depends primarily on the closeness of the match between the vaccine virus and the epidemic virus, the presence of nonreactive H3N2 SIV variants suggests current commercial vaccines might not effectively protect pigs from infection with a majority of H3N2 viruses. The United States Department of Agriculture researchers say while pig vaccination keeps pigs from getting sick, it does not block infection or shedding of the virus. Facility management includes using disinfectants and ambient temperature to control viruses in the environment. They are unlikely to survive outside living cells for more than two weeks, except in cold (but above freezing) conditions, and are readily inactivated by disinfectants. Herd management includes not adding pigs carrying influenza to herds that have not been exposed to the virus. The virus survives in healthy carrier pigs for up to three months and can be recovered from them between outbreaks. Carrier pigs are usually responsible for the introduction of SIV into previously uninfected herds and countries, so new animals should be quarantined. After an outbreak, as immunity in exposed pigs wanes, new outbreaks of the same strain can occur. Humans Prevention of pig-to-human transmission Swine can be infected by both avian and human flu strains of influenza, and therefore are hosts where the antigenic shifts can occur that create new influenza strains. The transmission from swine to humans is believed to occur mainly in swine farms, where farmers are in close contact with live pigs. Although strains of swine influenza are usually not able to infect humans, it may occasionally happen, so farmers and veterinarians are encouraged to use face masks when dealing with infected animals. The use of vaccines on swine to prevent their infection is a major method of limiting swine-to-human transmission. 
Risk factors that may contribute to the swine-to-human transmission include smoking and, especially, not wearing gloves when working with sick animals, thereby increasing the likelihood of subsequent hand-to-eye, hand-to-nose, or hand-to-mouth transmission. Prevention of human-to-human transmission Influenza spreads between humans when infected people cough or sneeze, then other people breathe in the virus or touch something with the virus on it and then touch their own face. The CDC warned against touching mucosal membranes such as the eyes, nose, or mouth during the 2009 H1N1 pandemic, as these are common entry points for flu viruses. Swine flu cannot be spread by pork products, since the virus is not transmitted through food. The swine flu in humans is most contagious during the first five days of the illness, although some people, most commonly children, can remain contagious for up to ten days. Diagnosis can be made by sending a specimen, collected during the first five days, for analysis. Recommendations to prevent the spread of the virus among humans include using standard infection control, which includes frequent washing of hands with soap and water or with alcohol-based hand sanitizers, especially after being out in public. Chance of transmission is also reduced by disinfecting household surfaces, which can be done effectively with a diluted chlorine bleach solution. Influenza can spread in coughs or sneezes, but an increasing body of evidence shows small droplets containing the virus can linger on tabletops, telephones, and other surfaces and be transferred via the fingers to the eyes, nose, or mouth. Alcohol-based gel or foam hand sanitizers work well to destroy viruses and bacteria. Anyone with flu-like symptoms, such as a sudden fever, cough, or muscle aches, should stay away from work or public transportation and should contact a doctor for advice. Social distancing can be another infection control tactic. Individuals should avoid other people who might be infected or if infected themselves isolate from others for the duration of the infection. During active outbreaks, avoiding large gatherings, increasing physical distance in public places, or if possible remaining at home as much as is feasible can prevent further spread of disease. Public health and other responsible authorities have action plans which may request or require social distancing actions, depending on the severity of the outbreak. Vaccination Vaccines are available for different kinds of swine flu. The U.S. Food and Drug Administration (FDA) approved the new swine flu vaccine for use in the United States on September 15, 2009. Studies by the National Institutes of Health show a single dose creates enough antibodies to protect against the virus within about 10 days. In the aftermath of the 2009 pandemic, several studies were conducted to see which population groups were most likely to have received an influenza vaccine. These studies demonstrated that caucasians are much more likely to be vaccinated for seasonal influenza and for the H1N1 strain than African Americans. This could be due to several factors. Historically, there has been mistrust of vaccines and of the medical community from African Americans. Many African Americans do not believe vaccines or doctors to be effective. This mistrust stems from the exploitation of the African American communities during studies like the Tuskegee study. Additionally, vaccines are typically administered in clinics, hospitals, or doctor's offices. 
Many people of lower socioeconomic status are less likely to receive vaccinations because they do not have health insurance. Surveillance Although there is no formal national surveillance system in the United States to determine what viruses are circulating in pigs, an informal surveillance network in the United States is part of a world surveillance network. Treatment Swine As swine influenza is rarely fatal to pigs, little treatment beyond rest and supportive care is required. Instead, veterinary efforts are focused on preventing the spread of the virus throughout the farm or to other farms. Vaccination and animal management techniques are most important in these efforts. Antibiotics are also used to treat the disease; although they have no effect against the influenza virus, they do help prevent bacterial pneumonia and other secondary infections in influenza-weakened herds. In Europe the avian-like H1N1 and the human-like H3N2 and H1N2 are the most common influenza subtypes in swine, of which avian-like H1N1 is the most frequent. Since 2009, another subtype, pdmH1N1(2009), has emerged globally and also in the European pig population. The prevalence varies from country to country, but all of the subtypes are continuously circulating in swine herds. In the EU region, inactivated and adjuvanted whole-virus vaccines are available. Vaccination of sows is common practice and also benefits young pigs by prolonging the level of maternal antibodies. Several commercial vaccines are available, including a trivalent one used in sow vaccination and a vaccine against pdmH1N1(2009). In vaccinated sows, virus multiplication and shedding are significantly reduced. Humans If a human becomes sick with swine flu, antiviral drugs can make the illness milder and make the patient feel better faster. They may also prevent serious flu complications. For treatment, antiviral drugs work best if started soon after getting sick (within two days of symptoms). Besides antivirals, supportive care at home or in a hospital focuses on controlling fevers, relieving pain and maintaining fluid balance, as well as identifying and treating any secondary infections or other medical problems. The U.S. Centers for Disease Control and Prevention recommends the use of oseltamivir (Tamiflu) or zanamivir (Relenza) for the treatment and/or prevention of infection with swine influenza viruses; however, the majority of people infected with the virus make a full recovery without requiring medical attention or antiviral drugs. The viruses isolated in the 2009 outbreak have been found to be resistant to amantadine and rimantadine. History Pandemics Swine influenza was first proposed to be a disease related to human flu during the 1918 flu pandemic, when pigs became ill at the same time as humans. The first identification of an influenza virus as a cause of disease in pigs occurred about ten years later, in 1930. For the following 60 years, swine influenza strains were almost exclusively H1N1. Then, between 1997 and 2002, new strains of three different subtypes and five different genotypes emerged as causes of influenza among pigs in North America. In 1997–1998, H3N2 strains emerged. These strains, which include genes derived by reassortment from human, swine and avian viruses, have become a major cause of swine influenza in North America. Reassortment between H1N1 and H3N2 produced H1N2. In 1999 in Canada, a strain of H4N6 crossed the species barrier from birds to pigs, but was contained on a single farm.
The H1N1 form of swine flu is one of the descendants of the strain that caused the 1918 flu pandemic. As well as persisting in pigs, the descendants of the 1918 virus have also circulated in humans through the 20th century, contributing to the normal seasonal epidemics of influenza. However, direct transmission from pigs to humans is rare, with only 12 recorded cases in the U.S. since 2005. Nevertheless, the retention of influenza strains in pigs after these strains have disappeared from the human population might make pigs a reservoir where influenza viruses could persist, later emerging to reinfect humans once human immunity to these strains has waned. Swine flu has been reported numerous times as a zoonosis in humans, usually with limited distribution, rarely with a widespread distribution. Outbreaks in swine are common and cause significant economic losses in industry, primarily by causing stunting and extended time to market. For example, this disease costs the British meat industry about £65 million every year. 1918 The 1918 flu pandemic in humans was associated with H1N1 and influenza appearing in pigs; this may reflect a zoonosis either from swine to humans, or from humans to swine. Although it is not certain in which direction the virus was transferred, some evidence suggests that in this case pigs caught the disease from humans. For instance, swine influenza was only noted as a new disease of pigs in 1918 after the first large outbreaks of influenza amongst people. Although a recent phylogenetic analysis of more recent strains of influenza in humans, birds, and other animals including swine suggests the 1918 outbreak in humans followed a reassortment event within a mammal, the exact origin of the 1918 strain remains elusive. It is estimated that anywhere from 50 to 100 million people were killed worldwide. U.S. 2009 The swine flu was initially seen in the US in April 2009, where the strain of the particular virus was a mixture from 3 types of strains. Six of the genes are very similar to the H1N2 influenza virus that was found in pigs around 2000. Outbreaks 1976 U.S. On February 5, 1976, a United States army recruit at Fort Dix said he felt tired and weak. He died the next day, and four of his fellow soldiers were later hospitalized. Two weeks after his death, health officials announced the cause of death was a new strain of swine flu. The strain, a variant of H1N1, is known as A/New Jersey/1976 (H1N1). It was detected only from January 19 to February 9 and did not spread beyond Fort Dix. This new strain appeared to be closely related to the strain involved in the 1918 flu pandemic. Moreover, the ensuing increased surveillance uncovered another strain in circulation in the U.S.: A/Victoria/75 (H3N2), which spread simultaneously, also caused illness, and persisted until March. Alarmed public health officials decided action must be taken to head off another major pandemic, and urged President Gerald Ford that every person in the U.S. be vaccinated for the disease. The vaccination program was plagued by delays and public relations problems. On October 1, 1976, immunizations began, and three senior citizens died soon after receiving their injections. This resulted in a media outcry that linked these deaths to the immunizations, despite the lack of any proof the vaccine was the cause. According to science writer Patrick Di Justo, however, by the time the truth was known—that the deaths were not proven to be related to the vaccine—it was too late. 
"The government had long feared mass panic about swine flu—now they feared mass panic about the swine flu vaccinations." This became a strong setback to the program. There were reports of Guillain–Barré syndrome (GBS), a paralyzing neuromuscular disorder, affecting some people who had received swine flu immunizations. Although whether a link exists is still not clear, this syndrome may be a side effect of influenza vaccines. As a result, Di Justo writes, "the public refused to trust a government-operated health program that killed old people and crippled young people." In total, 48,161,019 Americans, or just over 22% of the population, had been immunized by the time the National Influenza Immunization Program was effectively halted on December 16, 1976. Overall, there were 1098 cases of GBS recorded nationwide by CDC surveillance, 532 of which occurred after vaccination and 543 before vaccination. About one to two cases per 100,000 people of GBS occur every year, whether or not people have been vaccinated. The vaccination program seems to have increased this normal risk of developing GBS by about to one extra case per 100,000 vaccinations. Recompensation charges were filed for over 4,000 cases of severe vaccination damage, including 25 deaths, totaling US$3.5 billion, by 1979. The CDC stated most studies on modern influenza vaccines have seen no link with GBS, Although one review gives an incidence of about one case per million vaccinations, a large study in China, reported in the New England Journal of Medicine, covering close to 100 million doses of H1N1 flu vaccine, found only 11 cases of GBS, which is lower than the normal rate of the disease in China: "The risk-benefit ratio, which is what vaccines and everything in medicine is about, is overwhelmingly in favor of vaccination." 1988 U.S. In September 1988, a swine flu virus killed one woman and infected others. A 32-year-old woman, Barbara Ann Wieners, was eight months pregnant when she and her husband, Ed, became ill after visiting the hog barn at a county fair in Walworth County, Wisconsin. Barbara died eight days later, after developing pneumonia. The only pathogen identified was an H1N1 strain of swine influenza virus. Doctors were able to induce labor and deliver a healthy daughter before she died. Her husband recovered from his symptoms. Influenza-like illness (ILI) was reportedly widespread among the pigs exhibited at the fair. Of the 25 swine exhibitors aged 9 to 19 at the fair, 19 tested positive for antibodies to SIV, but no serious illnesses were seen. The virus was able to spread between people, since one to three health care personnel who had cared for the pregnant woman developed mild, influenza-like illnesses, and antibody tests suggested they had been infected with swine flu, but there was no community outbreak. In 1998, swine flu was found in pigs in four U.S. states. Within a year, it had spread through pig populations across the United States. Scientists found this virus had originated in pigs as a recombinant form of flu strains from birds and humans. This outbreak confirmed that pigs can serve as a crucible where novel influenza viruses emerge as a result of the reassortment of genes from different strains. Genetic components of these 1998 triple-hybrid strains would later form six out of the eight viral gene segments in the 2009 flu outbreak. 2007 Philippines On August 20, 2007, Department of Agriculture officers investigated the outbreak of swine flu in Nueva Ecija and central Luzon, Philippines. 
The mortality rate is less than 10% for swine flu, unless there are complications like hog cholera. On July 27, 2007, the Philippine National Meat Inspection Service (NMIS) raised a hog cholera "red alert" warning over Metro Manila and five regions of Luzon after the disease spread to backyard pig farms in Bulacan and Pampanga, even though they tested negative for the swine flu virus. 2009 Northern Ireland Since November 2009, 14 deaths as a result of swine flu in Northern Ireland have been reported. The majority of the deceased were reported to have pre-existing health conditions which had lowered their immunity. This closely corresponds to the 19 patients who had died in the year prior due to swine flu, of whom 18 were determined to have lowered immune systems. Because of this, many mothers who have just given birth are strongly encouraged to get a flu shot because their immune systems are vulnerable. Also, studies have shown that people between the ages of 15 and 44 have the highest rate of infection. Although most people now recover, having any condition that lowers one's immune response increases the risk that the flu will become potentially lethal. In Northern Ireland, approximately 56% of all people under 65 who are entitled to the vaccine have now gotten the shot, and the outbreak is said to be under control. 2015 and 2019 India Swine flu outbreaks were reported in India in late 2014 and early 2015. As of March 19, 2015, the disease had affected 31,151 people and claimed over 1,841 lives. The largest number of reported cases and deaths due to the disease occurred in the western part of India, including Delhi, Madhya Pradesh, Rajasthan, Gujarat, and Andhra Pradesh. Researchers at MIT have claimed that the swine flu has mutated in India to a more virulent version with changes in the hemagglutinin protein, contradicting earlier research by Indian researchers. There was another outbreak in India in 2017. The states of Maharashtra and Gujarat were the worst affected. The Gujarat High Court has instructed the Gujarat government to control deaths from swine flu. In 2019, 1,090 people died of swine flu in India up to August 31. 2015 Nepal Swine flu outbreaks were reported in Nepal in the spring of 2015. Up to April 21, 2015, the disease had claimed 26 lives in the most severely affected district, Jajarkot in Northwest Nepal. Cases were also detected in the districts of Kathmandu, Morang, Kaski, and Chitwan. As of 22 April 2015 the Nepal Ministry of Health reported that 2,498 people had been treated in Jajarkot, of whom 552 were believed to have swine flu, and acknowledged that the government's response had been inadequate. The Jajarkot outbreak had just been declared an emergency when the April 2015 Nepal earthquake struck on 25 April 2015, diverting all medical and emergency resources to quake-related rescue and recovery. 2016 Pakistan Seven cases of swine flu were reported in Punjab province of Pakistan, mainly in the city of Multan, in January 2017. Cases of swine flu were also reported in Lahore and Faisalabad. 2017 Maldives As of March 16, 2017, over a hundred confirmed cases of swine flu and at least six deaths were reported in the Maldivian capital of Malé and some other islands. Makeshift flu clinics were opened in Malé. Schools in the capital were closed, prison visitations suspended, several events cancelled, and all non-essential travel to other islands outside the capital was advised against by the HPA.
An influenza vaccination program focusing on pregnant women was initiated thereafter. An official visit by Saudi King Salman bin Abdulaziz Al Saud to the Maldives during his Asian tour was also cancelled last minute amidst fears over the outbreak of swine flu. 2020 G4 EA H1N1 publication G4 EA H1N1, also known as the G4 swine flu virus (G4) is a swine influenza virus strain discovered in China. The virus is a variant genotype 4 (G4) Eurasian avian-like (EA) H1N1 virus that mainly affects pigs, but there is some evidence of it infecting people. A peer-reviewed paper from the Proceedings of the National Academy of Sciences (PNAS) stated that "G4 EA H1N1 viruses possess all the essential hallmarks of being highly adapted to infect humans ... Controlling the prevailing G4 EA H1N1 viruses in pigs and close monitoring of swine working populations should be promptly implemented." Michael Ryan, executive director of the World Health Organization (WHO) Health Emergencies Program, stated in July 2020 that this strain of influenza virus was not new and had been under surveillance since 2011. Almost 30,000 swine had been monitored via nasal swabs between 2011 and 2018. While other variants of the virus have appeared and diminished, the study claimed the G4 variant has sharply increased since 2016 to become the predominant strain. The Chinese Ministry of Agriculture and Rural Affairs rebutted the study, saying that the media had interpreted the study "in an exaggerated and nonfactual way" and that the number of pigs sampled was too small to demonstrate G4 had become the dominant strain. Between 2016 and 2018, a serum surveillance program screened 338 swine production workers in China for exposure (presence of antibodies) to G4 EA H1N1 and found 35 (10.4%) positive. Among another 230 people screened who did not work in the swine industry, 10 (4.4%) were serum positive for antibodies indicating exposure. Two cases of infection caused by the G4 variant have been documented as of July 2020, with no confirmed cases of human-to-human transmission. Health officials (including Anthony Fauci) say the virus should be monitored, particularly among those in close contact with pigs, but it is not an immediate threat. There are no reported cases or evidence of the virus outside of China as of July 2020. See also COVID-19 pandemic Risk assessment for organic swine health Notes Further reading External links Official swine flu advice and latest information from the UK National Health Service on fora.tv Swine flu charts and maps Numeric analysis and approximation of current active cases "Swine Influenza" disease card on World Organisation for Animal Health Worried about swine flu? Then you should be terrified about the regular flu. Centers for Disease Control and Prevention (CDC) – Swine Flu Center for Infectious Disease Research and Policy – Novel H1N1 influenza resource list Pandemic Flu US Government Site World Health Organization (WHO): Swine influenza Medical Encyclopedia Medline Plus: Swine Flu Health-EU portal EU response to influenza European Commission – Public Health EU coordination on Pandemic (H1N1) 2009 Combating H3N2 Virus Animal viral diseases Zoonoses Health disasters Swine diseases Influenza Pandemics Articles containing video clips Vaccine-preventable diseases
Swine influenza
[ "Biology" ]
7,318
[ "Vaccination", "Vaccine-preventable diseases" ]
1,516,916
https://en.wikipedia.org/wiki/Magnetic%20core
A magnetic core is a piece of magnetic material with a high magnetic permeability used to confine and guide magnetic fields in electrical, electromechanical and magnetic devices such as electromagnets, transformers, electric motors, generators, inductors, loudspeakers, magnetic recording heads, and magnetic assemblies. It is made of ferromagnetic metal such as iron, or ferrimagnetic compounds such as ferrites. The high permeability, relative to the surrounding air, causes the magnetic field lines to be concentrated in the core material. The magnetic field is often created by a current-carrying coil of wire around the core. The use of a magnetic core can increase the strength of magnetic field in an electromagnetic coil by a factor of several hundred times what it would be without the core. However, magnetic cores have side effects which must be taken into account. In alternating current (AC) devices they cause energy losses, called core losses, due to hysteresis and eddy currents in applications such as transformers and inductors. "Soft" magnetic materials with low coercivity and hysteresis, such as silicon steel, or ferrite, are usually used in cores. Core materials An electric current through a wire wound into a coil creates a magnetic field through the center of the coil, due to Ampere's circuital law. Coils are widely used in electronic components such as electromagnets, inductors, transformers, electric motors and generators. A coil without a magnetic core is called an "air core" coil. Adding a piece of ferromagnetic or ferrimagnetic material in the center of the coil can increase the magnetic field by hundreds or thousands of times; this is called a magnetic core. The field of the wire penetrates the core material, magnetizing it, so that the strong magnetic field of the core adds to the field created by the wire. The amount that the magnetic field is increased by the core depends on the magnetic permeability of the core material. Because side effects such as eddy currents and hysteresis can cause frequency-dependent energy losses, different core materials are used for coils used at different frequencies. In some cases the losses are undesirable and with very strong fields saturation can be a problem, and an 'air core' is used. A former may still be used; a piece of material, such as plastic or a composite, that may not have any significant magnetic permeability but which simply holds the coils of wires in place. Solid metals Soft iron "Soft" (annealed) iron is used in magnetic assemblies, direct current (DC) electromagnets and in some electric motors; and it can create a concentrated field that is as much as 50,000 times more intense than an air core. Iron is desirable to make magnetic cores, as it can withstand high levels of magnetic field without saturating (up to 2.16 teslas at ambient temperature.) Annealed iron is used because, unlike "hard" iron, it has low coercivity and so does not remain magnetised when the field is removed, which is often important in applications where the magnetic field is required to be repeatedly switched. Due to the electrical conductivity of the metal, when a solid one-piece metal core is used in alternating current (AC) applications such as transformers and inductors, the changing magnetic field induces large eddy currents circulating within it, closed loops of electric current in planes perpendicular to the field. The current flowing through the resistance of the metal heats it by Joule heating, causing significant power losses. 
Therefore, solid iron cores are not used in transformers or inductors, they are replaced by laminated or powdered iron cores, or nonconductive cores like ferrite. Laminated silicon steel In order to reduce the eddy current losses mentioned above, most low frequency power transformers and inductors use laminated cores, made of stacks of thin sheets of silicon steel: Lamination Laminated magnetic cores are made of stacks of thin iron sheets coated with an insulating layer, lying as much as possible parallel with the lines of flux. The layers of insulation serve as a barrier to eddy currents, so eddy currents can only flow in narrow loops within the thickness of each single lamination. Since the current in an eddy current loop is proportional to the area of the loop, this prevents most of the current from flowing, reducing eddy currents to a very small level. Since power dissipated is proportional to the square of the current, breaking a large core into narrow laminations reduces the power losses drastically. From this, it can be seen that the thinner the laminations, the lower the eddy current losses. Silicon alloying A small addition of silicon to iron (around 3%) results in a dramatic increase of the resistivity of the metal, up to four times higher. The higher resistivity reduces the eddy currents, so silicon steel is used in transformer cores. Further increase in silicon concentration impairs the steel's mechanical properties, causing difficulties for rolling due to brittleness. Among the two types of silicon steel, grain-oriented (GO) and grain non-oriented (GNO), GO is most desirable for magnetic cores. It is anisotropic, offering better magnetic properties than GNO in one direction. As the magnetic field in inductor and transformer cores is always along the same direction, it is an advantage to use grain oriented steel in the preferred orientation. Rotating machines, where the direction of the magnetic field can change, gain no benefit from grain-oriented steel. Special alloys A family of specialized alloys exists for magnetic core applications. Examples are mu-metal, permalloy, and supermalloy. They can be manufactured as stampings or as long ribbons for tape wound cores. Some alloys, e.g. Sendust, are manufactured as powder and sintered to shape. Many materials require careful heat treatment to reach their magnetic properties, and lose them when subjected to mechanical or thermal abuse. For example, the permeability of mu-metal increases about 40 times after annealing in hydrogen atmosphere in a magnetic field; subsequent sharper bends disrupt its grain alignment, leading to localized loss of permeability; this can be regained by repeating the annealing step. Vitreous metal Amorphous metal is a variety of alloys (e.g. Metglas) that are non-crystalline or glassy. These are being used to create high-efficiency transformers. The materials can be highly responsive to magnetic fields for low hysteresis losses, and they can also have lower conductivity to reduce eddy current losses. Power utilities are currently making widespread use of these transformers for new installations. High mechanical strength and corrosion resistance are also common properties of metallic glasses which are positive for this application. Powdered metals Powder cores consist of metal grains mixed with a suitable organic or inorganic binder, and pressed to desired density. Higher density is achieved with higher pressure and lower amount of binder. 
Higher density cores have higher permeability, but lower resistance and therefore higher losses due to eddy currents. Finer particles allow operation at higher frequencies, as the eddy currents are mostly restricted to within the individual grains. Coating of the particles with an insulating layer, or their separation with a thin layer of a binder, lowers the eddy current losses. Presence of larger particles can degrade high-frequency performance. Permeability is influenced by the spacing between the grains, which form distributed air gap; the less gap, the higher permeability and the less-soft saturation. Due to large difference of densities, even a small amount of binder, weight-wise, can significantly increase the volume and therefore intergrain spacing. Lower permeability materials are better suited for higher frequencies, due to balancing of core and winding losses. The surface of the particles is often oxidized and coated with a phosphate layer, to provide them with mutual electrical insulation. Iron Powdered iron is the cheapest material. It has higher core loss than the more advanced alloys, but this can be compensated for by making the core bigger; it is advantageous where cost is more important than mass and size. Saturation flux of about 1 to 1.5 tesla. Relatively high hysteresis and eddy current loss, operation limited to lower frequencies (approx. below 100 kHz). Used in energy storage inductors, DC output chokes, differential mode chokes, triac regulator chokes, chokes for power factor correction, resonant inductors, and pulse and flyback transformers. The binder used is usually epoxy or other organic resin, susceptible to thermal aging. At higher temperatures, typically above 125 °C, the binder degrades and the core magnetic properties may change. With more heat-resistant binders the cores can be used up to 200 °C. Iron powder cores are most commonly available as toroids. Sometimes as E, EI, and rods or blocks, used primarily in high-power and high-current parts. Carbonyl iron is significantly more expensive than hydrogen-reduced iron. Carbonyl iron Powdered cores made of carbonyl iron, a highly pure iron, have high stability of parameters across a wide range of temperatures and magnetic flux levels, with excellent Q factors between 50 kHz and 200 MHz. Carbonyl iron powders are basically constituted of micrometer-size spheres of iron coated in a thin layer of electrical insulation. This is equivalent to a microscopic laminated magnetic circuit (see silicon steel, above), hence reducing the eddy currents, particularly at very high frequencies. Carbonyl iron has lower losses than hydrogen-reduced iron, but also lower permeability. A popular application of carbonyl iron-based magnetic cores is in high-frequency and broadband inductors and transformers, especially higher power ones. Carbonyl iron cores are often called "RF cores". The as-prepared particles, "E-type"and have onion-like skin, with concentric shells separated with a gap. They contain significant amount of carbon. They behave as much smaller than what their outer size would suggest. The "C-type" particles can be prepared by heating the E-type ones in hydrogen atmosphere at 400 °C for prolonged time, resulting in carbon-free powders. Hydrogen-reduced iron Powdered cores made of hydrogen reduced iron have higher permeability but lower Q than carbonyl iron. They are used mostly for electromagnetic interference filters and low-frequency chokes, mainly in switched-mode power supplies. 
Hydrogen-reduced iron cores are often called "power cores". MPP (molypermalloy) An alloy of about 2% molybdenum, 81% nickel, and 17% iron. Very low core loss, low hysteresis and therefore low signal distortion. Very good temperature stability. High cost. Maximum saturation flux of about 0.8 tesla. Used in high-Q filters, resonant circuits, loading coils, transformers, chokes, etc. The material was first introduced in 1940, used in loading coils to compensate capacitance in long telephone lines. It is usable up to about 200 kHz to 1 MHz, depending on vendor. It is still used in above-ground telephone lines, due to its temperature stability. Underground lines, where temperature is more stable, tend to use ferrite cores due to their lower cost. High-flux (Ni-Fe) An alloy of about 50–50% of nickel and iron. High energy storage, saturation flux density of about 1.5 tesla. Residual flux density near zero. Used in applications with high DC current bias (line noise filters, or inductors in switching regulators) or where low residual flux density is needed (e.g. pulse and flyback transformers, the high saturation is suitable for unipolar drive), especially where space is constrained. The material is usable up to about 200 kHz. Sendust, KoolMU An alloy of 6% aluminium, 9% silicon, and 85% iron. Core losses higher than MPP. Very low magnetostriction, makes low audio noise. Loses inductance with increasing temperature, unlike the other materials; can be exploited by combining with other materials as a composite core, for temperature compensation. Saturation flux of about 1 tesla. Good temperature stability. Used in switching power supplies, pulse and flyback transformers, in-line noise filters, swing chokes, and in filters in phase-fired controllers (e.g. dimmers) where low acoustic noise is important. Absence of nickel results in easier processing of the material and its lower cost than both high-flux and MPP. The material was invented in Japan in 1936. It is usable up to about 500 kHz to 1 MHz, depending on vendor. Nanocrystalline A nanocrystalline alloy of a standard iron-boron-silicon alloy, with addition of smaller amounts of copper and niobium. The grain size of the powder reaches down to 10–100 nanometers. The material has very good performance at lower frequencies. It is used in chokes for inverters and in high power applications. It is available under names like e.g. Nanoperm, Vitroperm, Hitperm and Finemet. Ceramics Ferrite Ferrite ceramics are used for high-frequency applications. The ferrite materials can be engineered with a wide range of parameters. As ceramics, they are essentially insulators, which prevents eddy currents, although losses such as hysteresis losses can still occur. Air A coil not containing a magnetic core is called an air core. This includes coils wound on a plastic or ceramic form in addition to those made of stiff wire that are self-supporting and have air inside them. Air core coils generally have a much lower inductance than similarly sized ferromagnetic core coils, but are used in radio frequency circuits to prevent energy losses called core losses that occur in magnetic cores. The absence of normal core losses permits a higher Q factor, so air core coils are used in high frequency resonant circuits, such as up to a few megahertz. However, losses such as proximity effect and dielectric losses are still present. Air cores are also used when field strengths above around 2 Tesla are required as they are not subject to saturation. 
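The lamination discussion earlier notes that eddy-current loss falls off sharply as laminations are made thinner. A minimal sketch of that scaling, using the classical thin-lamination estimate (assumed here; it is only valid when the sheet is much thinner than the skin depth, and the silicon steel material constants are representative values, not datasheet figures):

```python
import math

def eddy_loss_per_kg(b_peak_t, freq_hz, thickness_m, resistivity_ohm_m, density_kg_m3):
    """Classical thin-lamination eddy-current loss estimate in W/kg (skin effect neglected):
    P = (pi^2 * Bp^2 * t^2 * f^2) / (6 * rho_e * D)."""
    return (math.pi**2 * b_peak_t**2 * thickness_m**2 * freq_hz**2) / (6 * resistivity_ohm_m * density_kg_m3)

# Assumed, representative values for ~3% silicon steel: resistivity ~4.7e-7 ohm*m, density ~7650 kg/m3
for t_mm in (0.65, 0.35, 0.23):
    p = eddy_loss_per_kg(b_peak_t=1.5, freq_hz=50, thickness_m=t_mm / 1000,
                         resistivity_ohm_m=4.7e-7, density_kg_m3=7650)
    print(f"{t_mm} mm lamination: ~{p:.2f} W/kg")
# loss scales with the square of the lamination thickness, as described above
```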
Commonly used structures Straight cylindrical rod Most commonly made of ferrite or powdered iron, and used in radios especially for tuning an inductor. The coil is wound around the rod, or a coil form with the rod inside. Moving the rod in or out of the coil changes the flux through the coil, and can be used to adjust the inductance. Often the rod is threaded to allow adjustment with a screwdriver. In radio circuits, a blob of wax or resin is used once the inductor has been tuned to prevent the core from moving. The presence of the high permeability core increases the inductance, but the magnetic field lines must still pass through the air from one end of the rod to the other. The air path ensures that the inductor remains linear. In this type of inductor, radiation occurs at the end of the rod, and electromagnetic interference may be a problem in some circumstances. Single "I" core Like a cylindrical rod but with a square cross-section; it is rarely used on its own. This type of core is most likely to be found in car ignition coils. "C" or "U" core U and C-shaped cores are used with an I or another C or U core to make a square closed core, the simplest closed core shape. Windings may be put on one or both legs of the core. "E" core E-shaped cores are a more symmetric solution to form a closed magnetic system. Most of the time, the electric circuit is wound around the center leg, whose section area is twice that of each individual outer leg. In 3-phase transformer cores, the legs are of equal size, and all three legs are wound. Common variants include the classical E core; the EFD core, which allows construction of inductors or transformers with a lower profile; the ETD core, which has a cylindrical central leg; and the EP core, which is halfway between an E core and a pot core. "E" and "I" core Sheets of suitable iron stamped out in shapes like the (sans-serif) letters "E" and "I" are stacked with the "I" against the open end of the "E" to form a 3-legged structure. Coils can be wound around any leg, but usually the center leg is used. This type of core is frequently used for power transformers, autotransformers, and inductors. Pair of "E" cores Again used for iron cores. Similar to using an "E" and "I" together, a pair of "E" cores will accommodate a larger coil former and can produce a larger inductor or transformer. If an air gap is required, the centre leg of the "E" is shortened so that the air gap sits in the middle of the coil, to minimize fringing and reduce electromagnetic interference. Planar core A planar core consists of two flat pieces of magnetic material, one above and one below the coil. It is typically used with a flat coil that is part of a printed circuit board. This design is excellent for mass production and allows a high power, small volume transformer to be constructed for low cost. It is not as ideal as either a pot core or toroidal core but costs less to produce. Pot core Usually ferrite or similar. This is used for inductors and transformers. The shape of a pot core is round with an internal hollow that almost completely encloses the coil. Usually a pot core is made in two halves which fit together around a coil former (bobbin). This design of core has a shielding effect, preventing radiation and reducing electromagnetic interference. Toroidal core This design is based on a toroid (the same shape as a doughnut). 
The coil is wound through the hole in the torus and around the outside. An ideal coil is distributed evenly all around the circumference of the torus. The symmetry of this geometry creates a magnetic field of circular loops inside the core, and the lack of sharp bends will constrain virtually all of the field to the core material. This not only makes a highly efficient transformer, but also reduces the electromagnetic interference radiated by the coil. It is popular for applications where the desirable features are: high specific power per mass and volume, low mains hum, and minimal electromagnetic interference. One such application is the power supply for a hi-fi audio amplifier. The main drawback that limits the use of toroidal cores for general purpose applications is the inherent difficulty of winding wire through the center of a torus. Unlike a split core (a core made of two elements, like a pair of E cores), specialized machinery is required for automated winding of a toroidal core. Toroids have less audible noise, such as mains hum, because the magnetic forces do not exert a bending moment on the core. The core is only in compression or tension, and the circular shape is more stable mechanically. Ring or bead The ring is essentially identical in shape and performance to the toroid, except that the conductor commonly passes only once through the center of the core, without wrapping around it multiple times. The ring core may also be composed of two separate C-shaped halves secured together within a plastic shell, permitting it to be placed on finished cables with large connectors already installed, which would otherwise prevent threading the cable through the small inner diameter of a solid ring. AL value The AL value of a core configuration is frequently specified by manufacturers. The relationship between inductance and the AL number in the linear portion of the magnetisation curve is defined to be $L = A_L n^2$, where n is the number of turns, L is the inductance (e.g. in nH) and AL is expressed in inductance per turn squared (e.g. in nH/n²). Core loss When the core is subjected to a changing magnetic field, as it is in devices that use AC current such as transformers, inductors, and AC motors and alternators, some of the power that would ideally be transferred through the device is lost in the core, dissipated as heat and sometimes noise. Core loss is commonly termed iron loss in contradistinction to copper loss, the loss in the windings. Iron losses are often described as being in three categories: Hysteresis losses When the magnetic field through the core changes, the magnetization of the core material changes by expansion and contraction of the tiny magnetic domains it is composed of, due to movement of the domain walls. This process causes losses, because the domain walls get "snagged" on defects in the crystal structure and then "snap" past them, dissipating energy as heat. This is called hysteresis loss. It can be seen in the graph of the B field versus the H field for the material, which has the form of a closed loop. The net energy that flows into the inductor, expressed in relation to the B-H characteristic of the core, is given by the equation $W = \oint H \, \mathrm{d}B$ per unit volume of core material. This equation shows that the amount of energy lost in the material in one cycle of the applied field is proportional to the area inside the hysteresis loop. Since the energy lost in each cycle is constant, hysteresis power losses increase proportionally with frequency. The final equation for the hysteresis power loss is therefore $P_h = f \oint H \, \mathrm{d}B$ per unit volume of core material (the loop area multiplied by the frequency). 
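As a rough illustration of the loop-area relation above, the sketch below numerically integrates a sampled B-H loop and multiplies by frequency and core volume to estimate hysteresis power. The elliptical loop used here is synthetic stand-in data; in practice H and B would come from a measured magnetization curve, and the frequency and volume are assumed values.

```python
import numpy as np

def hysteresis_power(h_a_per_m, b_tesla, frequency_hz, core_volume_m3):
    """Estimate hysteresis loss from one sampled B-H loop.
    Energy lost per cycle per unit volume is the loop area (closed integral of H dB);
    multiplying by frequency and core volume gives the dissipated power in watts."""
    # close the loop so the integral runs over one full cycle
    h = np.append(h_a_per_m, h_a_per_m[0])
    b = np.append(b_tesla, b_tesla[0])
    energy_density = abs(np.trapz(h, b))   # J per m^3 per cycle
    return energy_density * frequency_hz * core_volume_m3

# Synthetic elliptical loop standing in for measured data (illustrative values)
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
b = 0.3 * np.sin(theta)          # peak flux density 0.3 T
h = 40.0 * np.sin(theta - 0.2)   # peak field 40 A/m, lagging B
print(f"{hysteresis_power(h, b, frequency_hz=50, core_volume_m3=2e-4):.3f} W")
```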
Eddy-current losses If the core is electrically conductive, the changing magnetic field induces circulating loops of current in it, called eddy currents, due to electromagnetic induction. The loops flow perpendicular to the magnetic field axis. The energy of the currents is dissipated as heat in the resistance of the core material. The power loss is proportional to the area of the loops and inversely proportional to the resistivity of the core material. Eddy current losses can be reduced by making the core out of thin laminations which have an insulating coating, or alternatively, by making the core of a magnetic material with high electrical resistance, like ferrite. Most magnetic cores intended for power converter applications use ferrite cores for this reason. Anomalous losses By definition, this category includes any losses in addition to eddy-current and hysteresis losses. This can also be described as a broadening of the hysteresis loop with frequency. Physical mechanisms for anomalous loss include localized eddy-current effects near moving domain walls. Legg's equation An equation known as Legg's equation models the magnetic material core loss at low flux densities. The equation has three loss components: hysteresis, residual, and eddy current, and it is given by $\frac{R_{ac}}{\mu L} = a B_{max} f + c f + e f^2$, where $R_{ac}$ is the effective core loss resistance (ohms), $\mu$ is the material permeability, $L$ is the inductance (henrys), $a$ is the hysteresis loss coefficient, $B_{max}$ is the maximum flux density (gauss), $c$ is the residual loss coefficient, $f$ is the frequency (hertz), and $e$ is the eddy loss coefficient. Steinmetz coefficients Losses in magnetic materials can be characterized by the Steinmetz coefficients, which, however, do not take temperature variability into account. Material manufacturers provide data on core losses in tabular and graphical form for practical conditions of use. See also Balun Magnetic-core memory Pole piece Toroidal inductors and transformers References External links Online calculator for ferrite coil winding calculations What are the bumps at the end of computer cables? How to use ferrites for EMI suppression via Wayback Machine by Murata Manufacturing Electromagnetic components Radio electronics Electromagnetic radiation
Magnetic core
[ "Physics", "Engineering" ]
4,958
[ "Electromagnetic radiation", "Physical phenomena", "Radiation", "Radio electronics" ]
1,516,949
https://en.wikipedia.org/wiki/Kinematic%20determinacy
Kinematic determinacy is a term used in structural mechanics to describe a structure where material compatibility conditions alone can be used to calculate deflections. A kinematically determinate structure can be defined as a structure where, if it is possible to find nodal displacements compatible with member extensions, those nodal displacements are unique. The structure has no possible mechanisms, i.e. nodal displacements, compatible with zero member extensions, at least to a first-order approximation. Mathematically, the mass matrix of the structure must have full rank. Kinematic determinacy can be loosely used to classify an arrangement of structural members as a structure (stable) instead of a mechanism (unstable). The principles of kinematic determinacy are used to design precision devices such as mirror mounts for optics, and precision linear motion bearings. See also Statical determinacy Precision engineering Kinematic coupling References Mechanical engineering
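As a loose illustration of the rank condition described above, the sketch below checks whether a toy pin-jointed truss has a first-order mechanism by testing the column rank of its compatibility (extension-displacement) matrix. The two-bar geometry and the use of the compatibility matrix, rather than the article's mass-matrix formulation, are assumptions made for the example only.

```python
import numpy as np

def is_kinematically_determinate(compatibility_matrix):
    """First-order check: no mechanism exists if the only nodal displacement giving
    zero member extensions is zero, i.e. the compatibility matrix has full column rank."""
    b = np.asarray(compatibility_matrix, dtype=float)
    return np.linalg.matrix_rank(b) == b.shape[1]

s = 2 ** -0.5
# Two-bar truss: free node at (1, 1), pins at (0, 0) and (2, 0).
# Each row is the unit vector along a bar, so member extensions e = B @ d for displacement d.
braced = [[ s, s],
          [-s, s]]
# Same two bars laid collinear (free node at (1, 0)): vertical motion gives no first-order extension.
collinear = [[ 1.0, 0.0],
             [-1.0, 0.0]]
print(is_kinematically_determinate(braced))     # True  -> stable structure
print(is_kinematically_determinate(collinear))  # False -> mechanism
```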
Kinematic determinacy
[ "Physics", "Engineering" ]
191
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
1,517,049
https://en.wikipedia.org/wiki/Lidstone%20series
In mathematics, a Lidstone series, named after George James Lidstone, is a kind of polynomial expansion that can express certain types of entire functions. Let ƒ(z) be an entire function of exponential type less than (N + 1)π, as defined below. Then ƒ(z) can be expanded in terms of polynomials An as follows: $f(z) = \sum_{n=0}^{\infty}\left[ A_n(1-z)\, f^{(2n)}(0) + A_n(z)\, f^{(2n)}(1) \right] + \sum_{k=1}^{N} C_k \sin(k\pi z)$. Here An(z) is a polynomial in z of degree n, Ck a constant, and ƒ(n)(a) the nth derivative of ƒ at a. A function ƒ is said to be of exponential type less than t if it satisfies an asymptotic bound of the form $|f(z)| \le M e^{\tau |z|}$ for some constants M and τ < t. Thus, the constant N used in the summation above is the integer determined by $N\pi \le t < (N+1)\pi$, where t is the exponential type of ƒ. References Ralph P. Boas, Jr. and C. Creighton Buck, Polynomial Expansions of Analytic Functions, (1964) Academic Press, NY. Library of Congress Catalog 63-23263. Issued as volume 19 of Moderne Funktionentheorie ed. L.V. Ahlfors, series Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag Mathematical series
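One common way to generate Lidstone polynomials is the recurrence Λ₀(z) = z, Λₙ'' = Λₙ₋₁ with Λₙ(0) = Λₙ(1) = 0 for n ≥ 1. The sketch below uses that recurrence (an assumption here; the normalization of the Aₙ in the expansion above may differ) to produce the first few polynomials symbolically.

```python
import sympy as sp

z = sp.symbols('z')

def lidstone_polynomials(count):
    """Generate polynomials via L0(z) = z, Ln''(z) = L(n-1)(z), Ln(0) = Ln(1) = 0 for n >= 1."""
    polys = [z]
    for _ in range(1, count):
        a, b = sp.symbols('a b')
        # integrate the previous polynomial twice, then fix the two constants
        candidate = sp.integrate(sp.integrate(polys[-1], z), z) + a * z + b
        sol = sp.solve([candidate.subs(z, 0), candidate.subs(z, 1)], [a, b])
        polys.append(sp.expand(candidate.subs(sol)))
    return polys

for n, p in enumerate(lidstone_polynomials(4)):
    print(n, p)
# expected: 0 -> z, 1 -> z**3/6 - z/6, 2 -> z**5/120 - z**3/36 + 7*z/360, ...
```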
Lidstone series
[ "Mathematics" ]
235
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Series (mathematics)", "Mathematical analysis stubs", "Calculus" ]
7,127,168
https://en.wikipedia.org/wiki/Friction%20loss
In fluid dynamics, friction loss (or frictional loss) is the head loss that occurs in a containment such as a pipe or duct due to the effect of the fluid's viscosity near the surface of the containment. Engineering Friction loss is a significant engineering concern wherever fluids are made to flow, whether entirely enclosed in a pipe or duct, or with a surface open to the air. Historically, it has been a concern in aqueducts of all kinds throughout human history. It is also relevant to sewer lines. Systematic study traces back to Henry Darcy, an aqueduct engineer. Natural flows in river beds are important to human activity; friction loss in a stream bed has an effect on the height of the flow, which is particularly significant during flooding. The economics of pipelines for petrochemical delivery are highly affected by friction loss. The Yamal–Europe pipeline carries methane at a volume flow rate of 32.3 × 10⁹ m³ of gas per year, at Reynolds numbers greater than 50 × 10⁶. In hydropower applications, the energy lost to skin friction in the flume and penstock is not available for useful work, say generating electricity. In refrigeration applications, energy is expended pumping the coolant fluid through pipes or through the condenser. In split systems, the pipes carrying the coolant take the place of the air ducts in HVAC systems. Calculating volumetric flow In the following discussion, we define the volumetric flow rate V̇ (i.e. volume of fluid flowing per time) as $\dot{V} = v \cdot A = v \pi r^2$, where r = radius of the pipe (for a pipe of circular section, the internal radius of the pipe), v = mean velocity of the fluid flowing through the pipe, and A = cross sectional area of the pipe. In long pipes, the loss in pressure (assuming the pipe is level) is proportional to the length of pipe involved. Friction loss is then the change in pressure per unit length of pipe, Δp / L. When the pressure is expressed in terms of the equivalent height of a column of that fluid, as is common with water, the friction loss is expressed as S, the "head loss" per length of pipe, a dimensionless quantity also known as the hydraulic slope: $S = \frac{h_f}{L} = \frac{1}{\rho g}\frac{\Delta p}{L}$, where ρ = density of the fluid (SI kg/m³) and g = the local acceleration due to gravity. Characterizing friction loss Friction loss, which is due to the shear stress between the pipe surface and the fluid flowing within, depends on the conditions of flow and the physical properties of the system. These conditions can be encapsulated into a dimensionless number Re, known as the Reynolds number, $\mathrm{Re} = \frac{V D}{\nu}$, where V is the mean fluid velocity and D the diameter of the (cylindrical) pipe. In this expression, the properties of the fluid itself are reduced to the kinematic viscosity ν, $\nu = \frac{\mu}{\rho}$, where μ = viscosity of the fluid (SI kg/(m·s)). Friction loss in straight pipe The friction loss in uniform, straight sections of pipe, known as "major loss", is caused by the effects of viscosity, the movement of fluid molecules against each other or against the (possibly rough) wall of the pipe. Here, it is greatly affected by whether the flow is laminar (Re < 2000) or turbulent (Re > 4000): In laminar flow, losses are proportional to the fluid velocity, V; that velocity varies smoothly between the bulk of the fluid and the pipe surface, where it is zero. The roughness of the pipe surface influences neither the fluid flow nor the friction loss. In turbulent flow, losses are proportional to the square of the fluid velocity, V²; here, a layer of chaotic eddies and vortices near the pipe surface, called the viscous sub-layer, forms the transition to the bulk flow. 
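A minimal sketch of the regime classification just described, computing the Reynolds number from the definitions above; the water properties and pipe values are assumed, illustrative numbers rather than data from the text.

```python
def reynolds_number(velocity_m_s, diameter_m, kinematic_viscosity_m2_s):
    """Re = V * D / nu, as defined above."""
    return velocity_m_s * diameter_m / kinematic_viscosity_m2_s

def flow_regime(re):
    """Classify the flow regime using the thresholds quoted in the text."""
    if re < 2000:
        return "laminar"
    if re <= 4000:
        return "transitional (unstable)"
    return "turbulent"

# Water at roughly 20 C: mu ~ 1.0e-3 kg/(m*s), rho ~ 1000 kg/m3  ->  nu ~ 1.0e-6 m2/s (assumed)
nu_water = 1.0e-3 / 1000.0
re = reynolds_number(velocity_m_s=2.0, diameter_m=0.1, kinematic_viscosity_m2_s=nu_water)
print(re, flow_regime(re))   # ~200000 -> turbulent
```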
In the turbulent domain, the effects of the roughness of the pipe surface must be considered. It is useful to characterize that roughness as the ratio of the roughness height ε to the pipe diameter D, the "relative roughness". Three sub-domains pertain to turbulent flow: In the smooth pipe domain, friction loss is relatively insensitive to roughness. In the rough pipe domain, friction loss is dominated by the relative roughness and is insensitive to the Reynolds number. In the transition domain, friction loss is sensitive to both. For Reynolds numbers 2000 < Re < 4000, the flow is unstable, varying with time as vortices within the flow form and vanish randomly. This domain of flow is not well modeled, nor are the details well understood. Form friction Factors other than straight pipe flow induce friction loss; these are known as "minor loss": fittings, such as bends, couplings, valves, or transitions in hose or pipe diameter, or objects intruded into the fluid flow. For the purposes of calculating the total friction loss of a system, the sources of form friction are sometimes reduced to an equivalent length of pipe. Surface roughness The roughness of the surface of the pipe or duct affects the fluid flow in the regime of turbulent flow. Usually denoted by ε, representative values used for calculations of water flow, and of friction loss in ducts (for, e.g., air), are tabulated in standard references. Calculating friction loss Hagen–Poiseuille Equation Laminar flow is encountered in practice with very viscous fluids, such as motor oil, flowing through small-diameter tubes, at low velocity. Friction loss under conditions of laminar flow follows the Hagen–Poiseuille equation, which is an exact solution to the Navier-Stokes equations. For a circular pipe with a fluid of density ρ and viscosity μ, the hydraulic slope S can be expressed as $S = \frac{64}{\mathrm{Re}} \frac{V^2}{2 g D} = \frac{32 \mu V}{\rho g D^2}$. In laminar flow (that is, with Re < ~2000), the hydraulic slope is proportional to the flow velocity. Darcy–Weisbach Equation In many practical engineering applications, the fluid flow is more rapid, therefore turbulent rather than laminar. Under turbulent flow, the friction loss is found to be roughly proportional to the square of the flow velocity and inversely proportional to the pipe diameter; that is, the friction loss follows the phenomenological Darcy–Weisbach equation, in which the hydraulic slope S can be expressed as $S = f_D \frac{1}{2g} \frac{V^2}{D}$, where we have introduced the Darcy friction factor fD (but see Confusion with the Fanning friction factor). Note that the value of this dimensionless factor depends on the pipe diameter D and the roughness of the pipe surface ε. Furthermore, it varies as well with the flow velocity V and with the physical properties of the fluid (usually cast together into the Reynolds number Re). Thus, the friction loss is not precisely proportional to the flow velocity squared, nor to the inverse of the pipe diameter: the friction factor takes account of the remaining dependency on these parameters. From experimental measurements, the general features of the variation of fD are, for fixed relative roughness ε / D and for Reynolds number Re = V D / ν > ~2000: With relative roughness ε / D < 10⁻⁶, fD declines in value with increasing Re in an approximate power law, with one order of magnitude change in fD over four orders of magnitude in Re. This is called the "smooth pipe" regime, where the flow is turbulent but not sensitive to the roughness features of the pipe (because the vortices are much larger than those features). 
At higher roughness, with increasing Reynolds number Re, fD climbs from its smooth pipe value, approaching an asymptote that itself varies logarithmically with the relative roughness ε / D; this regime is called "rough pipe" flow. The point of departure from smooth flow occurs at a Reynolds number roughly inversely proportional to the value of the relative roughness: the higher the relative roughness, the lower the Re of departure. The range of Re and ε / D between smooth pipe flow and rough pipe flow is labeled "transitional". In this region, the measurements of Nikuradse show a decline in the value of fD with Re, before approaching its asymptotic value from below, although Moody chose not to follow those data in his chart, which is based on the Colebrook–White equation. At values of 2000 < Re < 4000, there is a critical zone of flow, a transition from laminar to turbulence, where the value of fD increases from its laminar value of 64 / Re to its smooth pipe value. In this regime, the fluid flow is found to be unstable, with vortices appearing and disappearing within the flow over time. The entire dependence of fD on the pipe diameter D is subsumed into the Reynolds number Re and the relative roughness ε / D; likewise, the entire dependence on the fluid properties, density ρ and viscosity μ, is subsumed into the Reynolds number Re. This is called scaling. The experimentally measured values of fD are fitted to reasonable accuracy by the (recursive) Colebrook–White equation, depicted graphically in the Moody chart, which plots the friction factor fD versus Reynolds number Re for selected values of relative roughness ε / D. Calculating friction loss for water in a pipe In a design problem, one may select pipe for a particular hydraulic slope S based on the candidate pipe's diameter D and its roughness ε. With these quantities as inputs, the friction factor fD can be expressed in closed form in the Colebrook–White equation or another fitting function, and the flow volume Q and flow velocity V can be calculated therefrom. In the case of water (ρ = 1 g/cc, μ = 1 g/m/s) flowing through a 12-inch (300 mm) Schedule-40 PVC pipe (ε = 0.0015 mm, D = 11.938 in.), a hydraulic slope S = 0.01 (1%) is reached at a flow rate Q = 157 lps (liters per second), or at a velocity V = 2.17 m/s (meters per second). The following table gives the Reynolds number Re, Darcy friction factor fD, flow rate Q, and velocity V such that the hydraulic slope S = hf / L = 0.01, for a variety of nominal pipe (NPS) sizes. Note that the cited sources recommend that flow velocity be kept below 5 feet per second (~1.5 m/s). Also note that the given fD in this table is actually a quantity adopted by the NFPA and the industry, known as C, which has the customary units psi/(100 gpm)²/(100 ft) and can be calculated using the relation $\Delta p = C \cdot Q^2 \cdot L$, where Δp is the pressure loss in psi, Q is the flow in multiples of 100 gpm, and L is the length of the pipe in multiples of 100 ft. Calculating friction loss for air in a duct Friction loss takes place as a gas, say air, flows through duct work. The difference in the character of the flow from the case of water in a pipe stems from the differing Reynolds number Re and the roughness of the duct. The friction loss is customarily given as pressure loss for a given duct length, Δp / L, in units of (US) inches of water for 100 feet or (SI) kg/m²/s². For specific choices of duct material, and assuming air at standard temperature and pressure (STP), standard charts can be used to calculate the expected friction loss. 
The chart exhibited in this section can be used to graphically determine the required diameter of duct to be installed in an application where the volume of flow is determined and where the goal is to keep the pressure loss per unit length of duct S below some target value in all portions of the system under study. First, select the desired pressure loss Δp / L, say 1 kg / m2 / s2 (0.12 in H2O per 100 ft) on the vertical axis (ordinate). Next scan horizontally to the needed flow volume Q, say 1 m3 / s (2000 cfm): the choice of duct with diameter D = 0.5 m (20 in.) will result in a pressure loss rate Δp / L less than the target value. Note in passing that selecting a duct with diameter D = 0.6 m (24 in.) will result in a loss Δp / L of 0.02 kg / m2 / s2 (0.02 in H2O per 100 ft), illustrating the great gains in blower efficiency to be achieved by using modestly larger ducts. The following table gives flow rate Q such that friction loss per unit length Δp / L (SI kg / m2 / s2) is 0.082, 0.245, and 0.816, respectively, for a variety of nominal duct sizes. The three values chosen for friction loss correspond to, in US units inch water column per 100 feet, 0.01, .03, and 0.1. Note that, in approximation, for a given value of flow volume, a step up in duct size (say from 100mm to 120mm) will reduce the friction loss by a factor of 3. Note that, for the chart and table presented here, flow is in the turbulent, smooth pipe domain, with R* < 5 in all cases. Notes Further reading – In translation, NACA TT F-10 359. The data are available in digital form. Cited by Moody, L. F. (1944) – In English translation, as NACA TM 1292, 1950. The data show in detail the transition region for pipes with high relative roughness (ε/D > 0.001). Cited by Moody, L. F. (1944) Exhibits Nikuradse data. Large amounts of field data on commercial pipes. The Colebrook–White equation was found inadequate over a wide range of flow conditions. Shows friction factor in the smooth flow region for 1 < Re < 108 from two very different measurements. References External links Pipe pressure drop calculator for single phase flows. Pipe pressure drop calculator for two phase flows. Open source pipe pressure drop calculator. Friction Fluid dynamics Fluid mechanics Mechanical engineering Piping
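To tie the pieces together, the sketch below reproduces the 12-inch PVC pipe example quoted above using the Darcy–Weisbach relation. The Swamee–Jain formula is used here as an explicit stand-in for the recursive Colebrook–White equation, so the friction factor is approximate, and the water properties are the assumed values given in the text.

```python
import math

def swamee_jain_friction_factor(re, roughness_m, diameter_m):
    """Explicit approximation to the Colebrook-White equation for turbulent pipe flow."""
    return 0.25 / math.log10(roughness_m / (3.7 * diameter_m) + 5.74 / re**0.9) ** 2

def hydraulic_slope(velocity, diameter, roughness, nu=1.0e-6, g=9.81):
    """S = f_D * V^2 / (2 g D) (Darcy-Weisbach), with f_D from Swamee-Jain."""
    re = velocity * diameter / nu
    f_d = swamee_jain_friction_factor(re, roughness, diameter)
    return f_d * velocity**2 / (2 * g * diameter), re, f_d

# 12-inch Schedule-40 PVC example from the text: D = 11.938 in, eps = 0.0015 mm, V = 2.17 m/s
d = 11.938 * 0.0254                                    # pipe inner diameter in metres
s, re, f_d = hydraulic_slope(velocity=2.17, diameter=d, roughness=0.0015e-3)
q = 2.17 * math.pi * (d / 2) ** 2                      # volumetric flow rate, m3/s
print(f"Re = {re:.3g}, f_D = {f_d:.4f}, S = {s:.4f}, Q = {q * 1000:.0f} L/s")
# prints roughly: Re = 6.58e+05, f_D = 0.0126, S = 0.0099, Q = 157 L/s
```

The computed hydraulic slope of about 0.01 and flow of about 157 L/s agree with the worked numbers quoted in the text, which is a useful sanity check on the reconstruction.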
Friction loss
[ "Physics", "Chemistry", "Engineering" ]
2,900
[ "Mechanical phenomena", "Physical phenomena", "Force", "Friction", "Physical quantities", "Applied and interdisciplinary physics", "Building engineering", "Chemical engineering", "Surface science", "Civil engineering", "Mechanical engineering", "Piping", "Fluid mechanics", "Fluid dynamics"...
7,127,508
https://en.wikipedia.org/wiki/Open%20Prosthetics%20Project
The Open Prosthetics Project (OPP) is an open design effort, dedicated to public domain prosthetics. By creating an online collaboration between prosthetic users and designers, the project aims to make new technology available for anyone to use and customize. On the project's website, medical product designers can post new ideas for prosthetic devices as CAD files, which are then available to the public free of charge. Prosthetic users or other designers can download the Computer-aided design (CAD) data, customize or improve upon the prosthesis, and repost the modifications to the web site. Users are free to take 3D models to a fabricator and have the hardware built for less cost than buying a manufactured limb. The project was started by Jonathon Kuniholm, a member of United States Marine Corps Reserve who lost part of his right arm to an improvised explosive device (IED) in Iraq. Upon returning home and receiving his first myoelectric hand, he decided there must be a better solution. References Sources Public domain Prosthetics Medical and health organizations based in North Carolina Open content projects Open-source hardware
Open Prosthetics Project
[ "Engineering", "Biology" ]
240
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
7,128,600
https://en.wikipedia.org/wiki/Gray%20baby%20syndrome
Gray baby syndrome (also termed gray syndrome or grey syndrome) is a rare but serious, even fatal, side effect that occurs in newborn infants (especially premature babies) following the accumulation of the antibiotic chloramphenicol. Chloramphenicol is a broad-spectrum antibiotic that has been used to treat a variety of bacterial infections, such as those caused by Streptococcus pneumoniae, as well as typhoid fever, meningococcal sepsis, cholera, and eye infections. Chloramphenicol works by binding to the bacterial ribosomal subunit, which blocks the action of transfer ribonucleic acid (tRNA) and prevents the synthesis of bacterial proteins. Chloramphenicol has also been given prophylactically to neonates born before 37 weeks of gestation. In 1958, newborns born prematurely due to rupture of the amniotic sac were given chloramphenicol to prevent possible infections, and it was noticed that these newborns had a higher mortality rate compared with those who were not treated with the antibiotic. Over the years, chloramphenicol has been used less in clinical practice because of the risk of toxicity not only to neonates but also to adults, in whom it can cause aplastic anemia. Chloramphenicol is now reserved for certain severe bacterial infections that have not responded to other antibiotics. Signs and symptoms Since the syndrome is due to the accumulation of chloramphenicol, the signs and symptoms are dose related. According to Kasten's review published in the Mayo Clinic Proceedings, a serum concentration of more than 50 μg/mL is a warning sign, while Hammett-Stabler and John, in their review of antimicrobial drugs, state that the usual therapeutic peak level is 10–20 μg/mL, expected 0.5–1.5 hours after intravenous administration. Signs and symptoms commonly begin 2 to 9 days after the medication is started, which allows the serum concentration to build up to the toxic concentration above. Common signs and symptoms include loss of appetite, fussiness, vomiting, an ashen gray color of the skin, hypotension (low blood pressure), cyanosis (blue discoloration of the lips and skin), hypothermia, cardiovascular collapse, hypotonia (decreased muscle tone), abdominal distension, irregular respiration, and increased blood lactate. Pathophysiology Two pathophysiologic mechanisms are thought to play a role in the development of gray baby syndrome after exposure to chloramphenicol. The condition is due to a lack of glucuronidation reactions occurring in the baby (phase II hepatic metabolism), leading to an accumulation of toxic chloramphenicol metabolites: Metabolism: The UDP-glucuronyl transferase enzyme system of infants, especially premature infants, is not fully developed and cannot metabolize the drug load quickly enough for chloramphenicol to be excreted. Elimination: Insufficient renal excretion of the unconjugated drug. Insufficient metabolism and excretion of chloramphenicol leads to increased blood concentrations of the drug, causing blockade of electron transport in the liver, myocardium, and skeletal muscle. Since electron transport is an essential part of cellular respiration, its blockade can result in cell damage. In addition, the presence of chloramphenicol weakens the binding of bilirubin to albumin, so increased levels of the drug can lead to high levels of free bilirubin in the blood, resulting in brain damage or kernicterus. 
If left untreated, possible bleeding, renal (kidney) and/or hepatic (liver) failure, anemia, infection, confusion, weakness, blurred vision, or eventually death are expected. Additionally, chloramphenicol is significantly insoluble due to an absence of acidic and basic groups in its molecular compound. As a result, larger amounts of the medication are required to achieve the desired therapeutic effect. High volumes of a medication that can cause various toxicities is another avenue how chloramphenicol can potentially lead to grey baby syndrome. Diagnosis Gray baby syndrome should be suspected in a newborn with abdominal distension, progressive pallid cyanosis, irregular respirations, and refusal to breastfeed. The cause of gray baby syndrome can come from the direct use of intravenous or oral chloramphenicol in neonates. Direct chronological relation between the use of the medication and signs and symptoms of the syndrome should be found in the previous medical history. In terms of the possible route of chloramphenicol, gray baby syndrome do not come from the mother's use of chloramphenicol during pregnancy or breastfeeding. According to the Drug and Lactation database (LactMed), it states that "milk concentrations are not sufficient to induce gray baby syndrome". It is also reported that the syndrome may not develop in infants when their mothers use the medication in their late period of pregnancy. According to the Oxford Review, chloramphenicol given to mothers during their pregnancy did not result in gray baby syndrome, but was caused by infants receiving supra-therapeutic doses of chloramphenicol after birth. The presentation of symptoms can depend on the level of exposure of the drug to the baby, given its dose-related nature. A broad diagnosis is usually needed for babies who present with cyanosis. To support the diagnosis, blood work should be done to determine the level of serum chloramphenicol, and to further evaluate chloramphenicol toxicity, a metabolic panel and a complete blood panel including levels of serum ketones and glucose (due to the risk of hypoglycemia) should be completed to help determine if an infant has the syndrome. Other tools used to help with diagnosis include CT scans, ultrasound, and electrocardiogram. Prevention Since the syndrome is a side effect of chloramphenicol, the prevention is primarily related to the proper use of the medication. The WHO Model Formulary for Children 2010 recommends to reserve chloramphenicol for life-threatening infections. As well as using chloramphenicol only when necessary, it should also be limited to short periods of time to also prevent the potential for toxicity. In particular, this medication should not be prescribed especially in neonates less than one week old due to the significant risk of toxicity. Preterm infants especially should not be administered chloramphenicol. Gray baby syndrome has been noted to be dose-dependent as it typically occurs in neonates who have received a daily dose greater than 200 milligrams. When chloramphenicol is necessary, the condition can be prevented by using the recommended doses and monitoring blood levels, or alternatively, third generation cephalosporins can be effectively substituted for the drug, without the associated toxicity. Also, repeated administration and prolonged treatment should be avoided. 
In terms of neonatal hepatic development, it takes only weeks from birth for UDP-glucuronyl transferase (UDPGT) expression and function to reach an adult-like level, whereas the function is only about 1% of that level in late pregnancy, even right before birth. According to MSD Manuals, chloramphenicol should not be given to neonates younger than 1 month of age at a starting dose of more than 25 mg/kg/day. The serum concentration of the medication should be monitored to titrate to a therapeutic level and to prevent toxicity. Other medications the neonate may be taking that can decrease blood cell counts should be reviewed, because chloramphenicol can suppress bone marrow activity. Rifampicin and trimethoprim are examples of such medications and are contraindicated for concomitant use with chloramphenicol. Regarding bone marrow suppression, chloramphenicol has two major manifestations. The first affects hematopoiesis and is reversible, being an early sign of toxicity. The second is bone marrow aplasia, which is associated with terminal toxicity and is sometimes irreversible. Chloramphenicol is contraindicated in breastfeeding due to the risk of toxic effects to the baby. However, if maternal use cannot be avoided, close monitoring of the baby's symptoms, such as feeding difficulties, and of blood work is recommended. Treatment Chloramphenicol therapy should be stopped immediately if objective or subjective signs of gray baby syndrome are suspected, since gray baby syndrome can be fatal for the infant if not diagnosed early, as it can lead to anemia, shock, and end-organ damage. After discontinuing the antibiotic, the side effects caused by the toxicity should be treated. This includes treating hypoglycemia to help prevent hemodynamic instability, as well as warming the infant if hypothermia has developed. Since symptoms of gray baby syndrome are correlated with elevated serum chloramphenicol concentrations, exchange transfusion may be required to remove the drug. Charcoal column hemoperfusion is an extracorporeal blood-purification technique that has shown significant effect but is associated with numerous side effects. The associated side effects are not the only reason this method of treatment is not a first-line therapy; according to the American Journal of Kidney Diseases, high cartridge prices and the limited viable lifespan of the product are additional deterrents. Phenobarbital and theophylline are two drugs for which charcoal hemoperfusion has shown particular efficacy, aside from its traditional indication for chronic aluminum toxicity in people with end-stage renal disease (ESRD). Sometimes, phenobarbital is used to induce UDP-glucuronyl transferase enzyme function. For hemodynamically unstable neonates, supportive care measures such as resuscitation, oxygenation, and treatment of hypothermia are common practice when cessation of chloramphenicol alone is insufficient. Because sepsis is a complication of severe gray baby syndrome, use of broad-spectrum antibiotics such as vancomycin is a recommended treatment option. Third-generation cephalosporins have also shown efficacy in treating sepsis caused by gray baby syndrome. References Further reading External links Poisoning by drugs, medicaments and biological substances Syndromes
Gray baby syndrome
[ "Environmental_science" ]
2,174
[ " medicaments and biological substances", "Toxicology", "Poisoning by drugs" ]
7,129,616
https://en.wikipedia.org/wiki/Mucicarmine%20stain
Mucicarmine stain is a staining procedure used for different purposes. In microbiology, the stain aids in the identification of a variety of microorganisms based on whether or not the cell wall stains intensely red. Generally this is limited to microorganisms with a cell wall that is composed, at least in part, of a polysaccharide component. One of the organisms identified using this staining technique is Cryptococcus neoformans. Another use is in surgical pathology, where it can identify mucin. This is helpful, for example, in determining whether a cancer is of a type that produces mucin. An example would be distinguishing high-grade mucoepidermoid carcinoma of the parotid, which stains positive, from squamous cell carcinoma of the parotid, which does not. References Carbohydrate methods Staining dyes
Mucicarmine stain
[ "Chemistry", "Biology" ]
184
[ "Biochemistry methods", "Carbohydrate chemistry", "Carbohydrate methods" ]
7,130,280
https://en.wikipedia.org/wiki/FKBP
The FKBPs, or FK506 binding proteins, constitute a family of proteins that have prolyl isomerase activity and are related to the cyclophilins in function, though not in amino acid sequence. FKBPs have been identified in many eukaryotes, ranging from yeast to humans, and function as protein folding chaperones for proteins containing proline residues. Along with cyclophilin, FKBPs belong to the immunophilin family. FKBP1A (also known as FKBP12) is notable in humans for binding the immunosuppressant molecule tacrolimus (originally designated FK506), which is used in treating patients after organ transplant and patients with autoimmune disorders. Tacrolimus has been found to reduce episodes of organ rejection over a related treatment, the drug ciclosporin, which binds cyclophilin. Both the FKBP-tacrolimus complex and the cyclosporin-cyclophilin complex inhibit a phosphatase called calcineurin, thus blocking signal transduction in the T-lymphocyte transduction pathway. This therapeutic role is not related to its prolyl isomerase activity. FKBP25 is a nuclear FKBP which non-specifically binds with DNA and has a role in DNA repair. Use as a biological research tool FKBP (FKBP1A) does not normally form a dimer but will dimerize in the presence of FK1012, a derivative of the drug tacrolimus (FK506). This has made it a useful tool for chemically induced dimerization applications where it can be used to manipulate protein localization, signalling pathways and protein activation. Examples Human genes encoding proteins in this family include: AIP; AIPL1 FKBPL; FKBP1A; FKBP1B; FKBP2; FKBP3; FKBP4; FKBP5; FKBP6; FKBP7; FKBP8; FKBP9; FKBP10; FKBP11; FKBP14; FKBP15; Gene with unclear status (may be pseudogene): FKBP1C Pseudogenes in humans: LOC541473; FKBP9L; See also Immunophilins References External links Anti-rejection drugs EC 5.2.1 Protein families
FKBP
[ "Biology" ]
522
[ "Protein families", "Protein classification" ]
7,130,691
https://en.wikipedia.org/wiki/Signal%20recognition%20particle%20receptor
Signal recognition particle (SRP) receptor, also called the docking protein, is a dimer composed of 2 different subunits that are associated exclusively with the rough ER in mammalian cells. Its main function is to identify the SRP units. SRP (signal recognition particle) is a molecule that helps the ribosome-mRNA-polypeptide complexes to settle down on the membrane of the endoplasmic reticulum. The eukaryotic SRP receptor (termed SR) is a heterodimer of SR-alpha (70 kDa; SRPRA) and SR-beta (25 kDa; SRPRB), both of which contain a GTP-binding domain, while the prokaryotic SRP receptor comprises only the monomeric loosely membrane-associated SR-alpha homologue FtsY (). SRX domain SR-alpha regulates the targeting of SRP-ribosome-nascent polypeptide complexes to the translocon. SR-alpha binds to the SRP54 subunit of the SRP complex. The SR-beta subunit is a transmembrane GTPase that anchors the SR-alpha subunit (a peripheral membrane GTPase) to the ER membrane. SR-beta interacts with the N-terminal SRX-domain of SR-alpha, which is not present in the bacterial FtsY homologue. SR-beta also functions in recruiting the SRP-nascent polypeptide to the protein-conducting channel. The SRX family represents eukaryotic homologues of the alpha subunit of the SR receptor. Members of this entry consist of a central six-stranded anti-parallel beta-sheet sandwiched by helix alpha1 on one side and helices alpha2-alpha4 on the other. They interact with the small GTPase SR-beta, forming a complex that matches a class of small G protein-effector complexes, including Rap-Raf, Ras-PI3K(gamma), Ras-RalGDS, and Arl2-PDE(delta). On the C-terminal of SR-alpha and FtsY is the NG domain similar to SRP54. NG domain The receptor binds to SPR54/Ffh by the "NG domain", a combination of a 4-helical-bundle "N" domain () and a GTPase "G" domain (), shared by both proteins. The bound structure is a quasi-symmetric heterodimer termed a targeting complex. Signal recognition particle (SRP) The signal recognition particle (SRP) is a multimeric protein, which along with its conjugate receptor (SR), is involved in targeting secretory proteins to the rough endoplasmic reticulum (RER) membrane in eukaryotes, or to the plasma membrane in prokaryotes. SRP recognises the signal sequence of the nascent polypeptide on the ribosome, retards its elongation, and docks the SRP-ribosome-polypeptide complex to the RER membrane via the SR receptor. SRP consists of six polypeptides (SRP9, SRP14, SRP19, SRP54, SRP68 and SRP72) and a single 300 nucleotide 7S RNA molecule. The RNA component catalyses the interaction of SRP with its SR receptor. In higher eukaryotes, the SRP complex consists of the Alu domain and the S domain linked by the SRP RNA. The Alu domain consists of a heterodimer of SRP9 and SRP14 bound to the 5' and 3' terminal sequences of SRP RNA. This domain is necessary for retarding the elongation of the nascent polypeptide chain, which gives SRP time to dock the ribosome-polypeptide complex to the RER membrane. References Receptors Protein targeting Single-pass transmembrane proteins
Signal recognition particle receptor
[ "Chemistry", "Biology" ]
829
[ "Receptors", "Protein targeting", "Cellular processes", "Signal transduction" ]
7,131,583
https://en.wikipedia.org/wiki/MERCURE
Mercure can also refer to the chain of hotels run by Accor. See Mercure Hotels. MERCURE is an atmospheric dispersion modeling CFD code developed by Électricité de France (EDF) and distributed by ARIA Technologies, a French company. MERCURE is a version of the CFD software ESTET, developed by EDF's Laboratoire National d'Hydraulique, and has therefore benefited directly from the improvements developed for ESTET. When requested, ARIA integrates MERCURE as a module into the ARIA RISK software for use in industrial risk assessments. Features of the model MERCURE is particularly well adapted to air pollution dispersion modelling on local or urban scales. Some of the model's capabilities and features are: Pollution source types: Point or line sources, continuous or intermittent. Pollution plume types: Buoyant or dense gas plumes. Deposition: The model is capable of simulating the deposition or decay of plume pollutants. Users of the model Organizations that have used MERCURE include: Électricité de France (EDF) Laboratoire de Mécanique des Fluides et d'Acoustique (LMFA) de l'École Centrale de Lyon, France Institut de radioprotection et de sûreté nucléaire (IRSN), Fontenay, France The Italian National Agency for New Technology, Energy and the Environment (ENEA), Bologna, Italy Queensland University of Technology, Brisbane, Australia See also Bibliography of atmospheric dispersion modeling Atmospheric dispersion modeling List of atmospheric dispersion models Further reading For those who are unfamiliar with air pollution dispersion modelling and would like to learn more about the subject, it is suggested that one of the following books be read: www.crcpress.com www.air-dispersion.com References External links ARIA Technologies web site (English version) EDF website (English version) Atmospheric dispersion modeling Électricité de France
MERCURE
[ "Chemistry", "Engineering", "Environmental_science" ]
407
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
7,132,704
https://en.wikipedia.org/wiki/Fibre%20multi-object%20spectrograph
Fibre multi-object spectrograph (FMOS) is a facility instrument for the Subaru Telescope on Mauna Kea in Hawaii. The instrument consists of a complex fibre-optic positioning system mounted at the prime focus of the telescope. Fibres are then fed to a pair of large spectrographs, each weighing nearly 3000 kg. The instrument will be used to observe the light from up to 400 stars or galaxies simultaneously over a field of view of 30 arcminutes (about the size of the full moon on the sky). The instrument will be used for a number of key programmes, including studies of galaxy formation and evolution, and of dark energy via a measurement of the rate at which the universe is expanding. Design, construction, operation It is currently being built by a consortium of institutes led by Kyoto University and Oxford University, with parts also being manufactured by the Rutherford Appleton Laboratory, Durham University and the Anglo-Australian Observatory. The instrument is scheduled for engineering first light in late 2008. OH-suppression The spectrographs use a technique called OH-suppression to increase the sensitivity of the observations: the incoming light from the fibres is dispersed to a relatively high resolution, and this spectrum forms an image on a pair of spherical mirrors which have been etched at the positions corresponding to the bright OH lines. This spectrum is then re-imaged through a second diffraction grating so that the full spectrum (without the OH lines) can be imaged onto a single infrared detector. References FMOS FMOS Project Telescope instruments Spectrographs Electronic test equipment Signal processing Laboratory equipment
Fibre multi-object spectrograph
[ "Physics", "Chemistry", "Astronomy", "Technology", "Engineering" ]
317
[ "Telecommunications engineering", "Telescope instruments", "Spectrum (physical sciences)", "Computer engineering", "Signal processing", "Electronic test equipment", "Measuring instruments", "Spectrographs", "Astronomical instruments", "Spectroscopy" ]
5,463,978
https://en.wikipedia.org/wiki/Topological%20entropy
In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy. Definition A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f : X → X. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent. Definition of Adler, Konheim, and McAndrew Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X. For two covers C and D, let be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers. For any continuous map f: X → X, the following limit exists: Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f,C) over all possible finite covers C of X. Interpretation The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ Ci are assigned the symbol Ci . Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement. then represents the logarithm of the minimal number of "words" of length n needed to encode the points of X according to the behavior of their first n − 1 iterates under f, or, put differently, the total number of "scenarios" of the behavior of these iterates, as "seen" by the partition C. Thus the topological entropy is the average (per iteration) amount of information needed to describe long iterations of the map f. Definition of Bowen and Dinaburg This definition uses a metric on X (actually, a uniform structure would suffice). This is a narrower definition than that of Adler, Konheim, and McAndrew, as it requires the additional metric structure on the topological space (but is independent of the choice of metrics generating the given topology). However, in practice, the Bowen-Dinaburg topological entropy is usually much easier to calculate. Let (X, d) be a compact metric space and f: X → X be a continuous map. For each natural number n, a new metric dn is defined on X by the formula Given any ε > 0 and n ≥ 1, two points of X are ε-close with respect to this metric if their first n iterates are ε-close. This metric allows one to distinguish in a neighborhood of an orbit the points that move away from each other during the iteration from the points that travel together. A subset E of X is said to be (n, ε)-separated if each pair of distinct points of E is at least ε apart in the metric dn. Denote by N(n, ε) the maximum cardinality of an (n, ε)-separated set. 
The topological entropy of the map f is defined by Interpretation Since X is compact, N(n, ε) is finite and represents the number of distinguishable orbit segments of length n, assuming that we cannot distinguish points within ε of one another. A straightforward argument shows that the limit defining h(f) always exists in the extended real line (but could be infinite). This limit may be interpreted as the measure of the average exponential growth of the number of distinguishable orbit segments. In this sense, it measures complexity of the topological dynamical system (X, f). Rufus Bowen extended this definition of topological entropy in a way which permits X to be non-compact under the assumption that the map f is uniformly continuous. Properties Topological entropy is an invariant of topological dynamical systems, meaning that it is preserved by topological conjugacy. Let be an expansive homeomorphism of a compact metric space and let be a topological generator. Then the topological entropy of relative to is equal to the topological entropy of , i.e. Let be a continuous transformation of a compact metric space , let be the measure-theoretic entropy of with respect to and let be the set of all -invariant Borel probability measures on X. Then the variational principle for entropy states that . In general the maximum of the quantities over the set is not attained, but if additionally the entropy map is upper semicontinuous, then a measure of maximal entropy - meaning a measure in with - exists. If has a unique measure of maximal entropy , then is ergodic with respect to . Examples Let by denote the full two-sided k-shift on symbols . Let denote the partition of into cylinders of length 1. Then is a partition of for all and the number of sets is respectively. The partitions are open covers and is a topological generator. Hence . The measure-theoretic entropy of the Bernoulli -measure is also . Hence it is a measure of maximal entropy. Further on it can be shown that no other measures of maximal entropy exist. Let be an irreducible matrix with entries in and let be the corresponding subshift of finite type. Then where is the largest positive eigenvalue of . Notes See also Milnor–Thurston kneading theory For the measure of correlations in systems with topological order see Topological entanglement entropy Mean dimension References Roy Adler, Tomasz Downarowicz, Michał Misiurewicz, Topological entropy at Scholarpedia External links http://www.scholarpedia.org/article/Topological_entropy Entropy and information Ergodic theory Topological dynamics
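In the notation above, the Bowen–Dinaburg construction is commonly written as follows (a standard presentation; the base of the logarithm is a matter of convention):

\[ d_n(x, y) = \max_{0 \le i < n} d\!\left(f^{i}(x), f^{i}(y)\right), \qquad h(f) = \lim_{\epsilon \to 0} \limsup_{n \to \infty} \frac{1}{n} \log N(n, \epsilon). \]

For the full two-sided shift on k symbols this gives h = log k, and for a subshift of finite type with irreducible transition matrix A it gives h = log λ, where λ is the largest positive (Perron) eigenvalue of A, consistent with the examples discussed above.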
Topological entropy
[ "Physics", "Mathematics" ]
1,318
[ "Physical quantities", "Ergodic theory", "Entropy and information", "Entropy", "Topology", "Topological dynamics", "Dynamical systems" ]
5,466,649
https://en.wikipedia.org/wiki/Stable%20manifold
In mathematics, and in particular the study of dynamical systems, the idea of stable and unstable sets or stable and unstable manifolds give a formal mathematical definition to the general notions embodied in the idea of an attractor or repellor. In the case of hyperbolic dynamics, the corresponding notion is that of the hyperbolic set. Physical example The gravitational tidal forces acting on the rings of Saturn provide an easy-to-visualize physical example. The tidal forces flatten the ring into the equatorial plane, even as they stretch it out in the radial direction. Imagining the rings to be sand or gravel particles ("dust") in orbit around Saturn, the tidal forces are such that any perturbations that push particles above or below the equatorial plane results in that particle feeling a restoring force, pushing it back into the plane. Particles effectively oscillate in a harmonic well, damped by collisions. The stable direction is perpendicular to the ring. The unstable direction is along any radius, where forces stretch and pull particles apart. Two particles that start very near each other in phase space will experience radial forces causing them to diverge, radially. These forces have a positive Lyapunov exponent; the trajectories lie on a hyperbolic manifold, and the movement of particles is essentially chaotic, wandering through the rings. The center manifold is tangential to the rings, with particles experiencing neither compression nor stretching. This allows second-order gravitational forces to dominate, and so particles can be entrained by moons or moonlets in the rings, phase locking to them. The gravitational forces of the moons effectively provide a regularly repeating small kick, each time around the orbit, akin to a kicked rotor, such as found in a phase-locked loop. The discrete-time motion of particles in the ring can be approximated by the Poincaré map. The map effectively provides the transfer matrix of the system. The eigenvector associated with the largest eigenvalue of the matrix is the Frobenius–Perron eigenvector, which is also the invariant measure, i.e the actual density of the particles in the ring. All other eigenvectors of the transfer matrix have smaller eigenvalues, and correspond to decaying modes. Definition The following provides a definition for the case of a system that is either an iterated function or has discrete-time dynamics. Similar notions apply for systems whose time evolution is given by a flow. Let be a topological space, and a homeomorphism. If is a fixed point for , the stable set of is defined by and the unstable set of is defined by Here, denotes the inverse of the function , i.e. , where is the identity map on . If is a periodic point of least period , then it is a fixed point of , and the stable and unstable sets of are defined by and Given a neighborhood of , the local stable and unstable sets of are defined by and If is metrizable, we can define the stable and unstable sets for any point by and where is a metric for . This definition clearly coincides with the previous one when is a periodic point. Suppose now that is a compact smooth manifold, and is a diffeomorphism, . 
If is a hyperbolic periodic point, the stable manifold theorem assures that for some neighborhood of , the local stable and unstable sets are embedded disks, whose tangent spaces at are and (the stable and unstable spaces of ), respectively; moreover, they vary continuously (in a certain sense) in a neighborhood of in the topology of (the space of all diffeomorphisms from to itself). Finally, the stable and unstable sets are injectively immersed disks. This is why they are commonly called stable and unstable manifolds. This result is also valid for nonperiodic points, as long as they lie in some hyperbolic set (stable manifold theorem for hyperbolic sets). Remark If is a (finite-dimensional) vector space and an isomorphism, its stable and unstable sets are called stable space and unstable space, respectively. See also Invariant manifold Center manifold Limit set Julia set Slow manifold Inertial manifold Normally hyperbolic invariant manifold Lagrangian coherent structure References Limit sets Dynamical systems Manifolds
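With a metric d on X, the stable and unstable sets in the definitions above are commonly written as (a standard formulation):

\[ W^{s}(f, p) = \{\, q \in X : d\!\left(f^{n}(q), f^{n}(p)\right) \to 0 \text{ as } n \to \infty \,\}, \qquad W^{u}(f, p) = \{\, q \in X : d\!\left(f^{-n}(q), f^{-n}(p)\right) \to 0 \text{ as } n \to \infty \,\}, \]

with the local versions obtained by additionally requiring the forward (respectively backward) orbit of q to remain in a given neighbourhood U of p.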
Stable manifold
[ "Physics", "Mathematics" ]
870
[ "Limit sets", "Space (mathematics)", "Topological spaces", "Topology", "Mechanics", "Manifolds", "Dynamical systems" ]
15,936,520
https://en.wikipedia.org/wiki/RNA%20extraction
RNA extraction is the purification of RNA from biological samples. This procedure is complicated by the ubiquitous presence of ribonuclease enzymes in cells and tissues, which can rapidly degrade RNA. Several methods are used in molecular biology to isolate RNA from samples, the most common of which is guanidinium thiocyanate-phenol-chloroform extraction. Usually, the phenol-chloroform solution used for RNA extraction has a lower pH, which aids in separating DNA from RNA and leads to a purer RNA preparation. The filter paper based lysis and elution method offers high throughput capacity. RNA extraction in liquid nitrogen, commonly using a mortar and pestle (or specialized steel devices known as tissue pulverizers), is also useful in preventing ribonuclease activity. RNase contamination The extraction of RNA in molecular biology experiments is greatly complicated by the presence of ubiquitous and hardy RNases that degrade RNA samples. Certain RNases can be extremely hardy, and inactivating them is difficult compared to neutralizing DNases. In addition to the cellular RNases that are released, there are several RNases that are present in the environment. RNases have evolved to have many extracellular functions in various organisms. For example, RNase 7, a member of the RNase A superfamily, is secreted by human skin and serves as a potent antipathogen defence. For these secreted RNases, enzymatic activity may not even be necessary for the RNase's exapted function. For example, immune RNases act by destabilizing the cell membranes of bacteria. To counter this, equipment used for RNA extraction is usually cleaned thoroughly, kept separate from common lab equipment and treated with various harsh chemicals that destroy RNases. For the same reason, experimenters take special care not to let their bare skin touch the equipment. Broad-spectrum RNase inhibitors are also commercially available and are sometimes added to in vitro transcription (RNA synthesis) reactions. See also Column purification DNA extraction Ethanol precipitation Phenol-chloroform extraction References External links Two-phase wash to solve the ubiquitous contaminant-carryover problem in commercial nucleic-acid extraction kits; by Erik Jue, Daan Witters & Rustem F. Ismagilov; Nature, Scientific Reports, 2020. Biochemical separation processes Genetics techniques
RNA extraction
[ "Chemistry", "Engineering", "Biology" ]
486
[ "Biochemistry methods", "Genetics techniques", "Separation processes", "Genetic engineering", "Biochemical separation processes" ]
15,939,934
https://en.wikipedia.org/wiki/Rhodobacter%20sphaeroides
Rhodobacter sphaeroides is a kind of purple bacterium; a group of bacteria that can obtain energy through photosynthesis. Its best growth conditions are anaerobic phototrophy (photoheterotrophic and photoautotrophic) and aerobic chemoheterotrophy in the absence of light. R. sphaeroides is also able to fix nitrogen. It is remarkably metabolically diverse, as it is able to grow heterotrophically via fermentation and aerobic and anaerobic respiration. Such a metabolic versatility has motivated the investigation of R. sphaeroides as a microbial cell factory for biotechnological applications. Rhodobacter sphaeroides has been isolated from deep lakes and stagnant waters. Rhodobacter sphaeroides is one of the most pivotal organisms in the study of bacterial photosynthesis. It requires no unusual conditions for growth and is incredibly efficient. The regulation of its photosynthetic machinery is of great interest to researchers, as R. sphaeroides has an intricate system for sensing O2 tensions. Also, when exposed to a reduction in the partial pressure of oxygen, R. sphaeroides develops invaginations in its cellular membrane. The photosynthetic apparatus is housed in these invaginations. These invaginations are also known as chromatophores. The genome of R. sphaeroides is also somewhat intriguing. It has two chromosomes, one of 3 Mb (CI) and one of 900 Kb (CII), and five naturally occurring plasmids. Many genes are duplicated between the two chromosomes but appear to be differentially regulated. Moreover, many of the open reading frames (ORFs) on CII seem to code for proteins of unknown function. When genes of unknown function on CII are disrupted, many types of auxotrophy result, emphasizing that the CII is not merely a truncated version of CI. Small non-coding RNA Bacterial small RNAs have been identified as components of many regulatory networks. Twenty sRNAs were experimentally identified in Rhodobacter spheroides, and the abundant ones were shown to be affected by singlet oxygen (1O2) exposure. 1O2 which generates photooxidative stress, is made by bacteriochlorophyll upon exposure to oxygen and light. One of the 1O2 induced sRNAs SorY (1O2 resistance RNA Y) was shown to be induced under several stress conditions and conferred resistance against 1O2 by affecting a metabolite transporter. SorX is the second 1O2 induced sRNA that counteracts oxidative stress by targeting mRNA for a transporter. It also has an impact on resistance against organic hydroperoxides. A cluster of four homologous sRNAs called CcsR for conserved CCUCCUCCC motif stress-induced RNA has been shown to play a role in photo-oxidative stress resistance as well. PcrZ (photosynthesis control RNA Z) identified in R. sphaeroides, is a trans-acting sRNA which counteracts the redox-dependent induction of photosynthesis genes, mediated by protein regulators. Metabolism R. sphaeroides encodes several terminal oxidases which allow electron transfer to oxygen and other electron acceptors (e.g. DMSO or TMAO). Therefore, this microorganism can respire under oxic, micro-oxic and anoxic conditions under both light and dark conditions. Moreover, it is capable to accept a variety of carbon substrates, including C1 to C4 molecules, sugars and fatty acids. Several pathways for glucose catabolism are present in its genome, such as the Embden–Meyerhof–Parnas pathway (EMP), the Entner–Doudoroff pathway (ED) and the Pentose phosphate pathway (PP). 
The ED pathway is the predominant glycolytic pathway in this microorganism, whereas the EMP pathway contributing only to a smaller extent. Variation in nutrient availability has important effects on the physiology of this bacterium. For example, decrease in oxygen tensions activates the synthesis of photosynthetic machinery (including photosystems, antenna complexes and pigments). Moreover, depletion of nitrogen in the medium triggers intracellular accumulation of polyhydroxybutyrate, a reserve polymer. Biotechnological applications A genome-scale metabolic model exists for this microorganism, which can be used for predicting the effect of gene manipulations on its metabolic fluxes. For facilitating genome editing in this species, a CRISPR/Cas9 genome editing tool was developed and expanded. Moreover, partitioning of intracellular fluxes has been studied in detail, also with the help of 13C-glucose isotopomers. Altogether, these tools can be employed for improving R. sphaeroides as cell factory for industrial biotechnology. Knowledge of the physiology of R. sphaeroides allowed the development of biotechnological processes for the production of some endogenous compounds. These are hydrogen, polyhydroxybutyrate and isoprenoids (e.g. coenzyme Q10 and carotenoids). Moreover, this microorganism is used also for wastewater treatment. Hydrogen evolution occurs via the activity of the enzyme nitrogenase, whereas isoprenoids are synthesized naturally via the endogenous MEP pathway. The native pathway has been optimized via genetic engineering for improving coenzyme Q10 synthesis. Alternatively, improvement of isoprenoid synthesis was obtained via the introduction of a heterologous mevalonate pathway. Synthetic biology-driven engineering of the metabolism of R. sphaeroides, in combination to the functional replacement the MEP pathway with mevalonate pathway, allowed to further increase bioproduction of isoprenoids in this species. Accepted name Rhodobacter sphaeroides (van Niel 1944) Imhoff et al., 1984 Synonyms Rhodococcus minor Molisch 1907 Rhodococcus capsulatus Molisch 1907 Rhodosphaera capsulata (Molisch) Buchanan 1918 Rhodosphaera minor (Molisch) Bergey et al. 1923 Rhodorrhagus minor (Molisch) Bergey et al. 1925 Rhodorrhagus capsulatus (Molisch) Bergey et al. 1925 Rhodorrhagus capsulatus Bergey et al. 1939 Rhodopseudomonas sphaeroides van Niel 1944 Rhodopseudomonas spheroides van Niel 1944 Rhodorrhagus spheroides (van Niel) Brisou 1955 Reclassification In 2020 it was recommended that Rhodobacter sphaeroides be moved to the genus Cereibacter. This is the name currently used by the NCBI taxonomy database. References Bibliography Inomata Tsuyako, Higuchi Masataka (1976), Incorporation of tritium into cell materials of Rhodpseudomonas spheroides from tritiated water in the medium under aerobic conditions ; Journal of Biochemistry 80(3), p569-578, 1976-09 External links Video recordings van R. sphaeroides Type strain of Rhodobacter sphaeroides at BacDive - the Bacterial Diversity Metadatabase Phototrophic bacteria Rhodobacteraceae Bacteria described in 1944
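The kind of prediction such a genome-scale model supports can be illustrated with a minimal flux balance calculation. The sketch below, in GNU Octave, uses a two-metabolite toy network and Octave's built-in glpk linear-programming routine; the network, bounds and objective are illustrative assumptions and are not taken from the published R. sphaeroides model.

% Toy flux balance calculation in GNU Octave: maximize the flux through a
% "biomass" reaction subject to steady state (S*v = 0) and flux bounds.
% The two-metabolite, four-reaction network is purely illustrative.
%   R1: (uptake)  -> A
%   R2: A -> B
%   R3: B -> biomass        (objective)
%   R4: A -> (byproduct)
S  = [ 1 -1  0 -1 ;          % mass balance for metabolite A
       0  1 -1  0 ];         % mass balance for metabolite B
c  = [0; 0; 1; 0];           % objective vector: flux through R3
lb = zeros(4, 1);            % all reactions treated as irreversible here
ub = [10; 10; 10; 10];       % uptake (R1) capped at 10 flux units
b  = zeros(rows(S), 1);      % steady-state right-hand side
ctype   = repmat('S', 1, rows(S));    % 'S' = equality constraint rows
vartype = repmat('C', 1, numel(c));   % continuous fluxes
[v, fmax] = glpk(c, S, b, lb, ub, ctype, vartype, -1);   % -1 = maximize
printf('optimal biomass flux: %g\n', fmax);
% Simulate a knockout of the gene catalysing R2 by forcing its flux to zero:
ub(2) = 0;
[v_ko, fmax_ko] = glpk(c, S, b, lb, ub, ctype, vartype, -1);
printf('biomass flux after the R2 knockout: %g\n', fmax_ko);

In this toy network the knockout drives the biomass flux to zero; in a genome-scale model the same calculation, repeated over all genes or reactions, is what underlies the knockout and flux-partitioning predictions mentioned above.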
Rhodobacter sphaeroides
[ "Chemistry", "Biology" ]
1,569
[ "Bacteria", "Photosynthesis", "Phototrophic bacteria" ]
15,940,961
https://en.wikipedia.org/wiki/Shortcut%20model
An important question in statistical mechanics is the dependence of model behaviour on the dimension of the system. The shortcut model was introduced in the course of studying this dependence. The model interpolates between discrete regular lattices of integer dimension. Introduction The behaviour of different processes on discrete regular lattices have been studied quite extensively. They show a rich diversity of behaviour, including a non-trivial dependence on the dimension of the regular lattice. In recent years the study has been extended from regular lattices to complex networks. The shortcut model has been used in studying several processes and their dependence on dimension. Dimension of complex network Usually, dimension is defined based on the scaling exponent of some property in the appropriate limit. One property one could use is the scaling of volume with distance. For regular lattices the number of nodes within a distance of node scales as . For systems which arise in physical problems one usually can identify some physical space relations among the vertices. Nodes which are linked directly will have more influence on each other than nodes which are separated by several links. Thus, one could define the distance between nodes and as the length of the shortest path connecting the nodes. For complex networks one can define the volume as the number of nodes within a distance of node , averaged over , and the dimension may be defined as the exponent which determines the scaling behaviour of the volume with distance. For a vector , where is a positive integer, the Euclidean norm is defined as the Euclidean distance from the origin to , i.e., However, the definition which generalises to complex networks is the norm, The scaling properties hold for both the Euclidean norm and the norm. The scaling relation is where d is not necessarily an integer for complex networks. is a geometric constant which depends on the complex network. If the scaling relation Eqn. holds, then one can also define the surface area as the number of nodes which are exactly at a distance from a given node, and scales as A definition based on the complex network zeta function generalises the definition based on the scaling property of the volume with distance and puts it on a mathematically robust footing. Shortcut model The shortcut model starts with a network built on a one-dimensional regular lattice. One then adds edges to create shortcuts that join remote parts of the lattice to one another. The starting network is a one-dimensional lattice of vertices with periodic boundary conditions. Each vertex is joined to its neighbors on either side, which results in a system with edges. The network is extended by taking each node in turn and, with probability , adding an edge to a new location nodes distant. The rewiring process allows the model to interpolate between a one-dimensional regular lattice and a two-dimensional regular lattice. When the rewiring probability , we have a one-dimensional regular lattice of size . When , every node is connected to a new location and the graph is essentially a two-dimensional lattice with and nodes in each direction. For between and , we have a graph which interpolates between the one and two dimensional regular lattices. 
The graphs we study are parametrized by Application to extensiveness of power law potential One application using the above definition of dimension was to the extensiveness of statistical mechanics systems with a power law potential where the interaction varies with the distance as . In one dimension the system properties like the free energy do not behave extensively when , i.e., they increase faster than N as , where N is the number of spins in the system. Consider the Ising model with the Hamiltonian (with N spins) where are the spin variables, is the distance between node and node , and are the couplings between the spins. When the have the behaviour , we have the power law potential. For a general complex network the condition on the exponent which preserves extensivity of the Hamiltonian was studied. At zero temperature, the energy per spin is proportional to and hence extensivity requires that be finite. For a general complex network is proportional to the Riemann zeta function . Thus, for the potential to be extensive, one requires Other processes which have been studied are self-avoiding random walks, and the scaling of the mean path length with the network size. These studies lead to the interesting result that the dimension transitions sharply as the shortcut probability increases from zero. The sharp transition in the dimension has been explained in terms of the combinatorially large number of available paths for points separated by distances large compared to 1. Conclusion The shortcut model is useful for studying the dimension dependence of different processes. The processes studied include the behaviour of the power law potential as a function of the dimension, the behaviour of self-avoiding random walks, and the scaling of the mean path length. It may be useful to compare the shortcut model with the small-world network, since the definitions have a lot of similarity. In the small-world network also one starts with a regular lattice and adds shortcuts with probability . However, the shortcuts are not constrained to connect to a node a fixed distance ahead. Instead, the other end of the shortcut can connect to any randomly chosen node. As a result, the small world model tends to a random graph rather than a two-dimensional graph as the shortcut probability is increased. References Networks Statistical mechanics
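A minimal numerical sketch of this construction, and of the volume-scaling estimate of the dimension, is given below in GNU Octave; the lattice size, shortcut probability and shortcut length are illustrative values rather than parameters from the studies cited.

% Shortcut model: ring of N nodes; each node i is linked to its neighbour
% i+1, and with probability p an extra edge ("shortcut") is added from i to
% the node m steps ahead. N, p and m are illustrative choices.
N = 400; p = 0.8; m = 20;
A = false(N);                          % adjacency matrix
for i = 1:N
  j = mod(i, N) + 1;                   % nearest neighbour on the ring
  A(i, j) = true;  A(j, i) = true;
  if rand() < p
    k = mod(i + m - 1, N) + 1;         % node m steps ahead (periodic)
    A(i, k) = true;  A(k, i) = true;
  end
end
% Volume scaling: V(r) = average number of nodes within graph distance r.
% The dimension is estimated from V(r) ~ r^d, i.e. the slope of log V(r)
% against log r over a range of small r.
rmax = 12;
R = logical(eye(N));                   % nodes within distance 0
V = zeros(rmax, 1);
for r = 1:rmax
  R = R | (double(R) * double(A) > 0); % grow each ball by one step
  V(r) = mean(sum(R, 2));
end
coeffs = polyfit(log((1:rmax)'), log(V), 1);
printf('estimated dimension d = %.2f\n', coeffs(1));

Rerunning the sketch with p close to 0 recovers an estimate near d = 1, while larger p pushes the estimate toward 2, illustrating the interpolation between the one- and two-dimensional lattices described above.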
Shortcut model
[ "Physics" ]
1,075
[ "Statistical mechanics" ]
15,941,337
https://en.wikipedia.org/wiki/Time-domain%20thermoreflectance
Time-domain thermoreflectance (TDTR) is a method by which the thermal properties of a material can be measured, most importantly thermal conductivity. This method can be applied most notably to thin film materials (up to hundreds of nanometers thick), which have properties that vary greatly when compared to the same materials in bulk. The idea behind this technique is that once a material is heated up, the change in the reflectance of the surface can be utilized to derive the thermal properties. The reflectivity is measured with respect to time, and the data received can be matched to a model with coefficients that correspond to thermal properties. Experiment setup The technique of this method is based on the monitoring of acoustic waves that are generated with a pulsed laser. Localized heating of a material will create a localized temperature increase, which induces thermal stress. This stress build in a localized region causes an acoustic strain pulse. At an interface, the pulse will be subjected to a transmittance/reflectance state, and the characteristics of the interface may be monitored with the reflected waves. A probe laser will detect the effects of the reflecting acoustic waves by sensing the piezo-optic effect. The amount of strain is related to the optical laser pulse as follows. Take the localized temperature increase due to the laser, where R is the sample reflectivity, Q is the optical pulse energy, C is the specific heat (per unit volume), A is the optical spot area, ζ is the optical absorption length, and z is the distance into the sample. This temperature increase results in a strain that can be estimated by multiplying it with the linear coefficient of thermal expansion of the film. Usually, a typical magnitude value of the acoustic pulse will be small, and for long propagation nonlinear effects could become important. But propagation of such short duration pulses will suffer acoustic attenuation if the temperature is not very low. Thus, this method is most efficient with the utilization of surface acoustic waves, and studies on investigation of this method toward lateral structures are being conducted. To sense the piezo-optic effect of the reflected waves, fast monitoring is required due to the travel time of the acoustic wave and heat flow. Acoustic waves travel a few nanometers in a picosecond, where heat flows about a hundred nanometers in a second. Thus, lasers such as titanium sapphire (Ti:Al2O3) laser, with pulse width of ~200 fs, are used to monitor the characteristics of the interface. Other type of lasers include Yb:fiber, Yb:tungstate, Er:fiber, Nd:glass. Second-harmonic generation may be utilized to achieve frequency of double or higher. The output of the laser is split into pump and probe beams by a half-wave plate followed by a polarizing beam splitter leading to a cross-polarized pump and probe. The pump beam is modulated on the order of a few megahertz by an acousto-optic or electro-optic modulator and focused onto the sample with a lens. The probe is directed into an optical delay line. The probe beam is then focused with a lens onto the same spot on the sample as the pulse. Both pump and probe have a spot size on the order of 10–50 μm. The reflected probe light is input to a high bandwidth photodetector. The output is fed into a lock-in amplifier whose reference signal has the same frequency used to modulate the pump. The voltage output from the lock-in will be proportional to the change in reflectivity (ΔR). 
Recording this signal as the optical delay line is changed provides a measurement of ΔR as a function of optical probe-pulse time delay. Modeling materials The surface temperature of a single layer The frequency domain solution for a semi-infinite solid which is heated by a point source with angular frequency can be expressed by the following equation: , where . Here, Λ is the thermal conductivity of the solid, D is the thermal diffusivity of the solid, and r is the radial coordinate. In a typical time-domain thermoreflectance experiment, the co-aligned laser beams have cylindrical symmetry, therefore the Hankel transform can be used to simplify the computation of the convolution of the equation with the distributions of the laser intensities. Here is radially symmetric and by the definition of Hankel transform, Since the pump and probe beams used here have Gaussian distribution, the radius of the pump and probe beam are and respectively. The surface is heated by the pump laser beam with the intensity , i.e. where is the amplitude of the heat absorbed by the sample at frequency . Then the Hankel transform of is . Then the distributions of temperature oscillations at the surface is the inverse Hankel transforms of the product and , i.e. The surface temperatures are measured due to the change in the reflectivity with the temperature , i.e. , while this change is measured by the changes in the reflected intensity of a probe laser beam. The probe laser beam measures a weighted average of the temperature , i.e. This last integral can be simplified to an integral over : The surface temperature of a layered structure In the similar way, frequency domain solution for the surface temperature of a layered structure can be acquired. for a layered structure is where , . Here Λn is the thermal conductivity of nth layer, Dn is the thermal diffusivity of nth layer, and Ln is the thickness of nth layer. Then we can calculate the changes of temperature of a layered structure as before using the updated . Modeling of data acquired in time-domain thermoreflectance The acquired data from time-domain thermoreflectance experiments are required to be compared with the model. where Q is the quality factor of the resonant circuit. This calculated would be compared with the measured one. Application Through this process of time-domain thermoreflectance, the thermal properties of many materials can be obtained. Common test setups include having multiple metal blocks connected together in a diffusion multiple, where once subjected to high temperatures various compounds can be created as a result of the diffusion of two adjacent metal blocks. An example would be a Ni-Cr-Pd-Pt-Rh-Ru diffusion multiple which would have diffusion zones of Ni-Cr, Ni-Pd, Ni-Pt and so on. In this way, many different materials can be tested at the same time. Lowest thermal conductivity for a thin film of solid, fully dense material (i.e. not porous) was also recently reported with measurements using this method. Once this test sample is obtained, time-domain thermoreflectance measurements can take place, with laser pulses of very short duration for both the pump and the probe lasers (<1 ps). The thermoreflected signal is then measured by a photodiode which is connected to a RF lock-in amplifier. The signals that come out of the amplifier consist of an in phase and out of phase component, and the ratio of these allow thermal conductivity data to be measured for a specific delay time. 
The data received from this process can then be compared to a thermal model, and the thermal conductivity and thermal conductance can then be derived. It is found that these two parameters can be derived independently based on the delay times, with short delay times (0.1–0.5 ns) resulting in the thermal conductivity and longer delay times (> 2ns) resulting in the thermal conductance. There is much room for error involved due to phase errors in the RF amplifier in addition to noise from the lasers. Typically, however, accuracy can be found to be within 8%. See also Thermal conductivity measurement References Thermodynamics Materials testing
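A minimal numerical sketch of the frequency-domain model described above is given below in GNU Octave for the simplest case of a semi-infinite single layer. It assumes the commonly used form G(k) = 1/(Λ·sqrt(k² + iω/D)) for the Green's function and a Gaussian spot weighting exp(−k²(w0² + w1²)/8); the material and beam parameters are illustrative, not fitted values.

% Surface temperature oscillation of a semi-infinite solid heated by a
% modulated Gaussian pump and read out by a Gaussian probe (single-layer
% limit of the thermal model; parameters are illustrative).
Lambda = 140;            % thermal conductivity, W m^-1 K^-1 (roughly silicon)
D      = 8.8e-5;         % thermal diffusivity, m^2 s^-1
w0     = 10e-6;          % pump 1/e^2 radius, m
w1     = 10e-6;          % probe 1/e^2 radius, m
A1     = 1e-3;           % absorbed pump power at the modulation frequency, W
f      = 10e6;           % modulation frequency, Hz
omega  = 2 * pi * f;
G         = @(k) 1 ./ (Lambda .* sqrt(k.^2 + 1i * omega / D));
integrand = @(k) k .* G(k) .* exp(-k.^2 * (w0^2 + w1^2) / 8);
k  = linspace(0, 40 / w0, 20000);        % radial wavevector grid, 1/m
dT = (A1 / (2 * pi)) * trapz(k, integrand(k));
printf('surface temperature oscillation: %.3g + %.3gi K\n', real(dT), imag(dT));
% The lock-in in-phase and out-of-phase signals are proportional to real(dT)
% and imag(dT); fitting their ratio against delay time or frequency with the
% multilayer version of the model yields the conductivity and conductance.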
Time-domain thermoreflectance
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,601
[ "Materials testing", "Materials science", "Thermodynamics", "Dynamical systems" ]
15,945,246
https://en.wikipedia.org/wiki/Abolitionism%20%28animal%20rights%29
Abolitionism or abolitionist veganism is the animal rights based opposition to all animal use by humans. Abolitionism intends to eliminate all forms of animal use by maintaining that all sentient beings, humans or nonhumans, share a basic right not to be treated as properties or objects. Abolitionists emphasize that the production of animal products requires treating animals as property or resources, and that animal products are not necessary for human health in modern societies. Abolitionists believe that everyone who can live vegan is therefore morally obligated to be vegan. Abolitionists disagree on the strategy that must be used to achieve their goal. While some abolitionists, like Gary L. Francione, professor of law, argue that abolitionists should create awareness about the benefits of veganism through creative and nonviolent education (by also pointing to health and environmental benefits) and inform people that veganism is a moral imperative, others such as Tom Regan believe that abolitionists should seek to stop animal exploitation in society, and fight for this goal through political advocacy, without using the environmental or health arguments. Abolitionists such as Steven Best and David Nibert argue, respectively, that embracing alliance politics and militant direct action for change (including civil disobedience, mass confrontation, etc), and transcending capitalism are integral to ending animal exploitation. Abolitionists generally oppose movements that seek to make animal use more humane or to abolish specific forms of animal use, since they believe this undermines the movement to abolish all forms of animal use. The objective is to secure a moral and legal paradigm shift, whereby animals are no longer regarded as things to be owned and used. The American philosopher Tom Regan writes that abolitionists want empty cages, not bigger ones. This is contrasted with animal welfare, which seeks incremental reform, and animal protectionism, which seeks to combine the first principles of abolitionism with an incremental approach, but which is regarded by some abolitionists as another form of welfarism or "New Welfarism". Concepts The word relates to the historical term abolitionism—a social movement to end slavery or human ownership of other humans. Based on the way of evaluating welfare reforms, abolitionists can be either radical or pragmatic. While the former maintain that welfare reforms can only be dubiously described as moral improvements, the latter consider welfare reforms as moral improvements even when the conditions they permit are unjust. Gary L. Francione, professor of law and philosophy at Rutgers School of Law–Newark, argues from the abolitionist perspective that self-described animal-rights groups who pursue welfare concerns, such as People for the Ethical Treatment of Animals, risk making the public feel comfortable about its use of animals. He calls such groups the "new welfarists", arguing that, though their aim is an end to animal use, the reforms they pursue are indistinguishable from reforms agreeable to traditional welfarists, who he says have no interest in abolishing animal use. He argues that reform campaigns entrench the property status of animals, and validate the view that animals simply need to be treated better. Instead, he writes, the public's view that animals can be used and consumed ought to be challenged. His position is that this should be done by promoting ethical veganism. 
Others think that this should be done by creating a public debate in society. Philosopher Steven Best of the University of Texas at El Paso has been critical of Francione for his denunciation of militant direct actions carried out by the underground animal liberation movement and organizations like the Animal Liberation Front, which Best compares favorably to the "nineteenth-century-abolitionist movement" to end slavery, and also for placing the onus on individual consumers rather than powerful institutions such as corporations, the state and the mass media along with ignoring the "constraints imposed by poverty, class, and social conditioning." In this, he says that Francione "exculpates capitalism" and fails to "articulate a structural theory of oppression." The "vague, elitist, asocial 'vegan education' approach," Best argues, is no substitute for "direct action, mass confrontation, civil disobedience, alliance politics, and struggle for radical change." Sociologist David Nibert of Wittenberg University argues that attempting to create a vegan world under global capitalism is unrealistic given that "tens of millions of animals are tortured and brutally killed every year to produce profits for twenty-first century elites, who hold investments in the corporate equivalents of Genghis Khan" and that any real and meaningful change will only come by transcending capitalism. He writes that the contemporary entrenchment of capitalism and continued exploitation of animals by human civilization dovetail into the ongoing expansion of what he describes as the animal–industrial complex, with the number of CAFOs and the animals to fill them dramatically increasing, along with growing numbers of humans consuming animal products. He rhetorically asks, how can one hope to create some consumer base for this new vegan world when over a billion people live on less than a dollar a day? Nibert acknowledges that post-capitalism on its own will not automatically end animal exploitation or bring about a more just world, but that it is a "necessary precondition" for such changes. New welfarists argue that there is no logical or practical contradiction between abolitionism and "welfarism". Welfarists think that they can be working toward abolition, but by gradual steps, pragmatically taking into account what most people can be realistically persuaded to do in the short as well as the long term, and reduce animal suffering as it is most urgent to relieve. People for the Ethical Treatment of Animals, for example, in addition to promoting local improvements in the treatment of animals, promote vegetarianism. Although some people believe that changing the legal status of nonhuman sentient beings is a first step in abolishing ownership or mistreatment, others argue that this will not succeed if the consuming public has not already begun to reduce or eliminate its exploitation of animals for food. Personhood In 1992, Switzerland amended its constitution to recognize animals as beings and not things. The dignity of animals is also protected in Switzerland. New Zealand granted basic rights to five great ape species in 1999. Their use is now forbidden in research, testing or teaching. Germany added animal welfare in a 2002 amendment to its constitution, becoming the first European Union member to do so. In 2007, the parliament of the Balearic Islands, an autonomous province of Spain, passed the world's first legislation granting legal rights to all great apes. 
In 2013, India officially recognized dolphins as non-human persons. In 2014, France revised the legal status of animals from movable property to sentient beings. In 2015, the province of Quebec in Canada adopted the Animal Welfare and Safety Act, which gave animals the legal status of "sentient beings with biological needs". See also Animal liberationist Animal rights List of animal rights advocates References Further reading Francione, Gary. Rain Without Thunder: The Ideology of the Animal Rights Movement. Temple University Press, 1996. Francione, Gary and Garner, Robert. The Animal Rights Debate: Abolition Or Regulation?. Columbia University Press, 2010. Francione, Gary. Ingrid Newkirk on Principled Veganism: "Screw the principle", Animal Rights: The Abolitionist Approach, September 2010. Francione, Gary. "Animal Rights: The Abolitionist Approach", accessed February 26, 2011. Francione, Gary. Animals, Property, and the Law. Temple University Press, 1995. Hall, Lee. "An Interview with Professor Gary L. Francione on the State of the U.S. Animal Rights Movement", Friends of Animals, accessed February 25, 2008. Regan, Tom. Empty Cages. Rowman & Littlefield Publishers, Inc., 2004. Regan, Tom. "The Torch of Reason, The Sword of Justice", animalsvoice.com, accessed May 29, 2012. Regan, Tom. "On Achieving Abolitionist Goals", Animal Rights Zone, May 18, 2011, accessed May 24, 2011. Regan, Tom. The Case for Animal Rights. University of California Press, 1980. Animal ethics Animal rights Bioethics
Abolitionism (animal rights)
[ "Technology" ]
1,696
[ "Bioethics", "Ethics of science and technology" ]
13,257,986
https://en.wikipedia.org/wiki/Plane%20wave%20expansion%20method
Plane wave expansion method (PWE) refers to a computational technique in electromagnetics to solve the Maxwell's equations by formulating an eigenvalue problem out of the equation. This method is popular among the photonic crystal community as a method of solving for the band structure (dispersion relation) of specific photonic crystal geometries. PWE is traceable to the analytical formulations, and is useful in calculating modal solutions of Maxwell's equations over an inhomogeneous or periodic geometry. It is specifically tuned to solve problems in a time-harmonic forms, with non-dispersive media (a reformulation of the method named Inverse dispersion allows frequency-dependent refractive indices). Principles Plane waves are solutions to the homogeneous Helmholtz equation, and form a basis to represent fields in the periodic media. PWE as applied to photonic crystals as described is primarily sourced from Dr. Danner's tutorial. The electric or magnetic fields are expanded for each field component in terms of the Fourier series components along the reciprocal lattice vector. Similarly, the dielectric permittivity (which is periodic along reciprocal lattice vector for photonic crystals) is also expanded through Fourier series components. with the Fourier series coefficients being the K numbers subscripted by m, n respectively, and the reciprocal lattice vector given by . In real modeling, the range of components considered will be reduced to just instead of the ideal, infinite wave. Using these expansions in any of the curl-curl relations like, and simplifying under assumptions of a source free, linear, and non-dispersive region we obtain the eigenvalue relations which can be solved. Example for 1D case For a y-polarized z-propagating electric wave, incident on a 1D-DBR periodic in only z-direction and homogeneous along x,y, with a lattice period of a. We then have the following simplified relations: The constitutive eigenvalue equation we finally have to solve becomes, This can be solved by building a matrix for the terms in the left hand side, and finding its eigenvalue and vectors. The eigenvalues correspond to the modal solutions, while the corresponding magnetic or electric fields themselves can be plotted using the Fourier expansions. The coefficients of the field harmonics are obtained from the specific eigenvectors. The resulting band-structure obtained through the eigenmodes of this structure are shown to the right. Example code We can use the following code in MATLAB or GNU Octave to compute the same band structure, % % solve the DBR photonic band structure for a simple % 1D DBR. 
% air-spacing d, periodicity a, i.e, a > d,
% we assume an infinite stack of 1D alternating eps_r|air layers
% y-polarized, z-directed plane wave incident on the stack
% periodic in the z-direction;
%
% parameters
d = 8;           % air gap
a = 10;          % total periodicity
d_over_a = d / a;
eps_r = 12.2500; % dielectric constant, like GaAs,
% max F.S coefs for representing E field, and Eps(r), are
Mmax = 50;
% Q matrix is non-symmetric in this case, Qij != Qji
% Qmn = (2*pi*n + Kz)^2*Km-n
% Kn = delta_n / eps_r + (1 - 1/eps_r) (d/a) sinc(pi.n.d/a)
% here n runs from -Mmax to + Mmax,
freqs = [];
for Kz = - pi / a:pi / (10 * a): + pi / a
  Q = zeros(2 * Mmax + 1);
  for x = 1:2 * Mmax + 1
    for y = 1:2 * Mmax + 1
      X = x - Mmax;
      Y = y - Mmax;
      kn = (1 - 1 / eps_r) * d_over_a .* sinc((X - Y) .* d_over_a) + ((X - Y) == 0) * 1 / eps_r;
      Q(x, y) = (2 * pi * (Y - 1) / a + Kz) .^ 2 * kn; % -Mmax<=(Y-1)<=Mmax
    end
  end
  fprintf('Kz = %g\n', Kz)
  omega_c = eig(Q);
  omega_c = sort(sqrt(omega_c)); % important step
  freqs = [freqs; omega_c.'];
end
close
figure
hold on
idx = 1;
for idx = 1:length(- pi / a:pi / (10 * a): + pi / a)
  plot(- pi / a:pi / (10 * a): + pi / a, freqs(:, idx), '.-')
end
hold off
xlabel('Kz')
ylabel('omega/c')
title(sprintf('PBG of 1D DBR with d/a=%g, Epsr=%g', d / a, eps_r))
Advantages PWE expansions are rigorous solutions. PWE is extremely well suited to the modal solution problem. Large size problems can be solved using iterative techniques like the conjugate gradient method. For both generalized and normal eigenvalue problems, just a few band-index plots in the band-structure diagrams are required, usually lying on the Brillouin zone edges. This corresponds to eigenmode solutions using iterative techniques, as opposed to diagonalization of the entire matrix. The PWEM is highly efficient for calculating modes in periodic dielectric structures. Being a Fourier space method, it suffers from the Gibbs phenomenon and slow convergence in some configurations when fast Fourier factorization is not used. It is the method of choice for calculating the band structure of photonic crystals. It is not easy to understand at first, but it is easy to implement. Disadvantages Sometimes spurious modes appear. Large problems scale as O(n3), with n the number of plane waves used in the problem. This is both time consuming and demanding in memory requirements. Alternatives include the Order-N spectral method, and methods using Finite-difference time-domain (FDTD), which are simpler and can model transients. If implemented correctly, spurious solutions are avoided. It is less efficient when index contrast is high or when metals are incorporated. It cannot be used for scattering analysis. Being a Fourier-space method, the Gibbs phenomenon affects the method's accuracy. This is particularly problematic for devices with high dielectric contrast. See also Photonic crystal Computational electromagnetics Finite-difference time-domain method Finite element method Maxwell's equations References Computational science Electrodynamics Computational electromagnetics
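Returning to the 1D example above, the matrix Q assembled in the code corresponds to the eigenvalue problem obtained by expanding the field and the inverse permittivity in the reciprocal-lattice basis. Writing κm for the Fourier coefficients of 1/ε(z), a standard form of this relation is

\[ \sum_{n} \kappa_{m-n} \left( K_z + \frac{2\pi n}{a} \right)^{2} E_{n} = \frac{\omega^{2}}{c^{2}} E_{m}, \]

so that each eigenvalue of Q yields one value of (ω/c)² at the chosen Bloch wavevector Kz.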
Plane wave expansion method
[ "Physics", "Mathematics" ]
1,482
[ "Computational electromagnetics", "Applied mathematics", "Computational physics", "Computational science", "Electrodynamics", "Dynamical systems" ]
13,259,181
https://en.wikipedia.org/wiki/Bollard%20pull
Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. It is defined as the force (usually in tonnes-force or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. Like the horsepower or mileage rating of a car, it is a convenient but idealized number that must be adjusted for operating conditions that differ from the test. The bollard pull of a vessel may be reported as two numbers, the static or maximum bollard pull – the highest force measured – and the steady or continuous bollard pull, the average of measurements over an interval of, for example, 10 minutes. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load. Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around of bollard pull, which is described as above "normal" tugboats. The worlds strongest tug since its delivery in 2020 is Island Victory (Vard Brevik 831) of Island Offshore, with a bollard pull of . Island Victory is not a typical tug, rather it is a special class of ship used in the petroleum industry called an Anchor Handling Tug Supply vessel. For vessels that hold station by thrusting under power against a fixed object, such as crew transfer ships used in offshore wind turbine maintenance, an equivalent measure "bollard push" may be given. Background Unlike in ground vehicles, the statement of installed horsepower is not sufficient to understand how strong a tug is – this is because the tug operates mainly in very low or zero speeds, thus may not be delivering power (power = force × velocity; so, for zero speeds, the power is also zero), yet still absorbing torque and delivering thrust. Bollard pull values are stated in tonnes-force (written as t or tonne) or kilonewtons (kN). Effective towing power is equal to total resistance times velocity of the ship. Total resistance is the sum of frictional resistance, , residual resistance, , and air resistance, . Where: is the density of water is the density of air is the velocity of (relative to) water is the velocity of (relative to) air is resistance coefficient of frictional resistance is resistance coefficient of residual resistance is resistance coefficient of air resistance (usually quite high, >0.9, as ships are not designed to be aerodynamic) is the wetted area of the ship is the cross-sectional area of the ship above the waterline Measurement Values for bollard pull can be determined in two ways. Practical trial This method is useful for one-off ship designs and smaller shipyards. It is limited in precision - a number of boundary conditions need to be observed to obtain reliable results. Summarizing the below requirements, practical bollard pull trials need to be conducted in a deep water seaport, ideally not at the mouth of a river, on a calm day with hardly any traffic. The ship needs to be in undisturbed water. Currents or strong winds would falsify the measurement. 
The static force that intends to move the ship forward must only be generated by the propeller discharge. If the ship were too close to a wall, water could rebound back, creating a propulsive wave. This would falsify the measurement. The ship must be in deep water. If there were any ground effect, the measurement would be falsified. The same holds true for propeller walk. Water salinity must have a well-defined value, as it influences the specific weight of the water and thereby the mass moved by the propeller per unit of time. The geometry of the towing line must have a well-defined value. Ideally, one would expect it to be exactly horizontal and straight. This is impossible in reality, because the line falls into a catenary due to its weight; the two fixed points of the line, being the bollard on shore and the ship's towing hook or cleat, may not have the same height above water. Conditions must be static. The engine power, the heading of the ship, the conditions of the propeller discharge race and the tension in the towing line must have settled to a constant or near-constant value for a reliable measurement. One condition to watch out for is the formation of a short circuit in propeller discharge race. If part of the discharge race is sucked back into the propeller, efficiency decreases sharply. This could occur due to a trial that is performed in too shallow water or too close to a wall. See Figure 2 for an illustration of error influences in a practical bollard pull trial. Note the difference in elevation of the ends of the line (the port bollard is higher than the ship's towing hook). Furthermore, there is the partial short circuit in propeller discharge current, the uneven trim of the ship and the short length of the tow line. All of these factors contribute to measurement error. Simulation This method eliminates much of the uncertainties of the practical trial. However, any numerical simulation also has an error margin. Furthermore, simulation tools and computer systems capable of determining bollard pull for a ship design are costly. Hence, this method makes sense for larger shipyards and for the design of a series of ships. Both methods can be combined. Practical trials can be used to validate the result of numerical simulation. Human-powered vehicles Practical bollard pull tests under simplified conditions are conducted for human powered vehicles. There, bollard pull is often a category in competitions and gives an indication of the power train efficiency. Although conditions for such measurements are inaccurate in absolute terms, they are the same for all competitors. Hence, they can still be valid for comparing several craft. See also Azipod Kort nozzle Tractive force Notes Further reading External links International Standard for Bollard Pull trials - 2019 Bollard Pull by Capt. P. Zahalka, Association of Hanseatic Marine Underwriters Physical quantities Water transport Nautical terminology Force
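With the symbols defined in the background section above, the resistance decomposition can be written in the usual form (a standard formulation consistent with those definitions; coefficient conventions vary between references):

\[ R_T = R_F + R_R + R_A, \qquad R_F = \tfrac{1}{2}\, C_F\, \rho_{\text{water}}\, S\, V_{\text{water}}^{2}, \qquad R_R = \tfrac{1}{2}\, C_R\, \rho_{\text{water}}\, S\, V_{\text{water}}^{2}, \qquad R_A = \tfrac{1}{2}\, C_A\, \rho_{\text{air}}\, A\, V_{\text{air}}^{2}, \]

and the effective towing power is then \( P_E = R_T\, V_{\text{water}} \), which vanishes at zero speed even though the propeller continues to deliver thrust.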
Bollard pull
[ "Physics", "Mathematics" ]
1,333
[ "Physical phenomena", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Wikipedia categories named after physical quantities", "Physical properties", "Matter" ]
13,260,616
https://en.wikipedia.org/wiki/Krivine%E2%80%93Stengle%20Positivstellensatz
In real algebraic geometry, the Krivine–Stengle Positivstellensatz (German for "positive-locus-theorem") characterizes polynomials that are positive on a semialgebraic set, which is defined by systems of inequalities of polynomials with real coefficients, or more generally, coefficients from any real closed field. It can be thought of as a real analogue of Hilbert's Nullstellensatz (which concerns complex zeros of polynomial ideals), and this analogy is at the origin of its name. It was proved by the French mathematician Jean-Louis Krivine and then rediscovered by the Canadian Gilbert Stengle. Statement Let R be a real closed field, and F = {f1, f2, ..., fm} and G = {g1, g2, ..., gr} finite sets of polynomials over R in the n variables X1, ..., Xn. Let W be the semialgebraic set

W = {x ∈ R^n : f(x) ≥ 0 for all f ∈ F, g(x) = 0 for all g ∈ G},

and define the preorder associated with W as the set

P(F, G) = { Σ_e σ_e f1^e1 ⋯ fm^em + Σ_i φ_i g_i : σ_e ∈ Σ²[X1,...,Xn] and φ_i ∈ R[X1,...,Xn] },

where the first sum runs over the exponent vectors e = (e1, ..., em) ∈ {0,1}^m and Σ²[X1,...,Xn] is the set of sum-of-squares polynomials. In other words, P(F, G) = C + I, where C is the cone generated by F (i.e., the subsemiring of R[X1,...,Xn] generated by F and arbitrary squares) and I is the ideal generated by G. Let p ∈ R[X1,...,Xn] be a polynomial. The Krivine–Stengle Positivstellensatz states that

(i) p ≥ 0 on W if and only if there exist q1, q2 ∈ P(F, G) and an integer s ≥ 0 such that q1 p = p^(2s) + q2;
(ii) p > 0 on W if and only if there exist q1, q2 ∈ P(F, G) such that q1 p = 1 + q2.

The weak Positivstellensatz is the following variant. Let R be a real closed field, and F, G, and H finite subsets of R[X1,...,Xn]. Let C be the cone generated by F, and I the ideal generated by G. Then

{x ∈ R^n : f(x) ≥ 0 for all f ∈ F, g(x) = 0 for all g ∈ G, h(x) ≠ 0 for all h ∈ H} = ∅

if and only if there exist f ∈ C, g ∈ I and an integer s ≥ 0 such that f + g + (Π_{h ∈ H} h)^(2s) = 0. (Despite the name, the "weak" form actually includes the "strong" form as a special case, so the terminology is a misnomer.) Variants The Krivine–Stengle Positivstellensatz also has the following refinements under additional assumptions. It should be remarked that Schmüdgen's Positivstellensatz has a weaker assumption than Putinar's Positivstellensatz, but the conclusion is also weaker. Schmüdgen's Positivstellensatz Suppose that R is the field of real numbers. If the semialgebraic set W is compact, then each polynomial p that is strictly positive on W can be written as a polynomial in the defining functions of W with sums-of-squares coefficients, i.e. p ∈ P(F, ∅). Here p is said to be strictly positive on W if p(x) > 0 for all x ∈ W. Note that Schmüdgen's Positivstellensatz is stated for the real numbers and does not hold for arbitrary real closed fields. Putinar's Positivstellensatz Define the quadratic module associated with F as the set

Q(F) = { σ0 + σ1 f1 + ⋯ + σm fm : σi ∈ Σ²[X1,...,Xn] }.

Assume there exists L > 0 such that the polynomial L − (X1² + ⋯ + Xn²) ∈ Q(F). If p(x) > 0 for all x ∈ W, then p ∈ Q(F). See also Positive polynomial for other positivstellensatz theorems. Real Nullstellensatz Notes References Real algebraic geometry Algebraic varieties German words and phrases Theorems in algebraic geometry
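A small worked example may help make the preorder concrete. The polynomial, the set W and the certificate below are chosen purely for illustration and are not taken from the literature; the check assumes the SymPy library is available.

```python
# Checking a simple Positivstellensatz-style certificate with SymPy.
# The polynomial p and the certificate below are illustrative choices.
import sympy as sp

x = sp.symbols('x')
f1 = 1 - x**2                  # W = {x : f1(x) >= 0} is the interval [-1, 1]
p = 5 - 4*x                    # strictly positive on W (its minimum on W is 1)

sigma0 = 2*(x - 1)**2 + 1      # sum of squares: (sqrt(2)*(x-1))**2 + 1**2
sigma1 = 2                     # sum of squares: (sqrt(2))**2

certificate = sigma0 + sigma1 * f1   # an element of the preorder P({f1}, {})
assert sp.expand(certificate - p) == 0
print("p =", sp.expand(certificate), "lies in the preorder, so p > 0 on W is certified")
```

In this example p itself lies in the preorder, which is what Schmüdgen's Positivstellensatz guarantees whenever W is compact; the general Krivine–Stengle statement only promises such a representation after multiplying p by a suitable element of the preorder.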
Krivine–Stengle Positivstellensatz
[ "Mathematics" ]
635
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
13,262,417
https://en.wikipedia.org/wiki/Polyembryony
Polyembryony is the phenomenon of two or more embryos developing from a single fertilized egg. Because the embryos arise from the same egg, they are genetically identical to one another, but genetically distinct from their parents. This combination – offspring that differ genetically from the parents yet are identical to each other – distinguishes polyembryony both from budding and from typical sexual reproduction. Polyembryony can occur in humans, resulting in identical twins, though the process is random and occurs at a low frequency. Polyembryony occurs regularly in many species of vertebrates, invertebrates, and plants. Evolution of polyembryony The evolution of polyembryony and the potential evolutionary advantages it may entail have been studied. In parasitoid wasps, there are several hypotheses surrounding the evolutionary advantages of polyembryony, one of them being that it allows female wasps that are small in size to increase the number of potential offspring in comparison to wasps that are monoembryonic. Monoembryony limits the number of offspring to the number of eggs laid, whereas with polyembryonic development multiple embryos can be derived from each individual egg. The potential advantages of polyembryony in competing invasive plant species have been studied as well. Vertebrates Armadillos are the best-studied vertebrates that undergo polyembryony, with six species of armadillo in the genus Dasypus being always polyembryonic. The nine-banded armadillo, for instance, always gives birth to four identical young. Two conditions are expected to promote the evolution of polyembryony: either the mother cannot predict the environmental conditions her offspring will face, as in the case of parasitoids, or reproduction is otherwise constrained. It is thought that nine-banded armadillos evolved to be polyembryonic because of the latter. Invertebrates A more striking example of the use of polyembryony as a competitive reproductive tool is found in the parasitoid Hymenoptera, family Encyrtidae. The progeny of the splitting embryo develop into at least two forms: those that will develop into adults and those that become a type of soldier, called precocious larvae. These latter larvae patrol the host and kill any other parasitoids they find, with the exception of their siblings, usually sisters. Obligately polyembryonic insects fall into two groups: Hymenoptera (certain wasps) and Strepsiptera. From one egg, these insects can produce thousands of offspring. Polyembryonic wasps from the Hymenoptera group can be further subdivided into four families: Braconidae (Macrocentrus), Platygastridae (Platygaster), Encyrtidae (Copidosoma), and Dryinidae. Polyembryony also occurs in Bryozoa. Through genotype analysis and molecular data, it has been suggested that polyembryony occurs throughout the bryozoan order Cyclostomatida. Plants The term is also used in botany to describe the phenomenon of two or more seedlings emerging from a single seed. Around 20 genera of gymnosperms undergo polyembryony, termed "cleavage polyembryony," in which the original zygote splits into many identical embryos. In some plant taxa, the many embryos of polyembryony eventually give rise to only a single offspring. The mechanism underlying this reduction to a single (or in some cases a few) offspring has been described in Pinus sylvestris as programmed cell death (PCD), which removes all but one embryo. 
Originally, all embryos have an equal opportunity to develop into full seeds, but during the early stages of development one embryo becomes dominant through competition and goes on to form the seed, which then becomes dormant, while the other embryos are destroyed through PCD. The genus Citrus has a number of species that undergo polyembryony, where multiple nucellar-cell-derived embryos exist alongside sexually-derived embryos. Antonie van Leeuwenhoek first described polyembryony in 1719, when he observed a Citrus seed containing two germinating embryos. In Citrus, polyembryony is genetically controlled by a polyembryony locus shared among the species, identified through single-nucleotide polymorphisms in the genotypes sequenced. The variation among citrus species lies in the number of embryos that develop, the impact of the environment, and gene expression. As in other species, because the many embryos develop in close proximity, competition occurs, which can cause variation in seed success or vigor. See also Monoembryony References External links Plant reproduction Embryology Insect physiology
Polyembryony
[ "Biology" ]
982
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
13,263,551
https://en.wikipedia.org/wiki/Ridership
In public transportation, ridership refers to the number of people using a transit service. It is often summed or otherwise aggregated over some period of time for a given service or set of services and used as a benchmark of success or usefulness. Common statistics include the number of people served by an entire transit system in a year and the number of people served each day by a single transit line. The concept should not be confused with the maximum capacity of a particular vehicle or transit line. See also References Transportation planning Public transport
Ridership
[ "Physics" ]
108
[ "Physical systems", "Transport", "Transport stubs" ]
13,265,459
https://en.wikipedia.org/wiki/Hydraulic%20Launch%20Assist
Hydraulic Launch Assist (HLA) is the name of a hydraulic hybrid regenerative braking system for land vehicles produced by the Eaton Corporation. Background The HLA system recycles energy by converting kinetic energy into potential energy during deceleration via hydraulics, storing the energy at high pressure in an accumulator filled with nitrogen gas. The energy is then returned to the vehicle during subsequent acceleration, thereby reducing the amount of work done by the internal combustion engine. This system provides a considerable increase in vehicle productivity while reducing fuel consumption in stop-and-go use profiles such as refuse vehicles and other heavy-duty vehicles. Parallel vs. series hybrids The HLA system is called a parallel hydraulic hybrid. In parallel systems the original vehicle drive-line remains, allowing the vehicle to operate normally when the HLA system is disengaged. When the HLA is engaged, energy is captured during deceleration and released during acceleration, in contrast to series hydraulic hybrid systems, which replace the entire traditional drive-line to provide power transmission in addition to regenerative braking. Hydraulic vs. electric hybrids Hydraulic hybrids are said to be power dense, while electric hybrids are energy dense. This means that electric hybrids, while able to deliver large amounts of energy over long periods of time, are limited by the rate at which the chemical energy in the batteries can be converted to mechanical energy and back. This is largely governed by reaction rates in the battery and the current ratings of associated components. Hydraulic hybrids, on the other hand, are capable of transferring energy at a much higher rate, but are limited by the amount of energy that can be stored. For this reason, hydraulic hybrids lend themselves well to stop-and-go applications and heavy vehicles. Applications Concept vehicles Ford Motor Company included the HLA system in their 2002 F-350 Tonka truck concept vehicle, reported to have lower emissions and better fuel economy than any V-8 diesel truck engine of the time, with HLA designed to eventually improve fuel economy by 25%–35% in heavy-truck city driving. Shuttle bus Eaton, Ford, the US Army, and IMPACT Engineering, Inc. (of Kent, Washington), built an E-450 shuttle bus as part of the Army's HAMMER (Hydraulic Hybrid Advanced Materials Multifuel Engine Research) project. Refuse Eaton has been awarded the Texas government's New Technology Research and Development grant to build 12 refuse vehicles with HLA systems. Peterbilt Motors has designed a Model 320 chassis that incorporates the HLA system, which was featured on the cover of the December 13, 2007, issue of Machine Design. References Green vehicles Hybrid vehicles Hybrid powertrain Hybrid trucks Hydraulics
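To give a feel for the orders of magnitude involved, the sketch below does the energy bookkeeping for a single stop of a parallel hydraulic hybrid. The vehicle mass, speed and capture efficiency are invented example figures, not Eaton data.

```python
# Rough energy bookkeeping for a parallel hydraulic hybrid during one stop.
# The mass, speed and efficiency figures are invented for illustration only.

def kinetic_energy(mass_kg, speed_ms):
    """Translational kinetic energy in joules."""
    return 0.5 * mass_kg * speed_ms**2

def braking_recovery(mass_kg, speed_ms, capture_efficiency):
    """Energy banked in the accumulator when the vehicle brakes to rest."""
    return capture_efficiency * kinetic_energy(mass_kg, speed_ms)

truck_mass = 18_000.0        # kg, loaded refuse truck (example value)
speed = 30 / 3.6             # 30 km/h expressed in m/s
eta_capture = 0.70           # assumed capture efficiency of the hydraulic path

recovered = braking_recovery(truck_mass, speed, eta_capture)
print(f"Kinetic energy at 30 km/h: {kinetic_energy(truck_mass, speed) / 1e3:.0f} kJ")
print(f"Energy banked in accumulator per stop: {recovered / 1e3:.0f} kJ")
```

Because the accumulator is power-dense but energy-limited, this banked energy is released again over the next launch rather than stored for long periods.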
Hydraulic Launch Assist
[ "Physics", "Chemistry" ]
529
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
13,266,770
https://en.wikipedia.org/wiki/Sialon
SiAlON ceramics are a specialist class of high-temperature refractory materials, with high strength at ambient and high temperatures, good thermal shock resistance and exceptional resistance to wetting or corrosion by molten non-ferrous metals, compared to other refractory materials such as, for example, alumina. A typical use is with handling of molten aluminium. They also are exceptionally corrosion resistant and hence are also used in the chemical industry. SiAlONs also have high wear resistance, low thermal expansion and good oxidation resistance up to above ~1000 °C. They were first reported around 1971. Forms m and n are the numbers of Al–N and Al–O bonds substituting for Si–N bonds SiAlONs are ceramics based on the elements silicon (Si), aluminium (Al), oxygen (O) and nitrogen (N). They are solid solutions of silicon nitride (Si3N4), where Si–N bonds are partly replaced with Al–N and Al–O bonds. The substitution degrees can be estimated from the lattice parameters. The charge discrepancy caused by the substitution can be compensated by adding metal cations such as Li+, Mg2+, Ca2+, Y3+ and Ln3+, where Ln stands for lanthanide. SiAlONs exist in three basic forms, which are iso-structural with one of the two common forms of silicon nitride, alpha and beta, and with orthorhombic silicon oxynitride; they are hence named as α, β and O'-SiAlONs. Production SiAlONs are produced by first combining a mixture of raw materials including silicon nitride, alumina, aluminium nitride, silica and the oxide of a rare-earth element such as yttrium. The powder mix is fabricated into a "green" compact by isostatic powder compaction or slipcasting, for example. Then the shaped form is densified, typically by pressureless sintering or hot isostatic pressing. Abnormal grain growth has been extensively reported for SiAlON ceramics, and results in a bimodal grain size distribution of the sintered material. The sintered part may then need to be machined by diamond grinding (abrasive cutting). Alternatively, they can be forged into various shapes at a temperature of ca. 1200 °C. Applications SiAlON ceramics have found extensive use in non-ferrous molten metal handling, particularly aluminium and its alloys, including metal feed tubes for aluminum die casting, burner and immersion heater tubes, injector and degassing for nonferrous metals, thermocouple protection tubes, crucibles and ladles. In metal forming, SiAlON is used as a cutting tool for machining chill cast iron and as brazing and welding fixtures and pins, particularly for resistance welding. Other applications include in the chemical and process industries and the oil and gas industries, due to sialons excellent chemical stability and corrosion resistance and wear resistance properties. Some rare-earth activated SiAlONs are photoluminescent and can serve as phosphors. Europium(II)-doped β-SiAlON absorbs in ultraviolet and visible light spectrum and emits intense broadband visible emission. Its luminance and color does not change significantly with temperature, due to the temperature-stable crystal structure. It has a great potential as a green down-conversion phosphor for white LEDs; a yellow variant also exists. For white LEDs, a blue LED is used with a yellow phosphor, or with a green and yellow SiAlON phosphor and a red CaAlSiN3-based (CASN) phosphor. References Ceramic materials Nitrides Superhard materials Phosphors and scintillators
Sialon
[ "Physics", "Chemistry", "Engineering" ]
790
[ "Luminescence", "Materials", "Superhard materials", "Phosphors and scintillators", "Ceramic materials", "Ceramic engineering", "Matter" ]
8,631,522
https://en.wikipedia.org/wiki/Shallow%20water%20equations
The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called the Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below). The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived. While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation. Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow. Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state. Equations Conservative form The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier–Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are:

∂(ρη)/∂t + ∂(ρηu)/∂x + ∂(ρηv)/∂y = 0,
∂(ρηu)/∂t + ∂/∂x(ρηu² + ½ρgη²) + ∂(ρηuv)/∂y = 0,
∂(ρηv)/∂t + ∂(ρηuv)/∂x + ∂/∂y(ρηv² + ½ρgη²) = 0.

Here η is the total fluid column height (instantaneous fluid depth as a function of x, y and t), and the 2D vector (u,v) is the fluid's horizontal flow velocity, averaged across the vertical column. Further g is acceleration due to gravity and ρ is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation. Non-conservative form Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. 
Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density): where It is often the case that the terms quadratic in u and v, which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height (), we have (without lateral viscous forces): One-dimensional Saint-Venant equations The one-dimensional (1-D) Saint-Venant equations were derived by Adhémar Jean Claude Barré de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations. The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape. The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow. Equations The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section – as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) – is: and where x is the space coordinate along the channel axis, t denotes time, A(x,t) is the cross-sectional area of the flow at location x, u(x,t) is the flow velocity, ζ(x,t) is the free surface elevation and τ(x,t) is the wall shear stress along the wetted perimeter P(x,t) of the cross section at x. Further ρ is the (constant) fluid density and g is the gravitational acceleration. Closure of the hyperbolic system of equations ()–() is obtained from the geometry of cross sections – by providing a functional relationship between the cross-sectional area A and the surface elevation ζ at each position x. For example, for a rectangular cross section, with constant channel width B and channel bed elevation zb, the cross sectional area is: . The instantaneous water depth is , with zb(x) the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area A in equation () can be written as: with b(x,h) the effective width of the channel cross section at location x when the fluid depth is h – so for rectangular channels. The wall shear stress τ is dependent on the flow velocity u, they can be related by using e.g. the Darcy–Weisbach equation, Manning formula or Chézy formula. Further, equation () is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation () is the momentum equation, giving the balance between forces and momentum change rates. 
The bed slope S(x), friction slope Sf(x, t) and hydraulic radius R(x, t) are defined as: and Consequently, the momentum equation () can be written as: Conservation of momentum The momentum equation () can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, () and (). In terms of the discharge : where A, I1 and I2 are functions of the channel geometry, described in the terms of the channel width B(σ,x). Here σ is the height above the lowest point in the cross section at location x, see the cross-section figure. So σ is the height above the bed level zb(x) (of the lowest point in the cross section): Above – in the momentum equation () in conservation form – A, I1 and I2 are evaluated at . The term describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, gives the effects of geometry variations along the channel axis x. In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, () or (), or the conservation form (). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump. Characteristics The Saint-Venant equations ()–() can be analysed using the method of characteristics. The two celerities dx/dt on the characteristic curves are: with The Froude number determines whether the flow is subcritical () or supercritical (). For a rectangular and prismatic channel of constant width B, i.e. with and , the Riemann invariants are: and so the equations in characteristic form are: The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011). The characteristics and Riemann invariants provide important information on the behavior of the flow, as well as that they may be used in the process of obtaining (analytical or numerical) solutions. Hamiltonian structure for frictionless flow In case there is no friction and the channel has a rectangular prismatic cross section, the Saint-Venant equations have a Hamiltonian structure. The Hamiltonian is equal to the energy of the free-surface flow: with constant the channel width and the constant fluid density. Hamilton's equations then are: since . Derived modelling Dynamic wave The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, InfoWorks_ICM , MIKE 11, Wash 123d and SWMM5. In the order of increasing simplifications, by removing some terms of the full 1D Saint-Venant equations (aka Dynamic wave equation), we get the also classical Diffusive wave equation and Kinematic wave equation. Diffusive wave For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as: The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. 
In the SIC (Irstea) software this options is also available, since the 2 inertia terms (or any of them) can be removed in option from the interface. Kinematic wave For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave: The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS. Derivation from Navier–Stokes equations The 1-D Saint-Venant momentum equation can be derived from the Navier–Stokes equations that describe fluid motion. The x-component of the Navier–Stokes equations – when expressed in Cartesian coordinates in the x-direction – can be written as: where u is the velocity in the x-direction, v is the velocity in the y-direction, w is the velocity in the z-direction, t is time, p is the pressure, ρ is the density of water, ν is the kinematic viscosity, and fx is the body force in the x-direction. If it is assumed that friction is taken into account as a body force, then can be assumed as zero so: Assuming one-dimensional flow in the x-direction it follows that: Assuming also that the pressure distribution is approximately hydrostatic it follows that: or in differential form: And when these assumptions are applied to the x-component of the Navier–Stokes equations: There are 2 body forces acting on the channel fluid, namely, gravity and friction: where fx,g is the body force due to gravity and fx,f is the body force due to friction. fx,g can be calculated using basic physics and trigonometry: where Fg is the force of gravity in the x-direction, θ is the angle, and M is the mass. The expression for sin θ can be simplified using trigonometry as: For small θ (reasonable for almost all streams) it can be assumed that: and given that fx represents a force per unit mass, the expression becomes: Assuming the energy grade line is not the same as the channel slope, and for a reach of consistent slope there is a consistent friction loss, it follows that: All of these assumptions combined arrives at the 1-dimensional Saint-Venant equation in the x-direction: where (a) is the local acceleration term, (b) is the convective acceleration term, (c) is the pressure gradient term, (d) is the friction term, and (e) is the gravity term. Terms The local acceleration (a) can also be thought of as the "unsteady term" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation. The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope. Wave modelling by shallow-water equations Shallow-water equations can be used to model Rossby and Kelvin waves in the atmosphere, rivers, lakes and oceans as well as gravity waves in a smaller domain (e.g. surface waves in a bath). 
In order for shallow-water equations to be valid, the wavelength of the phenomenon they are supposed to model has to be much larger than the depth of the basin where the phenomenon takes place. Somewhat smaller wavelengths can be handled by extending the shallow-water equations using the Boussinesq approximation to incorporate dispersion effects. Shallow-water equations are especially suitable to model tides which have very large length scales (over hundred of kilometers). For tidal motion, even a very deep ocean may be considered as shallow as its depth will always be much smaller than the tidal wavelength. Turbulence modelling using non-linear shallow-water equations Shallow-water equations, in its non-linear form, is an obvious candidate for modelling turbulence in the atmosphere and oceans, i.e. geophysical turbulence. An advantage of this, over Quasi-geostrophic equations, is that it allows solutions like gravity waves, while also conserving energy and potential vorticity. However, there are also some disadvantages as far as geophysical applications are concerned - it has a non-quadratic expression for total energy and a tendency for waves to become shock waves. Some alternate models have been proposed which prevent shock formation. One alternative is to modify the "pressure term" in the momentum equation, but it results in a complicated expression for kinetic energy. Another option is to modify the non-linear terms in all equations, which gives a quadratic expression for kinetic energy, avoids shock formation, but conserves only linearized potential vorticity. See also Waves and shallow water Notes Further reading External links Derivation of the shallow-water equations from first principles (instead of simplifying the Navier–Stokes equations, some analytical solutions) Equations of fluid dynamics Partial differential equations Physical oceanography Water waves
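As a minimal illustration of how the conservative form is used numerically, the sketch below advances the one-dimensional shallow-water equations over a flat, frictionless bed with a first-order Lax–Friedrichs scheme. The grid, time step and dam-break initial condition are arbitrary demonstration choices, not a validated configuration.

```python
# Minimal 1-D shallow-water solver (flat bed, no friction or Coriolis) using
# a first-order Lax-Friedrichs scheme on the conservative variables (h, hu).
import numpy as np

g = 9.81
nx, dx, dt, nsteps = 200, 1.0, 0.05, 200      # dt chosen so dt*max|c|/dx < 1

x = np.arange(nx) * dx
h = np.where(x < 100.0, 2.0, 1.0)              # dam-break initial depth (m)
hu = np.zeros(nx)                              # fluid initially at rest

def flux(h, hu):
    """Physical flux of the 1-D shallow-water equations in conservative form."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h**2])

for _ in range(nsteps):
    q = np.array([h, hu])
    f = flux(h, hu)
    q_new = q.copy()
    # Lax-Friedrichs update for interior cells.
    q_new[:, 1:-1] = (0.5 * (q[:, 2:] + q[:, :-2])
                      - 0.5 * dt / dx * (f[:, 2:] - f[:, :-2]))
    # Simple transmissive boundaries.
    q_new[:, 0], q_new[:, -1] = q_new[:, 1], q_new[:, -2]
    h, hu = q_new[0], q_new[1]

print(f"depth range after {nsteps * dt:.0f} s: {h.min():.2f} m to {h.max():.2f} m")
```

First-order schemes like this are very diffusive; practical codes use higher-order finite-volume or finite-element discretisations, but the structure – updating the conserved quantities (h, hu) from differences of fluxes – is the same.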
Shallow water equations
[ "Physics", "Chemistry" ]
3,325
[ "Equations of fluid dynamics", "Physical phenomena", "Applied and interdisciplinary physics", "Equations of physics", "Water waves", "Waves", "Physical oceanography", "Fluid dynamics" ]
8,635,379
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion%20system
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space. Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology and physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form where represents the unknown vector function, is a diagonal matrix of diffusion coefficients, and accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structure like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each function, for which a reaction diffusion differential equation holds, represents in fact a concentration variable. One-component reaction–diffusion equations The simplest reaction–diffusion equation is in one spatial dimension in plane geometry, is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process. The corresponding equation is Fick's second law. The choice yields Fisher's equation that was originally used to describe the spreading of biological populations, the Newell–Whitehead-Segel equation with to describe Rayleigh–Bénard convection, the more general Zeldovich–Frank-Kamenetskii equation with and (Zeldovich number) that arises in combustion theory, and its particular degenerate case with that is sometimes referred to as the Zeldovich equation as well. The dynamics of one-component systems is subject to certain restrictions as the evolution equation can also be written in the variational form and therefore describes a permanent decrease of the "free energy" given by the functional with a potential such that In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form with , where is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonous stationary solutions (e.g. localized domains composed of a front-antifront pair) are unstable. For , there is a simple proof for this statement: if is a stationary solution and is an infinitesimally perturbed solution, linear stability analysis yields the equation With the ansatz we arrive at the eigenvalue problem of Schrödinger type where negative eigenvalues result in the instability of the solution. Due to translational invariance is a neutral eigenfunction with the eigenvalue , and all other eigenfunctions can be sorted according to an increasing number of nodes with the magnitude of the corresponding real eigenvalue increases monotonically with the number of zeros. 
The eigenfunction should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue cannot be the lowest one, thereby implying instability. To determine the velocity of a moving front, one may go to a moving coordinate system and look at stationary solutions: This equation has a nice mechanical analogue as the motion of a mass with position in the course of the "time" under the force with the damping coefficient c which allows for a rather illustrative access to the construction of different types of solutions and the determination of . When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability. Two-component reaction–diffusion equations Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion. A linear stability analysis however shows that when linearizing the general two-component system a plane wave perturbation of the stationary homogeneous solution will satisfy Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian of the reaction function. In particular, if a finite wave vector is supposed to be the most unstable one, the Jacobian must have the signs This class of systems is named activator-inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation with which describes how an action potential travels through a nerve. Here, and are positive constants. When an activator-inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns. For the Fitzhugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcation are given by If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle Three- and more-component reaction–diffusion equations For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. the Belousov–Zhabotinsky reaction, for blood clotting, fission waves or planar gas discharge systems. 
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback). An introduction and systematic overview of the possible phenomena in dependence on the properties of the underlying system is given in. Applications and universality In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology and may even be related to animal coats and skin pigmentation. Other applications of reaction–diffusion equations include ecological invasions, spread of epidemics, tumour growth, dynamics of fission waves, wound healing and visual hallucinations. Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment. Experiments Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors or filled capillary tubes may be used. Second, temperature pulses on catalytic surfaces have been investigated. Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems. Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas or semiconductors can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out. Numerical treatments A reaction–diffusion system can be solved by using methods of numerical mathematics. There exist several numerical treatments in research literature. Numerical solution methods for complex geometries are also proposed. Reaction-diffusion systems are described to the highest degree of detail with particle based simulation tools like SRSim or ReaDDy which employ among others reversible interacting-particle reaction dynamics. See also Autowave Diffusion-controlled reaction Chemical kinetics Phase space method Autocatalytic reactions and order creation Pattern formation Patterns in nature Periodic travelling wave Self-similar solutions Diffusion equation Stochastic geometry MClone The Chemical Basis of Morphogenesis Turing pattern Multi-state modeling of biomolecules Examples Fisher's equation Zeldovich–Frank-Kamenetskii equation FitzHugh–Nagumo model Wrinkle paint References External links Reaction–Diffusion by the Gray–Scott Model: Pearson's parameterization a visual map of the parameter space of Gray–Scott reaction diffusion. A thesis on reaction–diffusion patterns with an overview of the field RD Tool: an interactive web application for reaction-diffusion simulation Mathematical modeling Parabolic partial differential equations Reaction mechanisms Functions of space and time
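As a concrete illustration of the one-component case, the sketch below integrates Fisher's equation with an explicit finite-difference scheme and tracks the travelling front it produces. The domain size, grid spacing and diffusion coefficient are arbitrary illustration values.

```python
# Explicit finite-difference integration of Fisher's equation
#   du/dt = D * d^2u/dx^2 + u * (1 - u),
# whose fronts invade the unstable state u = 0.
import numpy as np

D, dx, dt = 1.0, 0.5, 0.05          # dt < dx**2 / (2*D) for stability
nx, nsteps = 400, 600

u = np.zeros(nx)
u[:20] = 1.0                         # seed the stable state u = 1 on the left

for _ in range(nsteps):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]        # crude zero-flux boundaries
    u = u + dt * (D * lap + u * (1.0 - u))

front = np.argmax(u < 0.5) * dx              # first grid point below u = 0.5
print(f"front position at t = {nsteps * dt:.0f}: x ~ {front:.1f}")
```

For this nondimensional form the front advances at a speed close to 2·sqrt(D) per unit time, the classical Fisher front speed; two-component systems such as FitzHugh–Nagumo are simulated the same way, with one array per component.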
Reaction–diffusion system
[ "Physics", "Chemistry", "Mathematics" ]
1,944
[ "Reaction mechanisms", "Mathematical modeling", "Functions of space and time", "Applied mathematics", "Physical organic chemistry", "Spacetime", "Chemical kinetics" ]
8,635,577
https://en.wikipedia.org/wiki/Tetrahydroxyborate
Tetrahydroxyborate is an inorganic anion with the chemical formula B(OH)4−, also written [B(OH)4]−. It contributes no colour to tetrahydroxyborate salts. It is found in the mineral hexahydroborite, Ca[B(OH)4]2·2H2O, originally formulated CaB2O4·6H2O. It is one of the boron oxoanions, and acts as a weak base. The systematic names are tetrahydroxyboranuide (substitutive) and tetrahydroxidoborate(1−) (additive). It can be viewed as the conjugate base of boric acid. Structure Tetrahydroxyborate has a symmetric tetrahedral geometry, isoelectronic with the hypothetical compound orthocarbonic acid, C(OH)4. Chemical properties Basicity Tetrahydroxyborate acts as a weak Brønsted–Lowry base because it can accept a proton (H+), yielding boric acid with release of water:

B(OH)4− + H+ → B(OH)3 + H2O

It can also release a hydroxide anion OH−, thus acting as a classical Arrhenius base:

B(OH)4− ⇌ B(OH)3 + OH−

an equilibrium that lies to the left (the corresponding acid dissociation of boric acid has pK = 9.14). Thus, when a tetrahydroxyborate salt is dissolved in pure water, most of it will remain as tetrahydroxyborate ions. With diols In aqueous solution, the tetrahydroxyborate anion reacts with cis-vicinal diols (organic compounds containing similarly oriented hydroxyl groups on adjacent carbon atoms), such as mannitol, sorbitol, glucose and glycerol, to form anionic esters containing one or two five-membered rings. For example, the reaction with mannitol (C6H14O6) can be written as

B(OH)4− + C6H14O6 ⇌ [B(OH)2(C6H12O6)]− + 2 H2O
[B(OH)2(C6H12O6)]− + C6H14O6 ⇌ [B(C6H12O6)2]− + 2 H2O

giving the overall reaction

B(OH)4− + 2 C6H14O6 ⇌ [B(C6H12O6)2]− + 4 H2O

These mannitoborate esters are fairly stable and thus deplete the tetrahydroxyborate from the solution. The addition of mannitol to an initially neutral solution containing boric acid or borates lowers the pH enough for the boron to be titrated by a strong base such as NaOH, including with an automated potentiometric titrator. This is a reliable method to assay the borate content of the solution. Other chemical reactions Upon treatment with a strong acid, a metal tetrahydroxyborate converts to boric acid and the metal salt. Oxidation of tetrahydroxyborate gives the perborate anion [B2(O2)2(OH)4]2−:

2 B(OH)4− + 2 H2O2 → [B2(O2)2(OH)4]2− + 4 H2O

When heated to a high temperature, tetrahydroxyborate salts decompose to produce metaborate salts and water, or to produce boric acid and a metal hydroxide:

n B(OH)4− → (BO2−)n + 2n H2O
B(OH)4− → B(OH)3 + HO−

Production Tetrahydroxyborate salts are produced by treating boric acid with an alkali such as sodium hydroxide, with catalytic amounts of water. Other borate salts may be obtained by altering the process conditions. Uses Tetrahydroxyborate can be used as a cross-link in polymers. Occurrence The tetrahydroxyborate anion is found in Na[B(OH)4], Na2[B(OH)4]Cl and CuII[B(OH)4]Cl. See also Borate Tetrafluoroborate References Borates Anions
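The acid–base behaviour described above can be quantified with a short speciation calculation. The sketch below assumes the simple monoborate equilibrium with the pK of 9.14 quoted in the text and ignores polyborate species, which become relevant at high boron concentrations.

```python
# Fraction of dissolved boron present as B(OH)4- versus B(OH)3 at a given pH,
# using the pK of 9.14 quoted above and the Henderson-Hasselbalch relation.
# This neglects polyborate species (an assumption for this illustration).

PKA = 9.14

def fraction_tetrahydroxyborate(ph):
    """[B(OH)4-] / ([B(OH)3] + [B(OH)4-]) for the simple monoborate equilibrium."""
    ratio = 10 ** (ph - PKA)          # [B(OH)4-] / [B(OH)3]
    return ratio / (1.0 + ratio)

for ph in (7.0, 8.0, 9.14, 10.0, 11.0):
    print(f"pH {ph:5.2f}: {100 * fraction_tetrahydroxyborate(ph):5.1f}% as B(OH)4-")
```

At neutral pH only a small fraction of the boron is present as the tetrahedral anion; complexation with mannitol traps that anion as the ester, effectively strengthening the acid, which is why the boric acid–mannitol system becomes titratable with NaOH.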
Tetrahydroxyborate
[ "Physics", "Chemistry" ]
688
[ "Ions", "Matter", "Anions" ]
10,787,946
https://en.wikipedia.org/wiki/Natural%20prolongation%20principle
The natural prolongation principle or principle of natural prolongation is a legal concept introduced in maritime claims submitted to the United Nations. The phrase denotes a concept of political geography and international law that a nation's maritime boundary should reflect the 'natural prolongation' of where its land territory reaches the coast. Oceanographic descriptions of the land mass under coastal waters became conflated and confused with criteria that are deemed relevant in border delimitation. The concept was developed in the process of settling disputes if the borders of adjacent nations were located on a contiguous continental shelf. An unresolved issue is whether a natural prolongation defined scientifically, without reference to equitable principles, is to be construed as a "natural prolongation" for the purpose of maritime border delimitation or maritime boundary disputes. History The phrase natural prolongation was established as a concept in the North Sea Continental Cases in 1969. The relevance and importance of natural prolongation as a factor in delimitation disputes and agreements has declined during the period in which international acceptance of UNCLOS III has expanded. The Malta/Libya Case in 1985 is marked as the eventual demise of the natural prolongation principle being used in delimiting between adjoining national maritime boundaries. The Bay of Bengal cases in the early 2010s (Bangladesh v Myanmar) and (Bangladesh v India) likewise dealt a blow to natural prolongation as the guiding principle for delimitation of the continental shelf more than 200 nautical miles beyond baselines. See also Equidistance principle References Sources Capaldo, Giuliana Ziccardi. (1995). Répertoire de la jurisprudence de la cour internationale de justice (1947-1992). Dordrecht: Martinus Nijhoff Publishers. ; ; ; OCLC 30701545 Dorinda G. Dallmeyer and Louis De Vorsey. (1989). Rights to Oceanic Resources: Deciding and Drawing Maritime Boundaries. Dordrecht: Martinus Nijhoff Publishers. ; OCLC 18981568 Francalanci, Giampiero; Tullio Scovazzi; and Daniela Romanò. (1994). Lines in the Sea. Dordrecht: Martinus Nijhoff Publishers. ; OCLC 30400059 Kaye, Stuart B. (1995). Australia's maritime boundaries. Wollongong, New South Wales: Centre for Maritime Policy (University of Wollongong). ; OCLC 38390208 Borders Maritime boundaries
Natural prolongation principle
[ "Physics" ]
506
[ "Spacetime", "Borders", "Space" ]
10,788,796
https://en.wikipedia.org/wiki/List%20of%20mass%20spectrometry%20software
Mass spectrometry software is used for data acquisition, analysis, or representation in mass spectrometry. Proteomics software In protein mass spectrometry, tandem mass spectrometry (also known as MS/MS or MS2) experiments are used for protein/peptide identification. Peptide identification algorithms fall into two broad classes: database search and de novo search. The former search takes place against a database containing all amino acid sequences assumed to be present in the analyzed sample. In contrast, the latter infers peptide sequences without knowledge of genomic data. Database search algorithms De novo sequencing algorithms De novo peptide sequencing algorithms are, in general, based on the approach proposed in Bartels et al. (1990). Homology searching algorithms MS/MS peptide quantification Other software See also Mass spectrometry data format: for a list of mass spectrometry data viewers and format converters. List of protein structure prediction software References External links List Proteomics Lists of bioinformatics software
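To make the database-search idea concrete, the toy sketch below matches a list of observed peptide masses against precomputed theoretical masses for a few candidate proteins and ranks the proteins by the number of matches within a mass tolerance. Every name and number is invented; real engines listed above use far more sophisticated, probability-based scoring.

```python
def count_matches(observed, theoretical, tol_da=0.5):
    """Number of observed masses within tol_da of any theoretical peptide mass."""
    return sum(any(abs(o - t) <= tol_da for t in theoretical) for o in observed)

def rank_proteins(observed, database, tol_da=0.5):
    """Rank candidate proteins by how many observed peptide masses they explain."""
    scores = {name: count_matches(observed, masses, tol_da)
              for name, masses in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented example data: theoretical peptide masses (Da) per candidate protein.
database = {
    "protein_A": [842.5, 1045.6, 1296.7, 1511.8],
    "protein_B": [901.4, 1174.6, 1399.7],
}
observed = [842.6, 1296.6, 1511.9, 1600.2]

print(rank_proteins(observed, database))   # protein_A explains 3 of 4 masses
```

De novo algorithms, by contrast, infer the peptide sequence directly from fragment-mass differences in the spectrum rather than from a lookup against such a database.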
List of mass spectrometry software
[ "Physics", "Chemistry" ]
208
[ "Mass spectrometry software", "Mass spectrometry", "Spectrum (physical sciences)", "Chemistry software" ]
10,788,804
https://en.wikipedia.org/wiki/Mascot%20%28software%29
Mascot is a software search engine that uses mass spectrometry data to identify proteins from peptide sequence databases. Mascot is widely used by research facilities around the world. Mascot uses a probabilistic scoring algorithm for protein identification that was adapted from the MOWSE algorithm. Mascot is freely available to use on the website of Matrix Science. A license is required for in-house use where more features can be incorporated. History means MOWSE was one of the first algorithms developed for protein identification using peptide mass fingerprinting. It was originally developed in 1993 as a collaboration between Darryl Pappin of the Imperial Cancer Research Fund (ICRF) and Alan Bleasby of the Science and Engineering Research Council (SERC). MOWSE stood apart from other protein identification algorithms in that it produced a probability-based score for identification. It was also the first to take into account the non-uniform distribution of peptide sizes, caused by the enzymatic digestion of a protein that is needed for mass spectrometry analysis. However, MOWSE was only applicable to peptide mass fingerprint searches and was dependent on pre-compiled databases which were inflexible with regard to post-translational modifications and enzymes other than trypsin. To overcome these limitations, to take advantage of multi-processor systems and to add non-enzymatic search functionality, development was begun again from scratch by David Perkins at the Imperial Cancer Research Fund. The first versions were developed for Silicon Graphics Irix and Digital Unix systems. Eventually this software was named Mascot and to reach a wider audience, an external bioinformatics company named Matrix Science was created by David Creasy and John Cottrell to develop and distribute Mascot. Legacy software versions exist for Tru64, Irix, AIX, Solaris, Microsoft Windows NT4 and Microsoft Windows 2000. Mascot has been available as a free service on the Matrix Science website since 1999 and has been cited in scientific literature over 5,000 times. Matrix Science still continues to work on improving Mascot’s functionality. Applications Mascot identifies proteins by interpreting mass spectrometry data. The prevailing experimental method for protein identification is a bottom-up approach, where a protein sample is typically digested with trypsin to form smaller peptides. While most proteins are too large, peptides usually fall within the limited mass range that a typical mass spectrometer can measure. Mass spectrometers measure the molecular weights of peptides in a sample. Mascot then compares these molecular weights against a database of known peptides. The program cleaves every protein in the specified search database in silico according to specific rules depending on the cleavage enzyme used for digestion and calculates the theoretical mass for each peptide. Mascot then computes a score based on the probability that the peptides from a sample match those in the selected protein database. The more peptides Mascot identifies from a particular protein, the higher the Mascot score for that protein. Features Peptide Mass Fingerprint search Identifies proteins from an uploaded peak list using a technique known as peptide mass fingerprinting. Sequence query Combines peptide mass data with amino acid sequence and composition information usually obtained from MS/MS tandem mass spectrometry data. Based on the peptide sequence tag approach. 
MS/MS Ion Search Identify fragment ions from uninterpreted MS/MS data of one or more peptides. The software processes data from mass spectrometers of the following companies: AB Sciex Agilent Technologies Bruker Shimadzu Corp. Thermo Fisher Scientific Waters Corporation Important parameters Modifications can be specified as fixed or variable. Fixed modifications are applied universally to every amino acid residue of the specified type or to the N-terminus or C-terminus of the peptide. The mass for the modification is added to each of the respective residues. When variable modifications are specified the program tries to match every different combination of amino acid residues with and without modification. This can increase the number of comparisons dramatically and lead to lower scores and longer search time. By setting a taxonomy, a search can be restricted to certain species or groups of species. This will reduce search time and ensure that only relevant protein hits are included. Scoring Mascot’s fundamental approach to identifying peptides is to calculate the probability whether an observed match between experimental data and peptide sequences found in a reference database has occurred by chance. The match with the lowest probability of occurring by chance is returned as the most significant match. The significance of the match depends on the size of the database that is being queried. Mascot employs the widely used significance level of 0.05, meaning that in a single test the probability of observing an event at random is less than or equal to 1 in 20. In this light, a score of 10−5 might seem very promising. However, if the database being searched contains 106 sequences several scores of this magnitude would be expected by chance alone because the algorithm carried out 106 individual comparisons. For a database of that size, by applying a Bonferroni correction to account for multiple comparisons, the significance threshold drops to 5*10−8. In addition to the calculated peptide scores, Mascot also estimates the False Discovery Rate (FDR) by searching against a decoy database. When performing a decoy search, Mascot generates a randomized sequence of the same length for every sequence in the target database. The decoy sequence is generated such that it has the same average amino acid composition as the target database. The FDR is estimated as the ratio of decoy database matches to target database matches. This relates to the standard formula FDR = FP / (FP + TP), where FP are false positives and TP are true positives. The decoy matches are certain to be spurious identifications, but we can't discriminate between true and false positives identified in the target database. FDR estimation was added in response to journals' guidelines on protein identification reports like the ones from Molecular and Cellular Proteomics. Mascot's FDR calculation incorporates ideas from different publications. Alternatives The most common alternative database search programs are listed in the Mass spectrometry software article. The performance of a variety of mass spectrometry software, including Mascot, can be observed in the 2011 iPRG study. Genome-based peptide fingerprint scanning is another method that compares the peptide fingerprints to the entire genome instead of only annotated genes. References Bioinformatics software Mass spectrometry software Proteomic sequencing
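Two of the statistical ideas in the Scoring section above – scaling the 0.05 significance level with the number of candidate sequences compared, and estimating the false discovery rate from a target-decoy search – can be reproduced in a few lines. The counts below are invented for illustration, and these helper functions are not part of Mascot itself.

```python
# (1) Adjust the 0.05 significance level for the number of candidate sequences
#     compared (a Bonferroni-style correction), and
# (2) estimate the false discovery rate from a target-decoy search.

def corrected_threshold(alpha, n_candidates):
    """Per-match probability threshold after correcting for multiple comparisons."""
    return alpha / n_candidates

def decoy_fdr(n_target_matches, n_decoy_matches):
    """Decoy matches stand in for false positives among the target matches,
    so FDR is estimated as decoy / target (cf. FDR = FP / (FP + TP))."""
    return n_decoy_matches / n_target_matches

print(corrected_threshold(0.05, 1_000_000))     # 5e-08, as quoted in the text
print(f"estimated FDR: {100 * decoy_fdr(4200, 42):.1f}%")   # invented match counts
```

Reporting such an FDR alongside peptide scores is what journal guidelines, such as those of Molecular and Cellular Proteomics, ask for in protein identification reports.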
Mascot (software)
[ "Physics", "Chemistry", "Biology" ]
1,319
[ "Spectrum (physical sciences)", "Chemistry software", "Bioinformatics software", "Proteomic sequencing", "Bioinformatics", "Molecular biology techniques", "Mass spectrometry software", "Mass spectrometry" ]
10,789,014
https://en.wikipedia.org/wiki/C4H8
The molecular formula C4H8 (molar mass: 56.11 g/mol) may refer to: Butenes (butylenes) 1-Butene, or 1-butylene 2-Butene Isobutylene Cyclobutane Methylcyclopropane Molecular formulas
C4H8
[ "Physics", "Chemistry" ]
77
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
10,793,106
https://en.wikipedia.org/wiki/Trachealis%20muscle
The trachealis muscle is a sheet of smooth muscle in the trachea. Structure The trachealis muscle lies posterior to the trachea and anterior to the oesophagus. It bridges the gap between the free ends of C-shaped rings of cartilage at the posterior border of the trachea, adjacent to the oesophagus. This completes the ring of cartilages of the trachea. The trachealis muscle also supports a thin cartilage on the inside of the trachea. It is the only smooth muscle present in the trachea. Function The primary function of the trachealis muscle is to constrict the trachea, allowing air to be expelled with more force, such as during coughing. Clinical significance Tracheomalacia may involve hypotonia of the trachealis muscle. The trachealis muscle may become stiffer during ageing, which makes the whole trachea less elastic. In infants, the insertion of an oesophagogastroduodenoscope into the oesophagus may compress the trachealis muscle, and narrow the trachea. This can result in reduced airflow to the lungs. Infants may be intubated to make sure that the trachea is fixed open. See also Muscles of respiration References Respiratory system Respiration
Trachealis muscle
[ "Biology" ]
285
[ "Organ systems", "Respiratory system" ]
14,427,315
https://en.wikipedia.org/wiki/GPR12
Probable G-protein coupled receptor 12 is a protein that in humans is encoded by the GPR12 gene. The gene product of GPR12 is an orphan receptor, meaning that its endogenous ligand is currently unknown. Gene disruption of GPR12 in mice results in dyslipidemia and obesity. Ligands Inverse agonists Cannabidiol Evolution Paralogues Source: GPR6 GPR3 S1PR5 CNR1 CNR2 MC4R S1PR1 MC3R MC2R S1PR2 MC1R S1PR3 LPAR2 MC5R LPAR1 S1PR4 LPAR3 GPR119 References Further reading G protein-coupled receptors
GPR12
[ "Chemistry" ]
149
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,342
https://en.wikipedia.org/wiki/GPR15
G protein-coupled receptor 15 is a protein that in humans is encoded by the GPR15 gene. GPR15 is a class A orphan G protein-coupled receptor (heterotrimeric guanine nucleotide-binding protein, GPCR). The GPR15 gene is localized at chromosome 3q11.2-q13.1. It is found in epithelial cells, synovial macrophages, endothelial cells and lymphocytes, especially T cells. A molecular weight of 40.8 kDa is predicted for GPR15 from its mRNA sequence. In an epithelial tumour cell line (HT-29), however, a 36 kDa band, composed of GPR15 and galactosyl ceramide, was detected. Protein expression in lymphocytes is strongly associated with hypomethylation of its gene.

Tissue distribution
High gene expression has been described for colonic mucosa, small bowel mucosa, liver and spleen. Moderate gene expression was found in blood, lymph node, thymus, testis and prostate. In peripheral blood, GPR15 is mainly found on T cells, especially on CD4+ T helper cells, and less prominently on B cells. By immunohistochemistry, GPR15 is found specifically in glandular cells of the stomach, α-cells of the islets of Langerhans in the pancreas, the surface epithelium of the small intestine and colon, hepatocytes in the liver, the tubular epithelium of the kidney, and in diverse tumour tissues such as glioblastoma, melanoma, small cell lung carcinoma and colon carcinoma.

Function
The overall physiological role remains elusive. GPR15 appears to play a role in the homing of particular T cell types to the colon. In humans, GPR15, together with α4β7 integrin, controls the homing of effector T cells to the inflamed gut in ulcerative colitis. The mechanisms by which GPR15-expressing immune cells home to the colon differ between humans and rodents such as the mouse.

Ligands
At least two endogenous ligands have recently been found. One ligand, encoded by the human gene GPR15LG, was identified as a robust marker for psoriasis whose abundance decreased after therapeutic treatment with an anti-interleukin-17 antibody. Transcripts of GPR15LG are abundant in the cervix and colon. It is currently unknown whether GPR15LG causes disease symptoms or is the consequence of a disturbed epithelial barrier. It does not act as a chemotactic agent but rather decreases T cell migration, suggesting a mechanism of heterologous receptor desensitization. The second ligand is a fragment of thrombomodulin that exerts an anti-inflammatory function in mice.

Clinical significance
Human GPR15 was originally cloned as a co-receptor for HIV and the simian immunodeficiency virus. HIV-induced activation of GPR15 in enterocytes seems to cause HIV enteropathy, accompanied by diarrhea and lipid malabsorption. In inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis, the proportion of GPR15-expressing cells among regulatory T cells in peripheral blood is slightly increased. In mice, GPR15-deficient animals were prone to developing severe inflammation of the large intestine, which was rescued by the transfer of GPR15-sufficient Tregs.

Lifestyle
Chronic tobacco smoking is a very strong inducer of GPR15-expressing T cells in peripheral blood. Although the proportion of GPR15-expressing cells among T cells in peripheral blood is a highly sensitive and specific biomarker for chronic tobacco smoking, it does not indicate disturbed homeostasis in the lung.

References
Further reading
G protein-coupled receptors
GPR15
[ "Chemistry" ]
811
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,402
https://en.wikipedia.org/wiki/MPP%2B
{{DISPLAYTITLE:MPP+}} MPP+ (1-methyl-4-phenylpyridinium) is a positively charged organic molecule with the chemical formula C12H12N+. It is a monoaminergic neurotoxin that acts by interfering with oxidative phosphorylation in mitochondria by inhibiting complex I, leading to the depletion of ATP and eventual cell death. MPP+ arises in the body as the toxic metabolite of the closely related compound MPTP. MPTP is converted in the brain into MPP+ by the enzyme MAO-B, ultimately causing parkinsonism in primates by killing certain dopamine-producing neurons in the substantia nigra. The ability for MPP+ to induce Parkinson's disease has made it an important compound in Parkinson's research since this property was discovered in 1983. The chloride salt of MPP+ found use in the 1970s as an herbicide under the trade name cyperquat. Though no longer in use as an herbicide, cyperquat's closely related structural analog paraquat still finds widespread usage, raising some safety concerns. History MPP+ has been known since at least the 1920s, with a synthesis of the compound being published in a German chemistry journal in 1923. Its neurotoxic effects, however, were not known until much later, with the first paper definitively identifying MPP+ as a Parkinson's-inducing poison being published in 1983. This paper followed a string of poisonings that took place in San Jose, California in 1982 in which users of an illicitly synthesized analog of meperidine were presenting to hospital emergency rooms with symptoms of Parkinson's. Since most of the patients were young and otherwise healthy and Parkinson's disease tends to afflict people at a much older age, researchers at the hospital began to scrutinize the illicitly synthesized opiates that the patients had ingested. The researchers discovered that the opiates were tainted with MPTP, which is the biological precursor to the neurotoxic MPP+. The MPTP was present in the illicitly synthesized meperidine analog as an impurity, which had a precedent in a 1976 case involving a chemistry graduate student synthesizing meperidine and injecting the resulting product into himself. The student came down with symptoms of Parkinson's disease, and his synthesized product was found to be heavily contaminated with MPTP. The discovery that MPP+ could reliably and irreversibly induce Parkinson's disease in mammals reignited interest in Parkinson's research, which had previously been dormant for decades. Following the revelation, MPP+ and MPTP sold out in virtually all chemical catalogs, reappearing months later with a 100-fold price increase. Synthesis Laboratory MPP+ can be readily synthesized in the laboratory, with Zhang and colleagues publishing a representative synthesis in 2017. The synthesis involves reacting 4-phenylpyridine with methyl iodide in acetonitrile solvent at reflux for 24 hours. An inert atmosphere is used to ensure a quantitative yield. The product is formed as the iodide salt, and the reaction proceeds via an SN2 pathway. The industrial synthesis of MPP+ for sale as the herbicide cyperquat used methyl chloride as the source of the methyl group. Biological MPP+ is produced in vivo from the precursor MPTP. The process involves two successive oxidations of the molecule by monoamine oxidase B to form the final MPP+ product. This metabolic process occurs predominantly in astrocytes in the brain. 
Mechanism of toxicity MPP+ exhibits its toxicity mainly by promoting the formation of reactive free radicals in the mitochondria of dopaminergic neurons in the substantia nigra. MPP+ can siphon electrons from the mitochondrial electron transport chain at complex I and be reduced, in the process forming radical reactive oxygen species which go on to cause further, generalized cellular damage. In addition, the overall inhibition of the electron transport chain eventually leads to stunted ATP production and eventual death of the dopaminergic neurons, which ultimately displays itself clinically as symptoms of Parkinson's disease. MPP+ also displays toxicity by inhibiting the synthesis of catecholamines, reducing levels of dopamine and cardiac norepinephrine, and inactivating tyrosine hydroxylase. The mechanism of uptake of MPP+ is important to its toxicity. MPP+ injected as an aqueous solution into the bloodstream causes no symptoms of Parkinsonism in test subjects, since the highly charged molecule is unable to diffuse through the blood-brain barrier. Furthermore, MPP+ shows little toxicity to cells other than dopaminergic neurons, suggesting that these neurons have a unique process by which they can uptake the molecule, since, being charged, MPP+ cannot readily diffuse across the lipid bilayer that composes cellular membranes. Unlike MPP+, its common biological precursor MPTP is a lipid-soluble molecule that diffuses readily across the blood-brain barrier. MPTP itself is not cytotoxic, however, and must be metabolized to MPP+ by MAO-B to show any signs of toxicity. The oxidation of MPTP to MPP+ is a process that can be catalyzed only by MAO-B, and cells that express other forms of MAO do not show any MPP+ production. Studies in which MAO-B was selectively inhibited showed that MPTP had no toxic effect, further cementing the crucial role of MAO-B in MPTP and MPP+ toxicity. Studies in rats and mice show that various compounds, including nobiletin, a flavonoid found in citrus, can rescue dopaminergic neurons from degeneration caused by treatment with MPP+. The specific mechanism of protection, however, remains unknown. Uses In scientific research MPP+ and its precursor MPTP are widely used in animal models of Parkinson's disease to irreversibly induce the disease. Excellent selectivity and dose control can be achieved by injecting the compound directly into cell types of interest. Most modern studies use rats as a model system, and much research is directed at identifying compounds that can attenuate or reverse the effects of MPP+. Commonly studied compounds include various MAO inhibitors and general antioxidants. While some of these compounds are quite effective at stopping the neurotoxic effects of MPP+, further research is needed to establish their potential efficacy in treating clinical Parkinson's. The revelation that MPP+ causes the death of dopaminergic neurons and ultimately induces symptoms of Parkinson's disease was crucial in establishing the lack of dopamine as central to Parkinson's disease. Levodopa or L-DOPA came into common use as an anti-Parkinson's medication thanks to the results brought about by research using MPP+. Further medications are in trial to treat the progression of the disease itself as well as the motor and non-motor symptoms associated with Parkinson's, with MPP+ still being widely used in early trials to test efficacy. 
As a pesticide MPP+, sold as the chloride salt under the brand name cyperquat, was used briefly in the 1970s as an herbicide to protect crops against nutsedge, a member of the cyperus genus of plants. MPP+ as a salt has much lower acute toxicity than its precursor MPTP due to the inability of the former to pass through the blood-brain barrier and ultimately access the only cells that will permit its uptake, the dopaminergic neurons. While cyperquat is no longer used as an herbicide, a closely related compound named paraquat is. Given the structural similarities, some have raised concerns about paraquat's active use as an herbicide for those handling it. However, studies have shown paraquat to be far less neurotoxic than MPP+, since paraquat does not bind to complex I in the mitochondrial electron transport chain, and thus its toxic effects cannot be realized. Safety MPP+ is commonly sold as the water-soluble iodide salt and is a white-to-beige powder. Specific toxicological data on the compound is somewhat lacking, but one MSDS quotes an LD50 of 29 mg/kg via an intraperitoneal route and 22.3 mg/kg via a subcutaneous route of exposure. Both values come from a mouse model system. MPP+ encountered in the salt form is far less toxic by ingestion, inhalation, and skin exposure than its biological precursor MPTP, due to the inability of MPP+ to cross the blood-brain barrier and freely diffuse across cellular membranes. There is no specific antidote to MPP+ poisoning. Clinicians are advised to treat exposure symptomatically. References Herbicides Human drug metabolites Human pathological metabolites Monoaminergic neurotoxins Pyridinium compounds
MPP+
[ "Chemistry", "Biology" ]
1,882
[ "Chemicals in medicine", "Biocides", "Herbicides", "Human drug metabolites" ]
14,430,019
https://en.wikipedia.org/wiki/Landau%E2%80%93Zener%20formula
The Landau–Zener formula is an analytic solution to the equations of motion governing the transition dynamics of a two-state quantum system, with a time-dependent Hamiltonian varying such that the energy separation of the two states is a linear function of time. The formula, giving the probability of a diabatic (not adiabatic) transition between the two energy states, was published separately by Lev Landau, Clarence Zener, Ernst Stueckelberg, and Ettore Majorana, in 1932. If the system starts, in the infinite past, in the lower energy eigenstate, we wish to calculate the probability of finding the system in the upper energy eigenstate in the infinite future (a so-called Landau–Zener transition). For infinitely slow variation of the energy difference (that is, a Landau–Zener velocity of zero), the adiabatic theorem tells us that no such transition will take place, as the system will always be in an instantaneous eigenstate of the Hamiltonian at that moment in time. At non-zero velocities, transitions occur with a probability described by the Landau–Zener formula.

Conditions and approximation
Such transitions occur between states of the entire system; hence any description of the system must include all external influences, including collisions and external electric and magnetic fields. In order that the equations of motion for the system might be solved analytically, a set of simplifications are made, known collectively as the Landau–Zener approximation. The simplifications are as follows:
The perturbation parameter in the Hamiltonian is a known, linear function of time
The energy separation of the diabatic states varies linearly with time
The coupling in the diabatic Hamiltonian matrix is independent of time
The first simplification makes this a semi-classical treatment. In the case of an atom in a magnetic field, the field strength becomes a classical variable which can be precisely measured during the transition. This requirement is quite restrictive, as a linear change will not, in general, be the optimal profile to achieve the desired transition probability. The second simplification allows us to make the substitution $E_2(t) - E_1(t) \equiv \Delta E(t) = \alpha t$, where $E_1(t)$ and $E_2(t)$ are the energies of the two states at time $t$, given by the diagonal elements of the Hamiltonian matrix, and $\alpha$ is a constant. For the case of an atom in a magnetic field this corresponds to a linear change in magnetic field; for a linear Zeeman shift this follows directly from point 1. The final simplification requires that the time-dependent perturbation does not couple the diabatic states; rather, the coupling must be due to a static deviation from a Coulomb potential, commonly described by a quantum defect.

Formula
The details of Zener's solution are somewhat opaque, relying on a set of substitutions to put the equation of motion into the form of the Weber equation and using the known solution. A more transparent solution is provided by Curt Wittig using contour integration. The key figure of merit in this approach is the Landau–Zener velocity,
$v_\mathrm{LZ} = \frac{\partial |E_2 - E_1| / \partial t}{\partial |E_2 - E_1| / \partial q} \approx \frac{\mathrm{d}q}{\mathrm{d}t},$
where $q$ is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and $E_1$ and $E_2$ are the energies of the two diabatic (crossing) states. A large $v_\mathrm{LZ}$ results in a large diabatic transition probability and vice versa.
Using the Landau–Zener formula, the probability $P_D$ of a diabatic transition is given by
$P_D = e^{-2\pi\Gamma}, \qquad \Gamma = \frac{a^2/\hbar}{\left|\frac{\partial}{\partial t}(E_2 - E_1)\right|} = \frac{a^2}{\hbar|\alpha|},$
where the quantity $a$ is the off-diagonal element of the two-level system's Hamiltonian coupling the bases, and as such it is half the distance between the two unperturbed eigenenergies at the avoided crossing, when $E_1 = E_2$. A brief numerical illustration of this formula is given at the end of this article.

Multistate problem
The simplest generalization of the two-state Landau–Zener model is a multistate system with a Hamiltonian of the form $H(t) = A + Bt$, where A and B are Hermitian N×N matrices with time-independent elements. The goal of the multistate Landau–Zener theory is to determine elements of the scattering matrix and the transition probabilities between states of this model after evolution with such a Hamiltonian from negative infinite to positive infinite time. The transition probabilities are the absolute values squared of the scattering matrix elements.
There are exact formulas, called hierarchy constraints, that provide analytical expressions for special elements of the scattering matrix in any multi-state Landau–Zener model. Special cases of these relations are known as the Brundobler–Elser (BE) formula and the no-go theorem. Discrete symmetries often lead to constraints that reduce the number of independent elements of the scattering matrix. There are also integrability conditions that, when they are satisfied, lead to exact expressions for the entire scattering matrices in multistate Landau–Zener models. Numerous such completely solvable models have been identified, including:
Demkov–Osherov model, which describes a single level that crosses a band of parallel levels. A surprising fact about the solution of this model is the coincidence of the exactly obtained transition probability matrix with its form obtained with a simple semiclassical independent crossing approximation. With some generalizations, this property appears in almost all solvable Landau–Zener systems with a finite number of interacting states.
Generalized bow-tie model. The model describes coupling of two (or one in the degenerate case limit) levels to a set of otherwise noninteracting diabatic states that cross at a single point.
Driven Tavis–Cummings model, which describes the interaction of N spins-½ with a bosonic mode in a linearly time-dependent magnetic field. This is the richest known solved system. It has combinatorial complexity: the dimension of its state vector space grows exponentially with the number of spins N. The transition probabilities in this model are described by the q-deformed binomial statistics. This solution has found practical applications in the physics of Bose–Einstein condensates.
Spin clusters interacting with time-dependent magnetic fields. This class of models shows relatively complex behavior of the transition probabilities due to path interference effects in the semiclassical independent crossing approximation.
Reducible (or composite) multistate Landau–Zener models. This class consists of systems that can be decoupled into subsets of other solvable and simpler models by a symmetry transformation. The notable example is an arbitrary spin Hamiltonian $H = b t S_z + g S_x$, where $S_z$ and $S_x$ are spin operators and S > 1/2; b and g are constant parameters. This is the earliest known solvable system, which was discussed by Majorana in 1932. Among the other examples are models of a pair of degenerate level crossings, and the 1D quantum Ising chain in a linearly changing magnetic field.
Landau–Zener transitions in infinite linear chains. This class contains systems with a formally infinite number of interacting states.
Although most of their known instances can be obtained as limits of the finite-size models (such as the Tavis–Cummings model), there are also cases that do not belong to this classification. For example, there are solvable infinite chains with nonzero couplings between non-nearest states.

Study of noise
Applications of the Landau–Zener solution to the problems of quantum state preparation and manipulation with discrete degrees of freedom have stimulated the study of noise and decoherence effects on the transition probability in a driven two-state system. Several compact analytical results have been derived to describe these effects, including the Kayanuma formula for strong diagonal noise, and the Pokrovsky–Sinitsyn formula for the coupling to fast colored noise with off-diagonal components. Using the Schwinger–Keldysh Green's function, a rather complete and comprehensive study of the effect of quantum noise in all parameter regimes was performed by Ao and Rammer in the late 1980s, from weak to strong coupling, low to high temperature, slow to fast passage, etc. Concise analytical expressions were obtained in various limits, showing the rich behavior of the problem. The effects of nuclear spin bath and heat bath coupling on the Landau–Zener process were explored by Sinitsyn and Prokof'ev and by Pokrovsky and Sun, respectively. Exact results in multistate Landau–Zener theory (the no-go theorem and the BE formula) can be applied to Landau–Zener systems which are coupled to baths composed of infinitely many oscillators and/or to spin baths (dissipative Landau–Zener transitions). They provide exact expressions for transition probabilities averaged over final bath states if the evolution begins from the ground state at zero temperature, both for oscillator baths and, as universal results, for spin baths.

See also
Nonadiabatic transition state theory
Adiabatic theorem
Bond softening
Bond hardening
Froissart–Stora equation

References
Quantum mechanics Lev Landau
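As the brief numerical illustration promised above, the sketch below evaluates the diabatic transition probability P = exp(−2πΓ) for a few parameter values; the coupling a and the sweep rate α are made-up numbers chosen only to show that faster sweeps favour the diabatic outcome.

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def landau_zener_probability(a, alpha):
    """Diabatic transition probability P = exp(-2*pi*Gamma) with
    Gamma = a**2 / (hbar * |alpha|), where a is the coupling (in J) and
    alpha is the sweep rate of the diabatic energy separation (in J/s)."""
    gamma = a**2 / (HBAR * abs(alpha))
    return math.exp(-2.0 * math.pi * gamma)

# Hypothetical values: coupling of 1e-23 J and three sweep rates.
for alpha in (1e-12, 1e-11, 1e-10):
    print(f"alpha = {alpha:g} J/s -> P_diabatic = {landau_zener_probability(1e-23, alpha):.3f}")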
Landau–Zener formula
[ "Physics" ]
1,803
[ "Theoretical physics", "Quantum mechanics" ]
14,431,176
https://en.wikipedia.org/wiki/Photochemical%20Reflectance%20Index
The Photochemical Reflectance Index (PRI) is a reflectance measurement developed by John Gamon during his tenure as a postdoctoral fellow supervised by Christopher Field at the Carnegie Institution for Science at Stanford University. The PRI is sensitive to changes in carotenoid pigments (e.g. xanthophyll pigments) in live foliage. Carotenoid pigments are indicative of photosynthetic light use efficiency, or the rate of carbon dioxide uptake by foliage per unit energy absorbed. As such, the PRI is used in studies of vegetation productivity and stress. Because the PRI measures plant responses to stress, it can be used to assess general ecosystem health using satellite data or other forms of remote sensing. Applications include vegetation health in evergreen shrublands, forests, and agricultural crops prior to senescence.
PRI is defined by the following equation using reflectance (ρ) at 531 and 570 nm wavelength:
PRI = (ρ531 − ρ570) / (ρ531 + ρ570)
Some authors use the sign-reversed form
PRI = (ρ570 − ρ531) / (ρ570 + ρ531)
The values range from −1 to 1.

Sources
ENVI Users Guide
Gamon, J., Penuelas, J., and Field, C. (1992). A narrow-waveband spectral index that tracks diurnal changes in photosynthetic efficiency. Remote Sensing of Environment, 41, 35–44.
Drolet, G.G., Huemmrich, K.F., Hall, F.G., Middleton, E.M., Black, T.A., Barr, A.G., and Margolis, H.A. (2005). A MODIS-derived photochemical reflectance index to detect inter-annual variations in the photosynthetic light-use efficiency of a boreal deciduous forest. Remote Sensing of Environment, 98, 212–224.
Biophysics Botany Remote sensing 1992 introductions
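A minimal numerical sketch of the index follows; the reflectance values are hypothetical and the helper function is not part of any named software package.

def pri(rho_531, rho_570):
    """Photochemical Reflectance Index from reflectance at 531 nm and 570 nm;
    the result lies between -1 and 1."""
    return (rho_531 - rho_570) / (rho_531 + rho_570)

# Hypothetical leaf reflectances:
print(pri(0.04, 0.05))  # about -0.11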
Photochemical Reflectance Index
[ "Physics", "Biology" ]
362
[ "Plants", "Applied and interdisciplinary physics", "Biophysics", "Botany" ]
14,431,229
https://en.wikipedia.org/wiki/Outline%20of%20nanotechnology
The following outline is provided as an overview of and topical guide to nanotechnology: Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers. Branches of nanotechnology Green nanotechnology – use of nanotechnology to enhance the environmental-sustainability of processes currently producing negative externalities. It also refers to the use of the products of nanotechnology to enhance sustainability. Nanoengineering – practice of engineering on the nanoscale. Multi-disciplinary fields that include nanotechnology Nanobiotechnology – intersection of nanotechnology and biology. Ceramic engineering – science and technology of creating objects from inorganic, non-metallic materials. Materials science – interdisciplinary field applying the properties of matter to various areas of science and engineering. It investigates the relationship between the structure of materials at atomic or molecular scales and their macroscopic properties. Molecular engineering Contributing fields Nanoscience Nanoelectronics – use of nanotechnology on electronic components, including transistors so small that inter-atomic interactions and quantum mechanical properties need to be studied extensively. Nanomechanics – branch of nanoscience studying fundamental mechanical (elastic, thermal and kinetic) properties of physical systems at the nanometer scale. Nanophotonics – study of the behavior of light on the nanometer scale. Other contributing fields Calculus Chemistry Computer science Engineering Miniaturization Physics Protein engineering Quantum mechanics Self-organization Science Supramolecular chemistry Tissue engineering Robotics Medicine Risks of nanotechnology Implications of nanotechnology Health impact of nanotechnology Environmental impact of nanotechnology Regulation of nanotechnology Societal impact of nanotechnology Applications of nanotechnology Energy applications of nanotechnology Quantum computing – computation using quantum mechanical phenomena, such as superposition and entanglement, to perform data operations. List of nanotechnology applications Nanomaterials Nanomaterials – field that studies materials with morphological features on the nanoscale, and especially those that have special properties stemming from their nanoscale dimensions. Fullerenes and carbon forms Fullerene – any molecule composed entirely of carbon, in the form of a hollow sphere, ellipsoid, or tube. Fullerene spheres and tubes have applications in nanotechnology. 
Allotropes of carbon – Aggregated diamond nanorods – Buckypaper – Carbon nanofoam – Carbon nanotube – Nanoknot – Nanotube membrane – Fullerene chemistry – Bingel reaction – Endohedral hydrogen fullerene – Prato reaction – Endohedral fullerenes – Fullerite – Graphene – Graphene nanoribbon – Potential applications of carbon nanotubes – Timeline of carbon nanotubes – Nanoparticles and colloids Nanoparticle – Ceramics processing – Colloid – Colloidal crystal – Diamondoids – Fiveling - Nanocomposite – Nanostructure – Nanocages – Nanocomposite – Nanofabrics – Nanofiber – Nanofoam – Nanoknot – Nanomesh – Nanopillar – Nanopin film – Nanoring – Nanorod – Nanoshell – Nanotube – Quantum dot – Quantum heterostructure – Sculptured thin film – Nanomedicine Nanomedicine – Lab-on-a-chip – Nanobiotechnology – Nanosensor – Nanotoxicology – Molecular self-assembly Molecular self-assembly – DNA nanotechnology – DNA computing – DNA machine – DNA origami – Self-assembled monolayer – Supramolecular assembly – Nanoelectronics Nanoelectronics – Break junction – Chemical vapor deposition – Microelectromechanical systems (MEMS) Nanocircuits – Nanocomputer – Nanoelectromechanical systems (NEMS) Surface micromachining – Nanoelectromechanical relays Molecular electronics Molecular electronics – Nanolithography Nanolithography – Electron beam lithography – Ion-beam sculpting – Nanoimprint lithography – Photolithography – Scanning probe lithography – Molecular self-assembly – IBM Millipede – Molecular nanotechnology Molecular nanotechnology – Grey goo – Mechanosynthesis – Molecular assembler – Molecular modelling – Nanorobotics – Smartdust – Utility fog – Nanochondria – Programmable matter – Self reconfigurable – Self-replication – Devices Micromachinery – Nano-abacus – Nanomotor – Nanopore – Nanopore sequencing – Quantum point contact – Synthetic molecular motors – Carbon nanotube actuators – Microscopes and other devices Microscopy – Atomic force microscope – Electron microscopy - Scanning tunneling microscope – Scanning probe microscope – Sarfus – Notable organizations in nanotechnology List of nanotechnology organizations Government National Cancer Institute (US) National Institutes of Health (US) National Nanotechnology Initiative (US) Russian Nanotechnology Corporation (RU) Seventh Framework Programme (FP7) (EU) Advocacy and information groups American Chemistry Council (US) American Nano Society (US) Center for Responsible Nanotechnology (US) Foresight Institute (US) Project on Emerging Nanotechnologies (global) Manufacturers Cerion Nanomaterials, Metal / Metal Oxide / Ceramic Nanoparticles (US) OCSiAl, Carbon Nanotubes (Luxembourg) Notable figures in nanotechnology Phaedon Avouris - first electronic devices made out of carbon nanotubes Gerd Binnig - co-inventor of the scanning tunneling microscope Heinrich Rohrer - co-inventor of the scanning tunneling microscope Vicki Colvin Director for the Center for Biological and Environmental Nanotechnology, Rice University Eric Drexler - was the first to theorise about nanotechnology in depth and popularised the subject Richard Feynman - gave the first mention of some of the distinguishing concepts in a 1959 talk, entitled There's Plenty of Room at the Bottom Robert Freitas - nanomedicine theorist Andre Geim - Discoverer of 2-D carbon film called graphene Sumio Iijima - discoverer of carbon nanotube Harry Kroto - co-discoverer of buckminsterfullerene Akhlesh Lakhtakia - conceptualized sculptured thin films Ralph Merkle - nanotechnology theorist Carlo Montemagno - inventor 
ATP nanobiomechanical motor (UCLA) Erwin Wilhelm Müller - invented the field ion microscope, and the atom probe Chris Phoenix - co-founder of the Center for Responsible Nanotechnology Uri Sivan - set up and led the Russell Berrie Nanotechnology Research Institute at Technion in Israel Richard Smalley - co-discoverer of buckminsterfullerene Norio Taniguchi - coined the term "nano-technology" Mike Treder - co-founder of the Center for Responsible Nanotechnology Joseph Wang - pioneer in electrochemical sensors exploiting nanostructured materials; synthetic nanomotors Alex Zettl - Built the first molecular motor based on carbon nanotubes Russell M. Taylor II - co-director of the UNC CISMM Adriano Cavalcanti - nanorobot expert working at CAN Lajos P. Balogh - editor in chief of the Precision Nanomedicine journal Charles M. Lieber - pioneer on nanoscale materials (Harvard) See also Catalyst Macromolecule Mesh networking Monolayer Nanometer Nanosub NBI Knowledgebase Photonic crystal Potential well Quantum confinement Quantum tunneling Self-assembly Self-organization Technological singularity Place these History of nanotechnology List of nanotechnology organizations Nanotechnology in fiction Outline of nanotechnology Impact of nanotechnology Nanomedicine Nanotoxicology Green nanotechnology Health and safety hazards of nanomaterials Regulation of nanotechnology Nanomaterials Fullerenes Carbon nanotubes Nanoparticles Molecular self-assembly Self-assembled monolayer Supramolecular assembly DNA nanotechnology Nanoelectronics Molecular scale electronics Nanolithography Nanometrology Atomic force microscopy Scanning tunneling microscope Electron microscope Super resolution microscopy Nanotribology Molecular nanotechnology Molecular assembler Nanorobotics Mechanosynthesis Molecular engineering Further reading Engines of Creation, by Eric Drexler Nanosystems, by Eric Drexler Nanotechnology: A Gentle Introduction to the Next Big Idea by Mark and Daniel Ratner, There's Plenty of Room at the Bottom by Richard Feynman The challenges of nanotechnology by Claire Auplat References External links NanoTechMap The online exhibition of nanotechnology featuring over 4000 registered companies What is Nanotechnology? (A Vega/BBC/OU Video Discussion). Course on Introduction to Nanotechnology Nanex Project SAFENANO A nanotechnology initiative of the Institute of Occupational Medicine Glossary of Drug Nanotechnology Nanotechnology Nanotechnology
Outline of nanotechnology
[ "Materials_science", "Engineering" ]
1,810
[ "Nanotechnology", "Materials science" ]
14,433,872
https://en.wikipedia.org/wiki/PRKAR1A
cAMP-dependent protein kinase type I-alpha regulatory subunit is an enzyme that in humans is encoded by the PRKAR1A gene.

Function
cAMP is a signaling molecule important for a variety of cellular functions. cAMP exerts its effects by activating the cAMP-dependent protein kinase A (PKA), which transduces the signal through phosphorylation of different target proteins. The inactive holoenzyme of PKA is a tetramer composed of two regulatory and two catalytic subunits. cAMP causes the dissociation of the inactive holoenzyme into a dimer of regulatory subunits bound to four cAMP molecules and two free monomeric catalytic subunits. Four different regulatory subunits and three catalytic subunits of PKA have been identified in humans. The protein encoded by this gene is one of the regulatory subunits. This protein was found to be a tissue-specific extinguisher that downregulates the expression of seven liver genes in hepatoma x fibroblast hybrids. Three alternatively spliced transcript variants encoding the same protein have been observed.

Clinical significance
Functional null mutations in this gene cause Carney complex (CNC), an autosomal dominant multiple neoplasia syndrome. This gene can fuse to the RET protooncogene by gene rearrangement and form the thyroid tumor-specific chimeric oncogene known as PTC2. Mutation of PRKAR1A leads to the Carney complex, which is associated with multiple endocrine tumors.

Interactions
PRKAR1A has been shown to interact with: AKAP10, AKAP1, AKAP4, ARFGEF1, ARFGEF2, Grb2, MYO7A, PRKAR1B, and UBE2M.

See also
cAMP-dependent protein kinase

References
Further reading
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human cAMP-dependent protein kinase type I-alpha regulatory subunit (PRKAR1A)
Signal transduction
PRKAR1A
[ "Chemistry", "Biology" ]
417
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
14,434,713
https://en.wikipedia.org/wiki/Operation%20CHASE
Operation CHASE (an acronym for "Cut Holes And Sink 'Em") was a United States Department of Defense program for the disposal of unwanted munitions at sea from May 1964 until the early 1970s. Munitions were loaded onto ships to be scuttled once they were at least 250 miles (400 km) offshore. While most of the sinkings involved conventional weapons, four of them involved chemical weapons. The disposal site for the chemical weapons was a three-mile (5 km) area of the Atlantic Ocean between the coast of the U.S. state of Florida and the Bahamas. Other weapons were disposed of in various locations in the Atlantic and Pacific oceans. The CHASE program was preceded by the United States Army disposal of 8,000 short tons of mustard and lewisite chemical warfare gas aboard the scuttled SS William C. Ralston in April 1958. These ships were sunk by having Explosive Ordnance Disposal (EOD) teams open seacocks on the ship after they arrived at the disposal site. The typical Liberty ship sank about three hours after the seacocks were opened. Operations CHASE 1 The mothballed C-3 Liberty ship John F. Shafroth was taken from the National Defense Reserve Fleet at Suisun Bay and towed to the Concord Naval Weapons Station for stripping and loading. A major fraction of the munitions in CHASE 1 was Bofors 40 mm gun ammunition from the Naval Ammunition Depot at Hastings, Nebraska. CHASE 1 also included bombs, torpedo warheads, naval mines, cartridges, projectiles, fuzes, detonators, boosters, overage UGM-27 Polaris motors, and a quantity of contaminated cake mix an army court had ordered dumped at sea. Shafroth was sunk 47 miles (76 km) off San Francisco on 23 July 1964 with 9,799 tons of munitions. CHASE 2 Village was loaded with 7,348 short tons of munitions at the Naval Weapons Station Earle and towed to a deep-water dump site on 17 September 1964. There were three large and unexpected detonations five minutes after Village slipped beneath the surface. An oil slick and some debris appeared on the surface. The explosion registered on seismic equipment all over the world. Inquiries were received regarding seismic activity off the east coast of the United States, and the Office of Naval Research and Advanced Research Projects Agency expressed interest in measuring the differences between seismic shocks and underwater explosive detonations to detect underwater nuclear detonations then banned by treaty. CHASE 3 Coastal Mariner was loaded with 4040 short tons of munitions at the Naval Weapons Station Earle. The munitions included 512 tons of explosives. Four SOFAR bombs were packed in the explosives cargo hold with booster charges of 500 pounds (227 kg) of TNT to detonate the cargo at a depth of 1,000 feet (300 m). The United States Coast Guard issued a notice to mariners and the United States Department of Fish and Wildlife and the United States Bureau of Commercial Fisheries sent observers. The explosives detonated seventeen seconds after Coastal Mariner slipped below the surface on 14 July 1965. The detonation created a 600-foot (200 m) waterspout but was not deep enough to be recorded on seismic instruments. CHASE 4 Santiago Iglesias was loaded with 8,715 tons of munitions at the Naval Weapons Station Earle, rigged for detonation at 1,000 feet (300 m), and detonated 31 seconds after sinking on 16 September 1965. CHASE 5 Isaac Van Zandt was loaded with 8,000 tons of munitions (including 400 tons of high explosives) at the Naval Base Kitsap and rigged for detonation at 4,000 feet (1.2 km). 
On 23 May 1966 the tow cable parted en route to the planned disposal area. Navy tugs USS Tatnuck (ATA-195) and USS Koka (ATA-185) recovered the tow within six hours, but the location of sinking was changed by the delay. CHASE 6 Different sources describe CHASE 6 differently. Naval Institute Proceedings indicates Horace Greeley was loaded at the Naval Weapons Station Earle, rigged for detonation at 4,000 feet (1.2 km), and detonated on 28 July 1966. Other sources describe CHASE 6 as the Liberty ship Robert Louis Stevenson loaded with 2,000 tons of explosives at Naval Base Kitsap in July 1967 as part of the ONR and ARPA investigation to detect underwater nuclear tests. Robert Louis Stevenson failed to sink as rapidly as had been predicted and drifted into water too shallow to actuate the hydrostatic-pressure detonators. The tug Tatnuck involved in towing Robert Louis Stevenson was reported by Proceedings as towing Izaac Van Zandt a year earlier for CHASE 5. CHASE 7 Michael J. Monahan was loaded with overage UGM-27 Polaris motors at the Naval Weapons Station Charleston and sunk without detonation on 30 April 1967. CHASE 8 The first chemical weapons disposal via the program was in 1967 and designated CHASE 8. CHASE 8 disposed of mustard gas and GB-filled M-55 rockets. All of the cargo was placed aboard a merchant hulk (the S.S. Corporal Eric G. Gibson) and was then sunk in deep water off the continental shelf. CHASE 9 Eric G. Gibson was sunk on 15 June 1967. CHASE 10 CHASE 10 dumped 3,000 tons of United States Army nerve agent filled rockets encased in concrete vaults. The ship used was the LeBaron Russell Briggs. Public controversy delayed CHASE 10 disposal until August 1970. Public awareness of operation CHASE 10 was increased by mass media reporting following delivery of information from the Pentagon to the office of U.S. Representative Richard McCarthy in 1969. Both American television and print media followed the story with heavy coverage. In 1970, 58 separate reports were aired on the three major network news programs on NBC, ABC and CBS concerning Operation CHASE. Similarly, The New York Times included Operation CHASE coverage in 42 separate issues during 1970, 21 of those in the month of August. The publicity played a role in ending the practice of dumping chemical weapons at sea. CHASE 11 CHASE 11 occurred in June 1968 and disposed of United States Army GB and VX, all sealed in tin containers. CHASE 12 CHASE 12, in August 1968, again disposed of United States Army mustard agent and was numerically (although not chronologically) the final mission to dispose of chemical weapons. Aftermath Operation CHASE was exposed to the public during a time when the army, especially the Chemical Corps, was under increasing public criticism. CHASE was one of the incidents which led to the near disbanding of the Chemical Corps in the aftermath of the Vietnam War. Concerns were raised over the program's effect on the ocean environment as well as the risk of chemical weapons washing up on shore. The concerns led to the Marine Protection, Research, and Sanctuaries Act of 1972, which prohibited such future missions. After a treaty was drafted by the United Nations' London Convention in 1972, an international ban came into effect as well. See also Dugway sheep incident Operation Red Hat References Chase Chemical weapons demilitarization Ocean pollution Military projects of the United States Chase
Operation CHASE
[ "Chemistry", "Engineering", "Environmental_science" ]
1,434
[ "Ocean pollution", "Military projects", "Chemical weapons", "Chemical weapons demilitarization", "Water pollution", "Military projects of the United States" ]
14,436,317
https://en.wikipedia.org/wiki/Doubled%20haploidy
A doubled haploid (DH) is a genotype formed when haploid cells undergo chromosome doubling. Artificial production of doubled haploids is important in plant breeding. Haploid cells are produced from pollen or egg cells or from other cells of the gametophyte; then, by induced or spontaneous chromosome doubling, a doubled haploid cell is produced, which can be grown into a doubled haploid plant. If the original plant was diploid, the haploid cells are monoploid, and the term doubled monoploid may be used for the doubled haploids. Haploid organisms derived from tetraploids or hexaploids are sometimes called dihaploids (and the doubled dihaploids are, respectively, tetraploid or hexaploid). Conventional inbreeding procedures take six generations to achieve approximately complete homozygosity, whereas doubled haploidy achieves it in one generation. Dihaploid plants derived from tetraploid crop plants may be important for breeding programs that involve diploid wild relatives of the crops.

History
The first report of a haploid plant was published by Blakeslee et al. (1922) in Datura stramonium. Subsequently, haploids were reported in many other species. Guha and Maheshwari (1964) developed an anther culture technique for the production of haploids in the laboratory. Haploid production by wide crossing was reported in barley (Kasha and Kao, 1970) and tobacco (Burk et al., 1979). Tobacco, rapeseed, and barley are the most responsive species for doubled haploid production. Doubled haploid methodologies have now been applied to over 250 species.

Production of doubled haploids
Doubled haploids can be produced in vivo or in vitro. Haploid embryos are produced in vivo by parthenogenesis, pseudogamy, or chromosome elimination after wide crossing. The haploid embryo is rescued and cultured, and chromosome doubling produces doubled haploids. The in vitro methods include gynogenesis (ovary and flower culture) and androgenesis (anther and microspore culture). Androgenesis is the preferred method.
Another method of producing the haploids is wide crossing. In barley, haploids can be produced by wide crossing with the related species Hordeum bulbosum; fertilization is effected, but during the early stages of seed development the H. bulbosum chromosomes are eliminated, leaving a haploid embryo. In tobacco (Nicotiana tabacum), wide crossing with Nicotiana africana is widely used. When N. africana is used to pollinate N. tabacum, 0.25 to 1.42 percent of the progeny survive and can readily be identified as either F1 hybrids or maternal haploids. Although these percentages appear small, the vast yield of tiny seeds and the early death of most seedlings provide significant numbers of viable hybrids and haploids in relatively small soil containers. This method of interspecific pollination serves as a practical way of producing seed-derived haploids of N. tabacum, either as an alternative or a complementary method to anther culture.

Genetics of DH population
In the DH method only two types of genotypes occur for a pair of alleles, A and a, with the frequencies ½ AA and ½ aa, while in the diploid method three genotypes occur with the frequencies ¼ AA, ½ Aa, ¼ aa. Thus, if AA is the desirable genotype, the probability of obtaining this genotype is higher with the haploid method than with the diploid method. If n loci are segregating, the probability of obtaining the desirable genotype is (1/2)^n by the haploid method and (1/4)^n by the diploid method. Hence the efficiency of the haploid method is high when the number of genes concerned is large.
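The arithmetic in this comparison is simple to sketch; the snippet below only evaluates the two probabilities above for a few illustrative numbers of loci.

# Illustration of the probabilities discussed above: chance of recovering the
# fully desirable homozygous genotype at n independently segregating loci.

def p_desirable_dh(n):
    """Doubled haploid population: each locus is AA or aa with probability 1/2."""
    return 0.5 ** n

def p_desirable_f2(n):
    """Diploid (F2) population: AA occurs with probability 1/4 at each locus."""
    return 0.25 ** n

for n in (1, 5, 10):
    print(n, p_desirable_dh(n), p_desirable_f2(n))
# The advantage of the doubled haploid method grows rapidly as n increases.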
Studies were conducted comparing DH method and other conventional breeding methods and it was concluded that adoption of doubled haploidy does not lead to any bias of genotypes in populations, and random DHs were even found to be compatible to selected line produced by conventional pedigree method. Applications of DHs plant breeding Mapping quantitative trait loci Most of the economic traits are controlled by genes with small but cumulative effects. Although the potential of DH populations in quantitative genetics has been understood for some time, it was the advent of molecular marker maps that provided the impetus for their use in identifying loci controlling quantitative traits. As the quantitative trait loci (QTL) effects are small and highly influenced by environmental factors, accurate phenotyping with replicated trials is needed. This is possible with doubled haploidy organisms because of their true breeding nature and because they can conveniently be produced in large numbers. Using DH populations, 130 quantitative traits have been mapped in nine crop species. In total, 56 DH populations were used for QTL detection. Backcross breeding In backcross conversion, genes are introgressed from a donor cultivar or related species into a recipient elite line through repeated backcrossing. A problem in this procedure is being able to identify the lines carrying the trait of interest at each generation. The problem is particularly acute if the trait of interest is recessive, as it will be present only in a heterozygous condition after each backcross. The development of molecular markers provides an easier method of selection based on the genotype (marker) rather than the phenotype. Combined with doubled haploidy it becomes more effective. In marker assisted backcross conversion, a recipient parent is crossed with a donor line and the hybrid (F1) backcrossed to the recipient. The resulting generation (BC1) is backcrossed and the process repeated until the desired genotypes are produced. The combination of doubled haploidy and molecular marker provides the short cut. In the backcross generation one itself, a genotype with the character of interest can be selected and converted into homozygous doubled-haploid genotype. Chen et al. (1994) used marker assisted backcross conversion with doubled haploidy of BC1 individuals to select stripe rust resistant lines in barley. Bulked segregant analysis (BSA) In bulked segregant analysis, a population is screened for a trait of interest and the genotypes at the two extreme ends form two bulks. Then the two bulks are tested for the presence or absence of molecular markers. Since the bulks are supposed to contrast in the alleles that contribute positive and negative effects, any marker polymorphism between the two bulks indicates the linkage between the marker and trait of interest. BSA is dependent on accurate phenotyping and the DH population has particular advantage in that they are true breeding and can be tested repeatedly. DH populations are commonly used in bulked segregant analysis, which is a popular method in marker assisted breeding. This method has been applied mostly to rapeseed and barley. Genetic maps Genetic maps are very important to understand the structure and organization of genomes from which evolution patterns and syntenic relationships between species can be deduced. 
Genetic maps also provide a framework for the mapping of genes of interest and estimating the magnitude of their effects and aid our understanding of genotype/phenotype associations. DH populations have become standard resources in genetic mapping for species in which DHs are readily available. Doubled haploid populations are ideal for genetic mapping. It is possible to produce a genetic map within two years of the initial cross regardless of the species. Map construction is relatively easy using a DH population derived from a hybrid of two homozygous parents as the expected segregation ratio is simple, i.e. 1:1. DH populations have now been used to produce genetic maps of barley, rapeseed, rice, wheat, and pepper. DH populations played a major role in facilitating the generation of the molecular marker maps in eight crop species. Genetic studies Genetic ratios and mutation rates can be read directly from haploid populations. A small doubled haploid (DH) population was used to demonstrate that a dwarfing gene in barley is located chromosome 5H. In another study the segregation of a range of markers has been analyzed in barley. Genomics Although QTL analysis has generated a vast amount of information on gene locations and the magnitude of effects on many traits, the identification of the genes involved has remained elusive. This is due to poor resolution of QTL analysis. The solution for this problem would be production of recombinant chromosome substitution line, or stepped aligned recombinant inbred lines. Here, backcrossing is carried out until a desired level of recombination has occurred and genetic markers are used to detect desired recombinant chromosome substitution lines in the target region, which can be fixed by doubled haploidy. In rice, molecular markers have been found to be linked with major genes and QTLs for resistance to rice blast, bacterial blight, and sheath blight in a map produced from DH population. Elite crossing Traditional breeding methods are slow and take 10–15 years for cultivar development. Another disadvantage is inefficiency of selection in early generations because of heterozygosity. These two disadvantages can be over come by DHs, and more elite crosses can be evaluated and selected within less time. Cultivar development Uniformity is a general requirement of cultivated line in most species, which can be easily obtained through DH production. There are various ways in which DHs can be used in cultivar production. The DH lines themselves can be released as cultivars, they may be used as parents in hybrid cultivar production or more indirectly in the creation of breeders lines and in germplasm conservation. Barley has over 100 direct DH cultivars. According to published information there are currently around 300 DH derived cultivars in 12 species worldwide. The relevance of DHs to plant breeding has increased markedly in recent years owing to the development of protocols for 25 species. Doubled haploidy already plays an important role in hybrid cultivar production of vegetables, and the potential for ornamental production is being vigorously examined. DHs are also being developed in the medicinal herb Valeriana officinalis to select lines with high pharmacological activity. Another interesting development is that fertile homozygous DH lines can be produced in species that have self-incompatibility systems. Advantages of DHs The ability to produce homozygous lines after a single round recombination saves a lot of time for the plant breeders. 
Studies conclude that random DH’s are comparable to the selected lines in pedigree inbreeding. The other advantages include development of large number of homozygous lines, efficient genetic analysis and development of markers for useful traits in much less time. More specific benefits include the possibility of seed propagation as an alternative to vegetative multiplication in ornamentals, and in species such as trees in which long life cycles and inbreeding depression preclude traditional breeding methods, doubled haploidy provides new alternatives. Disadvantages of DHs The main disadvantage with the DH population is that selection cannot be imposed on the population. But in conventional breeding selection can be practised for several generations: thereby desirable characters can be improved in the population. In haploids produced from anther culture, it is observed that some plants are aneuploids and some are mixed haploid-diploid types. Another disadvantage associated with the double haploidy is the cost involved in establishing tissue culture and growth facilities. The over-usage of doubled haploidy may reduce genetic variation in breeding germplasm. Hence one has to take several factors into consideration before deploying doubled haploidy in breeding programmes. Conclusions Technological advances have now provided DH protocols for most plant genera. The number of species amenable to doubled haploidy has reached a staggering 250 in just a few decades. Response efficiency has also improved with gradual removal of species from recalcitrant category. Hence it will provide greater efficiency of plant breeding. Tutorials Doubled Haploids to Improve Winter Wheat Video : Doubled Haploids: A simple method to improve efficiency of maize breeding. References Ardiel, G.S., Grewal, T.S., Deberdt, P., Rossnagel, B.G., and Scoles, G.J. 2002. Inheritance of resistance to covered smut in barley and development of tightly linked SCAR marker. Theoretical and applied genetics 104:457-464. Blakelsee, A.F., Belling, J., Farhnam, M.E., and Bergner, A.D.1922. A haploid mutant in the Jimson weed, Datura stramonium. Science 55:646-647. Burk, L.G., Gerstel, D.U., and Wernsman, E.A. 1979. Maternal haploids of Nicotiana tabacum L. from seed. Science 206:585. Chen, F.Q., D.Prehn, P.M. Hayes, D.Mulrooney, A. Corey, and H.Vivar. 1994. Mapping genes for resistance to barley stripe rust (Puccinia striiformis f. sp. hordei). Theoretical and Applied Genetics. 88:215-219. Friedt, W., Breun, J., Zuchner, S., and Foroughi-Wehr, B. 1986. Comparative value of androgenetic doubled haploid and conventionally selected spring barley line. Plant Breeding 97:56-63. Guha, S., and Maheswari, S. C. 1964. In vitro production of embryos from anthers of Datura. Nature 204:497. Immonen, S., and H. Anttila. 1996. Success in rye anther culture. Vortr. Pflanzenzuchtg. 35:237-244. Kasha, K. J., and Kao, K. N. 1970. High frequency haploid production in barley (Hordeum vulgare L.). Nature 225: 874-876. Kearsey, M. J. 2002. QTL analysis: Problems and (possible) solutions. p. 45-58. In: M.S. Kang (ed.), Quantitative genetics, genomics and plant breeding. CABI Publ., CAB International. Maluszynski, M.., Kasha K. J., Forster, B.P., and Szarejko, I. 2003. Doubled haploid production in crop plants: A manual. Kluwer Academic Publ., Dordrecht, Boston, London. Paterson, A.H., Deverna, J.W., Lanin, B., and Tanksley, S. 1990. Fine mapping of quantitative trait loci using selected overlapping recombinant chromosomes in an interspecies cross of tomato. 
Genetics 124:735-741. Schon, C., M. Sanchez,T. Blake, and P.M. Hayes. 1990. Segregation of Mendelian markers in doubled haploid and F2 progeny of barley cross. Hereditas 113:69-72. Thomas, W. T. B., B. Gertson and B.P. Forster. 2003. Doubled haploids in breeding p. 337-350. in :M. Maluszynski, K.J. Kasha, B.P. Forster and I. Szarejko (eds)., Doubled haploid production in crop plants:A Manual. Kluwer Academic Publ., Dordrecht, Boston, London. Thomas, W.T.B., Newton, A.C., Wilson, A., Booth, A., Macaulay, M., and Keith, R. 2000. Development of recombinant chromosome substitution lines: A barley resource. SCRI Annual Report 1999/2000, 99-100. Thomas, W.T.B., Powell, W., and Wood, W. 1984. The chromosomal location of the dwarfing gene present in the spring barley variety Golden Promise. Heredity 53:177-183. Wang, Z., G. Taramino, D.Yang, G. Liu, S.V. Tingey, G.H. Miao, and G.L. Wang. 2001. Rice ESTs with disease-resistance gene or defense-response gene-like sequences mapped to regions containing major resistance genes or QTLs. Molecular Genetics and Genomics. 265:303-310. William, K.J., Taylor, S.P., Bogacki, P., Pallotta, M., Bariana, H.S., and Wallwork, H. 2002. Mapping of the root lesion nematode (Pratylenchus neglectus) resistance gene Rlnn 1 in wheat. Theoretical and applied genetics 104:874-879. Winzeler, H., Schmid, J., and Fried, P.M. 1987. Field performance of androgenetic doubled haploid spring wheat line in comparison with line selected by the pedigree system. Plant breeding 99:41-48. Yi, H.Y., Rufty, R.C., Wernsman, E.A., and Conkling, M.C. 1998. Mapping the root-knot nematode resistance gene (Rk) in tobacco with RAPD markers. Plant Disease 82:1319-1322. Plant breeding Genetics Plant genetics
Doubled haploidy
[ "Chemistry", "Biology" ]
3,702
[ "Genetics", "Plant genetics", "Plants", "Molecular biology", "Plant breeding" ]
11,760,070
https://en.wikipedia.org/wiki/NatCarb
The NatCarb geoportal provides access to geospatial information and tools concerning carbon sequestration in the United States. External links National Energy Technology Laboratory Carbon Sequestration Regional Partnerships References Carr, T.R., P.M. Rich, and J.D. Bartley. 2007. The NATCARB geoportal: linking distributed data from the Carbon Sequestration Regional Partnerships. Journal of Map and Geography Libraries (Geoscapes), "Special Issue on Department of Energy (DOE) Geospatial Science Innovations". In Press. Carbon capture and storage
NatCarb
[ "Engineering" ]
122
[ "Geoengineering", "Carbon capture and storage" ]
11,760,149
https://en.wikipedia.org/wiki/Angle%20of%20climb
In aerodynamics, climb gradient is the ratio of altitude gained to distance travelled over the ground, expressed as a percentage. The angle of climb can be defined as the angle between a horizontal plane representing the Earth's surface and the actual flight path followed by the aircraft during its ascent. The speed of an aircraft type at which the angle of climb is largest is called VX. It is always slower than VY, the speed for the best rate of climb. As the latter gives the quickest gain in altitude regardless of the distance covered during the maneuver, it is the speed normally used when climbing to cruise altitude. The maximum angle of climb, on the other hand, is achieved where the aircraft gains the most altitude in a given distance, regardless of the time needed for the maneuver. This is important for clearing an obstacle, and VX is therefore the speed a pilot uses when executing a "short field" takeoff. VX increases with altitude, and VY decreases with altitude, until they converge at the airplane's absolute ceiling. The best angle of climb (BAOC) airspeed for an airplane is the speed at which the maximum excess thrust is available. Excess thrust is the difference between the thrust output of the powerplant and the total drag of the aircraft. For a jet aircraft, this speed is very close to the speed at which minimum total drag occurs. See also Rate of climb References Aerodynamics
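The relationships above lend themselves to a quick numerical illustration. The following is a minimal sketch (not from the article), assuming a steady, unaccelerated climb in which the gradient is simply altitude gained divided by ground distance covered; the sample rate of climb and ground speed are arbitrary.

```python
import math

def climb_performance(rate_of_climb_fpm: float, ground_speed_kt: float):
    """Return (climb gradient in percent, climb angle in degrees) for a steady climb."""
    ground_speed_fpm = ground_speed_kt * 6076.12 / 60.0   # knots -> feet per minute over the ground
    gradient = rate_of_climb_fpm / ground_speed_fpm        # altitude gained / ground distance covered
    angle_deg = math.degrees(math.atan(gradient))          # angle between flight path and the horizontal
    return gradient * 100.0, angle_deg

# Hypothetical figures: 700 ft/min rate of climb at 80 kt ground speed
gradient_pct, angle = climb_performance(700.0, 80.0)
print(f"climb gradient = {gradient_pct:.1f} %, climb angle = {angle:.1f} degrees")
```

With these figures the aircraft gains roughly 86 ft per 1,000 ft of ground distance (a gradient of about 8.6%, or a climb angle near 5 degrees), which is the sense in which flying at VX maximises altitude gained per unit distance rather than per unit time.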
Angle of climb
[ "Chemistry", "Engineering" ]
282
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics stubs", "Fluid dynamics" ]
11,763,157
https://en.wikipedia.org/wiki/Asia-Pacific%20Journal%20of%20Chemical%20Engineering
The Asia-Pacific Journal of Chemical Engineering is a peer-reviewed scientific journal published by John Wiley & Sons on behalf of Curtin University of Technology. Until 2006 it was known as Developments in Chemical Engineering and Mineral Processing and published (in print only) by Curtin University of Technology. The current editor-in-chief is Moses O. Tadé (Curtin University of Technology). Most cited papers The three most-cited papers published by the journal are: Research Article: Development of a novel autothermal reforming process and its economics for clean hydrogen production, Volume 1, Issue 1–2, Nov-Dec 2006, Pages: 5–12, Chen ZX, Elnashaie SSEH Research Article: Review: examining the use of different feedstock for the production of biodiesel, Volume 2, Issue 5, Sep-Oct 2007, Pages: 480–486, Behzadi S, Farid MM Research Article: The forces at work in colloidal self-assembly: a review on fundamental interactions between colloidal particles, Volume 3, Issue 3, May-Jun 2008, Pages: 255–268, Li Q, Jonas U, Zhao XS, et al. References External links Chemical engineering journals Academic journals established in 1993 Bimonthly journals English-language journals Wiley (publisher) academic journals
Asia-Pacific Journal of Chemical Engineering
[ "Chemistry", "Engineering" ]
271
[ "Chemical engineering", "Chemical engineering journals" ]
11,763,255
https://en.wikipedia.org/wiki/Applied%20Organometallic%20Chemistry
Applied Organometallic Chemistry is a monthly peer-reviewed scientific journal published since 1987 by John Wiley & Sons. The editor-in-chief is Cornelis J. Elsevier (University of Amsterdam). Contents The journal includes: reviews full papers communications working methods papers crystallographic reports It also includes occasional reports on: relevant conferences of applied work in the field of organometallics including bioorganometallic chemistry metal/organic ligand coordination chemistry. Abstracting and indexing The journal is abstracted and indexed in: Biological Abstracts BIOSIS Previews Cambridge Structural Database Chemical Abstracts Service Ceramic Abstracts ChemWeb Compendex Advanced Polymer Abstracts Civil Engineering Abstracts Mechanical & Transportation Engineering Abstracts Current Contents/Physical Chemical & Earth Sciences Engineered Materials Abstracts International Aerospace Abstracts METADEX PASCAL Science Citation Index Scopus According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.105. Most cited papers The three highest cited papers (> 250 citations each) are: References External links Organic chemistry journals Wiley (publisher) academic journals Academic journals established in 1987 English-language journals Monthly journals
Applied Organometallic Chemistry
[ "Chemistry" ]
224
[ "Organic chemistry journals" ]
11,763,375
https://en.wikipedia.org/wiki/Concatenated%20error%20correction%20code
In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s. Background The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology. Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates less than a certain threshold , called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with , so such an optimum decoder rapidly becomes infeasible. In his doctoral thesis, Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length. Description Let Cin be a [n, k, d] code, that is, a block code of length n, dimension k, minimum Hamming distance d, and rate r = k/n, over an alphabet A: Let Cout be a [N, K, D] code over an alphabet B with |B| = |A|k symbols: The inner code Cin takes one of |A|k = |B| possible inputs, encodes into an n-tuple over A, transmits, and decodes into one of |B| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet B. We use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout∘Cin, is thus a code of length Nn over the alphabet A: It maps each input message m = (m1, m2, ..., mK) to a codeword (Cin(m'1), Cin(m'2), ..., Cin(m'N)), where (m'1, m'2, ..., m'N) = Cout(m1, m2, ..., mK). The key insight in this approach is that if Cin is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and Cout is a code with length N = 2nr that can be decoded in polynomial time of N, then the concatenated code can be decoded in polynomial time of its combined length n2nr = O(N⋅log(N)) and shows an exponentially decreasing error probability, even if Cin has exponential decoding complexity. This is discussed in more detail in section Decoding concatenated codes. In a generalization of above concatenation, there are N possible inner codes Cin,i and the i-th symbol in a codeword of Cout is transmitted across the inner channel using the i-th inner code. The Justesen codes are examples of generalized concatenated codes, where the outer code is a Reed–Solomon code. Properties 1. The distance of the concatenated code Cout∘Cin is at least dD, that is, it is a [nN, kK, D'] code with D' ≥ dD. Proof: Consider two different messages m1 ≠ m2 ∈ BK. Let Δ denote the distance between two codewords. 
Then Δ(Cout(m1), Cout(m2)) ≥ D. Thus, there are at least D positions in which the sequence of N symbols of the codewords Cout(m1) and Cout(m2) differ. For these positions, denoted i, we have Δ(Cin((Cout(m1))i), Cin((Cout(m2))i)) ≥ d. Consequently, there are at least d⋅D positions in the sequence of n⋅N symbols taken from the alphabet A in which the two codewords differ, and hence Δ((Cout∘Cin)(m1), (Cout∘Cin)(m2)) ≥ dD. 2. If Cout and Cin are linear block codes, then Cout∘Cin is also a linear block code. This property can be easily shown based on the idea of defining a generator matrix for the concatenated code in terms of the generator matrices of Cout and Cin. Decoding concatenated codes A natural concept for a decoding algorithm for concatenated codes is to first decode the inner code and then the outer code. For the algorithm to be practical it must be polynomial-time in the final block length. Consider that there is a polynomial-time unique decoding algorithm for the outer code. Now we have to find a polynomial-time decoding algorithm for the inner code. It is understood that polynomial running time here means that running time is polynomial in the final block length. The main idea is that if the inner block length is selected to be logarithmic in the size of the outer code then the decoding algorithm for the inner code may run in exponential time of the inner block length, and we can thus use an exponential-time but optimal maximum likelihood decoder (MLD) for the inner code. In detail, let the input to the decoder be the vector y = (y1, ..., yN) ∈ (An)N. Then the decoding algorithm is a two-step process: Use the MLD of the inner code Cin to reconstruct a set of inner code words y' = (y'1, ..., y'N), with y'i = MLDCin(yi), 1 ≤ i ≤ N. Run the unique decoding algorithm for Cout on y'. Now, the time complexity of the first step is O(N⋅exp(n)), where n = O(log(N)) is the inner block length. In other words, it is N^O(1) (i.e., polynomial-time) in terms of the outer block length N. As the outer decoding algorithm in step two is assumed to run in polynomial time the complexity of the overall decoding algorithm is polynomial-time as well. Remarks The decoding algorithm described above can be used to correct all errors up to less than dD/4 in number. Using minimum distance decoding, the outer decoder can correct all inputs y' with less than D/2 symbols y'i in error. Similarly, the inner code can reliably correct an input yi if less than d/2 inner symbols are erroneous. Thus, for an outer symbol y'i to be incorrect after inner decoding at least d/2 inner symbols must have been in error, and for the outer code to fail this must have happened for at least D/2 outer symbols. Consequently, the total number of inner symbols that must be received incorrectly for the concatenated code to fail must be at least d/2⋅D/2 = dD/4. The algorithm also works if the inner codes are different, e.g., for Justesen codes. The generalized minimum distance algorithm, developed by Forney, can be used to correct up to dD/2 errors. It uses erasure information from the inner code to improve performance of the outer code, and was the first example of an algorithm using soft-decision decoding. Applications Although a simple concatenation scheme was implemented already for the 1971 Mariner Mars orbiter mission, concatenated codes were starting to be regularly used for deep space communication with the Voyager program, which launched two space probes in 1977. Since then, concatenated codes became the workhorse for efficient error correction coding, and stayed so at least until the invention of turbo codes and LDPC codes.
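The encoding map and the two-step decoder described above can be seen end-to-end in a small, self-contained sketch. This is an illustrative toy only, not the Reed–Solomon or Justesen constructions discussed in the article: the inner code is a [6, 2, 3] bit-repetition code, the outer code a [3, 1, 3] repetition code over a four-symbol alphabet, so the concatenation is an [18, 2] binary code with designed distance dD = 9, and the simple decoder is guaranteed to correct fewer than dD/4 errors.

```python
# Toy concatenated code (illustration only, not a practical construction).
# Inner code C_in: [n=6, k=2, d=3] binary code (each of the 2 message bits repeated 3 times).
# Outer code C_out: [N=3, K=1, D=3] repetition code over the 4-symbol alphabet B = {0, 1, 2, 3}.

def inner_encode(sym: int) -> list:
    b1, b0 = (sym >> 1) & 1, sym & 1
    return [b1] * 3 + [b0] * 3

INNER_CODEBOOK = {s: inner_encode(s) for s in range(4)}

def inner_mld(block: list) -> int:
    # Step 1: maximum-likelihood (minimum Hamming distance) decoding of one inner block.
    return min(INNER_CODEBOOK,
               key=lambda s: sum(a != b for a, b in zip(block, INNER_CODEBOOK[s])))

def outer_decode(symbols: list) -> int:
    # Step 2: minimum-distance decoding of the outer repetition code = majority vote.
    return max(set(symbols), key=symbols.count)

def encode(message: int) -> list:
    return [bit for sym in [message] * 3 for bit in inner_encode(sym)]

def decode(received: list) -> int:
    blocks = [received[i:i + 6] for i in range(0, 18, 6)]
    return outer_decode([inner_mld(b) for b in blocks])

codeword = encode(2)
codeword[0] ^= 1
codeword[7] ^= 1        # two channel errors: fewer than d*D/4 = 2.25, so correction is guaranteed
assert decode(codeword) == 2
```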
Typically, the inner code is not a block code but a soft-decision convolutional Viterbi-decoded code with a short constraint length. For the outer code, a longer hard-decision block code, frequently a Reed–Solomon code with eight-bit symbols, is used. The larger symbol size makes the outer code more robust to error bursts, which can occur due to channel impairments and also because the erroneous output of the convolutional code itself is bursty. An interleaving layer is usually added between the two codes to spread error bursts across a wider range. The combination of an inner Viterbi convolutional code with an outer Reed–Solomon code (known as an RSV code) was first used in Voyager 2, and it became a popular construction both within and outside of the space sector. It is still notably used today for satellite communications, such as the DVB-S digital television broadcast standard. In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the DVB-S2 standard, a highly efficient LDPC code is combined with an algebraic outer code in order to remove any residual errors left over from the inner LDPC code due to its inherent error floor. A simple concatenation scheme is also used on the compact disc (CD), where an interleaving layer between two Reed–Solomon codes of different sizes spreads errors across various blocks. Turbo codes: A parallel concatenation approach The description above is given for what is now called a serially concatenated code. Turbo codes, as described first in 1993, implemented a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information back and forth between the codes. This design has a better performance than any previously conceived concatenated code. However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the "Galileo code" of the Galileo space probe. See also Gilbert–Varshamov bound Justesen code Singleton bound Zyablov bound References Further reading External links University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra Error detection and correction Coding theory Finite fields Information theory
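The role of the interleaving layer mentioned above can be illustrated with a minimal block interleaver. The sketch below is a generic scheme, not the specific interleavers used by the CD or deep-space standards: symbols are written into a rows-by-columns array row by row and read out column by column, so that a burst of consecutive channel errors is dispersed across many outer-code blocks after de-interleaving.

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write symbols row by row, read them out column by column."""
    assert len(symbols) == rows * cols
    table = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [table[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse permutation: write column by column, read row by row."""
    assert len(symbols) == rows * cols
    table = [[None] * cols for _ in range(rows)]
    it = iter(symbols)
    for c in range(cols):
        for r in range(rows):
            table[r][c] = next(it)
    return [table[r][c] for r in range(rows) for c in range(cols)]

data = list(range(20))                   # e.g. 20 consecutive outer-code symbols (4 blocks of 5)
tx = interleave(data, rows=4, cols=5)
tx[3:7] = ["X"] * 4                      # a burst of 4 consecutive errors on the channel
rx = deinterleave(tx, rows=4, cols=5)
print([i for i, s in enumerate(rx) if s == "X"])   # -> [1, 6, 11, 15]: one error per block of 5
```

Here a burst of four consecutive channel errors lands in four different rows, so after de-interleaving each outer-code block sees at most one erroneous symbol, which is exactly the effect exploited by the CD and RSV schemes described above.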
Concatenated error correction code
[ "Mathematics", "Technology", "Engineering" ]
2,225
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Reliability engineering", "Applied mathematics", "Error detection and correction", "Computer science", "Information theory" ]
11,763,521
https://en.wikipedia.org/wiki/Nucleate%20boiling
In fluid thermodynamics, nucleate boiling is a type of boiling that takes place when the surface temperature is hotter than the saturated fluid temperature by a certain amount but where the heat flux is below the critical heat flux. For water, as shown in the graph below, nucleate boiling occurs when the surface temperature is higher than the saturation temperature () by between . The critical heat flux is the peak on the curve between nucleate boiling and transition boiling. The heat transfer from surface to liquid is greater than that in film boiling. Nucleate boiling is common in electric kettles and is responsible for the noise that occurs before boiling occurs. It also occurs in water boilers where water is rapidly heated. Mechanism Two different regimes may be distinguished in the nucleate boiling range. When the temperature difference is between approximately above TS, isolated bubbles form at nucleation sites and separate from the surface. This separation induces considerable fluid mixing near the surface, substantially increasing the convective heat transfer coefficient and the heat flux. In this regime, most of the heat transfer is through direct transfer from the surface to the liquid in motion at the surface and not through the vapor bubbles rising from the surface. Between above TS, a second flow regime may be observed. As more nucleation sites become active, increased bubble formation causes bubble interference and coalescence. In this region the vapor escapes as jets or columns which subsequently merge into plugs of vapor. Interference between the densely populated bubbles inhibits the motion of liquid near the surface. This is observed on the graph as a change in the direction of the gradient of the curve or an inflection in the boiling curve. After this point, the heat transfer coefficient starts to reduce as the surface temperature is further increased although the product of the heat transfer coefficient and the temperature difference (the heat flux) is still increasing. When the relative increase in the temperature difference is balanced by the relative reduction in the heat transfer coefficient, a maximum heat flux is achieved as observed by the peak in the graph. This is the critical heat flux. At this point in the maximum, considerable vapor is being formed, making it difficult for the liquid to continuously wet the surface to receive heat from the surface. This causes the heat flux to reduce after this point. At extremes, film boiling commonly known as the Leidenfrost effect is observed. The process of forming steam bubbles within liquid in micro cavities adjacent to the wall if the wall temperature at the heat transfer surface rises above the saturation temperature while the bulk of the liquid (heat exchanger) is subcooled. The bubbles grow until they reach some critical size, at which point they separate from the wall and are carried into the main fluid stream. There the bubbles collapse because the temperature of bulk fluid is not as high as at the heat transfer surface, where the bubbles were created. This collapsing is also responsible for the sound a water kettle produces during heat up but before the temperature at which bulk boiling is reached. Heat transfer and mass transfer during nucleate boiling has a significant effect on the heat transfer rate. 
This heat transfer process helps quickly and efficiently to carry away the energy created at the heat transfer surface and is therefore sometimes desirable—for example in nuclear power plants, where liquid is used as a coolant. The effects of nucleate boiling take place at two locations: the liquid-wall interface the bubble-liquid interface The nucleate boiling process has a complex nature. A limited number of experimental studies provided valuable insights into the boiling phenomena, however these studies provided often contradictory data due to internal recalculation (state of chaos in the fluid not applying to classical thermodynamic methods of calculation, therefore giving wrong return values) and have not provided conclusive findings yet to develop models and correlations. Nucleate boiling phenomenon still requires more understanding. Boiling heat transfer correlations The nucleate boiling regime is important to engineers because of the high heat fluxes possible with moderate temperature differences. The data can be correlated by an equation of the form Where is the Nusselt number, defined as: where: is the total heat flux, is the maximum bubble diameter as it leaves the surface, is the excess temperature, is the thermal conductivity of the liquid, is the Prandtl number of the liquid, is the bubble Reynolds number, where: is the average mass velocity of the vapor leaving the surface is the liquid viscosity. Rohsenow has developed the first and most widely used correlation for nucleate boiling, where: is the specific heat of the liquid, is the surface fluid combination and vary for various combinations of fluid and surface, is the surface tension of the liquid-vapor interface. The variable depends on the surface fluid combination and typically has a value of 1.0 or 1.7. For example, water and nickel have a of 0.006 and of 1.0. Departure from nucleate boiling If the heat flux of a boiling system is higher than the critical heat flux (CHF) of the system, the bulk fluid may boil, or in some cases, regions of the bulk fluid may boil where the fluid travels in small channels. Thus large bubbles form, sometimes blocking the passage of the fluid. This results in a departure from nucleate boiling (DNB) in which steam bubbles no longer break away from the solid surface of the channel, bubbles dominate the channel or surface, and the heat flux dramatically decreases. Vapor essentially insulates the bulk liquid from the hot surface. During DNB, the surface temperature must therefore increase substantially above the bulk fluid temperature in order to maintain a high heat flux. Avoiding the CHF is an engineering problem in heat transfer applications, such as nuclear reactors, where fuel plates must not be allowed to overheat. DNB may be avoided in practice by increasing the pressure of the fluid, increasing its flow rate, or by utilizing a lower temperature bulk fluid which has a higher CHF. If the bulk fluid temperature is too low or the pressure of the fluid is too high, nucleate boiling is however not possible. DNB is also known as transition boiling, unstable film boiling, and partial film boiling. For water boiling as shown on the graph, transition boiling occurs when the temperature difference between the surface and the boiling water is approximately above the TS. This corresponds to the high peak and the low peak on the boiling curve. The low point between transition boiling and film boiling is the Leidenfrost point. 
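For reference, the bubble Nusselt and Reynolds numbers and the Rohsenow correlation discussed in the correlations section above are commonly written as follows in standard heat-transfer texts. This is a conventional restatement rather than the article's own displayed equations, so the exact symbols may differ; the latent heat and the liquid and vapour densities appearing below are part of the standard form even though they are not listed among the article's variables.

```latex
% Bubble Nusselt and Reynolds numbers used in the nucleate-boiling correlation
\mathrm{Nu}_b = \frac{q''\, D_b}{k_l\, \Delta T_e}, \qquad
\mathrm{Re}_b = \frac{G_b\, D_b}{\mu_l}

% Rohsenow nucleate pool-boiling correlation (standard textbook form)
q'' = \mu_l\, h_{fg} \left[ \frac{g\,(\rho_l - \rho_v)}{\sigma} \right]^{1/2}
      \left[ \frac{c_{p,l}\, \Delta T_e}{C_{sf}\, h_{fg}\, \mathrm{Pr}_l^{\,n}} \right]^{3}
```

Here q'' is the surface heat flux, D_b the bubble diameter at departure, ΔT_e the excess (wall minus saturation) temperature, k_l, μ_l, c_{p,l} and Pr_l the liquid thermal conductivity, viscosity, specific heat and Prandtl number, G_b the average mass velocity of the vapor leaving the surface, h_fg the latent heat of vaporization, ρ_l and ρ_v the liquid and vapor densities, σ the surface tension, and C_sf and n the empirical surface–fluid constants (0.006 and 1.0 for water on nickel, as stated above).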
During transition boiling of water, bubble formation is so rapid that a vapor film or blanket begins to form at the surface. At any point on the surface, the conditions may oscillate between film and nucleate boiling, but the fraction of the total surface covered by the film increases with increasing temperature difference. As the thermal conductivity of the vapor is much less than that of the liquid, the convective heat transfer coefficient and the heat flux reduce with increasing temperature difference. See also Boiling Cavitation Chemical engineering Fluid physics Heat transfer Leidenfrost effect Sonoluminescence References Thermodynamic entropy Nuclear technology Cooling technology Heat transfer Transport phenomena
Nucleate boiling
[ "Physics", "Chemistry", "Engineering" ]
1,453
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Physical quantities", "Chemical engineering", "Thermodynamic entropy", "Nuclear technology", "Entropy", "Thermodynamics", "Nuclear physics", "Statistical mechanics" ]
11,763,579
https://en.wikipedia.org/wiki/Microvesicle
Microvesicles (ectosomes, or microparticles) are a type of extracellular vesicle (EV) that are released from the cell membrane. In multicellular organisms, microvesicles and other EVs are found both in tissues (in the interstitial space between cells) and in many types of body fluids. Delimited by a phospholipid bilayer, microvesicles can be as small as the smallest EVs (30 nm in diameter) or as large as 1000 nm. They are considered to be larger, on average, than intracellularly-generated EVs known as exosomes. Microvesicles play a role in intercellular communication and can transport molecules such as mRNA, miRNA, and proteins between cells. Though initially dismissed as cellular debris, microvesicles may reflect the antigenic content of the cell of origin and have a role in cell signaling. Like other EVs, they have been implicated in numerous physiologic processes, including anti-tumor effects, tumor immune suppression, metastasis, tumor-stroma interactions, angiogenesis, and tissue regeneration. Microvesicles may also remove misfolded proteins, cytotoxic agents and metabolic waste from the cell. Changes in microvesicle levels may indicate diseases including cancer. Formation and contents Different cells can release microvesicles from the plasma membrane. Sources of microvesicles include megakaryocytes, blood platelets, monocytes, neutrophils, tumor cells and placenta. Platelets play an important role in maintaining hemostasis: they promote thrombus growth, and thus they prevent loss of blood. Moreover, they enhance immune response, since they express the molecule CD154 (CD40L). Platelets are activated by inflammation, infection, or injury, and after their activation microvesicles containing CD154 are released from platelets. CD154 is a crucial molecule in the development of T cell-dependent humoral immune response. CD154 knockout mice are incapable of producing IgG, IgE, or IgA as a response to antigens. Microvesicles can also transfer prions and molecules CD41 and CXCR4. Endothelial microparticles Endothelial microparticles are small vesicles that are released from endothelial cells and can be found circulating in the blood. The microparticle consists of a plasma membrane surrounding a small amount of cytosol. The membrane of the endothelial microparticle contains receptors and other cell surface molecules which enable the identification of the endothelial origin of the microparticle, and allow it to be distinguished from microparticles from other cells, such as platelets. Although circulating endothelial microparticles can be found in the blood of normal individuals, increased numbers of circulating endothelial microparticles have been identified in individuals with certain diseases, including hypertension and cardiovascular disorders, and pre-eclampsia and various forms of vasculitis. The endothelial microparticles in some of these disease states have been shown to have arrays of cell surface molecules reflecting a state of endothelial dysfunction. Therefore, endothelial microparticles may be useful as an indicator or index of the functional state of the endothelium in disease, and may potentially play key roles in the pathogenesis of certain diseases, including rheumatoid arthritis. Endothelial microparticles have been found to prevent apoptosis in recipient cells by inhibiting the p38 pathway via inactivating mitogen-activated protein kinase (MKP)-1. Uptake of endothelial micoparticles is Annexin I/Phosphatidylserine receptor dependant. 
Microparticles are derived from many other cell types. Process of formation Microvesicles and exosomes are formed and released by two slightly different mechanisms. These processes result in the release of intercellular signaling vesicles. Microvesicles are small, plasma membrane-derived particles that are released into the extracellular environment by the outward budding and fission of the plasma membrane. This budding process involves multiple signaling pathways including the elevation of intracellular calcium and reorganization of the cell's structural scaffolding. The formation and release of microvesicles involve contractile machinery that draws opposing membranes together before pinching off the membrane connection and launching the vesicle into the extracellular space. Microvesicle budding takes place at unique locations on the cell membrane that are enriched with specific lipids and proteins reflecting their cellular origin. At these locations, proteins, lipids, and nucleic acids are selectively incorporated into microvesicles and released into the surrounding environment. Exosomes are membrane-covered vesicles, formed intracellularly are considered to be smaller than 100 nm. In contrast to microvesicles, which are formed through a process of membrane budding, or exocytosis, exosomes are initially formed by endocytosis. Exosomes are formed by invagination within a cell to create an intracellular vesicle called an endosome, or an endocytic vesicle. In general, exosomes are formed by segregating the cargo (e.g., lipids, proteins, and nucleic acids) within the endosome. Once formed, the endosome combines with a structure known as a multivesicular body (MVB). The MVB containing segregated endosomes ultimately fuses with the plasma membrane, resulting in exocytosis of the exosomes. Once formed, both microvesicles and exosomes (collectively called extracellular vesicles) circulate in the extracellular space near the site of release, where they can be taken up by other cells or gradually deteriorate. In addition, some vesicles migrate significant distances by diffusion, ultimately appearing in biological fluids such as cerebrospinal fluid, blood, and urine. Mechanism of shedding There are three mechanisms which lead to release of vesicles into the extracellular space. First of these mechanisms is exocytosis from multivesicular bodies and the formation of exosomes. Another mechanism is budding of microvesicles directly from a plasma membrane. And the last one is cell death leading to apoptotic blebbing. These are all energy-requiring processes. Under physiologic conditions, the plasma membrane of cells has an asymmetric distribution of phospholipids. aminophospholipids, phosphatidylserine, and phosphatidylethanolamine are specifically sequestered in the inner leaflet of the membrane. The transbilayer lipid distribution is under the control of three phospholipidic pumps: an inward-directed pump, or flippase; an outward-directed pump, or floppase; and a lipid scramblase, responsible for non-specific redistribution of lipids across the membrane. After cell stimulation, including apoptosis, a subsequent cytosolic Ca2+ increase promotes the loss of phospholipid asymmetry of the plasma membrane, subsequent phosphatidylserine exposure, and a transient phospholipidic imbalance between the external leaflet at the expense of the inner leaflet, leading to budding of the plasma membrane and microvesicle release. 
Molecular contents The lipid and protein content of microvesicles has been analyzed using various biochemical techniques. Microvesicles display a spectrum of enclosed molecules enclosed within the vesicles and their plasma membranes. Both the membrane molecular pattern and the internal contents of the vesicle depend on the cellular origin and the molecular processes triggering their formation. Because microvesicles are not intact cells, they do not contain mitochondria, Golgi, endoplasmic reticulum, or a nucleus with its associated DNA. Microvesicle membranes consist mainly of membrane lipids and membrane proteins. Regardless of their cell type of origin, nearly all microvesicles contain proteins involved in membrane transport and fusion. They are surrounded by a phospholipid bilayer composed of several different lipid molecules. The protein content of each microvesicle reflects the origin of the cell from which it was released. For example, those released from antigen-presenting cells (APCs), such as B cells and dendritic cells, are enriched in proteins necessary for adaptive immunity, while microvesicles released from tumors contain proapoptotic molecules and oncogenic receptors (e.g. EGFR). In addition to the proteins specific to the cell type of origin, some proteins are common to most microvesicles. For example, nearly all contain the cytoplasmic proteins tubulin, actin and actin-binding proteins, as well as many proteins involved in signal transduction, cell structure and motility, and transcription. Most microvesicles contain the so-called "heat-shock proteins" hsp70 and hsp90, which can facilitate interactions with cells of the immune system. Finally, tetraspanin proteins, including CD9, CD37, CD63 and CD81 are one of the most abundant protein families found in microvesicle membranes. Many of these proteins may be involved in the sorting and selection of specific cargos to be loaded into the lumen of the microvesicle or its membrane. Other than lipids and proteins, microvesicles are enriched with nucleic acids (e.g., messenger RNA (mRNA) and microRNA (miRNA)). The identification of RNA molecules in microvesicles supports the hypothesis that they are a biological vehicle for the transfer of nucleic acids and subsequently modulate the target cell's protein synthesis. Messenger RNA transported from one cell to another through microvesicles can be translated into proteins, conferring new function to the target cell. The discovery that microvesicles may shuttle specific mRNA and miRNA suggests that this may be a new mechanism of genetic exchange between cells. Exosomes produced by cells exposed to oxidative stress can mediate protective signals, reducing oxidative stress in recipient cells, a process which is proposed to depend on exosomal RNA transfer. These RNAs are specifically targeted to microvesicles, in some cases containing detectable levels of RNA that is not found in significant amounts in the donor cell. Because the specific proteins, mRNAs, and miRNAs in microvesicles are highly variable, it is likely that these molecules are specifically packaged into vesicles using an active sorting mechanism. At this point, it is unclear exactly which mechanisms are involved in packaging soluble proteins and nucleic acids into microvesicles. Role on target cells Once released from their cell of origin, microvesicles interact specifically with cells they recognize by binding to cell-type specific, membrane-bound receptors. 
Because microvesicles contain a variety of surface molecules, they provide a mechanism for engaging different cell receptors and exchanging material between cells. This interaction ultimately leads to fusion with the target cell and release of the vesicles' components, thereby transferring bioactive molecules, lipids, genetic material, and proteins. The transfer of microvesicle components includes specific mRNAs and proteins, contributing to the proteomic properties of target cells. microvesicles can also transfer miRNAs that are known to regulate gene expression by altering mRNA turnover. Mechanisms of signaling Degradation In some cases, the degradation of microvesicles is necessary for the release of signaling molecules. During microvesicle production, the cell can concentrate and sort the signaling molecules which are released into the extracellular space upon microvesicle degradation. Dendritic cells, macrophage and microglia derived microvesicles contain proinflammatory cytokines and neurons and endothelial cells release growth factors using this mechanism of release. Fusion Proteins on the surface of the microvesicle will interact with specific molecules, such as integrin, on the surface of its target cell. Upon binding, the microvesicle can fuse with the plasma membrane. This results in the delivery of nucleotides and soluble proteins into the cytosol of the target cell as well as the integration of lipids and membrane proteins into its plasma membrane. Internalization Microvesicles can be endocytosed upon binding to their targets, allowing for additional steps of regulation by the target cell. The microvesicle may fuse, integrating lipids and membrane proteins into the endosome while releasing its contents into the cytoplasm. Alternatively, the endosome may mature into a lysosome causing the degradation of the microvesicle and its contents, in which case the signal is ignored. Transcytosis After internalization of microvesicle via endocytosis, the endosome may move across the cell and fuse with the plasma membrane, a process called transcytosis. This results in the ejection of the microvesicle back into the extracellular space or may result in the transportation of the microvesicle into a neighboring cell. This mechanism might explain the ability of microvesicle to cross biological barriers, such as the blood brain barrier, by moving from cell to cell. Contact dependent signaling In this form of signaling, the microvesicle does not fuse with the plasma membrane or engulfed by the target cell. Similar to the other mechanisms of signaling, the microvesicle has molecules on its surface that will interact specifically with its target cell. There are additional surface molecules, however, that can interact with receptor molecules which will interact with various signaling pathways. This mechanism of action can be used in processes such as antigen presentation, where MHC molecules on the surface of microvesicle can stimulate an immune response. Alternatively, there may be molecules on microvesicle surfaces that can recruit other proteins to form extracellular protein complexes that may be involved in signaling to the target cell. Relevance in disease Cancer Promoting aggressive tumor phenotypes The oncogenic receptor ECGFvIII, which is located in a specific type of aggressive glioma tumor, can be transferred to a non-aggressive population of tumor cells via microvesicles. 
After the oncogenic protein is transferred, the recipient cells become transformed and show characteristic changes in the expression levels of target genes. It is possible that transfer of other mutant oncogenes, such as HER2, may be a general mechanism by which malignant cells cause cancer growth at distant sites. Microvesicles from non-cancer cells can signal to cancer cells to become more aggressive. Upon exposure to microvesicles from tumor-associated macrophages, breast cancer cells become more invasive in vitro. Promoting angiogenesis Angiogenesis, which is essential for tumor survival and growth, occurs when endothelial cells proliferate to create a matrix of blood vessels that infiltrate the tumor, supplying the nutrients and oxygen necessary for tumor growth. A number of reports have demonstrated that tumor-associated microvesicles release proangiogenic factors that promote endothelial cell proliferation, angiogenesis, and tumor growth. Microvesicles shed by tumor cells and taken up by endothelial cells also facilitate angiogenic effects by transferring specific mRNAs and miRNAs. Involvement in multidrug resistance When anticancer drugs such as doxorubicin accumulate in microvesicles, the drug's cellular levels decrease. This can ultimately contribute to the process of drug resistance. Similar processes have been demonstrated in microvesicles released from cisplatin-insensitive cancer cells. Vesicles from these tumors contained nearly three times more cisplatin than those released from cisplatin-sensitive cells. For example, tumor cells can accumulate drugs into microvesicles. Subsequently, the drug-containing microvesicles are released from the cell into the extracellular environment, thereby mediating resistance to chemotherapeutic agents and resulting in significantly increased tumor growth, survival, and metastasis. Interference with antitumor immunity Microvesicles from various tumor types can express specific cell-surface molecules (e.g. FasL or CD95) that induce T-cell apoptosis and reduce the effectiveness of other immune cells. microvesicles released from lymphoblastoma cells express the immune-suppressing protein latent membrane protein-1 (LMP1), which inhibits T-cell proliferation and prevents the removal of circulating tumor cells (CTCs). As a consequence, tumor cells can turn off T-cell responses or eliminate the antitumor immune cells altogether by releasing microvesicles. the combined use of microvesicles and 5-FU resulted in enhanced chemosensitivity of squamous cell carcinoma cells more than the use of either 5-FU or microvesicle alone Impact on tumor metastasis Degradation of the extracellular matrix is a critical step in promoting tumor growth and metastasis. Tumor-derived microvesicles often carry protein-degrading enzymes, including matrix metalloproteinase 2 (MMP-2), MMP-9, and urokinase-type plasminogen activator (uPA). By releasing these proteases, tumor cells can degrade the extracellular matrix and invade surrounding tissues. Likewise, inhibiting MMP-2, MMP-9, and uPA prevents microvesicles from facilitating tumor metastasis. Matrix digestion can also facilitate angiogenesis, which is important for tumor growth and is induced by the horizontal transfer of RNAs from microvesicles. Cellular Origin of Microvesicles The release of microvesicles has been shown from endothelial cells, vascular smooth muscle cells, platelets, white blood cells (e.g. leukocytes and lymphocytes), and red blood cells. 
Although some of these microvesicle populations occur in the blood of healthy individuals and patients, there are obvious changes in number, cellular origin, and composition in various disease states. It has become clear that microvesicles play important roles in regulating the cellular processes that lead to disease pathogenesis. Moreover, because microvesicles are released following apoptosis or cell activation, they have the potential to induce or amplify disease processes. Some of the inflammatory and pathological conditions that microvesicles are involved in include cardiovascular disease, hypertension, neurodegenerative disorders, diabetes, and rheumatic diseases. Cardiovascular disease Microvesicles are involved in cardiovascular disease initiation and progression. Microparticles derived from monocytes aggravate atherosclerosis by modulating inflammatory cells. Additionally, microvesicles can induce clotting by binding to clotting factors or by inducing the expression of clotting factors in other cells. Circulating microvesicles isolated from cardiac surgery patients were found to be thrombogenic in both in vitro assays and in rats. Microvesicles isolated from healthy individuals did not have the same effects and may actually have a role in reducing clotting. Tissue factor, an initiator of coagulation, is found in high levels within microvesicles, indicating their role in clotting. Renal mesangial cells exposed to high glucose media release microvesicles containing tissue factor, having an angiogenic effect on endothelial cells. Inflammation Microvesicles contain cytokines that can induce inflammation via numerous different pathways. These cells will then release more microvesicles, which have an additive effect. This can call neutrophils and leukocytes to the area, resulting in the aggregation of cells. However, microvesicles also seem to be involved in a normal physiological response to disease, as there are increased levels of microvesicles that result from pathology. Neurological disorders Microvesicles seem to be involved in a number of neurological diseases. Since they are involved in numerous vascular diseases and inflammation, strokes and multiple sclerosis seem to be other diseases for which microvesicles are involved. Circulating microvesicles seem to have an increased level of phosphorylated tau proteins during early stage Alzheimer's disease. Similarly, increased levels of CD133 are an indicator of epilepsy. Clinical applications Detection of cancer Tumor-associated microvesicles are abundant in the blood, urine, and other body fluids of patients with cancer, and are likely involved in tumor progression. They offer a unique opportunity to noninvasively access the wealth of biological information related to their cells of origin. The quantity and molecular composition of microvesicles released from malignant cells varies considerably compared with those released from normal cells. Thus, the concentration of plasma microvesicles with molecular markers indicative of the disease state may be used as an informative blood-based biosignature for cancer. Microvesicles express many membrane-bound proteins, some of which can be used as tumor biomarkers. Several tumor markers accessible as proteins in blood or urine have been used to screen and diagnose various types of cancer. In general, tumor markers are produced either by the tumor itself or by the body in response to the presence of cancer or some inflammatory conditions. 
If a tumor marker level is higher than normal, the patient is examined more closely to look for cancer or other conditions. For example, CA19-9, CA-125, and CEA have been used to help diagnose pancreatic, ovarian, and gastrointestinal malignancies, respectively. However, although they have proven clinical utility, none of these tumor markers are highly sensitive or specific. Clinical research data suggest that tumor-specific markers exposed on microvesicles are useful as a clinical tool to diagnose and monitor disease. Research is also ongoing to determine if tumor-specific markers exposed on microvesicles are predictive for therapeutic response. Evidence produced by independent research groups has demonstrated that microvesicles from the cells of healthy tissues, or selected miRNAs from these microvesicles, can be employed to reverse many tumors in pre-clinical cancer models, and may be used in combination with chemotherapy. Conversely, microvesicles processed from a tumor cell are involved in the transport of cancer proteins and in delivering microRNA to the surrounding healthy tissue. It leads to a change of healthy cell phenotype and creates a tumor-friendly environment. Microvesicles play an important role in tumor angiogenesis and in the degradation of matrix due to the presence of metalloproteases, which facilitate metastasis. They are also involved in intensification of the function of regulatory T-lymphocytes and in the induction of apoptosis of cytotoxic T-lymphocytes, because microvesicles released from a tumor cell contain Fas ligand and TRAIL. They prevent differentiation of monocytes to dendritic cells. Tumor microvesicles also carry tumor antigen, so they can be an instrument for developing tumor vaccines. Circulating miRNA and segments of DNA in all body fluids can be potential markers for tumor diagnostics. Microvesicles and Rheumatoid arthritis Rheumatoid arthritis is a chronic systemic autoimmune disease characterized by inflammation of joints. In the early stage there are abundant Th17 cells producing proinflammatory cytokines IL-17A, IL-17F, TNF, IL-21, and IL-22 in the synovial fluid. regulatory T-lymphocytes have a limited capability to control these cells. In the late stage, the extent of inflammation correlates with numbers of activated macrophages that contribute to joint inflammation and bone and cartilage destruction, because they have the ability to transform themselves into osteoclasts that destroy bone tissue. Synthesis of reactive oxygen species, proteases, and prostaglandins by neutrophils is increased. Activation of platelets via collagen receptor GPVI stimulates the release of microvesicles from platelet cytoplasmic membranes. These microparticles are detectable at a high level in synovial fluid, and they promote joint inflammation by transporting proinflammatory cytokine IL-1. Biological markers for disease In addition to detecting cancer, it is possible to use microvesicles as biological markers to give prognoses for various diseases. Many types of neurological diseases are associated with increased level of specific types of circulating microvesicles. For example, elevated levels of phosphorylated tau proteins can be used to diagnose patients in early stages of Alzheimer's. Additionally, it is possible to detect increased levels of CD133 in microvesicles of patients with epilepsy. Mechanism for drug delivery Circulating microvesicles may be useful for the delivery of drugs to very specific targets. 
By using electroporation or centrifugation to load drugs into microvesicles that target specific cells, it is possible to deliver the drug very efficiently. This targeting can help by reducing the necessary doses as well as preventing off-target side effects. Microvesicles can target anti-inflammatory drugs to specific tissues. Additionally, circulating microvesicles can bypass the blood–brain barrier and deliver their cargo to neurons while not having an effect on muscle cells. The blood–brain barrier is typically a difficult obstacle to overcome when designing drugs, and microvesicles may be a means of overcoming it. Current research is looking into efficiently creating microvesicles synthetically, or isolating them from patient or engineered cell lines. Microvesicles used in therapeutic genome editing approaches are sometimes called "gesicles", especially if used to package and deliver the Cas9 RNP complex. See also International Society for Extracellular Vesicles Journal of Extracellular Vesicles Exocytosis Membrane vesicle trafficking References Further reading External links Vesiclepedia—A database of molecules identified in extracellular vesicles ExoCarta—A database of molecules identified in exosomes International Society for Extracellular Vesicles Resource on the detection of circulating microvesicles Cell biology Vesicles Medical diagnosis Nanotechnology
Microvesicle
[ "Materials_science", "Engineering", "Biology" ]
5,340
[ "Nanotechnology", "Cell biology", "Materials science" ]
11,764,750
https://en.wikipedia.org/wiki/Point%20diffraction%20interferometer
A point diffraction interferometer (PDI) is a type of common-path interferometer. Unlike an amplitude-splitting interferometer, such as a Michelson interferometer, which separates out an unaberrated beam and interferes this with the test beam, a common-path interferometer generates its own reference beam. In PDI systems, the test and reference beams travel the same or almost the same path. This design makes the PDI extremely useful when environmental isolation is not possible or a reduction in the number of precision optics is required. The reference beam is created from a portion of the test beam by diffraction from a small pinhole in a semitransparent coating. The principle of a PDI is shown in Figure 1. The device is similar to a spatial filter. Incident light is focused onto a semi-transparent mask (about 0.1% transmission). In the centre of the mask is a hole about the size of the Airy disc, and the beam is focused onto this hole with a Fourier-transforming lens. The zeroth order (the low frequencies in Fourier space) then passes through the hole and interferes with the rest of beam. The transmission and the hole size are selected to balance the intensities of the test and reference beams. The device is similar in operation to phase-contrast microscopy. Development in PDI systems PDI systems are valuable tool to measure absolute surface characteristics of an optical or reflective instruments non destructively. The common path design eliminates any need of having a reference optics, which are known to overlap the absolute surface form of a test object with its own surface form errors. This is a major disadvantage of a double path systems, such as Fizeau interferometers, as shown in Figure 2. Similarly the common path design is resistant to ambient disturbances. The main criticisms of the original design are (1) that the required low-transmission reduces the efficiency, and (2) when the beam becomes too aberrated, the intensity on-axis is reduced, and less light is available for the reference beam, leading to a loss of fringe contrast. Lowered transmission was associated with lowered signal to noise ratio. These problems are largely overcome in the phase-shifting point diffraction interferometer designs, in which a grating or beamsplitter creates multiple, identical copies of the beam that is incident on an opaque mask. The test beam passes through a somewhat large hole or aperture in the membrane, without losses due to absorption; the reference beam is focused onto the pinhole for highest transmission. In the grating-based instance, phase-shifting is accomplished by translating the grating perpendicular to the rulings, while multiple images are recorded. The continued developments in phase shifting PDI have achieved accuracy orders of magnitude greater than standard Fizeau based systems. Phase-shifting [see Interferometry] versions have been created to increase measurement resolution and efficiency. These include a diffraction grating interferometer by Kwon and the Phase-Shifting Point Diffraction Interferometer. Types of phase-shifting PDI systems Phase-shifting PDI with single pinhole Gary Sommargren proposed a point diffraction interferometer design which directly followed from the basic design where parts of the diffracted wavefront was used for testing and the remaining part for detection as shown in Figure 3. This design was a major upgrade to existing systems. The scheme could accurately measure the optical surface with variations of 1 nm. 
The phase shifting was obtained by moving the test part with a piezoelectric translation stage. An unwanted side effect of moving the test part is that the defocus also changes, distorting the fringes. Another downside of Sommargren's approach is that it produces low-contrast fringes, and an attempt to regulate the contrast also modifies the measured wavefront. PDI systems using optical fibres In this type of point diffraction interferometer the point source is a single-mode fiber. The end face is narrowed down to resemble a cone and is covered with a metallic film to reduce light spill. The fibre is arranged so that it generates spherical waves for both testing and referencing. The end of an optical fibre is known to generate spherical waves with an accuracy greater than . Although optical-fibre-based PDIs provide some advancement over the single-pinhole-based system, they are difficult to manufacture and align. Two-beam phase-shifting PDI The two-beam PDI has a major advantage over other schemes in that it provides two independently steerable beams. Here, the test beam and reference beam are perpendicular to each other, and the intensity of the reference can be regulated. Similarly, arbitrary and stable phase shifts can be obtained relative to the test beam while keeping the test part static. The scheme, as shown in Figure 4, is easy to manufacture and provides user-friendly measuring conditions similar to Fizeau-type interferometers. At the same time it offers the following additional benefits: Absolute surface form of the test part. High numerical aperture (NA = 0.55). Clear fringe patterns of high contrast. High accuracy of surface form testing (wavefront RMS error 0.125 nm). Simple RMS repeatability 0.05 nm. Can measure depolarising test parts. The device is self-referencing; therefore, it can be used in environments with a lot of vibrations or when no reference beam is available, such as in many adaptive optics and short-wavelength scenarios. Applications of PDI Interferometry has been used for various quantitative characterisations of optical systems, indicating their overall performance. Traditionally, Fizeau interferometers have been used to measure optical or polished surface forms, but new advances in precision manufacturing have made industrial point diffraction interferometry possible. PDI is especially suited for high-resolution, high-accuracy measurements in settings ranging from laboratory conditions to noisy factory floors. The lack of reference optics makes the method suitable for visualising the absolute surface form of optical systems. Therefore, a PDI is uniquely suitable for verifying the reference optics of other interferometers. It is also immensely useful in analysing optical assemblies used in laser-based systems. Characterising optics for UV lithography. Quality control of precision optics. Verifying the actual resolution of an optical assembly. Measuring the wavefront map produced by X-ray optics. PS-PDI can also be used to verify the rated resolution of space optics before deployment. See also Interferometry References External links Making sure the space camera is up for the job before deployment: A case study by the interferometer manufacturer Difrotec OÜ. Interferometers
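Since the reference pinhole is described above as being about the size of the Airy disc, its required diameter can be estimated from the wavelength and the numerical aperture of the focusing optics. The sketch below is a generic diffraction-limit estimate, not a specification of any particular PDI instrument; the 633 nm source is an arbitrary example, while the NA of 0.55 is the value quoted for the two-beam PDI above.

```python
import math

def airy_disc_diameter(wavelength_m: float, numerical_aperture: float) -> float:
    """Diameter of the Airy disc (out to the first dark ring): d = 1.22 * lambda / NA."""
    return 1.22 * wavelength_m / numerical_aperture

# Example: a 633 nm HeNe-type source focused at NA = 0.55
d = airy_disc_diameter(633e-9, 0.55)
print(f"Airy disc diameter = {d * 1e6:.2f} micrometres")   # roughly 1.4 micrometres
```

The pinhole in the semitransparent mask would therefore be on the order of a micrometre for visible light at this numerical aperture, which is why PDI masks require precision micro-fabrication.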
Point diffraction interferometer
[ "Technology", "Engineering" ]
1,346
[ "Interferometers", "Measuring instruments" ]
11,766,301
https://en.wikipedia.org/wiki/X-Ray%20Spectrometry%20%28journal%29
X-Ray Spectrometry is a bimonthly peer-reviewed scientific journal established in 1972 and published by John Wiley & Sons. It covers the theory and application of X-ray spectrometry. The current editors-in-chief are Johan Boman (University of Gothenburg) and Liqiang Luo (National Research Center of Geoanalysis). Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.488, ranking it 30th out of 43 journals in the category "Spectroscopy". Notable articles The highest-cited articles from this journal are: References External links Spectroscopy journals Wiley (publisher) academic journals Academic journals established in 1972 English-language journals Bimonthly journals X-ray spectroscopy
X-Ray Spectrometry (journal)
[ "Physics", "Chemistry", "Astronomy" ]
164
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Spectroscopy journals", "X-ray spectroscopy", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
11,766,544
https://en.wikipedia.org/wiki/Journal%20of%20Raman%20Spectroscopy
The Journal of Raman Spectroscopy is a monthly peer-reviewed scientific journal covering all aspects of Raman spectroscopy, including Higher Order Processes, and Brillouin and Rayleigh scattering. It was established in 1973 and is published by John Wiley & Sons. The editor-in-chief is Laurence A. Nafie (Syracuse University). Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.133. Notable papers , the most cited papers published by the journal are: References External links Spectroscopy journals Wiley (publisher) academic journals English-language journals Academic journals established in 1973 Monthly journals Raman spectroscopy
Journal of Raman Spectroscopy
[ "Physics", "Chemistry" ]
141
[ "Spectroscopy", "Spectrum (physical sciences)", "Spectroscopy journals" ]
11,766,887
https://en.wikipedia.org/wiki/Curvature%20of%20a%20measure
In mathematics, the curvature of a measure defined on the Euclidean plane R2 is a quantification of how much the measure's "distribution of mass" is "curved". It is related to notions of curvature in geometry. In the form presented below, the concept was introduced in 1995 by the mathematician Mark S. Melnikov; accordingly, it may be referred to as the Melnikov curvature or Menger-Melnikov curvature. Melnikov and Verdera (1995) established a powerful connection between the curvature of measures and the Cauchy kernel. Definition Let μ be a Borel measure on the Euclidean plane R2. Given three (distinct) points x, y and z in R2, let R(x, y, z) be the radius of the Euclidean circle that joins all three of them, or +∞ if they are collinear. The Menger curvature c(x, y, z) is defined to be c(x, y, z) = 1/R(x, y, z), with the natural convention that c(x, y, z) = 0 if x, y and z are collinear. It is also conventional to extend this definition by setting c(x, y, z) = 0 if any of the points x, y and z coincide. The Menger-Melnikov curvature c2(μ) of μ is defined to be the triple integral c2(μ) = ∫∫∫ c(x, y, z)2 dμ(x) dμ(y) dμ(z). More generally, for α ≥ 0, define c2α(μ) by One may also refer to the curvature of μ at a given point x: in which case Examples The trivial measure has zero curvature. A Dirac measure δa supported at any point a has zero curvature. If μ is any measure whose support is contained within a Euclidean line L, then μ has zero curvature. For example, one-dimensional Lebesgue measure on any line (or line segment) has zero curvature. The Lebesgue measure defined on all of R2 has infinite curvature. If μ is the uniform one-dimensional Hausdorff measure on a circle Cr of radius r, then μ has curvature 1/r. Relationship to the Cauchy kernel In this section, R2 is thought of as the complex plane C. Melnikov and Verdera (1995) showed the precise relation of the boundedness of the Cauchy kernel to the curvature of measures. They proved that if there is some constant C0 such that for all x in C and all r > 0, then there is another constant C, depending only on C0, such that for all ε > 0. Here cε denotes a truncated version of the Menger-Melnikov curvature in which the integral is taken only over those points x, y and z such that Similarly, denotes a truncated Cauchy integral operator: for a measure μ on C and a point z in C, define where the integral is taken over those points ξ in C with References Curvature (mathematics) Measure theory
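The pointwise Menger curvature c(x, y, z) = 1/R(x, y, z) can be checked numerically. The sketch below is a minimal Python illustration (not part of the article) that computes it via the standard relation between the circumradius and the triangle area; the sample points are arbitrary. Points sampled from a line give zero curvature, matching the zero-curvature examples above.

```python
import numpy as np

# Minimal sketch of the Menger curvature c(x, y, z) = 1/R(x, y, z), computed with
# the standard circumradius relation R = abc / (4 * area), where a, b, c are the
# side lengths of the triangle xyz. Collinear points give c = 0.
def menger_curvature(x, y, z):
    x, y, z = (np.asarray(p, dtype=float) for p in (x, y, z))
    a = np.linalg.norm(y - z)
    b = np.linalg.norm(x - z)
    c = np.linalg.norm(x - y)
    # Triangle area from the magnitude of the 2D cross product of two edge vectors.
    area = 0.5 * abs((y - x)[0] * (z - x)[1] - (y - x)[1] * (z - x)[0])
    if area == 0.0:                    # collinear or coincident points
        return 0.0
    return 4.0 * area / (a * b * c)    # equals 1 / circumradius

# Three points on the unit circle: curvature should be ~1 (circumradius 1).
pts = [(np.cos(t), np.sin(t)) for t in (0.1, 1.3, 2.9)]
print(menger_curvature(*pts))                      # ~1.0
print(menger_curvature((0, 0), (1, 1), (2, 2)))    # collinear -> 0.0
```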
Curvature of a measure
[ "Physics" ]
578
[ "Geometric measurement", "Physical quantities", "Curvature (mathematics)" ]
2,161,298
https://en.wikipedia.org/wiki/Bioglass%2045S5
Bioglass 45S5 or calcium sodium phosphosilicate, is a bioactive glass specifically composed of 45 wt% SiO2, 24.5 wt% CaO, 24.5 wt% Na2O, and 6.0 wt% P2O5. Typical applications of Bioglass 45S5 include: bone grafting biomaterials, repair of periodontal defects, cranial and maxillofacial repair, wound care, blood loss control, stimulation of vascular regeneration, and nerve repair. The name "Bioglass" was trademarked by the University of Florida as a name for the original 45S5 composition. It should therefore only be used in reference to the 45S5 composition and not as a general term for bioactive glasses. Bioglass 45S5 is available commercially under the registered trade name NovaMin, which is owned by the pharmaceutical company GlaxoSmithKline. NovaMin is bioactive glass that has been ground into a fine particulate with a median size of less than 20 μm. It can reduce dentin hypersensitivity by blocking open dentinal tubules and by supplying calcium (Ca2+) and phosphate () ions to form hydroxycarbonate apatite (HCA), the principal mineral component of bone tissue in mammals. NovaMin is the active ingredient in Sensodyne "Repair & Protect" toothpaste, except when sold in the United States, containing stannous fluoride instead. Characteristics 45S5 bioactive glass is white in color and is in powder form, with particulates with a median size of less than 20 μm. Its chemical composition by weight is: silica (SiO2) 43–47%, calcium oxide (CaO) 22.5–26.5%, phosphorus pentoxide (P2O5) 5–7% and sodium oxide (Na2O) 22.5–26.5%. Glasses are non-crystalline disordered solids that are commonly composed of silica-based materials with other minor additives. Compared to soda-lime glass (commonly used, as in windows or bottles), Bioglass 45S5 contains less silica and higher amounts of calcium and phosphorus.  The 45S5 name signifies glass with 45% by weight of SiO2 and 5:1 molar ratio of calcium to phosphorus. This high ratio of calcium to phosphorus promotes formation of apatite crystals; calcium and silica ions can act as crystallization nuclei. Lower Ca:P ratios do not bond to bone. Bioglass 45S5's specific composition is optimal in biomedical applications because of its similar composition to that of hydroxyapatite, the mineral component of bone. This similarity provides Bioglass 45S5's ability to be integrated with living bone. This composition of bioactive glass is mechanically soft in comparison to other glasses. It can be machined, preferably with diamond tools, or ground to powder. Bioglass 45S5 has to be stored in a dry environment, as it readily absorbs moisture and reacts with it. Bioglass 45S5 is the first formulation of an artificial material that was found to chemically bond with bone, and its discovery led to a series of other bioactive glasses. One of its main medical advantages is its biocompatibility, seen in its ability to avoid an immune reaction and fibrous encapsulation. Its primary application is the repair of bone injuries or defects too large to be regenerated by the natural process. History Bioglass 45S5 is important to the field of biomimetic materials as one of the first completely synthetic materials that seamlessly bonds to bone. It was developed by Larry L. Hench in the late 1960s. The idea for the material came to him during a bus ride in 1967. While working as an assistant professor at the University of Florida, Hench decided to attend the U.S. 
Army Materials Research Conference held in Sagamore, New York, where he planned to talk about radiation resistant electronic materials. He began discussing his research with a fellow traveller on the bus, Colonel Klinker, who had recently returned to the United States after serving as an Army medical supply officer in Vietnam. After listening to Hench's description of his research, the Colonel asked, “If you can make a material that will survive exposure to high energy radiation can you make a material that will survive exposure to the human body?” Klinker then went on to describe the amputations that he had witnessed in Vietnam, which resulted from the body's rejection of metal and plastic implants. Hench realized that there was a need for a novel material that could form a living bond with tissues in the body. When Hench returned to Florida after the conference, he submitted a proposal to the U.S. Army Medical Research and Design Command. He received funding in 1968, and in November 1969 Hench began to synthesize small rectangles of what he called 45S5 glass. Ted Greenlee, Assistant Professor of Orthopaedic Surgery at the University of Florida, implanted them in rat femurs at the VA Hospital in Gainesville. Six weeks later, Greenlee called Hench asking, "Larry, what are those samples you gave me? They will not come out of the bone. I have pulled on them, I have pushed on them, I have cracked the bone and they are still bonded in place." With this first successful experiment, Bioglass was born and the first compositions studied. Hench published his first paper on the subject in 1971 in the Journal of Biomedical Materials Research, and his lab continued to work on the project for the next 10 years with continued funding from the U.S. Army. By 2006, there were over 500 papers published on the topic of bioactive glasses from different laboratories and institutions around the world. The first successful surgical use of Bioglass 45S5 was in replacement of ossicles in the middle ear as a treatment of conductive hearing loss, and the material continues to be used in bone reconstruction applications today. Other uses include cones for implantation into the jaw following a tooth extraction. Composite materials made of Bioglass 45S5 and patient's own bone can be used for bone reconstruction. Further research is being conducted for the development of new processing techniques to allow for more applications of Bioglass. Applications Bioglass 45S5 is used in jaw and orthopedics applications, in this way it dissolves and can stimulate the natural bone to repair itself. Bioactive glass offers good osteoconductivity and bioactivity, it can deliver cells and is biodegradable. This makes it an excellent candidate to be used in tissue engineering applications. Although this material is known to be brittle, it is still used extensively to enhance the growth of bone since new forms of bioactive glasses are based on borate and borosilicate compositions. Bioglass can also be doped with varying quantities of elements like copper, zinc, or strontium which can allow the growth and formation of healthy bone. The formation of neocartilage can also be induced with bioactive glass by using an in vitro culture of chondrocyte-seeded hydrogels and can serve as a subchondral substrate for tissue-engineered osteochondral constructs. The borate-based bioactive glass has controllable degradation rates in order to match the rate at which actual bone is formed. 
Bone formation has been shown to be enhanced when using this type of material. When implanted into rabbit femurs, 45S5 bioactive glass was able to induce bone proliferation at a much quicker rate than synthetic hydroxyapatite (HA). 45S5 glass can also be osteoconductive and osteoinductive because it allows for new bone growth along the bone-implant interface as well as within the bone-implant interface. Studies have been conducted to determine the process by which it can induce bone formation. It was shown that 45S5 glass degrades and releases sodium ions as well as soluble silica; the combination of these species is said to promote new bone formation. Borate bioglass has proven that it can support cell proliferation and differentiation in vitro and in vivo. It has also been shown to be suitable for use as a substrate for drug release when treating bone infection. However, there has been concern about whether the release of boron into solution as borate ions is toxic to the body. It has been shown that in static cell culture conditions borate glasses were toxic to cells, but not in dynamic culture conditions. In 1984, Bioglass 45S5 was applied to a medical device to help restore the hearing of a deaf patient. The patient had gone deaf due to an ear infection that degraded two of the three bones in her middle ear. An implant was designed to replace the damaged bone and carry sound from the eardrum to the cochlea, restoring the patient's hearing. Before this material was available, plastics and metals would be used because they did not produce a reaction in the body; however, they eventually failed because tissue would grow around them after implantation. A prosthesis made of Bioglass 45S5 was made to fit the patient, and most of the prostheses that were made were able to maintain functionality after 10 years. The Endosseous Ridge Maintenance Implant made of Bioglass 45S5 was another device that could be inserted into tooth extraction sites to repair tooth roots and provide a stable ridge for dentures. Another area in which bioactive glass has been investigated for use is tooth enamel reconstruction, which has proven to be a difficult task in the field of dentistry. Enamel is made up of a very organized hierarchical microstructure of carbonated hydroxyapatite nanocrystals. It has been reported that a Bioglass 45S5-phosphoric acid paste can be used to form an interaction layer that can obstruct dentinal tubule orifices and can therefore be useful in the treatment of dentin hypersensitivity lesions. In an aqueous environment this material could have an antibacterial property that is advantageous in periodontal surgical procedures. In a study done with 45S5 Bioglass, biofilms of Streptococcus sanguinis were grown on Bioglass and on inactive glass particulates, and biofilm growth on the Bioglass was significantly lower than on the inactive glass. It was concluded that Bioglass may reduce bacterial colonisation, which could aid osseointegration. A highly effective antibacterial bioactive glass is S53P4, which has been reported to exhibit a high antimicrobial activity and did not seem to select for resistance in the microbial strains tested. Bioactive glasses that are sol-gel derived, such as CaPSiO and CaPSiO II, have also exhibited antibacterial properties. Studies in which S. epidermidis and E. coli were cultured with bioactive glass have shown that 45S5 bioactive glass has a very strong antibacterial effect.
It was also observed in the experiment that there were needle-like bioglass debris which could have ruptured the cell walls of the bacteria and rendered them inactive. GlaxoSmithKline is using this material as an active ingredient in toothpaste under the commercial name NovaMin, which can help repair tiny holes and decrease tooth sensitivity. More advanced fluoride-containing formulations of Bioglass have been developed, which provide stronger and longer-lasting protection against sensitivity. The inclusion of fluoride within the glass rather than as a soluble addition, such as the toothpaste BioMin, is claimed to optimise the rate of development of apatite, which shields the teeth from sensitivity for up to 12 hours. Mechanism of action When implanted, Bioglass 45S5 reacts with the surrounding physiological fluid, causing the formation of a hydroxyl carbonated apatite (HCA) layer at the material surface. The HCA layer has a similar composition to hydroxyapatite, the mineral phase of bone, a quality which allows for strong interaction and integration with bone. The process by which this reaction occurs can be separated into 12 steps. The first 5 steps are related to the Bioglass response to the environment within the body, and occur rapidly at the material surface over several hours. Reaction steps 6–10 detail the reaction of the body to the integration of the biomaterial, and the process of integration with bone. These stages occur over the scale of several weeks or months. The steps are separated as follows: Alkali ions (such as Na+ and Ca2+) on the glass surface rapidly exchange with hydrogen ions or hydronium from surrounding bodily fluids. The reaction below shows this process, which causes hydrolysis of silica groups. As this occurs, the pH of the solution increases. Si⎯O⎯Na+ + H+ + OH− → Si⎯OH+ + Na+ (aq) + OH− Due to an increase in the hydroxyl (OH−) concentration at the surface (a result of step 1), a dissolution of the silica glass network occurs, seen by the breaking of Si⎯O⎯Si bonds. Soluble silica is transformed to the form of Si(OH)4 and silanols (Si⎯OH) creation occurs at the material surface. The reaction occurring in this stage is shown below: Si⎯O⎯Si + H2O→ Si⎯OH + OH⎯Si The silanol groups at the material surface condense and repolymerize to form a silica-gel layer at the surface of bioglass. As a result of the first steps, the surface contains very little alkali content. The condensation reaction is shown below: Si⎯OH + Si⎯OH → Si⎯O⎯Si Amorphous Ca2+ and  gather at the silica-rich layer (created in step 3) from both the surrounding bodily fluid and the bulk of the Bioglass. This creates a layer composed primarily of CaO⎯P2O5 on top of the silica layer. The CaO⎯P2O5 film created in step 4 incorporates OH− and  from the bodily solution, causing it to crystallize. This layer is called a mixed carbonated hydroxyl apatite (HCA). Growth factors adsorb (adsorption) to the surface of Bioglass due to its structural and chemical similarities to hydroxyapatite. Adsorbed growth factors cause the activation of M2 macrophages. M2 macrophages tend to promote wound healing and initiate the migration of progenitor cells to an injury site. In contrast, M1 macrophages become activated when a non-biocompatible material is implanted, triggering an inflammatory response. Triggered by M2 macrophage activation, mesenchymal stem cells and osteoprogenitor cells migrate to the Bioglass surface and attach to the HCA layer. 
Stem cells and osteoprogenitor cells at the HCA surface differentiate to become osteogenic cells typically present in bone tissue, particularly osteoblasts. The attached and differentiated osteoblasts generate and deposit extracellular matrix (ECM) components, primarily type I collagen, the main protein component of bone. The collagen ECM becomes mineralized as normally occurs in native bone. Nanoscale hydroxyapatite crystals form a layered structure with the deposited collagen at the surface of the implant. Following these reactions, bone growth continues as the newly recruited cells continue to function and facilitate tissue growth and repair. The Bioglass implant continues to degrade and be converted to new ECM material. Manufacturing There are two main manufacturing techniques that are used for the synthesis of bioglass. The first is melt quench synthesis, which is the conventional glassmaking technology used by Larry Hench when he first manufactured the material in 1969. This method includes melting a mixture of oxides such as SiO2, Na2O, CaO and P2O5 at high temperatures generally above 1100–1300 °C. Platinum or platinum alloy crucibles are used to avoid contamination, which would interfere with the product's chemical reactivity in organism. Annealing is a crucial step in forming bulk parts, due to high thermal expansion of the material. Heat treatment of Bioglass reduces the volatile alkali metal oxide content and precipitates apatite crystals in the glass matrix. However, the scaffolds that result from melt quench techniques are much less porous compared to other manufacturing methods, which may lead to defects in tissue integration when implanted in vivo. The second method is sol-gel synthesis of Bioglass. This process is carried out at much lower temperatures than the traditional melting methods. It involves the creation of a solution (sol), which is composed of metal-organic and metal salt precursors. A gel is then formed through hydrolysis and condensation reactions, and it undergoes thermal treatment for drying, oxide formation, and organic removal. Because of the lower fabrication temperatures used in this method, there is a greater level of control on the composition and homogeneity of the product. In addition, sol-gel bioglasses have much higher porosity, which leads to a greater surface area and degree of integration in the body. Newer methods include flame and microwave synthesis of Bioglass, which has been gaining attention in recent years. Flame synthesis works by baking the powders directly in a flame reactor. Microwave synthesis is a rapid and low-cost powder synthesis method in which precursors are dissolved in water, transferred to an ultrasonic bath, and irradiated. Shortcomings A setback to using Bioglass 45S5 is that it is difficult to process into porous 3D scaffolds. These porous scaffolds are usually prepared by sintering glass particles that are already formed into the 3D geometry and allowing them to bond to the particles into a strong glass phase made up of a network of pores. Since this particular type of bioglass cannot fully sinter by viscous flow above its Tg, and its Tg is close to the onset of crystallization, it is hard to sinter this material into a dense network. 45S5 glass also has a slow degradation and rate of conversion to an HA-like material. This setback makes it more difficult for the degradation rate of the scaffold to coincide with the rate of tissue formation. 
Another limitation is that the biological environment can be easily influenced by the material's degradation: degradation increases local sodium and calcium ion concentrations and changes the pH. However, the roles of these ions and their toxicity to the body have not been fully researched. Methods of improvement Several studies have investigated methods to improve the mechanical strength and toughness of Bioglass 45S5. These include creating polymer–glass composites, which combine the bioactivity of Bioglass with the relative flexibility and wear resistance of different polymers. Another solution is coating a metallic implant with Bioglass, which takes advantage of the mechanical strength of the implant's bulk material while retaining bioactive effects at the surface. Some of the most notable modifications have used various forms of carbon to improve the properties of 45S5 glass. For example, Touri et al. developed a method to incorporate carbon nanotubes (CNTs) into the structure without interfering with the material's bioactive properties. CNTs were chosen because of their large aspect ratio and high strength. By synthesizing Bioglass 45S5 on a CNT scaffold, the researchers were able to create a composite that more than doubled the compressive strength and the elastic modulus compared to the pure glass. Another study, carried out by Li et al., looked into different properties, such as the fracture toughness and wear resistance of Bioglass 45S5. The authors loaded graphene nanoplatelets (GNP) into the glass structure through a spark plasma sintering method. Graphene was chosen because of its high specific surface area and strength, as well as its cytocompatibility and lack of interference with Bioglass 45S5's bioactivity. The composites created in this experiment achieved a fracture toughness more than double that of the control. In addition, the tribological properties of the material were greatly improved. See also Mechanical properties of biomaterials Synthesis of bioglass References Glass compositions Biomaterials
Bioglass 45S5
[ "Physics", "Chemistry", "Biology" ]
4,174
[ "Biomaterials", "Glass chemistry", "Glass compositions", "Materials", "Matter", "Medical technology" ]
2,161,429
https://en.wikipedia.org/wiki/Eigenvalues%20and%20eigenvectors
In linear algebra, an eigenvector ( ) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector, , of a linear transformation, , is scaled by a constant factor, , when the linear transformation is applied to it: . The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor (possibly negative). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. Its eigenvectors are those vectors that are only stretched, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or squished. If the eigenvalue is negative, the eigenvector's direction is reversed. The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all the areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system. Definition Consider an matrix and a nonzero vector of length If multiplying with (denoted by ) simply scales by a factor of , where is a scalar, then is called an eigenvector of , and is the corresponding eigenvalue. This relationship can be expressed as: . There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space into itself, given any basis of the vector space. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices, or the language of linear transformations. The following section gives a more general viewpoint that also covers infinite-dimensional vector spaces. Overview Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. 
Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like , in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue. If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis. History Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.{{efn| Augustin Cauchy (1839) "Mémoire sur l'intégration des équations linéaires" (Memoir on the integration of linear equations), Comptes rendus, 8: 827–830, 845–865, 889–907, 931–937. From p. 827: ''"On sait d'ailleurs qu'en suivant la méthode de Lagrange, on obtient pour valeur générale de la variable prinicipale une fonction dans laquelle entrent avec la variable principale les racines d'une certaine équation que j'appellerai léquation caractéristique, le degré de cette équation étant précisément l'order de l'équation différentielle qu'il s'agit d'intégrer." 
(One knows, moreover, that by following Lagrange's method, one obtains for the general value of the principal variable a function in which there appear, together with the principal variable, the roots of a certain equation that I will call the "characteristic equation", the degree of this equation being precisely the order of the differential equation that must be integrated.)}} Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability. In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later. At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today. The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961. Eigenvalues and eigenvectors of matrices Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.University of Michigan Mathematics (2016) Math Course Catalogue . Accessed on 2016-03-27. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Consider -dimensional vectors that are formed as a list of scalars, such as the three-dimensional vectors These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar such that In this case, . Now consider the linear transformation of -dimensional vectors defined by an by matrix , or where, for each row, If it occurs that and are scalar multiples, that is if then is an eigenvector of the linear transformation and the scale factor is the eigenvalue corresponding to that eigenvector. Equation () is the eigenvalue equation for the matrix . Equation () can be stated equivalently as where is the by identity matrix and 0 is the zero vector. 
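As a concrete illustration of the matrix form of the eigenvalue equation, the minimal NumPy sketch below checks that a candidate vector satisfies Av = λv. The matrix and vector are arbitrary examples chosen for the illustration, not ones taken from this article.

```python
import numpy as np

# Minimal check of the eigenvalue equation A v = lambda v for a concrete matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])            # candidate eigenvector
Av = A @ v

lam = Av[0] / v[0]                  # the scale factor, if v really is an eigenvector
print(np.allclose(Av, lam * v))     # True: v is an eigenvector with eigenvalue 3.0
print(lam)
```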
Eigenvalues and the characteristic polynomial Equation () has a nonzero solution v if and only if the determinant of the matrix is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation Using the Leibniz formula for determinants, the left-hand side of equation () is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)nλn. This polynomial is called the characteristic polynomial of A. Equation () is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A. As a brief example, which is described in more detail in the examples section later, consider the matrix Taking the determinant of , the characteristic polynomial of A is Setting the characteristic polynomial equal to zero, it has roots at and , which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation In this example, the eigenvectors are any nonzero scalar multiples of If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers. The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. Spectrum of a matrix The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix. Algebraic multiplicity Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k divides evenly that polynomial. Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. 
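Before continuing, a quick numerical illustration of the characteristic-polynomial route just described: the polynomial coefficients of det(A − λI) can be formed directly and their roots compared with the eigenvalues returned by a standard solver. The 2-by-2 matrix below is an arbitrary example, not the one used in this article's worked example.

```python
import numpy as np

# Eigenvalues as roots of the characteristic polynomial det(A - lambda I) = 0.
# np.poly returns the characteristic-polynomial coefficients of a square matrix,
# and np.roots finds the roots of that polynomial.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)                     # [1, -trace(A), det(A)] for a 2x2 matrix
print(coeffs)                           # [ 1. -7. 10.]
print(np.sort(np.roots(coeffs)))        # [2. 5.]
print(np.sort(np.linalg.eigvals(A)))    # [2. 5.] -- the same values
```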
Whereas equation () factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, If d = n then the right-hand side is the product of n linear terms and this is the same as equation (). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue. Eigenspaces, geometric multiplicity, and the eigenbasis for matrices Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (), On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of . Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written , then or equivalently . This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if and α is a complex number, or equivalently . This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ. The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity . Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. To prove the inequality , consider how the definition of geometric multiplicity implies the existence of orthonormal eigenvectors , such that . We can therefore find a (unitary) matrix whose first columns are these eigenvectors, and whose remaining columns can be any orthonormal set of vectors orthogonal to these eigenvectors of . Then has full rank and is therefore invertible. Evaluating , we get a matrix whose top left block is the diagonal matrix . This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding on both sides, we get since commutes with . In other words, is similar to , and . 
But from the definition of , we know that contains a factor , which means that the algebraic multiplicity of must satisfy . Suppose has distinct eigenvalues , where the geometric multiplicity of is . The total geometric multiplicity of , is the dimension of the sum of all the eigenspaces of 's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of . If , then The direct sum of the eigenspaces of all of 's eigenvalues is the entire vector space . A basis of can be formed from linearly independent eigenvectors of ; such a basis is called an eigenbasis Any vector in can be written as a linear combination of eigenvectors of . Additional properties Let be an arbitrary matrix of complex numbers with eigenvalues . Each eigenvalue appears times in this list, where is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: The trace of , defined as the sum of its diagonal elements, is also the sum of all eigenvalues, The determinant of is the product of all its eigenvalues, The eigenvalues of the th power of ; i.e., the eigenvalues of , for any positive integer , are . The matrix is invertible if and only if every eigenvalue is nonzero. If is invertible, then the eigenvalues of are and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity. If is equal to its conjugate transpose , or equivalently if is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix. If is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively. If is unitary, every eigenvalue has absolute value . If is a matrix and are its eigenvalues, then the eigenvalues of matrix (where is the identity matrix) are . Moreover, if , the eigenvalues of are . More generally, for a polynomial the eigenvalues of matrix are . Left and right eigenvectors Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the matrix in the defining equation, equation (), The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix . In this formulation, the defining equation is where is a scalar and is a matrix. Any row vector satisfying this equation is called a left eigenvector of and is its associated eigenvalue. Taking the transpose of this equation, Comparing this equation to equation (), it follows immediately that a left eigenvector of is the same as the transpose of a right eigenvector of , with the same eigenvalue. Furthermore, since the characteristic polynomial of is the same as the characteristic polynomial of , the left and right eigenvectors of are associated with the same eigenvalues. Diagonalization and the eigendecomposition Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. 
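Several of the properties listed above are easy to verify numerically. The sketch below checks the trace and determinant identities and the correspondence between left eigenvectors of A and right eigenvectors of its transpose; the random matrix is purely an illustration, not one from the article.

```python
import numpy as np

# Numerical check of listed properties: trace(A) equals the sum of the eigenvalues,
# det(A) equals their product, and a left eigenvector of A is a right eigenvector
# of A^T with the same eigenvalue.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

eigvals = np.linalg.eigvals(A)
print(np.isclose(np.trace(A), eigvals.sum()))         # True (up to rounding)
print(np.isclose(np.linalg.det(A), eigvals.prod()))   # True (up to rounding)

# Left eigenvectors: w A = lambda w  is equivalent to  A^T w = lambda w.
w_vals, W = np.linalg.eig(A.T)
w = W[:, 0]
print(np.allclose(w @ A, w_vals[0] * w))              # True
```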
Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1, or by instead left multiplying both sides by Q−1, A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, . Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable. A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces. Variational characterization In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of is the maximum value of the quadratic form . A value of that realizes that maximum is an eigenvector. Matrix examples Two-dimensional matrix example Consider the matrix The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find characteristic polynomial of A, Setting the characteristic polynomial equal to zero, it has roots at and , which are the two eigenvalues of A. For , equation () becomes, Any nonzero vector with v1 = −v2 solves this equation. Therefore, is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector. For , equation () becomes Any nonzero vector with v1 = v2 solves this equation. Therefore, is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues and , respectively. Three-dimensional matrix example Consider the matrix The characteristic polynomial of A is The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors and or any nonzero multiple thereof. 
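The eigendecomposition described above can be reproduced numerically: collect the eigenvectors as the columns of Q, put the eigenvalues on the diagonal of Λ, and rebuild A as QΛQ⁻¹. The matrix in this minimal sketch is an arbitrary diagonalizable example, not one from the article.

```python
import numpy as np

# Sketch of the eigendecomposition A = Q L Q^{-1}: the columns of Q are eigenvectors
# and L is the diagonal matrix of eigenvalues.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])       # distinct eigenvalues 2, 3, 5, so diagonalizable

eigvals, Q = np.linalg.eig(A)
L = np.diag(eigvals)

A_rebuilt = Q @ L @ np.linalg.inv(Q)
print(np.allclose(A, A_rebuilt))      # True: A is similar to the diagonal matrix L
```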
Three-dimensional matrix example with complex eigenvalues Consider the cyclic permutation matrix This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are where is an imaginary unit with For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example, For the complex conjugate pair of imaginary eigenvalues, Then and Therefore, the other two eigenvectors of A are complex and are and with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair, Diagonal matrix example Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix The characteristic polynomial of A is which has the roots , , and . These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors, respectively, as well as scalar multiples of these vectors. Triangular matrix example A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix, The characteristic polynomial of A is which has the roots , , and . These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, respectively, as well as scalar multiples of these vectors. Matrix with repeated eigenvalues example As in the previous example, the lower triangular matrix has a characteristic polynomial that is the product of its diagonal elements, The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector . The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section. Eigenvector-eigenvalue identity For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix, where is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature. Eigenvalues and eigenfunctions of differential operators The definitions of eigenvalue and eigenvectors of a linear transformation T remains valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. 
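Returning briefly to the finite-dimensional Hermitian case, the eigenvector-eigenvalue identity mentioned above can be checked numerically. The sketch below uses the form in which the identity is usually stated, relating the squared eigenvector component to products of eigenvalue differences; that explicit form, like the random symmetric test matrix, is supplied here for illustration rather than quoted from the article.

```python
import numpy as np

# Numerical check of the eigenvector-eigenvalue identity for a Hermitian matrix,
# in the commonly stated form
#   |v_{i,j}|^2 * prod_{k != i} (lam_i - lam_k) = prod_k (lam_i - mu_k(M_j)),
# where M_j is A with its j-th row and column removed.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                        # real symmetric, hence Hermitian

lams, V = np.linalg.eigh(A)              # columns of V are orthonormal eigenvectors
i, j = 2, 1                              # an eigenvalue index and a component index

minor = np.delete(np.delete(A, j, axis=0), j, axis=1)
mus = np.linalg.eigvalsh(minor)

lhs = abs(V[j, i])**2 * np.prod([lams[i] - lams[k] for k in range(4) if k != i])
rhs = np.prod(lams[i] - mus)
print(np.isclose(lhs, rhs))              # True
```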
A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions. Derivative operator example Consider the derivative operator with eigenvalue equation This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples. General definition The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v. Eigenspaces, geometric multiplicity, and the eigenbasis Given an eigenvalue λ, consider the set which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. By definition of a linear transformation, for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V. If that subspace has dimension 1, it is sometimes called an eigenline. The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector. The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues. Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable. Spectral theory If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. 
The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them. Associative algebras and representation theory One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively. Hecke eigensheaf is a tensor-multiple of itself and is considered in Langlands correspondence. Dynamic equations The simplest difference equations have the form The solution of this equation for x in terms of t is found by using its characteristic equation which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations giving a k-dimensional system of the first order in the stacked variable vector in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots for use in the solution equation A similar procedure is used for solving a differential equation of the form Calculation The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. Classical method The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point. Eigenvalues The eigenvalues of a matrix can be determined by finding the roots of the characteristic polynomial. This is easy for matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an matrix is a sum of different products. Explicit algebraic formulas for the roots of a polynomial exist only if the degree is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree is the characteristic polynomial of some companion matrix of order .) 
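The companion-matrix remark can be made concrete: finding the roots of a monic polynomial is the same problem as finding the eigenvalues of its companion matrix, which is essentially how np.roots operates. A minimal sketch, using an arbitrary cubic as the example:

```python
import numpy as np

# The monic polynomial p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) is the
# characteristic polynomial of its companion matrix, so its roots can be read off
# as eigenvalues of that matrix.
coeffs = [1.0, -6.0, 11.0, -6.0]           # x^3 - 6x^2 + 11x - 6

companion = np.zeros((3, 3))
companion[1:, :2] = np.eye(2)               # ones on the subdiagonal
companion[:, 2] = [6.0, -11.0, 6.0]         # last column: -a0, -a1, -a2 of the monic p

print(np.sort(np.linalg.eigvals(companion)))   # [1. 2. 3.]
print(np.sort(np.roots(coeffs)))               # [1. 2. 3.] -- same roots
```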
Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical. Eigenvectors Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix we can find its eigenvectors by solving the equation , that is This matrix equation is equivalent to two linear equations that is Both equations reduce to the single linear equation . Therefore, any vector of the form , for any nonzero real number , is an eigenvector of with eigenvalue . The matrix above has another eigenvalue . A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of , that is, any vector of the form , for any nonzero real number . Simple iterative methods The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by this causes it to converge to an eigenvector of the eigenvalue closest to If is (a good approximation of) an eigenvector of , then the corresponding eigenvalue can be computed as where denotes the conjugate transpose of . Modern methods Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities. Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed. Applications Geometric transformations Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors. The characteristic equation for a rotation is a quadratic equation with discriminant , which is a negative number whenever is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues. Principal component analysis The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. 
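A small NumPy sketch of this fact, on synthetic data (the mixing matrix below is an arbitrary assumption used only to create correlated variables): the sample covariance matrix is symmetric PSD, so numpy.linalg.eigh returns nonnegative eigenvalues and an orthonormal set of eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(1)
# 500 observations of 3 correlated variables (synthetic data).
X = rng.standard_normal((500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [0.5, 1.0, 0.0],
                                              [0.2, 0.3, 0.5]])

C = np.cov(X, rowvar=False)                   # 3x3 sample covariance matrix (PSD)
eigenvalues, eigenvectors = np.linalg.eigh(C) # eigenvalues in ascending order

print(eigenvalues)                            # all >= 0 (up to round-off)
print(eigenvectors.T @ eigenvectors)          # ~ identity: an orthonormal basis

# The eigenvector with the largest eigenvalue is the first principal
# component: the direction of greatest variance in the data.
print("first principal component:", eigenvectors[:, -1])
```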
The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data. Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling. Graphs In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix , or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either (sometimes called the combinatorial Laplacian) or (sometimes called the normalized Laplacian), where is a diagonal matrix with equal to the degree of vertex , and in , the th diagonal entry is . The th principal eigenvector of a graph is defined as either the eigenvector corresponding to the th largest or th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering. Markov chains A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state. Vibration analysis Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. 
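As a sketch of such a modal analysis, consider a toy chain of three equal masses between fixed walls (the stiffness and mass values are arbitrary assumptions); the natural frequencies and mode shapes come out of the generalized symmetric eigenvalue problem K φ = ω² M φ, which scipy.linalg.eigh solves directly.

```python
import numpy as np
from scipy.linalg import eigh

k, m = 100.0, 2.0                   # spring stiffness [N/m] and mass [kg], arbitrary
# Three equal masses in a chain with both ends fixed.
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])        # stiffness matrix
M = m * np.eye(3)                             # mass matrix

# Generalized eigenvalue problem  K @ phi = (omega**2) * M @ phi
lam, modes = eigh(K, M)                       # lam = omega**2, ascending
omega = np.sqrt(lam)                          # natural (angular) frequencies

print("natural frequencies [rad/s]:", omega)
print("mode shapes (columns):")
print(modes)
```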
In particular, undamped vibration is governed by or That is, acceleration is proportional to position (i.e., we expect to be sinusoidal in time). In dimensions, becomes a mass matrix and a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem where is the eigenvalue and is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of alone. Furthermore, damped vibration, governed by leads to a so-called quadratic eigenvalue problem, This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system. The orthogonality properties of the eigenvectors allows decoupling of the differential equations so that the system can be represented as linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, but neatly generalize the solution to scalar-valued vibration problems. Tensor of moment of inertia In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass. Stress tensor In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components. Schrödinger equation An example of an eigenvalue equation where the transformation is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: where , the Hamiltonian, is a second-order differential operator and , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue , interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which and can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by . In this notation, the Schrödinger equation is: where is an eigenstate of and represents the eigenvalue. is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above is understood to be the vector obtained by application of the transformation to . Wave transport Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix . The eigenvectors of the transmission operator form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. 
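The eigenchannel decomposition can be sketched numerically. The matrix below is a random complex matrix standing in purely for a measured field transmission matrix (it is not a physical model of a diffusive medium, so the bimodal statistics discussed next should not be expected from it); the singular value decomposition of t yields the input wavefronts of the eigenchannels together with their transmittances.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
# Stand-in for a measured N x N field transmission matrix t (not a physical model).
t = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# SVD of t: the right singular vectors are the input eigenchannel wavefronts,
# and the squared singular values are the eigenvalues of t^dagger t.
U, s, Vh = np.linalg.svd(t)
transmittance = s**2

# Launching the leading input singular vector sends all of its power through
# the single best-transmitting eigenchannel.
v1 = Vh[0].conj()
out = t @ v1
print("highest transmittance:", transmittance[0])
print("power actually transmitted:", np.vdot(out, out).real)   # equals s[0]**2
```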
The eigenvalues, , of correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with and . Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels. Molecular orbitals In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations. Geology and glaciology In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram,. A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. A type of stereographic projection is Wulff Net, which is commonly used in crystallography to create stereograms. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered by their eigenvalues ; then is the primary orientation/dip of clast, is the secondary and is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of , , and are dictated by the nature of the sediment's fabric. If , the fabric is said to be isotropic. If , the fabric is said to be planar. If , the fabric is said to be linear. Basic reproduction number The basic reproduction number () is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, , from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time has passed. The value is then the largest eigenvalue of the next generation matrix. Eigenfaces In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. 
The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems for determining hand gestures. Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. See also Antieigenvalue theory Eigenoperator Eigenplane Eigenmoments Eigenvalue algorithm Quantum states Jordan normal form List of numerical-analysis software Nonlinear eigenproblem Normal eigenvalue Quadratic eigenvalue problem Singular value Spectrum of a matrix Notes Citations Sources Further reading External links What are Eigen Values? – non-technical introduction from PhysLink.com's "Ask the Experts" Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu. Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10 – A visual explanation with 3Blue1Brown Matrix Eigenvectors Calculator from Symbolab (Click on the bottom right button of the 2×12 grid to select a matrix size. Select a size (for a square matrix), then fill out the entries numerically and click on the Go button. It can accept complex numbers as well.) Theory Computation of Eigenvalues Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst Abstract algebra Linear algebra Mathematical physics Matrix theory Singular value decomposition
Eigenvalues and eigenvectors
[ "Physics", "Mathematics" ]
11,215
[ "Applied mathematics", "Theoretical physics", "Linear algebra", "Abstract algebra", "Mathematical physics", "Algebra" ]
2,161,878
https://en.wikipedia.org/wiki/Protein%E2%80%93protein%20interaction
Protein–protein interactions (PPIs) are physical contacts of high specificity established between two or more protein molecules as a result of biochemical events steered by interactions that include electrostatic forces, hydrogen bonding and the hydrophobic effect. Many are physical contacts with molecular associations between chains that occur in a cell or in a living organism in a specific biomolecular context. Proteins rarely act alone as their functions tend to be regulated. Many molecular processes within a cell are carried out by molecular machines that are built from numerous protein components organized by their PPIs. These physiological interactions make up the so-called interactomics of the organism, while aberrant PPIs are the basis of multiple aggregation-related diseases, such as Creutzfeldt–Jakob and Alzheimer's diseases. PPIs have been studied with many methods and from different perspectives: biochemistry, quantum chemistry, molecular dynamics, signal transduction, among others. All this information enables the creation of large protein interaction networks – similar to metabolic or genetic/epigenetic networks – that empower the current knowledge on biochemical cascades and molecular etiology of disease, as well as the discovery of putative protein targets of therapeutic interest. Examples Electron transfer proteins In many metabolic reactions, a protein that acts as an electron carrier binds to an enzyme that acts as its reductase. After it receives an electron, it dissociates and then binds to the next enzyme that acts as its oxidase (i.e. an acceptor of the electron). These interactions between proteins are dependent on highly specific binding between proteins to ensure efficient electron transfer. Examples: mitochondrial oxidative phosphorylation chain system components cytochrome c-reductase / cytochrome c / cytochrome c oxidase; microsomal and mitochondrial P450 systems. In the case of the mitochondrial P450 systems, the specific residues involved in the binding of the electron transfer protein adrenodoxin to its reductase were identified as two basic Arg residues on the surface of the reductase and two acidic Asp residues on the adrenodoxin. More recent work on the phylogeny of the reductase has shown that these residues involved in protein–protein interactions have been conserved throughout the evolution of this enzyme. Signal transduction The activity of the cell is regulated by extracellular signals. Signal propagation inside and/or along the interior of cells depends on PPIs between the various signaling molecules. The recruitment of signaling pathways through PPIs is called signal transduction and plays a fundamental role in many biological processes and in many diseases including Parkinson's disease and cancer. Membrane transport A protein may be carrying another protein (for example, from cytoplasm to nucleus or vice versa in the case of the nuclear pore importins). Cell metabolism In many biosynthetic processes enzymes interact with each other to produce small compounds or other macromolecules. Muscle contraction Physiology of muscle contraction involves several interactions. Myosin filaments act as molecular motors and by binding to actin enables filament sliding. 
Furthermore, members of the skeletal muscle lipid droplet-associated proteins family associate with other proteins, as activator of adipose triglyceride lipase and its coactivator comparative gene identification-58, to regulate lipolysis in skeletal muscle Types To describe the types of protein–protein interactions (PPIs) it is important to consider that proteins can interact in a "transient" way (to produce some specific effect in a short time, like signal transduction) or to interact with other proteins in a "stable" way to form complexes that become molecular machines within the living systems. A protein complex assembly can result in the formation of homo-oligomeric or hetero-oligomeric complexes. In addition to the conventional complexes, as enzyme-inhibitor and antibody-antigen, interactions can also be established between domain-domain and domain-peptide. Another important distinction to identify protein–protein interactions is the way they have been determined, since there are techniques that measure direct physical interactions between protein pairs, named “binary” methods, while there are other techniques that measure physical interactions among groups of proteins, without pairwise determination of protein partners, named “co-complex” methods. Homo-oligomers vs. hetero-oligomers Homo-oligomers are macromolecular complexes constituted by only one type of protein subunit. Protein subunits assembly is guided by the establishment of non-covalent interactions in the quaternary structure of the protein. Disruption of homo-oligomers in order to return to the initial individual monomers often requires denaturation of the complex. Several enzymes, carrier proteins, scaffolding proteins, and transcriptional regulatory factors carry out their functions as homo-oligomers. Distinct protein subunits interact in hetero-oligomers, which are essential to control several cellular functions. The importance of the communication between heterologous proteins is even more evident during cell signaling events and such interactions are only possible due to structural domains within the proteins (as described below). Stable interactions vs. transient interactions Stable interactions involve proteins that interact for a long time, taking part of permanent complexes as subunits, in order to carry out functional roles. These are usually the case of homo-oligomers (e.g. cytochrome c), and some hetero-oligomeric proteins, as the subunits of ATPase. On the other hand, a protein may interact briefly and in a reversible manner with other proteins in only certain cellular contexts – cell type, cell cycle stage, external factors, presence of other binding proteins, etc. – as it happens with most of the proteins involved in biochemical cascades. These are called transient interactions. For example, some G protein–coupled receptors only transiently bind to Gi/o proteins when they are activated by extracellular ligands, while some Gq-coupled receptors, such as muscarinic receptor M3, pre-couple with Gq proteins prior to the receptor-ligand binding. Interactions between intrinsically disordered protein regions to globular protein domains (i.e. MoRFs) are transient interactions. Covalent vs. non-covalent Covalent interactions are those with the strongest association and are formed by disulphide bonds or electron sharing. While rare, these interactions are determinant in some posttranslational modifications, as ubiquitination and SUMOylation. 
Non-covalent bonds are usually established during transient interactions by the combination of weaker bonds, such as hydrogen bonds, ionic interactions, Van der Waals forces, or hydrophobic bonds. Role of water Water molecules play a significant role in the interactions between proteins. The crystal structures of complexes, obtained at high resolution from different but homologous proteins, have shown that some interface water molecules are conserved between homologous complexes. The majority of the interface water molecules make hydrogen bonds with both partners of each complex. Some interface amino acid residues or atomic groups of one protein partner engage in both direct and water mediated interactions with the other protein partner. Doubly indirect interactions, mediated by two water molecules, are more numerous in the homologous complexes of low affinity. Carefully conducted mutagenesis experiments, e.g. changing a tyrosine residue into a phenylalanine, have shown that water mediated interactions can contribute to the energy of interaction. Thus, water molecules may facilitate the interactions and cross-recognitions between proteins. Structure The molecular structures of many protein complexes have been unlocked by the technique of X-ray crystallography. The first structure to be solved by this method was that of sperm whale myoglobin by Sir John Cowdery Kendrew. In this technique the angles and intensities of a beam of X-rays diffracted by crystalline atoms are detected in a film, thus producing a three-dimensional picture of the density of electrons within the crystal. Later, nuclear magnetic resonance also started to be applied with the aim of unravelling the molecular structure of protein complexes. One of the first examples was the structure of calmodulin-binding domains bound to calmodulin. This technique is based on the study of magnetic properties of atomic nuclei, thus determining physical and chemical properties of the correspondent atoms or the molecules. Nuclear magnetic resonance is advantageous for characterizing weak PPIs. Protein-protein interaction domains Some proteins have specific structural domains or sequence motifs that provide binding to other proteins. Here are some examples of such domains: Src homology 2 (SH2) domain SH2 domains are structurally composed by three-stranded twisted beta sheet sandwiched flanked by two alpha-helices. The existence of a deep binding pocket with high affinity for phosphotyrosine, but not for phosphoserine or phosphothreonine, is essential for the recognition of tyrosine phosphorylated proteins, mainly autophosphorylated growth factor receptors. Growth factor receptor binding proteins and phospholipase Cγ are examples of proteins that have SH2 domains. Src homology 3 (SH3) domain Structurally, SH3 domains are constituted by a beta barrel formed by two orthogonal beta sheets and three anti-parallel beta strands. These domains recognize proline enriched sequences, as polyproline type II helical structure (PXXP motifs) in cell signaling proteins like protein tyrosine kinases and the growth factor receptor bound protein 2 (Grb2). Phosphotyrosine-binding (PTB) domain PTB domains interact with sequences that contain a phosphotyrosine group. These domains can be found in the insulin receptor substrate. LIM domain LIM domains were initially identified in three homeodomain transcription factors (lin11, is11, and mec3). 
In addition to this homeodomain proteins and other proteins involved in development, LIM domains have also been identified in non-homeodomain proteins with relevant roles in cellular differentiation, association with cytoskeleton and senescence. These domains contain a tandem cysteine-rich Zn2+-finger motif and embrace the consensus sequence CX2CX16-23HX2CX2CX2CX16-21CX2C/H/D. LIM domains bind to PDZ domains, bHLH transcription factors, and other LIM domains. Sterile alpha motif (SAM) domain SAM domains are composed by five helices forming a compact package with a conserved hydrophobic core. These domains, which can be found in the Eph receptor and the stromal interaction molecule (STIM) for example, bind to non-SAM domain-containing proteins and they also appear to have the ability to bind RNA. PDZ domain PDZ domains were first identified in three guanylate kinases: PSD-95, DlgA and ZO-1. These domains recognize carboxy-terminal tri-peptide motifs (S/TXV), other PDZ domains or LIM domains and bind them through a short peptide sequence that has a C-terminal hydrophobic residue. Some of the proteins identified as having PDZ domains are scaffolding proteins or seem to be involved in ion receptor assembling and receptor-enzyme complexes formation. FERM domain FERM domains contain basic residues capable of binding PtdIns(4,5)P2. Talin and focal adhesion kinase (FAK) are two of the proteins that present FERM domains. Calponin homology (CH) domain CH domains are mainly present in cytoskeletal proteins as parvin. Pleckstrin homology domain Pleckstrin homology domains bind to phosphoinositides and acid domains in signaling proteins. WW domain WW domains bind to proline enriched sequences. WSxWS motif Found in cytokine receptors Properties of the interface The study of the molecular structure can give fine details about the interface that enables the interaction between proteins. When characterizing PPI interfaces it is important to take into account the type of complex. Parameters evaluated include size (measured in absolute dimensions Å2 or in solvent-accessible surface area (SASA)), shape, complementarity between surfaces, residue interface propensities, hydrophobicity, segmentation and secondary structure, and conformational changes on complex formation. The great majority of PPI interfaces reflects the composition of protein surfaces, rather than the protein cores, in spite of being frequently enriched in hydrophobic residues, particularly in aromatic residues. PPI interfaces are dynamic and frequently planar, although they can be globular and protruding as well. Based on three structures – insulin dimer, trypsin-pancreatic trypsin inhibitor complex, and oxyhaemoglobin – Cyrus Chothia and Joel Janin found that between 1,130 and 1,720 Å2 of surface area was removed from contact with water indicating that hydrophobicity is a major factor of stabilization of PPIs. Later studies refined the buried surface area of the majority of interactions to 1,600±350 Å2. However, much larger interaction interfaces were also observed and were associated with significant changes in conformation of one of the interaction partners. PPIs interfaces exhibit both shape and electrostatic complementarity. Regulation Protein concentration, which in turn are affected by expression levels and degradation rates; Protein affinity for proteins or other binding ligands; Ligands concentrations (substrates, ions, etc.); Presence of other proteins, nucleic acids, and ions; Electric fields around proteins. 
Occurrence of covalent modifications; Experimental methods There are a multitude of methods to detect them. Each of the approaches has its own strengths and weaknesses, especially with regard to the sensitivity and specificity of the method. The most conventional and widely used high-throughput methods are yeast two-hybrid screening and affinity purification coupled to mass spectrometry. Yeast two-hybrid screening This system was firstly described in 1989 by Fields and Song using Saccharomyces cerevisiae as biological model. Yeast two hybrid allows the identification of pairwise PPIs (binary method) in vivo, in which the two proteins are tested for biophysically direct interaction. The Y2H is based on the functional reconstitution of the yeast transcription factor Gal4 and subsequent activation of a selective reporter such as His3. To test two proteins for interaction, two protein expression constructs are made: one protein (X) is fused to the Gal4 DNA-binding domain (DB) and a second protein (Y) is fused to the Gal4 activation domain (AD). In the assay, yeast cells are transformed with these constructs. Transcription of reporter genes does not occur unless bait (DB-X) and prey (AD-Y) interact with each other and form a functional Gal4 transcription factor. Thus, the interaction between proteins can be inferred by the presence of the products resultant of the reporter gene expression. In cases in which the reporter gene expresses enzymes that allow the yeast to synthesize essential amino acids or nucleotides, yeast growth under selective media conditions indicates that the two proteins tested are interacting. Recently, software to detect and prioritize protein interactions was published. Despite its usefulness, the yeast two-hybrid system has limitations. It uses yeast as main host system, which can be a problem when studying proteins that contain mammalian-specific post-translational modifications. The number of PPIs identified is usually low because of a high false negative rate; and, understates membrane proteins, for example. In initial studies that utilized Y2H, proper controls for false positives (e.g. when DB-X activates the reporter gene without the presence of AD-Y) were frequently not done, leading to a higher than normal false positive rate. An empirical framework must be implemented to control for these false positives. Limitations in lower coverage of membrane proteins have been overcoming by the emergence of yeast two-hybrid variants, such as the membrane yeast two-hybrid (MYTH) and the split-ubiquitin system, which are not limited to interactions that occur in the nucleus; and, the bacterial two-hybrid system, performed in bacteria; Affinity purification coupled to mass spectrometry Affinity purification coupled to mass spectrometry mostly detects stable interactions and thus better indicates functional in vivo PPIs. This method starts by purification of the tagged protein, which is expressed in the cell usually at in vivo concentrations, and its interacting proteins (affinity purification). One of the most advantageous and widely used methods to purify proteins with very low contaminating background is the tandem affinity purification, developed by Bertrand Seraphin and Matthias Mann and respective colleagues. PPIs can then be analysed by mass spectrometry using different methods: chemical incorporation, biological or metabolic incorporation (SILAC), and label-free methods. 
Furthermore, network theory has been used to study the whole set of identified protein–protein interactions in cells. Nucleic acid programmable protein array (NAPPA) This system was first developed by LaBaer and colleagues in 2004 by using in vitro transcription and translation system. They use DNA template encoding the gene of interest fused with GST protein, and it was immobilized in the solid surface. Anti-GST antibody and biotinylated plasmid DNA were bounded in aminopropyltriethoxysilane (APTES)-coated slide. BSA can improve the binding efficiency of DNA. Biotinylated plasmid DNA was bound by avidin. New protein was synthesized by using cell-free expression system i.e. rabbit reticulocyte lysate (RRL), and then the new protein was captured through anti-GST antibody bounded on the slide. To test protein–protein interaction, the targeted protein cDNA and query protein cDNA were immobilized in a same coated slide. By using in vitro transcription and translation system, targeted and query protein was synthesized by the same extract. The targeted protein was bound to array by antibody coated in the slide and query protein was used to probe the array. The query protein was tagged with hemagglutinin (HA) epitope. Thus, the interaction between the two proteins was visualized with the antibody against HA. Intragenic complementation When multiple copies of a polypeptide encoded by a gene form a complex, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation (also called inter-allelic complementation). Intragenic complementation has been demonstrated in many different genes in a variety of organisms including the fungi Neurospora crassa, Saccharomyces cerevisiae and Schizosaccharomyces pombe; the bacterium Salmonella typhimurium; the virus bacteriophage T4, an RNA virus and humans. In such studies, numerous mutations defective in the same gene were often isolated and mapped in a linear order on the basis of recombination frequencies to form a genetic map of the gene. Separately, the mutants were tested in pairwise combinations to measure complementation. An analysis of the results from such studies led to the conclusion that intragenic complementation, in general, arises from the interaction of differently defective polypeptide monomers to form a multimer. Genes that encode multimer-forming polypeptides appear to be common. One interpretation of the data is that polypeptide monomers are often aligned in the multimer in such a way that mutant polypeptides defective at nearby sites in the genetic map tend to form a mixed multimer that functions poorly, whereas mutant polypeptides defective at distant sites tend to form a mixed multimer that functions more effectively. Direct interaction of two nascent proteins emerging from nearby ribosomes appears to be a general mechanism for homo-oligomer (multimer) formation. Hundreds of protein oligomers were identified that assemble in human cells by such an interaction. The most prevalent form of interaction is between the N-terminal regions of the interacting proteins. Dimer formation appears to be able to occur independently of dedicated assembly machines. 
The intermolecular forces likely responsible for self-recognition and multimer formation were discussed by Jehle. Other potential methods Diverse techniques to identify PPIs have been emerging along with technology progression. These include co-immunoprecipitation, protein microarrays, analytical ultracentrifugation, light scattering, fluorescence spectroscopy, luminescence-based mammalian interactome mapping (LUMIER), resonance-energy transfer systems, mammalian protein–protein interaction trap, electro-switchable biosurfaces, protein–fragment complementation assay, as well as real-time label-free measurements by surface plasmon resonance, and calorimetry. Computational methods Computational prediction of protein–protein interactions The experimental detection and characterization of PPIs is labor-intensive and time-consuming. However, many PPIs can be also predicted computationally, usually using experimental data as a starting point. However, methods have also been developed that allow the prediction of PPI de novo, that is without prior evidence for these interactions. Genomic context methods The Rosetta Stone or Domain Fusion method is based on the hypothesis that interacting proteins are sometimes fused into a single protein in another genome. Therefore, we can predict if two proteins may be interacting by determining if they each have non-overlapping sequence similarity to a region of a single protein sequence in another genome. The Conserved Neighborhood method is based on the hypothesis that if genes encoding two proteins are neighbors on a chromosome in many genomes, then they are likely functionally related (and possibly physically interacting). The Phylogenetic Profile method is based on the hypothesis that if two or more proteins are concurrently present or absent across several genomes, then they are likely functionally related. Therefore, potentially interacting proteins can be identified by determining the presence or absence of genes across many genomes and selecting those genes which are always present or absent together. Text mining methods Publicly available information from biomedical documents is readily accessible through the internet and is becoming a powerful resource for collecting known protein–protein interactions (PPIs), PPI prediction and protein docking. Text mining is much less costly and time-consuming compared to other high-throughput techniques. Currently, text mining methods generally detect binary relations between interacting proteins from individual sentences using rule/pattern-based information extraction and machine learning approaches. A wide variety of text mining applications for PPI extraction and/or prediction are available for public use, as well as repositories which often store manually validated and/or computationally predicted PPIs. Text mining can be implemented in two stages: information retrieval, where texts containing names of either or both interacting proteins are retrieved and information extraction, where targeted information (interacting proteins, implicated residues, interaction types, etc.) is extracted. There are also studies using phylogenetic profiling, basing their functionalities on the theory that proteins involved in common pathways co-evolve in a correlated fashion across species. Some more complex text mining methodologies use advanced Natural Language Processing (NLP) techniques and build knowledge networks (for example, considering gene names as nodes and verbs as edges). 
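Of the genomic-context approaches above, the Phylogenetic Profile method is simple enough to sketch directly. The protein names and presence/absence profiles below are invented toy data; in practice the profiles would be built from orthology calls across sequenced genomes.

```python
from itertools import combinations

# Toy phylogenetic profiles: 1 if an ortholog is present in a genome, 0 if absent.
# Columns are eight hypothetical genomes; protein names are placeholders.
profiles = {
    "ProtA": (1, 1, 0, 1, 0, 1, 1, 0),
    "ProtB": (1, 1, 0, 1, 0, 1, 1, 0),   # identical to ProtA -> candidate partner
    "ProtC": (0, 1, 1, 0, 1, 0, 0, 1),
    "ProtD": (1, 0, 0, 1, 0, 1, 1, 0),   # one mismatch with ProtA
}

def profile_similarity(p, q):
    """Fraction of genomes in which the two proteins co-occur or are co-absent."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

# Rank protein pairs by profile similarity; a high score suggests (but does
# not prove) a functional or physical association.
pairs = sorted(combinations(profiles, 2),
               key=lambda pq: profile_similarity(profiles[pq[0]], profiles[pq[1]]),
               reverse=True)
for a, b in pairs:
    print(a, b, profile_similarity(profiles[a], profiles[b]))
```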
Other developments involve kernel methods to predict protein interactions. Machine learning methods Many computational methods have been suggested and reviewed for predicting protein–protein interactions. Prediction approaches can be grouped into categories based on predictive evidence: protein sequence, comparative genomics, protein domains, protein tertiary structure, and interaction network topology. The construction of a positive set (known interacting protein pairs) and a negative set (non-interacting protein pairs) is needed for the development of a computational prediction model. Prediction models using machine learning techniques can be broadly classified into two main groups: supervised and unsupervised, based on the labeling of input variables according to the expected outcome. In 2005, integral membrane proteins of Saccharomyces cerevisiae were analyzed using the mating-based ubiquitin system (mbSUS). The system detects membrane proteins interactions with extracellular signaling proteins Of the 705 integral membrane proteins 1,985 different interactions were traced that involved 536 proteins. To sort and classify interactions a support vector machine was used to define high medium and low confidence interactions. The split-ubiquitin membrane yeast two-hybrid system uses transcriptional reporters to identify yeast transformants that encode pairs of interacting proteins. In 2006, random forest, an example of a supervised technique, was found to be the most-effective machine learning method for protein interaction prediction. Such methods have been applied for discovering protein interactions on human interactome, specifically the interactome of Membrane proteins and the interactome of Schizophrenia-associated proteins. As of 2020, a model using residue cluster classes (RCCs), constructed from the 3DID and Negatome databases, resulted in 96-99% correctly classified instances of protein–protein interactions. RCCs are a computational vector space that mimics protein fold space and includes all simultaneously contacted residue sets, which can be used to analyze protein structure-function relation and evolution. Databases Large scale identification of PPIs generated hundreds of thousands of interactions, which were collected together in specialized biological databases that are continuously updated in order to provide complete interactomes. The first of these databases was the Database of Interacting Proteins (DIP). Primary databases collect information about published PPIs proven to exist via small-scale or large-scale experimental methods. Examples: DIP, Biomolecular Interaction Network Database (BIND), Biological General Repository for Interaction Datasets (BioGRID), Human Protein Reference Database (HPRD), IntAct Molecular Interaction Database, Molecular Interactions Database (MINT), MIPS Protein Interaction Resource on Yeast (MIPS-MPact), and MIPS Mammalian Protein–Protein Interaction Database (MIPS-MPPI).< Meta-databases normally result from the integration of primary databases information, but can also collect some original data. Prediction databases include many PPIs that are predicted using several techniques (main article). Examples: Human Protein–Protein Interaction Prediction Database (PIPs), Interlogous Interaction Database (I2D), Known and Predicted Protein–Protein Interactions (STRING-db), and Unified Human Interactive (UniHI). 
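As a minimal illustration of the supervised setting described above (the features and labels below are entirely synthetic, not derived from any interactome), one can train a random forest on feature vectors for protein pairs drawn from a positive set and a negative set, and then score held-out pairs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Synthetic stand-in features for protein pairs (e.g. co-expression, shared
# domains, co-localization scores); real pipelines compute these from data.
n_pairs, n_features = 2000, 12
X = rng.standard_normal((n_pairs, n_features))
# Synthetic labels: "interacting" pairs driven by two of the features plus noise.
y = (X[:, 0] + 0.8 * X[:, 3] + 0.5 * rng.standard_normal(n_pairs) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print("ROC AUC on held-out pairs:", roc_auc_score(y_te, scores))
```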
The aforementioned computational methods all depend on source databases whose data can be extrapolated to predict novel protein–protein interactions. Coverage differs greatly between databases. In general, primary databases have the fewest total protein interactions recorded as they do not integrate data from multiple other databases, while prediction databases have the most because they include other forms of evidence in addition to experimental. For example, the primary database IntAct has 572,063 interactions, the meta-database APID has 678,000 interactions, and the predictive database STRING has 25,914,693 interactions. However, it is important to note that some of the interactions in the STRING database are only predicted by computational methods such as Genomic Context and not experimentally verified. Interaction networks Information found in PPIs databases supports the construction of interaction networks. Although the PPI network of a given query protein can be represented in textbooks, diagrams of whole cell PPIs are frankly complex and difficult to generate. One example of a manually produced molecular interaction map is the Kurt Kohn's 1999 map of cell cycle control. Drawing on Kohn's map, Schwikowski et al. in 2000 published a paper on PPIs in yeast, linking 1,548 interacting proteins determined by two-hybrid screening. They used a layered graph drawing method to find an initial placement of the nodes and then improved the layout using a force-based algorithm. Bioinformatic tools have been developed to simplify the difficult task of visualizing molecular interaction networks and complement them with other types of data. For instance, Cytoscape is an open-source software widely used and many plugins are currently available. Pajek software is advantageous for the visualization and analysis of very large networks. Identification of functional modules in PPI networks is an important challenge in bioinformatics. Functional modules means a set of proteins that are highly connected to each other in PPI network. It is almost similar problem as community detection in social networks. There are some methods such as Jactive modules and MoBaS. Jactive modules integrate PPI network and gene expression data where as MoBaS integrate PPI network and Genome Wide association Studies. protein–protein relationships are often the result of multiple types of interactions or are deduced from different approaches, including co-localization, direct interaction, suppressive genetic interaction, additive genetic interaction, physical association, and other associations. Signed interaction networks Protein–protein interactions often result in one of the interacting proteins either being 'activated' or 'repressed'. Such effects can be indicated in a PPI network by "signs" (e.g. "activation" or "inhibition"). Although such attributes have been added to networks for a long time, Vinayagam et al. (2014) coined the term Signed network for them. Signed networks are often expressed by labeling the interaction as either positive or negative. A positive interaction is one where the interaction results in one of the proteins being activated. Conversely, a negative interaction indicates that one of the proteins being inactivated. Protein–protein interaction networks are often constructed as a result of lab experiments such as yeast two-hybrid screens or 'affinity purification and subsequent mass spectrometry techniques. 
However these methods do not provide the layer of information needed in order to determine what type of interaction is present in order to be able to attribute signs to the network diagrams. RNA interference screens RNA interference (RNAi) screens (repression of individual proteins between transcription and translation) are one method that can be utilized in the process of providing signs to the protein–protein interactions. Individual proteins are repressed and the resulting phenotypes are analyzed. A correlating phenotypic relationship (i.e. where the inhibition of either of two proteins results in the same phenotype) indicates a positive, or activating relationship. Phenotypes that do not correlate (i.e. where the inhibition of either of two proteins results in two different phenotypes) indicate a negative or inactivating relationship. If protein A is dependent on protein B for activation then the inhibition of either protein A or B will result in a cell losing the service that is provided by protein A and the phenotypes will be the same for the inhibition of either A or B. If, however, protein A is inactivated by protein B then the phenotypes will differ depending on which protein is inhibited (inhibit protein B and it can no longer inactivate protein A leaving A active however inactivate A and there is nothing for B to activate since A is inactive and the phenotype changes). Multiple RNAi screens need to be performed in order to reliably appoint a sign to a given protein–protein interaction. Vinayagam et al. who devised this technique state that a minimum of nine RNAi screens are required with confidence increasing as one carries out more screens. As therapeutic targets Modulation of PPI is challenging and is receiving increasing attention by the scientific community. Several properties of PPI such as allosteric sites and hotspots, have been incorporated into drug-design strategies. Nevertheless, very few PPIs are directly targeted by FDA-approved small-molecule PPI inhibitors, emphasizing a huge untapped opportunity for drug discovery. In 2014, Amit Jaiswal and others were able to develop 30 peptides to inhibit recruitment of telomerase towards telomeres by utilizing protein–protein interaction studies. Arkin and others were able to develop antibody fragment-based inhibitors to regulate specific protein-protein interactions. As the "modulation" of PPIs not only includes the inhibition, but also the stabilization of quaternary protein complexes, molecules with this mechanism of action (so called molecular glues) are also intensively studied. Examples Tirobifan, inhibitor of the glycoprotein IIb/IIIa, used as a cardiovascular drug Maraviroc, inhibitor of the CCR5-gp120 interaction, used as anti-HIV drug. AMG-176, AZD5991, S64315, inhibitors of myeloid cell leukemia 1 (Mcl-1) protein and its interactions See also Glycan-protein interactions 3did Allostery Biological network Biological machines DIMA (database) Enzyme catalysis HitPredict Human interactome IsoBase Multiprotein complex Protein domain dynamics Protein flexibility Protein structure Protein–protein interaction prediction Protein–protein interaction screening Systems biology References Further reading External links Protein–Protein Interaction Databases Library of Modulators of Protein–Protein Interactions (PPI) Proteomics Signal transduction Biophysics Biochemistry methods Biotechnology Quantum biochemistry Protein–protein interaction assays Protein complexes
Protein–protein interaction
[ "Physics", "Chemistry", "Biology" ]
6,822
[ "Biochemistry methods", "Protein–protein interaction assays", "Quantum chemistry", "Applied and interdisciplinary physics", "Biochemistry", "Quantum mechanics", "Biotechnology", "Signal transduction", "Theoretical chemistry", "Biophysics", " molecular", "nan", "Atomic", "Neurochemistry", ...
2,162,692
https://en.wikipedia.org/wiki/Rapid%20thermal%20processing
Rapid thermal processing (RTP) is a semiconductor manufacturing process which heats silicon wafers to temperatures exceeding 1,000 °C for not more than a few seconds. During cooling, wafer temperatures must be brought down slowly to prevent dislocations and wafer breakage due to thermal shock. Such rapid heating rates are often attained by high-intensity lamps or lasers. These processes are used for a wide variety of applications in semiconductor manufacturing including dopant activation, thermal oxidation, metal reflow and chemical vapor deposition. Temperature control One of the key challenges in rapid thermal processing is accurate measurement and control of the wafer temperature. Monitoring the wafer temperature indirectly, with a thermocouple in the surrounding ambient, is unreliable because the high temperature ramp rates prevent the wafer from coming to thermal equilibrium with the process chamber. One temperature control strategy therefore involves in situ pyrometry to effect real-time control. Rapid thermal anneal Rapid thermal anneal (RTA) in rapid thermal processing is a process used in semiconductor device fabrication which involves heating a single wafer at a time in order to affect its electrical properties. Unique heat treatments are designed for different effects. Wafers can be heated in order to activate dopants, change film-to-film or film-to-wafer substrate interfaces, densify deposited films, change states of grown films, repair damage from ion implantation, move dopants or drive dopants from one film into another or from a film into the wafer substrate. Rapid thermal anneals are performed by equipment that heats a single wafer at a time using either lamp-based heating, a hot chuck, or a hot plate that a wafer is brought near. Unlike furnace anneals, they are of short duration, processing each wafer in several minutes. To achieve short annealing times and quick throughput, sacrifices are made in temperature and process uniformity, in temperature measurement and control, and in wafer stress. RTP-like processing, in which the semiconductor sample is heated by absorbing optical radiation, has also found applications in another rapidly growing field: solar cell fabrication. It is used for many solar cell fabrication steps, including phosphorus diffusion for N/P junction formation and impurity gettering, hydrogen diffusion for impurity and defect passivation, and formation of screen-printed contacts using Ag ink for the front and Al ink for the back contacts, respectively. See also Tamman and Hüttig temperature References External links IEEE RTP Conference Proceedings RTP-Technology Different Heating Systems with Microwaves/Plasma Semiconductor device fabrication
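A rough feel for the ramp and cool-down rates described above can be had from a lumped-capacitance sketch of lamp heating; every number in it (wafer heat capacity, absorbed lamp power, emissivity) is an assumed round figure rather than data for any particular tool.

```python
import numpy as np

# Assumed round numbers for a 200 mm silicon wafer; not tool-specific data.
mass_cp = 0.05 * 700.0          # wafer mass [kg] * specific heat [J/(kg*K)]
area = 2 * np.pi * 0.1**2       # radiating area, both faces [m^2]
emissivity, sigma = 0.7, 5.67e-8
T_amb = 300.0                   # chamber ambient [K]

def simulate(power_absorbed, seconds, T0, dt=0.01):
    """Euler integration of C*dT/dt = P_absorbed - eps*sigma*A*(T^4 - T_amb^4)."""
    T = T0
    for _ in range(int(seconds / dt)):
        net = power_absorbed - emissivity * sigma * area * (T**4 - T_amb**4)
        T += dt * net / mass_cp
    return T

T_peak = simulate(power_absorbed=15000.0, seconds=5.0, T0=T_amb)   # lamps on, ~5 s spike
T_after = simulate(power_absorbed=0.0, seconds=30.0, T0=T_peak)    # radiative cool-down
print(f"after 5 s of lamp power: {T_peak:.0f} K")
print(f"30 s after lamps off:    {T_after:.0f} K")
```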
Rapid thermal processing
[ "Materials_science" ]
526
[ "Semiconductor device fabrication", "Microtechnology" ]
2,163,562
https://en.wikipedia.org/wiki/Peaucellier%E2%80%93Lipkin%20linkage
The Peaucellier–Lipkin linkage (or Peaucellier–Lipkin cell, or Peaucellier–Lipkin inversor), invented in 1864, was the first true planar straight line mechanism – the first planar linkage capable of transforming rotary motion into perfect straight-line motion, and vice versa. It is named after Charles-Nicolas Peaucellier (1832–1913), a French army officer, and Yom Tov Lipman Lipkin (1846–1876), a Lithuanian Jew and son of the famed Rabbi Israel Salanter. Until this invention, no planar method existed of converting exact straight-line motion to circular motion, without reference guideways. In 1864, all power came from steam engines, which had a piston moving in a straight-line up and down a cylinder. This piston needed to keep a good seal with the cylinder in order to retain the driving medium, and not lose energy efficiency due to leaks. The piston does this by remaining perpendicular to the axis of the cylinder, retaining its straight-line motion. Converting the straight-line motion of the piston into circular motion was of critical importance. Most, if not all, applications of these steam engines, were rotary. The mathematics of the Peaucellier–Lipkin linkage is directly related to the inversion of a circle. Earlier Sarrus linkage There is an earlier straight-line mechanism, whose history is not well known, called the Sarrus linkage. This linkage predates the Peaucellier–Lipkin linkage by 11 years and consists of a series of hinged rectangular plates, two of which remain parallel but can be moved normally to each other. Sarrus' linkage is of a three-dimensional class sometimes known as a space crank, unlike the Peaucellier–Lipkin linkage which is a planar mechanism. Geometry In the geometric diagram of the apparatus, six bars of fixed length can be seen: , , , , , . The length of is equal to the length of , and the lengths of , , , and are all equal forming a rhombus. Also, point is fixed. Then, if point is constrained to move along a circle (for example, by attaching it to a bar with a length halfway between and ; path shown in red) which passes through , then point will necessarily have to move along a straight line (shown in blue). In contrast, if point were constrained to move along a line (not passing through ), then point would necessarily have to move along a circle (passing through ). Mathematical proof of concept Collinearity First, it must be proven that points , , are collinear. This may be easily seen by observing that the linkage is mirror-symmetric about line , so point must fall on that line. More formally, triangles and are congruent because side is congruent to itself, side is congruent to side , and side is congruent to side . Therefore, angles and are equal. Next, triangles and are congruent, since sides and are congruent, side is congruent to itself, and sides and are congruent. Therefore, angles and are equal. Finally, because they form a complete circle, we have but, due to the congruences, and , thus therefore points , , and are collinear. Inverse points Let point be the intersection of lines and . Then, since is a rhombus, is the midpoint of both line segments and . Therefore, length = length . Triangle is congruent to triangle , because side is congruent to side , side is congruent to itself, and side is congruent to side . Therefore, angle = angle . But since , then , , and . 
Let: Then: (due to the Pythagorean theorem) (same expression expanded) (Pythagorean theorem) Since and are both fixed lengths, then the product of and is a constant: and since points , , are collinear, then is the inverse of with respect to the circle with center and radius . Inversive geometry Thus, by the properties of inversive geometry, since the figure traced by point is the inverse of the figure traced by point , if traces a circle passing through the center of inversion , then is constrained to trace a straight line. But if traces a straight line not passing through , then must trace an arc of a circle passing through . Q.E.D. A typical driver Peaucellier–Lipkin linkages (PLLs) may have several inversions. A typical example is shown in the opposite figure, in which a rocker-slider four-bar serves as the input driver. To be precise, the slider acts as the input, which in turn drives the right grounded link of the PLL, thus driving the entire PLL. Historical notes Sylvester (Collected Works, Vol. 3, Paper 2) writes that when he showed a model to Kelvin, he “nursed it as if it had been his own child, and when a motion was made to relieve him of it, replied ‘No! I have not had nearly enough of it—it is the most beautiful thing I have ever seen in my life.’” Cultural references A monumental-scale sculpture implementing the linkage in illuminated struts is on permanent exhibition in Eindhoven, Netherlands. The artwork measures , weighs , and can be operated from a control panel accessible to the general public. See also Linkage (mechanical) Straight line mechanism References Bibliography — proof and discussion of Peaucellier–Lipkin linkage, mathematical and real-world mechanical models (and references cited therein) Hartenberg, R.S. & J. Denavit (1964) Kinematic synthesis of linkages, pp 181–5, New York: McGraw–Hill, weblink from Cornell University. External links How to Draw a Straight Line, online video clips of linkages with interactive applets. How to Draw a Straight Line, historical discussion of linkage design Interactive Java Applet with proof. Java animated Peaucellier–Lipkin linkage Jewish Encyclopedia article on Lippman Lipkin and his father Israel Salanter Peaucellier Apparatus features an interactive applet A simulation using the Molecular Workbench software A related linkage called Hart's Inversor. Modified Peaucellier robotic arm linkage (Vex Team 1508 video) Linkages (mechanical) Articles containing proofs Linear motion Straight line mechanisms
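The inversion relation derived above is easy to check numerically. In the sketch below the labels are hypothetical stand-ins for the linkage points: O is the fixed pivot at the origin, L is the length of the two long bars, r is the side of the rhombus, P is the point driven around a circle passing through O, and Q is the output point; the particular lengths are arbitrary assumptions.

```python
import numpy as np

# Numerical check of the inversion property |OP| * |OQ| = L**2 - r**2 and of the
# straight-line output of the Peaucellier-Lipkin linkage.
# O is the fixed pivot at the origin; P travels on a circle of radius rho through O.
L, r, rho = 5.0, 2.0, 1.5
k = L**2 - r**2                                  # inversion constant

# Sweep part of the driving circle, keeping P away from the pivot O itself.
theta = np.linspace(0.2, np.pi - 0.2, 200)
p = np.stack([rho + rho * np.cos(theta), rho * np.sin(theta)], axis=1)

# The inverse point Q lies on the ray OP with |OP| * |OQ| = k.
q = k * p / np.sum(p**2, axis=1, keepdims=True)

print("|OP|*|OQ| constant:", np.allclose(np.linalg.norm(p, axis=1) * np.linalg.norm(q, axis=1), k))
print("Q traces the vertical line x = k/(2*rho):", np.allclose(q[:, 0], k / (2 * rho)))
```

Because |OP|·|OQ| = L² − r² and P stays on a circle through O, the x-coordinate of Q is constant — which is exactly the straight-line output of the linkage.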
Peaucellier–Lipkin linkage
[ "Physics", "Mathematics" ]
1,341
[ "Articles containing proofs", "Physical phenomena", "Motion (physics)", "Linear motion" ]
2,164,886
https://en.wikipedia.org/wiki/Homological%20mirror%20symmetry
Homological mirror symmetry is a mathematical conjecture made by Maxim Kontsevich. It seeks a systematic mathematical explanation for a phenomenon called mirror symmetry first observed by physicists studying string theory. History In an address to the 1994 International Congress of Mathematicians in Zürich, Kontsevich speculated that mirror symmetry for a pair of Calabi–Yau manifolds X and Y could be explained as an equivalence of a triangulated category constructed from the algebraic geometry of X (the derived category of coherent sheaves on X) and another triangulated category constructed from the symplectic geometry of Y (the derived Fukaya category). Edward Witten originally described the topological twisting of the N=(2,2) supersymmetric field theory into what he called the A and B model topological string theories. These models concern maps from Riemann surfaces into a fixed target—usually a Calabi–Yau manifold. Most of the mathematical predictions of mirror symmetry are embedded in the physical equivalence of the A-model on Y with the B-model on its mirror X. When the Riemann surfaces have empty boundary, they represent the worldsheets of closed strings. To cover the case of open strings, one must introduce boundary conditions to preserve the supersymmetry. In the A-model, these boundary conditions come in the form of Lagrangian submanifolds of Y with some additional structure (often called a brane structure). In the B-model, the boundary conditions come in the form of holomorphic (or algebraic) submanifolds of X with holomorphic (or algebraic) vector bundles on them. These are the objects one uses to build the relevant categories. They are often called A and B branes respectively. Morphisms in the categories are given by the massless spectrum of open strings stretching between two branes. The closed string A and B models only capture the so-called topological sector—a small portion of the full string theory. Similarly, the branes in these models are only topological approximations to the full dynamical objects that are D-branes. Even so, the mathematics resulting from this small piece of string theory has been both deep and difficult. The School of Mathematics at the Institute for Advanced Study in Princeton devoted a whole year to Homological Mirror Symmetry during the 2016-17 academic year. Among the participants were Paul Seidel from MIT, Maxim Kontsevich from IHÉS, and Denis Auroux from UC Berkeley. Examples Only in a few examples have mathematicians been able to verify the conjecture. In his seminal address, Kontsevich commented that the conjecture could be proved in the case of elliptic curves using theta functions. Following this route, Alexander Polishchuk and Eric Zaslow provided a proof of a version of the conjecture for elliptic curves. Kenji Fukaya was able to establish elements of the conjecture for abelian varieties. Later, Kontsevich and Yan Soibelman provided a proof of the majority of the conjecture for nonsingular torus bundles over affine manifolds using ideas from the SYZ conjecture. In 2003, Paul Seidel proved the conjecture in the case of the quartic surface. In 2002, the SYZ conjecture was explained in the context of the Hitchin system and Langlands duality. Hodge diamond The dimensions hp,q of spaces of harmonic (p,q)-differential forms (equivalently, the cohomology, i.e., closed forms modulo exact forms) are conventionally arranged in a diamond shape called the Hodge diamond.
These (p,q)-Betti numbers can be computed for complete intersections using a generating function described by Friedrich Hirzebruch. For a three-dimensional manifold, for example, the Hodge diamond has p and q ranging from 0 to 3. Mirror symmetry translates the dimension hp,q of the space of (p,q)-differential forms on the original manifold into hn−p,q for its mirror partner. Namely, for any Calabi–Yau manifold the Hodge diamond is unchanged by a rotation by π radians, and the Hodge diamonds of mirror Calabi–Yau manifolds are related by a rotation by π/2 radians. In the case of an elliptic curve, which is viewed as a 1-dimensional Calabi–Yau manifold, the Hodge diamond is especially simple: all of h0,0, h1,0, h0,1 and h1,1 are equal to 1. In the case of a K3 surface, which is viewed as a 2-dimensional Calabi–Yau manifold, the Betti numbers are {1, 0, 22, 0, 1}, and the middle row of the Hodge diamond reads 1, 20, 1. In the 3-dimensional case, usually called the Calabi–Yau manifold, a very interesting thing happens. There are sometimes mirror pairs, say M and W, that have Hodge diamonds symmetric to each other along a diagonal line. M and W correspond to the A- and B-model in string theory. Mirror symmetry not only exchanges the Hodge numbers but also relates the symplectic structure and complex structure of the mirror pair. That is the origin of homological mirror symmetry. In 1990–1991, a calculation of these numbers for a mirror pair of quintic threefolds had a major impact not only on enumerative algebraic geometry but on mathematics as a whole, and motivated the formulation of the homological mirror symmetry conjecture. The mirror pair of quintic threefolds in that work has Hodge diamonds that are reflections of each other: the quintic has h1,1 = 1 and h2,1 = 101, while its mirror has h1,1 = 101 and h2,1 = 1. See also Mirror symmetry conjecture - more mathematically based article Topological quantum field theory Category theory Floer homology Fukaya category Derived category Quintic threefold References Differential geometry Conjectures Symmetry Duality theories String theory
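The mirror relation between Hodge numbers stated above can be made concrete with a short script. The sketch below encodes the Hodge diamond of a Calabi–Yau threefold as a dictionary, fills in the quintic threefold and its mirror using the standard values h1,1 = 1, h2,1 = 101 (swapped for the mirror), and checks the relation hp,q(M) = hn−p,q(W).

```python
# Check the mirror relation h^{p,q}(M) = h^{n-p,q}(W) for the quintic threefold (n = 3)
# and its mirror, using the standard Hodge numbers h^{1,1} = 1, h^{2,1} = 101 for the quintic.

def hodge_diamond(h11, h21):
    """Hodge numbers of a simply connected Calabi-Yau threefold, keyed by (p, q)."""
    h = {(0, 0): 1, (3, 0): 1, (0, 3): 1, (3, 3): 1,
         (1, 1): h11, (2, 2): h11, (2, 1): h21, (1, 2): h21}
    for p in range(4):          # all remaining entries vanish
        for q in range(4):
            h.setdefault((p, q), 0)
    return h

n = 3
quintic = hodge_diamond(h11=1, h21=101)
mirror = hodge_diamond(h11=101, h21=1)

assert all(quintic[(p, q)] == mirror[(n - p, q)] for p in range(4) for q in range(4))
print("mirror relation h^{p,q}(M) = h^{n-p,q}(W) holds for all (p, q)")
```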
Homological mirror symmetry
[ "Physics", "Astronomy", "Mathematics" ]
1,141
[ "Astronomical hypotheses", "Unsolved problems in mathematics", "Mathematical structures", "Conjectures", "Category theory", "Duality theories", "Geometry", "String theory", "Mathematical problems", "Symmetry" ]
2,164,976
https://en.wikipedia.org/wiki/Dip-pen%20nanolithography
Dip pen nanolithography (DPN) is a scanning probe lithography technique where an atomic force microscope (AFM) tip is used to directly create patterns on a substrate. It can be done on a range of substances with a variety of inks. A common example of this technique is exemplified by the use of alkane thiolates to imprint onto a gold surface. This technique allows surface patterning on scales of under 100 nanometers. DPN is the nanotechnology analog of the dip pen (also called the quill pen), where the tip of an atomic force microscope cantilever acts as a "pen", which is coated with a chemical compound or mixture acting as an "ink", and put in contact with a substrate, the "paper". DPN enables direct deposition of nanoscale materials onto a substrate in a flexible manner. Recent advances have demonstrated massively parallel patterning using two-dimensional arrays of 55,000 tips. Applications of this technology currently range through chemistry, materials science, and the life sciences, and include such work as ultra high density biological nanoarrays, and additive photomask repair. Development The uncontrollable transfer of a molecular "ink" from a coated AFM tip to a substrate was first reported by Jaschke and Butt in 1995, but they erroneously concluded that alkanethiols could not be transferred to gold substrates to form stable nanostructures. A research group at Northwestern University, US led by Chad Mirkin independently studied the process and determined that under the appropriate conditions, molecules could be transferred to a wide variety of surfaces to create stable chemically-adsorbed monolayers in a high resolution lithographic process they termed "DPN". Mirkin and his coworkers hold the patents on this process, and the patterning technique has expanded to include liquid "inks". It is important to note that "liquid inks" are governed by a very different deposition mechanism when compared to "molecular inks". Deposition materials Molecular inks Molecular inks are typically composed of small molecules that are coated onto a DPN tip and are delivered to the surface through a water meniscus. In order to coat the tips, one can either vapor coat the tip or dip the tips into a dilute solution containing the molecular ink. If one dip-coats the tips, the solvent must be removed prior to deposition. The deposition rate of a molecular ink is dependent on the diffusion rate of the molecule, which is different for each molecule. The size of the feature is controlled by the tip/surface dwell-time (ranging from milliseconds to seconds) and the size of the water meniscus, which is determined by the humidity conditions (assuming the tip's radius of curvature is much smaller than the meniscus). Water meniscus mediated (exceptions do exist) Nanoscale feature resolution (50 nm to 2000 nm) No multiplexed depositions Each molecular ink is limited to its corresponding substrate Examples Alkane thiols written to gold Silanes (solid phase) written to glass or silicon Liquid inks Liquid inks can be any material that is liquid at deposition conditions. The liquid deposition properties are determined by the interactions between the liquid and the tip, the liquid and the surface, and the viscosity of the liquid itself. These interactions limit the minimum feature size of the liquid ink to about 1 micrometre, depending on the contact angle of the liquid. Higher viscosities offer greater control over feature size and are desirable. 
Unlike molecular inks, it is possible to perform multiplexed depositions using a carrier liquid. For example, using a viscous buffer, it is possible to directly deposit multiple proteins simultaneously. 1–10 micrometre feature resolution Multiplexed depositions Less restrictive ink/surface requirements Direct deposition of high viscosity materials Examples Protein, peptide, and DNA patterning Hydrogels Sol gels Conductive inks Lipids Silanes (liquid phase) written to glass or silicon Applications In order to define a good DPN application, it is important to understand what DPN can do that other techniques cannot. Direct-write techniques, like contact printing, can pattern multiple biological materials but it cannot create features with subcellular resolution. Many high-resolution lithography methods can pattern at sub-micrometre resolution, but these require high-cost equipment that were not designed for biomolecule deposition and cell culture. Microcontact printing can print biomolecules at ambient conditions, but it cannot pattern multiple materials with nanoscale registry. Industrial applications The following are some examples of how DPN is being applied to potential products. Biosensor Functionalization – Directly place multiple capture domains on a single biosensor device Nanoscale Sensor Fabrication – Small, high-value sensors that can detect multiple targets Nanoscale Protein Chips – High-density protein arrays with increased sensitivity Emerging applications Cell engineering DPN is emerging as a powerful research tool for manipulating cells at subcellular resolution Stem cell differentiation Subcellular drug delivery Cell sorting Surface gradients Subcellular ECM protein patterns Cell adhesion Rapid prototyping Plasmonics and Metamaterials Cell and tissue screening Properties Direct write DPN is a direct write technique so it can be used for top-down and bottom-up lithography applications. In top-down work, the tips are used to deliver an etch resist to a surface, which is followed by a standard etching process. In bottom-up applications, the material of interest is delivered directly to the surface via the tips. Unique advantages Directed Placement – Directly print various materials onto existing nano and microstructures with nanoscale registry Direct Write – Maskless creation of arbitrary patterns with feature resolutions from as small as 50 nm and as large as 10 micrometres Biocompatible – Subcellular to nanoscale resolution at ambient deposition conditions Scalable – Force independent, allowing for parallel depositions Thermal dip pen lithography A heated probe tip version of Dip Pen Lithography has also been demonstrated, thermal Dip Pen Lithography (tDPL), to deposit nanoparticles. Semiconductor, magnetic, metallic, or optically active nanoparticles can be written to a substrate via this method. The particles are suspended in a Poly(methyl methacrylate) (PMMA) or equivalent polymer matrix, and heated by the probe tip until they begin to flow. The probe tip acts as a nano-pen, and can pattern nanoparticles into a programmed structure. Depending on the size of the nanoparticles, resolutions of 78–400 nm were attained. An O2 plasma etch can be used to remove the PMMA matrix, and in the case of Iron Oxide nanoparticles, further reduce the resolution of lines to 10 nm. 
Advantages unique to tDPL are that it is a maskless additive process that can achieve very narrow resolutions, it can also easily write many types of nanoparticles without requiring special solution preparation techniques. However there are limitations to this method. The nanoparticles must be smaller than the radius of gyration of the polymer, in the case of PMMA this is about 6 nm. Additionally, as nanoparticles increase in size viscosity increases, slowing the process. For a pure polymer deposition speeds of 200 μm/s are achievable. Adding nanoparticles reduces speeds to 2 μm/s, but is still faster than regular Dip Pen Lithography. Beam pen lithography A two dimensional array of (PDMS) deformable transparent pyramid shaped tips are coated with an opaque layer of metal. The metal is then removed from the very tip of the pyramid, leaving an aperture for light to pass through. The array is then scanned across a surface and light is directed to the base of each pyramid via a micromirror array, which funnels the light toward the tip. Depending on the distance between the tips and the surface, light interacts with the surface in a near-field or far-field fashion, allowing sub-diffraction scale features (100 nm features with 400 nm light) or larger features to be fabricated. Common misconceptions Direct comparisons to other techniques The criticism most often directed at DPN is the patterning speed. The reason for this has more to do with how it is compared to other techniques rather than any inherent weaknesses. For example, the soft lithography method, microcontact printing (μCP), is the current standard for low cost, bench-top micro and nanoscale patterning, so it is easy to understand why DPN is compared directly to microcontact printing. The problem is that the comparisons are usually based upon applications that are strongly suited to μCP, instead of comparing them to some neutral application. μCP has the ability to pattern one material over a large area in a single stamping step, just as photolithography can pattern over a large area in a single exposure. Of course DPN is slow when it is compared to the strength of another technique. DPN is a maskless direct write technique that can be used to create multiple patterns of varying size, shape, and feature resolution, all on a single substrate. No one would try to apply microcontact printing to such a project because then it would never be worth the time and money required to fabricate each master stamp for each new pattern. Even if they did, microcontact printing would not be capable of aligning multiple materials from multiple stamps with nanoscale registry. The best way to understand this misconception is to think about the different ways to apply photolithography and e-beam lithography. No one would try to use e-beam to solve a photolithography problem and then claim e-beam to be "too slow". Directly compared to photolithography's large area patterning capabilities, e-beam lithography is slow and yet, e-beam instruments can be found in every lab and nanofab in the world. The reason for this is because e-beam has unique capabilities that cannot be matched by photolithography, just as DPN has unique capabilities that cannot be matched by microcontact printing. Connection to atomic force microscopy DPN evolved directly from AFM so it is not a surprise that people often assume that any commercial AFM can perform DPN experiments. In fact, DPN does not require an AFM, and an AFM does not necessarily have real DPN capabilities. 
There is an excellent analogy with scanning electron microscopy (SEM) and electron beam (E-beam) lithography. E-beam evolved directly from SEM technology and both use a focused electron beam, but it is not possible to perform modern E-beam lithography experiments on a SEM that lacks the proper lithography hardware and software components. It is also important to consider one of the unique characteristics of DPN, namely its force independence. With virtually all ink/substrate combinations, the same feature size will be patterned no matter how hard the tip is pressing down against the surface. As long as robust SiN tips are used, there is no need for complicated feedback electronics, no need for lasers, no need for quad photo-diodes, and no need for an AFM. See also Nanolithography References Lithography (microfabrication) Microtechnology Scanning probe microscopy Biological engineering Tissue engineering
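Returning to the molecular-ink deposition described in the Deposition materials section: in the simplest picture, the area of a deposited dot grows roughly linearly with the tip dwell time, so the dot diameter scales as the square root of the dwell time. The sketch below only illustrates that scaling; the deposition-rate constant is an arbitrary assumed value, not a property of any particular ink or substrate.

```python
import numpy as np

# Back-of-the-envelope molecular-ink model: deposited dot area grows linearly with dwell
# time, so the dot diameter scales as sqrt(dwell time). The rate constant is an assumption.
rate = 2.0e3   # deposited dot area per unit dwell time, nm^2 / ms (assumed value)

def dot_diameter_nm(dwell_ms):
    """Estimated dot diameter for a given tip dwell time, in nanometres."""
    area = rate * dwell_ms                # nm^2
    return 2.0 * np.sqrt(area / np.pi)

for dwell in (1.0, 10.0, 100.0, 1000.0):  # milliseconds to seconds
    print(f"dwell {dwell:7.1f} ms -> dot diameter ~ {dot_diameter_nm(dwell):6.0f} nm")
```

With the assumed rate constant, millisecond-to-second dwell times span dot diameters from tens of nanometres to a couple of micrometres, comparable to the feature range quoted above for molecular inks.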
Dip-pen nanolithography
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,380
[ "Biological engineering", "Microtechnology", "Cloning", "Chemical engineering", "Materials science", "Tissue engineering", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Medical technology", "Lithography (microfabrication)" ]
2,984,476
https://en.wikipedia.org/wiki/Kaup%E2%80%93Kupershmidt%20equation
The Kaup–Kupershmidt equation (named after David J. Kaup and Boris Abram Kupershmidt) is the nonlinear fifth-order partial differential equation which, in one common normalization, reads u_t = u_{xxxxx} + 10 u u_{xxx} + 25 u_x u_{xx} + 20 u^2 u_x. It is the first equation in a hierarchy of integrable equations with the third-order Lax operator ∂_x^3 + 2u∂_x + u_x. It has properties similar (but not identical) to those of the better-known KdV hierarchy in which the Lax operator has order 2. References External links Partial differential equations Integrable systems
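Assuming the normalization quoted above, the right-hand side of the equation can be evaluated pseudo-spectrally on a periodic domain; the sketch below is a minimal illustration of that evaluation, not a full time integrator.

```python
import numpy as np

# Pseudo-spectral evaluation of the Kaup-Kupershmidt right-hand side
#     u_t = u_xxxxx + 10*u*u_xxx + 25*u_x*u_xx + 20*u**2*u_x
# on a periodic domain of length L, assuming the normalization quoted above.

def kk_rhs(u, L=2 * np.pi):
    """Return u_t for u sampled on a uniform periodic grid of length L."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wavenumbers
    u_hat = np.fft.fft(u)

    def deriv(m):
        return np.real(np.fft.ifft((1j * k) ** m * u_hat))

    ux, uxx, uxxx, uxxxxx = deriv(1), deriv(2), deriv(3), deriv(5)
    return uxxxxx + 10 * u * uxxx + 25 * ux * uxx + 20 * u**2 * ux

# Example: evaluate the right-hand side for a smooth periodic initial profile.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(kk_rhs(np.cos(x))[:4])
```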
Kaup–Kupershmidt equation
[ "Physics", "Mathematics" ]
94
[ "Integrable systems", "Mathematical analysis", "Theoretical physics", "Mathematical analysis stubs" ]
2,987,124
https://en.wikipedia.org/wiki/Salt%20bridge%20%28protein%20and%20supramolecular%29
In chemistry, a salt bridge is a combination of two non-covalent interactions: hydrogen bonding and ionic bonding (Figure 1). Ion pairing is one of the most important noncovalent forces in chemistry, in biological systems, in different materials and in many applications such as ion pair chromatography. It is one of the most commonly observed contributions to the stability of the entropically unfavorable folded conformation of proteins. Although non-covalent interactions are known to be relatively weak interactions, small stabilizing interactions can add up to make an important contribution to the overall stability of a conformer. Not only are salt bridges found in proteins, but they can also be found in supramolecular chemistry. The thermodynamics of each are explored through experimental procedures to assess the free energy contribution of the salt bridge to the overall free energy of the state. Salt bridges in chemical bonding In water, formation of salt bridges or ion pairs is mostly driven by entropy, usually accompanied by unfavorable ΔH contributions on account of desolvation of the interacting ions upon association. Hydrogen bonds contribute to the stability of ion pairs with e.g. protonated ammonium ions, and with anions formed by deprotonation, as in the case of carboxylate, phosphate, etc.; the association constants then depend on the pH. Entropic driving forces for ion pairing (in absence of significant H-bonding contributions) are also found in methanol as solvent. In nonpolar solvents contact ion pairs with very high association constants are formed; in the gas phase the association energies of e.g. alkali halides reach up to 200 kJ/mol. The Bjerrum or the Fuoss equation describe ion pair association as a function of the ion charges zA and zB and the dielectric constant ε of the medium; a corresponding plot of the stability ΔG vs. zAzB shows for over 200 ion pairs the expected linear correlation for a large variety of ions. Inorganic as well as organic ions display at moderate ionic strength I similar salt bridge association ΔG values around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability etc) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, e.g., a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol. The stabilities of the alkali-ion pairs as a function of the anion charge z can be described by a more detailed equation. Salt bridges found in proteins The salt bridge most often arises from the anionic carboxylate (RCOO−) of either aspartic acid or glutamic acid and the cationic ammonium (RNH3+) from lysine or the guanidinium (RNHC(NH2)2+) of arginine (Figure 2). Although these are the most common, other residues with ionizable side chains such as histidine, tyrosine, and serine can also participate, depending on outside factors perturbing their pKa's. The distance between the residues participating in the salt bridge is also cited as being important. The N-O distance required is less than 4 Å (400 pm). Amino acids greater than this distance apart do not qualify as forming a salt bridge. Due to the numerous ionizable side chains of amino acids found throughout a protein, the pH at which a protein is placed is crucial to its stability.
Salt bridges found in protein–ligand complexes Salt bridges also can form between a protein and small molecule ligands. Over 1100 unique protein–ligand complexes from the Protein Databank were found to form salt bridges with their protein targets, indicating that salt bridges are frequent in drug–protein interactions. These contain structures from different enzyme classes, including hydrolases, transferases, kinases, reductases, oxidoreductases, lyases, and G protein-coupled receptors (GPCRs). Methods for quantifying salt bridge stability in proteins The contribution of a salt bridge to the overall stability of the folded state of a protein can be assessed through thermodynamic data gathered from mutagenesis studies and nuclear magnetic resonance techniques. Using a pseudo-wild-type protein specifically mutated to prevent precipitation at high pH, the salt bridge’s contribution to the overall free energy of the folded protein state can be determined by performing a point-mutation, altering and, consequently, breaking the salt bridge. For example, a salt bridge was identified to exist in the T4 lysozyme between aspartic acid (Asp) at residue 70 and a histidine (His) at residue 31 (Figure 3). Site-directed mutagenesis with asparagine (Asn) (Figure 4) was done obtaining three new mutants: Asp70Asn His31 (Mutant 1), Asp70 His31Asn (Mutant 2), and Asp70Asn His31Asn (Double Mutant). Once the mutants have been established, two methods can be employed to calculate the free energy associated with a salt bridge. One method involves the observation of the melting temperature of the wild-type protein versus that of the three mutants. The denaturation can be monitored through a change in circular dichroism. A reduction in melting temperature indicates a reduction in stability. This is quantified through a method described by Becktel and Schellman where the free energy difference between the two is calculated through ΔTΔS. There are some issues with this calculation, and it can only be used with very accurate data. In the T4 lysozyme example, ΔS of the pseudo-wild-type had previously been reported at pH 5.5, so the midpoint temperature difference of 11 °C at this pH multiplied by the reported ΔS of 360 cal/(mol·K) (1.5 kJ/(mol·K)) yields a free energy change of about −4 kcal/mol (−17 kJ/mol). This value corresponds to the amount of free energy contributed to the stability of the protein by the salt bridge. The second method utilizes nuclear magnetic resonance spectroscopy to calculate the free energy of the salt bridge. A titration is performed, while recording the chemical shift corresponding to the protons of the carbon adjacent to the carboxylate or ammonium group. The midpoint of the titration curve corresponds to the pKa, or the pH where the ratio of protonated:deprotonated molecules is 1:1. Continuing with the T4 lysozyme example, a titration curve is obtained through observation of a shift in the C2 proton of histidine 31 (Figure 5). Figure 5 shows the shift in the titration curve between the wild-type and the mutant in which Asp70 is Asn. The salt bridge formed is between the deprotonated Asp70 and protonated His31. This interaction causes the shift seen in His31’s pKa. In the unfolded wild-type protein, where the salt bridge is absent, His31 is reported to have a pKa of 6.8 in H2O buffers of moderate ionic strength. Figure 5 shows a pKa of the wild-type of 9.05. This difference in pKa is supported by His31’s interaction with Asp70.
To maintain the salt bridge, His31 will attempt to keep its proton as long as possible. When the salt bridge is disrupted, as in the mutant D70N, the pKa shifts back to a value of 6.9, much closer to that of His31 in the unfolded state. The difference in pKa can be quantified to reflect the salt bridge’s contribution to free energy. Using Gibbs free energy: ΔG = −RT ln(Keq), where R is the universal gas constant, T is the temperature in kelvins, and Keq is the equilibrium constant of a reaction in equilibrium. The deprotonation of His31 is an acid equilibrium reaction with a special Keq known as the acid dissociation constant, Ka: His31-H+ ⇌ His31 + H+. The pKa is then related to Ka by the following: pKa = −log(Ka). Calculation of the free energy difference of the mutant and wild-type can now be done using the free energy equation, the definition of pKa, the observed pKa values, and the relationship between natural and base-10 logarithms. In the T4 lysozyme example, this approach yielded a calculated contribution of about 3 kcal/mol to the overall free energy. A similar approach can be taken with the other participant in the salt bridge, such as Asp70 in the T4 lysozyme example, by monitoring its shift in pKa after mutation of His31. A word of caution when choosing the appropriate experiment involves the location of the salt bridge within the protein. The environment plays a large role in the interaction. At high ionic strengths, the salt bridge can be completely masked since an electrostatic interaction is involved. The His31-Asp70 salt bridge in T4 lysozyme was buried within the protein. Entropy plays a larger role in surface salt bridges where residues that normally have the ability to move are constricted by their electrostatic interaction and hydrogen bonding. This has been shown to decrease entropy enough to nearly erase the contribution of the interaction. Surface salt bridges can be studied similarly to buried salt bridges, employing double mutant cycles and NMR titrations. Although cases exist where buried salt bridges contribute to stability, exceptions do exist, and buried salt bridges can also display a destabilizing effect. Conversely, surface salt bridges, under certain conditions, can display a stabilizing effect. The stabilizing or destabilizing effect must be assessed on a case by case basis and few blanket statements can be made. Supramolecular chemistry Supramolecular chemistry is a field concerned with non-covalent interactions between macromolecules. Salt bridges have been used by chemists within this field in both diverse and creative ways, including sensing of anions, the synthesis of molecular capsules and double helical polymers. Anion complexation Major contributions of supramolecular chemistry have been devoted to recognition and sensing of anions. Ion pairing is the most important driving force for anion complexation, but selectivity e.g. within the halide series has been achieved, mostly by hydrogen-bond contributions. Molecular capsules Molecular capsules are chemical scaffolds designed to capture and hold a guest molecule (see molecular encapsulation). Szumna and coworkers developed a novel molecular capsule with a chiral interior. This capsule is made of two halves, like a plastic easter egg (Figure 6). Salt bridge interactions between the two halves cause them to self-assemble in solution (Figure 7). They are stable even when heated to 60 °C.
Double helical polymers Yashima and coworkers have used salt bridges to construct several polymers that adopt a double helix conformation much like DNA. In one example, they incorporated platinum to create a double helical metallopolymer. Starting from their monomer and platinum(II) biphenyl (Figure 8), their metallopolymer self assembles through a series of ligand exchange reactions. The two halves of the monomer are anchored together through the salt bridge between the deprotonated carboxylate and the protonated nitrogens. References Chemical bonding Protein engineering
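Both free-energy estimates described above for the T4 lysozyme salt bridge — the melting-temperature method and the pKa-shift method — reduce to short arithmetic. The sketch below reproduces the quoted numbers; the temperature of 298 K used for the pKa method is an assumption, as the text does not state it.

```python
# Free-energy contribution of the Asp70-His31 salt bridge in T4 lysozyme,
# estimated the two ways discussed above.

# 1) Melting-temperature method: ddG ~ dTm * dS
d_tm = 11.0            # shift in melting midpoint, K
d_s = 360.0            # reported entropy change, cal/(mol*K)
ddg_thermal = d_tm * d_s / 1000.0            # kcal/mol
print(f"thermal estimate: ~{ddg_thermal:.1f} kcal/mol")

# 2) pKa-shift method: ddG = 2.303 * R * T * (pKa_wild_type - pKa_mutant)
r_gas = 1.987e-3       # gas constant, kcal/(mol*K)
temperature = 298.0    # K (assumed room temperature)
d_pka = 9.05 - 6.9     # observed pKa shift of His31
ddg_pka = 2.303 * r_gas * temperature * d_pka
print(f"pKa-shift estimate: ~{ddg_pka:.1f} kcal/mol")
```

Both routes land near the 3–4 kcal/mol range quoted in the text.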
Salt bridge (protein and supramolecular)
[ "Physics", "Chemistry", "Materials_science" ]
2,493
[ "Chemical bonding", "Condensed matter physics", "nan" ]
2,987,828
https://en.wikipedia.org/wiki/Copper%28II%29%20acetate
Copper(II) acetate, also referred to as cupric acetate, is the chemical compound with the formula Cu(OAc)2 where AcO− is acetate (). The hydrated derivative, Cu2(OAc)4(H2O)2, which contains one molecule of water for each copper atom, is available commercially. Anhydrous copper(II) acetate is a dark green crystalline solid, whereas Cu2(OAc)4(H2O)2 is more bluish-green. Since ancient times, copper acetates of some form have been used as fungicides and green pigments. Today, copper acetates are used as reagents for the synthesis of various inorganic and organic compounds. Copper acetate, like all copper compounds, emits a blue-green glow in a flame. Structure Copper acetate hydrate adopts the paddle wheel structure seen also for related Rh(II) and Cr(II) tetraacetates. One oxygen atom on each acetate is bound to one copper atom at 1.97 Å (197 pm). Completing the coordination sphere are two water ligands, with Cu–O distances of 2.20 Å (220 pm). The two copper atoms are separated by only 2.62 Å (262 pm), which is close to the Cu–Cu separation in metallic copper. The two copper centers interact resulting in a diminishing of the magnetic moment such that at temperatures below 90 K, Cu2(OAc)4(H2O)2 is essentially diamagnetic. Cu2(OAc)4(H2O)2 was a critical step in the development of modern theories for antiferromagnetic exchange coupling, which ascribe its low-temperature diamagnetic behavior to cancellation of the two opposing spins on the adjacent copper atoms. Synthesis Copper(II) acetate is prepared industrially by heating copper(II) hydroxide or basic copper(II) carbonate with acetic acid. Uses in chemical synthesis Copper(II) acetate has found some use as an oxidizing agent in organic syntheses. In the Eglinton reaction Cu2(OAc)4 is used to couple terminal alkynes to give a 1,3-diyne: Cu2(OAc)4 + 2 RC≡CH → 2 CuOAc + RC≡C−C≡CR + 2 HOAc The reaction proceeds via the intermediacy of copper(I) acetylides, which are then oxidized by the copper(II) acetate, releasing the acetylide radical. A related reaction involving copper acetylides is the synthesis of ynamines, terminal alkynes with amine groups using Cu2(OAc)4. It has been used for hydroamination of acrylonitrile. It is also an oxidising agent in Barfoed's test. It reacts with arsenic trioxide to form copper acetoarsenite, a powerful insecticide and fungicide called Paris green. Related compounds Heating a mixture of anhydrous copper(II) acetate and copper metal affords copper(I) acetate: Cu + Cu(OAc)2 → 2 CuOAc Unlike the copper(II) derivative, copper(I) acetate is colourless and diamagnetic. "Basic copper acetate" is prepared by neutralizing an aqueous solution of copper(II) acetate. The basic acetate is poorly soluble. This material is a component of verdigris, the blue-green substance that forms on copper during long exposures to atmosphere. Other uses A mixture of copper acetate and ammonium chloride is used to chemically color copper with a bronze patina. Mineralogy The mineral hoganite is a naturally occurring form of copper(II) acetate. A related mineral, also containing calcium, is paceite. Both are very rare. References External links Copper.org – Other Copper Compounds 5 Feb. 2006 Infoplease.com – Paris green 6 Feb. 2006 Verdigris – History and Synthesis 6 Feb. 2006 Australian - National Pollutant Inventory 8 Aug. 2016 USA NIH National Center for Biotechnology Information 8 Aug. 2016 Copper(II) compounds Acetates Oxidizing agents Catalysts
Copper(II) acetate
[ "Chemistry" ]
893
[ "Catalysis", "Catalysts", "Redox", "Oxidizing agents", "Chemical kinetics" ]
2,987,843
https://en.wikipedia.org/wiki/Floquet%20theory
Floquet theory is a branch of the theory of ordinary differential equations relating to the class of solutions to periodic linear differential equations of the form ẋ = A(t)x, with A(t) a piecewise continuous periodic function with period T, and defines the state of the stability of solutions. The main theorem of Floquet theory, Floquet's theorem, due to Gaston Floquet (1883), gives a canonical form for each fundamental matrix solution of this common linear system. It gives a coordinate change y = Q^{-1}(t)x, with Q(t + 2T) = Q(t), that transforms the periodic system to a traditional linear system with constant, real coefficients. When applied to physical systems with periodic potentials, such as crystals in condensed matter physics, the result is known as Bloch's theorem. Note that the solutions of the linear differential equation form a vector space. A matrix φ(t) is called a fundamental matrix solution if its columns form a basis of the solution set. A matrix Φ(t) is called a principal fundamental matrix solution if all columns are linearly independent solutions and there exists t0 such that Φ(t0) is the identity. A principal fundamental matrix can be constructed from a fundamental matrix using Φ(t) = φ(t)φ^{-1}(t0). The solution of the linear differential equation with the initial condition x(0) = x0 is x(t) = φ(t)φ^{-1}(0)x0, where φ(t) is any fundamental matrix solution. Floquet's theorem Let ẋ = A(t)x be a linear first order differential equation, where x(t) is a column vector of length n and A(t) an n × n periodic matrix with period T (that is, A(t + T) = A(t) for all real values of t). Let φ(t) be a fundamental matrix solution of this differential equation. Then, for all t, φ(t + T) = φ(t)φ^{-1}(0)φ(T). Here φ^{-1}(0)φ(T) is known as the monodromy matrix. In addition, for each matrix B (possibly complex) such that e^{TB} = φ^{-1}(0)φ(T), there is a periodic (period T) matrix function t ↦ P(t) such that φ(t) = P(t)e^{tB} for all t. Also, there is a real matrix R and a real periodic (period-2T) matrix function t ↦ Q(t) such that φ(t) = Q(t)e^{tR} for all t. In the above B, P, Q and R are n × n matrices. Consequences and applications This mapping φ(t) = Q(t)e^{tR} gives rise to a time-dependent change of coordinates (y = Q^{-1}(t)x), under which our original system becomes a linear system with real constant coefficients ẏ = Ry. Since Q(t) is continuous and periodic it must be bounded. Thus the stability of the zero solution for y(t) and x(t) is determined by the eigenvalues of R. The representation φ(t) = P(t)e^{tB} is called a Floquet normal form for the fundamental matrix φ(t). The eigenvalues of e^{TB} are called the characteristic multipliers of the system. They are also the eigenvalues of the (linear) Poincaré maps x(t) → x(t + T). A Floquet exponent (sometimes called a characteristic exponent) is a complex μ such that e^{μT} is a characteristic multiplier of the system. Notice that Floquet exponents are not unique, since e^{(μ + 2πik/T)T} = e^{μT}, where k is an integer. The real parts of the Floquet exponents are called Lyapunov exponents. The zero solution is asymptotically stable if all Lyapunov exponents are negative, Lyapunov stable if the Lyapunov exponents are nonpositive and unstable otherwise. Floquet theory is very important for the study of dynamical systems, such as the Mathieu equation. Floquet theory shows stability in the Hill differential equation (introduced by George William Hill) approximating the motion of the moon as a harmonic oscillator in a periodic gravitational field. Bond softening and bond hardening in intense laser fields can be described in terms of solutions obtained from the Floquet theorem. Dynamics of strongly driven quantum systems are often examined using Floquet theory. In superconducting circuits, the Floquet framework has been leveraged to shed light on the quantum electrodynamics of drive-induced multiqubit interactions. References C. Chicone. Ordinary Differential Equations with Applications. Springer-Verlag, New York 1999.
M.S.P. Eastham, "The Spectral Theory of Periodic Differential Equations", Texts in Mathematics, Scottish Academic Press, Edinburgh, 1973. . , Translation of Mathematical Monographs, 19, 294p. W. Magnus, S. Winkler. Hill's Equation, Dover-Phoenix Editions, . N.W. McLachlan, Theory and Application of Mathieu Functions, New York: Dover, 1964. External links Dynamical systems
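A minimal numerical illustration of the theorem stated above: build the monodromy matrix of a Mathieu-type equation ẍ + (a + b cos t)x = 0 by integrating over one period from identity initial conditions, then read off the characteristic multipliers and one choice of Floquet exponents. The parameter values a and b are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monodromy matrix and Floquet multipliers for the Mathieu-type equation
#     x'' + (a + b*cos(t)) * x = 0,
# written as a first-order system with period T = 2*pi.
a, b = 1.2, 0.3            # arbitrary illustrative parameter values
T = 2 * np.pi

def rhs(t, y):
    x, v = y
    return [v, -(a + b * np.cos(t)) * x]

# Columns of the monodromy matrix: solutions at t = T starting from the identity at t = 0.
cols = []
for y0 in ([1.0, 0.0], [0.0, 1.0]):
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
monodromy = np.array(cols).T

multipliers = np.linalg.eigvals(monodromy)               # characteristic multipliers
exponents = np.log(multipliers.astype(complex)) / T      # one choice of Floquet exponents

print("characteristic multipliers:", multipliers)
print("Lyapunov exponents (real parts):", exponents.real)
print("zero solution stable:", bool(np.all(np.abs(multipliers) <= 1 + 1e-8)))
```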
Floquet theory
[ "Physics", "Mathematics" ]
814
[ "Mathematical objects", "Differential equations", "Equations", "Mechanics", "Dynamical systems" ]
2,987,943
https://en.wikipedia.org/wiki/Duffing%20equation
The Duffing equation (or Duffing oscillator), named after Georg Duffing (1861–1944), is a non-linear second-order differential equation used to model certain damped and driven oscillators. The equation is given by where the (unknown) function is the displacement at time , is the first derivative of with respect to time, i.e. velocity, and is the second time-derivative of i.e. acceleration. The numbers and are given constants. The equation describes the motion of a damped oscillator with a more complex potential than in simple harmonic motion (which corresponds to the case ); in physical terms, it models, for example, an elastic pendulum whose spring's stiffness does not exactly obey Hooke's law. The Duffing equation is an example of a dynamical system that exhibits chaotic behavior. Moreover, the Duffing system presents in the frequency response the jump resonance phenomenon that is a sort of frequency hysteresis behaviour. Parameters The parameters in the above equation are: controls the amount of damping, controls the linear stiffness, controls the amount of non-linearity in the restoring force; if the Duffing equation describes a damped and driven simple harmonic oscillator, is the amplitude of the periodic driving force; if the system is without a driving force, and is the angular frequency of the periodic driving force. The Duffing equation can be seen as describing the oscillations of a mass attached to a nonlinear spring and a linear damper. The restoring force provided by the nonlinear spring is then When and the spring is called a hardening spring. Conversely, for it is a softening spring (still with ). Consequently, the adjectives hardening and softening are used with respect to the Duffing equation in general, dependent on the values of (and ). The number of parameters in the Duffing equation can be reduced by two through scaling (in accord with the Buckingham π theorem), e.g. the excursion and time can be scaled as: and assuming is positive (other scalings are possible for different ranges of the parameters, or for different emphasis in the problem studied). Then: where and The dots denote differentiation of with respect to This shows that the solutions to the forced and damped Duffing equation can be described in terms of the three parameters (, , and ) and two initial conditions (i.e. for and ). Methods of solution In general, the Duffing equation does not admit an exact symbolic solution. However, many approximate methods work well: Expansion in a Fourier series may provide an equation of motion to arbitrary precision. The term, also called the Duffing term, can be approximated as small and the system treated as a perturbed simple harmonic oscillator. The Frobenius method yields a complex but workable solution. Any of the various numeric methods such as Euler's method and Runge–Kutta methods can be used. The homotopy analysis method (HAM) has also been reported for obtaining approximate solutions of the Duffing equation, also for strong nonlinearity. In the special case of the undamped () and undriven () Duffing equation, an exact solution can be obtained using Jacobi's elliptic functions. Boundedness of the solution for the unforced oscillator Undamped oscillator Multiplication of the undamped and unforced Duffing equation, with gives: with a constant. The value of is determined by the initial conditions and The substitution in H shows that the system is Hamiltonian: When both and are positive, the solution is bounded: with the Hamiltonian being positive. 
Damped oscillator Similarly, the damped oscillator converges globally, by Lyapunov function method since for damping. Without forcing the damped Duffing oscillator will end up at (one of) its stable equilibrium point(s). The equilibrium points, stable and unstable, are at If the stable equilibrium is at If and the stable equilibria are at and Frequency response The forced Duffing oscillator with cubic nonlinearity is described by the following ordinary differential equation: The frequency response of this oscillator describes the amplitude of steady state response of the equation (i.e. ) at a given frequency of excitation For a linear oscillator with the frequency response is also linear. However, for a nonzero cubic coefficient , the frequency response becomes nonlinear. Depending on the type of nonlinearity, the Duffing oscillator can show hardening, softening or mixed hardening–softening frequency response. Anyway, using the homotopy analysis method or harmonic balance, one can derive a frequency response equation in the following form: For the parameters of the Duffing equation, the above algebraic equation gives the steady state oscillation amplitude at a given excitation frequency. Graphically solving for frequency response We may graphically solve for as the intersection of two curves in the plane:For fixed , the second curve is a fixed hyperbola in the first quadrant. The first curve is a parabola with shape , and apex at location . If we fix and vary , then the apex of the parabola moves along the line . Graphically, then, we see that if is a large positive number, then as varies, the parabola intersects the hyperbola at one point, then three points, then one point again. Similarly we can analyze the case when is a large negative number. Jumps For certain ranges of the parameters in the Duffing equation, the frequency response may no longer be a single-valued function of forcing frequency For a hardening spring oscillator ( and large enough positive ) the frequency response overhangs to the high-frequency side, and to the low-frequency side for the softening spring oscillator ( and ). The lower overhanging side is unstable – i.e. the dashed-line parts in the figures of the frequency response – and cannot be realized for a sustained time. Consequently, the jump phenomenon shows up: when the angular frequency is slowly increased (with other parameters fixed), the response amplitude drops at A suddenly to B, if the frequency is slowly decreased, then at C the amplitude jumps up to D, thereafter following the upper branch of the frequency response. The jumps A–B and C–D do not coincide, so the system shows hysteresis depending on the frequency sweep direction. Transition to chaos The above analysis assumed that the base frequency response dominates (necessary for performing harmonic balance), and higher frequency responses are negligible. This assumption fails to hold when the forcing is sufficiently strong. Higher order harmonics cannot be neglected, and the dynamics become chaotic. There are different possible transitions to chaos, most commonly by successive period doubling. Examples Some typical examples of the time series and phase portraits of the Duffing equation, showing the appearance of subharmonics through period-doubling bifurcation – as well chaotic behavior – are shown in the figures below. 
The forcing amplitude increases from to The other parameters have the values: and The initial conditions are and The red dots in the phase portraits are at times which are an integer multiple of the period References Citations Bibliography External links Duffing oscillator on Scholarpedia MathWorld page Ordinary differential equations Chaotic maps Nonlinear systems Articles containing video clips
Duffing equation
[ "Mathematics" ]
1,522
[ "Functions and mappings", "Mathematical objects", "Nonlinear systems", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
2,988,093
https://en.wikipedia.org/wiki/Kodaira%20vanishing%20theorem
In mathematics, the Kodaira vanishing theorem is a basic result of complex manifold theory and complex algebraic geometry, describing general conditions under which sheaf cohomology groups with indices q > 0 are automatically zero. The implication for the group with index q = 0 is usually that its dimension — the number of independent global sections — coincides with a holomorphic Euler characteristic that can be computed using the Hirzebruch–Riemann–Roch theorem. The complex analytic case The statement of Kunihiko Kodaira's result is that if M is a compact Kähler manifold of complex dimension n, L any holomorphic line bundle on M that is positive, and KM is the canonical line bundle, then Hq(M, KM ⊗ L) = 0 for q > 0. Here KM ⊗ L stands for the tensor product of line bundles. By means of Serre duality, one also obtains the vanishing of Hq(M, L−1) for q < n. There is a generalisation, the Kodaira–Nakano vanishing theorem, in which KM ⊗ L ≅ Ωn(L), where Ωn(L) denotes the sheaf of holomorphic (n,0)-forms on M with values in L, is replaced by Ωr(L), the sheaf of holomorphic (r,0)-forms with values in L. Then the cohomology group Hq(M, Ωr(L)) vanishes whenever q + r > n. The algebraic case The Kodaira vanishing theorem can be formulated within the language of algebraic geometry without any reference to transcendental methods such as Kähler metrics. Positivity of the line bundle L translates into the corresponding invertible sheaf being ample (i.e., some tensor power gives a projective embedding). The algebraic Kodaira–Akizuki–Nakano vanishing theorem is the following statement: If k is a field of characteristic zero, X is a smooth and projective k-scheme of dimension d, and L is an ample invertible sheaf on X, then Hq(X, L ⊗ Ωp) = 0 for p + q > d, where the Ωp denote the sheaves of relative (algebraic) differential forms (see Kähler differential). Raynaud showed that this result does not always hold over fields of characteristic p > 0, and in particular fails for Raynaud surfaces. Later, counterexamples were given for singular varieties with non-log canonical singularities, and also elementary counterexamples inspired by proper homogeneous spaces with non-reduced stabilizers. Until 1987 the only known proof in characteristic zero was however based on the complex analytic proof and the GAGA comparison theorems. However, in 1987 Pierre Deligne and Luc Illusie gave a purely algebraic proof of the vanishing theorem. Their proof is based on showing that the Hodge–de Rham spectral sequence for algebraic de Rham cohomology degenerates in degree 1. This is shown by lifting a corresponding more specific result from characteristic p > 0 — the positive-characteristic result does not hold without limitations but can be lifted to provide the full result. Consequences and applications Historically, the Kodaira embedding theorem was derived with the help of the vanishing theorem. With application of Serre duality, the vanishing of various sheaf cohomology groups (usually related to the canonical line bundle) of curves and surfaces helps with the classification of complex manifolds, e.g. the Enriques–Kodaira classification. See also Kawamata–Viehweg vanishing theorem Mumford vanishing theorem Ramanujam vanishing theorem Note References Phillip Griffiths and Joseph Harris, Principles of Algebraic Geometry Theorems in complex geometry Topological methods of algebraic geometry Theorems in algebraic geometry
Kodaira vanishing theorem
[ "Mathematics" ]
737
[ "Theorems in algebraic geometry", "Theorems in complex geometry", "Theorems in geometry" ]
2,988,563
https://en.wikipedia.org/wiki/Composite%20measure
A composite measure in statistics and research design is a measurement of a variable based on multiple data items. An example of a composite measure is an IQ test, which gives a single score based on a series of responses to various questions. Three common composite measures include: indexes - measures that summarize and rank specific observations, usually on the ordinal scale; scales - advanced indexes whose observations are further transformed (scaled) due to their logical or empirical relationships; typologies - measures that classify observations in terms of their attributes on multiple variables, usually on a nominal scale. Indexes versus scales Indexes are often referred to as scales, but in fact not all indexes are scales. Whereas indexes are usually created by aggregating scores assigned to individual attributes of various variables, scales are more nuanced and take into account differences in intensity among the attributes of the same variable in question. Indexes and scales should provide an ordinal ranking of cases on a given variable, though scales are usually more efficient at this. While indexes are based on a simple aggregation of indicators of a variable, scales are more advanced, and their calculations may be more complex, using for example scaling procedures such as the semantic differential. Composite measure validation A good composite measure will ensure that the indicators are independent of one another. It should also successfully predict other indicators of the variable. References Measurement
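The distinction drawn above between an index and a scale can be illustrated with a toy calculation: the same survey items are aggregated once by simple counting (an index) and once with intensity weights (a crude scale). The items, responses and weights below are invented for illustration.

```python
# Minimal sketch: an additive index vs. a weighted scale built from the same items.
# The items, responses and intensity weights are invented for illustration.
responses = {"item_1": 1, "item_2": 0, "item_3": 1, "item_4": 1}   # 1 = agree, 0 = disagree
intensity = {"item_1": 1.0, "item_2": 2.0, "item_3": 3.0, "item_4": 4.0}  # item "strength"

index_score = sum(responses.values())                                # simple aggregation
scale_score = sum(intensity[k] * v for k, v in responses.items())    # intensity-weighted

print(f"index score: {index_score}, scale score: {scale_score}")
```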
Composite measure
[ "Physics", "Mathematics" ]
286
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
2,989,336
https://en.wikipedia.org/wiki/DBc
dBc (decibels relative to the carrier) is the power ratio of a signal to a carrier signal, expressed in decibels. For example, phase noise is expressed in dBc/Hz at a given frequency offset from the carrier. dBc can also be used as a measurement of Spurious-Free Dynamic Range (SFDR) between the desired signal and unwanted spurious outputs resulting from the use of signal converters such as a digital-to-analog converter or a frequency mixer. If the dBc figure is positive, then the relative signal strength is greater than the carrier signal strength. If the dBc figure is negative, then the relative signal strength is less than carrier signal strength. Although the decibel (dB) is permitted for use alongside SI units, the dBc is not. Example If a carrier (reference signal) has a power of , and noise signal has power of . Power of reference signal expressed in decibel is : Power of noise expressed in decibel is : The calculation of dBc difference between noise signal and reference signal is then as follows: It is also possible to compute the dBc power of noise signal with respect to reference signal directly as logarithm of their ratio as follows: . References External links Encyclopedia of Laser Physics and Technology Units of measurement Radio frequency propagation Telecommunications engineering Logarithmic scales of measurement
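Because the worked numbers in the example above did not survive formatting, here is a self-contained sketch with assumed values — a 1 mW carrier and a 1 µW spurious signal — computed both as a difference of absolute levels and directly from the power ratio.

```python
import math

# dBc of a spurious signal relative to a carrier, using assumed example values:
# carrier power 1 mW, spur power 1 microwatt.
p_carrier = 1e-3   # W
p_spur = 1e-6      # W

carrier_dbm = 10 * math.log10(p_carrier / 1e-3)   # absolute level of the carrier, dBm
spur_dbm = 10 * math.log10(p_spur / 1e-3)         # absolute level of the spur, dBm

dbc_from_difference = spur_dbm - carrier_dbm          # difference of absolute levels
dbc_direct = 10 * math.log10(p_spur / p_carrier)      # directly from the power ratio

print(f"spur level: {dbc_from_difference:.1f} dBc (difference), {dbc_direct:.1f} dBc (direct)")
```

With these assumed powers both routes give −30 dBc, i.e. the spur is 30 dB below the carrier.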
DBc
[ "Physics", "Mathematics", "Engineering" ]
280
[ "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Physical quantities", "Radio frequency propagation", "Quantity", "Electromagnetic spectrum", "Waves", "Logarithmic scales of measurement", "Electrical engineering", "Units of measurement" ]
6,970,787
https://en.wikipedia.org/wiki/Autler%E2%80%93Townes%20effect
In spectroscopy, the Autler–Townes effect (also known as AC Stark effect), is a dynamical Stark effect corresponding to the case when an oscillating electric field (e.g., that of a laser) is tuned in resonance (or close) to the transition frequency of a given spectral line, and resulting in a change of the shape of the absorption/emission spectra of that spectral line. The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes. It is the AC equivalent of the static Stark effect which splits the spectral lines of atoms and molecules in a constant electric field. Compared to its DC counterpart, the AC Stark effect is computationally more complex. While generally referring to atomic spectral shifts due to AC fields at any (single) frequency, the effect is more pronounced when the field frequency is close to that of a natural atomic or molecular dipole transition. In this case, the alternating field has the effect of splitting the two bare transition states into doublets or "dressed states" that are separated by the Rabi frequency. Alternatively, this can be described as a Rabi oscillation between the bare states which are no longer eigenstates of the atom–field Hamiltonian. The resulting fluorescence spectrum is known as a Mollow triplet. The AC Stark splitting is integral to several phenomena in quantum optics, such as electromagnetically induced transparency and Sisyphus cooling. Vacuum Rabi oscillations have also been described as a manifestation of the AC Stark effect from atomic coupling to the vacuum field. History The AC Stark effect was discovered in 1955 by American physicists Stanley Autler and Charles Townes while at Columbia University and Lincoln Labs at the Massachusetts Institute of Technology. Before the availability of lasers, the AC Stark effect was observed with radio frequency sources. Autler and Townes' original observation of the effect used a radio frequency source tuned to 12.78 and 38.28 MHz, corresponding to the separation between two doublet microwave absorption lines of OCS. The notion of quasi-energy in treating the general AC Stark effect was later developed by Nikishov and Ritis in 1964 and onward. This more general method of approaching the problem developed into the "dressed atom" model describing the interaction between lasers and atoms. Prior to the 1970s there were various conflicting predictions concerning the fluorescence spectra of atoms due to the AC Stark effect at optical frequencies. In 1974 the observation of Mollow triplets verified the form of the AC Stark effect using visible light. General semiclassical approach In a semiclassical model where the electromagnetic field is treated classically, a system of charges in a monochromatic electromagnetic field has a Hamiltonian that can be written as: where , , and are respectively the position, momentum, mass, and charge of the -th particle, and is the speed of light. The vector potential of the field, , satisfies . The Hamiltonian is thus also periodic: Now, the Schrödinger equation, under a periodic Hamiltonian is a linear homogeneous differential equation with periodic coefficients, where here represents all coordinates. Floquet's theorem guarantees that the solutions to an equation of this form can be written as Here, is the "bare" energy for no coupling to the electromagnetic field, and has the same time-periodicity as the Hamiltonian, or with the angular frequency of the field. 
Because of its periodicity, it is often further useful to expand in a Fourier series, obtaining or where is the frequency of the laser field. The solution for the joint particle-field system is, therefore, a linear combination of stationary states of energy , which is known as a quasi-energy state and the new set of energies are called the spectrum of quasi-harmonics. Unlike the DC Stark effect, where perturbation theory is useful in a general case of atoms with infinite bound states, obtaining even a limited spectrum of shifted energies for the AC Stark effect is difficult in all but simple models, although calculations for systems such as the hydrogen atom have been done. Examples General expressions for AC Stark shifts must usually be calculated numerically and tend to provide little insight. However, there are important individual examples of the effect that are informative. Analytical solutions in these specific cases are usually obtained assuming the detuning is small compared to a characteristic frequency of the radiating system. Two level atom dressing An atom driven by an electric field with frequency close to an atomic transition frequency (that is, when ) can be approximated as a two level quantum system since the off resonance states have low occupation probability. The Hamiltonian can be divided into the bare atom term plus a term for the interaction with the field as: In an appropriate rotating frame, and making the rotating wave approximation, reduces to Where is the Rabi frequency, and are the strongly coupled bare atom states. The energy eigenvalues are , and for small detuning, The eigenstates of the atom-field system or dressed states are dubbed and . The result of the AC field on the atom is thus to shift the strongly coupled bare atom energy eigenstates into two states and which are now separated by . Evidence of this shift is apparent in the atom's absorption spectrum, which shows two peaks around the bare transition frequency, separated by (Autler-Townes splitting). The modified absorption spectrum can be obtained by a pump-probe experiment, wherein a strong pump laser drives the bare transition while a weaker probe laser sweeps for a second transition between a third atomic state and the dressed states. Another consequence of the AC Stark splitting here is the appearance of Mollow triplets, a triple peaked fluorescence profile. Historically an important confirmation of Rabi flopping, they were first predicted by Mollow in 1969 and confirmed in the 1970s experimentally. Optical Dipole Trap (Far-Off-Resonance Trap) For ultracold atoms experiments utilizing the optical dipole force from AC Stark shift, the light is usually linearly polarized to avoid the splitting of different magnetic substates with different , and the light frequency is often far detuned from the atomic transition to avoid heating the atoms from the photon-atom scattering; in turn, the intensity of the light field (i.e. AC electric field) is typically high to compensate for the large detuning. Typically, we have , where the atomic transition has a natural linewidth and a saturation intensity: Note the above expression for saturation intensity does not apply to all cases. For example, the above applies for the D2 line transition of Li-6, but not the D1 line, which obeys a different sum rule in calculating the oscillator strength. As a result, the D1 line has a saturation intensity 3 times larger than the D2 line. 
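Picking up the two-level dressing described above, the Autler-Townes doublet separation can be checked with a few lines of Python. The units (hbar = 1) and the particular Rabi frequency and detuning below are illustrative assumptions, not values from the article; this is a sketch of the standard rotating-frame, rotating-wave model rather than any specific experiment.

import numpy as np

# Illustrative parameters in units where hbar = 1 (both values are assumed).
rabi = 2 * np.pi * 1.0      # Rabi frequency Omega
delta = 2 * np.pi * 0.2     # detuning of the drive from the bare transition

# Two-level Hamiltonian in the rotating frame, after the rotating-wave approximation.
H = 0.5 * np.array([[-delta, rabi],
                    [rabi, delta]])

energies = np.linalg.eigvalsh(H)
splitting = energies[1] - energies[0]

print("dressed-state splitting   :", splitting)
print("generalised Rabi frequency:", np.hypot(rabi, delta))
# On resonance (delta = 0) the splitting reduces to the Rabi frequency itself,
# which is the Autler-Townes doublet separation seen in a pump-probe absorption spectrum.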
However, when the detuning from these two lines is much larger than the fine-structure splitting, the overall saturation intensity takes the value of the D2 line. In the case where the light's detuning is comparable to the fine-structure splitting but still much larger than the hyperfine splitting, the D2 line contributes twice as much dipole potential as the D1 line, as shown in Equation (19) of. The optical dipole potential is therefore: Here, the Rabi frequency is related to the (dimensionless) saturation parameter , and is the real part of the complex polarizability of the atom, with its imaginary counterpart representing the dissipative optical scattering force. The factor of 1/2 takes into account that the dipole moment is an induced, not a permanent one. When , the rotating wave approximation applies, and the counter-rotating term proportional to can be omitted; However, in some cases, the ODT light is so far detuned that counter-rotating term must be included in calculations, as well as contributions from adjacent atomic transitions with appreciable linewidth . Note that the natural linewidth here is in radians per second, and is the inverse of lifetime . This is the principle of operation for Optical Dipole Trap (ODT, also known as Far Off Resonance Trap, FORT), in which case the light is red-detuned . When blue-detuned, the light beam provides a potential bump/barrier instead. The optical dipole potential is often expressed in terms of the recoil energy, which is the kinetic energy imparted in an atom initially at rest by "recoil" during the spontaneous emission of a photon: where is the wavevector of the ODT light ( when detuned). The recoil energy, along with related recoil frequency , are crucial parameters in understanding the dynamics of atoms in light fields, especially in the context of atom optics and momentum transfer. In applications that utilize the optical dipole force, it is common practice to use a far-off-resonance light frequency. This is because a smaller detuning would increase the photon-atom scattering rate much faster than it increases the dipole potential energy, leading to undesirable heating of the atoms. Quantitatively, the scattering rate is given by: Adiabatic elimination In quantum system with three (or more) states, where a transition from one level, to another can be driven by an AC field, but only decays to states other than , the dissipative influence of the spontaneous decay can be eliminated. This is achieved by increasing the AC Stark shift on through large detuning and raising intensity of the driving field. Adiabatic elimination has been used to create comparatively stable effective two level systems in Rydberg atoms, which are of interest for qubit manipulations in quantum computing. Electromagnetically induced transparency Electromagnetically induced transparency (EIT), which gives some materials a small transparent area within an absorption line, can be thought of as a combination of Autler-Townes splitting and Fano interference, although the distinction may be difficult to determine experimentally. While both Autler-Townes splitting and EIT can produce a transparent window in an absorption band, EIT refers to a window that maintains transparency in a weak pump field, and thus requires Fano interference. Because Autler-Townes splitting will wash out Fano interference at stronger fields, a smooth transition between the two effects is evident in materials exhibiting EIT. 
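Returning to the optical dipole trap discussion above, the trade-off between trap depth and photon scattering can be sketched with the far-detuned, rotating-wave two-level expressions U = hbar*Omega^2/(4*Delta) and Gamma_sc = (Gamma/(hbar*Delta))*U, with Omega^2 = Gamma^2*I/(2*I_sat). The linewidth, saturation intensity, beam intensity and detunings below are assumed placeholder numbers (roughly lithium-D2-like), and the two-level model is only a rough guide at such large detunings.

import numpy as np
from scipy.constants import hbar, k as k_B, pi

gamma = 2 * pi * 5.9e6      # natural linewidth, rad/s (assumed)
i_sat = 25.4                # saturation intensity, W/m^2 (assumed)
intensity = 1e8             # trapping-beam peak intensity, W/m^2 (assumed)

def dipole_potential(detuning):
    # Far-detuned, rotating-wave light shift of the ground state: U = hbar*Omega^2/(4*Delta),
    # with Omega^2 = gamma^2 * I / (2 * I_sat).  Red detuning (Delta < 0) gives U < 0 (attractive).
    rabi_sq = gamma**2 * intensity / (2 * i_sat)
    return hbar * rabi_sq / (4 * detuning)

def scattering_rate(detuning):
    # One extra factor of gamma/Delta relative to the potential: Gamma_sc = (gamma/(hbar*Delta)) * U.
    return (gamma / (hbar * detuning)) * dipole_potential(detuning)

for det_thz in (1.0, 3.0, 10.0):
    delta = -2 * pi * det_thz * 1e12                      # red detuning
    depth_uK = -dipole_potential(delta) / k_B * 1e6       # trap depth in microkelvin
    print(f"detuning -{det_thz:4.1f} THz: depth ~{depth_uK:7.1f} uK, scattering ~{scattering_rate(delta):7.1f} /s")
# Doubling the detuning halves the trap depth but quarters the scattering (heating) rate,
# which is why dipole traps are operated far off resonance at correspondingly high intensity.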
See also Stark effect Stark spectroscopy Electromagnetically induced transparency Fano interference Rabi cycle References Further reading Cohen-Tannoudji et al., Quantum Mechanics, Vol 2, p 1358, trans. S. R. Hemley et al., Hermann, Paris 1977 Atomic physics Quantum optics
Autler–Townes effect
[ "Physics", "Chemistry" ]
2,168
[ "Quantum optics", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
6,971,691
https://en.wikipedia.org/wiki/Jessica%20Mink
Jessica Mink (formerly Douglas John Mink) is an American software developer and a data archivist at the Center for Astrophysics Harvard & Smithsonian. She was part of the team that discovered the rings around the planet Uranus. Early life and career Mink was born in Lincoln, Nebraska, in 1951 and graduated from Dundee Community High School in 1969. She earned an S.B. degree (1973) and an S.M. degree (1974) in Planetary Science from the Massachusetts Institute of Technology (MIT). She worked at Cornell University from 1976 to 1979 as an astronomical software developer. It was during this time that she was part of the team that discovered the rings around Uranus. Within the team she was responsible for the data reduction software and the data analysis. After working at Cornell she moved back to MIT, where she did work that contributed to the discovery of the rings of Neptune. She has written a number of commonly used software packages for astrophysics, including WCSTools and RVSAO. Despite not having a PhD, Mink is a member of the American Astronomical Society and the International Astronomical Union. Personal life Mink is an avid bicycle user. She has served as an officer and director of the Massachusetts Bicycle Coalition and has been the route planner for the Massachusetts portion of the East Coast Greenway since 1991. Mink is a transgender woman, and she publicly came out in 2011 at the age of 60. She has since spoken out about her experiences transitioning. She was also featured in two articles about the experiences of transitioning in a professional environment. She was a co-organiser of the 2015 Inclusive Astronomy conference at Vanderbilt University. Mink currently lives in Massachusetts (USA), and has a daughter. References External links Jessica Mink's Homepage 1951 births Living people People from Lincoln, Nebraska Harvard University staff Massachusetts Institute of Technology alumni LGBTQ people from Nebraska American LGBTQ scientists American transgender women Transgender scientists American planetary scientists American women planetary scientists LGBTQ astronomers
Jessica Mink
[ "Astronomy" ]
402
[ "Astronomers", "LGBTQ astronomers" ]
6,976,689
https://en.wikipedia.org/wiki/Significant%20wave%20height
In physical oceanography, the significant wave height (SWH, HTSGW or Hs) is defined traditionally as the mean wave height (trough to crest) of the highest third of the waves (H1/3). It is usually defined as four times the standard deviation of the surface elevation – or equivalently as four times the square root of the zeroth-order moment (area) of the wave spectrum. The symbol Hm0 is usually used for that latter definition. The significant wave height (Hs) may thus refer to Hm0 or H1/3; the difference in magnitude between the two definitions is only a few percent. SWH is used to characterize sea state, including winds and swell. Origin and definition The original definition resulted from work by the oceanographer Walter Munk during World War II. The significant wave height was intended to mathematically express the height estimated by a "trained observer". It is commonly used as a measure of the height of ocean waves. Time domain definition Significant wave height H1/3, or Hs or Hsig, as determined in the time domain, directly from the time series of the surface elevation, is defined as the average height of that one-third of the N measured waves having the greatest heights: where Hm represents the individual wave heights, sorted into descending order of height as m increases from 1 to N. Only the highest one-third is used, since this corresponds best with visual observations of experienced mariners, whose vision apparently focuses on the higher waves. Frequency domain definition Significant wave height Hm0, defined in the frequency domain, is used both for measured and forecasted wave variance spectra. Most easily, it is defined in terms of the variance m0 or standard deviation ση of the surface elevation: where m0, the zeroth-moment of the variance spectrum, is obtained by integration of the variance spectrum. In case of a measurement, the standard deviation ση is the easiest and most accurate statistic to be used. Another wave-height statistic in common usage is the root-mean-square (or RMS) wave height Hrms, defined as: with Hm again denoting the individual wave heights in a certain time series. Statistical distribution of the heights of individual waves Significant wave height, scientifically represented as Hs or Hsig, is an important parameter for the statistical distribution of ocean waves. The most common waves are lower in height than Hs. This implies that encountering the significant wave is not too frequent. However, statistically, it is possible to encounter a wave that is much higher than the significant wave. Generally, the statistical distribution of the individual wave heights is well approximated by a Rayleigh distribution. For example, given that Hs is , statistically: 1 in 10 will be larger than 1 in 100 will be larger than 1 in 1000 will be larger than This implies that one might encounter a wave that is roughly double the significant wave height. However, in rapidly changing conditions, the disparity between the significant wave height and the largest individual waves might be even larger. Other statistics Other statistical measures of the wave height are also widely used. The RMS wave height, which is defined as square root of the average of the squares of all wave heights, is approximately equal to Hs divided by 1.4. For example, according to the Irish Marine Institute: "… at midnight on 9/12/2007 a record significant wave height was recorded of 17.2m at with [sic] a period of 14 seconds." 
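As a concrete illustration of the two definitions, the sketch below builds a synthetic half-hour elevation record in Python (all sea-state numbers and the random seed are assumed) and compares the spectral estimate Hm0 = 4 times the standard deviation of the elevation with the time-domain H1/3 obtained from zero-up-crossing wave heights.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic surface-elevation record: a random, narrow-banded sea sampled at 2 Hz
# for half an hour (component frequencies and amplitudes are assumed).
fs, duration = 2.0, 1800.0
t = np.arange(0.0, duration, 1.0 / fs)
freqs = np.linspace(0.08, 0.20, 40)                      # component frequencies, Hz
amps = rng.rayleigh(0.25, freqs.size)                    # component amplitudes, m
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
eta = np.sum(amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)

# Frequency-domain estimate: Hm0 = 4 * standard deviation of the elevation.
h_m0 = 4.0 * np.std(eta)

# Time-domain estimate: cut the record into individual waves at zero up-crossings,
# take trough-to-crest heights, and average the highest one-third (H_1/3).
up = np.where((eta[:-1] < 0.0) & (eta[1:] >= 0.0))[0]
heights = np.array([eta[a:b].max() - eta[a:b].min() for a, b in zip(up[:-1], up[1:])])
top_third = max(1, len(heights) // 3)
h_13 = np.mean(np.sort(heights)[-top_third:])

print(f"Hm0  = {h_m0:.2f} m")
print(f"H1/3 = {h_13:.2f} m")   # the two estimates should be close; the text notes they differ by only a few percent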
Measurement Although most measuring devices estimate the significant wave height from a wave spectrum, satellite radar altimeters are unique in measuring directly the significant wave height thanks to the different time of return from wave crests and troughs within the area illuminated by the radar. The maximum ever measured wave height from a satellite is during a North Atlantic storm in 2011. Weather forecasts The World Meteorological Organization stipulates that certain countries are responsible for providing weather forecasts for the world's oceans. These respective countries' meteorological offices are called Regional Specialized Meteorological Centers, or RSMCs. In their weather products, they give ocean wave height forecasts in significant wave height. In the United States, NOAA's National Weather Service is the RSMC for a portion of the North Atlantic, and a portion of the North Pacific. The Ocean Prediction Center and the Tropical Prediction Center's Tropical Analysis and Forecast Branch (TAFB) issue these forecasts. RSMCs use wind-wave models as tools to help predict the sea conditions. In the U.S., NOAA's Wavewatch III model is used heavily. Generalization to wave systems A significant wave height is also defined similarly, from the wave spectrum, for the different systems that make up the sea. We then have a significant wave height for the wind-sea or for a particular swell. See also Ocean Prediction Center Rogue wave: a wave of over twice the significant wave height Sea state Notes External links Current global map of significant wave height and period NOAA Wavewatch III NWS Environmental Modeling Center Envirtech solid state payload for directional waves measurement Naval architecture Physical oceanography Shipbuilding Water waves
Significant wave height
[ "Physics", "Chemistry", "Engineering" ]
1,041
[ "Naval architecture", "Physical phenomena", "Applied and interdisciplinary physics", "Water waves", "Shipbuilding", "Waves", "Physical oceanography", "Marine engineering", "Fluid dynamics" ]
7,136,985
https://en.wikipedia.org/wiki/Vectorization%20%28mathematics%29
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of a matrix A, denoted vec(A), is the column vector obtained by stacking the columns of the matrix A on top of one another: Here, represents the element in the i-th row and j-th column of A, and the superscript denotes the transpose. Vectorization expresses, through coordinates, the isomorphism between these (i.e., of matrices and vectors) as vector spaces. For example, for the 2×2 matrix , the vectorization is . The connection between the vectorization of A and the vectorization of its transpose is given by the commutation matrix. Compatibility with Kronecker products The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular, for matrices A, B, and C of dimensions k×l, l×m, and m×n. For example, if (the adjoint endomorphism of the Lie algebra of all n×n matrices with complex entries), then , where is the n×n identity matrix. There are two other useful formulations: More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices. Compatibility with Hadamard products Vectorization is an algebra homomorphism from the space of matrices with the Hadamard (entrywise) product to Cn2 with its Hadamard product: Compatibility with inner products Vectorization is a unitary transformation from the space of n×n matrices with the Frobenius (or Hilbert–Schmidt) inner product to Cn2: where the superscript † denotes the conjugate transpose. Vectorization as a linear sum The matrix vectorization operation can be written in terms of a linear sum. Let X be an matrix that we want to vectorize, and let ei be the i-th canonical basis vector for the n-dimensional space, that is . Let Bi be a block matrix defined as follows: Bi consists of n block matrices of size , stacked column-wise, and all these matrices are all-zero except for the i-th one, which is a identity matrix Im. Then the vectorized version of X can be expressed as follows: Multiplication of X by ei extracts the i-th column, while multiplication by Bi puts it into the desired position in the final vector. Alternatively, the linear sum can be expressed using the Kronecker product: Half-vectorization For a symmetric matrix A, the vector vec(A) contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech(A), of a symmetric matrix A is the column vector obtained by vectorizing only the lower triangular part of A: For example, for the 2×2 matrix , the half-vectorization is . There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa called, respectively, the duplication matrix and the elimination matrix. Programming language Programming languages that implement matrices may have easy means for vectorization. In Matlab/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A) respectively. Julia has the vec(A) function as well. 
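A small NumPy check of the identities above; the matrices are random placeholders. Note that NumPy stores arrays in row-major order by default, so stacking the columns requires Fortran ("F") order explicitly.

import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.random((4, 3)), rng.random((3, 5)), rng.random((5, 2))

def vec(M):
    # Stack the columns of M on top of one another (column-major order).
    return M.flatten(order="F")

def vech(M):
    # Half-vectorization: for each column, keep the entries on and below the diagonal.
    return np.concatenate([M[j:, j] for j in range(M.shape[1])])

# Compatibility with the Kronecker product: vec(A B C) = (C^T kron A) vec(B).
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.allclose(lhs, rhs))          # True

S = rng.random((3, 3))
S = S + S.T                           # a symmetric matrix
print(vec(S).size, vech(S).size)      # 9 versus 6: vech drops the redundant upper-triangular entries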
In Python NumPy arrays implement the flatten method, while in R the desired effect can be achieved via the c() or as.vector() functions. In R, function vec() of package 'ks' allows vectorization and function vech() implemented in both packages 'ks' and 'sn' allows half-vectorization. Applications Vectorization is used in matrix calculus and its applications in establishing e.g., moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices. It is also used in local sensitivity and statistical diagnostics. Notes See also Duplication and elimination matrices Voigt notation Packed storage matrix Column-major order Matricization References Linear algebra Matrices
Vectorization (mathematics)
[ "Mathematics" ]
916
[ "Matrices (mathematics)", "Linear algebra", "Mathematical objects", "Algebra" ]
7,137,951
https://en.wikipedia.org/wiki/Schlenk%20line
The Schlenk line (also vacuum gas manifold) is a commonly used chemistry apparatus developed by Wilhelm Schlenk. It consists of a dual manifold with several ports. One manifold is connected to a source of purified inert gas, while the other is connected to a vacuum pump. The inert-gas line is vented through an oil bubbler, while solvent vapors and gaseous reaction products are prevented from contaminating the vacuum pump by a liquid-nitrogen or dry-ice/acetone cold trap. Special stopcocks or Teflon taps allow vacuum or inert gas to be selected without the need for placing the sample on a separate line. Schlenk lines are useful for manipulating moisture- and air-sensitive compounds. The vacuum is used to remove air or other gasses present in closed, connected glassware to the line. It often also removes the last traces of solvent from a sample. Vacuum and gas manifolds often have many ports and lines, and with care, it is possible for several reactions or operations to be run simultaneously in inert conditions. When the reagents are highly susceptible to oxidation, traces of oxygen may pose a problem. Then, for the removal of oxygen below the ppm level, the inert gas needs to be purified by passing it through a deoxygenation catalyst. This is usually a column of copper(I) or manganese(II) oxide, which reacts with oxygen traces present in the inert gas. In other cases, a purge-cycle technique is often employed, where the closed, reaction vessel connected to the line is filled with inert gas, evacuated with the vacuum and then refilled. This process is repeated 3 or more times to make sure air is rigorously removed. Moisture can be removed by heating the reaction vessel with a heat gun. Techniques The main techniques associated with the use of a Schlenk line include: counterflow additions, where air-stable reagents are added to the reaction vessel against a flow of inert gas; the use of syringes and rubber septa to transfer liquids and solutions; cannula transfer, where liquids or solutions of air-sensitive reagents are transferred between different vessels stoppered with septa using a long thin tube known as a cannula. Liquid flow is supported by vacuum or inert-gas pressure. Glassware are usually connected by tightly fitting and greased ground glass joints. Round bends of glass tubing with ground glass joints may be used to adjust the orientation of various vessels. Glassware is necessarily purged of outside air by using the purge cycling technique. The solvents and reagents that are used can use a technique called "sparging" to remove air. This is where a cannula needle, which is connected to the inert gas on the line, is inserted into the reaction vessel containing the solvent; this effectively bubbles the inert gas into the solution, which will actively push out trapped gas molecules from the solvent. Filtration under inert conditions poses a special challenge. It is usually achieved using a "cannula filter". Classically, filtration is tackled with a Schlenk filter, which consists of a sintered glass funnel fitted with joints and stopcocks. By fitting the pre-dried funnel and receiving flask to the reaction flask against a flow of nitrogen, carefully inverting the set-up and turning on the vacuum appropriately, the filtration may be accomplished with minimal exposure to air. A glovebox is often used in conjunction with the Schlenk line for storing and reusing air- and moisture-sensitive solvents in a lab. 
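The purge-cycle technique mentioned above can be given a rough quantitative basis with a simple dilution estimate. The pressures below are assumed values, the model assumes ideal mixing of the residual gas, and in practice the purity of the inert gas itself (and any leaks) sets the real floor.

ATM_PRESSURE = 1000.0   # mbar, assumed refill pressure
BASE_PRESSURE = 1.0     # mbar reached on each evacuation, assumed
O2_FRACTION = 0.21      # oxygen fraction of air

def residual_oxygen_ppm(cycles: int) -> float:
    # Each evacuate/refill cycle dilutes whatever air remains by BASE_PRESSURE / ATM_PRESSURE.
    remaining_air = (BASE_PRESSURE / ATM_PRESSURE) ** cycles
    return O2_FRACTION * remaining_air * 1e6   # parts per million

for n in range(1, 5):
    print(f"{n} cycle(s): ~{residual_oxygen_ppm(n):.3g} ppm O2")
# With these assumptions, three cycles leave well under a ppm of oxygen,
# consistent with the usual advice to repeat the cycle three or more times.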
Dangers The main dangers associated with the use of a Schlenk line are the risks of an implosion or explosion. An implosion can occur due to the use of vacuum and flaws in the glass apparatus. An explosion can occur due to the common use of liquid nitrogen in the cold trap, used to protect the vacuum pump from solvents. If a reasonable amount of air is allowed to enter the Schlenk line, liquid oxygen can condense into the cold trap as a pale blue liquid. An explosion may occur due to reaction of the liquid oxygen with any organic compounds also in the trap. Gallery See also Air-free technique gives a broad overview of methods including: Glovebox – used to manipulate air-sensitive (oxygen- or moisture-sensitive) chemicals. Schlenk flask – reaction vessel for handling air-sensitive compounds. Perkin triangle – used for the distillation of air-sensitive compounds. References Further reading "Handling Air-Sensitive Reagents" Sigma-Aldrich. External links Preparation of a Manganese oxide column for inert gas purification from oxygen traces Laboratory equipment Laboratory glassware Air-free techniques
Schlenk line
[ "Chemistry", "Engineering" ]
991
[ "Vacuum systems", "Air-free techniques" ]
7,137,983
https://en.wikipedia.org/wiki/Enzyme%20catalysis
Enzyme catalysis is the increase in the rate of a process by an "enzyme", a biological molecule. Most enzymes are proteins, and most such processes are chemical reactions. Within the enzyme, catalysis generally occurs at a localized site called the active site. Most enzymes are made predominantly of proteins, either a single protein chain or many such chains in a multi-subunit complex. Enzymes often also incorporate non-protein components, such as metal ions or specialized organic molecules known as cofactors (e.g. adenosine triphosphate). Many cofactors are vitamins, and their role as vitamins is directly linked to their use in the catalysis of biological processes within metabolism. Catalysis of biochemical reactions in the cell is vital since many, but not all, metabolically essential reactions have very low rates when uncatalysed. One driver of protein evolution is the optimization of such catalytic activities, although only the most crucial enzymes operate near catalytic efficiency limits, and many enzymes are far from optimal. Important factors in enzyme catalysis include general acid and base catalysis, orbital steering, entropic restriction, orientation effects (i.e. lock and key catalysis), as well as motional effects involving protein dynamics. Mechanisms of enzyme catalysis vary, but are all similar in principle to other types of chemical catalysis in that the crucial factor is a reduction of the energy barrier(s) separating the reactants (or substrates) from the products. The reduction of activation energy (Ea) increases the fraction of reactant molecules that can overcome this barrier and form the product. An important principle is that since they only reduce energy barriers between products and reactants, enzymes always catalyze reactions in both directions, and cannot drive a reaction forward or affect the equilibrium position – only the speed with which it is achieved. As with other catalysts, the enzyme is not consumed or changed by the reaction (as a substrate is) but is recycled such that a single enzyme performs many rounds of catalysis. Enzymes are often highly specific and act on only certain substrates. Some enzymes are absolutely specific, meaning that they act on only one substrate, while others show group specificity and can act on similar but not identical chemical groups, such as the peptide bond in different molecules. Many enzymes have stereochemical specificity and act on one stereoisomer but not another. Induced fit The classic model for the enzyme-substrate interaction is the induced fit model. This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding. The advantages of the induced fit mechanism arise due to the stabilizing effect of strong enzyme binding. There are two different mechanisms of substrate binding: uniform binding, which has strong substrate binding, and differential binding, which has strong transition state binding. The stabilizing effect of uniform binding increases both substrate and transition state binding affinity, while differential binding increases only transition state binding affinity. Both are used by enzymes and have been evolutionarily chosen to minimize the activation energy of the reaction.
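The quantitative effect of lowering the activation energy can be sketched with the Arrhenius exponential factor. The barrier reductions and temperature below are illustrative assumptions, not values from the article, and the sketch is not a model of any particular enzyme.

import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # temperature, K (assumed)

def rate_enhancement(delta_ea_kj: float) -> float:
    # If the only change is a lower activation energy, Arrhenius/transition-state
    # kinetics give k_catalysed / k_uncatalysed = exp(delta_Ea / (R*T)).
    return math.exp(delta_ea_kj * 1e3 / (R * T))

for delta in (10.0, 20.0, 30.0, 40.0):   # assumed barrier reductions, kJ/mol
    print(f"barrier lowered by {delta:4.1f} kJ/mol -> rate increased ~{rate_enhancement(delta):.1e}-fold")

# The same factor applies to the forward and reverse reactions, which is the point made
# above: an enzyme changes how fast equilibrium is reached, not where it lies.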
Enzymes that are saturated, that is, have a high affinity substrate binding, require differential binding to reduce the energy of activation, whereas small substrate unbound enzymes may use either differential or uniform binding. These effects have led to most proteins using the differential binding mechanism to reduce the energy of activation, so most substrates have high affinity for the enzyme while in the transition state. Differential binding is carried out by the induced fit mechanism – the substrate first binds weakly, then the enzyme changes conformation increasing the affinity to the transition state and stabilizing it, so reducing the activation energy to reach it. It is important to clarify, however, that the induced fit concept cannot be used to rationalize catalysis. That is, the chemical catalysis is defined as the reduction of Ea‡ (when the system is already in the ES‡) relative to Ea‡ in the uncatalyzed reaction in water (without the enzyme). The induced fit only suggests that the barrier is lower in the closed form of the enzyme but does not tell us what the reason for the barrier reduction is. Induced fit may be beneficial to the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism. Mechanisms of an alternative reaction route These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. After binding takes place, one or more mechanisms of catalysis lowers the energy of the reaction's transition state, by providing an alternative chemical pathway for the reaction. There are six possible mechanisms of "over the barrier" catalysis as well as a "through the barrier" mechanism: Proximity and orientation Enzyme-substrate interactions align the reactive chemical groups and hold them close together in an optimal geometry, which increases the rate of the reaction. This reduces the entropy of the reactants and thus makes addition or transfer reactions less unfavorable, since a reduction in the overall entropy when two reactants become a single product. However this is a general effect and is seen in non-addition or transfer reactions where it occurs due to an increase in the "effective concentration" of the reagents. This is understood when considering how increases in concentration leads to increases in reaction rate: essentially when the reactants are more concentrated, they collide more often and so react more often. In enzyme catalysis, the binding of the reagents to the enzyme restricts the conformational space of the reactants, holding them in the 'proper orientation' and close to each other, so that they collide more frequently, and with the correct geometry, to facilitate the desired reaction. The "effective concentration" is the concentration the reactant would have to be, free in solution, to experiences the same collisional frequency. Often such theoretical effective concentrations are unphysical and impossible to realize in reality – which is a testament to the great catalytic power of many enzymes, with massive rate increases over the uncatalyzed state. However, the situation might be more complex, since modern computational studies have established that traditional examples of proximity effects cannot be related directly to enzyme entropic effects. Also, the original entropic proposal has been found to largely overestimate the contribution of orientation entropy to catalysis. 
Proton donors or acceptors Proton donors and acceptors, i.e. acids and bases, may donate and accept protons in order to stabilize developing charges in the transition state. This is related to the overall principle of catalysis, that of reducing energy barriers, since in general transition states are high energy states, and by stabilizing them this high energy is reduced, lowering the barrier. A key feature of enzyme catalysis over many non-biological catalysts is that both acid and base catalysis can be combined in the same reaction. In many abiotic systems, acids (large [H+]) or bases (large-concentration H+ sinks, or species with electron pairs) can increase the rate of the reaction; but of course the environment can only have one overall pH (measure of acidity or basicity (alkalinity)). However, since enzymes are large molecules, they can position both acid groups and basic groups in their active site to interact with their substrates, and employ both modes independent of the bulk pH. Often general acid or base catalysis is employed to activate nucleophile and/or electrophile groups, or to stabilize leaving groups. Many amino acids with acidic or basic groups are thus employed in the active site, such as glutamic and aspartic acid, histidine, cysteine, tyrosine, lysine and arginine, as well as serine and threonine. In addition, the peptide backbone, with carbonyl and amide N groups, is often employed. Cysteine and histidine are very commonly involved, since they both have a pKa close to neutral pH and can therefore both accept and donate protons. Many reaction mechanisms involving acid/base catalysis assume a substantially altered pKa. This alteration of pKa is possible through the local environment of the residue. pKa can also be influenced significantly by the surrounding environment, to the extent that residues which are basic in solution may act as proton donors, and vice versa. The modification of the pKa values is purely part of the electrostatic mechanism. The catalytic effect of the above example is mainly associated with the reduction of the pKa of the oxyanion and the increase in the pKa of the histidine, while the proton transfer from the serine to the histidine is not catalyzed significantly, since it is not the rate-determining barrier. Note that in the example shown, the histidine conjugate acid acts as a general acid catalyst for the subsequent loss of the amine from a tetrahedral intermediate. Evidence supporting this proposed mechanism (Figure 4 in Ref. 13) has, however, been disputed. Electrostatic catalysis Stabilization of charged transition states can also be achieved by residues in the active site forming ionic bonds (or partial ionic charge interactions) with the intermediate. These bonds can come either from acidic or basic side chains found on amino acids such as lysine, arginine, aspartic acid or glutamic acid, or from metal cofactors such as zinc. Metal ions are particularly effective and can reduce the pKa of water enough to make it an effective nucleophile. Systematic computer simulation studies have established that electrostatic effects give, by far, the largest contribution to catalysis. They can increase the rate of reaction by a factor of up to 10^7. In particular, it has been found that the enzyme provides an environment which is more polar than water, and that the ionic transition states are stabilized by fixed dipoles. This is very different from transition state stabilization in water, where the water molecules must pay with "reorganization energy".
This reorganization energy is required to stabilize ionic and charged states; the catalysis is thus associated with the fact that the enzyme's polar groups are preorganized. The magnitude of the electrostatic field exerted by an enzyme's active site has been shown to be highly correlated with the enzyme's catalytic rate enhancement. Binding of substrate usually excludes water from the active site, thereby lowering the local dielectric constant to that of an organic solvent. This strengthens the electrostatic interactions between the charged/polar substrates and the active sites. In addition, studies have shown that the charge distributions about the active sites are arranged so as to stabilize the transition states of the catalyzed reactions. In several enzymes, these charge distributions apparently serve to guide polar substrates toward their binding sites so that the rates of these enzymatic reactions are greater than their apparent diffusion-controlled limits. Covalent catalysis Covalent catalysis involves the substrate forming a transient covalent bond with residues in the enzyme active site or with a cofactor. This adds an additional covalent intermediate to the reaction, and helps to reduce the energy of later transition states of the reaction. The covalent bond must, at a later stage in the reaction, be broken to regenerate the enzyme. This mechanism is utilised by the catalytic triad of enzymes such as proteases like chymotrypsin and trypsin, where an acyl-enzyme intermediate is formed. An alternative mechanism is Schiff base formation using the free amine from a lysine residue, as seen in the enzyme aldolase during glycolysis. Some enzymes utilize non-amino acid cofactors such as pyridoxal phosphate (PLP) or thiamine pyrophosphate (TPP) to form covalent intermediates with reactant molecules. Such covalent intermediates function to reduce the energy of later transition states, similar to how covalent intermediates formed with active site amino acid residues allow stabilization, but the capabilities of cofactors allow enzymes to carry out reactions that amino acid residues alone could not. Enzymes utilizing such cofactors include the PLP-dependent enzyme aspartate transaminase and the TPP-dependent enzyme pyruvate dehydrogenase. Rather than lowering the activation energy for a reaction pathway, covalent catalysis provides an alternative pathway for the reaction (via the covalent intermediate) and so is distinct from true catalysis. For example, the energetics of the covalent bond to the serine molecule in chymotrypsin should be compared to the well-understood covalent bond to the nucleophile in the uncatalyzed solution reaction. A true proposal of covalent catalysis (where the barrier is lower than the corresponding barrier in solution) would require, for example, a partial covalent bond to the transition state by an enzyme group (e.g., a very strong hydrogen bond), and such effects do not contribute significantly to catalysis. Metal ion catalysis A metal ion in the active site participates in catalysis by coordinating charge stabilization and shielding. Because of a metal's positive charge, only negative charges can be stabilized through metal ions. However, metal ions are advantageous in biological catalysis because they are not affected by changes in pH. Metal ions can also act to ionize water by acting as a Lewis acid. Metal ions may also be agents of oxidation and reduction.
Bond strain This is the principal effect of induced fit binding, where the affinity of the enzyme to the transition state is greater than to the substrate itself. This induces structural rearrangements which strain substrate bonds into a position closer to the conformation of the transition state, so lowering the energy difference between the substrate and transition state and helping catalyze the reaction. However, the strain effect is, in fact, a ground state destabilization effect, rather than a transition state stabilization effect. Furthermore, enzymes are very flexible and cannot apply a large strain effect. In addition to bond strain in the substrate, bond strain may also be induced within the enzyme itself to activate residues in the active site. Quantum tunneling These traditional "over the barrier" mechanisms have been challenged in some cases by models and observations of "through the barrier" mechanisms (quantum tunneling). Some enzymes operate with kinetics which are faster than what would be predicted by the classical ΔG‡. In "through the barrier" models, a proton or an electron can tunnel through activation barriers. Quantum tunneling for protons has been observed in tryptamine oxidation by aromatic amine dehydrogenase. Quantum tunneling does not appear to provide a major catalytic advantage, since the tunneling contributions are similar in the catalyzed and the uncatalyzed reactions in solution. However, the tunneling contribution (typically enhancing rate constants by a factor of ~1000 compared to the rate of reaction for the classical 'over the barrier' route) is likely crucial to the viability of biological organisms. This emphasizes the general importance of tunneling reactions in biology. In 1971–1972, the first quantum-mechanical model of enzyme catalysis was formulated. Active enzyme The binding energy of the enzyme-substrate complex cannot be considered as an external energy which is necessary for the substrate activation. The enzyme of high energy content may firstly transfer some specific energetic group X1 from the catalytic site of the enzyme to the final place of the first bound reactant; then another group X2 from the second bound reactant (or from the second group of the single reactant) must be transferred to the active site to finish substrate conversion to product and enzyme regeneration. We can present the whole enzymatic reaction as two coupled reactions: It may be seen from reaction () that the group X1 of the active enzyme appears in the product due to the possibility of the exchange reaction inside the enzyme to avoid both electrostatic inhibition and repulsion of atoms. So we represent the active enzyme as a powerful reactant of the enzymatic reaction. The reaction () shows incomplete conversion of the substrate because its group X2 remains inside the enzyme. This approach had been proposed earlier as an idea, relying on hypothetical, extremely high enzymatic conversions (a catalytically perfect enzyme). The crucial point for the verification of the present approach is that the catalyst must be a complex of the enzyme with the transfer group of the reaction. This chemical aspect is supported by the well-studied mechanisms of several enzymatic reactions. Consider the reaction of peptide bond hydrolysis catalyzed by a pure protein α-chymotrypsin (an enzyme acting without a cofactor), which is a well-studied member of the serine protease family.
We present the experimental results for this reaction as two chemical steps: where S1 is a polypeptide, P1 and P2 are products. The first chemical step () includes the formation of a covalent acyl-enzyme intermediate. The second step () is the deacylation step. It is important to note that the group H+, initially found on the enzyme, but not in water, appears in the product before the step of hydrolysis, therefore it may be considered as an additional group of the enzymatic reaction. Thus, the reaction () shows that the enzyme acts as a powerful reactant of the reaction. According to the proposed concept, the H transport from the enzyme promotes the first reactant conversion, breakdown of the first initial chemical bond (between groups P1 and P2). The step of hydrolysis leads to a breakdown of the second chemical bond and regeneration of the enzyme. The proposed chemical mechanism does not depend on the concentration of the substrates or products in the medium. However, a shift in their concentration mainly causes free energy changes in the first and final steps of the reactions () and () due to the changes in the free energy content of every molecule, whether S or P, in water solution. This approach is in accordance with the following mechanism of muscle contraction. The final step of ATP hydrolysis in skeletal muscle is the product release caused by the association of myosin heads with actin. The closing of the actin-binding cleft during the association reaction is structurally coupled with the opening of the nucleotide-binding pocket on the myosin active site. Notably, the final steps of ATP hydrolysis include the fast release of phosphate and the slow release of ADP. The release of a phosphate anion from bound ADP anion into water solution may be considered as an exergonic reaction because the phosphate anion has low molecular mass. Thus, we arrive at the conclusion that the primary release of the inorganic phosphate H2PO4− leads to transformation of a significant part of the free energy of ATP hydrolysis into the kinetic energy of the solvated phosphate, producing active streaming. This assumption of a local mechano-chemical transduction is in accord with Tirosh's mechanism of muscle contraction, where the muscle force derives from an integrated action of active streaming created by ATP hydrolysis. Examples of catalytic mechanisms In reality, most enzyme mechanisms involve a combination of several different types of catalysis. Triose phosphate isomerase Triose phosphate isomerase () catalyses the reversible interconversion of the two triose phosphates isomers dihydroxyacetone phosphate and D-glyceraldehyde 3-phosphate. Trypsin Trypsin () is a serine protease that cleaves protein substrates after lysine or arginine residues using a catalytic triad to perform covalent catalysis, and an oxyanion hole to stabilise charge-buildup on the transition states. Aldolase Aldolase () catalyses the breakdown of fructose 1,6-bisphosphate (F-1,6-BP) into glyceraldehyde 3-phosphate and dihydroxyacetone phosphate (DHAP). Enzyme diffusivity The advent of single-molecule studies in the 2010s led to the observation that the movement of untethered enzymes increases with increasing substrate concentration and increasing reaction enthalpy. Subsequent observations suggest that this increase in diffusivity is driven by transient displacement of the enzyme's center of mass, resulting in a "recoil effect that propels the enzyme". 
Reaction similarity Similarity between enzymatic reactions (EC) can be calculated by using bond changes, reaction centres or substructure metrics (EC-BLAST ). See also Catalytic triad Enzyme assay Enzyme inhibitor Enzyme kinetics Enzyme promiscuity Protein dynamics Pseudoenzymes, whose ubiquity despite their catalytic inactivity suggests omic implications Quantum tunnelling The Proteolysis Map Time resolved crystallography References Further reading External links Articles containing video clips es:Catálisis enzimática
Enzyme catalysis
[ "Chemistry" ]
4,285
[ "Catalysis", "Chemical kinetics" ]
7,138,173
https://en.wikipedia.org/wiki/Sanduleak%20-69%20202
Sanduleak -69 202 (Sk -69 202, also known as GSC 09162-00821) was a magnitude 12 blue supergiant star, located on the outskirts of the Tarantula Nebula in the Large Magellanic Cloud. It was the progenitor of supernova 1987A. The star was originally charted by the Romanian-American astronomer Nicholas Sanduleak in 1970, but was not well studied until identified as the star that exploded in the first naked eye supernova since the invention of the telescope, when its maximum reached visual magnitude +2.8. The discovery that a blue supergiant was a supernova progenitor contradicted the prevailing theories of stellar evolution and produced a flurry of new ideas about how such a thing might happen, but it is now accepted that blue supergiants are a normal progenitor for some supernovae. The candidate luminous blue variable HD 168625 possesses a bipolar nebula that is a close twin of that around Sk -69 202. It is speculated that Sk -69 202 may have been a luminous blue variable in the recent past, although it was apparently a normal luminous supergiant at the time it exploded. See also Neutrino astronomy List of supernovae History of supernova observation References Stars in the Large Magellanic Cloud Dorado B-type supergiants Luminous blue variables Large Magellanic Cloud Extragalactic stars Tarantula Nebula
Sanduleak -69 202
[ "Astronomy" ]
295
[ "Dorado", "Constellations" ]
15,947,157
https://en.wikipedia.org/wiki/Non-smooth%20mechanics
Non-smooth mechanics is a modeling approach in mechanics which does not require the time evolutions of the positions and of the velocities to be smooth functions. Due to possible impacts, the velocities of the mechanical system are allowed to undergo jumps at certain time instants in order to fulfill the kinematical restrictions. Consider for example a rigid model of a ball which falls on the ground. Just before the impact between ball and ground, the ball has non-vanishing pre-impact velocity. At the impact time instant, the velocity must jump to a post-impact velocity which is at least zero, or else penetration would occur. Non-smooth mechanical models are often used in contact dynamics. See also Contact dynamics Unilateral contact Jean Jacques Moreau References Acary V., Brogliato, B. Numerical Methods for Nonsmooth Dynamical Systems. Applications in Mechanics and Electronics. Springer Verlag, LNACM 35, Heidelberg, 2008. Brogliato B. Nonsmooth Mechanics. Models, Dynamics and Control. Communications and Control Engineering Series, Springer-Verlag, London, 2016 (3rd Ed.) Demyanov, V.F., Stavroulakis, G.E., Polyakova, L.N., Panagiotopoulos, P.D. "Quasidifferentiability and Nonsmooth Modelling in Mechanics, Engineering and Economics", Springer 1996. Yang Gao, David, Ogden, Ray W., Stavroulakis, Georgios E. (Eds.) "Nonsmooth/Nonconvex Mechanics Modeling, Analysis and Numerical Methods", Springer 2001 Glocker, Ch. Dynamik von Starrkoerpersystemen mit Reibung und Stoessen, volume 18/182 of VDI Fortschrittsberichte Mechanik/Bruchmechanik. VDI Verlag, Düsseldorf, 1995 Glocker Ch. and Studer C. Formulation and preparation for Numerical Evaluation of Linear Complementarity Systems. Multibody System Dynamics 13(4):447-463, 2005 Jean M. The non-smooth contact dynamics method. Computer Methods in Applied mechanics and Engineering 177(3-4):235-257, 1999 Mistakidis, E.S., Stavroulakis, Georgios E. "Nonconvex Optimization in Mechanics Algorithms, Heuristics and Engineering Applications by the F.E.M.", Springer, 1998 Moreau J.J. Unilateral Contact and Dry Friction in Finite Freedom Dynamics, volume 302 of Non-smooth Mechanics and Applications, CISM Courses and Lectures. Springer, Wien, 1988 Pfeiffer F., Foerg M. and Ulbrich H. Numerical aspects of non-smooth multibody dynamics. Comput. Methods Appl. Mech. Engrg 195(50-51):6891-6908, 2006 Potra F.A., Anitescu M., Gavrea B. and Trinkle J. A linearly implicit trapezoidal method for integrating stiff multibody dynamics with contacts, joints and friction. Int. J. Numer. Meth. Engng 66(7):1079-1124, 2006 Stewart D.E. and Trinkle J.C. An Implicit Time-Stepping Scheme for Rigid Body Dynamics with Inelastic Collisions and Coulomb Friction. Int. J. Numer. Methods Engineering 39(15):2673-2691, 1996 Mechanics Dynamical systems
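The bouncing-ball example above can be written as a tiny event-driven simulation: smooth free flight between contacts, and an instantaneous velocity jump at each impact governed by a Newton restitution law. Gravity, the restitution coefficient and the initial drop height below are assumed illustrative values, and this sketch is only the simplest instance of the modelling approach described here.

import math

g, e = 9.81, 0.8          # gravity (m/s^2) and restitution coefficient, assumed
t, h, v = 0.0, 1.0, 0.0   # time, height above ground, upward velocity

for bounce in range(1, 6):
    # Smooth phase: free flight until the ball reaches the ground, h + v*t - g*t^2/2 = 0.
    t_flight = (v + math.sqrt(v * v + 2.0 * g * h)) / g
    t += t_flight
    v_pre = v - g * t_flight          # pre-impact velocity (downward, hence negative)
    v = -e * v_pre                    # non-smooth phase: instantaneous velocity jump
    h = 0.0
    print(f"impact {bounce}: t = {t:6.3f} s, v- = {v_pre:6.3f} m/s, v+ = {v:6.3f} m/s")
# The position remains continuous while the velocity jumps at each impact time,
# which is exactly the kind of evolution non-smooth mechanics is built to describe.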
Non-smooth mechanics
[ "Physics", "Mathematics", "Engineering" ]
736
[ "Mechanics", "Mechanical engineering", "Dynamical systems" ]
15,954,084
https://en.wikipedia.org/wiki/Shear%20and%20moment%20diagram
Shear force and bending moment diagrams are analytical tools used in conjunction with structural analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam. These diagrams can be used to easily determine the type, size, and material of a member in a structure so that a given set of loads can be supported without structural failure. Another application of shear and moment diagrams is that the deflection of a beam can be easily determined using either the moment area method or the conjugate beam method. Convention Although these conventions are relative and any convention can be used if stated explicitly, practicing engineers have adopted a standard convention used in design practices. Normal convention The normal convention used in most engineering applications is to label a positive shear force - one that spins an element clockwise (up on the left, and down on the right). Likewise the normal convention for a positive bending moment is to warp the element in a "u" shape manner (Clockwise on the left, and counterclockwise on the right). Another way to remember this is if the moment is bending the beam into a "smile" then the moment is positive, with compression at the top of the beam and tension on the bottom. This convention was selected to simplify the analysis of beams. Since a horizontal member is usually analyzed from left to right and positive in the vertical direction is normally taken to be up, the positive shear convention was chosen to be up from the left, and to make all drawings consistent down from the right. The positive bending convention was chosen such that a positive shear force would tend to create a positive moment. Alternative drawing convention In structural engineering and in particular concrete design the positive moment is drawn on the tension side of the member. This convention puts the positive moment below the beam described above. A convention of placing moment diagram on the tension side allows for frames to be dealt with more easily and clearly. Additionally, placing the moment on the tension side of the member shows the general shape of the deformation and indicates on which side of a concrete member rebar should be placed, as concrete is weak in tension. Relationships among load, shear, and moment diagrams Since this method can easily become unnecessarily complicated with relatively simple problems, it can be quite helpful to understand different relations between the loading, shear, and moment diagram. The first of these is the relationship between a distributed load on the loading diagram and the shear diagram. Since a distributed load varies the shear load according to its magnitude it can be derived that the slope of the shear diagram is equal to the magnitude of the distributed load. The relationship, described by Schwedler's theorem, between distributed load and shear force magnitude is: Some direct results of this is that a shear diagram will have a point change in magnitude if a point load is applied to a member, and a linearly varying shear magnitude as a result of a constant distributed load. Similarly it can be shown that the slope of the moment diagram at a given point is equal to the magnitude of the shear diagram at that distance. The relationship between distributed shear force and bending moment is: A direct result of this is that at every point the shear diagram crosses zero the moment diagram will have a local maximum or minimum. 
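A quick numerical check of these relations for a simply supported beam under a uniform load (the span, load and discretisation below are assumed values): the derivative of the moment diagram reproduces the shear diagram, and the moment peaks where the shear crosses zero.

import numpy as np

L = 6.0          # span, m (assumed)
w = 10.0         # uniform load, kN/m (assumed)
R = w * L / 2.0  # each support reaction, by symmetry

x = np.linspace(0.0, L, 601)
V = R - w * x                  # shear force diagram
M = R * x - w * x**2 / 2.0     # bending moment diagram (sagging positive)

dM_dx = np.gradient(M, x)
print("max |dM/dx - V|:", np.max(np.abs(dM_dx - V)))   # ~0, up to discretisation error

i0 = np.argmin(np.abs(V))      # where the shear crosses zero
print("shear is zero at x =", x[i0], "m")
print("moment there:", M[i0], "kN*m, compared with w*L**2/8 =", w * L**2 / 8.0)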
Also if the shear diagram is zero over a length of the member, the moment diagram will have a constant value over that length. By calculus it can be shown that a point load will lead to a linearly varying moment diagram, and a constant distributed load will lead to a quadratic moment diagram. Practical considerations In practical applications the entire stepwise function is rarely written out. The only parts of the stepwise function that would be written out are the moment equations in a nonlinear portion of the moment diagram; this occurs whenever a distributed load is applied to the member. For constant portions the value of the shear and/or moment diagram is written right on the diagram, and for linearly varying portions of a member the beginning value, end value, and slope or the portion of the member are all that are required. See also Bending Euler–Bernoulli beam theory Bending moment Singularity function#Example beam calculation References Further reading Cheng, Fa-Hwa. "Shear Forces and Bending Moments in Beams" Statics and Strength of Materials. New York: Glencoe, McGraw-Hill, 1997. Print. Spotts, Merhyle Franklin, Terry E. Shoup, and Lee Emrey. Hornberger. "Shear and Bending Moment Diagrams." Design of Machine Elements. Upper Saddle River, NJ: Pearson/Prentice Hall, 2004. Print. External links Beam theory Continuum mechanics Diagrams Moment (physics) Structural analysis
Shear and moment diagram
[ "Physics", "Mathematics", "Engineering" ]
948
[ "Structural engineering", "Physical quantities", "Continuum mechanics", "Quantity", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering", "Moment (physics)" ]
15,958,803
https://en.wikipedia.org/wiki/Splitting%20principle
In mathematics, the splitting principle is a technique used to reduce questions about vector bundles to the case of line bundles. In the theory of vector bundles, one often wishes to simplify computations, say of Chern classes. Often computations are well understood for line bundles and for direct sums of line bundles. In this case the splitting principle can be quite useful. The theorem above holds for complex vector bundles and integer coefficients or for real vector bundles with coefficients. In the complex case, the line bundles or their first characteristic classes are called Chern roots. The fact that is injective means that any equation which holds in (say between various Chern classes) also holds in . The point is that these equations are easier to understand for direct sums of line bundles than for arbitrary vector bundles, so equations should be understood in and then pushed down to . Since vector bundles on are used to define the K-theory group , it is important to note that is also injective for the map in the above theorem. The splitting principle admits many variations. The following, in particular, concerns real vector bundles and their complexifications: Symmetric polynomial Under the splitting principle, characteristic classes for complex vector bundles correspond to symmetric polynomials in the first Chern classes of complex line bundles; these are the Chern classes. See also K-theory Grothendieck splitting principle for holomorphic vector bundles on the complex projective line References section 3.1 Raoul Bott and Loring Tu. Differential Forms in Algebraic Topology, section 21. Characteristic classes Vector bundles Mathematical principles
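The displayed statement of the theorem is not reproduced in this text; a standard formulation, given here for reference for complex bundles with integer coefficients, is the following. For a rank-\(n\) complex vector bundle \(E \to X\) over a paracompact space \(X\), there exist a space \(Y = \mathrm{Fl}(E)\) (the flag bundle of \(E\)) and a map \(p \colon Y \to X\) such that
\[
p^{*} \colon H^{*}(X;\mathbb{Z}) \to H^{*}(Y;\mathbb{Z}) \ \text{is injective}
\qquad\text{and}\qquad
p^{*}E \cong L_{1} \oplus \cdots \oplus L_{n}
\]
for line bundles \(L_{1},\dots,L_{n}\). Writing \(x_{i} = c_{1}(L_{i})\) for the Chern roots, the Whitney formula gives
\[
c(p^{*}E) = \prod_{i=1}^{n}(1 + x_{i}),
\qquad
c_{k}(p^{*}E) = e_{k}(x_{1},\dots,x_{n}),
\]
the \(k\)-th elementary symmetric polynomial, which is the sense in which Chern classes correspond to symmetric polynomials in the first Chern classes of line bundles.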
Splitting principle
[ "Mathematics" ]
316
[ "Mathematical principles" ]
15,959,032
https://en.wikipedia.org/wiki/Infinitary%20combinatorics
In mathematics, infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals. Ramsey theory for infinite sets Write for ordinals, for a cardinal number (finite or infinite) and for a natural number. introduced the notation as a shorthand way of saying that every partition of the set of -element subsets of into pieces has a homogeneous set of order type . A homogeneous set is in this case a subset of such that every -element subset is in the same element of the partition. When is 2 it is often omitted. Such statements are known as partition relations. Assuming the axiom of choice, there are no ordinals with , so is usually taken to be finite. An extension where is almost allowed to be infinite is the notation which is a shorthand way of saying that every partition of the set of finite subsets of into pieces has a subset of order type such that for any finite , all subsets of size are in the same element of the partition. When is 2 it is often omitted. Another variation is the notation which is a shorthand way of saying that every coloring of the set of -element subsets of with 2 colors has a subset of order type such that all elements of have the first color, or a subset of order type such that all elements of have the second color. Some properties of this include: (in what follows is a cardinal) In choiceless universes, partition properties with infinite exponents may hold, and some of them are obtained as consequences of the axiom of determinacy (AD). For example, Donald A. Martin proved that AD implies Strong colorings Wacław Sierpiński showed that the Ramsey theorem does not extend to sets of size by showing that . That is, Sierpiński constructed a coloring of pairs of real numbers into two colors such that for every uncountable subset of real numbers , takes both colors. Taking any set of real numbers of size and applying the coloring of Sierpiński to it, we get that . Colorings such as this are known as strong colorings and studied in set theory. introduced a similar notation as above for this. Write for ordinals, for a cardinal number (finite or infinite) and for a natural number. Then is a shorthand way of saying that there exists a coloring of the set of -element subsets of into pieces such that every set of order type is a rainbow set. A rainbow set is in this case a subset of such that takes all colors. When is 2 it is often omitted. Such statements are known as negative square bracket partition relations. Another variation is the notation which is a shorthand way of saying that there exists a coloring of the set of 2-element subsets of with colors such that for every subset of order type and every subset of order type , the set takes all colors. Some properties of this include: (in what follows is a cardinal) Large cardinals Several large cardinal properties can be defined using this notation. In particular: Weakly compact cardinals are those that satisfy α-Erdős cardinals are the smallest that satisfy Ramsey cardinals are those that satisfy Notes References Set theory Combinatorics
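The partition-relation symbols referenced in the article above did not survive extraction. For orientation, the standard Erdős–Rado arrow notation that the surrounding prose describes is, in the usual convention, as follows; this is supplied as an assumed reconstruction of the conventional notation, not as the article's original formulas.

```latex
% Ordinary partition relation: every partition of the n-element subsets of
% \kappa into m pieces has a homogeneous set of order type \lambda.
\kappa \longrightarrow (\lambda)^{n}_{m}

% Negative square-bracket relation: there exists a colouring of the n-element
% subsets of \kappa with m colours such that every subset of order type
% \lambda receives all m colours (a "rainbow" set).
\kappa \nrightarrow [\lambda]^{n}_{m}
```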
Infinitary combinatorics
[ "Mathematics" ]
701
[ "Discrete mathematics", "Mathematical logic", "Set theory", "Combinatorics" ]
13,268,473
https://en.wikipedia.org/wiki/HAZMAT%20Class%205%20Oxidizing%20agents%20and%20organic%20peroxides
An oxidizer is a chemical that readily yields oxygen in reactions, thereby causing or enhancing combustion. Divisions Division 5.1: Oxidizers An oxidizer is a material that may, generally by yielding oxygen, cause or enhance the combustion of other materials. A solid material is classed as a Division 5.1 material if, when tested in accordance with the UN Manual of Tests and Criteria, its mean burning time is less than or equal to the burning time of a 3:7 potassium bromate/cellulose mixture. A liquid material is classed as a Division 5.1 material if, when tested in accordance with the UN Manual of Tests and Criteria, it spontaneously ignites or its mean time for a pressure rise from 690 kPa to 2070 kPa gauge is less than the time of a 1:1 nitric acid (65 percent)/cellulose mixture. Division 5.2: Organic Peroxides An organic peroxide is any organic compound containing oxygen (O) in the bivalent -O-O- structure and which may be considered a derivative of hydrogen peroxide, where one or more of the hydrogen atoms have been replaced by organic radicals, unless any of the following paragraphs applies: The material meets the definition of an explosive as prescribed in subpart C of this part, in which case it must be classed as an explosive (applies to acetone peroxide, for example) The material is forbidden from being offered for transportation according to 49CFR 172.101 of this subchapter or 49CFR 173.21; The Associate Administrator for Hazardous Materials Safety has determined that the material does not present a hazard which is associated with a Division 5.2 material; or The material meets one of the following conditions: For materials containing no more than 1.0 percent hydrogen peroxide, the available oxygen, as calculated using the equation in paragraph (a)(4)(ii) of this section, is not more than 1.0 percent, or For materials containing more than 1.0 percent but not more than 7.0 percent hydrogen peroxide, the available oxygen content (Oa) is not more than 0.5 percent, when determined using the equation: O_a = 16 \sum_{i=1}^{k} \frac{n_i c_i}{m_i} where, for a material containing k species of organic peroxides: n_i = number of -O-O- groups per molecule of species i; c_i = concentration (mass percent) of species i; m_i = molecular mass of species i. Placards Prior to 2007, the placard for 'Organic Peroxide' (5.2) was entirely yellow, like placard 5.1. Compatibility Table Packing Groups References 49 CFR 173.127(a) 49 CFR 173.128(a) Hazardous materials
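The available-oxygen formula above can be evaluated directly. The sketch below is a minimal illustration with made-up composition data; the peroxide species, their -O-O- counts, concentrations and molecular masses are assumptions for demonstration, not values taken from the regulation.

```python
# Available oxygen Oa = 16 * sum(n_i * c_i / m_i) over the k peroxide species,
# where n_i = number of -O-O- groups per molecule, c_i = mass percent,
# m_i = molecular mass. The entries below are hypothetical examples.
species = [
    # (name, n_OO_groups, concentration_mass_percent, molecular_mass_g_per_mol)
    ("peroxide A", 1, 4.0, 146.0),
    ("peroxide B", 2, 1.5, 210.0),
]

oa = 16 * sum(n * c / m for _, n, c, m in species)
print(f"available oxygen Oa = {oa:.2f} %")
print("within the 0.5 percent limit cited above:", oa <= 0.5)
```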
HAZMAT Class 5 Oxidizing agents and organic peroxides
[ "Physics", "Chemistry", "Technology" ]
556
[ "Materials", "Hazardous materials", "Matter" ]
13,271,310
https://en.wikipedia.org/wiki/NAD%2B%20kinase
NAD+ kinase (EC 2.7.1.23, NADK) is an enzyme that converts nicotinamide adenine dinucleotide (NAD+) into NADP+ through phosphorylating the NAD+ coenzyme. NADP+ is an essential coenzyme that is reduced to NADPH primarily by the pentose phosphate pathway to provide reducing power in biosynthetic processes such as fatty acid biosynthesis and nucleotide synthesis. The structure of the NADK from the archaeon Archaeoglobus fulgidus has been determined. In humans, the genes NADK and MNADK encode NAD+ kinases localized in cytosol and mitochondria, respectively. Similarly, yeast have both cytosolic and mitochondrial isoforms, and the yeast mitochondrial isoform accepts both NAD+ and NADH as substrates for phosphorylation. Reaction The reaction catalyzed by NADK is ATP + NAD+ ⇌ ADP + NADP+. Mechanism NADK phosphorylates NAD+ at the 2' position of the ribose ring that carries the adenine moiety. It is highly selective for its substrates, NAD and ATP, and does not tolerate modifications either to the phosphoryl acceptor, NAD, or the pyridine moiety of the phosphoryl donor, ATP. NADK also uses metal ions to coordinate the ATP in the active site. In vitro studies with various divalent metal ions have shown that zinc and manganese are preferred over magnesium, while copper and nickel are not accepted by the enzyme at all. A proposed mechanism involves the 2' alcohol oxygen acting as a nucleophile to attack the gamma-phosphoryl of ATP, releasing ADP. Regulation NADK is highly regulated by the redox state of the cell. Whereas NAD is predominantly found in its oxidized state NAD+, the phosphorylated NADP is largely present in its reduced form, as NADPH. Thus, NADK can modulate responses to oxidative stress by controlling NADP synthesis. Bacterial NADK has been shown to be inhibited allosterically by both NADPH and NADH. NADK is also reportedly stimulated by calcium/calmodulin binding in certain cell types, such as neutrophils. NAD kinases in plants and sea urchin eggs have also been found to bind calmodulin. Clinical significance Due to the essential role of NADPH in lipid and DNA biosynthesis and the hyperproliferative nature of most cancers, NADK is an attractive target for cancer therapy. Furthermore, NADPH is required for the antioxidant activities of thioredoxin reductase and glutaredoxin. Thionicotinamide and other nicotinamide analogs are potential inhibitors of NADK, and studies show that treatment of colon cancer cells with thionicotinamide suppresses the cytosolic NADPH pool to increase oxidative stress and synergizes with chemotherapy. While the role of NADK in increasing the NADPH pool appears to offer protection against apoptosis, there are also cases where NADK activity appears to potentiate cell death. Genetic studies done in human haploid cell lines indicate that knocking out NADK may protect from certain non-apoptotic stimuli. See also Oxidative phosphorylation Electron transport chain Metabolism References Further reading External links ENZYME entry on EC 2.7.1.23 BRENDA entry on EC 2.7.1.23 PDBe-KB provides an overview of all the structure information available in the PDB for Human NAD kinase EC 2.7.1 Cellular respiration Metabolism
NAD+ kinase
[ "Chemistry", "Biology" ]
775
[ "Cellular processes", "Cellular respiration", "Biochemistry", "Metabolism" ]
13,271,682
https://en.wikipedia.org/wiki/GPS/INS
GPS/INS is the use of GPS satellite signals to correct or calibrate a solution from an inertial navigation system (INS). The method is applicable for any GNSS/INS system. Overview GPS/INS method The GPS gives an absolute drift-free position value that can be used to reset the INS solution or can be blended with it by use of a mathematical algorithm, such as a Kalman filter. The angular orientation of the unit can be inferred from the series of position updates from the GPS. The change in the error in position relative to the GPS can be used to estimate the unknown angle error. The benefits of using GPS with an INS are that the INS may be calibrated by the GPS signals and that the INS can provide position and angle updates at a quicker rate than GPS. For high dynamic vehicles, such as missiles and aircraft, INS fills in the gaps between GPS positions. Additionally, GPS may lose its signal and the INS can continue to compute the position and angle during the period of lost GPS signal. The two systems are complementary and are often employed together. Applications GPS/INS is commonly used on aircraft for navigation purposes. Using GPS/INS allows for smoother position and velocity estimates that can be provided at a sampling rate faster than the GPS receiver. This also allows for accurate estimation of the aircraft attitude (roll, pitch, and yaw) angles. In general, GPS/INS sensor fusion is a nonlinear filtering problem, which is commonly approached using the extended Kalman filter (EKF) or the unscented Kalman filter (UKF). The use of these two filters for GPS/INS has been compared in various sources, including a detailed sensitivity analysis. The EKF uses an analytical linearization approach using Jacobian matrices to linearize the system, while the UKF uses a statistical linearization approach called the unscented transform which uses a set of deterministically selected points to handle the nonlinearity. The UKF requires the calculation of a matrix square root of the state error covariance matrix, which is used to determine the spread of the sigma points for the unscented transform. There are various ways to calculate the matrix square root, which have been presented and compared within GPS/INS application. From this work it is recommended to use the Cholesky decomposition method. In addition to aircraft applications, GPS/INS has also been studied for automobile applications such as autonomous navigation, vehicle dynamics control, or sideslip, roll, and tire cornering stiffness estimation. See also GNSS Augmentation References US Patent No. 6900760 Navigation Aerospace engineering Inertial navigation
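As a concrete illustration of the blending described above, the following sketch implements a minimal one-dimensional, loosely coupled GPS/INS filter: the INS state (position and velocity) is propagated at a high rate from accelerometer readings, and a Kalman update is applied whenever a lower-rate GPS position fix arrives. All numbers (update rates, noise levels, the simple truth model) are illustrative assumptions, not values from any particular system.

```python
import numpy as np

dt_ins, gps_every = 0.01, 100            # 100 Hz INS, 1 Hz GPS (assumed rates)
F = np.array([[1, dt_ins], [0, 1]])      # constant-velocity state transition
B = np.array([0.5 * dt_ins**2, dt_ins])  # how measured acceleration enters the state
H = np.array([[1.0, 0.0]])               # GPS measures position only
Q = np.diag([1e-4, 1e-3])                # process noise (accelerometer errors, drift)
R = np.array([[4.0]])                    # GPS position variance, m^2

x = np.array([0.0, 0.0])                 # [position, velocity] estimate
P = np.eye(2)

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0            # simple constant-velocity truth for the demo

for k in range(1000):
    accel_meas = 0.0 + rng.normal(0, 0.05)        # noisy accelerometer reading
    true_pos += true_vel * dt_ins

    # INS propagation (prediction step, high rate)
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q

    # GPS correction (update step, low rate)
    if k % gps_every == 0:
        z = np.array([true_pos + rng.normal(0, 2.0)])   # noisy GPS position fix
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P

print(f"estimate: pos {x[0]:.2f} m, vel {x[1]:.2f} m/s (truth: {true_pos:.2f} m, {true_vel} m/s)")
```

The GPS fixes correct the position and velocity errors that accumulate in the pure INS propagation, while the INS fills in smooth, high-rate estimates between fixes, which is exactly the complementarity described above.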
GPS/INS
[ "Engineering" ]
540
[ "Aerospace engineering" ]
13,276,879
https://en.wikipedia.org/wiki/Racetrack%20memory
Racetrack memory or domain-wall memory (DWM) is an experimental non-volatile memory device under development at IBM's Almaden Research Center by a team led by physicist Stuart Parkin. It is a current topic of active research at the Max Planck Institute of Microstructure Physics in Dr. Parkin's group. In early 2008, a 3-bit version was successfully demonstrated. If it were to be developed successfully, racetrack memory would offer storage density higher than comparable solid-state memory devices like flash memory. Description Racetrack memory uses a spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire about 200 nm across and 100 nm thick. As current is passed through the wire, the domains pass by magnetic read/write heads positioned near the wire, which alter the domains to record patterns of bits. A racetrack memory device is made up of many such wires and read/write elements. In general operational concept, racetrack memory is similar to the earlier bubble memory of the 1960s and 1970s. Delay-line memory, such as mercury delay lines of the 1940s and 1950s, are a still-earlier form of similar technology, as used in the UNIVAC and EDSAC computers. Like bubble memory, racetrack memory uses electrical currents to "push" a sequence of magnetic domains through a substrate and past read/write elements. Improvements in magnetic detection capabilities, based on the development of spintronic magnetoresistive sensors, allow the use of much smaller magnetic domains to provide far higher bit densities. In production, it was expected that the wires could be scaled down to around 50 nm. There were two arrangements considered for racetrack memory. The simplest was a series of flat wires arranged in a grid with read and write heads arranged nearby. A more widely studied arrangement used U-shaped wires arranged vertically over a grid of read/write heads on an underlying substrate. This would allow the wires to be much longer without increasing its 2D area, although the need to move individual domains further along the wires before they reach the read/write heads results in slower random access times. Both arrangements offered about the same throughput performance. The primary concern in terms of construction was practical; whether or not the three dimensional vertical arrangement would be feasible to mass-produce. Comparison to other memory devices Projections in 2008 suggested that racetrack memory would offer performance on the order of 20-32 ns to read or write a random bit. This compared to about 10,000,000 ns for a hard drive, or 20-30 ns for conventional DRAM. The primary authors discussed ways to improve the access times with the use of a "reservoir" to about 9.5 ns. Aggregate throughput, with or without the reservoir, would be on the order of 250-670 Mbit/s for racetrack memory, compared to 12800 Mbit/s for a single DDR3 DRAM, 1000 Mbit/s for high-performance hard drives, and 1000 to 4000 Mbit/s for flash memory devices. The only current technology that offered a clear latency benefit over racetrack memory was SRAM, on the order of 0.2 ns, but at a higher cost. Larger feature size "F" of about 45 nm (as of 2011) with a cell area of about 140 F2. Racetrack memory is one among several emerging technologies that aim to replace conventional memories such as DRAM and Flash, and potentially offer a universal memory device applicable to a wide variety of roles. 
Other contenders included magnetoresistive random-access memory (MRAM), phase-change memory (PCRAM) and ferroelectric RAM (FeRAM). Most of these technologies offer densities similar to flash memory, in most cases worse, and their primary advantage is the lack of write-endurance limits like those in flash memory. Field-MRAM offers excellent performance as high as 3 ns access time, but requires a large 25-40 F² cell size. It might see use as an SRAM replacement, but not as a mass storage device. The highest densities from any of these devices is offered by PCRAM, with a cell size of about 5.8 F², similar to flash memory, as well as fairly good performance around 50 ns. Nevertheless, none of these can come close to competing with racetrack memory in overall terms, especially density. For example, 50 ns allows about five bits to be operated in a racetrack memory device, resulting in an effective cell size of 20/5=4 F², easily exceeding the performance-density product of PCM. On the other hand, without sacrificing bit density, the same 20 F² area could fit 2.5 2-bit 8 F² alternative memory cells (such as resistive RAM (RRAM) or spin-torque transfer MRAM), each of which individually operating much faster (~10 ns). In most cases, memory devices store one bit in any given location, so they are typically compared in terms of "cell size", a cell storing one bit. Cell size itself is given in units of F², where "F" is the feature size design rule, representing usually the metal line width. Flash and racetrack both store multiple bits per cell, but the comparison can still be made. For instance, hard drives appeared to be reaching theoretical limits around 650 nm²/bit, defined primarily by the capability to read and write to specific areas of the magnetic surface. DRAM has a cell size of about 6 F², SRAM is much less dense at 120 F². NAND flash memory is currently the densest form of non-volatile memory in widespread use, with a cell size of about 4.5 F², but storing three bits per cell for an effective size of 1.5 F². NOR flash memory is slightly less dense, at an effective 4.75 F², accounting for 2-bit operation on a 9.5 F² cell size. In the vertical orientation (U-shaped) racetrack, nearly 10-20 bits are stored per cell, which itself would have a physical size of at least about 20 F². In addition, bits at different positions on the "track" would take different times (from ~10 to ~1000 ns, or 10 ns/bit) to be accessed by the read/write sensor, because the "track" would move the domains at a fixed rate of ~100 m/s past the read/write sensor. Development challenges One limitation of the early experimental devices was that the magnetic domains could be pushed only slowly through the wires, requiring current pulses on the orders of microseconds to move them successfully. This was unexpected, and led to performance equal roughly to that of hard drives, as much as 1000 times slower than predicted. Recent research has traced this problem to microscopic imperfections in the crystal structure of the wires which led to the domains becoming "stuck" at these imperfections. Using an X-ray microscope to directly image the boundaries between the domains, their research found that domain walls would be moved by pulses as short as a few nanoseconds when these imperfections were absent. This corresponds to a macroscopic performance of about 110 m/s. The voltage required to drive the domains along the racetrack would be proportional to the length of the wire. 
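The density and latency trade-off sketched above follows from simple arithmetic. The snippet below is purely illustrative: the bit count and domain spacing are assumptions chosen to be consistent with the ranges quoted in the text (10-20 bits in a roughly 20 F² footprint, about 10 ns per bit at a wall velocity near 100 m/s), not device specifications.

```python
footprint_F2 = 20          # physical footprint of a vertical racetrack cell, in F^2
bits_per_track = 10        # assumed, within the 10-20 range quoted above
domain_pitch_nm = 1000     # assumed spacing between stored domains along the wire
wall_velocity_m_s = 100    # domain-wall velocity of the order quoted in the text

effective_cell_F2 = footprint_F2 / bits_per_track
time_per_bit_ns = (domain_pitch_nm * 1e-9) / wall_velocity_m_s * 1e9
worst_case_access_ns = time_per_bit_ns * bits_per_track

print(f"effective cell size: {effective_cell_F2:.1f} F^2 per bit")
print(f"shift time per bit: {time_per_bit_ns:.0f} ns, worst-case access: {worst_case_access_ns:.0f} ns")
```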
The current density must be sufficiently high to push the domain walls (as in electromigration). A difficulty for racetrack technology arises from the need for high current density (>10^8 A/cm²); a 30 nm × 100 nm cross-section would require >3 mA. The resulting power draw becomes higher than that required for other memories, e.g., spin-transfer torque memory (STT-RAM) or flash memory. Another challenge associated with racetrack memory is the stochastic manner in which the domain walls move, i.e., they move and stop at random positions. There have been attempts to overcome this challenge by producing notches at the edges of the nanowire. Researchers have also proposed staggered nanowires to pin the domain walls precisely. Experimental investigations have shown the effectiveness of staggered domain wall memory. Recently researchers have proposed non-geometrical approaches such as local modulation of magnetic properties through composition modification. Techniques such as annealing-induced diffusion and ion implantation are used. See also Giant magnetoresistance (GMR) effect Magnetoresistive random-access memory (MRAM) Spintronics Spin transistor References External links Redefining the Architecture of Memory IBM Moves Closer to New Class of Memory (YouTube video) IBM Racetrack Memory Project Computer memory Non-volatile memory IBM storage devices Spintronics
Racetrack memory
[ "Physics", "Materials_science" ]
1,757
[ "Spintronics", "Condensed matter physics" ]
13,276,958
https://en.wikipedia.org/wiki/Initial%20value%20formulation%20%28general%20relativity%29
The initial value formulation of general relativity is a reformulation of Albert Einstein's theory of general relativity that describes a universe evolving over time. Each solution of the Einstein field equations encompasses the whole history of a universe – it is not just some snapshot of how things are, but a whole spacetime: a statement encompassing the state of matter and geometry everywhere and at every moment in that particular universe. By this token, Einstein's theory appears to be different from most other physical theories, which specify evolution equations for physical systems; if the system is in a given state at some given moment, the laws of physics allow you to extrapolate its past or future. For Einstein's equations, there appear to be subtle differences compared with other fields: they are self-interacting (that is, non-linear even in the absence of other fields); they are diffeomorphism invariant, so to obtain a unique solution, a fixed background metric and gauge conditions need to be introduced; finally, the metric determines the spacetime structure, and thus the domain of dependence for any set of initial data, so the region on which a specific solution will be defined is not, a priori, defined. There is, however, a way to re-formulate Einstein's equations that overcomes these problems. First of all, there are ways of rewriting spacetime as the evolution of "space" in time; an earlier version of this is due to Paul Dirac, while a simpler way is known after its inventors Richard Arnowitt, Stanley Deser and Charles Misner as ADM formalism. In these formulations, also known as "3+1" approaches, spacetime is split into a three-dimensional hypersurface with interior metric and an embedding into spacetime with exterior curvature; these two quantities are the dynamical variables in a Hamiltonian formulation tracing the hypersurface's evolution over time. With such a split, it is possible to state the initial value formulation of general relativity. It involves initial data which cannot be specified arbitrarily but needs to satisfy specific constraint equations, and which is defined on some suitably smooth three-manifold ; just as for other differential equations, it is then possible to prove existence and uniqueness theorems, namely that there exists a unique spacetime which is a solution of Einstein equations, which is globally hyperbolic, for which is a Cauchy surface (i.e. all past events influence what happens on , and all future events are influenced by what happens on it), and has the specified internal metric and extrinsic curvature; all spacetimes that satisfy these conditions are related by isometries. The initial value formulation with its 3+1 split is the basis of numerical relativity; attempts to simulate the evolution of relativistic spacetimes (notably merging black holes or gravitational collapse) using computers. However, there are significant differences to the simulation of other physical evolution equations which make numerical relativity especially challenging, notably the fact that the dynamical objects that are evolving include space and time itself (so there is no fixed background against which to evaluate, for instance, perturbations representing gravitational waves) and the occurrence of singularities (which, when they are allowed to occur within the simulated portion of spacetime, lead to arbitrarily large numbers that would have to be represented in the computer model). See also ADM formalism Notes References Kalvakota, Vaibhav R. (July 1, 2021). 
"A brief account of the Cauchy problem in General Relativity". General relativity
Initial value formulation (general relativity)
[ "Physics" ]
737
[ "General relativity", "Theory of relativity" ]
13,277,538
https://en.wikipedia.org/wiki/Thermal%20conductivity%20detector
The thermal conductivity detector (TCD), also known as a katharometer, is a bulk property detector and a chemical specific detector commonly used in gas chromatography. This detector senses changes in the thermal conductivity of the column eluent and compares it to a reference flow of carrier gas. Since most compounds have a thermal conductivity much less than that of the common carrier gases of helium or hydrogen, when an analyte elutes from the column the effluent thermal conductivity is reduced, and a detectable signal is produced. Operation The TCD consists of an electrically heated filament in a temperature-controlled cell. Under normal conditions there is a stable heat flow from the filament to the detector body. When an analyte elutes and the thermal conductivity of the column effluent is reduced, the filament heats up and changes resistance. This resistance change is often sensed by a Wheatstone bridge circuit which produces a measurable voltage change. The column effluent flows over one of the resistors while the reference flow is over a second resistor in the four-resistor circuit. A schematic of a classic thermal conductivity detector design utilizing a Wheatstone bridge circuit is shown. The reference flow across resistor 4 of the circuit compensates for drift due to flow or temperature fluctuations. Changes in the thermal conductivity of the column effluent flow across resistor 3 will result in a temperature change of the resistor and therefore a resistance change which can be measured as a signal. Since all compounds, organic and inorganic, have a thermal conductivity different from helium or hydrogen, virtually all compounds can be detected. That's why the TCD is often called a universal detector. Used after a separation column (in a chromatograph), a TCD measures the concentrations of each compound contained in the sample. Indeed, the TCD signal changes when a compound passes through it, shaping a peak on a baseline. The peak position on the baseline reflects the compound type. The peak area (computed by integrating the TCD signal over time) is representative of the compound concentration. A sample whose compounds concentrations are known is used to calibrate the TCD: concentrations are affected to peak areas through a calibration curve. The TCD is a good general purpose detector for initial investigations with an unknown sample compared to the FID that will react only to combustible compounds (Ex: hydrocarbons). Moreover, the TCD is a non-specific and non-destructive technique. The TCD is also used in the analysis of permanent gases (argon, oxygen, nitrogen, carbon dioxide) because it responds to all these substances unlike the FID which cannot detect compounds which do not contain carbon-hydrogen bonds. Considering detection limit, both TCD and FID reach low concentration levels (inferior to ppm or ppb). Both of them require pressurized carrier gas (Typically: H2 for FID, He for TCD) but due to the risk associated with storing H2 (high flammability, see Hydrogen safety), TCD with He should be considered in locations where safety is crucial. Considerations One thing to be aware of when operating a TCD is that gas flow must never be interrupted when the filament is hot, as doing so may cause the filament to burn out. While the filament of a TCD is generally chemically passivated to prevent it from reacting with oxygen, the passivation layer can be attacked by halogenated compounds, so these should be avoided wherever possible. 
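The bridge measurement described above can be illustrated with a short calculation: the output is the difference between the two voltage-divider branches, so a small change in the resistance of the sample-side filament produces a proportional signal. The supply voltage, nominal resistance and 1 percent change below are assumptions for illustration and do not correspond to any specific instrument; the resistor numbering follows the schematic described above (resistor 3 in the column effluent, resistor 4 in the reference flow).

```python
def bridge_output(v_supply, r1, r2, r3, r4):
    """Wheatstone bridge output: difference between the two half-bridge divider
    voltages (r3 = sample-side filament, r4 = reference-side filament)."""
    return v_supply * (r3 / (r3 + r4) - r2 / (r1 + r2))

v_s = 5.0        # supply voltage, V (assumed)
r_nom = 50.0     # nominal filament resistance, ohms (assumed)

balanced = bridge_output(v_s, r_nom, r_nom, r_nom, r_nom)
# An analyte elutes: the sample filament runs hotter and its resistance rises by 1 %
eluting = bridge_output(v_s, r_nom, r_nom, r_nom * 1.01, r_nom)

print(f"balanced output: {balanced * 1e3:.2f} mV")
print(f"output with a 1 % filament resistance change: {eluting * 1e3:.2f} mV")
```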
If analyzing for hydrogen, the peak will appear as negative when helium is used as the reference gas. This problem can be avoided if another reference gas is used, for example argon or nitrogen, although this will significantly reduce the detector's sensitivity towards any compounds other than hydrogen. Process description The detector functions by having two parallel tubes, both containing gas and heating coils. The gases are examined by comparing the rate of loss of heat from the heating coils into the gas. The coils are arranged in a bridge circuit so that resistance changes due to unequal cooling can be measured. One channel normally holds a reference gas and the mixture to be tested is passed through the other channel. Applications Katharometers are used medically in lung function testing equipment and in gas chromatography. The results are slower to obtain compared to a mass spectrometer, but the device is inexpensive, and has good accuracy when the gases in question are known and it is only the proportion that must be determined. Monitoring of hydrogen purity in hydrogen-cooled turbogenerators. Detection of helium loss from the helium vessel of an MRI superconducting magnet. Also used within the brewing industry to quantify the amount of carbon dioxide within beer samples. Used within the energy industry to quantify the amount (calorific value) of methane within biogas samples. Used within the food and drink industry to quantify and/or validate food packaging gases. Used within the oil and gas industry to quantify the percentage of hydrocarbons when drilling into a formation. References Gas chromatography Measuring instruments
Thermal conductivity detector
[ "Chemistry", "Technology", "Engineering" ]
1,060
[ "Chromatography", "Gas chromatography", "Measuring instruments" ]
9,286,935
https://en.wikipedia.org/wiki/Head%20%28vessel%29
A head is one of the end caps on a cylindrically shaped pressure vessel. Principle Vessel dished ends are mostly used in storage or pressure vessels in industry. These ends, which in upright vessels are the bottom and the top, use less space than a hemisphere (which is the ideal form for pressure containments) while requiring only a slightly thicker wall. Manufacturing The manufacturing of such an end is easier than that of a hemisphere. The starting material is first pressed to a radius r1 and then curled at the edge creating the second radius r2. Vessel dished ends can also be welded together from smaller pieces. Shapes The shape of the heads used can vary. The most common head shapes are: Hemispherical head A sphere is the ideal shape for a head, because the stresses are distributed evenly through the material of the head. The radius (r) of the head equals the radius of the cylindrical part of the vessel. Ellipsoidal head This is also called an elliptical head. The shape of this head is more economical, because the height of the head is just a fraction of the diameter. Its radius varies between the major and minor axis; usually the ratio is 2:1. Semi–Ellipsoidal Dished Heads 2:1 Semi-Ellipsoidal dished heads are deeper and stronger than the more popular torispherical dished heads. The greater depth results in the head being more difficult to form, and this makes them more expensive to manufacture. However, the cost is offset by a potential reduction in the specified thickness due to the dished head having greater overall strength and resistance to pressure. Torispherical head (or flanged and dished head) These heads have a dish with a fixed radius (r1), the size of which depends on the type of torispherical head. The transition between the cylinder and the dish is called the knuckle. The knuckle has a toroidal shape. The most common types of torispherical heads are: ASME F&D head Commonly used for ASME pressure vessels, these torispherical heads have a crown radius equal to the outside diameter of the head (), and a knuckle radius equal to 6% of the outside diameter (). The ASME design code does not allow the knuckle radius to be any less than 6% of the outside diameter. Klöpper head This is a torispherical head. The dish has a radius that equals the diameter of the cylinder it is attached to (). The knuckle has a radius that equals a tenth of the diameter of the cylinder (), hence its alternative designation "decimal head". Also other sizes are: ,(page13) rest of height () . Korbbogen head This is a torispherical head also named Semi ellipsoidal head (According to DIN 28013). The radius of the dish is 80% of the diameter of the cylinder (). The radius of the knuckle is (). Also other sizes are , rest of height () . This shape finds its origin in architecture; see Korbbogen, architectural information. 80-10 head These heads have a crown radius of 80% of outside diameter, and a knuckle radius of 10% of outside diameter. Flat head This is a head consisting of a toroidal knuckle connecting to a flat plate. This type of head is typically used for the bottom of cookware. Diffuser head This type of head is often found on the bottom of aerosol spray cans. It is an inverted torispherical head. Conical head This is a cone-shaped head. Heat treatment Heat treatment may be required after cold forming, but not for heads formed by hot forming. References Pressure vessels
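The head geometries above are fixed by two radii, so basic dimensions follow from geometry. The sketch below computes the dish depth of a torispherical head from the crown radius r1 and knuckle radius r2, using the ratios stated above for the Klöpper and ASME F&D types; it is a geometric illustration only, and ignores the straight flange length and the wall thickness.

```python
import math

def dish_depth(diameter, crown_radius, knuckle_radius):
    """Depth of a torispherical dish from the tangent line to the crown apex,
    derived from the tangency of the crown sphere and the knuckle torus."""
    a = diameter / 2 - knuckle_radius          # axis offset of the knuckle centre
    return crown_radius - math.sqrt((crown_radius - knuckle_radius) ** 2 - a ** 2)

D = 2.0  # head diameter in metres (example value)

# Klöpper: crown radius = D, knuckle radius = 0.1 * D
print(f"Kloepper dish depth: {dish_depth(D, 1.0 * D, 0.10 * D):.3f} m (about 0.194 D)")
# ASME F&D: crown radius equals the outside diameter, knuckle radius = 0.06 * OD
print(f"ASME F&D dish depth: {dish_depth(D, 1.0 * D, 0.06 * D):.3f} m")
```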
Head (vessel)
[ "Physics", "Chemistry", "Engineering" ]
777
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
5,473,402
https://en.wikipedia.org/wiki/Pair%20distribution%20function
The pair distribution function describes the distribution of distances between pairs of particles contained within a given volume. Mathematically, if a and b are two particles, the pair distribution function of b with respect to a, denoted by is the probability of finding the particle b at distance from a, with a taken as the origin of coordinates. Overview The pair distribution function is used to describe the distribution of objects within a medium (for example, oranges in a crate or nitrogen molecules in a gas cylinder). If the medium is homogeneous (i.e. every spatial location has identical properties), then there is an equal probability density for finding an object at any position : , where is the volume of the container. On the other hand, the likelihood of finding pairs of objects at given positions (i.e. the two-body probability density) is not uniform. For example, pairs of hard balls must be separated by at least the diameter of a ball. The pair distribution function is obtained by scaling the two-body probability density function by the total number of objects and the size of the container: . In the common case where the number of objects in the container is large, this simplifies to give: . Simple models and general properties The simplest possible pair distribution function assumes that all object locations are mutually independent, giving: , where is the separation between a pair of objects. However, this is inaccurate in the case of hard objects as discussed above, because it does not account for the minimum separation required between objects. The hole-correction (HC) approximation provides a better model: where is the diameter of one of the objects. Although the HC approximation gives a reasonable description of sparsely packed objects, it breaks down for dense packing. This may be illustrated by considering a box completely filled by identical hard balls so that each ball touches its neighbours. In this case, every pair of balls in the box is separated by a distance of exactly where is a positive whole number. The pair distribution for a volume completely filled by hard spheres is therefore a set of Dirac delta functions of the form: . Finally, it may be noted that a pair of objects which are separated by a large distance have no influence on each other's position (provided that the container is not completely filled). Therefore, . In general, a pair distribution function will take a form somewhere between the sparsely packed (HC approximation) and the densely packed (delta function) models, depending on the packing density . Radial distribution function Of special practical importance is the radial distribution function, which is independent of orientation. It is a major descriptor for the atomic structure of amorphous materials (glasses, polymers) and liquids. The radial distribution function can be calculated directly from physical measurements like light scattering or x-ray powder diffraction by performing a Fourier Transform. In Statistical Mechanics the PDF is given by the expression Applications Thin Film Pair Distribution Function When thin films are disordered, as they are in electronic devices, pair distribution is used to view the strain and structure-properties of that material or composition. They have these properties that cannot be exploited in the bulk or crystalline form. There is a method with the radial distribution that is able to view the local structure of a disordered thin film of GeSe2. 
But the creators of this method identified the need for a better method to view the mid-range order of disordered films. The thin-film pair distribution function (tfPDF) provides a statistical description of a material's mid-range order, making details such as disorder visible. In this technique, 2D data from a scattering measurement are integrated and Fourier transformed into 1D data that show the probability of finding pairs of atoms at given distances in the material. tfPDF works best when used in conjunction with other characterization methods such as transmission electron microscopy. Although still a developing methodology, tfPDF can provide complete structure-property relationships through a reliable characterization technique. See also classical-map hypernetted-chain method References Fischer-Colbrie, Bienenstock, Fuoss, Marcus. Phys. Rev. B (1988) 38, 12388 Jensen, K. M., Billinge, S. J. (2015). IUCrJ, 2(5), 481-489. Statistical mechanics Condensed matter physics
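The radial distribution function discussed above is, in practice, often estimated directly from particle coordinates by histogramming pair distances and normalising by the ideal-gas expectation. The sketch below does this for a random, uncorrelated configuration in a periodic cubic box, for which g(r) should come out close to 1; the particle count and box size are arbitrary demonstration values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 500, 10.0                           # particles and cubic box edge (arbitrary)
pos = rng.uniform(0, L, size=(N, 3))

# Minimum-image pair distances in a periodic box
diff = pos[:, None, :] - pos[None, :, :]
diff -= L * np.round(diff / L)
dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(N, k=1)]

r_max, nbins = L / 2, 50
counts, edges = np.histogram(dist, bins=nbins, range=(0, r_max))
r = 0.5 * (edges[1:] + edges[:-1])
shell_vol = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
density = N / L ** 3

# Normalise: observed unique pairs per shell / expected pairs for an ideal gas
g_r = counts / (shell_vol * density * N / 2)

print("g(r) in a few shells (should be close to 1 for an uncorrelated configuration):")
print(np.round(g_r[5:10], 2))
```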
Pair distribution function
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
867
[ "Phases of matter", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
5,474,018
https://en.wikipedia.org/wiki/Auxiliary-field%20Monte%20Carlo
Auxiliary-field Monte Carlo is a method that allows the calculation, by use of Monte Carlo techniques, of averages of operators in many-body quantum mechanical (Blankenbecler 1981, Ceperley 1977) or classical problems (Baeurle 2004, Baeurle 2003, Baeurle 2002a). Reweighting procedure and numerical sign problem The distinctive ingredient of "auxiliary-field Monte Carlo" is the fact that the interactions are decoupled by means of the application of the Hubbard–Stratonovich transformation, which permits the reformulation of many-body theory in terms of a scalar auxiliary-field representation. This reduces the many-body problem to the calculation of a sum or integral over all possible auxiliary-field configurations. In this sense, there is a trade-off: instead of dealing with one very complicated many-body problem, one faces the calculation of an infinite number of simple external-field problems. It is here, as in other related methods, that Monte Carlo enters the game in the guise of importance sampling: the large sum over auxiliary-field configurations is performed by sampling over the most important ones, with a certain probability. In classical statistical physics, this probability is usually given by the (positive semi-definite) Boltzmann factor. Similar factors arise also in quantum field theories; however, these can have indefinite sign (especially in the case of Fermions) or even be complex-valued, which precludes their direct interpretation as probabilities. In these cases, one has to resort to a reweighting procedure (i.e., interpret the absolute value as probability and multiply the sign or phase to the observable) to get a strictly positive reference distribution suitable for Monte Carlo sampling. However, it is well known that, in specific parameter ranges of the model under consideration, the oscillatory nature of the weight function can lead to a bad statistical convergence of the numerical integration procedure. The problem is known as the numerical sign problem and can be alleviated with analytical and numerical convergence acceleration procedures (Baeurle 2002, Baeurle 2003a). See also Quantum Monte Carlo References Implementations ALF QUEST QMCPACK External links Theory and Computation of Advanced Materials and Sensors Group Quantum mechanics Monte Carlo methods Quantum Monte Carlo
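The reweighting procedure described above can be illustrated with a toy scalar example: when the weight w(x) changes sign, an observable is estimated as a ratio of sign-weighted averages taken with respect to |w(x)|. The integrand and observable below are arbitrary demonstration choices with no connection to any specific auxiliary-field model, and the average sign printed at the end shows how the estimator degrades as cancellations grow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy oscillatory weight (it changes sign) and an observable on the "field" x
def w(x):
    return np.exp(-x**2 / 2) * np.cos(2 * x)

def obs(x):
    return x**2

# Sample x with probability proportional to |w(x)| by rejection sampling
samples = []
while len(samples) < 200_000:
    prop = rng.uniform(-5, 5, 10_000)
    accept = rng.uniform(0, 1, 10_000) < np.abs(w(prop))   # |w| <= 1 on this range
    samples.extend(prop[accept])
x = np.array(samples[:200_000])

# Reweighted estimator: <O> = <O * sign>_{|w|} / <sign>_{|w|}
sign = np.sign(w(x))
estimate = np.mean(obs(x) * sign) / np.mean(sign)

# Deterministic quadrature reference for comparison (note it need not be positive
# once the weights are allowed to change sign)
t = np.linspace(-5, 5, 20_001)
exact = np.trapz(obs(t) * w(t), t) / np.trapz(w(t), t)

print(f"average sign: {np.mean(sign):.3f} (values near zero signal a severe sign problem)")
print(f"reweighted estimate: {estimate:.3f}, quadrature reference: {exact:.3f}")
```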
Auxiliary-field Monte Carlo
[ "Physics", "Chemistry" ]
464
[ "Monte Carlo methods", "Quantum chemistry", "Quantum Monte Carlo", "Computational physics" ]
5,477,402
https://en.wikipedia.org/wiki/DCMU
DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) is an algicide and herbicide of the aryl urea class that inhibits photosynthesis. It was introduced by Bayer in 1954 under the trade name of Diuron. History In 1952, chemists at E. I. du Pont de Nemours and Company patented a series of aryl urea derivatives as herbicides. Several compounds covered by this patent were commercialized as herbicides: chlortoluron (3-chloro-4-methylphenyl) and DCMU, the (3,4-dichlorophenyl) example. Subsequently, over thirty related urea analogs with the same mechanism of action reached the market worldwide. Synthesis As described in the du Pont patent, the starting material is 3,4-dichloroaniline, which is treated with phosgene to form a isocyanate derivative. This is subsequently reacted with dimethylamine to give the final product. Aryl-NH2 + COCl2 → Aryl-NCO Aryl-NCO + NH(CH3)2 → Aryl-NHCON(CH3)2 Mechanism of action DCMU is a very specific and sensitive inhibitor of photosynthesis. It blocks the QB plastoquinone binding site of photosystem II, disallowing the electron flow from photosystem II to plastoquinone. This interrupts the photosynthetic electron transport chain in photosynthesis and thus reduces the ability of the plant to turn light energy into chemical energy (ATP and reductant potential). DCMU only blocks electron flow from photosystem II, it has no effect on photosystem I or other reactions in photosynthesis, such as light absorption or carbon fixation in the Calvin cycle. However, because it blocks electrons produced from water oxidation in PS II from entering the plastoquinone pool, "linear" photosynthesis is effectively shut down, as there are no available electrons to exit the photosynthetic electron flow cycle for reduction of NADP+ to NADPH. In fact, it was found that DCMU not only does not inhibit the cyclic photosynthetic pathway, but, under certain circumstances, actually stimulates it. Because of these effects, DCMU is often used to study energy flow in photosynthesis. Toxicity DCMU (Diuron) has been characterized as a known/likely human carcinogen based on animal testing. References Herbicides Ureas Anilines Chlorobenzene derivatives Suspected carcinogens
DCMU
[ "Chemistry", "Biology" ]
553
[ "Organic compounds", "Herbicides", "Biocides", "Ureas" ]
5,477,631
https://en.wikipedia.org/wiki/International%20Association%20for%20Hydro-Environment%20Engineering%20and%20Research
The International Association for Hydro-Environment Engineering and Research (IAHR), founded in 1935, is a worldwide, non-profit, independent organisation of engineers and water specialists working in fields related to the hydro-environment and in particular with reference to hydraulics and its practical application. IAHR was called the International Association of Hydraulic Engineering and Research until 2009. Activities range from river and maritime hydraulics to water resources development, flood risk management and eco-hydraulics, through to ice engineering, hydroinformatics and continuing education and training. IAHR stimulates and promotes both research and its application, and by so doing strives to contribute to sustainable development, the optimisation of world water resources management and industrial flow processes. IAHR accomplishes its goals by a wide variety of member activities including: working groups, research agenda, congresses, specialty conferences, workshops and short courses; Journals, Monographs and Proceedings; by collaborating with international organisations such as UN Water, UNESCO, WMO, IDNDR, GWP, ICSU; and by co-operation with other water-related national and international organisations. IAHR publishes several international scientific journals in collaboration with Taylor & Francis and Elsevier – the Journal of Hydraulic Research, the Journal of River Basin Management, the Journal of Water Engineering and Research, the Revista Iberoamericana del Agua RIBAGUA jointly with the World Council of Civil Engineers (WCCE), the Journal of Ecohydraulics and theJournal of Hydro-Environment Engineering and Research with the Korean Water Resources Association. It also publishes Hydrolink, a quarterly magazine now FREE ACCESS. The activities of IAHR are carried out by two full-time professional secretariats with offices in Madrid, Spain, which is hosted by the consortium Spain Water (CEDEX, Direccion General del Agua, Direccion General de Costas, MAPAMA, Spain), and in Beijing, China, hosted by IWHR. The governing body of the association is a council elected by member ballot every two years. The current president is Prof. Joseph Hun-wei Lee (Hong Kong, China). The current vice-presidents are: Prof. Silke Wieprecht (Germany), Dr. Robert Ettema (USA), and Prof. Hyoseop Woo (South Korea). Dr. Ramon Gutierrez-Serret and Dr. Peng Jing are secretaries general. IAHR is a Scientific Associate of the International Council for Science (ICSU) and is a partner organisation of UN-Water. The IAHR World Congress is one of the most important activities of the International Association for Hydro-Environment Engineering and Research (IAHR) which typically attracts between 800 and 1500 participants from around the world. The 2022 IAHR World Congress, under the overall theme "From Snow to Sea", took place in Granada, Spain. Publications IAHR publishes the Journal of Hydraulic Research in partnership with Taylor & Francis. IAHR publishes the International Journal of River Basin Management together with the International Association of Hydrological Sciences and INBO and in partnership with Taylor & Francis. IAHR publishes the International Journal of Applied Water Engineering and Research together with the World Council of Civil Engineers and in partnership with Taylor & Francis. 
The IAHR Asia Pacific Division publishes the Journal of Hydro-Environment Research in collaboration with the KWRA, Korean Water Resources Association and Elsevier The IAHR Latin America Division publishes the Revista Iberoamericana del Agua in collaboration with the World Council of Civil Engineers (WCCE) References Hydraulic engineering organizations Members of the International Council for Science Organizations established in 1935 Engineering societies International organisations based in Spain International scientific organizations 1935 establishments in the Netherlands Organisations based in Madrid Members of the International Science Council
International Association for Hydro-Environment Engineering and Research
[ "Engineering" ]
756
[ "Engineering societies", "Hydraulic engineering organizations", "Civil engineering organizations" ]
5,477,889
https://en.wikipedia.org/wiki/Verigy
Verigy Ltd was a Cupertino, California-based semiconductor automatic test equipment manufacturer. The company existed as a business within Hewlett-Packard before it was spun off in 2006 as a standalone company. It was purchased by Advantest in 2011. History Verigy was started by Hewlett-Packard, reported to David Packard in its early days, and was spun off from Agilent Technologies in 2006. The company went public on the NASDAQ in June 2006. The CEO was Keith Barnes, who later became Chairman and CEO. The CFO was Bob Nikl. In 2011 Mr. Barnes moved to Chairman of the Board of Directors and Jorge Titinger became CEO and President. The company's NASDAQ symbol was VRGY. Verigy designed, developed, manufactured, sold and serviced advanced semiconductor test systems for the flash memory, high-speed memory and system-on-chip (SoC) markets. Verigy's products were used worldwide in design validation, characterization, and high-volume manufacturing test. The company began doing business as Verigy on June 1, 2006 with its global headquarters located in Singapore. On December 6, 2007, Verigy announced the acquisition of Inovys, a privately held company that provided tools for design debug, failure analysis and yield acceleration for complex semiconductor devices and processes. On June 15, 2009, Verigy acquired Touchdown Technologies, a privately held company that designed, manufactured, and supported advanced microelectromechanical systems (MEMS) probe cards to support the wafer test needs of semiconductor manufacturers worldwide. On November 18, 2010, Verigy announced its intent to merge with LTX-Credence. On December 7, 2010, Advantest of Japan made an all-cash offer for the company, announcing that it planned to acquire Verigy in March 2011, topping LTX-Credence's bid. On July 4, 2011, after two reviews of the transaction by the Department of Justice, the company announced that Advantest Corporation had completed its acquisition of Verigy in an all-cash deal valued at $1.1 billion. The resulting company was the largest manufacturer of semiconductor test equipment in the world. Trading of Verigy ordinary shares was suspended subsequently. Products The main test system platforms offered by Verigy were the V101 for the low-cost IC market; the V6000 for the flash and DRAM memory market; the V93000 for the SoC/SiP market; and the V93000 HSM for the high-speed memory market. In addition it offered software for design debug, failure analysis and yield acceleration. Market listing and competition Verigy announced its initial public offering of 8.5 million shares of common stock on June 13, 2006, priced at $15.00 per share, and was listed on the NASDAQ National Market under the ticker symbol VRGY. Agilent spun off the remaining Verigy stock to its shareholders in November 2006. Trading of its shares was suspended upon its sale to Advantest in 2011. Verigy's principal competitors in the ATE business were Teradyne and its 2011 prospective merger partner, LTX-Credence. References External links Verigy website Equipment semiconductor companies Companies formerly listed on the Nasdaq Electronic test equipment manufacturers Electronics companies established in 2006 Electronics companies disestablished in 2011 2006 establishments in California 2011 disestablishments in California
Verigy
[ "Engineering" ]
707
[ "Equipment semiconductor companies", "Semiconductor fabrication equipment" ]